Usage I: Speech (and manual sign language, e.g., ASL) has become the indispensable means for sharing ideas, observations, and feelings, and for conversing about the past and future. Speech so engages the brain in self-conscious deliberation, however, that we often overlook our place in Nonverbal World (see below, Neuro-notes V).
Usage II: "Earth's inhabitants speak some 6,000 different languages" (Raloff 1995).
Anatomy. To speak, we produce complex sequences of body movements and articulations, not unlike the motions of gesture. Evolutionarily recent speech-production areas of the neocortex, basal ganglia, and cerebellum enable us to talk, while evolutionarily recent areas of the neocortex give heightened sensitivity a. to voice sounds (see AUDITORY CUE), and b. to positions of the fingers and hands.
Babble. 1. "Manual babbling has now been reported to occur in deaf children exposed to signed languages from birth" (Petitto and Marentette 1991:1493). 2. "Instead of babbling with their voices, deaf babies babble with their hands, repeating the same motions over and over again" (Fishman 1992:66). 3. Babies babble out of the right side of their mouths, according to a study presented at the 2001 Society for Neuroscience meeting in San Diego by University of Montreal researchers Siobhan Holowka and Laura Ann Petitto; non-speech cooing and laughter vocalizations are, on the other hand, symmetrical or emitted from the left (Travis 2001). "Past studies of adults speaking have established that people generally open the right side of the mouth more than the left side when talking, whereas nonlinguistic tasks requiring mouth opening are symmetric or left-centered" (Travis 2001:347).
Evolution I. Spoken language is considered to be between 200,000 (Lieberman 1991) and two million (Gibson 1993) years old. The likely precursor of speech is sign language (see HANDS, MIME CUE). Our ability a. to converse using manual signs and b. to manufacture artifacts (e.g., the Oldowan stone tools manufactured 2.4-to-1.5 m.y.a.) evolved in tandem on eastern Africa's savannah plains. Signing may not have evolved without artifacts, nor artifacts without signs. (N.B.: Anthropologists agree that some form of communication was needed to pass the knowledge of tool design on from one generation to the next.)
Evolution II. Handling, seeing, making, and carrying stone implements stimulated the creation of conceptual categories, available for word labels, which came in handy, e.g., for teaching the young. Through an intimate relationship with tools and artifacts, human beings became information-sharing primates of the highest order.
Evolution III. Preadaptations for vocal speech involved the human
tongue. Before saying words, the tongue had been a humble manager of "food
tossing." Through acrobatic maneuvers, chewed morsels were distributed to
premolars and molars for finer grinding and pulping. (The trick was not getting
bitten in the process.) As upright posture evolved, the throat grew in length,
and the voice box was retrofitted lower in the windpipe. As a result, the larynx,
originally for mammalian calling, increased its vocal range as the dexterous
tongue waited to speak.
Evolution IV. ". . . the earliest linguistic systems emerged out of vocalizations like those of the great apes. The earliest innovation was probably an increase in the number of distinctive calls" (Foley 1997:70; see TONE OF VOICE, Evolution).
Gestural origin. "[David B.] Givens has called our attention to matters too often ignored: the biological imperative to communicate, present along the whole evolutionary track; the persistence, out of awareness, of very ancient bodily signals and their penetration of all our social interaction; and the powerful neoteny--human gestures and sign language signs make use of some of the same actions to signal semantically related messages. These same powerful influences, it seems from the study of sign languages, are beneath and behind language as we know it today. Thus it should be easier to construct a theory of gesture turning into language, complete with duality of patterning and syntactic structures, and thence into spoken language, than to find spoken language springing full grown from a species but one step removed from the higher apes" (Stokoe 1986:180-81).
Gestures. 1. Speaking gestures aid memory and thought, research from the University of Chicago suggests. In a study of 40 children and 36 adults (published in the November 2001 issue of Psychological Science), subjects performed 20 percent better on a memory test when permitted to gesture with their hands while explaining how they had solved a math problem. Those asked to keep their hands still as they explained did not perform as well. Gesture and speech are integrally linked, according to Susan Goldin-Meadow, an author of the study. Goldin-Meadow noted that gestures make thinking easier because they enlist spatial and other nonverbal areas of the brain. 2. A growing body of evidence suggests that teaching babies ASL may improve their ability to speak. Again, this indicates a link between manual signing and vocal speech. Babies express cognitive abilities through certain hand gestures (e.g., by pointing with the index finger) earlier than they do through articulated words (the latter require more refined oral motor skills, which very young babies do not yet possess).
Law. According to the Federal Rules of Evidence (Article VIII.
Hearsay), "A 'statement' is (1) an oral or written assertion or (2) nonverbal
conduct of a person, if it is intended by the person as an assertion"
(Rule 801. Definitions).
Media. 1. According to the CBS Evening News show (October 17, 1995), the earliest known recording of a human voice was made on a wax cylinder in 1888 by Thomas Edison. The voice says, "I'll take you around the world." 2. The world's second most-recorded human voice is that of singer Frank Sinatra; the most recorded is that of crooner Bing Crosby (Schwartz 1995).
Sex differences I. "During phonological tasks [i.e., the processing of afferent (incoming), rhyming, vocal sounds], brain activation in males is lateralized to the left inferior frontal gyrus regions; in females the pattern of activation is very different, engaging more diffuse neural systems that involve both the left and right inferior frontal gyrus" (Shaywitz et al. 1995:607).
Sex differences II. (Source: "Study: Women Listen More than Men" [Associated Press, copyright 2000], Nov. 28, 2000.) Score one for exasperated women: research suggests that men really do listen with just half their brains. In a study of 20 men and 20 women, brain scans showed that men, when listening, mostly used the left sides of their brains, the region long associated with understanding language. Women in the study, however, used both sides. Other studies have suggested that women "can handle listening to two conversations at once," said Dr. Joseph T. Lurito, an assistant radiology professor at Indiana University School of Medicine. "One of the reasons may be that they have more brain devoted to it." Lurito's findings, presented Tuesday at the Radiological Society of North America's annual meeting, don't necessarily mean women are better listeners. It could be that "it's harder for them," Lurito suggested, since they apparently need to use more of their brains than men to do the same task. "I don't want a battle of the sexes," he said. "I just want people to realize that men and women" may process language differently. In the study, functional magnetic resonance imaging (fMRI) was used to measure brain activity by producing multidimensional images of blood flow to various parts of the brain. Inside an MRI scanner, study participants wore headphones and listened to taped excerpts from John Grisham's novel "The Partner," while researchers watched blood-flow images of their brains, displayed on a nearby video screen. Listening resulted in increased blood flow in the left temporal lobes of the men's brains. In women, both temporal lobes showed activity (source: Discovery.com News, December 12, 2000).
Vocal recognition. In his EMOVOX project ("Voice variability related to speaker-emotional state in Automatic Speaker Verification"), Prof. Klaus Scherer (Department of Psychology, University of Geneva) and his colleagues are researching the effects of emotion on speech to improve the effectiveness of automatic speaker verification (as used, e.g., in security systems).
RESEARCH REPORTS: 1. "The general model encompassing
both spoken and signed languages to be presented here assumes that the key lies
in describing both with a single vocabulary, the vocabulary of neuromuscular
activity--i.e. gesture" (Armstrong, Stokoe, and Wilcox 1995:6). 2. "With
all due respect to my esteemed colleague [Iain Davidson], our disagreement
doesn't really rest so much on whether or not I see a Broca's area on [fossil
cranium] 1470, whichever Homo it turns out to be . . . . Our disagreement really
stems from whether or not the manufacture of stone tools gives us any insights
to previous cognitive behavioral patterns, and as I wrote back in 1969,
'Culture: A Human Domain,' in CA [Current Anthropology], I think
there are more similarities than not between language behavior and stone tool
making, and I haven't retreated from this position, because I haven't seen
effective rebuttal, just denial" (Ralph L. Holloway, posting on Anthro-L, June
21, 1996, 4:04 PM). 3. "We tend to perceive speech sounds in terms of
'articulatory gestures,' whose boundaries and distinctions correspond to
articulatory (i.e., somato-motor) features, not just sound features . . ."
Neuro-notes I. Speaking is our most complex activity, requiring ca. 140,000 neuromuscular events per second to succeed. No animal on earth can match a human's extraordinary coordination of lips, jaws, tongue, larynx, pharynx, speech centers, basal ganglia, cerebellum, emotions, and memory, all required to utter a phrase.
Neuro-notes II. During the 1990-2000 Decade of the Brain, neuroscientists established that flaking a stone tool and uttering a word (e.g., handaxe) make use of the same--and closely related--brain areas. So nearly alike, in fact, are the neural pathways for manual dexterity and speech that a handaxe itself may be deciphered as though it were a paleolithic word or petrified phrase. This is because a. the word "handaxe" and b. the perception of the worked stone (for which it stands) both exist as mental concepts; the neural templates for each are linked in the brain.
Neuro-notes III. Speech rests on an incredibly simple ability to pair stored mental concepts with incoming data from the senses. Ivan Pavlov (1849-1936; the Russian physiologist who discovered the conditioned response), e.g., observed dogs in his laboratory as they paired the sound of human footsteps (incoming data) with memories of meat (stored mental concepts). Not only did the meat itself cause Pavlov's dogs to salivate, but the mental concept of meat--i.e., memories of mealtimes past--was also called up by the sound of human feet. (N.B.: Pairing one sensation with memories of another [a process known as sensitization or associative learning] is an ability given to sea slugs, as well.)
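The pairing mechanism described above--incoming sensory data gradually bound to a stored concept--can be sketched as a minimal associative-learning loop. The sketch below uses the Rescorla-Wagner update rule as a stand-in; the function name, learning rate, and asymptote are illustrative assumptions, not details taken from the entry or from Pavlov's own work.

```python
# Minimal sketch of associative (Pavlovian) learning using the
# Rescorla-Wagner update rule. All names and parameter values are
# illustrative assumptions, not drawn from the source text.

def condition(trials, learning_rate=0.3, lambda_max=1.0):
    """Associative strength of a cue (e.g., footsteps) after repeated
    pairings with an unconditioned stimulus (e.g., meat)."""
    strength = 0.0
    for _ in range(trials):
        # Prediction error: how surprising the reward still is.
        error = lambda_max - strength
        strength += learning_rate * error
    return strength

# Before any pairings, the cue predicts nothing; after repeated
# pairings, the cue alone comes to predict the stimulus.
print(round(condition(0), 2))   # 0.0
print(round(condition(10), 2))  # 0.97
```

Each pairing shrinks the remaining prediction error, so associative strength rises steeply at first and then plateaus--the same diminishing-returns curve seen in conditioning experiments, and simple enough, as the entry notes, to be within reach of a sea slug's nervous system.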
Neuro-notes IV. Tool use itself probably increased concept
formation. MRI studies reveal that children who make early, skilled use of the
digits of the right hand (e.g., in playing the violin) develop larger areas in
the left sensory cortex devoted to fingering. Thus, Pleistocene youngsters who
were precociously introduced to tool-making may have developed enhanced neural
circuitry for the task.
Neuro-notes V. In an unpublished Carnegie Mellon University study, 18 volunteers were asked to do a language task and a visual task at the same time. Magnetic resonance imaging (MRI) measured the amount of brain tissue used by each task in "voxels." Performed separately, the language and visual tasks each activated 37 voxels. Performed at the same time, however, the brain activated only 42 voxels rather than the expected 74. "The brain can only be activated a limited amount and you have to decide where to use that activation," says Marcel A. Just, PhD, from the Center for Cognitive Imaging at Carnegie Mellon. He plans a study in which subjects will be tested doing multiple tasks while in a driving simulator; one of those tasks will involve using a cell phone (Lawrence 2001).
Neuro-notes VI. Mirror neurons: There is growing evidence of a crucial role for mirror neurons in human speech: "Taken together, all these data show that gestures precede speech and that mirror neurons are probably the critical brain cells in language development and language evolution" (Iacoboni 2008:87).
Neuro-notes VII. Mirror neurons: Consider Atsushi Iriki's abstract for the 2012 conference on "Mirror Neurons: New Frontiers 20 Years After Their Discovery": "The brain mechanisms that subserve tool use may bridge the gap between gesture and language--the site of such integration seems to be the parietal and extending opercular cortices."
Neuro-notes IX. Mirror neurons: "In the first weeks after birth [and '. . . probably subserved by the mirror [neuron] system . . .' (p. 21)] infants have been documented by experimental studies to imitate a variety of gestures, such as . . . vocal (vowel) productions . . ." [p. 24; source: Braten, Stein, and Colwyn Trevarthen (2007). Chapter 1: "Prologue," in Braten, Stein (Ed.), On Being Moved: From Mirror Neurons to Empathy (2007; Amsterdam: John Benjamins), pp. 21-34].
See also VERBAL CENTER.
Copyright 1998 - 2013 (David B. Givens/Center for Nonverbal Studies)
Speech can be strengthened with gestures. Photo of Rafael Palmeiro testifying before a U.S. Congressional hearing on March 17, 2005. Note the pointing index finger, aimed at committee members, as Palmeiro says, "I have never used steroids. Period. I don't know how to say it any more clearly than that. Never" (Givens 2008:10). (Picture credit: unknown.)