SPEECH & SIGN LANGUAGE

Emphatic Words


Out of the abundance of the heart the mouth speaketh. --Matthew, XII, 34

Talk on, my son; say anything that comes to your mind or to the tip of your tongue . . . --Miguel de Cervantes (Don Quixote, 1605:695)

Nixon: "But they were told to uh"
Haldeman: "uh and refused uh"
Nixon: [Expletive deleted.] --Excerpt from the Nixon Tape Transcripts (Lardner 1997)


Spoken language. 1. A verbal and vocal means of communicating emotions, perceptions, and thoughts by the articulation of words. 2. The organization of systems of sound into language, which has enabled Homo sapiens a. to transcend the limits of individual memory, and b. to store vast amounts of information.

Usage I: Speech (and manual sign language, e.g., ASL) has become the indispensable means for sharing ideas, observations, and feelings, and for conversing about the past and future. Speech so engages the brain in self-conscious deliberation, however, that we often overlook our place in Nonverbal World (see below, Neuro-notes V).

Usage II: "Earth's inhabitants speak some 6,000 different languages" (Raloff 1995).

Anatomy. To speak we produce complex sequences of body movements and articulations, not unlike the motions of gesture. Evolutionarily recent speech-production areas of the neocortex, basal ganglia, and cerebellum enable us to talk, while additional evolutionarily recent areas of the neocortex give heightened sensitivity a. to voice sounds (see AUDITORY CUE), and b. to positions of the fingers and hands.

Babble. 1. "Manual babbling has now been reported to occur in deaf children exposed to signed languages from birth" (Petitto and Marentette 1991:1493). 2. "Instead of babbling with their voices, deaf babies babble with their hands, repeating the same motions over and over again" (Fishman 1992:66). 3. Babies babble out of the right side of their mouths, according to a study presented at the 2001 Society for Neuroscience meeting in San Diego by University of Montreal researchers Siobhan Holowka and Laura Ann Petitto; non-speech cooing and laughter vocalizations are, on the other hand, symmetrical or emitted from the left (Travis 2001). "Past studies of adults speaking have established that people generally open the right side of the mouth more than the left side when talking, whereas nonlinguistic tasks requiring mouth opening are symmetric or left-centered" (Travis 2001:347).

Evolution I. Spoken language is considered to be between 200,000 (Lieberman 1991) and two million (Gibson 1993) years old. The likely precursor of speech is sign language (see HANDS, MIME CUE). Our ability a. to converse using manual signs and b. to manufacture artifacts (e.g., the Oldowan stone tools manufactured 2.4-to-1.5 m.y.a.) evolved in tandem on eastern Africa's savannah plains. Signing may not have evolved without artifacts, nor artifacts without signs. (N.B.: Anthropologists agree that some form of communication was needed to pass the knowledge of tool design on from one generation to the next.)

Evolution II. Handling, seeing, making, and carrying stone implements stimulated the creation of conceptual categories, available for word labels, which came in handy, e.g., for teaching the young. Through an intimate relationship with tools and artifacts, human beings became information-sharing primates of the highest order.

Evolution III. Preadaptations for vocal speech involved the human tongue. Before saying words, the tongue had been a humble manager of "food tossing." Through acrobatic maneuvers, chewed morsels were distributed to premolars and molars for finer grinding and pulping. (The trick was not getting bitten in the process.) As upright posture evolved, the throat grew in length, and the voice box was retrofitted lower in the windpipe. As a result the larynx, originally used for mammalian calling, increased its vocal range as the dexterous tongue waited to speak.

Evolution IV. ". . . the earliest linguistic systems emerged out of vocalizations like those of the great apes. The earliest innovation was probably an increase in the number of distinctive calls" (Foley 1997:70; see TONE OF VOICE, Evolution).

Evolution V: My latest take on the issue (as of Aug. 23, 2015), in which I propose the Superimposition Theory of Language Origins:

PALM-UP AND PALM-DOWN GESTURES: PRECURSORS TO THE ORIGIN OF LANGUAGE

By David B. Givens, Center for Nonverbal Studies, Spokane, Washington USA

ABSTRACT

Human palm-up and palm-down hand gestures communicate about what Gregory Bateson has termed the "contingencies of social relationship" (1987, p. 372). Respectively, these gestures communicate about deference (as in a submissive social stance) and assertiveness (a more dominant posture). Palm-up-and-down cues have neural roots in an ancient, shared caudal-hindbrain, rh8-upper-spinal compartment that links laryngeal communication and vocalization to pectoral communication and gesture (Bass and Chagnaud 2012). The present paper provides observational data from three spoken conversations to explore the generally contrastive meanings of palm-up and palm-down cues in face-to-face interaction. It goes on to explore the gestures' neural circuitry in the human nervous system and traces the likely evolution of these human circuits to those of precursor vertebrates. The paper concludes with a "Superimposition Theory" to explain the origin of human speech. It is hypothesized that the older system of social communication--as exemplified in today's palm-up-and-down gestures--is a likely precursor to the newer communication system evident in speech. Building on the evolutionarily older system of pectoral body movements and laryngeal vocalizations--used to call attention to the signaling animal (viz., "I am here")--the newer system of speech communication uses pectoral movements and laryngeal sounds to call attention to objects in the environment ("It is there"). Linguistic expression about objects was likely grafted--superimposed--onto ancient patterns of laryngeal and pectoral communication established millions of years before.

Gestural origin. "[David B.] Givens has called our attention to matters too often ignored: the biological imperative to communicate, present along the whole evolutionary track; the persistence, out of awareness, of very ancient bodily signals and their penetration of all our social interaction; and the powerful neoteny--human gestures and sign language signs make use of some of the same actions to signal semantically related messages. These same powerful influences, it seems from the study of sign languages, are beneath and behind language as we know it today. Thus it should be easier to construct a theory of gesture turning into language, complete with duality of patterning and syntactic structures, and thence into spoken language, than to find spoken language springing full grown from a species but one step removed from the higher apes" (Stokoe 1986:180-81).

Gestures. 1. Speaking gestures aid memory and thought, research from the University of Chicago suggests. In a study of 40 children and 36 adults (published in the November 2001 issue of Psychological Science), subjects performed 20 percent better on a memory test when permitted to gesture with their hands while explaining how they had solved a math problem. (Those asked to keep their hands still as they explained did not perform as well.) Gesture and speech are integrally linked, according to Susan Goldin-Meadow, an author of the study. Goldin-Meadow noted that gestures make thinking easier because they enlist spatial and other nonverbal areas of the brain. 2. A growing body of evidence suggests that teaching babies ASL may improve their ability to speak. Again, this indicates a link between manual signing and vocal speech. Babies express cognitive abilities through certain hand gestures (e.g., by pointing with the index finger) earlier than they do through articulated words, since the latter require more refined oral motor skills than very young babies yet possess.

Gestures & Speech I. Both gestures and speech are physical articulations, each enabled by neuromuscular movements of specific bony and/or cartilaginous body parts. "Beginning ca. 500 million years ago in the ancient chordate spinal cord and hindbrain--in a shared caudal hindbrain, rh8-upper-spinal compartment--circuits for vocal-laryngeal and gestural-pectoral communication provide neural linkage between voiced words and forelimb cues (Bass and Chagnaud 2012)" (Givens forthcoming [ms. in press, page no. forthcoming]).

Gestures & Speech II. Gestures and speech are linked in the brain. Muscles that today move the human pectoral girdle and larynx evolved from hypobranchial muscles that originally opened the gill openings and mouths of ancient fishes. Neurocircuits that mediate our pectoral and laryngeal movements are connected in the posterior hindbrain and anterior spinal cord (Bass and Chagnaud 2012). The sonic (acoustic) properties of these bodily regions (vocalizing and pectoral vibration, respectively) were recruited for social signaling in a watery world. The sounds were basically "assertion displays" used to announce a sender's physical presence to others.

Gestures & Speech III. Gestures and speech are both physical actions: ". . . words themselves can be seen as actions [that should] . . . be the units of analysis for the anthropological study of language use" (Duranti 1997, p. 214). Anthropologist Bronislaw Malinowski: "Thus in its primary functions it [speech] is one of the chief cultural forces and an adjunct to bodily activities [including gesture]" (1935, p. 7).

Gestures & Speech IV. Both gestures and speech can serve as assertions: "In speaking we do not just establish meaningful sequences of sounds to be judged only in terms of grammaticalness and truth values. Rather, in saying something, we are always doing something. This is true not only of such obvious cases as commands, warnings, promises, and threats, but of assertions as well. Even the simple act of stating something about ourselves or others is a social act, it is the act of informing (this means that assertions are in principle no different from other kinds of speech acts)" (Duranti 1997, p. 222).

Gestures & Speech V. Both gestures and speech are adapted for social communication: "It is important that all communicative gestures of animals--head bobbing and hand waving of lizards, singing of whales or nightingales, cries of migrating geese, squeaking of mice, grunts of baboons--are both self-regulatory (felt within the body or guided by interested subjective attention to objects and events in the world) and adapted for social communication, intersubjectively. . ." (Trevarthen 2011, p. 13).

Law. According to the Federal Rules of Evidence (Article VIII. Hearsay), "A 'statement' is (1) an oral or written assertion or (2) nonverbal conduct of a person, if it is intended by the person as an assertion" (Rule 801. Definitions).

Media. 1. According to the CBS Evening News show (October 17, 1995), the earliest known recording of a human voice was made on a wax cylinder in 1888 by Thomas Edison. The voice says, "I'll take you around the world." 2. The world's second most-recorded human voice is that of singer Frank Sinatra; the most recorded is that of crooner Bing Crosby (Schwartz 1995).

Sex differences I. "During phonological tasks [i.e., the processing of afferent (incoming), rhyming, vocal sounds], brain activation in males is lateralized to the left inferior frontal gyrus regions; in females the pattern of activation is very different, engaging more diffuse neural systems that involve both the left and right inferior frontal gyrus" (Shaywitz et al. 1995:607). However, research on 1,400 brain-scan images suggests few significant, measurable differences between the brains of women and men pertaining to speech or any other human verbal or nonverbal behavior (Joel, Daphna et al. [2015]. "Sex Beyond the Genitalia: The Human Brain Mosaic." Proceedings of the National Academy of Sciences).

Sex differences II. Score one for exasperated women: research suggests that men really do listen with just half their brains. In a study of 20 men and 20 women, brain scans showed that men, when listening, mostly used the left sides of their brains, the region long associated with understanding language; women in the study, however, used both sides. Other studies have suggested that women "can handle listening to two conversations at once," said Dr. Joseph T. Lurito, an assistant radiology professor at Indiana University School of Medicine. "One of the reasons may be that they have more brain devoted to it." Lurito's findings, presented at the Radiological Society of North America's annual meeting, do not necessarily mean women are better listeners. It could be that "it's harder for them," Lurito suggested, since they apparently need to use more of their brains than men to do the same task. "I don't want a battle of the sexes," he said. "I just want people to realize that men and women" may process language differently. In the study, functional magnetic resonance imaging (fMRI) was used to measure brain activity by producing multidimensional images of blood flow to various parts of the brain. Inside an MRI scanner, study participants wore headphones and listened to taped excerpts from John Grisham's novel The Partner, while researchers watched blood-flow images of their brains displayed on a nearby video screen. Listening resulted in increased blood flow in the left temporal lobes of the men's brains; in women, both temporal lobes showed activity. (Sources: "Study: Women Listen More than Men," Associated Press, Nov. 28, 2000; Discovery.com News, December 12, 2000.)
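A methodological aside (a standard neuroimaging measure, not drawn from the sources above): hemispheric asymmetry of the kind reported here is often summarized with a laterality index computed over homologous left- and right-hemisphere regions of interest:

    LI = (A_L - A_R) / (A_L + A_R)

where A_L and A_R are activation measures (e.g., counts of activated voxels, or mean blood-flow signal) in the left and right temporal lobes. An LI near +1 marks strongly left-lateralized listening (the male pattern reported above), while an LI near 0 marks bilateral activity (the female pattern).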

Vocal recognition. In his EMOVOX project ("Voice variability related to speaker-emotional state in Automatic Speaker Verification"), Prof. Klaus Scherer (Department of Psychology, University of Geneva) and his colleagues are researching the effects of emotion on speech to improve the effectiveness of automatic speaker verification (as used, e.g., in security systems).
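To make the speaker-verification problem concrete, here is a minimal sketch of the underlying idea. It is not the EMOVOX system (whose methods are not detailed here); it assumes the Python packages librosa and numpy and two hypothetical audio files:

    # Toy speaker verification: compare a probe recording against an
    # enrolled "voiceprint." Illustrative only -- real systems use far
    # richer speaker embeddings than averaged MFCCs.
    import librosa
    import numpy as np

    def voiceprint(wav_path):
        """Average MFCC vector as a crude speaker embedding."""
        y, sr = librosa.load(wav_path, sr=16000)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # shape: (13, frames)
        return mfcc.mean(axis=1)

    def same_speaker(enrolled, probe, threshold=0.9):
        """Cosine similarity of embeddings; the threshold is arbitrary.
        Emotion shifts pitch and spectral balance, so an angry or fearful
        voice may score below threshold even for the enrolled speaker --
        the very problem EMOVOX investigates."""
        cos = np.dot(enrolled, probe) / (np.linalg.norm(enrolled) * np.linalg.norm(probe))
        return cos >= threshold

    enrolled = voiceprint("speaker_calm.wav")   # hypothetical enrollment sample
    probe = voiceprint("speaker_upset.wav")     # hypothetical verification attempt
    print(same_speaker(enrolled, probe))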

RESEARCH REPORTS: 1. "The general model encompassing both spoken and signed languages to be presented here assumes that the key lies in describing both with a single vocabulary, the vocabulary of neuromuscular activity--i.e. gesture" (Armstrong, Stokoe, and Wilcox 1995:6). 2. "With all due respect to my esteemed colleague [Iain Davidson], our disagreement doesn't really rest so much on whether or not I see a Broca's area on [fossil cranium] 1470, whichever Homo it turns out to be . . . . Our disagreement really stems from whether or not the manufacture of stone tools gives us any insights to previous cognitive behavioral patterns, and as I wrote back in 1969, 'Culture: A Human Domain,' in CA [Current Anthropology], I think there are more similarities than not between language behavior and stone tool making, and I haven't retreated from this position, because I haven't seen effective rebuttal, just denial" (Ralph L. Holloway, posting on Anthro-L, June 21, 1996, 4:04 PM). 3. "We tend to perceive speech sounds in terms of 'articulatory gestures,' whose boundaries and distinctions correspond to articulatory (i.e., somato-motor) features, not just sound features . . ." (Deacon 1997:359-60).

Neuro-notes I. Speaking is our most complex activity, requiring ca. 140,000 neuromuscular events per second to succeed. No animal on earth can match a human's extraordinary coordination of lips, jaws, tongue, larynx, pharynx, speech centers, basal ganglia, cerebellum, emotions, and memory, all required to utter a phrase.

Neuro-notes II. During the 1990-2000 Decade of the Brain, neuroscientists established that flaking a stone tool and uttering a word (e.g., handaxe) make use of many of the same--and closely related--brain areas. So nearly alike are the neural pathways for manual dexterity and speech that a handaxe itself may be deciphered as though it were a paleolithic word or petrified phrase. This is because a. the word "handaxe" and b. the perception of the worked stone (for which it stands) both exist as mental concepts, whose neural templates are linked in the brain.

Neuro-notes III. Speech rests on an incredibly simple ability to pair stored mental concepts with incoming data from the senses. Ivan Pavlov (1849-1936; the Russian physiologist who discovered the conditioned response), e.g., observed dogs in his laboratory as they paired the sound of human footsteps (incoming data) with memories of meat (stored mental concepts). Not only did the meat itself cause Pavlov's dogs to salivate, but the mental concept of meat--memories of mealtimes past--was also called up by the sound of human feet. (N.B.: Pairing one sensation with memories of another [a process known as associative learning] is an ability given to sea slugs, as well.)

Neuro-notes IV. Tool use itself probably increased concept formation. MRI studies reveal that children who make early, skilled use of the digits of the right hand (e.g., in playing the violin) develop larger areas in the left sensory cortex devoted to fingering. Thus, Pleistocene youngsters who were precociously introduced to tool-making may have developed enhanced neural circuitry for the task.

Neuro-notes V. In an unpublished Carnegie Mellon University study, 18 volunteers were asked to do a language task and a visual task at the same time. Magnetic resonance imaging (MRI) measured the amount of brain tissue used by each task in "voxels." Performed separately, the language and visual tasks each activated 37 voxels. Performed at the same time, however, the brain activated only 42 voxels rather than the expected 74. "The brain can only be activated a limited amount and you have to decide where to use that activation," says Marcel A. Just, PhD, from the Center for Cognitive Imaging at Carnegie Mellon. He plans a study in which subjects will be tested doing multiple tasks while in a driving simulator; one of those tasks will involve using a cell phone (Lawrence 2001).

Neuro-notes VI. Mirror neurons: There is growing evidence of a crucial role for mirror neurons in human speech: "Taken together, all these data show that gestures precede speech and that mirror neurons are probably the critical brain cells in language development and language evolution" (Iacoboni 2008:87).

Neuro-notes VII. Mirror neurons: Consider Atsushi Iriki's abstract for the 2012 conference on "Mirror Neurons: New Frontiers 20 Years After Their Discovery": "The brain mechanisms that subserve tool use may bridge the gap between gesture and language--the site of such integration seems to be the parietal and extending opercular cortices."

Neuro-notes VIII. Mirror neurons: According to Egolf (2012), "Gestures lead then speech follows, suggesting further that mirror neurons are critical for speech and language development. The interdependence of speech and gesture dashes some cold water on the espoused dichotomy between verbal and nonverbal communication" (p. 90). (Source: Egolf, Donald B. [2012]. Human Communication and the Brain [Plymouth, U.K.: Lexington Books].)

Neuro-notes IX. Mirror neurons: "In the first weeks after birth [and '. . . probably subserved by the mirror [neuron] system . . .' (p. 21)] infants have been documented by experimental studies to imitate a variety of gestures, such as . . . vocal (vowel) productions . . ." [p. 24; source: Braten, Stein, and Colwyn Trevarthen (2007). Chapter 1: "Prologue," in Braten, Stein (Ed.), On Being Moved: From Mirror Neurons to Empathy (2007; Amsterdam: John Benjamins), pp. 21-34].


FACIAL DISPLAYS (SYNTACTIC)

Markers. These are facial expressions that serve as markers for words (see WORD) and/or clauses in conversations. They can convey grammatical information and help organize the structure of the conversation (Bavelas and Chovil 2000, 2006). For example, a speaker may raise an eyebrow and/or widen the eyes to emphasize a word.

(John White)

KINDA SORTA

Vocal shrug. In English speakers, the verbal practice of inserting a "kind of" or "sort of" phrase into a sentence to hedge its accuracy or truthfulness. Nonverbally, abbreviated versions of the phrases, used as "throw-away" comments, may be likened to shoulder-shrugs of uncertainty.

Usage. Kinda-sorta phrases are used in political talk shows to suggest that pundits' vocal comments may not be entirely true as stated, giving them an opportunity to recant, restate, or rephrase. The verbal remarks may align with nonverbal hedge cues derived from the shrug (see SHOULDER-SHRUG DISPLAY).
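As an illustrative aside (not part of the dictionary's sources), the kinda-sorta cue can be operationalized for transcript analysis. A minimal Python sketch, with a hypothetical phrase list and transcript, might look like this:

    # Flag kinda-sorta verbal hedges in a transcript. Illustrative only.
    import re

    HEDGES = ["kind of", "kinda", "sort of", "sorta"]
    # Word boundaries keep, e.g., "sorta" from matching inside other words.
    PATTERN = re.compile(r"\b(" + "|".join(HEDGES) + r")\b", re.IGNORECASE)

    def flag_hedges(utterance):
        """Return each hedge phrase found, with its character position."""
        return [(m.group(1).lower(), m.start()) for m in PATTERN.finditer(utterance)]

    transcript = "Well, the bill is sorta paid for, and it kind of balances the budget."
    for phrase, pos in flag_hedges(transcript):
        print(f"hedge '{phrase}' at character {pos}")
    # A pundit's hedge rate could then be compared with co-occurring
    # shoulder-shrug cues (see SHOULDER-SHRUG DISPLAY).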

Neuro-notes. That we live in a perennially uncertain world is reflected in the brain's innate ability to function, verbally as well as nonverbally, despite prevailing cognitive doubt and gaps in certitude.

See also VERBAL PAUSE.

See also VERBAL CENTER.

Copyright 1998 - 2020 (David B. Givens & John White/Center for Nonverbal Studies)
Speech can be strengthened with gestures. Photo of Rafael Palmeiro testifying before a U.S. Congressional hearing on March 17, 2005. Note the pointing index finger, aimed at committee members, as Palmeiro says, "I have never used steroids. Period. I don't know how to say it any more clearly than that. Never" (Givens 2008:10). (Picture credit: unknown.)