Funded Grants


Gesture, Language and the Brain

One of the most important findings of our century is that human languages around the world are very similar to one another. Most cultures use spoken languages, which appear in every corner of the globe where people are able to hear and speak. These languages draw their sounds from a small set of the possible sounds we can produce, and then combine these sounds in sequence to form words, phrases, and sentences. Every culture has words for a remarkably similar set of concrete and abstract concepts of objects and actions, and a remarkably similar set of principles for combining these words into the phrases and sentences of the language. For example, only a few of the many possible orderings of words are actually found in the languages of the world, and these few are common to hundreds and even thousands of distinct languages. Human children are equally capable of learning any of these languages, and acquire their native language on a similar timetable, regardless of their culture or family circumstances.

Perhaps even more surprising, when humans are not capable of hearing or speaking - for example, when they are born deaf, when they grow up working in noisy environments (e.g., sawmills), or when their cultural rules prohibit speaking for long periods of life (e.g., in certain Aboriginal cultures when a woman is widowed) - they readily develop sign languages, using their hands and eyes to express themselves in very comparable ways. Recent scientific findings have shown that sign languages which develop naturally among groups of deaf individuals have the same properties as naturally developed spoken languages. This is not because signed languages borrow these properties from the surrounding spoken language: sign languages that develop entirely independently of spoken languages have similar linguistic properties, even in remote regions where the users have never heard speech and have never been taught to speak or read print.

The properties of human languages are not widespread across species: animal communication systems do not involve principles of ordering and combining words or elements, and (despite some publicity to the contrary) animals reared in human families do not appear capable of readily acquiring a human language. In short, humans appear to have evolved special abilities for communication that are specific to our species and shared widely among its members. An extremely important question, then, concerns what brain mechanisms have evolved in humans to permit and support these abilities.

Previous research has shown that spoken languages are processed primarily by certain regions of the left hemisphere of the brain. These brain areas are larger in humans than in non-human primates, and lie at the intersection of the auditory and motor areas needed for perceiving and producing speech. Unfortunately, we have only a limited understanding of what these brain areas are doing during language processing, because most techniques for studying the brain cannot be used on humans. However, modern techniques for observing brain activation in healthy human volunteers are helping to provide more information.

One extremely important question is whether these left hemisphere brain areas are activated during the processing of sign languages, as they are during the processing of spoken languages. If the same brain regions subserve the processing of sign languages and spoken languages, as some of the limited available research indicates, it would suggest that brain areas initially specialized for the perception and production of speech may have evolved to control the processing of more abstract properties of language - its sequencing and its grammatical properties - regardless of whether the language is spoken or signed. In contrast, differences in the brain regions activated for the processing of spoken and signed languages would provide important insights into the role of input and output channels in processing language, and may help us understand how these types of languages differ.

The research in the present proposal therefore focuses on asking which regions of the brain are activated during the processing of various aspects of sign languages in deaf individuals who are native signers. We will use functional magnetic resonance imaging (fMRI) to determine which specific regions of the brain are active during specific tasks, as reflected by the level of oxygen in the blood flowing to these regions. If one compares a task involving processing a sign to one involving mere visual processing, it is possible to observe those brain regions specifically involved in understanding the sign. Similarly, comparing brain activation during the processing of a sentence in sign language to that during the processing of a random sequence of signs or gestures can permit us to determine which brain regions are specifically involved in understanding the signed sentence.
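The subtraction logic behind these task comparisons can be sketched in a toy simulation. This is purely illustrative: all numbers below are hypothetical, and a real fMRI analysis additionally involves hemodynamic modeling, spatial preprocessing, and correction for multiple comparisons.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-voxel responses: 20 trials x 1000 "voxels" for each condition.
n_trials, n_voxels = 20, 1000
control = rng.normal(0.0, 1.0, (n_trials, n_voxels))    # mere visual processing
sign_task = rng.normal(0.0, 1.0, (n_trials, n_voxels))  # sign comprehension
sign_task[:, :50] += 1.5  # simulate 50 voxels that respond more to signs

# Subtraction method: per-voxel mean difference between conditions,
# scaled by its standard error to give a two-sample t statistic.
diff = sign_task.mean(axis=0) - control.mean(axis=0)
pooled_se = np.sqrt(sign_task.var(axis=0, ddof=1) / n_trials
                    + control.var(axis=0, ddof=1) / n_trials)
t = diff / pooled_se

# Voxels exceeding a threshold are taken as specifically involved
# in understanding the sign, over and above visual processing.
active = np.flatnonzero(np.abs(t) > 4.0)
print(len(active), "voxels survive the contrast")
```

The same comparison structure extends to the sentence-level contrasts described above: the "control" condition simply becomes a random sequence of signs or gestures rather than a low-level visual task.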

There are two reasons why studying brain activation during sign language processing will be especially revealing. First, looking for similarities in the brain regions involved in spoken and sign language processing will reveal those brain regions devoted to processing the abstract properties of all human languages. Second, in the visual-gestural medium we can compare the processing of sign language to the processing of a very similar but non-linguistic behavior: gesture. This second comparison, we hope, will permit us to gain insight into how the brain mechanisms underlying language might have evolved in humans. In the speech/auditory medium, there is no obvious elaborate use of sound from which spoken languages might have derived; our evolutionarily early use of grunts or pre-linguistic sounds is apparently gone. It is therefore not possible to compare how the brain processes spoken languages to how it processes the non-linguistic precursors of speech. In the visual-gestural medium, by contrast, humans have both sign languages and the more widespread and simpler ability to 'gesture.' Hearing individuals gesture naturally while they speak, producing very simple gestures one at a time. When hearing people are asked to convey information entirely through gesture, without speaking, they quickly become capable of producing gestures in sequence in order to communicate. These gestural abilities are thought to be the source from which the much more systematic and complex properties of sign languages emerge, over generations, as deaf individuals begin to use these abilities for all of their communicative needs. We therefore intend to film hearing people producing these different forms of gesture, as well as deaf people producing sign language, and we will compare patterns of brain activation in hearing and deaf individuals as they attempt to comprehend these materials.
Observing how the active brain regions change, as we move from simple through complex gesture and finally to sign language, can provide truly novel insight into how the brain may have modified its operations as language abilities evolved in humans.

As noted, this research will have significant scientific benefits, providing researchers with new information about the brain mechanisms important for language processing, and how they may have become specialized and adapted for language. At the same time, our research has potentially profound benefits for the larger society and its health and welfare. Deafness affects millions of individuals in the world, including many children. The most devastating effect of deafness is, of course, on the ability to acquire and process language, as well as to acquire literacy. Our research is important for understanding the mechanisms by which deaf children and adults may be able to acquire natural sign languages, acquire sign languages devised as replacements for spoken languages in deaf education, or adapt to cochlear implants or other prosthetic devices providing access to spoken language. Our findings will therefore help to provide appropriate resources for supporting healthy development in deaf children and adults. Our proposed research will provide extremely important information about the adaptability and plasticity of the brain to perform human language processing through multiple channels and multiple media. Understanding the limits of such plasticity, and the flexibility and nature of the underlying brain mechanisms for language, can assist us in determining where our efforts in habilitative methods for deaf individuals should be directed. Such information will also be crucial for rehabilitating hearing individuals with congenital language disabilities, or with language impairments resulting from stroke or disease.

Finally, an even broader import of our research lies in its ability to reveal the deep universal properties of the human mind and brain, and the mind's ability to perform in complex and characteristically human ways even when the sensory channels are profoundly limited and changed. Language is one of the peak achievements of the human mind. Only within the last 20 years have we learned that human language abilities flourish even within sign languages, and thus are not restricted to speech. But because of limits on the scientific techniques available for observing the brain, we have not yet learned a great deal about how the brain actually performs this remarkable feat. It is our hope that this program of research can make an important contribution to this enterprise.