Grantee: Princeton University, Princeton, NJ, USA
Researcher: Asif A. Ghazanfar, Ph.D.
Grant Title: Vocal communication emerges and evolves through coupled oscillations
https://doi.org/10.37717/220020238
Program Area: Understanding Human Cognition
Grant Type: Scholar Award
Amount: $600,000
Year Awarded: 2010
Duration: 6 years
In the context of vocal communication, the brain is like an AM radio. In an AM radio transmission, a signal is generated at a frequency that corresponds to a specific rate of amplitude modulation (the 'AM'). This AM signal carries information content like music or a talk show and is broadcast via an antenna from the local radio station. To capture this transmitted AM signal, your radio uses another antenna, but this antenna is non-specific--it will capture any and all AM signals that impinge upon it. To hear a specific radio station, your radio uses a tuner. This tuner picks out one specific frequency using a principle called 'resonance', a process that filters out unwanted signals by amplifying only that one specific frequency.
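The tuner's job--amplifying one frequency while rejecting the rest--can be sketched as a band-pass filter. The snippet below is a toy illustration, not a model of any real radio or brain circuit: two hypothetical 'stations' with different modulation rates are mixed at the antenna, and an FFT-based filter acts as the resonant tuner, keeping only the target band (the sampling rate and station frequencies are made up for the example).

```python
import numpy as np

fs = 1000                       # sampling rate in Hz (illustrative)
t = np.arange(0, 2, 1 / fs)     # 2 seconds of signal

# Two 'stations' broadcasting at different modulation rates
station_a = 1 + np.sin(2 * np.pi * 5 * t)    # target station, 5 Hz
station_b = 1 + np.sin(2 * np.pi * 20 * t)   # interfering station, 20 Hz
received = station_a + station_b             # the antenna picks up everything

# 'Resonance' as a band-pass filter: zero out all spectral content
# outside a narrow band around the target station
spectrum = np.fft.rfft(received)
freqs = np.fft.rfftfreq(len(received), 1 / fs)
band = (freqs >= 3) & (freqs <= 8)
tuned = np.fft.irfft(spectrum * band, n=len(received))

# The dominant frequency of the tuned output is the target station's
peak = freqs[np.argmax(np.abs(np.fft.rfft(tuned)))]
```

After filtering, `peak` sits at the 5 Hz station: the 20 Hz interferer has been rejected, which is the radio-tuner behavior the paragraph describes.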
During vocal communication, the speaker is the radio station broadcasting an AM speech signal via the mouth, and this signal (along with all the other noises in the background) is picked up by the antenna-like ears of the listener. The listener's brain is pre-tuned to the specific AM frequency of the speech signal and thus amplifies it through resonance. What is this particular radio station? Well, it's not actually a single frequency, but rather a narrow range: 3-8 Hz. The speech signal is amplitude modulated at a 3-8 Hz rate that resonates with ongoing oscillations in the auditory regions of the listener's brain, which are also 3-8 Hz in frequency. To further amplify this vocal signal in noisy environments, humans evolved rhythmic facial movements at the same frequency to accompany speech. These rhythms divide up, or 'chunk', the speech stream so that the listener's brain can efficiently extract meaningful information from the signal. According to this framework, vocal communication emerges through the interactions between the brain and body of the signaler along with the body and brain of the receiver.
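The 3-8 Hz amplitude modulation described above can be made concrete with a toy envelope-extraction sketch. The signal here is synthetic, not real speech: a carrier tone whose loudness swells at a hypothetical 4 Hz syllable-like rate. Rectifying and low-pass filtering recovers the slow envelope, whose dominant rate falls in the 3-8 Hz band the paragraph names (carrier frequency, syllable rate, and sampling rate are all illustrative choices).

```python
import numpy as np

fs = 1000                       # sampling rate in Hz (illustrative)
t = np.arange(0, 4, 1 / fs)     # 4 seconds of signal

# Toy 'speech': a 150 Hz carrier whose amplitude rises and falls
# at a 4 Hz syllable-like rate (values chosen for illustration)
syllable_rate = 4.0
envelope = 1 + np.sin(2 * np.pi * syllable_rate * t)
speech = envelope * np.sin(2 * np.pi * 150 * t)

# Envelope extraction: full-wave rectify, then low-pass below 10 Hz
rectified = np.abs(speech)
spectrum = np.fft.rfft(rectified)
freqs = np.fft.rfftfreq(len(rectified), 1 / fs)
spectrum[freqs > 10] = 0
recovered = np.fft.irfft(spectrum, n=len(rectified))

# Dominant modulation rate of the recovered envelope, ignoring DC
nonzero = freqs > 0.5
mags = np.abs(np.fft.rfft(recovered))[nonzero]
modulation_rate = freqs[nonzero][np.argmax(mags)]
```

The recovered `modulation_rate` lands at the 4 Hz syllable rate, inside the 3-8 Hz band that, in this framework, the listener's auditory oscillations are tuned to.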
This perspective makes explicit the idea that behavior is a circular process, flowing through the nervous system into the muscles and re-entering the nervous system through the sense organs on each side of a communicative exchange. That is, while we typically think that signals between parts of the brain travel only through anatomical connections, in fact, neural states can influence other neural states with the environment as a conduit. Take, for example, just one side of a conversation. A speaker, without any awareness of doing so, adjusts his or her own speech production by monitoring how it sounds in a given context. The motor areas of the brain generate a program to say something, the body complies, producing an acoustic signal that mixes in the air with ambient noise. This sound travels back to the speaker's ears and auditory system and thus acts as a feedback signal telling the motor system to adjust vocal output if necessary. Research in my lab focuses on showing how a similar process occurs when two people or two monkeys are communicating with each other. That is, we're trying to understand how communication between the motor parts of the speaker's brain and the auditory parts of the listener's brain is coordinated through the vocal signal, with the air as a conduit--and how this process may have changed over the course of human evolution.