Grantee: University of California, Los Angeles, Los Angeles, CA, USA
Researcher: Anne Warlaumont, Ph.D.
Grant Title: Understanding the emergence of speech vocalizations in human infancy
https://doi.org/10.37717/220020507
Program Area: Understanding Human Cognition
Grant Type: Scholar Award
Amount: $600,000
Year Awarded: 2017
Duration: 6 years
Over the first year of life, the vocalizations infants produce change remarkably, as do the ways that infants use those vocalizations to communicate (Oller, 2000). Neonates produce primarily cries and short, quiet non-cry vocalizations. By 3 months, infants typically have expanded their repertoires significantly, exploring pitch, amplitude, and vocal quality. They also often produce some primitive precursors to consonant sounds, although these are usually not articulated precisely at this age. At 6 months, many but not all children have begun producing at least one well-formed (precisely timed) consonant-vowel sequence (such as “ba” or “muh”) on a regular basis. At 9 months, almost all typically developing children produce well-formed (canonical) syllables, and many produce multiple different consonant and vowel types. By 12 months, many children produce at least one or two recognizable words with consistent meaning associated with the sound. My research seeks to document how this dramatic vocal learning unfolds and to understand the neural, social, and physical mechanisms involved. Our work involves both research with human participants and computational modeling.
On the human side, my work focuses on using long-form audio recordings of children’s vocalizations and auditory environments, collected “in the wild”. Some key findings to date are that infant and adult vocalizations tend to be hierarchically clustered in time over the course of a day; that adult responses to infant vocalizations tend to be contingent on the type of vocalization the child made; and that small differences in vocalization quantity and in adult response patterns appear to lead to larger differences in the developmental trajectories of demographic and clinical groups. Currently, I am leading an effort to analyze infant vocalization as a process of foraging in acoustic space. We are borrowing methods used to study animal foraging in space and adult human foraging in memory, which were in turn borrowed from work in physics on particle diffusion. My lab also has human listeners label events of interest within daylong home audio recordings, to ask questions that cannot currently be addressed with automated methods, and we are working to build automated methods that produce more accurate and more specific labels.
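As a rough illustration of the foraging framing (a minimal sketch, not our actual analysis pipeline), the Python code below treats a day’s vocalizations as a path through a two-dimensional acoustic feature space. The choice of features (pitch and amplitude), the synthetic data, and the particular diagnostics (step-length statistics and a mean-squared-displacement exponent, as used in the animal-foraging and particle-diffusion literatures) are all assumptions made for demonstration.

```python
# Illustrative sketch: a day of infant vocalizations as foraging
# through a 2-D acoustic space. Features and data are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for one day's vocalizations: each row is one
# utterance described by (mean pitch, mean amplitude), z-scored so
# the two dimensions are comparable.
n_vocs = 500
features = rng.standard_normal((n_vocs, 2)).cumsum(axis=0)  # a random walk
features = (features - features.mean(axis=0)) / features.std(axis=0)

# "Step lengths": Euclidean distance between consecutive vocalizations
# in acoustic space, analogous to move lengths in animal foraging.
steps = np.linalg.norm(np.diff(features, axis=0), axis=1)
print(f"mean step: {steps.mean():.3f}, "
      f"max/mean ratio: {steps.max() / steps.mean():.2f}")

# Mean squared displacement (MSD) as a function of lag, borrowed from
# particle-diffusion analyses: MSD ~ lag**alpha, where alpha > 1 would
# indicate superdiffusive (exploratory) movement through the space.
lags = np.arange(1, 50)
msd = np.array([np.mean(np.sum((features[lag:] - features[:-lag]) ** 2,
                               axis=1))
                for lag in lags])
alpha = np.polyfit(np.log(lags), np.log(msd), 1)[0]
print(f"estimated diffusion exponent alpha: {alpha:.2f}")
```

On real recordings, the same step-length and displacement statistics could be compared across infants or across times of day; here they simply show the shape such an analysis takes.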
On the computational modeling side, my collaborators and I simulate how neural, mechanical, and social mechanisms jointly contribute to infants’ vocal learning. A main finding from this line of work is that reward-modulated Hebbian learning may play an important role in infants’ learning. Computational modeling is particularly important in studies of speech development because of the great difficulty of obtaining information about the state of the nervous system while infants perform tasks, and because of the major limitations on the types of experimental manipulations that can be performed with human infant participants. My work in this domain helps bridge our understanding of neural function from invasive animal studies with studies of human communication. Furthermore, understanding infant vocal development provides information critical to understanding how the human speech capacity evolved, and simulation provides a powerful tool for asking questions about human vocal evolution.
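To convey the core idea of reward-modulated Hebbian learning, here is a minimal sketch of the general technique: a three-factor rule in which Hebbian co-activity is gated by a reward signal. The network size, the scalar “salience” proxy for vocal output, and the caregiver-response reward rule are illustrative assumptions, not the architecture of our published models.

```python
# Minimal reward-modulated Hebbian learning: weight changes are
# proportional to (presynaptic activity) x (postsynaptic activity)
# x (reward). All specifics below are assumed for demonstration.
import numpy as np

rng = np.random.default_rng(1)

n_in, n_out = 20, 10      # driving inputs and motor units (sizes assumed)
W = rng.normal(0.0, 0.05, (n_out, n_in))
lr = 0.05
noise_sd = 0.5            # exploratory motor variability

def vocalize(x, W, noise):
    """Motor-unit activity plus a scalar proxy for the resulting sound."""
    y = np.tanh(W @ x + noise)
    return y, y.mean()

for trial in range(2000):
    x = rng.random(n_in)                       # spontaneous input drive
    noise = rng.normal(0.0, noise_sd, n_out)   # exploration noise
    y, output = vocalize(x, W, noise)
    # Assumed reward rule: a "caregiver response" (reward = 1) occurs
    # only when the vocalization proxy is sufficiently salient.
    reward = 1.0 if output > 0.2 else 0.0
    # Three-factor update: Hebbian co-activity gated by reward, so only
    # the input-output pairings that earned a response are reinforced.
    W += lr * reward * np.outer(y, x)
    W = np.clip(W, -1.0, 1.0)                  # keep weights bounded

_, test_output = vocalize(np.full(n_in, 0.5), W, np.zeros(n_out))
print(f"output proxy after learning: {test_output:.3f}")
```

The exploration noise is what allows the system to occasionally produce a rewarded vocalization early on; the reward-gated Hebbian update then makes those input-output pairings more likely to recur, a simple analogue of social contingency shaping vocal learning.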