Grantee: Texas A&M University, College Station, TX, USA
Researcher: Heather Bortfeld, Ph.D.
Grant Title: Neural correlates of infant word recognition
https://doi.org/10.37717/220020237
Program Area: Bridging Brain, Mind & Behavior
Grant Type: Research Award
Amount: $100,000
Year Awarded: 2002
Duration: 1 year
The ability to identify words in continuous speech is fundamental to language processing, yet adults listening to their native language often take this ability for granted. Anyone who has listened to fluent conversation in an unfamiliar language has encountered the basic problem of word-shape identification: what one experiences is a stream of babble lacking any of the readily identifiable words, phrases, or sentences that one perceives when listening to a familiar language. It is not at all clear where one word ends and the next begins, and no known word-shapes are available to anchor one's lexical or syntactic analyses of the speech stream. For infants, whose experience of language must at first be of just such a stream of babble, developing word-shape segmentation and recognition skills is an absolute prerequisite to further language acquisition. If infants cannot break input utterances into constituent components and determine whether the resulting auditory objects are exemplars of particular linguistic types, they cannot learn what individual components mean or how these parts fit together. The representational units (e.g., words) that emerge from segmentation serve as the basis upon which infants develop syntactic and semantic knowledge. Understanding how a child cracks the continuous auditory code and comes to identify the individual units within it is therefore fundamental to understanding first language development.
Infants show sensitivity to the rhythmic characteristics of their native language at birth. This is because infants are exposed to this aspect of their native language in utero, the prenatal environment serving as a filter through which the rhythmic properties of the speech signal can pass. Newborn infants are also able to distinguish languages that have the same rhythmic pattern as their native language from languages with distinct rhythmic patterns; however, they are unable to distinguish among languages that share a rhythmic pattern. By five months of age, though, infants have enough experience with their native language to distinguish it from other languages with the same rhythmic pattern, indicating that they have become sensitive to the specific features that make their language unique.
But how do infants begin to recognize individual words within the speech stream? Some research has examined which aspects of a language's rhythmic pattern cue word boundaries. This work has highlighted the importance of the overall shape of a word, particularly the quality of its initial sound segment, to an infant's ability to locate it within fluent speech. This describes a general bias that infants may develop based on exposure to the rhythmic properties of their native language; it does not, however, address how infants use specific information within words to distinguish them from surrounding speech. Other research has examined the particular characteristics of words that infants can use to guide their segmentation. For example, specific cues within words appear to help, such as the contexts in which variants of a given sound can occur and the sound sequences that the language permits. However, both of these forms of sound-specific sensitivity develop after initial segmentation abilities emerge.
Recognition of isolated words occurs much earlier. For example, infants prefer to listen to their own names over another name by 4.5 months of age. But recognition of individual words has been pursued as a question separate from how infants begin to segment the speech stream. I have recently examined how infants' early recognition of over-familiarized words, such as their own names, may help them isolate words within fluent speech. Recent data from our lab show that five-month-old infants can use their first name as a 'foot-in-the-door' for subsequent segmentation. Following familiarization with two segments of continuous speech, one containing several instances of an infant's own name followed by a word (e.g., Autumn's bike) and one containing several instances of another name followed by another word (e.g., Emma's cup), infants prefer to listen to the word that followed their own name (e.g., bike) over the word that followed the other name during familiarization (e.g., cup). Apparently, infants' recognition of their own names helps them pull the following word out of the speech stream as well. In a subsequent study, I found no difference between infants' looking times towards a word that followed another name during familiarization and a completely unfamiliar word not included in the familiarization stimuli. That is, not only did five-month-old infants prefer to listen to the word paired with their own names over the word paired with another name, but they showed no evidence of having parsed the word that followed another name out of the speech stream at all. These findings indicate that infants' recognition of their names is a key tool for breaking apart fluent speech early in development.
Most of what we know about infant language processing comes from infants' behavior in tasks like the ones I have just described. These methodologies exploit infants' tendency to look more or less towards auditory or visual objects, depending on whether these objects are familiar or not. This "preferential looking" is taken by researchers to indicate recognition. One problem with this approach is that infants sometimes prefer to attend to familiar stimuli and sometimes prefer to attend to novel stimuli. Yet familiarity and novelty effects, as the two directions of preferential looking are called, are both considered valid indices of recognition. If an infant in an experiment prefers to listen to a previously familiarized word relative to an unfamiliar word, then the infant is said to recognize that word. Likewise, if an infant prefers to listen to an unfamiliar word relative to a familiarized word, then the infant is said to prefer the novel word because he or she recognizes the familiar word. Ultimately, it is the heightened looking time towards one or the other word that indicates recognition, regardless of the direction of that looking. Understandably, the lack of predictability in looking time direction is one aspect of this research that is often criticized. One approach to addressing this problem has been to analyze the characteristics of infants (e.g., age of the child) and stimuli (e.g., complexity of the task) across many studies to determine how the two factors systematically contribute to novelty and familiarity effects. Nonetheless, the problem remains that an infant's preference--conveyed behaviorally--is not an 'on-line' (or direct) measure of that infant's language processing capacity.
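The interpretive logic described above can be made concrete with a minimal sketch. The function names, threshold, and numbers below are hypothetical illustrations, not values from the studies discussed; the point is only that a preferential-looking analysis treats a reliable deviation from chance in either direction as evidence of recognition:

```python
def preference_score(familiar_s: float, novel_s: float) -> float:
    """Proportion of total looking time directed at the familiarized stimulus."""
    return familiar_s / (familiar_s + novel_s)

def shows_recognition(familiar_s: float, novel_s: float,
                      chance: float = 0.5, threshold: float = 0.1) -> bool:
    """Recognition is inferred from a deviation from chance in EITHER
    direction: above chance is a familiarity effect, below chance a
    novelty effect. Only the magnitude of the deviation matters."""
    return abs(preference_score(familiar_s, novel_s) - chance) > threshold

# Familiarity effect: 12 s of looking to the familiarized word vs. 6 s to the novel one.
familiarity_case = shows_recognition(12.0, 6.0)   # True
# Novelty effect: the reverse pattern counts as recognition just the same.
novelty_case = shows_recognition(6.0, 12.0)       # True
# No reliable preference in either direction: no evidence of recognition.
null_case = shows_recognition(9.0, 9.0)           # False
```

This symmetry is exactly what makes the direction of looking unpredictable in advance, and why an independent, on-line measure of recognition would be valuable.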
In the adult language processing literature, on-line measures form the basis for theoretical interpretation. Multiple sources of on-line measures have become available for use with adults, from functional magnetic resonance imaging (fMRI) to eye-tracking technology, and these techniques have advanced the field of adult language processing. In contrast, no comparable on-line measure has emerged to advance our understanding of infants' responses to auditory stimuli. If the goal of the field is to develop models of language development, with constraints on developmental hypotheses provided by adult language processing research, and vice versa, then we need to be using equally powerful measures. As it currently stands, we have very different types of measures in the two literatures. On-line measures of infant language processing, comparable to those now used with adults, would help bridge the two areas. Unfortunately, methodological and ethical constraints have made it difficult to collect neurophysiological evidence of language processing from awake, behaving infants.
Optical neuroimaging has emerged as a possible source of such on-line measures. Near-infrared spectroscopy (NIRS) can provide "functional images" of an infant's brain by tracking changes in blood flow and oxygenation in the brain. This technology provides the first opportunity to follow the emergence of word recognition in infants and how it relates to increases in hemoglobin concentration in specific brain regions. The benefit of NIRS to the field of language development is two-fold: it can provide an important, advanced alternative to the off-line behavioral measures that are traditionally used, and it will allow us to examine localization of function as it applies to the emergence of infant word recognition. I propose a series of studies in which I will replicate the recent behavioral findings on first-name recognition and segmentation (described earlier) in three age groups (4-, 6-, and 8-month-olds), using optical imaging of infants' cerebral activation as the critical measure. The three hypotheses I am most interested in testing are: 1) whether optical functional images of the frontal cortex of the infant brain will reveal blood volume and oxygenation increases consistent with emerging word recognition abilities; 2) whether such images will be distinct for semantic recognition (e.g., of one's own name) and episodic recognition (e.g., of a recently familiarized word); and 3) whether optical functional images of the infant brain will verify the neurological reality of familiarity and novelty effects in traditional looking-time paradigms. This pilot work will help us understand how NIRS can be used as a tool for studying the development of word recognition in infants, in particular, and infant cognition, in general.
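The physical principle behind NIRS measurements of oxygenation can be sketched with the modified Beer-Lambert law: a change in optical density at each of two near-infrared wavelengths is modeled as a weighted sum of oxy- and deoxy-hemoglobin concentration changes, and the two-wavelength system is inverted to recover both. The sketch below is illustrative only; the extinction coefficients, wavelengths, source-detector distance, and differential pathlength factor are placeholder values, not calibrated constants from any instrument.

```python
def hemoglobin_changes(d_od_760: float, d_od_850: float,
                       path_cm: float = 3.0, dpf: float = 6.0):
    """Invert the 2x2 modified Beer-Lambert system for two wavelengths.

    d_od_760, d_od_850: measured optical-density changes at ~760 nm and
    ~850 nm. Returns (d_hbo, d_hbr), the oxy- and deoxy-hemoglobin
    concentration changes (arbitrary units, since the coefficients below
    are placeholders).
    """
    # Illustrative extinction coefficients (HbO, HbR) per wavelength:
    # deoxy-Hb absorbs more strongly below ~800 nm, oxy-Hb above it.
    e760 = (0.6, 1.5)
    e850 = (1.2, 0.8)
    l = path_cm * dpf  # effective optical path length

    # Each measurement satisfies: d_od = l * (e_hbo * d_hbo + e_hbr * d_hbr).
    # Solve the resulting 2x2 linear system by Cramer's rule.
    det = e760[0] * e850[1] - e850[0] * e760[1]
    d_hbo = (e850[1] * d_od_760 - e760[1] * d_od_850) / (det * l)
    d_hbr = (e760[0] * d_od_850 - e850[0] * d_od_760) / (det * l)
    return d_hbo, d_hbr
```

An increase in recovered d_hbo over a cortical region during stimulus presentation is the kind of hemodynamic signature the proposed studies would treat as evidence of recognition-related activation.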