Funded Grants


Neural correlates of auditory fill-in

A fundamental question is how listeners segregate and analyze a sound-producing object in a complex acoustic environment. This is exemplified by the cocktail party problem: how does an individual attend to a single speaker at a crowded cocktail party, amidst a cacophony of other voices and sounds? An essential step in solving this problem is identifying and perceptually segregating auditory objects, analogous to perceptually grouping objects in visual scenes; in this vein, the general problem has been termed 'auditory scene analysis' [1]. In the cocktail party example, the listener must segregate the voice of the person to whom they are attending. Once segregated from extraneous sounds (the background noise), the object of interest (the foreground signal) can be further analyzed: the listener can analyze the speech signal coming from the person of interest while ignoring the cocktail party background.

Determining how the brain solves this type of problem has far-reaching significance, both for the human condition and for voice and speech recognition systems. While the brains of young, healthy individuals solve the problem rather easily, elderly and developmentally impaired individuals have difficulty isolating and analyzing sound sources. It is well known that elderly people with hearing aids have particular difficulty functioning in acoustically complex, noisy environments like cocktail parties. What is less well known is that this difficulty afflicts many elderly listeners generally, even those without the high-frequency hearing loss also associated with aging; that is one reason simply turning up the amplification in hearing aids cannot treat auditory scene analysis problems. As the population ages, this problem will become more prevalent. Dyslexic individuals also have difficulty perceptually grouping and analyzing sounds in noisy environments. A better understanding of the mechanisms underlying auditory scene analysis will help us mitigate these problems in aging and dyslexic individuals, and the social isolation often associated with them.

Also, the computations underlying auditory scene analysis, which healthy human brains perform with little apparent effort, have eluded engineers trying to build speech and voice recognition systems. Accordingly, such systems require quiet, simple acoustic environments to operate well. A deeper understanding of how the brain solves the auditory scene analysis problem will help engineers design better speech recognition systems.

Within the context of auditory scene analysis, an interesting phenomenon, auditory fill-in, occurs when a foreground sound is interrupted by noise. If a foreground sound (such as speech, an FM glide, or a tone) is interrupted by noise (such as a cough), subjects hear the foreground continuing through the noise, even at noise intensities loud enough that the foreground cannot be heard over the noise. Why doesn't the interrupting noise make the foreground less perceptible? Because the nervous system 'fills in' the missing information (i.e., creates an illusion). This can be demonstrated by introducing silent gaps into a foreground and superimposing loud noise bursts during the silence. Under these conditions, subjects perceive the foreground as continuing through the noise, even though the foreground signal that should have occurred during the noise was deleted (reviewed in [1]). Studying illusions such as fill-in is scientifically valuable because of the incongruity between the physical stimulus and the percept: investigators can exploit this incongruity to determine where in the brain neural activity becomes more strongly associated with the illusory percept than with the physical stimulus. When neural responses closely related to the illusory percept are found, this relationship can be used to advance the understanding of how brain mechanisms create the perception. We propose to identify the neuronal contribution to perception, specifically to auditory fill-in, by investigating perception and single-unit responses from auditory cortex.
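The stimulus construction described above (a foreground with silent gaps, and loud noise bursts superimposed during the silence) can be sketched in a few lines of code. This is an illustrative sketch only; the sample rate, tone frequency, gap timing, and noise level are hypothetical parameters chosen for demonstration, not the stimulus values used in the proposed experiments.

```python
import numpy as np

def make_fill_in_stimulus(fs=16000, dur=1.0, f0=440.0,
                          gap_start=0.4, gap_end=0.6, noise_gain_db=15.0):
    """Build a tone with a silent gap, then superimpose a loud noise
    burst during the gap. With the burst present, listeners typically
    report the tone continuing through the noise (the fill-in illusion),
    even though the tone was physically deleted there."""
    t = np.arange(int(dur * fs)) / fs
    tone = np.sin(2 * np.pi * f0 * t)

    # Delete the foreground during the gap interval.
    gap = (t >= gap_start) & (t < gap_end)
    tone[gap] = 0.0

    # Noise burst scaled well above the tone's RMS (~0.707 for a unit
    # sine), loud enough to have masked the tone had it been present.
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(int(gap.sum()))
    noise *= (10 ** (noise_gain_db / 20)) * np.sqrt(0.5)

    stim = tone.copy()
    stim[gap] += noise
    return stim, tone, gap
```

Playing `stim` versus the gapped `tone` alone demonstrates the incongruity exploited in the proposed experiments: the physical signal is identical in the gap-only condition, yet the percept differs.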

Investigations of complex perceptions and illusions in the auditory system of animals are rare. Of course, relating single-neuron firing patterns to perceptual and cognitive function is difficult; if the behavioral and physiological experiments are not carefully designed and implemented, interpretations can be severely compromised. Making quantitative comparisons also requires care. For example, a single neuron's ability to detect or discriminate a sound parameter on any single stimulus presentation depends not only on the mean response, but also on the trial-to-trial response variability. In visual system studies, powerful statistical techniques (based on signal detection theory) that account for neuronal response variability have demonstrated direct links between single-neuron responses and perception (e.g., [2]), yet this approach remains relatively untapped in the auditory system. We propose to apply and extend these techniques to auditory cortical studies of fill-in.
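The signal-detection-theory approach mentioned above can be illustrated with a standard ROC analysis of spike counts: given single-trial responses from two stimulus conditions, the area under the ROC curve gives the probability that an ideal observer reading out that neuron would correctly classify a trial. The sketch below uses simulated Poisson spike counts with hypothetical firing rates; it is a minimal illustration of the analysis family, not the lab's actual pipeline.

```python
import numpy as np

def roc_area(counts_a, counts_b):
    """Area under the ROC curve for two spike-count distributions:
    the probability that a randomly drawn count from condition B
    exceeds one from condition A (ties counted as half). Accounts
    for trial-to-trial variability, not just the mean response."""
    counts_a = np.asarray(counts_a)
    counts_b = np.asarray(counts_b)
    greater = (counts_b[:, None] > counts_a[None, :]).mean()
    ties = (counts_b[:, None] == counts_a[None, :]).mean()
    return greater + 0.5 * ties

# Simulated trial-to-trial variability (Poisson); rates are hypothetical.
rng = np.random.default_rng(1)
noise_trials = rng.poisson(10, 200)    # e.g., background noise alone
signal_trials = rng.poisson(18, 200)   # e.g., foreground + background
p_detect = roc_area(noise_trials, signal_trials)
```

Computing such a neurometric value across stimulus levels yields a neurometric function that can be compared directly with the animal's psychometric function, which is the kind of neuron-to-behavior comparison the proposal envisions.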

By combining behavior and physiology, carefully framing the behavioral question, and using rigorous quantitative techniques, we will determine neural correlates of fill-in with analyses that allow direct comparison of auditory cortical neurons' responses to the behavioral performance of monkeys. The results will significantly advance our understanding of how neural responses contribute to auditory perception and decision-making.

This new direction of the lab's research is novel in applying experimental and analytical techniques established in visual cortical studies to a new problem: determining the neural correlates of an auditory illusion. By using an auditory illusion, we can identify which parts of the brain are involved in creating percepts that deviate significantly from the physical world, and therefore how the brain creates unique perceptions. Experiments combining psychophysics and single-unit physiology to quantitatively investigate the link between neural activity and auditory perception are virtually non-existent. Outside of sound localization, to our knowledge, no attempt has been made to investigate high-level perceptual and illusory phenomena, such as fill-in or other aspects of auditory scene analysis, with such an approach. However, the potential gains for understanding the neural substrates of auditory processing are tremendous, and they motivate taking this different approach to investigating auditory cortex.