Real-time neural connectivity in natural language perception and production

When we talk with other people, read a journal, or write an email, our brains perform the formidable task of integrating information arriving via the eyes, ears, and hands with linguistic analysis, intention, memory, and control of mouth and hand movements. All this must happen on-line, while we continue to listen, speak, read, or write. It thus seems natural to assume that language function is implemented in the form of networks, with continuous rapid exchange of information between several brain areas. Interacting neural networks during visual perception and motor performance have been demonstrated in animal studies, where neural activity can be recorded directly from the cortex. At present, however, it is not possible to investigate real-time information transfer within large-scale neural networks in the human brain during natural language performance. Why is that?

We would need to identify groups of brain areas with similar time courses of activation. Such correlation would indicate that these areas talk to each other, and systematic time lags between the time courses would suggest a specific sequence of activation within the network. Modern functional imaging tools have made it possible to identify active brain areas safely from signals recorded outside the head. When a brain area is active, communication between neurons becomes more intense and the electrical current in this area increases. The neurons then need more oxygen, which is brought to the hungry cells by increased blood flow to that brain area. Imaging methods that detect changes in blood flow or oxygenation (functional magnetic resonance imaging, fMRI; positron emission tomography, PET) provide accurate localization of the active areas (about 1 mm). Unfortunately, these measures vary slowly, over periods of seconds, whereas the underlying neural changes take place in milliseconds (0.001 s). These techniques are thus not appropriate for analysis of real-time connectivity.
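To make the idea of correlated time courses and systematic time lags concrete, here is a minimal sketch in Python (simulated signals and plain NumPy only; it is not the analysis code of this project, and the 30 ms delay, noise levels, and other parameters are arbitrary illustrative choices). The lag at which the cross-correlation between two activation time courses peaks suggests which area leads and by how much.

```python
# Minimal sketch (simulated data): "area B" repeats the activity of "area A"
# after a fixed delay, and the lag at which their cross-correlation peaks
# recovers that delay, suggesting an A -> B ordering.
import numpy as np

fs = 1000                                  # sampling rate (Hz), i.e. 1 ms resolution
t = np.arange(0, 2.0, 1 / fs)              # two seconds of simulated data
rng = np.random.default_rng(0)

# "Area A": a smooth, broadband activation time course (low-pass filtered noise).
area_a = np.convolve(rng.standard_normal(t.size), np.hanning(10), mode="same")

# "Area B": the same activity delayed by a hypothetical 30 ms, plus its own noise.
delay = int(0.030 * fs)
area_b = np.roll(area_a, delay) + 0.3 * area_a.std() * rng.standard_normal(t.size)

# Normalized cross-correlation over plausible lags (-100 ... +100 ms).
a = (area_a - area_a.mean()) / area_a.std()
b = (area_b - area_b.mean()) / area_b.std()
max_lag = int(0.100 * fs)
lags = np.arange(-max_lag, max_lag + 1)
xcorr = np.array([np.mean(a * np.roll(b, -lag)) for lag in lags])

# A peak at a positive lag means that B follows A by that many milliseconds.
best_lag_ms = lags[np.argmax(xcorr)] * 1000 / fs
print(f"Peak correlation at {best_lag_ms:+.0f} ms: area B follows area A")
```

In real recordings the delayed signal is buried in ongoing activity from many other sources, which is exactly why the hemodynamic methods' second-scale resolution is too coarse and millisecond-scale recordings are needed.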

For accurate timing, one needs to turn to neurophysiological methods (magnetoencephalography, MEG, and electroencephalography, EEG). They detect the electric (EEG) and magnetic (MEG) fields directly associated with neuronal currents. With these methods, however, localization of the active areas is quite difficult, owing to the complex mathematical relationship between electric currents and electromagnetic field patterns, and to the large changes of electric conductivity at the borders between brain, skull, and scalp. Fortunately, the magnetic field passes through the skull and scalp essentially unaffected. With MEG, one can therefore readily determine both the locations of active brain areas with reasonable accuracy (about 1 cm) and the relative timing of activation in these areas. MEG thus seems like an obvious choice for network analysis. However, the additional requirement of finding not just active areas but groups of areas with correlated time courses of activation makes the problem enormously demanding conceptually, mathematically, and computationally.

We have now succeeded in developing a technique in which we first compute correlation measures between the MEG sensor signals and, from these, identify pairs of brain areas with correlated time courses of activation. Brought together, these multiple pairs of brain areas form large-scale neural networks. The method has been successfully tested on a finger-movement task, where muscle activity recorded from the finger served as an external, non-brain reference signal that simplified the analysis considerably. We are now ready to face the challenge of searching for neural networks during natural, internally driven language performance, where no external timing signals or prior assumptions about connected areas are available. The first tests on real-time functional connectivity during reading are promising.
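As an illustration of the reference-signal strategy just described (a sketch of the general idea only, not the method developed in this project), the following Python code simulates an external "EMG" trace and a set of MEG sensor signals, computes the coherence between the reference and each sensor with SciPy, and picks out the sensors that share the reference rhythm. The 20 Hz rhythm, sensor count, and 0.5 cutoff are hypothetical choices made for the example.

```python
# Minimal sketch of the reference-signal idea with simulated data: compute
# coherence between an external reference (a surrogate finger-EMG trace) and
# every MEG sensor, and keep the sensors that share the reference rhythm.
import numpy as np
from scipy.signal import coherence

fs = 1000                                  # sampling rate (Hz)
t = np.arange(0, 10.0, 1 / fs)             # ten seconds of simulated data
rng = np.random.default_rng(1)

# Hypothetical 20 Hz rhythm shared by the "EMG" and a few sensors.
drive = np.sin(2 * np.pi * 20 * t)
emg = drive + rng.standard_normal(t.size)

n_sensors = 50
meg = rng.standard_normal((n_sensors, t.size))
meg[:5] += 0.8 * drive                     # only the first five sensors pick up the source

# Coherence between the reference and each sensor, read out near 20 Hz.
coh_at_20 = np.empty(n_sensors)
for i in range(n_sensors):
    f, cxy = coherence(emg, meg[i], fs=fs, nperseg=1024)
    coh_at_20[i] = cxy[np.argmin(np.abs(f - 20.0))]

coupled = np.flatnonzero(coh_at_20 > 0.5)  # illustrative cutoff
print("Sensors coherent with the EMG reference:", coupled)
```

During natural language performance no such external reference exists, which is what makes the search for correlated brain areas so much harder: candidate pairs must be found among the brain signals themselves.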

Our aim in this project is to extract, visualize, and quantify large-scale neural networks, and the relative timing within them, during real-life language tasks. These results will be compared with earlier findings on language function obtained with currently available experimental designs and analysis techniques. We hope to track the neural networks supporting natural speech perception, speech production, free discussion, reading, and writing, as well as possible interactions between perception and production.

Linguistic communication is an integral part of human cognition, and of humanity itself. Impairments in language perception or production are experienced as exceptionally limiting, even unbearable. Our method holds great promise for elucidating the neural basis of language disorders. For example, stuttering is a developmental language disorder in which timing within the neural network is likely to be the central problem, and which emerges fully, and can thus best be investigated, during natural speech production. In aphasia, where local brain damage results in deterioration of language function, it would be very informative to evaluate the entire brain systems involved in a patient's natural language performance and how they change with rehabilitation.

Finally, one may ask whether language affects the overall patterns of connectivity in our brains. If it does, an obvious question is how much such language-driven connectivity influences the other tasks we perform. The possible effect of different languages on the organization of functional networks in the human brain may turn out to be an exciting future line of research.