Causal Inference with Spiking Neurons

Dealing with uncertainty is necessary for the survival of any living organism. Recent years have seen the growing use of models formalizing sensory perception, motor control, or behavioral strategies as probabilistic inference in elementary causal models. Excitable neural structures face problems similar to those of behaving organisms: they receive noisy and ambiguous inputs, must accumulate evidence over time, combine unreliable cues, and compete with other neurons representing alternative interpretations of the sensory input. In my group, we apply such normative models, particularly Bayesian networks, to further our understanding of the function and dynamics of biological neural networks.
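
As a concrete illustration of such an elementary inference, the short Python sketch below (a toy example of our own, with hypothetical values, not a model from the project itself) combines two unreliable Gaussian cues about a common cause: the Bayes-optimal estimate weights each cue by its reliability, and the combined estimate is more reliable than either cue alone.

    # Toy illustration (hypothetical values): Bayes-optimal combination of two
    # independent Gaussian cues about a common hidden cause.
    def combine_cues(mu1, var1, mu2, var2):
        """Posterior mean and variance given two Gaussian cues of one cause."""
        w1 = (1.0 / var1) / (1.0 / var1 + 1.0 / var2)   # precision weight of cue 1
        mu = w1 * mu1 + (1.0 - w1) * mu2                # precision-weighted average
        var = 1.0 / (1.0 / var1 + 1.0 / var2)           # lower than either cue's variance
        return mu, var

    mu, var = combine_cues(mu1=2.0, var1=1.0, mu2=4.0, var2=4.0)
    print(mu, var)  # 2.4 0.8: the estimate leans toward the more reliable cue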

The originality of this research is that these models are applied at the level of microscopic structures such as single synapses, neurons, or microcircuits, whose computations are strongly constrained by neural biophysics and dynamics. In particular, single spikes are our elementary unit of coding and meaning. This contrasts with the influential “rate models”, which encode information in the mean activity of large neural populations. Rate models account for the huge variability of cortical neural responses but ignore the dazzling complexity of biological neurons and networks, which makes them poorly predictive. Moreover, why would the most sophisticated and metabolically expensive nervous system, the mammalian brain, "choose" to compute with such unreliable units at the expense of so many costly spikes?

Our working hypothesis is twofold. First, we suppose that single neurons and networks are specifically and precisely tuned to estimate sensory or motor variables as reliably as possible from their unreliable inputs and prior experience. Second, their firing dynamics ensure self-consistency, i.e., these estimates can be decoded from output spike trains by postsynaptic integration. These two principles are enough to entirely constrain the structure, dynamics, and plasticity of the corresponding spiking neural network. We recently showed that this purely functional approach converges towards powerful descriptive models of spiking neurons, e.g., adaptive integrate-and-fire neurons, chaotic balanced spiking networks, and generalized linear models.
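
A minimal sketch of the self-consistency principle, under our own simplifying assumptions (one neuron, one decoder weight w, a leaky readout; all parameter values hypothetical): the neuron fires exactly when a spike would reduce the squared error between the signal and its decoded estimate.

    import numpy as np

    dt, lam, w = 1e-3, 10.0, 0.1              # time step (s), readout decay, decoder weight
    t = np.arange(0.0, 2.0, dt)
    x = 1.0 + 0.5 * np.sin(2 * np.pi * t)     # signal to be estimated
    x_hat = np.zeros_like(x)                  # estimate decoded by postsynaptic integration
    n_spikes = 0

    for i in range(1, len(t)):
        x_hat[i] = x_hat[i - 1] * (1 - lam * dt)   # leaky postsynaptic integration
        V = w * (x[i] - x_hat[i])                  # "membrane potential" = projected error
        if V > 0.5 * w ** 2:                       # spike iff it reduces the squared error
            x_hat[i] += w                          # each spike bumps the readout by w
            n_spikes += 1

    print(n_spikes, np.mean((x - x_hat) ** 2))     # accurate tracking with few spikes

The threshold follows from comparing the error with and without a spike: (x - x_hat)^2 > (x - x_hat - w)^2 exactly when w(x - x_hat) > w^2/2, so the greedy spike rule is a leaky integrate-and-fire threshold on the projected error, which is one way the functional approach recovers integrate-and-fire dynamics.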

This approach could profoundly change our views of neural coding and computation. The modularity of Bayesian models ensures that neural structures implementing elementary causal models can be used as building blocks to construct the hierarchical models underlying perception and behavior. In particular, we can directly predict the impact of biophysical mechanisms (e.g., the effects of acetylcholine on Ih currents) on normal or pathological behavior (e.g., focused attention, hallucinations). Moreover, sensory cells should not be described by the traditional receptive field (RF), but by a “predictive field” (PF), i.e., the impact of their preferred stimulus on the sensory input. Indeed, sensory neurons should constantly reshape each other’s RFs in order to resolve ambiguities in the sensory scene. We will thus explore entirely new directions for the study of sensory neural representations. Finally, the Poisson variability observed in cortical neurons, overwhelmingly interpreted as either neuronal or sensory noise, may in fact not be noise at all. Such statistics are expected in large networks that track dynamic inputs deterministically with as few spikes as possible, calling into question the fundamental assumptions behind rate coding.
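
To make the last point concrete, the sketch below (again our own construction, with hypothetical parameters) shares the readout of the previous example among N neurons: any neuron's spike can correct the estimate, and a vanishingly small tie-break decides which one fires. The population tracks the signal deterministically and precisely, yet each individual spike train is irregular, with an interspike-interval CV near 1, i.e., Poisson-like variability without any noise in the encoded signal.

    import numpy as np

    rng = np.random.default_rng(0)
    dt, lam, w, N = 1e-4, 10.0, 0.1, 20        # hypothetical parameters
    t = np.arange(0.0, 5.0, dt)
    x = 1.0 + 0.5 * np.sin(2 * np.pi * t)      # signal tracked by the population
    x_hat = 0.0                                # readout shared by all N neurons
    spike_times = [[] for _ in range(N)]

    for i in range(1, len(t)):
        x_hat *= 1 - lam * dt                  # shared leaky readout
        if w * (x[i] - x_hat) > 0.5 * w ** 2:  # a spike would reduce the error...
            k = rng.integers(N)                # ...and a tiny tie-break picks who fires
            x_hat += w
            spike_times[k].append(t[i])

    isi = np.diff(spike_times[0])
    print(np.std(isi) / np.mean(isi))          # CV close to 1: Poisson-like single trains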