Integrative theory of memory and cognitive processes

The study of human cognition has grown explosively in recent years, driven by two transformative technologies: experimental techniques that can directly probe and perturb the biophysics of the brain during cognitive tasks, and machine-learning algorithms that achieve human-level performance on complex tasks by mapping them onto a series of neural-like computations. Yet despite these impressive advances in experimental neuroscience and machine learning, neither approach in isolation can explain how essential acts of cognition, such as perception, learning, and memory, actually work.

My approach is to develop integrative theories that describe how cognitive behaviors emerge from the cooperative activity of multi-scale neural processes. The building blocks of these theories are neural network models flexible enough to accommodate biological detail at the neuronal, synaptic, and circuit levels. The theories are constrained by high-quality data from well-designed imaging, electrophysiology, and behavior experiments, and they draw on new and existing mathematical and computational frameworks, derived from machine learning and statistical physics and adapted for biological fidelity. This cross-disciplinary approach can provide important insights into how neural circuits in the brain learn and execute cognitive functions ranging from working memory to reasoning and intuition.

My current and future work builds on a general framework that I developed over the past several years. During my PhD thesis, which focused on the mathematical analysis of neural networks, I discovered and characterized a phase transition between internally generated and sensory-dominated dynamics. I suggested that this phase transition, in which sensory stimuli actively suppress ongoing activity, might be relevant to how the brain interprets subtle cues within the context of its internal experiential and motivational state to extract unambiguous representations of the external world. Electrophysiology and imaging studies in a number of brain regions have since validated the predictions of this work, including details of its frequency dependence and the time course of the switch in neural activity. This phase transition is also central to subsequently developed methods for constructing networks that perform specific functions. My network studies employed methods borrowed from random matrix theory and statistical mechanics, tools that have since become an essential part of the data-analysis and modeling arsenal.
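
To make the phase transition concrete, here is a minimal NumPy sketch of the standard random recurrent rate network used in such studies (dynamics dx/dt = -x + J*tanh(x) + input). The specific parameter choices below (network size N, gain g, drive amplitude and frequency) are illustrative assumptions of mine, not values from the work described above.

```python
import numpy as np

# Sketch of input-induced suppression of internally generated activity.
# With gain g > 1 the spontaneous dynamics are chaotic: two trajectories
# started from nearly identical states diverge. A sufficiently strong
# periodic drive suppresses the chaos, so the same trajectories converge.

rng = np.random.default_rng(0)
N, g, dt, T = 500, 1.5, 0.05, 4000            # neurons, gain, step, steps
J = rng.normal(0.0, g / np.sqrt(N), (N, N))   # couplings with variance g^2/N
phases = rng.uniform(0.0, 2.0 * np.pi, N)     # per-neuron input phases
freq = 0.1                                    # drive frequency (cycles / unit time)

def simulate(x0, amp):
    """Euler-integrate the network; return the full trajectory (T x N)."""
    x, traj = x0.copy(), np.empty((T, N))
    for t in range(T):
        drive = amp * np.cos(2.0 * np.pi * freq * t * dt + phases)
        x += dt * (-x + J @ np.tanh(x) + drive)
        traj[t] = x
    return traj

x0 = rng.normal(0.0, 1.0, N)
x0_perturbed = x0 + 1e-6 * rng.normal(0.0, 1.0, N)

for amp in (0.0, 2.0):                        # spontaneous vs. driven regime
    sep = np.linalg.norm(simulate(x0, amp) - simulate(x0_perturbed, amp), axis=1)
    print(f"drive amplitude {amp}: final separation {sep[-1]:.2e}")
# Qualitatively expected: the separation grows without the drive (chaos)
# and shrinks with it (the sensory-dominated phase).
```

Varying the drive frequency and amplitude in this sketch traces out the kind of frequency-dependent boundary between the two phases that the experimental studies probed.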

My Princeton colleagues and I currently develop and employ a broad set of methods from physics, engineering, and computer science to construct new conceptual frameworks describing the relationship between cognitive processes and biophysics across many scales of biological organization. Work along these lines has uncovered how selectivity for natural stimuli arises in neurons, how this selectivity influences sensorimotor learning, and how the neural sequences observed in different brain regions can arise from plastic yet largely disordered circuits.
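
As a generic illustration of the last point, the following sketch embeds a stereotyped activity sequence in a recurrent rate network. It is my own toy construction under simplifying assumptions, not the specific method referenced above: it fits all the recurrent weights by batch ridge regression, whereas the actual work modifies only part of an otherwise disordered circuit.

```python
import numpy as np

# Embed a sequence (Gaussian activity bumps tiling the trial, one per neuron)
# in a recurrent network with dynamics dx/dt = -x + J*tanh(x). We solve for
# weights J that reproduce the target trajectory, then verify by running the
# network autonomously from the target's initial state.

rng = np.random.default_rng(1)
N, T, dt = 100, 500, 0.02
tgrid = np.arange(T) * dt
centers = np.linspace(0.0, tgrid[-1], N)                    # one bump per neuron
X = 2.0 * np.exp(-((tgrid[None, :] - centers[:, None]) ** 2)
                 / (2 * 0.1 ** 2)) - 0.5                    # target states, N x T

R = np.tanh(X)                                              # target firing rates
target = np.gradient(X, dt, axis=1) + X                     # want J @ R ≈ dx/dt + x
lam = 1e-3                                                  # ridge regularizer
J = np.linalg.solve(R @ R.T + lam * np.eye(N), R @ target.T).T

# Autonomous rollout: compare the network's state to the target sequence.
x, rel_err = X[:, 0].copy(), []
for t in range(1, T):
    x += dt * (-x + J @ np.tanh(x))
    rel_err.append(np.linalg.norm(x - X[:, t]) / np.linalg.norm(X[:, t]))
print(f"mean relative tracking error: {np.mean(rel_err):.3f}")
# If the fit succeeds, the rollout traces the bump sequence approximately;
# some drift is normal for this naive batch fit.
```

The point of the sketch is only that a fixed recurrent circuit, with no feedforward chain built in by hand, can internally generate a reliable temporal sequence of the kind observed experimentally.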

In the future, I will pursue several interrelated research directions: (1) exploring whether neural sequences provide a substrate for working memory, specifically during the accumulation of evidence in favor of a decision and during imitation learning of repetitive behaviors; (2) uncovering how general cognitive maps for representing behaviorally relevant contextual variables are implemented through network population dynamics; and (3) developing an unsupervised learning procedure for training recurrently connected networks to execute complex cognitive actions, and relating that procedure to how animals and humans learn.