Funded Grants


Components and computations for building adaptive cognitive maps of space

The emergence of our sense of self from the coalescence of sensory inputs and the internal machinations of the mind remains one of the greatest unsolved scientific mysteries. The 17th-century philosopher John Locke famously proposed that all knowledge is born of two sources: sensation and reflection. In many ways, this idea sparked a centuries-long debate on how external sensory inputs and internal neural computations combine to generate cognition. With the development of new technologies and theories in the years after Locke’s proposal, what was once fodder only for philosophers would become the topic of rigorous scientific study. In the late 1800s, the physician-scientist Ramón y Cajal would lay pen to the delicate branching structures of neurons and propose that individual brain cells compose the mind. Much later, the experimentalist B.F. Skinner would champion the argument that stimuli reign, with sensory inputs and their effects driving all behavior. And with the rise of artificial intelligence and computer science in the 20th and 21st centuries, the dialogue would be pulled back toward determining the intrinsic components that generate cognition.

These efforts led to critical insights on how external sensory inputs are processed, with research illuminating how sensory stimuli are represented in the cortex and identified as a particular sound, sight or smell. But the complexity inherent to internal cognitive processes, such as thought or recollection, made deducing the cells and circuits that implement such phenomena a daunting prospect. Over the last few decades, however, the tractable response properties of parahippocampal neurons have provided a new access key to the neural signatures of two cognitive processes: self-localization and memory. Defined by functionally discrete response properties, neurons in the parahippocampus are proposed to support memory and navigation by generating an internal neural map of space. Medial entorhinal cortex (MEC) grid cells provide the neural metric of this map, encoding distance traveled by firing in multiple, regularly spaced locations. The neural metric for orientation is provided by head direction cells, while border cells code the location of environmental boundaries.
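The regularly spaced firing fields described above are often idealized in the literature as a sum of three plane waves oriented 60 degrees apart, producing a triangular lattice of firing fields. The sketch below illustrates that textbook idealization; it is not the proposal's own model, and the function name and parameters are hypothetical.

```python
import numpy as np

def grid_cell_rate(x, y, spacing=0.5, phase=(0.0, 0.0), orientation=0.0):
    """Normalized firing rate of an idealized grid cell at position (x, y),
    modeled as the sum of three cosine gratings 60 degrees apart."""
    k = 4 * np.pi / (np.sqrt(3) * spacing)  # wave number for the desired field spacing
    rate = 0.0
    for i in range(3):
        theta = orientation + i * np.pi / 3  # grating orientations: 0, 60, 120 degrees
        rate += np.cos(k * ((x - phase[0]) * np.cos(theta)
                            + (y - phase[1]) * np.sin(theta)))
    # The raw sum ranges over [-1.5, 3]; rescale to [0, 1], peaking at lattice vertices.
    return (rate + 1.5) / 4.5

# The cell fires maximally at the vertices of a triangular lattice:
print(grid_cell_rate(0.0, 0.0))  # → 1.0 at a grid vertex
```

Shifting `phase` slides the whole lattice, which is one way such models capture the offset firing patterns of neighboring grid cells.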

My research focuses on leveraging this system to understand how external sensory inputs meld with internal computations to generate neural codes capable of supporting spatial cognition. Here, we propose a research program aimed at shifting the dialogue on the mechanisms underlying self-localization and memory. First, we plan to leverage computational and statistical methods to develop new theories on how MEC neurons represent space, support navigation and provide the building blocks for encoding spatial memories. Second, we seek to discover how sensory inputs drive, correct and modify neural representations to generate adaptive cognitive maps of space. My lab has already achieved significant success along these lines, including applying statistical models to define MEC coding properties, demonstrating how behavior adaptively changes the previously proposed static path-integration mode of MEC, and discovering new ways in which sensory inputs calibrate self-motion signals. Future work in my lab extends these themes and looks to apply insights gained in the spatial network to other cortical regions involved in cognition.
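To give a flavor of the kind of statistical characterization of coding properties mentioned above, a simple and widely used starting point is an occupancy-normalized rate map: spike counts binned by the animal's position, divided by time spent in each bin. The sketch below uses entirely synthetic data (the tuning curve, bin sizes, and session length are all hypothetical) and is not the lab's actual analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical session: positions on a 1 m linear track, spikes in 20 ms bins.
dt = 0.02
positions = rng.uniform(0.0, 1.0, size=5000)
true_rate = 5.0 * np.exp(-((positions - 0.4) ** 2) / (2 * 0.05 ** 2))  # place-like tuning
spikes = rng.poisson(true_rate * dt)

# Occupancy-normalized rate map: spikes per second in each spatial bin.
bins = np.linspace(0.0, 1.0, 21)
occupancy, _ = np.histogram(positions, bins)
spike_counts, _ = np.histogram(positions, bins, weights=spikes)
rate_map = spike_counts / (occupancy * dt)

centers = (bins[:-1] + bins[1:]) / 2
print(f"estimated field peak near {centers[np.argmax(rate_map)]:.2f} m")
```

With enough data, the estimated peak lands near the true field center at 0.4 m; more elaborate statistical models (e.g., point-process GLMs) extend this same logic to multiple, interacting coding variables.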