The brain integrates perceptual experiences into stable and unified representations of the environment. This process, called cognitive mapping, manifests in our memories and guides our everyday behavior.
Cognitive maps are thought to reside in the hippocampal formation, a high-level brain structure in the medial temporal lobe (MTL) known to represent space. Many of the processing steps necessary to form cognitive maps, however, occur much earlier in the cortical hierarchy.
My long-term goal is to understand how the brain derives the unified panorama we experience from sensory inputs, how we store it in our memories, and how these memories in turn affect the way we perceive and interact with the world.
My work combines functional magnetic resonance imaging (3T & 7T fMRI), eye tracking, and computational modeling to examine how human viewing behavior and brain activity along the visual streams relate to memory and cognitive-map-like processing in the MTL. Tightly controlled psychophysical experiments and naturalistic virtual reality help me address these questions from multiple angles, complemented by machine learning to characterize and map neural population activity across the brain.
Our brain receives visual input that is in many respects similar to a video acquired with a shaky camera. When our eyes move, every object in the environment changes its position on the retina, which induces retinal motion and makes it difficult to perceive a unified visual scene.
Before more complex information about the environment can be read out, our brain therefore first needs to stabilize our perception during movements. In my first project, I studied how our brain achieves this remarkable feat. The brain integrates incoming visual signals with the very motor commands that induced the movement, allowing it to tease apart whether sensory changes were self-induced or caused by the environment.
My co-authors and I found a whole network of regions engaged in this process, including the earliest visual cortices in the brain (Nau et al. 2018, NeuroImage).
In another study, we examined a related but higher-order mechanism in the MTL, known to map space during navigation. We asked whether the same MTL mechanism represents visual space as well, hence where we are looking rather than where we are. Critical neural components here are entorhinal grid cells: neurons whose multiple firing fields tessellate space with a hexagonal grid.
We refined an fMRI proxy measure for grid-cell population activity during navigation and adapted it to a viewing task, showing that the human entorhinal cortex indeed represents visual space with a grid-cell-like code (Nau et al. 2018, Nature Neuroscience).
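The core logic of such grid-cell proxy measures can be illustrated with a toy six-fold symmetry analysis. This is a minimal sketch with synthetic data, not the published pipeline: a signal modulated by movement (or gaze) direction with 60-degree periodicity is regressed onto sine and cosine regressors, from which a hexadirectional amplitude and a putative grid orientation can be recovered.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "entorhinal" signal: modulated by movement/gaze direction theta
# with six-fold (hexagonal) symmetry, plus measurement noise.
theta = rng.uniform(0, 2 * np.pi, 500)        # direction per trial/volume
true_amp, true_orientation = 1.5, 0.3         # arbitrary ground truth
y = true_amp * np.cos(6 * (theta - true_orientation))
y += rng.normal(scale=0.1, size=theta.size)

# Regress the signal onto cos/sin regressors with 60-degree periodicity.
X = np.column_stack([np.cos(6 * theta), np.sin(6 * theta)])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# Recover the hexadirectional amplitude and the putative grid orientation.
amplitude = np.hypot(b[0], b[1])
orientation = np.arctan2(b[1], b[0]) / 6

print(amplitude, orientation)
```

With enough samples, the fitted amplitude and orientation closely match the ground truth, which is why this quadrature-regressor trick is a common way to detect six-fold directional modulation in a univariate signal.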
These results have exciting implications, many of which we discussed in our review article (Nau et al. 2018, Trends Cogn. Sci). Most importantly, they show that MTL mechanisms support domain-general computations in the brain, not limited to navigation, and that viewing behavior and visual paradigms enable a powerful read-out of these high-level cognitive processes. The review also provides an overview of visual and gaze-related processing in the MTL, how it interacts with activity in sensory cortices, and the computational challenges these codes might solve. We propose that viewing and navigation are guided by a common MTL mechanism that allows us to explore the world.
Having worked on low- and mid-level vision as well as high-level spatial processing in the MTL, I wondered how these processes interact.
Because cognitive mapping engages an entire hierarchy of brain regions, I was convinced that we needed to study the neural tuning of many regions from a network-level perspective, yet simultaneously in great detail and in the context of the behavior these regions are thought to support.
How can we solve this scientific challenge? One possible solution we have been working on recently is behavioral encoding modeling: a type of predictive model that uses information about ongoing behavior to predict brain activity.
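The idea can be sketched in a few lines. This is a minimal illustration on synthetic data, not our actual analysis: a voxel's response is modeled as a weighted sum of direction-tuned basis functions of the participant's current heading, the weights are fit on training data, and tuning is quantified as prediction accuracy on held-out data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Behavioral variable: the participant's heading direction over time.
T = 1200
heading = rng.uniform(0, 2 * np.pi, T)

# Design matrix: circular (von-Mises-shaped) basis functions of heading.
centers = np.linspace(0, 2 * np.pi, 8, endpoint=False)
X = np.exp(2.0 * np.cos(heading[:, None] - centers[None, :]))

# Synthetic "voxel": an unknown directional tuning profile plus noise.
true_w = rng.normal(size=8)
y = X @ true_w + rng.normal(scale=0.5, size=T)

# Fit the encoding model on a training split (closed-form ridge regression)...
train, test = slice(0, 1000), slice(1000, None)
lam = 1.0
w = np.linalg.solve(X[train].T @ X[train] + lam * np.eye(8),
                    X[train].T @ y[train])

# ...and score it as the correlation between predicted and observed
# held-out activity.
r = np.corrcoef(X[test] @ w, y[test])[0, 1]
print(r)
```

Because the model is evaluated on held-out data, a high correlation indicates genuine directional tuning rather than overfitting; repeating this per voxel yields brain-wide maps of tuning strength.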
We combined this approach with 7T-fMRI and virtual reality to study how the human brain derives a sense of direction from environmental cues.
Most importantly, our results suggest that the strength, width, and topology of directional coding in the human brain reflect how well participants have memorized the environment and the objects within it (Nau et al. 2019, BioRxiv).
This means that what we remember shapes how the entire network of regions involved in cognitive mapping processes environmental information.
Moreover, we believe that such behavioral encoding models can be extended to other cognitive domains, recording techniques, and species, enabling the study of the neural underpinnings of behavior with high precision and from a brain-network-level perspective, as shown here.
I am now working on several follow-up ideas, for example probing how visual memory signals emanating from the MTL impact perceptual processing in upstream brain areas, and how memories guide eye movements during scene viewing and recall. Please see the Publications & Preprints section for more and recent work. Thank you for reading!