A Computational Model of Learning Flexible Navigation in a Maze by Layout-Conforming Replay of Place Cells

Abstract

Recent experimental observations have shown that the reactivation of hippocampal place cells (PCs) during sleep or immobility depicts trajectories that can go around barriers and flexibly adapt to a changing maze layout. Such layout-conforming replay sheds light on how place-cell activity supports the learning of flexible navigation in a dynamically changing maze. However, existing computational models of replay fall short of generating layout-conforming replay, restricting their use to simple environments such as linear tracks or open fields. In this paper, we propose a computational model that generates layout-conforming replay and explains how such replay drives the learning of flexible navigation in a maze. First, we propose a Hebbian-like rule to learn the inter-PC synaptic strengths as the animal explores the maze. We then use a continuous attractor network (CAN) with feedback inhibition to model the interaction between place cells and hippocampal interneurons. The activity bump of place cells drifts along a path in the maze, which models layout-conforming replay. During replay at rest, the synaptic strengths from place cells to striatal medium spiny neurons (MSNs) are learned by a novel dopamine-modulated three-factor rule that stores place-reward associations. During goal-directed navigation, the CAN periodically generates replay trajectories from the animal's location for path planning, and the animal follows the trajectory that leads to maximal MSN activity. We have implemented our model in a high-fidelity virtual rat in the MuJoCo physics simulator. Extensive experiments demonstrate that the model's superior navigational flexibility in a maze stems from the continuous re-learning of inter-PC and PC-MSN synaptic strengths.
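The two learning rules named in the abstract can be illustrated with a minimal rate-based sketch. This is an assumption-laden toy, not the paper's actual equations: `hebbian_pc_update` grows an inter-PC weight when two place cells are co-active, and `three_factor_msn_update` gates a PC-to-MSN weight change by presynaptic activity, postsynaptic activity, and a dopamine signal. The learning rates and the simple multiplicative forms are hypothetical choices for illustration.

```python
import numpy as np

def hebbian_pc_update(W, rates, lr=0.01):
    """Hebbian-like co-activity rule for inter-PC weights (toy sketch).

    W grows where place cells fire together; self-connections are
    kept at zero. `lr` is a hypothetical learning rate."""
    coactivity = np.outer(rates, rates)      # pre x post firing rates
    np.fill_diagonal(coactivity, 0.0)        # no self-connections
    return W + lr * coactivity

def three_factor_msn_update(w, pc_rates, msn_rate, dopamine, lr=0.05):
    """Dopamine-modulated three-factor rule for PC->MSN weights (toy sketch).

    The update is the product of presynaptic PC rates, the postsynaptic
    MSN rate, and a dopamine (reward) signal; with no dopamine, no
    place-reward association is stored."""
    return w + lr * dopamine * msn_rate * pc_rates

# Example: two co-active place cells strengthen their mutual connection,
# and a rewarded replay event strengthens their projection onto an MSN.
W = np.zeros((3, 3))
rates = np.array([1.0, 0.8, 0.0])            # PC 2 is silent
W = hebbian_pc_update(W, rates)

w_msn = np.zeros(3)
w_msn = three_factor_msn_update(w_msn, rates, msn_rate=1.0, dopamine=1.0)
```

Under this sketch, only weights between co-active cells change, and setting `dopamine=0.0` leaves the PC-MSN weights untouched, which is the qualitative behavior the three-factor rule is meant to capture.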

Publication
Frontiers in Computational Neuroscience 17
Yuanxiang Gao

My research interests include neural networks, spatial memory, synaptic plasticity, continuous attractor neural networks, and reinforcement learning.
