Abstract: A key problem in Neuroscience is to understand the mechanistic and algorithmic processing of information in biological neural networks. There are two classical lenses for viewing the problem, single-neuron computation and circuit dynamics, but they are rarely brought together at an intermediate level that bridges this dichotomy mathematically. The firing of a group of neurons can be viewed as the collective output of a neural population integrating synaptic inputs over time. However, the continuous time-series nature of data flow in the recurrent and nested structures of the real brain poses a stringent challenge for delineating the underlying transformations, especially within traditional deep neural-network paradigms. Even more critically, multi-regional experimental recordings for this purpose are lacking.
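As a minimal sketch of this intermediate level (an illustrative rate-based formulation, not a model committed to by the proposal; the symbols below are introduced here for exposition), population firing can be written as a recurrent system integrating upstream synaptic input over time:

\[
\tau \frac{d\mathbf{r}(t)}{dt} = -\mathbf{r}(t) + f\big(W\mathbf{r}(t) + U\mathbf{x}(t)\big),
\]

where \(\mathbf{r}(t)\) denotes population firing rates, \(\mathbf{x}(t)\) the time-varying input from upstream regions, \(W\) the recurrent connectivity, \(U\) the input weights, \(f\) a pointwise nonlinearity, and \(\tau\) an integration time constant. The difficulty highlighted above is that recordings typically capture only \(\mathbf{r}(t)\), while \(\mathbf{x}(t)\) and the transformation itself must be inferred.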
Here we propose the serial circuits for memory function (the hippocampus) as a practical system to address this long-standing problem, and we attempt to articulate, for a mathematical audience and in light of existing methods, the unmet need for interpretable neural transformations. Rather than input-output transformations, even the best neural-population-dynamics research in Systems Neuroscience has characterized only multi-regional representations of outputs (i.e., neuronal firing). We have designed large-scale Neuropixels recordings across the input and output regions of the hippocampus in mice performing cognitive tasks in virtual reality, providing data-driven input sets for the stated problem. The project is conceived to develop analytical and machine-learning methods that deconvolve signals temporally integrated and transformed by recurrent and nested networks, a canonical multi-regional neural circuit architecture. The derived input-output transformations can then be compared against computational models of memory circuits, potentially benefiting the understanding of both biological and artificial intelligence. The results will offer insight into mechanistically grounded computations in the brain.
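One hedged way to formalize this goal (a framing sketch under the assumptions above, not the proposal's specific method) is to cast the simultaneous multi-regional recordings as learning an operator \(\mathcal{T}\) mapping the history of input-region activity to output-region firing:

\[
\mathbf{y}(t) \approx \mathcal{T}\big[\,\mathbf{x}(t') : t' \le t\,\big],
\]

where \(\mathbf{x}\) and \(\mathbf{y}\) denote activity recorded in the hippocampal input and output regions, respectively. The deconvolution problem is then to recover an interpretable form of \(\mathcal{T}\) despite the recurrent and nested circuitry that entangles inputs across time.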