Omri, David and I discuss using recurrent neural network models (RNNs) to understand brains and brain function. Omri and David both use dynamical systems theory (DST) to describe how RNNs solve tasks, and to compare the dynamical structure/landscape/skeleton of RNNs with real neural population recordings. We talk about how their thoughts have evolved since their 2013 Opening the Black Box paper, which began these lines of research and thinking. Some of the other topics we discuss:
- The idea of computation via dynamics, which sees computation as a process of evolving neural activity in a state space (see the code sketch after this list);
- Whether DST offers a description of mental function (that is, something beyond brain function, closer to the psychological level);
- The difference between classical approaches to modeling brains and the machine learning approach;
- The concept of universality: that a wide variety of artificial RNNs and natural RNNs (brains) adhere to similar dynamical structures despite differences in their architectures and implementations;
- How learning is influenced by the dynamics in an ongoing and ever-changing manner, and how learning (an ongoing process) is distinct from optimization (which yields the final trained state).
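To make the computation-via-dynamics topic above concrete, here is a minimal Python/NumPy sketch of the fixed-point-finding idea from Opening the Black Box: treat a trained RNN as a dynamical system and numerically search for points where the state stops moving. The random network, weight scale, learning rate, iteration count, and helper names (`F`, `q`, `grad_q`) are all my illustrative assumptions, not the authors' code or settings.

```python
# Sketch of fixed-point finding for an RNN viewed as a dynamical system
# h_{t+1} = F(h_t): minimize the "speed" q(h) = 1/2 * ||F(h) - h||^2.
import numpy as np

rng = np.random.default_rng(0)
N = 50                                         # hidden units (arbitrary)
W = rng.normal(0, 1.2 / np.sqrt(N), (N, N))    # random recurrent weights
b = rng.normal(0, 0.1, N)                      # bias

def F(h):
    """One step of the autonomous RNN dynamics."""
    return np.tanh(W @ h + b)

def q(h):
    """Speed function; exactly zero at a fixed point of F."""
    d = F(h) - h
    return 0.5 * d @ d

def grad_q(h):
    """Analytic gradient: (J(h) - I)^T (F(h) - h), with J = diag(1 - tanh^2) W."""
    u = np.tanh(W @ h + b)
    d = u - h
    J = (1.0 - u**2)[:, None] * W              # Jacobian of F at h
    return (J - np.eye(N)).T @ d

# Start from a point on a simulated trajectory and descend q.
h = F(rng.normal(0, 0.5, N))
for _ in range(5000):
    h -= 0.1 * grad_q(h)

# q near zero => approximate fixed point; small-but-nonzero local minima
# are "slow points," which the paper also uses to analyze computation.
print(f"final speed q(h) = {q(h):.2e}")

# Linearizing around the point summarizes the local computation: Jacobian
# eigenvalues inside/outside the unit circle mark stable/unstable directions.
u = np.tanh(W @ h + b)
eig = np.linalg.eigvals((1.0 - u**2)[:, None] * W)
print("largest |eigenvalue| of J:", np.abs(eig).max())
```

The collection of such fixed and slow points, with their local linearizations, is one way to read the "dynamical skeleton" mentioned above.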
- David was previously on episode 5, a more introductory episode on dynamics, RNNs, and brains.
- Barak Lab
- Twitter: @SussilloDavid
- The papers we discuss or mention:
- Sussillo, D. & Barak, O. (2013). Opening the Black Box: Low-dimensional dynamics in high-dimensional recurrent neural networks.
- Computation Through Neural Population Dynamics.
- Implementing inductive bias for different navigation tasks through diverse RNN attractors.
- Dynamics of random recurrent networks with correlated low-rank structure.
- Quality of internal representation shapes learning performance in feedback neural networks.
- The original paper on Feigenbaum’s universality constant: Feigenbaum, M. J. (1976). “Universality in complex discrete dynamics”, Los Alamos Theoretical Division Annual Report 1975-1976 (a short worked illustration follows this list).
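As background for the universality discussion, here is the standard textbook illustration of Feigenbaum's constant, using the logistic map; this example is mine, not something worked through in the episode.

```latex
% Logistic map: the classic example of period-doubling universality.
\[
  x_{n+1} = r\,x_n(1 - x_n)
\]
% Period-doubling bifurcations occur at r_1 = 3 (period 2),
% r_2 = 1 + \sqrt{6} \approx 3.4495 (period 4), r_3 \approx 3.5441 (period 8), ...
% The ratio of successive bifurcation intervals converges to Feigenbaum's constant:
\[
  \delta = \lim_{n \to \infty} \frac{r_n - r_{n-1}}{r_{n+1} - r_n} \approx 4.6692\ldots
\]
% Remarkably, the same delta governs any smooth one-humped map, not just this one:
% the sense in which very different systems share the same dynamical structure.
```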
Timestamps:
0:00 – Intro
5:41 – Best scientific moment
9:37 – Why do you do what you do?
13:21 – Computation via dynamics
19:12 – Evolution of thinking about RNNs and brains
26:22 – RNNs vs. minds
31:43 – Classical computational modeling vs. machine learning modeling approach
35:46 – What are models good for?
43:08 – Ecological task validity with respect to using RNNs as models
46:27 – Optimization vs. learning
49:11 – Universality
1:00:47 – Solutions dictated by tasks
1:04:51 – Multiple solutions to the same task
1:11:43 – Direct fit (Uri Hasson)
1:19:09 – Thinking about the bigger picture