Talia and I discuss her work on how our visual system is organized topographically and divides objects into three main categories: big inanimate things, small inanimate things, and animals. Her work is unique in that it focuses not on the classic hierarchical processing of vision (though she does that, too), but on what kinds of things are represented along that hierarchy. She also uses deep networks to learn more about the visual system. We also talk about her keynote talk at the Cognitive Computational Neuroscience conference, and plenty more.
How does knowledge of the world get into our brains and become integrated with the rest of our knowledge and memories? Anna and I talk about the complementary learning systems (CLS) framework, introduced in 1995, which posits a fast episodic hippocampal learning system and a slower statistical cortical learning system. We then discuss her work that advances the CLS framework, adds missing pieces to it, and explores how sleep and sleep cycles contribute to the process. We also discuss how her work might contribute to AI systems that use multiple types of memory buffers, a little about being a woman in science, and how it’s going with her brand new lab.
Brad and I discuss how Moore’s law is on its last legs and his ideas for how neuroscience – in particular neural algorithms – may help computing continue to scale in a post-Moore’s law world. We also discuss neuromorphics in general, and more.
Brad and I discuss the state of neuromorphics and its relation to neuroscience and artificial intelligence. He describes his work adding new neurons to deep learning networks during training, called neurogenesis deep learning, inspired by how neurogenesis in the dentate gyrus of the hippocampus supports learning new things while keeping previous memories intact. We also talk about his method to transform deep learning networks into spiking neural networks so they can run on neuromorphic hardware, and the annual neuromorphics workshop he organizes, the Neuro Inspired Computational Elements (NICE) workshop.
Nando is a principal scientist at DeepMind and holds an appointment at CIFAR, the Canadian Institute for Advanced Research. We talk about why he studies artificial intelligence and about many of his current projects advancing machine learning in challenging modern areas: meta-learning, teaching machines how to program other machines, training networks using few examples, and more.