Tim and I talk about the upcoming Cognitive Computational Neuroscience conference, where he’ll be delivering a keynote address, conferences in general, his Tolman-Eichenbaum Machine, a model whose units mimic neurons in the hippocampus and entorhinal cortex to abstract and generalize the structure of knowledge, his work using MEG to measure replay in humans, and more.
Tony and I talk about his idea that AI should take inspiration from DNA. That is, DNA shaped by evolution can be thought of as a compressed bottleneck of innate information that our brains start with, unlike the tabula rasa deep learning systems in vogue these days. We also talk about his experience starting the COSYNE conference, his work on auditory decision-making in rodents, how he plans to revolutionize neuroscience by using DNA barcodes to solve the connectome, and more.
Federico and I discuss the difference between weak and strong emergence and how they relate to theories about brain function and consciousness, like the free energy principle, integrated information theory, and oscillatory mechanisms.
Rafal and I discuss many of the ways back-propagation could be approximated in brains, as detailed in his recent Trends in Cognitive Sciences review. We also cover how brains and machines learn, the free energy principle and its predictions and implications for back-prop and for understanding brains in general, and more.
Francisco and I discuss language and brains, his company cortical.io, which applies his Semantic Folding Theory of how brains process language to perform natural language processing on text for many purposes, and the world of starting and running companies like his own.