Brain Inspired

152 episodes — Page 3 of 4

BI 138 Matthew Larkum: The Dendrite Hypothesis

Check out my free video series about what's missing in AI and Neuroscience. Support the show to get full episodes, full archive, and join the Discord community. Matthew Larkum runs his lab at Humboldt University of Berlin, where his group studies how dendrites contribute to computations within and across layers of the neocortex. Since the late 1990s, Matthew has continued to uncover key properties of the way pyramidal neurons stretch across layers of the cortex, their dendrites receiving inputs from those different layers - and thus different brain areas. For example, layer 5 pyramidal neurons have a set of basal dendrites near the cell body that receives feedforward-like input, and a set of apical dendrites all the way up in layer 1 that receives feedback-like input. Depending on which set of dendrites is receiving input, or neither or both, the neuron's output functions in different modes: silent, regular spiking, or burst spiking. Matthew realized the different sets of dendritic inputs could signal different operations, often pairing feedforward sensory-like signals and feedback context-like signals. His research has shown this kind of coincidence detection is important for cognitive functions like perception, memory, learning, and even wakefulness. We discuss many of his ideas and research findings, why dendrites have long been neglected in favor of neuron cell bodies, the possibility of learning about computations by studying implementation-level phenomena, and much more. Larkum Lab. Twitter: @mattlark. Related papers: Cellular Mechanisms of Conscious Processing. Perirhinal input to neocortical layer 1 controls learning (bioRxiv link: https://www.biorxiv.org/content/10.1101/713883v1). Are dendrites conceptually useful? Memories off the top of your head. Do Action Potentials Cause Consciousness? Blake Richards' episode discussing back-propagation in the brain (based on Matthew's experiments). 0:00 - Intro 5:31 - Background: Dendrites 23:20 - Cortical neuron bodies vs. branches 25:47 - Theories of cortex 30:49 - Feedforward and feedback hierarchy 37:40 - Dendritic integration hypothesis 44:32 - DIT vs. other consciousness theories 51:30 - Mac Shine Q1 1:04:38 - Are dendrites conceptually useful? 1:09:15 - Insights from implementation level 1:24:44 - How detailed to model? 1:28:15 - Do action potentials cause consciousness? 1:40:33 - Mac Shine Q2

Jun 6, 20221h 51m

BI 137 Brian Butterworth: Can Fish Count?

Check out my free video series about what's missing in AI and Neuroscience Support the show to get full episodes, full archive, and join the Discord community. Brian Butterworth is Emeritus Professor of Cognitive Neuropsychology at University College London. In his book, Can Fish Count?: What Animals Reveal About Our Uniquely Mathematical Minds, he describes the counting and numerical abilities across many different species, suggesting our ability to count is evolutionarily very old (since many diverse species can count). We discuss many of the examples in his book, the mathematical disability dyscalculia and its relation to dyslexia, how to test counting abilities in various species, how counting may happen in brains, the promise of creating artificial networks that can do math, and many more topics. Brian's website: The Mathematical BrainTwitter: @b_butterworthThe book:Can Fish Count?: What Animals Reveal About Our Uniquely Mathematical Minds 0:00 - Intro 3:19 - Why Counting? 5:31 - Dyscalculia 12:06 - Dyslexia 19:12 - Counting 26:37 - Origins of counting vs. language 34:48 - Counting vs. higher math 46:46 - Counting some things and not others 53:33 - How to test counting 1:03:30 - How does the brain count? 1:13:10 - Are numbers real?

May 27, 20221h 17m

BI 136 Michel Bitbol and Alex Gomez-Marin: Phenomenology

Support the show to get full episodes, full archive, and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience. Michel Bitbol is Director of Research at CNRS (Centre National de la Recherche Scientifique). Alex Gomez-Marin is a neuroscientist running his lab, The Behavior of Organisms Laboratory, at the Instituto de Neurociencias in Alicante. We discuss phenomenology as an alternative perspective on our scientific endeavors. Although we like to believe our science is objective and explains the reality of the world we inhabit, we can't escape the fact that all of our scientific knowledge comes through our perceptions and interpretations as conscious living beings. Michel has used phenomenology to resolve many of the paradoxes that quantum mechanics generates when it is understood as a description of reality, and more recently he has applied phenomenology to the philosophy of mind and consciousness. Alex is currently trying to apply the phenomenological approach to his research on brains and behavior. Much of our conversation revolves around how phenomenology and our "normal" scientific explorations can co-exist, including the study of minds, brains, and intelligence - our own and that of other organisms. We also discuss the "blind spot" of science, the history and practice of phenomenology, various kinds of explanation, the language we use to describe things, and more. Michel's website. Alex's Lab: The Behavior of Organisms Laboratory. Twitter: @behaviOrganisms (Alex). Related papers: The Blind Spot of Neuroscience. The Life of Behavior. A Clash of Umwelts. Related events: The Future Scientist (a conversation series). 0:00 - Intro 4:32 - The Blind Spot 15:53 - Phenomenology and interpretation 22:51 - Personal stories: appreciating phenomenology 37:42 - Quantum physics example 47:16 - Scientific explanation vs. phenomenological description 59:39 - How can phenomenology and science complement each other? 1:08:22 - Neurophenomenology 1:17:34 - Use of language 1:25:46 - Mutual constraints

May 17, 20221h 34m

BI 135 Elena Galea: The Stars of the Brain

Support the show to get full episodes, full archive, and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience. Brains are often conceived as consisting of neurons and "everything else." As Elena discusses, the "everything else," including glial cells and in particular astrocytes, has largely been ignored in neuroscience. That's partly because the fast action potentials of neurons have been assumed to underlie computations in the brain, and because technology only recently afforded closer scrutiny of astrocyte activity. Now that we can record calcium signaling in astrocytes, it's possible to ask how astrocytes' signaling with each other and with neurons may complement the cognitive roles once thought to be the sole domain of neurons. Although the computational role of astrocytes remains unclear, it is clear that astrocytes interact with neurons and neural circuits in dynamic and interesting ways. We talk about the historical story of astrocytes, the emerging modern story, and Elena shares her views on the path forward to understand astrocyte function in cognition, disease, homeostasis, and - Elena's favorite current hypothesis - their integrative role in negative feedback control. Elena's website. Twitter: @elenagalea1. Related papers: A roadmap to integrate astrocytes into Systems Neuroscience. Elena recommended this paper: Biological feedback control—Respect the loops. 0:00 - Intro 5:23 - The changing story of astrocytes 14:58 - Astrocyte research lags neuroscience 19:45 - Types of astrocytes 23:06 - Astrocytes vs neurons 26:08 - Computational roles of astrocytes 35:45 - Feedback control 43:37 - Energy efficiency 46:25 - Current technology 52:58 - Computational astroscience 1:10:57 - Do names for things matter?

May 6, 20221h 17m

BI 134 Mandyam Srinivasan: Bee Flight and Cognition

Support the show to get full episodes, full archive, and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience. Srini is Emeritus Professor at Queensland Brain Institute in Australia. In this episode, he shares his wide range of behavioral experiments elucidating the principles of flight and navigation in insects. We discuss how bees use optic flow signals to determine their speed, distance, proximity to objects, and to gracefully land. These abilities are largely governed by control systems, balancing incoming perceptual signals with internal reference signals. We also talk about a few of the aerial robotics projects his research has inspired, many of the other cognitive skills bees can learn, the possibility of their feeling pain, and the nature of their possible subjective conscious experience. Srini's Website. Related papers: Vision, perception, navigation and 'cognition' in honeybees and applications to aerial robotics. 0:00 - Intro 3:34 - Background 8:20 - Bee experiments 14:30 - Bee flight and navigation 28:05 - Landing 33:06 - Umwelt and perception 37:26 - Bee-inspired aerial robotics 49:10 - Motion camouflage 51:52 - Cognition in bees 1:03:10 - Small vs. big brains 1:06:42 - Pain in bees 1:12:50 - Subjective experience 1:15:25 - Deep learning 1:23:00 - Path forward

Apr 27, 20221h 26m

BI 133 Ken Paller: Lucid Dreaming, Memory, and Sleep

Support the show to get full episodes, full archive, and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience. Ken discusses the recent work in his lab that allows communication with subjects while they experience lucid dreams. This new paradigm opens many avenues to study the neuroscience and psychology of consciousness, sleep, dreams, memory, and learning, and to improve and optimize sleep for cognition. Ken and his team are developing a Lucid Dreaming App, which is freely available via his lab. We also discuss much of his work on memory and learning in general and specifically related to sleep, like reactivating specific memories during sleep to improve learning. Ken's Cognitive Neuroscience Laboratory. Twitter: @kap101. The Lucid Dreaming App. Related papers: Memory and Sleep: How Sleep Cognition Can Change the Waking Mind for the Better. Does memory reactivation during sleep support generalization at the cost of memory specifics? Real-time dialogue between experimenters and dreamers during REM sleep. 0:00 - Intro 2:48 - Background and types of memory 14:44 - Consciousness and memory 23:32 - Phases of sleep and wakefulness 28:19 - Sleep, memory, and learning 33:50 - Targeted memory reactivation 48:34 - Problem solving during sleep 51:50 - 2-way communication with lucid dreamers 1:01:43 - Confounds to the paradigm 1:04:50 - Limitations and future studies 1:09:35 - Lucid dreaming app 1:13:47 - How sleep can inform AI 1:20:18 - Advice for students

Apr 15, 20221h 29m

BI 132 Ila Fiete: A Grid Scaffold for Memory

Announcement: I'm releasing my Neuro-AI course April 10-13, after which it will be closed for some time. Learn more here. Support the show to get full episodes, full archive, and join the Discord community. Ila discusses her theoretical neuroscience work suggesting how our memories are formed within the cognitive maps we use to navigate the world and navigate our thoughts. The main idea is that grid cell networks in the entorhinal cortex internally generate a structured scaffold, which gets sent to the hippocampus. Neurons in the hippocampus, like the well-known place cells, receive that scaffolding and also receive external signals from the neocortex - signals about what's happening in the world and in our thoughts. Thus, the place cells act to "pin" what's happening in our neocortex to the scaffold, forming a memory. We also discuss her background as a physicist and her approach as a "neurophysicist", and a review she's publishing all about the many brain areas and cognitive functions being explained as attractor landscapes within a dynamical systems framework. The Fiete Lab. Related papers: A structured scaffold underlies activity in the hippocampus. Attractor and integrator networks in the brain. 0:00 - Intro 3:36 - "Neurophysicist" 9:30 - Bottom-up vs. top-down 15:57 - Tool scavenging 18:21 - Cognitive maps and hippocampus 22:40 - Hopfield networks 27:56 - Internal scaffold 38:42 - Place cells 43:44 - Grid cells 54:22 - Grid cells encoding place cells 59:39 - Scaffold model: stacked Hopfield networks 1:05:39 - Attractor landscapes 1:09:22 - Landscapes across scales 1:12:27 - Dimensionality of landscapes

Apr 3, 20221h 17m

BI 131 Sri Ramaswamy and Jie Mei: Neuromodulation-aware DNNs

Support the show to get full episodes, full archive, and join the Discord community. Sri and Mei join me to discuss how including principles of neuromodulation in deep learning networks may improve network performance. It's an ever-present question how much detail to include in models, and we are in the early stages of learning how neuromodulators and their interactions shape biological brain function. But as we continue to learn more, Sri and Mei are interested in building "neuromodulation-aware DNNs". Neural Circuits Laboratory. Twitter: Sri: @srikipedia; Jie: @neuro_Mei. Related papers: Informing deep neural networks by multiscale principles of neuromodulatory systems. 0:00 - Intro 3:10 - Background 9:19 - Bottom-up vs. top-down 14:42 - Levels of abstraction 22:46 - Biological neuromodulation 33:18 - Inventing neuromodulators 41:10 - How far along are we? 53:31 - Multiple realizability 1:09:40 - Modeling dendrites 1:15:24 - Across-species neuromodulation

Mar 26, 20221h 26m

BI 130 Eve Marder: Modulation of Networks

Support the show to get full episodes, full archive, and join the Discord community. Eve discusses many of the lessons she has learned studying a small nervous system, the crustacean stomatogastric nervous system (STG). The STG has only about 30 neurons and its connections and neurophysiology are well-understood. Yet Eve's work has shown it functions under a remarkable diversity of conditions, and does so in a remarkable variety of ways. We discuss her work on the STG specifically, and what her work implies about trying to study much larger nervous systems, like our human brains. The Marder Lab. Twitter: @MarderLab. Related to our conversation: Understanding Brains: Details, Intuition, and Big Data. Emerging principles governing the operation of neural networks (Eve mentions this regarding "building blocks" of neural networks). 0:00 - Intro 3:58 - Background 8:00 - Levels of ambiguity 9:47 - Stomatogastric nervous system 17:13 - Structure vs. function 26:08 - Role of theory 34:56 - Technology vs. understanding 38:25 - Higher cognitive function 44:35 - Adaptability, resilience, evolution 50:23 - Climate change 56:11 - Deep learning 57:12 - Dynamical systems

Mar 13, 20221h 0m

BI 129 Patryk Laurent: Learning from the Real World

Support the show to get full episodes, full archive, and join the Discord community. Patryk and I discuss his wide-ranging background working in both the neuroscience and AI worlds, and his resultant perspective on what's needed to move forward in AI, including some principles of brain processes that are more and less important. We also discuss his own work using some of those principles to help deep learning generalize to better capture how humans behave in and perceive the world. Patryk's homepage.Twitter: @paklnet.Related papersUnsupervised Learning from Continuous Video in a Scalable Predictive Recurrent Network. 0:00 - Intro 2:22 - Patryk's background 8:37 - Importance of diverse skills 16:14 - What is intelligence? 20:34 - Important brain principles 22:36 - Learning from the real world 35:09 - Language models 42:51 - AI contribution to neuroscience 48:22 - Criteria for "real" AI 53:11 - Neuroscience for AI 1:01:20 - What can we ignore about brains? 1:11:45 - Advice to past self

Mar 2, 20221h 21m

BI 128 Hakwan Lau: In Consciousness We Trust

Support the show to get full episodes, full archive, and join the Discord community. Hakwan and I discuss many of the topics in his new book, In Consciousness We Trust: The Cognitive Neuroscience of Subjective Experience. Hakwan describes his perceptual reality monitoring theory of consciousness, which suggests consciousness may act as a systems check between our sensory perceptions and higher cognitive functions. We also discuss his latest thoughts on mental quality space and how it relates to perceptual reality monitoring. Among many other topics, we chat about the many confounds and challenges to empirically studying consciousness, a topic featured heavily in the first half of his book. Hakwan was on a previous episode with Steve Fleming, BI 099 Hakwan Lau and Steve Fleming: Neuro-AI Consciousness. Hakwan's lab: Consciousness and Metacognition Lab. Twitter: @hakwanlau. Book: In Consciousness We Trust: The Cognitive Neuroscience of Subjective Experience. 0:00 - Intro 4:37 - In Consciousness We Trust 12:19 - Too many consciousness theories? 19:26 - Philosophy and neuroscience of consciousness 29:00 - Local vs. global theories 31:20 - Perceptual reality monitoring and GANs 42:43 - Functions of consciousness 47:17 - Mental quality space 56:44 - Cognitive maps 1:06:28 - Performance capacity confounds 1:12:28 - Blindsight 1:19:11 - Philosophy vs. empirical work

Feb 20, 20221h 25m

BI 127 Tomás Ryan: Memory, Instinct, and Forgetting

Support the show to get full episodes, full archive, and join the Discord community. Tomás and I discuss his research and ideas on how memories are encoded (the engram), the role of forgetting, and the overlapping mechanisms of memory and instinct. Tomás uses optogenetics and other techniques to label and control neurons involved in learning and memory, and has shown that forgotten memories can be restored by stimulating "engram cells" originally associated with the forgotten memory. This line of research has led Tomás to think forgetting might be a learning mechanism itself, an adaptation our brains make based on the predictability and affordances of the environment. His work on engrams has also led Tomás to think our instincts (ingrams) may share the same mechanism as our memories (engrams), and that memories may transition to instincts across generations. We begin by addressing Randy Gallistel's engram ideas from the previous episode: BI 126 Randy Gallistel: Where Is the Engram? Ryan Lab. Twitter: @TJRyan_77. Related papers: Engram cell connectivity: an evolving substrate for information storage. Forgetting as a form of adaptive engram cell plasticity. Memory and Instinct as a Continuum of Information Storage in The Cognitive Neurosciences. The Bandwagon by Claude Shannon. 0:00 - Intro 4:05 - Response to Randy Gallistel 10:45 - Computation in the brain 14:52 - Instinct and memory 19:37 - Dynamics of memory 21:55 - Wiring vs. connection strength plasticity 24:16 - Changing one's mind 33:09 - Optogenetics and memory experiments 47:24 - Forgetting as learning 1:06:35 - Folk psychological terms 1:08:49 - Memory becoming instinct 1:21:49 - Instinct across the lifetime 1:25:52 - Boundaries of memories 1:28:52 - Subjective experience of memory 1:31:58 - Interdisciplinary research 1:37:32 - Communicating science

Feb 10, 20221h 42m

BI 126 Randy Gallistel: Where Is the Engram?

Support the show to get full episodes, full archive, and join the Discord community. Randy and I discuss his long-standing interest in how the brain stores information to compute. That is, where is the engram, the physical trace of memory in the brain? Modern neuroscience is dominated by the view that memories are stored among synaptic connections in populations of neurons. Randy believes a more reasonable and reliable way to store abstract symbols, like numbers, is to write them into code within individual neurons. Thus, the spiking code, whatever it is, functions to write and read memories into and out of intracellular substrates, like polynucleotides (e.g., DNA, RNA). He lays out his case in detail in his book with Adam King, Memory and the Computational Brain: Why Cognitive Science will Transform Neuroscience. We also talk about some research and theoretical work since then that supports his views. Randy's Rutgers website. Book: Memory and the Computational Brain: Why Cognitive Science will Transform Neuroscience. Related papers: The theoretical RNA paper Randy mentions: An RNA-based theory of natural universal computation. Evidence for intracellular engram in cerebellum: Memory trace and timing mechanism localized to cerebellar Purkinje cells. The exchange between Randy and John Lisman. The blog post Randy mentions about universal function approximation: The Truth About the [Not So] Universal Approximation Theorem. 0:00 - Intro 6:50 - Cognitive science vs. computational neuroscience 13:23 - Brain as computing device 15:45 - Noam Chomsky's influence 17:58 - Memory must be stored within cells 30:58 - Theoretical support for the idea 34:15 - Cerebellum evidence supporting the idea 40:56 - What is the write mechanism? 51:11 - Thoughts on deep learning 1:00:02 - Multiple memory mechanisms? 1:10:56 - The role of plasticity 1:12:06 - Trying to convince molecular biologists

Jan 31, 20221h 19m

BI 125 Doris Tsao, Tony Zador, Blake Richards: NAISys

Support the show to get full episodes, full archive, and join the Discord community. Doris, Tony, and Blake are the organizers for this year's NAISys conference, From Neuroscience to Artificially Intelligent Systems (NAISys), at Cold Spring Harbor. We discuss the conference itself, some history of the neuroscience and AI interface, their current research interests, and a handful of topics around evolution, innateness, development, learning, and the current and future prospects for using neuroscience to inspire new ideas in artificial intelligence. From Neuroscience to Artificially Intelligent Systems (NAISys).Doris:@doristsao.Tsao Lab.Unsupervised deep learning identifies semantic disentanglement in single inferotemporal face patch neurons.Tony:@TonyZador.Zador Lab.A Critique of Pure Learning: What Artificial Neural Networks can Learn from Animal Brains.Blake:@tyrell_turing.The Learning in Neural Circuits Lab.The functional specialization of visual cortex emerges from training parallel pathways with self-supervised predictive learning. 0:00 - Intro 4:16 - Tony Zador 5:38 - Doris Tsao 10:44 - Blake Richards 15:46 - Deductive, inductive, abductive inference 16:32 - NAISys 33:09 - Evolution, development, learning 38:23 - Learning: plasticity vs. dynamical structures 54:13 - Different kinds of understanding 1:03:05 - Do we understand evolution well enough? 1:04:03 - Neuro-AI fad? 1:06:26 - Are your problems bigger or smaller now?

Jan 19, 20221h 11m

BI 124 Peter Robin Hiesinger: The Self-Assembling Brain

Support the show to get full episodes, full archive, and join the Discord community. Robin and I discuss many of the ideas in his book The Self-Assembling Brain: How Neural Networks Grow Smarter. The premise is that our DNA encodes an algorithmic growth process that unfolds information via time and energy, resulting in a connected neural network (our brains!) imbued with vast amounts of information from the "start". This contrasts with modern deep learning networks, which start with minimal initial information in their connectivity, and instead rely almost solely on learning to gain their function. Robin suggests we won't be able to create anything with close to human-like intelligence unless we build in an algorithmic growth process and an evolutionary selection process to create artificial networks. Hiesinger Neurogenetics LaboratoryTwitter: @HiesingerLab.Book: The Self-Assembling Brain: How Neural Networks Grow Smarter 0:00 - Intro 3:01 - The Self-Assembling Brain 21:14 - Including growth in networks 27:52 - Information unfolding and algorithmic growth 31:27 - Cellular automata 40:43 - Learning as a continuum of growth 45:01 - Robustness, autonomous agents 49:11 - Metabolism vs. connectivity 58:00 - Feedback at all levels 1:05:32 - Generality vs. specificity 1:10:36 - Whole brain emulation 1:20:38 - Changing view of intelligence 1:26:34 - Popular and wrong vs. unknown and right

Jan 5, 20221h 39m

BI 123 Irina Rish: Continual Learning

Support the show to get full episodes, full archive, and join the Discord community. Irina is a faculty member at MILA-Quebec AI Institute and a professor at Université de Montréal. She has worked from both ends of the neuroscience/AI interface, using AI for neuroscience applications, and using neural principles to help improve AI. We discuss her work on biologically-plausible alternatives to back-propagation, using "auxiliary variables" in addition to the normal connection weight updates. We also discuss the world of lifelong learning, which seeks to train networks in an online manner to improve on any tasks as they are introduced. Catastrophic forgetting is an obstacle in modern deep learning, where a network forgets old tasks when it is trained on new tasks. Lifelong learning strategies, like continual learning, transfer learning, and meta-learning seek to overcome catastrophic forgetting, and we talk about some of the inspirations from neuroscience being used to help lifelong learning in networks. Irina's website.Twitter: @irinarishRelated papers:Beyond Backprop: Online Alternating Minimization with Auxiliary Variables.Towards Continual Reinforcement Learning: A Review and Perspectives.Lifelong learning video tutorial: DLRL Summer School 2021 - Lifelong Learning - Irina Rish. 0:00 - Intro 3:26 - AI for Neuro, Neuro for AI 14:59 - Utility of philosophy 20:51 - Artificial general intelligence 24:34 - Back-propagation alternatives 35:10 - Inductive bias vs. scaling generic architectures 45:51 - Continual learning 59:54 - Neuro-inspired continual learning 1:06:57 - Learning trajectories

Dec 26, 20211h 18m

BI 122 Kohitij Kar: Visual Intelligence

Support the show to get full episodes and join the Discord community. Ko and I discuss a range of topics around his work to understand our visual intelligence. Ko was a postdoc in James DiCarlo's lab, where he helped develop the convolutional neural network models that have become the standard for explaining core object recognition. He is starting his own lab at York University, where he will continue to expand and refine the models, adding important biological details and incorporating models for brain areas outside the ventral visual stream. He will also continue recording neural activity and performing perturbation studies to better understand the networks involved in our visual cognition. VISUAL INTELLIGENCE AND TECHNOLOGICAL ADVANCES LAB. Twitter: @KohitijKar. Related papers: Evidence that recurrent circuits are critical to the ventral stream’s execution of core object recognition behavior. Neural population control via deep image synthesis. BI 075 Jim DiCarlo: Reverse Engineering Vision. 0:00 - Intro 3:49 - Background 13:51 - Where are we in understanding vision? 19:46 - Benchmarks 21:21 - Falsifying models 23:19 - Modeling vs. experiment speed 29:26 - Simple vs complex models 35:34 - Dorsal visual stream and deep learning 44:10 - Modularity and brain area roles 50:58 - Chemogenetic perturbation, DREADDs 57:10 - Future lab vision, clinical applications 1:03:55 - Controlling visual neurons via image synthesis 1:12:14 - Is it enough to study nonhuman animals? 1:18:55 - Neuro/AI intersection 1:26:54 - What is intelligence?

Dec 12, 20211h 33m

BI 121 Mac Shine: Systems Neurobiology

Support the show to get full episodes, full archive, and join the Discord community. Mac and I discuss his systems level approach to understanding brains, and his theoretical work suggesting important roles for the thalamus, basal ganglia, and cerebellum, shifting the dynamical landscape of brain function within varying behavioral contexts. We also discuss his recent interest in the ascending arousal system and neuromodulators. Mac thinks the neocortex has been the sole focus of too much neuroscience research, and that the subcortical brain regions and circuits have a much larger role underlying our intelligence. Shine LabTwitter: @jmacshineRelated papersThe thalamus integrates the macrosystems of the brain to facilitate complex, adaptive brain network dynamics.Computational models link cellular mechanisms of neuromodulation to large-scale neural dynamics. 0:00 - Intro 6:32 - Background 10:41 - Holistic approach 18:19 - Importance of thalamus 35:19 - Thalamus circuitry 40:30 - Cerebellum 46:15 - Predictive processing 49:32 - Brain as dynamical attractor landscape 56:48 - System 1 and system 2 1:02:38 - How to think about the thalamus 1:06:45 - Causality in complex systems 1:11:09 - Clinical applications 1:15:02 - Ascending arousal system and neuromodulators 1:27:48 - Implications for AI 1:33:40 - Career serendipity 1:35:12 - Advice

Dec 2, 20211h 43m

BI 120 James Fitzgerald, Andrew Saxe, Weinan Sun: Optimizing Memories

Support the show to get full episodes, full archive, and join the Discord community. James, Andrew, and Weinan discuss their recent theory about how the brain might use complementary learning systems to optimize our memories. The idea is that our hippocampus creates our episodic memories for individual events, full of particular details. Through a complementary process, the brain then slowly consolidates those memories within our neocortex through mechanisms like hippocampal replay. The new idea in their work suggests a way for the consolidated cortical memory to become optimized for generalization, something humans are known to be capable of but which deep learning networks have yet to achieve. We discuss what their theory predicts about how the "correct" consolidation process depends on how much noise and variability there is in the learning environment, how their model handles this, and how it relates to our brains and behavior. James' Janelia page. Weinan's Janelia page. Andrew's website. Twitter: Andrew: @SaxeLab; Weinan: @sunw37. Paper we discuss: Organizing memories for generalization in complementary learning systems. Andrew's previous episode: BI 052 Andrew Saxe: Deep Learning Theory. 0:00 - Intro 3:57 - Guest Intros 15:04 - Organizing memories for generalization 26:48 - Teacher, student, and notebook models 30:51 - Shallow linear networks 33:17 - How to optimize generalization 47:05 - Replay as a generalization regulator 54:57 - Whole greater than sum of its parts 1:05:37 - Unpredictability 1:10:41 - Heuristics 1:13:52 - Theoretical neuroscience for AI 1:29:42 - Current personal thinking

Nov 21, 20211h 40m

BI 119 Henry Yin: The Crisis in Neuroscience

Support the show to get full episodes, full archive, and join the Discord community. Henry and I discuss why he thinks neuroscience is in a crisis (in the Thomas Kuhn sense of scientific paradigms, crises, and revolutions). Henry thinks our current concept of the brain as an input-output device, with cognition in the middle, is mistaken. He points to the failure of neuroscience to successfully explain behavior despite decades of research. Instead, Henry proposes the brain is one big hierarchical set of control loops, trying to control their output with respect to internally generated reference signals. He was inspired by control theory, but points out that most control theory for biology is flawed by not recognizing that the reference signals are internally generated. Instead, most control theory approaches, and neuroscience research in general, assume the reference signals are what gets externally supplied... by the experimenter. Yin lab at Duke.Twitter: @HenryYin19.Related papersThe Crisis in Neuroscience.Restoring Purpose in Behavior.Achieving natural behavior in a robot using neurally inspired hierarchical perceptual control. 0:00 - Intro 5:40 - Kuhnian crises 9:32 - Control theory and cybernetics 17:23 - How much of brain is control system? 20:33 - Higher order control representation 23:18 - Prediction and control theory 27:36 - The way forward 31:52 - Compatibility with mental representation 38:29 - Teleology 45:53 - The right number of subjects 51:30 - Continuous measurement 57:06 - Artificial intelligence and control theory

Nov 11, 20211h 6m

BI 118 Johannes Jäger: Beyond Networks

Support the show to get full episodes, full archive, and join the Discord community. Johannes (Yogi) is a freelance philosopher, researcher & educator. We discuss many of the topics in his online course, Beyond Networks: The Evolution of Living Systems. The course is focused on the role of agency in evolution, but it covers a vast range of topics: process vs. substance metaphysics, causality, mechanistic dynamic explanation, teleology, the important role of development in mediating genotypes, phenotypes, and evolution, what makes biological organisms unique, the history of evolutionary theory, scientific perspectivism, and a view toward the necessity of including agency in evolutionary theory. I highly recommend taking his course. We also discuss the role of agency in artificial intelligence, how neuroscience and evolutionary theory are undergoing parallel re-evaluations, and Yogi answers a guest question from Kevin Mitchell. Yogi's website and blog: Untethered in the Platonic Realm. Twitter: @yoginho. His YouTube course: Beyond Networks: The Evolution of Living Systems. Kevin Mitchell's previous episode: BI 111 Kevin Mitchell and Erik Hoel: Agency, Emergence, Consciousness. 0:00 - Intro 4:10 - Yogi's background 11:00 - Beyond Networks - limits of dynamical systems models 16:53 - Kevin Mitchell question 20:12 - Process metaphysics 26:13 - Agency in evolution 40:37 - Agent-environment interaction, open-endedness 45:30 - AI and agency 55:40 - Life and intelligence 59:08 - Deep learning and neuroscience 1:03:21 - Mental autonomy 1:06:10 - William Wimsatt's biopsychological thicket 1:11:23 - Limitations of mechanistic dynamic explanation 1:18:53 - Synthesis versus multi-perspectivism 1:30:31 - Specialization versus generalization

Nov 1, 20211h 36m

BI 117 Anil Seth: Being You

Support the show to get full episodes, full archive, and join the Discord community. Anil and I discuss a range of topics from his book, BEING YOU: A New Science of Consciousness. Anil lays out his framework for explaining consciousness, which is embedded in what he calls the "real problem" of consciousness. You know the "hard problem", which is David Chalmers' term for our enduring difficulty in explaining why we have subjective awareness at all, instead of being unfeeling, unexperiencing, machine-like organisms. Anil's "real problem" aims to explain, predict, and control the phenomenal properties of consciousness, and his hope is that, by doing so, the hard problem of consciousness will dissolve much like the mystery of explaining life dissolved with lots of good science. Anil's account of perceptual consciousness, like seeing red, is that it's rooted in predicting our incoming sensory data. His account of our sense of self is that it's rooted in predicting our bodily states in order to control them. We talk about that and a lot of other topics from the book, like consciousness as "controlled hallucinations", free will, psychedelics, complexity and emergence, and the relation between life, intelligence, and consciousness. Plus, Anil answers a handful of questions from Megan Peters and Steve Fleming, both previous Brain Inspired guests. Anil's website. Twitter: @anilkseth. Anil's book: BEING YOU: A New Science of Consciousness. Megan's previous episode: BI 073 Megan Peters: Consciousness and Metacognition. Steve's previous episodes: BI 099 Hakwan Lau and Steve Fleming: Neuro-AI Consciousness; BI 107 Steve Fleming: Know Thyself. 0:00 - Intro 6:32 - Megan Peters Q: Communicating Consciousness 15:58 - Human vs. animal consciousness 19:12 - BEING YOU: A New Science of Consciousness 20:55 - Megan Peters Q: Will the hard problem go away? 30:55 - Steve Fleming Q: Contents of consciousness 41:01 - Megan Peters Q: Phenomenal character vs. content 43:46 - Megan Peters Q: Lempels of complexity 52:00 - Complex systems and emergence 55:53 - Psychedelics 1:06:04 - Free will 1:19:10 - Consciousness vs. life vs. intelligence

Oct 19, 20211h 32m

BI 116 Michael W. Cole: Empirical Neural Networks

Support the show to get full episodes, full archive, and join the Discord community. Mike and I discuss his modeling approach to study cognition. Many people I have on the podcast use deep neural networks to study brains, where the idea is to train or optimize the model to perform a task, then compare the model properties with brain properties. Mike's approach is different in at least two ways. One, he builds the architecture of his models using connectivity data from fMRI recordings. Two, he doesn't train his models; instead, he uses functional connectivity data from the fMRI recordings to assign weights between nodes of the network (in deep learning, the weights are learned through lots of training). Mike calls his networks empirically-estimated neural networks (ENNs), and/or network coding models. We walk through his approach, what we can learn from models like ENNs, discuss some of his earlier work on cognitive control and our ability to flexibly adapt to new task rules through instruction, and he fields questions from Kanaka Rajan, Kendrick Kay, and Patryk Laurent. The Cole Neurocognition lab.Twitter: @TheColeLab.Related papersDiscovering the Computational Relevance of Brain Network Organization.Constructing neural network models from brain data reveals representational transformation underlying adaptive behavior.Kendrick Kay's previous episode: BI 026 Kendrick Kay: A Model By Any Other Name.Kanaka Rajan's previous episode: BI 054 Kanaka Rajan: How Do We Switch Behaviors? 0:00 - Intro 4:58 - Cognitive control 7:44 - Rapid Instructed Task Learning and Flexible Hub Theory 15:53 - Patryk Laurent question: free will 26:21 - Kendrick Kay question: fMRI limitations 31:55 - Empirically-estimated neural networks (ENNs) 40:51 - ENNs vs. deep learning 45:30 - Clinical relevance of ENNs 47:32 - Kanaka Rajan question: a proposed collaboration 56:38 - Advantage of modeling multiple regions 1:05:30 - How ENNs work 1:12:48 - How ENNs might benefit artificial intelligence 1:19:04 - The need for causality 1:24:38 - Importance of luck and serendipity

Oct 12, 20211h 31m

BI 115 Steve Grossberg: Conscious Mind, Resonant Brain

Support the show to get full episodes, full archive, and join the Discord community. Steve and I discuss his book Conscious Mind, Resonant Brain: How Each Brain Makes a Mind. The book is a huge collection of his models and their predictions and explanations for a wide array of cognitive brain functions. Many of the models spring from his Adaptive Resonance Theory (ART) framework, which explains how networks of neurons deal with changing environments while maintaining self-organization and retaining learned knowledge. ART led Steve to the hypothesis that all conscious states are resonant states, which we discuss. There are also guest questions from György Buzsáki, Jay McClelland, and John Krakauer. Steve's BU website. Conscious Mind, Resonant Brain: How Each Brain Makes a Mind. Previous Brain Inspired episode: BI 082 Steve Grossberg: Adaptive Resonance Theory. 0:00 - Intro 2:38 - Conscious Mind, Resonant Brain 11:49 - Theoretical method 15:54 - ART, learning, and consciousness 22:58 - Conscious vs. unconscious resonance 26:56 - György Buzsáki question 30:04 - Remaining mysteries in visual system 35:16 - John Krakauer question 39:12 - Jay McClelland question 51:34 - Any missing principles to explain human cognition? 1:00:16 - Importance of an early good career start 1:06:50 - Has modeling training caught up to experiment training? 1:17:12 - Universal development code

Oct 2, 20211h 23m

BI 114 Mark Sprevak and Mazviita Chirimuuta: Computation and the Mind

Support the show to get full episodes, full archive, and join the Discord community. Mark and Mazviita discuss the philosophy and science of mind, and how to think about computations with respect to understanding minds. Current approaches to explaining brain function are dominated by computational models and the computer metaphor for brain and mind. But there are alternative ways to think about the relation between computations and brain function, which we explore in the discussion. We also talk about the role of philosophy broadly and with respect to mind sciences, pluralism and perspectival approaches to truth and understanding, the prospects and desirability of naturalizing representations (accounting for how brain representations relate to the natural world), and much more. Mark's website. Mazviita's University of Edinburgh page. Twitter (Mark): @msprevak. Mazviita's previous Brain Inspired episode: BI 072 Mazviita Chirimuuta: Understanding, Prediction, and Reality. The related book we discuss: The Routledge Handbook of the Computational Mind (2018), Mark Sprevak and Matteo Colombo (Eds.). 0:00 - Intro 5:26 - Philosophy contributing to mind science 15:45 - Trend toward hyperspecialization 21:38 - Practice-focused philosophy of science 30:42 - Computationalism 33:05 - Philosophy of mind: identity theory, functionalism 38:18 - Computations as descriptions 41:27 - Pluralism and perspectivalism 54:18 - How much of brain function is computation? 1:02:11 - AI as computationalism 1:13:28 - Naturalizing representations 1:30:08 - Are you doing it right?

Sep 22, 20211h 38m

BI 113 David Barack and John Krakauer: Two Views On Cognition

Support the show to get full episodes, full archive, and join the Discord community. David and John discuss some of the concepts from their recent paper Two Views on the Cognitive Brain, in which they argue the recent population-based dynamical systems approach is a promising route to understanding brain activity underpinning higher cognition. We discuss mental representations, the kinds of dynamical objects being used for explanation, and much more, including David's perspectives as a practicing neuroscientist and philosopher. David's webpage.John's Lab.Twitter: David: @DLBarackJohn: @blamlabPaper: Two Views on the Cognitive Brain.John's previous episodes:BI 025 John Krakauer: Understanding CognitionBI 077 David and John Krakauer: Part 1BI 078 David and John Krakauer: Part 2 Timestamps 0:00 - Intro 3:13 - David's philosophy and neuroscience experience 20:01 - Renaissance person 24:36 - John's medical training 31:58 - Two Views on the Cognitive Brain 44:18 - Representation 49:37 - Studying populations of neurons 1:05:17 - What counts as representation 1:18:49 - Does this approach matter for AI?

Sep 12, 20211h 30m

BI ViDA Panel Discussion: Deep RL and Dopamine

Sep 2, 202157 min

BI 112 Ali Mohebi and Ben Engelhard: The Many Faces of Dopamine

Announcement: Ben has started his new lab and is recruiting grad students. Check out his lab here and apply! Engelhard Lab. Ali and Ben discuss the ever-expanding discoveries about the roles dopamine plays in our cognition. Dopamine is known to play a role in learning – dopamine (DA) neurons fire when our reward expectations aren’t met, and that signal helps adjust our expectations. Roughly, DA corresponds to a reward prediction error. The reward prediction error has helped reinforcement learning in AI develop into a raging success, especially with deep reinforcement learning models trained to outperform humans in games like chess and Go. But DA likely contributes a lot more to brain function. We discuss many of those possible roles, how to think about computation with respect to neuromodulators like DA, how different time and spatial scales interact, and more. Dopamine: A Simple AND Complex Story by Daphne Cornelisse. Guests: Ali Mohebi (@mohebial), Ben Engelhard. Timestamps: 0:00 – Intro 5:02 – Virtual Dopamine Conference 9:56 – History of dopamine’s roles 16:47 – Dopamine circuits 21:13 – Multiple roles for dopamine 31:43 – Deep learning panel discussion 50:14 – Computation and neuromodulation

Aug 26, 20211h 13m

BI NMA 06: Advancing Neuro Deep Learning Panel

Aug 19, 20211h 20m

BI NMA 05: NLP and Generative Models Panel

This is the 5th in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school. This is the 2nd of 3 in the deep learning series. In this episode, the panelists discuss their experiences "doing more with fewer parameters": Convnets, RNNs, attention & transformers, generative models (VAEs & GANs). Panelists: Brad Wyble (@bradpwyble), Kyunghyun Cho (@kchonyc), He He (@hhexiy), João Sedoc (@JoaoSedoc). The other panels: First panel, about model fitting, GLMs/machine learning, dimensionality reduction, and deep learning. Second panel, about linear systems, real neurons, and dynamic networks. Third panel, about stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality. Fourth panel, about some basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, & regularization. Sixth panel, about advanced topics in deep learning: unsupervised & self-supervised learning, reinforcement learning, continual learning/causality.

Aug 13, 20211h 23m

BI NMA 04: Deep Learning Basics Panel

This is the 4th in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school. This is the first of 3 in the deep learning series. In this episode, the panelists discuss their experiences with some basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, & regularization. Guests: Amita Kapoor, Lyle Ungar (@LyleUngar), Surya Ganguli (@SuryaGanguli). The other panels: First panel, about model fitting, GLMs/machine learning, dimensionality reduction, and deep learning. Second panel, about linear systems, real neurons, and dynamic networks. Third panel, about stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality. Fifth panel, about "doing more with fewer parameters": Convnets, RNNs, attention & transformers, generative models (VAEs & GANs). Sixth panel, about advanced topics in deep learning: unsupervised & self-supervised learning, reinforcement learning, continual learning/causality.

Aug 6, 202159 min

BI 111 Kevin Mitchell and Erik Hoel: Agency, Emergence, Consciousness

Erik, Kevin, and I discuss... well a lot of things. Erik's recent novel The Revelations is a story about a group of neuroscientists trying to develop a good theory of consciousness (with a murder mystery plot). Kevin's book Innate - How the Wiring of Our Brains Shapes Who We Are describes the messy process of getting from DNA, traversing epigenetics and development, to our personalities. We talk about both books, then dive deeper into topics like whether brains evolved for moving our bodies vs. consciousness, how information theory is lending insights to emergent phenomena, and the role of agency with respect to what counts as intelligence. Kevin's website. Erik's website. Twitter: @WiringtheBrain (Kevin); @erikphoel (Erik). Books: INNATE – How the Wiring of Our Brains Shapes Who We Are. The Revelations. Papers (Erik): Falsification and consciousness. The emergence of informative higher scales in complex networks. Emergence as the conversion of information: A unifying theory. Timestamps 0:00 - Intro 3:28 - The Revelations - Erik's novel 15:15 - Innate - Kevin's book 22:56 - Cycle of progress 29:05 - Brains for movement or consciousness? 46:46 - Freud's influence 59:18 - Theories of consciousness 1:02:02 - Meaning and emergence 1:05:50 - Reduction in neuroscience 1:23:03 - Micro and macro - emergence 1:29:35 - Agency and intelligence

Jul 28, 20211h 38m

BI NMA 03: Stochastic Processes Panel

Panelists: Yael Niv (@yael_niv); Konrad Kording (previous BI episodes: BI 027 Ioana Marinescu & Konrad Kording: Causality in Quasi-Experiments, and BI 014 Konrad Kording: Regulators, Mount Up!); Sam Gershman (previous BI episodes: BI 095 Chris Summerfield and Sam Gershman: Neuro for AI?, and BI 028 Sam Gershman: Free Energy Principle & Human Machines); Tim Behrens (previous BI episodes: BI 035 Tim Behrens: Abstracting & Generalizing Knowledge, & Human Replay, and BI 024 Tim Behrens: Cognitive Maps). This is the third in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school. In this episode, the panelists discuss their experiences with stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality. The other panels: First panel, about model fitting, GLMs/machine learning, dimensionality reduction, and deep learning. Second panel, about linear systems, real neurons, and dynamic networks. Fourth panel, about basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, & regularization. Fifth panel, about "doing more with fewer parameters": Convnets, RNNs, attention & transformers, generative models (VAEs & GANs). Sixth panel, about advanced topics in deep learning: unsupervised & self-supervised learning, reinforcement learning, continual learning/causality.

Jul 22, 20211h 0m

BI NMA 02: Dynamical Systems Panel

Panelists include Adrienne Fairhall and Kanaka Rajan (Kanaka's previous episode: BI 054 Kanaka Rajan: How Do We Switch Behaviors?). This is the second in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school. In this episode, the panelists discuss their experiences with linear systems, real neurons, and dynamic networks. Other panels: First panel, about model fitting, GLMs/machine learning, dimensionality reduction, and deep learning. Third panel, about stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality. Fourth panel, about basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, & regularization. Fifth panel, about "doing more with fewer parameters": Convnets, RNNs, attention & transformers, generative models (VAEs & GANs). Sixth panel, about advanced topics in deep learning: unsupervised & self-supervised learning, reinforcement learning, continual learning/causality.

Jul 15, 20211h 15m

BI NMA 01: Machine Learning Panel

Panelists: Athena Akrami (@AthenaAkrami), Demba Ba, Gunnar Blohm (@GunnarBlohm), Kunlin Wei. This is the first in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school. In this episode, the panelists discuss their experiences with model fitting, GLMs/machine learning, dimensionality reduction, and deep learning. Other panels: Second panel, about linear systems, real neurons, and dynamic networks. Third panel, about stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality. Fourth panel, about basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, & regularization. Fifth panel, about "doing more with fewer parameters": Convnets, RNNs, attention & transformers, generative models (VAEs & GANs). Sixth panel, about advanced topics in deep learning: unsupervised & self-supervised learning, reinforcement learning, continual learning/causality.

Jul 12, 20211h 27m

BI 110 Catherine Stinson and Jessica Thompson: Neuro-AI Explanation

Catherine, Jess, and I use some of the ideas from their recent papers to discuss how different types of explanations in neuroscience and AI could be unified into explanations of intelligence, natural or artificial. Catherine has written about how models are related to the target system they are built to explain. She suggests both the model and the target system should be considered as instantiations of a specific kind of phenomenon, and explanation is a product of relating the model and the target system to that specific aspect they both share. Jess has suggested we shift our focus of explanation from objects - like a brain area or a deep learning model - to the shared class of phenomenon performed by those objects. Doing so may help bridge the gap between the different forms of explanation currently used in neuroscience and AI. We also discuss Henk de Regt's conception of scientific understanding and its relation to explanation (they're different!), and plenty more. Catherine's website. Jessica's blog. Twitter: Jess: @tsonj. Related papers: From Implausible Artificial Neurons to Idealized Cognitive Models: Rebooting Philosophy of Artificial Intelligence (Catherine); Forms of explanation and understanding for neuroscience and artificial intelligence (Jess). Jess is a postdoc in Chris Summerfield's lab, and Chris and Sam Gershman were on a recent episode. Understanding Scientific Understanding by Henk de Regt. Timestamps: 0:00 - Intro 11:11 - Background and approaches 27:00 - Understanding distinct from explanation 36:00 - Explanations as programs (early explanation) 40:42 - Explaining classes of phenomena 52:05 - Constitutive (neuro) vs. etiological (AI) explanations 1:04:04 - Do nonphysical objects count for explanation? 1:10:51 - Advice for early philosopher/scientists

Jul 6, 20211h 25m

BI 109 Mark Bickhard: Interactivism

Mark and I discuss a wide range of topics surrounding his Interactivism framework for explaining cognition. Interactivism stems from Mark's account of representations and how what we represent in our minds is related to the external world - a challenge that has plagued the mind-body problem since the beginning. Basically, representations are anticipated interactions with the world, which can be true (if enacting one helps an organism maintain its thermodynamic relation with the world) or false (if it doesn't). And representations are functional, in that they function to maintain the organism's far-from-equilibrium thermodynamics for self-maintenance. Over the years, Mark has filled out Interactivism, starting with a process metaphysics foundation and building from there to account for representations, how our brains might implement representations, and why AI is hindered by our modern "encoding" version of representation. We also compare interactivism to other similar frameworks, like enactivism, predictive processing, and the free energy principle. For related discussions on the foundations of (and issues with) representations, check out episode 60 with Michael Rescorla, episode 61 with Jörn Diedrichsen and Niko Kriegeskorte, and especially episode 79 with Romain Brette. Mark's website. Related papers: Interactivism: A manifesto. Plenty of other papers available via his website. Also mentioned: The First Half Second: The Microgenesis and Temporal Dynamics of Unconscious and Conscious Visual Processes (2006), Haluk Ögmen and Bruno G. Breitmeyer. Maiken Nedergaard's work on sleep. Timestamps 0:00 - Intro 5:06 - Previous and upcoming book 9:17 - Origins of Mark's thinking 14:31 - Process vs. substance metaphysics 27:10 - Kinds of emergence 32:16 - Normative emergence to normative function and representation 36:33 - Representation in Interactivism 46:07 - Situation knowledge 54:02 - Interactivism vs. Enactivism 1:09:37 - Interactivism vs Predictive/Bayesian brain 1:17:39 - Interactivism vs. Free energy principle 1:21:56 - Microgenesis 1:33:11 - Implications for neuroscience 1:38:18 - Learning as variation and selection 1:45:07 - Implications for AI 1:55:06 - Everything is a clock 1:58:14 - Is Mark a philosopher?

Jun 26, 20212h 3m

BI 108 Grace Lindsay: Models of the Mind

Grace's website. Twitter: @neurograce. Models of the Mind: How Physics, Engineering and Mathematics Have Shaped Our Understanding of the Brain. We talked about Grace's work using convolutional neural networks to study vision and attention way back on episode 11. Grace and I discuss her new book Models of the Mind, about the blossoming and conceptual foundations of the computational approach to study minds and brains. Each chapter of the book focuses on one major topic and provides historical context, the major concepts that connect models to brain functions, and the current landscape of related research endeavors. We cover a handful of those during the episode, including the birth of AI, the difference between math in physics and in neuroscience, determining the neural code and how Shannon information theory plays a role, whether it's possible to guess a brain function based on what we know about some brain structure, and "grand unified theories" of the brain. We also digress and explore topics beyond the book. Timestamps 0:00 - Intro 4:19 - Cognition beyond vision 12:38 - Models of the Mind - book overview 14:00 - The good and bad of using math 21:33 - I quiz Grace on her own book 25:03 - Birth of AI and computational approach 38:00 - Rediscovering old math for new neuroscience 41:00 - Topology as good math to know now 45:29 - Physics vs. neuroscience math 49:32 - Neural code and information theory 55:03 - Rate code vs. timing code 59:18 - Graph theory - can you deduce function from structure? 1:06:56 - Multiple realizability 1:13:01 - Grand Unified theories of the brain

Jun 16, 20211h 26m

BI 107 Steve Fleming: Know Thyself

Steve and I discuss many topics from his new book Know Thyself: The Science of Self-Awareness. The book covers the full range of what we know about metacognition and self-awareness, including how brains might underlie metacognitive behavior, computational models to explain mechanisms of metacognition, how and why self-awareness evolved, which animals beyond humans harbor metacognition and how to test it, its role and potential origins in theory of mind and social interaction, how our metacognitive skills develop over our lifetimes, what our metacognitive skills tell us about our other psychological traits, and so on. We also discuss what it might look like when we are able to build metacognitive AI, and whether that's even a good idea. Steve's lab: The MetaLab.Twitter: @smfleming.Steve and Hakwan Lau on episode 99 about consciousness. Papers:Metacognitive training: Domain-General Enhancements of Metacognitive Ability Through Adaptive TrainingThe book:Know Thyself: The Science of Self-Awareness. Timestamps 0:00 - Intro 3:25 - Steve's Career 10:43 - Sub-personal vs. personal metacognition 17:55 - Meditation and metacognition 20:51 - Replay tools for mind-wandering 30:56 - Evolutionary cultural origins of self-awareness 45:02 - Animal metacognition 54:25 - Aging and self-awareness 58:32 - Is more always better? 1:00:41 - Political dogmatism and overconfidence 1:08:56 - Reliance on AI 1:15:15 - Building self-aware AI 1:23:20 - Future evolution of metacognition

Jun 6, 20211h 29m

BI 106 Jacqueline Gottlieb and Robert Wilson: Deep Curiosity

Jackie and Bob discuss their research and thinking about curiosity. Jackie's background is studying decision making and attention, recording neurons in nonhuman primates during eye movement tasks, and she's broadly interested in how we adapt our ongoing behavior. Curiosity is crucial for this, so she recently has focused on behavioral strategies to exercise curiosity, developing tasks that test exploration, information sampling, uncertainty reduction, and intrinsic motivation. Bob's background is developing computational models of reinforcement learning (including the exploration-exploitation tradeoff) and decision making, and he uses behavioral and neuroimaging data in humans to test the models. He's broadly interested in how and whether we can understand brains and cognition using mathematical models. Recently he's been working on a model for curiosity known as deep exploration, which suggests we make decisions by deeply simulating a handful of scenarios and choosing based on the simulation outcomes. We also discuss how one should go about their career (qua curiosity), how eye movements compare with other windows into cognition, and whether we can and should create curious AI agents (Bob is an emphatic yes; Jackie suspects that once we do, it will be time to start worrying about AI). Jackie's lab: Jacqueline Gottlieb Laboratory at Columbia University.Bob's lab: Neuroscience of Reinforcement Learning and Decision Making.Twitter: Bob: @NRDLab (Jackie's not on twitter).Related papersCuriosity, information demand and attentional priority.Balancing exploration and exploitation with information and randomization.Deep exploration as a unifying account of explore-exploit behavior.Bob mentions an influential talk by Benjamin Van Roy:Generalization and Exploration via Value Function Randomization.Bob mentions his paper with Anne Collins:Ten simple rules for the computational modeling of behavioral data. Timestamps: 0:00 - Intro 4:15 - Central scientific interests 8:32 - Advent of mathematical models 12:15 - Career exploration vs. exploitation 28:03 - Eye movements and active sensing 35:53 - Status of eye movements in neuroscience 44:16 - Why are we curious? 50:26 - Curiosity vs. Exploration vs. Intrinsic motivation 1:02:35 - Directed vs. random exploration 1:06:16 - Deep exploration 1:12:52 - How to know what to pay attention to 1:19:49 - Does AI need curiosity? 1:26:29 - What trait do you wish you had more of?
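(For readers curious what "deep exploration" can look like in code, here is a minimal, purely illustrative sketch, not Gottlieb and Wilson's actual model, in the spirit of posterior (Thompson-style) sampling: the agent samples one imagined world per decision from its current beliefs, plays that scenario out, and picks the option that looks best in it. All names and numbers below are hypothetical.)

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_then_choose(true_means, n_trials=500):
    """Toy exploration-by-simulation bandit: on each trial, sample one
    imagined payoff per option from the current Gaussian beliefs, act
    greedily in that imagined world, then update beliefs with the
    observed (noisy, unit-variance) reward."""
    n_options = len(true_means)
    mean = np.zeros(n_options)     # posterior mean value of each option
    prec = np.ones(n_options)      # posterior precision (1 / variance)
    total = 0.0
    for _ in range(n_trials):
        imagined = rng.normal(mean, 1.0 / np.sqrt(prec))  # simulate one scenario
        choice = int(np.argmax(imagined))                 # best option in that scenario
        reward = rng.normal(true_means[choice], 1.0)      # noisy real outcome
        # Conjugate Gaussian update with known unit observation noise
        mean[choice] = (prec[choice] * mean[choice] + reward) / (prec[choice] + 1.0)
        prec[choice] += 1.0
        total += reward
    return mean, total

beliefs, earned = simulate_then_choose([0.2, 0.5, 0.8])
print("learned option values:", np.round(beliefs, 2), "| total reward:", round(earned, 1))
```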

May 27, 20211h 31m

BI 105 Sanjeev Arora: Off the Convex Path

Sanjeev and I discuss some of the progress toward understanding how deep learning works, especially given earlier assumptions that it wouldn't or shouldn't work as well as it does. Deep learning theory poses a challenge for mathematics, because its methods aren't rooted in mathematical theory and are therefore a "black box" for math to open. We discuss how Sanjeev thinks optimization, the common framework for thinking about how deep nets learn, is the wrong approach. Instead, a promising alternative focuses on the learning trajectories that occur as a result of different learning algorithms. We discuss two examples of his research to illustrate this: creating deep nets with infinitely wide layers (and the networks still find good solutions among the infinitely many possible ones!), and exponentially increasing the learning rate during training (the opposite of accepted wisdom, and yet, again, the network finds solutions!). We also discuss his past focus on computational complexity and how he doesn't share the current neuroscience optimism comparing brains to deep nets. Sanjeev's website.His Research group website.His blog: Off The Convex Path.Papers we discussOn Exact Computation with an Infinitely Wide Neural Net.An Exponential Learning Rate Schedule for Deep LearningRelatedAndrew Saxe covers related deep learning theory in episode 52.Omri Barak discusses the importance of learning trajectories to understand RNNs in episode 97.Sanjeev mentions Christos Papadimitriou. Timestamps 0:00 - Intro 7:32 - Computational complexity 12:25 - Algorithms 13:45 - Deep learning vs. traditional optimization 17:01 - Evolving view of deep learning 18:33 - Reproducibility crisis in AI? 21:12 - Surprising effectiveness of deep learning 27:50 - "Optimization" isn't the right framework 30:08 - Infinitely wide nets 35:41 - Exponential learning rates 42:39 - Data as the next frontier 44:12 - Neuroscience and AI differences 47:13 - Focus on algorithms, architecture, and objective functions 55:50 - Advice for deep learning theorists 58:05 - Decoding minds
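(For readers wondering what an "exponential learning rate schedule" means concretely, here is a minimal sketch with hypothetical numbers, not the configuration from the paper: the learning rate is multiplied by a fixed factor greater than one after every step, so it grows geometrically rather than decaying; the paper's surprise is that, paired with normalization and weight decay, training can still succeed under such growth.)

```python
def exponential_lr_schedule(base_lr=0.1, growth=1.05, num_steps=10):
    """Per-step learning rates that grow by a fixed factor each step,
    i.e. lr_t = base_lr * growth ** t. Numbers here are hypothetical;
    the paper pairs such growth with batch norm and weight decay,
    which this sketch does not model."""
    lrs, lr = [], base_lr
    for _ in range(num_steps):
        lrs.append(lr)
        lr *= growth
    return lrs

print([round(lr, 4) for lr in exponential_lr_schedule()])
```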

May 17, 20211h 1m

BI 104 John Kounios and David Rosen: Creativity, Expertise, Insight

What is creativity? How do we measure it? How do our brains implement it, and how might AI? Those are some of the questions John, David, and I discuss. The neuroscience of creativity is young, still in its "wild west" days. We talk about a few creativity studies they've performed that distinguish different creative processes with respect to different levels of expertise (in this case, in jazz improvisation), and the underlying brain circuits and activity, including using transcranial direct current stimulation to alter the creative process. Related to creativity, we also discuss the phenomenon and neuroscience of insight (the topic of John's book, The Eureka Factor), unconscious automatic type 1 processes versus conscious deliberate type 2 processes, states of flow, creative process versus creative products, and a lot more. John Kounios.Secret Chord Laboratories (David's company).Twitter: @JohnKounios; @NeuroBassDave.John's book (with Mark Beeman) on insight and creativity.The Eureka Factor: Aha Moments, Creative Insight, and the Brain.The papers we discuss or mention:All You Need to Do Is Ask? The Exhortation to Be Creative Improves Creative Performance More for Nonexpert Than Expert Jazz MusiciansAnodal tDCS to Right Dorsolateral Prefrontal Cortex Facilitates Performance for Novice Jazz Improvisers but Hinders ExpertsDual-process contributions to creativity in jazz improvisations: An SPM-EEG study. Timestamps 0:00 - Intro 16:20 - Where are we broadly in science of creativity? 18:23 - Origins of creativity research 22:14 - Divergent and convergent thought 26:31 - Secret Chord Labs 32:40 - Familiar surprise 38:55 - The Eureka Factor 42:27 - Dual process model 52:54 - Creativity and jazz expertise 55:53 - "Be creative" behavioral study 59:17 - Stimulating the creative brain 1:02:04 - Brain circuits underlying creativity 1:14:36 - What does this tell us about creativity? 1:16:48 - Intelligence vs. creativity 1:18:25 - Switching between creative modes 1:25:57 - Flow states and insight 1:34:29 - Creativity and insight in AI 1:43:26 - Creative products vs. process

May 7, 20211h 50m

BI 103 Randal Koene and Ken Hayworth: The Road to Mind Uploading

Randal, Ken, and I discuss a host of topics around the future goal of uploading our minds into non-brain systems, to continue our mental lives and expand our range of experiences. The basic requirement for such a substrate-independent mind is to implement whole brain emulation. We discuss two basic approaches to whole brain emulation. The "scan and copy" approach proposes we somehow scan the entire structure of our brains (at whatever scale is necessary) and store that scan until some future date when we have figured out how to use that information to build a substrate that can house a mind. The "gradual replacement" approach proposes we slowly replace parts of the brain with functioning alternative machines, eventually replacing the entire brain with non-biological material and yet retaining a functioning mind. Randal and Ken are neuroscientists who understand the magnitude and challenges of a massive project like mind uploading, who also understand what we can do right now, with current technology, to advance toward that lofty goal, and who are thoughtful about what steps we need to take to enable further advancements. Randal A KoeneTwitter: @randalkoeneCarboncopies Foundation.Randal's website.Ken HayworthTwitter: @KennethHayworthBrain Preservation Foundation.Youtube videos. Timestamps 0:00 - Intro 6:14 - What Ken wants 11:22 - What Randal wants 22:29 - Brain preservation 27:18 - Aldehyde stabilized cryopreservation 31:51 - Scan and copy vs. gradual replacement 38:25 - Building a roadmap 49:45 - Limits of current experimental paradigms 53:51 - Our evolved brains 1:06:58 - Counterarguments 1:10:31 - Animal models for whole brain emulation 1:15:01 - Understanding vs. emulating brains 1:22:37 - Current challenges

Apr 26, 20211h 27m

BI 102 Mark Humphries: What Is It Like To Be A Spike?

Mark and I discuss his book, The Spike: An Epic Journey Through the Brain in 2.1 Seconds. It chronicles how a series of action potentials fire through the brain in a couple of seconds of someone's life. Starting with light hitting the retina as a person looks at a cookie, Mark describes how that light gets translated into spikes, how those spikes get processed in our visual system and eventually transform into motor commands to grab that cookie. Along the way, he describes some of the big ideas throughout the history of studying brains (like the mechanisms to explain how neurons seem to fire so randomly), the big mysteries we currently face (like why do so many neurons do so little?), and some of the main theories to explain those mysteries (we're prediction machines!). A fun read and discussion. This is Mark's second time on the podcast - he was on episode 4 in the early days, talking in more depth about some of the work we discuss in this episode! The Humphries Lab.Twitter: @markdhumphriesBook: The Spike: An Epic Journey Through the Brain in 2.1 Seconds.Related papersA spiral attractor network drives rhythmic locomotion. Timestamps: 0:00 - Intro 3:25 - Writing a book 15:37 - Mark's main interest 19:41 - Future explanation of brain/mind 27:00 - Stochasticity and excitation/inhibition balance 36:56 - Dendritic computation for network dynamics 39:10 - Do details matter for AI? 44:06 - Spike failure 51:12 - Dark neurons 1:07:57 - Intrinsic spontaneous activity 1:16:16 - Best scientific moment 1:23:58 - Failure 1:28:45 - Advice

Apr 16, 20211h 32m

BI 101 Steve Potter: Motivating Brains In and Out of Dishes

Steve and I discuss his book, How to Motivate Your Students to Love Learning, which is both a memoir and a guide for teachers and students to optimize the learning experience for intrinsic motivation. Steve taught neuroscience and engineering courses while running his own lab studying the activity of live cultured neural populations (which we discuss at length in his previous episode). He relentlessly tested and tweaked his teaching methods, including constant feedback from the students, to optimize their learning experiences. He settled on real-world, project-based learning approaches, like writing Wikipedia articles and helping groups of students design and carry out their own experiments. We discuss that, plus the science behind learning, principles important for motivating students and maintaining that motivation, and many of the other valuable insights he shares in the book. In the first half of the episode we discuss diverse neuroscience and AI topics, like brain organoids, mind-uploading, synaptic plasticity, and more. Then we discuss many of the stories and lessons from his book, which I recommend for teachers, mentors, and life-long students who want to ensure they're optimizing their own learning. Potter Lab.Twitter: @stevempotter.The Book: How to Motivate Your Students to Love Learning.The glial cell activity movie. 0:00 - Intro 6:38 - Brain organoids 18:48 - Glial cell plasticity 24:50 - Whole brain emulation 35:28 - Industry vs. academia 45:32 - Intro to book: How To Motivate Your Students To Love Learning 48:29 - Steve's childhood influences 57:21 - Developing one's own intrinsic motivation 1:02:30 - Real-world assignments 1:08:00 - Keys to motivation 1:11:50 - Peer pressure 1:21:16 - Autonomy 1:25:38 - Wikipedia real-world assignment 1:33:12 - Relation to running a lab

Apr 6, 20211h 45m

BI 100.6 Special: Do We Have the Right Vocabulary and Concepts?

We made it to the last bit of our 100th episode celebration. These have been super fun for me, and I hope you've enjoyed the collections as well. If you're wondering where the missing 5th part is, I reserved it exclusively for Brain Inspired's magnificent Patreon supporters (thanks guys!!!!). The final question I sent to previous guests: Do we already have the right vocabulary and concepts to explain how brains and minds are related? Why or why not? Timestamps: 0:00 - Intro 5:04 - Andrew Saxe 7:04 - Thomas Naselaris 7:46 - John Krakauer 9:03 - Federico Turkheimer 11:57 - Steve Potter 13:31 - David Krakauer 17:22 - Dean Buonomano 20:28 - Konrad Kording 22:00 - Uri Hasson 23:15 - Rodrigo Quian Quiroga 24:41 - Jim DiCarlo 25:26 - Marcel van Gerven 28:02 - Mazviita Chirimuuta 29:27 - Brad Love 31:23 - Patrick Mayo 32:30 - György Buzsáki 37:07 - Pieter Roelfsema 37:26 - David Poeppel 40:22 - Paul Cisek 44:52 - Talia Konkle 47:03 - Steve Grossberg

Mar 28, 202150 min

BI 100.4 Special: What Ideas Are Holding Us Back?

In the 4th installment of our 100th episode celebration, previous guests responded to the question: What ideas, assumptions, or terms do you think are holding back neuroscience/AI, and why? As usual, the responses are varied and wonderful! Timestamps: 0:00 - Intro 6:41 - Pieter Roelfsema 7:52 - Grace Lindsay 10:23 - Marcel van Gerven 11:38 - Andrew Saxe 14:05 - Jane Wang 16:50 - Thomas Naselaris 18:14 - Steve Potter 19:18 - Kendrick Kay 22:17 - Blake Richards 27:52 - Jay McClelland 30:13 - Jim DiCarlo 31:17 - Talia Konkle 33:27 - Uri Hasson 35:37 - Wolfgang Maass 38:48 - Paul Cisek 40:41 - Patrick Mayo 41:51 - Konrad Kording 43:22 - David Poeppel 44:22 - Brad Love 46:47 - Rodrigo Quian Quiroga 47:36 - Steve Grossberg 48:47 - Mark Humphries 52:35 - John Krakauer 55:13 - György Buzsáki 59:50 - Stefan Leijnen 1:02:18 - Nathaniel Daw

Mar 21, 20211h 4m

BI 100.3 Special: Can We Scale Up to AGI with Current Tech?

Part 3 in our 100th episode celebration. Previous guests answered the question: Given the continued, surprising progress in AI powered by scaling up parameters and using more compute, while using fairly generic architectures (e.g., GPT-3): Do you think the current trend of scaling compute can lead to human-level AGI? If not, what's missing? It likely won't surprise you that the vast majority answer "No." It also likely won't surprise you that opinions differ on what's missing. Timestamps: 0:00 - Intro 3:56 - Wolfgang Maass 5:34 - Paul Humphreys 9:16 - Chris Eliasmith 12:52 - Andrew Saxe 16:25 - Mazviita Chirimuuta 18:11 - Steve Potter 19:21 - Blake Richards 22:33 - Paul Cisek 26:24 - Brad Love 29:12 - Jay McClelland 34:20 - Megan Peters 37:00 - Dean Buonomano 39:48 - Talia Konkle 40:36 - Steve Grossberg 42:40 - Nathaniel Daw 44:02 - Marcel van Gerven 45:28 - Kanaka Rajan 48:25 - John Krakauer 51:05 - Rodrigo Quian Quiroga 53:03 - Grace Lindsay 55:13 - Konrad Kording 57:30 - Jeff Hawkins 1:02:12 - Uri Hasson 1:04:08 - Jess Hamrick 1:06:20 - Thomas Naselaris

Mar 17, 20211h 8m

BI 100.2 Special: What Are the Biggest Challenges and Disagreements?

In this 2nd special 100th episode installment, many previous guests answer the question: What is currently the most important disagreement or challenge in neuroscience and/or AI, and what do you think the right answer or direction is? The variety of answers is itself revealing, and highlights how many interesting problems there are to work on. Timestamps: 0:00 - Intro 7:10 - Rodrigo Quian Quiroga 8:33 - Mazviita Chirimuuta 9:15 - Chris Eliasmith 12:50 - Jim DiCarlo 13:23 - Paul Cisek 16:42 - Nathaniel Daw 17:58 - Jessica Hamrick 19:07 - Russ Poldrack 20:47 - Pieter Roelfsema 22:21 - Konrad Kording 25:16 - Matt Smith 27:55 - Rafal Bogacz 29:17 - John Krakauer 30:47 - Marcel van Gerven 31:49 - György Buzsáki 35:38 - Thomas Naselaris 36:55 - Steve Grossberg 48:32 - David Poeppel 49:24 - Patrick Mayo 50:31 - Stefan Leijnen 54:24 - David Krakauer 58:13 - Wolfgang Maass 59:13 - Uri Hasson 59:50 - Steve Potter 1:01:50 - Talia Konkle 1:04:30 - Matt Botvinick 1:06:36 - Brad Love 1:09:46 - Jon Brennan 1:19:31 - Grace Lindsay 1:22:28 - Andrew Saxe

Mar 12, 20211h 25m

BI 100.1 Special: What Has Improved Your Career or Well-being?

Brain Inspired turns 100 (episodes) today! To celebrate, my Patreon supporters helped me create a list of questions to ask my previous guests, many of whom contributed by answering any or all of the questions. I've collected all their responses into separate little episodes, one for each question. Starting with a light-hearted (but quite valuable) one, this episode has responses to the question, "In the last five years, what new belief, behavior, or habit has most improved your career or well-being?" See below for links to each previous guest. And away we go... Timestamps: 0:00 - Intro 6:13 - David Krakauer 8:50 - David Poeppel 9:32 - Jay McClelland 11:03 - Patrick Mayo 11:45 - Marcel van Gerven 12:11 - Blake Richards 12:25 - John Krakauer 14:22 - Nicole Rust 15:26 - Megan Peters 17:03 - Andrew Saxe 18:11 - Federico Turkheimer 20:03 - Rodrigo Quian Quiroga 22:03 - Thomas Naselaris 23:09 - Steve Potter 24:37 - Brad Love 27:18 - Steve Grossberg 29:04 - Talia Konkle 29:58 - Paul Cisek 32:28 - Kanaka Rajan 34:33 - Grace Lindsay 35:40 - Konrad Kording 36:30 - Mark Humphries

Mar 9, 202142 min