Brain Inspired

152 episodes — Page 2 of 4

BI 188 Jolande Fooken: Coordinating Action and Perception

Support the show to get full episodes, full archive, and join the Discord community. Jolande Fooken is a postdoctoral researcher interested in how we move our eyes and our hands together to accomplish naturalistic tasks. Hand-eye coordination is one of those things that sounds simple, and we do it all the time, making meals for our children day in and day out, and day in and day out. But it seems way less simple as soon as you learn how we make various kinds of eye movements, how we make various kinds of hand movements, and how we use various strategies to do repeated tasks. And like everything in the brain sciences, it's something we don't have a perfect story for yet. So, Jolande and I discuss her work, and thoughts, and ideas around those and related topics. Jolande's website. Twitter: @ookenfooken. Related papers I am a parent. I am a scientist. Eye movement accuracy determines natural interception strategies. Perceptual-cognitive integration for goal-directed action in naturalistic environments. 0:00 - Intro 3:27 - Eye movements 8:53 - Hand-eye coordination 9:30 - Hand-eye coordination and naturalistic tasks 26:45 - Levels of expertise 34:02 - Yarbus and eye movements 42:13 - Varieties of experimental paradigms, varieties of viewing the brain 52:46 - Career vision 1:04:07 - Evolving view about the brain 1:10:49 - Coordination, robots, and AI

May 27, 2024 · 1h 28m

BI 187: COSYNE 2024 Neuro-AI Panel

Support the show to get full episodes, full archive, and join the Discord community. Recently I was invited to moderate a panel at the annual Computational and Systems Neuroscience, or COSYNE, conference. This year was the 20th anniversary of COSYNE, and we were in Lisbon, Portugal. The panel's goal was to discuss the relationship between neuroscience and AI. The panelists were Tony Zador, Alex Pouget, Blaise Aguera y Arcas, Kim Stachenfeld, Jonathan Pillow, and Eva Dyer. And I'll let them introduce themselves soon. Two of the panelists, Tony and Alex, co-founded COSYNE those 20 years ago, and they continue to have different views about the neuro-AI relationship. Tony has been on the podcast before and will return soon, and I'll also have Kim Stachenfeld on in a couple episodes. I think this was a fun discussion, and I hope you enjoy it. There's plenty of back and forth, a wide range of opinions, and some criticism from one of the audience questioners. This is an edited audio version, to remove long dead space and such. There's about 30 minutes of just panel, then the panel starts fielding questions from the audience. COSYNE.

Apr 20, 2024 · 1h 3m

BI 186 Mazviita Chirimuuta: The Brain Abstracted

Support the show to get full episodes, full archive, and join the Discord community. Mazviita Chirimuuta is a philosopher at the University of Edinburgh. Today we discuss topics from her new book, The Brain Abstracted: Simplification in the History and Philosophy of Neuroscience. She largely argues that when we try to understand something complex, like the brain, using models, and math, and analogies, for example - we should keep in mind these are all ways of simplifying and abstracting away details to give us something we actually can understand. And, when we do science, every tool we use and perspective we bring, every way we try to attack a problem - these all both make the science possible and limit the interpretations we can claim from our results. She does all this and more by exploring many topics in neuroscience and philosophy throughout the book, many of which we discuss today. Mazviita's University of Edinburgh page. The Brain Abstracted: Simplification in the History and Philosophy of Neuroscience. Previous Brain Inspired episodes: BI 072 Mazviita Chirimuuta: Understanding, Prediction, and Reality BI 114 Mark Sprevak and Mazviita Chirimuuta: Computation and the Mind 0:00 - Intro 5:28 - Neuroscience to philosophy 13:39 - Big themes of the book 27:44 - Simplifying by mathematics 32:19 - Simplifying by reduction 42:55 - Simplification by analogy 46:33 - Technology precedes science 55:04 - Theory, technology, and understanding 58:04 - Cross-disciplinary progress 58:45 - Complex vs. simple(r) systems 1:08:07 - Is science bound to study stability? 1:13:20 - 4E for philosophy but not neuroscience? 1:28:50 - ANNs as models 1:38:38 - Study of mind

Mar 25, 2024 · 1h 43m

BI 185 Eric Yttri: Orchestrating Behavior

Support the show to get full episodes, full archive, and join the Discord community. As some of you know, I recently got back into the research world, and in particular I work in Eric Yttri's lab at Carnegie Mellon University. Eric's lab studies the relationship between various kinds of behaviors and the neural activity in a few areas known to be involved in enacting and shaping those behaviors, namely the motor cortex and basal ganglia. And to study that, he uses tools like optogenetics, neuronal recordings, and stimulation, while mice perform certain tasks, or, in my case, while they freely behave wandering around an enclosed space. We talk about how Eric got here, how and why the motor cortex and basal ganglia are still mysteries despite lots of theories and experimental work, and Eric's work on trying to solve those mysteries using both trained tasks and more naturalistic behavior. We talk about the valid question, "What is a behavior?", and lots more. Yttri Lab Twitter: @YttriLab Related papers Opponent and bidirectional control of movement velocity in the basal ganglia. B-SOiD, an open-source unsupervised algorithm for identification and fast prediction of behaviors. 0:00 - Intro 2:36 - Eric's background 14:47 - Different animal models 17:59 - ANNs as models for animal brains 24:34 - Main question 25:43 - How circuits produce appropriate behaviors 26:10 - Cerebellum 27:49 - What do motor cortex and basal ganglia do? 49:12 - Neuroethology 1:06:09 - What is a behavior? 1:11:18 - Categorize behavior (B-SOiD) 1:22:01 - Real behavior vs. ANNs 1:33:09 - Best era in neuroscience

Mar 6, 2024 · 1h 44m

BI 184 Peter Stratton: Synthesize Neural Principles

Support the show to get full episodes, full archive, and join the Discord community. Peter Stratton is a research scientist at Queensland University of Technology. I was pointed toward Pete by a Patreon supporter, who sent me a sort of perspective piece Pete wrote that is the main focus of our conversation, although we also talk about some of his work in particular - for example, he works with spiking neural networks, like my last guest, Dan Goodman. What Pete argues for is what he calls a sideways-in approach. So a bottom-up approach is to build things like we find them in the brain, put them together, and voila, we'll get cognition. A top-down approach, the current approach in AI, is to train a system to perform a task, give it some algorithms to run, and fiddle with the architecture and lower level details until you pass your favorite benchmark test. Pete is focused more on the principles of computation brains employ that current AI doesn't. If you're familiar with David Marr, this is akin to his so-called "algorithmic level", but it's between that and the "implementation level", I'd say. Because Pete is focused on the synthesis of different kinds of brain operations - how they intermingle to perform computations and produce emergent properties. So he thinks more like a systems neuroscientist in that respect. Figuring that out is figuring out how to make better AI, Pete says. So we discuss a handful of those principles, all through the lens of how challenging a task it is to synthesize multiple principles into a coherent functioning whole (as opposed to a collection of parts). But, hey, evolution did it, so I'm sure we can, too, right? Peter's website. Related papers Convolutionary, Evolutionary, and Revolutionary: What’s Next for Brains, Bodies, and AI? Making a Spiking Net Work: Robust brain-like unsupervised machine learning. Global segregation of cortical activity and metastable dynamics.
Unlocking neural complexity with a robotic key 0:00 - Intro 3:50 - AI background, neuroscience principles 8:00 - Overall view of modern AI 14:14 - Moravec's paradox and robotics 20:50 - Understanding movement to understand cognition 30:01 - How close are we to understanding brains/minds? 32:17 - Pete's goal 34:43 - Principles from neuroscience to build AI 42:39 - Levels of abstraction and implementation 49:57 - Mental disorders and robustness 55:58 - Function vs. implementation 1:04:04 - Spiking networks 1:07:57 - The roadmap 1:19:10 - AGI 1:23:48 - The terms AGI and AI 1:26:12 - Consciousness

Feb 20, 2024 · 1h 30m

BI 183 Dan Goodman: Neural Reckoning

Support the show to get full episodes, full archive, and join the Discord community. You may know my guest as the co-founder of Neuromatch, the excellent online computational neuroscience academy, or as the creator of the Brian spiking neural network simulator, which is freely available. I know him as a spiking neural network practitioner extraordinaire. Dan Goodman runs the Neural Reckoning Group at Imperial College London, where they use spiking neural networks to figure out how biological and artificial brains reckon, or compute. All of the current AI we use to do all the impressive things we do, essentially all of it, is built on artificial neural networks. Notice the word "neural" there. That word is meant to communicate that these artificial networks do stuff the way our brains do stuff. And indeed, if you take a few steps back, spin around 10 times, take a few shots of whiskey, and squint hard enough, there is a passing resemblance. One thing you'll probably still notice, in your drunken stupor, is that, among the thousand ways ANNs differ from brains, is that they don't use action potentials, or spikes. From the perspective of neuroscience, that can seem mighty curious. Because, for decades now, neuroscience has focused on spikes as the things that make our cognition tick. We count them and compare them in different conditions, and generally put a lot of stock in their usefulness in brains. So what does it mean that modern neural networks disregard spiking altogether? Maybe spiking really isn't important to process and transmit information as well as our brains do. Or maybe spiking is one among many ways for intelligent systems to function well. Dan shares some of what he's learned and how he thinks about spiking and SNNs and a host of other topics. Neural Reckoning Group. Twitter: @neuralreckoning. Related papers Neural heterogeneity promotes robust learning. Dynamics of specialization in neural modules under resource constraints. 
Multimodal units fuse-then-accumulate evidence across channels. Visualizing a joint future of neuroscience and neuromorphic engineering. 0:00 - Intro 3:47 - Why spiking neural networks, and a mathematical background 13:16 - Efficiency 17:36 - Machine learning for neuroscience 19:38 - Why not jump ship from SNNs? 23:35 - Hard and easy tasks 29:20 - How brains and nets learn 32:50 - Exploratory vs. theory-driven science 37:32 - Static vs. dynamic 39:06 - Heterogeneity 46:01 - Unifying principles vs. a hodgepodge 50:37 - Sparsity 58:05 - Specialization and modularity 1:00:51 - Naturalistic experiments 1:03:41 - Projects for SNN research 1:05:09 - The right level of abstraction 1:07:58 - Obstacles to progress 1:12:30 - Levels of explanation 1:14:51 - What has AI taught neuroscience? 1:22:06 - How has neuroscience helped AI?

Feb 6, 2024 · 1h 28m

BI 182: John Krakauer Returns… Again

Support the show to get full episodes, full archive, and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience John Krakauer has been on the podcast multiple times (see links below). Today we discuss some topics framed around what he's been working on and thinking about lately - things like: whether brains actually reorganize after damage; the role of brain plasticity in general; the path toward, and the path not toward, understanding higher cognition; how to fix motor problems after strokes; AGI; functionalism; consciousness; and much more. Relevant links: John's Lab. Twitter: @blamlab Related papers What are we talking about? Clarifying the fuzzy concept of representation in neuroscience and beyond. Against cortical reorganisation. Other episodes with John: BI 025 John Krakauer: Understanding Cognition BI 077 David and John Krakauer: Part 1 BI 078 David and John Krakauer: Part 2 BI 113 David Barack and John Krakauer: Two Views On Cognition Time stamps 0:00 - Intro 2:07 - It's a podcast episode! 6:47 - Stroke and Sherrington neuroscience 19:26 - Thinking vs. moving, representations 34:15 - What's special about humans? 56:35 - Does cortical reorganization happen? 1:14:08 - Current era in neuroscience

Jan 19, 2024 · 1h 25m

BI 181 Max Bennett: A Brief History of Intelligence

Support the show to get full episodes, full archive, and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience By day, Max Bennett is an entrepreneur. He has cofounded and CEO'd multiple AI and technology companies. In countless other hours, he has studied the brain-related sciences. Those long hours of research have paid off in the form of this book, A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains. Three lines of research formed the basis for how Max synthesized knowledge into the ideas in his current book: findings from comparative psychology (comparing brains and minds of different species), evolutionary neuroscience (how brains have evolved), and artificial intelligence, especially the algorithms developed to carry out functions. We go through, I think, all five of the breakthroughs in some capacity. A recurring theme is that each breakthrough may explain multiple new abilities. For example, the evolution of the neocortex may have endowed early mammals with the ability to simulate or imagine what isn't immediately present, and this ability might further explain mammals' capacity to engage in vicarious trial and error (imagining possible actions before trying them out), the capacity to engage in counterfactual learning (what would have happened if things went differently than they did), and the capacity for episodic memory and imagination. The book is filled with unifying accounts like that, and it makes for a great read. Strap in, because Max gives a sort of masterclass about many of the ideas in his book. Twitter: @maxsbennett Book: A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains. 0:00 - Intro 5:26 - Why evolution is important 7:22 - Maclean's triune brain 14:59 - Breakthrough 1: Steering 29:06 - Fish intelligence 40:38 - Breakthrough 3: Mentalizing 52:44 - How could we improve the human brain?
1:00:44 - What is intelligence? 1:13:50 - Breakthrough 5: Speaking

Dec 25, 2023 · 1h 27m

BI 180 Panel Discussion: Long-term Memory Encoding and Connectome Decoding

Support the show to get full episodes, full archive, and join the Discord community. Welcome to another special panel discussion episode. I was recently invited to moderate a discussion amongst six people at the annual Aspirational Neuroscience meetup. Aspirational Neuroscience is a nonprofit community run by Kenneth Hayworth. Ken has been on the podcast before on episode 103. Ken helps me introduce the meetup and panel discussion for a few minutes. The goal in general was to discuss how current and developing neuroscience technologies might be used to decode a nontrivial memory from a static connectome - what the obstacles are, how to surmount those obstacles, and so on. There isn't video of the event, just audio, and because we were all sharing microphones and they were being passed around, you'll hear some microphone-type noise along the way - but I did my best to optimize the audio quality, and I believe it turned out mostly quite listenable. Aspirational Neuroscience Panelists: Anton Arkhipov, Allen Institute for Brain Science. @AntonSArkhipov Konrad Kording, University of Pennsylvania. @KordingLab Tomás Ryan, Trinity College Dublin. @TJRyan_77 Srinivas Turaga, Janelia Research Campus. Dong Song, University of Southern California. @dongsong Zhihao Zheng, Princeton University. @zhihaozheng 0:00 - Intro 1:45 - Ken Hayworth 14:09 - Panel Discussion

Dec 11, 2023 · 1h 29m

BI 179 Laura Gradowski: Include the Fringe with Pluralism

Support the show to get full episodes, full archive, and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience Laura Gradowski is a philosopher of science at the University of Pittsburgh. Pluralism is roughly the idea that there is no unified account of any scientific field, and that we should be tolerant of and welcome a variety of theoretical and conceptual frameworks, and methods, and goals, when doing science. Pluralism is kind of a buzzword right now in my little neuroscience world, but it's an old and well-trodden notion... many philosophers have been calling for pluralism for many years. But how pluralistic should we be in our studies and explanations in science? Laura suggests we should be very, very pluralistic, and to make her case, she cites examples in the history of science of theories and theorists that were once considered "fringe" but went on to become mainstream accepted theoretical frameworks. I thought it would be fun to have her on to share her ideas about fringe theories, mainstream theories, pluralism, etc. We discuss a wide range of topics, but also discuss some specific to the brain and mind sciences. Laura goes through an example of something and someone going from fringe to mainstream - the Garcia effect, named after John Garcia, whose findings went against the grain of behaviorism, the dominant dogma of the day in psychology. But this overturning only happened after Garcia had to endure a long scientific hell of his results being ignored and shunned. So, there are multiple examples like that, and we discuss a handful. This has led Laura to the conclusion that we should accept almost all theoretical frameworks. We discuss her ideas about how to implement this, where to draw the line, and much more. Laura's page at the Center for the Philosophy of Science at the University of Pittsburgh. Facing the Fringe.
Garcia's reflections on his troubles: Tilting at the Paper Mills of Academe 0:00 - Intro 3:57 - What is fringe? 10:14 - What makes a theory fringe? 14:31 - Fringe to mainstream 17:23 - Garcia effect 28:17 - Fringe to mainstream: other examples 32:38 - Fringe and consciousness 33:19 - Words meanings change over time 40:24 - Pseudoscience 43:25 - How fringe becomes mainstream 47:19 - More fringe characteristics 50:06 - Pluralism as a solution 54:02 - Progress 1:01:39 - Encyclopedia of theories 1:09:20 - When to reject a theory 1:20:07 - How fringe becomes fringe 1:22:50 - Marginalization 1:27:53 - Recipe for fringe theorist

Nov 27, 2023 · 1h 39m

BI 178 Eric Shea-Brown: Neural Dynamics and Dimensions

Support the show to get full episodes, full archive, and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience Eric Shea-Brown is a theoretical neuroscientist and principal investigator of the working group on neural dynamics at the University of Washington. In this episode, we talk a lot about dynamics and dimensionality in neural networks... how to think about them, why they matter, and how Eric's perspectives have changed through his career. We discuss a handful of his specific research findings about dynamics and dimensionality, like how dimensionality changes when you're performing a task versus when you're just sort of going about your day, what we can say about dynamics just by looking at different structural connection motifs, how different modes of learning can rely on different dimensionalities, and more. We also talk about how he goes about choosing what to work on and how to work on it. You'll hear in our discussion how much credit Eric gives to those surrounding him and those who came before him - he drops tons of references and names, so get ready if you want to follow up on some of the many lines of research he mentions. Eric's website. Related papers Predictive learning as a network mechanism for extracting low-dimensional latent space representations. A scale-dependent measure of system dimensionality. From lazy to rich to exclusive task representations in neural networks and neural codes. Feedback through graph motifs relates structure and function in complex networks. 0:00 - Intro 4:15 - Reflecting on the rise of dynamical systems in neuroscience 11:15 - DST view on macro scale 15:56 - Intuitions 22:07 - Eric's approach 31:13 - Are brains more or less impressive to you now? 38:45 - Why is dimensionality important? 50:03 - High-D in Low-D 54:14 - Dynamical motifs 1:14:56 - Theory for its own sake 1:18:43 - Rich vs. lazy learning 1:22:58 - Latent variables 1:26:58 - What assumptions give you most pause?

Nov 13, 2023 · 1h 35m

BI 177 Special: Bernstein Workshop Panel

Support the show to get full episodes, full archive, and join the Discord community. I was recently invited to moderate a panel at the annual Bernstein conference - this one was in Berlin, Germany. The panel I moderated was at a satellite workshop at the conference called How can machine learning be used to generate insights and theories in neuroscience? Below are the panelists. I hope you enjoy the discussion! Program: How can machine learning be used to generate insights and theories in neuroscience? Panelists: Katrin Franke Lab website. Twitter: @kfrankelab. Ralf Haefner Haefner lab. Twitter: @haefnerlab. Martin Hebart Hebart Lab. Twitter: @martin_hebart. Johannes Jaeger Yogi's website. Twitter: @yoginho. Fred Wolf Fred's university webpage. Organizers: Alexander Ecker | University of Göttingen, Germany Fabian Sinz | University of Göttingen, Germany Mohammad Bashiri, Pavithra Elumalai, Michaela Vystrcilová | University of Göttingen, Germany

Oct 30, 2023 · 1h 13m

BI 176 David Poeppel Returns

Support the show to get full episodes, full archive, and join the Discord community. David runs his lab at NYU, where they study auditory cognition, speech perception, language, and music. On the heels of the episode with David Glanzman, we discuss the ongoing mystery regarding how memory works, how to study and think about brains and minds, and the reemergence (perhaps) of the language of thought hypothesis. David has been on the podcast a few times... once by himself, and again with Gyorgy Buzsaki. Poeppel lab Twitter: @davidpoeppel. Related papers We don’t know how the brain stores anything, let alone words. Memory in humans and deep language models: Linking hypotheses for model augmentation. The neural ingredients for a language of thought are available. 0:00 - Intro 11:17 - Across levels 14:59 - Nature of memory 24:12 - Using the right tools for the right question 35:46 - LLMs, what they need, how they've shaped David's thoughts 44:55 - Across levels 54:07 - Speed of progress 1:02:21 - Neuroethology and mental illness - patreon 1:24:42 - Language of Thought

Oct 14, 2023 · 1h 23m

BI 175 Kevin Mitchell: Free Agents

Support the show to get full episodes, full archive, and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience Kevin Mitchell is professor of genetics at Trinity College Dublin. He's been on the podcast before, and we talked a little about his previous book, Innate – How the Wiring of Our Brains Shapes Who We Are. He's back today to discuss his new book Free Agents: How Evolution Gave Us Free Will. The book is written very well and guides the reader through a wide range of scientific knowledge and reasoning that undergirds Kevin's main take home: our free will comes from the fact that we are biological organisms, biological organisms have agency, and as that agency evolved to become more complex and layered, so did our ability to exert free will. We touch on a handful of topics in the book, like the idea of agency, how it came about at the origin of life, and how the complexity of kinds of agency, the richness of our agency, evolved as organisms became more complex. We also discuss Kevin's reliance on the indeterminacy of the universe to tell his story, the underlying randomness at fundamental levels of physics. Although indeterminacy isn't necessary for ongoing free will, it is responsible for the capacity for free will to exist in the first place. We discuss the brain's ability to harness its own randomness when needed, creativity, whether and how it's possible to create something new, artificial free will, and lots more. Kevin's website. Twitter: @WiringtheBrain Book: Free Agents: How Evolution Gave Us Free Will 4:27 - From Innate to Free Agents 9:14 - Thinking of the whole organism 15:11 - Who the book is for 19:49 - What bothers Kevin 27:00 - Indeterminacy 30:08 - How it all began 33:08 - How indeterminacy helps 43:58 - Libet's free will experiments 50:36 - Creativity 59:16 - Selves, subjective experience, agency, and free will 1:10:04 - Levels of agency and free will 1:20:38 - How much free will can we have?
1:28:03 - Hierarchy of mind constraints 1:36:39 - Artificial agents and free will 1:42:57 - Next book?

Oct 3, 2023 · 1h 46m

BI 174 Alicia Juarrero: Context Changes Everything

Check out my free video series about what's missing in AI and Neuroscience Support the show to get full episodes, full archive, and join the Discord community. Alicia Juarrero is a philosopher and has been interested in complexity since before it was cool. In this episode, we discuss many of the topics and ideas in her new book, Context Changes Everything: How Constraints Create Coherence, which makes the thorough case that constraints should be given way more attention when trying to understand complex systems like brains and minds - how they're organized, how they operate, how they're formed and maintained, and so on. Modern science, thanks in large part to the success of physics, focuses on a single kind of causation - the kind involved when one billiard ball strikes another billiard ball. But that kind of causation neglects what Alicia argues are the most important features of complex systems: the constraints that shape the dynamics and possibility spaces of systems. Much of Alicia's book describes the wide range of types of constraints we should be paying attention to, and how they interact and mutually influence each other. I highly recommend the book, and you may want to read it before, during, and after our conversation. That's partly because, if you're like me, the concepts she discusses don't yet fit comfortably with the way we're used to thinking about how things interact. Thinking across levels of organization turns out to be hard. You might also want her book handy because, hang on to your hats, we jump around a lot among those concepts. Context Changes Everything comes about 25 years after her previous classic, Dynamics In Action, which we also discuss and which I also recommend if you want more of a primer to her newer more expansive work. Alicia's work touches on all things complex, from self-organizing systems like whirlpools, to ecologies, businesses, societies, and of course minds and brains.
Book: Context Changes Everything: How Constraints Create Coherence 0:00 - Intro 3:37 - 25 years thinking about constraints 8:45 - Dynamics in Action and eliminativism 13:08 - Efficient and other kinds of causation 19:04 - Complexity via context independent and dependent constraints 25:53 - Enabling and limiting constraints 30:55 - Across scales 36:32 - Temporal constraints 42:58 - A constraint cookbook? 52:12 - Constraints in a mechanistic worldview 53:42 - How to explain using constraints 56:22 - Concepts and multiple realizability 59:00 - Kevin Mitchell question 1:08:07 - Mac Shine Question 1:19:07 - 4E 1:21:38 - Dimensionality across levels 1:27:26 - AI and constraints 1:33:08 - AI and life

Sep 13, 2023 · 1h 45m

BI 173 Justin Wood: Origins of Visual Intelligence

Support the show to get full episodes, full archive, and join the Discord community. In the intro, I mention the Bernstein conference workshop I'll participate in, called How can machine learning be used to generate insights and theories in neuroscience?. Follow that link to learn more, and register for the conference here. Hope to see you there in late September in Berlin! Justin Wood runs the Wood Lab at Indiana University, and his lab's tagline is "building newborn minds in virtual worlds." In this episode, we discuss his work comparing the visual cognition of newborn chicks and AI models. He uses a controlled-rearing technique with natural chicks, whereby the chicks are raised from birth in completely controlled visual environments. That way, Justin can present designed visual stimuli to test what kinds of visual abilities chicks have or can immediately learn. Then he can build models and AI agents that are trained on the same data as the newborn chicks. The goal is to use the models to better understand natural visual intelligence, and use what we know about natural visual intelligence to help build systems that better emulate biological organisms. We discuss some of the visual abilities of the chicks and what he's found using convolutional neural networks. Beyond vision, we discuss his work studying the development of collective behavior, which compares chicks to a model that uses CNNs, reinforcement learning, and an intrinsic curiosity reward function. All of this informs the age-old nature (nativist) vs. nurture (empiricist) debates, which Justin believes should give way to embracing both nature and nurture. Wood lab. Related papers: Controlled-rearing studies of newborn chicks and deep neural networks. Development of collective behavior in newborn artificial agents. A newborn embodied Turing test for view-invariant object recognition.
Justin mentions these papers: Untangling invariant object recognition (DiCarlo & Cox, 2007) 0:00 - Intro 5:39 - Origins of Justin's current research 11:17 - Controlled rearing approach 21:52 - Comparing newborns and AI models 24:11 - Nativism vs. empiricism 28:15 - CNNs and early visual cognition 29:35 - Smoothness and slowness 50:05 - Early biological development 53:27 - Naturalistic vs. highly controlled 56:30 - Collective behavior in animals and machines 1:02:34 - Curiosity and critical periods 1:09:05 - Controlled rearing vs. other developmental studies 1:13:25 - Breaking natural rules 1:16:33 - Deep RL collective behavior 1:23:16 - Bottom-up and top-down

Aug 30, 2023 · 1h 35m

BI 172 David Glanzman: Memory All The Way Down

Support the show to get full episodes, full archive, and join the Discord community. David runs his lab at UCLA, where he's also a distinguished professor. David used to believe what is currently the mainstream view, that our memories are stored in our synapses, those connections between our neurons. So as we learn, the synaptic connections strengthen and weaken until they're just right, and that serves to preserve the memory. That's been the dominant view in neuroscience for decades, and is the fundamental principle that underlies basically all of deep learning in AI. But because of his own and others' experiments, which he describes in this episode, David has come to the conclusion that memory must be stored not at the synapse, but in the nucleus of neurons, likely by some epigenetic mechanism mediated by RNA molecules. If this sounds familiar, I had Randy Gallistel on the podcast on episode 126 to discuss similar ideas, and David discusses where he and Randy differ in their thoughts. This episode starts out pretty technical as David describes the series of experiments that changed his mind, but after that we broaden our discussion to a lot of the surrounding issues regarding whether his story about memory is true. And we discuss meta-issues like how old discarded ideas in science often find their way back, what it's like studying a non-mainstream topic, including challenges trying to get funded for it, and so on. David's Faculty Page. Related papers The central importance of nuclear mechanisms in the storage of memory. David mentions Arc and virus-like transmission: The Neuronal Gene Arc Encodes a Repurposed Retrotransposon Gag Protein that Mediates Intercellular RNA Transfer. Structure of an Arc-ane virus-like capsid. David mentions many of the ideas from the Pushing the Boundaries: Neuroscience, Cognition, and Life Symposium. Related episodes: BI 126 Randy Gallistel: Where Is the Engram? BI 127 Tomás Ryan: Memory, Instinct, and Forgetting

Aug 7, 20231h 30m

BI 171 Mike Frank: Early Language and Cognition

Support the show to get full episodes, full archive, and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience My guest is Michael C. Frank, better known as Mike Frank, who runs the Language and Cognition lab at Stanford. Mike's main interests center on how children learn language - in particular he focuses a lot on early word learning, and what that tells us about our other cognitive functions, like concept formation and social cognition. We discuss that, along with his love for developing open data sets that anyone can use; the dance he dances between bottom-up data-driven approaches in this big data era, traditional experimental approaches, and top-down theory-driven approaches; how early language learning in children differs from LLM learning; and Mike's rational speech act model of language use, which considers the intentions or pragmatics of speakers and listeners in dialogue. Language & Cognition Lab Twitter: @mcxfrank. I mentioned Mike's tweet thread about saying LLMs "have" cognitive functions. Related papers: Pragmatic language interpretation as probabilistic inference. Toward a “Standard Model” of Early Language Learning. The pervasive role of pragmatics in early language. The Structure of Developmental Variation in Early Childhood. Relational reasoning and generalization using non-symbolic neural networks. Unsupervised neural network models of the ventral visual stream.
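The rational speech act idea can be made concrete with its classic recursion: a literal listener, a speaker who chooses words to inform that listener, and a pragmatic listener who inverts the speaker model. Below is a minimal sketch of that recursion; the lexicon, object names, and uniform priors are all made up for illustration and are not taken from Mike's papers.

```python
# Toy rational speech act (RSA) model. Lexicon and objects are invented.
LEXICON = {
    "glasses": {"face_glasses", "face_glasses_hat"},
    "hat": {"face_glasses_hat"},
}
OBJECTS = ["face_plain", "face_glasses", "face_glasses_hat"]

def literal_listener(utterance):
    """Uniform belief over the objects the utterance is literally true of."""
    true_of = [o for o in OBJECTS if o in LEXICON[utterance]]
    return {o: 1.0 / len(true_of) for o in true_of}

def speaker(obj):
    """Choose among literally true utterances in proportion to how well
    each would point a literal listener at obj."""
    true_utts = [u for u in LEXICON if obj in LEXICON[u]]
    if not true_utts:
        return {}
    scores = {u: literal_listener(u)[obj] for u in true_utts}
    z = sum(scores.values())
    return {u: s / z for u, s in scores.items()}

def pragmatic_listener(utterance):
    """Bayesian inversion of the speaker model (uniform prior over objects)."""
    scores = {o: speaker(o).get(utterance, 0.0) for o in OBJECTS}
    z = sum(scores.values())
    return {o: s / z for o, s in scores.items() if s > 0}

# Hearing "glasses", the pragmatic listener favors the glasses-only face:
# a speaker meaning the glasses-and-hat face would likely have said "hat".
print(pragmatic_listener("glasses"))
```

The pragmatic step is the point: meaning comes not just from literal truth but from reasoning about what the speaker would have said otherwise.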

Jul 22, 20231h 24m

BI 170 Ali Mohebi: Starting a Research Lab

Support the show to get full episodes, full archive, and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience In this episode I have a casual chat with Ali Mohebi about his new faculty position and his plans for the future. Ali's website. Twitter: @mohebial

Jul 11, 20231h 17m

BI 169 Andrea Martin: Neural Dynamics and Language

Support the show to get full episodes, full archive, and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience My guest today is Andrea Martin, who is the Research Group Leader in the department of Language and Computation in Neural Systems at the Max Planck Institute and the Donders Institute. Andrea is deeply interested in understanding how our biological brains process and represent language. To this end, she is developing a theoretical model of language. The aim of the model is to account for the properties of language, like its structure, its compositionality, and its infinite expressibility, while adhering to physiological data we can measure from human brains. Her theoretical model of language, among other things, brings in the idea of low-dimensional manifolds and neural dynamics along those manifolds. We've discussed manifolds a lot on the podcast; they are a kind of abstract structure in the space of possible neural population activity - the neural dynamics. And that manifold structure defines the range of possible trajectories, or pathways, the neural dynamics can take over time. One of Andrea's ideas is that manifolds might be a way for the brain to combine two properties of how we learn and use language. One of those properties is the statistical regularities found in language - a given word, for example, occurs more often near some words and less often near others. This statistical approach is the foundation of how large language models are trained. The other property is the more formal structure of language: how it's arranged and organized in such a way that gives it meaning to us. Perhaps these two properties of language can come together as a single trajectory along a neural manifold. But she has lots of ideas, and we discuss many of them. And of course we discuss large language models, and how Andrea thinks of them with respect to biological cognition.
We talk about modeling in general and what models do and don't tell us, and much more. Andrea's website. Twitter: @andrea_e_martin. Related papers A Compositional Neural Architecture for Language An oscillating computational model can track pseudo-rhythmic speech by using linguistic predictions Neural dynamics differentially encode phrases and sentences during spoken language comprehension Hierarchical structure in language and action: A formal comparison Andrea mentions this book: The Geometry of Biological Time.

Jun 28, 20231h 41m

BI 168 Frauke Sandig and Eric Black w Alex Gomez-Marin: AWARE: Glimpses of Consciousness

Check out my free video series about what's missing in AI and Neuroscience Support the show to get full episodes, full archive, and join the Discord community. This is one in a periodic series of episodes with Alex Gomez-Marin, exploring how the arts and humanities can impact (neuro)science. Artistic creations, like cinema, have the ability to momentarily lower our ever-critical scientific mindset and allow us to imagine alternate possibilities and experience emotions outside our normal scientific routines. Might this feature of art change our scientific attitudes and perspectives? Frauke Sandig and Eric Black recently made the documentary film AWARE: Glimpses of Consciousness, which profiles six researchers studying consciousness from different perspectives. The film is filled with rich visual imagery and conveys a sense of wonder and awe in trying to understand subjective experience, while diving deep into the reflections of the scientists and thinkers approaching the topic from their various perspectives. This isn't a "normal" Brain Inspired episode, but I hope you enjoy the discussion! AWARE: Glimpses of Consciousness Umbrella Films 0:00 - Intro 19:42 - Mechanistic reductionism 45:33 - Changing views during lifetime 53:49 - Did making the film alter your views? 57:49 - ChatGPT 1:04:20 - Materialist assumption 1:11:00 - Science of consciousness 1:20:49 - Transhumanism 1:32:01 - Integrity 1:36:19 - Aesthetics 1:39:50 - Response to the film

Jun 2, 20231h 54m

BI 167 Panayiota Poirazi: AI Brains Need Dendrites

Support the show to get full episodes, full archive, and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience Panayiota Poirazi runs the Poirazi Lab at the FORTH Institute of Molecular Biology and Biotechnology, and Yiota loves dendrites, those branching tree-like structures sticking out of all your neurons, and she thinks you should love dendrites, too, whether you study biological or artificial intelligence. In neuroscience, the old story was that dendrites just reach out and collect incoming signals for the all-important neuron cell body to process. Yiota, and people like Matthew Larkum, with whom I chatted in episode 138, are continuing to demonstrate that dendrites are themselves computationally complex and powerful, performing many varieties of important signal transformations before signals reach the cell body. For example, in 2003, Yiota showed that because of dendrites, a single neuron can act as a two-layer artificial neural network, and since then others have shown single neurons can act as deeper and deeper multi-layer networks. In Yiota's opinion, an even more important function of dendrites is increased computing efficiency, something evolution favors and something artificial networks need to favor as well moving forward. Poirazi Lab Twitter: @YiotaPoirazi. Related papers Drawing Inspiration from Biological Dendrites to Empower Artificial Neural Networks. Illuminating dendritic function with computational models. Introducing the Dendrify framework for incorporating dendrites to spiking neural networks. Pyramidal Neuron as Two-Layer Neural Network 0:00 - Intro 3:04 - Yiota's background 6:40 - Artificial networks and dendrites 9:24 - Dendrites special sauce? 14:50 - Where are we in understanding dendrite function?
20:29 - Algorithms, plasticity, and brains 29:00 - Functional unit of the brain 42:43 - Engrams 51:03 - Dendrites and nonlinearity 54:51 - Spiking neural networks 56:02 - Best level of biological detail 57:52 - Dendrify 1:05:41 - Experimental work 1:10:58 - Dendrites across species and development 1:16:50 - Career reflection 1:17:57 - Evolution of Yiota's thinking
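The two-layer point can be made concrete: if each dendritic branch applies its own nonlinearity to its synaptic inputs before the soma combines the branch outputs, the neuron computes the same kind of function as a small two-layer artificial network. A toy sketch with made-up weights and a sigmoid nonlinearity, not Yiota's actual model:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def two_layer_neuron(inputs, branch_weights, soma_weights):
    """Each dendritic branch (subunit) applies its own nonlinearity to
    its synapses; the soma then nonlinearly combines the branch outputs,
    which is formally a two-layer artificial neural network."""
    branch_outputs = [
        sigmoid(sum(w * x for w, x in zip(branch, inputs)))
        for branch in branch_weights
    ]
    return sigmoid(sum(w * b for w, b in zip(soma_weights, branch_outputs)))

# Two dendritic branches, three synapses each (all weights invented).
rate = two_layer_neuron(
    inputs=[1.0, 0.0, 1.0],
    branch_weights=[[0.5, -0.2, 0.8], [-0.4, 0.9, 0.1]],
    soma_weights=[1.2, -0.7],
)
print(round(rate, 3))
```

On the old "passive collector" story, the branch nonlinearities would be absent and the neuron would reduce to a single weighted sum, i.e., a one-layer unit.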

May 27, 20231h 27m

BI 166 Nick Enfield: Language vs. Reality

Support the show to get full episodes, full archive, and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience Nick Enfield is a professor of linguistics at the University of Sydney. In this episode we discuss topics in his most recent book, Language vs. Reality: Why Language Is Good for Lawyers and Bad for Scientists. A central question in the book is: what is language for? What's the function of language? You might be familiar with the debate about whether language evolved for each of us thinking our wonderful human thoughts, or for communicating those thoughts between each other. Nick would be on the communication side of that debate, but if by communication we mean simply the transmission of thoughts or information between people - I have a thought, I send it to you in language, and that thought is now in your head - then Nick wouldn't take either side of that debate. He argues the function of language goes beyond the transmission of information, and instead is primarily an evolved solution for social coordination - coordinating our behaviors and attention. When we use language, we're creating maps in our heads so we can agree on where to go. For example, when I say, "This is brain inspired," I'm pointing you to a place to meet me on a conceptual map, saying, "Get ready, we're about to have a great time again!" In any case, with those 4 words, "This is brain inspired," I'm not just transmitting information from my head into your head. I'm providing you with a landmark so you can focus your attention appropriately. From that premise, that language is about social coordination, we talk about a handful of topics in his book, like the relationship between language and reality, and the idea that all language is framing - that is, how we say something influences how we think about it.
We discuss how our language changes in different social situations, the role of stories, and of course, how LLMs fit into Nick's story about language. Nick's website Twitter: @njenfield Book: Language vs. Reality: Why Language Is Good for Lawyers and Bad for Scientists. Papers: Linguistic concepts are self-generating choice architectures 0:00 - Intro 4:23 - Is learning about language important? 15:43 - Linguistic Anthropology 28:56 - Language and truth 33:57 - How special is language 46:19 - Choice architecture and framing 48:19 - Language for thinking or communication 52:30 - Agency and language 56:51 - Large language models 1:16:18 - Getting language right 1:20:48 - Social relationships and language

May 9, 20231h 27m

BI 165 Jeffrey Bowers: Psychology Gets No Respect

Check out my free video series about what's missing in AI and Neuroscience Support the show to get full episodes, full archive, and join the Discord community. Jeffrey Bowers is a psychologist and professor at the University of Bristol. As you know, many of my previous guests are in the business of comparing brain activity to the activity of units in artificial neural network models, when humans or animals and the models are performing the same tasks. And a big story that has emerged over the past decade or so is that there's a remarkable similarity between the activities and representations in brains and models. This was originally found in object categorization tasks, where the goal is to name the object shown in a given image, and where researchers have compared the activity in models good at doing that to the activity in the parts of our brains good at doing that. It's been found in various other tasks using various other models and analyses, many of which we've discussed on previous episodes, and more recently a similar story has emerged regarding a similarity between language-related activity in our brains and the activity in large language models. Namely, the ability of our brains to predict an upcoming word can be correlated with the models' ability to predict an upcoming word. So the word is that these deep learning type models are the best models of how our brains and cognition work. However, this is where Jeff Bowers comes in and raises the psychology flag, so to speak. His message is that these predictive approaches to comparing artificial and biological cognition aren't enough, and can mask important differences between them. What we need to do is start performing more hypothesis-driven tests, like those performed in psychology, to ask whether the models are indeed solving tasks the way our brains and minds do. Jeff and his group, among others, have been doing just that, and are discovering differences between models and minds that may be important if we want to use models to understand minds. We discuss some of his work and thoughts in this regard, and a lot more. Website Twitter: @jeffrey_bowers Related papers: Deep Problems with Neural Network Models of Human Vision. Parallel Distributed Processing Theory in the Age of Deep Networks. Successes and critical failures of neural networks in capturing human-like speech recognition. 0:00 - Intro 3:52 - Testing neural networks 5:35 - Neuro-AI needs psychology 23:36 - Experiments in AI and neuroscience 23:51 - Why build networks like our minds? 44:55 - Vision problem spaces, solution spaces, training data 55:45 - Do we implement algorithms? 1:01:33 - Relational and combinatorial cognition 1:06:17 - Comparing representations in different networks 1:12:31 - Large language models 1:21:10 - Teaching LLMs nonsense languages

Apr 12, 20231h 38m

BI 164 Gary Lupyan: How Language Affects Thought

Support the show to get full episodes, full archive, and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience Gary Lupyan runs the Lupyan Lab at University of Wisconsin, Madison, where he studies how language and cognition are related. In some ways, this is a continuation of the conversation I had last episode with Ellie Pavlick, in that we partly continue to discuss large language models. But Gary is more focused on how language - naming things, categorizing things - changes our cognition related to those things. How does naming something change our perception of it, and so on. He's interested in how concepts come about, and how they map onto language. So we talk about some of his work and ideas related to those topics. And we actually start the discussion with some of Gary's work related to the variability of individual humans' phenomenal experience, and how that affects our individual cognition. For instance, some people are more visual thinkers, others are more verbal, and there seems to be an appreciable spectrum of differences that Gary is beginning to experimentally test. Lupyan Lab. Twitter: @glupyan. Related papers: Hidden Differences in Phenomenal Experience. Verbal interference paradigms: A systematic review investigating the role of language in cognition. Gary mentioned Richard Feynman's Ways of Thinking video. Gary and Andy Clark's Aeon article: Super-cooperators. 0:00 - Intro 2:36 - Words and communication 14:10 - Phenomenal variability 26:24 - Co-operating minds 38:11 - Large language models 40:40 - Neuro-symbolic AI, scale 44:43 - How LLMs have changed Gary's thoughts about language 49:26 - Meaning, grounding, and language 54:26 - Development of language 58:53 - Symbols and emergence 1:03:20 - Language evolution in the LLM era 1:08:05 - Concepts 1:11:17 - How special is language? 1:18:08 - AGI

Apr 1, 20231h 31m

BI 163 Ellie Pavlick: The Mind of a Language Model

Support the show to get full episodes, full archive, and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience Ellie Pavlick runs her Language Understanding and Representation Lab at Brown University, where she studies lots of topics related to language. In AI, large language models, sometimes called foundation models, are all the rage these days, with their ability to generate convincing language, although they still make plenty of mistakes. One of the things Ellie is interested in is how these models work, and what kinds of representations are being generated in them to produce the language they produce. So we discuss how she's going about studying these models. For example, probing them to see whether something symbolic-like might be implemented in the models, even though they are of the deep learning neural network type, which isn't supposed to be able to work in a symbol-like manner. We also discuss whether grounding is required for language understanding - that is, whether a model that produces language well needs to connect with the real world to actually understand the text it generates. We talk about what language is for, the current limitations of large language models, how the models compare to humans, and a lot more. Language Understanding and Representation Lab Twitter: @Brown_NLP Related papers Semantic Structure in Deep Learning. Pretraining on Interactions for Learning Grounded Affordance Representations. Mapping Language Models to Grounded Conceptual Spaces. 0:00 - Intro 2:34 - Will LLMs make us dumb? 9:01 - Evolution of language 17:10 - Changing views on language 22:39 - Semantics, grounding, meaning 37:40 - LLMs, humans, and prediction 41:19 - How to evaluate LLMs 51:08 - Structure, semantics, and symbols in models 1:00:08 - Dimensionality 1:02:08 - Limitations of LLMs 1:07:47 - What do linguists think? 1:14:23 - What is language for?

Mar 20, 20231h 21m

BI 162 Earl K. Miller: Thoughts are an Emergent Property

Support the show to get full episodes, full archive, and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience Earl Miller runs the Miller Lab at MIT, where he studies how our brains carry out our executive functions, like working memory, attention, and decision-making. In particular he is interested in the role of the prefrontal cortex and how it coordinates with other brain areas to carry out these functions. During this episode, we talk broadly about how neuroscience has changed during Earl's career, and how his own thoughts have changed. One thing we focus on is the increasing appreciation of the importance of brain oscillations for our cognition. Recently on BI we've discussed oscillations quite a bit. In episode 153, Carolyn Dicey-Jennings discussed her philosophical ideas relating attention to the notion of the self, and she leans a lot on Earl's research to make that argument. In episode 160, Ole Jensen discussed his work in humans showing that low-frequency oscillations exert top-down control on incoming sensory stimuli, and this is directly in agreement with Earl's work over many years in nonhuman primates. So we continue that discussion relating low-frequency oscillations to executive control. We also discuss a new concept Earl has developed called spatial computing, which is an account of how brain oscillations can dictate where in various brain areas neural activity is on or off, and hence contributes or not to ongoing mental function. We also discuss working memory in particular, and a host of related topics. Miller lab. Twitter: @MillerLabMIT. Related papers: An integrative theory of prefrontal cortex function. Annual Review of Neuroscience. Working Memory Is Complex and Dynamic, Like Your Thoughts. Traveling waves in the prefrontal cortex during working memory.
0:00 - Intro 6:22 - Evolution of Earl's thinking 14:58 - Role of the prefrontal cortex 25:21 - Spatial computing 32:51 - Homunculus problem 35:34 - Self 37:40 - Dimensionality and thought 46:13 - Reductionism 47:38 - Working memory and capacity 1:01:45 - Capacity as a principle 1:05:44 - Silent synapses 1:10:16 - Subspaces in dynamics

Mar 8, 20231h 23m

BI 161 Hugo Spiers: Navigation and Spatial Cognition

Support the show to get full episodes, full archive, and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience Hugo Spiers runs the Spiers Lab at University College London. In general Hugo is interested in understanding spatial cognition, like navigation, in relation to other processes like planning and goal-related behavior, and how brain areas like the hippocampus and prefrontal cortex coordinate these cognitive functions. So, in this episode, we discuss a range of his research and thoughts around those topics. You may have heard about the studies he's been involved with for years, regarding London taxi drivers and how their hippocampus changes as a result of their grueling efforts to memorize how to best navigate London. We talk about that, and we discuss the concept of a schema, which is roughly an abstracted form of knowledge that helps you know how to behave in different environments. Probably the most common example is that we all have a schema for eating at a restaurant: independent of which restaurant we visit, we know about servers, and menus, and so on. Hugo is interested in spatial schemas, for things like navigating a new city you haven't visited. Hugo describes his work using reinforcement learning methods to compare how humans and animals solve navigation tasks. And finally we talk about the video game Hugo has been using to collect vast amounts of data related to navigation, to answer questions like how our navigation ability changes over our lifetimes, which factors seem to matter most for our navigation skills, and so on. Spiers Lab. Twitter: @hugospiers. Related papers Predictive maps in rats and humans for spatial navigation. From cognitive maps to spatial schemas. London taxi drivers: A review of neurocognitive studies and an exploration of how they build their cognitive map of London.
Explaining World-Wide Variation in Navigation Ability from Millions of People: Citizen Science Project Sea Hero Quest.
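For a flavor of the reinforcement learning methods used to compare navigators, here is a minimal tabular Q-learning agent that learns to reach a goal in a toy grid world. The grid, rewards, and parameters are all invented for this sketch; it is not Hugo's actual task or model.

```python
import random

random.seed(0)

SIZE, GOAL = 4, (3, 3)                        # 4x4 grid, goal in the corner
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

def step(state, action):
    """Move, clipping at the walls; small cost per move, reward at the goal."""
    r = min(max(state[0] + action[0], 0), SIZE - 1)
    c = min(max(state[1] + action[1], 0), SIZE - 1)
    nxt = (r, c)
    return nxt, (1.0 if nxt == GOAL else -0.04)

Q = {((r, c), a): 0.0
     for r in range(SIZE) for c in range(SIZE) for a in ACTIONS}

for _ in range(2000):                          # training episodes
    s = (0, 0)
    while s != GOAL:
        if random.random() < 0.1:              # epsilon-greedy exploration
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, reward = step(s, a)
        best_next = max(Q[(nxt, act)] for act in ACTIONS)
        Q[(s, a)] += 0.5 * (reward + 0.9 * best_next - Q[(s, a)])
        s = nxt

# Greedy rollout with the learned values (the shortest path is 6 moves).
s, path = (0, 0), []
for _ in range(20):                            # cap steps, just in case
    if s == GOAL:
        break
    a = max(ACTIONS, key=lambda act: Q[(s, act)])
    s, _ = step(s, a)
    path.append(s)
print(len(path), s == GOAL)
```

Comparing humans and animals in this framework amounts to asking whose choices this kind of learned value function best predicts.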

Feb 24, 20231h 34m

BI 160 Ole Jensen: Rhythms of Cognition

Support the show to get full episodes, full archive, and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience Ole Jensen is co-director of the Centre for Human Brain Health at University of Birmingham, where he runs his Neuronal Oscillations Group lab. Ole is interested in how the oscillations in our brains affect our cognition by helping to shape the spiking patterns of neurons, and by helping to allocate resources to parts of our brains that are relevant for whatever ongoing behaviors we're performing in different contexts. People have been studying oscillations for decades, linking different frequencies of oscillations to a bunch of different cognitive functions. Some of what we discuss today is Ole's work on alpha oscillations, which are around 10 hertz, so 10 oscillations per second. The overarching story is that alpha oscillations are thought to inhibit or disrupt processing in brain areas that aren't needed during a given behavior. And therefore, by disrupting everything that's not needed, resources are allocated to the brain areas that are needed. We discuss his work in this vein on attention - you may remember the episode with Carolyn Dicey-Jennings, and her ideas about how findings like Ole's are evidence we all have selves. We also talk about the role of alpha rhythms for working memory, for moving our eyes, and for previewing what we're about to look at before we move our eyes, and more broadly we discuss the role of oscillations in cognition in general, and of course what this might mean for developing better artificial intelligence. The Neuronal Oscillations Group. Twitter: @neuosc.
Related papers Shaping functional architecture by oscillatory alpha activity: gating by inhibition FEF-Controlled Alpha Delay Activity Precedes Stimulus-Induced Gamma-Band Activity in Visual Cortex The theta-gamma neural code A pipelining mechanism supporting previewing during visual exploration and reading. Specific lexico-semantic predictions are associated with unique spatial and temporal patterns of neural activity. 0:00 - Intro 2:58 - Oscillations' import over the years 5:51 - Oscillations big picture 17:62 - Oscillations vs. traveling waves 22:00 - Oscillations and algorithms 28:53 - Alpha oscillations and working memory 44:46 - Alpha as the controller 48:55 - Frequency tagging 52:49 - Timing of attention 57:41 - Pipelining neural processing 1:03:38 - Previewing during reading 1:15:50 - Previewing, prediction, and large language models 1:24:27 - Dyslexia
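The "gating by inhibition" story can be caricatured in a few lines: a ~10 Hz alpha oscillation rhythmically suppresses spiking, so an area with strong alpha contributes less to ongoing processing. Everything below (rates, amplitudes, the inhibition rule) is a toy illustration, not a model from Ole's papers.

```python
import math
import random

random.seed(1)

DT = 0.001        # 1 ms time step
ALPHA_HZ = 10.0   # alpha rhythm: ~10 cycles per second
DRIVE = 0.2       # baseline spiking probability per time step

def spikes_per_second(alpha_amplitude):
    """Count spikes over 1 s while alpha rhythmically inhibits spiking;
    stronger alpha -> more suppression -> fewer spikes."""
    spikes = 0
    for i in range(int(1.0 / DT)):
        phase = 2.0 * math.pi * ALPHA_HZ * i * DT
        inhibition = alpha_amplitude * (1.0 + math.sin(phase)) / 2.0
        if random.random() < DRIVE * (1.0 - inhibition):
            spikes += 1
    return spikes

engaged = spikes_per_second(alpha_amplitude=0.1)     # task-relevant area
suppressed = spikes_per_second(alpha_amplitude=0.9)  # task-irrelevant area
print(engaged, suppressed)
```

The task-relevant area, with weak alpha, ends up spiking far more than the strongly alpha-inhibited one, which is the resource-allocation intuition in miniature.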

Feb 7, 20231h 28m

BI 159 Chris Summerfield: Natural General Intelligence

Support the show to get full episodes, full archive, and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience Chris Summerfield runs the Human Information Processing Lab at University of Oxford, and he's a research scientist at DeepMind. You may remember him from episode 95 with Sam Gershman, when we discussed ideas around the usefulness of neuroscience and psychology for AI. Since then, Chris has released his book, Natural General Intelligence: How understanding the brain can help us build AI. In the book, Chris makes the case that inspiration and communication between the cognitive sciences and AI are hindered by the different languages each field speaks. But in reality, there has always been and still is a lot of overlap and convergence about ideas of computation and intelligence, and he illustrates this using tons of historical and modern examples. Human Information Processing Lab. Twitter: @summerfieldlab. Book: Natural General Intelligence: How understanding the brain can help us build AI. Other books mentioned: Are We Smart Enough to Know How Smart Animals Are? by Frans de Waal The Mind is Flat by Nick Chater. 0:00 - Intro 2:20 - Natural General Intelligence 8:05 - AI and Neuro interaction 21:42 - How to build AI 25:54 - Umwelts and affordances 32:07 - Different kind of intelligence 39:16 - Ecological validity and AI 48:30 - Is reward enough? 1:05:14 - Beyond brains 1:15:10 - Large language models and brains

Jan 26, 20231h 28m

BI 158 Paul Rosenbloom: Cognitive Architectures

Check out my free video series about what's missing in AI and Neuroscience Support the show to get full episodes, full archive, and join the Discord community. Paul Rosenbloom is Professor Emeritus of Computer Science at the University of Southern California. In the early 1980s, Paul, along with John Laird and the early AI pioneer Allen Newell, developed one of the earliest and best-known cognitive architectures, called SOAR. A cognitive architecture, as Paul defines it, is a model of the fixed structures and processes underlying minds, and in Paul's case the human mind. And SOAR was aimed at generating general intelligence. He doesn't work on SOAR anymore, although SOAR is still alive and well in the hands of his old partner John Laird. He did go on to develop another cognitive architecture, called Sigma, and in the intervening years between those projects, among other things Paul stepped back and explored how our various scientific domains are related, and how computing itself should be considered a great scientific domain. That's in his book On Computing: The Fourth Great Scientific Domain. He also helped develop the Common Model of Cognition, which isn't a cognitive architecture itself, but instead a theoretical model meant to generate consensus regarding the minimal components for a human-like mind. The idea is roughly to create a shared language and framework among cognitive architecture researchers, so that whatever cognitive architecture you work on, you have a basis to compare it to others, and can communicate effectively among your peers. All of what I just said, and much of what we discuss, can be found in Paul's memoir, In Search of Insight: My Life as an Architectural Explorer. Paul's website. Related papers Working memoir: In Search of Insight: My Life as an Architectural Explorer. Book: On Computing: The Fourth Great Scientific Domain.
A Standard Model of the Mind: Toward a Common Computational Framework across Artificial Intelligence, Cognitive Science, Neuroscience, and Robotics. Analysis of the human connectome data supports the notion of a “Common Model of Cognition” for human and human-like intelligence across domains. Common Model of Cognition Bulletin. 0:00 - Intro 3:26 - A career of exploration 7:00 - Allen Newell 14:47 - Relational model and dichotomic maps 24:22 - Cognitive architectures 28:31 - SOAR cognitive architecture 41:14 - Sigma cognitive architecture 43:58 - SOAR vs. Sigma 53:06 - Cognitive architecture community 55:31 - Common model of cognition 1:11:13 - What's missing from the common model 1:17:48 - Brains vs. cognitive architectures 1:21:22 - Mapping the common model onto the brain 1:24:50 - Deep learning 1:30:23 - AGI

Jan 16, 20231h 35m

BI 157 Sarah Robins: Philosophy of Memory

Support the show to get full episodes, full archive, and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience Sarah Robins is a philosopher at the University of Kansas, one of a growing handful of philosophers specializing in memory. Much of her work focuses on memory traces, which is roughly the idea that somehow our memories leave a trace in our minds. We discuss memory traces themselves and how they relate to the engram (see BI 126 Randy Gallistel: Where Is the Engram?, and BI 127 Tomás Ryan: Memory, Instinct, and Forgetting). Psychology has divided memories into many categories - the taxonomy of memory. Sarah and I discuss how memory traces may cross-cut those categories, suggesting we may need to re-think our current ontology and taxonomy of memory. We discuss a couple of challenges to the idea of a stable memory trace in the brain. Neural dynamics is the notion that all our molecules and synapses are constantly changing and being recycled. Memory consolidation refers to the process of transferring our memory traces from an early unstable version to a more stable long-term version in a different part of the brain. Sarah thinks neither challenge poses a real threat to the idea. We also discuss the impact of optogenetics on the philosophy and neuroscience of memory, the debate about whether memory and imagination are essentially the same thing, whether memory's function is future oriented, and whether we want to build AI with our often faulty human-like memory or with perfect memory. Sarah's website. Twitter: @SarahKRobins. Related papers: Her Memory chapter, with Felipe de Brigard, in the book Mind, Cognition, and Neuroscience: A Philosophical Introduction. Memory and Optogenetic Intervention: Separating the engram from the ecphory. Stable Engrams and Neural Dynamics.
0:00 - Intro 4:18 - Philosophy of memory 5:10 - Making a move 6:55 - State of philosophy of memory 11:19 - Memory traces or the engram 20:44 - Taxonomy of memory 25:50 - Cognitive ontologies, neuroscience, and psychology 29:39 - Optogenetics 33:48 - Memory traces vs. neural dynamics and consolidation 40:32 - What is the boundary of a memory? 43:00 - Process philosophy and memory 45:07 - Memory vs. imagination 49:40 - Constructivist view of memory and imagination 54:05 - Is memory for the future? 58:00 - Memory errors and intelligence 1:00:42 - Memory and AI 1:06:20 - Creativity and memory errors

Jan 2, 20231h 20m

BI 156 Mariam Aly: Memory, Attention, and Perception

Support the show to get full episodes, full archive, and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience Mariam Aly runs the Aly lab at Columbia University, where she studies the interaction of memory, attention, and perception in brain regions like the hippocampus. The short story is that memory affects our perceptions, attention affects our memories, memories affect our attention, and these effects have signatures in neural activity measurements in our hippocampus and other brain areas. We discuss her experiments testing the nature of those interactions. We also discuss a particularly difficult stretch in Mariam's graduate school years, and how she now prioritizes her mental health. Aly Lab. Twitter: @mariam_s_aly. Related papers Attention promotes episodic encoding by stabilizing hippocampal representations. The medial temporal lobe is critical for spatial relational perception. Cholinergic modulation of hippocampally mediated attention and perception. Preparation for upcoming attentional states in the hippocampus and medial prefrontal cortex. How hippocampal memory shapes, and is shaped by, attention. Attentional fluctuations and the temporal organization of memory. 0:00 - Intro 3:50 - Mariam's background 9:32 - Hippocampus history and current science 12:34 - hippocampus and perception 13:42 - Relational information 18:30 - How much memory is explicit? 22:32 - How attention affects hippocampus 32:40 - fMRI levels vs. stability 39:04 - How is hippocampus necessary for attention 57:00 - How much does attention affect memory? 1:02:24 - How memory affects attention 1:06:50 - Attention and memory relation big picture 1:07:42 - Current state of memory and attention 1:12:12 - Modularity 1:17:52 - Practical advice to improve attention/memory 1:21:22 - Mariam's challenges

Dec 23, 2022 · 1h 40m

BI 155 Luiz Pessoa: The Entangled Brain

Support the show to get full episodes, full archive, and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience. Luiz Pessoa runs his Laboratory of Cognition and Emotion at the University of Maryland, College Park, where he studies how emotion and cognition interact. On this episode, we discuss many of the topics from his latest book, The Entangled Brain: How Perception, Cognition, and Emotion Are Woven Together, which is aimed at a general audience. The book argues we need to re-think how to study the brain. Traditionally, cognitive functions of the brain have been studied in a modular fashion: area X does function Y. However, modern research has revealed the brain is highly complex and carries out cognitive functions in a much more interactive and integrative fashion: a given cognitive function results from many areas and circuits temporarily coalescing (for similar ideas, see also BI 152 Michael L. Anderson: After Phrenology: Neural Reuse). Luiz and I discuss the implications of studying the brain from a complex systems perspective, why we need to go beyond thinking about anatomy and instead think about functional organization, some of the brain's principles of organization, and a lot more. Laboratory of Cognition and Emotion. Twitter: @PessoaBrain. Book: The Entangled Brain: How Perception, Cognition, and Emotion Are Woven Together. 0:00 - Intro 2:47 - The Entangled Brain 16:24 - How to think about complex systems 23:41 - Modularity thinking 28:16 - How to train one's mind to think complex 33:26 - Problem or principle? 44:22 - Complex behaviors 47:06 - Organization vs.
structure 51:09 - Principles of organization: Massive Combinatorial Anatomical Connectivity 55:15 - Principles of organization: High Distributed Functional Connectivity 1:00:50 - Principles of organization: Networks as Functional Units 1:06:15 - Principles of organization: Interactions via Cortical-Subcortical Loops 1:08:53 - Open and closed loops 1:16:43 - Principles of organization: Connectivity with the Body 1:21:28 - Consciousness 1:24:53 - Emotions 1:32:49 - Emotions and AI 1:39:47 - Emotion as a concept 1:43:25 - Complexity and functional organization in AI

Dec 10, 2022 · 1h 54m

BI 154 Anne Collins: Learning with Working Memory

Check out my free video series about what's missing in AI and Neuroscience. Support the show to get full episodes, full archive, and join the Discord community. Anne Collins runs her Computational Cognitive Neuroscience Lab at the University of California, Berkeley. One of the things she's been working on for years is how our working memory plays a role in learning, and specifically how working memory and reinforcement learning interact to affect how we learn, depending on the nature of what we're trying to learn. We discuss that interaction specifically. We also discuss more broadly how segregated and how overlapping and interacting our cognitive functions are, what that implies about our natural tendency to think in dichotomies - like model-free vs. model-based RL, system-1 vs. system-2, etc. - and we dive into plenty of other subjects, like how to possibly incorporate these ideas into AI. Computational Cognitive Neuroscience Lab. Twitter: @ccnlab or @Anne_On_Tw. Related papers: How Working Memory and Reinforcement Learning Are Intertwined: A Cognitive, Neural, and Computational Perspective. Beyond simple dichotomies in reinforcement learning. The Role of Executive Function in Shaping Reinforcement Learning. What do reinforcement learning models measure? Interpreting model parameters in cognition and neuroscience. 0:00 - Intro 5:25 - Dimensionality of learning 11:19 - Modularity of function and computations 16:51 - Is working memory a thing? 19:33 - Model-free model-based dichotomy 30:40 - Working memory and RL 44:43 - How working memory and RL interact 50:50 - Working memory and attention 59:37 - Computations vs. implementations 1:03:25 - Interpreting results 1:08:00 - Working memory and AI

Nov 29, 2022 · 1h 22m

BI 153 Carolyn Dicey-Jennings: Attention and the Self

Check out my free video series about what's missing in AI and Neuroscience. Support the show to get full episodes, full archive, and join the Discord community. Carolyn Dicey Jennings is a philosopher and a cognitive scientist at the University of California, Merced. In her book The Attending Mind, she lays out an attempt to unify the concept of attention. Carolyn defines attention roughly as the mental prioritization of some stuff over other stuff based on our collective interests. And one of her main claims is that attention is evidence of a real, emergent self or subject that can't be reduced to microscopic brain activity. She does connect attention to more macroscopic brain activity, suggesting slow, longer-range oscillations in our brains can alter or entrain more local neural activity, and this is a candidate for mental causation. We unpack that more in our discussion, along with how Carolyn situates attention among other cognitive functions, like consciousness, action, and perception. Carolyn's website. Book: The Attending Mind. Aeon article: I Attend, Therefore I Am. Related papers: The Subject of Attention. Consciousness and Mind. Practical Realism about the Self. 0:00 - Intro 12:15 - Reconceptualizing attention 16:07 - Types of attention 19:02 - Predictive processing and attention 23:19 - Consciousness, identity, and self 30:39 - Attention and the brain 35:47 - Integrated information theory 42:05 - Neural attention 52:08 - Decoupling oscillations from spikes 57:16 - Selves in other organisms 1:00:42 - AI and the self 1:04:43 - Attention, consciousness, conscious perception 1:08:36 - Meaning and attention 1:11:12 - Conscious entrainment 1:19:57 - Is attention a switch or knob?

Nov 18, 2022 · 1h 25m

BI 152 Michael L. Anderson: After Phrenology: Neural Reuse

Check out my free video series about what's missing in AI and Neuroscience. Support the show to get full episodes, full archive, and join the Discord community. Michael L. Anderson is a professor at the Rotman Institute of Philosophy, at Western University. His book, After Phrenology: Neural Reuse and the Interactive Brain, calls for a re-conceptualization of how we understand and study brains and minds. Neural reuse is the phenomenon that any given brain area is active for multiple cognitive functions, and partners with different sets of brain areas to carry out different cognitive functions. We discuss the implications of this, and other topics in Michael's research and the book, like evolution, embodied cognition, and Gibsonian perception. Michael also fields guest questions from John Krakauer and Alex Gomez-Marin, about representations and metaphysics, respectively. Michael's website. Twitter: @mljanderson. Book: After Phrenology: Neural Reuse and the Interactive Brain. Related papers: Neural reuse: a fundamental organizational principle of the brain. Some dilemmas for an account of neural representation: A reply to Poldrack. Debt-free intelligence: Ecological information in minds and machines. Describing functional diversity of brain regions and brain networks. 0:00 - Intro 3:02 - After Phrenology 13:18 - Typical neuroscience experiment 16:29 - Neural reuse 18:37 - 4E cognition and representations 22:48 - John Krakauer question 27:38 - Gibsonian perception 36:17 - Autoencoders without representations 49:22 - Pluralism 52:42 - Alex Gomez-Marin question - metaphysics 1:01:26 - Stimulus-response historical neuroscience 1:10:59 - After Phrenology influence 1:19:24 - Origins of neural reuse 1:35:25 - The way forward

Nov 8, 2022 · 1h 45m

BI 151 Steve Byrnes: Brain-like AGI Safety

Support the show to get full episodes, full archive, and join the Discord community. Steve Byrnes is a physicist turned AGI safety researcher. He's concerned that when we create AGI, whenever and however that might happen, we run the risk of creating it in a less than perfectly safe way. AGI safety (AGI not doing something bad) is a wide net that encompasses AGI alignment (AGI doing what we want it to do). We discuss a host of ideas Steve writes about in his Intro to Brain-Like-AGI Safety blog series, which uses what he has learned about brains to address how we might safely make AGI. Steve's website. Twitter: @steve47285. Intro to Brain-Like-AGI Safety.

Oct 30, 2022 · 1h 31m

BI 150 Dan Nicholson: Machines, Organisms, Processes

Support the show to get full episodes, full archive, and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience. Dan Nicholson is a philosopher at George Mason University. He incorporates the history of science and philosophy into modern analyses of our conceptions of processes related to life and organisms. He is also interested in re-orienting our conception of the universe as made fundamentally of things/substances, and replacing it with the idea that the universe is made fundamentally of processes (process philosophy). In this episode, we discuss both of those subjects: why the "machine conception of the organism" is incorrect, how to apply these ideas to topics like neuroscience and artificial intelligence, and much more. Dan's website. Google Scholar. Twitter: @NicholsonHPBio. Book: Everything Flows: Towards a Processual Philosophy of Biology. Related papers: Is the Cell Really a Machine? The Machine Conception of the Organism in Development and Evolution: A Critical Analysis. On Being the Right Size, Revisited: The Problem with Engineering Metaphors in Molecular Biology. Related episode: BI 118 Johannes Jäger: Beyond Networks. 0:00 - Intro 2:49 - Philosophy and science 16:37 - Role of history 23:28 - What Is Life? And interaction with James Watson 38:37 - Arguments against the machine conception of organisms 49:08 - Organisms as streams (processes) 57:52 - Process philosophy 1:08:59 - Alfred North Whitehead 1:12:45 - Process and consciousness 1:22:16 - Artificial intelligence and process 1:31:47 - Language and symbols and processes

Oct 15, 2022 · 1h 38m

BI 149 William B. Miller: Cell Intelligence

Check out my free video series about what's missing in AI and Neuroscience. Support the show to get full episodes, full archive, and join the Discord community. William B. Miller is an ex-physician turned evolutionary biologist. In this episode, we discuss topics related to his new book, Bioverse: How the Cellular World Contains the Secrets to Life's Biggest Questions. The premise of the book is that all individual cells are intelligent in their own right, and possess a sense of self. From this, Bill makes the case that cells cooperate with other cells to engineer whole organisms that in turn serve as wonderful hosts for the myriad cell types. Further, our bodies are collections of our own cells (with our DNA), and an enormous amount and diversity of foreign cells - our microbiome - that communicate and cooperate with each other and with our own cells. We also discuss how cell intelligence compares to human intelligence, what Bill calls the "era of the cell" in science, how the future of medicine will harness the intelligence of cells and their cooperative nature, and much more. William's website. Twitter: @BillMillerMD. Book: Bioverse: How the Cellular World Contains the Secrets to Life's Biggest Questions. 0:00 - Intro 3:43 - Bioverse 7:29 - Bill's cell appreciation origins 17:03 - Microbiomes 27:01 - Complexity of microbiomes and the "Era of the cell" 46:00 - Robustness 55:05 - Cell vs. human intelligence 1:10:08 - Artificial intelligence 1:21:01 - Neuro-AI 1:25:53 - Hard problem of consciousness

Oct 5, 2022 · 1h 33m

BI 148 Gaute Einevoll: Brain Simulations

Check out my free video series about what's missing in AI and Neuroscience. Support the show to get full episodes, full archive, and join the Discord community. Gaute Einevoll is a professor at the University of Oslo and the Norwegian University of Life Sciences. He develops detailed models of brain networks to use as simulations, so neuroscientists can test their various theories and hypotheses about how networks implement various functions. Thus, the models are tools. The goal is to create models that are multi-level, to test questions at various levels of biological detail, and multi-modal, to predict the handful of signals neuroscientists measure from real brains (something Gaute calls "measurement physics"). We also discuss Gaute's thoughts on Carina Curto's "beautiful vs. ugly models," and his reaction to Noah Hutton's In Silico documentary about the Blue Brain and Human Brain projects (Gaute has been funded by the Human Brain Project since its inception). Gaute's website. Twitter: @GauteEinevoll. Related papers: The Scientific Case for Brain Simulations. Brain signal predictions from multi-scale networks using a linearized framework. Uncovering circuit mechanisms of current sinks and sources with biophysical simulations of primary visual cortex. LFPy: a Python module for calculation of extracellular potentials from multicompartment neuron models. Gaute's Sense and Science podcast. 0:00 - Intro 3:25 - Beautiful and messy models 6:34 - In Silico 9:47 - Goals of human brain project 15:50 - Brain simulation approach 21:35 - Degeneracy in parameters 26:24 - Abstract principles from simulations 32:58 - Models as tools 35:34 - Predicting brain signals 41:45 - LFPs closer to average 53:57 - Plasticity in simulations 56:53 - How detailed should we model neurons? 59:09 - Lessons from predicting signals 1:06:07 - Scaling up 1:10:54 - Simulation as a tool 1:12:35 - Oscillations 1:16:24 - Manifolds and simulations 1:20:22 - Modeling cortex like Hodgkin and Huxley
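To make the "measurement physics" idea concrete: forward-modeling tools like LFPy predict extracellular signals from simulated currents, and the simplest version is the point-source volume-conductor formula phi = I / (4*pi*sigma*r). The sketch below is an illustrative toy, not LFPy code; the current, distance, and conductivity values are assumptions for demonstration.

```python
import math

def point_source_potential(I, r, sigma=0.3):
    """Extracellular potential (V) at distance r (m) from a point current
    source I (A) in an infinite homogeneous medium with conductivity
    sigma (S/m): phi = I / (4 * pi * sigma * r)."""
    return I / (4.0 * math.pi * sigma * r)

# The potential falls off as 1/r: doubling the distance halves the signal.
phi_near = point_source_potential(1e-9, 50e-6)    # 1 nA source, 50 um away
phi_far = point_source_potential(1e-9, 100e-6)    # same source, 100 um away
```

Real tools sum contributions like this over many compartments of many detailed neuron models, which is what lets a simulation predict the LFPs an experimenter would actually record.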

Sep 25, 2022 · 1h 28m

BI 147 Noah Hutton: In Silico

Check out my free video series about what's missing in AI and Neuroscience. Support the show to get full episodes, full archive, and join the Discord community. Noah Hutton writes, directs, and scores documentary and narrative films. On this episode, we discuss his documentary In Silico. In 2009, Noah watched a TED talk by Henry Markram, in which Henry claimed it would take 10 years to fully simulate a human brain. This claim inspired Noah to chronicle the project, visiting Henry and his team periodically throughout. The result was In Silico, which tells the science, human, and social story of Henry's massively funded projects - the Blue Brain Project and the Human Brain Project. In Silico website. Rent or buy In Silico. Noah's website. Twitter: @noah_hutton. 0:00 - Intro 3:36 - Release and premiere 7:37 - Noah's background 9:52 - Origins of In Silico 19:39 - Recurring visits 22:13 - Including the critics 25:22 - Markram's shifting outlook and salesmanship 35:43 - Promises and delivery 41:28 - Computer and brain terms interchange 49:22 - Progress vs. illusion of progress 52:19 - Close to quitting 58:01 - Salesmanship vs. bad at estimating timelines 1:02:12 - Brain simulation science 1:11:19 - AGI 1:14:48 - Brain simulation vs. neuro-AI 1:21:03 - Opinion on TED talks 1:25:16 - Hero worship 1:29:03 - Feedback on In Silico

Sep 13, 2022 · 1h 37m

BI 146 Lauren Ross: Causal and Non-Causal Explanation

Check out my free video series about what's missing in AI and Neuroscience. Support the show to get full episodes, full archive, and join the Discord community. Lauren Ross is an Associate Professor at the University of California, Irvine. She studies and writes about causal and non-causal explanations in philosophy of science, including distinctions among causal structures. Throughout her work, Lauren employs James Woodward's interventionist approach to causation, which Jim and I discussed in episode 145. In this episode, we discuss Jim's lasting impact on the philosophy of causation, the current dominance of mechanistic explanation and its relation to causation, and various causal structures of explanation, including pathways, cascades, topology, and constraints. Lauren's website. Twitter: @ProfLaurenRoss. Related papers: A call for more clarity around causality in neuroscience. The explanatory nature of constraints: Law-based, mathematical, and causal. Causal Concepts in Biology: How Pathways Differ from Mechanisms and Why It Matters. Distinguishing topological and causal explanation. Multiple Realizability from a Causal Perspective. Cascade versus mechanism: The diversity of causal structure in science. 0:00 - Intro 2:46 - Lauren's background 10:14 - Jim Woodward legacy 15:37 - Golden era of causality 18:56 - Mechanistic explanation 28:51 - Pathways 31:41 - Cascades 36:25 - Topology 41:17 - Constraint 50:44 - Hierarchy of explanations 53:18 - Structure and function 57:49 - Brain and mind 1:01:28 - Reductionism 1:07:58 - Constraint again 1:14:38 - Multiple realizability

Sep 7, 2022 · 1h 22m

BI 145 James Woodward: Causation with a Human Face

Check out my free video series about what's missing in AI and Neuroscience. Support the show to get full episodes, full archive, and join the Discord community. James Woodward is a recently retired professor from the Department of History and Philosophy of Science at the University of Pittsburgh. Jim has tremendously influenced the field of causal explanation in the philosophy of science. His account of causation centers around intervention - intervening on a cause should alter its effect. From this minimal notion, Jim has described many facets and varieties of causal structures. In this episode, we discuss topics from his recent book, Causation with a Human Face: Normative Theory and Descriptive Psychology. In the book, Jim advocates that how we should think about causality - the normative - needs to be studied together with how we actually do think about causal relations in the world - the descriptive. We discuss many topics around this central notion, epistemology versus metaphysics, and the nature and varieties of causal structures. Jim's website. Making Things Happen: A Theory of Causal Explanation. Causation with a Human Face: Normative Theory and Descriptive Psychology. 0:00 - Intro 4:14 - Causation with a Human Face & functionalist approach 6:16 - Interventionist causality; epistemology and metaphysics 9:35 - Normative and descriptive 14:02 - Rationalist approach 20:24 - Normative vs. descriptive 28:00 - Varying notions of causation 33:18 - Invariance 41:05 - Causality in complex systems 47:09 - Downward causation 51:14 - Natural laws 56:38 - Proportionality 1:01:12 - Intuitions 1:10:59 - Normative and descriptive relation 1:17:33 - Causality across disciplines 1:21:26 - What would help our understanding of causation

Aug 28, 2022 · 1h 25m

BI 144 Emily M. Bender and Ev Fedorenko: Large Language Models

Check out my short video series about what's missing in AI and Neuroscience. Support the show to get full episodes, full archive, and join the Discord community. Large language models, often now called "foundation models", are the models du jour in AI, based on the transformer architecture. In this episode, I bring together Evelina Fedorenko and Emily M. Bender to discuss how language models stack up to our own language processing and generation (models and brains both excel at next-word prediction), whether language evolved in humans for complex thoughts or for communication (communication, says Ev), whether language models grasp the meaning of the text they produce (Emily says no), and much more. Evelina Fedorenko is a cognitive scientist who runs the EvLab at MIT. She studies the neural basis of language. Her lab has amassed a large amount of data suggesting language did not evolve to help us think complex thoughts, as Noam Chomsky has argued, but rather for efficient communication. She has also recently been comparing the activity in language models to activity in our brain's language network, finding commonality in the ability to predict upcoming words. Emily M. Bender is a computational linguist at the University of Washington. Recently she has been considering questions about whether language models understand the meaning of the language they produce (no), whether we should be scaling language models as is the current practice (not really), how linguistics can inform language models, and more. EvLab. Emily's website. Twitter: @ev_fedorenko; @emilymbender. Related papers: Language and thought are not the same thing: Evidence from neuroimaging and neurological patients (Fedorenko). The neural architecture of language: Integrative modeling converges on predictive processing (Fedorenko). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? (Bender). Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data (Bender). 0:00 - Intro 4:35 - Language and cognition 15:38 - Grasping for meaning 21:32 - Are large language models producing language? 23:09 - Next-word prediction in brains and models 32:09 - Interface between language and thought 35:18 - Studying language in nonhuman animals 41:54 - Do we understand language enough? 45:51 - What do language models need? 51:45 - Are LLMs teaching us about language? 54:56 - Is meaning necessary, and does it matter how we learn language? 1:00:04 - Is our biology important for language? 1:04:59 - Future outlook
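For readers unfamiliar with "next-word prediction," here is the idea at its absolute simplest: count which word tends to follow which, then predict the most frequent continuation. This bigram toy (the corpus sentence is made up for demonstration) is nothing like a transformer-based language model, but it shows the task both models and brains are said to excel at.

```python
from collections import Counter, defaultdict

# Count bigrams: how often each word follows each other word.
corpus = "the dog chased the cat and the cat chased the mouse".split()
counts = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    counts[w1][w2] += 1

def predict(word):
    """Predict the next word as the most frequent continuation seen so far."""
    return counts[word].most_common(1)[0][0]

# "the" is followed by dog (1x), cat (2x), mouse (1x), so "cat" is predicted.
```

Modern language models replace the count table with a neural network conditioned on the whole preceding context, but the training objective is this same prediction task.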

Aug 17, 2022 · 1h 11m

BI 143 Rodolphe Sepulchre: Mixed Feedback Control

Check out my free video series about what's missing in AI and Neuroscience. Support the show to get full episodes, full archive, and join the Discord community. Rodolphe Sepulchre is a control engineer and theorist at Cambridge University. He focuses on applying feedback control engineering principles to build circuits that model neurons and neuronal circuits. We discuss his work on mixed feedback control - positive and negative - as an underlying principle of the mixed digital and analog brain signals, the role of neuromodulation as a controller, applying these principles to Eve Marder's lobster/crab neural circuits, building mixed-feedback neuromorphics, some feedback control history, and how "If you wish to contribute original work, be prepared to face loneliness," among other topics. Rodolphe's website. Related papers: Spiking Control Systems. Control Across Scales by Positive and Negative Feedback. Neuromorphic control (arXiv version). Related episodes: BI 130 Eve Marder: Modulation of Networks. BI 119 Henry Yin: The Crisis in Neuroscience. 0:00 - Intro 4:38 - Control engineer 9:52 - Control vs. dynamical systems 13:34 - Building vs. understanding 17:38 - Mixed feedback signals 26:00 - Robustness 28:28 - Eve Marder 32:00 - Loneliness 37:35 - Across levels 44:04 - Neuromorphics and neuromodulation 52:15 - Barrier to adopting neuromorphics 54:40 - Deep learning influence 58:04 - Beyond energy efficiency 1:02:02 - Deep learning for neuro 1:14:15 - Role of philosophy 1:16:43 - Doing it right

Aug 5, 2022 · 1h 24m

BI 142 Cameron Buckner: The New DoGMA

Check out my free video series about what's missing in AI and Neuroscience. Support the show to get full episodes, full archive, and join the Discord community. Cameron Buckner is a philosopher and cognitive scientist at the University of Houston. He is writing a book about the age-old philosophical debate on how much of our knowledge is innate (nature, rationalism) versus how much is learned (nurture, empiricism). In the book and his other works, Cameron argues that modern AI can help settle the debate. In particular, he suggests we focus on what types of psychological "domain-general faculties" underlie our own intelligence, and how different kinds of deep learning models are revealing how those faculties may be implemented in our brains. The hope is that building systems that possess the right handful of faculties, and putting those systems together in a way they can cooperate in a general and flexible manner, will result in cognitive architectures we would call intelligent. Thus, what Cameron calls The New DoGMA: Domain-General Modular Architecture. We also discuss his work on mental representation and how representations get their content - how our thoughts connect to the natural external world. Cameron's website. Twitter: @cameronjbuckner. Related papers: Empiricism without Magic: Transformational Abstraction in Deep Convolutional Neural Networks. A Forward-Looking Theory of Content. Other sources Cameron mentions: Innateness, AlphaZero, and Artificial Intelligence (Gary Marcus). Radical Empiricism and Machine Learning Research (Judea Pearl). Fodor's guide to the Humean mind (Tamás Demeter). 0:00 - Intro 4:55 - Interpreting old philosophy 8:26 - AI and philosophy 17:00 - Empiricism vs. rationalism 27:09 - Domain-general faculties 33:10 - Faculty psychology 40:28 - New faculties?
46:11 - Human faculties 51:15 - Cognitive architectures 56:26 - Language 1:01:40 - Beyond dichotomous thinking 1:04:08 - Lower-level faculties 1:10:16 - Animal cognition 1:14:31 - A Forward-Looking Theory of Content

Jul 26, 2022 · 1h 43m

BI 141 Carina Curto: From Structure to Dynamics

Check out my free video series about what's missing in AI and Neuroscience. Support the show to get full episodes, full archive, and join the Discord community. Carina Curto is a professor in the Department of Mathematics at The Pennsylvania State University. She uses her background in mathematical physics/string theory to study networks of neurons. On this episode, we discuss the world of topology in neuroscience - the study of the geometrical structures mapped out by active populations of neurons. We also discuss her work on "combinatorial linear threshold networks" (CLTNs). Unlike the large deep learning models popular today as models of brain activity, the CLTNs Carina builds are relatively simple, abstracted graphical models. This property is important to Carina, whose goal is to develop mathematically tractable neural network models. Carina has worked out how the structure of many CLTNs allows prediction of the model's allowable dynamics, how motifs of model structure can be embedded in larger models while retaining their dynamical features, and more. The hope is that these elegant models can tell us more about the principles our messy brains employ to generate the robust and beautiful dynamics underlying our cognition. Carina's website. The Mathematical Neuroscience Lab. Related papers: A major obstacle impeding progress in brain science is the lack of beautiful models. What can topology tell us about the neural code? Predicting neural network dynamics via graphical analysis. 0:00 - Intro 4:25 - Background: Physics and math to study brains 20:45 - Beautiful and ugly models 35:40 - Topology 43:14 - Topology in hippocampal navigation 56:04 - Topology vs. dynamical systems theory 59:10 - Combinatorial linear threshold networks 1:25:26 - How much more math do we need to invent?
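The flavor of these models can be conveyed in a few lines: each unit obeys threshold-linear dynamics dx/dt = -x + [Wx + b]_+, and the connectivity W is built from a simple directed graph. The sketch below is illustrative only; the 3-cycle graph, the eps/delta weight convention, and all parameter values are assumptions for demonstration, not taken from Carina's papers.

```python
import numpy as np

def tln_step(x, W, b, dt=0.01):
    """One Euler step of threshold-linear dynamics dx/dt = -x + [Wx + b]_+."""
    return x + dt * (-x + np.maximum(0.0, W @ x + b))

# Directed 3-cycle 1 -> 2 -> 3 -> 1: weaker inhibition along graph edges
# (-1 + eps), stronger inhibition otherwise (-1 - delta), zero self-weight.
eps, delta = 0.25, 0.5
W = np.array([
    [0.0,        -1 - delta, -1 + eps  ],
    [-1 + eps,    0.0,       -1 - delta],
    [-1 - delta, -1 + eps,    0.0      ],
])
b = np.ones(3)  # constant external drive to every unit

x = np.array([0.4, 0.1, 0.0])
traj = np.empty((6000, 3))
for t in range(6000):
    x = tln_step(x, W, b)
    traj[t] = x

# Activity stays bounded and the units take turns being active, the kind of
# structured dynamics that graph analysis aims to predict from W alone.
```

The point of the research program is that for networks of this form, properties of the underlying graph (here, the 3-cycle) can predict the allowed fixed points and oscillations without simulating at all; the simulation above just lets you see one such dynamic.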

Jul 12, 2022 · 1h 31m

BI 140 Jeff Schall: Decisions and Eye Movements

Check out my free video series about what's missing in AI and Neuroscience. Support the show to get full episodes, full archive, and join the Discord community. Jeff Schall is the director of the Center for Visual Neurophysiology at York University, where he runs the Schall Lab. His research centers around studying the mechanisms of our decisions, choices, movement control, and attention within the saccadic eye movement brain systems and in mathematical psychology models - in other words, how we decide where and when to look. Jeff was my postdoctoral advisor at Vanderbilt University, and I wanted to revisit a few guiding principles he instills in all his students. Linking Propositions, by Davida Teller, are a series of logical statements to ensure we rigorously connect the brain activity we record to the psychological functions we want to explain. Strong Inference, by John Platt, is the scientific method on steroids - a way to make our scientific practice most productive and efficient. We discuss both of these topics in the context of Jeff's eye movement and decision-making science. We also discuss how neurophysiology has changed over the past 30 years, we compare the relatively small models he employs with the huge deep learning models, many of his current projects, and plenty more. If you want to learn more about Jeff's work and approach, I recommend reading, in order, two of his review papers we discuss as well. One was written 20 years ago (On Building a Bridge Between Brain and Behavior), and the other 2-ish years ago (Accumulators, Neurons, and Response Time). Schall Lab. Twitter: @LabSchall. Related papers: Linking Propositions. Strong Inference. On Building a Bridge Between Brain and Behavior. Accumulators, Neurons, and Response Time.
0:00 - Intro 6:51 - Neurophysiology old and new 14:50 - Linking propositions 24:18 - Psychology working with neurophysiology 35:40 - Neuron doctrine, population doctrine 40:28 - Strong Inference and deep learning 46:37 - Model mimicry 51:56 - Scientific fads 57:07 - Current projects 1:06:38 - On leaving academia 1:13:51 - How academia has changed for better and worse

Jun 30, 2022 · 1h 20m

BI 139 Marc Howard: Compressed Time and Memory

Check out my free video series about what's missing in AI and Neuroscience. Support the show to get full episodes, full archive, and join the Discord community. Marc Howard runs his Theoretical Cognitive Neuroscience Lab at Boston University, where he develops mathematical models of cognition, constrained by psychological and neural data. In this episode, we discuss the idea that a Laplace transform and its inverse may serve as a unified framework for memory. In short, our memories are compressed on a continuous log-scale: as memories get older, their representations "spread out" in time. It turns out this kind of representation seems ubiquitous in the brain and across cognitive functions, suggesting it is likely a canonical computation our brains use to represent a wide variety of information. We also discuss some of the ways Marc is incorporating this mathematical operation in deep learning nets to improve their ability to handle information at different time scales. Theoretical Cognitive Neuroscience Lab. Twitter: @marcwhoward777. Related papers: Memory as perception of the past: Compressed time in mind and brain. Formal models of memory based on temporally-varying representations. Cognitive computation using neural representations of time and space in the Laplace domain. Time as a continuous dimension in natural and artificial networks. DeepSITH: Efficient learning via decomposition of what and when across time scales. 0:00 - Intro 4:57 - Main idea: Laplace transforms 12:00 - Time cells 20:08 - Laplace, compression, and time cells 25:34 - Everywhere in the brain 29:28 - Episodic memory 35:11 - Randy Gallistel's memory idea 40:37 - Adding Laplace to deep nets 48:04 - Reinforcement learning 1:00:52 - Brad Wyble Q: What gets filtered out? 1:05:38 - Replay and complementary learning systems 1:11:52 - Howard Goldowsky Q: Gyorgy Buzsaki 1:15:10 - Obstacles
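The core of the Laplace-transform idea fits in a few lines: a bank of leaky integrators with log-spaced decay rates s maintains a running Laplace transform F(s) of the input, so recent events are represented finely and older events coarsely. This is an illustrative toy of that one ingredient, not Marc's actual model; the number of units and the range of rates are assumptions for demonstration.

```python
import numpy as np

def laplace_memory(signal, s_values, dt=1.0):
    """Integrate dF/dt = -s * F + f(t) for each decay rate s (Euler steps),
    returning the state of every unit at every time step."""
    F = np.zeros(len(s_values))
    history = np.empty((len(signal), len(s_values)))
    for t, f_t in enumerate(signal):
        F = F + dt * (-s_values * F + f_t)
        history[t] = F
    return history

s = np.logspace(-2, 0, 8)   # log-spaced rates: time constants from 1 to 100
impulse = np.zeros(200)
impulse[0] = 1.0            # a single event at time 0
H = laplace_memory(impulse, s)

# As the event recedes into the past, fast units (large s) forget it while
# slow units (small s) retain it: the trace "spreads out" across the bank,
# i.e., memory of elapsed time is compressed on a log scale.
```

In the full framework, an approximate inverse transform of F(s) yields units resembling the "time cells" discussed in the episode, each tuned to an increasingly blurry window of the past.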

Jun 20, 2022 · 1h 20m