
About
The podcast focuses on topics in theoretical/computational neuroscience and is primarily aimed at students and researchers in the field.
Latest Episodes
S1 Ep 39: On modeling neural population activity with mean-field models - with Tilo Schwalger - #39
Since the pioneering work of Wilson and Cowan in the 1970s, mean-field models have become a dominant tool for modeling neural activity at the level of neuronal populations. Despite their popularity, most mean-field models have been heuristic and not systematically derived from the underlying 'microscopic' dynamics of individual neurons. Today's guest has made important contributions towards remedying this situation.
S1 Ep 38: On extracting spiking network models from experiments - with Richard Gao - #38
While some models aim to explain qualitative features of brain activity, others aim to reproduce experimental data quantitatively. In the latter case, model parameters must be adjusted so that the model predictions fit the experimental data. A complication is that in most neurobiological applications there is no unique best fit: many parameter combinations give equally good model fits. Recently, the guest, together with colleagues, developed the tool AutoMIND for fitting spiking network models to data.
S1 Ep 37: On reproducibility of modeling and 10 years with the Potjans-Diesmann network model - with Hans Ekkehard Plesser - #37
Reproducibility is key for scientific progress. If research results cannot be reproduced and trusted, other researchers cannot build on them. Reproducibility is also a challenge in computational neuroscience, and today's guest has worked on how this can be remedied, for example, through standardized model descriptions and model sharing. He also recently organised a workshop celebrating a decade with the (reproducible) Potjans-Diesmann neural network model, which has become an important community tool.
S1 Ep 36: On low-dimensional manifolds in motor cortex - with Sara Solla - #36
Historically, the analysis of neural recordings focused on responses of single neurons recorded by single-contact electrodes. Modern electrodes with multiple contacts can instead record spikes (action potentials) from hundreds of neurons simultaneously. Manifold analysis of the overall population activity of these neurons has become a critical tool for interpreting such data. The podcast guest is a pioneer in the development and use of such analysis.
S1 Ep 35: On modeling metabolic networks in the brain - with Polina Shichkova - #35
Neurons need particular sodium and potassium concentration gradients across their membranes to function. These gradients are set up by so-called ion pumps, which require energy stored in ATP molecules to run. ATP is the common energy currency in the brain and is produced from nutrients delivered by the blood through a complicated set of chemical reactions known as a metabolic network. Today's guest has just published a comprehensive model of such a network and explains how it can shed light on differences between young and old brains.
S1 Ep 34: On balanced neural networks - with Nicolas Brunel - #34
An important discovery that has come out of computational neuroscience is that cortical neurons in vivo appear to receive so-called balanced inputs. In the balanced state, the excitatory and inhibitory synaptic inputs to a neuron are about equal, and action potentials occur when a fluctuation temporarily makes the excitation dominate. The theory, for example, explains the observed irregular firing of cortical neurons in the background state. Today's guest was one of the key developers of the theory in the late 1990s.
S1 Ep 33: On computational neurotechnology for the clinic - with Anthony Burkitt, Nada Yousif & Esra Neufeld - #33
How can computational neuroscience contribute to developing neurotechnology to help people with brain disorders and disabilities? This was the topic of a panel debate I hosted at the 34th Annual Computational Neuroscience Meeting in Florence in July this year. Electric or magnetic recording and/or stimulation are key clinical tools for helping patients, and the three panelists have all used computational methods to aid this endeavor.
S1 Ep 32: On IIT and adversarial testing of consciousness theories - with Christof Koch - #32
In an adversarial collaboration, researchers with opposing theories jointly investigate a disputed topic by designing and implementing a study in a mutually agreed, unbiased way. Results from adversarial testing of two well-known theories of consciousness, Global Neuronal Workspace Theory (GNWT) and Integrated Information Theory (IIT), were presented earlier this year. In this podcast, one of the proponents and developers of IIT describes this candidate theory, as well as the design of, and results from, the adversarial study.
S1 Ep 31: On how to cure brain diseases - with Nicole Rust - #31
A promise of basic neuroscience research is that new insights will lead to new cures for brain diseases. But has that happened so far? Today's guest, an accomplished professor of neuroscience, decided to investigate. Her book "Elusive Cures: Why Neuroscience Hasn't Solved Brain Disorders - and How We Can Change That" came out this summer. Here she argues that we need to consider the brain as a complex adaptive system, not as a chain of dominoes, as in typical linear thinking.
S1 Ep 30: On co-dependent excitatory and inhibitory plasticity - with Tim Vogels - #30
Synaptic plasticity underlies several key brain functions, including learning, information filtering and homeostatic regulation of overall neural activity. While several mathematical rules have been developed for plasticity at both excitatory and inhibitory synapses, it has been difficult to make such rules co-exist in network models. Recently, the group of the guest has explored how co-dependent plasticity rules can remedy the situation and, for example, ensure that long-term memories can be stored in excitatory synapses while inhibitory synapses provide long-term stability.
S1 Ep 29: On the philosophy of simplification in computational neuroscience - with Mazviita Chirimuuta and Terrence Sejnowski - #29
Computational neuroscientists rely on simplification when they make their models. But what is the right level of simplification? When, for example, should we use a biophysically detailed model, and when a simplified abstract model, of neural dynamics? What are the problems of simplifying too much, or too little? This was the topic of the panel discussion between a science philosopher (MC), author of the recent book "The Brain Abstracted", and an experienced modeler (TS) at the FENS Regional Meeting in Oslo in June 2025.
S1 Ep 28: On whole-cell modeling of bacteria - with Markus Covert - #28
A future computational neuroscience project could be to model not only the signal processing properties of neurons, but also all the processes that keep a neuron alive for, say, a 100-year life span. In 2012 the group of the guest published the first such whole-cell model for a very simple bacterium (M. genitalium). In 2020 a model of the larger E. coli bacterium, comprising 10,000 equations and 19,000 model parameters, was presented. How are such models built, and what can they do?
S1 Ep 27: On construction and clinical use of multipurpose neuron models - with Etay Hay - #27
Numerous neuron models have been made, but most of them are "single-purpose" in that they address a single scientific question. In contrast, multipurpose neuron models are designed to address many scientific questions. In 2011, the guest published a multipurpose rodent pyramidal-cell model which has been actively used by the community ever since. We talk about how such models are made, and how his group later built human neuron models to explore network dynamics in the brains of depressed patients.
S1 Ep 26: On the population code in visual cortex - with Kenneth Harris - #26
With modern electrical and optical measurement techniques, we can now measure neural activity in hundreds or thousands of neurons simultaneously. This allows for the investigation of population codes, that is, of how groups of neurons together encode information. In 2019 today's guest published a seminal paper with collaborators at UCL in London where analysis of optophysiological data from 10,000 neurons in mouse visual cortex revealed an intriguing population code balancing the needs for efficient and robust coding. We discuss the paper and (towards the end) also how new AI tools may be a game-changer for neuroscience data analysis.
S1 Ep 25: On growing synthetic dendrites - with Hermann Cuntz - #25
The observed variety of dendritic structures in the brain is striking. Why are they so different, and what determines the branching patterns? Following the dictum "if you understand it, you can build it", the lab of the guest builds dendritic structures in a computer and explores the underlying principles. Two key principles seem to be to minimize (i) the overall length of dendrites and (ii) the path length from the synapses to the soma.
S1 Ep 24: On neuroscience foundation models - with Andreas Tolias - #24
The term "foundation model" refers to machine learning models that are trained on vast datasets and can be applied to a wide range of situations. The large language model GPT-4 is an example. The group of the guest has recently presented a foundation model for optophysiological responses in mouse visual cortex, trained on recordings from 135,000 neurons in mice watching movies. We discuss the design, validation, and use of this and future neuroscience foundation models.
S1 Ep 23: On human whole-brain models - with Viktor Jirsa - #23
A holy grail of the multiscale approach to physical brain modelling is to link the different scales from molecules, via cells and local neural networks, up to whole-brain models. The goal of the Virtual Brain Twin project, led by today's guest, is to use personalized human whole-brain models to aid clinicians in treating brain ailments. The podcast discusses how such models are presently made using neural field models, starting from neuron population dynamics rather than molecular dynamics.
S1 Ep 22: On 40 years with the Hopfield network model - with Wulfram Gerstner - #22
In 1982 John Hopfield published the paper "Neural networks and physical systems with emergent collective computational abilities", describing a simple network model functioning as an associative, content-addressable memory. The paper started a new subfield in computational neuroscience and led to an influx of numerous theoretical scientists, in particular physicists, into the field. The podcast guest wrote his PhD thesis on the model in the early 1990s, and we talk about the history and present impact of the model.
S1 Ep 21: On models for short-term memory - with Pawel Herman - #21
The leading theory for learning and memorization in the brain is that learning is governed by synaptic learning rules and that memories are stored in the synaptic weights between neurons. But this applies to long-term memory. What about short-term, or working, memory, where items are kept in memory for only a few seconds? The traditional theory held that here the mechanism is different, namely persistent firing of select neurons in areas such as prefrontal cortex. But this view is challenged by recent synapse-based models explored by today's guest and others.
S1 Ep 20: On neuro-AI on the boat - part 2 of 2 - with Cristina Savin, Tim Vogels, Mikkel Lepperød, Paul Middlebrooks - #20
In September, Paul Middlebrooks, the producer of the podcast BrainInspired, and I were both at a neuro-AI workshop on a coastal liner cruising the Norwegian fjords. We decided to make two joint podcasts with some of the participants in which we discuss the role of AI in neuroscience. In this second part we discuss the topic with Cristina Savin and Tim Vogels, and round off with a brief chat with Mikkel Lepperød, the main organizer, about what he learned from the workshop.