Ep 255: Does this research explain how LLMs work?

ToKCast

January 14, 2026 · 1h 22m


Show Notes

I take a look at these three papers, collectively titled "The Bayesian Attention Trilogy":

1. https://www.arxiv.org/abs/2512.22471
2. https://arxiv.org/abs/2512.23752
3. https://arxiv.org/abs/2512.22473

I also draw on some other material, in particular an interview with one of the authors, Vishal Misra: https://www.engineering.columbia.edu/faculty-staff/directory/vishal-misra

For those familiar with my output on this topic, you can probably skip to about halfway through, at 42:40. Before that point there is a lot of background on Induction, Bayesianism, Critical Rationalism and so on that people may have heard from me before in different contexts, although for what it's worth these are new ways of expressing those ideas. At the end I react to a video found here: https://www.youtube.com/watch?v=uRuY0ozEm3Q