PLAY PODCASTS

About

https://thevalmy.com/

Latest Episodes

View all 142 episodes

Situational Awareness in Government, with UK AISI Chief Scientist Geoffrey Irving

Podcast: The Cognitive Revolution | AI Builders, Researchers, and Live Player Analysis
Episode: Situational Awareness in Government, with UK AISI Chief Scientist Geoffrey Irving
Release date: 2026-03-01

Geoffrey Irving, Chief Scientist at the UK AI Security Institute, explains why our theoretical understanding of machine learning remains fragile even as models surpass experts on critical security tasks. He details AISI's work on frontier model evaluations, red teaming, and threat modeling across biosecurity, cybersecurity, and loss-of-control risks. The conversation explores reward hacking, eval awareness, and why current safety techniques may struggle to deliver high reliability. Listeners will also hear how AISI is funding foundational research to build stronger guarantees for AI safety.

Use the Granola Recipe Nathan relies on to identify blind spots across conversations, AI research, and decisions: https://bit.ly/granolablindspot

Sponsors:
Serval: Serval uses AI-powered automations to cut IT help desk tickets by more than 50%, freeing your team from repetitive tasks like password resets and onboarding. Book your free pilot and guarantee 50% help desk automation by week 4 at https://serval.com/cognitive
Claude: Claude is the AI collaborator that understands your entire workflow, from drafting and research to coding and complex problem-solving. Start tackling bigger problems with Claude and unlock Claude Pro's full capabilities at https://claude.ai/tcr
Tasklet: Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai

Chapters: (00:00) About the Episode (04:09) From physics to ML (08:52) AGI uncertainty and threats (Part 1) (18:08) Sponsors: Serval | Claude (21:29) AGI uncertainty and threats (Part 2) (27:35) Control, autonomy, alignment (Part 1) (34:02) Sponsor: Tasklet (35:14) Control, autonomy, alignment (Part 2) (38:44) Inside the UK AISI (51:02) Evaluations and jailbreaking (01:01:17) Emerging capabilities and misuse (01:14:20) Agents and reward hacking (01:26:09) Theoretical alignment agenda (01:38:39) Debate and formal methods (01:51:19) Limits of formalization (02:02:27) Future risks and governance (02:16:23) Episode Outro (02:18:58) Outro

Produced by: https://aipodcast.ing
Social links: Website: https://www.cognitiverevolution.ai | Twitter (Podcast): https://x.com/cogrev_podcast | Twitter (Nathan): https://x.com/labenz | LinkedIn: https://linkedin.com/in/nathanlabenz/ | YouTube: https://youtube.com/@CognitiveRevolutionPodcast | Apple: https://podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431 | Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk

Mar 6, 2026 · 2h 18m

Timothy Williamson: Philosophy’s Most Formidable Living Mind

Podcast: Theories of Everything with Curt Jaimungal
Episode: Timothy Williamson: Philosophy's Most Formidable Living Mind
Release date: 2026-01-13

Free ZWILLING Four Star Chef's Knife on your 3rd box ($144.99 value) + 10 Free Meals, and your first box ships free with code CURTJHFZWL at https://hellofresh.yt.link/4u4Vh7m

This is an interview with Oxford's Timothy Williamson. He's one of the most cited living philosophers, and simultaneously one of the most controversial (yet respected). He dismantles physicalism, solipsism, and reductionism, explaining why consciousness is philosophically overrated and why AI in its current form likely lacks genuine mental states. This will be a tour-de-force episode into all things related to looking deeply and fundamentally. If you're interested in consciousness, free will, art, language, and meaning, I believe you'll love this episode.

As a listener of TOE you can get a special 20% off discount to The Economist and all it has to offer! Visit https://www.economist.com/toe

SUPPORT:
- Support me on Substack: https://curtjaimungal.substack.com/subscribe
- Support me on Crypto: https://commerce.coinbase.com/checkout/de803625-87d3-4300-ab6d-85d4258834a9
- Support me on PayPal: https://www.paypal.com/donate?hosted_button_id=XUBHNMFXUX5S4

JOIN MY SUBSTACK (Personal Writings): https://curtjaimungal.substack.com
LISTEN ON SPOTIFY: https://open.spotify.com/show/4gL14b92xAErofYQA7bU4e

TIMESTAMPS:
- 00:00:00 - Vagueness & Sorites Paradox
- 00:07:12 - Identity, Physicalism, Non-Physicals
- 00:22:30 - Realism vs. Anti-Realism
- 00:29:50 - The Problem of Skepticism
- 00:35:40 - Cognitive Heuristics & Doubt
- 00:43:00 - Solipsism's Appeal & Pitfalls
- 00:50:00 - Solipsism: A Critique
- 00:57:30 - Pluralism & Consciousness
- 01:06:00 - AI, Mental States, Ontology
- 01:15:50 - Mind, Knowledge, Meaning
- 01:26:00 - Philosophical Heuristics
- 01:32:00 - Counterfactuals & Logic
- 01:38:00 - Personal Philosophy

LINKS MENTIONED:
- Overfitting and Heuristics in Philosophy [Book]: https://www.amazon.com/Overfitting-Heuristics-Philosophy-Rutgers-Lectures/dp/0197779212
- Timothy Williamson's Published Papers: https://scholar.google.com/citations?user=IH-44VwAAAAJ&hl=en
- Sorites Paradox: https://plato.stanford.edu/entries/sorites-paradox/
- Philosophical Investigations [Book]: https://www.amazon.com/Philosophical-Investigations-Ludwig-Wittgenstein/dp/0631205691
- I Do Not Exist [Paper]: https://academic.oup.com/book/53296/chapter-abstract/422023005
- O'Shaughnessy Ventures: https://www.osv.llc/
- Barry Loewer & Eddy Chen [TOE]: https://youtu.be/xZnafO__IZ0
- Bas Van Fraassen [TOE]: https://youtu.be/lhpRAWxvY5s
- Matthew Segall [TOE]: https://youtu.be/DeTm4fSXpbM
- Jennifer Nagel [TOE]: https://youtu.be/CWZVMZ9Tm7Q
- Leo Gura [TOE]: https://youtu.be/YspFR9JAq3w
- Iain McGilchrist [TOE]: https://youtu.be/M-SgOwc6Pe4
- The Consciousness Iceberg [TOE]: https://youtu.be/65yjqIDghEk
- Karl Friston [TOE]: https://youtu.be/uk4NZorRjCo
- Geoffrey Hinton [TOE]: https://youtu.be/b_DUft-BdIE
- Elan Barenholtz [TOE]: https://youtu.be/A36OumnSrWY
- Ben Goertzel & Joscha Bach [TOE]: https://youtu.be/xw7omaQ8SgA
- Claudia de Rham [TOE]: https://youtu.be/Ve_Mpd6dGv8
- Stephen Wolfram [TOE]: https://youtu.be/0YRlQQw0d-4
- Elan Barenholtz & Will Hahn [TOE]: https://youtu.be/Ca_RbPXraDE
- Greg Kondrak [TOE]: https://youtu.be/FFW14zSYiFY
- Robert Sapolsky [TOE]: https://youtu.be/z0IqA1hYKY8

SOCIALS:
- Twitter: https://twitter.com/TOEwithCurt
- Discord Invite: https://discord.com/invite/kBcnfNVwqs

Guests do not pay to appear. Theories of Everything receives revenue solely from viewer donations, platform ads, and clearly labelled sponsors; no guest or associated entity has ever given compensation, directly or through intermediaries.

Jan 19, 2026 · 1h 57m

The Waltz

Podcast: In Our Time
Episode: The Waltz
Release date: 2024-04-11

Melvyn Bragg and guests discuss the dance which, from when it reached Britain in the early nineteenth century, revolutionised the relationship between music, literature and people here for the next hundred years. While it may seem formal now, it was the informality and daring that drove its popularity, with couples holding each other as they spun round a room to new lighter music popularised by Johann Strauss, father and son, such as The Blue Danube. Soon the Waltz expanded the creative world in poetry, ballet, novellas and music, from the Ballets Russes of Diaghilev to Moon River and Are You Lonesome Tonight.

With:
Susan Jones, Emeritus Professor of English Literature at the University of Oxford
Derek B. Scott, Professor Emeritus of Music at the University of Leeds
Theresa Buckland, Emeritus Professor of Dance History and Ethnography at the University of Roehampton

Producer: Simon Tillotson

Reading list:
Egil Bakka, Theresa Jill Buckland, Helena Saarikoski, and Anne von Bibra Wharton (eds.), Waltzing Through Europe: Attitudes towards Couple Dances in the Long Nineteenth Century (Open Book Publishers, 2020)
Theresa Jill Buckland, 'How the Waltz was Won: Transmutations and the Acquisition of Style in Early English Modern Ballroom Dancing. Part One: Waltzing Under Attack' (Dance Research, 36/1, 2018); 'Part Two: The Waltz Regained' (Dance Research, 36/2, 2018)
Theresa Jill Buckland, Society Dancing: Fashionable Bodies in England, 1870-1920 (Palgrave Macmillan, 2011)
Erica Buurman, The Viennese Ballroom in the Age of Beethoven (Cambridge University Press, 2022)
Paul Cooper, 'The Waltz in England, c. 1790-1820' (paper presented at the Early Dance Circle conference, 2018)
Sherril Dodds and Susan Cook (eds.), Bodies of Sound: Studies Across Popular Dance and Music (Ashgate, 2013), especially 'Dancing Out of Time: The Forgotten Boston of Edwardian England' by Theresa Jill Buckland
Zelda Fitzgerald, Save Me the Waltz (first published 1932; Vintage Classics, 2001)
Hilary French, Ballroom: A People's History of Dancing (Reaktion Books, 2022)
Susan Jones, Literature, Modernism, and Dance (Oxford University Press, 2013)
Mark Knowles, The Wicked Waltz and Other Scandalous Dances: Outrage at Couple Dancing in the 19th and Early 20th Centuries (McFarland, 2009)
Rosamond Lehmann, Invitation to the Waltz (first published 1932; Virago, 2006)
Eric McKee, Decorum of the Minuet, Delirium of the Waltz: A Study of Dance-Music Relations in 3/4 Time (Indiana University Press, 2012)
Eduard Reeser, The History of the Waltz (Continental Book Co., 1949)
Stanley Sadie (ed.), The New Grove Dictionary of Music and Musicians, Vol. 27 (Macmillan, 2nd ed., 2000), especially 'Waltz' by Andrew Lamb
Derek B. Scott, Sounds of the Metropolis: The 19th-Century Popular Music Revolution in London, New York, Paris and Vienna (Oxford University Press, 2008), especially the chapter 'A Revolution on the Dance Floor, a Revolution in Musical Style: The Viennese Waltz'
Joseph Wechsberg, The Waltz Emperors: The Life and Times and Music of the Strauss Family (Putnam, 1973)
Cheryl A. Wilson, Literature and Dance in Nineteenth-century Britain (Cambridge University Press, 2009)
Virginia Woolf, The Voyage Out (first published 1915; William Collins, 2013)
Virginia Woolf, The Years (first published 1937; Vintage Classics, 2016)
David Wyn Jones, The Strauss Dynasty and Habsburg Vienna (Cambridge University Press, 2023)
Sevin H. Yaraman, Revolving Embrace: The Waltz as Sex, Steps, and Sound (Pendragon Press, 2002)
Rishona Zimring, Social Dance and the Modernist Imagination in Interwar Britain (Ashgate Press, 2013)

Dec 25, 2025 · 52 min

Brian Armstrong

Podcast: Tetragrammaton with Rick Rubin
Episode: Brian Armstrong
Release date: 2025-12-24

Brian Armstrong is the co-founder and CEO of Coinbase, the largest cryptocurrency exchange in the United States by trading volume and users. He launched Coinbase in 2012 after working as a software engineer at Airbnb, where he experienced firsthand the frictions of global payment systems. Under his leadership, Coinbase grew into a publicly traded company on Nasdaq in 2021 and now serves over 100 million verified users in more than 100 countries. Beyond Coinbase, Armstrong has co-founded initiatives like ResearchHub and NewLimit and is a prominent advocate for an open, crypto-powered financial system.

Thank you to the sponsors that fuel our podcast and our team:
Athletic Nicotine: https://www.AthleticNicotine.com/tetra (use code 'TETRA')
Squarespace: https://Squarespace.com/tetra (use code 'TETRA')
LMNT Electrolytes: https://DrinkLMNT.com/tetra (use code 'TETRA')

Sign up to receive Tetragrammaton Transmissions: https://www.tetragrammaton.com/join-newsletter

Dec 25, 2025 · 2h 11m

Cass Sunstein on Liberalism and Rights in the Age of AI

Podcast: Conversations with Tyler
Episode: Cass Sunstein on Liberalism and Rights in the Age of AI
Release date: 2025-11-26

Cass Sunstein is one of the most widely cited legal scholars of all time and among the most prolific writers working today. This year alone he has five books out, including Imperfect Oracle on the strengths and limits of AI and On Liberalism: In Defense of Freedom. In his second appearance on the show, he brings his characteristic intellectual range to exploring liberalism's present precariousness and AI's implications for law and speech.

Tyler and Cass discuss whether liberalism is self-undermining or simply vulnerable to illiberal forces, the tensions in how a liberal immigration regime would work, whether new generations of liberal thinkers are emerging, if Derek Parfit counts as a liberal, Mill's liberal wokeism, the allure of Mises' "cranky enthusiasm for freedom," whether the central claim of The Road to Serfdom holds up, how to blend indigenous rights with liberal thought, whether AIs should have First Amendment protections, the argument for establishing a right not to be manipulated, better remedies for low-grade libel, whether we should have trials run by AI, how Bob Dylan embodies liberal freedom, Cass' next book about animal rights, and more.

Read a full transcript enhanced with helpful links, or watch the full video on the new dedicated Conversations with Tyler channel. Recorded October 10th, 2025. This episode was made possible through the support of the John Templeton Foundation.

Other ways to connect: Follow us on X and Instagram. Follow Tyler on X. Follow Cass on X. Sign up for our newsletter. Join our Discord. Email us: [email protected]. Learn more about Conversations with Tyler and other Mercatus Center podcasts here.

Dec 1, 2025 · 1h 19m

Iason Gabriel: Value Alignment and the Ethics of Advanced AI Systems

Podcast: The Gradient: Perspectives on AI
Episode: Iason Gabriel: Value Alignment and the Ethics of Advanced AI Systems
Release date: 2025-11-26

Episode 143. I spoke with Iason Gabriel about:
* Value alignment
* Technology and worldmaking
* How AI systems affect individuals and the social world

Iason is a philosopher and Senior Staff Research Scientist at Google DeepMind. His work focuses on the ethics of artificial intelligence, including questions about AI value alignment, distributive justice, language ethics and human rights. You can find him on his website and Twitter/X.

Find me on Twitter (or LinkedIn if you want…) for updates, and reach me at [email protected] for feedback, ideas, guest suggestions.

Outline:
* (00:00) Intro
* (01:18) Iason's intellectual development
* (04:28) Aligning language models with human values, democratic civility and agonism
* (08:20) Overlapping consensus, differing norms, procedures for identifying norms
* (13:27) Rawls' theory of justice, the justificatory and stability problems
* (19:18) Aligning LLMs and cooperation, speech acts, justification and discourse norms, literacy
* (23:45) Actor Network Theory and alignment
* (27:25) Value alignment and Iason's starting points
* (33:10) The Ethics of Advanced AI Assistants, AI's impacts on social processes and users, personalization
* (37:50) AGI systems and social power
* (39:00) Displays of care and compassion, Machine Love (Joel Lehman)
* (41:30) Virtue ethics, morality and language, virtue in AI systems vs. MacIntyre's conception in After Virtue
* (45:00) The Challenge of Value Alignment
* (45:25) Technologists as worldmakers
* (51:30) Technological determinism, collective action problems
* (55:25) Iason's goals with his work
* (58:32) Outro

Links:
Papers:
* AI, Values, and Alignment (2020)
* Aligning LMs with Human Values (2023)
* Toward a Theory of Justice for AI (2023)
* The Ethics of Advanced AI Assistants (2024)
* A matter of principle? AI alignment as the fair treatment of claims (2025)

Get full access to The Gradient at thegradientpub.substack.com/subscribe

Nov 27, 2025 · 58 min

Tony Hawk

Podcast: Tetragrammaton with Rick Rubin
Episode: Tony Hawk
Release date: 2025-09-24

Tony Hawk is a professional skateboarder widely regarded as one of the most influential figures in the sport's history. Rising to prominence in the 1980s and 1990s, he became known for pioneering tricks like the 900 and for pushing skateboarding into the mainstream through competitions, video games, and media appearances. He became a household name through victorious X Games performances, and his career highlights include being the first to land the 900 in competition, earning over 70 contest victories, and dominating vert skating across two decades. As an ambassador for the sport, he also founded the Tony Hawk Foundation to support youth skateparks, further cementing his influence and legacy in skateboarding culture.

Thank you to the sponsors that fuel our podcast and our team:
Athletic Nicotine: https://www.AthleticNicotine.com/tetra (use code 'TETRA')
LMNT Electrolytes: https://DrinkLMNT.com/tetra (use code 'TETRA')
Squarespace: https://Squarespace.com/tetra (use code 'TETRA')

Sign up to receive Tetragrammaton Transmissions: https://www.tetragrammaton.com/join-newsletter

Sep 30, 2025 · 1h 29m

Working Definition episode 3: Freedom, with Tyler Cowen

Podcast: Working Definition
Episode: Working Definition episode 3: Freedom, with Tyler Cowen
Release date: 2025-08-29

In this episode, Tyler Cowen and I discuss freedom. We talk about how people talk about freedom, whether you can define freedom, freedom's relation to other concepts, the relevance of free will and consciousness, what it might mean for a country to be free, and much more. In short, we do some philosophy! I hope you enjoy it.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit endsdontjustifythemeans.com

Sep 1, 2025 · 1h 0m

Debate with Vitalik Buterin — Will “d/acc” Protect Humanity from Superintelligent AI?

Podcast: Doom Debates
Episode: Debate with Vitalik Buterin — Will "d/acc" Protect Humanity from Superintelligent AI?
Release date: 2025-08-12

Vitalik Buterin is the founder of Ethereum, the world's second-largest cryptocurrency by market cap, currently valued at around $500 billion. But beyond revolutionizing blockchain technology, Vitalik has become one of the most thoughtful voices on AI safety and existential risk. He's donated over $665 million to pandemic prevention and other causes, and has a 12% P(Doom), putting him squarely in what I consider the "sane zone" for AI risk assessment. What makes Vitalik particularly interesting is that he's both a hardcore techno-optimist who built one of the most successful decentralized systems ever created, and someone willing to seriously consider AI regulation and coordination mechanisms.

Vitalik coined the term "d/acc" (defensive, decentralized, democratic, differential acceleration) as a middle path between uncritical AI acceleration and total pause scenarios. He argues we need to make the world more like Switzerland (defensible, decentralized) and less like the Eurasian steppes (vulnerable to conquest).

We dive deep into the tractability of AI alignment, whether current approaches like d/acc can actually work when superintelligence arrives, and why he thinks a pluralistic world of competing AIs might be safer than a single aligned superintelligence. We also explore his vision for human-AI merger through brain-computer interfaces and uploading.

The crux of our disagreement is that I think we're heading for a "plants vs. animals" scenario where AI will simply operate on timescales we can't match, while Vitalik believes we can maintain agency through the right combination of defensive technologies and institutional design.

Finally, we tackle the discourse itself: I ask Vitalik to debunk the common ad hominem attacks against AI doomers, from "it's just a fringe position" to "no real builders believe in doom." His responses carry weight given his credibility as both a successful entrepreneur and someone who's maintained intellectual honesty throughout his career.

Timestamps:
* 00:00:00 - Cold Open
* 00:00:37 - Introducing Vitalik Buterin
* 00:02:14 - Vitalik's altruism
* 00:04:36 - Rationalist community influence
* 00:06:30 - Opinion of Eliezer Yudkowsky and MIRI
* 00:09:00 - What's Your P(Doom)™
* 00:24:42 - AI timelines
* 00:31:33 - AI consciousness
* 00:35:01 - Headroom above human intelligence
* 00:48:56 - Techno optimism discussion
* 00:58:38 - e/acc: Vibes-based ideology without deep arguments
* 01:02:49 - d/acc: Defensive, decentralized, democratic acceleration
* 01:11:37 - How plausible is d/acc?
* 01:20:53 - Why libertarian acceleration can paradoxically break decentralization
* 01:25:49 - Can we merge with AIs?
* 01:35:10 - Military AI concerns: How war accelerates dangerous development
* 01:42:26 - The intractability question
* 01:51:10 - Anthropic and tractability-washing the AI alignment problem
* 02:00:05 - The state of AI x-risk discourse
* 02:05:14 - Debunking ad hominem attacks against doomers
* 02:23:41 - Liron's outro

Links:
Vitalik's website: https://vitalik.eth.limo
Vitalik's Twitter: https://x.com/vitalikbuterin
Eliezer Yudkowsky's explanation of p-Zombies: https://www.lesswrong.com/posts/fdEWWr8St59bXLbQr/zombies-zombies

Doom Debates' mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

Get full access to Doom Debates at lironshapira.substack.com/subscribe

Aug 13, 2025 · 2h 26m

Trump’s tech bros: The enigma of Peter Thiel

Podcast: FT Tech Tonic
Episode: Trump's tech bros: The enigma of Peter Thiel
Release date: 2025-07-08

Peter Thiel is unlike any other Trump tech bro. As well as being a wildly successful investor, he's seen as a thinker: the philosopher king of Silicon Valley. Thiel's acolytes in the tech world and Washington include vice-president JD Vance, but his relationship with the Trump camp is complicated. And there are still questions about what, if anything, he wants from the president.

In the final episode of this season of Tech Tonic, Murad Ahmed speaks to FT columnist Gillian Tett about Thiel's political philosophy, and to Tabby Kinder, the FT's West Coast financial editor, about his influence in Silicon Valley.

Free to read:
How Peter Thiel and Silicon Valley funded the sudden rise of JD Vance
A time for truth and reconciliation (written by Peter Thiel)
How a little-known French literary critic became a bellwether for the US right
Palantir's 'revolving door' with government spurs huge growth

This season of Tech Tonic is presented by Murad Ahmed and produced by Josh Gabert-Doyon. The senior producer is Edwin Lane and the executive producer is Flo Phillips. Sound design by Breen Turner and Sam Giovinco. Original music by Metaphor Music. Manuela Saragosa and Topher Forhecz are the FT's acting co-heads of audio.

Read a transcript of this episode on FT.com

Hosted on Acast. See acast.com/privacy for more information.

Jul 8, 2025 · 31 min

Ep 114: Flying Cars Are About to Change the World — Joby CEO JoeBen Bevirt

Podcast: Joe Lonsdale: American Optimist
Episode: Ep 114: Flying Cars Are About to Change the World — Joby CEO JoeBen Bevirt
Release date: 2025-06-04

JoeBen Bevirt has spent two decades building electric vertical take-off and landing (eVTOL) aircraft, and now he's on the cusp of commercial approval and rollout. Will flying cars be as transformational as the automobile? How will air taxis impact our cities and the way we live? And how did JoeBen achieve this feat of ingenuity?

This week we're joined by the Founder and CEO of Joby Aviation, an American aviation company pioneering eVTOL aircraft for air taxi service. All-electric, virtually silent, and traveling up to 200mph with a pilot and four passengers, Joby is opening new possibilities in the skies above — starting at the price of an Uber Black. The implications for productivity and quality of life are massive, saving the average person an hour or two a day sitting in traffic and unlocking new swaths of land for development.

I'm proud that 8VC co-led Joby's first investment round about a decade ago, when many others, even flying enthusiasts, thought it was a pipedream. Since then, Joby has single-handedly shaped an entire new industry, from engineering breakthroughs to regulatory pathways, ensuring that American aviation stays ahead of China. Joby expects its first passenger rides in Dubai within a year and is working closely with the Trump administration as it nears the final stages of FAA approval. Inspired by SpaceX, Joby is vertically integrated and plans to aggressively ramp manufacturing here in the U.S., backed by a $500 million investment from Toyota (bringing Toyota's total investment near $900 million). While we await the first passenger flights, Joby is also building out its infrastructure nationwide — and they're looking for real estate and partners! You can contact JoeBen and the team here: [email protected]

Timestamps:
00:00 Episode Intro
01:38 Flying cars are here
04:00 JoeBen's journey
05:48 Battery progress & hydrogen breakthroughs
08:50 Air taxi for the price of Uber Black
12:35 When will commercial flights start?
20:30 Why Joby is the industry leader
24:20 Why China is copying Joby
28:00 How air taxis will change your life
32:10 How Joby will transform real estate
35:45 Solving intractable problems

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit blog.joelonsdale.com

Jun 5, 2025 · 38 min

Richard Ngo - A State-Space of Positive Posthuman Futures (Worthy Successor, Episode 8)

Podcast: The Trajectory
Episode: Richard Ngo - A State-Space of Positive Posthuman Futures (Worthy Successor, Episode 8)
Release date: 2025-04-25

This is an interview with Richard Ngo, AGI researcher and thinker, with extensive stints at both OpenAI and DeepMind. This is an additional installment of our "Worthy Successor" series, where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity.

This episode referred to the following other essays and resources:
- A Worthy Successor - The Purpose of AGI: https://danfaggella.com/worthy
- Richard's exploratory fiction writing: https://narrativeark.xyz/

Watch this episode on The Trajectory YouTube channel: https://youtu.be/UQpds4PXMjQ
See the full article from this episode: https://danfaggella.com/ngo1

About The Trajectory: AGI and man-machine merger are going to radically expand the process of life beyond humanity, so how can we ensure a good trajectory for future life? From Yoshua Bengio to Nick Bostrom, from Michael Levin to Peter Singer, we discuss how to positively influence the trajectory of posthuman life with the greatest minds in AI, biology, philosophy, and policy.

Ask questions of our speakers in our live Philosophy Circle calls: https://bit.ly/PhilosophyCircle

Stay in touch:
- Newsletter: bit.ly/TrajectoryTw
- X: x.com/danfaggella
- Blog: danfaggella.com/trajectory
- YouTube: youtube.com/@trajectoryai

Apr 28, 2025 · 1h 46m

AI, data centers, and power economics, with Azeem Azhar

Podcast: Complex Systems with Patrick McKenzie (patio11)
Episode: AI, data centers, and power economics, with Azeem Azhar
Release date: 2025-02-27

Patrick McKenzie (patio11) is joined by Azeem Azhar, writer of the Exponential View newsletter, to discuss the massive data center buildout powering AI and its implications for our energy infrastructure. The conversation covers the physical limitations of modern datacenters, the challenges of electricity generation, the societal ripples from historical large-scale infrastructure investments like railways and telecommunications, and the future of energy, including solar, nuclear and geothermal power. Through their discussion, Patrick and Azeem explain why our mental models for both computing and energy systems need to be updated.

Full transcript available here: www.complexsystemspodcast.com/ai-llm-data-center-power-economics/

Sponsors: SafeBase | Check
Ready to save time and close deals faster? Inbound security reviews shouldn't slow down your team or your sales cycle. Leading companies use SafeBase to eliminate up to 98% of inbound security questionnaires, automate workflows, and accelerate pipeline. Go to safebase.io/podcast
Check is the leading payroll infrastructure provider and pioneer of embedded payroll. Check makes it easy for any SaaS platform to build a payroll business, and already powers 60+ popular platforms. Head to checkhq.com/complex and tell them patio11 sent you.

Recommended in this episode:
Azeem's newsletter: https://www.exponentialview.co/
Azeem Azhar's guest essay, "The 19th-Century Technology That Threatens A.I.": https://www.nytimes.com/2024/12/28/opinion/ai-electricity-power-plants.html
Electric Twin: https://www.electrictwin.com/
Video of Elon Musk's Colossus: https://www.youtube.com/watch?v=Tw696JVSxJQ
Complex Systems with Travis Dauwalter on the electrical grid: https://open.spotify.com/episode/5JY8e84sEXmHFlc8IR2kRb?si=35ymIC0UQ5SKdV8rrBcgIw
Complex Systems with Austin Vernon on fracking: https://open.spotify.com/episode/0YDV1XyjUCM2RtuTcBGYH9?si=YshjUXPEQBiScNxrNaI-Gw
Complex Systems with Casey Handmer on direct capture of CO2 to turn into hydrocarbons: https://open.spotify.com/episode/0GHegWgLSubYxvATmbWhQu?si=xNYBjn0ZTX2IT_pAZ5Ozsg

Twitter: @azeem | @patio11

Timestamps:
(00:00) Intro
(00:27) The power economics of data centers
(01:12) Historical infrastructure rollouts
(04:58) The telecoms bubble
(06:22) Unprecedented enterprise spend on AI capabilities
(11:12) Let's have your LLM talk to my LLM
(16:44) Is there a saturation point?
(19:25) Sponsors: SafeBase | Check
(21:55) What's in a data center?
(24:52) The challenges of data centers
(29:40) Geographical considerations for data centers
(36:53) Energy consumption and future needs
(40:48) Challenges in building transmission lines
(41:35) The solar power learning curve
(43:51) Small modular nuclear reactors
(51:26) Geothermal energy and fracking
(01:01:34) The future of AI and energy systems
(01:12:57) Wrap

Mar 11, 2025 · 1h 13m

#212 – Allan Dafoe on why technology is unstoppable & how to shape AI development anyway

Full

Podcast: 80,000 Hours Podcast Episode: #212 – Allan Dafoe on why technology is unstoppable & how to shape AI development anyway Release date: 2025-02-14 Get Podcast Transcript → powered by Listen411 - fast audio-to-text and summarization

Technology doesn’t force us to do anything — it merely opens doors. But military and economic competition pushes us through. That’s how today’s guest Allan Dafoe — director of frontier safety and governance at Google DeepMind — explains one of the deepest patterns in technological history: once a powerful new capability becomes available, societies that adopt it tend to outcompete those that don’t. Those who resist too much can find themselves taken over or rendered irrelevant. Links to learn more, highlights, video, and full transcript.

This dynamic played out dramatically in 1853, when US Commodore Perry sailed into Tokyo Bay with steam-powered warships that seemed magical to the Japanese, who had spent centuries deliberately limiting their technological development. With far greater military power, the US was able to force Japan to open itself to trade. Within 15 years, Japan had undergone the Meiji Restoration and transformed itself in a desperate scramble to catch up.

Today we see hints of similar pressure around artificial intelligence. Even companies, countries, and researchers deeply concerned about where AI could take us feel compelled to push ahead — worried that if they don’t, less careful actors will develop transformative AI capabilities at around the same time anyway.

But Allan argues this technological determinism isn’t absolute. While broad patterns may be inevitable, history shows we do have some ability to steer how technologies are developed, by whom, and what they’re used for first.

As part of that approach, Allan has been promoting efforts to make AI more capable of sophisticated cooperation, and improving the tests Google uses to measure how well its models could do things like mislead people, hack and take control of their own servers, or spread autonomously in the wild. As of mid-2024 they didn’t seem dangerous at all, but we’ve learned that our ability to measure these capabilities is good but imperfect: if we don’t find the right way to ‘elicit’ an ability, we can miss that it’s there. Subsequent research from Anthropic and Redwood Research suggests there’s even a risk that future models may play dumb to avoid their goals being altered.

That has led DeepMind to a “defence in depth” approach: carefully staged deployment starting with internal testing, then trusted external testers, then limited release, then watching how models are used in the real world. By not releasing model weights, DeepMind is able to back up and add additional safeguards if experience shows they’re necessary. But with much more powerful and general models on the way, individual company policies won’t be sufficient by themselves.

Drawing on his academic research into how societies handle transformative technologies, Allan argues we need coordinated international governance that balances safety with our desire to get the massive potential benefits of AI in areas like healthcare and education as quickly as possible.

Host Rob and Allan also cover: the most exciting beneficial applications of AI; whether and how we can influence the development of technology; what DeepMind is doing to evaluate and mitigate risks from frontier AI systems; why cooperative AI may be as important as aligned AI; the role of democratic input in AI governance; what kinds of experts are most needed in AI safety and governance; and much more.

Chapters: Cold open (00:00:00) Who's Allan Dafoe? (00:00:48) Allan's role at DeepMind (00:01:27) Why join DeepMind over everyone else? (00:04:27) Do humans control technological change? (00:09:17) Arguments for technological determinism (00:20:24) The synthesis of agency with tech determinism (00:26:29) Competition took away Japan's choice (00:37:13) Can speeding up one tech redirect history? (00:42:09) Structural pushback against alignment efforts (00:47:55) Do AIs need to be 'cooperatively skilled'? (00:52:25) How AI could boost cooperation between people and states (01:01:59) The super-cooperative AGI hypothesis and backdoor risks (01:06:58) Aren’t today’s models already very cooperative? (01:13:22) How would we make AIs cooperative anyway? (01:16:22) Ways making AI more cooperative could backfire (01:22:24) AGI is an essential idea we should define well (01:30:16) It matters what AGI learns first vs last (01:41:01) How Google tests for dangerous capabilities (01:45:39) Evals 'in the wild' (01:57:46) What to do given no single approach works that well (02:01:44) We don't, but could, forecast AI capabilities (02:05:34) DeepMind's strategy for ensuring its frontier models don't cause harm (02:11:25) How 'structural risks' can force everyone into a worse world (02:15:01) Is AI being built democratically? Should it? (02:19:35) How much do AI companies really want external regulation? (02:24:34) Social science can contribu

Feb 14, 20252h 44m

Claude Cooperates! Exploring Cultural Evolution in LLM Societies, with Aron Vallinder & Edward Hughes

Full

Podcast: "The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis Episode: Claude Cooperates! Exploring Cultural Evolution in LLM Societies, with Aron Vallinder & Edward Hughes Release date: 2025-02-12 Get Podcast Transcript → powered by Listen411 - fast audio-to-text and summarization

In this episode, Edward Hughes, researcher at Google DeepMind, and Aron Vallinder, an independent researcher and PIBBSS fellow, discuss their pioneering research on cultural evolution and cooperation among large language model agents. The conversation delves into the study's design, exploring how different AI models exhibit cooperative behavior in simulated environments, the implications of these findings for future AI development, and the potential societal impacts of autonomous AI agents. They elaborate on their experimental setup, which places different LLMs — Claude, Gemini 1.5, and GPT-4o — in a cooperative donor-recipient game, shedding light on how the various models handle cooperation. Key points include the importance of understanding externalities, the role of punishment and communication, and future research directions involving mixed-model societies and human-AI interactions. The episode invites listeners to engage in this fast-growing field, stressing the need for more hands-on research and empirical evidence to navigate the rapidly evolving AI landscape.

Link to Aron & Edward's research paper "Cultural Evolution of Cooperation among LLM Agents"

SPONSORS: Oracle Cloud Infrastructure (OCI): Oracle's next-generation cloud platform delivers blazing-fast AI and ML performance with 50% less for compute and 80% less for outbound networking compared to other cloud providers. OCI powers industry leaders like Vodafone and Thomson Reuters with secure infrastructure and application development capabilities. New U.S. customers can get their cloud bill cut in half by switching to OCI before March 31, 2024 at https://oracle.com/cognitive NetSuite: Over 41,000 businesses trust NetSuite by Oracle, the #1 cloud ERP, to future-proof their operations. With a unified platform for accounting, financial management, inventory, and HR, NetSuite provides real-time insights and forecasting to help you make quick, informed decisions. Whether you're earning millions or hundreds of millions, NetSuite empowers you to tackle challenges and seize opportunities. Download the free CFO's guide to AI and machine learning at https://netsuite.com/cognitive Shopify: Shopify is revolutionizing online selling with its market-leading checkout system and robust API ecosystem. Its exclusive library of cutting-edge AI apps empowers e-commerce businesses to thrive in a competitive market. Cognitive Revolution listeners can try Shopify for just $1 per month at https://shopify.com/cognitive

CHAPTERS: (00:00) Teaser (00:42) About the Episode (03:26) Introduction (03:40) The Rapid Evolution of AI (04:58) Human Cooperation and Society (07:03) Cultural Evolution and Stories (08:39) Mechanisms of Cultural Evolution (Part 1) (20:56) Sponsors: Oracle Cloud Infrastructure (OCI) | NetSuite (23:35) Mechanisms of Cultural Evolution (Part 2) (27:07) Experimental Setup: Donor Game (Part 1) (37:35) Sponsors: Shopify (38:55) Experimental Setup: Donor Game (Part 2) (44:32) Exploring AI Societies: Claude, Gemini, and GPT-4 (45:50) Striking Graphical Differences (48:08) Experiment Results and Implications (50:54) Prompt Engineering and Cooperation (57:40) Mixed Model Societies (01:00:35) Future Research Directions (01:03:10) Human-AI Interaction and Influence (01:05:20) Complexifying AI Games (01:18:14) Evaluations and Feedback Loops (01:20:50) Open Source and AI Safety (01:23:23) Reflections and Future Work (01:30:04) Outro

Feb 14, 20251h 29m

AI in 2030, Scaling Bottlenecks, and Explosive Growth

Full

Podcast: Epoch After Hours Episode: AI in 2030, Scaling Bottlenecks, and Explosive Growth Release date: 2025-01-16 Get Podcast Transcript → powered by Listen411 - fast audio-to-text and summarization

In our first episode of Epoch After Hours, Ege, Tamay, and Jaime dig into what they expect AI to look like by 2030; why economists are underestimating the likelihood of explosive growth; the startling regularity in technological trends like Moore's Law; Moravec’s paradox, and how we might overcome it; and much more!

Jan 18, 20252h 2m

Ajeya Cotra on AI safety and the future of humanity

Full

Podcast: AI Summer Episode: Ajeya Cotra on AI safety and the future of humanity Release date: 2025-01-16 Get Podcast Transcript → powered by Listen411 - fast audio-to-text and summarization

Ajeya Cotra works at Open Philanthropy, a leading funder of efforts to combat existential risks from AI. She has led the foundation’s grantmaking on technical research to understand and reduce catastrophic risks from advanced AI. She is co-author of Planned Obsolescence, a newsletter about AI futurism and AI alignment.

Although a committed doomer herself, Cotra has worked hard to understand the perspectives of AI safety skeptics. In this episode, we asked her to guide us through the contentious debate over AI safety and—perhaps—explain why people with similar views on other issues frequently reach divergent views on this one. We spoke to Cotra on December 10.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.aisummer.org

Jan 16, 20251h 13m

Nora Belrose - AI Development, Safety, and Meaning

Full

Podcast: Machine Learning Street Talk (MLST) Episode: Nora Belrose - AI Development, Safety, and Meaning Release date: 2024-11-17 Get Podcast Transcript → powered by Listen411 - fast audio-to-text and summarization

Nora Belrose, Head of Interpretability Research at EleutherAI, discusses critical challenges in AI safety and development. The conversation begins with her technical work on concept erasure in neural networks through LEACE (LEAst-squares Concept Erasure), while highlighting how neural networks' progression from simple to complex learning patterns could have important implications for AI safety. Many fear that advanced AI will pose an existential threat, pursuing its own dangerous goals once it's powerful enough. But Belrose challenges this popular doomsday scenario with a fascinating breakdown of why it doesn't add up. Belrose also provides a detailed critique of current AI alignment approaches, particularly examining "counting arguments" and their limitations when applied to AI safety. She argues that the Principle of Indifference may be insufficient for addressing existential risks from advanced AI systems. The discussion explores how emergent properties in complex AI systems could lead to unpredictable and potentially dangerous behaviors that simple reductionist approaches fail to capture. The conversation concludes by exploring broader philosophical territory, where Belrose discusses her growing interest in Buddhism's potential relevance to a post-automation future. She connects concepts of moral anti-realism with Buddhist ideas about emptiness and non-attachment, suggesting these frameworks might help humans find meaning in a world where AI handles most practical tasks. Rather than viewing this automated future with alarm, she proposes that Zen Buddhism's emphasis on spontaneity and presence might complement a society freed from traditional labor.

SPONSOR MESSAGES: CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. https://centml.ai/pricing/ Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier, focused on ARC and AGI; they just acquired MindsAI, the current winners of the ARC challenge. Are you interested in working on ARC, or getting involved in their events? Go to https://tufalabs.ai/

Nora Belrose: https://norabelrose.com/ https://scholar.google.com/citations?user=p_oBc64AAAAJ&hl=en https://x.com/norabelrose

SHOWNOTES: https://www.dropbox.com/scl/fi/38fhsv2zh8gnubtjaoq4a/NORA_FINAL.pdf?rlkey=0e5r8rd261821g1em4dgv0k70&st=t5c9ckfb&dl=0

TOC: 1. Neural Network Foundations [00:00:00] 1.1 Philosophical Foundations and Neural Network Simplicity Bias [00:02:20] 1.2 LEACE and Concept Erasure Fundamentals [00:13:16] 1.3 LISA Technical Implementation and Applications [00:18:50] 1.4 Practical Implementation Challenges and Data Requirements [00:22:13] 1.5 Performance Impact and Limitations of Concept Erasure 2. Machine Learning Theory [00:32:23] 2.1 Neural Network Learning Progression and Simplicity Bias [00:37:10] 2.2 Optimal Transport Theory and Image Statistics Manipulation [00:43:05] 2.3 Grokking Phenomena and Training Dynamics [00:44:50] 2.4 Texture vs Shape Bias in Computer Vision Models [00:45:15] 2.5 CNN Architecture and Shape Recognition Limitations 3. AI Systems and Value Learning [00:47:10] 3.1 Meaning, Value, and Consciousness in AI Systems [00:53:06] 3.2 Global Connectivity vs Local Culture Preservation [00:58:18] 3.3 AI Capabilities and Future Development Trajectory 4. Consciousness Theory [01:03:03] 4.1 4E Cognition and Extended Mind Theory [01:09:40] 4.2 Thompson's Views on Consciousness and Simulation [01:12:46] 4.3 Phenomenology and Consciousness Theory [01:15:43] 4.4 Critique of Illusionism and Embodied Experience [01:23:16] 4.5 AI Alignment and Counting Arguments Debate (TRUNCATED; TOC embedded in MP3 file with more information)

Nov 30, 20242h 29m

The Road to Autonomous Intelligence with Andrej Karpathy

Full

Podcast: No Priors: Artificial Intelligence | Technology | Startups Episode: The Road to Autonomous Intelligence with Andrej Karpathy Release date: 2024-09-05 Get Podcast Transcript → powered by Listen411 - fast audio-to-text and summarization

Andrej Karpathy joins Sarah and Elad in this week's episode of No Priors. Andrej, who was a founding team member of OpenAI and former Senior Director of AI at Tesla, needs no introduction. In this episode, Andrej discusses the evolution of self-driving cars, comparing Tesla's and Waymo's approaches, and the technical challenges ahead. They also cover Tesla's Optimus humanoid robot, the bottlenecks of AI development today, and how AI capabilities could be further integrated with human cognition. Andrej shares more about his new company, Eureka Labs, and his insights into AI-driven education, peer networks, and what young people should study to prepare for the reality ahead.

Sign up for new podcasts every week. Email feedback to [email protected] Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @Karpathy

Show Notes: (0:00) Introduction (0:33) Evolution of self-driving cars (2:23) The Tesla vs. Waymo approach to self-driving (6:32) Training Optimus with automotive models (10:26) Reasoning behind the humanoid form factor (13:22) Existing challenges in robotics (16:12) Bottlenecks of AI progress (20:27) Parallels between human cognition and AI models (22:12) Merging human cognition with AI capabilities (27:10) Building high performance small models (30:33) Andrej’s current work in AI-enabled education (36:17) How AI-driven education reshapes knowledge networks and status (41:26) Eureka Labs (42:25) What young people should study to prepare for the future

Sep 5, 202444 min

Joscha Bach - AGI24 Keynote (Cyberanimism)

Full

Podcast: Machine Learning Street Talk (MLST) Episode: Joscha Bach - AGI24 Keynote (Cyberanimism) Release date: 2024-08-21 Get Podcast Transcript → powered by Listen411 - fast audio-to-text and summarization

Dr. Joscha Bach introduces a surprising idea called "cyberanimism" in his AGI-24 talk - the notion that nature might be full of self-organizing software agents, similar to the spirits in ancient belief systems. Bach suggests that consciousness could be a kind of software running on our brains, and wonders if similar "programs" might exist in plants or even entire ecosystems.

MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval-augmented generation. Try it now - get 2,000 free queries monthly at https://brave.com/api.

Joscha takes us on a tour de force through history, philosophy, and cutting-edge computer science, teasing us to rethink what we know about minds, machines, and the world around us. Joscha believes we should blur the lines between human, artificial, and natural intelligence, and argues that consciousness might be more widespread and interconnected than we ever thought possible.

Dr. Joscha Bach: https://x.com/Plinz

This is video 2/9 from our coverage of AGI-24 in Seattle: https://agi-conf.org/2024/ Watch the official MLST interview with Joscha, which we did right after this talk, on our Patreon now on early access: https://www.patreon.com/posts/joscha-bach-110199676 (you also get access to our private Discord and biweekly calls)

TOC: 00:00:00 Introduction: AGI and Cyberanimism 00:03:57 The Nature of Consciousness 00:08:46 Aristotle's Concepts of Mind and Consciousness 00:13:23 The Hard Problem of Consciousness 00:16:17 Functional Definition of Consciousness 00:20:24 Comparing LLMs and Human Consciousness 00:26:52 Testing for Consciousness in AI Systems 00:30:00 Animism and Software Agents in Nature 00:37:02 Plant Consciousness and Ecosystem Intelligence 00:40:36 The California Institute for Machine Consciousness 00:44:52 Ethics of Conscious AI and Suffering 00:46:29 Philosophical Perspectives on Consciousness 00:49:55 Q&A: Formalisms for Conscious Systems 00:53:27 Coherence, Self-Organization, and Compute Resources

YT version (very high quality, filmed by us live): https://youtu.be/34VOI_oo-qM

Refs: Aristotle's work on the soul and consciousness; Richard Dawkins' work on genes and evolution; Gerald Edelman's concept of Neural Darwinism; Thomas Metzinger's book "Being No One"; Yoshua Bengio's concept of the "consciousness prior"; Stuart Hameroff's theories on microtubules and consciousness; Christof Koch's work on consciousness; Daniel Dennett's "Cartesian Theater" concept; Giulio Tononi's Integrated Information Theory; Mike Levin's work on organismal intelligence; the concept of animism in various cultures; Freud's model of the mind; Buddhist perspectives on consciousness and meditation; the Genesis creation narrative (for its metaphorical interpretation); California Institute for Machine Consciousness

Aug 21, 202457 min
Copyright 2026 Peter Hartree