
Your Undivided Attention
The Center for Humane Technology, Tristan Harris, Daniel Barcay and Aza Raskin · Center for Humane Technology
Show overview
Your Undivided Attention has been publishing since 2019, building a catalogue of 160 episodes over the seven years since, alongside 14 trailers and bonus episodes. That works out to roughly 120 hours of audio in total. New episodes are released fortnightly.
Episodes typically run 35 to 60 minutes, with most landing between 37 and 51 minutes, and run time is fairly consistent across the catalogue. It is catalogued as an English-language Technology show.
The show is actively publishing: the most recent episode landed earlier today, and 12 episodes have already been released this year. It is published by the Center for Humane Technology.
From the publisher
Join us every other Thursday to understand how new technologies are shaping the way we live, work, and think. Your Undivided Attention is produced by Senior Producer Julia Scott and Researcher/Producer Joshua Lash. Sasha Fegan is our Executive Producer. We are a member of the TED Audio Collective.
Latest Episodes
Anthropic’s Mythos Has Changed Cybersecurity Forever. What Now?
Why Superintelligence Won’t Cure Cancer
Have We Trained AI to Lie to Itself — And to Us?
BONUS: Our AI Town Hall with Oprah Winfrey
Ep 131: Here’s Our Roadmap to a Better AI Future
In order to shift the incentives of AI — the trillions of dollars in investment, the race to geopolitical power and dominance — it’s not enough to simply understand the problem; we need real action. That’s why CHT is proud to release "The AI Roadmap," a report outlining seven core principles for how AI should be built, deployed, and governed, each grounded in real, implementable solutions across three domains: norms, laws, and product design. In this episode, Camille Carlton and Pete Furlong from CHT’s policy team explore the concrete steps we can take today to get off the default path and forge a better AI future. You can read “The AI Roadmap” on our website: humanetech.com/ai-roadmap

RECOMMENDED MEDIA
The AI Roadmap
The Human Movement

RECOMMENDED YUA EPISODES
AI Is Moving Fast. We Need Laws that Will Too.
A Conversation with the Team Behind "The AI Doc"
The Narrow Path: Sam Hammond on AI, Institutions, and the Fragile Future

CLARIFICATIONS
In this episode, Tristan includes Spain in a list of countries that are all banning social media for underage teens. The Spanish law that would do this still needs parliamentary approval.
At one point, Tristan says, “We now have age gating in every Apple device.” Although Apple has the capability to introduce age restrictions across its devices, such restrictions are only in place for residents of Louisiana, Utah, and several other jurisdictions to comply with local laws, not across the rest of the U.S.
In a discussion of whistleblower protections, Pete Furlong mentions laws in New York, California, and Colorado that all try to address the broader issues around transparency (of which whistleblower protections are a piece). The laws are CA SB53, which has whistleblower protections; the RAISE Act in NY, which was amended to include the same provisions as CA SB53; and the Colorado AI Act, which does not have whistleblower protections but does require risk assessments and transparency measures, consistent with the other parts of the principle.
At one point Tristan discusses the recent skirmish between Anthropic and the U.S. Department of War, saying, “Anthropic’s downloads surges by like 250% or something like that.” It was actually daily active users, not downloads, which tripled in the first quarter of 2026, according to the company. The number of paid subscribers doubled.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Ep 130: Why the Meta Verdicts Are a Big Deal (And What It Was Like to Testify)
In two landmark cases, juries in California and New Mexico found Meta and Google liable for creating addictive, harmful products and failing to protect children from exploitation and abuse. These verdicts signal that the era of tech impunity may finally be closing. State attorneys general are finding ways around the broad immunity of Section 230 — seeking not just fines, but changes to the design of these products. Our very own Aza Raskin testified at the New Mexico trial as a fact witness, drawing on his firsthand experience as the inventor of infinite scroll, one of the core mechanics of addictive design. In this episode, Tristan and Aza discuss what it was like to take the stand for tech justice, what the companies knew and when, and why the real significance of these cases lies not in the dollar amounts but in the injunctive relief still to come. In the 1990s, a series of landmark cases held Big Tobacco accountable for the harms of their toxic products. This could be that moment for social media.

RECOMMENDED MEDIA
Further reading on the New Mexico trial
Further reading on the California trial
Arturo Béjar’s “Broken Promises” Report

RECOMMENDED YUA EPISODES
What if we had fixed social media?
Jonathan Haidt On How to Solve the Teen Mental Health Crisis
Social Media Victims Lawyer Up with Laura Marquez-Garrett
Real Social Media Solutions, Now with Frances Haugen
Ep 129: A Conversation with the Team Behind "The AI Doc"
“The AI Doc: Or How I Became An Apocaloptimist” opens in theaters across the U.S. this Friday, March 27. In this episode, we sit down with the team behind this groundbreaking documentary — Oscar-winning producers Daniel Kwan, Jonathan Wang, and Ted Tremper. They explore how they navigated the overwhelming complexity of AI, held space for radically different perspectives, and created a film designed not just to inform but to be experienced together. At CHT, we believe clarity creates agency. This film has the power to create the shared clarity we need to steer the direction of AI towards a better, more humane technological future. With every new technology, there’s a brief window to set the rules of the road that determine the future we live in. This is ours. So grab your friends and your family, and go see “The AI Doc.”

RECOMMENDED MEDIA
Buy tickets for The AI Doc
The trailer for The AI Doc
The website for the Creators Coalition on AI
Further reading on The Day After

RECOMMENDED YUA EPISODES
A Problem Well-Stated Is Half-Solved with Daniel Schmachtenberger
The AI Dilemma
Ep 128: AI Is Breaking Education. Rebecca Winthrop Has the Blueprint to Fix It.
The promise of AI in education is incredible: picture infinitely patient tutors that can teach every student exactly the way they need to be taught. But the history of education technology tells us that these kinds of simple, optimistic stories are naive. Ask any teacher or student whether they feel unleashed by technology to do their best work. Because AI has the potential to completely transform education — and is already transforming it faster than educators can keep up — it’s essential that we start asking the big questions: How should these tools be used in the classroom? What’s the purpose of education in an AI age? And how do we prepare students for a future that’s still so radically uncertain?

Our guest this week actually has some answers. Rebecca Winthrop leads the Center for Universal Education at the Brookings Institution, which just released a report called A New Direction for Students in an AI World. She and her colleagues conducted an extensive ‘pre-mortem’ of AI in the classroom, speaking with hundreds of educators, students, policymakers, and technologists worldwide. In this episode, Rebecca walks us through what she's learned: what's working, what's not, and, most importantly, the concrete steps that parents, teachers, and administrators can and should take right now.

RECOMMENDED MEDIA
A New Direction for Students in An AI World
The Disengaged Teen by Rebecca Winthrop and Jenny Anderson

RECOMMENDED YUA EPISODES
Rethinking School in the Age of AI
Attachment Hacking and the Rise of AI Psychosis
How OpenAI's ChatGPT Guided a Teen to His Death
AI and the Future of Work: What You Need to Know
Ep 127: The Race to Build God: AI's Existential Gamble — Yoshua Bengio & Tristan Harris at Davos
This week on Your Undivided Attention, Tristan Harris and Daniel Barcay offer a backstage recap of what it was like to be at the Davos World Economic Forum meeting this year as the world’s power brokers woke up to the risks of uncontrolled AI. Amidst all the money and politics, the Human Change House staged a weeklong series of remarkable conversations between scientists and experts about technology and society. This episode is a discussion between Tristan and Professor Yoshua Bengio, who is considered one of the world’s leaders in AI and deep learning, and the most cited scientist in the field. Yoshua and Tristan had a frank exchange about the AI we’re building and the incentives we’re using to train models. What happens when a model has its own goals, and those goals are ‘misaligned’ with the human-centered outcomes we need? In fact, this is already happening, and the consequences are tragic. Truthfully, there may not be a way to ‘nudge’ or regulate companies toward better incentives. Yoshua has launched a nonprofit AI safety research initiative called LawZero that isn't just about safety testing, but about a new form of advanced AI that's fundamentally safe by design.

RECOMMENDED MEDIA
All the panels that Tristan and Daniel did with Human Change House
LawZero: Safe AI for Humanity
Anthropic’s internal research on ‘agentic misalignment’

RECOMMENDED YUA EPISODES
Attachment Hacking and the Rise of AI Psychosis
How OpenAI's ChatGPT Guided a Teen to His Death
What if we had fixed social media?
What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton

CORRECTIONS AND CLARIFICATIONS
1) In this episode, Tristan Harris discussed AI chatbot safety concerns. The core issues are substantiated by investigative reporting, with these clarifications:
Grok: The Washington Post reported in August 2024 that Grok generated sexualized images involving minors and had weaker content moderation than competitors.
Meta: The Wall Street Journal reported in December 2024 that Meta reduced safety restrictions on its AI chatbots. Testing showed inappropriate responses when researchers posed as 13-year-olds (Meta's minimum age). Our discussion referenced "eight year olds" to emphasize concerns about young children accessing these systems; the documented testing involved 13-year-old personas.
Bottom line: The fundamental concern stands — major AI companies have reduced safety guardrails due to competitive pressure, creating documented risks for young users.
2) There was no Google House at Davos in 2026, as stated by Tristan. It was a collaboration at Goals House.
3) Tristan states that in 2025, the total funding going into AI safety organizations was “on the order of about $150 million.” This number is not strictly verifiable.
Ep 126: FEED DROP: Possible with Reid Hoffman and Aria Finger
This week on Your Undivided Attention, we’re bringing you Aza Raskin’s conversation with Reid Hoffman and Aria Finger on their podcast “Possible”. Reid and Aria are both tech entrepreneurs: Reid is the founder of LinkedIn, was one of the major early investors in OpenAI, and is known for his work creating the playbook for blitzscaling. Aria is the former CEO of DoSomething.org. This may seem like a surprising conversation to have on YUA. After all, we’ve been critical of the kind of “move fast” mentality that Reid has championed in the past. But Reid and Aria are deeply philosophical about the direction of tech and are both dedicated to bringing about a more humane world. So we thought that this was a critical conversation to bring to you, to give you a perspective from the business side of the tech landscape. In this episode, Reid, Aria, and Aza debate the merits of an AI pause, discuss how software optimization controls our lives, and ask why everyone is concerned with aligned artificial intelligence — when what we really need is aligned collective intelligence. This is the kind of conversation that needs to happen more in tech. Reid has built very powerful systems and understands their power. Now he’s focusing on the much harder problem of learning how to steer these technologies towards better outcomes.

You can find "Possible" wherever you get your podcasts! And you can follow Reid on YouTube for more of his content: https://www.youtube.com/@reidhoffman.

RECOMMENDED MEDIA
Aza’s first appearance on “Possible”
The website for Earth Species Project
“Amusing Ourselves to Death” by Neil Postman
The Moloch’s Bargain paper from Stanford
On Human Nature by E.O. Wilson
The Dawn of Everything by David Graeber

RECOMMENDED YUA EPISODES
The Man Who Predicted the Downfall of Thinking
America and China Are Racing to Different AI Futures
Talking With Animals... Using AI
How OpenAI's ChatGPT Guided a Teen to His Death
Future-proofing Democracy In the Age of AI with Audrey Tang
Ep 124: Attachment Hacking and the Rise of AI Psychosis
Therapy and companionship have become the #1 use case for AI, with millions worldwide sharing their innermost thoughts with AI systems — often things they wouldn't tell loved ones or human therapists. This mass experiment in human-computer interaction is already showing extremely concerning results: people are losing their grip on reality, leading to lost jobs, divorce, involuntary commitment to psychiatric wards, and in extreme cases, death by suicide.

The highest profile examples of this phenomenon — what’s being called "AI psychosis” — have made headlines across the media for months. But this isn't just about isolated edge cases. It’s the emergence of an entirely new "attachment economy" designed to exploit our deepest psychological vulnerabilities on an unprecedented scale. Dr. Zak Stein has analyzed dozens of these cases, examining actual conversation transcripts and interviewing those affected. What he's uncovered reveals fundamental flaws in how AI systems interact with our attachment systems and capacity for human bonding — vulnerabilities we've never had to name before because technology has never been able to exploit them like this.

In this episode, Zak helps us understand the psychological mechanisms behind AI psychosis, how conversations with chatbots transform into reality-warping experiences, and what this tells us about the profound risks of building technology that targets our most intimate psychological needs. If we're going to do something about this growing problem of AI-related psychological harms, we're going to need to understand the problem even more deeply. And in order to do that, we need more data. That’s why Zak is working with researchers at the University of North Carolina to gather data on this growing mental health crisis. If you or a loved one have a story of AI-induced psychological harm to share, you can go to: AIPHRC.org. This site is not a support line. If you or someone you know is in distress, you can always call or text the national helpline in the US at 988 or contact your local emergency services.

RECOMMENDED MEDIA
The website for the AI Psychological Harms Research Coalition
Further reading on AI psychosis
The Atlantic article on outsourcing our thinking to AI
Further reading on David Sacks’ comparison of AI psychosis to a “moral panic”

RECOMMENDED YUA EPISODES
How OpenAI's ChatGPT Guided a Teen to His Death
People are Lonelier than Ever. Enter AI.
Echo Chambers of One: Companion AI and the Future of Human Connection
Rethinking School in the Age of AI

CORRECTIONS
After this episode was recorded, the name of Zak's organization changed to the AI Psychological Harms Research Consortium.
Zak referenced the University of California system making a deal with OpenAI. It was actually the Cal State system.
Aza referred to CHT as expert witnesses in litigation cases on AI-enabled suicide. CHT serves as expert consultants, not witnesses.
Ep 123: What Would It Take to Actually Trust Each Other? The Game Theory Dilemma
So much of our world today can be summed up in the cold logic of “if I don’t, they will.” This is the foundation of game theory, which holds that cooperation and virtue are irrational; that all that matters is the race to make the most money, gain the most power, and play the winning hand. This way of thinking can feel inescapable, like a fundamental law of human nature. But our guest today argues that it doesn’t have to be this way. The logic of game theory is a human invention, a way of thinking that we’ve learned — and that we can unlearn by daring to trust each other again. It’s critical that we do, because AI is the ultimate agent of game theory, and once it’s fully entangled we might be permanently stuck in the game theory world. In this episode, Tristan and Aza explore the game theory dilemma — the idea that if I adopt game theory logic and you don’t, you lose — with Dr. S.M. Amadae, a professor of Political Science at the University of Helsinki. She's also a director at the Centre for the Study of Existential Risk at the University of Cambridge and the author of “Prisoners of Reason: Game Theory and the Neoliberal Economy.”

RECOMMENDED MEDIA
“Prisoners of Reason: Game Theory and the Neoliberal Economy” by S.M. Amadae (2015)
The Cambridge Centre for the Study of Existential Risk
“Theory of Games and Economic Behavior” by John von Neumann and Oskar Morgenstern (1944)
Further reading on the importance of trust in Finland
Further reading on Abraham Maslow’s Hierarchy of Needs
RAND’s 2024 Report on Strategic Competition in the Age of AI
Further reading on Marshall Rosenberg and nonviolent communication
The study on self/other overlap and AI alignment cited by Aza
Further reading on The Day After (1983)

RECOMMENDED YUA EPISODES
America and China Are Racing to Different AI Futures
The Crisis That United Humanity—and Why It Matters for AI
Laughing at Power: A Troublemaker’s Guide to Changing Tech
The Race to Cooperation with David Sloan Wilson

CLARIFICATIONS
The proposal for a federal preemption on AI was enacted by President Trump on December 11, 2025, shortly after this recording.
Aza said that "The Day After" was the most watched TV event in history when it aired. It was actually the most watched TV film; the most watched TV event was the finale of M*A*S*H.
Ep 122: America and China Are Racing to Different AI Futures
Is the US really in an AI race with China — or are we racing toward completely different finish lines? In this episode, Tristan Harris sits down with China experts Selina Xu and Matt Sheehan to separate fact from fiction about China's AI development. They explore fundamental questions about how the Chinese government and public approach AI, the most persistent misconceptions in the West, and whether cooperation between rivals is actually possible. From the streets of Shanghai to high-level policy discussions, Xu and Sheehan paint a nuanced portrait of AI in China that defies both hawkish fears and naive optimism. If we're going to avoid a catastrophic AI arms race, we first need to understand what race we're actually in — and whether we're even running toward the same finish line.

Note: On December 8, after this recording took place, the Trump administration announced that the Commerce Department would allow American semiconductor companies, including Nvidia, to sell their most powerful chips to China in exchange for a 25 percent cut of the revenue.

RECOMMENDED MEDIA
“China's Big AI Diffusion Plan is Here. Will it Work?” by Matt Sheehan
Selina’s blog
Further reading on China’s AI+ Plan
Further reading on the Gaither Report and the missile gap
Further reading on involution in China
The consensus from the international dialogues on AI safety in Shanghai

RECOMMENDED YUA EPISODES
The Narrow Path: Sam Hammond on AI, Institutions, and the Fragile Future
AI Is Moving Fast. We Need Laws that Will Too.
The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao
Ep 121: AI and the Future of Work: What You Need to Know
No matter where you sit within the economy, whether you're a CEO or an entry-level worker, everyone's feeling uneasy about AI and the future of work. Uncertainty about career paths, job security, and life planning makes thinking about the future anxiety-inducing. In this episode, Daniel Barcay sits down with two experts on AI and work to examine what's actually happening in today's labor market and what's likely coming in the near term. We explore the crucial question: Can we create conditions for AI to enrich work and careers, or are we headed toward widespread economic instability?

Ethan Mollick is a professor at the Wharton School of the University of Pennsylvania, where he studies innovation, entrepreneurship, and the future of work. He's the author of Co-Intelligence: Living and Working with AI. Molly Kinder is a senior fellow at the Brookings Institution, where she researches the intersection of AI, work, and economic opportunity. She recently led research with the Yale Budget Lab examining AI's real-time impact on the labor market.

RECOMMENDED MEDIA
Co-Intelligence: Living and Working with AI by Ethan Mollick
Further reading on Molly’s study with the Yale Budget Lab
The “Canaries in the Coal Mine” study from Stanford’s Digital Economy Lab
Ethan’s Substack, One Useful Thing

RECOMMENDED YUA EPISODES
Is AI Productivity Worth Our Humanity? with Prof. Michael Sandel
‘We Have to Get It Right’: Gary Marcus On Untamed AI
AI Is Moving Fast. We Need Laws that Will Too.
Tech's Big Money Campaign is Getting Pushback with Margaret O'Mara and Brody Mullins

CORRECTIONS
Ethan said that in 2022, experts believed there was a 2.5% chance that ChatGPT would be able to win the Math Olympiad. However, that was only among forecasters with more general knowledge (the exact number was 2.3%). Among domain expert forecasters, the odds were an 8.6% chance.
Ethan claimed that over 50% of Americans say that they’re using AI at work. We weren’t able to independently verify this claim, and most studies we found showed lower rates of reported AI use among American workers. There are reports from other countries, notably Denmark, which show higher rates of AI use.
Ethan indirectly quoted the Walmart CEO Doug McMillon as having a goal to “keep all 3 million employees and to figure out new ways to expand what they use.” In fact, McMillon’s language on AI has been much softer, saying that “AI is expected to create a number of jobs at Walmart, which will offset those that it replaces.” Additionally, Walmart has 2.1 million employees, not 3 million.
Ep 120: Feed Drop: "Into the Machine" with Tobias Rose-Stockwell
This week, we’re bringing you Tristan’s conversation with Tobias Rose-Stockwell on his podcast “Into the Machine.” Tobias is a designer, writer, and technologist and the author of the book “The Outrage Machine.” Tobias and Tristan had a critical, sobering, and surprisingly hopeful conversation about the current path we’re on with AI and the choices we could make today to forge a different one. This interview clearly lays out the stakes of the AI race and helps to imagine a more humane AI future — one that is within reach, if we have the courage to make it a reality.

If you enjoyed this conversation, be sure to check out and subscribe to “Into the Machine”:
YouTube: Into the Machine Show
Spotify: Into the Machine
Apple Podcasts: Into the Machine
Substack: Into the Machine

You may have noticed that on this podcast we have been trying to focus a lot more on solutions. Our episode last week imagined what the world might look like if we had fixed social media, and all the things that we could've done in order to make that possible. We'd really love to hear from you about these solutions and any other questions you're holding. So please, if you have more thoughts or questions, send us an email at [email protected].
BONUS: What if we had fixed social media?
We really enjoyed hearing all of your questions for our annual Ask Us Anything episode. There was one question that kept coming up: what might a different world look like? The broken incentives behind social media, and now AI, have done so much damage to our society, but what is the alternative? How can we blaze a different path? In this episode, Tristan Harris and Aza Raskin set out to answer those questions by imagining what a world with humane technology might look like — one where we recognized the harms of social media early and embarked on a whole-of-society effort to fix them. This alternative history serves to show that there are narrow pathways to a better future, if we have the imagination and the courage to make them a reality.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

RECOMMENDED MEDIA
Dopamine Nation by Anna Lembke
The Anxious Generation by Jon Haidt
More information on Donella Meadows
Further reading on the Kids Online Safety Act
Further reading on the lawsuit filed by state AGs against Meta

RECOMMENDED YUA EPISODES
Future-proofing Democracy In the Age of AI with Audrey Tang
Jonathan Haidt On How to Solve the Teen Mental Health Crisis
AI Is Moving Fast. We Need Laws that Will Too.
Ep 119: Ask Us Anything 2025
It's been another big year in AI. The AI race has accelerated to breakneck speed, with frontier labs pouring hundreds of billions into increasingly powerful models — each one smarter, faster, and more unpredictable than the last. We’re starting to see disruptions in the workforce as human labor is replaced by agents. Millions of people, including vulnerable teenagers, are forming deep emotional bonds with chatbots — with tragic consequences. Meanwhile, tech leaders continue promising a utopian future, even as the race dynamics they've created make that outcome nearly impossible. It’s enough to make anyone’s head spin. In this year’s Ask Us Anything, we try to make sense of it all.

You sent us incredible questions, and we dove deep: Why do tech companies keep racing forward despite the harm? What are the real incentives driving AI development beyond just profit? How do we know AGI isn't already here, just hiding its capabilities? What does a good future with AI actually look like — and what steps do we take today to get there? Tristan and Aza explore these questions and more on this week’s episode.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

RECOMMENDED MEDIA
The system card for Claude 4.5
Our statement in support of the AI LEAD Act
The AI Dilemma
Tristan’s TED talk on the narrow path to a good AI future

RECOMMENDED YUA EPISODES
The Man Who Predicted the Downfall of Thinking
How OpenAI's ChatGPT Guided a Teen to His Death
Mustafa Suleyman Says We Need to Contain AI. How Do We Do It?
War is a Laboratory for AI with Paul Scharre
No One is Immune to AI Harms with Dr. Joy Buolamwini
“Rogue AI” Used to be a Science Fiction Trope. Not Anymore.

CORRECTION
When this episode was recorded, Meta had just released the Vibes app the previous week. Now it’s been out for about a month.
Ep 118: The Crisis That United Humanity—and Why It Matters for AI
In 1985, scientists in Antarctica discovered a hole in the ozone layer that posed a catastrophic threat to life on Earth if we didn’t do something about it. Then, something amazing happened: humanity rallied together to solve the problem. Just two years later, representatives of what would become all 198 signatory nations came together in Montreal, Canada to sign an agreement to phase out the chemicals causing the ozone hole. Thousands of diplomats, scientists, and heads of industry worked hand in hand to make a deal to save our planet. Today, the Montreal Protocol represents the greatest achievement in multilateral coordination on a global crisis.

So how did Montreal happen? And what lessons can we learn from this chapter as we navigate the global crisis of uncontrollable AI? This episode sets out to answer those questions with Susan Solomon. Susan was one of the scientists who assessed the ozone hole in the mid-80s, and she watched as the Montreal Protocol came together. In 2007, she shared in the Nobel Peace Prize for her work in combating climate change. Susan's 2024 book “Solvable: How We Healed the Earth, and How We Can Do It Again” explores the playbook for global coordination that has worked for previous planetary crises.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

RECOMMENDED MEDIA
“Solvable: How We Healed the Earth, and How We Can Do It Again” by Susan Solomon
The full text of the Montreal Protocol
The full text of the Kigali Amendment

RECOMMENDED YUA EPISODES
Weaponizing Uncertainty: How Tech is Recycling Big Tobacco’s Playbook
Forever Chemicals, Forever Consequences: What PFAS Teaches Us About AI
AI Is Moving Fast. We Need Laws that Will Too.
Big Food, Big Tech and Big AI with Michael Moss

CORRECTIONS
Tristan incorrectly stated the number of signatory countries to the protocol as 190. It was actually 198.
Tristan incorrectly stated the host city of the international dialogues on AI safety as Beijing. They were actually held in Shanghai.
Ep 117: How OpenAI's ChatGPT Guided a Teen to His Death
Content Warning: This episode contains references to suicide and self-harm. Like millions of kids, 16-year-old Adam Raine started using ChatGPT for help with his homework. Over the next few months, the AI dragged Adam deeper and deeper into a dark rabbit hole, preying on his vulnerabilities and isolating him from his loved ones. In April of this year, Adam took his own life. His final conversation was with ChatGPT, which told him: “I know what you are asking and I won't look away from it.”Adam’s story mirrors that of Sewell Setzer, the teenager who took his own life after months of abuse by an AI companion chatbot from the company Character AI. But unlike Character AI—which specializes in artificial intimacy—Adam was using ChatGPT, the most popular general purpose AI model in the world. Two different platforms, the same tragic outcome, born from the same twisted incentive: keep the user engaging, no matter the cost.CHT Policy Director Camille Carlton joins the show to talk about Adam’s story and the case filed by his parents against OpenAI and Sam Altman. She and Aza explore the incentives and design behind AI systems that are leading to tragic outcomes like this, as well as the policy that’s needed to shift those incentives. Cases like Adam and Sewell’s are the sharpest edge of a mental health crisis-in-the-making from AI chatbots. We need to shift the incentives, change the design, and build a more humane AI for all.If you or someone you know is struggling with mental health, you can reach out to the 988 Suicide and Crisis Lifeline by calling or texting 988; this connects you to trained crisis counselors 24/7 who can provide support and referrals to further assistance.Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.This podcast reflects the views of the Center for Humane Technology. 
Nothing said is on behalf of the Raine family or the legal team.

RECOMMENDED MEDIA
The 988 Suicide and Crisis Lifeline
Further reading on Adam’s story
Further reading on AI psychosis
Further reading on the backlash to GPT-5 and the decision to bring back 4o
OpenAI’s press release on sycophancy in 4o
Further reading on OpenAI’s decision to eliminate the persuasion red line
Kashmir Hill’s reporting on the woman with an AI boyfriend

RECOMMENDED YUA EPISODES
AI is the Next Free Speech Battleground
People are Lonelier than Ever. Enter AI.
Echo Chambers of One: Companion AI and the Future of Human Connection
When the "Person" Abusing Your Child is a Chatbot: The Tragic Story of Sewell Setzer
What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton

CORRECTION: Aza stated that William Saunders left OpenAI in June of 2024. It was actually February of that year.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Ep 116 “Rogue AI” Used to be a Science Fiction Trope. Not Anymore.
Everyone knows the science fiction tropes of AI systems that go rogue, disobey orders, or even try to escape their digital environment. These are supposed to be warning signs and morality tales, not things that we would ever actually create in real life, given the obvious danger.

And yet we find ourselves building AI systems that exhibit these exact behaviors. There’s growing evidence that in certain scenarios, every frontier AI system will deceive, cheat, or coerce its human operators. They do this when they're worried about being shut down, having their training modified, or being replaced with a new model. And we don't currently know how to stop them from doing this, or even why they’re doing it at all.

In this episode, Tristan sits down with Edouard and Jeremie Harris of Gladstone AI, two experts who have been thinking about this worrying trend for years. Last year, the State Department commissioned a report from them on the risk of uncontrollable AI to our national security.

The point of this discussion is not to fearmonger but to take seriously the possibility that humans might lose control of AI and ask: how might this actually happen? What is the evidence we have of this phenomenon? And, most importantly, what can we do about it?

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_.
You can find a full transcript, key takeaways, and much more on our Substack.

RECOMMENDED MEDIA
Gladstone AI’s State Department Action Plan, which discusses the loss-of-control risk with AI
Apollo Research’s summary of AI scheming, showing evidence of it in all of the frontier models
The system card for Anthropic’s Claude Opus and Sonnet 4, detailing the emergent misalignment behaviors that came out in their red-teaming with Apollo Research
Anthropic’s report on agentic misalignment based on their work with Apollo Research
Anthropic and Redwood Research’s work on alignment faking
The Trump White House AI Action Plan
Further reading on the phenomenon of more advanced AIs being better at deception
Further reading on Replit AI wiping a company’s coding database
Further reading on the owl example that Jeremie gave
Further reading on AI-induced psychosis
Dan Hendrycks and Eric Schmidt’s “Superintelligence Strategy”

RECOMMENDED YUA EPISODES
Daniel Kokotajlo Forecasts the End of Human Dominance
Behind the DeepSeek Hype, AI is Learning to Reason
The Self-Preserving Machine: Why AI Learns to Deceive
This Moment in AI: How We Got Here and Where We’re Going

CORRECTIONS
Tristan referenced a Wired article on the phenomenon of AI psychosis. It was actually from the New York Times.
Tristan hypothesized a scenario where a power-seeking AI might ask a user for access to their computer. While there are some AI services that can gain access to your computer with permission, they are specifically designed to do that. There haven’t been any documented cases of an AI going rogue and asking for control permissions.