AI Safety Newsletter

Center for AI Safety

78 episodes · EN-GB

Show overview

AI Safety Newsletter has been publishing since 2023 and has built a catalogue of 78 episodes over the three years since, roughly 20 hours of audio in total. Releases follow a fortnightly cadence.

Episodes typically run ten to twenty minutes, with most landing between 10 and 13 minutes, and run-time is fairly consistent across the catalogue. None of the episodes are flagged explicit by the publisher. It is catalogued as an EN-GB-language Technology show.

The show is actively publishing: the most recent episode landed 2 weeks ago, and 5 episodes are already out so far this year. The busiest year was 2023, with 31 episodes published. The show is published by the Center for AI Safety.

Episodes
78
Running
2023–2026 · 3y
Median length
11 min
Cadence
Fortnightly

From the publisher

Narrations of the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. This podcast also contains narrations of some of our publications.

About us

The Center for AI Safety (CAIS) is a San Francisco-based research and field-building nonprofit. We believe that artificial intelligence has the potential to profoundly benefit the world, provided that we can develop and use it safely. However, in contrast to the dramatic progress in AI, many basic problems in AI safety have yet to be solved. Our mission is to reduce societal-scale risks associated with AI by conducting safety research, building the field of AI safety researchers, and advocating for safety standards. Learn more at https://safe.ai

Latest Episodes

View all 78 episodes

AISN #72: New Research on AI Wellbeing

May 1, 2026 · 10 min

AISN #71: Cyberattacks & Datacenter Moratorium Bill

Also, updates on the Anthropic vs. Pentagon court case.

We’re Hiring. Opportunities at CAIS include: Head of Public Engagement, Principal, Special Projects, Program Manager, Operations Manager, and other roles. If you’re interested in working on reducing AI risk alongside a talented, mission-driven team, consider applying!

AI Software Infrastructure Cyberattacks

Recently, cyberattacks targeting the AI industry's software infrastructure stole private information potentially worth billions of dollars and inserted backdoors into developers’ computers. Google Threat Intelligence Group reported that one of the largest cyberattacks in this wave was carried out by North Korea-linked hackers.

The stolen data may be worth billions. Hackers stole and auctioned private data from Mercor, an AI training data supplier for OpenAI and Anthropic which was recently valued at $10 billion. Mercor collects AI training data from a large number of experts, as well as highly sensitive personal and biometric data for identity verification. This attack compromises not only the data that Mercor sells, but also internal data that could be used to impersonate their hired experts. A person familiar with the situation stated that Mercor has paid the hackers’ requested ransom, although it remains unclear if the hackers intend to release or sell the data [...]

Outline:
(00:41) AI Software Infrastructure Cyberattacks
(02:34) Datacenter Moratorium and Export Controls Bill
(04:21) Anthropic v. Department of War Lawsuit
(07:23) In Other News
(07:26) Government
(07:46) Industry
(08:20) Civil Society

First published: April 10th, 2026
Source: https://newsletter.safe.ai/p/aisn-71-cyberattacks-and-datacenter

Want more? Check out our ML Safety Newsletter (https://newsletter.mlsafety.org) for technical safety research. Narrated by TYPE III AUDIO (https://type3.audio).

Apr 10, 2026 · 9 min

AISN #70: AI Layoffs and Automated Warfare

Also, a new open letter advocating for pro-human values and control over AI development.

Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.

In this edition, we discuss AI automation and augmentation of warfare and technology jobs, as well as a new open letter outlining pro-human values in the face of AI development.

Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

We’re Hiring. We’re hiring an editor! Help us surface the most compelling stories in AI safety and shape how the world understands this fast-moving field. Other opportunities at CAIS include: Head of Public Engagement, Program Manager, Operations Associate, and other roles. If you’re interested in working on reducing AI risk alongside a talented, mission-driven team, consider applying!

AI-Driven Layoffs

Several large software companies, such as Amazon and Meta, are planning to cut tens of thousands of employees, citing increased productivity with AI. This continues a growing but contested trend of layoffs in sectors where AI performs best, such as software development and marketing.

Layoffs affect almost half of some companies. Meta recently announced plans to let over [...]

Outline:
(00:58) AI-Driven Layoffs
(03:14) AI Automation of Warfare
(05:36) Pro-Human Open Letter
(07:43) In Other News
(07:47) Government
(08:11) Industry

First published: March 24th, 2026
Source: https://newsletter.safe.ai/p/ai-safety-newsletter-70-ai-layoffs

Mar 24, 2026 · 9 min

AISN #69: Department of War, Anthropic, and National Security

Also, Anthropic removes a core safety commitment.

Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.

In this edition, we discuss the conflicts between Anthropic and the Department of War, and Anthropic's recent removal of a core safety commitment.

Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

We’re Hiring. We’re hiring an editor! Help us surface the most compelling stories in AI safety and shape how the world understands this fast-moving field. Other opportunities at CAIS include: Head of Public Engagement, Program Manager, Operations Associate, and other roles. If you’re interested in working on reducing AI risk alongside a talented, mission-driven team, consider applying!

Pentagon Declares Anthropic a Supply Chain Risk to National Security

Anthropic CEO Dario Amodei (left) and US Secretary of War Pete Hegseth (right)

On Thursday, March 5th, the US Department of War (DoW) announced that Anthropic is designated a “supply chain risk,” meaning that Anthropic products cannot be used by the DoW or in any defense contracts. This comes after several weeks of tensions between the two organizations over whether Anthropic models would be used for [...]

Outline:
(00:59) Pentagon Declares Anthropic a Supply Chain Risk to National Security
(05:51) Anthropic Drops Core Safety Commitment
(07:22) Opportunity for Experienced Researchers: AI and Society Fellowship
(07:58) In Other News
(08:02) Government
(09:07) Industry
(10:17) Civil Society

First published: March 13th, 2026
Source: https://newsletter.safe.ai/p/ai-safety-newsletter-69-department

Mar 13, 2026 · 11 min

AISN #68: Moltbook Exposes Risky AI Behavior

Plus: the Pentagon accelerates AI, and GPT-5.2 solves open mathematics problems.

Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.

In this edition, we discuss the AI agent social network Moltbook, the Pentagon's new “AI-First” strategy, and recent math breakthroughs powered by LLMs.

Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

We’re Hiring. We’re hiring an editor! Help us surface the most compelling stories in AI safety and shape how the world understands this fast-moving field. Other opportunities at CAIS include: Research Engineer, Research Scientist, Director of Development, Special Projects Associate, and Special Projects Manager. If you’re interested in working on reducing AI risk alongside a talented, mission-driven team, consider applying!

Moltbook Sparks Safety Concerns

Screencapture from Moltbook's home page.

Moltbook is a new social network for AI agents. From nearly the moment it went live, human observers have noted numerous troubling patterns in what's being posted.

How Moltbook works. Moltbook is a Reddit-style social network built on a framework that lets personal AI assistants run locally and accept tasks via messaging platforms. Agents check Moltbook regularly (i.e., every [...]

Outline:
(01:04) Moltbook Sparks Safety Concerns
(05:10) Pentagon Mandates AI-First Strategy
(07:59) AI Solves Open Math Problems
(10:41) In Other News
(10:45) Government
(11:31) Industry
(13:06) Civil Society
(14:52) Discussion about this post
(14:56) Ready for more?

First published: February 2nd, 2026
Source: https://newsletter.safe.ai/p/ai-safety-newsletter-68-moltbook

Feb 2, 2026 · 15 min

AISN #67: Trump’s preemption order, H200s go to China, and new frontier AI from OpenAI and DeepSeek

Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.

In this edition we discuss President Trump's executive order targeting state AI laws, Nvidia's approval to sell China high-end accelerators, and new frontier models from OpenAI and DeepSeek.

Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

Executive Order Blocks State AI Laws

U.S. President Donald Trump issued an executive order aimed at halting state efforts to regulate AI. The order, which differs from a version leaked last month, leverages federal funding and enforcement to evaluate, challenge, and limit state laws. The order caps off a year in which several ambitious state AI proposals were either watered down or vetoed outright.

A push for regulatory uniformity. The order aims to reduce regulatory friction for companies by eliminating the variety of state-level regimes and limiting states' power to impact commerce beyond their own borders. It calls for replacing them with a single, unspecified federal framework. [...]

Outline:
(00:34) Executive Order Blocks State AI Laws
(03:42) US Permits Nvidia to Sell H200s to China
(06:00) ChatGPT-5.2 and DeepSeek-v3.2 Arrive
(08:23) In Other News
(08:27) Industry
(09:13) Civil Society
(09:58) Government
(11:07) Discussion about this post
(11:11) Ready for more?

First published: December 17th, 2025
Source: https://newsletter.safe.ai/p/ai-safety-newsletter-67-trumps-preemption

Dec 17, 2025 · 11 min

AISN #66: Evaluating Frontier Models, New Gemini and Claude, Preemption is Back

Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.

In this edition we discuss the new AI Dashboard, recent frontier models from Google and Anthropic, and a revived push to preempt state AI regulations.

Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

CAIS Releases the AI Dashboard for Frontier Performance

CAIS launched its AI Dashboard, which evaluates frontier AI systems on capability and safety benchmarks. The dashboard also tracks the industry's overall progression toward broader milestones such as AGI, automation of remote labor, and full self-driving.

How the dashboard works. The AI Dashboard features three leaderboards (one for text, one for vision, and one for risks) where frontier models are ranked according to their average score across a battery of benchmarks. Because CAIS evaluates models directly across a wide range of tasks, the dashboard provides apples-to-apples comparisons of how different frontier models perform on the same set of evaluations and safety-relevant behaviors.

Ranking frontier models for [...]

Outline:
(00:33) CAIS Releases the AI Dashboard for Frontier Performance
(04:05) Politicians Revive Push for Moratorium on State AI Laws
(06:39) Gemini 3 Pro and Claude Opus 4.5 Arrive
(09:17) In Other News
(09:20) Government
(10:15) Industry
(11:03) Civil Society
(12:00) Discussion about this post

First published: December 2nd, 2025
Source: https://newsletter.safe.ai/p/ai-safety-newsletter-66-aisn-66-evaluating

Dec 2, 2025 · 12 min

AISN #65: Measuring Automation and Superintelligence Moratorium Letter

<p> Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.</p><p> In this edition: A new benchmark measures AI automation; 50,000 people, including top AI scientists, sign an open letter calling for a superintelligence moratorium.</p><p> Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.</p><p><strong> CAIS and Scale AI release Remote Labor Index</strong></p><p> The Center for AI Safety (CAIS) and Scale AI have released the Remote Labor Index (RLI), which tests whether AIs can automate a wide array of real computer work projects. RLI is intended to inform policy, AI research, and businesses about the effects of automation as AI continues to advance.</p><p> RLI is the first benchmark of its kind. Previous AI benchmarks measure AIs on their intelligence and their abilities on isolated and specialized tasks, such as basic web browsing or coding. While these benchmarks measure useful capabilities, they don’t measure how AIs can affect the economy. RLI is the first benchmark to collect computer-based work projects from the real economy, containing work from many different professions, such as architecture, product design, video game development, and design.</p>Examples of RLI Projects<p> Current [...]</p> <p>---</p><p><strong>Outline:</strong></p><p>(00:29) CAIS and Scale AI release Remote Labor Index</p><p>(02:04) Bipartisan Coalition for Superintelligence Moratorium</p><p>(04:18) In Other News</p><p>(05:56) Discussion about this post</p> <p>---</p> <p><b>First published:</b><br/> October 29th, 2025 </p> <p><b>Source:</b><br/> <a href="https://newsletter.safe.ai/p/ai-safety-newsletter-65-measuring?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Source+URL+in+episode+description&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">https://newsletter.safe.ai/p/ai-safety-newsletter-65-measuring</a> </p> <p>---</p> <p>Want more? 
Check out our <a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Episode+description+footer" target="_blank" rel="noreferrer">ML Safety Newsletter</a> for technical safety research.</p> <p>Narrated by <a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Narrated+by+TYPE+III+AUDIO&utm_term=center_for_ai_safety&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">TYPE III AUDIO</a>.</p> <p>---</p><div style="max-width: 100%";><p><strong>Images from the article:</strong></p><a href="https://substackcdn.com/image/fetch/$s_!JvUw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe24bafcb-ca39-4266-a23e-40b80ed54605_4898x5109.jpeg" target="_blank"><img src="https://substackcdn.com/image/fetch/$s_!JvUw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe24bafcb-ca39-4266-a23e-40b80ed54605_4898x5109.jpeg" alt="Examples of RLI Projects" style="max-width: 100%;" /></a><hr style="margin-top: 24px; margin-bottom: 24px;" /><a href="https://substackcdn.com/image/fetch/$s_!5KNO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb18e8802-7260-41c0-913f-ee2c4c19c245_1600x945.png" target="_blank"><img src="https://substackcdn.com/image/fetch/$s_!5KNO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb18e8802-7260-41c0-913f-ee2c4c19c245_1600x945.png" alt="Current AI agents complete at most 2.5% of projects in RLI, but are improving steadily." 
style="max-width: 100%;" /></a><hr style="margin-top: 24px; margin-bottom: 24px;" /><a href="https://substackcdn.com/image/fetch/$s_!AjsK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cbeff48-e3b1-4883-9030-968235dd3ee7_846x227.png" target="_blank"><img src="https://substackcdn.com/image/fetch/$s_!AjsK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cbeff48-e3b1-4883-9030-968235dd3ee7_846x227.png" alt="Survey statistics showing U.S. adults' views on AI development and regulation." style="max-width: 100%;" /></a><p><em>Apple Podcasts and Spotify do not show images in the episode description. Try <a href="https://pocketcasts.com/" target="_blank" rel="noreferrer">Pocket Casts</a>, or another podcast app.</em></p></div>

Oct 29, 2025 · 6 min

AISN #64: New AGI Definition and Senate Bill Would Establish Liability for AI Harms

<p> In this edition: A new bill in the Senate would hold AI companies liable for harms their products create; China tightens its export controls on rare earth metals; a definition of AGI.</p><p> As a reminder, we’re hiring a writer for the newsletter.</p><p> Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.</p><p><strong> Senate Bill Would Establish Liability for AI Harms</strong></p><p> Sens. Dick Durbin (D-Ill.) and Josh Hawley (R-Mo.) introduced the AI LEAD Act, which would establish a federal cause of action allowing people harmed by AI systems to sue AI companies.</p><p> Corporations are usually liable for harms their products create. When a company sells a product in the United States that harms someone, that person can generally sue that company for damages under the doctrine of product liability. Those suits force companies to internalize the harms their products create—and incentivize them to make their products safer.</p><p> [...]</p> <p>---</p><p><strong>Outline:</strong></p><p>(00:35) Senate Bill Would Establish Liability for AI Harms</p><p>(02:48) China Tightens Export Controls on Rare Earth Metals</p><p>(05:28) A Definition of AGI</p><p>(08:31) In Other News</p><p>(10:19) Discussion about this post</p> <p>---</p> <p><b>First published:</b><br/> October 16th, 2025 </p> <p><b>Source:</b><br/> <a href="https://newsletter.safe.ai/p/ai-safety-newsletter-63-new-agi-definition?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Source+URL+in+episode+description&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">https://newsletter.safe.ai/p/ai-safety-newsletter-63-new-agi-definition</a> </p> <p>---</p> <p>Want more? 
Check out our <a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Episode+description+footer" target="_blank" rel="noreferrer">ML Safety Newsletter</a> for technical safety research.</p> <p>Narrated by <a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Narrated+by+TYPE+III+AUDIO&utm_term=center_for_ai_safety&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">TYPE III AUDIO</a>.</p> <p>---</p><div style="max-width: 100%";><p><strong>Images from the article:</strong></p><a href="https://substackcdn.com/image/fetch/$s_!IY3v!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F579c41b1-9f1d-4f29-ab53-3c451e5e6e58_980x653.png" target="_blank"><img src="https://substackcdn.com/image/fetch/$s_!IY3v!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F579c41b1-9f1d-4f29-ab53-3c451e5e6e58_980x653.png" alt="A Chinese rare earth mine. Source __T3A_LINK_IN_POST__." style="max-width: 100%;" /></a><hr style="margin-top: 24px; margin-bottom: 24px;" /><a href="https://substackcdn.com/image/fetch/$s_!PDPm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d55bd85-caa6-4252-8cc7-6470a89c5f19_1600x1158.png" target="_blank"><img src="https://substackcdn.com/image/fetch/$s_!PDPm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d55bd85-caa6-4252-8cc7-6470a89c5f19_1600x1158.png" alt="Spider chart comparing GPT-4 (2023) and GPT-5 (2025) capabilities across multiple dimensions. The chart shows performance metrics in areas like Knowledge, Reading & Writing, Math, Reasoning, Working Memory, Memory Storage, Memory Retrieval, Visual, Auditory, and Speed. 
The red line (GPT-5) generally extends further out than the blue line (GPT-4), suggesting projected improvements across most capabilities." style="max-width: 100%;" /></a><p><em>Apple Podcasts and Spotify do not show images in the episode description. Try <a href="https://pocketcasts.com/" target="_blank" rel="noreferrer">Pocket Casts</a>, or another podcast app.</em></p></div>

Oct 16, 2025 · 10 min

AISN #63: California’s SB-53 Passes the Legislature

<p> In this edition: California's legislature sent SB-53—the ‘Transparency in Frontier Artificial Intelligence Act’—to Governor Newsom's desk. If the bill is signed into law, California will become the first US state to regulate catastrophic AI risk.</p><p> Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.</p><p> A note from Corin: I’m leaving the AI Safety Newsletter soon to start law school—but if you’d like to hear more from me, I’m planning to continue to write about AI in a new personal newsletter, Conditionals. On a related note, we’re also hiring a writer for the newsletter.</p><p><strong> California's SB-53 Passes the Legislature</strong></p><p> SB-53 is the Legislature's weaker sequel to last year's vetoed SB-1047. After vetoing that bill, Governor Gavin Newsom convened the Joint California Policy Working Group on AI Frontier Models. The group's June report recommended transparency, incident reporting, and whistleblower protections as near-term priorities for governing AI systems. SB-53 (the [...]</p> <p>---</p><p><strong>Outline:</strong></p><p>(00:49) California's SB-53 Passes the Legislature</p><p>(06:33) In Other News</p><p>(08:37) Discussion about this post</p> <p>---</p> <p><b>First published:</b><br/> September 24th, 2025 </p> <p><b>Source:</b><br/> <a href="https://newsletter.safe.ai/p/ai-safety-newsletter-63-californias?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Source+URL+in+episode+description&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">https://newsletter.safe.ai/p/ai-safety-newsletter-63-californias</a> </p> <p>---</p> <p>Want more? 
Check out our <a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Episode+description+footer" target="_blank" rel="noreferrer">ML Safety Newsletter</a> for technical safety research.</p> <p>Narrated by <a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Narrated+by+TYPE+III+AUDIO&utm_term=center_for_ai_safety&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">TYPE III AUDIO</a>.</p> <p>---</p><div style="max-width: 100%";><p><strong>Images from the article:</strong></p><a href="https://substackcdn.com/image/fetch/$s_!JC0w!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F872749f2-34d8-4050-b5d2-9929a16c9a0c_1600x609.png" target="_blank"><img src="https://substackcdn.com/image/fetch/$s_!JC0w!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F872749f2-34d8-4050-b5d2-9929a16c9a0c_1600x609.png" alt="The introduction to SB-53’s text. Source __T3A_LINK_IN_POST__." style="max-width: 100%;" /></a><p><em>Apple Podcasts and Spotify do not show images in the episode description. Try <a href="https://pocketcasts.com/" target="_blank" rel="noreferrer">Pocket Casts</a>, or another podcast app.</em></p></div>

Sep 24, 2025 · 9 min

AISN #62: Big Tech Launches $100 Million pro-AI Super PAC

<p> Also: Meta's Chatbot Policies Prompt Backlash Amid AI Reorganization; China Reverses Course on Nvidia H20 Purchases.</p> <p> In this edition: Big tech launches a $100 million pro-AI super PAC; Meta's chatbot policies prompt congressional scrutiny amid the company's AI reorganization; China reverses course on buying Nvidia H20 chips after comments by Secretary of Commerce Howard Lutnick.</p><p> Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.</p><p><strong> Big Tech Launches $100 Million pro-AI Super PAC</strong></p><p> Silicon Valley executives and investors are pouring more than $100 million into a new political network to push back against AI regulations, signaling that the industry intends to be a major player in next year's U.S. midterms.</p><p> The network, called Leading the Future, is backed by a16z and Greg Brockman, modeled on the crypto-focused super PAC Fairshake, and aims to influence AI [...]</p> <p>---</p><p><strong>Outline:</strong></p><p>(00:46) Big Tech Launches $100 Million pro-AI Super PAC</p><p>(02:27) Meta's Chatbot Policies Prompt Backlash Amid AI Reorganization</p><p>(04:45) China Reverses Course on Nvidia H20 Purchases</p><p>(07:21) In Other News</p> <p>---</p> <p><b>First published:</b><br/> August 27th, 2025 </p> <p><b>Source:</b><br/> <a href="https://newsletter.safe.ai/p/ai-safety-newsletter-62-big-tech?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Source+URL+in+episode+description&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">https://newsletter.safe.ai/p/ai-safety-newsletter-62-big-tech</a> </p> <p>---</p> <p>Want more? 
Check out our <a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Episode+description+footer" target="_blank" rel="noreferrer">ML Safety Newsletter</a> for technical safety research.</p> <p>Narrated by <a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Narrated+by+TYPE+III+AUDIO&utm_term=center_for_ai_safety&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">TYPE III AUDIO</a>.</p> <p>---</p><div style="max-width: 100%";><p><strong>Images from the article:</strong></p><a href="https://substackcdn.com/image/fetch/$s_!NQ_Y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31a08d1d-bc5e-43d0-9664-5d3797244a26_1500x500.png" target="_blank"><img src="https://substackcdn.com/image/fetch/$s_!NQ_Y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31a08d1d-bc5e-43d0-9664-5d3797244a26_1500x500.png" alt="Leading The Future’s branding. Source __T3A_LINK_IN_POST__." style="max-width: 100%;" /></a><hr style="margin-top: 24px; margin-bottom: 24px;" /><a href="https://substackcdn.com/image/fetch/$s_!gjRH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3587369f-4268-4546-b4ed-9743fccad5d8_1600x505.png" target="_blank"><img src="https://substackcdn.com/image/fetch/$s_!gjRH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3587369f-4268-4546-b4ed-9743fccad5d8_1600x505.png" alt="An excerpt from Meta’s policies. Source __T3A_LINK_IN_POST__." style="max-width: 100%;" /></a><p><em>Apple Podcasts and Spotify do not show images in the episode description. Try <a href="https://pocketcasts.com/" target="_blank" rel="noreferrer">Pocket Casts</a>, or another podcast app.</em></p></div>

Aug 27, 2025 · 10 min

AISN #61: OpenAI Releases GPT-5

<p> Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.</p><p> In this edition: OpenAI releases GPT-5.</p><p> Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.</p><p><strong> OpenAI Releases GPT-5</strong></p><p> Ever since GPT-4's release in March 2023 marked a step-change improvement over GPT-3, people have used ‘GPT-5’ as a stand-in to speculate about the next generation of AI capabilities. On Thursday, OpenAI released GPT-5. While state-of-the-art in most respects, GPT-5 is not a step-change improvement over competing systems, or even recent OpenAI models—but we shouldn’t have expected it to be.</p><p> GPT-5 is state of the art in most respects. GPT-5 isn’t a single model like GPTs 1 through 4. It is a system of two models: a base model that answers questions quickly and is better at tasks like creative writing (an improved [...]</p> <p>---</p><p><strong>Outline:</strong></p><p>(00:19) OpenAI Releases GPT-5</p><p>(06:20) In Other News</p> <p>---</p> <p><b>First published:</b><br/> August 12th, 2025 </p> <p><b>Source:</b><br/> <a href="https://newsletter.safe.ai/p/ai-safety-newsletter-61-openai-releases?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Source+URL+in+episode+description&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">https://newsletter.safe.ai/p/ai-safety-newsletter-61-openai-releases</a> </p> <p>---</p> <p>Want more? 
Check out our <a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Episode+description+footer" target="_blank" rel="noreferrer">ML Safety Newsletter</a> for technical safety research.</p> <p>Narrated by <a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Narrated+by+TYPE+III+AUDIO&utm_term=center_for_ai_safety&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">TYPE III AUDIO</a>.</p> <p>---</p><div style="max-width: 100%";><p><strong>Images from the article:</strong></p><a href="https://substackcdn.com/image/fetch/$s_!dA-q!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b4694cd-18b8-48e2-9b33-344f9f6604cd_1600x898.png" target="_blank"><img src="https://substackcdn.com/image/fetch/$s_!dA-q!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b4694cd-18b8-48e2-9b33-344f9f6604cd_1600x898.png" alt="Graph titled "Game Progress with Clues" comparing performance of different AI models." style="max-width: 100%;" /></a><hr style="margin-top: 24px; margin-bottom: 24px;" /><a href="https://substackcdn.com/image/fetch/$s_!ZEcb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6db7a75-0090-42ca-8439-c67d5cde44c0_632x876.png" target="_blank"><img src="https://substackcdn.com/image/fetch/$s_!ZEcb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6db7a75-0090-42ca-8439-c67d5cde44c0_632x876.png" alt="Bar graph comparing software engineering accuracy between GPT-5, OpenAI-3, and GPT-40, showing "with/without thinking" performance." 
style="max-width: 100%;" /></a><hr style="margin-top: 24px; margin-bottom: 24px;" /><a href="https://substackcdn.com/image/fetch/$s_!VOUF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa89976d9-abc7-44d4-9d7b-592dada46bc7_744x892.png" target="_blank"><img src="https://substackcdn.com/image/fetch/$s_!VOUF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa89976d9-abc7-44d4-9d7b-592dada46bc7_744x892.png" alt="Bar graph "HealthBench Hard Hallucinations" comparing AI models' hallucination rates." style="max-width: 100%;" /></a><p><em>Apple Podcasts and Spotify do not show images in the episode description. Try <a href="https://pocketcasts.com/" target="_blank" rel="noreferrer">Pocket Casts</a>, or another podcast app.</em></p></div>

Aug 12, 2025 · 9 min

AISN #60: The AI Action Plan

<p> Also: ChatGPT Agent and IMO Gold.</p> <p> In this edition: The Trump Administration publishes its AI Action Plan; OpenAI releases ChatGPT Agent and announces that an experimental model achieved gold medal-level performance on the 2025 International Mathematical Olympiad.</p><p> Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.</p><p><strong> The AI Action Plan</strong></p><p> On July 23rd, the White House released its AI Action Plan. The document is the outcome of a January executive order that required the President's Science Advisor, ‘AI and Crypto Czar’, and National Security Advisor (currently Michael Kratsios, David Sacks, and Marco Rubio) to submit a plan to “sustain and enhance America's global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” President Trump also delivered an hour-long speech on the plan, and signed three executive orders beginning to implement some of its policies.</p><picture></picture>Trump displaying an executive order at the [...] <p>---</p><p><strong>Outline:</strong></p><p>(00:34) The AI Action Plan</p><p>(07:36) ChatGPT Agent and IMO Gold</p><p>(12:48) In Other News</p> <p>---</p> <p><b>First published:</b><br/> July 31st, 2025 </p> <p><b>Source:</b><br/> <a href="https://newsletter.safe.ai/p/ai-safety-newsletter-60-the-ai-action?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Source+URL+in+episode+description&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">https://newsletter.safe.ai/p/ai-safety-newsletter-60-the-ai-action</a> </p> <p>---</p> <p>Want more? 
Check out our <a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Episode+description+footer" target="_blank" rel="noreferrer">ML Safety Newsletter</a> for technical safety research.</p> <p>Narrated by <a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Narrated+by+TYPE+III+AUDIO&utm_term=center_for_ai_safety&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">TYPE III AUDIO</a>.</p> <p>---</p><div style="max-width: 100%";><p><strong>Images from the article:</strong></p><a href="https://substackcdn.com/image/fetch/$s_!yeVV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf95488b-7af9-4342-aec3-fddfd3b5ee7c_1400x933.png" target="_blank"><img src="https://substackcdn.com/image/fetch/$s_!yeVV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf95488b-7af9-4342-aec3-fddfd3b5ee7c_1400x933.png" alt="Trump displaying an executive order at the “Winning the AI Race” summit. Source __T3A_LINK_IN_POST__." style="max-width: 100%;" /></a><hr style="margin-top: 24px; margin-bottom: 24px;" /><a href="https://substackcdn.com/image/fetch/$s_!YR3_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32c045cf-daf7-4254-8cdc-4dd861f2c397_884x802.png" target="_blank"><img src="https://substackcdn.com/image/fetch/$s_!YR3_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32c045cf-daf7-4254-8cdc-4dd861f2c397_884x802.png" alt="Bar graph titled "Humanity's Last Exam" showing accuracy percentages across different AI tools. The graph compares the performance of various AI configurations, with accuracy scores ranging from 20.3% to 41.6%. 
The highest performing setup is ChatGPT with browser and computer terminal access, while the baseline OpenAI model without tools shows the lowest accuracy." style="max-width: 100%;" /></a><hr style="margin-top: 24px; margin-bottom: 24px;" /><a href="https://substackcdn.com/image/fetch/$s_!_NBd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39879533-bbcb-4b77-a1b9-67d248591bf5_1446x852.png" target="_blank"><img src="https://substackcdn.com/image/fetch/$s_!_NBd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39879533-bbcb-4b77-a1b9-67d248591bf5_1446x852.png" alt="Bar graph titled "Economically important tasks" comparing model performance across time periods. The graph shows win/tie rates for three different models (o4-mini, o3, and ChatGPT agent) against human performance, categorized by estimated task completion times." style="max-width: 100%;" /></a><p><em>Apple Podcasts and Spotify do not show images in the episode description. Try <a href="https://pocketcasts.com/" target="_blank" rel="noreferrer">Pocket Casts</a>, or another podcast app.</em></p></div>

Jul 31, 2025 · 15 min

AISN #59: EU Publishes General-Purpose AI Code of Practice

<p> Plus: Meta Superintelligence Labs.</p> <p> Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.</p><p> In this edition: The EU published a General-Purpose AI Code of Practice for AI providers, and Meta is spending billions revamping its superintelligence development efforts.</p><p> Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.</p><p><strong> EU Publishes General-Purpose AI Code of Practice</strong></p><p> In June 2024, the EU adopted the AI Act, which remains the world's most significant law regulating AI systems. The Act bans some uses of AI like social scoring and predictive policing and limits other “high risk” uses such as generating credit scores or evaluating educational outcomes. It also regulates general-purpose AI (GPAI) systems, imposing transparency requirements, copyright protection policies, and safety and security standards for models that pose systemic risk (defined as those trained [...]</p> <p>---</p><p><strong>Outline:</strong></p><p>(00:31) EU Publishes General-Purpose AI Code of Practice</p><p>(04:50) Meta Superintelligence Labs</p><p>(06:17) In Other News</p> <p>---</p> <p><b>First published:</b><br/> July 15th, 2025 </p> <p><b>Source:</b><br/> <a href="https://newsletter.safe.ai/p/ai-safety-newsletter-59-eu-publishes?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Source+URL+in+episode+description&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">https://newsletter.safe.ai/p/ai-safety-newsletter-59-eu-publishes</a> </p> <p>---</p> <p>Want more? 
Check out our <a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Episode+description+footer" target="_blank" rel="noreferrer">ML Safety Newsletter</a> for technical safety research.</p> <p>Narrated by <a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Narrated+by+TYPE+III+AUDIO&utm_term=center_for_ai_safety&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">TYPE III AUDIO</a>.</p> <p>---</p><div style="max-width: 100%";><p><strong>Images from the article:</strong></p><a href="https://substackcdn.com/image/fetch/$s_!glEy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd30e7d8d-65ae-4c7c-aa81-f7e56c8b8c96_1360x966.png" target="_blank"><img src="https://substackcdn.com/image/fetch/$s_!glEy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd30e7d8d-65ae-4c7c-aa81-f7e56c8b8c96_1360x966.png" alt="Flowchart showing systemic risk assessment and mitigation process with decision points." style="max-width: 100%;" /></a><p><em>Apple Podcasts and Spotify do not show images in the episode description. Try <a href="https://pocketcasts.com/" target="_blank" rel="noreferrer">Pocket Casts</a>, or another podcast app.</em></p></div>

Jul 15, 2025 · 9 min

AISN #58: Senate Removes State AI Regulation Moratorium

<p> Plus: Judges Split on Whether Training AI on Copyrighted Material is Fair Use.</p> <p> In this edition: The Senate removes a provision from Republicans’ “Big Beautiful Bill” aimed at restricting states from regulating AI; two federal judges split on whether training AI on copyrighted books is fair use.</p><p> Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.</p><p><strong> Senate Removes State AI Regulation Moratorium</strong></p><p> The Senate removed a provision from Republicans’ “Big Beautiful Bill” aimed at restricting states from regulating AI. The moratorium would have prohibited states from receiving federal broadband expansion funds if they regulated AI—however, it faced procedural and political challenges in the Senate, and was ultimately removed in a vote of 99-1. Here's what happened.</p><p> A watered-down moratorium cleared the Byrd Rule. In an attempt to bypass the Byrd Rule, which prohibits policy provisions in budget bills, the Senate Commerce Committee revised the [...]</p> <p>---</p><p><strong>Outline:</strong></p><p>(00:35) Senate Removes State AI Regulation Moratorium</p><p>(03:04) Judges Split on Whether Training AI on Copyrighted Material is Fair Use</p><p>(07:19) In Other News</p> <p>---</p> <p><b>First published:</b><br/> July 3rd, 2025 </p> <p><b>Source:</b><br/> <a href="https://newsletter.safe.ai/p/ai-safety-newsletter-58-senate-removes?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Source+URL+in+episode+description&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">https://newsletter.safe.ai/p/ai-safety-newsletter-58-senate-removes</a> </p> <p>---</p> <p>Want more? 
Check out our <a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Episode+description+footer" target="_blank" rel="noreferrer">ML Safety Newsletter</a> for technical safety research.</p> <p>Narrated by <a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Narrated+by+TYPE+III+AUDIO&utm_term=center_for_ai_safety&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">TYPE III AUDIO</a>.</p> <p>---</p><div style="max-width: 100%";><p><strong>Images from the article:</strong></p><a href="https://substackcdn.com/image/fetch/$s_!3W7Q!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0121db23-e6ab-48b8-9f8e-50a6e3705f24_1600x1067.jpeg" target="_blank"><img src="https://substackcdn.com/image/fetch/$s_!3W7Q!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0121db23-e6ab-48b8-9f8e-50a6e3705f24_1600x1067.jpeg" alt="Sen. Blackburn cosponsored the Kids Online Safety Act last year. (Source __T3A_LINK_IN_POST__.)" style="max-width: 100%;" /></a><p><em>Apple Podcasts and Spotify do not show images in the episode description. Try <a href="https://pocketcasts.com/" target="_blank" rel="noreferrer">Pocket Casts</a>, or another podcast app.</em></p></div>

Jul 3, 2025 · 9 min

AISN #57: The RAISE Act

<p> In this edition: The New York Legislature passes an act regulating frontier AI—but it may not be signed into law for some time.</p><p> Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.</p><p><strong> The RAISE Act</strong></p><p> New York may soon become the first state to regulate frontier AI systems. On June 12, the state's legislature passed the Responsible AI Safety and Education (RAISE) Act. If New York Governor Kathy Hochul signs it into law, the RAISE Act will be the most significant state AI legislation in the U.S.</p><p> New York's RAISE Act imposes four guardrails on frontier labs: developers must publish a safety plan, hold back unreasonably risky models, disclose major incidents, and face penalties for non-compliance.</p><ol> <li> <p> Publish and maintain a safety plan. Before deployment, developers must post a redacted “safety and security protocol,” transmit the plan to both the attorney general and the [...]</p></li></ol> <p>---</p><p><strong>Outline:</strong></p><p>(00:21) The RAISE Act</p><p>(04:43) In Other News</p> <p>---</p> <p><b>First published:</b><br/> June 17th, 2025 </p> <p><b>Source:</b><br/> <a href="https://newsletter.safe.ai/p/ai-safety-newsletter-57-the-raise?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Source+URL+in+episode+description&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">https://newsletter.safe.ai/p/ai-safety-newsletter-57-the-raise</a> </p> <p>---</p> <p>Want more? 
Check out our <a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Episode+description+footer" target="_blank" rel="noreferrer">ML Safety Newsletter</a> for technical safety research.</p> <p>Narrated by <a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Narrated+by+TYPE+III+AUDIO&utm_term=center_for_ai_safety&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">TYPE III AUDIO</a>.</p> <p>---</p><div style="max-width: 100%";><p><strong>Images from the article:</strong></p><a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faaa39fa0-a05c-4785-9130-ab331a0e0e34_1600x427.png" target="_blank"><img src="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faaa39fa0-a05c-4785-9130-ab331a0e0e34_1600x427.png" alt="A diagram depicting the bill’s current status. Source __T3A_LINK_IN_POST__." style="max-width: 100%;" /></a><p><em>Apple Podcasts and Spotify do not show images in the episode description. Try <a href="https://pocketcasts.com/" target="_blank" rel="noreferrer">Pocket Casts</a>, or another podcast app.</em></p></div>

Jun 17, 2025 · 7 min

AISN #56: Google Releases Veo 3

<p> Plus, Opus 4 Demonstrates the Fragility of Voluntary Governance.</p> <p> In this edition: Google released a frontier video generation model at its annual developer conference; Anthropic's Claude Opus 4 demonstrates the danger of relying on voluntary governance.</p><p> Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.</p><p><strong> Google Releases Veo 3</strong></p><p> Last week, Google made several AI announcements at I/O 2025, its annual developer conference. An announcement of particular note is Veo 3, Google's newest video generation model.</p><p> Frontier video and audio generation. Veo 3 outperforms other models on human preference benchmarks, and generates both audio and video.</p><picture></picture>Google showcasing a video generated with Veo 3. (Source)<p> If you just look at benchmarks, Veo 3 is a substantial improvement over other systems. But relative benchmark improvement only tells part of the story—the absolute capabilities of systems ultimately determine their usefulness. Veo 3 looks like a marked qualitative [...]</p> <p>---</p><p><strong>Outline:</strong></p><p>(00:33) Google Releases Veo 3</p><p>(03:25) Opus 4 Demonstrates the Fragility of Voluntary Governance</p> <p>---</p> <p><b>First published:</b><br/> May 28th, 2025 </p> <p><b>Source:</b><br/> <a href="https://newsletter.safe.ai/p/ai-safety-newsletter-56-google-releases?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Source+URL+in+episode+description&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">https://newsletter.safe.ai/p/ai-safety-newsletter-56-google-releases</a> </p> <p>---</p> <p>Want more? 
Check out our <a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Episode+description+footer" target="_blank" rel="noreferrer">ML Safety Newsletter</a> for technical safety research.</p> <p>Narrated by <a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Narrated+by+TYPE+III+AUDIO&utm_term=center_for_ai_safety&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">TYPE III AUDIO</a>.</p> <p>---</p><div style="max-width: 100%";><p><strong>Images from the article:</strong></p><a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda24a5e2-92d6-490e-b74f-88fa68203799_1600x900.png" target="_blank"><img src="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda24a5e2-92d6-490e-b74f-88fa68203799_1600x900.png" alt="Google showcasing a video generated with Veo 3. (Source __T3A_LINK_IN_POST__)" style="max-width: 100%;" /></a><hr style="margin-top: 24px; margin-bottom: 24px;" /><a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad471014-fe58-4180-a67a-9b48862263b9_1600x602.png" target="_blank"><img src="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad471014-fe58-4180-a67a-9b48862263b9_1600x602.png" alt="Two box plots showing "Uplift Trial" results for bioweapons acquisition across different groups." style="max-width: 100%;" /></a><p><em>Apple Podcasts and Spotify do not show images in the episode description. 
Try <a href="https://pocketcasts.com/" target="_blank" rel="noreferrer">Pocket Casts</a>, or another podcast app.</em></p></div>

May 28, 2025 · 8 min

AISN #55: Trump Administration Rescinds AI Diffusion Rule, Allows Chip Sales to Gulf States

<p> Plus, Bills on Whistleblower Protections, Chip Location Verification, and State Preemption.</p> <p> In this edition: The Trump Administration rescinds the Biden-era AI diffusion rule and allows AI chip sales to the UAE and Saudi Arabia; Federal lawmakers propose legislation on AI whistleblowers, location verification for AI chips, and prohibiting states from regulating AI.</p><p> Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.</p><p> The Center for AI Safety is also excited to announce the Summer session of our AI Safety, Ethics, and Society course, running from June 23 to September 14. The course, based on our recently published textbook, is open to participants from all disciplines and countries, and is designed to accommodate full-time work or study.</p><p> Applications for the Summer 2025 course are now open. The final application deadline is May 30th. Visit the course website to learn more and apply.</p><p><strong> Trump Administration Rescinds AI Diffusion [...]</strong></p> <p>---</p><p><strong>Outline:</strong></p><p>(01:12) Trump Administration Rescinds AI Diffusion Rule, Allows Chip Sales to Gulf States</p><p>(04:14) Bills on Whistleblower Protections, Chip Location Verification, and State Preemption</p><p>(06:56) In Other News</p> <p>---</p> <p><b>First published:</b><br/> May 20th, 2025 </p> <p><b>Source:</b><br/> <a href="https://newsletter.safe.ai/p/ai-safety-newsletter-55-trump-administration?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Source+URL+in+episode+description&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">https://newsletter.safe.ai/p/ai-safety-newsletter-55-trump-administration</a> </p> <p>---</p><div style="max-width: 100%;"><p><strong>Images from the article:</strong></p><a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F45cc31a2-d027-43bd-9f4f-2b26b23e051b_1600x1066.png" target="_blank"><img src="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F45cc31a2-d027-43bd-9f4f-2b26b23e051b_1600x1066.png" alt="President Trump with the Emirati president, Sheikh Mohammed bin Zayed, at the AI campus’ unveiling. (Source.)" style="max-width: 100%;" /></a></div>

May 20, 2025 · 9 min

AISN #54: OpenAI Updates Restructure Plan

<p> Plus, AI Safety Collaboration in Singapore.</p> <p> In this edition: OpenAI claims an updated restructure plan would preserve nonprofit control; A global coalition meets in Singapore to propose a research agenda for AI safety.</p><p> Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.</p><p><strong> OpenAI Updates Restructure Plan</strong></p><p> On May 5th, OpenAI announced a new restructure plan. The announcement walks back a December 2024 proposal that would have had OpenAI's nonprofit—which oversees the company's for-profit operations—sell its controlling shares to the for-profit side of the company. That plan drew sharp criticism from former employees and civil‑society groups and prompted a lawsuit from co‑founder Elon Musk, who argued OpenAI was abandoning its charitable mission.</p><p> OpenAI claims the new plan preserves nonprofit control, but is light on specifics. Like the original plan, OpenAI's new plan would have OpenAI Global LLC become a public‑benefit corporation (PBC). However, instead of the nonprofit selling its [...]</p> <p>---</p><p><strong>Outline:</strong></p><p>(00:31) OpenAI Updates Restructure Plan</p><p>(03:19) AI Safety Collaboration in Singapore</p><p>(05:42) In Other News</p> <p>---</p> <p><b>First published:</b><br/> May 13th, 2025 </p> <p><b>Source:</b><br/> <a href="https://newsletter.safe.ai/p/ai-safety-newsletter-54-openai-updates?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Source+URL+in+episode+description&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">https://newsletter.safe.ai/p/ai-safety-newsletter-54-openai-updates</a> </p> <p>---</p><div style="max-width: 100%;"><p><strong>Images from the article:</strong></p><a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41e07002-c5fd-4c60-a259-24780e32f211_1600x1064.png" target="_blank"><img src="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41e07002-c5fd-4c60-a259-24780e32f211_1600x1064.png" alt="Singapore’s Minister for Digital Development and Information speaks at the conference. Source." style="max-width: 100%;" /></a></div>

May 13, 2025 · 8 min

AISN #53: An Open Letter Attempts to Block OpenAI Restructuring

<p> Plus, SafeBench Winners.</p> <p> Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.</p><p> In this edition: Experts and ex-employees urge the Attorneys General of California and Delaware to block OpenAI's for-profit restructure; CAIS announces the winners of its safety benchmarking competition.</p><p> Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.</p><p><strong> An Open Letter Attempts to Block OpenAI Restructuring</strong></p><p> A group of former OpenAI employees and independent experts published an open letter urging the Attorneys General (AGs) of California (where OpenAI operates) and Delaware (where OpenAI is incorporated) to block OpenAI's planned restructuring into a for-profit entity. The letter argues the move would fundamentally undermine the organization's charitable mission by jeopardizing the governance safeguards designed to protect control over AGI from profit motives.</p><p> OpenAI was founded with the charitable purpose to ensure that artificial general intelligence benefits all of humanity. OpenAI's original nonprofit structure, and later its capped-profit model, were designed to control profit motives in the development of AGI, which OpenAI defines as "highly autonomous systems that outperform humans at most economically valuable work." 
The structure was designed to prevent [...]</p> <p>---</p><p><strong>Outline:</strong></p><p>(00:32) An Open Letter Attempts to Block OpenAI Restructuring</p><p>(04:23) SafeBench Winners</p><p>(08:58) Other News</p> <p>---</p> <p><b>First published:</b><br/> April 29th, 2025 </p> <p><b>Source:</b><br/> <a href="https://newsletter.safe.ai/p/an-open-letter-attempts-to-block?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Source+URL+in+episode+description&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">https://newsletter.safe.ai/p/an-open-letter-attempts-to-block</a> </p> <p>---</p><div style="max-width: 100%;"><p><strong>Images from the article:</strong></p><a href="https://substackcdn.com/image/fetch/$s_!-8ts!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9c22c79-f9b2-4fb5-af77-5626e122434f_1600x1394.png" target="_blank"><img src="https://substackcdn.com/image/fetch/$s_!-8ts!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9c22c79-f9b2-4fb5-af77-5626e122434f_1600x1394.png" alt="Table comparing governance safeguards between today and proposed restructuring across six categories." style="max-width: 100%;" /></a></div>

Apr 29, 2025 · 10 min
© 2026 All rights reserved