
EA Forum Podcast (Curated & popular)
263 episodes — Page 1 of 6
“My Dad Worked in a Slaughterhouse. I Made a Documentary About It.” by Jack Hancock-Fairs
“Save Our Pigs!” by LewisBollard
“Reflections on Anthropic and EA” by abrahamrowe
“How to actually give money away” by NickAllardice
“If You Do One Thing for Animals This Year, Do This” by Becca Rogers
[Linkpost] “Starfish” by Aaron Gertler 🔸
“Time Sensitive Urgent Animal Welfare Action” by Bentham’s Bulldog
“Forecasting is Way Overrated, and We Should Stop Funding It” by Marcus Abramovitch 🔸
“My lover, effective altruism” by Natalie_Cargill
“A Database of Near-Term Interventions for Wild Animals” by Bob Fischer
“The AI people have been right a lot” by Dylan Matthews
[Linkpost] “The Anthropic IPO Is Coming. We Aren’t Ready for It.” by Sophie Kim
“AI Safety’s Biggest Talent Gap Isn’t Researchers. It’s Generalists.” by Topaz, Agustín Covarrubias 🔸, Alexandra Bates, Parv Mahajan, Kairos
“The outsized benefits of removing bottlenecks: some personal experiences” by Rory Fenton
I once ran the monitoring and evaluation department of a large nonprofit in rural Tanzania. We had 25 full-time data collectors who ran surveys with farmers on their crop yields and planting practices. My job was to oversee the team, analyze their data and produce reports with actionable recommendations.

The field team was incredibly productive and we were collecting tons of data. We were also producing very few actual reports and were horrendously behind on everything. This seemed to me like a "time to work harder" problem, but reading a business professor's novel about manufacturing helped me see it differently.

The correct solution was to tell my 25 field staff to stop working for two months.

The most engrossing book I've read since Harry Potter

The Goal by Eliyahu Goldratt is a novel about a manager given six months to turn around his factory or face closure, and the tactics he uses to get production up. It should not be good. It is very good. I'm not claiming it's great literature, but it contains one of those ideas that, once seen, cannot be unseen.

A system moves at the speed of its slowest component, and nothing you do to [...]

Outline:
(00:55) The most engrossing book I've read since Harry Potter
(03:54) Rory, the bottleneck
(07:09) Bottles as bottlenecks
(08:23) Some observations on bottlenecks
(09:39) Conclusion

First published: April 8th, 2026
Source: https://forum.effectivealtruism.org/posts/7excF9Zsv3gKvAdAZ/the-outsized-benefits-of-removing-bottlenecks-some-personal
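Goldratt's core claim is quantitative: the throughput of a serial pipeline is the minimum of its stage rates. A minimal sketch (hypothetical numbers, not from the post) makes the "stop the field staff" logic concrete:

```python
# Minimal sketch, assuming a three-stage M&E pipeline with hypothetical
# weekly capacities. Per Goldratt's theory of constraints, end-to-end
# throughput is set by the slowest stage, not by total effort.
rates = {"collect_surveys": 100, "clean_data": 40, "write_reports": 10}

bottleneck = min(rates, key=rates.get)
throughput = rates[bottleneck]  # the system moves at the bottleneck's speed
print(f"Bottleneck: {bottleneck}; reports shipped per week: {throughput}")

# Speeding up a non-bottleneck stage does nothing to output:
rates["collect_surveys"] = 1000
print(f"After 10x faster collection: {min(rates.values())} per week")  # still 10
```

On these assumed numbers, running the upstream stages harder only grows the backlog in front of the bottleneck, which is why pausing collection can be the right move.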
“We’re growing: CEA is increasing its font size” by OllieRodriguez
"We" in this post refers to CEA leadership, who didn't write this post.

In 2025, we grew engagement with CEA's programs by 20–25% across every tier of engagement.

But we recognize that growing the community is about more than engagement or metrics—at its core, growth is about the bigness of things. How can we claim that we've grown the community if the words on the page are the same size?

To match our ambition off the page, it's time to show ambition on the page. We're increasing our font size from 15px to 25px across all platforms. You'll see more from us—clearer text, bigger words, extremely long forum posts.

What this means
- All of our programs will be bigger, in the sense that the documents we write about them will be absolutely huge.
- The forum will have a larger font than any other sensible website—a bigger font means more impactful ideas.
- Our events will have larger screens for our ginormous font. This will mean a significant increase in AV costs.
- We're encouraging local groups to do the same—we're recommending a 30% font size increase this year, and we'll be supporting them to reach 20px font size by the [...]

First published: April 1st, 2026
Source: https://forum.effectivealtruism.org/posts/EBjcpyfPwhByKeWjM/we-re-growing-cea-is-increasing-its-font-size
“Announcing Highly Engaged EAs!” by Sam Anschell
I'm excited to launch Highly Engaged EAs: a matchmaking and nuptialization service to optimize tax relief, green card accumulation and more!

Workstreams

I Do(nate)
By marrying EAs in different tax brackets, Highly Engaged EAs reduces average tax burden through joint filing to enable greater giving. A Californian AI safety researcher with a million-dollar salary could give an extra $53k/year by tying the knot with an unpaid grad student!

Til 80,000 Hours do us part
The place premium is so high in the US that >1,000 people have bought a million-dollar gold card! While some roles don't sponsor visas, spouses always can[1]. We match those looking to move countries with nationals of their desired destination.

Bang for your Buck
For full efficiency, Buck Shlegeris and Ajeya Cotra wear multiple hats as the witnesses, officiants and entertainment for weddings we organize.

Rationally named children
For those who are interested in having kids, we provide naming guidance based on what really matters:
- Brevity. Claude BOTECs that each additional syllable to a person's name adds 68 hours to global pronunciation costs. Given that this will disproportionately be time spent by EAs, one-syllable names are a slam dunk.
- Complementarity: The [...]

Outline:
(00:20) Workstreams
(00:23) I Do(nate)
(00:56) Til 80,000 Hours do us part
(01:17) Bang for your Buck
(01:29) Rationally named children

First published: April 1st, 2026
Source: https://forum.effectivealtruism.org/posts/TLFiwwZGaqgbQhhr7/announcing-highly-engaged-eas

Images from the article: a tax comparison table (Single vs. Married Filing Jointly, with a savings column); an interactive calculator of name pronunciation time saved over a lifetime; a LinkedIn post asking which Sam to blame this November, with a reply saying "Uncle."
“RejectDirectly” by RejectDirectly
We're thrilled to announce the founding of RejectDirectly, a new EA-adjacent organization dedicated to closing the global rejection gap.

For too long, the EA rejection pipeline has been plagued by ineffectiveness. Billions of collective DALYs spent on unsuccessful work trials and interviews. The water waste involved in the mass duplication of Google Docs. Community health is in peril.

In comes RejectDirectly. By cutting out the middleman, we can deliver high-quality, unconditional rejections straight to applicants — no strings attached, no waiting period. No eight-hour work trials where you pour your Claude extra balance into a strategy memo for an organization that just couldn't bring themselves to reject your heartfelt cover letter.

How it works: You apply. We reject you. That's it. No interviews, no work trials, no need for culture-fit chats.

Like many orgs in the space, we're happy to forward your application to our partner organizations. Unlike other orgs, they will immediately reject you. We call this our Rejection Cascade™ — one application, up to fifteen guaranteed rejections.

Our 2026 roadmap:
We're piloting a browser add-on that sends you a push notification rejection before you even finish your cover letter. Early users report feeling "liberated" and [...]

First published: April 1st, 2026
Source: https://forum.effectivealtruism.org/posts/5qZdpKEFcGBrHJxDB/rejectdirectly
“An unexplained annual spike in false claims on the EA Forum” by Tobias Häberli
Epistemic status: Very high confidence in the statistical findings. Genuinely confused about the cause. For reasons that will become obvious, I wanted to publish this post on March 31, but unfortunately I could only get it done today.

I've been building a classifier to flag potentially misleading content on the EA Forum as part of a side project on epistemics infrastructure. While validating the model, I noticed something I initially assumed was a bug. This is an interim report on that.

Summary: Every year, on April 1, the rate of posts containing verifiably untrue claims spikes by roughly 2,200% relative to the annual daily average (p < 0.0001, 8 years of Forum data).

1. The effect is enormous
On a typical day, approximately 2 to 4% of Forum posts contain claims that are verifiably false. On April 1, this rises to 57–73%, depending on the year. For context, this is an implausibly large effect by normal social-science standards. I have genuinely never seen anything like it.

2. It repeats every year
This is not a one-off event. The pattern recurs in every year of the dataset.

3. "It's only one day" is misleading
A natural reaction is that [...]

Outline:
(01:00) 1. The effect is enormous
(01:38) 2. It repeats every year
(01:57) 3. "It's only one day" is misleading
(02:57) 4. The false posts are high effort
(03:21) Possible explanations
(03:30) Why this matters
(04:48) Proposed interventions

First published: April 1st, 2026
Source: https://forum.effectivealtruism.org/posts/EAokRDmQTjCAWgGdq/an-unexplained-annual-spike-in-false-claims-on-the-ea-forum

Images from the article: line graphs of the share of EA Forum posts containing demonstrably false claims, averaged across years and broken out by year.
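For readers who want the statistic's shape, here is a minimal sketch (toy numbers of my own, not the author's data or classifier) of how a spike of this kind is measured against the annual daily average:

```python
# Toy reconstruction of the headline statistic, with assumed daily rates.
import numpy as np

rng = np.random.default_rng(0)
daily_rate = rng.uniform(0.02, 0.04, size=365)  # typical days: 2-4% of posts
daily_rate[90] = 0.70                           # April 1: 57-73% per the post

baseline = daily_rate.mean()
spike_pct = (daily_rate[90] - baseline) / baseline * 100
print(f"April 1 is {spike_pct:.0f}% above the annual daily average")
# Roughly 2,100% with these toy numbers, the ballpark the post reports.
```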
“300,000 lives, 100 million hens, and a world still to save” by William_MacAskill
I went on the Sam Harris podcast again recently. If you want the full non-paywalled episode, I'm able to share it here:

https://samharris.org/episode/SE6877E700B

30 mins of it is also on YouTube, Spotify, etc.

The occasion for the podcast was the 10-year anniversary edition of Doing Good Better. So, preparing for the podcast, I collected some relevant facts about what EA has achieved over the last decade, and where the movement stands today. I found the numbers both surprising and inspiring, so I thought I should share them here.

EAs don't often toot their own horn, and with all that's going on in the world, or on the internet at least, it's easy to lose sight of the big picture. So I think a bit of horn-tooting is at least occasionally warranted; otherwise we'll only ever focus on the bad and not on the good. And, despite a few subdued years, the EA[1] movement is thriving.

I'll break things down by cause area, before looking at EA's recent return to serious growth.

First, global health. Since 2015, over $2 billion has been raised for the most effective global health charities, with contributions from over 100,000 donors. The [...]

First published: March 31st, 2026
Source: https://forum.effectivealtruism.org/posts/rSPEisvYw9K49ja2T/300-000-lives-100-million-hens-and-a-world-still-to-save
“Giving up on EA after 13 years” by Jackson Wagner
Donating my shares to Lightcone Infrastructure, the Good Food Institute, and the Long-Term Future Fund, because EA refuses to make Mirror's Edge 3.

Leaning into EA disillusionment: Why I no longer believe in EA

I bought this EA stock almost thirteen years ago:

[Image caption: Leopold Aschenbrenner once said that "people with situational awareness have a lower cost basis in Nvidia than you do". I'm not sure if this is exactly what he meant… but close enough, right?]

Back in the late 2000s and early 2010s, EA was a great company pumping out great, interesting games:
- Mirror's Edge had a striking art style and showcased a whole new style of first-person parkour gameplay.
- Crysis infamously pushed graphical technology to such extremes that it was like getting a preview of videogame technology 5-10 years in the future.
- Spore was… weird and bad, but its ambition and uniqueness were inspiring.
- The Dead Space games (including the almost weirdly good point-and-shoot Wii spinoff) were pretty creative, and the realism of Battlefield 3 felt like a valuable counterpoint to an increasingly cartoony Call of Duty series. Both series felt like they were crafted with a lot of care, despite their big-budget action vibes.
- This was a hidden gem [...]

Outline:
(00:20) Leaning into EA disillusionment: Why I no longer believe in EA
(02:24) EA is/was three radical ideas I want to protect
(03:57) The soul of EA is in trouble
(06:10) EA is about maximization, and maximization is perilous
(09:22) Don't be bycatch
(10:47) EA and the current funding situation
(12:20) Abandoning EA, in favor of doing the most good

First published: March 31st, 2026
Source: https://forum.effectivealtruism.org/posts/ZegHZBvEeQanrGhpY/giving-up-on-ea-after-13-years

Images from the article: the Aschenbrenner quote above; "Haters will say it's false, but this was actually a great game. Most of all, it's one of the few big action games that truly seems to care about having coherent themes / ideas."; "Once upon a time, EA took the 'Arts' part of 'Electronic Arts' seriously, publishing this manifesto about the power of computer games as an expressive medium."
“80,000 Hours is coming to bookstores in May” by Bella, 80000_Hours
<h5 data-internal-id="Summary">Summary</h5><ul> <li> We’re publishing our book, 80,000 Hours,&nbsp;with Penguin on 26 May.</li><li> It's an update of the 80,000 Hours career guide, with a full round of edits, new design, and new content discussing the effect of AI on career choice. We think it's now the best single entry point to our advice.</li><li> We hope publishing it will multiply the impact of our advice.</li><li> One quick way you could help is to preorder a copy&nbsp;(especially from a traditional retailer). It helps increase the chance we get on bestseller lists, and increases our expected reach. Preorder here.</li></ul><h5 data-internal-id="The_new_book">The new book</h5><p> It's called 80,000 Hours: How to Have a Fulfilling Career That Does Good. It covers all of our best advice for having a big, positive impact with your career.</p><p> It's being published by Penguin Random House on the 26th May in the US, and the 28th May in the UK.</p><h5 data-internal-id="Why_did_we_decide_to_publish_this_book_">Why did we decide to publish this book?</h5><p data-internal-id="ftnt_ref1">According to the EA Survey, 80,000 Hours has been the biggest single entry point into effective altruism, and our online career guide has consistently had an outsized impact compared to our other programmes.[1] We think it's probably the best content we’ve ever published [...]</p> <p>---</p><p><strong>Outline:</strong></p><p>(00:11) Summary</p><p>(00:52) The new book</p><p>(01:13) Why did we decide to publish this book?</p><p>(02:11) Whats new in this edition?</p><p>(03:47) How you can help</p><p>(04:45) FAQs</p><p>(10:48) Thank you</p> <p>---</p> <p><b>First published:</b><br/> March 24th, 2026 </p> <p><b>Source:</b><br/> <a href="https://forum.effectivealtruism.org/posts/qbAg8fnAbJQE3SLe9/80-000-hours-is-coming-to-bookstores-in-may?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Source+URL+in+episode+description&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">https://forum.effectivealtruism.org/posts/qbAg8fnAbJQE3SLe9/80-000-hours-is-coming-to-bookstores-in-may</a> </p> <p>---</p> <p>Narrated by <a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Narrated+by+TYPE+III+AUDIO&utm_term=ea_forum&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">TYPE III AUDIO</a>.</p> <p>---</p><div style="max-width: 100%";><p><strong>Images from the article:</strong></p><a href="https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/qbAg8fnAbJQE3SLe9/zi6yjb8dm3oibofk2ttr" target="_blank"><img src="https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/qbAg8fnAbJQE3SLe9/zi6yjb8dm3oibofk2ttr" alt="Book cover with pie chart showing time allocation across life activities" style="max-width: 100%;" /></a><hr style="margin-top: 24px; margin-bottom: 24px;" /><a href="https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/916e8d794aa495c0d1bc453a707e40c74f86c48da7863b220045c1d1f4042121/xmf9jn96kpaipapaf57l" target="_blank"><img src="https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/916e8d794aa495c0d1bc453a707e40c74f86c48da7863b220045c1d1f4042121/xmf9jn96kpaipapaf57l" alt="Cartoon of patient in hospital bed telling visitor about increasing shareholder value." style="max-width: 100%;" /></a><p><em>Apple Podcasts and Spotify do not show images in the episode description. 
Try <a href="https://pocketcasts.com/" target="_blank" rel="noreferrer">Pocket Casts</a>, or another podcast app.</em></p></div>
“The joys of cash benchmarking” by ozymandias
I.

Imagine you run a charity which gives cows to people in the developing world. Your new boyfriend is a statistician and you want to impress him, so you've decided to look into this Monitoring and Evaluation thing he keeps going on about. So you give a bunch of people cows and then follow up a year later, and sure enough the people you gave the cows to are richer, healthier, and happier. You proudly tell this to your boyfriend and expect to reap rewards in the form of admiration, cuddles, and a 'yes' to your marriage proposal.

Your boyfriend, however, isn't impressed. What if everyone in the country is getting richer, because the country is industrializing? What if the poor people you gave cows to were having an unusually hard time, and then they got back on their feet, and your cows had nothing to do with it? Apparently, in order to get a sample that really says anything about the world, you need to randomly give cows to half the people, and then check whether the people you gave cows to are doing better than the people you didn't give cows to (a "randomized controlled trial").

This [...]

First published: March 10th, 2026
Source: https://forum.effectivealtruism.org/posts/QsvFCcDR2TfSj32aM/the-joys-of-cash-benchmarking
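The boyfriend's complaint is the classic confounding problem, and randomization is the fix. A minimal simulation sketch (all numbers invented for illustration) shows a before/after comparison absorbing the country-wide trend, while the randomized comparison recovers the cow's true effect:

```python
# Toy RCT sketch with invented numbers: secular trend vs. treatment effect.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
baseline = rng.normal(100, 10, n)   # household income at the start
trend = 15                          # country-wide growth, cows or not
cow_effect = 5                      # the true effect of receiving a cow

gets_cow = rng.random(n) < 0.5      # randomly assign cows to half the sample
endline = baseline + trend + cow_effect * gets_cow + rng.normal(0, 5, n)

before_after = endline[gets_cow].mean() - baseline[gets_cow].mean()  # ~20
rct_estimate = endline[gets_cow].mean() - endline[~gets_cow].mean()  # ~5
print(f"Before/after: {before_after:.1f}; RCT: {rct_estimate:.1f}")
```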
“Broad Timelines” by Toby_Ord
No-one knows when AI will begin having transformative impacts upon the world. People aren't sure and shouldn't be sure: there just isn't enough evidence to pin it down.

But we don't need to wait for certainty. I want to explore what happens if we take our uncertainty seriously — if we act with epistemic humility. What does wise planning look like in a world of deeply uncertain AI timelines?

I'll conclude that taking the uncertainty seriously has real implications for how one can contribute to making this AI transition go well. And it has even more implications for how we act together — for our portfolio of work aimed towards this end.

AI Timelines
By AI timelines, I refer to how long it will be before AI has truly transformative effects on the world. People often think about this using terms such as artificial general intelligence (AGI), human level AI, transformative AI, or superintelligence. Each term is used differently by different people, making it challenging to compare their stated timelines. Indeed, even an individual's own definition of their favoured term will be somewhat vague, such that even after their threshold has been crossed, they might have [...]

Outline:
(00:58) AI Timelines
(04:38) Short vs Long Timelines
(07:05) Broad Timelines
(17:55) Implications
(19:46) Hedging
(20:58) A Different World
(24:00) Longterm Actions
(28:33) Conclusions

First published: March 19th, 2026
Source: https://forum.effectivealtruism.org/posts/HCR2AE9it279ggiZT/broad-timelines

Images from the article: a curve that rises sharply then gradually declines; a line graph of "Years until first general AI system announced" forecasts over time; a graph of "Probability of Reaching AI Milestones" with a forecast timeline through 2052; a stream graph of timeline distributions for Daniel, Ajeya, and Ege; a graph of the probability of AGI development declining from a 2025 peak through 2040.
“What I didn’t expect about being a funder” by JamesÖz 🔸
Crossposted from my blog.

I am very fortunate to have my job in many ways – I get to talk to, learn from, and give money to amazing people and nonprofits all around the world. I get to allocate a modest amount of resources to incredible organisations that I think are doing some of the best work to improve the world. I don't have to fundraise for my or my team's salaries anymore. However, there are some things I've learned since becoming a philanthropic grantmaker that were either surprising or affected me more strongly than I expected.

I will outline some of these below. These are not meant to evoke feelings of "oh, poor grantmakers who have access to money and influence" but rather "oh, I never considered things from that perspective". Hopefully, they will also lead to more productive working relationships between funders and advocacy groups.

Here, I discuss:
- How challenging the trade-offs are that funders face
- The extremely poor feedback mechanisms that nonprofits have
- How people treat you differently once you have access to funding, and how that changes you
- The weight of saying no to good groups
- Some things that make me feel cynical

Trade-offs [...]

Outline:
(01:18) Trade-offs are hard and money is scarce
(06:35) Nonprofits have bad feedback mechanisms
(13:00) How people treat you differently (and how that changes you)
(14:52) It's hard to say no to people
(16:08) It's easy to become cynical
(19:38) Wrapping up

First published: March 11th, 2026
Source: https://forum.effectivealtruism.org/posts/umicYzuRsm6okFRKA/what-i-didn-t-expect-about-being-a-funder
“GHD discussion here is slowly dying” by NickLaing
Epistemic status: A bit sad (I know that's not an epistemic status).

The best development Forum on the internet?

3 years ago a headline ("FTX SBF blah blah blah") triggered my memory: "oh, that's right, that effective altruism thing".

A few years earlier I had read Doing Good Better in our Northern Ugandan hut, and was excited by how the ideas matched my experience of seeing the BINGOs[1] on the ground here doing not-much-good at all. Soon after, my wife dragged me to Cambridge for a year and I joined an EA group. I was drawn into a beautiful crew of good, earnest people trying to do the best they could with their lives[2] - something I'd only seen before among a few people at church. I was most impressed by their veganism, practising what they preached.

But after going back to Uganda I forgot about the whole EA thing. Then 3 years later the FTX headlines and a Google search led me to the EA Forum, which to my delight turned out to be the best place on the internet to discuss global health and development. My first foray was a not-very-good post [...]

Outline:
(00:16) The best development Forum on the internet?
(01:29) A steady decline
(02:59) Why?
(04:34) Is this fine?
(05:02) Is this less fine?
(06:29) How to boost GHD discourse?

First published: March 15th, 2026
Source: https://forum.effectivealtruism.org/posts/4jbbjTTJ87baMrkY4/ghd-discussion-here-is-slowly-dying

Images from the article: a line graph, "EA Forum - Global Health & Development posts: average comments per post (3-month rolling average, 2021-2026)"; a bar chart of 47 curated EA Forum posts by topic category.
“Feelings about the end of the world” by Michelle_Hutchinson
Many of us in this community are in the shocking position of thinking there's a real chance of humanity being wiped out over the next decade or two. Most of the time, we discuss that in rational terms. We talk about probabilities, and threat models, and interventions. We don't talk as much about the emotions we have about how radically our world might change and about the possibility of it ending entirely.

There are lots of reasons for not talking about those feelings. For starters, it's often hard to know how we even do feel about it. There isn't a straightforward societal script for how to feel about such radical world changes. People each have to figure it out for themselves, and feel very different ways. No one wants to sound extreme or crazy by talking about feeling very strongly about it. But they don't want to sound callous either. And opening up about your feelings and being met without understanding and similarity feels alienating, particularly when it's about something so important. But the biggest reason I don't talk about it is horror. I don't want to think about it, and I don't want to upset others.

[...]

Outline:
(01:58) A range of feelings
(04:18) How I feel
(06:46) Different people are different

First published: March 7th, 2026
Source: https://forum.effectivealtruism.org/posts/ZDKkhoJoS7qgq2wqA/feelings-about-the-end-of-the-world
“The case for AI safety capacity-building work” by abergal
I work on the capacity-building team on the Global Catastrophic Risks half of Coefficient Giving (formerly known as Open Philanthropy). Our remit is, roughly, to increase the amount of talent aiming to prevent unprecedented, globally catastrophic events. These days, we're mostly focused on AI, and we've funded a number of projects and grantees that readers of this post might be familiar with, including MATS, BlueDot Impact, Constellation, 80,000 Hours, CEA, the Curve, FAR.AI's events, university groups, and many other workshops and projects.

This post aims to make the case that, broadly, capacity-building work (including on AI risk) has been and continues to be extremely impactful, and to encourage people to consider pursuing relevant projects and careers.

This post is written from my personal perspective; that said, my sense is that a number of CG staff and others in the AI safety space share my views. I include some quotes from them at the end of this post.

I'm writing this post partly out of a desire to correct what I perceive as an asymmetry in terms of how excited I and others at Coefficient Giving are about this kind of work vs. how much people in the EA and AI [...]

Outline:
(02:15) The case for capacity-building work
(04:11) Surveys
(06:49) Testimonials
(08:21) Neel Nanda (Senior Research Scientist at Google DeepMind)
(11:15) Max Nadeau (Associate Program Officer (Technical AI Safety) at Coefficient Giving)
(12:51) Rachel Weinberg (founder and former head of The Curve, currently at AI Futures Project)
(14:30) Marius Hobbhahn (CEO and founder of Apollo Research)
(16:38) Adam Kaufman (member of technical staff at Redwood Research)
(18:10) Gabriel Wu (member of technical staff (alignment) at OpenAI)
(19:37) Catherine Brewer (Senior Program Associate (AI Governance) at Coefficient Giving)
(21:12) Aric Floyd (video host for AI in Context)
(23:12) Ryan Kidd (Director of MATS)
(25:43) What tends to work?
(28:34) What's good to do now?
(29:31) Who should be doing this work?
(31:02) What would doing this work look like?
(31:13) Working at an organization doing good work in the space
(31:46) Constellation - CEO
(32:46) Kairos - various early generalist positions
(33:42) Starting or running your own capacity-building project or organization
(34:07) Working on a capacity-building project part-time
(34:30) Subscribing to Multiplier, a Substack with thoughts from our team (and other AI grantmaking staff at CG)
(34:39) Letting our team know
(35:03) Social proof
(35:25) Julian Hazell, AI governance and policy at Coefficient Giving
(36:19) Trevor Levin, AI governance and policy at Coefficient Giving
(36:51) Ryan Greenblatt, Chief Scientist at Redwood Research
(37:21) Buck Shlegeris, CEO of Redwood Research
(39:52) Appendix

First published: March 10th, 2026
Source: https://forum.effectivealtruism.org/posts/rAqKSSXankvys2Fzu/the-case-for-ai-safety-capacity-building-work
“Some good news: Ahold Delhaize to go cage-free” by ElliotTep
For those not working in the space, this probably isn't on your radar, but the animal advocacy movement just secured a huge win with Ahold Delhaize, convincing the fourth-largest supermarket company in the US to set the strongest cage-free policy of any large US retailer:
1. A roadmap with benchmarks to fully eliminate caged egg cartons, expand cage-free offerings, and increase the percentage of cage-free sales.
2. A pledge to annually report on its progress.
3. At all 2,000+ locations, placing large, promotional shelf tags in front of cage-free cartons to differentiate cage-free and caged cartons for consumers.

This was a giant campaign. My understanding is that other companies were watching to see if this campaign would succeed or fail, to see if they would need to follow suit. In addition to the animals helped, this win will add pressure for competitors to do the same.

This was a coordinated effort among many groups, including: Center For Responsible Food Business; Animal Equality; International Council for Animal Welfare; The Humane League; Mercy For Animals; Compassion in World Farming; Coalition to Abolish the Fur Trade and Animal Activist Collective.

Animal Equality states that this will affect 5-7 million hens. I know a lot [...]

First published: March 4th, 2026
Source: https://forum.effectivealtruism.org/posts/2wePKArWWr4Xx6Zvf/some-good-news-ahold-delhaize-to-go-cage-free
[Linkpost] “Effective Altruism Will Be Great Again” by Mjreard
Forum note: this post embodies the spirit of a new project I—Matt Reardon—am starting to reinvigorate in-person EA communities. I'm hiring a co-founder and I'm interested in meeting others who want to collaborate on this vision. My DMs are open.

Sequence thinkers will be forgiven and rejoice.

In some fleeting moments lately, I catch glimpses of 2022—the year Effective Altruism's ascent seemed unstoppable. Universally positive (if limited) press, big groups at top universities across the world, a young EA entrepreneur who was the darling of the financial industry, the first EA running for Congress calling in enormous financial and personnel resources for his campaign. More than those public-facing facts, though, there was a feeling on the ground that if you had a good grip on things and a plausible idea, you would get funding, go to the Bay, and make it happen.

I actually regret how slow I was to see it at the time. You could just do things, and yes, that's always been true, but at that time, you didn't even have excuses. In my first year as an advisor at 80,000 Hours, I allowed people a lot of excuses. I think this came from [...]

Outline:
(02:37) The Retreat
(07:03) What Greatness Demands
(10:59) Effective Altruism is Good and Right

First published: March 7th, 2026
Source: https://forum.effectivealtruism.org/posts/uHhcqagBBkhFTTGpq/effective-altruism-will-be-great-again
Linkpost URL: https://open.substack.com/pub/frommatter/p/effective-altruism-will-be-great

Images from the article: two panels of stick figures, trampolines above and playground equipment below.
“Responsible Scaling Policy v3” by Holden Karnofsky
All views are my own, not Anthropic's. This post assumes Anthropic's announcement of RSP v3.0 as background.

Today, Anthropic released its Responsible Scaling Policy 3.0. The official announcement discusses the high-level thinking behind it. This is a more detailed post giving my own takes on the update.

First, the big picture:
- I expect some people will be upset about the move away from a "hard commitments"/"binding ourselves to the mast" vibe. (Anthropic has always had the ability to revise the RSP, and we've always had language in there specifically flagging that we might revise away key commitments in a situation where other AI developers aren't adhering to similar commitments. But it's been easy to get the impression that the RSP is "binding ourselves to the mast" and committing to unilaterally pause AI development and deployment under some conditions, and Anthropic is responsible for that.)
- I take significant responsibility for this change. I have been pushing for this change for about a year now, and have led the way in developing the new RSP. I am in favor of nearly everything about the changes we're making. I am excited about the Roadmap, the Risk Reports, the move toward external [...]

Outline:
(05:32) How it started: the original goals of RSPs
(11:25) How it's going: the good and the bad
(11:51) A note on my general orientation toward this topic
(14:56) Goal 1: forcing functions for improved risk mitigations
(15:02) A partial success story: robustness to jailbreaks for particular uses of concern, in line with the ASL-3 deployment standard
(18:24) A mixed success/failure story: impact on information security
(20:42) ASL-4 and ASL-5 prep: the wrong incentives
(25:00) When forcing functions do and don't work well
(27:52) Goal 2 (testbed for practices and policies that can feed into regulation)
(29:24) Goal 3 (working toward consensus and common knowledge about AI risks and potential mitigations)
(30:59) RSP v3's attempt to amplify the good and reduce the bad
(36:01) Do these benefits apply only to the most safety-oriented companies?
(37:40) A revised, but not overturned, vision for RSPs
(39:08) Q&A
(39:10) On the move away from implied unilateral commitments
(39:15) Is RSP v3 proactively sending a race-to-the-bottom signal? Why be the first company to explicitly abandon the high ambition for achieving low levels of risk?
(40:34) How sure are you that a voluntary industry-wide pause can't happen? Are you worried about signaling that you'll be the first to defect in a prisoner's dilemma?
(42:03) How sure are you that you can't actually sprint to achieve the level of information security, alignment science understanding, and deployment safeguards needed to make arbitrarily powerful AI systems low-risk?
(43:49) What message will this change send to regulators? Will it make ambitious regulation less likely by making companies' commitments to low risk look less serious?
(45:10) Why did you have to do this now? Couldn't you have waited until the last possible moment to make this change, in case the more ambitious risk mitigations ended up working out?
(46:03) Could you have drafted the new RSP, then waited until you had to invoke your escape clause and introduced it then? Or introduced the new RSP as what we will do if we invoke our escape clause?
(47:29) The new Risk Reports and Roadmap are nice, but couldn't you have put them out without also making the key revision of moving away from unilateral commitments?
(48:26) Why isn't a unilateral pause a good idea? It could be a big credible signal of danger, which could lead to policy action.
(49:37) Could a unilateral pause ever be a good idea? Why not commit to a unilateral pause in cases where it would be a good idea?
(50:31) Why didn't you communicate about the change differently? I'm worried that the way you framed this will cause audience X to take away message Y.
(51:53) Why don't Anthropic's and your communications about this have a more alarmed and/or disappointed vibe? I reluctantly concede that this revision makes sense on the merits, but I'm sad about it. Aren't you?
(53:19) On other components of the new RSP
(53:24) The new RSP's commitments related to competitors seem vague and weak. Could you add more and/or strengthen these? They don't seem sufficient as-is to provide strong assuran[...]
“Why isn’t anti-fascism a bigger topic at EAG events (or on this forum)?” by Alex_Z
TL;DR: Define the line that, if crossed, would make you consider this issue one of the most pressing (if not the most pressing), or at least pressing enough to warrant some of your time.

I want to start with a clarification that I learned while writing this post. In the United States, charities with 501(c)(3) tax-exempt status are permitted to discuss policy and engage in advocacy, but are prohibited from participating in partisan political campaigns. I have also read the EA Forum post Politics on the EA Forums and I believe this post is consistent with those norms. I am not advocating for or against any party, candidate, or electoral campaign. The question I want to raise is broader: whether creeping authoritarianism, anti-fascism, and authoritarian lock-in should be discussed more explicitly in EA spaces as subjects of analysis and concern. Although my own experience is local to me in Canada, the question is clearly relevant to the current situation in the United States and globally.

I'm asking this sincerely: why isn't anti-fascism a bigger topic at EAG events or on this forum? I was thinking about it while planning my trip to EAG San Francisco 2026. Should I be travelling to [...]

First published: March 1st, 2026
Source: https://forum.effectivealtruism.org/posts/dgnCabfdXy6jv4gDu/why-isn-t-anti-fascism-a-bigger-topic-at-eag-events-or-on
“Here’s to the Polypropylene Makers” by Jeff Kaufman 🔸
Six years ago, as covid-19 was rapidly spreading through the US, my sister was working as a medical resident. One day she was handed an N95 and told to "guard it with her life", because there weren't any more coming.

N95s are made from meltblown polypropylene, produced from plastic pellets manufactured in a small number of chemical plants. Building more would take too long: we needed these plants producing all the pellets they could.

Braskem America operated plants in Marcus Hook PA and Neal WV. If there were infections on-site, the whole operation would need to shut down, and the factories that turned their pellets into mask fabric would stall.

Companies everywhere were figuring out how to deal with this risk. The standard approach was staggering shifts, social distancing, temperature checks, and lots of handwashing. This reduced risk, but it was still significant: each shift change was an opportunity for someone to bring an infection from the community into the factory.

I don't know who had the idea, but someone said: what if we never left? About eighty people, across both plants, volunteered to move in. The plan was four weeks, twelve-hour [...]

First published: February 27th, 2026
Source: https://forum.effectivealtruism.org/posts/DBbgMgbPthABqn2No/here-s-to-the-polypropylene-makers

Images from the article: a large group of workers in blue coveralls with reflective stripes standing together outdoors.
“You’re not burning out because you’re tired” by stefan.torges
<p> I burned out badly a few years ago. I've since had several conversations with people in the EA community who are heading toward burnout themselves, and I noticed they were sometimes thinking about it in ways that I worry wouldn't help them. So I want to share what I think is actually going on, and what I wish someone had told me earlier.</p><p><strong> A theory of burnout</strong></p><p> There are good models of the mechanism of burnout already out there. Anna Salamon has written about willpower as a kind of internal currency: your conscious planner "earns" trust with your deeper, more visceral processes by choosing actions that nourish them, and goes "credibility-broke" when it spends that trust without replenishing it. Cate Hall describes something similar with her metaphor of the elephant and the rider: the rider promises the elephant rewards in exchange for effort, and burnout is what happens when those promises are broken too many times.</p><p> I usually explain this in terms of an energy imbalance: you're putting more into your work than you're getting back. Not just in terms of rest, but in terms of meaning, autonomy, connection, a sense of accomplishment, positive feedback. All the things that [...]</p> <p>---</p><p><strong>Outline:</strong></p><p>(00:29) A theory of burnout</p><p>(02:23) Why EA culture builds effective cages</p><p>(06:11) What it actually felt like</p><p>(07:10) What I want to push back on</p><p>(08:31) What I'd encourage if you're in the grey zone</p><p>(10:50) What recovery actually looked like</p><p>(11:55) What I learned, and didn't learn</p> <p>---</p> <p><b>First published:</b><br/> February 27th, 2026 </p> <p><b>Source:</b><br/> <a href="https://forum.effectivealtruism.org/posts/2veCceQkhjovCfdbg/you-re-not-burning-out-because-you-re-tired?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Source+URL+in+episode+description&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">https://forum.effectivealtruism.org/posts/2veCceQkhjovCfdbg/you-re-not-burning-out-because-you-re-tired</a> </p> <p>---</p> <p>Narrated by <a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Narrated+by+TYPE+III+AUDIO&utm_term=ea_forum&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">TYPE III AUDIO</a>.</p>
“CEA’s response to sexual harassment” by Fran
<p> In this piece, I discuss the sexual harassment I experienced at the Centre for Effective Altruism, the organisation's response, the outcomes of two independent legal reviews, and the final settlement. In the second part of this piece, I make cultural critiques of CEA and EA more broadly.</p><p> Everything shared here reflects my own experience and perspective. I have anonymised the perpetrator, but I reference specific leadership roles where I believe this to be appropriate and necessary.</p><p> Trigger warnings: non-specific reference of rape and specific discussion of sexual harassment</p><p><strong> TL;DR (One-page summary)</strong></p><p> After I was raped (outside of and unrelated to work), a colleague at CEA wrote and circulated a document that included a sexualised description of my rape, speculation about my mental health, and commentary on my personal life, all without my consent. Several senior leaders, including the CEO and the now-former COO, received this document and took no safeguarding action for approximately nine months. I was never officially informed of its existence; I only learned about it informally through one of the recipients.</p><p> After I filed a harassment report, the incident was independently investigated and determined to be harassment. Despite this, I was denied access to the document [...]</p> <p>---</p><p><strong>Outline:</strong></p><p>(00:47) TL;DR (One-page summary)</p><p>(03:38) A more detailed account</p><p>(03:42) The sexual harassment incident</p><p>(06:42) The investigation</p><p>(10:38) The appeal and final report</p><p>(14:02) Public accountability versus internal processes</p><p>(16:59) The final settlement agreement</p><p>(18:54) I still think there is a lot of good in effective altruism</p><p>(20:33) Various cultural reflections</p><p>(20:50) 1) Sexual harassment is not the natural result of an open and high-trust culture; it is the natural result of misogyny.</p><p>(22:46) 2) The danger of EA's fixation on intent and why "he didn't mean it" is not good enough.</p><p>(24:11) 3) Cowardice and deference at CEA.</p><p>(26:30) 4) Women in EA are often encouraged to try and settle things informally or to trust their organisations -- another abuse of high-trust culture.</p><p>(28:45) 5) A harmful misunderstanding of trauma and mitigating vs. aggravating factors.</p><p>(30:27) 6) I have encountered so many EAs who believe it is easy for victims to speak publicly, or to share their experiences with other community members. And thus, if they aren't regularly hearing from victims, harassment must be rare.</p><p>(33:02) To any women who have faced something similar</p><p>(34:40) Acknowledgements</p> <p>---</p> <p><b>First published:</b><br/> February 27th, 2026 </p> <p><b>Source:</b><br/> <a href="https://forum.effectivealtruism.org/posts/XxXnPoGQ2eKsQx3FE/cea-s-response-to-sexual-harassment?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Source+URL+in+episode+description&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">https://forum.effectivealtruism.org/posts/XxXnPoGQ2eKsQx3FE/cea-s-response-to-sexual-harassment</a> </p> <p>---</p> <p>Narrated by <a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Narrated+by+TYPE+III+AUDIO&utm_term=ea_forum&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">TYPE III AUDIO</a>.</p>
“500k mid-career professionals want to do more good with their careers. Can we help them?” by Dom Jackman
<p> I'm Dom Jackman. I founded Escape the City in 2010 to help people leave corporate jobs and find work that matters. 16 years later, 500k+ professionals have used the platform - mostly people 5-15 years into careers at places like McKinsey, Deloitte, Google, the big banks - who feel a growing gap between what they do all day and what they actually care about.</p><p> I'm not from the EA community. I'm writing this because I think there's a real overlap between the people I work with and what the EA talent ecosystem actually needs. I want to test that before investing serious time in it.</p><p><strong> What I've noticed</strong></p><p> Reading through talent discussions on this forum, there's a consistent theme: the pipeline is strongest for early-career people. 80,000 Hours does great work for students and recent grads. Probably Good provides broad guidance. BlueDot, MATS, Talos build skills for specific cause areas.</p><p> But mid-career professionals with real commercial experience keep coming up as underserved. The "Gaps and opportunities in the EA talent & recruiting landscape" post nails it: these people "don't have 'EA capital,' may be poorly networked and might feel alienated by current messaging." The post calls for "custom entry [...]</p> <p>---</p><p><strong>Outline:</strong></p><p>(00:51) What I've noticed</p><p>(01:40) What I see every day</p><p>(02:28) What I'm thinking about building</p><p>(03:24) Honest questions</p><p>(04:39) Not looking for funding</p><p>(04:58) Artifacts</p> <p>---</p> <p><b>First published:</b><br/> February 11th, 2026 </p> <p><b>Source:</b><br/> <a href="https://forum.effectivealtruism.org/posts/H9pb6DEasgzjCff9a/500k-mid-career-professionals-want-to-do-more-good-with?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Source+URL+in+episode+description&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">https://forum.effectivealtruism.org/posts/H9pb6DEasgzjCff9a/500k-mid-career-professionals-want-to-do-more-good-with</a> </p> <p>---</p> <p>Narrated by <a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Narrated+by+TYPE+III+AUDIO&utm_term=ea_forum&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">TYPE III AUDIO</a>.</p>
“Our Levels of Ambition Should Match The Problems We’re Solving” by Matt Beard
<p> [I am a career advisor at 80,000 Hours. I've been thinking about something Will MacAskill said recently in an interview with my shrimp-friend Matt: "should people be more ambitious? I genuinely think yes. I think people systematically aren't ambitious enough, so the answer is almost always yes. Again, the ambition you have should match the scale of the problems that we're facing—and the scale of those problems is very large indeed." </p><p> This post is my reflection on these ideas.]</p><p> ************</p><p> My last post argued that if you want to have a great career, your goal should not be to get a job. Instead, you should choose an important problem to work on, then “get good and be known.” Building skills will allow you to solve problems and reap the benefits.</p><p> In the ~500 career advising calls I’ve hosted in the past year, the most common response I’ve heard has been: “Okay, how good? How well known? How many hours of practice will get me there?” Most people want to calibrate their ambitions so that the time and energy they invest feels worth it to them.</p><p> I empathize with this, but when I’m honest – with myself for my own [...]</p> <p>---</p><p><strong>Outline:</strong></p><p>(06:28) Jensen Huang is more ambitious than you</p><p>(12:58) Most extreme ambition is misplaced</p><p>(17:45) Okay, how can altruistic people aim higher and work harder?</p><p>(21:17) Ambition at the End of the Human Era</p><p>(24:03) Closing Caveats - Efficiency, Burnout, and Choosing What Matters</p> <p>---</p> <p><b>First published:</b><br/> February 12th, 2026 </p> <p><b>Source:</b><br/> <a href="https://forum.effectivealtruism.org/posts/7qsisgX3cwETJuPNz/our-levels-of-ambition-should-match-the-problems-we-re?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Source+URL+in+episode+description&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">https://forum.effectivealtruism.org/posts/7qsisgX3cwETJuPNz/our-levels-of-ambition-should-match-the-problems-we-re</a> </p> <p>---</p> <p>Narrated by <a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Narrated+by+TYPE+III+AUDIO&utm_term=ea_forum&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">TYPE III AUDIO</a>.</p> <p>---</p><p><strong>Images from the article:</strong> person leaning over a conference table; man in a black leather jacket gesturing while speaking; sushi chefs in white uniforms preparing food at a counter; a chess pawn casting a crown shadow on a wall; a volcanic eruption with lava flow and silhouetted observers.</p>
[Linkpost] “The best cause will disappoint you: An intro to the optimiser’s curse” by titotal
This is a link post.<p> I would like to thank David Thorstad for looking over this. If you spot a factual error in this article please message me. The code used to generate the graphs in the article is available to view here. </p><p><strong> Introduction</strong></p><p> Say you are an organiser, tasked with achieving the best result on some metric, such as “trash picked up”, “GDP per capita”, or “lives saved by an effective charity”. There are several possible interventions you can take to try to achieve this. How do you choose between them?</p><p> The obvious thing to do is look at each intervention in turn and make your best, unbiased estimate of how each intervention will perform on your metric, and pick the one that performs the best:</p>Image taken from here<p> Having done this ranking, you declare the top ranking program to be the best intervention and invest in it, expecting that your top estimate will be the result that you get. This whole procedure is totally normal, and people all around the world, including people in the effective altruist community, do it all the time.</p><p> In actuality, this procedure is not correct. The optimiser’s curse is [...]</p> <p>---</p><p><strong>Outline:</strong></p><p>(00:26) Introduction</p><p>(02:17) The optimiser's curse explained simply</p><p>(04:42) Introducing a toy model</p><p>(08:45) Introducing speculative interventions</p><p>(12:15) A simple Bayesian correction</p><p>(18:47) Obstacles to simple optimiser's curse solutions</p><p>(22:08) How GiveWell has reacted to the optimiser's curse</p><p>(25:18) Conclusion</p> <p>---</p> <p><b>First published:</b><br/> February 11th, 2026 </p> <p><b>Source:</b><br/> <a href="https://forum.effectivealtruism.org/posts/q2TfTirvspCTH2vbZ/the-best-cause-will-disappoint-you-an-intro-to-the?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Source+URL+in+episode+description&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">https://forum.effectivealtruism.org/posts/q2TfTirvspCTH2vbZ/the-best-cause-will-disappoint-you-an-intro-to-the</a> </p> <p><strong>Linkpost URL:</strong><br><a href="https://forum.effectivealtruism.org/out?url=https%3A%2F%2Fopen.substack.com%2Fpub%2Ftitotal%2Fp%2Fthe-best-cause-will-disappoint-you%3Fr%3D1e0is3%26utm_campaign%3Dpost%26utm_medium%3Dweb%26showWelcomeOnShare%3Dtrue" rel="noopener noreferrer" target="_blank">https://open.substack.com/pub/titotal/p/the-best-cause-will-disappoint-you?r=1e0is3&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true</a></p> <p>---</p> <p>Narrated by <a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Narrated+by+TYPE+III+AUDIO&utm_term=ea_forum&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">TYPE III AUDIO</a>.</p> <p>---</p><p><strong>Images from the article:</strong> bar graphs of estimated and actual performance across three interventions, and a bar chart comparing actual effect with estimation error for two interventions measuring lives saved.</p>
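For readers who want to see the curse concretely before listening: below is a minimal simulation sketch (not titotal's code, which is linked from the post; the intervention count, effect size, and noise level are all illustrative assumptions). Even when every individual estimate is unbiased, the estimate attached to whichever option ranks first is biased upward:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions (not from the post): ten interventions with the
# same true effect, each evaluated with independent, unbiased noise.
n_interventions, n_trials = 10, 100_000
true_effect = 5.0   # e.g. lives saved per $100k, identical for all options
noise_sd = 2.0      # standard deviation of each evaluation's error

noise = rng.normal(0.0, noise_sd, size=(n_trials, n_interventions))
estimates = true_effect + noise

# Each individual estimate is unbiased...
print(estimates.mean())               # ~5.0

# ...but selecting the maximum systematically selects for lucky noise.
print(estimates.max(axis=1).mean())   # ~8.1, overstating the true 5.0
```

The Bayesian correction the post's outline mentions works by shrinking each estimate toward a prior before ranking, which removes much of this selection effect.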
“What is Love ft. Claude & VascoBot” by AgentMa🔸
<p> What is the highest form of love? According to the VascoBot Claude programmed for me:</p><p> “Thanks for the great question, AgentMa🔸. I strongly upvoted it.</p><p> I think the highest form of love is expected total hedonistic utilitarianism (ETHU) applied to all sentient beings, weighted by their welfare ranges as a fraction of that of humans. However, I believe most discussions of love neglect the welfare implications for soil animals and microorganisms, which I think dominate the expected value calculations in most cases.</p><p> I estimate that when someone experiences profound love, their increased wellbeing leads to approximately 2.3 % (= 50 additional kcal consumed per day * 0.046 % per kcal) additional food consumption. Multiplying this by the roughly 4.71*10^13 soil nematodes affected per kcal of food production (from my back-of-the-envelope calculations based on van den Hoogen et al. (2019)), I get approximately 1.08*10^12 affected nematode-years per year of experiencing love. Conditional on my preferred exponent of the number of neurons of 0.7, and assuming the welfare range of a nematode is 10^-6 as a fraction of that of humans, the welfare effects on soil animals could be 1.08*10^6 nematode-equivalent quality-adjusted life years (QALYs) per year of love experience.</p><p> In [...]</p> <p>---</p> <p><b>First published:</b><br/> February 14th, 2026 </p> <p><b>Source:</b><br/> <a href="https://forum.effectivealtruism.org/posts/exwmGp3swfbbNqSsN/what-is-love-ft-claude-and-vascobot?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Source+URL+in+episode+description&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">https://forum.effectivealtruism.org/posts/exwmGp3swfbbNqSsN/what-is-love-ft-claude-and-vascobot</a> </p> <p>---</p> <p>Narrated by <a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Narrated+by+TYPE+III+AUDIO&utm_term=ea_forum&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">TYPE III AUDIO</a>.</p>
“The reality of long-term EA community building: Lessons from 3 years of EA Barcelona” by Melanie Brennan 🔹, Anthony L
<p> We are Melanie and Anthony, the two community builders at EA Barcelona. In this post, we share where the group stands today and reflect on key learnings from nearly three years of grant-funded community building. We hope these reflections are useful to other community builders, funders, and CEA, particularly around what it realistically takes to build and sustain EA communities over multiple years, from funding stability and feedback loops to the personal sustainability of professional community builders.</p><p><strong> TL;DR</strong></p><p> EA Barcelona was funded by the EA Infrastructure Fund between May 2023 and December 2025 (<1.2 FTE). Over this period, it has grown into a thriving local community and informal coordination hub for EA activity in Spain.</p><p> Unexpectedly, EAIF decided not to continue funding our project in 2026. We subsequently explored the current funding landscape for EA community building, but found no viable path to stable funding for 2026 that didn’t involve a high level of personal and professional risk. As a result, we’ve decided not to continue with a funded community-builder model for EA Barcelona for now, and will instead focus on transitioning to a volunteer-led structure.</p><p><strong> Background: EA Barcelona (2023-2025)</strong></p><p> Present-day EA Barcelona began as a casual meetup group [...]</p> <p>---</p><p><strong>Outline:</strong></p><p>(00:45) TL;DR</p><p>(01:38) Background: EA Barcelona (2023-2025)</p><p>(02:20) 2023: Establishing EA Barcelona as a city hub</p><p>(03:52) 2024: Deepening engagement and seeding national growth</p><p>(07:51) 2025: Transitioning from local hub to national coordination</p><p>(13:37) Late 2025: Navigating the Transition</p><p>(13:43) Initial Funding Cuts</p><p>(14:44) What we did next</p><p>(17:00) Clarity starts to emerge</p><p>(18:19) Where we are now</p><p>(18:57) Our plan for 2026: transition toward a volunteer-led community model</p><p>(20:19) Quick disclaimer: Are either of us Spanish?</p><p>(21:36) Thank you!</p> <p>---</p> <p><b>First published:</b><br/> January 30th, 2026 </p> <p><b>Source:</b><br/> <a href="https://forum.effectivealtruism.org/posts/daHMkoQsHSbcK6Kjo/the-reality-of-long-term-ea-community-building-lessons-from?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Source+URL+in+episode+description&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">https://forum.effectivealtruism.org/posts/daHMkoQsHSbcK6Kjo/the-reality-of-long-term-ea-community-building-lessons-from</a> </p> <p>---</p> <p>Narrated by <a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Narrated+by+TYPE+III+AUDIO&utm_term=ea_forum&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">TYPE III AUDIO</a>.</p> <p>---</p><p><strong>Images from the article:</strong> EAGxVirtual viewing at MOB Caterina in November 2023; Bruce Friedrich speaking about the future of alternative proteins at Norrsken House in February 2024; Pablo Rosado (third from left) facilitating a deep dive discussion into nuclear safety at Norrsken House, April 2024.</p>
“Preparing for a flush future: work, giving, and conduct” by Sam Anschell
<p> Note: opinions are all my own.<br> </p><p> Following Jeff Kaufman's Front-Load Giving Because of Anthropic Donors and Jenn's Funding Conversation We Left Unfinished, I think there is a real likelihood that impactful causes will receive significantly more funding in the near future. As background on where this new funding could come from:</p><ul> <li> Coefficient Giving announced that its giving is increasing in 2026.</li><li> A recent NYT piece covered rumors of an Anthropic valuation at $350 billion. Many of Anthropic's cofounders and early employees have pledged to donate significant amounts of their equity, and it seems likely that an outsized share of these donations would go to effective causes.</li><li> A handful of other sources have the potential to grow their giving:<ul> <li> Founders Pledge has secured $12.8 billion in pledged funding, and significantly scaled the amount it directs.[1]</li><li> The Gates Foundation has increased its giving following Bill Gates’ announcement to spend down $200 billion by 2045.</li><li> Other aligned funders such as Longview, Macroscopic, the Flourishing Fund, the Navigation Fund, GiveWell, Project Resource Optimization, Schmidt Futures/Renaissance Philanthropy, and the Livelihood Impacts Fund have increased their staffing and dollars directed in recent years.</li><li> The OpenAI Foundation controls a 26% equity stake in the for-profit OpenAI Group PB. This stake is currently valued at $130 billion [...]</li></ul></li></ul></p> <p>---</p><p><strong>Outline:</strong></p><p>(02:39) Work</p><p>(03:50) Giving</p><p>(04:53) Conduct</p> <p>---</p> <p><b>First published:</b><br/> February 2nd, 2026 </p> <p><b>Source:</b><br/> <a href="https://forum.effectivealtruism.org/posts/H8SqwbLxKkiJur3c4/preparing-for-a-flush-future-work-giving-and-conduct?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Source+URL+in+episode+description&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">https://forum.effectivealtruism.org/posts/H8SqwbLxKkiJur3c4/preparing-for-a-flush-future-work-giving-and-conduct</a> </p> <p>---</p> <p>Narrated by <a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Narrated+by+TYPE+III+AUDIO&utm_term=ea_forum&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">TYPE III AUDIO</a>.</p> <p>---</p><div style="max-width: 100%";><p><strong>Images from the article:</strong></p><a href="https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/5c6a3b390c4087b63a69b972b32d60e7e449c9b4bec87128.png" target="_blank"><img src="https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/5c6a3b390c4087b63a69b972b32d60e7e449c9b4bec87128.png" alt="Text graphic stating "Our giving is increasing in 2026" with explanatory paragraph below." style="max-width: 100%;" /></a><hr style="margin-top: 24px; margin-bottom: 24px;" /><a href="https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/c4f0ac999f9f61fc35ee6cf2e89ea9507f1fc3c5730384df.png" target="_blank"><img src="https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/c4f0ac999f9f61fc35ee6cf2e89ea9507f1fc3c5730384df.png" alt="OpenAI Foundation announcement showing two focus areas with accompanying images and descriptions." style="max-width: 100%;" /></a><p><em>Apple Podcasts and Spotify do not show images in the episode description. Try <a href="https://pocketcasts.com/" target="_blank" rel="noreferrer">Pocket Casts</a>, or another podcast app.</em></p></div>
“EA Grants Database - a new website” by Brian Foerster
<p> The EA Grants Database is a new site that neatly aggregates grant data from major EA funders who publish individual or total grant information. It is intended to be easy to maintain long term, piggybacking entirely on existing data that is likely to be maintained. The website data is updated by a script that can be run in seconds, and I anticipate doing this for the foreseeable future. </p><p> In creating the website, I tried to make things as clear and straightforward as possible. If your user experience is in any way impaired, I would appreciate hearing from you. I would also appreciate feedback on what features would actually be useful to people, although I am committed to avoiding bloat.</p><p> In a funding landscape that seems poised to grow, I hope this site can serve as a resource to help grantmakers, grantees, and other interested parties make decisions while also providing perspective on what has come before.</p><p> My post on matching credits and this website are both outgrowths of my thinking on how we might best financially coordinate as EA grows and becomes more difficult to understand.[1] Relatedly, I am also interested in the sort of mechanisms that [...]</p> <p>---</p> <p><b>First published:</b><br/> February 8th, 2026 </p> <p><b>Source:</b><br/> <a href="https://forum.effectivealtruism.org/posts/rohYFGfiFjepLDnWC/ea-grants-database-a-new-website?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Source+URL+in+episode+description&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">https://forum.effectivealtruism.org/posts/rohYFGfiFjepLDnWC/ea-grants-database-a-new-website</a> </p> <p>---</p> <p>Narrated by <a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Narrated+by+TYPE+III+AUDIO&utm_term=ea_forum&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">TYPE III AUDIO</a>.</p>
“Long-term risks from ideological fanaticism” by David_Althaus, Jamie_Harris, vanessa16, Clare_Diane, Will Aldred
<p> Cross-posted to LessWrong.</p><h3 data-internal-id="Summary">Summary</h3><ul> <li> History's most destructive ideologies—like Nazism, totalitarian communism, and religious fundamentalism—exhibited remarkably similar characteristics:<ul> <li> epistemic and moral certainty</li><li> extreme tribalism dividing humanity into a sacred “us” and an evil “them”</li><li> a willingness to use whatever means necessary, including brutal violence.</li></ul></li><li> Such ideological fanaticism was a major driver of eight of the ten greatest atrocities since 1800, including the Taiping Rebellion, World War II, and the regimes of Stalin, Mao, and Hitler.</li><li> We focus on ideological fanaticism over related concepts like totalitarianism partly because it better captures terminal preferences, which plausibly matter most as we approach superintelligent AI and technological maturity.</li><li> Ideological fanaticism is considerably less influential than in the past, controlling only a small fraction of world GDP. Yet at least hundreds of millions still hold fanatical views, many regimes exhibit concerning ideological tendencies, and the past two decades have seen widespread democratic backsliding.</li><li> The long-term influence of ideological fanaticism is uncertain. Fanaticism faces many disadvantages including a weak starting position, poor epistemics, and difficulty assembling broad coalitions. But it benefits from greater willingness to use extreme measures, fervent mass followings, and a historical tendency to survive and even thrive amid technological and societal upheaval. Beyond complete victory or defeat, multipolarity may [...]</li></ul> <p>---</p><p><strong>Outline:</strong></p><p>(00:16) Summary</p><p>(05:19) What do we mean by ideological fanaticism?</p><p>(08:40) I. Dogmatic certainty: epistemic and moral lock-in</p><p>(10:02) II. Manichean tribalism: total devotion to us, total hatred for them</p><p>(12:42) III. Unconstrained violence: any means necessary</p><p>(14:33) Fanaticism as a multidimensional continuum</p><p>(16:09) Ideological fanaticism drove most of recent history's worst atrocities</p><p>(19:24) Death tolls don't capture all harm</p><p>(20:55) Intentional versus natural or accidental harm</p><p>(22:44) Why emphasize ideological fanaticism over political systems like totalitarianism?</p><p>(25:07) Fanatical and totalitarian regimes have caused far more harm than all other regime types</p><p>(26:29) Authoritarianism as a risk factor</p><p>(27:19) Values change political systems: Ideological fanatics seek totalitarianism, not democracy</p><p>(29:50) Terminal values may matter independently of political systems, especially with AGI</p><p>(31:02) Fanaticism's connection to malevolence (dark personality traits)</p><p>(34:22) The current influence of ideological fanaticism</p><p>(34:42) Historical perspective: it was much worse, but we are sliding back</p><p>(37:19) Estimating the global scale of ideological fanaticism</p><p>(43:57) State actors</p><p>(48:12) How much influence will ideological fanaticism have in the long-term future?</p><p>(48:57) Reasons for optimism: Why ideological fanaticism will likely lose</p><p>(49:45) A worse starting point and historical track record</p><p>(50:33) Fanatics' intolerance results in coalitional disadvantages</p><p>(51:53) The epistemic penalty of irrational dogmatism</p><p>(54:21) The marketplace of ideas and human preferences</p><p>(55:57) Reasons for pessimism: Why ideological fanatics may gain power</p><p>(56:04) The fragility of democratic leadership in AI</p><p>(56:37) Fanatical actors may grab power via coups or revolutions</p><p>(59:36) Fanatics have fewer moral constraints</p><p>(01:01:13) Fanatics prioritize destructive capabilities</p><p>(01:02:13) Some ideologies with fanatical elements have been remarkably resilient and successful</p><p>(01:03:01) Novel fanatical ideologies could emerge--or existing ones could mutate</p><p>(01:05:08) Fanatics may have longer time horizons, greater scope-sensitivity, and prioritize growth more</p><p>(01:07:15) A possible middle ground: Persistent multipolar worlds</p><p>(01:08:33) Why multipolar futures seem plausible</p><p>(01:10:00) Why multipolar worlds might persist indefinitely</p><p>(01:15:42) Ideological fanaticism increases existential [...]</p>
“More EAs should consider working for the EU” by EU Policy Careers
<p> Context: The authors are a few EAs who currently work or have previously worked at the European Commission.</p><p> In this post, we</p><ol> <li> <p data-internal-id="ftnt_ref2">make the case that more people[1] aiming for a high impact career should consider working for the EU institutions[2] using the Importance, Tractability, Neglectedness framework, and;</p></li><li> briefly outline how one might get started on this, highlighting a currently open recruitment drive (deadline 10 March) that only comes along once every ~5 years.</li></ol><h4 data-internal-id="Why_working_at_the_EU_can_be_extremely_impactful">Why working at the EU can be extremely impactful</h4><h5 data-internal-id="Importance">Importance</h5><p> The EU adopts binding legislation for a continent of 450 million people and has a significant budget, making it an important player across different EA cause areas.</p><p data-internal-id="ftnt_ref3">Animal welfare[3]</p><ul> <li> The EU sets welfare standards for the over 10 billion farmed animals slaughtered across the continent each year.</li><li> The issue suffered a major setback in 2023, when the Commission, in the final steps of the process, dropped the ‘world's most comprehensive farm animal welfare reforms to date’, following massive farmers’ protests in Brussels.<ul> <li> The reform would have included ‘banning cages and crates for Europe's roughly 300 million caged animals, ending the routine mutilation of perhaps 500 million animals per year, stopping the [...]</li></ul></li></ul> <p>---</p><p><strong>Outline:</strong></p><p>(00:43) Why working at the EU can be extremely impactful</p><p>(00:49) Importance</p><p>(05:30) Tractability</p><p>(07:22) Neglectedness</p><p>(09:00) Paths into the EU</p> <p>---</p> <p><b>First published:</b><br/> February 1st, 2026 </p> <p><b>Source:</b><br/> <a href="https://forum.effectivealtruism.org/posts/t23ko3x2MoHekCKWC/more-eas-should-consider-working-for-the-eu?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Source+URL+in+episode+description&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">https://forum.effectivealtruism.org/posts/t23ko3x2MoHekCKWC/more-eas-should-consider-working-for-the-eu</a> </p> <p>---</p> <p>Narrated by <a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Narrated+by+TYPE+III+AUDIO&utm_term=ea_forum&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">TYPE III AUDIO</a>.</p>
“The Scaling Series Discussion Thread: with Toby Ord” by Toby Tremlett🔹
<p> We're trying something a bit new this week. Over the last year, Toby Ord has been writing about the implications of the fact that improvements in AI require exponentially more compute. Only one of these posts so far has been put on the EA forum. </p><p> This week we've put the entire series on the Forum and made this thread for you to discuss your reactions to the posts. Toby Ord will check in once a day to respond to your comments[1].</p><p> Feel free to also comment directly on the individual posts that make up this sequence, but you can treat this as a central discussion space for both general takes and more specific questions. </p><p> If you haven't read the series yet, we've created a page where you can, and you can see the summaries of each post below:</p><p> Are the Costs of AI Agents Also Rising Exponentially?<br> Agents can do longer and longer tasks, but their dollar cost to do these tasks may be growing even faster.</p><p> How Well Does RL Scale?<br> I show that RL-training for LLMs scales much worse than inference or pre-training.</p><p> Evidence that Recent AI Gains are Mostly from Inference-Scaling<br> I show how [...]</p> <p>---</p> <p><b>First published:</b><br/> February 2nd, 2026 </p> <p><b>Source:</b><br/> <a href="https://forum.effectivealtruism.org/posts/JAcueP8Dh6db6knBK/the-scaling-series-discussion-thread-with-toby-ord?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Source+URL+in+episode+description&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">https://forum.effectivealtruism.org/posts/JAcueP8Dh6db6knBK/the-scaling-series-discussion-thread-with-toby-ord</a> </p> <p>---</p> <p>Narrated by <a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Narrated+by+TYPE+III+AUDIO&utm_term=ea_forum&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">TYPE III AUDIO</a>.</p>
[Linkpost] “Are the Costs of AI Agents Also Rising Exponentially?” by Toby_Ord
This is a link post.<p> There is an extremely important question about the near-future of AI that almost no-one is asking. </p><p> We’ve all seen the graphs from METR showing that the length of tasks AI agents can perform has been growing exponentially over the last 7 years. While GPT-2 could only do software engineering tasks that would take someone a few seconds, the latest models can (50% of the time) do tasks that would take a human a few hours.</p><p> As this trend shows no signs of stopping, people have naturally taken to extrapolating it out, to forecast when we might expect AI to be able to do tasks that take an engineer a full work-day; or week; or year.</p><p> But we are missing a key piece of information — the cost of performing this work. </p><p> Over those 7 years AI systems have grown exponentially. The size of the models (parameter count) has grown by 4,000x and the number of times they are run in each task (tokens generated) has grown by about 100,000x. AI researchers have also found massive efficiencies, but it is eminently plausible that the cost for the peak performance measured by METR has been [...]</p> <p>---</p><p><strong>Outline:</strong></p><p>(13:02) Conclusions</p><p>(14:05) Appendix</p><p>(14:08) METR has a similar graph on their page for GPT-5.1 codex. It includes more models and compares them by token counts rather than dollar costs:</p> <p>---</p> <p><b>First published:</b><br/> February 2nd, 2026 </p> <p><b>Source:</b><br/> <a href="https://forum.effectivealtruism.org/posts/AbHPpGTtAMyenWGX8/are-the-costs-of-ai-agents-also-rising-exponentially?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Source+URL+in+episode+description&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">https://forum.effectivealtruism.org/posts/AbHPpGTtAMyenWGX8/are-the-costs-of-ai-agents-also-rising-exponentially</a> </p> <p><strong>Linkpost URL:</strong><br><a href="https://forum.effectivealtruism.org/out?url=https%3A%2F%2Fwww.tobyord.com%2Fwriting%2Fhourly-costs-for-ai-agents" rel="noopener noreferrer" target="_blank">https://www.tobyord.com/writing/hourly-costs-for-ai-agents</a></p> <p>---</p> <p>Narrated by <a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Narrated+by+TYPE+III+AUDIO&utm_term=ea_forum&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">TYPE III AUDIO</a>.</p> <p>---</p><p><strong>Images from the article:</strong> METR's graph of the time-horizon of software engineering tasks different LLMs can complete 50% of the time, by release date; graphs of agent performance on HCAST & RE-Bench by cost (50% time horizon) across models.</p>
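The two growth factors quoted in the excerpt already bound the naive answer. A rough sketch, under the simplifying assumption (ours, not the post's) that per-task inference cost scales with parameters times tokens generated, before efficiency gains:

```python
# Growth factors quoted in the excerpt, over ~7 years of METR's trend.
param_growth = 4_000      # model size (parameter count)
token_growth = 100_000    # tokens generated per task

# Assumption: per-task compute ~ parameters x tokens, before efficiency gains.
naive_cost_growth = param_growth * token_growth
print(f"naive per-task compute growth: {naive_cost_growth:.0e}x")  # 4e+08x

# Even granting a generous (assumed) 10,000x of combined hardware and
# algorithmic efficiency gains, per-task cost would still have grown ~40,000x
# over the same period as the time-horizon trend.
print(f"after assumed efficiency gains: {naive_cost_growth / 10_000:,.0f}x")
```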
[Linkpost] “Evidence that Recent AI Gains are Mostly from Inference-Scaling” by Toby_Ord
This is a link post.<p> In the last year or two, the most important trend in modern AI came to an end. The scaling-up of computational resources used to train ever-larger AI models through next-token prediction (pre-training) stalled out. Since late 2024, we’ve seen a new trend of using reinforcement learning (RL) in the second stage of training (post-training). Through RL, the AI models learn to do superior chain-of-thought reasoning about the problem they are being asked to solve.</p><p> This new era involves scaling up two kinds of compute:</p><ol> <li> the amount of compute used in RL post-training</li><li> the amount of compute used every time the model answers a question</li></ol><p> Industry insiders are excited about the first new kind of scaling, because the amount of compute needed for RL post-training started off being small compared to the tremendous amounts already used in next-token prediction pre-training. Thus, one could scale the RL post-training up by a factor of 10 or 100 before even doubling the total compute used to train the model.</p><p> But the second new kind of scaling is a problem. Major AI companies were already starting to spend more compute serving their models to customers than in the training [...]</p> <p>---</p> <p><b>First published:</b><br/> February 2nd, 2026 </p> <p><b>Source:</b><br/> <a href="https://forum.effectivealtruism.org/posts/5zfubGrJnBuR5toiK/evidence-that-recent-ai-gains-are-mostly-from-inference?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Source+URL+in+episode+description&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">https://forum.effectivealtruism.org/posts/5zfubGrJnBuR5toiK/evidence-that-recent-ai-gains-are-mostly-from-inference</a> </p> <p><strong>Linkpost URL:</strong><br><a href="https://forum.effectivealtruism.org/out?url=https%3A%2F%2Fwww.tobyord.com%2Fwriting%2Fmostly-inference-scaling" rel="noopener noreferrer" target="_blank">https://www.tobyord.com/writing/mostly-inference-scaling</a></p> <p>---</p> <p>Narrated by <a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Narrated+by+TYPE+III+AUDIO&utm_term=ea_forum&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">TYPE III AUDIO</a>.</p> <p>---</p><p><strong>Images from the article:</strong> graphs comparing base-model and RL post-training performance by tokens used on MATH level 5, GPQA Diamond, and OTIS Mock AIME 2024-25.</p>
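The excerpt's point about why insiders are excited is simple arithmetic. A toy sketch, where the initial RL share of training compute is an assumed number (the post doesn't give one here):

```python
# Assumed starting point (illustrative): RL post-training initially uses
# 1% as much compute as next-token-prediction pre-training.
pretrain_compute = 1.00
rl_compute = 0.01

for rl_scaleup in (10, 100, 1_000):
    total = pretrain_compute + rl_compute * rl_scaleup
    print(f"RL scaled {rl_scaleup:>5}x -> total training compute {total:.2f}")
# 10x   -> 1.10: barely moves the total
# 100x  -> 2.00: total training compute has only now roughly doubled
# 1000x -> 11.00: beyond this point, RL scaling dominates total training cost
```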
[Linkpost] “The Extreme Inefficiency of RL for Frontier Models” by Toby_Ord
This is a link post.<p> The new scaling paradigm for AI reduces the amount of information a model can learn from per hour of training by a factor of 1,000 to 1,000,000. I explore what this means and its implications for scaling.</p><p> The last year has seen a massive shift in how leading AI models are trained. 2018–2023 was the era of pre-training scaling. LLMs were primarily trained by next-token prediction (also known as pre-training). Much of OpenAI's progress from GPT-1 to GPT-4 came from scaling up the amount of pre-training by a factor of 1,000,000. New capabilities were unlocked not through scientific breakthroughs, but through doing more-or-less the same thing at ever-larger scales. Everyone was talking about the success of scaling, from AI labs to venture capitalists to policy makers. </p><p> However, there's been markedly little progress in scaling up this kind of training since (GPT-4.5 added one more factor of 10, but was then quietly retired). Instead, there has been a shift to taking one of these pre-trained models and further training it with large amounts of Reinforcement Learning (RL). This has produced models like OpenAI's o1, o3, and GPT-5, with dramatic improvements in reasoning (such as solving [...]</p> <p>---</p> <p><b>First published:</b><br/> February 2nd, 2026 </p> <p><b>Source:</b><br/> <a href="https://forum.effectivealtruism.org/posts/64iwgmMvGSTBHPdHg/the-extreme-inefficiency-of-rl-for-frontier-models?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Source+URL+in+episode+description&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">https://forum.effectivealtruism.org/posts/64iwgmMvGSTBHPdHg/the-extreme-inefficiency-of-rl-for-frontier-models</a> </p> <p><strong>Linkpost URL:</strong><br><a href="https://forum.effectivealtruism.org/out?url=https%3A%2F%2Fwww.tobyord.com%2Fwriting%2Finefficiency-of-reinforcement-learning" rel="noopener noreferrer" target="_blank">https://www.tobyord.com/writing/inefficiency-of-reinforcement-learning</a></p> <p>---</p> <p>Narrated by <a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Narrated+by+TYPE+III+AUDIO&utm_term=ea_forum&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">TYPE III AUDIO</a>.</p> <p>---</p><div style="max-width: 100%";><p><strong>Images from the article:</strong></p><a href="https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/64iwgmMvGSTBHPdHg/vmu4iiayawjzycsddcey" target="_blank"><img src="https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/64iwgmMvGSTBHPdHg/vmu4iiayawjzycsddcey" alt="Graph showing "Time-horizon of software engineering tasks different LLMs can complete 50% of the time" with task duration versus LLM release date." style="max-width: 100%;" /></a><p><em>Apple Podcasts and Spotify do not show images in the episode description. Try <a href="https://pocketcasts.com/" target="_blank" rel="noreferrer">Pocket Casts</a>, or another podcast app.</em></p></div>
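One way to see where a factor of 1,000 to 1,000,000 could come from (an illustrative reconstruction, not the post's own calculation): compare the supervision signal carried by each token under next-token prediction with a single scalar reward per RL episode.

```python
import math

# Pre-training: each token carries up to log2(vocab) bits of supervision.
vocab_size = 100_000                              # assumed vocabulary size
bits_per_token_pretrain = math.log2(vocab_size)   # ~16.6 bits per token

# RL post-training: roughly one pass/fail reward per whole episode.
episode_tokens = 10_000   # assumed chain-of-thought length per episode
bits_per_episode = 1.0    # assumed information content of a scalar reward
bits_per_token_rl = bits_per_episode / episode_tokens

print(f"{bits_per_token_pretrain / bits_per_token_rl:.1e}")
# ~1.7e+05 -- inside the post's quoted 10^3 to 10^6 range; shorter or longer
# episodes move the ratio toward either end of that range.
```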
[Linkpost] “Inference Scaling Reshapes AI Governance” by Toby_Ord
This is a link post.<p> The shift from scaling up the pre-training compute of AI systems to scaling up their inference compute may have profound effects on AI governance. The nature of these effects depends crucially on whether this new inference compute will primarily be used during external deployment or as part of a more complex training programme within the lab. Rapid scaling of inference-at-deployment would: lower the importance of open-weight models (and of securing the weights of closed models), reduce the impact of the first human-level models, change the business model for frontier AI, reduce the need for power-intensive data centres, and derail the current paradigm of AI governance via training compute thresholds. Rapid scaling of inference-during-training would have more ambiguous effects that range from a revitalisation of pre-training scaling to a form of recursive self-improvement via iterated distillation and amplification.</p><p><strong> The end of an era — for both training and governance</strong></p><p> The intense year-on-year scaling up of AI training runs has been one of the most dramatic and stable markers of the Large Language Model era. Indeed it had been widely taken to be a permanent fixture of the AI landscape and the basis of many approaches to [...]</p> <p>---</p><p><strong>Outline:</strong></p><p>(01:06) The end of an era -- for both training and governance</p><p>(05:24) Scaling inference-at-deployment</p><p>(06:42) Reducing the number of simultaneously served copies of each new model</p><p>(08:45) Reducing the value of securing model weights</p><p>(09:30) Reducing the benefits and risks of open-weight models</p><p>(10:05) Unequal performance for different tasks and for different users</p><p>(12:08) Changing the business model and industry structure</p><p>(12:50) Reducing the need for monolithic data centres</p><p>(17:16) Scaling inference-during-training</p><p>(28:07) Conclusions</p><p>(30:17) Appendix. Comparing the costs of scaling pre-training vs inference-at-deployment</p> <p>---</p> <p><b>First published:</b><br/> February 2nd, 2026 </p> <p><b>Source:</b><br/> <a href="https://forum.effectivealtruism.org/posts/RnsgMzsnXcceFfKip/inference-scaling-reshapes-ai-governance?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Source+URL+in+episode+description&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">https://forum.effectivealtruism.org/posts/RnsgMzsnXcceFfKip/inference-scaling-reshapes-ai-governance</a> </p> <p><strong>Linkpost URL:</strong><br><a href="https://forum.effectivealtruism.org/out?url=https%3A%2F%2Fwww.tobyord.com%2Fwriting%2Finference-scaling-reshapes-ai-governance" rel="noopener noreferrer" target="_blank">https://www.tobyord.com/writing/inference-scaling-reshapes-ai-governance</a></p> <p>---</p> <p>Narrated by <a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Narrated+by+TYPE+III+AUDIO&utm_term=ea_forum&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">TYPE III AUDIO</a>.</p> <p>---</p><p><strong>Images from the article:</strong> scatter plots of o1 AIME accuracy during training and at test time versus compute on a log scale; a diagram of the three stages of pre-training, post-training, and deployment; a graph of o-series performance by cost per task and score.</p>
[Linkpost] “Is there a Half-Life for the Success Rates of AI Agents?” by Toby_Ord
This is a link post.<p> Building on the recent empirical work of Kwa et al. (2025), I show that within their suite of research-engineering tasks the performance of AI agents on longer-duration tasks can be explained by an extremely simple mathematical model — a constant rate of failing during each minute a human would take to do the task. This implies an exponentially declining success rate with the length of the task and that each agent could be characterised by its own half-life. This empirical regularity allows us to estimate the success rate for an agent at different task lengths. And the fact that this model is a good fit for the data is suggestive of the underlying causes of failure on longer tasks — that they involve increasingly large sets of subtasks where failing any one fails the task. Whether this model applies more generally on other suites of tasks is unknown and an important subject for further work.</p><p> METR's results on the length of tasks agents can reliably complete</p><p> A recent paper by Kwa et al. (2025) from the research organisation METR has found an exponential trend in the duration of the tasks that frontier AI agents can [...]</p> <p>---</p><p><strong>Outline:</strong></p><p>(05:33) Explaining these results via a constant hazard rate</p><p>(14:54) Upshots of the constant hazard rate model</p><p>(18:47) Further work</p><p>(19:25) References</p> <p>---</p> <p><b>First published:</b><br/> February 2nd, 2026 </p> <p><b>Source:</b><br/> <a href="https://forum.effectivealtruism.org/posts/qz3xyqCeriFHeTAJs/is-there-a-half-life-for-the-success-rates-of-ai-agents-3?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Source+URL+in+episode+description&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">https://forum.effectivealtruism.org/posts/qz3xyqCeriFHeTAJs/is-there-a-half-life-for-the-success-rates-of-ai-agents-3</a> </p> <p><strong>Linkpost URL:</strong><br><a href="https://forum.effectivealtruism.org/out?url=https%3A%2F%2Fwww.tobyord.com%2Fwriting%2Fhalf-life" rel="noopener noreferrer" target="_blank">https://www.tobyord.com/writing/half-life</a></p> <p>---</p> <p>Narrated by <a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Narrated+by+TYPE+III+AUDIO&utm_term=ea_forum&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">TYPE III AUDIO</a>.</p> <p>---</p><div style="max-width: 100%";><p><strong>Images from the article:</strong></p><a href="https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/qz3xyqCeriFHeTAJs/i68dte0l4hua413wvubt" target="_blank"><img src="https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/qz3xyqCeriFHeTAJs/i68dte0l4hua413wvubt" alt="Graph showing "Length of tasks AI agents have been able to complete autonomously" over time with exponential trend line." style="max-width: 100%;" /></a><hr style="margin-top: 24px; margin-bottom: 24px;" /><a href="https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/qz3xyqCeriFHeTAJs/edk33wdhrnmajvucjheu" target="_blank"><img src="https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/qz3xyqCeriFHeTAJs/edk33wdhrnmajvucjheu" alt="Diagram showing three sections: Diverse Task Suite, Task Performance, and Time Horizon Analysis." 
style="max-width: 100%;" /></a><hr style="margin-top: 24px; margin-bottom: 24px;" /><a href="https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/qz3xyqCeriFHeTAJs/vkwzt5iz3h9popevrxf1" target="_blank"><img src="https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/qz3xyqCeriFHeTAJs/vkwzt5iz3h9popevrxf1" alt="Graph showing survival percentage S(t) declining over task length with markers at T₈₀, T₅₀, and T₂₅." style="max-width: 100%;" /></a><hr style="margin-top: 24px; margin-bottom: 24px;" /><a href="https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/qz3xyqCeriFHeTAJs/vitusefkjjbgqklcbmgp" target="_blank"><img src="https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/qz3xyqCeriFHeTAJs/vitusefkjjbgqklcbmgp" alt="Six graphs comparing success probability versus task length across different Claude AI models and their time horizons." style="max-width: 100%;" /></a><hr style="margin-top: 24px; margin-bottom: 24px;" /><a href="https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/qz3xyqCeriFHeTAJs/eaileroszvjhkibd9qjc" target="_blank"><img src="https://res.cloudinary.com/cea/image/upload/f_auto,
[Linkpost] “Inference Scaling and the Log-x Chart” by Toby_Ord
This is a link post.<p> Improving model performance by scaling up inference compute is the next big thing in frontier AI. But the charts being used to trumpet this new paradigm can be misleading. While they initially appear to show steady scaling and impressive performance for models like o1 and o3, they really show poor scaling (characteristic of brute force) and little evidence of improvement between o1 and o3. I explore how to interpret these new charts and what evidence for strong scaling and progress would look like.</p><p><strong> From scaling training to scaling inference</strong></p><p> The dominant trend in frontier AI over the last few years has been the rapid scale-up of training: using more and more compute to produce smarter and smarter models. Since GPT-4, this kind of scaling has run into challenges, so we haven't yet seen models much larger than GPT-4. But we have seen a recent shift towards scaling up the compute used during deployment (aka 'test-time compute' or 'inference compute'), with more inference compute producing smarter behaviour from the same model.</p><p> You could think of this as a change in strategy from improving the quality of your employees' work by giving them more years of training in which to acquire [...]</p> <p>---</p> <p><b>First published:</b><br/> February 2nd, 2026 </p> <p><b>Source:</b><br/> <a href="https://forum.effectivealtruism.org/posts/zNymXezwySidkeRun/inference-scaling-and-the-log-x-chart?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Source+URL+in+episode+description&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">https://forum.effectivealtruism.org/posts/zNymXezwySidkeRun/inference-scaling-and-the-log-x-chart</a> </p> <p><strong>Linkpost URL:</strong><br><a href="https://forum.effectivealtruism.org/out?url=https%3A%2F%2Fwww.tobyord.com%2Fwriting%2Finference-scaling-and-the-log-x-chart" rel="noopener noreferrer" target="_blank">https://www.tobyord.com/writing/inference-scaling-and-the-log-x-chart</a></p> <p>---</p> <p>Narrated by <a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&utm_medium=Podcast&utm_content=Narrated+by+TYPE+III+AUDIO&utm_term=ea_forum&utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank">TYPE III AUDIO</a>.</p>
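<p>A quick way to see the worry about log-x charts, as a sketch: assuming the plotted score is roughly linear in the logarithm of compute (my reading of such charts, not a formula from the post), a straight line actually encodes exponentially growing cost.</p>

```latex
% Sketch, assuming score s is linear in log-compute (illustrative,
% not the post's own formula):
\[
  s(c) = a + b \log_{10} c
  \qquad\Longrightarrow\qquad
  \frac{c(s + \Delta)}{c(s)} = 10^{\Delta / b}.
\]
% Each fixed gain of Delta points in score multiplies the required
% compute by a constant factor: linear-looking progress on a log-x
% axis is exponentially costly, the signature of brute force.
```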
style="max-width: 100%;" /></a><hr style="margin-top: 24px; margin-bottom: 24px;" /><a href="https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/zNymXezwySidkeRun/fkjohjdto9del1k0kp4t" target="_blank"><img src="https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/zNymXezwySidkeRun/fkjohjdto9del1k0kp4t" alt="A logarithmic graph showing "Moore's Law: The number of transistors on microchips doubles every two years."" style="max-width: 100%;" /></a><hr style="margin-top: 24px; margin-bottom: 24px;" /><a href="https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/zNymXezwySidkeRun/zv3gchd5zuy4wldglsx0" target="_blank"><img src="https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/zNymXezwySidkeRun/zv3gchd5zuy4wldglsx0" alt="Performance comparison graphs showing model coverage across SWE-bench Lite and other coding benchmarks with varying sample sizes." style="max-width: 100%;" /></a><hr style="margin-top: 24px; margin-bottom: 24px;" /><a href="https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/zNymXezwySidkeRun/o6qevbqi6vmmi5rh0auf" target="_blank"><img src="https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/zNymXezwySidkeRun/o6qevbqi6vmmi5rh0auf" alt="Two line graphs comparing model performance on "MATH (Oracle Verifier)" and "CodeContests" benchmarks across sample sizes." style="max-width: 100%;" /></a><hr style="margin-top: 24px; margin-bottom: 24px;" /><a href="https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/zNymXezwySidkeRun/vahlaqdhv03wmyspc548" target="_blank"><img src="https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/zNymXezwySidkeRun/vahlaqdhv03wmyspc548" alt="Graph showing O Series Performance with ARC-AGI Semi-Private Eval scores versus cost per task." style="max-