
TYPE III AUDIO (All episodes)
167 episodes — Page 3 of 4
EA Forum Weekly Summaries – Episode 13 (Jan. 9 - 15, 2023)
---
client: ea_forum
project_id: summaries
narrator: cs
---

Original article: https://forum.effectivealtruism.org/posts/DNWpFLrtrJXe4mted/ea-and-lw-forum-summaries-9th-jan-to-15th-jan-23

This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.

Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).

Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.
"What you can do to help stop violence against women and girls" by Akhil
---
client: ea_forum
project_id: curated
narrator: pw
qa: mds
narrator_time: 2h20m
qa_time: 0h45m
---

I previously wrote an entry for the Open Philanthropy Cause Exploration Prize on why preventing violence against women and girls is a global priority. For an introduction to the area, I have written a brief summary below.

In this post, I will extend that work, diving deeper into the literature and the landscape of organisations in the field, as well as creating a cost-effectiveness model for some of the most promising preventative interventions. Based on this, I will offer some concrete recommendations for different stakeholders - from individuals looking to donate, to funders, to charity evaluators and incubators.

Original article: https://forum.effectivealtruism.org/posts/uH9akQzJkzpBD5Duw/what-you-can-do-to-help-stop-violence-against-women-and

Narrated for the Effective Altruism Forum by TYPE III AUDIO.
"AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years" by Trevor Chow, Basil Halperin, & J. Zachary Mazlish
---
client: ea_forum
project_id: curated
feed_id: ai, ai_safety, ai_safety__forecasting
narrator: pw
qa: mds
narrator_time: 4h45m
qa_time: 1h30m
---

In this post, we point out that short AI timelines would cause real interest rates to be high, and would do so under expectations of either unaligned or aligned AI. However, 30- to 50-year real interest rates are low. We argue that this suggests one of two possibilities:

1. Long(er) timelines. Financial markets are often highly effective information aggregators (the “efficient market hypothesis”), and therefore real interest rates accurately reflect that transformative AI is unlikely to be developed in the next 30-50 years.
2. Market inefficiency. Markets are radically underestimating how soon advanced AI technology will be developed, and real interest rates are therefore too low. There is thus an opportunity for philanthropists to borrow while real rates are low to cheaply do good today; and/or an opportunity for anyone to earn excess returns by betting that real rates will rise.

In the rest of this post we flesh out this argument.

Original article: https://forum.effectivealtruism.org/posts/8c7LycgtkypkgYjZx/agi-and-the-emh-markets-are-not-expecting-aligned-or

Narrated for the Effective Altruism Forum by TYPE III AUDIO.
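To see why both branches of the argument push rates the same way, here is the standard Ramsey-style relationship between real rates and expected growth that the post builds on (a simplified sketch from memory, not the authors' exact model):

$$r \approx \rho + \sigma g$$

Aligned transformative AI raises expected consumption growth $g$; unaligned AI effectively raises $\rho$, since consumption may have no future to be deferred to. Under short timelines, either channel implies high long-run real rates $r$, which is what makes today's low 30- to 50-year rates informative.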
EA Forum Weekly Summaries – Episode 12 (Dec. 19, 2022 to Jan. 8, 2023)
---
client: ea_forum
project_id: summaries
narrator: cs
---

Original article: https://forum.effectivealtruism.org/posts/JZuCg7TtfzzaX9bBY/ea-and-lw-forum-summaries-holiday-edition-19th-dec-8th-jan

This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.

Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).

Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.
"How to make tough career decisions" by Benjamin Todd
---
client: 80000_hours
project_id: articles
narrator: pw
qa: km
narrator_time: 2h00m
qa_time: 0h30m
---

Should I quit my job? Which of my offers should I take? Which long-term options should I explore?

These decisions will affect how you spend years of your time, so the stakes are high. But they’re also an area where you shouldn’t expect your intuition to be a reliable guide. This means it’s worth taking a more systematic approach.

What might a good career decision process look like? A common approach is to make a pro and con list, but it’s possible to do a lot better. Pro and con lists make it easy to put too much weight on an unimportant factor. More importantly, they don’t encourage you to make use of the most powerful decision-making methods, which can greatly improve the quality of your decisions.

Original article: https://80000hours.org/career-decision/article/

Narrated for 80,000 Hours by TYPE III AUDIO.
"How 'Discovering Latent Knowledge in Language Models Without Supervision' Fits Into a Broader Alignment Scheme" by Collin
---
client: lesswrong
project_id: curated
feed_id: ai, ai_safety, ai_safety__technical
narrator: pw
qa: km
narrator_time: 2h15m
qa_time: 0h35m
---

A few collaborators and I recently released a new paper: Discovering Latent Knowledge in Language Models Without Supervision. For a quick summary of our paper, you can check out this Twitter thread.

In this post I will describe how I think the results and methods in our paper fit into a broader scalable alignment agenda. Unlike the paper, this post is explicitly aimed at an alignment audience and is mainly conceptual rather than empirical.

Tl;dr: unsupervised methods are more scalable than supervised methods, deep learning has special structure that we can exploit for alignment, and we may be able to recover superhuman beliefs from deep learning representations in a totally unsupervised way.

Original article: https://www.lesswrong.com/posts/L4anhrxjv8j2yRKKp/how-discovering-latent-knowledge-in-language-models-without

Narrated for LessWrong by TYPE III AUDIO.
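For context on what “totally unsupervised” means here: the paper's method (CCS, “Contrast-Consistent Search”) trains a probe on contrast pairs with a consistency objective instead of labels. The sketch below is a from-memory rendering of that objective, so treat the details as approximate:

```python
import numpy as np

def ccs_loss(p_pos: np.ndarray, p_neg: np.ndarray) -> float:
    """CCS objective: p_pos and p_neg are a probe's outputs on a model's
    activations for the "true" and "false" phrasings of the same statements."""
    consistency = (p_pos - (1.0 - p_neg)) ** 2   # the two should sum to ~1
    confidence = np.minimum(p_pos, p_neg) ** 2   # discourage p_pos = p_neg = 0.5
    return float(np.mean(consistency + confidence))
```

Minimizing this over a probe's parameters can recover a truth-like direction without any human labels, which is the property the post argues scales past human supervision.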
"Models Don't 'Get Reward'" by Sam Ringer
---
client: lesswrong
project_id: curated
feed_id: ai, ai_safety, ai_safety__technical
narrator: pw
qa: km
narrator_time: 1h05m
qa_time: 0h10m
---

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

In terms of content, this has a lot of overlap with Reward is not the optimization target. I'm basically rewriting a part of that post in language I personally find clearer, emphasising what I think is the core insight.

When thinking about deception and RLHF training, a simplified threat model is something like this:

1. A model takes some actions.
2. If a human approves of these actions, the human gives the model some reward.
3. Humans can be deceived into giving reward in situations where they would otherwise not if they had more knowledge.
4. Models will take advantage of this so they can get more reward.
5. Models will therefore become deceptive.

Before continuing, I would encourage you to really engage with the above. Does it make sense to you? Is it making any hidden assumptions? Is it missing any steps? Can you rewrite it to be more mechanistically correct?

I believe that when people use the above threat model, they are either using it as shorthand for something else or they misunderstand how reinforcement learning works. Most alignment researchers will be in the former category. However, I was in the latter.

Original article: https://www.lesswrong.com/posts/TWorNr22hhYegE4RT/models-don-t-get-reward

Narrated for LessWrong by TYPE III AUDIO.
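The mechanistic point at stake is where reward enters a standard RL setup. In this minimal, hypothetical REINFORCE-style sketch (mine, not the post's), reward scales the weight update computed by the training process; it is never an observation the model receives:

```python
import numpy as np

def reinforce_update(theta: np.ndarray, grad_log_prob: np.ndarray,
                     reward: float, lr: float = 0.01) -> np.ndarray:
    """One policy-gradient step: reward only scales the parameter update."""
    return theta + lr * reward * grad_log_prob

theta = np.zeros(4)                             # policy parameters
g = np.array([0.1, -0.2, 0.0, 0.3])             # grad of log pi(action | state)
theta = reinforce_update(theta, g, reward=1.0)  # the model never "sees" the 1.0
```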
"The Feeling of Idea Scarcity" by John Wentworth
---
client: lesswrong
project_id: curated
narrator: pw
qa: km
narrator_time: 1h05m
qa_time: 0h10m
---

Here’s a story you may recognize. There's a bright up-and-coming young person - let's call her Alice. Alice has a cool idea. It seems like maybe an important idea, a big idea, an idea which might matter. A new and valuable idea. It’s the first time Alice has come up with a high-potential idea herself, something which she’s never heard in a class or read in a book or what have you.

So Alice goes all-in pursuing this idea. She spends months fleshing it out. Maybe she writes a paper, or starts a blog, or gets a research grant, or starts a company, or whatever, in order to pursue the high-potential idea, bring it to the world.

And sometimes it just works!

… but more often, the high-potential idea doesn’t actually work out. Maybe it turns out to be basically-the-same as something which has already been tried. Maybe it runs into some major barrier, some not-easily-patchable flaw in the idea. Maybe the problem it solves just wasn’t that important in the first place.

Original article: https://www.lesswrong.com/posts/mfPHTWsFhzmcXw8ta/the-feeling-of-idea-scarcity

Narrated for LessWrong by TYPE III AUDIO.
"Why Anima International suspended the campaign to end live fish sales in Poland" by Jakub Stencel & Weronika Zurek
---
client: ea_forum
project_id: curated
narrator: pw
qa: km
narrator_time: 2h45m
qa_time: 0h50m
---

At Anima International, we recently decided to suspend our campaign against live fish sales in Poland indefinitely. After a few years of running the campaign, we are now concerned about the effects of our efforts, specifically the possibility of a net negative result for the lives of animals. We believe that by writing about it openly we can help foster a culture of intellectual honesty, information sharing and accountability. Ideally, our case can serve as a good example of reflecting on potential unintended consequences of advocacy interventions.

Original article: https://forum.effectivealtruism.org/posts/snnfmepzrwpAsAoDT/why-anima-international-suspended-the-campaign-to-end-live

This is a linkpost for https://animainternational.org/blog/why-anima-international-suspended-the-campaign-to-end-live-fish-sales-in-poland

Narrated for the Effective Altruism Forum by TYPE III AUDIO.
"Why some of your career options probably have 100x the impact of others" by Benjamin Todd
---
client: 80000_hours
project_id: articles
narrator: pw
qa: mds
narrator_time: 1h10m
qa_time: 0h15m
---

We believe that some of the career paths open to you likely have over 100 times more positive impact than other paths you might take.

Why? In our key ideas series, we’ve shown that you can have more impact by:

- Finding a bigger and/or more neglected problem to work on
- Finding a path that gives you a bigger opportunity to contribute
- Finding work that fits you better

We’ve also shown that there are big differences for each factor:

- Some problems seem hundreds of times more neglected relative to their scale than others.
- Some career paths let you make 100 times as big a contribution to solving those problems as others — via giving you more leverage or letting you support more effective solutions.
- You can have many times more impact in a path that’s a good fit.

On top of that, you can further increase your impact by having a good career strategy, such as by striking the right balance between investing in yourself and having an impact right away.

Original article: https://80000hours.org/articles/careers-differ-in-impact/

Narrated for 80,000 Hours by TYPE III AUDIO.
"StrongMinds should not be a top-rated charity (yet)" by Simon_M
---
client: ea_forum
project_id: curated
qa: mds
narrator_time: 1h40m
qa_time: 0h0m
---

This is a linkpost for https://simonm.substack.com/p/strongminds-should-not-be-a-top-rated

GWWC lists StrongMinds as a “top-rated” charity. They do so because Founders Pledge has determined they are cost-effective in their report into mental health.

I could say here, “and that report was written in 2019 - either they should update the report or remove the top rating” and we could all go home. In fact, most of what I’m about to say does consist of “the data really isn’t that clear yet”.

I think the strongest statement I can make (which I doubt StrongMinds would disagree with) is:

“StrongMinds have made limited effort to be quantitative in their self-evaluation, haven’t continued monitoring impact after intervention, haven’t done the research they once claimed they would. They have not been vetted sufficiently to be considered a top charity, and only one independent group has done the work to look into them.”

Original article: https://forum.effectivealtruism.org/posts/ffmbLCzJctLac3rDu/strongminds-should-not-be-a-top-rated-charity-yet

Narrated for the Effective Altruism Forum by TYPE III AUDIO.
"Let’s think about slowing down AI" by Katja Grace
---
client: ea_forum
project_id: curated
feed_id: ai, ai_safety, ai_safety__governance
narrator: pw
qa: km
narrator_time: 5h00m
qa_time: 1h50m
---

If you fear that someone will build a machine that will seize control of the world and annihilate humanity, then one kind of response is to try to build further machines that will seize control of the world even earlier without destroying it, forestalling the ruinous machine’s conquest. An alternative or complementary kind of response is to try to avert such machines being built at all, at least while the degree of their apocalyptic tendencies is ambiguous.

The latter approach seems to me like the kind of basic and obvious thing worthy of at least consideration, and also, in its favor, fits nicely in the genre ‘stuff that it isn’t that hard to imagine happening in the real world’. Yet my impression is that for people worried about extinction risk from artificial intelligence, strategies under the heading ‘actively slow down AI progress’ have historically been dismissed and ignored (though ‘don’t actively speed up AI progress’ is popular).

Original article: https://forum.effectivealtruism.org/posts/vwK3v3Mekf6Jjpeep/let-s-think-about-slowing-down-ai-1

Narrated for the Effective Altruism Forum by TYPE III AUDIO.
"High-level hopes for AI alignment" by Holden Karnofsky
---
client: ea_forum
project_id: curated
feed_id: ai, ai_safety, ai_safety__governance, ai_safety__technical
narrator: not_t3a
qa: not_t3a
---

In previous pieces, I argued that there's a real and large risk of AI systems' aiming to defeat all of humanity combined - and succeeding.

I first argued that this sort of catastrophe would be likely without specific countermeasures to prevent it. I then argued that countermeasures could be challenging, due to some key difficulties of AI safety research.

But while I think misalignment risk is serious and presents major challenges, I don’t agree with sentiments along the lines of “We haven’t figured out how to align an AI, so if transformative AI comes soon, we’re doomed.” Here I’m going to talk about some of my high-level hopes for how we might end up avoiding this risk.

Original article: https://forum.effectivealtruism.org/posts/rJRw78oihoT5paFGd/high-level-hopes-for-ai-alignment

Narrated by Holden Karnofsky for the Cold Takes blog.
"The next decades might be wild" by Marius Hobbhahn
---
client: lesswrong
project_id: curated
feed_id: ai, ai_safety, ai_safety__forecasting
narrator: pw
qa: mds
narrator_time: 4h30m
qa_time: 2h0m
---

This post is inspired by What 2026 looks like and an AI vignette workshop guided by Tamay Besiroglu. I think of this post as “what would I expect the world to look like if these timelines (median compute for transformative AI ~2036) were true” or “what short-to-medium timelines feel like”, since I find it hard to translate a statement like “median TAI year is 20XX” into a coherent imaginable world.

I expect some readers to think that the post sounds wild and crazy, but that doesn’t mean its content couldn’t be true. If you had told someone in 1990 or 2000 that there would be more smartphones and computers than humans in 2020, that probably would have sounded wild to them. The same could be true for AIs, i.e. that in 2050 there are more human-level AIs than humans. The fact that this sounds as ridiculous as ubiquitous smartphones sounded to the 1990/2000 person might just mean that we are bad at predicting exponential growth and disruptive technology.

Original article: https://www.lesswrong.com/posts/qRtD4WqKRYEtT5pi3/the-next-decades-might-be-wild

Narrated for LessWrong by TYPE III AUDIO.
"Anonymous advice: If you want to reduce AI risk, should you take roles that advance AI capabilities?" by Benjamin Hilton & 11 anonymous experts
---
client: 80000_hours
project_id: articles
narrator: pw
qa: km
narrator_time: 2h40m
qa_time: 0h50m
---

We’ve argued that preventing an AI-related catastrophe may be the world’s most pressing problem, and that while progress in AI over the next few decades could have enormous benefits, it could also pose severe, possibly existential risks. As a result, we think that working on some technical AI research — research related to AI safety — may be a particularly high-impact career path.

But there are many ways of approaching this path that involve researching or otherwise advancing AI capabilities — meaning making AI systems better at some specific skills — rather than only doing things that are purely in the domain of safety. In short, this is because capabilities work and some forms of safety work are intertwined, and many available ways of learning enough about AI to contribute to safety are via capabilities-enhancing roles.

So if you want to help prevent an AI-related catastrophe, should you be open to roles that also advance AI capabilities, or steer clear of them?

Original article: https://80000hours.org/articles/ai-capabilities/

Narrated for 80,000 Hours by TYPE III AUDIO.
EA Forum Weekly Summaries – Episode 10 (Dec. 5 - 11, 2022)
---
client: ea_forum
project_id: summaries
narrator: cs
---

Original article: https://forum.effectivealtruism.org/posts/8bcPkqdLYG78YbnTh/ea-and-lw-forums-weekly-summary-5th-dec-11th-dec-22

This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.

Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).

Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.
"3 key career stages" by Benjamin Todd
---
client: 80000_hours
project_id: articles
narrator: pw
qa: km
narrator_time: 1h0m
qa_time: 0h10m
---

If you want to have an impact, the aim is to find a job that has the potential to make a big contribution to a pressing problem, and that’s a good fit for you. But how can you find a job like that?

In the strategy section of our key ideas series, we discuss the value of exploration and career capital, as well as many other ideas, like why to be more ambitious. Here we sum them up into a simple career strategy.

Original article: https://80000hours.org/articles/key-career-stages/

Narrated for 80,000 Hours by TYPE III AUDIO.
"Planning a high-impact career: a summary of everything you need to know in 7 points" by Benjamin Todd
---
client: 80000_hours
project_id: articles
narrator: pw
qa: km
narrator_time: 1h0m
qa_time: 0h30m
---

We took 10 years of research and what we’ve learned from advising more than 1,000 people on how to build high-impact careers, compressed that into an eight-week course to create your career plan, and then compressed that into this summary of the main points.

(It’s especially aimed at people who want a career that’s both satisfying and has a significant positive impact, but much of the advice applies to all career decisions.)

Original article: https://80000hours.org/career-planning/summary/

Narrated for 80,000 Hours by TYPE III AUDIO.
"This is your most important decision" by Benjamin Todd
---
client: 80000_hours
project_id: articles
narrator: pw
qa: km
narrator_time: 1h30m
qa_time: 20m
---

Why your career is your biggest opportunity to make a difference to the world.

When people think of living ethically, they most often think of things like recycling, fair trade, and volunteering. But that’s missing something huge: your choice of career.

We believe that what you do with your career is probably the most important ethical decision of your life.

The first reason is the huge amount of time at stake. You have about 80,000 hours in your career: 40 hours per week, 50 weeks per year, for 40 years. That’s more time than you’ll spend eating, socialising, and watching Netflix put together.

And it means (unless you happen to be the heir to a large estate) that time is the biggest resource you have to help others.

So if you can increase the overall impact of your career by just a tiny amount, it will likely do more good than changes you could make to other parts of your life.

Or, to look at it another way: it’s worth thinking a lot about how to make even just small improvements to your career. For instance, if you could increase the impact of your career by 1%, it would be worth spending up to 800 hours working out how to do that.

And that brings us to the second reason why your choice of career is so important: some careers give you the opportunity to do vastly more good for the world than others — to a much greater extent than people realise.

In fact, we’ll argue that some career paths open to you likely have 10, 100, or even 1,000 times more impact than others. And this makes it even more important to think hard about your career.

Why do careers differ so much in impact?

Original article: https://80000hours.org/make-a-difference-with-your-career/

Narrated for 80,000 Hours by TYPE III AUDIO.
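The arithmetic behind the two figures in that excerpt, written out:

$$40\ \tfrac{\text{hours}}{\text{week}} \times 50\ \tfrac{\text{weeks}}{\text{year}} \times 40\ \text{years} = 80{,}000\ \text{hours}, \qquad 1\% \times 80{,}000\ \text{hours} = 800\ \text{hours}.$$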
What are the most pressing world problems? Frequently asked questions
---
client: 80000_hours
project_id: articles
narrator: pw
qa: km
narrator_time: 3h15m
qa_time: 0h30m
---

Question 1: What are these lists based on?

Our aim is to find the problems where an additional person can have the greatest positive social impact, given how effort is already allocated in society. The primary way we do that is by trying to compare global issues based on their scale, neglectedness, and tractability. To learn about this framework, see our introductory article on prioritising world problems.

To assess the problems based on this framework, we mainly draw upon research and advice from subject-matter experts and advisors in the effective altruism research community — including the Global Priorities Institute, Rethink Priorities, and Open Philanthropy — though we also make some of our own judgement calls in borderline cases.

To see the reasons why we listed each individual problem, click through to see the full profiles.

Assessments of the scale and tractability of different global issues depend on your values and worldview. You can see some of the most important aspects of our worldview in the ‘foundations’ section of our key ideas series, especially our article on how we define social impact.

All this has led to a few themes in the issues we tend to prioritise most highly:

Original article: https://80000hours.org/problem-profiles/#problems-faq

Narrated for 80,000 Hours by TYPE III AUDIO.
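As a rough guide to how the three factors in that framework combine, 80,000 Hours' introductory article presents them multiplicatively, with units chosen so that they cancel (paraphrased from memory; treat the exact formulation as approximate):

$$\frac{\text{good done}}{\text{extra person}} = \underbrace{\frac{\text{good done}}{\%\ \text{of problem solved}}}_{\text{scale}} \times \underbrace{\frac{\%\ \text{of problem solved}}{\%\ \text{increase in resources}}}_{\text{tractability}} \times \underbrace{\frac{\%\ \text{increase in resources}}{\text{extra person}}}_{\text{neglectedness}}$$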
"The Welfare Range Table" by Bob Fischer
---
client: ea_forum
project_id: curated
narrator: pw
qa: mds
narrator_time: 1h50m
qa_time: 45m
---

Key Takeaways

- Our objective: estimate the welfare ranges of 11 farmed species.
- Given hedonism, an individual’s welfare range is the difference between the welfare level associated with the most intense positively valenced state that the individual can realize and the welfare level associated with the most intense negatively valenced state that the individual can realize.
- Given some prominent theories about the functions of valenced states, we identified over 90 empirical proxies that might provide evidence of variation in the potential intensities of those states.
- There are many unknowns across many species.
- It’s rare to have evidence that animals lack a given trait.
- We know less about the presence or absence of traits as we move from terrestrial vertebrates to most invertebrates.
- Many of the traits about which we know the least are affective traits.
- We do have information about some significant traits for many animals.

Original article: https://forum.effectivealtruism.org/posts/tnSg6o7crcHFLc395/the-welfare-range-table

Narrated for the Effective Altruism Forum by TYPE III AUDIO.
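The welfare-range definition above compresses naturally into a formula. In notation of my own (not the post's): writing $W^{+}_{\max}(i)$ for the welfare level of the most intense positively valenced state individual $i$ can realize, and $W^{-}_{\max}(i)$ for that of the most intense negatively valenced state,

$$\mathrm{WelfareRange}(i) = W^{+}_{\max}(i) - W^{-}_{\max}(i).$$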
"Some observations from an EA-adjacent charitable effort" by Patrick McKenzie
---
client: ea_forum
project_id: curated
narrator: pw
qa: km
narrator_time: 1h30m
qa_time: 25m
---

Hiya folks! I'm Patrick McKenzie, better known on the Internets as patio11. (Proof.) Long-time-listener, first-time-caller; I don't think I would consider myself an EA but I've been reading y'all, and adjacent intellectual spaces, for some time now.

Epistemic status: Arbitrarily high confidence with regards to facts of the VaccinateCA experience (though speaking only for myself), moderately high confidence with respect to inferences made about vaccine policy and mechanisms for impact last year, one geek's opinion with respect to implicit advice to you all going forward.

A Thing That Happened Last Year

As some of the California-based EAs may remember, the rollout of the covid-19 vaccines in California and across the U.S. was... not optimal. I accidentally ended up founding a charity, VaccinateCA, which ran the national shadow vaccine location information infrastructure for 6 months.

The core product at the start of the sprint, which some of you may be familiar with, was a site which listed places to get the vaccine in California, sourced by a volunteer-driven operation to conduct an ongoing census of medical providers by calling them. Importantly, that was not our primary vector for impact, though it was very important to our trajectory.

Original article: https://forum.effectivealtruism.org/posts/NkPghabDd54nkG3kX/some-observations-from-an-ea-adjacent-charitable-effort

Narrated for the Effective Altruism Forum by TYPE III AUDIO.
EA Forum Weekly Summaries – Episode 9 (Nov. 28 - Dec. 4, 2022)
---
client: ea_forum
project_id: summaries
narrator: cs
---

Original article: https://forum.effectivealtruism.org/posts/LdEPDqyZvucQkxhWH/ea-and-lw-forums-weekly-summary-28th-nov-4th-dec-22

This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.

Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).

Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.
"Why Neuron Counts Shouldn't Be Used as Proxies for Moral Weight" by Adam Shriver
---
client: ea_forum
project_id: curated
narrator: pw
qa: km
narrator_time: 2h
qa_time: 35m
---

Key Takeaways:

Several influential EAs have suggested using neuron counts as rough proxies for animals’ relative moral weights. We challenge this suggestion.

We take the following ideas to be the strongest reasons in favor of a neuron count proxy:

1. neuron counts are correlated with intelligence and intelligence is correlated with moral weight,
2. additional neurons result in “more consciousness” or “more valenced consciousness,” and
3. increasing numbers of neurons are required to reach thresholds of minimal information capacity required for morally relevant cognitive abilities.

However:

1. in regards to intelligence, we can question both the extent to which more neurons are correlated with intelligence and whether more intelligence in fact predicts greater moral weight;
2. many ways of arguing that more neurons result in more valenced consciousness seem incompatible with our current understanding of how the brain is likely to work; and
3. there is no straightforward empirical evidence or compelling conceptual arguments indicating that relative differences in neuron counts within or between species reliably predict welfare-relevant functional capacities.

Overall, we suggest that neuron counts should not be used as a sole proxy for moral weight, but cannot be dismissed entirely. Rather, neuron counts should be combined with other metrics in an overall weighted score that includes information about whether different species have welfare-relevant capacities.

Original article: https://forum.effectivealtruism.org/posts/Mfq7KxQRvkeLnJvoB/why-neuron-counts-shouldn-t-be-used-as-proxies-for-moral

This is a linkpost for https://docs.google.com/document/d/1p50vw84-ry2taYmyOIl4B91j7wkCurlB/edit?rtpof=true&sd=true

Narrated for the Effective Altruism Forum by TYPE III AUDIO.
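To make the final suggestion concrete, here is a toy version of such a weighted score. Every metric name, value, and weight below is invented for illustration; the post proposes no specific numbers:

```python
def moral_weight_score(metrics: dict[str, float],
                       weights: dict[str, float]) -> float:
    """Weighted sum of metrics, each pre-normalized to [0, 1]."""
    return sum(weights[k] * metrics[k] for k in weights)

# Hypothetical inputs: neuron count is one signal among several.
metrics = {"neuron_count": 0.02, "nociception": 1.0, "affective_behaviour": 0.8}
weights = {"neuron_count": 0.2, "nociception": 0.4, "affective_behaviour": 0.4}
print(moral_weight_score(metrics, weights))  # 0.724
```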
"Population ethics without axiology: A framework" by Lukas_Gloor
---
client: ea_forum
project_id: red_team
narrator: pw
qa: mds
narrator_time: 4h15m
qa_time: 2h
---

This post introduces a framework for thinking about population ethics: “population ethics without axiology.” In its last section, I sketch the implications of adopting my framework for evaluating the thesis of longtermism. Before explaining what’s different about my proposal, I’ll describe what I understand to be the standard approach it seeks to replace, which I call “axiology-focused.”

Original article: https://forum.effectivealtruism.org/posts/dQvDxDMyueLyydHw4/population-ethics-without-axiology-a-framework

Narrated for the Effective Altruism Forum by TYPE III AUDIO.
"Notes on effective altruism" by Michael Nielsen
---
client: ea_forum
project_id: red_team
narrator: pw
qa: mds
narrator_time: 4h14m
qa_time: 1h30m
---

Long and rough notes on Effective Altruism (EA). Written to help me get to the bottom of several questions: What do I like and think is important about EA? Why do I find the mindset so foreign? Why am I not an EA? And to start me thinking about: What do alternatives to EA look like? The notes are not aimed at effective altruists, though they may perhaps be of interest to EA-adjacent people. Thoughtful, informed comments and corrections welcome (especially detailed, specific corrections!) - see the comment area at the bottom.

"Using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis": that's the idea at the foundation of the Effective Altruism (EA) ideology and movement. Over the past two decades it has gone from being an idea batted about by a few moral philosophers to being a core part of the life philosophy of thousands or tens of thousands of people, including several of the world's most powerful and wealthy individuals. These are my rough working notes on EA. The notes are long and quickly written: disorganized rough thinking, not a polished essay.

Original article: https://michaelnotebook.com/eanotes/

Narrated for the Effective Altruism Forum by TYPE III AUDIO.
"How bad could a war get?" by Stephen Clare & Rani Martin
---
client: ea_forum
project_id: curated
narrator: pw
qa: mds
narrator_time: 4h
qa_time: 30m
---

In “How Likely is World War III?”, Stephen suggested the chance of an extinction-level war occurring sometime this century is just under 1%. This was a simple, rough estimate, made in the following steps:

1. Assume that wars, i.e. conflicts that cause at least 1,000 battle deaths, continue to break out at their historical average rate of about one every two years.
2. Assume that the distribution of battle deaths in wars follows a power law.
3. Use parameters for the power law distribution estimated by Bear Braumoeller in Only the Dead to calculate the chance that any given war escalates to 8 billion battle deaths.
4. Work out the likelihood of such a war given the expected number of wars between now and 2100.

Not everybody was convinced. I (Stephen) have to admit that some skepticism is justified. An extinction-level war would be 30-to-100 times larger than World War II, the most severe war humanity has experienced so far. Is it reasonable to just assume number go up? Would the same escalatory dynamics that shape smaller wars apply at this scale?

Original article: https://forum.effectivealtruism.org/posts/PyZCqLrDTJrQofEf7/how-bad-could-a-war-get

Narrated for the Effective Altruism Forum by TYPE III AUDIO.
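A back-of-the-envelope sketch of steps 2-4 in code. The exponent and war count below are illustrative placeholders, not Braumoeller's fitted parameters:

```python
def p_at_least(deaths: float, x_min: float = 1_000, alpha: float = 1.5) -> float:
    """Tail probability P(X >= deaths) for a power law with pdf ~ x**(-alpha),
    truncated below at x_min (the 1,000-battle-death war threshold)."""
    return (deaths / x_min) ** (1 - alpha)

p_one_war = p_at_least(8e9)          # chance a single war reaches 8 billion deaths
n_wars = (2100 - 2023) // 2          # ~one new war every two years until 2100
p_by_2100 = 1 - (1 - p_one_war) ** n_wars
print(f"{p_one_war:.2e} per war, {p_by_2100:.1%} by 2100")
```

With these made-up inputs the script lands near a 1% chance by 2100, which shows how a sub-1% headline figure can fall out of steps like these; the post's real dispute is over whether the power-law assumption extrapolates that far.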
"Are you really in a race? The cautionary tales of Szilárd and Ellsberg" by Haydn Belfield
---
client: ea_forum
project_id: red_team
feed_id: ai_safety
narrator: pw
qa: km
narrator_time: 3h30m
qa_time: 1h
---

In both the 1940s and 1950s, well-meaning and good people – the brightest of their generation – were convinced they were in an existential race with an expansionary, totalitarian regime. Because of this belief, they advocated for and participated in a ‘sprint’ race: the Manhattan Project to develop a US atomic bomb (1939-1945); and the ‘missile gap’ project to build up a US ICBM capability (1957-1962).

These were both based on a mistake, however - the Nazis decided against a Manhattan Project in 1942, and the Soviets decided against an ICBM build-up in 1958. The main consequence of both was to unilaterally speed up dangerous developments and increase existential risk. Key participants, such as Albert Einstein and Daniel Ellsberg, described their involvement as the greatest mistake of their lives.

Our current situation with AGI shares certain striking similarities, and certain lessons suggest themselves: make sure you’re actually in a race (information on whether you are is very valuable), be careful when secrecy is emphasised, and don’t give up your power as an expert too easily.

Original article: https://forum.effectivealtruism.org/posts/cXBznkfoPJAjacFoT/are-you-really-in-a-race-the-cautionary-tales-of-szilard-and

Narrated for the Effective Altruism Forum by TYPE III AUDIO.
"My take on What We Owe the Future" by Eli Lifland
---
client: ea_forum
project_id: curated
narrator: pw
qa: km
narrator_time: 4h
qa_time: 1h30m
---

What We Owe The Future (WWOTF) by Will MacAskill has recently been released with much fanfare. While I strongly agree that future people matter morally and we should act based on this, I think the book isn’t clear enough about MacAskill’s views on longtermist priorities, and to the extent it is, it presents a mistaken view of the most promising longtermist interventions.

I argue that MacAskill:

1. Underestimates risk of misaligned AI takeover.
2. Overestimates risk from stagnation.
3. Isn’t clear enough about longtermist priorities.

I highlight and expand on these disagreements in part to contribute to the debate on these topics, but also to make a practical recommendation.

Original article: https://forum.effectivealtruism.org/posts/9Y6Y6qoAigRC7A8eX/my-take-on-what-we-owe-the-future

Narrated for the Effective Altruism Forum by TYPE III AUDIO.
Climate change: Is climate change the greatest threat facing humanity today?
---
client: 80000_hours
project_id: articles
narrator: pw
qa: mds
narrator_time: 3h13m
qa_time: 2h
---

Could climate change lead to the end of civilisation?

Across the world, over half of young people worry that, as a result of climate change, humanity is doomed. They feel angry, powerless, and — above all — afraid about what the future may hold.

Climate change matters so much, to so many, not just because of the suffering and injustice it’s already causing, but also because it’s one of the few issues that has obvious potential to affect our world over many future generations. We think safeguarding future generations is a key moral priority, and should be a crucial consideration in prioritising problems on which to work.

If climate change could lead to the end of civilisation, then that would mean future generations might never get to exist – or they could live in a permanently worse world. If so, then preventing climate change, and adapting to its effects, might be more important than working on almost any other issue.

So – what does the science say?

The Intergovernmental Panel on Climate Change (IPCC) Sixth Assessment Report is, to our knowledge, the most authoritative and comprehensive source on climate change. The report is clear: climate change will be hugely destructive. We’ll see floods, famines, fires, and droughts — and the world’s poorest people will be affected the most.

But even when we try to account for unknown unknowns, nothing in the IPCC’s report suggests that civilisation will be destroyed.

This isn’t to say society shouldn’t do far more to tackle climate change.

Original article: https://80000hours.org/problem-profiles/climate-change/

Narrated for 80,000 Hours by TYPE III AUDIO.
"Effective altruism in the garden of ends" by tyleralterman
---
client: ea_forum
project_id: red_team
narrator: pw
qa: km
narrator_time: 3h30m
qa_time: 1h15m
---

This essay is a reconciliation of moral commitment and the good life. Here is its essence in two paragraphs:

Totalized by an ought, I sought its source outside myself. I found nothing. The ought came from me, an internal whip toward a thing which, confusingly, I already wanted – to see others flourish. I dropped the whip. My want now rested, commensurate, amidst others of its kind – terminal wants for ends-in-themselves: loving, dancing, and the other spiritual requirements of my particular life. To say that these were lesser seemed to say, “It is more vital and urgent to eat well than to drink or sleep well.” No – I will eat, sleep, and drink well to feel alive; so too will I love and dance as well as help.

Once, the material requirements of life were in competition: If we spent time building shelter it might jeopardize daylight that could have been spent hunting. We built communities to take the material requirements of life out of competition. For many of us, the task remains to do the same for our spirits. Particularly so for those working outside of organized religion on huge, consuming causes. I suggest such a community might practice something like “fractal altruism,” taking the good life at the scale of its individuals out of competition with impact at the scale of the world.

Original article: https://forum.effectivealtruism.org/posts/AjxqsDmhGiW9g8ju6/effective-altruism-in-the-garden-of-ends

Narrated for the Effective Altruism Forum by TYPE III AUDIO.
"Measuring Good Better" by Michael Plant, GiveWell, Jason Schukraft, Matt Lerner, and Innovations for Poverty Action
---
client: ea_forum
project_id: curated
---

Excerpt:

At EA Global: San Francisco 2022, the following organisations held a joint session to discuss their different approaches to measuring ‘good’:

- GiveWell
- Open Philanthropy
- Happier Lives Institute
- Founders Pledge
- Innovations for Poverty Action

A representative from each organisation gave a five-minute lightning talk summarising their approach before the audience broke out into table discussions.

Original article: https://forum.effectivealtruism.org/posts/8whqn2GrJfvTjhov6/measuring-good-better-1

Edited for the Effective Altruism Forum by TYPE III AUDIO.
"AGI and lock-in" by Lukas Finnveden, Jess Riedel, & Carl Shulman
---
client: ea_forum
project_id: curated
feed_id: ai, ai_safety, ai_safety__technical
narrator: pw
qa: km
narrator_time: 1h30m
qa_time: 35m
---

The long-term future of intelligent life is currently unpredictable and undetermined. In the linked document, we argue that the invention of artificial general intelligence (AGI) could change this by making extreme types of lock-in technologically feasible. In particular, we argue that AGI would make it technologically feasible to (i) perfectly preserve nuanced specifications of a wide variety of values or goals far into the future, and (ii) develop AGI-based institutions that would (with high probability) competently pursue any such values for at least millions, and plausibly trillions, of years.

The rest of this post contains the summary (6 pages), with links to relevant sections of the main document (40 pages) for readers who want more details.

Original article: https://forum.effectivealtruism.org/posts/KqCybin8rtfP3qztq/agi-and-lock-in

Narrated for the Effective Altruism Forum by TYPE III AUDIO.
"Counterarguments to the basic AI risk case" by Katja_Grace
---
client: ea_forum
project_id: curated
feed_id: ai, ai_safety, ai_safety__technical, ai_safety__governance
narrator: pw
qa: km
narrator_time: 5h
qa_time: 2h15m
---

This is going to be a list of holes I see in the basic argument for existential risk from superhuman AI systems. To start, here’s an outline of what I take to be the basic case:

I. If superhuman AI systems are built, any given system is likely to be ‘goal-directed’.
II. If goal-directed superhuman AI systems are built, their desired outcomes will probably be about as bad as an empty universe by human lights.
III. If most goal-directed superhuman AI systems have bad goals, the future will very likely be bad.

Original article: https://forum.effectivealtruism.org/posts/zoWypGfXLmYsDFivk/counterarguments-to-the-basic-ai-risk-case

Narrated for the Effective Altruism Forum by TYPE III AUDIO.
"Does economic growth meaningfully improve well-being? An optimistic re-analysis of Easterlin’s research: Founders Pledge" by Vadim Albinsky
---
client: ea_forum
project_id: curated
narrator: pw
qa: km
narrator_time: 2h15m
qa_time: 50m
---

Understanding the relationship between wellbeing and economic growth is a topic of key importance to Effective Altruism (e.g. see Hillebrandt and Hallstead, Clare and Goth). In particular, a key disagreement regards the Easterlin Paradox: the finding that happiness varies with income across countries and between individuals, but does not seem to vary significantly with a country’s income as it changes over time.

Michael Plant recently wrote an excellent post summarizing this research. He ends up mostly agreeing with Richard Easterlin’s latest paper arguing that the Easterlin Paradox still holds, suggesting that we should look to approaches other than economic growth to boost happiness.

I agree with Michael Plant that life satisfaction is a valid and reliable measure, that it should be a key goal of policy and philanthropy, and that boosting income does not increase it as much as we might naively expect. In fact, we at Founders Pledge highly value and regularly use Michael Plant’s and Happier Lives Institute’s (HLI) research; and we believe income is only a small part of what interventions should aim at. However, my interpretation of the practical implications of Easterlin’s research differs from Easterlin’s in three ways, which I argue in this post.

Original article: https://forum.effectivealtruism.org/posts/coryFCkmcMKdJb7Pz/does-economic-growth-meaningfully-improve-well-being-an

Narrated for the Effective Altruism Forum by TYPE III AUDIO.
"What happens on the average day?" by rosehadshar
---
client: ea_forum
project_id: curated
narrator: pw
qa: km
narrator_time: 1h20m
qa_time: 1h30m
---

I want to know what’s going on in the world. I’m a human; I’m interested in what other humans are up to; I value them, care about their triumphs and mourn their deaths.

But:

- There’s far too much going on for me to keep track of all of it.
- I think that some parts of what’s going on are likely far more important than others.
- I don’t think that regular news providers are picking the important bits to report on.

I would really like there to be a scope-sensitive news provider which was making a good faith attempt to report on the things which most matter in the world. But as far as I know, this doesn’t exist.

In the absence of such a provider, I’ve spent a small amount of time trying to find out some basic context on what happens in the world on the average day. I think of this as a bit like a cheat sheet: some information to have in the back of my mind when reading whatever regular news stories are coming at me, to ground me in something that feels a bit closer to what’s actually going on.

Original article: https://forum.effectivealtruism.org/posts/rXYW9GPsmwZYu3doX/what-happens-on-the-average-day

Narrated for the Effective Altruism Forum by TYPE III AUDIO.
“500 million, but not a single one more” by jai
---
client: ea_forum
project_id: curated
narrator: pw
qa: mds
narrator_time: 45m
qa_time: 15m
---

We will never know their names. The first victim could not have been recorded, for there was no written language to record it. They were someone’s daughter, or son, and someone’s friend, and they were loved by those around them. And they were in pain, covered in rashes, confused, scared, not knowing why this was happening to them or what they could do about it — victims of a mad, inhuman god. There was nothing to be done — humanity was not strong enough, not aware enough, not knowledgeable enough, to fight back against a monster that could not be seen.

Original article: https://forum.effectivealtruism.org/posts/jk7A3NMdbxp65kcJJ/500-million-but-not-a-single-one-more

Narrated for the Effective Altruism Forum by TYPE III AUDIO.
"Biological Anchors external review" by Jennifer Lin
---
client: ea_forum
project_id: red_team
feed_id: ai, ai_safety, ai_safety__forecasting
narrator: pw
qa: mds
narrator_time: 3h15m
qa_time: 1h30m
---

In this note I’ll summarize the bio-anchors report, describe my initial reactions to it, and take a closer look at two disagreements that I have with background assumptions used by (readers of) the report.

The report attempts to forecast the year when the amount of compute required to train a transformative AI (TAI) model will first become available: that is, the year when a forecast of the compute required to train TAI intersects a forecast of the compute available for a single project’s training run.

Original article: https://docs.google.com/document/d/1_GqOrCo29qKly1z48-mR86IV7TUDfzaEXxD3lGFQ8Wk/edit#

Narrated for the Effective Altruism Forum by TYPE III AUDIO.
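In code, the forecasting logic described above is an intersection search. The two curves here are hypothetical stand-ins, not the report's estimates: the requirement falls with algorithmic progress, while the affordable budget grows with hardware price-performance and spending:

```python
def tai_year(required, available, years=range(2023, 2101)):
    """First year in which affordable training compute meets the TAI requirement."""
    for y in years:
        if available(y) >= required(y):
            return y
    return None  # no intersection before 2100

required = lambda y: 1e36 / 2 ** ((y - 2023) / 2)     # FLOP needed; halves every ~2 years
available = lambda y: 1e24 * 2 ** ((y - 2023) / 1.5)  # FLOP affordable; doubles every ~1.5 years
print(tai_year(required, available))  # 2058 under these made-up curves
```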
"Case for emergency response teams" by Gavin, Jan_Kulveit
---
client: ea_forum
project_id: curated
narrator: pw
qa: km
narrator_time: 1h
qa_time: 30m
---

So far, long-termist efforts to change the trajectory of the world focus on far-off events. This is on the assumption that we foresee some important problem and influence its outcome by working on the problem for longer. We thus start working on it sooner than others, we lay the groundwork for future research, we raise awareness, and so on.

Many longtermists propose that we now live at the “hinge of history”, usually understood on the timescale of critical centuries, or critical decades. But “hinginess” is likely not constant: some short periods will be significantly more eventful than others. It is also possible that these periods will present even more leveraged opportunities for changing the world’s trajectory.

These “maximally hingey” moments might be best influenced by sustained efforts long before them (as described above). But it seems plausible that in many cases, the best realistic chance to influence them is “while they are happening”, via a concentrated effort at that moment.

Original article: https://forum.effectivealtruism.org/posts/sgcxDwyD2KL6BHH2C/case-for-emergency-response-teams

Narrated for the Effective Altruism Forum by TYPE III AUDIO.
"What matters to shrimps? Factors affecting shrimp welfare in aquaculture" by Lucas Lewit-Mendes & Aaron Boddy
---
client: ea_forum
project_id: curated
narrator: pw
qa: mds
narrator_time: 4h15m
qa_time: 1h30m
---

Shrimp Welfare Project produced this report to guide our decision making on funding for further research into shrimp welfare, and on which interventions to allocate our resources to. We are cross-posting this on the forum because we think it may be useful to share the complexity of understanding the needs of beneficiaries who cannot communicate with us. We also hope it will be useful for other organisations working on shrimp welfare, and hopefully it’s an interesting read!

Original article: https://forum.effectivealtruism.org/posts/nGrmemHzQvBpnXkNX/what-matters-to-shrimps-factors-affecting-shrimp-welfare-in

Narrated for the Effective Altruism Forum by TYPE III AUDIO.
Summary: what makes for a high-impact career?
---
narrator_time: 1h
editing_time: 0
narrator: pw
editor: pw
qa: mds
client: 80000_hours
project_id: 80000_hours
---

Just the bottom lines from our key ideas series.

https://80000hours.org/key-ideas/summary/

TLDR: Get good at something that lets you effectively contribute to big and neglected global problems.

What ultimately makes for an impactful career? You can have more positive impact over the course of your career by aiming to:

Help solve a more pressing problem. Many global issues should get more attention, but as individuals we should look for the biggest gaps in existing efforts. To do that, you can compare issues in terms of scale, neglectedness, and tractability. It turns out that some issues receive hundreds of times less attention relative to how big and solvable they seem. This means which issue you choose to work on is likely the biggest driver of your impact. In particular, our generation may see the rise of transformative technologies, which could lead to existential risks and make now a crucial moment that could affect the future for many generations to come — but our current institutions are doing little to address these issues. We have a list of global issues we think are particularly pressing for more people to work on right now.
"Lessons learned from talking to >100 academics about AI safety" by Marius Hobbhahn
---
client: lesswrong
project_id: curated
feed_id: ai, ai_safety
narrator: pw
qa: km
narrator_time: 2h20m
qa_time: 45m
---

Excerpt:

During my Master's and Ph.D. (still ongoing), I have spoken with many academics about AI safety. These conversations include chats with individual PhDs, poster presentations, and talks about AI safety. I think I have learned a lot from these conversations and expect many other people concerned about AI safety to find themselves in similar situations. Therefore, I want to detail some of my lessons and make my thoughts explicit so that others can scrutinize them.

TL;DR: People in academia seem more and more open to arguments about risks from advanced intelligence over time, and I would genuinely recommend having lots of these chats. Furthermore, I underestimated how much work related to some aspects of AI safety already exists in academia, and that we sometimes reinvent the wheel. Messaging matters: e.g. technical discussions got more interest than alarmism, and explaining the problem rather than trying to actively convince someone received better feedback.

Original article: https://www.lesswrong.com/posts/SqjQFhn5KTarfW8v7/lessons-learned-from-talking-to-greater-than-100-academics

Narrated for LessWrong by TYPE III AUDIO.
China-related AI safety and governance paths
---
feed_id: ai, ai_safety, ai_safety__governance
narrator_time: 4h
narrator: pw
editor: pw
qa: mds
client: 80000_hours
project_id: 80000_hours
---

https://80000hours.org/career-reviews/china-related-ai-safety-and-governance-paths/#strong-networking-abilities-especially-for-policy-roles

Expertise in China and its relations with the world might be critical in tackling some of the world’s most pressing problems. In particular, China’s relationship with the US is arguably the most important bilateral relationship in the world, with these two countries collectively accounting for over 40% of global GDP. These considerations led us to publish a guide to improving China–Western coordination on global catastrophic risks and other key problems in 2018. Since then, we have seen an increase in the number of people exploring this area.

China is one of the most important countries developing and shaping advanced artificial intelligence (AI). The Chinese government’s spending on AI research and development is estimated to be on the same order of magnitude as that of the US government, and China’s AI research is prominent on the world stage and growing.

Because of the importance of AI from the perspective of improving the long-run trajectory of the world, we think relations between China and the US on AI could be among the most important aspects of their relationship. Insofar as the EU and/or UK influence advanced AI development through labs based in their countries or through their influence on global regulation, the state of understanding and coordination between European and Chinese actors on AI safety and governance could also be significant.

That, in short, is why we think working on AI safety and governance in China and/or building mutual understanding between Chinese and Western actors in these areas is likely to be one of the most promising China-related career paths. Below we provide more arguments and detailed information on this option.

If you are interested in pursuing a career path described in this profile, contact 80,000 Hours’ one-on-one team and we may be able to put you in touch with a specialist advisor.
Start here: Why we're here and how we can help
---
client: 80000_hours
project_id: articles
narrator: pw
qa: mds
narrator_time: 2h
qa_time: 25m
---

Excerpt:

You have 80,000 hours in your career: 40 hours per week, 50 weeks per year, for 40 years.

That’s a huge amount of time. And it means that your career is not only a major driver of your happiness — it’s probably also your biggest opportunity to have a positive impact on the world.

So how can you best spend those hours?

We’re a nonprofit that aims to help you answer this question, and in this article we’ll explain how we can help.

Original article: https://80000hours.org/start-here/

Narrated for 80,000 Hours by TYPE III AUDIO.
EA Forum Weekly Summaries – Episode 8 (Nov. 7 - 13, 2022)
---
client: ea_forum
project_id: summaries
narrator: cs
---

Original article: https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN/p/cwa5m5pJQh857GE7C

This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.

Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).

Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.
LessWrong: "How my team at Lightcone sometimes gets stuff done" by jacobjacob
---
narrator_time: 1h30m
editing_time:
narrator: pw
editor: pw
qa: mds
client: lesswrong
project_id: curated
---

https://www.lesswrong.com/posts/6LzKRP88mhL9NKNrS/how-my-team-at-lightcone-sometimes-gets-stuff-done

Disclaimer: I originally wrote this as a private doc for the Lightcone team. I then showed it to John and he said he would pay me to post it here. That sounded awfully compelling. However, I wanted to note that I’m an early founder who hasn't built anything truly great yet. I’m writing this doc because as Lightcone is growing, I have to take a stance on these questions. I need to design our org to handle more people. Still, I haven’t seen the results long-term, and who knows if this is good advice. Don’t overinterpret this.

Suppose you went up on stage in front of a company you founded, that now had grown to 100, or 1,000, or 10,000+ people. You were going to give a talk about your company values. You can say things like “We care about moving fast, taking responsibility, and being creative” -- but I expect these words would mostly fall flat. At the end of the day, the path the water takes down the hill is determined by the shape of the territory, not the sound the water makes as it swooshes by.

To manage that many people, it seems to me you need clear, concrete instructions. What are those? What are things you could write down on a piece of paper and pass along your chain of command, such that if at the end people go ahead and just implement them, without asking what you meant, they would still preserve some chunk of what makes your org work?
LessWrong: "Decision theory does not imply that we get to have nice things" by So8res
---
narrator_time: 2h30m
editing_time:
narrator: pw
editor: pw
qa: mds
client: lesswrong
project_id: curated
---

https://www.lesswrong.com/posts/rP66bz34crvDudzcJ/decision-theory-does-not-imply-that-we-get-to-have-nice

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

(Note: I wrote this with editing help from Rob and Eliezer. Eliezer's responsible for a few of the paragraphs.)

A common confusion I see in the tiny fragment of the world that knows about logical decision theory (FDT/UDT/etc.) is that people think LDT agents are genial and friendly for each other.[1]

One recent example is Will Eden’s tweet about how maybe a molecular paperclip/squiggle maximizer would leave humanity a few stars/galaxies/whatever on game-theoretic grounds. (And that's just one example; I hear this suggestion bandied around pretty often.)

I'm pretty confident that this view is wrong (alas), and based on a misunderstanding of LDT. I shall now attempt to clear up that confusion.

To begin, a parable: the entity Omicron (Omega's little sister) fills box A with $1M and box B with $1k, and puts them both in front of an LDT agent saying "You may choose to take either one or both, and know that I have already chosen whether to fill the first box". The LDT agent takes both.

"What?" cries the CDT agent. "I thought LDT agents one-box!"

LDT agents don't cooperate because they like cooperating. They don't one-box because the name of the action starts with an 'o'. They maximize utility, using counterfactuals that assert that the world they are already in (and the observations they have already seen) can (in the right circumstances) depend (in a relevant way) on what they are later going to do.

A paperclipper cooperates with other LDT agents on a one-shot prisoner's dilemma because they get more paperclips that way. Not because it has a primitive property of cooperativeness-with-similar-beings. It needs to get more paperclips.
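A toy payoff table for the parable, using its $1M and $1k amounts. The boolean flag is a crude stand-in for the subjunctive dependence LDT actually reasons about, so read this as an illustration, not a formalization of LDT:

```python
def payoff(one_box: bool, contents_depend_on_policy: bool) -> int:
    """Box A holds $1M iff it ends up filled; box B always holds $1k."""
    box_a = 1_000_000 if (not contents_depend_on_policy) or one_box else 0
    box_b = 0 if one_box else 1_000
    return box_a + box_b

# Omicron: contents already fixed regardless of policy -> two-boxing wins.
# Omega-style predictor: contents track your policy -> one-boxing wins.
for predictor, dep in (("Omicron", False), ("Omega", True)):
    print(predictor, {"one-box": payoff(True, dep), "two-box": payoff(False, dep)})
```

The same utility-maximizing rule picks different actions in the two cases, which is the post's point: the behaviour falls out of what the world counterfactually depends on, not out of a disposition to cooperate or one-box.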
"What 2026 looks like" by Daniel Kokotajlo
---
client: lesswrong
project_id: curated
feed_id: ai, ai_safety, ai_safety__forecasting
narrator_time: 2h30m
editing_time: 0
narrator: pw
editor: pw
qa: km
---

https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like#2022

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

This was written for the Vignettes Workshop.[1] The goal is to write out a detailed future history (“trajectory”) that is as realistic (to me) as I can currently manage, i.e. I’m not aware of any alternative trajectory that is similarly detailed and clearly more plausible to me. The methodology is roughly: Write a future history of 2022. Condition on it, and write a future history of 2023. Repeat for 2024, 2025, etc. (I'm posting 2022-2026 now so I can get feedback that will help me write 2027+. I intend to keep writing until the story reaches singularity/extinction/utopia/etc.)

What’s the point of doing this? Well, there are a couple of reasons:

- Sometimes attempting to write down a concrete example causes you to learn things, e.g. that a possibility is more or less plausible than you thought.
- Most serious conversation about the future takes place at a high level of abstraction, talking about e.g. GDP acceleration, timelines until TAI is affordable, multipolar vs. unipolar takeoff… vignettes are a neglected complementary approach worth exploring.
- Most stories are written backwards. The author begins with some idea of how it will end, and arranges the story to achieve that ending. Reality, by contrast, proceeds from past to future. It isn’t trying to entertain anyone or prove a point in an argument.
- Anecdotally, various people seem to have found Paul Christiano’s “tales of doom” stories helpful, and relative to typical discussions those stories are quite close to what we want. (I still think a bit more detail would be good — e.g. Paul’s stories don’t give dates, or durations, or any numbers at all really.)[2]

“I want someone to ... write a trajectory for how AI goes down, that is really specific about what the world GDP is in every one of the years from now until insane intelligence explosion. And just write down what the world is like in each of those years because I don't know how to write an internally consistent, plausible trajectory. I don't know how to write even one of those for anything except a ridiculously fast takeoff.” --Buck Shlegeris
EA Forum Weekly Summaries – Episode 7 (Oct. 31 - Nov. 6, 2022)
---
client: ea_forum
project_id: summaries
narrator: cs
---

Original article: https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN/p/tm3RMfxetLsmcwftQ

This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.

Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).

Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.
80,000 Hours: Preventing Catastrophic Pandemics
---
client: 80000_hours
project_id: articles
narrator: pw
---

https://80000hours.org/problem-profiles/preventing-catastrophic-pandemics/