
TYPE III AUDIO (All episodes)
167 episodes — Page 2 of 4
[Week 3] "The alignment problem from a deep learning perspective" (Sections 2, 3 and 4) by Richard Ngo, Lawrence Chan & Sören Mindermann
---
client: agi_sf
project_id: core_readings
feed_id: agi_sf__alignment
narrator: pw
qa: mds
qa_time: 1h00m
---
Within the coming decades, artificial general intelligence (AGI) may surpass human capabilities at a wide range of important tasks. We outline a case for expecting that, without substantial effort to prevent it, AGIs could learn to pursue goals which are undesirable (i.e. misaligned) from a human perspective. We argue that if AGIs are trained in ways similar to today's most capable models, they could learn to act deceptively to receive higher reward, learn internally-represented goals which generalize beyond their training distributions, and pursue those goals using power-seeking strategies. We outline how the deployment of misaligned AGIs might irreversibly undermine human control over the world, and briefly review research directions aimed at preventing this outcome.

Original article:
https://arxiv.org/abs/2209.00626

Authors: Richard Ngo, Lawrence Chan, Sören Mindermann

This article is featured on the AGI Safety Fundamentals: Alignment course curriculum.

Narrated by TYPE III AUDIO on behalf of BlueDot Impact.

Share feedback on this narration.
"Existential risk, AI, and the inevitable turn in human history" by Tyler Cowen
---
client: t3a
feed_id: ai ai_safety
---
In several of my books and many of my talks, I take great care to spell out just how special recent times have been, for most Americans at least. For my entire life, and a bit more, there have been two essential features of the basic landscape:

1. American hegemony over much of the world, and relative physical safety for Americans.
2. An absence of truly radical technological change.

Unless you are very old, old enough to have taken in some of WWII, or were drafted into Korea or Vietnam, probably those features describe your entire life as well. In other words, virtually all of us have been living in a bubble “outside of history”.

Now, circa 2023, at least one of those assumptions is going to unravel, namely #2. AI represents a truly major, transformational technological advance. Biomedicine might too, but for this post I’ll stick to the AI topic, as I wish to consider existential risk.

Original article:
https://marginalrevolution.com/marginalrevolution/2023/03/existential-risk-and-the-turn-in-human-history.html

Narrated by TYPE III AUDIO.

Share feedback on this narration.
"How much should governments pay to prevent catastrophes? Longtermism’s limited role" by Elliott Thornley and Carl Shulman
---
client: ea_forum
project_id: curated
feed_id: ai ai_safety ai_safety__governance
narrator: pw
qa: mds
narrator_time: 7h00m
qa_time: 2h00m
---
Longtermists have argued that humanity should significantly increase its efforts to prevent catastrophes like nuclear wars, pandemics, and AI disasters. But one prominent longtermist argument overshoots this conclusion: the argument also implies that humanity should reduce the risk of existential catastrophe even at extreme cost to the present generation. This overshoot means that democratic governments cannot use the longtermist argument to guide their catastrophe policy. In this paper, we show that the case for preventing catastrophe does not depend on longtermism. Standard cost-benefit analysis implies that governments should spend much more on reducing catastrophic risk. We argue that a government catastrophe policy guided by cost-benefit analysis should be the goal of longtermists in the political sphere. This policy would be democratically acceptable, and it would reduce existential risk by almost as much as a strong longtermist policy.

Original article:
https://forum.effectivealtruism.org/posts/DiGL5FuLgWActPBsf/how-much-should-governments-pay-to-prevent-catastrophes

Narrated for the Effective Altruism Forum by TYPE III AUDIO.

Share feedback on this narration.
"Salt, Sugar, Water, Zinc: How Scientists Learned to Treat the 20th Century’s Biggest Killer of Children" by Matt Reynolds
---
client: ea_forum
project_id: curated
narrator: pw
qa: km
qa_time: 0h45m
---
Oral rehydration therapy is now the standard treatment for dehydration. It’s saved millions of lives, and can be prepared at home in minutes. So why did it take so long to discover?

Written by Matt Reynolds for Asterisk Magazine.

Original article:
https://asteriskmag.com/issues/2/salt-sugar-water-zinc-how-scientists-learned-to-treat-the-20th-century-s-biggest-killer-of-children

Narrated for the Effective Altruism Forum by TYPE III AUDIO.

Share feedback on this narration.
"More information about the dangerous capability evaluations we did with GPT-4 and Claude." by Beth Barnes
---
client: lesswrong
project_id: curated
feed_id: ai, ai_safety, ai_safety__technical
narrator: pw
qa: mds
qa_time: 0h30m
---
This is a linkpost for https://evals.alignment.org/blog/2023-03-18-update-on-recent-evals/

[Written for more of a general-public audience than alignment-forum audience. We're working on a more thorough technical report.]

We believe that capable enough AI systems could pose very large risks to the world. We don’t think today’s systems are capable enough to pose these sorts of risks, but we think that this situation could change quickly and it’s important to be monitoring the risks consistently. Because of this, ARC is partnering with leading AI labs such as Anthropic and OpenAI as a third-party evaluator to assess potentially dangerous capabilities of today’s state-of-the-art ML models. The dangerous capability we are focusing on is the ability to autonomously gain resources and evade human oversight.

We attempt to elicit models’ capabilities in a controlled environment, with researchers in-the-loop for anything that could be dangerous, to understand what might go wrong before models are deployed. We think that future highly capable models should involve similar “red team” evaluations for dangerous capabilities before the models are deployed or scaled up, and we hope more teams building cutting-edge ML systems will adopt this approach. The testing we’ve done so far is insufficient for many reasons, but we hope that the rigor of evaluations will scale up as AI systems become more capable.

Original article:
https://www.lesswrong.com/posts/4Gt42jX7RiaNaxCwP/more-information-about-the-dangerous-capability-evaluations

Narrated for LessWrong by TYPE III AUDIO.

Share feedback on this narration.
""Carefully Bootstrapped Alignment" is organizationally hard" by Raemon
---
client: lesswrong
project_id: curated
feed_id: ai_safety ai_safety__governance
narrator: pw
qa: mds
qa_time: 0h30m
---
In addition to technical challenges, plans to safely develop AI face lots of organizational challenges. If you're running an AI lab, you need a concrete plan for handling that.

In this post, I'll explore some of those issues, using one particular AI plan as an example. I first heard this described by Buck at EA Global London, and more recently with OpenAI's alignment plan. (I think Anthropic's plan has a fairly different ontology, although it still ultimately routes through a similar set of difficulties.)

I'd call the cluster of plans similar to this "Carefully Bootstrapped Alignment."

Original article:
https://www.lesswrong.com/posts/thkAtqoQwN6DtaiGT/carefully-bootstrapped-alignment-is-organizationally-hard

Narrated for LessWrong by TYPE III AUDIO.

Share feedback on this narration.
"How to navigate the AI apocalypse as a sane person" by Erik Hoel
---
client: t3a
project_id: ai_safety
feed_id: ai ai_safety
narrator: pw
qa: km
---
Last week, The New York Times published the transcript of a conversation with Microsoft’s Bing (AKA Sydney) wherein over the course of a long chat the next-gen AI tried, very consistently, and without any prompting to do so, to break up the reporter’s marriage and to emotionally manipulate him in every way possible. I had been up late the night before researching reports coming out of similar phenomena as Sydney threatened and cajoled users across the globe, later arguing that same day in “I am Bing, and I am evil” that it was time to panic about AI safety. Like many, while I knew that current AIs were capable of these acts, what I didn’t expect was Microsoft to release one that was so obviously unhinged and yet, at the same time, so creepily convincing and intelligent.

Original article:
https://erikhoel.substack.com/p/how-to-navigate-the-ai-apocalypse

Narrated for Erik Hoel by TYPE III AUDIO.

Share feedback on this narration.
EA Forum Weekly Summaries – Episode 20 (March 6-12, 2023)
---
client: ea_forum
project_id: summaries
narrator: cs
---
Original article:
https://forum.effectivealtruism.org/posts/fWGdsWbS6vtC9E7ii/ea-and-lw-forum-weekly-summary-6th-12th-march-2023

This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.

Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).

Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.

Share feedback on this narration.
"The Parable of the King and the Random Process" by moridinamael
---
client: lesswrong
project_id: curated
narrator: pw
qa: km
narrator_time: 1h20m
qa_time: 0h10m
---
~ A Parable of Forecasting Under Model Uncertainty ~

You, the monarch, need to know when the rainy season will begin, in order to properly time the planting of the crops. You have two advisors, Pronto and Eternidad, who you trust exactly equally.

You ask them both: "When will the next heavy rain occur?"

Pronto says, "Three weeks from today."

Eternidad says, "Ten years from today."

Original article:
https://www.lesswrong.com/posts/LzQtrHSYDafXynofq/the-parable-of-the-king-and-the-random-process#

Narrated for LessWrong by TYPE III AUDIO.

Share feedback on this narration.
"Enemies vs Malefactors" by Nate Soares
---
client: lesswrong
project_id: curated
narrator: pw
qa: km
narrator_time: 1h15m
qa_time: 0h15m
---
Status: some mix of common wisdom (that bears repeating in our particular context), and another deeper point that I mostly failed to communicate.

Short version:

Harmful people often lack explicit malicious intent. It’s worth deploying your social or community defenses against them anyway. I recommend focusing less on intent and more on patterns of harm.

(Credit for my explicit articulation of this idea goes in large part to Aella, and also in part to Oliver Habryka.)

Original article:
https://www.lesswrong.com/posts/zidQmfFhMgwFzcHhs/enemies-vs-malefactors

Narrated for LessWrong by TYPE III AUDIO.

Share feedback on this narration.
EA Forum Weekly Summaries – Episode 19 (Feb. 27 - Mar. 5, 2023)
---
client: ea_forum
project_id: summaries
narrator: cs
---
Original article:
https://forum.effectivealtruism.org/posts/yCxsz9jk5iau2uvYH/ea-and-lw-forum-weekly-summary-27th-feb-5th-mar-2023

This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.

Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).

Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.

Share feedback on this narration.
"The Waluigi Effect (mega-post)" by Cleo Nardo
---
client: lesswrong
project_id: curated
feed_id: ai, ai_safety, ai_safety__technical
narrator: pw
qa: km
narrator_time: 3h30m
qa_time: 0h50m
---
In this article, I will present a mechanistic explanation of the Waluigi Effect and other bizarre "semiotic" phenomena which arise within large language models such as GPT-3/3.5/4 and their variants (ChatGPT, Sydney, etc). This article will be folklorish to some readers, and profoundly novel to others.

Original article:
https://www.lesswrong.com/posts/D7PumeYTDPfBTp3i7/the-waluigi-effect-mega-post

Narrated for LessWrong by TYPE III AUDIO.

Share feedback on this narration.
"Acausal normalcy" by Andrew Critch
---
client: lesswrong
project_id: curated
narrator: pw
qa: km
narrator_time: 1h30m
qa_time: 0h20m
---
Summary: Having thought a bunch about acausal trade — and proven some theorems relevant to its feasibility — I believe there do not exist powerful information hazards about it that stand up to clear and circumspect reasoning about the topic. I say this to be comforting rather than dismissive; if it sounds dismissive, I apologize.

With that said, I have four aims in writing this post:

1. Dispelling myths. There are some ill-conceived myths about acausal trade that I aim to dispel with this post. Alternatively, I will argue for something I'll call acausal normalcy as a more dominant decision-relevant consideration than one-on-one acausal trades.
2. Highlighting normalcy. I'll provide some arguments that acausal normalcy is more similar to human normalcy than any particular acausal trade is to human trade, such that the topic of acausal normalcy is — conveniently — also less culturally destabilizing than (erroneous) preoccupations with 1:1 acausal trades.
3. Affirming AI safety as a straightforward priority. I'll argue that for most real-world-prevalent perspectives on AI alignment, safety, and existential safety, acausal considerations are not particularly dominant, except insofar as they push a bit further towards certain broadly agreeable human values applicable in the normal-everyday-human-world, such as nonviolence, cooperation, diversity, honesty, integrity, charity, and mercy. In particular, I do not think acausal normalcy provides a solution to existential safety, nor does it undermine the importance of existential safety in some surprising way.
4. Affirming normal human kindness. I also think reflecting on acausal normalcy can lead to increased appreciation for normal notions of human kindness, which could lead us all to treat each other a bit better. This is something I wholeheartedly endorse.

Original article:
https://www.lesswrong.com/posts/3RSq3bfnzuL3sp46J/acausal-normalcy

Narrated for LessWrong by TYPE III AUDIO.

Share feedback on this narration.
"Scoring forecasts from the 2016 “Expert Survey on Progress in AI”" by PatrickL
---
client: ea_forum
project_id: curated
feed_id: ai, ai_safety, ai_safety__forecasting
narrator: pw
qa: mds
narrator_time: 1h30m
qa_time: 0h30m
---
This document looks at the predictions made by AI experts in The 2016 Expert Survey on Progress in AI, analyses the predictions on ‘Narrow tasks’, and gives a Brier score to the median of the experts’ predictions. My analysis suggests that the experts did a fairly good job of forecasting (Brier score = 0.21), and would have been less accurate if they had predicted each development in AI to generally come, by a factor of 1.5, later (Brier score = 0.26) or sooner (Brier score = 0.29) than they actually predicted.

Original article:
https://forum.effectivealtruism.org/posts/tCkBsT6cAw6LEKAbm/scoring-forecasts-from-the-2016-expert-survey-on-progress-in

Narrated for the Effective Altruism Forum by TYPE III AUDIO.

Share feedback on this narration.
"Why I don’t agree with HLI’s estimate of household spillovers from therapy" by James Snowden
---
client: ea_forum
project_id: curated
narrator: pw
qa: mds
narrator_time: 2h00m
qa_time: 1h00m
---
I don’t think the existing evidence justifies HLI's estimate of 50% household spillovers. My main disagreements are:

Two of the three RCTs HLI relies on to estimate spillovers are on interventions specifically intended to benefit household members (unlike StrongMinds’ program, which targets women and adolescents living with depression). Those RCTs only measure the wellbeing of a subset of household members most likely to benefit from the intervention.

The results of the third RCT are inconsistent with HLI’s estimate.

Original article:
https://forum.effectivealtruism.org/posts/gr4epkwe5WoYJXF32/why-i-don-t-agree-with-hli-s-estimate-of-household

Narrated for the Effective Altruism Forum by TYPE III AUDIO.

Share feedback on this narration.
EA Forum Weekly Summaries – Episode 18 (Feb. 20-26, 2023)
---
client: ea_forum
project_id: summaries
narrator: cs
---
Original article:
https://forum.effectivealtruism.org/posts/bEJ6SyrkSF45B2LWZ/ea-and-lw-forum-weekly-summary-20th-26th-feb-2023

This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.

Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).

Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.

Share feedback on this narration.
Effektiver Altruismus: eine Einführung
---
client: german_ea
narrator: ur
---
Effective altruism is the search for the best ways of helping others, and the practice of putting them into action.

It is a project made up of two complementary parts: on the one hand, a field of research focused on identifying the world's most important problems and the best solutions to them; on the other, a community of people united by the aim of doing good on the basis of these findings.

This project matters a great deal because, while many attempts to do good fail, some are enormously effective. Some aid organisations, for instance, help 100 or even 1,000 times as many people as others with the same resources.

This means that we can tackle the world's most important problems far better if we think carefully about the best ways of solving them.

Share feedback on this narration.
"Why should ethical anti-realists do ethics?" by Joe Carlsmith
---
client: ea_forum
project_id: curated
narrator: not_t3a
---
Ethical philosophy often tries to systematize. That is, it seeks general principles that will explain, unify, and revise our more particular intuitions. And sometimes, this can lead to strange and uncomfortable places.

So why do it? If you believe in an objective ethical truth, you might talk about getting closer to that truth. But suppose that you don’t. Suppose you think that you’re “free to do whatever you want.” In that case, if “systematizing” starts getting tough and uncomfortable, why not just … stop? After all, you can always just do whatever’s most intuitive or common-sensical in a given case – and often, this is the choice the “ethics game” was trying so hard to validate, anyway. So why play?

I think it’s a reasonable question. And I’ve found it showing up in my life in various ways. So I wrote a set of two essays explaining part of my current take. This is the first essay. Here I describe the question in more detail, give some examples of where it shows up, and describe my dissatisfaction with two places anti-realists often look for answers.

Original article:
https://joecarlsmith.com/2023/02/16/why-should-ethical-anti-realists-do-ethics

Narrated by Joe Carlsmith and included on the Effective Altruism Forum by TYPE III AUDIO.

Share feedback on this narration.
EA Forum Weekly Summaries – Episode 17 (Feb. 6-19, 2023)
---
client: ea_forum
project_id: summaries
narrator: cs
---
Original article:
https://forum.effectivealtruism.org/posts/fAWotZTEnyycJnuxz/ea-and-lw-forum-weekly-summary-6th-19th-feb-2023

This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.

Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).

Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.

Share feedback on this narration.
"Why I No Longer Prioritize Wild Animal Welfare (edited)" by saulius
---
client: ea_forum
project_id: curated
narrator: pw
qa: km
---
This is the story of how I came to see Wild Animal Welfare (WAW) as a less promising cause than I did initially. I summarise three articles I wrote on WAW: ‘Why it’s difficult to find cost-effective WAW interventions we could do now’, ‘Lobbying governments to improve WAW’, and ‘WAW in the far future’. I then draw some more general conclusions. The articles assume some familiarity with WAW ideas. See here or here for an intro to WAW ideas.

Original article:
https://forum.effectivealtruism.org/posts/saEQXBgzmDbob9GdH/why-i-no-longer-prioritize-wild-animal-welfare

Narrated for the Effective Altruism Forum by TYPE III AUDIO.

Share feedback on this narration.
"CE: Announcing our 2023 Charity Ideas. Apply now!" by Steve Thompson & Charity Entrepreneurship
---
client: ea_forum
project_id: curated
narrator: pw
qa: mds
---
Apply now to start a nonprofit in Biosecurity or Large-Scale Global Health.

In this post we introduce our top five charity ideas for launch in 2023, in the areas of Biosecurity and Large-Scale Global Health. These are the result of five months’ work from our research team, and a six-stage iterative process that includes collaboration with partners and ideas from within and outside of the EA community.

We’re looking for people to launch these ideas through our July - August 2023 Incubation Program. The deadline for applications is March 12, 2023. [APPLY NOW]

We provide cost-covered two-month training, stipends, ongoing mentorship, and grants up to $200,000 per project. You can learn more on our website.

We also invite you to join our event on February 20, 6PM UK Time. Sam Hilton, our Director of Research, will introduce the ideas and answer your questions. Sign up here.

Original article:
https://forum.effectivealtruism.org/posts/xWRweQmmEKoLFwGyu/ce-announcing-our-2023-charity-ideas-apply-now-2

Narrated for the Effective Altruism Forum by TYPE III AUDIO.

Share feedback on this narration.
"Cyborgism" by Nicholas Kees & Janus
---
client: lesswrong
project_id: curated
feed_id: ai, ai_safety, ai_safety__technical, ai_safety__governance
narrator: pw
qa: mds
narrator_time: 5h30m
qa_time: 2h15m
---
There is a lot of disagreement and confusion about the feasibility and risks associated with automating alignment research. Some see it as the default path toward building aligned AI, while others expect limited benefit from near term systems, expecting the ability to significantly speed up progress to appear well after misalignment and deception. Furthermore, progress in this area may directly shorten timelines or enable the creation of dual purpose systems which significantly speed up capabilities research.

OpenAI recently released their alignment plan. It focuses heavily on outsourcing cognitive work to language models, transitioning us to a regime where humans mostly provide oversight to automated research assistants. While there have been a lot of objections to and concerns about this plan, there hasn’t been a strong alternative approach aiming to automate alignment research which also takes all of the many risks seriously.

The intention of this post is not to propose an end-all cure for the tricky problem of accelerating alignment using GPT models. Instead, the purpose is to explicitly put another point on the map of possible strategies, and to add nuance to the overall discussion.

Original article:
https://www.lesswrong.com/posts/bxt7uCiHam4QXrQAA/cyborgism

Narrated for LessWrong by TYPE III AUDIO.

Share feedback on this narration.
"Childhoods of exceptional people" by Henrik Karlsson
---
client: lesswrong
project_id: curated
narrator: pw
qa: mds
---
This is a linkpost for https://escapingflatland.substack.com/p/childhoods

Let’s start with one of those insights that are as obvious as they are easy to forget: if you want to master something, you should study the highest achievements of your field. If you want to learn writing, read great writers, etc.

But this is not what parents usually do when they think about how to educate their kids. The default for a parent is rather to imitate their peers and outsource the big decisions to bureaucracies. But what would we learn if we studied the highest achievements? Thinking about this question, I wrote down a list of twenty names—von Neumann, Tolstoy, Curie, Pascal, etc—selected on the highly scientific criteria “a random Swedish person can recall their name and think, Sounds like a genius to me”. That list is to me a good first approximation of what an exceptional result in the field of child-rearing looks like. I ordered a few piles of biographies, read, and took notes. Trying to be a little less biased in my sample, I asked myself if I could recall anyone exceptional that did not fit the patterns I saw in the biographies, which I could, and so I ordered a few more biographies.

This kept going for an unhealthy amount of time.

I sampled writers (Virginia Woolf, Lev Tolstoy), mathematicians (John von Neumann, Blaise Pascal, Alan Turing), philosophers (Bertrand Russell, René Descartes), and composers (Mozart, Bach), trying to get a diverse sample. In this essay, I am going to detail a few of the patterns that have struck me after having skimmed 42 biographies. I will sort the claims so that I start with more universal patterns and end with patterns that are less common.

Original article:
https://www.lesswrong.com/posts/CYN7swrefEss4e3Qe/childhoods-of-exceptional-people

Narrated for LessWrong by TYPE III AUDIO.

Share feedback on this narration.
"Is Power-Seeking AI an Existential Risk?" by Joseph Carlsmith
---
client: joe_carlsmith
project_id:
feed_id: ai, ai_safety, ai_safety__technical, ai_safety__forecasting
narrator: pw
qa: km
narrator_time: 18h00m
qa_time: 5h30m
---
This report examines what I see as the core argument for concern about existential risk from misaligned artificial intelligence. I proceed in two stages. First, I lay out a backdrop picture that informs such concern. On this picture, intelligent agency is an extremely powerful force, and creating agents much more intelligent than us is playing with fire -- especially given that if their objectives are problematic, such agents would plausibly have instrumental incentives to seek power over humans.

Second, I formulate and evaluate a more specific six-premise argument that creating agents of this kind will lead to existential catastrophe by 2070. On this argument, by 2070: (1) it will become possible and financially feasible to build relevantly powerful and agentic AI systems; (2) there will be strong incentives to do so; (3) it will be much harder to build aligned (and relevantly powerful/agentic) AI systems than to build misaligned (and relevantly powerful/agentic) AI systems that are still superficially attractive to deploy; (4) some such misaligned systems will seek power over humans in high-impact ways; (5) this problem will scale to the full disempowerment of humanity; and (6) such disempowerment will constitute an existential catastrophe.

I assign rough subjective credences to the premises in this argument, and I end up with an overall estimate of ~5% that an existential catastrophe of this kind will occur by 2070. (May 2022 update: since making this report public in April 2021, my estimate here has gone up, and is now at >10%.)

Original article:
https://arxiv.org/abs/2206.13353

Narrated for Joseph Carlsmith by TYPE III AUDIO.

Share feedback on this narration.
"What I mean by "alignment is in large part about making cognition aimable at all"" by Nate Soares
---
client: lesswrong
project_id: curated
feed_id: ai, ai_safety, ai_safety__technical
narrator: pw
qa: mds
---
(Epistemic status: attempting to clear up a misunderstanding about points I have attempted to make in the past. This post is not intended as an argument for those points.)

I have long said that the lion's share of the AI alignment problem seems to me to be about pointing powerful cognition at anything at all, rather than figuring out what to point it at.

It’s recently come to my attention that some people have misunderstood this point, so I’ll attempt to clarify here.

Original article:
https://www.lesswrong.com/posts/NJYmovr9ZZAyyTBwM/what-i-mean-by-alignment-is-in-large-part-about-making

Narrated for LessWrong by TYPE III AUDIO.

Share feedback on this narration.
"H5N1 - thread for information sharing, planning, and action" by MathiasKB
---
client: ea_forum
project_id: curated
narrator: pw
qa: mds
---
Hi everyone,

I've been reading up on H5N1 this weekend, and I'm pretty concerned. Right now my hunch is that there is a non-zero chance that it will cost more than 10,000 people their lives. To be clear, I think it is unlikely that H5N1 will become a pandemic anywhere close to the size of covid.

Nevertheless, I think our community should be actively following the news and start thinking about ways to be helpful if the probability increases. I am creating this thread as a place where people can discuss and share information about H5N1. We have a lot of pandemic experts in this community, do chime in!

Original article:
https://forum.effectivealtruism.org/posts/QMMFyAX3ajf9vF5sb/h5n1-thread-for-information-sharing-planning-and-action

Narrated for the Effective Altruism Forum by TYPE III AUDIO.

Share feedback on this narration.
"Literature review of Transformative Artificial Intelligence timelines" by Jaime Sevilla
---
client: ea_forum
project_id: curated
feed_id: ai, ai_safety, ai_safety__forecasting
narrator: pw
qa: mds
narrator_time: 1h20m
qa_time: 0h15m
---
This is a linkpost for https://epochai.org/blog/literature-review-of-transformative-artificial-intelligence-timelines

We summarize and compare several models and forecasts predicting when transformative AI will be developed.

Highlights:

The review includes quantitative models, including both outside and inside view, and judgment-based forecasts by (teams of) experts.

While we do not necessarily endorse their conclusions, the inside-view model the Epoch team found most compelling is Ajeya Cotra’s “Forecasting TAI with biological anchors”, the best-rated outside-view model was Tom Davidson’s “Semi-informative priors over AI timelines”, and the best-rated judgment-based forecast was Samotsvety’s AGI Timelines Forecast.

The inside-view models we reviewed predicted shorter timelines (e.g. bioanchors has a median of 2052) while the outside-view models predicted longer timelines (e.g. semi-informative priors has a median over 2100). The judgment-based forecasts are skewed towards agreement with the inside-view models, and are often more aggressive (e.g. Samotsvety assigned a median of 2043).

Original article:
https://forum.effectivealtruism.org/posts/4Ckc2zNrAKQwnAyA2/literature-review-of-transformative-artificial-intelligence

Narrated for the Effective Altruism Forum by TYPE III AUDIO.

Share feedback on this narration.
"On not getting contaminated by the wrong obesity ideas" by Natália Coelho Mendonça
---
client: lesswrong
project_id: curated
narrator: pw
qa: km
narrator_time: 5h00m
qa_time: 2h00m
---
A Chemical Hunger (a), a series by the authors of the blog Slime Mold Time Mold (SMTM), argues that the obesity epidemic is entirely caused (a) by environmental contaminants. In my last post, I investigated SMTM’s main suspect (lithium).[1] This post collects other observations I have made about SMTM’s work, not narrowly related to lithium, but rather focused on the broader thesis of their blog post series.

I think that the environmental contamination hypothesis of the obesity epidemic is a priori plausible. After all, we know that chemicals can affect humans, and our exposure to chemicals has plausibly changed a lot over time. However, I found that several of what seem to be SMTM’s strongest arguments in favor of the contamination theory turned out to be dubious, and that nearly all of the interesting things I thought I’d learned from their blog posts turned out to actually be wrong. I’ll explain that in this post.

Original article:
https://www.lesswrong.com/posts/NRrbJJWnaSorrqvtZ/on-not-getting-contaminated-by-the-wrong-obesity-ideas

Narrated for LessWrong by TYPE III AUDIO.

Share feedback on this narration.
EA Forum Weekly Summaries – Episode 16 (Jan. 30 - Feb. 5, 2023)
---
client: ea_forum
project_id: summaries
narrator: cs
---
Original article:
https://forum.effectivealtruism.org/posts/Qzfew7EBPgdCzsxED/ea-and-lw-forum-weekly-summary-30th-jan-5th-feb-2023

This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.

Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).

Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.

Share feedback on this narration.
"What is social impact? A definition" by Benjamin Todd
---
client: 80000_hours
project_id: articles
narrator: pw
qa: mds
narrator_time: 3h00m
qa_time: 0h45m
---
Lots of people say they want to “make a difference,” “do good,” “have a social impact,” or “make the world a better place” — but they rarely say what they mean by those terms.

By clarifying your definition, you can better target your efforts, and make a difference more effectively. But how should you define social impact?

Thousands of years of philosophy have gone into that question. We’re going to try to sum up that thinking; introduce a practical, rough-and-ready definition of social impact; and explain why we think it’s a good definition to focus on.

This is a bit ambitious for one article, so to the philosophers in the audience, please forgive the enormous simplifications. We hope the usefulness of the definition will make up for it.

Original article:
https://80000hours.org/articles/what-is-social-impact-definition/

Narrated for 80,000 Hours by TYPE III AUDIO.

Share feedback on this narration.
"Operations management in high-impact organisations" by Benjamin Todd & Roman Duda
---
client: 80000_hours
project_id: articles
narrator: pw
qa: mds
narrator_time: 3h00m
qa_time: 1h15m
---
In a nutshell: People in operations roles act as multipliers, maximising the productivity of others in the organisation by building systems that keep the organisation functioning effectively at a high level. As a result, people who excel in these positions require significant creativity, self-direction, social skills, and conscientiousness. If you’re a good fit, operations could be the highest-impact role for you.

This career review is based largely on our 2017 survey of talent needs and input from 12 people who have worked in these roles (often in leadership positions) — you can see the full list of contributors in the online version of this article. However, the views presented here do not necessarily reflect those of everyone listed.

Original article:
https://80000hours.org/articles/operations-management/

Narrated for 80,000 Hours by TYPE III AUDIO.

Share feedback on this narration.
"Communication careers" by Luisa Rodriguez & Benjamin Todd
---
client: 80000_hours
project_id: articles
narrator: pw
qa: mds
narrator_time: 2h30m
qa_time: 0h30m
---
Many of the highest-impact people in history have been communicators and advocates of one kind or another.

Take Rosa Parks, who in 1955 refused to give up her seat to a white man on a bus, sparking a protest which led to a Supreme Court ruling that segregated buses were unconstitutional. Parks was a seamstress in her day job, but in her spare time she was involved with the civil rights movement. After she was arrested, she and the NAACP used widely distributed fliers to launch a total boycott of buses in a city with 40,000 African Americans, while simultaneously pushing forward with legal action. This led to major progress for civil rights.

Communication can be aimed at a broad audience (like in Parks’s case) or a narrow influential group. This means there are also many examples of important communicators you’ve never heard of, like Viktor Zhdanov.

In the 20th century, smallpox killed around 400 million people — far more than died in all the century’s wars and political famines. Although credit for the elimination of smallpox often goes to D.A. Henderson (who directly oversaw the programme), it was Viktor Zhdanov who lobbied the World Health Organization to start the elimination campaign in the first place — while facing significant opposition from the members of the World Health Assembly (the proposal passed by just two votes). Without his involvement, smallpox’s elimination probably would not have happened until much later, costing millions of lives, and possibly not at all.

So why has communicating important ideas sometimes been so effective?

Original article:
https://80000hours.org/articles/communication/

Narrated for 80,000 Hours by TYPE III AUDIO.

Share feedback on this narration.
"SolidGoldMagikarp (plus, prompt generation)"
---
client: lesswrong
project_id: curated
feed_id: ai, ai_safety, ai_safety__technical
narrator: pw
qa: km
---
TL;DR

Anomalous tokens: a mysterious failure mode for GPT (which reliably insulted Matthew)
- We have found a set of anomalous tokens which result in a previously undocumented failure mode for GPT-2 and GPT-3 models. (The 'instruct' models “are particularly deranged” in this context, as janus has observed.)
- Many of these tokens reliably break determinism in the OpenAI GPT-3 playground at temperature 0 (which theoretically shouldn't happen).

Prompt generation: a new interpretability method for language models (which reliably finds prompts that result in a target completion). This is good for:
- eliciting knowledge
- generating adversarial inputs
- automating prompt search (e.g. for fine-tuning)

In this post, we'll introduce the prototype of a new model-agnostic interpretability method for language models which reliably generates adversarial prompts that result in a target completion. We'll also demonstrate a previously undocumented failure mode for GPT-2 and GPT-3 language models, which results in bizarre completions (in some cases explicitly contrary to the purpose of the model), and present the results of our investigation into this phenomenon. Further detail can be found in a follow-up post.

Original article:
https://www.lesswrong.com/posts/aPeJE8bSo6rAFoLqg/solidgoldmagikarp-plus-prompt-generation

Narrated for LessWrong by TYPE III AUDIO.

Share feedback on this narration.
"Astronomische Verschwendung: Die Opportunitätskosten verzögerten technologischen Fortschritts" von Nick Bostrom
---
client: german_ea
narrator: ur
---
ABSTRACT. With very advanced technology, a very large population of people leading happy lives could be sustained in the accessible region of the universe. Every year that the development of such technologies, and the subsequent colonisation of the universe, is delayed therefore carries a corresponding opportunity cost: a potential good, namely lives worth living, goes unrealised. Under plausible assumptions, this cost is extremely large. However, the lesson for standard utilitarians is not that we should maximise the speed of technological progress, but that we should maximise its safety, that is, the probability that the colonisation of space will actually take place. This goal has such high utility that standard utilitarians should devote all their energy to it. Utilitarians of the "person-affecting" variety should accept a modified version of this conclusion. Some other ethical views, which combine utilitarian considerations with other criteria, will reach a similar conclusion.

Share feedback on this narration.
"Focus on the places where you feel shocked everyone's dropping the ball" by Nate Soares
---
client: lesswrong
project_id: curated
narrator: pw
qa: mds
narrator_time: 0h40m
qa_time: 0h15m
---
Writing down something I’ve found myself repeating in different conversations:

If you're looking for ways to help with the whole “the world looks pretty doomed” business, here's my advice: look around for places where we're all being total idiots.

Look for places where everyone's fretting about a problem that some part of you thinks it could obviously just solve.

Look around for places where something seems incompetently run, or hopelessly inept, and where some part of you thinks you can do better.

Then do it better.

Original article:
https://www.lesswrong.com/posts/Zp6wG5eQFLGWwcG6j/focus-on-the-places-where-you-feel-shocked-everyone-s

Narrated for LessWrong by TYPE III AUDIO.

Share feedback on this narration.
"The Capability Approach to Human Welfare" by Ryan C Briggs
---
client: ea_forum
project_id: curated
narrator: pw
qa: mds
narrator_time: 1h00m
qa_time: 0h30m
---
This post outlines the capability approach to thinking about human welfare. I think that this approach, while very popular in international development, is neglected in EA. While the capability approach has problems, I think that it provides a better approach to thinking about improving human welfare than approaches based on measuring happiness or subjective wellbeing (SWB) or approaches based on preference satisfaction. Finally, even if you disagree that the capability approach is best, I think this post will be useful to you because it may clarify why many people and organizations in the international development or global health space take the positions that they do.

I will be drawing heavily on the work of Amartya Sen, but I will often not be citing specific texts because I’m an academic and getting to write without careful citations is thrilling.

Original article:
https://forum.effectivealtruism.org/posts/zy6jGPeFKHaoxKEfT/the-capability-approach

Narrated for the Effective Altruism Forum by TYPE III AUDIO.

Share feedback on this narration.
EA Forum Weekly Summaries – Episode 15 (Jan. 23 - 29, 2023)
---
client: ea_forum
project_id: summaries
narrator: cs
---
Original article:
https://forum.effectivealtruism.org/posts/hzc26vGa4RLns7TvK/ea-and-lw-forum-weekly-summary-23rd-29th-jan-23

This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.

Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).

Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.

Share feedback on this narration.
"Basics of Rationalist Discourse" by Duncan Sabien
---
client: lesswrong
project_id: curated
narrator: pw
qa: mds
narrator_time: 3h40m
qa_time: 1h45m
---
This post is meant to be a linkable resource. Its core is a short list of guidelines (you can link directly to the list) that are intended to be fairly straightforward and uncontroversial, for the purpose of nurturing and strengthening a culture of clear thinking, clear communication, and collaborative truth-seeking.

"Alas," said Dumbledore, "we all know that what should be, and what is, are two different things. Thank you for keeping this in mind."

There is also (for those who want to read more than the simple list) substantial expansion/clarification of each specific guideline, along with justification for the overall philosophy behind the set.

Original article:
https://www.lesswrong.com/posts/XPv4sYrKnPzeJASuk/basics-of-rationalist-discourse-1

Narrated for LessWrong by TYPE III AUDIO.

Share feedback on this narration.
"Sapir-Whorf for Rationalists" by Duncan Sabien
---
client: lesswrong
project_id: curated
narrator: pw
qa: mds
narrator_time: 2h15m
qa_time: 1h15m
---
Casus Belli: As I was scanning over my (rather long) list of essays-to-write, I realized that roughly a fifth of them were of the form "here's a useful standalone concept I'd like to reify," à la cup-stacking skills, fabricated options, split and commit, and sazen. Some notable entries on that list (which I name here mostly in the hope of someday coming back and turning them into links) include: red vs. white, walking with three, setting the zero point[1], seeding vs. weeding, hidden hinges, reality distortion fields, and something-about-layers-though-that-one-obviously-needs-a-better-word.

While it's still worthwhile to motivate/justify each individual new conceptual handle (and the planned essays will do so), I found myself imagining a general objection of the form "this is just making up terms for things," or perhaps "this is too many new terms, for too many new things." I realized that there was a chunk of argument, repeated across all of the planned essays, that I could factor out, and that (to the best of my knowledge) there was no single essay aimed directly at the question "why new words/phrases/conceptual handles at all?"

So ... voilà.

Original article:
https://www.lesswrong.com/posts/PCrTQDbciG4oLgmQ5/sapir-whorf-for-rationalists

Narrated for LessWrong by TYPE III AUDIO.

Share feedback on this narration.
"My Model Of EA Burnout" by Logan Strohl
---
client: lesswrong
project_id: curated
narrator: pw
qa: mds
narrator_time: 0h45m
qa_time: 0h15m
---
I think that EA [editor note: "Effective Altruism"] burnout usually results from prolonged dedication to satisfying the values you think you should have, while neglecting the values you actually have.

Setting aside for the moment what “values” are and what it means to “actually” have one, suppose that I actually value these things (among others):

True Values:
- Abundance
- Power
- Novelty
- Social Harmony
- Beauty
- Growth
- Comfort
- The Wellbeing Of Others
- Excitement
- Personal Longevity
- Accuracy

One day I learn about “global catastrophic risk”: Perhaps we’ll all die in a nuclear war, or an AI apocalypse, or a bioengineered global pandemic, and perhaps one of these things will happen quite soon. I recognize that GCR is a direct threat to The Wellbeing Of Others and to Personal Longevity, and as I do, I get scared. I get scared in a way I have never been scared before, because I’ve never before taken seriously the possibility that everyone might die, leaving nobody to continue the species or even to remember that we ever existed—and because this new perspective on the future of humanity has caused my own personal mortality to hit me harder than the lingering perspective of my Christian upbringing ever allowed. For the first time in my life, I’m really aware that I, and everyone I will ever care about, may die.

Original article:
https://www.lesswrong.com/posts/pDzdb4smpzT3Lwbym/my-model-of-ea-burnout

Narrated for LessWrong by TYPE III AUDIO.

Share feedback on this narration.
EA Forum Weekly Summaries – Episode 14 (Jan. 16 - 22, 2023)
---
client: ea_forum
project_id: summaries
narrator: cs
---
Original article:
https://forum.effectivealtruism.org/posts/6Ezg8HgHib9bpWCFr/ea-and-lw-forum-weekly-summary-16th-22nd-jan-23

This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.

Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).

Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.

Share feedback on this narration.
Is Power-Seeking AI an Existential Risk?
---
client: joe_carlsmith
narrator: pw
qa: km
---
Share feedback on this narration.
"500 Millionen, doch kein einziger mehr" von Jai
---
client: german_ea
service: human_edits_ai_narrates
narrator: nnm
qa: ph
---
We will never know their names.

The first victim could not be recorded, for at the time there was no written language in which to do so. The victims were someone's daughters or sons, someone's friend. They were loved by the people around them. And they were in pain, their skin covered in rashes; they were confused and frightened, not knowing why this was happening to them or what they could do about it. They were the victims of a wrathful, inhuman god. There was nothing to be done: humanity was not yet strong enough, not yet aware enough, not yet knowledgeable enough to fight a monster it could not see.

Share feedback on this narration.
"Die Fabel vom tyrannischen Drachen" von Nick Bostrom
---
client: german_ea
service: human_narration
narrator: ur
qa: not_t3a
---
This is an audio narration of the German translation of The Fable of the Dragon-Tyrant by Nick Bostrom. The translation was done by Franz Fuchs, edited by Stephan Dalügge and narrated by Uta Reichardt. You can find the original paper at nickbostrom.com. Links and related reading suggestions are in the episode description.

Once upon a time, long, long ago, our planet was tyrannised by a giant dragon. The dragon towered over even the tallest cathedral and was covered in a thick armour of black scales. Its red eyes glowed with hatred, and a foul-smelling, yellowish-green slime flowed constantly from its terrible maw. It demanded a fearsome tribute from humanity: to satisfy its gigantic appetite, ten thousand men and women had to be brought at nightfall each day to the foot of the mountain where the dragon-tyrant lived. Sometimes the dragon devoured these unfortunates on the spot; sometimes it locked them away inside the mountain, where they languished for months or years until they were finally eaten.

Share feedback on this narration.
"Rethink Priorities’ Welfare Range Estimates" by Bob Fischer
---
client: ea_forum
project_id: curated
narrator: pw
qa: mds
narrator_time: 2h15m
qa_time: 1h00m
---
We offer welfare range estimates for 11 farmed species: pigs, chickens, carp, salmon, octopuses, shrimp, crayfish, crabs, bees, black soldier flies, and silkworms.

These estimates are, essentially, estimates of the differences in the possible intensities of these animals' pleasures and pains relative to humans' pleasures and pains. Then, we add a number of controversial (albeit plausible) philosophical assumptions (including hedonism, valence symmetry, and others discussed here) to reach conclusions about animals' welfare ranges relative to humans' welfare range.

Given hedonism and conditional on sentience, we think (credence: 0.7) that none of the vertebrate nonhuman animals of interest have a welfare range that’s more than double the size of any of the others. While carp and salmon have lower scores than pigs and chickens, we suspect that’s largely due to a lack of research.

Given hedonism and conditional on sentience, we think (credence: 0.65) that the welfare ranges of humans and the vertebrate animals of interest are within an order of magnitude of one another.

Given hedonism and conditional on sentience, we think (credence: 0.6) that all the invertebrates of interest have welfare ranges within two orders of magnitude of the vertebrate nonhuman animals of interest. Invertebrates are so diverse and we know so little about them; hence, our caution.

Our view is that the estimates we’ve provided should be seen as placeholders—albeit, we submit, the best such placeholders available. We’re providing a starting point for more rigorous, empirically-driven research into animals’ welfare ranges. At the same time, we’re offering guidance for decisions that have to be made long before that research is finished.

Original article:
https://forum.effectivealtruism.org/posts/Qk3hd6PrFManj8K6o/rethink-priorities-welfare-range-estimates

Narrated for the Effective Altruism Forum by TYPE III AUDIO.

Share feedback on this narration.
"The Social Recession: By the Numbers" by Anton Stjepan Cebalo
---
client: lesswrong
project_id: curated
narrator: pw
qa: km
narrator_time: 1h15m
qa_time: 0h40m
---
One of the most discussed topics online recently has been friendships and loneliness. Ever since the infamous chart showing more people are not having sex than ever before first made the rounds, there’s been increased interest in the social state of things. Polling has demonstrated a marked decline in all spheres of social life, including close friends, intimate relationships, trust, labor participation, and community involvement. The trend looks to have worsened since the pandemic, although it will take some years before this is clearly established.

The decline comes alongside a documented rise in mental illness, diseases of despair, and poor health more generally. In August 2022, the CDC announced that U.S. life expectancy has fallen further and is now where it was in 1996. Contrast this to Western Europe, where it has largely rebounded to pre-pandemic numbers. Still, even before the pandemic, the years 2015-2017 saw the longest sustained decline in U.S. life expectancy since 1915-18. While my intended angle here is not health-related, general sociability is closely linked to health. The ongoing shift has been called the “friendship recession” or the “social recession.”

My intention here is not to present a list of miserable points, but to group them together in a meaningful context whose consequences are far-reaching. While most of what I will outline here focuses on the United States, many of these same trends are present elsewhere because its catalyst is primarily the internet itself. With no signs of abating, a new kind of sociability has only started to affect what people ask of the world through the prism of themselves.

Original article:
https://www.lesswrong.com/posts/Xo7qmDakxiizG7B9c/the-social-recession-by-the-numbers

Narrated for LessWrong by TYPE III AUDIO.

Share feedback on this narration.
"On Living Without Idols" by Rockwell
---
client: ea_forum
project_id: curated
narrator: pw
qa: km
narrator_time: 0h45m
qa_time: 0h15m
---
For many years, I've actively lived in avoidance of idolizing behavior and in pursuit of a nuanced view of even those I respect most deeply. I think this has helped me in numerous ways and has been of particular help in weathering the past few months within the EA community. Below, I discuss how I think about the act of idolizing behavior, some of my personal experiences, and how this mentality can be of use to others.

Note: I want more people to post on the EA Forum and have their ideas taken seriously regardless of whether they conform to Forum stylistic norms. I'm perfectly capable of writing a version of this post in the style typical to the Forum, but this post is written the way I actually like to write. If this style doesn’t work for you, you might want to read the first section “Anarchists have no idols” and then skip ahead to the section “Living without idols, Pt. 1” toward the end. You’ll lose some of the insights contained in my anecdotes, but still get most of the core ideas I want to convey here.

Original article:
https://forum.effectivealtruism.org/posts/jgspXC8GKA7RtxMRE/on-living-without-idols

Narrated for the Effective Altruism Forum by TYPE III AUDIO.

Share feedback on this narration.
"Journalism: Career review" by Cody Fenwick
---
client: 80000_hours
project_id: articles
narrator: pw
qa: mds
narrator_time: 3h00m
qa_time: 1h00m
---
For the right person, becoming a journalist could be very impactful. Good journalists help keep people informed, positively shape public discourse on important topics, and can provide a platform for people and ideas that the public might not otherwise hear about.

But the most influential positions in the field are highly competitive, and journalists face a lot of mixed incentives that may detract from their ability to have a positive impact.

Original article:
https://80000hours.org/career-reviews/journalism/

Narrated for 80,000 Hours by TYPE III AUDIO.

Share feedback on this narration.
"Recursive Middle Manager Hell" by Raemon
---
client: lesswrong
project_id: curated
narrator: pw
qa: km
narrator_time: 1h45m
qa_time: 0h30m
---
I think Zvi's Immoral Mazes sequence is really important, but comes with more worldview-assumptions than are necessary to make the points actionable. I conceptualize Zvi as arguing for multiple hypotheses. In this post I want to articulate one sub-hypothesis, which I call "Recursive Middle Manager Hell". I'm deliberately not covering some other components of his model[1].

tl;dr: Something weird and kinda horrifying happens when you add layers of middle-management. This has ramifications on when/how to scale organizations, and where you might want to work, and maybe general models of what's going on in the world.

You could summarize the effect as "the org gets more deceptive, less connected to its original goals, more focused on office politics, less able to communicate clearly within itself, and selected more for sociopathy in upper management."

You might read that list of things and say "sure, seems a bit true", but one of the main points here is "Actually, this happens in a deeper and more insidious way than you're probably realizing, with much higher costs than you're acknowledging. If you're scaling your organization, this should be one of your primary worries."

Original article:
https://www.lesswrong.com/posts/pHfPvb4JMhGDr4B7n/recursive-middle-manager-hell

Narrated for LessWrong by TYPE III AUDIO.

Share feedback on this narration.
The Only Living Boy in Palo Alto - Theodore Schleifer
---
client: t3a_test_ph
---
https://archive.ph/8fUo2

Share feedback on this narration.