TYPE III AUDIO (All episodes)

167 episodes

"Information security in high-impact areas career review" by Jarrah Bloomfield

As the 2016 US presidential campaign was entering a fractious round of primaries, Hillary Clinton’s campaign chair, John Podesta, opened a disturbing email. The March 19 message warned that his Gmail password had been compromised and that he urgently needed to change it.

The email was a lie. It wasn’t trying to help him protect his account — it was a phishing attack trying to gain illicit access.

Podesta was suspicious, but the campaign’s IT team erroneously wrote that the email was “legitimate” and told him to change his password. The IT team provided a safe link for Podesta to use, but it seems he or one of his staffers instead clicked the link in the forged email. That link was used by Russian intelligence hackers known as “Fancy Bear,” who used their access to leak private campaign emails for public consumption in the final weeks of the 2016 race, embarrassing the Clinton team.

While there are plausibly many critical factors in any close election, it’s possible that the controversy around the leaked emails played a non-trivial role in Clinton’s subsequent loss to Donald Trump. This would mean the failure of the campaign’s security team to prevent the hack — which might have come down to a mere typo — was extraordinarily consequential.

Source: https://80000hours.org/career-reviews/information-security

Narrated for 80,000 Hours by TYPE III AUDIO.

Jun 23, 2023 · 20 min

Part 3: No matter your job, here’s 3 evidence-based ways anyone can have a real impact

No matter which career you choose, anyone can make a difference by donating to charity, engaging in advocacy, or volunteering.

Unfortunately, many attempts to do good in this way are ineffective, and some actually cause harm.

Take sponsored skydiving. Every year, thousands of people collect donations for good causes and throw themselves out of planes to draw attention to whatever charity they’ve chosen to support. This sounds like a win-win: the fundraiser gets an exhilarating once-in-a-lifetime experience while raising money for a worthy cause. What could be the harm in that?

Quite a bit, actually.

Source: https://80000hours.org/career-guide/making-a-difference/

Narrated for 80,000 Hours by Perrin Walker of TYPE III AUDIO.

Jun 14, 2023 · 22 min

Part 10: How to make your career plan

People often come to us trying to figure out what they should do over the next 10 or 20 years. Others say they want to figure out “the right career” for them.

The problem with all of this is that, as we’ve seen, your plan is almost certainly going to change:

- You’ll change — more than you think.
- The world will change — many industries around today won’t even exist in 20 years.
- You’ll learn more about what’s best for you — it’s very hard to predict what you’re going to be good at ahead of time.

In a sense, there is no stable “right career for you.” Rather, the best option will keep changing as the world changes and you learn more. Many people we’ve advised would never have predicted the job they’ve ended up doing.

Long-term planning could even be counterproductive. There’s a risk of becoming fixated on a specific plan, and failing to change your plans as your situation changes.

All that said, giving up on planning and setting goals probably isn’t wise either. As Eisenhower said, “Plans are useless but planning is essential.”

Having some idea of where you’d like to end up can help you spot much better opportunities to advance. In fact, if you want to have a big positive impact, we’d argue that planning is even more important. Many of the highest-impact roles require specialist career capital you’re unlikely to get by accident, such as connections to people in biosecurity or expertise in particular technical skills. Likewise, getting to the top of many fields often requires decades of focused effort.

This is the planning puzzle — most ‘plans’ will radically change long before they’re completed, but we still benefit from having them.

Given all this, how should you make a good career plan? Here are our main tips.

Source: https://80000hours.org/career-guide/career-planning/

Narrated for 80,000 Hours by Perrin Walker of TYPE III AUDIO.

Jun 14, 2023 · 24 min

Part 11: All the best advice we could find on how to get a job

When it comes to advice on how to get a job, most of it is pretty bad.

CollegeFeed suggests that you “be confident” as their first interview tip, which is a bit like suggesting that you should “be employable.”

Many advisors cover the “clean your nails and have a firm handshake” kind of thing.

One of the most popular interview videos on YouTube, with over 8 million views, makes the wise point that you definitely mustn’t sit down until you’re explicitly invited to do so by the interviewer. Who could ever recover from taking a seat a few seconds too early?

Over the years, we’ve sifted through a lot of bad advice to find the nuggets that are actually good. We’ve also provided one-on-one coaching to thousands of people who are applying for jobs, and hired about 30 people ourselves, so we’ve seen what works from both sides. Here, we’ll sum up what we’ve learned.

The key idea is that getting a job is about convincing someone that you have something valuable to offer. So you should focus on doing whatever employers will find most convincing. That means instead of sending out lots of CVs, focus on getting recommendations and proving you can do the work. Read on to get a step-by-step guide.

Source: https://80000hours.org/career-guide/how-to-get-a-job/

Narrated for 80,000 Hours by Perrin Walker of TYPE III AUDIO.

Jun 14, 2023 · 33 min

Part 9: All the evidence-based advice we found on how to be more successful in any job

The trouble with self-help advice is that it’s often based on barely any evidence.

For example, how many times have you been told to “think positively” in order to reach your goals? It’s probably the most popular piece of personal guidance, beloved by everyone from high school teachers to bestselling careers experts. One key idea behind the slogan is that if you visualise your ideal future, you’re more likely to get there.

The problem? Recent research found evidence that fantasising about your perfect life actually makes you less likely to make it happen. While it can be pleasant, it appears to reduce motivation because it makes you feel that you’ve already hit those targets. We’ll cover some ways positive thinking can be helpful later in the article.

Much other advice is just one person’s opinion, or useless clichés. But at 80,000 Hours, we’ve found that there are a number of evidence-backed steps that anyone can take to become more productive and successful in their career, and life in general. And as we saw in an earlier article, people can keep improving their skills for decades.

So we’ve gathered up all the best advice we’ve found over our last 10+ years of research. These are things that anyone can do in any job to increase their career capital and personal fit — and, therefore, their positive impact.

Source: https://80000hours.org/career-guide/how-to-be-successful/

Narrated for 80,000 Hours by Perrin Walker of TYPE III AUDIO.

Jun 14, 2023 · 1h 4m

Part 8: How to find the right career for you

Everyone says it’s important to find a job you’re good at, but no one tells you how.

The standard advice is to think about it for weeks and weeks until you “discover your talent.” To help, career advisers give you quizzes about your interests and preferences. Others recommend you go on a gap yah, reflect deeply, imagine different options, and try to figure out what truly motivates you.

But as we saw in an earlier article, becoming really good at most things takes decades of practice. So to a large degree, your abilities are built rather than “discovered.” Darwin, Lincoln, and Oprah all failed early in their careers, then went on to completely dominate their fields. Albert Einstein’s 1895 schoolmaster’s report reads, “He will never amount to anything.”

Asking “What am I good at?” needlessly narrows your options. It’s better to ask: “What could I become good at?”

That aside, the bigger problem is that these methods aren’t reliable. Plenty of research shows that while it’s possible to predict what you’ll be good at ahead of time, it’s difficult. Just “going with your gut” is particularly unreliable, and it turns out career tests don’t work very well either.

Instead, you should be prepared to think like a scientist — learn about and try out your options, looking outwards rather than inwards. Here we’ll explain why and how.

Source: https://80000hours.org/career-guide/personal-fit/

Narrated for 80,000 Hours by Perrin Walker of TYPE III AUDIO.

Jun 14, 2023 · 37 min

Part 7: Which jobs put you in the best long-term position?

People like to lionise the Mozarts, Malala Yousafzais, and Mark Zuckerbergs of the world — people who achieved great success while young — and there are all sorts of awards for young leaders, like the Forbes 30 Under 30.

But these stories are interesting precisely because they’re the exception.

Most people reach the peak of their impact in middle age. Income usually peaks in the 40s, suggesting that it takes around 20 years for most people to reach their peak productivity.

Source: https://80000hours.org/career-guide/career-capital/

Narrated for 80,000 Hours by Perrin Walker of TYPE III AUDIO.

Jun 14, 2023 · 51 min

Part 6: Which jobs help people the most?

Many people think of Superman as a hero. But he may be the greatest example of underutilised talent in all of fiction. It was a blunder to spend his life fighting crime one case at a time; if he’d thought a little more creatively, he could have done far more good. How about delivering vaccines to everyone in the world at superspeed? That would have eradicated most infectious disease, saving hundreds of millions of lives.

Here we’ll argue that a lot of people who want to “make a difference” with their career fall into the same trap as Superman. College graduates imagine becoming doctors or teachers, but these may not be the best fit for their particular skills. And like Superman fighting crime, these paths are often limited in the amount they could potentially contribute to solving a problem.

In contrast, Nobel Prize winner Karl Landsteiner discovered blood groups, enabling hundreds of millions of lifesaving operations. He would never have been able to carry out that many surgeries himself.

Below we’ll introduce five ways you could use your career to help tackle the social problems you want to work on (which we identified in the previous article). The five ways are: earning to give, communication, research, government and policy, and organisation-building. We’ll make concrete recommendations on how to pursue each approach.

Source: https://80000hours.org/career-guide/high-impact-jobs/

Narrated for 80,000 Hours by Perrin Walker of TYPE III AUDIO.

Jun 14, 2023 · 44 min

Part 12: One of the most powerful ways to improve your career: Join a community.

Not many students are in a position to start a successful, cost-effective charity straight out of a philosophy degree. But when Thomas attended an “effective altruism” conference in London in 2018, he discovered an opportunity to start a nonprofit that could have a major impact on factory farmed animals.

Through the community, he received advice and funding, and ended up in an incubation programme. Today, Thomas’s charity, the Fish Welfare Initiative, has reduced the suffering of around one million factory farmed fish, and has an annual budget of over half a million dollars.

If Thomas had just added loads of people on LinkedIn, this would probably never have happened. And this illustrates what many people miss about networking: the value of joining a great community.

Finding the right community can help you gain hundreds of potential allies in one go. In fact, getting involved in the right community can be one of the best ways to make friends, advance your career, and have a greater impact. Many people we advise say that “finding their people” was one of the most important steps in their career, and life in general.

What’s more, a group of people working together can have more impact than they could individually.

In this article, we’ll explain how joining a community can help, and how to get involved.

Source: https://80000hours.org/career-guide/community/

Narrated for 80,000 Hours by Perrin Walker of TYPE III AUDIO.

Jun 14, 2023 · 17 min

Summary: How to find a fulfilling career that does good

TL;DR: To have a fulfilling career, get good at something and then use it to tackle pressing global problems.

Rather than expecting to discover your passion in a flash of insight, expect your job satisfaction to grow over time as you learn more about what kind of work fits you, master valuable skills, and use them to find engaging work that helps others.

Source: https://80000hours.org/career-guide/summary/

Narrated for 80,000 Hours by Perrin Walker of TYPE III AUDIO.

Jun 14, 2023 · 5 min

Part 4: Want to do good? Here's how to choose an area to focus on

If you want to make a difference with your career, one place to start is to ask which global problems most need attention. Should you work on education, climate change, poverty, or something else?

The standard advice is to do whatever most interests you, and most people seem to end up working on whichever social problem first grabs their attention.

That’s exactly what our cofounder, Ben, did. At age 19, he was most interested in climate change. However, his focus on climate change wasn’t the result of a careful comparison of the pros and cons of working on different problems. Rather, by his own admission, he’d happened to read about it, and found it engaging because it was sciency and he was geeky.

The problem with this approach is that you might happen to stumble across an area that’s just not that big, important, or easy to make progress on. You’re also much more likely to stumble across the problems that already receive the most attention, which makes them lower impact.

So how can you avoid these mistakes, and do more good?

Source: https://80000hours.org/career-guide/most-pressing-problems/

Narrated for 80,000 Hours by Perrin Walker of TYPE III AUDIO.

Jun 14, 2023 · 13 min

The end: A cheery final note — imagining your deathbed

We’re about to summarise the whole guide in one minute. But before that, imagine a cheery thought: you’re at the end of your 80,000-hour career.

You’re on your deathbed looking back. What are some things you might regret?

Perhaps you drifted into whatever seemed like the easiest option, or did what your parents did. Maybe you even made a lot of money doing something you were interested in, and had a nice house and car. But you still wonder: what was it all for?

Now imagine instead that you worked really hard throughout your life, and ended up saving the lives of 100 children. Can you really imagine regretting that?

To have a truly fulfilling life, we need to turn outwards rather than inwards. Rather than asking “What’s my passion?,” ask “How can I best contribute to the world?”

As we’ve seen, by using our fortunate positions and acting strategically, there’s a huge amount we can all do to help others. And we can do this at little cost to ourselves, and most likely while having a more successful and satisfying career too.

Source: https://80000hours.org/career-guide/end/

Narrated for 80,000 Hours by Perrin Walker of TYPE III AUDIO.

Jun 14, 2023 · 5 min

Part 5: The world’s biggest problems and why they’re not what first comes to mind

We’ve spent much of the last 10+ years trying to answer a simple question: what are the world’s biggest and most neglected problems?

We wanted to have a positive impact with our careers, and so we set out to discover where our efforts would be most effective.

Our analysis suggests that choosing the right problem could increase your impact by over 100 times, which would make it the most important driver of your impact.

Here, we give a summary of what we’ve learned. Read on to hear why ending diarrhoea might save as many lives as world peace, why artificial intelligence might be an even bigger deal, and what to do in your own career to make the most urgent changes happen.

In short, the most pressing problems are those where people can have the greatest impact by working on them. As we explained in the previous article, this means problems that are not only big, but also neglected and solvable. The more neglected and solvable, the further extra effort will go. And this means they’re not the problems that first come to mind.

Source: https://80000hours.org/career-guide/world-problems/

Narrated for 80,000 Hours by Perrin Walker of TYPE III AUDIO.

Jun 14, 2023 · 36 min

Part 2: Can one person make a difference? What the evidence says.

It’s easy to feel like one person can’t make a difference. The world has so many big problems, and they often seem impossible to solve.

So when we started 80,000 Hours — with the aim of helping people do good with their careers — one of the first questions we asked was, “How much difference can one person really make?”

We learned that while many common ways to do good (such as becoming a doctor) have less impact than you might first think, others have allowed certain people to achieve an extraordinary impact.

In other words, one person can make a difference — but you might have to do something a little unconventional.

In this article, we start by estimating how much good you could do by becoming a doctor. Then, we share some stories of the highest-impact people in history, and consider what they mean for your career.

Source: https://80000hours.org/career-guide/can-one-person-make-a-difference/

Narrated for 80,000 Hours by Perrin Walker of TYPE III AUDIO.

Jun 14, 2023 · 16 min

Part 1: We reviewed over 60 studies about what makes for a dream job. Here’s what we found.

We all want to find a dream job that’s enjoyable and meaningful, but what does that actually mean?

Some people imagine that the answer involves discovering their passion through a flash of insight, while others think that the key elements of their dream job are that it be easy and highly paid.

We’ve reviewed three decades of research into the causes of a satisfying life and career, drawing on over 60 studies, and we didn’t find much evidence for these views.

Instead, we found six key ingredients of a dream job. They don’t include income, and they aren’t as simple as “following your passion.”

In fact, following your passion can lead you astray. Steve Jobs was passionate about Zen Buddhism before entering technology. Maya Angelou worked as a calypso dancer before she became a celebrated poet and civil rights activist.

Rather, you can develop passion by doing work that you find enjoyable and meaningful. The key is to get good at something that helps other people.

Source: https://80000hours.org/career-guide/job-satisfaction/

Narrated for 80,000 Hours by Perrin Walker of TYPE III AUDIO.

Jun 14, 2023 · 30 min

Introduction: Why should I read this guide?

You’ll spend about 80,000 hours working in your career: 40 hours a week, 50 weeks a year, for 40 years. So how to spend that time is one of the most important decisions you’ll ever make.

Choose wisely, and you will not only have a more rewarding and interesting life — you’ll also be able to help solve some of the world’s most pressing problems. But how should you choose?

To answer this question, we set up an independent nonprofit and have done over 10 years of research alongside Oxford academics. Our only aim is to help you have the greatest possible positive impact.

Along the way, we’ve discovered some surprising things, and over 10 million people have viewed our advice.

Source: https://80000hours.org/career-guide/introduction/

Narrated for 80,000 Hours by TYPE III AUDIO.

Jun 14, 2023 · 7 min

"Information security in high-impact areas career review" by Jarrah Bloomfield

As the 2016 US presidential campaign was entering a fractious round of primaries, Hillary Clinton’s campaign chair, John Podesta, opened a disturbing email. The March 19 message warned that his Gmail password had been compromised and that he urgently needed to change it.

The email was a lie. It wasn’t trying to help him protect his account — it was a phishing attack trying to gain illicit access.

Podesta was suspicious, but the campaign’s IT team erroneously wrote that the email was “legitimate” and told him to change his password. The IT team provided a safe link for Podesta to use, but it seems he or one of his staffers instead clicked the link in the forged email. That link was used by Russian intelligence hackers known as “Fancy Bear,” who used their access to leak private campaign emails for public consumption in the final weeks of the 2016 race, embarrassing the Clinton team.

While there are plausibly many critical factors in any close election, it’s possible that the controversy around the leaked emails played a non-trivial role in Clinton’s subsequent loss to Donald Trump. This would mean the failure of the campaign’s security team to prevent the hack — which might have come down to a mere typo — was extraordinarily consequential.

Source: https://80000hours.org/career-reviews/information-security

Narrated for 80,000 Hours by TYPE III AUDIO.

Jun 12, 2023 · 20 min

"Cause area report: Antimicrobial Resistance" by Akhil

This post is a summary of some of my work as a field strategy consultant at Schmidt Futures' Act 2 program, where I spoke with over a hundred experts and did a deep dive into antimicrobial resistance to find impactful investment opportunities within the cause area. The full report can be accessed here.

Antimicrobials, the medicines we use to fight infections, have played a foundational role in improving the length and quality of human life since penicillin and other antimicrobials were first developed in the early and mid 20th century.

Antimicrobial resistance, or AMR, occurs when bacteria, viruses, fungi, and parasites evolve resistance to antimicrobials. As a result, antimicrobial medicines such as antibiotics and antifungals become ineffective and unable to fight infections in the body.

AMR is responsible for millions of deaths each year, more than HIV or malaria (ARC 2022). The AMR Visualisation Tool, produced by Oxford University and IHME, visualises IHME data which finds that 1.27 million deaths per year are attributable to bacterial resistance and 4.95 million deaths per year are associated with bacterial resistance.

Source: https://forum.effectivealtruism.org/posts/W93Pt7xch7eyrkZ7f/cause-area-report-antimicrobial-resistance

Narrated for the Effective Altruism Forum by TYPE III AUDIO.

Jun 8, 2023 · 12 min

"Tips for people considering starting new incubators" by Joey

Charity Entrepreneurship is frequently contacted by individuals and donors who like our model. Several have expressed interest in seeing the model expanded, or seeing what a twist on the model would look like (e.g., different cause area, region, etc.).

Although we are excited about maximizing CE’s impact, we are less convinced by the idea of growing the effective charity pool via franchising or other independent nonprofit incubators. This is because new incubators often do not address the actual bottlenecks faced by the nonprofit landscape, as we see them.

There are lots of factors that prevent great new charities from being launched, and from eventually having a large impact. We have scaled CE to about 10 charities a year, and from our perspective, these are the three major bottlenecks to growing the new charity ecosystem further: mid-stage funding, founders, and multiplying effects.

Source: https://forum.effectivealtruism.org/posts/ckokr9uhr2Cu3h5En/tips-for-people-considering-starting-new-incubators

Narrated for the Effective Altruism Forum by TYPE III AUDIO.

May 26, 2023 · 15 min

"An artificially structured argument for expecting AGI ruin" by Rob Bensinger

Philosopher David Chalmers asked: “Is there a canonical source for ‘the argument for AGI ruin’ somewhere, preferably laid out as an explicit argument with premises and a conclusion?”

Unsurprisingly, the actual reason people expect AGI ruin isn't a crisp deductive argument; it's a probabilistic update based on many lines of evidence. The specific observations and heuristics that carried the most weight for someone will vary for each individual, and can be hard to accurately draw out.

That said, Eliezer Yudkowsky's “So Far: Unfriendly AI Edition” might be a good place to start if we want a pseudo-deductive argument just for the sake of organizing discussion. People can then say which premises they want to drill down on.

In “The Basic Reasons I Expect AGI Ruin,” I wrote: “When I say ‘general intelligence,’ I'm usually thinking about ‘whatever it is that lets human brains do astrophysics, category theory, etc. even though our brains evolved under literally zero selection pressure to solve astrophysics or category theory problems.’ It's possible that we should already be thinking of GPT-4 as ‘AGI’ on some definitions, so to be clear about the threshold of generality I have in mind, I'll specifically talk about ‘STEM-level AGI,’ though I expect such systems to be good at non-STEM tasks too. STEM-level AGI is AGI that has ‘the basic mental machinery required to do par-human reasoning about all the hard sciences,’ though a specific STEM-level AGI could (e.g.) lack physics ability for the same reasons many smart humans can't solve physics problems, such as ‘lack of familiarity with the field.’”

Source: https://www.lesswrong.com/posts/QzkTfj4HGpLEdNjXX/an-artificially-structured-argument-for-expecting-agi-ruin

Narrated for LessWrong by TYPE III AUDIO.

May 16, 2023 · 1h 2m

"How much do you believe your results?" by Eric Neyman

Thanks to Drake Thomas for feedback.

I. Here’s a fun scatter plot. It has two thousand points, which I generated as follows: first, I drew two thousand x-values from a normal distribution with mean 0 and standard deviation 1. Then, I chose the y-value of each point by taking the x-value and then adding noise to it. The noise is also normally distributed, with mean 0 and standard deviation 1.

Notice that there’s more spread along the y-axis than along the x-axis. That’s because each y-coordinate is a sum of two independently drawn numbers from the standard normal distribution. Because variances add, the y-values have variance 2 (standard deviation 1.41), not 1. Statisticians often talk about data forming an “elliptical cloud”.

Original text: https://www.lesswrong.com/posts/nnDTgmzRrzDMiPF9B/how-much-do-you-believe-your-results

Narrated for LessWrong by TYPE III AUDIO.
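The data-generating procedure described above is short enough to reproduce directly. Here is a minimal sketch under the stated setup (the use of NumPy and the variable names are my choices, not the post's):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 2000
x = rng.normal(loc=0.0, scale=1.0, size=n)       # x-values: standard normal
y = x + rng.normal(loc=0.0, scale=1.0, size=n)   # y = x plus independent standard-normal noise

# Variances of independent terms add: Var(y) = 1 + 1 = 2,
# so the y-values have standard deviation sqrt(2), about 1.41.
print(x.std(), y.std())  # roughly 1.0 and 1.41
```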

May 10, 2023 · 33 min

EA Forum Weekly Summaries – Episode 25 (May 1-7, 2023)

We've just passed the half-year mark for this project! If you're reading this, please consider taking this 5-minute survey — all questions optional. If you listen to the podcast version, we have a separate survey for that here. Thanks to everyone who has responded to this already!

Original text: https://forum.effectivealtruism.org/posts/9QcmyGAjERHRFfrr7/summaries-of-top-forum-posts-1st-to-7th-may-2023

This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.

Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).

Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.

May 10, 2023 · 28 min

"Predictable updating about AI risk" by Joe Carlsmith

How worried about AI risk will we feel in the future, when we can see advanced machine intelligence up close? We should worry accordingly now.

Original article: https://joecarlsmith.com/2023/05/08/predictable-updating-about-ai-risk

Narrated by Joe Carlsmith and included on the Effective Altruism Forum by TYPE III AUDIO.

May 9, 2023 · 1h 3m

EA Forum Weekly Summaries – Episode 24 (April 24-30, 2023)

We've just passed the half-year mark for this project! If you're reading this, please consider taking this 5-minute survey — all questions optional. If you listen to the podcast version, we have a separate survey for that here. Thanks to everyone who has responded to this already!

Original text: https://forum.effectivealtruism.org/posts/wzn7hEj3BSz7us7ge/summaries-of-top-forum-posts-24th-30th-april-2023

This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.

Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).

Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.

May 8, 2023 · 22 min

"AGI safety career advice" by Richard Ngo

People often ask me for career advice related to AGI safety. This post summarizes the advice I most commonly give. I’ve split it into three sections: general mindset, alignment research, and governance work. For each of the latter two, I start with high-level advice aimed primarily at students and those early in their careers, then dig into more details of the field. See also this post I wrote two years ago, containing a bunch of fairly general career advice.

General mindset

In order to have a big impact on the world you need to find a big lever. This document assumes that you think, as I do, that AGI safety is the biggest such lever. There are many ways to pull on that lever, though — from research and engineering to operations and field-building to politics and communications. I encourage you to choose between these based primarily on your personal fit — a combination of what you're really good at and what you really enjoy. In my opinion, the difference between being a great versus a mediocre fit swamps other differences in the impactfulness of most pairs of AGI-safety-related jobs.

Original article: https://forum.effectivealtruism.org/posts/xg7gxsYaMa6F3uH8h/agi-safety-career-advice

Narrated for the Effective Altruism Forum by TYPE III AUDIO.

May 4, 2023 · 22 min

EA Forum Weekly Summaries – Episode 23 (April 17-23, 2023)

Original article: https://forum.effectivealtruism.org/posts/m2Y6HheC2Q2GLQ3oS/summaries-of-top-forum-posts-17th-23rd-april-2023

This podcast has just passed the 6-month mark! Please give us your feedback and suggestions so we can continue to improve — the survey should take no more than 10 minutes, and we really appreciate your input!

This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.

Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).

Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.

May 2, 2023 · 14 min

"First clean water, now clean air" by Fin Moorhouse

The excellent report from Rethink Priorities was my main source for this. Many of the substantial points I make are taken from it, though errors are my own. It’s worth reading! The authors are Gavriel Kleinwaks, Alastair Fraser-Urquhart, Jam Kraprayoon, and Josh Morrison.

Clean water

In the mid-19th century, London had a sewage problem. It relied on a patchwork of a few hundred sewers, of brick and wood, and hundreds of thousands of cesspits. The Thames — Londoners’ main source of drinking water — was near-opaque with waste. Here is Michael Faraday in an 1855 letter to The Times:

"Near the bridges the feculence rolled up in clouds so dense that they were visible at the surface even in water of this kind […] The smell was very bad, and common to the whole of the water. It was the same as that which now comes up from the gully holes in the streets. The whole river was for the time a real sewer […] If we neglect this subject, we cannot expect to do so with impunity; nor ought we to be surprised if, ere many years are over, a season give us sad proof of the folly of our carelessness."

Original article: https://forum.effectivealtruism.org/posts/WLok4YuJ4kfFpDRTi/first-clean-water-now-clean-air

Narrated for the Effective Altruism Forum by TYPE III AUDIO.

May 2, 2023 · 33 min

[Week 2] "Learning from human preferences" (Blog Post) by Dario Amodei, Paul Christiano & Alex Ray

One step towards building safe AI systems is to remove the need for humans to write goal functions, since using a simple proxy for a complex goal, or getting the complex goal a bit wrong, can lead to undesirable and even dangerous behavior. In collaboration with DeepMind’s safety team, we’ve developed an algorithm which can infer what humans want by being told which of two proposed behaviors is better.

Original article: https://openai.com/research/learning-from-human-preferences

Authors: Dario Amodei, Paul Christiano, Alex Ray

This article is featured on the AGI Safety Fundamentals: Alignment course curriculum.

Narrated by TYPE III AUDIO on behalf of BlueDot Impact.
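The core idea is fitting a reward model so that it agrees with human judgments about which of two behaviors is better. Below is a minimal sketch of that idea under toy assumptions (a linear reward over feature vectors and a simulated human); the blog post's actual method trains a neural-network reward model on trajectory segments:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: each "behavior" is a feature vector, and the learned reward is
# linear in those features. (Assumption for illustration only.)
dim = 5
w = np.zeros(dim)              # parameters of the learned reward model
w_true = rng.normal(size=dim)  # stand-in for the human's hidden preferences

def reward(params, behavior):
    return behavior @ params

for step in range(2000):
    a, b = rng.normal(size=(2, dim))  # two candidate behaviors to compare
    label = 1.0 if reward(w_true, a) > reward(w_true, b) else 0.0  # "human" pick

    # Bradley-Terry model: P(a preferred over b) = sigmoid(r(a) - r(b)).
    p_a = 1.0 / (1.0 + np.exp(-(reward(w, a) - reward(w, b))))

    # Gradient ascent on the log-likelihood of the observed comparison.
    w += 0.05 * (label - p_a) * (a - b)

# The learned reward should point in roughly the same direction as the truth.
print(w @ w_true / (np.linalg.norm(w) * np.linalg.norm(w_true)))
```

Note that pairwise comparisons only pin down the reward up to scale, which is why the check at the end compares direction rather than magnitude.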

Apr 28, 2023 · 6 min

"New 80,000 Hours Podcast on high-impact climate philanthropy" by Johannes Ackva

This is a linkpost for a new 80,000 Hours episode focused on how to engage in climate from an effective altruist perspective.

The podcast lives here, including a selection of highlights as well as a full transcript and lots of additional links. Thanks to 80,000 Hours’ new feature rolled out on April 1st, you can even listen to it!

My Twitter thread is here.

Rob and I are having a pretty wide-ranging conversation; here are the things we cover which I find most interesting for different audiences:

Original article: https://forum.effectivealtruism.org/posts/A3ZLLanDZZt9sgGQ9/new-80-000-hours-podcast-on-high-impact-climate-philanthropy

Narrated for the Effective Altruism Forum by TYPE III AUDIO.

Apr 27, 2023 · 3 min

"Mental Health and the Alignment Problem: A Compilation of Resources (updated April 2023)" by Chris Scammell & DivineMango

This is a post about mental health and disposition in relation to the alignment problem. It compiles a number of resources that address how to maintain wellbeing and direction when confronted with existential risk.

Many people in this community have posted their emotional strategies for facing Doom after Eliezer Yudkowsky’s “Death With Dignity” generated so much conversation on the subject. This post intends to be more touchy-feely, dealing more directly with emotional landscapes than questions of timelines or probabilities of success.

The resources section would benefit from community additions. Please suggest any resources that you would like to see added to this post.

Please note that this document is not intended to replace professional medical or psychological help in any way. Many preexisting mental health conditions can be exacerbated by these conversations. If you are concerned that you may be experiencing a mental health crisis, please consult a professional.

Original article: https://www.lesswrong.com/posts/pLLeGA7aGaJpgCkof/mental-health-and-the-alignment-problem-a-compilation-of#

Narrated for LessWrong by TYPE III AUDIO.

Apr 27, 2023 · 38 min

EA Forum Weekly Summaries – Episode 22 (Mar. 27 - Apr. 16, 2023)

Original article: https://forum.effectivealtruism.org/posts/o3Gaoizs2So6SpgLH/summaries-of-top-forum-posts-27th-march-to-16th-april

This podcast has just passed the 6-month mark! Please give us your feedback and suggestions so we can continue to improve — the survey should take no more than 10 minutes, and we really appreciate your input!

This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.

Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).

Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.

Apr 22, 2023 · 21 min

"A freshman year during the AI midgame: my approach to the next year" by Buck

I recently spent some time reflecting on my career and my life, for a few reasons:

- It was my 29th birthday, an occasion which felt like a particularly natural time to think through what I wanted to accomplish over the course of the next year 🙂.
- It seems like AI progress is heating up.
- It felt like a good time to reflect on how Redwood has been going, because we’ve been having conversations with funders about getting more funding.

I wanted to have a better answer to these questions:

- What’s the default trajectory that I should plan for my career to follow? And what does this imply for what I should be doing right now?
- How much urgency should I feel in my life?
- How hard should I work?
- How much should I be trying to do the most valuable-seeming thing, vs engaging in more playful exploration and learning?

Original article: https://forum.effectivealtruism.org/posts/2DzLY6YP2z5zRDAGA/a-freshman-year-during-the-ai-midgame-my-approach-to-the

Narrated for the Effective Altruism Forum by TYPE III AUDIO.

Apr 19, 2023 · 10 min

"On AutoGPT" by Zvi

The primary talk of the AI world recently is about AI agents (whether or not it includes the question of whether we can’t help but notice we are all going to die).

The trigger for this was AutoGPT, now number one on GitHub, which allows you to turn GPT-4 (or GPT-3.5 for us clowns without proper access) into a prototype version of a self-directed agent.

We also have a paper out this week where a simple virtual world was created, populated by LLMs that were wrapped in code designed to make them simple agents, and then several days of activity were simulated, during which the AI inhabitants interacted, formed and executed plans, and it all seemed like the beginnings of a living and dynamic world. Game version hopefully coming soon.

How should we think about this? How worried should we be?

Original article: https://www.lesswrong.com/posts/566kBoPi76t8KAkoD/on-autogpt

Narrated for LessWrong by TYPE III AUDIO.

Apr 19, 2023 · 37 min

"Want to win the AGI race? Solve alignment." by Leopold Aschenbrenner

This is a linkpost for https://www.forourposterity.com/want-to-win-the-agi-race-solve-alignment/

Society really cares about safety. Practically speaking, the binding constraint on deploying your AGI could well be your ability to align your AGI. Solving (scalable) alignment might be worth lots of $$$ and key to beating China.

Look, I really don't want Xi Jinping Thought to rule the world. If China gets AGI first, the ensuing rapid AI-powered scientific and technological progress could well give it a decisive advantage (cf. potential for >30%/year economic growth with AGI). I think there's a very real specter of global authoritarianism here.

Or hey, maybe you just think AGI is cool. You want to go build amazing products and enable breakthrough science and solve the world’s problems.

So, race to AGI with reckless abandon then? At this point, people get into agonizing discussions about safety tradeoffs. And many people just mood affiliate their way to an answer: "accelerate, progress go brrrr," or "AI scary, slow it down."

I see this much more practically. And, practically, society cares about safety, a lot. Do you actually think that you’ll be able to and allowed to deploy an AI system that has, say, a 10% chance of destroying all of humanity?

Original article: https://forum.effectivealtruism.org/posts/Ackzs8Wbk7isDzs2n/want-to-win-the-agi-race-solve-alignment

Narrated for the Effective Altruism Forum by TYPE III AUDIO.

Apr 14, 2023 · 9 min

"GPTs are Predictors, not Imitators" by Eliezer Yudkowsky

(Related text posted to Twitter; this version is edited and has a more advanced final section.)

Imagine yourself in a box, trying to predict the next word — assign as much probability mass to the next token as possible — for all the text on the Internet.

Koan: Is this a task whose difficulty caps out as human intelligence, or at the intelligence level of the smartest human who wrote any Internet text? What factors make that task easier, or harder? (If you don't have an answer, maybe take a minute to generate one, or alternatively, try to predict what I'll say next; if you do have an answer, take a moment to review it inside your mind, or maybe say the words out loud.)

Original article: https://www.lesswrong.com/posts/nH4c3Q9t9F3nJ7y8W/gpts-are-predictors-not-imitators

Narrated for LessWrong by TYPE III AUDIO.

Apr 12, 2023 · 6 min

[Week 0] "Machine Learning for Humans, Part 2.1: Supervised Learning" by Vishal Maini

The two tasks of supervised learning: regression and classification. Linear regression, loss functions, and gradient descent.

How much money will we make by spending more dollars on digital advertising? Will this loan applicant pay back the loan or not? What’s going to happen to the stock market tomorrow?

Original article: https://medium.com/machine-learning-for-humans/supervised-learning-740383a2feab

Author: Vishal Maini

This article is featured on the AGI Safety Fundamentals: Alignment course curriculum.

Narrated by TYPE III AUDIO on behalf of BlueDot Impact.
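Since this entry names the core toolkit (linear regression, a loss function, gradient descent), here is a minimal self-contained sketch of those three pieces working together. The toy advertising data is invented for illustration and is not from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented toy data: predict revenue from ad spend with a noisy linear rule.
x = rng.uniform(0, 10, size=100)           # e.g. dollars spent on ads
y = 3.0 * x + 5.0 + rng.normal(size=100)   # "true" relationship plus noise

w, b, lr = 0.0, 0.0, 0.01                  # parameters and learning rate
for _ in range(5000):
    err = (w * x + b) - y                  # residuals of the current fit
    # Gradient descent on mean squared error: d/dw and d/db of mean(err^2).
    w -= lr * np.mean(2 * err * x)
    b -= lr * np.mean(2 * err)

print(w, b)  # approaches the true values, roughly 3.0 and 5.0
```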

Apr 8, 2023 · 22 min

Interpretability in the wild and other papers

This episode covers 3 abstracts:

- Active reward learning from multiple teachers - Peter Barnett et al.
- Conditioning Predictive Models: Risks and Strategies - Hubinger et al.
- Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 small - Kevin Wang et al.

Apr 6, 2023 · 5 min

"Discussion with Nate Soares on a key alignment difficulty" by Holden Karnofsky

In late 2022, Nate Soares gave some feedback on my Cold Takes series on AI risk (shared as drafts at that point), stating that I hadn't discussed what he sees as one of the key difficulties of AI alignment.

I wanted to understand the difficulty he was pointing to, so the two of us had an extended Slack exchange, and I then wrote up a summary of the exchange that we iterated on until we were both reasonably happy with its characterization of the difficulty and our disagreement. My short summary is:

- Nate thinks there are deep reasons that training an AI to do needle-moving scientific research (including alignment) would be dangerous. The overwhelmingly likely result of such a training attempt (by default, i.e., in the absence of specific countermeasures that there are currently few ideas for) would be the AI taking on a dangerous degree of convergent instrumental subgoals while not internalizing important safety/corrigibility properties enough.
- I think this is possible, but much less likely than Nate thinks under at least some imaginable training processes.

Original article: https://www.lesswrong.com/posts/iy2o4nQj9DnQD7Yhj/discussion-with-nate-soares-on-a-key-alignment-difficulty

Narrated for LessWrong by TYPE III AUDIO.

Apr 5, 2023 · 39 min

"A stylized dialogue on John Wentworth's claims about markets and optimization" by Nate Soares

(This is a stylized version of a real conversation, where the first part happened as part of a public debate between John Wentworth and Eliezer Yudkowsky, and the second part happened between John and me over the following morning. The below is combined, stylized, and written in my own voice throughout. The specific concrete examples in John's part of the dialog were produced by me. It's over a year old. Sorry for the lag.)

(As to whether John agrees with this dialog, he said "there was not any point at which I thought my views were importantly misrepresented" when I asked him for comment.)

Original article: https://www.lesswrong.com/posts/fJBTRa7m7KnCDdzG5/a-stylized-dialogue-on-john-wentworth-s-claims-about-markets

Narrated for LessWrong by TYPE III AUDIO.

Apr 5, 2023 · 16 min

"Nobody’s on the ball on AGI alignment" by Leopold Aschenbrenner

Far fewer people are working on it than you might think, and even the alignment research that is happening is very much not on track. (But it’s a solvable problem, if we get our act together.)

This is a linkpost for https://www.forourposterity.com/nobodys-on-the-ball-on-agi-alignment/

Original article: https://forum.effectivealtruism.org/posts/5LNxeWFdoynvgZeik/nobody-s-on-the-ball-on-agi-alignment

Narrated for the Effective Altruism Forum by TYPE III AUDIO.

Apr 5, 2023 · 17 min

"Deep Deceptiveness" by Nate Soares

This post is an attempt to gesture at a class of AI notkilleveryoneism (alignment) problem that seems to me to go largely unrecognized. E.g., it isn’t discussed (or at least I don't recognize it) in the recent plans written up by OpenAI (1, 2), by DeepMind’s alignment team, or by Anthropic, and I know of no other acknowledgment of this issue by major labs.

You could think of this as a fragment of my answer to “Where do plans like OpenAI’s ‘Our Approach to Alignment Research’ fail?”, as discussed in Rob and Eliezer’s challenge for AGI organizations and readers. Note that it would only be a fragment of the reply; there's a lot more to say about why AI alignment is a particularly tricky task to task an AI with. (Some of which Eliezer gestures at in a follow-up to his interview on Bankless.)

Original article: https://www.lesswrong.com/posts/XWwvwytieLtEWaFJX/deep-deceptiveness

Narrated for LessWrong by TYPE III AUDIO.

Apr 5, 2023 · 30 min

[Week 1] "Visualizing the deep learning revolution" by Richard Ngo

The field of AI has undergone a revolution over the last decade, driven by the success of deep learning techniques. This post aims to convey three ideas using a series of illustrative examples:

- There have been huge jumps in the capabilities of AIs over the last decade, to the point where it’s becoming hard to specify tasks that AIs can’t do.
- This progress has been primarily driven by scaling up a handful of relatively simple algorithms (rather than by developing a more principled or scientific understanding of deep learning).
- Very few people predicted that progress would be anywhere near this fast; but many of those who did also predict that we might face existential risk from AGI in the coming decades.

I’ll focus on four domains: vision, games, language-based tasks, and science. The first two have more limited real-world applications, but provide particularly graphic and intuitive examples of the pace of progress.

Original article: https://medium.com/@richardcngo/visualizing-the-deep-learning-revolution-722098eb9c5

Author: Richard Ngo

This article is featured on the AGI Safety Fundamentals: Alignment course curriculum.

Narrated by TYPE III AUDIO on behalf of BlueDot Impact.

Apr 3, 2023 · 41 min

[Week 1] "Intelligence Explosion: Evidence and Import" (Sections 3 to 4.1) by Luke Muehlhauser & Anna Salamon

It seems unlikely that humans are near the ceiling of possible intelligences, rather than simply being the first such intelligence that happened to evolve. Computers far outperform humans in many narrow niches (e.g. arithmetic, chess, memory size), and there is reason to believe that similar large improvements over human performance are possible for general reasoning, technology design, and other tasks of interest. As occasional AI critic Jack Schwartz (1987) wrote:

"If artificial intelligences can be created at all, there is little reason to believe that initial successes could not lead swiftly to the construction of artificial superintelligences able to explore significant mathematical, scientific, or engineering alternatives at a rate far exceeding human ability, or to generate plans and take action on them with equally overwhelming speed. Since man’s near-monopoly of all higher forms of intelligence has been one of the most basic facts of human existence throughout the past history of this planet, such developments would clearly create a new economics, a new sociology, and a new history."

Why might AI “lead swiftly” to machine superintelligence? Below we consider some reasons.

Original article: https://drive.google.com/file/d/1QxMuScnYvyq-XmxYeqBRHKz7cZoOosHr/view

Authors: Luke Muehlhauser, Anna Salamon

This article is featured on the AGI Safety Fundamentals: Alignment course curriculum.

Narrated by TYPE III AUDIO on behalf of BlueDot Impact.

Apr 3, 2023 · 18 min

[Week 1] "On the opportunities and risks of foundation models" by Bommasani et al.

AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character.

This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles (e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications (e.g., law, healthcare, education) and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations).

Though foundation models are based on standard deep learning and transfer learning, their scale results in new emergent capabilities, and their effectiveness across so many tasks incentivizes homogenization. Homogenization provides powerful leverage but demands caution, as the defects of the foundation model are inherited by all the adapted models downstream. Despite the impending widespread deployment of foundation models, we currently lack a clear understanding of how they work, when they fail, and what they are even capable of due to their emergent properties. To tackle these questions, we believe much of the critical research on foundation models will require deep interdisciplinary collaboration commensurate with their fundamentally sociotechnical nature.

Original article: https://arxiv.org/abs/2108.07258

Authors: Bommasani et al.

This article is featured on the AGI Safety Fundamentals: Alignment course curriculum.

Narrated by TYPE III AUDIO on behalf of BlueDot Impact.

Apr 3, 2023 · 15 min

[Week 2] "Specification gaming: the flip side of AI ingenuity" by Victoria Krakovna et al.

Specification gaming is a behaviour that satisfies the literal specification of an objective without achieving the intended outcome. We have all had experiences with specification gaming, even if not by this name. Readers may have heard the myth of King Midas and the golden touch, in which the king asks that anything he touches be turned to gold — but soon finds that even food and drink turn to metal in his hands. In the real world, when rewarded for doing well on a homework assignment, a student might copy another student to get the right answers, rather than learning the material — and thus exploit a loophole in the task specification.

Original article: https://www.deepmind.com/blog/specification-gaming-the-flip-side-of-ai-ingenuity

Authors: Victoria Krakovna, Jonathan Uesato, Vladimir Mikulik, Matthew Rahtz, Tom Everitt, Ramana Kumar, Zac Kenton, Jan Leike, Shane Legg

This article is featured on the AGI Safety Fundamentals: Alignment course curriculum.

Narrated by TYPE III AUDIO on behalf of BlueDot Impact.

Mar 31, 2023 · 13 min

[Week 2] "Superintelligence: Instrumental convergence" by Nick Bostrom

According to the orthogonality thesis, intelligent agents may have an enormous range of possible final goals. Nevertheless, according to what we may term the “instrumental convergence” thesis, there are some instrumental goals likely to be pursued by almost any intelligent agent, because there are some objectives that are useful intermediaries to the achievement of almost any final goal. We can formulate this thesis as follows:

The instrumental convergence thesis: "Several instrumental values can be identified which are convergent in the sense that their attainment would increase the chances of the agent’s goal being realized for a wide range of final goals and a wide range of situations, implying that these instrumental values are likely to be pursued by a broad spectrum of situated intelligent agents."

Original article: https://drive.google.com/file/d/1KewDov1taegTzrqJ4uurmJ2CJ0Y72EU3/view

Author: Nick Bostrom

This article is featured on the AGI Safety Fundamentals: Alignment course curriculum.

Narrated by TYPE III AUDIO on behalf of BlueDot Impact.

Mar 31, 2023 · 17 min

[Week 2] "The easy goal inference problem is still hard" by Paul Christiano

One approach to the AI control problem goes like this:

1. Observe what the user of the system says and does.
2. Infer the user’s preferences.
3. Try to make the world better according to the user’s preferences, perhaps while working alongside the user and asking clarifying questions.

This approach has the major advantage that we can begin empirical work today — we can actually build systems which observe user behavior, try to figure out what the user wants, and then help with that. There are many applications that people care about already, and we can set to work on making rich toy models.

It seems great to develop these capabilities in parallel with other AI progress, and to address whatever difficulties actually arise, as they arise. That is, in each domain where AI can act effectively, we’d like to ensure that AI can also act effectively in the service of goals inferred from users (and that this inference is good enough to support foreseeable applications).

This approach gives us a nice, concrete model of each difficulty we are trying to address. It also provides a relatively clear indicator of whether our ability to control AI lags behind our ability to build it. And by being technically interesting and economically meaningful now, it can help actually integrate AI control with AI practice.

Overall I think that this is a particularly promising angle on the AI safety problem.

Original article: https://www.alignmentforum.org/posts/h9DesGT3WT9u2k7Hr/the-easy-goal-inference-problem-is-still-hard

Author: Paul Christiano

This article is featured on the AGI Safety Fundamentals: Alignment course curriculum.

Narrated by TYPE III AUDIO on behalf of BlueDot Impact.

Mar 31, 2023 · 7 min

"Seeing more whole" by Joe Carlsmith

In my last essay, I looked at two stories (brute preference for systematic-ness, and money-pumps) about why ethical anti-realists should still be interested in ethics – two stories about why the “philosophy game” is worth playing, even if there are no objective normative truths, and you’re free to do whatever you want. I think some versions of these stories might well have a role to play; but I find that on their own, they don’t fully capture what feels alive to me about ethics. Here I try to say something that gets closer.

Original article: https://joecarlsmith.com/2023/02/17/seeing-more-whole

Narrated by Joe Carlsmith and included on the Effective Altruism Forum by TYPE III AUDIO.

Mar 30, 2023 · 52 min

EA Forum Weekly Summaries – Episode 21 (March 13-19, 2023)

Original article: https://forum.effectivealtruism.org/posts/idpbfmPjHFCvzj46L/ea-and-lw-forum-weekly-summary-13th-19th-march-2023

This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.

Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).

Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.

Mar 30, 2023 · 16 min

"What is effective altruism?"

Effective altruism is a project that aims to find the best ways to help others, and put them into practice.

It’s both a research field, which aims to identify the world’s most pressing problems and the best solutions to them, and a practical community that aims to use those findings to do good.

This project matters because, while many attempts to do good fail, some are enormously effective. For instance, some charities help 100 or even 1,000 times as many people as others, when given the same amount of resources.

This means that by thinking carefully about the best ways to help, we can do far more to tackle the world’s biggest problems.

Original article: https://www.effectivealtruism.org/articles/introduction-to-effective-altruism

Narrated for effectivealtruism.org by TYPE III AUDIO.

Mar 28, 2023 · 48 min