The Tech Policy Press Podcast

309 episodes — Page 4 of 7

A Conversation with Mark Surman, President of Mozilla

Justin Hendrix speaks with Mark Surman, President of Mozilla, about Mozilla’s work promoting open source AI, the importance of competition in the tech sector, and the regulatory challenges facing the industry. Surman discusses Mozilla's initiatives in AI investment and development, and reflects on what the recent ruling in the Google search case might mean for the future of Mozilla and the tech economy. And, Surman shares his hopes for the future: that we can arrive at a tech economy that is not purely extractive, but rather one that respects people’s values and dignity.

Aug 25, 2024 · 24 min

Design Codes and the Courts

On Friday, August 16, the United States Ninth Circuit Court of Appeals issued a ruling in NetChoice v. Bonta, partially upholding and partially vacating a preliminary injunction against California's Age-Appropriate Design Code Act. The court affirmed that certain provisions of the law are likely to violate the First Amendment by compelling online businesses to assess and mitigate potential harms to children, but it vacated the broader injunction, remanding the case to the district court for further consideration of other parts of the statute, including restrictions on the collection and use of children's data. In this episode, Justin Hendrix recounts the basics of the Ninth Circuit ruling. And in a second segment that was recorded just days before Friday's ruling, Tech Policy Press fellow Dean Jackson is joined by Tech Justice Law Project executive director Meetali Jain and USC Marshall School Neely Center managing director Ravi Iyer for a discussion on key questions that were before the Ninth Circuit and their implications for future efforts at tech regulation.

Aug 18, 2024 · 27 min

New Mexico Attorney General Raúl Torrez on His Lawsuit Against Meta

Raúl Torrez was sworn in as New Mexico’s 32nd Attorney General in January 2023. Last December, Attorney General Torrez filed a lawsuit against Meta for allegedly failing to protect children from sexual abuse, online solicitation, and human trafficking. The outcome of this case could have broader implications for how online platforms are regulated and held accountable for user safety in the future, including through litigation. Justin Hendrix spoke to Attorney General Torrez in advance of a panel discussion he participated in alongside the Attorney General of Virginia at the 2024 Coalition to End Exploitation Global Summit on Wednesday, August 7, 2024 in Washington DC.

Aug 11, 2024 · 29 min

Using AI to Engage People about Conspiracy Beliefs

In May, Justin Hendrix moderated a discussion with David Rand, who is a professor of Management Science and Brain and Cognitive Sciences at MIT, the director of the Applied Cooperation Initiative, and an affiliate of the MIT Institute of Data, Systems, and Society and the Initiative on the Digital Economy. David's work cuts across fields such as cognitive science, behavioral economics, and social psychology, and with his collaborators he's done a substantial amount of work on the psychological underpinnings of belief in misinformation and conspiracy theories. David is one of the authors, with Thomas Costello and Gordon Pennycook, of a paper published this spring titled "Durably reducing conspiracy beliefs through dialogues with AI." The paper considers the potential for people to enter into dialogues with LLMs and whether such exchanges can change the minds of conspiracy theory believers. According to the study, dialogues with GPT-4 Turbo reduced belief in various conspiracy theories, with effects lasting many months. Even more intriguingly, these dialogues seemed to have a spillover effect, reducing belief in unrelated conspiracies and influencing conspiracy-related behaviors. While these findings are certainly promising, the experiment raises a variety of questions. Some are specific to the premise of the experiment: how compelling and tailored does the counter-evidence need to be, and how well do the LLMs perform? What happens if and when they make mistakes or hallucinate? And some of the questions are bigger picture: are there ethical implications in using AI in this manner? Can these results be replicated and scaled in real-world applications, such as on social media platforms, and is that a good idea? Is an internet where various AI agents and systems are poking and prodding us and trying to shape or change our beliefs a good thing? This episode contains an edited recording of the discussion, which was hosted at Betaworks.

Aug 4, 2024 · 35 min

Data Workers, In Their Own Words

The Distributed AI Research Institute, or DAIR—which seeks to conduct community-rooted AI research that is independent from the technology industry—has launched a new project called the Data Workers' Inquiry to invite data workers to create their own research and recount their experiences. The project is supported by DAIR, the Weizenbaum Institute, and TU Berlin. For this episode, journalist and audio producer Rebecca Rand parsed some of the ideas and experiences discussed at a virtual launch event for the inquiry that took place earlier this month. 

Jul 28, 2024 · 29 min

Silicon Valley Leaders Cast Their Lot with Donald Trump

In the past week, multiple Silicon Valley billionaires announced endorsements of former President and 2024 Republican nominee Donald Trump. To dig a bit deeper into their motivations to support Trump and his new running mate, Ohio Senator and former venture capitalist J.D. Vance, Justin Hendrix invited on three sharp observers of politics and technology, including:

Henry Farrell, a professor of international affairs and democracy at Johns Hopkins University and the recent co-author, with Abraham Newman, of Underground Empire: How America Weaponized the World Economy;
Elizabeth Spiers, a writer and digital strategist, a contributing writer for the New York Times, and co-host of the Slate Money podcast; and
Dave Karpf, an associate professor at George Washington University in the School of Media and Public Affairs.

Jul 21, 2024 · 45 min

The Future of Privacy in the Age of AI

It goes without saying that privacy and the creation of laws and regulations around it are fundamental to determining how we will live and work with technology, and whether technology operates in service of democratic societies or only in service of governments and corporations. A couple of weeks ago, Justin Hendrix had a chance to speak with two leaders from the Future of Privacy Forum (FPF): Jules Polonetsky, its CEO, and Anne J. Flanagan, the head of its new Center on AI. They discussed the recent US Supreme Court decision to overturn the Chevron doctrine and its implications for privacy legislation in the United States, the fierce battle over privacy laws in the US, and potential conflicts between Europe's General Data Protection Regulation (GDPR) and the new AI Act. And, they talked about how the 15-year-old Future of Privacy Forum envisions its role in the age of artificial intelligence.

Jul 21, 2024 · 44 min

Data Rights in the Age of AI

In this episode, David Carroll, an associate professor of media design in the MFA Design and Technology graduate program at the School of Art, Media and Technology at Parsons School of Design at The New School, speaks to Ravi Naik, legal director at AWO, a consultancy with offices in London, Brussels, and Paris that works on a range of data protection and tech policy issues. Their discussion delves into the evolution of data protection from the Cambridge Analytica scandal to current questions provoked by generative AI, with a focus on a GDPR complaint against OpenAI brought by Noyb, the non-profit founded by Austrian activist Max Schrems.

Jul 14, 2024 · 42 min

What Comes After Murthy v Missouri

On June 26, the US Supreme Court issued a 6-3 ruling in Murthy v Missouri, a case that considered whether the Biden administration violated the First Amendment in its efforts to address COVID-19 mis- and disinformation on social media. Tech Policy Press fellow Dean Jackson, who studied the case closely, discussed the outcome and what it means for the future with three experts:

Olga Belogolova, director of the Emerging Technologies Initiative at the Johns Hopkins School of Advanced International Studies (SAIS);
Mayze Teitler, a legal fellow at the Knight First Amendment Institute; and
Nina Jankowicz, co-founder and CEO of the American Sunlight Project.

Jul 14, 2024 · 57 min

Considering the Ethics of AI Assistants

In April, Google DeepMind published a paper that boasts 57 authors, including experts from a range of disciplines in different parts of Google, including DeepMind, Jigsaw, and Google Research, as well as researchers from academic institutions such as Oxford, University College London, Delft University of Technology, the University of Edinburgh, and a think tank at Georgetown, the Center for Security and Emerging Technology. The paper speculates about the ethical and societal risks posed by the types of AI assistants Google and other tech firms want to build, which the authors say are “likely to have a profound impact on our individual and collective lives.” Justin Hendrix had the chance to speak to two of the paper's authors about some of these issues:

Shannon Vallor, a professor of AI and data ethics at the University of Edinburgh and director of the Centre for Technomoral Futures in the Edinburgh Futures Institute; and
Iason Gabriel, a research scientist on the ethics research team at Google DeepMind.

Jul 7, 2024 · 53 min

Big Tech and the News

News and journalism organizations and dominant tech companies are in a years-long battle over content, clicks and revenue, and the tech companies are winning. What are policy options that encourage both the sustainability and quality of news content on popular online platforms? In this episode, Rebecca Rand explores perspectives on the subject, drawing on a conversation hosted by Justin Hendrix with experts Anya Schiffrin and Cory Doctorow at the Knight Foundation's INFORMED conference earlier this year.

Jun 30, 2024 · 41 min

Internet Governance Is At A Crossroads

In this episode, we explore a topic that sits at the heart of global digital policy: the contrasting visions of internet governance championed by the United States and its Western allies versus those promoted by China and nations in its orbit. This debate is playing out across various international venues and has profound implications for the future of digital rights, privacy, and the open internet. Justin Hendrix is joined by experts at the Atlantic Council who study these issues from a variety of angles and across multiple geographies, including:

Rose Jackson, the director of the Democracy + Tech Initiative within the Atlantic Council Technology Programs;
Konstantinos Komaitis, a nonresident fellow with the Democracy + Tech Initiative of the Atlantic Council's Digital Forensic Research Lab;
Kenton Thibaut, a senior resident China fellow at the Atlantic Council's Digital Forensic Research Lab; and
Iria Puyosa, a senior research fellow at the Atlantic Council’s Digital Forensic Research Lab.

Jun 23, 2024 · 50 min

Understanding the Digital Silk Road

In October 2023, during the third Belt and Road Forum in Beijing, China's leader Xi Jinping signaled a shift in focus from more grandiose physical infrastructure projects to 'small yet smart' initiatives. This shift underscores the need to understand China's ambitions to reshape global digital governance, moving away from an open and free internet towards a model rooted in government control and mass surveillance. The advocacy group ARTICLE 19 documents this shift in a recent report titled "The Digital Silk Road: China and the Rise of Digital Repression in the Indo-Pacific," examining China's influence on digital infrastructure and governance in Cambodia, Malaysia, Nepal, and Thailand. As the Indo-Pacific remains strategically significant for China in deploying next-generation technologies, the report argues that assessing China’s regional partnerships and their implications for digital repression is crucial for understanding its broader ambitions to reshape global digital norms. To discuss these issues in more depth, Justin Hendrix is joined by:

Michael Caster, Asia Digital Program Manager at ARTICLE 19; and
Catherine Tai, the deputy director for the Asia and the Pacific team at the Center for International Private Enterprise (CIPE).

Jun 23, 2024 · 49 min

How China Regulates Tech

Angela Zhang is the author of High Wire: How China Regulates Big Tech and Governs Its Economy, published this year by Oxford University Press. With a career in the practice of law and in teaching it, Zhang has held roles at King’s College London and at New York University School of Law, and most recently served as Director of the Philip K. H. Wong Center for Chinese Law at the University of Hong Kong. She will join the University of Southern California as a Professor of Law in fall 2024.

Jun 23, 2024 · 39 min

The Demise of CrowdTangle and What It Means for Independent Technology Research

A topic we return to often on this podcast is the dire need for independent technology researchers to have access to platform data. Without it, we cannot understand the extent of the harms and effects of social media on people and on society, nor the limits of those harms. This makes it difficult to respond in acute moments such as elections, and to understand issues such as the relationship between tech platforms and social cohesion, or mental health, or any number of the other issues policymakers care about. In this episode, Justin Hendrix speaks with two people on the front lines of the fight to secure access to data, including advocating for Meta to do better in light of the impending deprecation of CrowdTangle, a tool used by researchers to study Meta's products, including Facebook and Instagram. They are:

Brandi Geurkink, the executive director of the Coalition for Independent Technology Research; and
Claire Pershan, EU advocacy lead at the Mozilla Foundation.

Jun 21, 2024 · 27 min

Finding the Humanity in an Automated World

Madhumita Murgia, AI editor at the Financial Times, is the author of a new book called Code Dependent: Living in the Shadow of AI. The book combines reporting and research to provide a look at the role that AI and automated decision-making are playing in reshaping our lives, our politics, and our economies across the world.

Jun 18, 2024 · 34 min

A Conversation with White House Office of Science and Technology Policy Director Arati Prabhakar

Dr. Arati Prabhakar is the Director of the White House Office of Science and Technology Policy and Science and Technology Advisor to President Joe Biden. This week, she hosted an event in Washington DC called "AI Aspirations: R&D for Public Missions." Speakers included executive branch officials and agency leaders, from the Secretary of Education to the Food and Drug Administration Commissioner, as well as lawmakers such as Senators Amy Klobuchar and Mark Warner, and Representative Don Beyer. Prior to the event, Justin Hendrix spoke to Dr. Prabhakar about OSTP's priorities.

Jun 16, 2024 · 35 min

AI and Epistemic Risk: A Coming Crisis?

What are the risks to democracy as AI is incorporated more and more into the systems and platforms we use to find and share information and engage in communication? In this episode, Justin Hendrix speaks with Elise Silva, a postdoctoral associate at the University of Pittsburgh Cyber Institute for Law, Policy, and Security, and John Wihbey, an associate professor at Northeastern University in the College of Arts, Media, and Design. Silva is the author of a recent piece in Tech Policy Press titled "AI-Powered Search and the Rise of Google’s 'Concierge Wikipedia.'” Wihbey is the author of a paper published last month titled "AI and Epistemic Risk for Democracy: A Coming Crisis of Public Knowledge?"

Jun 10, 2024 · 45 min

What's Next for Tech Policy in India After the Elections

What role did technology play in India's elections, and what impact will the outcome have on tech policy in the country? Joining Justin Hendrix are three experts: Amber Sinha and Vandinika Shukla, both fellows at Tech Policy Press, and Prateek Waghre, the executive director at the Internet Freedom Foundation. Plus, Tech Policy Press program manager Prithvi Iyer sums up the election result.

Jun 9, 2024 · 47 min

How Are Political Campaigners in the US Using Generative AI?

The guests in this episode are authors of a new study titled Political Machines: Understanding the Role of AI in the US 2024 Elections and Beyond. The study is based on interviews with a variety of individuals who are currently grappling with how generative AI tools and systems will change the way they work. In a series of field interviews, the authors spoke with three vendors of political generative AI tools, a political candidate, a legal expert, a technology expert, an extremism expert, a digital organizer, a trust and safety industry professional, four Republican campaign consultants, and eight Democratic campaign consultants. Joining Justin Hendrix to discuss the results are:

Dean Jackson, the principal at Public Circle LLC and a reporting fellow with Tech Policy Press;
Zelly Martin, a PhD candidate at the University of Texas at Austin and a senior research fellow at the Propaganda Research Lab at the Center for Media Engagement; and
Inga Trauthig, head of research at the Propaganda Research Lab at the Center for Media Engagement at the University of Texas at Austin.

Jun 6, 2024 · 48 min

The Role of Shareholder Activism in Tech Accountability

This episode focuses on the role of shareholder activism in pursuing transparency and accountability from tech firms. In a week when board resolutions related to each company's development and deployment of artificial intelligence are up for a vote at Meta and Alphabet, Justin Hendrix spoke to five individuals working at the intersection of sustainable investing and tech accountability:

Michael Connor, Executive Director of Open MIC;
Jessica Dheere, Advocacy Director at Open MIC;
Natasha Lamb, Chief Investment Officer at Arjuna Capital;
Jonas Kron, Chief Advocacy Officer at Trillium Asset Management; and
Christina O'Connell, Senior Manager for Shareholder Engagement and Investments at Ekō.

Jun 2, 2024 · 41 min

Shadow Report on AI Addresses What the US Senate Missed

As we documented in Tech Policy Press, when the US Senate AI working group released its roadmap on policy on May 17th, many outside organizations were underwhelmed at best, and some were fiercely critical of the closed door process that produced it. In the days after the report was announced, a group of nonprofit and academic organizations put out what they call a "shadow report" to the US Senate AI policy roadmap. The shadow report is intended as a complement or counterpoint to the Senate working group's product. It collects a bibliography of research and proposals from civil society and academia and addresses several issues the Senators largely passed over. To learn more, Justin Hendrix spoke to some of the report's authors, including:

Sarah West, co-executive director of the AI Now Institute;
Nasser Eledroos, policy lead on technology at Color of Change;
Paramita Shah, executive director of Just Futures Law; and
Cynthia Conti-Cook, director of research and policy at the Surveillance Resistance Lab.

May 26, 2024 · 38 min

A Perspective on Meta's Moderation of Palestinian Voices

A conversation with Marwa Fatafta, who serves as policy and advocacy director for the nonprofit Access Now, which has worked on digital civil rights, connectivity, and censorship issues for the past 15 years. Along with other groups, Access Now has engaged Meta in recent months over what it says is the “systematic censorship of Palestinian voices” amidst the Israel-Hamas war in Gaza.

May 26, 2024 · 40 min

Prioritizing Civil Rights in US AI Policy: Claudia Ruiz and Alejandra Montoya-Boyer

On Wednesday, May 15, 2024, a bipartisan US Senate working group led by Majority Leader Sen. Chuck Schumer (D-NY) released a report titled "Driving U.S. Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States Senate." Just hours after the report was released, Justin Hendrix spoke to two civil rights advocates who are working on AI policy about the good and the bad of the Senate report, and more broadly about how to set AI policy priorities that ensure a brighter future for all:

Alejandra Montoya-Boyer, Senior Director at the Center for Civil Rights & Tech at the Leadership Conference on Civil and Human Rights; and
Claudia Ruiz, Senior Civil Rights Policy Analyst at UnidosUS.

May 19, 2024 · 35 min

AI: Past, Present, and Future with Chris Stokel-Walker

One tech journalist whose byline always draws me in is Chris Stokel-Walker. He writes for multiple publications including The New York Times, The Washington Post, The Economist, Wired, Fast Company, and New Scientist. Now, he’s got a new book out: How AI Ate the World: A Brief History of Artificial Intelligence - And Its Long Future. Last week, I had the chance to speak with him about it, and about how he covers technology and tech policy generally.

May 19, 2024 · 36 min

What We're Talking About When We Talk About Rural AI

Last October, Dr. Jasmine McNealy, an associate professor at the University of Florida, a Senior Fellow in Tech Policy with the Mozilla Foundation, and a Faculty Associate at the Berkman Klein Center for Internet & Society at Harvard University, wrote in Tech Policy Press about the need for a policy agenda for "Rural AI." “Rural communities matter,” she wrote. “And that means they should matter when it comes to the development of policies on artificial intelligence.” The piece was a preview of sorts for a two-day workshop Dr. McNealy organized at the University of Florida in Gainesville that touched on topics ranging from connectivity to bias and discrimination in algorithmic systems to the connection between AI and natural resources. Justin Hendrix attended the workshop, and recently he checked in with Dr. McNealy and three of the other attendees he met there:

Michaela Henley, program director and curriculum writer at Black Tech Futures and a senior research fellow representing Black Tech Futures at the Siegel Family Endowment;
Dr. Dominique Harrison, founding principal of Equity Innovation Ventures; and
Dr. Theodora Dryer, who is director of the Water Justice and Technology Studio, founder of the Critical Carbon Computing Collective, and teaches on technology and environmental justice at New York University.

May 12, 2024 · 42 min

A Hippocratic Oath for AI? A Conversation with Chinmayi Sharma

The Hippocratic oath, named for a Greek physician who lived roughly 2,500 years ago and whom some call the father of modern medicine, is one of the earliest expressions of professional ethics. It is a symbol of a profession that has built in a number of protections for patient interests, with ethical frameworks and requirements that seek to ensure they are maintained. Today’s guest is Chinmayi Sharma, an Associate Professor at Fordham Law School. Sharma thinks there should be a similar professional ethics framework in place for the developers of AI systems, and she’s written a substantial paper on the 'why' and the 'how' of her proposal.

May 11, 2024 · 46 min

Don't Hype Disinfo, Say Disinfo Experts

One topic we come back to again and again on this podcast is disinformation. In many episodes, we’ve discussed various phenomena related to this ambiguous term, and we’ve tried to use science to guide the way. But the guests in this episode suggest that in the broader political discourse, the term is more than overused. Often, they say, lawmakers and other elites who employ it cross the line into hyping the effects of disinformation, which they say only helps propagandists and diminishes trust in society. To learn more, Justin Hendrix spoke with Gavin Wilde, Thomas Rid, and Olga Belogolova, who with Lee Foster are the authors of an essay in the publication Foreign Affairs titled "Don’t Hype the Disinformation: Downplaying the Risk Helps Foreign Propagandists, But So Does Exaggerating It."

May 5, 2024 · 43 min

Resisting AI and the Consolidation of Power

In an introduction to a special issue of the journal First Monday on topics related to AI and power, Jenna Burrell and Jacob Metcalf argue that "what can and cannot be said inside of mainstream computer science publications appears to be constrained by the power, wealth, and ideology of a small cohort of industrialists. The result is that shaping discourse about the AI industry is itself a form of power that cannot be named inside of computer science." The papers in the journal go on to interrogate the epistemic culture of AI safety, the promise of utopia through artificial general intelligence, how to debunk robot rights, and more. To learn more about some of the ideas in the special issue, Justin Hendrix spoke to Burrell, Metcalf, and two of the other authors of papers included in it: Shazeda Ahmed and Émile P. Torres.

May 4, 2024 · 53 min

What's Next for TikTok, and US Tech Policy

Last week, President Joe Biden signed into law a measure that would force the Chinese firm ByteDance to divest its ownership of TikTok, or risk the app being banned in the US. The measure also included restrictions on the sale of personal data to foreign entities. What are the implications of these moves for US and global tech policy going forward? What will the inevitable legal challenges look like? To learn more, Justin Hendrix spoke with Anupam Chander, law professor at Georgetown and a visiting scholar at the Institute for Rebooting Social Media at Harvard University; Rose Jackson, the director of the Democracy and Tech Initiative at the Atlantic Council; and Justin Sherman, CEO of Global Cyber Strategies and an adjunct professor at Duke University.

Apr 28, 2024 · 49 min

Securing Privacy Rights to Advance Civil Rights

This week, the House Energy and Commerce Subcommittee on Innovation, Data, and Commerce held a hearing: “Legislative Solutions to Protect Kids Online and Ensure Americans’ Data Privacy Rights.” Between the Kids Online Safety Act (KOSA) and the American Privacy Rights Act (APRA), both of which have bipartisan and bicameral support, Congress may be closer to acting on these issues than it has been in recent memory. One of the witnesses at the hearing was David Brody, who is managing attorney of the Digital Justice Initiative at the Lawyers' Committee for Civil Rights Under Law. Justin Hendrix caught up with Brody the day after the hearing to talk about the challenges of advancing the American Privacy Rights Act, and why he connects fundamental data privacy rights to so many of the other issues the Lawyers' Committee cares about, including voting rights and countering disinformation that targets communities of color.

Apr 21, 2024 · 27 min

The Societal Impacts of Foundation Models, and Access to Data for Researchers

This episode features two conversations. Both relate to efforts to better understand the impact of technology on society. In the first, we’ll hear from Sayash Kapoor, a PhD candidate at the Department of Computer Science and the Center for Information Technology Policy at Princeton University, and Rishi Bommasani, the society lead at the Stanford Center for Research on Foundation Models. They are two of the authors of a recent paper titled On the Societal Impact of Open Foundation Models. And in the second, we’ll hear from Politico Chief Technology Correspondent Mark Scott about the US-EU Trade and Technology Council (TTC) meeting, and what he’s learned about the question of access to social media platform data by interviewing over 50 stakeholders, including regulators, researchers, and platform executives.

Apr 14, 2024 · 57 min

Elon Musk's X Loses in Court: Why It Matters for Independent Technology Research

Last week, a federal judge granted a motion to dismiss and strike a lawsuit brought by X Corp, formerly known as Twitter, against a nonprofit research outfit called the Center for Countering Digital Hate (CCDH). To learn more about why the ruling matters, Justin Hendrix spoke to Alex Abdo, the litigation director at the Knight First Amendment Institute at Columbia University; Imran Ahmed, the CEO and founder of the Center for Countering Digital Hate; and Roberta Kaplan, a partner at the law firm of Kaplan, Hecker, and Fink, which represented CCDH in this matter.

Apr 7, 2024 · 54 min

Nathan Schneider on Democratic Design for Online Life

On this show, when we talk about technology and democracy, guests are often talking about the relationship between technology and existing democratic systems. Today's guest wants us to think more expansively about what doing democracy means and the role that technology can play in it. Nathan Schneider, an assistant professor of media studies at the University of Colorado Boulder, is the author of Governable Spaces: Democratic Design for Online Life.

Apr 6, 2024 · 39 min

Reforming Tech Amidst a Global Backlash Against Women's Rights

Last year, researchers at Human Rights Watch wrote about the global backlash against women’s rights. In multiple countries, they say, hard-won progress has been reversed amidst a wave of anti-feminist rhetoric and policies, and it may take decades to reverse the trajectory. It’s against that backdrop that today’s guest pursues concerns at the intersection of tech and digital rights with women’s human rights. Justin Hendrix speaks with Lucy Purdon, the founder of Courage Everywhere and author of a recent report for the Mozilla Foundation titled "Unfinished Business: Incorporating a Gender Perspective into Digital Advertising Reform in the UK and EU."

Mar 31, 2024 · 35 min

Unpacking the Oral Argument in Murthy v Missouri

On Monday, March 18, the US Supreme Court heard oral argument in Murthy v Missouri. In this episode, Tech Policy Press reporting fellow Dean Jackson is joined by two experts, St. John's University School of Law associate professor Kate Klonick and UNC Center on Technology Policy director Matt Perault, to digest the oral argument, what it tells us about which way the Court might go, and what more should be done to create good policy on government interactions with social media platforms when it comes to content moderation and speech.

Mar 24, 2024 · 51 min

What's at Stake in Murthy v Missouri?

On March 18, the US Supreme Court will hear oral argument in Murthy v Missouri, a case that asks the justices to consider whether the government coerced or “significantly encouraged” social media executives to remove disfavored speech in violation of the First Amendment during the COVID-19 pandemic. Tech Policy Press reporting fellow Dean Jackson speaks to experts including the Knight First Amendment Institute at Columbia University's Mayze Teitler and Jennifer Jones, and the Tech Justice Law Project's Meetali Jain.

Mar 17, 2024 · 1h 23m

Exploring the Intersection of Information Integrity, Race, and US Elections

At INFORMED 2024, a conference hosted by the Knight Foundation in January, one panel focused on the subject of information integrity, race, and US elections. The conversation was compelling, and the panelists agreed to reprise it for this podcast. So today we're turning over the mic to Spencer Overton, a Professor of Law at the George Washington University and the director of the GW Law School's Multiracial Democracy Project. He's joined by three other experts, including:

Brandi Collins-Dexter, a media and technology fellow at Harvard's Shorenstein Center, a fellow at the National Center on Race and Digital Justice, and the author of the recent book Black Skinhead: Reflections on Blackness and Our Political Future. Brandi is developing a podcast of her own with MediaJustice that explores 1980s-era media, racialized conspiracism, and politics in Chicago;
Dr. Danielle Brown, a social movement and media researcher who holds the 1855 Community and Urban Journalism professorship at Michigan State and is the founding director of the LIFT project, which is focused on mapping, networking, and resourcing trusted messengers to dismantle mis- and disinformation narratives that circulate in and about Black communities; and
Kathryn Peters, who was the inaugural executive director of the University of North Carolina's Center for Information, Technology, and Public Life and the co-founder of Democracy Works, where she built programs to help more Americans navigate how to vote. These days, she's working on a variety of projects to empower voters and address election mis- and disinformation.

Mar 10, 2024 · 49 min

US Supreme Court Considers Florida and Texas Social Media Laws

On Monday, Feb. 26, 2024, the US Supreme Court heard oral arguments in Moody v. NetChoice, LLC and NetChoice, LLC v. Paxton. The cases concern similar but distinct state laws in Florida and Texas that would restrict social media companies’ ability to moderate content on their platforms. Justin Hendrix speaks with Tech Policy Press staff writer Gabby Miller and contributing editor Ben Lennett about key highlights from the arguments.

Mar 3, 2024 · 28 min

What Leverage Remains to Preserve Free Expression in Hong Kong?

This week, a public consultation period ended for a new Hong Kong national security law, known as Article 23. Article 23 ostensibly targets a wide array of crimes, including treason, theft of state secrets, espionage, sabotage, sedition, and "external interference" from foreign governments. The Hong Kong legislature, dominated by pro-Beijing lawmakers, is expected to approve it, even as critics argue that the law criminalizes basic human rights, such as freedom of expression, signaling a further erosion of the liberties once enjoyed by the residents of Hong Kong. To learn more about what is happening in Hong Kong and what role tech firms and other outside voices could play in preserving freedoms for its people, Justin Hendrix spoke to three experts who are following developments there closely:

Chung Ching Kwong, senior analyst at the Inter-Parliamentary Alliance on China;
Lokman Tsui, a fellow at the Citizen Lab at the University of Toronto; and
Michael Caster, Asia Digital Program Manager at Article 19.

Feb 29, 2024 · 45 min

How to Counter Disinformation Based on Science

If you’ve been listening to this podcast for a while, you know we’ve spent countless hours together talking about the problems of mis- and disinformation, and what to do about them. And, we’ve tried to focus on the science, on empirical research that can inform efforts to design a better media and technology environment that helps rather than hurts democracy and social cohesion. Today’s guests are Jon Bateman and Dean Jackson. The two have just produced a report for the Carnegie Endowment for International Peace that looks at what is known about a variety of interventions against disinformation, and provides evidence that should guide policy in governments and at technology platforms.

Feb 25, 2024 · 47 min

Evaluating the Role of Media in the January 6 Attack on the US Capitol

A new book out this week from Oxford University Press, titled simply Media and January 6th, assembles a varied collection of experts who aim to shed light on the interplay between the media and the bloody coup attempt that then-President Donald Trump led in an effort to hold on to power after losing the 2020 election to Joe Biden. It delves into the reasons January 6th occurred and highlights the pivotal role of media in that context. The book is structured around three essential inquiries: How should we interpret January 6, 2021? How should research evolve after it? And what measures can be taken to avert a similar incident in the future? Justin Hendrix spoke to three of the book's four editors: Khadijah Costley White, Daniel Kreiss, and Shannon C. McGregor.

Feb 25, 2024 · 47 min

Pakistan and the Intersection of Tech & Elections

It's become trite to say there are a lot of elections taking place this year. But of course, technology is playing a role in them all. At Tech Policy Press, we're lucky to have a group of seven fellows this year who are based on four continents. They are paying close attention to elections in the nations they know best. To learn more about the recent election in Pakistan, its chaotic aftermath, and the unique role of technology in events there, I spoke to one of our fellows last week: Ramsha Jahangir, a Pakistani journalist currently based in the Netherlands.

Feb 24, 2024 · 17 min

Ranking Content On Signals Other Than User Engagement

Today's guests are Jonathan Stray, a senior scientist at the Center for Human Compatible AI at the University of California Berkeley, and Ravi Iyer, managing director of the Neely Center at the University of Southern California's Marshall School. Both are keenly interested in what happens when platforms optimize for variables other than engagement, and whether they can in fact optimize for prosocial outcomes. With several coauthors, they recently published a paper based in large part on discussion at an 8-hour working group session featuring representatives from seven major content-ranking platforms and former employees of another major platform, as well as university and independent researchers. The authors say "there is much unrealized potential in using non-engagement signals. These signals can improve outcomes both for platforms and for society as a whole."

Feb 18, 2024 · 34 min

FTC Commissioner Alvaro Bedoya on Algorithmic Fairness, Voice Cloning, and the Future

In May 2022, Alvaro Bedoya was sworn in as a Commissioner of the US Federal Trade Commission following his nomination by President Joe Biden and confirmation in the Senate. In this conversation, Commissioner Bedoya discusses a recent settlement over the commercial use of facial recognition technologies and what it should signal to other businesses, voice cloning and the growing problem of impersonations utilizing AI, and how he thinks about the future.

Feb 18, 2024 · 33 min

Imagining AI Countergovernance

Multiple past episodes of this podcast have focused on the topic of AI governance. But today’s guest, Blair Attard-Frost, has put forward a set of ideas they term "AI countergovernance." These are alternative mechanisms for community-led and worker-led governance that serve as means for resisting or contesting power, particularly as it manifests in AI systems and the companies and governments that advance them. 

Feb 11, 2024 · 38 min

Tech CEOs Face the US Senate on Child Safety

On Wednesday, January 31st, the US Senate Judiciary Committee hosted a hearing titled "Big Tech and the Online Child Sexual Exploitation Crisis." The CEOs of Meta, TikTok, X, Discord and Snap were called to the Capitol to answer questions from lawmakers on their efforts to protect children from sexual exploitation, drug trafficking, dangerous content, and other online harms. Gabby Miller reported on the hearing from New York, and Haajrah Gilani reported from Washington D.C.

Feb 4, 2024 · 21 min

How to Assess AI Governance Tools

Last year, the World Privacy Forum, a nonprofit research organization, conducted an international review of AI governance tools, analyzing documents, frameworks, and technical material related to AI governance from around the world. Importantly, the review found that a significant percentage of AI governance tools include faulty AI fixes that could ultimately undermine the fairness and explainability of AI systems. Justin Hendrix talked to Kate Kaye, one of the report’s authors, about a range of issues it covers, from the involvement of large tech companies in shaping AI governance tools, to the role of organizations like the OECD in developing them, to the need to consult people and communities that are often overlooked when making decisions about AI.

Jan 28, 2024 · 36 min

How to Defend Independent Technology Research from Corporate and Political Opposition

In October 2022, a group of researchers published a manifesto establishing a Coalition for Independent Technology Research. “Society needs trustworthy, independent research to relieve the harms of digital technologies and advance the common good,” they wrote. “Research can help us understand ourselves more clearly, identify problems, hold power accountable, imagine the world we want, and test ideas for change. In a democracy, this knowledge comes from academics, journalists, civil society, and community scientists, among others. Because independent research on digital technologies is a powerful force for the common good, it also faces powerful opposition.” In the months since that document was published, that opposition has grown. From investigations in Congress to lawsuits aimed at specific researchers, there is a backlash particularly against those who study communications and media, especially where the subjects of that research are often those most interested in advancing false and misleading claims about issues including elections and public health. Justin Hendrix, who is a member of the coalition, caught up with Brandi Geurkink, who was hired as the coalition's first Executive Director in December 2023, to discuss its priorities.

Jan 21, 2024 · 41 min

Questioning OpenAI's Nonprofit Status

Today’s guest is Robert Weissman, president of the nonprofit consumer advocacy organization Public Citizen. He is the author of a letter to the California Attorney General that raises significant concerns about OpenAI’s 501(c)(3) nonprofit status. The letter questions whether OpenAI has deviated from its nonprofit purposes, alleging that it may be acting under the control of its for-profit subsidiary, potentially in violation of its nonprofit mission. The letter also raises broader issues about the future of AI and how it will be governed.

Jan 14, 2024 · 19 min