
Show overview
The Tech Policy Press Podcast has been publishing since 2022 and, over the four years since, has built a catalogue of 309 episodes, roughly 210 hours of audio in total. Releases follow a weekly cadence.
Episode length varies meaningfully from one installment to the next, but most land between 33 and 47 minutes. None of the episodes are flagged explicit by the publisher. It is catalogued as an English-language Technology show.
The show is actively publishing: the most recent episode landed 4 days ago, and 29 episodes are already out so far this year. It is published by Tech Policy Press.
From the publisher
Tech Policy Press is a nonprofit media and community venture intended to provoke new ideas, debate and discussion at the intersection of technology and democracy. You can find us at https://techpolicy.press/, where you can join the newsletter.
Latest Episodes
How to Confront the Threat of AI Dictatorship
RightsCon Organizers Take Stock of What's Next After Zambia
AI, Gig Work, and the Future of Nursing
Unpacking the SECURE Data Act
Attorney General Raúl Torrez on What's Next in New Mexico's Case Against Meta
Why Palantir's ImmigrationOS Endangers Democracy and the Rule of Law
What to Do If the AI Bubble Bursts
Project Maven and the Age of AI Warfare
X is a Preferred Tool for American Propaganda. What Does It Mean?

Olivier Sylvain Wants to Reclaim the Internet from Big Tech
This was a landmark week for tech accountability in US courts. Juries in New Mexico and California delivered verdicts finding tech giants Meta and Google liable for harms to young users on their platforms, decisions that are projected to open the door to more lawsuits alleging that social media creates addiction or endangers kids.

Today’s guest sees these developments as positive and in line with the types of thinking he believes will help improve the internet. Olivier Sylvain is a professor at Fordham Law School and the author of a new book titled Reclaiming the Internet: How Big Tech Took Control—and How We Can Take It Back, published by Columbia Global Reports. Justin Hendrix interviewed him at Book Culture, a bookstore on 112th Street in New York City.

How to Study the Phenomenon of Tech Hype
AI hype is everywhere, and the CEOs of many tech firms are promising that the technology will soon eclipse human intelligence. The trillions of dollars invested toward this goal, and the massive deployment of capital and the human and natural resources it purchases, both require this kind of hype and cause it to compound. Today’s guests are studying this phenomenon from a variety of perspectives, building out a line of inquiry they call "Hype Studies." It's the subject of an occasional series of contributions to Tech Policy Press. Guests include:

Jascha Bareis, a postdoctoral political scientist at the University of Fribourg;
Andreu Belsunces Gonçalves, a sociologist of design and technology pursuing a PhD at the Tecnopolítica unit of the Open University of Catalonia;
Marché Arends, a South African independent investigative journalist.

Considering How AI Destroys Democratic Institutions
Across the world, governments and other institutions are racing to apply artificial intelligence in countless ways. In a draft paper forthcoming in the UC Law Journal titled "How AI Destroys Institutions," Boston University law professors Woodrow Hartzog and Jessica Silbey argue that the design of AI systems—from large language models to predictive and automated decision tools—is fundamentally incompatible with the civic institutions that hold democratic society together, including the rule of law, universities, a free press, and civic life itself. This isn't necessarily because AI is being misused or falling into the wrong hands, they say—in most instances AI is working exactly as intended and, in doing so, eroding the expertise, decision-making structures, and human connection that give institutions their legitimacy.

Google Employees Push Back on Government Surveillance Contracts
Early this year, following the deaths of Keith Porter, Renee Good, and Alex Pretti at the hands of federal agents and the violent immigration raids on communities across the United States, 1,500 Google workers signed a new petition demanding the company cut contracts with Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP).

Justin Hendrix spoke to two of the employees who signed the petition about why they signed it, the environment inside the company, and how they think about the risk they face for speaking out.

How to Regulate Deepfake Financial Fraud
Online fraud has become one of the fastest-growing criminal enterprises on the planet. Deepfake fraud cases are surging, and Deloitte analysts project that generative AI-driven banking fraud could climb to as much as $40 billion in the US alone by 2027.

The problem is not just the volume. It's the architecture. These are no longer opportunistic scams—they are industrialized, AI-assisted operations, and the synthetic media tools that power them are becoming cheaper and more convincing by the month.

A new report on deepfake financial fraud from Data & Society maps this threat. Justin Hendrix spoke to its authors, including:

Alice Marwick, director of research at Data & Society, and
Anya Schiffrin, co-director of the tech policy and innovation concentration at Columbia University’s School of International and Public Affairs.

Cindy Cohn on How to Sustain the Fight Against Authoritarianism
Today's guest has spent thirty years on the front lines of one of the defining battles at the intersection of technology and democracy: the fight over privacy and who controls your digital life. Cindy Cohn is the executive director of the Electronic Frontier Foundation (EFF), and she has been in the room for some of the most consequential fights over digital rights since the internet became part of everyday life—from fighting for encryption in the 90s, to the NSA mass surveillance revelations, to battling FBI gag orders that kept Americans in the dark about government data requests, and now to the fight against the grave civil rights and privacy abuses of the Trump administration.

Now, as she’s preparing to step down from her role at EFF, she's telling her story and trying to recruit a new generation to the fight. Her new book, Privacy's Defender, out March 10 from MIT Press, weaves her personal journey with the legal battles she's fought on behalf of whistleblowers, researchers, innovators, and everyday people.

In Age of Disruption, a Defense of Incrementalism
In their new book, Move Slow and Upgrade: The Power of Incremental Innovation, Evan Selinger, a professor in the Department of Philosophy at Rochester Institute of Technology, and Albert Fox Cahn, founder in residence of the Surveillance Technology Oversight Project (STOP), argue that society fixates on disruptive innovation at the expense of the kind of steady incrementalism that can deliver sustainable returns over longer time frames. They argue in favor of more careful deliberation and adopting what they call the “upgrader’s mindset,” which should be applied whenever “disruptive changes would pose the greatest social risk.”

How to Think About the Anthropic-Pentagon Dispute
The Pentagon wants AI that can fight wars — without limits. One of the United States’ leading AI companies says there are lines it won't cross. And this week, that standoff turned into an all-out confrontation. To discuss the implications of the dispute between Anthropic and the Pentagon, including the determination that the company represents a supply chain risk, Justin Hendrix spoke to two experts:

Kat Duffy, senior fellow for digital and cyberspace policy at the Council on Foreign Relations, and
Amos Toh, senior counsel in the Liberty and National Security Program at the Brennan Center for Justice.

How to Get Paid to Polarize on TikTok
Concerns about synthetic media and coordinated manipulation of online platforms have moved from theoretical worry to documented reality. Researchers, regulators, and civil society organizations are working to understand how algorithmically driven content recommendation systems can be exploited — not just by ideologically motivated actors, but by ordinary users pursuing financial gain.

Fundación Maldita.es is a Spanish nonprofit that has been working on information integrity and fact-checking since 2017. Its most recent investigation focuses on TikTok, and what they found raises pointed questions about the platform's creator monetization program. Researchers at Maldita documented a network of hundreds of accounts — spanning eighteen countries — that were producing AI-generated videos of protests that never happened, and doing so not out of any discernible political motive, but to accumulate followers, qualify for TikTok's revenue-sharing program, and, in some cases, sell the accounts outright. In this episode, Justin Hendrix is joined by Maldita associate director for public policy Carlos Hernández-Echevarría and public policy officer Marina Sacristán.

How to Become an Algorithmic Problem
As AI technologies proliferate, a growing number of people are asking what it means to live in a world dominated by algorithms and automated systems—and what gets lost when those systems optimize human behavior at scale. These questions sit at the intersection of political theory, technology policy, and everyday life, and they are drawing scholars from fields well outside computer science into the conversation.

José Marichal is a political scientist at California Lutheran University who has been writing and teaching about technology and politics for more than two decades. Marichal's new book, You Must Become an Algorithmic Problem: Renegotiating the Socio-Technical Contract, considers the age of recommendation systems and large language models. Drawing on political philosophy, he argues that individuals have entered into an implicit bargain with technology companies, trading unpredictability and novelty for the convenience of algorithmically curated experience. The consequences of that bargain, he contends, reach beyond personal preference and into the foundations of liberal democratic citizenship.

The Digital Services Act is a Lightning Rod for Debate
This week marks the second DSA and Platform Regulation conference in Amsterdam, where experts will convene to consider the Digital Services Act (DSA) two years after it entered full effect across the European Union. Over that period, the law has been tested by national elections, geopolitical tensions, high-profile enforcement actions, and the rapid rise of generative AI. It has become both a benchmark for platform accountability and a political lightning rod.Ahead of the conference, Tech Policy Press senior editor Ramsha Jahangir spoke with members of the DSA Observatory, which is organizing the conference, to take stock. What have these first years of enforcement clarified? Where does opacity remain? And what does it mean to conduct DSA research in today’s political climate? Guests include:John Albert, associate researcher, DSA Observatory.Paddy Leerssen, postdoctoral researcher at the University of Amsterdam and part of the DSA Observatory.Magdelena Jozwiak, associate researcher at the DSA Observatory.