Scaling Laws

221 episodes — Page 1 of 5

Forecasting AI's Impact on the Economy with Deger Turan, CEO of Metaculus

May 12, 2026 · 53 min

Rapid Response: An "FDA for AI" at the White House?, with Dean Ball

May 8, 2026 · 33 min

Lawfare Daily: Why AI Won’t Revolutionize Law (At Least Not Yet), with Arvind Narayanan and Justin Curl

May 5, 2026 · 44 min

An EU Perspective on America’s Approach to AI with Marietje Schaake

May 1, 2026 · 45 min

Eliminating Barriers to AI Adoption with Clarion AI's Bennett Borden

Apr 28, 2026 · 50 min

Facts & Myths About AI's Energy Usage with Gavin McCormick

Apr 24, 2026 · 49 min

AI as Abnormal Technology? Scott Sullivan Analyzes AI in the Military Domain

Apr 21, 2026 · 45 min

Lawfare Daily: Talking About Sam Altman with Ronan Farrow and Andrew Marantz

Apr 17, 2026 · 49 min

Why AI Needs Independent Auditors, with Miles Brundage

Apr 14, 2026 · 53 min

Productivity Boom? Labor Shock? Google's Chief Economist on AI

Fabien Curto Millet, Chief Economist at Google, joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Fellow at the Abundance Institute, and Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, to discuss the potential of AI to catalyze a productivity boom while also addressing labor market instability. The three dive into likely changes in AI capabilities as well as ongoing reasons for slow organizational adoption of AI. Finally, they close with a brief discussion of potential policy approaches.  Hosted on Acast. See acast.com/privacy for more information.

Apr 7, 2026 · 50 min

Abundance & AI? Nicholas Bagley Explains

Nicholas Bagley, Professor of Law at Michigan Law, joins Kevin Frazier, Director of the AI Innovation and Law Program at the University of Texas School of Law and a Senior Fellow at the Abundance Institute, for a live recording of the podcast in Ann Arbor. Thanks to Graham Hardig and Brinson Elliott for organizing a great event. Professors Bagley and Frazier start by analyzing a recent debate over housing policy before diving into the weeds of the Abundance Agenda, its nexus with AI policy, and what this all means for the future of legal education and governance.

Apr 3, 2026 · 43 min

How To Use, Govern, And Lead On AI? Rep. Begich Points The Path Forward

Representative Nick Begich, Alaska's at-large member of Congress, joins Kevin Frazier, Director of the AI Innovation and Law Program at the University of Texas School of Law and a Senior Fellow at the Abundance Institute, to discuss the current state of AI policy on the Hill. As one of the few members of Congress with a background in tech, Rep. Begich offers a unique perspective on this evolving regulatory question. The two also assess how Alaska may become a leader in developing AI infrastructure. Finally, Rep. Begich shares how he and his staff leverage AI to improve their own operations.

Mar 31, 2026 · 46 min

Should AI Laws Be Subject To A Higher Standard? The Right to Compute with Kendall Cotton

Kendall Cotton, Founder and CEO of Montana’s Frontier Institute, joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss Montana’s groundbreaking Right to Compute Act and how Montana hopes to protect access to AI and related technologies. The two cover the history and reach of the Act and why other states may want to follow Montana's lead.

Mar 27, 2026 · 39 min

Why Data Governance Is the Key to AI Biosecurity, with Jassi Pannu and Doni Bloomfield

Alan Rozenshtein, research director at Lawfare, spoke with Jassi Pannu, assistant professor at the Johns Hopkins Bloomberg School of Public Health and senior scholar at the Johns Hopkins Center for Health Security, and Doni Bloomfield, associate professor of law at Fordham Law School, about their proposed framework for governing biological data to reduce AI-enabled biosecurity risks. The conversation covered the origins of the proposal in the 50th anniversary of the 1975 Asilomar conference on recombinant DNA; the distinction between general-purpose AI models and biology-specific foundation models like genomic language models; the biosecurity threats posed by AI, including uplift of novice actors and raising the ceiling of expert capabilities; the proposed biosecurity data levels (BDL 0-4) framework and how it draws on precedents from biosafety levels and genetic privacy regulation; the challenge of capabilities-based rather than pathogen-based data classification; the institutional and regulatory mechanisms for enforcement, including the role of NIH grant conditions and a proposed mandatory federal regime; international collaboration and the importance of U.S. leadership given that most high-tier data is generated domestically; the relationship between the proposal and open-source biological AI development; and the offense-defense imbalance in biosecurity and the case for mandatory gene synthesis screening.
Mentioned in this episode:
Jassi Pannu and Doni Bloomfield et al., "Biological data governance in an age of AI," Science (2026)
Jassi Pannu, Doni Bloomfield, et al., "Dual-use capabilities of concern of biological AI models," PLOS Computational Biology (2025)
Dario Amodei, "The Adolescence of Technology" (2026)
The Genesis Mission Executive Order (November 2025)

Mar 24, 2026 · 49 min

Rapid Response Pod: Trump's New AI Framework with Helen Toner & Dean Ball

On Friday, March 20, the Trump Administration announced a National Policy Framework for AI. White House officials have stressed that they want Congress to act on the framework's recommendations within the year. What this all means for AI policy is an open question that warrants calling in two of the smartest folks in the business: Helen Toner, Interim Executive Director at Georgetown's Center for Security and Emerging Technology (CSET), and Dean Ball, a senior fellow at the Foundation for American Innovation. This rapid response episode cuts to the chase as everyone makes sense of this important development in the national AI policy conversation.

Mar 21, 2026 · 25 min

Is AI a Death Sentence for Civic Institutions?, with Jessica Silbey and Woodrow Hartzog

Alan Rozenshtein, research director at Lawfare, spoke with Woodrow Hartzog, the Andrew R. Randall Professor of Law at Boston University School of Law, and Jessica Silbey, Professor of Law and Honorable Frank R. Kenison Distinguished Scholar in Law at Boston University School of Law, about their new paper "How AI Destroys Institutions," which argues that AI systems threaten to erode the civic institutions that organize democratic society. The conversation covered the sociological concept of institutions and why they differ from organizations; the idea of technological affordances from science and technology studies; how AI undermines human expertise through both accuracy and inaccuracy; the cognitive offloading problem and whether AI-driven skill atrophy differs from past technological transitions; whether AI-generated decisions can satisfy the legitimacy requirements of the rule of law; the role of reason-giving, contestation, and political accountability in legal institutions; the tension between the paper's sweeping diagnosis and its more incremental prescriptions; and the case for bespoke, institution-specific AI tools over general-purpose deployment.

Mar 17, 2026 · 53 min

Can AI Enable Human Agency?, with Tomicah Tillemann

Tomicah Tillemann, President at Project Liberty Institute, joins the show. Tomicah offers a unique perspective on regulating emerging technology given his time as a venture capitalist and head of policy at Andreessen Horowitz and Haun Ventures. His contemporary focus is on identifying “policy solutions that enable human agency and human flourishing in an AI-powered world.” It’s a tall order that he breaks down with Kevin Frazier, a Senior Fellow at the Abundance Institute, Adjunct Research Fellow at the Cato Institute, and a Senior Editor at Lawfare.

Mar 13, 2026 · 46 min

Live from Ashby: Taking a Long View on AI Governance with Austin Carson and Caleb Watney

Kevin Frazier hangs out with Caleb Watney of the Institute for Progress and Austin Carson of SeedAI at the Ashby Workshops to discuss the long-run policy foundations needed for the AI Age. Rather than focusing on near-term regulation, the conversation explores how AI challenges existing assumptions about state capacity, research funding, talent pipelines, and institutional design. Caleb and Austin unpack concepts like meta-science, public compute infrastructure, immigration policy, and congressional expertise—and explain why these “boring” policy areas may matter more for AI outcomes than headline-grabbing rules. The episode also examines how AI policy discourse has evolved in Washington, what lessons policymakers should draw from efforts like the National AI Research Resource, and why many AI governance failures may ultimately be failures of institutions rather than intent.

Mar 10, 2026 · 58 min

Scaling Laws x AI Summer: Who Controls the Machine God?

Alan Rozenshtein, associate professor of law at the University of Minnesota and research director at Lawfare, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and senior editor at Lawfare, were joined by Dean Ball, senior fellow at the Foundation for American Innovation and author of the Hyperdimensional newsletter, and Timothy B. Lee, author of the Understanding AI newsletter, for a joint crossover episode of the Scaling Laws and AI Summer podcasts about the escalating dispute between Anthropic and the Pentagon over AI usage restrictions in military contracts. The conversation covered the timeline of the Anthropic-Pentagon dispute and Secretary Hegseth's supply chain risk designation; the legal basis for the designation under 10 U.S.C. § 3252 and whether it was intended to apply to domestic companies; the role of personality and politics in the dispute; OpenAI's competing Pentagon contract and debate over whether its terms actually match Anthropic's red lines; public opinion polling showing bipartisan concern about AI mass surveillance and autonomous weapons; the broader question of what the government-AI industry relationship should look like; the prospect of partial or full nationalization of AI capabilities; and whether frontier AI models are actually decisive for military applications.

Mar 6, 2026 · 57 min

In Defense of Optimism with Packy McCormick

Packy McCormick, founder of Not Boring and Not Boring Capital, joins Kevin Frazier, Director of the AI Innovation and Law Program at the University of Texas School of Law and a Senior Fellow at the Abundance Institute, to discuss the power of narratives in tech, the intersection of investing and policy, and what it means to build frameworks for the future in an age of rapid technological change.

Mar 3, 2026 · 46 min

The Pentagon Goes to War With Anthropic

An impasse is coming to a head. The resolution is unknown. The Department of Defense has made clear that Anthropic has until 5:01pm ET today, February 27th, 2026, to permit its use of Claude for any lawful purpose. CEO Dario Amodei doubled down on his insistence that Anthropic tools should not be used for mass domestic surveillance or the operation of lethal autonomous weapons. The Pentagon's spokesman agrees that such usage would indeed be unlawful, and yet the two parties cannot come to terms. If the DOD is to be taken at its word, the likely result is that Anthropic will be labeled as a supply chain risk--an unprecedented decision with huge business ramifications. Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, joins Kevin Frazier, Senior Fellow at the Abundance Institute and a Senior Editor at Lawfare, to break this all down. You can also read more on this weighty issue via Alan’s two recent Lawfare pieces here and here.

Feb 27, 2026 · 46 min

Can AI Make AI Regulation Cheaper?, with Cullen O'Keefe and Kevin Frazier

Alan Rozenshtein, research director at Lawfare, spoke with Cullen O'Keefe, research director at the Institute for Law & AI, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas at Austin School of Law and senior editor at Lawfare, about their paper, "Automated Compliance and the Regulation of AI" (and associated Lawfare article), which argues that AI systems can automate many regulatory compliance tasks, loosening the trade-off between safety and innovation in AI policy. The conversation covered the disproportionate burden of compliance costs on startups versus large firms; the limitations of compute thresholds as a proxy for targeting AI regulation; how AI can automate tasks like transparency reporting, model evaluations, and incident disclosure; the Goodhart's Law objection to automated compliance; the paper's proposal for "automatability triggers" that condition regulation on the availability of cheap compliance tools; analogies to sunrise clauses in other areas of law; incentive problems in developing compliance-automating AI; the speculative future of automated compliance meeting automated governance; and how co-authoring the paper shifted each author's views on the AI regulation debate.

Feb 24, 2026 · 51 min

Claude's Constitution, with Amanda Askell

Alan Rozenshtein, research director at Lawfare, and Kevin Frazier, senior editor at Lawfare, spoke with Amanda Askell, head of personality alignment at Anthropic, about Claude's Constitution: a 20,000-word document that describes the values, character, and ethical framework of Anthropic's flagship AI model and plays a direct role in its training. The conversation covered how the constitution is used during supervised learning and reinforcement learning to shape Claude's behavior; analogies to constitutional law, including fidelity to text, the potential for a body of "case law," and the principal hierarchy of Anthropic, operators, and users; the decision to ground the constitution in virtue ethics and practical judgment rather than rigid rules; the document's treatment of Claude's potential moral patienthood and the question of AI personhood; whether the constitution's values are too Western and culturally specific; the tension between Anthropic's commercial incentives and its stated mission; and whether the constitutional approach can generalize to specialized domains like cybersecurity and military applications.

Feb 20, 2026 · 47 min

Live from Ashby: Adaptive AI Governance with Gillian Hadfield and Andrew Freedman

Kevin Frazier sits down with Andrew Freedman of Fathom and Gillian Hadfield, AI governance scholar, at the Ashby Workshops to examine innovative models for AI regulation. They discuss:
Why traditional regulation struggles with rapid AI innovation.
The concept of Regulatory Markets and how it aligns with the unique governance challenges posed by AI.
Critiques of hybrid governance: concerns about a “race to the bottom,” the limits of soft law on catastrophic risks, and how liability frameworks interact with governance.
What success looks like for Ashby Workshops and the future of adaptive AI policy design.
Whether you’re a policy wonk, technologist, or governance skeptic, this episode bridges ideas and practice in a time of rapid technological change.

Feb 17, 2026 · 54 min

The Persuasion Machine: David Rand on How LLMs Can Reshape Political Beliefs

Alan Rozenshtein, research director at Lawfare, and Renee DiResta, associate research professor at Georgetown University's McCourt School of Public Policy and contributing editor at Lawfare, spoke with David Rand, professor of information science, marketing, and psychology at Cornell University. The conversation covered how inattention to accuracy drives misinformation sharing and the effectiveness of accuracy nudges; how AI chatbots can durably reduce conspiracy beliefs through evidence-based dialogue; research showing that conversational AI can shift voters' candidate preferences, with effect sizes several times larger than traditional political ads; the finding that AI persuasion works through presenting factual claims, but that the claims need not be true to be effective; partisan asymmetries in misinformation sharing; the threat of AI-powered bot swarms on social media; the political stakes of training data and system prompts; and the policy case for transparency requirements.
Additional reading:
"Durably Reducing Conspiracy Beliefs Through Dialogues with AI" - Science (2024)
"Persuading Voters Using Human-Artificial Intelligence Dialogues" - Nature (2025)
"The Levers of Political Persuasion with Conversational Artificial Intelligence" - Science (2025)
"How Malicious AI Swarms Can Threaten Democracy" - Science (2026)

Feb 10, 2026 · 58 min

Alan and Kevin join the Cognitive Revolution.

Nathan Labenz, host of the Cognitive Revolution, sat down with Alan and Kevin to talk about the intersection of AI and the law. The trio explores everything from how AI may address the shortage of attorneys in rural communities to the feasibility and desirability of the so-called "Right to Compute." Learn more about the Cognitive Revolution here. It's our second favorite AI podcast!

Feb 3, 2026 · 1h 31m

Is this your last "job"? The AI Economy With AEI's Brent Orrell

Most folks agree that AI is going to drastically change our economy, the nature of work, and the labor market. What's unclear is when those changes will take place and how best Americans can navigate the transition. Brent Orrell, senior fellow at the American Enterprise Institute, joins Kevin Frazier, a Senior Fellow at the Abundance Institute, Director of the AI Innovation and Law Program at the University of Texas School of Law, and a Senior Editor at Lawfare, to help tackle these and other weighty questions. Orrell has been studying the future of work since before it was cool. His two cents are very much worth a nickel in this important conversation. Send us your feedback ([email protected]) and leave us a review!

Jan 27, 2026 · 51 min

Rapid Response Pod on The Implications of Claude's New Constitution

Jakub Kraus, a Tarbell Fellow at Lawfare, spoke with Alan Rozenshtein, Associate Professor of Law at the University of Minnesota and Research Director at Lawfare, and Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law, a Senior Fellow at the Abundance Institute, and a Senior Editor at Lawfare, about Anthropic's newly released "constitution" for its AI model, Claude. The conversation covered the lengthy document's principles and underlying philosophical views, what these reveal about Anthropic's approach to AI development, how market forces are shaping the AI industry, and the weighty question of whether an AI model might ever be a conscious or morally relevant being.
Mentioned in this episode:
Kevin Frazier, "Interpreting Claude's Constitution," Lawfare
Alan Rozenshtein, "The Moral Education of an Alien Mind," Lawfare

Jan 22, 2026 · 55 min

The Honorable AI? Shlomo Klapper Talks Judicial Use of AI

Shlomo Klapper, founder of Learned Hand, joins Kevin Frazier, Director of the AI Innovation and Law Program at the University of Texas School of Law, a Senior Fellow at the Abundance Institute, and a Senior Editor at Lawfare, to discuss the rise of judicial AI, the challenges of scaling technology inside courts, and the implications for legitimacy, due process, and access to justice.

Jan 20, 2026 · 42 min

How AI Can Transform Local Criminal Justice, with Francis Shen

Alan Rozenshtein, research director at Lawfare, spoke with Francis Shen, Professor of Law at the University of Minnesota, director of the Shen Neurolaw Lab, and candidate for Hennepin County Attorney. The conversation covered the intersection of neuroscience, AI, and criminal justice; how AI tools can improve criminal investigations and clearance rates; the role of AI in adjudication and plea negotiations; precision sentencing and individualized justice; the ethical concerns around AI bias, fairness, and surveillance; the practical challenges of implementing AI systems in local government; building institutional capacity and public trust; and the future of the prosecutor's office in an AI-augmented justice system.

Jan 13, 2026 · 51 min

Release Schedules and Iterative Deployment with OpenAI's Ziad Reslan

Ziad Reslan, a member of OpenAI’s Product Policy Staff and a Senior Fellow with the Schmidt Program on Artificial Intelligence, Emerging Technologies, and National Power at Yale University, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to talk about iterative deployment--the lab’s approach to testing and deploying its models. It’s a complex and, at times, controversial approach. Ziad provides the rationale behind iterative deployment and tackles some questions about whether the strategy has always worked as intended.

Jan 6, 2026 · 51 min

A Year That Felt Like a Decade: 2025 Recap with Sen. Maroney & Neil Chilson

Connecticut State Senator James Maroney and Neil Chilson, Head of AI Policy at the Abundance Institute, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, and Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, for a look back at a wild year in AI policy. Neil provides his expert analysis of all that did (and did not) happen at the federal level. Senator Maroney then examines what transpired across the states. The four then offer their predictions for what seems likely to be an even busier 2026.

Dec 30, 2025 · 53 min

Cass Sunstein on What AI Can and Cannot Do

Alan Z. Rozenshtein, Lawfare senior editor and associate professor of law at the University of Minnesota, speaks with Cass Sunstein, the Robert Walmsley University Professor at Harvard University, about his new book, Imperfect Oracle: What AI Can and Cannot Do. They discuss when we should trust algorithms over our own judgment, why AI can eliminate the noise and bias that plague human decision-making but can't predict revolutions, cultural hits, or even a coin flip—and, perhaps most importantly, when it makes sense to delegate our choices to AI and when we should insist on deciding for ourselves.

Dec 23, 2025 · 43 min

AI Chatbots and the Future of Free Expression with Jacob Mchangama and Jacob Shapiro

Renée DiResta, Lawfare contributing editor and associate research professor at Georgetown's McCourt School of Public Policy, and Alan Z. Rozenshtein, Lawfare senior editor and associate professor of law at the University of Minnesota, spoke with Jacob Mchangama, research professor of political science at Vanderbilt University and founder of The Future of Free Speech, and Jacob Shapiro, the John Foster Dulles Professor of International Affairs at Princeton University. The conversation covered the findings of a new report examining how AI models handle contested speech; comparative free speech regulations across six jurisdictions; empirical testing of how major chatbots respond to politically sensitive prompts; and the tension between free expression principles and concerns about manipulation in AI systems.

Dec 16, 2025 · 53 min

Rapid Response Pod on the AI Preemption Executive Order

In this rapid response episode, Lawfare senior editors Alan Rozenshtein and Kevin Frazier and Lawfare Tarbell fellow Jakub Kraus discuss President Trump's new executive order on federal preemption of state AI laws, the politics of AI regulation and the split between Silicon Valley Republicans and MAGA populists, and the administration's decision to allow Nvidia to export H200 chips to China.
Mentioned in this episode:
Executive Order: Ensuring a National Policy Framework for Artificial Intelligence
Charlie Bullock, "Legal Issues Raised by the Proposed Executive Order on AI Preemption," Institute for Law & AI

Dec 12, 2025 · 56 min

Graham Dufault on small businesses and navigating EU AI laws

Graham Dufault, General Counsel at ACT | The App Association, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explore how small- and medium-sized enterprises (SMEs) are navigating the EU's AI regulatory framework. The duo breaks down the Association's recent survey of more than 1,000 SMEs, which assessed their views on regulation and adoption of AI.
Follow Graham: @GDufault and ACT | The App Association: @actonline

Dec 9, 2025 · 45 min

Caleb Withers on the Cybersecurity Frontier in the Age of AI

Caleb Withers, a researcher at the Center for a New American Security, joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss how frontier models shift the balance in favor of attackers in cyberspace. The two discuss how labs and governments can take steps to address these asymmetries favoring attackers, and the future of cyber warfare driven by AI agents. Jack Mitchell, a student fellow in the AI Innovation and Law Program at the University of Texas School of Law, provided excellent research assistance on this episode. Check out Caleb’s recent research here.

Dec 2, 2025 · 48 min

A Startup's Perspective on AI Policy

Andrew Prystai, CEO and co-founder of Vesta, and Thomas Bueler-Faudree, co-founder of August Law, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to think through AI policy from the startup perspective. Andrew and Thomas are the sorts of entrepreneurs that politicians on both sides of the aisle talk about at town halls and in press releases. They’re creating jobs and pushing the technological frontier. So what do they want AI policy leaders to know as lawmakers across the country weigh regulatory proposals? That’s the core question of the episode. Giddy up for a great chat!
Learn more about the guests and their companies here:
Andrew's LinkedIn, Vesta's LinkedIn
Thomas’s LinkedIn, August’s LinkedIn

Nov 25, 2025 · 51 min

Anthropic's General Counsel, Jeff Bleich, Explores the Intersection of Law, Business, and Emerging Technology

Jeff Bleich, General Counsel at Anthropic, former Chief Legal Officer at Cruise, and former Ambassador to Australia during the Obama administration, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to get a sense of how the practice of law looks at the edge of the AI frontier. The two also review how Jeff’s prior work in the autonomous vehicle space prepared him for the challenges and opportunities posed by navigating legal uncertainties in AI governance.

Nov 18, 2025 · 36 min

The AI Economy and You: How AI Is, Will, and May Alter the Nature of Work and Economic Growth with Anton Korinek, Nathan Goldschlag, and Bharat Chander

Anton Korinek, a professor of economics at the University of Virginia and newly appointed economist to Anthropic's Economic Advisory Council, Nathan Goldschlag, Director of Research at the Economic Innovation Group, and Bharat Chander, Economist at Stanford Digital Economy Lab, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to sort through the myths, truths, and ambiguities that shape the important debate around the effects of AI on jobs. They discuss what happens when machines begin to outperform humans in virtually every computer-based task, how that transition might unfold, and what policy interventions could ensure broadly shared prosperity. These three are prolific researchers. Give them a follow to find their latest work:
Anton: @akorinek on X
Nathan: @ngoldschlag and @InnovateEconomy on X
Bharat: X: @BharatKChandar, LinkedIn: @bharatchandar, Substack: @bharatchandar

Nov 11, 2025 · 43 min

Anthropic's Gabriel Nicholas Analyzes AI Agents

Gabriel Nicholas, a member of the Product Public Policy team at Anthropic, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to introduce the policy problems (and some solutions) posed by AI agents. AI agents, defined as tools capable of autonomously completing tasks on your behalf, are widely expected to soon become ubiquitous. Their integration into sensitive tasks presents a slew of technical, social, economic, and political questions. Gabriel walks through the weighty questions that labs are thinking through as AI agents finally become “a thing.”

Nov 4, 2025 · 48 min

The GoLaxy Revelations: China's AI-Driven Influence Operations, with Brett Goldstein, Brett Benson, and Renée DiResta

Alan Rozenshtein, senior editor at Lawfare, spoke with Brett Goldstein, special advisor to the chancellor on national security and strategic initiatives at Vanderbilt University; Brett Benson, associate professor of political science at Vanderbilt University; and Renée DiResta, Lawfare contributing editor and associate research professor at Georgetown University's McCourt School of Public Policy. The conversation covered the evolution of influence operations from crude Russian troll farms to sophisticated AI systems using large language models; the discovery of GoLaxy documents revealing a "Smart Propaganda System" that collects millions of data points daily, builds psychological profiles, and generates resilient personas; operations targeting Hong Kong's 2020 protests and Taiwan's 2024 election; the fundamental challenges of measuring effectiveness; GoLaxy's ties to Chinese intelligence agencies; why detection has become harder as platform integrity teams have been rolled back and multi-stakeholder collaboration has broken down; and whether the United States can get ahead of this threat or will continue the reactive pattern that has characterized cybersecurity for decades.
Mentioned in this episode:
"The Era of A.I. Propaganda Has Arrived, and America Must Act" by Brett J. Goldstein and Brett V. Benson (New York Times, August 5, 2025)
"China Turns to A.I. in Information Warfare" by Julian E. Barnes (New York Times, August 6, 2025)
"The GoLaxy Papers: Inside China's AI Persona Army" by Dina Temple-Raston and Erika Gajda (The Record, September 19, 2025)
"The supply of disinformation will soon be infinite" by Renée DiResta (The Atlantic, September 2020)

Oct 28, 202555 min

Sen. Scott Wiener on California Senate Bill 53

California State Senator Scott Wiener, author of Senate Bill 53, a frontier AI safety bill signed into law by Governor Newsom earlier this month, joins Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explain the significance of SB 53 in the larger debate about how to govern AI. The trio analyze the lessons that Senator Wiener learned from the battle over SB 1047, a related bill that Newsom vetoed last year, explore SB 53's key provisions, and forecast what may be coming next in Sacramento and D.C.

Oct 21, 202549 min

AI and Energy: What do we know? What are we learning?

Mosharaf Chowdhury, associate professor at the University of Michigan and director of the ML Energy Lab, and Dan Zhao, AI researcher at MIT, GoogleX, and Microsoft focused on AI for science and sustainable, energy-efficient AI, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss the energy costs of AI. They break down exactly how much energy fuels a single ChatGPT query, why this is difficult to figure out, how we might improve energy efficiency, and what kinds of policies might minimize AI's growing energy and environmental costs. Leo Wu provided excellent research assistance on this podcast.

Read more from Mosharaf:
https://ml.energy/
https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/

Read more from Dan:
https://arxiv.org/abs/2310.03003
https://arxiv.org/abs/2301.11581

Oct 14, 202551 min

AI Safety Meets Trust & Safety with Ravi Iyer and David Sullivan

David Sullivan, Executive Director of the Digital Trust & Safety Partnership, and Ravi Iyer, Managing Director of the Psychology of Technology Institute at USC's Neely Center, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss the evolution of the Trust & Safety field and its relevance to ongoing conversations about how best to govern AI. They discuss the importance of thinking about the end user in regulation, debate the differences and similarities between social media and AI companions, and evaluate current policy proposals. You'll "like" (bad pun intended) this one. Leo Wu provided excellent research assistance to prepare for this podcast.

Read more from David:
https://www.weforum.org/stories/2025/08/safety-product-build-better-bots/
https://www.techpolicy.press/learning-from-the-past-to-shape-the-future-of-digital-trust-and-safety/

Read more from Ravi:
https://shows.acast.com/arbiters-of-truth/episodes/ravi-iyer-on-how-to-improve-technology-through-design
https://open.substack.com/pub/psychoftech/p/regulate-value-aligned-design-not?r=2alyy0&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false

Read more from Kevin:
https://www.cato.org/blog/california-chatroom-ab-1064s-likely-constitutional-overreach

Oct 7, 202546 min

Rapid Response: California Governor Newsom Signs SB-53

In this Scaling Laws rapid response episode, hosts Kevin Frazier and Alan Rozenshtein talk about SB-53, the frontier AI transparency (and more) law that California Governor Gavin Newsom signed into law on September 29.

Sep 30, 202536 min

The Ivory Tower and AI (Live from IHS's Technology, Liberalism, and Abundance Conference)

Neil Chilson, Head of AI Policy at the Abundance Institute, and Gus Hurwitz, Senior Fellow and CTIC Academic Director at Penn Carey Law School and Director of Law & Economics Programs at the International Center for Law & Economics, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explore how academics can overcome the silos and incentives that plague the Ivory Tower and positively contribute to the highly complex, evolving, and interdisciplinary work associated with AI governance. The trio recorded this podcast live at the Institute for Humane Studies's Technology, Liberalism, and Abundance Conference in Arlington, Virginia.

Read about Kevin's thinking on the topic here: https://www.civitasinstitute.org/research/draining-the-ivory-tower
Learn about the Conference: https://www.theihs.org/blog/curated-event/technology-abundance-and-liberalism/

Sep 30, 202542 min

AI and Young Minds: Navigating Mental Health Risks with Renée DiResta and Jess Miers

Alan Rozenshtein, Renée DiResta, and Jess Miers discuss the distinct risks that generative AI systems pose to children, particularly in relation to mental health. They explore the balance between the benefits and harms of AI, emphasizing the importance of media literacy and parental guidance. Recent developments in AI safety measures and ongoing legal implications are also examined, highlighting the evolving landscape of AI regulation and liability.

Sep 23, 202558 min

AI Copyright Lawsuits with Pam Samuelson

On today's Scaling Laws episode, Alan Rozenshtein sat down with Pam Samuelson, the Richard M. Sherman Distinguished Professor of Law at the University of California, Berkeley, School of Law, to discuss the rapidly evolving legal landscape at the intersection of generative AI and copyright law. They dove into the recent district court rulings in lawsuits brought by authors against AI companies, including Bartz v. Anthropic and Kadrey v. Meta. They explored how different courts are treating the core questions of whether training AI models on copyrighted data is a transformative fair use and whether AI outputs create a "market dilution" effect that harms creators. They also touched on other key cases to watch and the role of the U.S. Copyright Office in shaping the debate.

Mentioned in this episode:
"How to Think About Remedies in the Generative AI Copyright Cases" by Pam Samuelson in Lawfare
Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith
Bartz v. Anthropic
Kadrey v. Meta Platforms
Thomson Reuters Enterprise Centre GmbH v. Ross Intelligence Inc.
U.S. Copyright Office, Copyright and Artificial Intelligence, Part 3: Generative AI Training

Sep 16, 202559 min

AI and the Future of Work: Joshua Gans on Navigating Job Displacement

Joshua Gans, a professor at the University of Toronto and co-author of "Power and Prediction: The Disruptive Economics of Artificial Intelligence," joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to evaluate ongoing concerns about AI-induced job displacement, the likely consequences of various regulatory proposals on AI innovation, and how AI tools are already changing higher education.

Select works by Gans include:
A Quest for AI Knowledge (https://www.nber.org/papers/w33566)
Regulating the Direction of Innovation (https://www.nber.org/papers/w32741)
How Learning About Harms Impacts the Optimal Rate of Artificial Intelligence Adoption (https://www.nber.org/papers/w32105)

Sep 11, 202557 min