PLAY PODCASTS
Arrested by AI

Confident in AI-generated results, some police departments are using facial recognition technology to help solve crimes. But how reliable is that software, and what happens when it’s wrong?

Post Reports

January 14, 2025 · 31m 36s

Audio is streamed directly from the publisher (tracking.swap.fm) as published in their RSS feed. Play Podcasts does not host this file. Rights-holders can request removal through the copyright & takedown page.

Show Notes

After two men brutally assaulted a security guard on a train platform in St. Louis, police detectives faced a daunting challenge: identifying the attackers. Police turned to facial recognition technology, feeding a blurry image from a small surveillance camera into the software.

The software gave them the mugshot of a man who says he had nothing to do with the crime. Christopher Gatlin spent over a year in jail awaiting trial before the case was dropped.

Gatlin is one of at least eight people in the United States who have been wrongfully arrested after being identified by facial recognition technology. All of those cases were eventually dropped by prosecutors – but only after the suspects fought to clear their names.

Business and tech investigations reporter Doug MacMillan unpacks his reporting on how police are using AI-driven facial recognition and how people like Gatlin have been wrongfully arrested as a result.

Today’s show was produced by Emma Talkoff and Trinity Webster-Bass. It was edited by Maggie Penman and Evelyn Larrubia. Thank you to David Ovalle and Aaron Schaffer.

Subscribe to The Washington Post here.