
Red Team Brainstorming With GPTs Accelerates Threat Modeling
Hallucinations Are 'Ideas That Haven't Been Tested Yet,' Says Erica Burgess
December 23, 2025
About this episode
Large language models have a well-earned reputation for making things up. But for AI cybersecurity architect Erica Burgess, rather than being a bug, GPT hallucinations can be a threat-modeling feature. "I like to think of the hallucinations as just ideas that haven't been tested yet," she said.