![Prompts gone rogue. [Research Saturday]](https://megaphone.imgix.net/podcasts/58ab7ae0-def8-11ea-b34c-b35b208b0539/image/f873f2ed2bf2868969a07ebae4846fbf.png?ixlib=rails-4.3.1&max-w=3000&max-h=3000&fit=crop&auto=format,compress)
Prompts gone rogue. [Research Saturday]
CyberWire Daily · N2K Networks
Show Notes
Shachar Menashe, Senior Director of Security Research at JFrog, discusses "When Prompts Go Rogue: Analyzing a Prompt Injection Code Execution in Vanna.AI." A security vulnerability in the Vanna.AI tool, tracked as CVE-2024-5565, allows attackers to exploit large language models (LLMs) by manipulating user input so that the model produces malicious code, a technique known as prompt injection.
This poses a significant risk when LLMs are connected to critical functions, highlighting the need for stronger security measures.
The research can be found here: