PLAY PODCASTS
LLMs: risks, rewards, and realities
Season 4 · Episode 13

LLMs: risks, rewards, and realities

Cloud Security Today · Matthew Chiodi

November 20, 202447m 26s

Audio is streamed directly from the publisher (api.riverside.fm) as published in their RSS feed. Play Podcasts does not host this file. Rights-holders can request removal through the copyright & takedown page.

Show Notes

Send a text

Nate Lee discusses his transition from a CISO role to fractional CISO work, emphasizing the importance of variety and exposure in his career. He delves into the rise of AI, particularly large language models (LLMs), and the associated security concerns, including prompt injection risks.

Nate highlights the critical role of orchestrators in managing AI interactions and the need for security practitioners to adapt to the evolving landscape. He shares insights from his 20 years in cybersecurity and offers recommendations for practitioners to engage with AI responsibly and effectively.

Takeaways

  • Nate transitioned to fractional CISO work for variety and exposure.
  • Prompt injection is a major vulnerability in LLM systems.
  • Orchestrators are essential for managing AI interactions securely.
  • Security practitioners must understand how LLMs work to mitigate risks.
  • Nate emphasizes the importance of human oversight in AI systems.

Link to Nate's research with the Cloud Security Alliance.

The future of cloud security.
Simplify cloud security with Prisma Cloud, the Code to Cloud platform powered by Precision AI.

Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.