The MLSecOps Podcast

Indirect Prompt Injections and Threat Modeling of LLM Applications; With Guest: Kai Greshake

May 24, 2023 MLSecOps.com Season 1 Episode 10
Indirect Prompt Injections and Threat Modeling of LLM Applications; With Guest: Kai Greshake
The MLSecOps Podcast
More Info
The MLSecOps Podcast
Indirect Prompt Injections and Threat Modeling of LLM Applications; With Guest: Kai Greshake
May 24, 2023 Season 1 Episode 10
MLSecOps.com

This talk makes it increasingly clear. The time for machine learning security operations - MLSecOps - is now. 

In “Indirect Prompt Injections and Threat Modeling of LLM Applications,” (transcript here -> https://bit.ly/45DYMAG) we dive deep into the world of large language model (LLM) attacks and security. Our conversation with esteemed cyber security engineer and researcher, Kai Greshake, centers around the concept of indirect prompt injections, a novel adversarial attack and vulnerability in LLM-integrated applications, which Kai has explored extensively. 

Our host, Daryan Dehghanpisheh, is joined by special guest-host (Red Team Director and prior show guest) Johann Rehberger to discuss Kai’s research, including the potential real-world implications of these security breaches. They also examine contrasts to traditional security injection vulnerabilities like SQL injections. 

The group also discusses the role of LLM applications in everyday workflows and the increased security risks posed by their integration into various industry systems, including military applications. The discussion then shifts to potential mitigation strategies and the future of AI red teaming and ML security. 



Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
Protect AI Radar: End-to-End AI Risk Management
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard - The Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform

Show Notes

This talk makes it increasingly clear. The time for machine learning security operations - MLSecOps - is now. 

In “Indirect Prompt Injections and Threat Modeling of LLM Applications,” (transcript here -> https://bit.ly/45DYMAG) we dive deep into the world of large language model (LLM) attacks and security. Our conversation with esteemed cyber security engineer and researcher, Kai Greshake, centers around the concept of indirect prompt injections, a novel adversarial attack and vulnerability in LLM-integrated applications, which Kai has explored extensively. 

Our host, Daryan Dehghanpisheh, is joined by special guest-host (Red Team Director and prior show guest) Johann Rehberger to discuss Kai’s research, including the potential real-world implications of these security breaches. They also examine contrasts to traditional security injection vulnerabilities like SQL injections. 

The group also discusses the role of LLM applications in everyday workflows and the increased security risks posed by their integration into various industry systems, including military applications. The discussion then shifts to potential mitigation strategies and the future of AI red teaming and ML security. 



Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
Protect AI Radar: End-to-End AI Risk Management
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard - The Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform