The MLSecOps Podcast

Beyond Prompt Injection: AI’s Real Security Gaps

MLSecOps.com

In Part 1 of this two-part MLSecOps Podcast, Principal Security Consultant Gavin Klondike joins Dan and Marcello to break down the real threats facing AI systems today. From misconceptions about prompt injection to indirect data exfiltration via markdown and gaps in MLOps security practices, Gavin unpacks what the industry gets wrong and how to fix it.

Full transcript with links to resources available at https://mlsecops.com/podcast/beyond-prompt-injection-ais-real-security-gaps

Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.

Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models

Recon: Automated Red Teaming for GenAI

Protect AI’s ML Security-Focused Open Source Tools

LLM Guard: Open Source Security Toolkit for LLM Interactions

Huntr - The World's First AI/Machine Learning Bug Bounty Platform