The MLSecOps Podcast

Practical Offensive and Adversarial ML for Red Teams

MLSecOps.com Season 2 Episode 13


Next on the MLSecOps Podcast, we have the honor of highlighting one of our MLSecOps Community members, Dropbox™ Red Teamer Adrian Wood.

Adrian joined Protect AI threat researchers Dan McInerney and Marcello Salvati in the studio to share an array of insights, including what inspired him to create the Offensive ML (aka OffSec ML) Playbook, and to dive into categories like adversarial machine learning (ML), offensive/defensive ML, and supply chain attacks.

The group also discusses dual uses for "traditional" ML and LLMs in the realm of security, the rise of agentic LLMs, and the potential for crown-jewel data leakage via model malware (i.e., highly valuable and sensitive data being exfiltrated from an organization through malicious software embedded within machine learning models or AI systems).

Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.

Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models

Protect AI’s ML Security-Focused Open Source Tools

LLM Guard Open Source Security Toolkit for LLM Interactions

Huntr - The World's First AI/Machine Learning Bug Bounty Platform