The MLSecOps Podcast

MITRE ATLAS: Defining the ML System Attack Chain and Need for MLSecOps; With Guest: Christina Liaghati, PhD

April 18, 2023 | Season 1, Episode 5 | MLSecOps.com

This week, The MLSecOps Podcast talks with Dr. Christina Liaghati, AI Strategy Execution & Operations Manager of the AI & Autonomy Innovation Center at MITRE.

Chris King, Head of Product at Protect AI, guest-hosts with regular co-host D Dehghanpisheh this week. D and Chris discuss a range of AI and machine learning security topics with Dr. Liaghati, including the contrast between the MITRE ATT&CK matrices, which focus on traditional cybersecurity, and the newer AI-focused MITRE ATLAS matrix.

The group also dives into new classifications of ML attacks related to large language models, ATLAS case studies, security practices such as ML red teaming, and integrating security into MLOps.

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
Protect AI Radar: End-to-End AI Risk Management
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard - The Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform
