The MLSecOps Podcast

The Evolved Adversarial ML Landscape; With Guest: Apostol Vassilev, NIST

June 14, 2023 MLSecOps.com Season 1 Episode 12
The Evolved Adversarial ML Landscape; With Guest: Apostol Vassilev, NIST
The MLSecOps Podcast
More Info
The MLSecOps Podcast
The Evolved Adversarial ML Landscape; With Guest: Apostol Vassilev, NIST
Jun 14, 2023 Season 1 Episode 12
MLSecOps.com

In this episode, we explore the National Institute of Standards and Technology (NIST) white paper, Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations. The report is co-authored by our guest for this conversation; Apostol Vassilev, NIST Research Team Supervisor. Apostol provides insights into the motivations behind this initiative and the collaborative research methodology employed by the NIST team.

Apostol shares with us that this taxonomy and terminology report is part of the Trustworthy & Responsible AI Resource Center that NIST is developing. 

Additional tools in the resource center include NIST’s AI Risk Management Framework (RMF), the OECD-NIST Catalogue of AI Tools and Metrics, and another crucial publication that Apostol co-authored called Towards a Standard for Identifying and Managing Bias in Artificial Intelligence.

The conversation then focuses on the evolution of adversarial ML (AdvML) attacks, including prominent techniques like prompt injection attacks, as well as other emerging threats amidst the rise of large language model applications. Apostol discusses the changing AI and computing infrastructure and the scale of defenses required as a result of these changes. 

Concluding the episode, Apostol shares thoughts on enhancing ML security practices and invites stakeholders to contribute to the ongoing development of the AdvML taxonomy and terminology white paper. 

Join us now for a thought-provoking discussion that sheds light on NIST's efforts to further define the terminology of adversarial ML and develop a comprehensive taxonomy of concepts that will aid industry leaders in creating additional standards and guides.

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
Protect AI Radar: End-to-End AI Risk Management
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard - The Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform

Show Notes

In this episode, we explore the National Institute of Standards and Technology (NIST) white paper, Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations. The report is co-authored by our guest for this conversation; Apostol Vassilev, NIST Research Team Supervisor. Apostol provides insights into the motivations behind this initiative and the collaborative research methodology employed by the NIST team.

Apostol shares with us that this taxonomy and terminology report is part of the Trustworthy & Responsible AI Resource Center that NIST is developing. 

Additional tools in the resource center include NIST’s AI Risk Management Framework (RMF), the OECD-NIST Catalogue of AI Tools and Metrics, and another crucial publication that Apostol co-authored called Towards a Standard for Identifying and Managing Bias in Artificial Intelligence.

The conversation then focuses on the evolution of adversarial ML (AdvML) attacks, including prominent techniques like prompt injection attacks, as well as other emerging threats amidst the rise of large language model applications. Apostol discusses the changing AI and computing infrastructure and the scale of defenses required as a result of these changes. 

Concluding the episode, Apostol shares thoughts on enhancing ML security practices and invites stakeholders to contribute to the ongoing development of the AdvML taxonomy and terminology white paper. 

Join us now for a thought-provoking discussion that sheds light on NIST's efforts to further define the terminology of adversarial ML and develop a comprehensive taxonomy of concepts that will aid industry leaders in creating additional standards and guides.

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
Protect AI Radar: End-to-End AI Risk Management
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard - The Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform