The MLSecOps Podcast

Risk Management and Enhanced Security Practices for AI Systems

February 06, 2024 | Season 2, Episode 4 | MLSecOps.com

Show Notes

In this episode of The MLSecOps Podcast, Omar Khawaja, VP of Security and Field CISO at Databricks, joins Diana Kelley, CISO of Protect AI. Together, Diana and Omar discuss a new framework for understanding AI risks, fostering a security-minded culture around AI, building the MLSecOps dream team, and some of the challenges that Chief Information Security Officers (CISOs) and other business leaders face when assessing the risks to their AI/ML systems.

Get the scoop on Databricks' new AI Security Framework in this episode. To learn more about the framework, contact cybersecurity@databricks.com.

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
Protect AI Radar: End-to-End AI Risk Management
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard - The Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform