The MLSecOps Podcast

ML Model Fairness: Measuring and Mitigating Algorithmic Disparities; With Guest: Nick Schmidt

August 18, 2023 | MLSecOps.com | Season 1, Episode 17

This week we’re talking about the role of fairness in AI/ML. It is becoming increasingly apparent that building fairness into AI systems and machine learning models, while mitigating bias and potential harms, is a critical challenge. It is also a challenge that demands a collective effort to ensure the responsible, secure, and equitable development of AI and machine learning systems.

But what does this actually mean in practice? To find out, we spoke with Nick Schmidt, the Chief Technology and Innovation Officer at SolasAI. In this week’s episode, Nick reviews key principles of model governance and fairness, from accountability and ownership through model deployment and monitoring.

He also discusses real-life examples in which machine learning algorithms have exhibited bias and disparity, and how those outcomes can harm individuals or groups.

Later in the episode, Nick offers advice for organizations that are assessing their AI security risk related to algorithmic disparities and unfair models.
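For readers who want a concrete sense of what "measuring an algorithmic disparity" can look like, here is a minimal sketch in Python. It is not taken from the episode and is not SolasAI's methodology; it simply compares selection rates across groups and computes the adverse impact ratio, a common first-pass screening metric. The group labels, decisions, and the 0.8 cutoff (the "four-fifths rule") are illustrative assumptions.

# Minimal illustrative sketch (not from the episode): compare group
# selection rates and compute the adverse impact ratio. Group names,
# decisions, and the 0.8 cutoff are assumptions for illustration only.
from collections import defaultdict

def selection_rates(decisions):
    # decisions: iterable of (group, approved) pairs, approved is True/False
    approved, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / total[g] for g in total}

def adverse_impact_ratios(rates, reference_group):
    # Ratio of each group's selection rate to the reference group's rate
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}

# Toy example: hypothetical credit-approval decisions by group
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = selection_rates(decisions)
for group, air in adverse_impact_ratios(rates, reference_group="A").items():
    flag = "potential disparity" if air < 0.8 else "ok"
    print(f"group {group}: rate={rates[group]:.2f}, AIR={air:.2f} ({flag})")

In this toy data, group B is approved half as often as group A, so its ratio falls below 0.8 and would warrant a closer look; in practice such a screen is only a starting point for the kind of deeper fairness analysis discussed in the episode.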


Additional tools and resources to check out:
Protect AI Radar: End-to-End AI Risk Management
ModelScan
NB Defense
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard - The Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform

Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.
