The MLSecOps Podcast

Navigating the Challenges of LLMs: Guardrails AI to the Rescue; With Guest: Shreya Rajpal

June 07, 2023 | MLSecOps.com | Season 1, Episode 11

In “Navigating the Challenges of LLMs: Guardrails AI to the Rescue,” Protect AI co-founders Daryan Dehghanpisheh and Badar Ahmed interview Shreya Rajpal, creator of Guardrails AI.

Guardrails AI is an open-source package that lets users add structure, type, and quality guarantees to the outputs of large language models (LLMs).

In this highly technical discussion, the group digs into Shreya’s inspiration for starting the Guardrails project, the challenge of building a deterministic “guardrail” system on top of probabilistic large language models, and the broader challenges, technical and otherwise, that developers face when building applications on LLMs.

If you’re an engineer or developer looking to integrate large language models into your applications, this episode is a must-listen, highlighting important security considerations along the way.


Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

Additional tools and resources to check out:
Protect AI Radar: End-to-End AI Risk Management
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard - The Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform
