Practical Foundations for Securing AI
Manage episode 418235989 series 3461851
In this episode of the MLSecOps Podcast, we delve into the critical world of security for AI and machine learning with our guest Ron F. Del Rosario, Chief Security Architect and AI/ML Security Lead at SAP ISBN. The discussion highlights the contextual knowledge gap between ML practitioners and cybersecurity professionals, emphasizing the importance of cross-collaboration and foundational security practices. We explore how security for AI differs from security for traditional software, along with the contrasting risk profiles of first-party and third-party ML models. Ron sheds light on the importance of understanding your AI system's provenance and maintaining the necessary controls and audit trails for robust security. He also discusses the "Secure AI/ML Development Framework" initiative that he launched internally within his organization, featuring a lean security checklist to streamline processes. We hope you enjoy this thoughtful conversation!
Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.
Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models
Recon: Automated Red Teaming for GenAI
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard Open Source Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform