MLSecOps: Red Teaming, Threat Modeling, and Attack Methods of AI Apps; With Guest: Johann Rehberger
Manage episode 361711037 series 3461851
Johann Rehberger is an entrepreneur and Red Team Director at Electronic Arts. His career experience includes time with Microsoft and Uber, and he is the author of “Cybersecurity Attacks – Red Team Strategies: A practical guide to building a penetration testing program having homefield advantage” and the popular blog, EmbraceTheRed.com.
In this episode, Johann offers insights about how to apply a traditional security engineering mindset and red team approach to analyzing the AI/ML attack surface. We also discuss ways that organizations can adapt their traditional security postures to address the unique challenges of ML security.
Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.
Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models
Recon: Automated Red Teaming for GenAI
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard Open Source Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform
48 episodes
All episodes
From Pickle Files to Polyglots: Hidden Risks in AI Supply Chains (41:21)
Rethinking AI Red Teaming: Lessons in Zero Trust and Model Protection (36:52)
AI Security: Map It, Manage It, Master It (41:18)
Agentic AI: Tackling Data, Security, and Compliance Risks (23:22)
AI Vulnerabilities: ML Supply Chains to LLM and Agent Exploits (24:08)
Implementing Enterprise AI Governance: Balancing Ethics, Innovation & Risk for Business Success (38:39)
Unpacking Generative AI Red Teaming and Practical Security Solutions (51:53)
AI Security: Vulnerability Detection and Hidden Model File Risks (38:19)
AI Governance Essentials: Empowering Procurement Teams to Navigate AI Risk (37:41)
Crossroads: AI, Cybersecurity, and How to Prepare for What's Next (33:15)
AI Beyond the Hype: Lessons from Cloud on Risk and Security (41:06)
Generative AI Prompt Hacking and Its Impact on AI Security & Safety (31:59)
The MLSecOps Podcast Season 2 Finale (40:54)
Exploring Generative AI Risk Assessment and Regulatory Compliance (37:37)
MLSecOps Culture: Considerations for AI Development and Security Teams (38:44)