
Content provided by Demetrios. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Demetrios or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process described at https://da.player.fm/legal.

MLSecOps is Fundamental to Robust AISPM // Sean Morgan // #257

42:35
 

Manage episode 437151309 series 3241972

Sean Morgan is an active open-source contributor and maintainer, and is the special interest group lead for TensorFlow Addons. Learn more about the platform for end-to-end AI security at https://protectai.com/.

MLSecOps is Fundamental to Robust AI Security Posture Management (AISPM) // MLOps Podcast #257 with Sean Morgan, Chief Architect at Protect AI.

// Abstract
MLSecOps, the practice of integrating security into the AI/ML lifecycle (think infusing MLOps with DevSecOps practices), is a critical part of any team's AI Security Posture Management. In this talk, we discuss how to threat-model realistic AI/ML security risks, how you can measure your organization's AI security posture, and, most importantly, how you can improve that posture through MLSecOps.

// Bio
Sean Morgan is the Chief Architect at Protect AI. In prior roles he has led production AI/ML deployments in the semiconductor industry, evaluated adversarial machine learning defenses for DARPA research programs, and most recently scaled customers on interactive machine learning solutions at AWS. In his free time, Sean is an active open-source contributor and maintainer, and is the special interest group lead for TensorFlow Addons.

// MLOps Jobs board
https://mlops.pallet.xyz/jobs

// MLOps Swag/Merch
https://mlops-community.myshopify.com/

// Related Links
Sean's GitHub: https://github.com/seanpmorgan
MLSecOps Community: https://community.mlsecops.com/

--------------- ✌️ Connect With Us ✌️ ---------------
Join our Slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Sean on LinkedIn: https://www.linkedin.com/in/seanmorgan/

Timestamps:
[00:00] Sean's preferred coffee
[00:10] Takeaways
[01:39] Register for the Data Engineering for AI/ML Conference now!
[02:21] KubeCon Paris: emphasis on security and AI
[05:00] Concern about malicious data during the training process
[09:29] Model builders, security, pulling foundation models, nuances
[12:13] Hugging Face research on security issues
[15:00] Inference servers exposed; potential for attack
[19:45] Balancing ML and security processes for ease
[23:23] Model artifact security in enterprise machine learning
[25:04] Scanning models and datasets for vulnerabilities
[29:23] Ray's user interface vulnerabilities lead to attacks
[32:07] MLflow vulnerabilities present significant server risks
[36:04] DataOps essential for machine learning security
[37:32] Prioritized security in model and data deployment
[40:46] Automated scanning tool for improved antivirus protection
[42:00] Wrap up
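The episode's segments on model artifact security and scanning models for vulnerabilities center on a concrete risk: loading a pickle-serialized model can execute arbitrary code. As a minimal sketch of the idea, here is a scanner, using only Python's standard library, that flags dangerous `GLOBAL` opcodes in a pickle stream without ever deserializing it. The `UNSAFE_MODULES` denylist and `scan_pickle_bytes` helper are hypothetical names for illustration, not Protect AI's actual scanner.

```python
import pickle
import pickletools

# Hypothetical denylist: modules whose appearance in a pickle's GLOBAL
# opcodes suggests code execution on load (illustrative, not exhaustive).
UNSAFE_MODULES = {"os", "subprocess", "builtins", "posix", "nt"}

def scan_pickle_bytes(data: bytes) -> list[str]:
    """Return suspicious 'module name' references found in a pickle stream.

    Note: this only inspects GLOBAL opcodes (protocols 0-3); protocol 4+
    uses STACK_GLOBAL, which takes names from the stack and requires
    deeper analysis, as real scanners perform.
    """
    findings = []
    # genops() parses the opcode stream without executing anything
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":
            module = str(arg).split(" ")[0]
            if module.split(".")[0] in UNSAFE_MODULES:
                findings.append(str(arg))
    return findings

# A crafted pickle that would run os.system("echo pwned") if loaded --
# never call pickle.loads() on untrusted data to find this out.
malicious = b"cos\nsystem\n(S'echo pwned'\ntR."
print(scan_pickle_bytes(malicious))   # flags the os.system reference

benign = pickle.dumps({"weights": [0.1, 0.2]}, protocol=0)
print(scan_pickle_bytes(benign))      # nothing flagged
```

Production scanners cover many more serialization formats and evasion techniques than this sketch, which is only meant to show why "just a model file" is an attack surface worth an MLSecOps pipeline stage.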


420 episodes

