
Deploying TinyML Models at Scale: Insights on Monitoring and Automation with Alessandro Grande of Edge Impulse

20:34
 
Content provided by EDGE AI FOUNDATION. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by EDGE AI FOUNDATION or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://da.player.fm/legal.

Unlock the secrets of deploying TinyML models in real-world scenarios with Alessandro Grande, Head of Product at Edge Impulse. Curious about how TinyML has evolved since its early days? Alessandro takes us through a journey from his initial demos at Arm to the sophisticated, scalable deployments we see today. Learn why continuous model monitoring is not just important but essential for the reliability and functionality of machine learning applications, especially in large-scale IoT deployments. Alessandro shares actionable insights on how to maintain a continuous lifecycle for ML models to handle unpredictable changes and ensure sustained success.
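To make the monitoring idea concrete, here is a minimal, hypothetical sketch (not Edge Impulse's actual tooling; class name, window size, and thresholds are illustrative assumptions) of how a device-side agent might track prediction confidence and flag drift so the rest of the model lifecycle can kick in:

# Hypothetical device-side monitor: names and thresholds are illustrative,
# not an Edge Impulse API. It keeps a rolling average of prediction
# confidence and flags the device when confidence drops well below the
# level observed during validation.
from collections import deque
from statistics import mean

class ConfidenceMonitor:
    def __init__(self, baseline_confidence: float, window: int = 200, tolerance: float = 0.1):
        self.baseline = baseline_confidence   # average confidence seen at validation time
        self.recent = deque(maxlen=window)    # rolling window of live confidences
        self.tolerance = tolerance            # allowed drop before we raise a flag

    def observe(self, confidence: float) -> bool:
        """Record one inference; return True if the model looks degraded."""
        self.recent.append(confidence)
        if len(self.recent) < self.recent.maxlen:
            return False                      # not enough data yet
        return mean(self.recent) < self.baseline - self.tolerance

# Example: a model validated at ~0.92 average confidence starts drifting.
monitor = ConfidenceMonitor(baseline_confidence=0.92)
for conf in [0.9] * 120 + [0.6] * 80:
    degraded = monitor.observe(conf)
print("flag for re-labeling / retraining:", degraded)

A flag like this would typically trigger the data-collection and re-labeling steps discussed later in the episode rather than any automatic change on the device.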
Delve into the intricacies of health-related use cases with a spotlight on the HIFE AI cough monitoring system. Discover best practices for data collection and preparation, including identifying outliers and leveraging Generative AI like ChatGPT 4.0 for efficient data labeling. We also emphasize the importance of building scalable infrastructure for automated ML development. Learn how continuous integration and continuous deployment (CI/CD) pipelines can enhance the lifecycle management of ML models, ensuring security and scalability from day one. This episode is a treasure trove of practical advice for anyone tackling the challenges of deploying ML models in diverse environments.
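As a loose illustration of the CI/CD point (a hand-rolled sketch under assumed names, not any specific vendor pipeline), a deployment job might refuse to promote a retrained model unless it matches the version currently in production on a held-out test set:

# Hypothetical CI gate: evaluate a candidate model against the production
# baseline on a held-out set and promote only if accuracy does not regress.
# Model loading and firmware packaging are stubbed out; only the gating
# logic is shown.
from typing import Callable, Sequence, Tuple

Example = Tuple[Sequence[float], int]   # (features, true label)

def accuracy(predict: Callable[[Sequence[float]], int], holdout: Sequence[Example]) -> float:
    correct = sum(1 for x, y in holdout if predict(x) == y)
    return correct / len(holdout)

def should_promote(candidate, production, holdout, margin: float = 0.01) -> bool:
    """Promote only if the candidate matches or beats production within a small margin."""
    return accuracy(candidate, holdout) >= accuracy(production, holdout) - margin

if __name__ == "__main__":
    # Toy stand-ins for real models and data, purely to exercise the gate.
    holdout = [([0.2], 0), ([0.8], 1), ([0.9], 1), ([0.1], 0)]
    production_model = lambda x: int(x[0] > 0.5)
    candidate_model = lambda x: int(x[0] > 0.6)
    if should_promote(candidate_model, production_model, holdout):
        print("promote candidate to production")   # a real pipeline would push firmware here
    else:
        print("keep production model; flag for review")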

Send us a text

Support the show

Learn more about the EDGE AI FOUNDATION - edgeaifoundation.org


Chapters

1. Deploying TinyML Models at Scale: Insights on Monitoring and Automation with Alessandro Grande of Edge Impulse (00:00:00)

2. Model Monitoring in Real-World Deployment (00:00:05)

3. Health Workflow and Data Collection (00:11:26)

4. Automated Model Deployment in Production (00:18:14)

70 episodes
