Content provided by Dev and Doc. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Dev and Doc or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://da.player.fm/legal.

#22 Explaining Explainable AI (for healthcare) with Dr Annabelle Painter (RSM digital health section Podcast)

58:40


Dev and Doc is joined by guest Dr Annabelle Painter: doctor, CMO, and host of the Royal Society of Medicine Digital Health Podcast. We take a deep dive into explainability and interpretability, with concrete healthcare examples.

Check out Dr. Painter's podcast, which features some amazing guests and great insights into AI in healthcare: https://spotify.link/pzSgxmpD5yb

👋 Hey! If you are enjoying our conversations, reach out and share your thoughts and journey with us. Don't forget to subscribe whilst you're here :)

👨🏻‍⚕️ Doc - Dr. Joshua Au Yeung - https://www.linkedin.com/in/dr-joshua-auyeung/

🤖 Dev - Zeljko Kraljevic - https://twitter.com/zeljkokr

LinkedIn Newsletter

YouTube Channel

Spotify

Apple Podcasts

Substack

For enquiries - 📧 Devanddoc@gmail.com

🎞️ Editor - Dragan Kraljević - https://www.instagram.com/dragan_kraljevic/

🎨 Brand design and art direction - Ana Grigorovici - https://www.behance.net/anagrigorovici027d

Timestamps:

  • 00:00 - Start + highlights
  • 03:47 - Intro
  • 08:16 - Does all AI in healthcare need to be explainable?
  • 15:56 - History and explanation of Explainable/Interpretable AI
  • 20:43 - Gradient-based saliency and heat maps
  • 24:14 - LIME - Local Interpretable Model-agnostic Explanations
  • 30:09 - Nonsensical correlations - When explainability goes wrong
  • 33:57 - Modern explainability - Anthropic
  • 37:15 - Comparing LLMs with the human brain
  • 40:02 - Clinician-AI interaction
  • 47:11 - Where is this all going? Aligning models to ground truth and teaching them to say "I don't know"


24 episodes

