Highlights: #200 – Ezra Karger on what superforecasters and experts think about existential risks

22:54
 
Content provided by Rob Wiblin and Keiran Harris and The 80000 Hours team. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Rob Wiblin and Keiran Harris and The 80000 Hours team or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://da.player.fm/legal.

This is a selection of highlights from episode #200 of The 80,000 Hours Podcast. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:

Ezra Karger on what superforecasters and experts think about existential risks

And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.

Highlights:

  • Luisa’s intro (00:00:00)
  • Why we need forecasts about existential risks (00:00:26)
  • Headline estimates of existential and catastrophic risks (00:02:43)
  • What explains disagreements about AI risks? (00:06:18)
  • Learning more doesn't resolve disagreements about AI risks (00:08:59)
  • A lot of disagreement about AI risks is about when AI will pose risks (00:11:31)
  • Cruxes about AI risks (00:15:17)
  • Is forecasting actually useful in the real world? (00:18:24)

Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong

90 episodes
