Content provided by Gus Docker and Future of Life Institute. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Gus Docker and Future of Life Institute or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://da.player.fm/legal.
Liron Shapira on Superintelligence Goals
Liron Shapira joins the podcast to discuss superintelligence goals, what makes AI different from other technologies, risks from centralizing power, and whether AI can defend us from AI.

Timestamps:
00:00 Intelligence as optimization-power
05:18 Will LLMs imitate human values?
07:15 Why would AI develop dangerous goals?
09:55 Goal-completeness
12:53 Alignment to which values?
22:12 Is AI just another technology?
31:20 What is FOOM?
38:59 Risks from centralized power
49:18 Can AI defend us against AI?
56:28 An Apollo program for AI safety
01:04:49 Do we only have one chance?
01:07:34 Are we living in a crucial time?
01:16:52 Would superintelligence be fragile?
01:21:42 Would human-inspired AI be safe?