Content provided by Gus Docker and the Future of Life Institute. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Gus Docker and the Future of Life Institute or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described at https://da.player.fm/legal.

Dan Hendrycks on Catastrophic AI Risks

Duration: 2:07:24
Episode 382010909 · Series 1334308
Dan Hendrycks joins the podcast again to discuss X.ai, how AI risk thinking has evolved, malicious use of AI, AI race dynamics between companies and between militaries, making AI organizations safer, and how representation engineering could help us understand AI traits like deception. You can learn more about Dan's work at https://www.safe.ai

Timestamps:
00:00 X.ai - Elon Musk's new AI venture
02:41 How AI risk thinking has evolved
12:58 AI bioengineering
19:16 AI agents
24:55 Preventing autocracy
34:11 AI race - corporations and militaries
48:04 Bulletproofing AI organizations
1:07:51 Open-source models
1:15:35 Dan's textbook on AI safety
1:22:58 Rogue AI
1:28:09 LLMs and value specification
1:33:14 AI goal drift
1:41:10 Power-seeking AI
1:52:07 AI deception
1:57:53 Representation engineering

214 episodes

