Content provided by LessWrong. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by LessWrong or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://da.player.fm/legal.
“Shallow review of technical AI safety, 2024” by technicalities, Stag, Stephen McAleese, jordine, Dr. David Mathers
from aisafety.world
The following is a list of live agendas in technical AI safety, updating our post from last year. It is “shallow” in the sense that 1) we are not specialists in almost any of it and that 2) we only spent about an hour on each entry. We also only use public information, so we are bound to be off by some additional factor.
The point is to help anyone look up some of what is happening, or that thing you vaguely remember reading about; to help new researchers orient and know (some of) their options; to help policy people know who to talk to for the actual information; and ideally to help funders see quickly what has already been funded and how much (but this proves to be hard).
“AI safety” means many things. We’re targeting work that intends to prevent very competent [...]
---
Outline:
(01:33) Editorial
(08:15) Agendas with public outputs
(08:19) 1. Understand existing models
(08:24) Evals
(14:49) Interpretability
(27:35) Understand learning
(31:49) 2. Control the thing
(40:31) Prevent deception and scheming
(46:30) Surgical model edits
(49:18) Goal robustness
(50:49) 3. Safety by design
(52:57) 4. Make AI solve it
(53:05) Scalable oversight
(01:00:14) Task decomp
(01:00:28) Adversarial
(01:04:36) 5. Theory
(01:07:27) Understanding agency
(01:15:47) Corrigibility
(01:17:29) Ontology Identification
(01:21:24) Understand cooperation
(01:26:32) 6. Miscellaneous
(01:50:40) Agendas without public outputs this year
(01:51:04) Graveyard (known to be inactive)
(01:52:00) Method
(01:55:09) Other reviews and taxonomies
(01:56:11) Acknowledgments
The original text contained 9 footnotes which were omitted from this narration.
---
First published:
December 29th, 2024
Source:
https://www.lesswrong.com/posts/fAW6RXLKTLHC3WXkS/shallow-review-of-technical-ai-safety-2024
---
Narrated by TYPE III AUDIO.
---