AI Control: Improving Safety Despite Intentional Subversion
We’ve released a paper, AI Control: Improving Safety Despite Intentional Subversion. This paper explores techniques that prevent AI catastrophes even if AI instances are colluding to subvert the safety techniques. In this post:
- We summarize the paper;
- We compare our methodology to the methodology of other safety papers.
Source:
https://www.alignmentforum.org/posts/d9FJHawgkiMSPjagR/ai-control-improving-safety-despite-intentional-subversion
Narrated for AI Safety Fundamentals by Perrin Walker
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website.
Chapters
1. AI Control: Improving Safety Despite Intentional Subversion (00:00:00)
2. Paper summary (00:02:41)
3. Setup (00:02:43)
4. Evaluation methodology (00:04:59)
5. Results (00:06:25)
6. Relationship to other work (00:10:51)
7. Future work (00:17:50)