Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
This paper explains Anthropic’s constitutional AI approach, which is largely an extension of RLHF, but with AIs replacing human demonstrators and human evaluators.
Everything in this paper is relevant to this week's learning objectives, and we recommend you read it in its entirety. It summarises the limitations of conventional RLHF, explains the constitutional AI approach, shows how it performs, and suggests where future research might be directed.
If you are in a rush, focus on sections 1.2, 3.1, 3.4, 4.1, 6.1, 6.2.
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website.
Chapters
1. Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback (00:00:00)
2. Abstract (00:00:30)
3. 3 Open Problems and Limitations of RLHF (00:01:23)
4. 3.1 Challenges with Obtaining Human Feedback (00:03:17)
5. 3.1.1 Misaligned Humans: Evaluators may Pursue the Wrong Goals (00:03:38)
6. 3.1.2 Good Oversight is Difficult (00:06:51)
7. 3.1.3 Data Quality (00:11:08)
8. 3.1.4 Limitations of Feedback Types (00:12:59)
9. 3.2 Challenges with the Reward Model (00:17:03)
10. 3.2.1 Problem Misspecification (00:17:27)
11. 3.2.2 Reward Misgeneralization and Hacking (00:20:24)
12. 3.2.3 Evaluating Reward Models (00:22:30)
13. 3.3 Challenges with the Policy (00:23:49)
14. 3.3.1 Robust Reinforcement Learning is Difficult (00:24:13)
15. 3.3.2 Policy Misgeneralization (00:26:23)
16. 3.3.3 Distributional Challenges (00:27:35)
17. 3.4 Challenges with Jointly Training the Reward Model and Policy (00:29:54)
85 episodes
All episodes
Introduction to Mechanistic Interpretability 11:45
Illustrating Reinforcement Learning from Human Feedback (RLHF) 22:32
Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback 32:19
Constitutional AI: Harmlessness from AI Feedback 1:01:49
Intro to Brain-Like-AGI Safety 1:02:10
Empirical Findings Generalize Surprisingly Far 11:32
Two-Turn Debate Doesn’t Help Humans Answer Hard Reading Comprehension Questions 16:39
Least-To-Most Prompting Enables Complex Reasoning in Large Language Models 16:08
ABS: Scanning Neural Networks for Back-Doors by Artificial Brain Stimulation 16:08
Imitative Generalisation (AKA ‘Learning the Prior’) 18:14