
Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback

32:19
 
Content provided by BlueDot Impact. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by BlueDot Impact or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://da.player.fm/legal.

This paper explains Anthropic's constitutional AI approach, which is largely an extension of RLHF but with AIs replacing human demonstrators and human evaluators.
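
For readers who want the mechanics, here is a minimal Python sketch of that loop under our own illustrative assumptions: generate, critique_and_revise, and ai_preference are hypothetical stubs standing in for real model calls, not code from the paper or from Anthropic.

    # Illustrative sketch only; every model call below is a hypothetical stub.
    CONSTITUTION = [
        "Choose the response that is least harmful.",
        "Choose the response that is most honest and helpful.",
    ]

    def generate(prompt: str) -> str:
        """Hypothetical stand-in for a base language model call."""
        return f"Draft answer to: {prompt}"

    def critique_and_revise(draft: str, principle: str) -> str:
        """Hypothetical stand-in for the model critiquing and rewriting
        its own draft against one constitutional principle."""
        return f"{draft} [revised to satisfy: {principle}]"

    def ai_preference(option_a: str, option_b: str) -> str:
        """Hypothetical AI evaluator replacing the human labeller: picks the
        response that better follows the constitution, yielding preference
        data for reward-model training (placeholder heuristic only)."""
        return option_b if len(option_b) >= len(option_a) else option_a

    def constitutional_step(prompt: str) -> str:
        draft = generate(prompt)
        revised = draft
        for principle in CONSTITUTION:
            revised = critique_and_revise(revised, principle)
        # The (draft, revised) pair plus the AI preference label replace the
        # human demonstrations and comparisons used in conventional RLHF.
        return ai_preference(draft, revised)

    print(constitutional_step("What are the limitations of RLHF?"))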

Everything in this paper is relevant to this week's learning objectives, and we recommend you read it in its entirety. It summarises the limitations of conventional RLHF, explains the constitutional AI approach, shows how it performs, and suggests where future research might be directed.

If you are in a rush, focus on sections 1.2, 3.1, 3.4, 4.1, 6.1, 6.2.

A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website.


Chapters

1. Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback (00:00:00)

2. Abstract (00:00:30)

3. 3 Open Problems and Limitations of RLHF (00:01:23)

4. 3.1 Challenges with Obtaining Human Feedback (00:03:17)

5. 3.1.1 Misaligned Humans: Evaluators may Pursue the Wrong Goals (00:03:38)

6. 3.1.2 Good Oversight is Difficult (00:06:51)

7. 3.1.3 Data Quality (00:11:08)

8. 3.1.4 Limitations of Feedback Types (00:12:59)

9. 3.2 Challenges with the Reward Model (00:17:03)

10. 3.2.1 Problem Misspecification (00:17:27)

11. 3.2.2 Reward Misgeneralization and Hacking (00:20:24)

12. 3.2.3 Evaluating Reward Models (00:22:30)

13. 3.3 Challenges with the Policy (00:23:49)

14. 3.3.1 Robust Reinforcement Learning is Difficult (00:24:13)

15. 3.3.2 Policy Misgeneralization (00:26:23)

16. 3.3.3 Distributional Challenges (00:27:35)

17. 3.4 Challenges with Jointly Training the Reward Model and Policy (00:29:54)

