
Content provided by BlueDot Impact. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by BlueDot Impact or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://da.player.fm/legal.

Illustrating Reinforcement Learning from Human Feedback (RLHF)

22:32

This more technical article explains the motivations for a system like RLHF and adds concrete details on how the RLHF approach is applied to neural networks.

While reading, consider which parts of the technical implementation correspond to the 'values coach' and 'coherence coach' from the previous video.

A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website.


Chapters

1. Illustrating Reinforcement Learning from Human Feedback (RLHF) (00:00:00)

2. RLHF: Let’s take it step by step (00:03:16)

3. Pretraining language models (00:03:51)

4. Reward model training (00:05:46)

5. Fine-tuning with RL (00:09:26)

6. Open-source tools for RLHF (00:16:10)

7. What’s next for RLHF? (00:18:20)

8. Further reading (00:21:17)
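The pipeline named in chapters 3–5 (pretraining, reward model training, fine-tuning with RL) can be sketched in miniature. The code below is a hypothetical toy, not taken from the episode: a one-parameter reward model is trained on pairwise human preferences with a Bradley–Terry logistic loss, and a simplified exponential reweighting stands in for full PPO fine-tuning.

```python
import math

# Toy RLHF sketch (illustrative assumptions throughout): each "completion"
# is a single scalar feature x, scored by a one-parameter reward model
# r(x) = w * x, trained on pairwise preferences.

def reward(w, x):
    return w * x

def train_reward_model(prefs, lr=0.1, steps=200):
    """prefs: list of (chosen, rejected) completion features."""
    w = 0.0
    for _ in range(steps):
        for chosen, rejected in prefs:
            # Bradley-Terry: P(chosen > rejected) = sigmoid(r_c - r_r)
            margin = reward(w, chosen) - reward(w, rejected)
            p = 1.0 / (1.0 + math.exp(-margin))
            # gradient ascent on the log-likelihood of the preference
            w += lr * (1.0 - p) * (chosen - rejected)
    return w

# The "human" in this toy always prefers the larger feature value.
prefs = [(1.0, 0.2), (0.9, 0.1), (0.8, 0.4)]
w = train_reward_model(prefs)

# RL fine-tuning step, simplified: reweight a uniform policy over two
# completions by exp(reward / beta), a stand-in for PPO with a KL penalty.
beta = 1.0
policy = {0.2: 0.5, 1.0: 0.5}
weights = {x: p * math.exp(reward(w, x) / beta) for x, p in policy.items()}
z = sum(weights.values())
tuned = {x: v / z for x, v in weights.items()}
```

After training, `w` is positive (the reward model has learned the preferred direction), and the tuned policy puts more probability on the preferred completion than the uniform prior did.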

83 episodes
