
Content provided by TWIML and Sam Charrington. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by TWIML and Sam Charrington or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described at https://da.player.fm/legal.

Localizing and Editing Knowledge in LLMs with Peter Hase - #679

49:46
 
 


Today we're joined by Peter Hase, a fifth-year PhD student in the University of North Carolina NLP lab. We discuss "scalable oversight" and the importance of developing a deeper understanding of how large neural networks make decisions. We learn how interpretability researchers probe the weight matrices and activations of these models, and explore the two schools of thought regarding how LLMs store knowledge. Finally, we discuss the importance of deleting sensitive information from model weights, and how "easy-to-hard generalization" could increase the risk of releasing open-source foundation models.

The complete show notes for this episode can be found at twimlai.com/go/679.
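As background for the probing discussion above: a common interpretability technique is the linear probe, where a simple classifier is trained on a model's hidden activations to test whether some property is linearly decodable from them. The episode does not provide code, so the following is a minimal, self-contained sketch using synthetic vectors in place of real LLM activations; the data, dimensions, and training settings are illustrative assumptions, not taken from the episode.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for LLM hidden states: 400 activation vectors of dimension 16,
# with a binary property linearly encoded along a random direction w_true.
d, n = 16, 400
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)

# Hold out a test split so probe accuracy reflects generalization,
# not memorization of the training activations.
X_train, y_train = X[:300], y[:300]
X_test, y_test = X[300:], y[300:]

# The probe itself is just logistic regression trained by gradient descent.
w = np.zeros(d)
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(X_train @ w)))        # predicted probabilities
    w -= 0.5 * X_train.T @ (p - y_train) / len(y_train)  # gradient step

# High held-out accuracy suggests the property is linearly readable
# from the representations.
probe_acc = float(((X_test @ w > 0) == (y_test == 1)).mean())
print(f"held-out probe accuracy: {probe_acc:.2f}")
```

Because the toy property is linearly separable by construction, the probe recovers it almost perfectly; with real model activations, the interesting question is how accuracy varies across layers and properties.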


710 episodes


