
Half precision

PyTorch Developer Podcast

Duration: 18:00

Content provided by PyTorch, Edward Yang, and Team PyTorch. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by PyTorch, Edward Yang, and Team PyTorch or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://da.player.fm/legal.

In this episode I talk about the reduced-precision floating point formats float16 (a.k.a. half precision) and bfloat16. I'll discuss what floating point numbers are, how these two formats differ, and some of the practical considerations that arise when you are working with numeric code in PyTorch that also needs to work in reduced precision. Did you know that we do all CUDA computations in float32, even if the source tensors are stored as float16? Now you know!
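
To make the float16/bfloat16 trade-off concrete, here is a small sketch (assuming a recent PyTorch) that inspects both formats with torch.finfo and shows float16 running out of range where bfloat16 runs out of precision:

```python
import torch

# float16: 1 sign bit, 5 exponent bits, 10 mantissa bits -> narrow range, finer steps.
# bfloat16: 1 sign bit, 8 exponent bits, 7 mantissa bits -> float32's range, coarser steps.
for dtype in (torch.float16, torch.bfloat16):
    info = torch.finfo(dtype)
    print(f"{dtype}: eps={info.eps}, max={info.max}, tiny={info.tiny}")

# Range: 1e5 overflows float16 (whose max is 65504) but fits in bfloat16.
print(torch.tensor(1e5, dtype=torch.float16))   # inf
print(torch.tensor(1e5, dtype=torch.bfloat16))  # ~1e5, rounded to the nearest representable value

# Precision: bfloat16 keeps only about 3 decimal digits.
print(torch.tensor(1.001, dtype=torch.float16))   # ~1.0010
print(torch.tensor(1.001, dtype=torch.bfloat16))  # 1.0 -- the 0.001 rounds away
```

The storage-versus-compute distinction mentioned above is also visible in PyTorch's mixed-precision API. A minimal autocast sketch (an illustration, not the episode's own code; it assumes a CUDA device and PyTorch 1.10+):

```python
import torch

model = torch.nn.Linear(128, 64).cuda()
x = torch.randn(32, 128, device="cuda")  # stored as float32

# Under autocast, eligible ops such as this linear layer's matmul run with
# float16 inputs and outputs, while the underlying kernels may still
# accumulate at higher precision internally.
with torch.autocast(device_type="cuda", dtype=torch.float16):
    y = model(x)

print(y.dtype)  # torch.float16
```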

Further reading.
