New episodes every Wednesday!
Content provided by Hajime Morrita and Jun Mukai. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Hajime Morrita and Jun Mukai or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described at https://da.player.fm/legal.
#131: FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness
Episode 414016280 · series 2151064
Morita went down in flames against a PyTorch kernel written in CUDA. Send your comments and feedback via Reddit or the listener mailbox. iTunes reviews and star ratings are appreciated too.
- [2205.14135] FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness
- GitHub – Dao-AILab/flash-attention: Fast and memory-efficient exact attention
- GitHub – NVIDIA/apex: A PyTorch Extension: Tools for easy mixed precision and distributed training in Pytorch
- [2307.08691] FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning
- [2112.05682] Self-attention Does Not Need $O(n^2)$ Memory
- GitHub – tspeterkim/flash-attention-minimal: Flash Attention in ~100 lines of CUDA (forward pass only)
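The last two links above cover the core trick behind FlashAttention: computing exact softmax attention block by block with a running max and normalizer, so the full n×n score matrix never has to be materialized. Here is a minimal NumPy sketch of that online-softmax recurrence (an illustration only, not the authors' CUDA kernel; the function names and block size are made up for this example):

```python
import numpy as np

def naive_attention(Q, K, V):
    # Standard attention: materializes the full n x n score matrix.
    S = Q @ K.T / np.sqrt(Q.shape[-1])
    P = np.exp(S - S.max(axis=-1, keepdims=True))
    P /= P.sum(axis=-1, keepdims=True)
    return P @ V

def blockwise_attention(Q, K, V, block=4):
    # Online-softmax attention: walk over K/V in blocks, keeping only a
    # running row max (m), running softmax denominator (l), and running
    # output (O). Whenever a block raises the max, the old state is
    # rescaled by exp(m_old - m_new), so the result is exact, not an
    # approximation.
    n, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    O = np.zeros((n, V.shape[-1]))
    m = np.full((n, 1), -np.inf)  # running row max
    l = np.zeros((n, 1))          # running softmax denominator
    for j in range(0, n, block):
        S = Q @ K[j:j + block].T * scale            # n x block scores
        m_new = np.maximum(m, S.max(axis=-1, keepdims=True))
        P = np.exp(S - m_new)                       # block-local weights
        alpha = np.exp(m - m_new)                   # rescale old state
        l = alpha * l + P.sum(axis=-1, keepdims=True)
        O = alpha * O + P @ V[j:j + block]
        m = m_new
    return O / l

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((8, 16)) for _ in range(3))
```

The same recurrence is what the "Self-attention Does Not Need O(n²) Memory" paper formalizes, and what the flash-attention-minimal repo implements in CUDA with the K/V blocks staged through shared memory.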
144 episodes