The Lottery Ticket Hypothesis

Linear Digressions · 3,116 subscribers
19:45
Recent research into neural networks reveals that sometimes, not all parts of a neural net are equally responsible for the performance of the network overall. Instead, it seems that (in some neural nets, at least) there are smaller subnetworks where most of the predictive power resides. The fascinating thing is that, for some of these subnetworks (so-called “winning lottery tickets”), it’s not the training process that makes them good at their classification or regression tasks: they just happened to be initialized in a way that was very effective. This changes the way we think about what training might be doing, in a pretty fundamental way. Sometimes, instead of crafting a good fit from whole cloth, training might be finding the parts of the network that had predictive power to begin with, and isolating and strengthening them. This research is quite recent, having only come to prominence in the last year, but it nonetheless challenges our notions about what it means to train a machine learning model.
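For the curious, here is a rough sketch of how such winning tickets are found in practice: iterative magnitude pruning, the procedure from the original lottery-ticket paper (Frankle & Carbin, 2019). This is an illustrative Python/PyTorch sketch, not code from the episode; the model and the train_fn callable are assumed to be supplied by the caller, and names like find_winning_ticket and prune_frac are hypothetical.

# Sketch of iterative magnitude pruning to find a candidate "winning ticket".
# Assumes a PyTorch model still holding its initial weights and an external
# training routine; only the prune-and-rewind logic is the point here.
import copy
import torch

def find_winning_ticket(model, train_fn, prune_frac=0.2, rounds=5):
    init_state = copy.deepcopy(model.state_dict())       # remember the initialization
    masks = {name: torch.ones_like(p)                    # every weight starts alive
             for name, p in model.named_parameters() if p.dim() > 1}

    for _ in range(rounds):
        train_fn(model)                                  # 1. train to convergence
        with torch.no_grad():
            for name, p in model.named_parameters():
                if name not in masks:
                    continue
                # 2. prune the smallest-magnitude surviving weights, per layer
                alive = p[masks[name].bool()].abs()
                cutoff = alive.quantile(prune_frac)
                masks[name] *= (p.abs() > cutoff).float()
            model.load_state_dict(init_state)            # 3. rewind survivors to theta_0
            for name, p in model.named_parameters():
                if name in masks:
                    p.mul_(masks[name])                  # 4. zero out the pruned weights
    return masks

One caveat: in the paper the mask is also enforced during retraining (pruned weights are held at zero after every optimizer step), so a real train_fn would need to re-apply the masks itself. The striking empirical result is that the surviving subnetwork, rewound to its original initialization, can often be trained to match the full network’s accuracy.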

293 episodes

