SynFlow: Pruning neural networks without any data by iteratively conserving synaptic flow
The Lottery Ticket Hypothesis showed that it is, in principle, possible to prune a neural network at the beginning of training and still reach good performance, if only we knew which weights to prune away. This paper not only explains where other pruning attempts fail, but also provides an algorithm that provably reaches the maximal compression capacity, all without looking at any data!
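As a rough illustration of the idea, here is a minimal numpy sketch of the synaptic-flow score for a stack of linear layers. The paper's actual method uses autodiff on the full network with its weights replaced by their absolute values; the function name and this hand-rolled forward/backward pass are my own simplification, not code from the paper.

```python
import numpy as np

def synflow_scores(weights):
    """Synaptic-flow saliency for a chain of linear layers x -> W_l @ x.

    Computes R = 1^T |W_L| ... |W_1| 1 (the "flow" of an all-ones input
    through the absolute-valued network) and scores each weight by
    |W_ij| * dR/d|W_ij|. Low-scoring weights are pruned first.
    """
    A = [np.abs(W) for W in weights]
    # Forward pass with an all-ones input (no data needed).
    acts = [np.ones(A[0].shape[1])]
    for M in A:
        acts.append(M @ acts[-1])
    # Backward pass: gradient of R = sum(output) w.r.t. each |W_l|.
    g = np.ones(A[-1].shape[0])
    scores = []
    for M, a in zip(reversed(A), reversed(acts[:-1])):
        scores.append(np.outer(g, a) * M)  # |W| * |dR/dW|
        g = M.T @ g
    return scores[::-1]
```

A nice sanity check of the conservation property: because R is linear in each layer's (absolute) weights, the scores in every layer sum to exactly the same total flow R, which is what motivates pruning iteratively rather than in one shot.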
OUTLINE:
0:00 - Intro & Overview
1:00 - Pruning Neural Networks
3:40 - Lottery Ticket Hypothesis
6:00 - Paper Story Overview