DeepMind
@DeepMind
We also introduce a technique [arxiv.org/abs/1911.11134] for training neural networks that are sparse throughout training from a random initialization - no luck required, all initialization “tickets” are winners. pic.twitter.com/fA7VmXrj20
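The tweet doesn't show the update rule, but the linked paper (RigL) keeps sparsity fixed by periodically dropping the lowest-magnitude active weights and growing new connections where the dense gradient is largest. A minimal NumPy sketch of one such drop/grow step, assuming a boolean `mask` and dense `grads` of the same shape as `weights` (function and argument names are illustrative, not from the paper's code):

```python
import numpy as np

def rigl_step(weights, grads, mask, drop_frac=0.3):
    """One drop/grow update of RigL-style dynamic sparse training.

    weights, grads, and mask share one shape; mask is boolean and
    marks the active connections. The sparsity level is preserved.
    """
    active = np.flatnonzero(mask)
    inactive = np.flatnonzero(~mask)
    n = int(drop_frac * active.size)

    # Drop: deactivate the smallest-magnitude active weights.
    drop = active[np.argsort(np.abs(weights.flat[active]))[:n]]
    mask.flat[drop] = False

    # Grow: activate the inactive connections with the largest
    # gradient magnitude, initialized at zero.
    grow = inactive[np.argsort(-np.abs(grads.flat[inactive]))[:n]]
    mask.flat[grow] = True
    weights.flat[grow] = 0.0

    weights *= mask  # zero out the dropped weights
    return weights, mask
```

Between updates the network trains normally under the mask; per the paper, the update fraction is decayed over the course of training.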
DeepMind
@DeepMind
26 Nov
“Fast Sparse ConvNets”, a collaboration w/ @GoogleAI [arxiv.org/abs/1911.09723], implements fast sparse matrix-matrix multiplication to replace the dense 1x1 convolutions in MobileNet architectures. The sparse networks are 66% of the size and 1.5-2x faster than their dense equivalents. pic.twitter.com/poDKMzfA4u
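For context on why this works: a 1x1 convolution over a (C_in, H, W) feature map is exactly a matrix multiply of the (C_out, C_in) weight matrix with the input reshaped to (C_in, H*W), so a sparse weight matrix turns it into a sparse-times-dense matrix product. A toy CSR sketch in NumPy (the paper's kernels are hand-vectorized CPU code; names here are illustrative):

```python
import numpy as np

def dense_to_csr(w):
    """Pack a dense (C_out, C_in) weight matrix into CSR arrays."""
    vals, cols, indptr = [], [], [0]
    for row in w:
        nz = np.flatnonzero(row)
        vals.extend(row[nz])
        cols.extend(nz)
        indptr.append(len(cols))
    return (np.asarray(vals, dtype=w.dtype),
            np.asarray(cols, dtype=np.int64),
            np.asarray(indptr, dtype=np.int64))

def sparse_1x1_conv(vals, cols, indptr, x):
    """Compute Y = W @ X with W in CSR form.

    x is the input feature map reshaped to (C_in, H*W); each output
    row accumulates only the input rows its nonzero weights touch.
    """
    y = np.zeros((len(indptr) - 1, x.shape[1]), dtype=x.dtype)
    for i in range(len(indptr) - 1):
        for j in range(indptr[i], indptr[i + 1]):
            y[i] += vals[j] * x[cols[j]]
    return y
```

Skipping the zero weights cuts both parameter storage and FLOPs, which is where the size and speed wins quoted in the tweet come from.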