DeepMind
@DeepMind
“Fast Sparse ConvNets”, a collaboration w/ @GoogleAI [arxiv.org/abs/1911.09723], implements fast Sparse Matrix-Matrix Multiplication to replace dense 1x1 convolutions in MobileNet architectures. The sparse networks are 66% the size and 1.5-2x faster than their dense equivalents. pic.twitter.com/poDKMzfA4u
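A 1x1 convolution over an NHWC feature map is equivalent to a matrix multiply between the (C_out x C_in) weight matrix and the (C_in x H*W) matrix of pixels, which is why a sparse weight matrix lets the layer run as a sparse-times-dense matmul (SpMM). A minimal NumPy/SciPy sketch of that equivalence follows; the function names and the SciPy CSR setup are illustrative only, not the XNNPACK kernels used in the paper.

```python
# Sketch: a sparse 1x1 convolution is an SpMM over the pixel matrix.
import numpy as np
from scipy import sparse

def dense_1x1_conv(x_hwc, w_oc_ic):
    """Reference dense 1x1 conv: x is (H, W, C_in), w is (C_out, C_in)."""
    h, w_, c_in = x_hwc.shape
    out = x_hwc.reshape(h * w_, c_in) @ w_oc_ic.T        # (H*W, C_out)
    return out.reshape(h, w_, -1)

def sparse_1x1_conv(x_hwc, w_csr):
    """Same layer with a CSR weight matrix applied via SpMM."""
    h, w_, c_in = x_hwc.shape
    pixels = x_hwc.reshape(h * w_, c_in).T               # (C_in, H*W)
    out = w_csr @ pixels                                 # sparse x dense -> (C_out, H*W)
    return np.asarray(out).T.reshape(h, w_, -1)

rng = np.random.default_rng(0)
x = rng.standard_normal((56, 56, 64)).astype(np.float32)
w = rng.standard_normal((128, 64)).astype(np.float32)
w[rng.random(w.shape) < 0.9] = 0.0                       # ~90% sparse weights
w_csr = sparse.csr_matrix(w)

assert np.allclose(dense_1x1_conv(x, w), sparse_1x1_conv(x, w_csr), atol=1e-4)
```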
DeepMind
@DeepMind
26 Nov
We also introduce a technique [arxiv.org/abs/1911.11134] for training neural networks that are sparse throughout training from a random initialization - no luck required, all initialization “tickets” are winners. pic.twitter.com/fA7VmXrj20
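The technique keeps a fixed parameter budget throughout training and periodically rewires the sparse connectivity: drop the smallest-magnitude active weights and grow new connections where the gradient magnitude is largest. Below is a hedged NumPy sketch of one such drop/grow step, assuming a single weight matrix and a fixed drop fraction; the function names and this simplified schedule are illustrative, not the authors' implementation.

```python
# Sketch of one drop/grow rewiring step for sparse-from-scratch training.
import numpy as np

def update_mask(weights, dense_grad, mask, drop_fraction=0.3):
    """Return a new 0/1 mask with the same number of active weights."""
    active = mask.astype(bool)
    n_swap = int(drop_fraction * active.sum())

    # Drop: the n_swap active weights with the smallest magnitude.
    drop_scores = np.where(active, np.abs(weights), np.inf)
    drop_idx = np.argsort(drop_scores, axis=None)[:n_swap]

    # Grow: the n_swap inactive positions with the largest gradient magnitude.
    grow_scores = np.where(active, -np.inf, np.abs(dense_grad))
    grow_idx = np.argsort(grow_scores, axis=None)[-n_swap:]

    new_mask = mask.ravel().copy()
    new_mask[drop_idx] = 0.0
    new_mask[grow_idx] = 1.0   # newly grown weights would typically start at zero
    return new_mask.reshape(mask.shape)

rng = np.random.default_rng(0)
w = rng.standard_normal((128, 64)).astype(np.float32)
mask = (rng.random(w.shape) < 0.1).astype(np.float32)    # start ~90% sparse
grad = rng.standard_normal(w.shape).astype(np.float32)   # stand-in dense gradient

n_before = int(mask.sum())
mask = update_mask(w * mask, grad, mask)
assert int(mask.sum()) == n_before                        # sparsity level preserved
```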
Bharath
@Bharath09095865
26 Nov
Awesome work, DeepMind and Google. Are you going to open source the model?
Erich Elsen
@erich_elsen
26 Nov
The models are open sourced: github.com/google-researc…
The kernels are also open source as part of XNNPACK:
github.com/google/XNNPACK