DeepMind · Nov 26
“Fast Sparse ConvNets”, a collaboration w/ [], implements fast sparse matrix-matrix multiplication to replace dense 1x1 convolutions in MobileNet architectures. The sparse networks are 66% of the size of, and 1.5-2x faster than, their dense equivalents.
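For illustration only (this is not the paper's optimized kernel), a toy NumPy/SciPy sketch of the idea: a 1x1 convolution over an HxW feature map is exactly a matrix multiply, so pruning the weight matrix turns it into sparse-matrix * dense-matrix multiplication (SpMM). The magnitude-based pruning and the specific shapes below are assumptions chosen for the demo; the ~66% pruning level echoes the size figure in the tweet.

```python
import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(0)
c_in, c_out, h, w = 64, 128, 14, 14

# A 1x1 conv: reshape activations to (C_in, H*W) and left-multiply
# by the (C_out, C_in) weight matrix.
x = rng.standard_normal((c_in, h * w))       # dense activations
dense_w = rng.standard_normal((c_out, c_in))  # dense 1x1 conv weights

# Illustrative magnitude pruning: zero out the smallest 66% of weights,
# then store the survivors in CSR format for SpMM.
threshold = np.quantile(np.abs(dense_w), 0.66)
sparse_w = csr_matrix(np.where(np.abs(dense_w) >= threshold, dense_w, 0.0))

y_dense = dense_w @ x    # dense 1x1 convolution as a matmul
y_sparse = sparse_w @ x  # the SpMM replacement, same output shape
```

In a real kernel the win comes from skipping the zeroed weights entirely; here SciPy's CSR matmul plays that role.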
DeepMind
We also introduce a technique [] for training neural networks that are sparse throughout training from a random initialization - no luck required, all initialization “tickets” are winners.
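A minimal sketch of the general idea of sparse-from-initialization training, not the linked paper's actual method: apply a fixed random sparsity mask at initialization and re-apply it after every update, so the network is sparse for the entire run. The toy regression task, mask scheme, and hyperparameters are all assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear regression trained under a fixed random sparsity mask.
n_in, n_out, sparsity = 32, 8, 0.8
mask = rng.random((n_out, n_in)) > sparsity   # keep ~20% of weights
w = rng.standard_normal((n_out, n_in)) * mask  # sparse from initialization

x = rng.standard_normal((n_in, 256))
true_w = rng.standard_normal((n_out, n_in))
y = true_w @ x

lr = 1e-2
initial_loss = np.mean((w @ x - y) ** 2)
for _ in range(200):
    grad = ((w @ x - y) @ x.T) / x.shape[1]  # mean-squared-error gradient
    w -= lr * grad
    w *= mask  # discard updates outside the mask; w never densifies

final_loss = np.mean((w @ x - y) ** 2)
```

The random mask never changes here; the masked weights stay exactly zero while the surviving ~20% still fit the target as well as they can.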