Quoc Le Nov 12
Want to improve accuracy and robustness of your model? Use unlabeled data! Our new work uses self-training on unlabeled data to achieve 87.4% top-1 on ImageNet, 1% better than SOTA. Huge gains are seen on harder benchmarks (ImageNet-A, C and P). Link:
Quoc Le Nov 12
Replying to @quocleix
Example predictions on the robustness benchmarks ImageNet-A, C and P. Black text marks correct predictions by our model; red text marks incorrect predictions by our baseline model.
Quoc Le Nov 12
Replying to @quocleix
Full comparison against state-of-the-art on ImageNet. Noisy Student is our method. Noisy Student + EfficientNet is 11% better than your favorite ResNet-50 😉
Quoc Le Nov 12
Replying to @quocleix
Method is also super simple: 1) Train a classifier on ImageNet 2) Infer labels on a much larger unlabeled dataset 3) Train a larger classifier on the combined set 4) Iterate the process, adding noise
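A minimal sketch of that loop in PyTorch, with toy tensors standing in for ImageNet and the unlabeled corpus, and dropout standing in for the noise (the paper uses EfficientNet teachers/students and noises the student with RandAugment, dropout and stochastic depth):

```python
import torch
import torch.nn as nn

def make_model(width):
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(3 * 32 * 32, width),
        nn.ReLU(),
        nn.Dropout(p=0.5),   # the "noise" applied to the student
        nn.Linear(width, 10),
    )

def train(model, x, y, epochs=3):
    opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    model.train()            # dropout active: student trains under noise
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model

# Toy stand-ins: a small labeled set and a larger unlabeled set.
x_labeled = torch.randn(256, 3, 32, 32)
y_labeled = torch.randint(0, 10, (256,))
x_unlabeled = torch.randn(1024, 3, 32, 32)

# 1) Train a teacher classifier on the labeled set.
width = 128
teacher = train(make_model(width), x_labeled, y_labeled)

for _ in range(3):  # 4) iterate the process
    # 2) Infer pseudo-labels on the unlabeled set (teacher is not noised).
    teacher.eval()
    with torch.no_grad():
        pseudo = teacher(x_unlabeled).argmax(dim=1)
    # 3) Train a larger, noised student on the combined set;
    #    the student then becomes the next round's teacher.
    width *= 2
    x_all = torch.cat([x_labeled, x_unlabeled])
    y_all = torch.cat([y_labeled, pseudo])
    teacher = train(make_model(width), x_all, y_all)
```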
Quoc Le
I also highly recommend this nice video that explains the paper very well:
Connor Shorten Nov 13
Replying to @quocleix
Thank you so much for sharing this video!!
Endre Moen Nov 13
Replying to @quocleix
Can this technique also work for small datasets? Could be interesting to try.
Sam Sepiol ¯\_(ツ)_/¯ Nov 14
Replying to @quocleix
Can class imbalance be mitigated by weighting the loss function, or is adding duplicate data mandatory?
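For what it's worth, the loss-weighting route is easy to try: PyTorch's CrossEntropyLoss accepts a per-class weight tensor, so duplicating data isn't the only option. A minimal sketch with hypothetical class counts (torch.utils.data.WeightedRandomSampler would be the oversampling alternative):

```python
import torch
import torch.nn as nn

# Hypothetical class counts; replace with counts from your (pseudo-)labeled set.
class_counts = torch.tensor([900.0, 90.0, 10.0])
weights = class_counts.sum() / (len(class_counts) * class_counts)
loss_fn = nn.CrossEntropyLoss(weight=weights)   # rare classes weigh more

logits = torch.randn(8, 3)                      # dummy model outputs
targets = torch.randint(0, 3, (8,))
print(loss_fn(logits, targets))
```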