Quoc Le
@quocleix

I also highly recommend this nice video that explains the paper very well:
youtube.com/watch?v=Y8YaU9…

Quoc Le
@quocleix
Nov 12

Want to improve accuracy and robustness of your model? Use unlabeled data!
Our new work uses self-training on unlabeled data to achieve 87.4% top-1 on ImageNet, 1% better than SOTA. Huge gains are seen on harder benchmarks (ImageNet-A, C and P).
Link: arxiv.org/abs/1911.04252 pic.twitter.com/0umSnX7wui

Quoc Le
@quocleix
Nov 12

Example predictions on the robustness benchmarks ImageNet-A, C and P. Black text shows correct predictions made by our model; red text shows incorrect predictions made by our baseline model. pic.twitter.com/eem6tlfyPX

Quoc Le
@quocleix
Nov 12

Full comparison against state-of-the-art on ImageNet. Noisy Student is our method. Noisy Student + EfficientNet is 11% better than your favorite ResNet-50 😉 pic.twitter.com/BhwgJvSOYK

Quoc Le
@quocleix
Nov 12

Method is also super simple (sketched in code below):
1) Train a classifier on ImageNet
2) Infer labels on a much larger unlabeled dataset
3) Train a larger classifier on the combined set
4) Iterate the process, adding noise
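
To make those four steps concrete, here is a runnable toy sketch of the loop, using scikit-learn on synthetic data rather than EfficientNets on ImageNet. The paper's student-side noise (RandAugment, dropout, stochastic depth) is not reproduced here; each round simply gives the student more capacity, standing in for the move to larger models, so this illustrates only the structure of the recipe, not the paper's implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-ins for the labeled set (ImageNet) and the much
# larger unlabeled set used in the paper.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_lab, y_lab = X[:200], y[:200]   # small labeled set
X_unlab = X[200:]                 # much larger unlabeled set

# 1) Train a teacher classifier on the labeled data.
teacher = RandomForestClassifier(n_estimators=50, random_state=0)
teacher.fit(X_lab, y_lab)

for step in range(3):  # 4) iterate the teacher -> student process
    # 2) Infer pseudo-labels on the unlabeled data with the teacher.
    pseudo = teacher.predict(X_unlab)
    # 3) Train a larger student on the combined labeled + pseudo-labeled
    #    set (the paper additionally injects noise into the student here).
    student = RandomForestClassifier(n_estimators=100 * (step + 1),
                                     random_state=0)
    student.fit(np.concatenate([X_lab, X_unlab]),
                np.concatenate([y_lab, pseudo]))
    teacher = student  # the student becomes the next teacher
```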

Connor Shorten
@CShorten30
Nov 13

Thank you so much for sharing this video!!

Endre Moen
@Endre_Moen
Nov 13

Can this technique also work for small datasets? Could be interesting to try.

Sam Sepiol ¯\_(ツ)_/¯
@SamSepiol59
Nov 14

Can class imbalance be mitigated by weighting the loss function, or is adding duplicate data mandatory?
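
For reference on the first option the question raises, per-class loss weighting is commonly done as in this minimal PyTorch sketch; the class counts here are made up, and nothing in it is from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical per-class example counts for an imbalanced dataset.
counts = torch.tensor([900.0, 90.0, 10.0])
weights = counts.sum() / (len(counts) * counts)  # inverse-frequency weights

# CrossEntropyLoss accepts per-class weights directly.
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 3)           # dummy model outputs for 8 examples
targets = torch.randint(0, 3, (8,))  # dummy labels
loss = criterion(logits, targets)    # errors on rare classes count more
```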