Quoc Le · Nov 25
AdvProp: One weird trick to use adversarial examples to reduce overfitting. Key idea is to use two BatchNorms, one for normal examples and another one for adversarial examples. Significant gains on ImageNet and other test sets.
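A minimal PyTorch sketch of the two-BatchNorm idea in this tweet; the class name, layer sizes, and the adversarial flag are illustrative assumptions, not code from the AdvProp paper or its authors:

import torch
import torch.nn as nn

class DualBNConv(nn.Module):
    """Conv block that keeps separate BatchNorm statistics for clean and
    adversarial inputs, as described in the tweet."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.bn_clean = nn.BatchNorm2d(out_ch)  # statistics for normal examples
        self.bn_adv = nn.BatchNorm2d(out_ch)    # statistics for adversarial examples
        self.act = nn.ReLU(inplace=True)

    def forward(self, x, adversarial=False):
        x = self.conv(x)
        # Route the batch through the BatchNorm that matches its distribution.
        x = self.bn_adv(x) if adversarial else self.bn_clean(x)
        return self.act(x)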
Quoc Le · Nov 25
Replying to @quocleix
Many of us tried to use adversarial examples as data augmentation and observed a drop in accuracy. And it seems that simply using two BatchNorms overcomes this mysterious drop in accuracy.
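A rough sketch of how adversarial examples could be used as extra training signal while routing them through the auxiliary BatchNorm, assuming a model whose forward() accepts the adversarial flag from the sketch above; the single-step attack and the [0, 1] pixel range are simplifying assumptions for brevity, not the paper's exact recipe:

import torch
import torch.nn.functional as F

def advprop_step(model, images, labels, optimizer, epsilon=2/255):
    # Generate adversarial examples against the auxiliary-BN branch
    # (single-step sign-gradient attack; assumes inputs scaled to [0, 1]).
    images_adv = images.clone().detach().requires_grad_(True)
    loss_attack = F.cross_entropy(model(images_adv, adversarial=True), labels)
    grad, = torch.autograd.grad(loss_attack, images_adv)
    images_adv = (images_adv + epsilon * grad.sign()).clamp(0, 1).detach()

    # Joint update: clean batch through the main BNs, adversarial batch
    # through the auxiliary BNs.
    optimizer.zero_grad()
    loss_clean = F.cross_entropy(model(images, adversarial=False), labels)
    loss_adv = F.cross_entropy(model(images_adv, adversarial=True), labels)
    (loss_clean + loss_adv).backward()
    optimizer.step()
    return loss_clean.item(), loss_adv.item()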
Quoc Le · Nov 25
Replying to @quocleix
As a data augmentation method, adversarial examples are more general than other image processing techniques. So I expect AdvProp to be useful everywhere (language, structured data etc.), not just image recognition.
Quoc Le · Nov 26
Replying to @quocleix
AdvProp improves accuracy for a wide range of image models, from small to large. But the improvement seems bigger when the model is larger.
Quoc Le
Pretrained checkpoints in PyTorch: h/t to
Pretrained EfficientNet, MixNet, MobileNetV3, MNASNet A1 and B1, FBNet, Single-Path NAS - rwightman/gen-efficientnet-pytorch
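One way to pull a pretrained checkpoint from the linked repo is through torch.hub; the entry-point name efficientnet_b0 is an assumption based on the repo's hub config, so check its README for the models actually exposed:

import torch

# Load a pretrained model from rwightman/gen-efficientnet-pytorch
# (model name 'efficientnet_b0' is assumed; see the repo's hubconf/README).
model = torch.hub.load('rwightman/gen-efficientnet-pytorch', 'efficientnet_b0',
                       pretrained=True)
model.eval()

# Quick smoke test on a random ImageNet-sized input.
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # expected: torch.Size([1, 1000])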
Ross Wightman · Nov 26
Replying to @quocleix @PyTorch
I'm not a Google team, just one dude who likes PyTorch and enjoys making good models available to other PyTorch users. So, thanks to and all team members for making lots of good models available.