Andrej Karpathy
@karpathy
An interesting trend at this year's CVPR is the number of new papers on self-supervised learning. Andrew Zisserman gave a nice tutorial: project.inria.fr/paiss/files/20… though there is a lot more geometry-related work as well (e.g. self-supervised depth & friends).

ganesh
@wittyperceptron
Jun 21
There was a workshop on self-supervised learning at ICML on Saturday. Andrew Zisserman and Alexei Efros gave great talks, and it was recorded!
twitter.com/wittyperceptro…

Andrej Karpathy
@karpathy
Jun 21
Haha, you’re right, it all kind of blended :)

ml_shgidi
@shgidi
Jun 21
Very interesting! I once wrote a blogpost series about it - link.medium.com/IhOvrqFEzX

Nova Introvert
@NovaIntrovert
Jun 21
Almost everything Dr. Andrej Karpathy recommended is a must-read 🤣

muaz
@radzaeem
Jun 21
Self-supervision is a generalized denoising task; the features learnt are for data efficiency.
Meta-learning is exploiting a conditional-probability learning task.
Can deep learning people choose names which are way more self-descriptive...
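
A minimal sketch of the "generalized denoising" framing above, assuming PyTorch (the thread names no framework, and every shape and name here is illustrative): corrupt an unlabeled input, train the network to reconstruct the clean version, and keep the encoder's features for label-efficient downstream use.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative encoder/decoder for flattened 28x28 inputs (assumed shape).
encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU())
decoder = nn.Linear(128, 784)
model = nn.Sequential(encoder, decoder)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(64, 784)                  # unlabeled batch
x_noisy = x + 0.1 * torch.randn_like(x)  # the corruption defines the pretext task

recon = model(x_noisy)
loss = F.mse_loss(recon, x)              # the "label" is the clean input itself
opt.zero_grad()
loss.backward()
opt.step()

# After pretraining, reuse `encoder` with a small labelled set,
# which is the data-efficiency point made in the tweet above.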

ganesh
@wittyperceptron
Jun 21
Metric learning, signed distance functions and lots of graph convolutions too!

Russ
@rmarcilhoo
Jun 21
Also, it seems the Berkeley RL Trinity is taking over the AI conferences.

ELLIOTT MAKES GANS
@neozero497
Jun 21
Who else is in this?

Some One
@mobile14u
Jun 21
In the future of ML... neural nets are merely a wrapper... Just give it 3 to 5 years... Maybe 10.