Tweets

charles blundell Retweeted
DeepMind @DeepMind · Nov 4

Out-of-sample generalisation of memory in deep reinforcement learning agents: we demonstrate how two different kinds of memory help out-of-distribution generalisation, and propose a memory task suite. arxiv.org/abs/1910.13406 pic.twitter.com/q4GnF3DIYr

charles blundell Retweeted

Berkeley AI Research @berkeley_ai · Jun 5, 2018

#TransferLearning is crucial for general #AI, and understanding what transfers to what is crucial for #TransferLearning. Taskonomy (#CVPR18 oral) is one step towards understanding transferability among #perception tasks. Live demo and more: taskonomy.vision pic.twitter.com/Fl2UxyHYbl

charles blundell Retweeted

Richard Socher @RichardSocher · Jun 20, 2018

Very excited to announce the natural language decathlon benchmark and the first single joint deep learning model to do well on ten different nlp tasks including question answering, translation, summarization, sentiment analysis, ++
einstein.ai/research/the-n… pic.twitter.com/4fotVhdRow