Will Dabney
@wwdabney
London, England
Research scientist at DeepMind. On the critical path to AGI. Also, a persistent optimist.
76 Tweets
36 Following
577 Followers
Tweets
Will Dabney
@wwdabney
Feb 2
Self-made billionaires
Will Dabney retweeted
Quanta Magazine
@QuantaMagazine
Jan 30
By teaching machines to understand our true desires, one scientist hopes to avoid the potentially disastrous consequences of having them do what we command. quantamagazine.org/artificial-int…
Will Dabney
@wwdabney
Jan 31
First, a quick mea culpa: I’m definitely guilty of this too. Solutions? As problematic as p-hacking is, it would already be an improvement if we were all reporting reasonable statistical significance rather than just eyeballing the learning curves (or worse).
Will Dabney
@wwdabney
Jan 29
Our paper 'A distributional code for value in dopamine-based reinforcement learning' on the cover of @nature!
Read it here: rdcu.be/b0mtA
Shout out to the amazing artists/designers at @DeepMind who make this possible, while we get to focus on the research. twitter.com/EricTopol/stat…
Will Dabney retweeted
Sherjil Ozair
@sherjilozair
Jan 28
I haven't found a single person who has used jax and said they don't like it. I've been actively priming people to criticize it, but no one does. Instead they tell me how good it feels getting off of Tensorflow. Looking forward to jaxxing myself soon.
Will Dabney retweeted
Pablo Samuel Castro
@pcastr
Jan 16
Hey everyone, I'm so excited to share my recent interview on Music & AI plus "A Geometric Perspective on Reinforcement Learning" with @samcharrington for the @twimlai podcast. Check it out! twimlai.com/talk/339 via @twimlai twitter.com/twimlai/status…
Will Dabney retweeted
Eric Topol
@EricTopol
Jan 17
The reciprocal inspiration of #AI and neuroscience.
A @DeepMindAI @nature paper this week on the mechanism of reinforcement learning
nature.com/articles/s4158…
by @wwdabney @zebkDotCom and colleagues
with an excellent explainer by @_KarenHao @techreview technologyreview.com/s/615054/deepm… pic.twitter.com/SdT7Jvh9Zf
Will Dabney
@wwdabney
Jan 16
Thanks everyone! You can also read the paper for free here: rdcu.be/b0mtA
Will Dabney
@wwdabney
Jan 15
And it all started (for me) almost exactly three years ago working with @marcgbellemare and Remi on distributional RL:
proceedings.mlr.press/v70/bellemare1…
Will Dabney
@wwdabney
Jan 15
It has been an incredible collaboration with my co-authors, especially working with @zebkDotCom and Matt Botvinick. Also incredibly grateful to Naoshige Uchida and Clara Starkweather from Harvard, as well as Remi Munos and Demis Hassabis for their work and constant endurance! 2/
Will Dabney
@wwdabney
Jan 15
When neuroscience and AI researchers get to chatting, cool stuff happens! My first, and I hope not last, trip into neuroscience has been published in Nature. 1/ twitter.com/DeepMind/statu…
Will Dabney
@wwdabney
Jan 8
Almost all of these (IMO) apply equally well to research. I most disagree with the “short 1:1, long group meetings” one, but do other research people think most of these apply to them? twitter.com/sama/status/12…
Will Dabney
@wwdabney
Jan 5
I’ve been playing with Hugo on GitLab with Netlify. It’s painfully simple, though yes, Markdown’s formatting is a bit more limited. I think git push for publishing a post is too beautiful to pass up.
Will Dabney
@wwdabney
Dec 24
Building upon past work is excessively difficult without open source. I wasted tons of time reimplementing from papers in grad school. With today’s frantic pace, this time investment feels untenable.
That said, I don’t think reviewer nullification is the answer.
Will Dabney
@wwdabney

Dec 21

[tweet content not captured]
Will Dabney
@wwdabney
Dec 21
Happy to have worked with @Zergylord on research combining behavioural mutual information and successor features, which has been accepted for oral presentation at ICLR.
Favorite part: clean answer to where to get the “features” for successor features.
openreview.net/forum?id=BJeAH…
Will Dabney
@wwdabney
Dec 15
Absolutely agree, a beautiful city and fantastic venue!
Will Dabney
@wwdabney
Dec 8
So fun how non-stationary our perception is. It’s not hard to get the direction cued onto any physical change. Opening/closing hand, blinking, you can even pretend to spin it this way and that with your thumb and it will switch. twitter.com/FelixHill84/st…
Will Dabney retweeted
Anna Harutyunyan
@aharutyu
Dec 5
Really excited for #NeurIPS2019 next week and to present our spotlight on credit assignment :)
papers.nips.cc/paper/9413-hin…
tl;dr We can rewrite value functions in terms of a hindsight quantity that explicitly captures credit assignment and get a whole new family of RL algs! 🥳 pic.twitter.com/Hr6vqqaH6Z
|
Will Dabney
@wwdabney
|
4. pro |
|
Let’s just hope Fox News doesn’t run this, or he might just declare war on all our allies. twitter.com/ianbremmer/sta…