Twitter
Steven Hansen
Research Scientist at DeepMind. Slowly learning to learn fast. All opinions my own?
1,110 Tweets · 417 Following · 1,389 Followers
Tweets
Steven Hansen Jan 30
There is so much potential here: That feeling when you've been in the saddle for too long. Drinking the BLEUs away. Something something Lasso regression... Hell, "Long short-term memories" is even a good album title.
Steven Hansen Jan 30
Replying to @RWerpachowski
I agree, not a replacement -- but they do compete for computing resources. And generally I've found that it's easier to overfit to a single task than it is to be misled by too few seeds. Both are problematic, but without infinite compute hard decisions must be made.
Steven Hansen Jan 30
Fair point, but it's worth noting that there is a trade-off between # seeds, # baselines, and the number and complexity of environment(s). Personally, I'd prefer a method w/ 3 seeds eval'd on 57 tasks with 5 baselines to one with 10 seeds eval'd on 1 task with 2 baselines.
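The trade-off above is easy to see as total run count under a fixed compute budget; a minimal sketch (the function name is hypothetical, the numbers are from the tweet):

```python
def total_runs(seeds: int, tasks: int, baselines: int) -> int:
    """Total training runs: every baseline is run with every seed on every task."""
    return seeds * tasks * baselines

# Broad evaluation: few seeds, many tasks and baselines
broad = total_runs(seeds=3, tasks=57, baselines=5)   # 855 runs
# Deep evaluation: many seeds, a single task, few baselines
deep = total_runs(seeds=10, tasks=1, baselines=2)    # 20 runs
print(broad, deep)
```

The broad setup costs over 40x more compute, which is why, absent infinite resources, more seeds usually come at the price of fewer tasks or baselines.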
Steven Hansen Dec 30
Replying to @egrefen
100% agree. Felt guilty contributing to it, but justified it by reassuring myself that having too much pent-up snark is unhealthy. P.S. Wish you were still at DeepMind so I could get "top-class trolling" on my next performance review ;)
Steven Hansen Dec 30
Replying to @jachiam0
But how else will we ever find out who wins science?!? ;)
Steven Hansen Dec 30
Replying to @srchvrs
Definitely don't wanna be lumped into the "DL solves all" crowd. Wanted to emphasize that fixing the flaws in a system is rarely an armchair exercise in integrating it with the previous system. But saying so doesn't imply the current system is perfect.
Steven Hansen Dec 30
Replying to @VahidK
NeurIPS 2019 lives on in all of our hearts! ... And in the Twitter bio of the forgetful
Steven Hansen Dec 29
Aristotelian physics may be flawed, but it has been unfairly discarded in favor of Newtonian physics, despite the latter not solving all of the open problems in the field. Clearly the way forward is a hybrid system, wherein aether obeys the conservation of momentum.
Steven Hansen Dec 27
Episodic coverage-based exploration is a great idea, and this instantiation of it yields great results on hard exploration Atari games. Great work from my colleagues at
Steven Hansen retweeted
Will Dabney Dec 21
Happy to have worked with on research combining behavioural mutual information and successor features, which has been accepted for oral presentation at ICLR. Favorite part: clean answer to where to get the “features” for successor features.
Steven Hansen Dec 20
Replying to @skrish_13 @iclr_conf
Thanks :)
Steven Hansen Dec 19
Conference-decision anticipation at an all-time high. I need that sweet sweet email notification!
Steven Hansen Dec 18
Replying to @alfcnz @ylecun and 3 others
It's fitting that Martin is located at the barycenter of academia :)
Steven Hansen Dec 16
I know people who think the original meme was "all your Bayes are belong to us". Can't tell if that makes them more or less nerdy 🤔
Steven Hansen Dec 13
talk at I managed to ask a question afterwards. Q: Did you ever implement, or even attempt to implement, the "special case" where adversarial training is used to mimic a training dataset? A: No. Hopefully this helps clear up some misconceptions ;)
Steven Hansen retweeted
Greg Brockman Dec 13
We just released our scientific analysis of OpenAI Five: We are already using findings from Five in other systems at OpenAI like Dactyl () or our multi-agent work (). Hope that others find the results useful!
Steven Hansen Dec 13
finally released a paper on their DOTA AI. I've given them a lot of crap over the years for not sharing details in a timely manner (and I still stand by most of it), but this sort of openness is still greatly appreciated.
Steven Hansen retweeted
Meire Fortunato Dec 12
Very proud of this work. I am not at NeurIPS this year, but my awesome co-authors Melissa Tan, and are presenting the poster (#192). 🙃🙂🙃
Steven Hansen Dec 12
Very proud to be a part of this! The docker release makes installing and using these environments painless. Test it out for yourself and come yell at me if something goes wrong ;)