Michael Zhang
PhD student in machine learning / ( '18). Always exploring.
53
Tweets
143
Following
185
Followers
Tweets
Michael Zhang Jan 30
Replying to @pabbeel @CovariantAI
Congrats, very impressive!
Michael Zhang Jan 29
Replying to @SilviuPitis
We show our rules compare favorably to rules learned by order-invariant neural networks under different noise models. paper: code:
Michael Zhang Jan 29
Replying to @SilviuPitis
This is interesting especially when the voting rule has access to auxiliary information e.g. some proxy for voter experience. Our model is applicable for cooperative policy making and peer review--lots more exciting directions to pursue!
Michael Zhang Jan 29
Our AAMAS 2020 paper “Objective Social Choice: Using Auxiliary Information to Improve Voting Outcomes” ( and me) is online! We analyze voting rules under a setup where voters get noisy estimates of some underlying ground truth.
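The "voters get noisy estimates of some underlying ground truth" setup can be illustrated with a toy simulation. This is my own sketch, not the paper's model or code: the function name `simulate` and all parameter values are made up, and the aggregation rule here is a plain mean rather than the learned, order-invariant rules the paper studies.

```python
import numpy as np

def simulate(n_voters=25, noise=1.0, trials=2000, seed=0):
    """Toy illustration (not the paper's model): each voter observes the
    ground truth plus independent Gaussian noise; aggregating the votes by
    their mean shrinks the error roughly by 1/sqrt(n) versus one voter."""
    rng = np.random.default_rng(seed)
    truth = 0.0
    votes = truth + noise * rng.standard_normal((trials, n_voters))
    single_err = np.mean(np.abs(votes[:, 0] - truth))        # one voter alone
    aggregated_err = np.mean(np.abs(votes.mean(axis=1) - truth))  # mean rule
    return single_err, aggregated_err

single, aggregated = simulate()  # aggregated error is much smaller
```

Auxiliary information (e.g. a proxy for voter experience, as the thread mentions) would correspond to weighting voters unequally instead of taking the plain mean.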
Michael Zhang Retweeted
Barack Obama Jan 26
Kobe was a legend on the court and just getting started in what would have been just as meaningful a second act. To lose Gianna is even more heartbreaking to us as parents. Michelle and I send love and prayers to Vanessa and the entire Bryant family on an unthinkable day.
Michael Zhang Jan 25
Replying to @hugo_larochelle @TorontoStar
The researcher themed trading cards are really neat too.
Michael Zhang Jan 25
Enjoyed this short piece by : via . It's hard to capture how much hockey is a part of growing up in Canada
Michael Zhang Jan 17
Replying to @AndrewLBeam
This was definitely my thought until fairly recently. I think Book of Why has a nice small section describing the difference.
Michael Zhang Jan 16
Lots of exciting research on distributional RL lately, and this work shows that dopamine in mouse brain cells is better modeled with a distribution (rather than a point estimate as in classic TD)!
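For the "distribution rather than a point estimate" contrast, here is a small sketch of one distributional idea in that vein: tracking several quantile estimates of a reward distribution instead of a single mean. This is my own illustration, not the dopamine paper's model; the function `quantile_td`, the hyperparameters, and the uniform reward are all invented for the example.

```python
import numpy as np

def quantile_td(reward_sampler, n_quantiles=5, lr=0.01, steps=20000, seed=0):
    """Sketch of a quantile-style distributional update: each estimate
    theta[i] is nudged up with weight tau_i when a sample lands above it
    and down with weight (tau_i - 1) otherwise, so theta[i] converges to
    the tau_i-quantile of the reward distribution."""
    rng = np.random.default_rng(seed)
    taus = (np.arange(n_quantiles) + 0.5) / n_quantiles  # target quantiles
    theta = np.zeros(n_quantiles)                        # quantile estimates
    for _ in range(steps):
        r = reward_sampler(rng)
        delta = r - theta                                # per-quantile error
        theta += lr * (taus - (delta < 0).astype(float)) # quantile regression step
    return theta

# Learn the quantiles of a Uniform(0, 1) reward; classic TD would keep
# only a single scalar (the mean, 0.5) in place of this vector.
theta = quantile_td(lambda rng: rng.uniform(0.0, 1.0))
```

A point estimate would collapse `theta` to one number; keeping the whole vector is what lets the distributional view capture, e.g., optimistic and pessimistic predictions side by side.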
Michael Zhang Jan 9
Also, a Reddit thread on helping Australia:
Michael Zhang Dec 23
Replying to @james_r_lucas @tw_killian
Telling everyone is crucial; glad we can do so easily
Michael Zhang Dec 13
Thanks for the support! These are great :)
Michael Zhang Dec 12
Writing down notes, both digital and physical, between sessions has been very useful for remembering cool ideas, experiences, and people.
Michael Zhang Retweeted
James Lucas Dec 9
Replying to @michaelrzhang
(3) Lookahead Optimizer: k steps forward, 1 step back. Thursday evening, East Hall B+C (#200) We propose a new optimization algorithm that wraps around existing optimizers, reducing variance and improving convergence. Work with , Geoff Hinton, and Jimmy Ba.
Michael Zhang Dec 4
Replying to @james_r_lucas
The algorithm has minimal computational overhead and stores one additional copy of the parameters. It can be incorporated into existing pipelines with a couple of lines of code. Our implementation is available at:
Michael Zhang Dec 4
Replying to @james_r_lucas
Lookahead selects a search direction based on k steps of the inner optimizer. We demonstrate that this reduces variance, which improves convergence and makes Lookahead more robust to hyperparameter choices. This is desirable on novel datasets without well-calibrated baselines.
Michael Zhang Dec 4
Excited to share the NeurIPS camera-ready version of the Lookahead Optimizer: . Lookahead wraps around and often improves the performance of other optimizers. Very grateful to work on this with James Lucas, Geoffrey Hinton, and Jimmy Ba.
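The thread describes the mechanism in words; a minimal toy sketch of the "k steps forward, 1 step back" idea, with plain SGD standing in as the inner optimizer, might look like this. This is my own NumPy illustration under stated assumptions (the function name and hyperparameter values are made up), not the released implementation:

```python
import numpy as np

def lookahead_sgd(grad_fn, w0, k=5, alpha=0.5, inner_lr=0.1, outer_steps=50):
    """Toy sketch of the Lookahead idea wrapped around plain SGD.

    Fast weights take k inner-optimizer steps, then the slow weights
    interpolate toward them: slow <- slow + alpha * (fast - slow).
    Only one extra copy of the parameters (the slow weights) is stored.
    """
    slow = np.asarray(w0, dtype=float)       # slow weights
    for _ in range(outer_steps):
        fast = slow.copy()                   # fast weights start at slow
        for _ in range(k):                   # k steps forward
            fast -= inner_lr * grad_fn(fast)
        slow += alpha * (fast - slow)        # 1 step back (interpolation)
    return slow

# Toy objective f(w) = 0.5 * ||w||^2, whose gradient is w itself;
# the iterate converges to the minimizer at the origin.
w_star = lookahead_sgd(lambda w: w, w0=[3.0, -2.0])
```

Because the wrapper only touches the parameters between inner steps, any existing optimizer (Adam, momentum SGD, etc.) can play the role of `grad_fn`'s update loop, which is what makes the couple-of-lines integration mentioned above plausible.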
Michael Zhang Nov 30
Replying to @Scientist_Rhi
Being a TA and leading discussion sections helped me a lot. Improv classes are also great
Michael Zhang Retweeted
Arvind Narayanan Nov 26
Replying to @random_walker
There’s a simple way to fight familiarity bias. When you read a good paper with an author you don’t know, especially if they’re junior, take a minute to look them up, get to know their work, cite them, and keep them in mind for events you organize. In short, remember the name!
Michael Zhang Nov 18
Looking forward to giving a talk this week at Toronto Machine Learning Summit () about the Lookahead Optimizer! Details: