Tweets
Michael Zhang
@michaelrzhang
Jan 30
Congrats, very impressive!
Michael Zhang
@michaelrzhang
Jan 29
We show our rules compare favorably to rules learned by order-invariant neural networks under different noise models.
paper: arxiv.org/abs/2001.10092
code: github.com/spitis/objecti…
Michael Zhang
@michaelrzhang
Jan 29
This is especially interesting when the voting rule has access to auxiliary information, e.g. some proxy for voter experience. Our model applies to cooperative policy making and peer review--lots more exciting directions to pursue!
Michael Zhang
@michaelrzhang
Jan 29
Our AAMAS 2020 paper “Objective Social Choice: Using Auxiliary Information to Improve Voting Outcomes” (@SilviuPitis and me) is online! We analyze voting rules under a setup where voters get noisy estimates of some underlying ground truth.
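A toy sketch of that setup (purely illustrative, not the paper's model: the Gaussian noise and the simple averaging rule below are assumptions):

```python
# Toy version of the setup: each voter sees a noisy estimate of an
# underlying ground truth, and a voting rule aggregates the estimates.
# The Gaussian noise and the averaging rule are illustrative choices only.
import random

random.seed(0)
ground_truth = 0.7
num_voters = 11

# Every voter observes the ground truth corrupted by zero-mean noise.
estimates = [ground_truth + random.gauss(0, 0.3) for _ in range(num_voters)]

# One simple voting rule: average the voters' estimates.
aggregate = sum(estimates) / len(estimates)
print(f"ground truth: {ground_truth:.3f}  aggregated vote: {aggregate:.3f}")
```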
Michael Zhang retweeted
Barack Obama
@BarackObama
Jan 26
Kobe was a legend on the court and just getting started in what would have been just as meaningful a second act. To lose Gianna is even more heartbreaking to us as parents. Michelle and I send love and prayers to Vanessa and the entire Bryant family on an unthinkable day.
Michael Zhang
@michaelrzhang
Jan 25
The researcher themed trading cards are really neat too.
Michael Zhang
@michaelrzhang
Jan 25
Enjoyed this short piece by @hugo_larochelle: thestar.com/opinion/contri… via @torontostar. It's hard to capture how much hockey is a part of growing up in Canada.
Michael Zhang
@michaelrzhang
Jan 17
This was definitely my thought until fairly recently. I think Book of Why has a nice small section describing the difference.
Michael Zhang
@michaelrzhang
Jan 16
Lots of exciting research on distributional RL lately, and this work shows that dopamine in mouse brain cells is better modeled with a distribution (rather than a point estimate as in classic TD)! twitter.com/DeepMind/statu…
Michael Zhang
@michaelrzhang
Jan 9
also, reddit thread on helping Australia: reddit.com/r/australia/co… twitter.com/princessaspien…
Michael Zhang
@michaelrzhang
Dec 23
"Tell everyone" is crucial, glad we can do so easily
Michael Zhang
@michaelrzhang
Dec 13
Thanks for the support! These are great :)
Michael Zhang
@michaelrzhang
Dec 12
Writing down notes, both digital and physical, between sessions has been very useful for remembering cool ideas, experiences, and people. #NeurIPS2019
Michael Zhang retweeted
James Lucas
@james_r_lucas
Dec 9
(3) Lookahead Optimizer: k steps forward, 1 step back. Thursday evening, East Hall B+C (#200)
We propose a new optimization algorithm that wraps around existing optimizers, reducing variance and improving convergence. Work with @michaelrzhang, Geoff Hinton, and Jimmy Ba. pic.twitter.com/DcbR0qL4sF
Michael Zhang
@michaelrzhang
Dec 4
The algorithm has minimal computational overhead and stores one additional copy of the parameters. It can be incorporated into existing pipelines with a couple of lines of code. Our implementation is available at: github.com/michaelrzhang/…
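A hypothetical before/after sketch of what "a couple of lines of code" could look like; the Lookahead class name and its k/alpha arguments are assumptions here and may not match the linked repo's exact API:

```python
# Hypothetical usage sketch: wrap an existing optimizer with a Lookahead
# wrapper. The class name and (k, alpha) arguments are assumptions, not
# necessarily the repo's exact interface.
import torch
from torch import nn

model = nn.Linear(10, 1)
base_optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# optimizer = Lookahead(base_optimizer, k=5, alpha=0.5)  # the "couple of lines"
# The training loop then calls optimizer.zero_grad(), loss.backward(), and
# optimizer.step() exactly as before.
```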
Michael Zhang
@michaelrzhang
Dec 4
Lookahead selects a search direction based on k steps of the inner optimizer. We demonstrate that this reduces variance, which improves convergence and makes Lookahead more robust to hyperparameter choices. This is desirable on novel datasets without well-calibrated baselines.
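A minimal sketch of the update described here, assuming a PyTorch-style inner optimizer; the k=5 and alpha=0.5 defaults are illustrative, not the paper's prescribed values:

```python
# Minimal sketch of "k steps forward, 1 step back": run k inner-optimizer
# steps on the fast weights, then move the slow weights toward them and
# reset the fast weights. Names and defaults here are illustrative.
import torch

def train_with_lookahead(model, inner_opt, loss_fn, batches, k=5, alpha=0.5):
    slow_weights = [p.detach().clone() for p in model.parameters()]
    for step, (x, y) in enumerate(batches, start=1):
        inner_opt.zero_grad()
        loss_fn(model(x), y).backward()
        inner_opt.step()                      # one fast-weight (inner) step
        if step % k == 0:                     # every k steps: one step back
            with torch.no_grad():
                for slow, fast in zip(slow_weights, model.parameters()):
                    slow.add_(alpha * (fast - slow))  # slow += alpha * (fast - slow)
                    fast.copy_(slow)                  # restart fast weights from slow
```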
Michael Zhang
@michaelrzhang
Dec 4
Excited to share the NeurIPS camera-ready version of the Lookahead Optimizer: arxiv.org/abs/1907.08610. Lookahead wraps around and often improves the performance of other optimizers. Very grateful to work on this with James Lucas, Geoffrey Hinton, and Jimmy Ba. pic.twitter.com/TFvfL7F8nh
Michael Zhang
@michaelrzhang
Nov 30
Being a TA and leading discussion sections helped me a lot. Improv classes are also great
Michael Zhang retweeted
Arvind Narayanan
@random_walker
|
26. stu |
|
There’s a simple way to fight familiarity bias. When you read a good paper with an author you don’t know, especially if they’re junior, take a minute to look them up, get to know their work, cite them, and keep them in mind for events you organize. In short, remember the name!
Michael Zhang
@michaelrzhang
Nov 18
Looking forward to giving a talk this week at Toronto Machine Learning Summit (@TMLS_TO) about the Lookahead Optimizer! Details: torontomachinelearning.com