Michael Figurnov
Research Scientist @ DeepMind
162 Tweets
361 Following
2,656 Followers
Tweets
Michael Figurnov · Dec 13
Replying to @mfigurnov
[4/4] We hope that this class of estimators will find exciting machine learning applications! The paper is available online at
Michael Figurnov · Dec 13
Replying to @mfigurnov
[3/4] It has low variance, similar to reparameterization gradients, and works with non-differentiable functions and discrete distributions, just like REINFORCE. The downside is a higher computational cost that grows with the number of distribution parameters.
Michael Figurnov · Dec 13
Replying to @mfigurnov
[2/4] Measure-valued derivatives are a class of Monte Carlo gradient estimators that was introduced 30 years ago by Georg Pflug but remains almost unknown in the machine learning community.
Michael Figurnov · Dec 13
[1/4] I will be talking about Measure Valued Derivatives for Approximate Bayesian Inference, our joint work with @elaClaudia, at the Bayesian Deep Learning workshop at 16:05 tomorrow.
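The idea in the thread above can be sketched for a one-dimensional Gaussian. The derivative of the N(μ, σ²) density with respect to μ decomposes into a positive and a negative part, each a Weibull(shape=2, scale=σ√2) distribution mirrored around μ, which yields an unbiased gradient estimate without ever differentiating f. This is a minimal illustration of the classic Pflug-style construction, not code from the paper; the function name and sample sizes below are made up for the sketch.

```python
import numpy as np

def mvd_grad_mean(f, mu, sigma, n=100_000, rng=None):
    """Measure-valued derivative estimate of d/dmu E_{x ~ N(mu, sigma^2)}[f(x)].

    The derivative of the Gaussian density w.r.t. mu splits into a positive
    and a negative part; both are Weibull(shape=2, scale=sigma*sqrt(2))
    distributions, mirrored around mu. No derivative of f is required.
    """
    rng = np.random.default_rng(rng)
    # Weibull(shape=2) samples on unit scale, rescaled to sigma*sqrt(2).
    w = rng.weibull(2.0, size=n) * (sigma * np.sqrt(2.0))
    # Normalizing constant of the positive/negative decomposition.
    c = 1.0 / (sigma * np.sqrt(2.0 * np.pi))
    return c * (np.mean(f(mu + w)) - np.mean(f(mu - w)))
```

For a quick sanity check, take f(x) = x², so E[f] = μ² + σ² and the true derivative w.r.t. μ is 2μ; f may just as well be non-differentiable, e.g. an indicator function.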
Michael Figurnov · Dec 10
I’m at this week. Let me know if you’d like to catch up!
Michael Figurnov · Nov 26
Cool paper from : REBAR-like control variates for Plackett-Luce, a distribution over permutations, with an application to learning causal graphs. Check it out!
Michael Figurnov retweeted
Mihaela Rosca · Nov 22
The code reproducing the experiments in this paper is now available at:
Michael Figurnov retweeted
Oriol Vinyals · Oct 30
: Grandmaster level as all 3 races on , w/ a pro-approved interface (camera & APM limits). 2 years ago I thought this was impossible! How? Imitation learning (Diamond) -> multiagent League (Grandmaster)
Michael Figurnov retweeted
Yaroslav Ganin · Oct 3
A new paper on tweaking SPIRAL (). What's new: • Spectral normalization of discriminator (Miyato, 18) ⇒ sharper images • Reward shaping by (Ng, 99) ⇒ longer episodes • In-painting instead of stacking ⇒ better reconstructions Lots of nice samples :)
Michael Figurnov retweeted
David Pfau · Sep 6
Thrilled to be able to share what I've been working on for the last year - solving the fundamental equations of quantum mechanics with deep learning!
Michael Figurnov retweeted
DeepMind · Aug 20
We’re excited to release episodes 1 - 4 of the ! Get the inside track on some of the big questions and challenges the field is wrestling with today. No need to be an expert - the amazing speaks to the people behind the science.
Michael Figurnov retweeted
Shakir Mohamed · Jul 31
Really excited to share our latest paper in today on machine learning for health data to make early predictions of acute kidney injury. It has been an amazing journey over the last 2 years with an amazing set of people.
Michael Figurnov retweeted
Shakir Mohamed · Jul 23
Replying to @shakir_za
After a short delay, the code in a notebook to reproduce the graphs in section 3 of our paper () is online. More to come soon. See thread above👆🏾. 👩🏾‍💻
Michael Figurnov retweeted
Danilo J. Rezende · Jul 5
For anyone interested in constrained optimisation with DL models (e.g. as in ), we just released a few handy tools to deal with inequality constraints for Sonnet (). Thanks !
Michael Figurnov retweeted
Olaya Álvarez · Jul 3
Do you ever feel like a Bayesian distribution?
Michael Figurnov retweeted
Shakir Mohamed · Jun 26
Excited to share our new paper: 'Monte Carlo Gradient Estimation in Machine Learning', with @elaClaudia. It reviews all the things we know about computing gradients of probabilistic functions. 🐾Thread👇🏾
Michael Figurnov retweeted
Bayesian Methods Research Group · May 9
Right now , , and Oleg Ivanov with present their work at . Catch them while you can!
- The Deep Weight Prior, #48
- Variance Networks, #72
- VAE with Arbitrary Conditioning, #74
Michael Figurnov retweeted
DeepMind · Apr 10
Our new blog post overviews unsupervised learning, a paradigm for creating artificial intelligence that learns about data without a particular task in mind. Read more about how we might teach computers to learn for the sake of learning:
Michael Figurnov retweeted
Bayesian Methods Research Group · Mar 26
Yesterday successfully defended his PhD thesis! Congratulations!
Michael Figurnov retweeted
Sander Dieleman · Mar 13
Likelihood is a great loss fn, it's all about the space you measure it in! Our latest work on hierarchical AR image models (w/ , Karen Simonyan): We generated 128x128 & 256x256 samples for all ImageNet classes: (1/2)