Twitter | Search
Miles Turpin
Undergrad. Interested in machine learning research and Effective Altruism. Aspiring abstract aesthete.
83 Tweets
523 Following
40 Followers
Tweets
Miles Turpin · Jan 26
Replying to @bencbartlett
Miles Turpin · Jan 26
Replying to @milesaturpin
4:12 No thing other than the posterior over models is more acceptable to God, and escheweth evil?
Miles Turpin · Jan 26
Replying to @milesaturpin
30:23 Then said Elkanah her husband liveth; but only if the mode is a Gaussian prior.
Miles Turpin · Jan 26
Replying to @milesaturpin
7:15 Beware of the parameters.
Miles Turpin · Jan 26
Replying to @milesaturpin
Carefully evaluating precise gradients using large datasets is often better than a strong hand, hath the LORD brought forth into their mind.
Miles Turpin · Jan 26
Replying to @milesaturpin
47:13 And he said unto thy neighbour, do the gradient descent.
Miles Turpin · Jan 26
Replying to @milesaturpin
Jesus of the Hebrews that were added to the LORD, he is gracious and merciful, slow to overfit, as is the value of λ.
Miles Turpin · Jan 26
Replying to @milesaturpin
He said unto them, Have ye suffered so many equations?
Miles Turpin · Jan 26
Replying to @milesaturpin
21:17 And David said to the Lord GOD; less data, is best.
Miles Turpin · Jan 26
I present Love Thy Nearest Neighbor: a Markov chain generator trained on the King James Bible and Kevin Murphy’s Machine Learning: A Probabilistic Perspective. Behold...
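The tweet above describes a Markov chain text generator trained on two concatenated corpora. As a rough illustration of how such a generator works (this is a hypothetical minimal sketch, not the author's actual code, and the tiny inline corpus stands in for the real Bible + textbook corpus):

```python
import random
from collections import defaultdict

def build_chain(words, order=2):
    """Map each `order`-word context to the words that follow it in the corpus."""
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=20, seed=0):
    """Walk the chain from a random starting context, sampling one word at a time."""
    rng = random.Random(seed)
    state = rng.choice(list(chain))
    out = list(state)
    for _ in range(length - len(state)):
        followers = chain.get(state)
        if not followers:
            break  # dead end: this context never continues in the corpus
        word = rng.choice(followers)
        out.append(word)
        state = state[1:] + (word,)
    return " ".join(out)

# Stand-in corpus; the real bot mixed two source texts the same way,
# which is what produces the spliced-sounding sentences above.
corpus = ("in the beginning was the posterior and the posterior was "
          "over models and the posterior was good").split()
print(generate(build_chain(corpus), length=12))
```

Because contexts shared by both source texts can continue with followers from either one, the output jumps between registers mid-sentence, which is exactly the effect in the tweets above.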
Miles Turpin · Jan 26
Replying to @RichardMCNgo
It has probably always been net negative to send your kids to school for a long time, and net positive to keep them working on the farm. I imagine just the proportion of these two things changing over time accounts for a good part of the trend
Miles Turpin · Jan 4
It took me 20 minutes to fall in love with
Miles Turpin · Dec 28
Shiri’s Scissor but for machine learning ()
Miles Turpin · Dec 28
Replying to @milesaturpin
The answer depends on what kind of future data you wish to generalize to. If you want to generalize to new time points for a given patient, then you have tons of data! But if you want to generalize to new patients… then n=5. 2/2
Miles Turpin · Dec 28
A deceptively difficult question: how much data do I have? If your data is hierarchical the answer is not obvious. Say I have a dataset of 5 patients, and for each one, fine-grained measurements across 1 million time points. Is my sample size 5 or 5 million? 1/2
Miles Turpin · Dec 26
Replying to @notmisha
It’s interesting to the extent that you can make claims about the limitations of many types of models. But if your definition of DL is overly broad then you can only make weak general claims, which are not very interesting.
Miles Turpin · Dec 26
Replying to @notmisha
It’s hard to have productive discussion when one person thinks DL is differentiable programming and the other thinks it’s only MLPs
Miles Turpin · Dec 26
Replying to @notmisha
It’s all a part of the process of agreeing on definitions. You have to agree about what DL is before you can argue about what DL can and can’t do - and understanding fundamental limitations is important for setting research directions.
Miles Turpin · Dec 24
Replying to @rgblong
This is Yoshua's term for it - I agree it's a terrible name
Miles Turpin · Dec 23
Replying to @jackclarkSF @ylecun @GiorgioPatrini
Scalability is hard because GANs are by definition resistant to one of our scalable detection methods - classification by CNN!