Miles Turpin
@milesaturpin
Undergrad @DukeU. Interested in machine learning research and Effective Altruism. Aspiring abstract aesthete.
83 Tweets · 523 Following · 40 Followers
Tweets

Miles Turpin @milesaturpin · Jan 26
Inspired by @bencbartlett: twitter.com/bencbartlett/s…

Miles Turpin @milesaturpin · Jan 26
4:12 No thing other than the posterior over models is more acceptable to God, and escheweth evil?

Miles Turpin @milesaturpin · Jan 26
30:23 Then said Elkanah her husband liveth; but only if the mode is a Gaussian prior.

Miles Turpin @milesaturpin · Jan 26
7:15 Beware of the parameters.

Miles Turpin @milesaturpin · Jan 26
Carefully evaluating precise gradients using large datasets is often better than a strong hand, hath the LORD brought forth into their mind.

Miles Turpin @milesaturpin · Jan 26
47:13 And he said unto thy neighbour, do the gradient descent.

Miles Turpin @milesaturpin · Jan 26
Jesus of the Hebrews that were added to the LORD, he is gracious and merciful, slow to overfit, as is the value of λ.

Miles Turpin @milesaturpin · Jan 26
He said unto them, Have ye suffered so many equations?

Miles Turpin @milesaturpin · Jan 26
21:17 And David said to the Lord GOD; less data, is best.

Miles Turpin @milesaturpin · Jan 26
I present Love Thy Nearest Neighbor: a Markov chain generator trained on the King James Bible and Kevin Murphy’s Machine Learning: A Probabilistic Perspective. Behold...
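The tweet above announces the generator but not its code. A minimal word-level Markov chain text generator of this kind can be sketched as follows; the function names are hypothetical, and a tiny stand-in corpus replaces the actual KJV/Murphy training text:

```python
import random
from collections import defaultdict


def build_chain(words, order=2):
    """Map each `order`-word prefix to the list of words that follow it."""
    chain = defaultdict(list)
    for i in range(len(words) - order):
        prefix = tuple(words[i:i + order])
        chain[prefix].append(words[i + order])
    return chain


def generate(chain, length=12, seed=0):
    """Walk the chain from a random prefix, sampling a successor each step."""
    rng = random.Random(seed)
    prefix = rng.choice(list(chain))
    out = list(prefix)
    for _ in range(length - len(prefix)):
        successors = chain.get(tuple(out[-len(prefix):]))
        if not successors:  # dead end: prefix only occurs at corpus end
            break
        out.append(rng.choice(successors))
    return " ".join(out)
```

Mixing two corpora, as the tweet describes, just means concatenating their word lists before calling `build_chain`; the scripture-plus-statistics mashups happen wherever the two texts share a prefix.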

Miles Turpin @milesaturpin · Jan 26
It has probably always been net negative to send your kids to school for a long time, and net positive to keep them working on the farm. I imagine just the proportion of these two things changing over time accounts for a good part of the trend

Miles Turpin @milesaturpin · Jan 4
It took me 20 minutes to fall in love with @RoamResearch #roamcult

Miles Turpin @milesaturpin · Dec 28
Shiri’s Scissor but for machine learning (slatestarcodex.com/2018/10/30/sor…) twitter.com/carlesgelada/s…

Miles Turpin @milesaturpin · Dec 28
The answer depends on what kind of future data you wish to generalize to. If you want to generalize to new time points for a given patient, then you have tons of data! But if you want to generalize to new patients… then n=5. 2/2

Miles Turpin @milesaturpin · Dec 28
A deceptively difficult question: how much data do I have? If your data is hierarchical the answer is not obvious. Say I have a dataset of 5 patients, and for each one, fine-grained measurements across 1 million time points. Is my sample size 5 or 5 million? 1/2
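The distinction this thread draws becomes concrete in how you split data for validation: shuffling rows answers the new-time-points question, while holding out whole patients answers the new-patients question. A minimal sketch, with illustrative function names and a toy dataset standing in for the thread's 5-patient example:

```python
import random


def split_within_patients(records, test_frac=0.2, seed=0):
    """New-time-points question: shuffle rows freely, so the same patient
    can land in both train and test; effective n is the number of rows."""
    rng = random.Random(seed)
    rows = records[:]
    rng.shuffle(rows)
    k = int(len(rows) * test_frac)
    return rows[k:], rows[:k]


def split_across_patients(records, held_out_patients):
    """New-patients question: hold out whole patients, so effective n is
    the number of patients, no matter how many rows each one contributes."""
    train = [r for r in records if r["patient"] not in held_out_patients]
    test = [r for r in records if r["patient"] in held_out_patients]
    return train, test


# Toy version of the thread's dataset: 5 patients, many time points each.
records = [{"patient": p, "t": t} for p in range(5) for t in range(1000)]
train, test = split_across_patients(records, held_out_patients={4})
```

Under the first split a model is scored on 1,000-row test sets; under the second it is scored on at most 5 held-out patients, which is why the honest sample size there is n=5.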

Miles Turpin @milesaturpin · Dec 26
It’s interesting to the extent that you can make claims about the limitations of many types of models. But if your definition of DL is overly broad then you can only make weak general claims, which are not very interesting.

Miles Turpin @milesaturpin · Dec 26
It’s hard to have productive discussion when one person thinks DL is differentiable programming and the other thinks it’s only MLPs

Miles Turpin @milesaturpin · Dec 26
It’s all a part of the process of agreeing on definitions. You have to agree about what DL is before you can argue about what DL can and can’t do - and understanding fundamental limitations is important for setting research directions.

Miles Turpin @milesaturpin · Dec 24
This is Yoshua's term for it - I agree it's a terrible name

Miles Turpin @milesaturpin · Dec 23
Scalability is hard because GANs are by definition resistant to one of our scalable detection methods - classification by CNN!
|
||
|
|
||