@AdamMarblestone

Mega relevant: youtube.com/watch?v=B2Ukzf…
David Pfau
@pfau
Dec 15
Graphical models got cool because Microsoft Research decided to specialize in them. Deep learning got a boost from Jeff Dean creating Google Brain in 2011. Lots of these shifts in zeitgeist have to do with the funding decisions of a few big tech companies.

Adam Marblestone
@AdamMarblestone
Dec 15

You could alternatively say "when someone pushes them hard enough to show that [some of them] can scale to real problems" -- which makes it seem at least somewhat more reasonable, less contingent/sociological

David Pfau
@pfau
Dec 15

Why should scaling to optimizing ads on large datacenters be a cue for neuroscientists?

Adam Marblestone
@AdamMarblestone
Dec 15

Better something than nothing

David Pfau
@pfau
Dec 15

Probably better to be driven by actual experimental observations from neuroscience

Adam Marblestone
@AdamMarblestone
Dec 15

Relevant: twitter.com/rodneyabrooks/…

Adam Marblestone
@AdamMarblestone
Dec 15

Seriously though, approximate backprop seems simpler than approximate PGM inference... and many probabilistic inference problems can be re-framed as neural nets as the field is doing now
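[Editor's illustration, not part of the thread: the "probabilistic inference re-framed as neural nets" point can be sketched as amortized inference. In the toy conjugate Gaussian model below (all names and parameters are my own choices), regressing the latent on the observation with a one-parameter "inference network" recovers the exact posterior-mean coefficient, so an inference problem becomes a plain regression/learning problem.]

```python
import numpy as np

# Toy amortized inference (illustrative assumption, not from the thread):
#   z ~ N(0, 1)            latent
#   x | z ~ N(z, sigma2)   observation
# Exact posterior mean: E[z | x] = x / (1 + sigma2).

rng = np.random.default_rng(0)
sigma2 = 0.5
n = 20_000

z = rng.normal(0.0, 1.0, size=n)
x = z + rng.normal(0.0, np.sqrt(sigma2), size=n)

# The "inference network" is a single weight w: z_hat = w * x.
# Gradient descent on the mean squared error E[(w*x - z)^2]
# drives w toward the exact coefficient 1 / (1 + sigma2).
w, lr = 0.0, 0.05
for _ in range(500):
    grad = np.mean(2.0 * (w * x - z) * x)
    w -= lr * grad

print(w, 1.0 / (1.0 + sigma2))  # learned weight vs. exact posterior coefficient
```

The same pattern, with the linear map swapped for a deep network and the Gaussian for a richer generative model, is what recognition networks in VAEs and related amortized-inference methods do.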

Adam Marblestone
@AdamMarblestone
Dec 15

And there are things like: nature.com/articles/s4146…

Adam Marblestone
@AdamMarblestone
Dec 15

And PGMs have been heavily explored, while backprop was ignored for a long time... so I think things are OK

Adam Marblestone
@AdamMarblestone
Dec 15

But yes... basing as directly as possible in ground truth neuro observations would be very nice...