Adam Marblestone
@AdamMarblestone

(With these bits he basically sets the stage for saying, 6 years later, that BP is an "intrinsic gradient" system, and then generalizing that concept to other dynamics like specific kinds of RNNs...) pic.twitter.com/dpQpuQwwkP

KordingLab 👨💻🧠∇🔬📈,🏋️♂️⛷️🏂🛹🕺⛰️☕🦖
@KordingLab
Dec 1

I think that the brain almost certainly approximates gradient descent. And here is why: any learning episode only appears to change the brain a tiny bit. 1/5
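Kording's premise can be illustrated numerically: under gradient descent with a small learning rate, one "learning episode" (one update step) perturbs the parameters only slightly, even though many episodes can move them far. This is a minimal sketch with made-up numbers (the quadratic loss, the 100-dimensional "weight" vector, and the learning rate are all illustrative assumptions, not anything from the thread):

```python
# Sketch: one small gradient-descent step barely changes the parameters.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)        # stand-in for "synaptic weights" (made up)
target = rng.normal(size=100)   # made-up optimum

def grad(w):
    # Gradient of the quadratic loss 0.5 * ||w - target||^2.
    return w - target

lr = 1e-3                       # small learning rate (assumption)
w_before = w.copy()
w = w - lr * grad(w)            # one "learning episode"

rel_change = np.linalg.norm(w - w_before) / np.linalg.norm(w_before)
print(f"relative weight change after one episode: {rel_change:.2e}")
```

With a step size this small the relative change per episode is well under 1%, which is the kind of "tiny change per episode" the tweet appeals to.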

Barak A. Pearlmutter
@BAPearlmutter
Dec 9

I thought "Intrinsic Gradient Networks", Jason Tyler Rolfe's PhD thesis (2012, doi:10.7907/YCB7-7X24, resolver.caltech.edu/CaltechTHESIS:…) was an underappreciated approach to the problem of 𝛁∈🧠

KordingLab 👨💻🧠∇🔬📈,🏋️♂️⛷️🏂🛹🕺⛰️☕🦖
@KordingLab
Dec 9

Omg. Why didn't I know of it!

Blake Richards
@tyrell_turing
Dec 9

Ditto!

Adam Marblestone
@AdamMarblestone
Dec 10

Fascinating that one of the intrinsic gradient networks is... belief propagation. Rolfe's earlier MS thesis has a dendritic + columnar model for sum-product BP. One can do BP itself; the gradient of BP is easy. Comes full circle with @dileeplearning's comments on the brain-GD issue?
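The sum-product belief propagation Marblestone refers to can be sketched very compactly. The example below is not from Rolfe's thesis; it is a generic illustration on a three-variable binary chain x1 - x2 - x3, with all potentials invented for the demo. On a tree, sum-product BP computes exact marginals, so the BP belief for x2 can be checked against brute-force enumeration:

```python
# Minimal sum-product belief propagation on a binary chain x1 - x2 - x3.
# All potentials below are made-up numbers for illustration.
import numpy as np

psi12 = np.array([[1.0, 0.5], [0.5, 2.0]])   # pairwise potential psi(x1, x2)
psi23 = np.array([[1.5, 0.2], [0.7, 1.0]])   # pairwise potential psi(x2, x3)
phi1 = np.array([0.6, 0.4])                  # unary potentials
phi2 = np.array([1.0, 1.0])
phi3 = np.array([0.3, 0.7])

# Sum-product messages toward x2 (on a tree, leaves send first).
m1_to_2 = psi12.T @ phi1   # m_{1->2}(x2) = sum_{x1} phi1(x1) psi12(x1, x2)
m3_to_2 = psi23 @ phi3     # m_{3->2}(x2) = sum_{x3} phi3(x3) psi23(x2, x3)

belief2 = phi2 * m1_to_2 * m3_to_2
belief2 = belief2 / belief2.sum()            # normalized marginal p(x2)

# Brute-force check: enumerate all 8 joint configurations.
joint = np.zeros((2, 2, 2))
for a in range(2):
    for b in range(2):
        for c in range(2):
            joint[a, b, c] = (phi1[a] * phi2[b] * phi3[c]
                              * psi12[a, b] * psi23[b, c])
marg2 = joint.sum(axis=(0, 2))
marg2 = marg2 / marg2.sum()

print("BP marginal for x2:", belief2)
```

Because every operation here is a sum or product of array entries, the whole computation is differentiable, which is the sense in which "the gradient of BP is easy": autodiff through these message updates is straightforward.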

Dileep George
@dileeplearning
Dec 10

Adam, do you have a link to his MS thesis?

Adam Marblestone
@AdamMarblestone
Dec 10