Adam Marblestone
@AdamMarblestone

Neither of these is a perfect example of what David is asking for. But as Konrad and Greg and I wrote in 2016, the use of backprop-like credit-assignment signals internally does not imply monolithic end-to-end training from a single objective... pic.twitter.com/BUSoQKazlp
David A. Markowitz
@neurowitz
Nov 20

The fact that you can selectively turn off parts of cortex (e.g. via cooling) and the brain keeps working fine, with only very specific deficits, probably tells us all we need to know about the importance of backprop in biological learning.
Andrew Hires
@AndrewHires
Nov 21

That just tells us there are many distributed nets for specific tasks, and some association nets to manage the integration of them. It doesn't say much about how the networks are trained, imho.
Adam Marblestone
@AdamMarblestone
Nov 21

Agreed: like this arxiv.org/abs/1701.06538
Adam Marblestone
@AdamMarblestone
Nov 21

And like this: arxiv.org/pdf/1608.05343…
Adam Marblestone
@AdamMarblestone
Nov 21

Consider an architecture where each module predicts its many inputs as a function of its outputs and some global conditioning. It might be very modular, despite the use of backprop-like gradient flows.
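The idea in that tweet can be sketched in a few lines. The following is a minimal, hypothetical toy (not from any of the linked papers): each module encodes its input, then tries to reconstruct that same input from its own output plus a shared "global conditioning" vector, and the reconstruction-loss gradients are applied only within the module. Backprop-like gradient flow is used, but no single end-to-end objective spans the modules. All names (`PredictiveModule`, `step`) and the linear parameterization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class PredictiveModule:
    """Illustrative module: encodes its input, then predicts (reconstructs)
    that input from its own output plus a global conditioning vector.
    Gradients of the reconstruction loss never leave the module."""
    def __init__(self, d_in, d_out, d_global, lr=0.01):
        self.W_enc = rng.normal(0.0, 0.1, (d_out, d_in))
        self.W_dec_y = rng.normal(0.0, 0.1, (d_in, d_out))    # decode from own output
        self.W_dec_g = rng.normal(0.0, 0.1, (d_in, d_global)) # decode from global signal
        self.lr = lr

    def step(self, x, g):
        y = self.W_enc @ x                            # module output
        x_hat = self.W_dec_y @ y + self.W_dec_g @ g   # prediction of own input
        err = x - x_hat
        # local gradient descent on ||x - x_hat||^2 (analytic grads, factor 2 absorbed)
        g_enc = np.outer(self.W_dec_y.T @ err, x)     # backprop-like, but only one layer deep
        self.W_dec_y += self.lr * np.outer(err, y)
        self.W_dec_g += self.lr * np.outer(err, g)
        self.W_enc += self.lr * g_enc
        return y, float(err @ err)

# Two modules chained; each is trained solely by its own predictive loss,
# so there is no monolithic end-to-end objective.
m1 = PredictiveModule(8, 4, 2)
m2 = PredictiveModule(4, 3, 2)
x = rng.normal(size=8)
g = rng.normal(size=2)   # global conditioning signal shared by all modules
losses = []
for _ in range(200):
    y1, l1 = m1.step(x, g)
    _, l2 = m2.step(y1, g)
    losses.append(l1 + l2)
```

After training, the combined local losses fall even though no gradient ever crossed a module boundary, which is the sense in which the system stays modular despite using gradient descent internally.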
KordingLab 👨💻🧠∇🔬📈,🏋️♂️⛷️🏂🛹🕺⛰️☕🦖
@KordingLab
Nov 21

We may conclude that BP is not sufficient as a model of how the brain works (duh). Clearly the brain has a bias towards forming modules that vanilla BP does not have. I mean, yes, there is anatomy in the brain. There is also anatomy in ANNs.