David A. Markowitz · Nov 20
The fact that you can selectively turn off parts of cortex (e.g. via cooling) and the brain keeps working fine with only very specific deficits probably tells us all we need to know about the importance of backprop in biological learning.
Andrew Hires · Nov 21
Replying to @neurowitz
That just tells us there are many distributed nets for specific tasks, and some association nets to manage the integration of them. Doesn't say much about how the networks are trained, imho.
Adam Marblestone · Nov 21
Replying to @AndrewHires @neurowitz
Agreed: like this
Adam Marblestone · Nov 21
Replying to @AndrewHires @neurowitz
Neither of these is a perfect example of what David is asking for. But as Konrad and Greg and I wrote in 2016, the use of backprop-like credit-assignment signals internally does not imply a monolithic end-to-end training from a single objective...
Adam Marblestone · Nov 21
Replying to @AndrewHires @neurowitz
Consider an architecture where each module predicts its many inputs as a function of its outputs and some global conditioning. Might be very modular, despite the use of backprop-like gradient flows.
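A minimal numpy sketch of the kind of architecture described in the tweet above (the `Module` class, dimensions, linear encoder/decoder, and learning rule are illustrative assumptions, not anything from the thread): each module encodes its input, and separately learns a local decoder that predicts that input back from the module's own output plus a global conditioning vector. Each update uses only the module's own prediction error, so no gradient ever crosses a module boundary — the system stays modular even though each local update is itself gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

class Module:
    """One module: encodes x -> y, and locally learns a decoder that
    predicts its own input x from its output y plus a global
    conditioning signal g. No gradients flow between modules."""
    def __init__(self, d_in, d_out, d_g):
        self.W = rng.normal(scale=0.5, size=(d_out, d_in))        # encoder
        self.D = rng.normal(scale=0.1, size=(d_in, d_out + d_g))  # decoder

    def forward(self, x):
        return self.W @ x

    def local_update(self, x, y, g, lr=0.01):
        z = np.concatenate([y, g])
        x_hat = self.D @ z
        err = x_hat - x                       # local prediction error
        # gradient step on 0.5 * ||x_hat - x||^2, w.r.t. this module's D only
        self.D -= lr * np.outer(err, z)
        return 0.5 * float(err @ err)

# Two stacked modules; each learns only from its own reconstruction error.
m1, m2 = Module(8, 6, 2), Module(6, 4, 2)
g = np.ones(2)                                # global conditioning signal
losses = []
for step in range(500):
    x = rng.normal(size=8)
    y1 = m1.forward(x)
    y2 = m2.forward(y1)
    l1 = m1.local_update(x, y1, g)            # uses only m1's error
    l2 = m2.local_update(y1, y2, g)           # uses only m2's error
    losses.append(l1 + l2)

print(f"mean loss, first 20 steps: {np.mean(losses[:20]):.2f}; "
      f"last 20 steps: {np.mean(losses[-20:]):.2f}")
```

The point of the sketch is the credit-assignment boundary: `m2.local_update` never touches `m1.W`, so swapping or freezing one module leaves the other's learning rule untouched, yet every individual update is still a "backprop-like" gradient step on a prediction error.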
KordingLab 👨‍💻🧠∇🔬📈,🏋️‍♂️⛷️🏂🛹🕺⛰️☕🦖 · Nov 21
We may conclude that BP is not sufficient as a model of how the brain works (duh). Clearly the brain has a bias towards forming modules that vanilla BP does not have. I mean yes, there is anatomy in the brain. There is also anatomy in ANNs.