Tweets
Matt Golub Feb 3
A new and intriguing view into neural population activity during learning, with , , , , Stephen Ryu, and .
Matt Golub Retweeted
Jay Hennig Jun 10
Super excited to see this work published! Congrats to Emily for finishing such an awesome project. These were super difficult experiments for her to run, and a really challenging problem to think through!
Matt Golub Retweeted
Niru Maheswaranathan Jun 27
New work out on arXiv! Reverse engineering recurrent networks for sentiment classification reveals line attractor dynamics (), with fantastic co-authors , , and . Summary below! 👇🏾 (1/4)
Matt Golub Nov 1, 2018
Replying to @MattGolub_Neuro
We hope this tool inspires you to unleash modern deep learning approaches toward understanding how networks and brains solve challenging tasks!
Matt Golub Nov 1, 2018
Replying to @MattGolub_Neuro
FixedPointFinder identifies the stable (black) and unstable (red) fixed points, along with linearized dynamics local to each fixed point (red lines are dominant modes). Trajectories of the network state are overlaid in blue.
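The stability calls in that figure come from linearizing the dynamics at each fixed point. Here is a minimal NumPy sketch of that step, not the toolbox's actual API: a finite-difference Jacobian stands in for the TensorFlow Jacobian computations mentioned in the announcement below, and the update map F and point h_star are hypothetical placeholders.

```python
import numpy as np

def jacobian(F, h_star, eps=1e-6):
    """Finite-difference Jacobian of the state-update map F at h_star."""
    n = h_star.size
    J = np.zeros((n, n))
    for i in range(n):
        dh = np.zeros(n)
        dh[i] = eps
        J[:, i] = (F(h_star + dh) - F(h_star - dh)) / (2 * eps)
    return J

def classify_fixed_point(F, h_star):
    """For discrete-time dynamics h <- F(h), a fixed point is stable when
    all Jacobian eigenvalues have magnitude < 1. Eigenvectors of the
    largest-magnitude eigenvalues are the dominant local modes."""
    eigvals, eigvecs = np.linalg.eig(jacobian(F, h_star))
    order = np.argsort(-np.abs(eigvals))
    return np.all(np.abs(eigvals) < 1.0), eigvals[order], eigvecs[:, order]
```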
Matt Golub Nov 1, 2018
Replying to @MattGolub_Neuro
Here’s an example: we trained a 16-unit LSTM network to implement a 3-bit memory (a.k.a. the Flip Flop task). Each input (gray) delivers transient pulses to flip the state of a corresponding output (trained network: purple; training signal: cyan).
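The Flip Flop task itself is straightforward to simulate. Below is a minimal NumPy sketch under assumed settings; the pulse probability, trial count, and sequence length are illustrative, not necessarily those used in this example.

```python
import numpy as np

def flipflop_trials(n_trials=64, n_time=100, n_bits=3, p_pulse=0.05, seed=0):
    """3-bit memory: each input channel receives sparse +/-1 pulses; the
    target output holds the sign of the most recent pulse on that channel."""
    rng = np.random.default_rng(seed)
    occurs = rng.random((n_trials, n_time, n_bits)) < p_pulse
    signs = rng.choice([-1.0, 1.0], size=(n_trials, n_time, n_bits))
    inputs = np.where(occurs, signs, 0.0)

    targets = np.zeros_like(inputs)
    held = np.zeros((n_trials, n_bits))          # remembered bit states
    for t in range(n_time):
        pulsed = inputs[:, t, :] != 0
        held[pulsed] = inputs[:, t, :][pulsed]   # a pulse flips the bit
        targets[:, t, :] = held                  # otherwise, hold
    return inputs, targets
```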
Matt Golub Nov 1, 2018
We've created a TensorFlow toolbox for reverse engineering trained RNNs (with ). You train a network (e.g., "vanilla", LSTM, GRU, custom), then we use TF to do the fixed point optimizations and Jacobian computations.
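Not the toolbox's actual interface, but a minimal SciPy sketch of the core idea behind the fixed point optimizations: minimize the state speed q(h) = 0.5 * ||F(h) - h||^2 from many initial states sampled along trajectories, and keep the minima with near-zero speed as approximate fixed points. F and initial_states are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def find_fixed_points(F, initial_states, tol=1e-8, decimals=4):
    """Find approximate fixed points of a state-update map F by minimizing
    the speed q(h) = 0.5 * ||F(h) - h||^2 from many starting states
    (e.g., hidden states sampled from trajectories of the trained RNN)."""
    def speed(h):
        diff = F(h) - h
        return 0.5 * diff @ diff

    optima = []
    for h0 in initial_states:
        # Finite-difference gradients here for simplicity; the toolbox
        # itself uses TensorFlow autodiff for this optimization.
        result = minimize(speed, h0, method="L-BFGS-B")
        if result.fun < tol:                     # keep near-zero-speed optima
            optima.append(result.x)

    if not optima:
        return np.empty((0, np.asarray(initial_states).shape[-1]))
    # Merge duplicate optima that converged to the same fixed point.
    return np.unique(np.round(optima, decimals), axis=0)
```

Stable vs. unstable classification then follows from the Jacobian eigenvalues at each recovered point, as in the linearization sketch above.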
Matt Golub Retweeted
Sergey Stavisky Sep 10, 2018
Great work, Megan, Christeva, and colleagues! It’s reassuring to see that every once in a while, the answer is not “Motor cortex just does everything”.
Matt Golub Jul 3, 2018
We've posted code to accompany our 2015 eLife paper. The framework extracts a subject's internal model of a dynamical system being controlled. Perhaps useful for those studying BMI / motor / control / learning!
Matt Golub Retweeted
Daniel Bear Jul 2, 2018
Deep convolutional neural networks are great models of the visual system, but these static systems don't explain the temporal dynamics of real visual responses. So we built deep recurrent networks: Paper:
Matt Golub Jun 20, 2018
To reach or not to reach, that was the question. New work from et al. shows that preparatory activity in F5 and AIP separates according to anticipated delays.
Matt Golub Retweeted
David Sussillo ☝️🤓 Jun 16, 2018
Most of you know me as a successful neuroscientist / deep learning researcher but I have a story that I want to share briefly. I grew up in a group home, which is basically an orphanage.
Matt Golub Mar 13, 2018
Replying to @GJarbo
That's a great question! All of these experiments were performed with adult animals, so we unfortunately cannot speak to when during development different learning abilities arise.
Matt Golub Mar 13, 2018
Replying to @neurotic_geek @NatureNeuro
Thanks for putting together the wonderful News and Views!
Matt Golub Mar 12, 2018
Replying to @smickdougle
(2/2) We've begun to test this hypothesis about long-term learning in experiments led by Emily Oby. Emily dropped some exciting preliminary results about out-of-repertoire learning at last week. (see T-25 on pg 40).
Matt Golub Mar 12, 2018
Replying to @smickdougle
(1/2) Thanks Sam! We are capable of learning very complex skills given extensive practice. Our results strongly suggest that achieving expert levels of behavior requires changes to one's neural repertoire and perhaps one's intrinsic manifold.
Matt Golub Mar 12, 2018
Thanks for the mention ! The paper is out today, check it out!
Matt Golub Mar 12, 2018
How does the brain quickly learn to improve behavior, and what are the limitations of this type of learning? Check out our latest paper, "Learning by neural reassociation," as featured in Byron Yu's talk.
Matt Golub Retweeted
Saurabh Vyas Feb 15, 2018
First tweet, new paper: we asked, can learning motor tasks in your mind w/o physical movements (via a BMI) ‘transfer’ and improve overt behavior, & if so, by what neural mechanism? Thx co-authors Paul & Stephen