Tweets

Matt Golub
@MattGolub_Neuro
Feb 3
A new and intriguing view into neural population activity during learning, with @XuLunaSun, @djoshea, @EricMTrautmann, @SaurabhsNeurons, Stephen Ryu, and @shenoystanford. twitter.com/XuLunaSun/stat…

Matt Golub retweeted
Jay Hennig
@jehosafet
Jun 10
Super excited to see this work published! Congrats to Emily for finishing such an awesome project. These were super difficult experiments for her to run, and a really challenging problem to think through! @MattGolub_Neuro @AlanDegenhart
pnas.org/content/early/…

Matt Golub retweeted
Niru Maheswaranathan
@niru_m
Jun 27
New work out on arXiv! Reverse engineering recurrent networks for sentiment classification reveals line attractor dynamics (arxiv.org/abs/1906.10720), with fantastic co-authors @ItsNeuronal, @MattGolub_Neuro, @SuryaGanguli and @SussilloDavid. #tweetprint summary below! 👇🏾 (1/4)

Matt Golub
@MattGolub_Neuro
Nov 1, 2018
We hope this tool inspires you to unleash modern deep learning approaches toward understanding how networks and brains solve challenging tasks!

Matt Golub
@MattGolub_Neuro
Nov 1, 2018
FixedPointFinder identifies the stable (black) and unstable (red) fixed points, along with linearized dynamics local to each fixed point (red lines are dominant modes). Trajectories of the network state are overlaid in blue. pic.twitter.com/P4vXoqI8Nt
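
For readers who want to reproduce the stability coloring: for a discrete-time RNN x_{t+1} = F(x_t), a fixed point x* is stable when every eigenvalue of the Jacobian dF/dx at x* lies inside the unit circle, and the dominant eigenvectors give the "modes" drawn in red. A minimal NumPy sketch of that classification, using a vanilla tanh RNN as a stand-in (the figure above is from an LSTM, and none of this is FixedPointFinder's actual code):

```python
import numpy as np

def jacobian_at(x_star, W, b):
    """Jacobian of F(x) = tanh(W @ x + b) at x_star."""
    u = W @ x_star + b
    # d tanh(u_i)/d x_j = (1 - tanh(u_i)^2) * W_ij: scale each row of W.
    return (1.0 - np.tanh(u) ** 2)[:, None] * W

def classify(x_star, W, b):
    """Stable iff all Jacobian eigenvalues lie inside the unit circle."""
    eig = np.linalg.eigvals(jacobian_at(x_star, W, b))
    return "stable" if np.max(np.abs(eig)) < 1.0 else "unstable"

rng = np.random.default_rng(0)
n = 16
W = 0.8 * rng.standard_normal((n, n)) / np.sqrt(n)  # spectral radius ~0.8
b = np.zeros(n)
# With b = 0, the origin is a fixed point (tanh(0) = 0).
print(classify(np.zeros(n), W, b))  # -> "stable"
```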

Matt Golub
@MattGolub_Neuro
Nov 1, 2018
Here’s an example--we trained a 16-unit LSTM network to implement a 3-bit memory (a.k.a. the Flip Flop task). Each input (gray) delivers transient pulses to flip the state of a corresponding output (trained network: purple; training signal: cyan). pic.twitter.com/iivDDtks0o
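
The task itself is easy to synthesize for anyone wanting to try this at home. A sketch of a flip-flop data generator, with assumed task parameters (pulse rate, trial length, and the zero initial target are illustrative choices, not necessarily the authors'):

```python
import numpy as np

def flipflop_batch(n_trials=64, n_time=100, n_bits=3, p_pulse=0.05, seed=0):
    """Inputs: sparse +/-1 pulses per channel. Targets: hold the most
    recent pulse value on each channel (0 before the first pulse)."""
    rng = np.random.default_rng(seed)
    pulse = rng.random((n_trials, n_time, n_bits)) < p_pulse
    sign = rng.choice([-1.0, 1.0], size=(n_trials, n_time, n_bits))
    inputs = pulse * sign
    targets = np.zeros_like(inputs)
    held = np.zeros((n_trials, n_bits))
    for t in range(n_time):
        held = np.where(inputs[:, t, :] != 0, inputs[:, t, :], held)
        targets[:, t, :] = held
    return inputs, targets

inputs, targets = flipflop_batch()
print(inputs.shape, targets.shape)  # (64, 100, 3) (64, 100, 3)
```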

Matt Golub
@MattGolub_Neuro
Nov 1, 2018
We've created a Tensorflow toolbox for reverse engineering trained RNNs (with @SussilloDavid). You train a network (e.g., "vanilla", LSTM, GRU, custom), then we use TF to do the fixed point optimizations and Jacobian computations. joss.theoj.org/papers/10.2110…
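
The core trick the toolbox automates is simple to state: treat the state update as a map F and numerically minimize q(x) = 1/2 ||F(x, u) - x||^2 from many initial states sampled off network trajectories; points with q near zero are approximate fixed points. The sketch below is not the toolbox's API, just that idea in plain NumPy with a hand-coded gradient for a vanilla tanh RNN at zero input (the toolbox instead uses TensorFlow's autodiff, which is what lets it handle LSTMs, GRUs, and custom cells):

```python
import numpy as np

def find_fixed_point(x0, W, b, lr=0.1, tol=1e-10, max_iter=20000):
    """Gradient descent on q(x) = 0.5 * ||tanh(W x + b) - x||^2."""
    x = x0.copy()
    for _ in range(max_iter):
        Fx = np.tanh(W @ x + b)
        r = Fx - x                        # residual F(x) - x
        q = 0.5 * r @ r
        if q < tol:
            break
        J = (1.0 - Fx ** 2)[:, None] * W  # Jacobian of F at x
        x -= lr * (J - np.eye(len(x))).T @ r  # grad q = (J - I)^T r
    return x, q

rng = np.random.default_rng(1)
n = 16
W = 1.5 * rng.standard_normal((n, n)) / np.sqrt(n)
b = 0.1 * rng.standard_normal(n)
# In practice you'd seed x0 from states visited during task trials;
# a random state stands in here.
x_star, q = find_fixed_point(rng.standard_normal(n), W, b)
print("q(x*) =", q)  # near zero -> x_star is an approximate fixed point
```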

Matt Golub retweeted
Sergey Stavisky
@sergeydoestweet
Sep 10, 2018
Great work, Megan, Christeva, and colleagues! It’s reassuring to see that every once in a while, the answer is not “Motor cortex just does everything”. twitter.com/biorxiv_neursc…

Matt Golub
@MattGolub_Neuro
Jul 3, 2018
We've posted code to accompany our 2015 eLife paper. The framework extracts a subject's internal model of a dynamical system being controlled. Perhaps useful for those studying BMI / motor / control / learning!
elifesciences.org/articles/10015
github.com/mattgolub/inte…

Matt Golub retweeted
Daniel Bear
@recursus
Jul 2, 2018
Deep convolutional neural networks are great models of the visual system, but these static systems don't explain the temporal dynamics of real visual responses. So we built deep recurrent networks:
@aran_nayebi @qbilius @SussilloDavid @NeuroAILab
Paper: arxiv.org/abs/1807.00053 pic.twitter.com/veyXeg5RGi

Matt Golub
@MattGolub_Neuro
Jun 20, 2018
To reach or not to reach, that was the question. New work from @JonAMichaels et al shows that preparatory activity in F5 and AIP separates according to anticipated delays. twitter.com/JonAMichaels/s…

Matt Golub retweeted
David Sussillo ☝️🤓
@SussilloDavid
Jun 16, 2018
Most of you know me as a successful neuroscientist / deep learning researcher but I have a story that I want to share briefly.
I grew up in a group home, which is basically an orphanage. twitter.com/MaddowBlog/sta…

Matt Golub
@MattGolub_Neuro
Mar 13, 2018
That's a great question! All of these experiments were performed with adult animals, so we unfortunately cannot speak to when during development different learning abilities arise.

Matt Golub
@MattGolub_Neuro
Mar 13, 2018
Thanks for putting together the wonderful News and Views!

Matt Golub
@MattGolub_Neuro
Mar 12, 2018
(2/2) We've begun to test this hypothesis about long-term learning in experiments led by Emily Oby. Emily dropped some exciting preliminary results about out-of-repertoire learning at #cosyne2018 last week.
cosyne.org/cosyne18/Cosyn…
(see T-25 on pg 40).

Matt Golub
@MattGolub_Neuro
Mar 12, 2018
(1/2) Thanks Sam! We are capable of learning very complex skills given extensive practice. Our results strongly suggest that achieving expert levels of behavior requires changes to one's neural repertoire and perhaps one's intrinsic manifold.

Matt Golub
@MattGolub_Neuro
Mar 12, 2018
Thanks for the mention @SaurabhsNeurons! The paper is out today, check it out!
nature.com/articles/s4159…

Matt Golub
@MattGolub_Neuro
Mar 12, 2018
How does the brain quickly learn to improve behavior, and what are the limitations of this type of learning? Check out our latest paper, "Learning by neural reassociation," as featured in Byron Yu's #cosyne2018 talk.
nature.com/articles/s4159…

Matt Golub retweeted
Saurabh Vyas
@SaurabhsNeurons
Feb 15, 2018
First tweet, new paper: we asked can learning motor tasks in your mind w/o physical movements (via a BMI) ‘transfer’ and improve overt behavior, & if so, by what neural mechanism? Thx co-authors @NirEvenChen @sergeydoestweet @shenoystanford Paul & Stephen
cell.com/neuron/fulltex…