Ben Lansdell (@benlansdell)
Philadelphia, PA
Neuroscience, applied mathematics, deep learning, causality
53 Tweets · 180 Following · 143 Followers
Tweets

Ben Lansdell (@benlansdell) · Feb 4
I agree, it is interesting. The inverse model being something like: here are some state transitions, what are the interventions that were taken? Learning agents that could solve this in their environments would be better observational learners.

Ben Lansdell (@benlansdell) · Jan 29
Not sure about time spent... but Descartes apparently did a lot of his work in bed.

Ben Lansdell (@benlansdell) · Jan 26
Evolution by natural selection can result in organisms climbing a fitness landscape 'as-if' doing gradient ascent. This doesn't mean there is any gradient computation being implemented anywhere... just random variation. 3/3

Ben Lansdell (@benlansdell) · Jan 26
Marr II would be something like: there are mechanisms in a neuron/dendrite that correspond to steps to compute a partial derivative. Ideas about 'directed' or 'smart' evolution aside, Darwin's theory perhaps provides a good example of the distinction. 2/3

Ben Lansdell (@benlansdell) · Jan 26
Can you elaborate? When you say neurons/dendrites compute partial derivatives, are you talking about Marr II or III? I take Marr III to be something just based on behavior. An 'as-if' claim. 1/3

Ben Lansdell (@benlansdell) · Jan 23
Why isn't option (3) just 'They don't, because evolution found another way'?

Ben Lansdell retweeted
DeepMind (@DeepMind) · Jan 20
Given the smoothness of videos, can we learn models more efficiently than with #backprop? We present Sideways - a step towards a high-throughput, approximate backprop that considers the one-way direction of time and pipelines forward and backward passes. arxiv.org/pdf/2001.06232… pic.twitter.com/evbwULE0s2

Ben Lansdell retweeted
Ari Benjamin (@arisbenjamin) · Jan 12
What makes a good lab? Are group meetings really the best way? In the @KordingLab we recently reexamined how we organize our weeks, and then redesigned everything in a systematic way (100% democratically!). I blogged about our design process and takeaways: kordinglab.com/2019/12/20/lab… pic.twitter.com/P7Z5YG4nLh

Ben Lansdell (@benlansdell) · Dec 11
I agree that dX must be decodeable from neural activity. Alone, this won't say how it's solving the task though. And to say 'it infers dX' suggests a sort of explicit, sequential solution: 'infer dX, compute X(T), move'. It may not work like this, so is it useful to focus on?

Ben Lansdell retweeted
Adam J Calhoun (@neuroecology) · Nov 22
Can a neuroscientist understand a virtual rodent?
Authors took a realistic 3D animal, trained deep RL to control it during tasks, and then used neuroscience techniques to peek inside this animal...
[Josh Merel, Diego Aldorando, Greg Wayne, @BOlveczky]
arxiv.org/abs/1911.09451 pic.twitter.com/yHSro4KcvV

Ben Lansdell (@benlansdell) · Nov 22
(Just in Shannon's rather narrow sense) a communication channel just encodes a message that can be decoded with some fidelity. There is nothing in the formalism that says what the message means or signifies. See e.g. the information section of ncbi.nlm.nih.gov/pmc/articles/P…

Ben Lansdell (@benlansdell) · Nov 22
I think you can have well-defined neuronal communication, just in Shannon's information theory sense. This is a common focus in practice. What the neural activity signifies is a different question... and depends on a much wider context if it is to be meaningful.

Ben Lansdell (@benlansdell) · Nov 22
A signal is a sign, and signs have their own field of study, separate from Shannon's ideas on communication channels. en.wikipedia.org/wiki/Semiotics

Ben Lansdell (@benlansdell) · Nov 22
Communication is the transmission of information. A signal is some meaning given to that information?

Ben Lansdell retweeted
Brian Cheung (@thisismyhat) · Nov 18
The new Cerebras chip is the most accidentally neuromorphic chip ever. cerebras.net/cerebras-wafer… pic.twitter.com/hNfhNqkfYD

Ben Lansdell retweeted
CLaE (@leafs_s) · Nov 18
Trends in Cognitive Sciences
Theories of Error Back-Propagation in the Brain
cell.com/trends/cogniti…

Ben Lansdell retweeted
hardmaru (@hardmaru) · Nov 14
“I’m going to work on artificial general intelligence.”
– John Carmack
twitter.com/hn_frontpage/s… pic.twitter.com/6tuk78hWvd

Ben Lansdell (@benlansdell) · Nov 10
The neuroscience version of the AI effect?
en.wikipedia.org/wiki/AI_effect

Ben Lansdell (@benlansdell) · Nov 5
How do we decide to imitate or emulate? biorxiv.org/content/biorxi…

Ben Lansdell retweeted
Yonatan Aljadeff (@yahdef) · Nov 4
Happy to share this work, with Claudia Clopath, Rob Froemke & Co.
arxiv.org/abs/1911.00307
We study how cortical synapses can rely on limited error information to decide whether to potentiate/depress.
To approach theoretical capacity, plasticity must have certain properties.
1/2