Twitter | Search
Blake Richards 22 Apr 2018
Great thread. My personal take: this “non-visual” activity reflects the internal generative model of the animal, i.e. the visual cortex is sampling from a distribution conditioned on motor activity.
Marius Pachitariu 24 Apr 2018
Replying to @tyrell_turing
The neurons are representing actions, not the sensory consequences of those actions. The latter would have looked very different. For example in darkness, the sensory consequences of movement are still darkness, so there should be no response! Read the paper for the fine details.
Adam Hantman 24 Apr 2018
Replying to @marius10p @tyrell_turing
Not sure this makes sense: in darkness, the sensory consequences of movement are proprioception.
Marius Pachitariu 24 Apr 2018
Replying to @AdamHantman @tyrell_turing
is talking about visual predictive coding in V1
Adam Hantman 24 Apr 2018
Replying to @marius10p @tyrell_turing
How do you know it is predictive coding in V1 and not proprioception in V1?
David Schoppik 24 Apr 2018
or vestibular-driven? cf. cat primary visual cortex (after caudal medulla section)
Marius Pachitariu 24 Apr 2018
Replying to @schoppik @AdamHantman @tyrell_turing
All good hypotheses. What I think doesn't work is predictive coding of visual inputs, which is a popular theory in neuroscience.
Blake Richards 24 Apr 2018
Replying to @marius10p @schoppik @AdamHantman
1/ I’ll have more to say after I read the paper, but want to clarify that I don’t necessarily advocate for Rao & Ballard predictive coding here.
Blake Richards 24 Apr 2018
Replying to @marius10p @schoppik @AdamHantman
2/ My intuition, though, is that motor/vestibular/proprioceptive input to V1 would provide conditioning variables for a generative model of visual input.
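[Editor's note: the "generative model conditioned on motor activity" idea in the thread can be illustrated with a toy sketch. This is not from any of the papers discussed; the linear-Gaussian model, its weights, and the motor-state vector below are all illustrative assumptions.]

```python
import numpy as np

# Toy sketch: visual activity v sampled from a distribution conditioned on a
# motor/vestibular/proprioceptive state m, here v ~ N(W m, sigma^2 I).
# The conditioning variables shift the mean of the distribution the model
# samples from -- one concrete reading of "V1 is sampling from a distribution
# conditioned on motor activity".

rng = np.random.default_rng(0)

n_motor, n_visual = 4, 8
W = rng.standard_normal((n_visual, n_motor))  # stand-in for a learned mapping
sigma = 0.1                                   # observation noise scale

def sample_visual(m, n_samples=1):
    """Draw samples of predicted visual activity given motor state m."""
    mean = W @ m
    return mean + sigma * rng.standard_normal((n_samples, n_visual))

m_running = np.array([1.0, 0.0, 0.5, 0.0])  # hypothetical motor state
samples = sample_visual(m_running, n_samples=100)
# Sample mean approaches the conditional mean W @ m_running as n grows.
```

In this reading, darkness corresponds to conditioning that predicts zero visual drive, which is where the thread's disagreement about what the observed activity represents comes in.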
Adam Marblestone 24 Apr 2018
Replying to @tyrell_turing @marius10p and 2 others
Yeah! See "efference copy" on the left-hand side of the picture below from for an example, building on their earlier paper