Jaan Aru
Brains are amazing. Our lab demonstrates that single human layer 2/3 neurons can compute the XOR operation. Never seen before in any neuron in any other species. Out now in . Congrats Albert, Tim & co.
A special developmental program in the human brain drives the disproportionate thickening of cortical layer 2/3. This suggests that the expansion of layer 2/3, along with its numerous neurons and...
Subutai Ahmad · Jan 2
Replying to @jaaanaru @sciencemagazine and 2 others
Great paper. I'm a bit confused about the novelty of the XOR result though. As I understand it, Yiota's model from 2003 (where a pyramidal cell was equivalent to a 2-layer network) would also be able to solve the XOR problem.
Jaan Aru · Jan 2
:) But this is *real* stuff, real neurons, real currents and real spikes, so nobody can complain that Albert "simply made up" something in a model by tuning parameters etc :)
Gary Lupyan · Jan 3
Replying to @jaaanaru @sciencemagazine and 2 others
Super interesting! I haven't had a chance to read this yet, but what's the evidence that this is about human layer 2/3 neurons in particular?
Jaan Aru · Jan 3
Replying to @glupyan @sciencemagazine and 2 others
Over the last two decades there has been a fair amount of work on rodent layer 2/3 neurons using similar techniques. But in principle it is of course possible that this phenomenon will one day be observed in some other species (e.g. other primates). So far it hasn't been.
Christian Varela · Jan 2
Replying to @jaaanaru @IanAdAstra and 3 others
I have an idea what this means, but could someone please explain this in more layman's terms?
Your Caucus Reporting App, powered by Blockchain · Jan 2
Replying to @vgman94 @jaaanaru and 4 others
1/ The XOR function (given two predicates A and B, XOR(A,B) = T if A=T and B=F, or vice versa) has been a focus for computational neuroscience since Minsky and Papert showed that a single linear perceptron was incapable of learning it.
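The linear-separability point above can be sketched in a few lines of Python. This is an illustrative sketch, not code from the paper: the weights and the brute-force grid are my own choices. A single linear threshold unit fails on XOR for every weight setting in a small grid, while a two-layer network with OR and NAND hidden units computes it exactly.

```python
import itertools

def step(x):
    """Heaviside threshold: fire (1) if the weighted sum is positive."""
    return 1 if x > 0 else 0

def linear_unit(a, b, w1, w2, bias):
    """A single linear threshold unit (perceptron) on two binary inputs."""
    return step(w1 * a + w2 * b + bias)

def two_layer_xor(a, b):
    """XOR via a two-layer network with hand-picked (illustrative) weights."""
    h1 = linear_unit(a, b, 1, 1, -0.5)    # hidden unit 1: OR
    h2 = linear_unit(a, b, -1, -1, 1.5)   # hidden unit 2: NAND
    return linear_unit(h1, h2, 1, 1, -1.5)  # output: AND(h1, h2) = XOR(a, b)

def single_unit_can_xor():
    """Brute-force check (not a proof): no single linear unit on a
    small weight grid reproduces the XOR truth table."""
    grid = [x / 2 for x in range(-4, 5)]  # weights and bias in {-2, ..., 2}
    for w1, w2, bias in itertools.product(grid, repeat=3):
        if all(linear_unit(a, b, w1, w2, bias) == (a ^ b)
               for a in (0, 1) for b in (0, 1)):
            return True
    return False
```

The grid search only samples a finite set of weights, but the underlying fact is geometric: the points {(0,1), (1,0)} cannot be separated from {(0,0), (1,1)} by any single line, which is exactly Minsky and Papert's observation.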
Anders Sandberg · Jan 2
Replying to @jaaanaru @sciencemagazine and 2 others
Maybe there is hope for the perceptrons yet.
Ron🔹 · Jan 2
Replying to @anderssandberg @jaaanaru and 3 others
Single neurons already have multiple inputs and outputs… they're probably more like a multilayered perceptron on their own. And I bet there are other neuron types that specialize in convolutions and permutations and other "primitive" operations that I'm uneducated about.
Thomas Hannagan · Jan 2
Replying to @jaaanaru @sciencemagazine and 2 others
This looks terrific, except for the paywall…
𝕋𝕙𝕖 𝕆𝕟𝕖 𝕚𝕞𝕡𝕣𝕠𝕧𝕚𝕟𝕘 θ · Jan 2
Replying to @ThomasNeuro @jaaanaru and 3 others
Makes me miss Aaron Swartz