David Beniaguev · 24 May 19
A story of a Cortical Neuron as a Deep Artificial Neural Net: 1) Neurons in the brain are bombarded with massive synaptic input distributed across a large tree-like structure, the dendritic tree. During this bombardment, the tree goes wild. Preprint:
2) These beautiful electricity waves are the result of many different ion channels opening and closing, and electric current flowing IN and OUT and ALONG the neuron. This is complex: a lot of things are going on, and the question arises - how can we understand this complexity?
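For a concrete picture of these currents, here is a minimal sketch (not the paper's simulation code; all parameter values are illustrative assumptions): forward-Euler integration of a single-compartment conductance-based membrane equation under random synaptic bombardment.

```python
# A minimal sketch (not the paper's simulation code) of the underlying physics:
# forward-Euler integration of a single-compartment conductance-based membrane
# equation under random synaptic bombardment. Parameter values are illustrative.
import numpy as np

dt = 0.1                    # time step (ms)
T = int(200 / dt)           # 200 ms of simulated time
C_m = 1.0                   # membrane capacitance
g_L, E_L = 0.05, -70.0      # leak conductance and its reversal potential (mV)
E_exc, E_inh = 0.0, -80.0   # excitatory / inhibitory reversal potentials (mV)

rng = np.random.default_rng(0)
g_exc = 0.2 * (rng.random(T) < 0.02)   # toy excitatory conductance bombardment
g_inh = 0.5 * (rng.random(T) < 0.02)   # toy inhibitory conductance bombardment

V = np.empty(T)
V[0] = E_L
for t in range(1, T):
    # Each term is one ionic current: conductance times driving force.
    I = (g_L * (E_L - V[t - 1])
         + g_exc[t] * (E_exc - V[t - 1])
         + g_inh[t] * (E_inh - V[t - 1]))
    V[t] = V[t - 1] + dt * I / C_m     # dV/dt = total current / capacitance
```

A detailed model of a real neuron is thousands of such equations, one per dendritic compartment, coupled to their neighbors along the tree.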
3) The approach we take in the paper is to attempt to compress all of this complexity into an as-small-as-possible deep artificial neural network. We simulate a cell with all of its complexity and attempt to fit a DNN to the neuron's input-output transformation.
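Schematically, the fit looks like the sketch below. This is hedged, not the paper's exact architecture: layer widths, kernel lengths, and the synapse count are placeholder assumptions. A causal temporal-convolutional net maps presynaptic spike trains (time x synapses) to the simulated neuron's somatic voltage and spike probability.

```python
# A hedged sketch of the fitting setup; sizes are placeholder assumptions.
import tensorflow as tf
from tensorflow.keras import layers

n_synapses = 1000                      # number of input channels (assumed)

inp = layers.Input(shape=(None, n_synapses))           # binary spike trains
x = layers.Conv1D(128, 35, padding='causal', activation='relu')(inp)
x = layers.Conv1D(128, 35, padding='causal', activation='relu')(x)
spike = layers.Conv1D(1, 1, activation='sigmoid', name='spike')(x)
voltage = layers.Conv1D(1, 1, name='voltage')(x)       # linear readout of V_soma

model = tf.keras.Model(inp, [spike, voltage])
model.compile(optimizer='adam',
              loss={'spike': 'binary_crossentropy', 'voltage': 'mse'})
# model.fit(X_spikes, [y_spikes, y_voltage], ...)  # targets from the biophysical sim
```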
4) We successfully manage to compress the full complexity of a neuron, usually described by more than 10,000 coupled nonlinear differential equations, into a smaller, but still very large, deep network. What biological mechanism is responsible for this complexity?
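For reference, those "10,000+ coupled nonlinear differential equations" have, schematically, the following form (a generic compartmental model; the simulated cell's actual channel kinetics are more elaborate):

```latex
% Schematic compartmental model: one membrane equation per compartment i,
% coupled to its neighbors j, plus first-order kinetics per gating variable x.
C_m \frac{dV_i}{dt}
  = -\sum_{c} \bar{g}_{c}\, m_{c,i}^{p_c} h_{c,i}^{q_c}\,(V_i - E_c)
    + \sum_{j \in \mathcal{N}(i)} g_{ij}\,(V_j - V_i)
    + I_{\mathrm{syn},i}(t),
\qquad
\frac{dx_{c,i}}{dt} = \frac{x_{\infty}(V_i) - x_{c,i}}{\tau_{x}(V_i)}
```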
5) The first candidate that comes to mind is the NMDA ion channel present at excitatory synapses. So we remove the NMDA ion channels and repeat the experiment, keeping only AMPA synapses. It turns out that now we only need a very small artificial net to mimic the input-output transformation.
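Why does NMDA matter so much? Its conductance is voltage-dependent because of the magnesium block, while AMPA conductance is not. A minimal sketch using the standard Jahr & Stevens (1990) formulation; the parameter values are the commonly used defaults (my assumption, not taken from the paper):

```python
# NMDA vs AMPA in one function: the Mg2+ block makes NMDA conductance grow
# with local depolarization, whereas AMPA conductance is voltage-independent.
import numpy as np

def nmda_unblocked_fraction(V, mg=1.0):
    """Fraction of NMDA channels not blocked by Mg2+ at membrane potential V (mV)."""
    return 1.0 / (1.0 + (mg / 3.57) * np.exp(-0.062 * V))

for V in (-80, -60, -40, -20):
    print(V, round(nmda_unblocked_fraction(V), 3))
# AMPA: same conductance at every V (a linear synapse).
# NMDA: depolarization unblocks the channel, which depolarizes the branch further,
# a regenerative loop producing local dendritic NMDA spikes. Remove it, and
# synaptic integration becomes close to linear (hence the much smaller DNN).
```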
6) So it turns out that most of the processing complexity of a single neuron is the result of two specific biological mechanisms: the distributed nature of the dendritic tree coupled with the NMDA ion channel. Take away either one, and the neuron turns into a simple device.
7) One additional advantage deep neural networks have over thousands of complicated differential equations is the ability to visualize their inner workings. The simplest method is to look at the first-layer weights of the neural network:
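A sketch of how such a plot can be produced (variable names are illustrative; this assumes the Keras model from the tweet-3 sketch, whose first Conv1D kernel has shape (kernel_time, n_synapses, n_units)):

```python
# Plot one first-layer unit's weights as a (synapse x time) heatmap.
import matplotlib.pyplot as plt

w = model.layers[1].get_weights()[0]        # (kernel_time, n_synapses, n_units)
unit = 0
plt.imshow(w[:, :, unit].T, aspect='auto', cmap='RdBu_r',
           extent=[-w.shape[0], 0, 0, w.shape[1]])
plt.xlabel('time before prediction (ms)')
plt.ylabel('synapse index (ordered along the dendritic tree)')
plt.colorbar(label='first-layer weight')
plt.show()
```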
8) Depicted here are the weights of one artificial unit in the first layer of the large DNN that mimics the neuron in its full complexity. One can see the spatio-temporal structure of synaptic integration: the basal and oblique trees integrate predominantly recent inputs.
9) Here is a different first-layer unit: the apical tree appears to attend to what happened on it over many more milliseconds than the basal and oblique trees did in the previous unit. (BTW, the blue traces are inhibition, the red traces are excitation.)
10) If we look at the first-layer units of the small DNN fitted to the neuron with AMPA-only synapses, it appears that the units don't really pay attention at all to what happens at the apical tree, or at any distal locations on the basal and oblique trees.
11) It is a little hard to see the details of what's going on in those weight plots because there are so many synapses, so let's focus on a single dendritic branch and zoom in on it. For a single branch with NMDA, it's possible to mimic its behavior with only 4 hidden units.
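A hedged sketch of what such a 4-hidden-unit fit could look like (the sizes below are assumptions, not the paper's exact setup): flatten the branch's excitatory input over a ~100 ms window and fit a net with one hidden layer of 4 units.

```python
# A minimal single-branch fit; sizes are placeholder assumptions.
import tensorflow as tf
from tensorflow.keras import layers

n_branch_syn, window = 20, 100     # synapses on the branch, ms of history (assumed)

branch_model = tf.keras.Sequential([
    layers.Dense(4, activation='relu',
                 input_shape=(window * n_branch_syn,)),  # the 4 hidden units
    layers.Dense(1),                                     # branch output
])
branch_model.compile(optimizer='adam', loss='mse')
# Each hidden unit's weight vector, reshaped to (window, n_branch_syn), is one of
# the four spatio-temporal filters described in the next two tweets.
```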
12) And here are its spatio-temporal patterns of integration. I'll describe these filters, from top to bottom, as questions the neuron is "asking" of the input in order to determine its output. (Note: no inhibition here, only excitation; the time window extent is 100 ms.)
13)
unit 1: was there very recent excitation proximal to the soma?
unit 2: was there very recent excitation distal to the soma?
unit 3: was there a quick distal-to-proximal pattern of excitation?
unit 4: was there a slow distal-to-proximal pattern of excitation?
14) Many more details are in the preprint on bioRxiv:
15) Huge thanks to my PhD supervisors, to all my lab mates, and to everyone else who listened to me talk on and on about this stuff in the past! :-)
David Beniaguev · Apr 1
Released all the code and data of this work! Think you can build a simpler yet no less accurate model of a single neuron? Just want to analyze the input-output dataset from a completely new perspective? I've tried to make all of that as simple as I could.
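If you want to try, an evaluation loop could be as short as the sketch below. Note that the file names, array layout, and my_model_predict are hypothetical placeholders, not the repository's actual format; see its README for the real loading code.

```python
# A sketch of evaluating your own model on the released input-output dataset.
# File names, array layout, and my_model_predict are hypothetical placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

X = np.load('input_spike_trains.npy')   # hypothetical: (trials, time, synapses)
y = np.load('output_spikes.npy')        # hypothetical: (trials, time), binary

y_hat = my_model_predict(X)             # your model, any framework
print('spike prediction AUC:', roc_auc_score(y.ravel(), y_hat.ravel()))
```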