Tim Kietzmann
@TimKietzmann
New preprint from the lab: "Individual differences among deep neural network models."
biorxiv.org/content/10.110…
Work with @KriegeskorteLab, @HannesMehrer, and Courtney Spoerer. #tweeprint below. 1/7
Tim Kietzmann
@TimKietzmann
Jan 10
Deep neural networks have seen a surge in popularity in neuroscience and psychology, where they are used as a modelling framework to understand (visual) information processing in the brain. 2/7 pic.twitter.com/NQzLwIt50p
Tim Kietzmann
@TimKietzmann
Jan 10
A computationally convenient (and therefore common) approach is to rely on a single pre-trained computer vision model (AlexNet, VGG, etc.).
But do DNNs, just like brains, exhibit individual representational differences that need to be accounted for? 3/7
Tim Kietzmann
@TimKietzmann
Jan 10
Here we test this by training multiple identical network instances while varying only the random seed during weight initialisation. We compare the learned representations using a technique from systems neuroscience: representational similarity analysis (RSA). 4/7 pic.twitter.com/mb9fCxzrQP
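For readers less familiar with RSA, here is a minimal sketch of the comparison (shapes and names are illustrative, not the paper's actual pipeline): each network's responses to a common stimulus set are summarised as a representational dissimilarity matrix (RDM), and two networks are then compared by correlating the off-diagonal entries of their RDMs.

```python
import numpy as np

def rdm(activations):
    """Representational dissimilarity matrix: 1 - Pearson r between
    the activation patterns evoked by each pair of stimuli.
    `activations` has shape (n_stimuli, n_units)."""
    return 1.0 - np.corrcoef(activations)

def rsa_compare(act_a, act_b):
    """Second-order similarity of two networks: correlate the
    upper triangles of their RDMs over the same stimuli."""
    iu = np.triu_indices(act_a.shape[0], k=1)
    return np.corrcoef(rdm(act_a)[iu], rdm(act_b)[iu])[0, 1]

# Toy data: responses of two network instances to 10 stimuli,
# 50 units each (purely illustrative numbers).
rng = np.random.default_rng(1)
net_a = rng.standard_normal((10, 50))
net_b = rng.standard_normal((10, 50))
print(rsa_compare(net_a, net_b))
```

Because the comparison operates on dissimilarity structure rather than raw activations, it is insensitive to unit permutations and rotations, which is what makes it suitable for comparing independently trained networks.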
Tim Kietzmann
@TimKietzmann
Jan 10
Simply changing the random seed leads to considerable individual differences (shared variance in distance estimates can be as low as 44% across networks). The size of the effect is comparable to training networks with completely different image sets. 5/7 pic.twitter.com/GRaCm6grir
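"Shared variance" here is the squared Pearson correlation between the pairwise distance estimates (RDM entries) of two networks. A rough sketch of that quantity, with illustrative data:

```python
import numpy as np

def shared_variance(rdm_a, rdm_b):
    """Fraction of variance in one network's pairwise distance
    estimates explained by another's: squared Pearson r over the
    RDM upper triangles."""
    iu = np.triu_indices(rdm_a.shape[0], k=1)
    r = np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]
    return r ** 2

# Two illustrative RDMs that agree only partially: shared distance
# structure `d` plus independent noise (10 stimuli -> 45 pairs).
rng = np.random.default_rng(0)
d = rng.random(45)
iu = np.triu_indices(10, k=1)
rdm_a = np.zeros((10, 10)); rdm_a[iu] = d
rdm_b = np.zeros((10, 10)); rdm_b[iu] = d + 0.5 * rng.random(45)
print(shared_variance(rdm_a, rdm_b))
```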
Tim Kietzmann
@TimKietzmann
Jan 10
Where do these differences originate? We argue that the categorization objective does not sufficiently constrain the arrangement of category clusters and exemplars. In addition, the interplay of ReLUs and properties of certain distance measures contributes to the differences. 6/7 pic.twitter.com/0OwYKT89tf
Tim Kietzmann
@TimKietzmann
Jan 10
Dropout can help, but considerable differences remain. This calls into question the practice of using single network instances to derive neuroscientific insight. Going forward, multiple DNNs may need to be analysed (similar to experimental participants). /fin pic.twitter.com/KJWbuPSGb9
Fleur Zeldenrust
@FleurZeldenrust
Jan 10
Interesting! So for me the question is: similarly, all 'real' brains are different, but what are the 'conserved' representations?
Tim Kietzmann
@TimKietzmann
Jan 10
Conserved across human individuals, or between DNNs and brains?
Nicholas Blauch
@nmblauch
Jan 10
Any thoughts on which constrains individual differences more: randomized initialization, or randomized training order?
Tim Kietzmann
@TimKietzmann
Jan 10
A good question to which I have no definite answer. We have compared differences that emerge from different random seeds (smallest intervention), differences due to different image sets (same categories), and differences due to different categories (Figure 5 in the paper).