KordingLab 👨‍💻🧠⚕🔬🌐,🏋️‍♀️⛷️🎓🎹🍺⛰️✍🦆
@KordingLab
Philadelphia, PA
@Penn Prof, #deepLearning in brains, #causality for brains and #healthcare. Physicist. Collaborator. Transdisciplinary optimist. Dad. Loves outdoors. 🦆
10,847 Tweets · 2,397 Following · 14,853 Followers

Tweets
KordingLab 👨‍💻🧠⚕🔬🌐,🏋️‍♀️⛷️🎓🎹🍺⛰️✍🦆 @KordingLab · 3h
Yes. That's ok.

KordingLab 👨‍💻🧠⚕🔬🌐,🏋️‍♀️⛷️🎓🎹🍺⛰️✍🦆 @KordingLab · 10h
But will there be Vikings?

KordingLab 👨‍💻🧠⚕🔬🌐,🏋️‍♀️⛷️🎓🎹🍺⛰️✍🦆 Retweeted
Russ Poldrack @russpoldrack · 19h
How can neuroscientists respond to the climate emergency? psyarxiv.com/dxpv4

KordingLab 👨‍💻🧠⚕🔬🌐,🏋️‍♀️⛷️🎓🎹🍺⛰️✍🦆 @KordingLab · 11h
Probably. Although having some link may be good. What is your topic?

KordingLab 👨‍💻🧠⚕🔬🌐,🏋️‍♀️⛷️🎓🎹🍺⛰️✍🦆 @KordingLab · 14h
Yes. Happy to send you details. pic.twitter.com/zHSaKXx8AA

KordingLab 👨‍💻🧠⚕🔬🌐,🏋️‍♀️⛷️🎓🎹🍺⛰️✍🦆 @KordingLab · 15h
In an online setting I will, on average, have seen half of the testing data. I need far less to deal with certain kinds of covariate drift.

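The online-evaluation point can be made concrete with a toy sketch (entirely my own; the prequential "test-then-train" setup, the drifting Gaussian inputs, and all variable names are illustrative assumptions, not anything from the thread): a model that predicts each point before seeing its label still observes the unlabeled inputs as they stream by, which is enough to track some kinds of covariate drift.

```python
# Toy prequential evaluation under covariate drift (illustrative sketch):
# the model predicts each point BEFORE seeing its label, then updates.
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])   # fixed target weights

w = np.zeros(2)                  # online linear model
lr = 0.05                        # LMS/SGD step size
errors = []

for t in range(2000):
    shift = t / 1000.0                       # slow drift of the input mean
    x = rng.normal(loc=shift, size=2)        # drifting covariates
    y = w_true @ x + rng.normal(scale=0.1)   # labels from a fixed rule
    y_hat = w @ x                            # predict before the label arrives
    errors.append((y_hat - y) ** 2)
    w += lr * (y - y_hat) * x                # then update (one SGD step)

# Squared error shrinks even though the input distribution keeps moving.
print(np.mean(errors[:100]), np.mean(errors[-100:]))
```

The design choice here is that only the input distribution drifts while the labeling rule stays fixed, which is exactly the "certain kinds of covariate drift" an online learner can handle cheaply.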
KordingLab 👨‍💻🧠⚕🔬🌐,🏋️‍♀️⛷️🎓🎹🍺⛰️✍🦆 @KordingLab · 17h
Learn modeling, improve your data science, learn to better answer real questions. Clarity. And improve your Python skills too. If you are in the broad sensorimotor field, consider attending our two-week CoSMo summer school. twitter.com/GunnarBlohm/st…

KordingLab 👨‍💻🧠⚕🔬🌐,🏋️‍♀️⛷️🎓🎹🍺⛰️✍🦆 @KordingLab · 19h
There are lots of cases where you are good as long as you don't see the labels. But given the bad things people do, I will totally say "live" is a lie.

KordingLab 👨‍💻🧠⚕🔬🌐,🏋️‍♀️⛷️🎓🎹🍺⛰️✍🦆 Retweeted
Eric Jonas @stochastician · 23h
"Judgments of effort for magical violations of intuitive physics" or "how much harder is it to conjure a frog?" This is my new favorite Tomer paper journals.plos.org/plosone/articl…

KordingLab 👨‍💻🧠⚕🔬🌐,🏋️‍♀️⛷️🎓🎹🍺⛰️✍🦆 @KordingLab · 22h
By far the best introduction to probabilistic computation I have seen so far: buff.ly/2EZB2vv I also love the juxtaposition of code and graphs. By @betanalpha, ht @stochastician

KordingLab 👨‍💻🧠⚕🔬🌐,🏋️‍♀️⛷️🎓🎹🍺⛰️✍🦆 @KordingLab · 24h
Interestingly, you can take these super overparameterized models and compress them enough that you get pretty small models. For which you can then sometimes give meaningful generalization bounds, PAC-Bayes style.

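A minimal numpy illustration of the compress-then-bound idea (my own sketch; the random-features model, the 5% magnitude-pruning threshold, and the refit step are all assumptions for illustration, not an actual PAC-Bayes bound): an overparameterized model interpolates the data, yet keeping only the largest-magnitude coefficients and refitting barely hurts the fit, and it is this much shorter description that compression-style generalization bounds operate on.

```python
# Compress an overparameterized random-features model by magnitude pruning
# (illustrative sketch; a compressed model has a far shorter description,
# which PAC-Bayes-style compression bounds can exploit).
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# Hugely overparameterized: 2000 random ReLU features for 200 data points.
W = rng.normal(size=(5, 2000))
Phi = np.maximum(X @ W, 0.0)
beta = np.linalg.lstsq(Phi, y, rcond=None)[0]   # min-norm interpolating fit
full_err = np.mean((Phi @ beta - y) ** 2)

# Keep only the 5% largest-magnitude coefficients, then refit those.
keep = np.abs(beta) >= np.quantile(np.abs(beta), 0.95)
beta_small = np.zeros_like(beta)
beta_small[keep] = np.linalg.lstsq(Phi[:, keep], y, rcond=None)[0]
pruned_err = np.mean((Phi @ beta_small - y) ** 2)

print(full_err, pruned_err, keep.sum())  # pruned model: 100 of 2000 weights
```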
KordingLab 👨‍💻🧠⚕🔬🌐,🏋️‍♀️⛷️🎓🎹🍺⛰️✍🦆 @KordingLab · Feb 4
But the experimentalists are becoming a lot more mathematically sophisticated. Which bodes well for the future of the field.

KordingLab 👨‍💻🧠⚕🔬🌐,🏋️‍♀️⛷️🎓🎹🍺⛰️✍🦆 @KordingLab · Feb 4
But also, and maybe sociologically more important, computation is essential for literally any measurement now. The people who can do that can also relate to modeling in a deeper way.

KordingLab 👨‍💻🧠⚕🔬🌐,🏋️‍♀️⛷️🎓🎹🍺⛰️✍🦆 @KordingLab · Feb 4
I think that both modeling and experimentation have in the past made bad epistemological choices and are in the process of fixing them. Which may have us finally see modeling move closer to the center of neuroscience.

KordingLab 👨‍💻🧠⚕🔬🌐,🏋️‍♀️⛷️🎓🎹🍺⛰️✍🦆 @KordingLab · Feb 4
I do not understand the original question. Forward models should contain how the world really works. This seems to be where the causal-inference (CI) problem occurs. Inverse models are then just how to make the world better, given the forward model.

KordingLab 👨‍💻🧠⚕🔬🌐,🏋️‍♀️⛷️🎓🎹🍺⛰️✍🦆 @KordingLab · Feb 3
Once you add pretty much any additional idea, things get really hard.

KordingLab 👨‍💻🧠⚕🔬🌐,🏋️‍♀️⛷️🎓🎹🍺⛰️✍🦆 @KordingLab · Feb 3
Not so sure. I still don't grok linear regression.

KordingLab 👨‍💻🧠⚕🔬🌐,🏋️‍♀️⛷️🎓🎹🍺⛰️✍🦆 @KordingLab · Feb 3
How well NTKs describe actual ANN learning is still a matter of great debate.

KordingLab 👨‍💻🧠⚕🔬🌐,🏋️‍♀️⛷️🎓🎹🍺⛰️✍🦆 @KordingLab · Feb 3
Neural Tangent Kernels (NTK) are a way of understanding neural networks in a certain domain (in the infinite-width limit N → ∞, with initialization W₀ not near 0). It is beautiful. And somewhat counterintuitive. Here are two nice write-ups aimed at explaining: rajatvd.github.io/NTK/ , offconvex.org/2019/10/03/NTK/

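The core claim in those write-ups can be condensed into a small numerical check (my own toy sketch, not from the linked posts; the one-hidden-layer ReLU net with 1/√m scaling and names like `empirical_ntk` are assumptions for illustration): the empirical tangent kernel ⟨∇θ f(x), ∇θ f(x′)⟩ fluctuates across random initializations of a narrow net but concentrates as the width m grows, which is the regime where wide-net training behaves like kernel regression.

```python
# Empirical NTK of a tiny one-hidden-layer ReLU net (illustrative sketch):
# f(x) = a . relu(W x) / sqrt(m). The kernel concentrates as width m grows.
import numpy as np

def empirical_ntk(x1, x2, W, a):
    """K(x1, x2) = <grad_theta f(x1), grad_theta f(x2)> at params (W, a)."""
    m = a.shape[0]
    h1, h2 = W @ x1, W @ x2
    r1, r2 = np.maximum(h1, 0), np.maximum(h2, 0)
    # gradient wrt a_j is relu(w_j . x) / sqrt(m)
    k_a = r1 @ r2 / m
    # gradient wrt w_j is a_j * 1[w_j . x > 0] * x / sqrt(m)
    k_w = (a**2 * (h1 > 0) * (h2 > 0)).sum() * (x1 @ x2) / m
    return k_a + k_w

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=3), rng.normal(size=3)

def ntk_at_random_init(m):
    W = rng.normal(size=(m, 3))
    a = rng.normal(size=m)
    return empirical_ntk(x1, x2, W, a)

# Gap between two independent initializations: it is random for a narrow
# net but shrinks toward zero (a deterministic kernel) for a wide one.
narrow = abs(ntk_at_random_init(10) - ntk_at_random_init(10))
wide = abs(ntk_at_random_init(100_000) - ntk_at_random_init(100_000))
print(narrow, wide)
```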
KordingLab 👨‍💻🧠⚕🔬🌐,🏋️‍♀️⛷️🎓🎹🍺⛰️✍🦆 @KordingLab · Feb 3
We should write the "NTK for Neuro folks" paper ;)
