Michael C. Frank
Developmental psychologist at Stanford studying language, thought, babies, pragmatics, numbers, cognition, teaching, and learning. Reader, climber, runner, dad.
3,326 Tweets · 970 Following · 3,383 Followers
Tweets
Michael C. Frank 2h
It's taking us 3+ hours to go through a single paragraph of results in detail. I think this is way too much to ask of reviewers!
Michael C. Frank retweeted
Russ Poldrack 23h
Michael C. Frank retweeted
Tobias Wood 23h
Methods sections are basically lossy compression. Very bad lossy compression.
Michael C. Frank Jul 16
Replying to @TradeandMoney
Of course, robustness is still a major issue for us too.
Michael C. Frank Jul 16
Replying to @TradeandMoney
Oh - so this is already huge progress! We need to take that step.
Michael C. Frank Jul 16
Replying to @TradeandMoney
This is all small-scale lab experiments.
Michael C. Frank Jul 16
Replying to @TradeandMoney
Very interesting, would love to hear more. My impression is that AERA articles typically use pre-existing data and specify regressions?
Michael C. Frank Jul 16
Replying to @gewaltig
There is a continuum of standards of course. At a minimum, code provides a better guide than verbal description for what was done.
Michael C. Frank Jul 16
Replying to @AndreaWiggins
RMarkdown renders to Word format, actually; big win in that case.
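For readers unfamiliar with the rendering step: switching an RMarkdown document's output to Word is a one-line change in the YAML header. A minimal sketch (the title and the inline statistic, including the assumed data frame `d`, are hypothetical):

```
---
title: "Results"
output: word_document
---

The mean reaction time was `r round(mean(d$rt), 2)` ms.
```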
Michael C. Frank Jul 15
A lot of us felt like: let's deal with reality in our own labs rather than responding to anonymous ethnography. was less dismissive...
Michael C. Frank Jul 15
Replying to @olsonista
Yes definitely! It helps explain what you did more explicitly.
Michael C. Frank Jul 15
And the most important things then IMO: 1. Exclusion rules 2. Exact stat test 3. Prediction/interpretation for test result patterns.
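Those three items map directly onto a few lines of analysis code, and making them explicit in shared scripts is what removes the guesswork. A sketch with assumed variable names (`d`, `rt`, `condition` are placeholders):

```r
# 1. Exclusion rule: drop trials outside a stated RT window.
d_clean <- subset(d, rt > 200 & rt < 3000)

# 2. Exact statistical test: an unpaired two-sample t-test.
res <- t.test(rt ~ condition, data = d_clean)

# 3. Prediction/interpretation: the direction of the difference
#    that would support the hypothesis is stated up front.
res
```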
Michael C. Frank retweeted
Heather Urry Jul 15
Yes, this! Loving 's papaja. Report numbers in ms *directly from R lists* for error-free, reproducible results. Friggin' magic.
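papaja's `apa_print()` is the function doing the work here: it converts a test object into APA-formatted strings that can be dropped into the manuscript inline. A rough sketch, assuming a data frame `d` with `rt` and `condition` columns:

```r
library(papaja)

# Run the test once and store the formatted output.
t_out <- apa_print(t.test(rt ~ condition, data = d))

# Then, in the .Rmd source, report it inline instead of copy/pasting:
# Reaction times differed by condition, `r t_out$full_result`.
```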
Michael C. Frank Jul 15
Replying to @psforscher
Michael C. Frank Jul 15
Replying to @mcxfrank
11/ In sum: Computational reproducibility is a MAJOR issue. Journal/funder policy must reflect critical importance of code sharing. [end]
Michael C. Frank Jul 15
Replying to @mcxfrank
10/ RENDER YOUR PAPER. Copy/paste of stats is VERY error-prone. RMarkdown a good solution. ...
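The mechanics behind "render your paper": inline R chunks pull each number straight from the analysis object, so the manuscript cannot drift out of sync with the stats. A sketch with placeholder variables (`d`, `age`, `vocab` are assumed columns):

```r
# Placeholder analysis.
cor_res <- cor.test(d$age, d$vocab)

# In the RMarkdown text, instead of pasting numbers by hand:
# "Age and vocabulary were correlated, r = `r round(cor_res$estimate, 2)`,
#  p = `r format.pval(cor_res$p.value, eps = .001)`."
```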
Michael C. Frank Jul 15
Replying to @mcxfrank
9/ SHARE CODE WITH DATA. Of course, it's better to version control/packrat/dockerize. But ANY code/syntax is WAY better than none.
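Of the options listed, packrat is the lightest-weight: it snapshots the exact package versions a project used so others can restore them. A sketch (the project path is hypothetical):

```r
install.packages("packrat")

# Record the project's package versions alongside the code.
packrat::init("~/my-paper-repo")

# Collaborators then run packrat::restore() inside the project
# to recreate the same package environment.
```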
Michael C. Frank Jul 15
Replying to @mcxfrank
8/ 4. ALL of my N=3 papers had >1 issue (typo/error). In *one paragraph*. No conclusions changed, but something def wrong in each. SO:
Michael C. Frank Jul 15
Replying to @mcxfrank
7/ 3. ANOVA is the root of all evil. Every program does repeated measures differently, and specifying ANOVA in prose is VERY imprecise.
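The ANOVA ambiguity is easiest to see in code: the prose phrase "repeated-measures ANOVA" leaves the error structure unstated, whereas the model formula pins it down. A sketch in base R (variable names assumed):

```r
# Within-subject design: subject is the repeated-measures unit.
# The Error() term makes the assumed error strata explicit -- exactly
# the detail a prose methods section typically omits.
m <- aov(rt ~ condition + Error(subject / condition), data = d)
summary(m)
```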
Michael C. Frank Jul 15
Replying to @mcxfrank
6/ 2. There's LOTS of guesswork. Translating code to prose and back to code is very lossy, and you often have to extrapolate key steps.