Payton J. Jones
Clinical Psychology PhD Student | Fellow | Etiology of emotional disorders, network analysis, stats. If violence is decreasing, why isn’t PTSD?
1,496 Tweets · 512 Following · 2,112 Followers
Tweets
Payton J. Jones 7h
Years later, it was discovered that the mechanism of action of this medication was to impair the acquisition of memory. People were tossing, turning, and waking up all night long, but they didn't remember it, so they reported feeling that they slept great!
Payton J. Jones 7h
In addition to your examples of ECT and EMDR, I would add literally all of psychiatric medicine! Knowing (or having a strong suspicion about) the mechanism *is* a big benefit though. I recall hearing a story about a medication used to help people sleep...
Payton J. Jones Oct 23
Replying to @DrAmeliaAldao
I honestly don't have any magic bullets, it's a tough problem. I would love to hear what solutions others have come up with!
Payton J. Jones Oct 23
Replying to @DrAmeliaAldao
One problem that can get in the way is when clients always come in with a "problem of the week". It can be a tough balance: truly listening to what they're dealing with while also trying to stick with a treatment plan you trust will work better than 'empathy alone'.
Payton J. Jones Oct 20
Also panic disorder! (which depending on who you ask, might qualify as a phobia ;) )
Payton J. Jones Oct 20
That's a really good point. I suppose it would really help if we could somehow prevent the proliferation of these kinds of things in the first place. Easier said than done!
Payton J. Jones Oct 20
I mostly agree with that, although I do think there is sometimes value in testing 'non-science-based' type ideas. There is often some grain of truth to folk wisdom, and when there's not, at least a precise null result can help you persuade others it doesn't work.
Payton J. Jones Oct 20
I would personally add that even if interventions are founded on good ideas rather than pseudoscience, we still shouldn't use them in a widespread fashion until we first have evidence that they are effective and safe.
Payton J. Jones Oct 19
As I stated above, I’m more than happy to continue the discussion— just DM me! I’ve been increasingly getting the sense that tweets are not a very productive format for this type of discussion.
Payton J. Jones Oct 19
This doesn't have to be in a psychometric network context only. In any network where the edges are not 100% certain, any individual edge has a decent chance of going awry, but meta-statistics about the network such as centrality will be "roughly right" most of the time.
Payton J. Jones Oct 19
Yes, but the concern is more about the signal/noise ratio rather than the variance, right? If you accept the general rule that true networks are sparse, it follows that you'll get more signal by summing (or if you prefer, taking the mean of) multiple edges.
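[A toy numeric sketch of the aggregation point in these tweets — my own illustration, not code from the thread; the network, noise level, and hub structure are all invented. With edge-level noise, a single weak edge often comes out with the wrong sign, while a strength-centrality ranking (summing a node's edges) still identifies the hub almost every time.]

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical sparse "true" network: node 0 is a clear hub,
# plus one weak edge whose size is comparable to the noise.
n = 8
true = np.zeros((n, n))
true[0, 1:] = 0.4          # hub: node 0 connects to all other nodes
true[2, 3] = 0.03          # one weak edge, easily swamped by noise
true = true + true.T

reps = 1000
weak_edge_flips = 0        # times the weak edge is estimated with the wrong sign
hub_correct = 0            # times strength centrality still picks node 0 as the hub
for _ in range(reps):
    noise = np.triu(rng.normal(0.0, 0.05, (n, n)), 1)
    est = true + noise + noise.T       # symmetric noisy edge estimates
    if est[2, 3] < 0:
        weak_edge_flips += 1
    if est.sum(axis=1).argmax() == 0:  # node strength = row sum
        hub_correct += 1

print(weak_edge_flips / reps)  # individual edge goes awry fairly often
print(hub_correct / reps)      # aggregate centrality is "roughly right" almost always
```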
Payton J. Jones Oct 19
I wrote about the issue here: And Eiko wrote about it here: Also note that replicability is a distinct issue as it pertains to hypotheses/inferences. Having a wide CI is fine for replicability *if* you interpret it correctly.
Payton J. Jones Oct 19
Hi Ashley -- the issue here is that poor precision of estimates may indicate poor reliability of a specific *effect* (data); it cannot be used to infer poor reliability of the *statistical method being used* (in this case, network analysis). For that, you need simulations.
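[A minimal sketch of the kind of simulation referred to here — my own assumptions throughout: the true network, sample size, and estimator (partial correlations via the inverse sample covariance) are made up for illustration. The logic: generate data from a *known* partial-correlation network, re-estimate the network from finite samples, and measure how well the method recovers the truth.]

```python
import numpy as np

rng = np.random.default_rng(1)

# Known ground truth: a sparse precision matrix (nonzero partials = edges).
p = 6
prec = np.eye(p)
prec[0, 1] = prec[1, 0] = -0.4
prec[2, 3] = prec[3, 2] = -0.3
prec[4, 5] = prec[5, 4] = -0.3

def partial_corrs(precision):
    """Partial correlations implied by a precision matrix."""
    d = np.sqrt(np.diag(precision))
    pc = -precision / np.outer(d, d)
    np.fill_diagonal(pc, 0.0)
    return pc

true_pc = partial_corrs(prec)
cov = np.linalg.inv(prec)

recoveries = []
for _ in range(200):
    x = rng.multivariate_normal(np.zeros(p), cov, size=500)
    est_prec = np.linalg.inv(np.cov(x, rowvar=False))
    est_pc = partial_corrs(est_prec)
    iu = np.triu_indices(p, 1)
    # How well do estimated edges track the true edges in this sample?
    recoveries.append(np.corrcoef(true_pc[iu], est_pc[iu])[0, 1])

print(np.mean(recoveries))  # method reliability: average truth-estimate correlation
```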
Payton J. Jones Oct 18
If you honestly want to have a conversation (beyond what I and others have already written on the issue), feel free to DM me! Otherwise I think I'll take a note from Denny's book
Payton J. Jones Oct 18
It's a shame that academics often end up in "heated agreements". In this case I think everyone can agree that it's inappropriate to make strong inferences if you're in an exploratory setting and you have a very wide CI.
Payton J. Jones Oct 18
I agree with the concern as stated by Aidan in his tweet, but it's important to phrase things correctly. Wide CIs and the danger of multiple comparisons in inference do not mean the same thing as saying a method has inherently low reliability or low replicability.
Payton J. Jones Oct 18
E.g. which aspects of event centrality share the strongest connections to DSM symptoms of PTSD? Which social anxiety symptom has the strongest overall connectivity to aspects of problematic alcohol use? Etc.
Payton J. Jones Oct 18
I agree to an extent. Perhaps unsurprisingly, I find myself formulating hypotheses in terms of bridge centrality more often than other types of centrality, as it often relates to specific hypotheses about pathways between clusters.
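[For readers unfamiliar with the term: one common bridge-centrality metric, bridge strength, sums a node's edge weights to nodes *outside* its own cluster. A hedged sketch of that computation — the weighted network and the two symptom clusters below are invented for illustration, not taken from the thread.]

```python
import numpy as np

# Hypothetical weighted network: two clusters (A1-A3 and B1-B3),
# connected only through the single cross-cluster edge A1-B1.
w = np.array([
    #  A1   A2   A3   B1   B2   B3
    [0.0, 0.5, 0.4, 0.3, 0.0, 0.0],  # A1
    [0.5, 0.0, 0.6, 0.0, 0.0, 0.0],  # A2
    [0.4, 0.6, 0.0, 0.0, 0.0, 0.0],  # A3
    [0.3, 0.0, 0.0, 0.0, 0.5, 0.4],  # B1
    [0.0, 0.0, 0.0, 0.5, 0.0, 0.6],  # B2
    [0.0, 0.0, 0.0, 0.4, 0.6, 0.0],  # B3
])
cluster = np.array([0, 0, 0, 1, 1, 1])  # cluster membership of each node

# Mask of edges that cross cluster boundaries.
crosses = cluster[:, None] != cluster[None, :]

# Bridge strength: sum of |edge weights| to nodes in OTHER clusters.
bridge_strength = (np.abs(w) * crosses).sum(axis=1)
print(bridge_strength)  # only A1 and B1, the cross-cluster pair, are nonzero
```

Hypotheses about pathways between clusters (e.g., which symptom links two disorders) map naturally onto this metric: the bridging nodes stand out even when they are not the most connected nodes overall.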
Payton J. Jones Oct 18
The best case scenario is to pre-specify specific hypotheses about the network, but in an exploratory context I generally prefer centrality metrics. And in a confirmatory world I still personally feel like centrality metrics are more likely to map onto hypotheses I'm interested in.
Payton J. Jones Oct 18
Hmm, my intuition is the opposite. Individual edges are often less stable (really huge CIs) and more vulnerable to topological overlap problems. Plus there are so darn many edges that I worry about multiple comparisons and cherry-picking.