John Anzalone
I appreciate this statement acknowledging it was an outlier. But I disagree with "In the end we must put out the numbers we have." When our firm comes out of the field and we believe we have an outlier, we shit can the numbers and redo the poll at our expense. Period
Steve Zorowitz Aug 28
Replying to @JohnAnzo @MonmouthPoll
You should release both.
John Anzalone Aug 28
or combine the two data sets which I think is preferable
Meddler (zombie) Aug 28
This is a great way to have your entire body of work discredited as an academic social scientist!
Meddler (zombie) Aug 28
If you only continue sampling when you get unexpected preliminary results, you are intentionally biasing the final results. You can redo the poll but unless you would have kept sampling regardless, you should absolutely not combine the datasets
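A small simulation makes this point concrete (all numbers below are invented for illustration): an estimate that adds a second wave only when the first wave lands far from the pollster's prior, and then pools the waves, ends up biased toward that prior, while a single wave, or two waves pooled unconditionally, does not.

```python
import random
from statistics import mean

# Toy simulation of the point above. All numbers are illustrative assumptions.
TRUE_P = 0.50     # true support (unknown to the pollster)
PRIOR = 0.53      # what the pollster expects going in
N = 400           # respondents per wave
SIMS = 10_000     # simulated polling cycles

def wave():
    """One wave of N respondents; returns the observed support share."""
    return sum(random.random() < TRUE_P for _ in range(N)) / N

first_only, always_pool, pool_if_surprised = [], [], []
for _ in range(SIMS):
    a, b = wave(), wave()
    first_only.append(a)                 # one wave, reported as-is
    always_pool.append((a + b) / 2)      # second wave planned in advance
    # Second wave run *only* when the first looks like an outlier relative
    # to the prior, then the two waves are combined.
    pool_if_surprised.append((a + b) / 2 if abs(a - PRIOR) > 0.03 else a)

print(f"true value:             {TRUE_P:.3f}")
print(f"single wave:            {mean(first_only):.3f}")
print(f"always pool two waves:  {mean(always_pool):.3f}")
print(f"pool only if surprised: {mean(pool_if_surprised):.3f}  <- pulled toward the prior")
```

With these particular numbers the conditional-pooling estimate comes out roughly half a point high; how big the bias is depends on how far the prior sits from the truth and on the "surprise" threshold.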
Jurgan Aug 28
Good point. At first I thought “bigger sample size can’t hurt,” but you’re right, only if you were planning to do that before you began the experiment.
Meddler (zombie) Aug 28
Exactly. This kind of thing is exactly why psychology (and really all science, social or otherwise, involving noisy data) found so many problems with replicability. We now are expected to pre-register all this stuff before data collection to ensure we aren't hacking our results
🎃👻 Eoin Hauntins 👻🎃 Aug 28
Replying to @JohnAnzo @MonmouthPoll
One might interpret this as you saying that when polls don't give you the desired result, you redo them until they do
🎃👻 Eoin Hauntins 👻🎃 Aug 28
Replying to @JohnAnzo @MonmouthPoll
Or, to put it another way:
Dan Rosenheck Aug 28
This is the most public admission of pollster herding I've ever seen...
AlwaysBePrimarying🌹 Aug 28
He's joking right? Otherwise this is just clear professional malpractice.
Gtmcauliffe Aug 28
It depends heavily on what he means by outlier. If it's just a movement they weren't expecting, yeah, malpractice. If it's something that is flat-out implausible, that's worth redoing the numbers on. This specific example seems to be the former, though.
AlwaysBePrimarying🌹 Aug 28
Exactly, seems like the former to me.
Jon, but spooooooky Aug 28
Replying to @JohnAnzo @MonmouthPoll
What's the point of polling if you only release it when it says what you want it to say?
Ioana F Aug 28
that philosophy seems like an excellent means to confirmation bias your way into thinking there's no way sayyy a certain Donald Trump could ever win an election
Jon, but spooooooky Aug 28
Yeah wow their final poll was actually an outlier
Jon, but spooooooky Aug 28
Wait I'm dumb. This is Monmouth, who put out a good post about the desire to post their outliers to avoid skewing, not the Dem guy I'm yelling at.
LadyReverb #Women4Bernie 🌹🔥🏴 ☭ Aug 28
Replying to @JohnAnzo @MonmouthPoll
Now we know why Biden is still polling as high as he does.
Ken Masters Aug 28
Replying to @JohnAnzo @MonmouthPoll
It's remarkable... I don't think he realizes what he just admitted to doing?
David Rothschild Aug 28
Replying to @JohnAnzo @davidshor
Honest Question: Is tossing outlier polls, and redoing them, common practice among Democratic pollsters? cc/
(((David Shor))) Aug 28
Replying to @DavMicRot @JohnAnzo
The people jumping on Anzalone here don’t really appreciate the extent of non-sampling error (call houses faking data, transient spikes in differential non-response, etc). I agree with Anzalone that we have a responsibility to our clients that goes beyond blindly publishing.
David Rothschild Aug 28
Replying to @davidshor @JohnAnzo
Sure, I have a paper on MOE being 2x sampling error on published polls (). And as an internal pollster, he is only reporting to the client. But the responsible thing to do would be to report the result to the client, then repeat. Is that what "shit can and redo" means?
(((David Shor))) Aug 28
Replying to @DavMicRot @JohnAnzo
The basic idea here is that if you get a poll that’s moderately above your priors (i.e., Trump at 45% approval), you update; if it’s way above your priors (i.e., Trump at 80% approval), you decide there’s something wrong with the poll and don’t update
Neil Malhotra Aug 28
One could write down a model of polling/herding where this unravels very quickly.
(((David Shor))) Aug 28
“Delete five-sigma polls, don’t delete two-sigma polls” in practice works pretty well here.
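A rough sketch of what a rule like that might look like in code (the prior, the poll size, and the five-sigma cutoff below are illustrative assumptions, not anything Shor has specified): fold the poll into the prior unless it sits implausibly many standard errors away, in which case leave the prior alone.

```python
import math

def update_prior(prior_mean, prior_sd, poll_share, poll_n, sigma_cutoff=5.0):
    """Update a normal prior on a candidate's support with one poll, unless
    the poll is more than `sigma_cutoff` standard errors away, in which case
    treat it as broken and leave the prior untouched. Illustrative sketch:
    the cutoff and the normal approximation are assumptions."""
    poll_se = math.sqrt(poll_share * (1 - poll_share) / poll_n)
    # How surprising is this poll, given prior uncertainty plus sampling noise?
    z = (poll_share - prior_mean) / math.sqrt(prior_sd**2 + poll_se**2)
    if abs(z) > sigma_cutoff:
        return prior_mean, prior_sd, f"discarded ({z:+.1f} sigma)"
    # Standard conjugate normal update (precision-weighted average).
    w_prior, w_poll = 1 / prior_sd**2, 1 / poll_se**2
    post_mean = (w_prior * prior_mean + w_poll * poll_share) / (w_prior + w_poll)
    post_sd = math.sqrt(1 / (w_prior + w_poll))
    return post_mean, post_sd, f"updated ({z:+.1f} sigma)"

# Prior: approval believed to be ~42% +/- 2 points (invented numbers).
print(update_prior(0.42, 0.02, 0.45, 1000))  # a point or two high -> update
print(update_prior(0.42, 0.02, 0.80, 1000))  # wildly high -> discard
```

Malhotra's question below is about exactly the parameter this sketch takes for granted: where the cutoff goes, and whether pollsters' incentives push it lower than it should be.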
Neil Malhotra Aug 28
Sure, but I would not call that Biden/Warren/Sanders poll a "five sigma" poll. How do you define the sigma cutoff? Are the professional incentives of pollsters such that the cutoff should be smaller?
tobias konitzer Aug 28
One cannot be blind to the fact that in practice, this induces herding, and is a huge problem especially for polling aggregators as errors in individual top-lines become massively correlated
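A toy simulation of that concern (pollster count, sample size, seed consensus, and tolerance are all invented): if each pollster re-fields any result that lands far from the running consensus, the individual errors become correlated, and averaging twenty such polls buys far less accuracy than averaging twenty independent ones.

```python
import random, statistics

# Toy herding simulation. All parameters are illustrative assumptions.
TRUE_P = 0.50            # true support
SEED_CONSENSUS = 0.53    # early "conventional wisdom", assumed to be wrong
SE = (TRUE_P * (1 - TRUE_P) / 600) ** 0.5   # sampling error of an n=600 poll
N_POLLS, SIMS, TOL = 20, 2000, 0.02

def raw_poll():
    # Normal approximation to an n=600 poll, for speed.
    return random.gauss(TRUE_P, SE)

def aggregate(herd):
    """Run 20 polls in sequence; return the aggregator's simple average."""
    published = [SEED_CONSENSUS]
    for _ in range(N_POLLS):
        result = raw_poll()
        if herd:
            # Re-field (up to 5 times) while the top-line is far from consensus.
            for _ in range(5):
                if abs(result - statistics.mean(published)) <= TOL:
                    break
                result = raw_poll()
        published.append(result)
    return statistics.mean(published[1:])   # the seed is not a poll; drop it

indep = [aggregate(herd=False) for _ in range(SIMS)]
herded = [aggregate(herd=True) for _ in range(SIMS)]

def mae(xs):
    return statistics.mean(abs(x - TRUE_P) for x in xs)

print(f"mean absolute error, independent polls: {mae(indep):.4f}")
print(f"mean absolute error, herded polls:      {mae(herded):.4f}")
```

In this toy setup the independent average keeps improving as polls are added, while the herded average stays stuck near the error of the early consensus it is chasing, because every top-line inherits that error.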