2016 Postmortem
In reply to the discussion: Chances of Wisconsin Presidential Vote Results shown to be 1 in 850, and Worse for Other States
Igel (36,876 posts)
The exit polls that are used to expose fraud have a very large percentage of the voters surveyed.
In the US, exit pollsters pick representative polling places and have a model of who's voting. You take the numbers you get, plug them into the model, and out pops the prediction.
You poll one precinct, let's say. Your "model" electorate was 12% black (R-D split 10-90), 5% Latino (20-80), 1% Asian (1-99), and the rest white (55-45). Your sample is 5% black (R-D split 1-99), 3% Latino (5-95), 5% Asian (10-90), and the rest white (50-50).
Ooh, that's bad: your model is wrong.
At first you take the results assuming your model is correct, because the model isn't for "all those voting by noon" but "all those voting by the end of the day." But then you find that, per 100 votes, you assumed 10 or 11 black (D) votes and only 1 (R), when blacks actually accounted for only about 5 (D) votes. You thought 4 (D) Latino votes and 1 (R), but instead get about 3 (D) and essentially no (R). Instead of one (D) vote from the Asians you get four or five, and the whites, more numerous than you assumed and splitting 50-50 instead of 55-45, net you more (D) votes and fewer (R) votes than you thought.
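If it helps to see that arithmetic laid out, here's a rough sketch in Python that runs the made-up shares and splits from the example above through the same per-100-voters calculation, once for the model electorate and once for the sampled one. Every number in it is invented for illustration, not real exit-poll data.

# Rough sketch of the arithmetic above, using the made-up example numbers.

def votes_per_100(electorate):
    # electorate maps group -> (share of voters, fraction voting D)
    counts = {}
    for group, (share, dem_frac) in electorate.items():
        n = 100 * share
        counts[group] = (round(n * (1 - dem_frac), 1), round(n * dem_frac, 1))  # (R, D)
    return counts

# "Model" electorate: 12% black (10-90 R-D), 5% Latino (20-80),
# 1% Asian (1-99), 82% white (55-45).
model = {"black": (0.12, 0.90), "latino": (0.05, 0.80),
         "asian": (0.01, 0.99), "white": (0.82, 0.45)}

# What the sampled precinct actually looked like: 5% black (1-99),
# 3% Latino (5-95), 5% Asian (10-90), 87% white (50-50).
sample = {"black": (0.05, 0.99), "latino": (0.03, 0.95),
          "asian": (0.05, 0.90), "white": (0.87, 0.50)}

for name, electorate in (("model", model), ("sample", sample)):
    counts = votes_per_100(electorate)
    total_d = sum(d for _, d in counts.values())
    print(name, counts, "-> D votes per 100: %.1f" % total_d)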
When you finally know what the electorate is like, you can either insist that reality is wrong and your fantasy model is right, or you can adjust your model for (a) turnout, (b) skew in the demographics, and (c) wrong assumptions about how each demographic would vote. So, for example, in Texas the surprise was how many Latinos voted for Trump. He didn't get the percentage Romney got, but given the rhetoric, the assumption was that he'd be in the single digits for Latino support. He got something like 20% of the Latino vote.
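A toy version of that adjustment, reusing the made-up numbers above: reweight to the demographic mix you now believe actually turned out, and use the vote splits you actually observed within each group. Real exit-poll weighting is far more elaborate than a single weighted sum, so treat this only as a sketch of the idea.

# Toy version of the adjustment: reweight the interviews to the electorate
# you now believe turned out, using the splits observed in each group.
# All figures are the invented numbers from the example above.

def reweighted_dem_share(believed_mix, observed_dem_frac):
    # believed_mix: group -> share of the real electorate (corrected model)
    # observed_dem_frac: group -> fraction voting D, as actually interviewed
    return sum(believed_mix[g] * observed_dem_frac[g] for g in believed_mix)

observed_dem_frac = {"black": 0.99, "latino": 0.95, "asian": 0.90, "white": 0.50}

old_mix = {"black": 0.12, "latino": 0.05, "asian": 0.01, "white": 0.82}  # pre-election model
new_mix = {"black": 0.05, "latino": 0.03, "asian": 0.05, "white": 0.87}  # what turnout showed

print("D share, old demographic mix:", reweighted_dem_share(old_mix, observed_dem_frac))
print("D share, corrected mix:      ", reweighted_dem_share(new_mix, observed_dem_frac))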
This is a work in progress on election day, and the big question after recalibrating the model to match reality is always, "So, how can we improve our model next time?"
Note that the exit polls used to catch election fraud tend to get over 90% of the voters polled, and they don't make predictions based on models. They actually count the votes and fill in the areas missed *after* the fact, based on reality.
You can run lots of valid tests on those statistics. But when you run tests on stats that come from a small sample filtered through an assumed model, your tests say much more about the model than about the actual data. In other words, there's a very, very small likelihood that the model the exit pollsters used was correct.
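A minimal sketch of why that's so, with invented figures: the same hypothetical precinct tally looks like a wild anomaly when judged against one assumed (D) share and perfectly ordinary when judged against another, so a "one in 850" style number is as much a statement about the assumed model as about the votes.

# Toy illustration: the "how unlikely is this?" number depends entirely
# on the model you assume. All figures here are invented.
import math

def z_score(observed_d, n, assumed_d_share):
    # Normal-approximation z-score of the observed D count against an
    # assumed D share for the precinct.
    expected = n * assumed_d_share
    sd = math.sqrt(n * assumed_d_share * (1 - assumed_d_share))
    return (observed_d - expected) / sd

n, observed_d = 1000, 520  # hypothetical precinct: 520 D votes out of 1000

print(z_score(observed_d, n, 0.58))  # against a stale model: about -3.8, looks like a huge anomaly
print(z_score(observed_d, n, 0.53))  # against a corrected model: about -0.6, unremarkable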