Five quick analytical takeaways from the midterms
What to make of what happened (for the world of forecasting)
The midterm elections upended forecasts, even those made after returns had already started coming in much better for Democrats than expected.
What should we, as political analysts, make of what happened on Tuesday night?
1. Don’t count polls out yet
For all the worry about declining response rates and partisan machinations to game the averages, polls were still a reliable indicator of what was going to happen.
This doesn’t mean that polling is fixed and will be accurate in 2024. It may be that polls’ misses in 2016 and 2020 show that non-response is higher among voters who turn out for presidential elections but not midterms, a theory that would also explain 2018’s relative accuracy. We still need more evidence, but it’s clear that polls are not finished yet.
2. It’s okay to dismiss some polls
There was concern that Republican-aligned groups were commissioning pro-GOP polls in the final weeks to try to skew the averages. Here is RealClearPolitics’ generic Congressional average. I don’t know whether there was a deliberate effort to make it look like Republicans were going to do better, but it certainly looks that way: a big jump for Republicans in October that didn’t translate into actual votes.
For 2024, we’ll need to better distinguish between the outlets that were able to account for these late polls and those that weren’t.
3. Journalists move faster than voters
This election saw the rise of “vibes” as a catch-all for a sense of the race decoupled from any evidence. Here’s one example from the New York Times the day before votes were cast. Rather than simply recapping the candidates’ final stump speeches, it subtly inserts a forecast.
It’s important to remember that news cycles move faster than electoral cycles. While much of the final weeks of the election was dominated by talk about crime being a driving issue, the voters considered the Dobbs decision more important, even if it was being treated as old news.
Not every journalist does this, of course. There are some great ones working on reporting what the animating issues are. But the media ecosystem as a whole is biased to focus on what’s new and interesting and lose focus on old stories.
4. Probabilities exist for a reason
Even if Rep. Boebert wins re-election, it’ll be at a closer margin than nearly any of the simulations in the FiveThirtyEight model.
But that doesn’t tell us the model is wrong! It tells us why we have a model and run simulations. It’s true that Frisch only had a 3% chance of winning. But if there are 100 races at the same spread as the Colorado 3rd, then there should be three upsets like this.
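A quick simulation makes the point concrete. The 100-race setup is hypothetical; only the 3% win probability comes from the FiveThirtyEight model:

```python
import random

random.seed(0)

N_RACES = 100   # hypothetical: 100 races with the same odds as the Colorado 3rd
P_UPSET = 0.03  # the underdog's 3% win probability from the model
N_SIMS = 10_000

# Simulate many "election nights" and count upsets on each one.
upsets = [sum(random.random() < P_UPSET for _ in range(N_RACES))
          for _ in range(N_SIMS)]

# The expectation is 100 * 0.03 = 3 upsets per night.
avg = sum(upsets) / N_SIMS
print(f"average upsets per night: {avg:.2f}")
print(f"share of nights with zero upsets: {sum(u == 0 for u in upsets) / N_SIMS:.1%}")
```

Seeing roughly three 3%-probability upsets across a hundred such races isn’t evidence the model failed; it’s exactly what the model predicts. A night with zero upsets would actually be the rarer outcome.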
Binary will-or-won’t forecasts throw us off and add fragility to our predictions.
5. Accountability is important
One of the benefits of forecasting tournaments or prediction markets is that we can see in real time when people realize they (or the average of all other participants) made a mistake and want to change their minds. The below shows that from 9pm to 11pm Eastern on election night the PredictIt crowd shifted rapidly to the prospects of a Democratic House.
For everyone else who makes a prediction, it can be easy to ignore this step. We move from “It’s obvious there will be a red wave” to “It was obvious there was never going to be a red wave.”
Even if we don’t share it with others, it is important that we hold ourselves accountable to past predictions so that we can look into why we were wrong or right. That leads us to identify factors that we should have up- or down-weighted and find evidence we missed. Every prediction is, in retrospect, a teachable moment. But only if we take the time to make it so.
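One standard way to hold yourself accountable, not named above but common in the forecasting tournaments it mentions, is the Brier score. A minimal sketch with made-up numbers:

```python
# Hypothetical pre-election probabilities and what actually happened
# (1 = the predicted event occurred, 0 = it didn't). Purely illustrative.
predictions = [0.97, 0.60, 0.25]
outcomes = [0, 1, 0]

# Brier score: mean squared error between probability and outcome.
# 0 is a perfect score; always guessing 50% scores 0.25.
brier = sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(outcomes)
print(f"Brier score: {brier:.3f}")
```

The confident miss (0.97 on something that didn’t happen) dominates the score, which is the point: writing probabilities down and scoring them later surfaces exactly the predictions worth re-examining.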