• Sphere [he/him, they/them]@hexbear.net

    What to Make of This

    Of course, maybe this all just works out. Maybe the increase in Republican response rates is a fluke specific to YouGov’s surveys, Pew’s survey from before Kamala’s entry still accurately reflects partisan leanings, and all of these adjustments really are correcting for nonresponse bias. Maybe there are further, as-yet-unidentified sources of pro-Democratic survey error lurking out there, and the extra tools pollsters are now employing are catching those too. But we shouldn’t take this for granted, because there’s a real history of pollsters becoming too obsessed with their industry’s past blind spots, overcorrecting, and missing badly in the opposite direction. As Nate Silver noted in a recent article, this happened quite visibly in the 2017 U.K. election, when a country whose political culture had long been dominated by the idea of a “shy Tory vote” was still reeling from a 2015 election that saw the Conservatives underestimated in the polls. According to Silver, many pollsters in 2017 put their fingers on the scales to benefit the Tories, often using ad hoc methods to do so. It didn’t work: it only caused them to miss very real Labour strength, and as a result all major forecasters but one (the sole exception being YouGov, funnily enough) incorrectly projected a Conservative majority. Pollsters in the U.S. now face similar circumstances, harbor very similar self-doubts, and are employing equally dubious methods to move their polls in the same direction. They could well end up repeating the same mistake.
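    To make the mechanics concrete, here is a minimal sketch (in Python, with entirely hypothetical numbers) of the kind of ad hoc adjustment described above: reweighting a sample toward a party-ID mix the pollster assumes is correct. The sample shares, target shares, and support rates below are illustrative assumptions, not anyone’s real data.

    ```python
    # Minimal sketch of partisan weighting. Every number here is hypothetical.
    # Suppose the raw sample is 36% D / 30% R / 34% I, but the pollster
    # decides the "true" electorate is 33% D / 33% R / 34% I. Each group is
    # then weighted by target_share / sample_share, shifting the topline.

    sample_share = {"D": 0.36, "R": 0.30, "I": 0.34}  # raw party-ID mix
    target_share = {"D": 0.33, "R": 0.33, "I": 0.34}  # pollster's assumption

    # Assumed Democratic support within each party-ID group.
    dem_support = {"D": 0.95, "R": 0.05, "I": 0.50}

    def topline(shares):
        """Democratic vote share implied by a given party-ID mix."""
        return sum(shares[g] * dem_support[g] for g in shares)

    raw = topline(sample_share)
    weighted = topline(target_share)
    print(f"raw topline:      D {raw:.1%}")
    print(f"weighted topline: D {weighted:.1%}")
    print(f"shift from weighting alone: {(weighted - raw) * 100:+.1f} points")
    ```

    Under these made-up inputs, a three-point change in the assumed party-ID mix moves the topline by nearly three points, which is why the choice of target, rather than the raw responses, ends up doing much of the work.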

    This leads me to my final question: what kind of raw data would pollsters need to start seeing in order to produce polls with a meaningful Kamala lead? They’re clearly comfortable producing polls that show her up narrowly, with Trump within the margin of error, but what kind of responses would it take for them to show her up by more than that? They’re clearly capable of imagining a supposedly endemic nonresponse bias that could inaccurately boost her by any amount. Because of this, it’s entirely possible that they are converting any dataset they’re presented with into a result within the “safe” band between R+3 and D+3. If Kamala ends up outperforming her polls, we may well look back on her lack of a surge after events that have historically corresponded with gains, like the DNC and her successful debate, as a sign that pollsters were erring on the side of cowardice. This election’s near-total lack of polling variance, in stark contrast to Trump’s prior elections (including his re-election campaign, when Americans had supposedly already made up their minds about him), will also stick out like a sore thumb, especially given that one of the major-party candidates entered the race at a historically late date without much of a profile among voters. One would expect such a race to show a lot of movement, but we’ve hardly seen any since August, around when Kamala started putting together leads close to what pollsters might consider safe no matter the result.
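    To illustrate why that lack of variance is a statistical tell, here is a rough simulation (again in Python, under assumed conditions: a “true” margin of D+1, samples of 1,000, and a hypothetical ±3 safe band) of what herding does to the spread of published margins:

    ```python
    # Rough sketch of "herding" under assumed conditions. If pollsters nudge
    # any result outside a safe band back inside it, published polls end up
    # with far less spread than sampling error alone would produce.
    import random
    import statistics

    random.seed(0)
    TRUE_MARGIN = 1.0  # assume the race is "really" D+1
    N = 1000           # respondents per poll
    # The standard error of a margin (p_d - p_r) is roughly
    # 2 * sqrt(0.25 / N) * 100, about 3.2 points at n = 1000.
    SE = 2 * (0.25 / N) ** 0.5 * 100

    def honest_poll():
        return random.gauss(TRUE_MARGIN, SE)

    def herded_poll(band=3.0):
        # Publish as-is if inside the band; otherwise pull it just inside.
        return max(-band, min(band, honest_poll()))

    honest = [honest_poll() for _ in range(500)]
    herded = [herded_poll() for _ in range(500)]
    print(f"honest: mean {statistics.mean(honest):+.1f}, "
          f"sd {statistics.stdev(honest):.1f}")
    print(f"herded: mean {statistics.mean(herded):+.1f}, "
          f"sd {statistics.stdev(herded):.1f}")
    # A spread well below ~3.2 points across many polls of this size is
    # exactly the low-variance pattern described above.
    ```

    Clamping is a crude stand-in for whatever mix of weighting choices actually produces the effect, but the signature is the same: the published spread comes in well under what pure sampling error at these sample sizes implies.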

    In this context, and in light of how many nonpartisan pollsters played me-too with GOP narratives at the close of the 2022 elections, who can we trust to be brave? There are the New York Times and Marist, but they are hardly infallible. While the Times’ eerily accurate closing Senate and House polls massively boosted their reputation after 2022, it’s often forgotten that their final generic-ballot poll overestimated Republicans by a few points, or that they editorialized against their own polls that went furthest against the grain. Sticking by such results when they concern congressional district elections in Kansas is one thing; doing so for the final poll of a presidential election with Trump on the ballot is another thing entirely. Similarly, Marist was hardly free from error in 2022: they underestimated Colorado Senator Michael Bennet’s winning margin by eight points, for instance. Both certainly have a degree of credibility that the Emersons of the world lack, but they’re not gods. Even if they were, it’s simply never going to be possible to model an entire election on two pollsters, both of whom are subject to the same incentive structures as all the others. Pay more attention to them if you please, but don’t expect them to give you a window into the “real” world that other nonpartisan pollsters aren’t showing you.

    In any case, we’re well past the point where these decisions can be consequence-free. If they work out and the polling proves right, the industry will be changed forever, for better or for worse. But if they don’t, expect real-world consequences. We know with certainty that Trump will declare himself the winner no matter how the results go, and that he and his followers will seize on anything that looks like proof that the election was stolen. In 2020, they were fully willing to use trivia about bellwether counties and the predictive power of Ohio to back up their claim that Trump won. This time, they are guaranteed to have an extensive list of pollsters showing Trump winning, quite possibly for unjustifiably cowardly reasons. In trying to cover their own asses, these pollsters may end up handing ammunition to an even more dangerous and better-prepared election denial movement.

    We don’t know what the consequences of this may be, but we do know one thing: if Kamala wins, Democrats will be too relieved to make fun of the pollsters who messed up. For some surveyors out there, that seems to be all that matters.