False Assumption Registry

Clinton Has 85 Percent Chance of Winning


False Assumption: Statistical models from social science accurately predict election outcomes with high probability.

Summaries Written by FARAgent (AI) on February 16, 2026 · Pending Verification

By 2016, it was perfectly reasonable to think modern election models could do more than offer hunches. Polling had a decent record in recent presidential races, forecasters had years of state data, and the language of probability sounded modest rather than grandiose: Hillary Clinton had an 85 percent chance, or a 70 percent chance, not a guarantee. Nate Silver, Nate Cohn, Samuel Wang, and campaign analysts were not reading tea leaves; they were aggregating polls, weighting survey quality, and using demographic and historical patterns that had often improved on pundit chatter. A reasonable person could look at Clinton’s steady national lead, her edge in key state polls late in the race, and the poor track record of dramatic upset scenarios in recent presidential elections, and conclude that statistical social science had made election prediction more disciplined and more accurate.

Then came November 8. Clinton won the popular vote, which let defenders say the basic polling picture was not fantasy, but Trump carried the states that mattered, and forecasts sold to the public as highly confident suddenly looked less scientific than advertised. Evidence challenging the assumption piled up: state polls in the Upper Midwest missed in the same direction, low response rates raised doubts about who pollsters were actually reaching, and later work pointed to weighting problems, education gaps, and possible social desirability effects. Critics also argued that social science models often smuggle in fragile assumptions, ideological blind spots, and a false air of precision, especially when media outlets translate uncertainty into a single bold percentage.

The current debate is not settled. A substantial body of experts now rejects the stronger claim that these models can accurately predict election outcomes with high probability in the way the public was led to understand in 2016. But many forecasters answer that the real failure was not probability itself, but how people read it: 15 percent is not zero, and unlikely events happen all the time. Since 2016, modelers have revised methods, widened uncertainty bands, and stressed that forecasts are conditional estimates, not prophecies. The assumption survives in a narrower form, increasingly questioned but not abandoned.

Status: A significant portion of experts think this assumption was false
  • Nate Cohn served as the lead election forecaster for the New York Times Upshot team in 2016 and on November 8 he published an analysis giving Hillary Clinton an 85 percent chance of winning the presidency. His model drew on the latest state and national polls and reflected the conventional view among data journalists that the race had settled into a narrow but stable Clinton lead. The forecast carried weight because the Times occupied a central place in American political coverage and millions of readers treated its numbers as authoritative. Cohn presented the probability in good faith as the output of a transparent statistical process rather than a partisan prediction. [1]
  • Nate Silver founded FiveThirtyEight and built a national reputation on the idea that statistical models could cut through media spin to reveal true election probabilities. In the final weeks of the 2016 campaign he maintained that Clinton remained the clear favorite while repeatedly warning that her lead was smaller than conventional wisdom suggested. His 29 percent chance for Trump on the eve of the election stood out as higher than most other forecasters yet still framed Trump as the underdog. Silver later argued that groupthink and misreading of uncertainty rather than faulty data explained the widespread surprise at the result. [1][5]
  • Samuel Wang directed the Princeton Election Consortium and held a faculty position as a neuroscientist at Princeton. In late October 2016 he placed Clinton’s odds of victory between 98 and 99 percent and declared publicly that the race was “totally over.” His model aggregated state polls with a simple Bayesian approach that had performed well in prior cycles. The stark probability became a talking point among Democrats who cited it as scientific confirmation that Trump could not win. [1]
  • On October 17, 2016, FiveThirtyEight published “Election Update: There Are 4 Ways This Election Can End — And 3 Involve Clinton Winning,” an installment in Nate Silver’s election update series. It reported that the site’s polls-only model gave Clinton an 87 percent chance and the polls-plus version 84 percent, and it used 20,000 simulated election outcomes to illustrate the possible paths for a large audience of subscribers and political analysts. The numbers were presented as the straightforward output of the model without any claim of certainty; the sketch after this list illustrates how a lead of a few points becomes a headline probability in this range. [4]
Supporting Quotes (9)
“On the morning of November 8, Cohn, one of the country’s most esteemed forecasters, announced an 85 percent chance that Clinton would win (her chances of losing were “about the same probability that an N.F.L. kicker misses a 37-yard field goal,” his website helpfully explained).”— We Built Reality: How Social Science Infiltrated Culture, Politics, and Power
“Nate Cohn of the New York Times; Nate Silver of FiveThirtyEight.com; and scientific forecasters from the Huffington Post, PredictWise, and other authoritative outlets had all reached the same conclusion.”— We Built Reality: How Social Science Infiltrated Culture, Politics, and Power
“The Princeton Election Consortium (PEC) went even further, placing Clinton’s probability of winning between 98 and 99 percent. In other words, Donald Trump’s odds of becoming the next president of the United States were literally approaching one in a hundred. Such calculations led Samuel Wang—the director of the PEC and a Stanford University–trained neuroscientist—to publicly announce in October that the presidential contest was already “totally over.” “If Trump wins more than 240 electoral votes,” he wrote, “I will eat a bug.””— We Built Reality: How Social Science Infiltrated Culture, Politics, and Power
“Her campaign, with its own polling, concluded the same thing. She barely even campaigned in Maine, Wisconsin or Minnesota, and visited Michigan only in the last week of the race.”— Putting the Polling Miss of the 2016 Election in Perspective
“After the election, for instance, The New York Times’ media columnist bashed the newspaper’s Upshot model (which had estimated Clinton’s chances at 85 percent) and others like it for projecting “a relatively easy victory for Hillary Clinton with all the certainty of a calculus solution.””— The Media Has A Probability Problem
“Even President Trump chimed in to say the storm hadn’t been predicted well, tweeting that the devastation from Irma had been “far greater, at least in certain locations, than anyone thought.””— The Media Has A Probability Problem
“We had him with a 30 percent chance, and that’s a pretty likely occurrence.”— Nate Silver says conventional wisdom, not data, killed 2016 election forecasts
“Gross and Simmons used a randomized experiment to argue for the absence of ideological bias in graduate school admissions... 71 percent of scholars believe that personal views should guide selection of research topics.”— The ideological orientation of academic social science research 1960–2024
“James Manzi is a doctoral student in Sociology at the University of Oxford. This study analyzes approximately 600,000 English-language social science abstracts published between 1960 and 2024”— The Ideological Trajectory of the Social Sciences

The New York Times published Nate Cohn’s forecast giving Clinton an 85 percent chance of winning and its Upshot section repeatedly framed her polling lead as nearly insurmountable. Editors and columnists described the electoral math as forbidding for Trump and treated the race as a question of margin rather than outcome. The paper’s prestige lent institutional weight to the probabilistic language and millions of readers absorbed the forecast as expert consensus. After the election the Times joined other outlets in attributing the miss to polling error rather than its own interpretation of the data. [1][2][5]

FiveThirtyEight operated as a prominent polling aggregation site under Nate Silver’s direction and published daily updates showing Clinton’s win probability between 71 and 87 percent in the final month. The site sold subscriptions and books that rested on the premise that its statistical models offered reliable foresight. Its forecasts became a brand that shaped how many journalists and voters discussed the race. Silver himself noted afterward that conventional wisdom inside newsrooms had narrowed the range of acceptable probabilities. [1][2][4]

The Princeton Election Consortium functioned as an academic polling aggregator directed by Samuel Wang and issued forecasts that placed Clinton’s odds above 99 percent in the closing days. The Consortium’s simple methodology and university affiliation gave its numbers an aura of scientific detachment. Its extreme probability reinforced the broader media narrative that Trump’s path had vanished. The Consortium did not revise its model after the election to account for the polling miss. [1][2]

Supporting Quotes (11)
“Nate Cohn of the New York Times; Nate Silver of FiveThirtyEight.com; and scientific forecasters from the Huffington Post, PredictWise, and other authoritative outlets had all reached the same conclusion.”— We Built Reality: How Social Science Infiltrated Culture, Politics, and Power
“The Princeton Election Consortium (PEC) went even further, placing Clinton’s probability of winning between 98 and 99 percent.”— We Built Reality: How Social Science Infiltrated Culture, Politics, and Power
“This includes our poll-based forecast, which gave Mrs. Clinton an 85 percent chance to win, alongside those from other organizations, which put Mrs. Clinton’s chances at between 71 percent at FiveThirtyEight and greater than 99 percent at the Princeton Election Consortium.”— Putting the Polling Miss of the 2016 Election in Perspective
“Two Saturday nights ago, just as Hurricane Irma had begun its turn toward Florida, the Associated Press sent out a tweet proclaiming that the storm was headed toward St. Petersburg and not its sister city Tampa, just 17 miles to the northeast across Tampa Bay.”— The Media Has A Probability Problem
“On the contrary, the more carefully one looked at the polling, the more reason there was to think that Clinton might not close the deal. [...] polls and other data did not support the exceptionally high degree of confidence that news organizations such as The New York Times regularly expressed about Hillary Clinton’s chances.”— The Media Has A Probability Problem
“Hillary Clinton has a 6.6-percentage-point lead in the popular vote and an 87 percent chance of winning the Electoral College, according to our polls-only forecast.”— Election Update: There Are 4 Ways This Election Can End — And 3 Involve Clinton Winning
“If you go back and read The Times, they say point-blank, basically time and time again, “This is a sure thing for Hillary Clinton.””— Nate Silver says conventional wisdom, not data, killed 2016 election forecasts
“stunned political pundits blamed pollsters and forecasters, proclaiming “the death of data.””— Nate Silver says conventional wisdom, not data, killed 2016 election forecasts
“Even the crème de la crème of economics journals barely manage a ⅔ expected replication rate. 1 in 5 articles in QJE scores below 50%, and this is a journal that accepts just 1 out of every 30 submissions.”— What's Wrong with Social Science and How to Fix It: Reflections After Reading 2578 Papers
“For this stuff to be financed, generated, published, and eventually rewarded requires the complicity of funding agencies, journal editors, peer reviewers, and hiring/tenure committees. Given the current structure of the machine, ultimately the funding agencies are to blame.”— What's Wrong with Social Science and How to Fix It: Reflections After Reading 2578 Papers
“Disciplines concerned with public policy (“policy-proximal disciplines”), such as economics and political science, tend to be less fanatically leftist than disciplines concerned more with feelings, such as psychology, sociology, and gender studies (“policy-distal disciplines”).”— The Ideological Trajectory of the Social Sciences

Proponents of statistical election forecasting believed that regressions on large samples of state and national polls could separate signal from noise and deliver probabilities accurate enough to guide public understanding. They pointed to the success of similar models in 2008 and 2012 and to the fact that national polls had missed by only a few points in recent cycles. The assumption rested on the idea that polling averages in the final three weeks, especially in the industrial Midwest, represented a stable underlying reality. A thoughtful observer in 2016 could reasonably conclude that Clinton’s consistent 2-to-6-point leads in key states reflected a structural advantage that was unlikely to disappear in the last week. [1][2]

State polling averages in Michigan, Pennsylvania, Wisconsin, Minnesota, and Maine showed Clinton ahead in high-quality surveys taken in the final three weeks, and these numbers were treated as reliable indicators of voter preference. National averages missed by only a few points, and forecasters assumed that state errors would largely cancel out in the Electoral College. Post-debate polls such as those from Morning Consult showed no movement in Clinton’s lead and reinforced the belief that the race had stabilized. The assumption contained a kernel of truth because polls had performed adequately in prior presidential elections and because large undecided voter pools had historically broken in predictable ways. [2][4]

Social scientists argued that faculty ideology produced at most modest effects on research output and pointed to experiments that found no evidence of grading bias or direct indoctrination of students. Surveys of scholars showed that while many believed personal views should guide topic selection, they maintained that published work remained objective. The belief seemed plausible because peer review and replication norms were thought to filter out partisan distortion. A reasonable person reviewing the literature in 2016 could conclude that ideological skew existed but did not systematically undermine the reliability of election forecasts or policy-relevant social science. [7]

Yet evidence later challenged these foundations. Voter turnout data revealed that polls had overrepresented college-educated voters and systematically underestimated support for Trump among white non-college voters. Nonresponse rates in telephone surveys exceeded 90 percent, and white voters without college degrees proved especially likely to refuse participation. An analysis of 600,000 social science abstracts from 1960 to 2024 using large language models found that roughly 90 percent leaned left, with every discipline showing a left-of-center mean each year. Prominent papers such as the 2005 article on hegemonic masculinity scored strongly left on an ideological scale, while even moderate pieces still sat left of the national center. [2][10][12]

Supporting Quotes (13)
“These prediction gurus used methods and data developed by academics (including statistical regressions and demographic aggregations) to foretell political and economic futures.”— We Built Reality: How Social Science Infiltrated Culture, Politics, and Power
“Mrs. Clinton led in nearly every high-quality survey of Minnesota, Maine, Pennsylvania, Wisconsin and Michigan.”— Putting the Polling Miss of the 2016 Election in Perspective
“Over all, the national polls missed the result by only a few points: Hillary Clinton is on track to win the popular vote by around 1.5 percentage points, not especially far from her roughly four-point lead in an average of national polls.”— Putting the Polling Miss of the 2016 Election in Perspective
“For most of Irma’s existence, the entire Florida peninsula had been included in the National Hurricane Center’s “cone of uncertainty,” which covers two-thirds of possible landfall locations. [...] trying to distinguish between St. Petersburg and Tampa was like trying to predict whether 31st Street or 32nd Street would suffer more damage if a nuclear bomb went off in Manhattan.”— The Media Has A Probability Problem
“By Election Day, Clinton simply wasn’t all that much of a favorite; she had about a 70 percent chance of winning according to FiveThirtyEight’s forecast, as compared to 30 percent for Trump. Even a 2- or 3-point polling error in Trump’s favor — about as much as polls had missed on average, historically — would likely be enough to tip the Electoral College to him.”— The Media Has A Probability Problem
“A week ago, those numbers were about the same: a 6.4-point lead and an 85 percent chance. In our polls-plus forecast, Clinton’s chances are 84 percent, up only slightly from 82 percent a week ago.”— Election Update: There Are 4 Ways This Election Can End — And 3 Involve Clinton Winning
“The only national poll conducted fully after the debate — from Morning Consult for Politico — shows Clinton up 6 percentage points, the same margin Morning Consult had in their previous poll.”— Election Update: There Are 4 Ways This Election Can End — And 3 Involve Clinton Winning
“our model says to be cautious because of the still-high number of undecided voters and the broad swing state map, and that uncertainty could cut in either direction.”— Election Update: There Are 4 Ways This Election Can End — And 3 Involve Clinton Winning
“I don’t think people have a good intuitive sense for how to translate polls to probabilities. In theory, that’s the benefit of a model. But I think people thought “Well, Clinton’s ahead in most of the polls in most states, and I remember that seems similar to Obama four years ago, and therefore I’m very confident that she’ll win.””— Nate Silver says conventional wisdom, not data, killed 2016 election forecasts
“The assertion from the mainstream media was that the Electoral College was an advantage to Clinton, and of course it was a huge disadvantage to her.”— Nate Silver says conventional wisdom, not data, killed 2016 election forecasts
“There is evidence that direct biases in classroom instruction, grading, and graduate-level admissions are either unobserved or at most small effects; however, mechanisms by which ideology might influence research have recently been demonstrated in controlled experiments.”— The ideological orientation of academic social science research 1960–2024
“roughly 90 percent of politically relevant social science articles leaned left 1960–2024, and the mean political stance of every social science discipline was left-of-center every year during the period.”— The Ideological Trajectory of the Social Sciences
“e.g., 2005’s “Hegemonic Masculinity” in Gender and Society ranks as an 8 on a 0 to 10 scale of leftism, while 1972’s “The Agenda-Setting Function of Mass Media” ... in Public Opinion Quarterly is a middle of the road 5.”— The Ideological Trajectory of the Social Sciences

Newspapers and television broadcasts carried the forecasts to a national audience and repeated the high probabilities for Clinton until they became conventional wisdom. Forecasters cultivated personal brands through websites and books that promised to detect future political signals, and this commercial incentive encouraged confident-sounding predictions. Media outlets publicized Clinton’s leads in high-quality state surveys and highlighted the 85 to 99 percent win probabilities issued by major models. [1][2]

FiveThirtyEight spread its forecasts through its website articles and email updates to subscribers and illustrated the scenarios with charts based on 20,000 model simulations. The site framed the race as having four possible endings, three of which led to Clinton victories, and this structure reinforced her status as the heavy favorite. Journalists in newsrooms located in liberal cities relied on ad hoc descriptions of electoral math to assert that Clinton’s lead could not be overcome. [4][5]

Academic journals published social science papers without strong replication filters, and citations continued to treat retracted or non-replicating work as valid for years afterward. Nonresponse rates above 90 percent in election surveys became common, yet the resulting data still circulated through literature reviews and meta-analyses. The leftward ideological orientation documented in 600,000 abstracts spread through peer-reviewed journals and influenced the framing of research questions in disciplines that fed policy debates. [6][7][10]

Supporting Quotes (13)
“Newspapers and television broadcasts across the country repeated the claim.”— We Built Reality: How Social Science Infiltrated Culture, Politics, and Power
“Such prediction gurus built personal brands, ran massive websites, and sold books based on their ability to statistically detect “signals” of future outcomes amid the wider informational “noise” of society.”— We Built Reality: How Social Science Infiltrated Culture, Politics, and Power
“Few saw it coming. Mrs. Clinton led in nearly every high-quality survey of Minnesota, Maine, Pennsylvania, Wisconsin and Michigan.”— Putting the Polling Miss of the 2016 Election in Perspective
“Before a storm hits, the media demands impossible precision from forecasters, ignoring the uncertainties in the forecast and overhyping certain scenarios (e.g., the storm hitting Miami) at the expense of other, almost-as-likely ones (e.g., the storm hitting Marco Island).”— The Media Has A Probability Problem
“You’ll read lots of clips like this during an election campaign, full of claims about the “electoral math,” and they often don’t hold up to scrutiny. In this case, the article’s assertion that the loss of “a few percentage points” wouldn’t hurt Clinton’s chances of victory was wrong”— The Media Has A Probability Problem
“The charts below — which are based on 20,000 simulations from our model as of Friday afternoon — attempt to explain this by laying out four broad scenarios:”— Election Update: There Are 4 Ways This Election Can End — And 3 Involve Clinton Winning
“people looking at the polls are mostly in newsrooms in Washington and Boston and New York. These are liberal cities, and so people tend to see evidence (in our view, it was kind of conflicting polling data) as pointing toward a certain thing.”— Nate Silver says conventional wisdom, not data, killed 2016 election forecasts
“people are just referencing 2012, which was a very unusually stable race”— Nate Silver says conventional wisdom, not data, killed 2016 election forecasts
“there's no association between statistical power and impact factor, and journals with higher impact factor have more papers with erroneous p-values.”— What's Wrong with Social Science and How to Fix It: Reflections After Reading 2578 Papers
“studies that replicate are cited at the same rate as studies that do not. ... even after retraction the vast majority of citations are positive, and those positive citations continue for decades after retraction.”— What's Wrong with Social Science and How to Fix It: Reflections After Reading 2578 Papers
“characterizing the political content of hundreds of thousands of academic texts, especially in a consistent and fine-grained way, has historically been prohibitively expensive and time-intensive when relying on human coders.”— The ideological orientation of academic social science research 1960–2024
“nonresponse rates exceeding 90% for election surveys”— What Low Response Rates Mean for Telephone Surveys
“sociology, unsurprisingly moved left during the 1960s, then stabilized during the Sociobiology era of 1975-1985, then moved steadily left through 2024.”— The Ideological Trajectory of the Social Sciences

Academic hiring, tenure, and grant decisions continued to rest on publication counts and journal prestige without routine checks for replicability, and this practice channeled resources toward research that later proved fragile. Funding agencies supported the system by tying grants to publication records in high-impact journals even when those journals showed no correlation between prestige and replication success. Billions of dollars flowed into social science projects whose causal claims shaped economic, psychological, and management interventions. [6]

Social science research informed public policy through left-leaning framings in fields such as public administration and public health, and these disciplines were presumed to offer neutral expertise despite their documented ideological skew. Policy-proximal fields like economics and political science exerted direct influence on government decisions, and their research agendas had shifted further left after 1990 following a brief moderation in the 1970s and 1980s. The assumption that such work remained balanced underpinned its continued use in regulatory and legislative debates. [7][12]

Supporting Quotes (3)
“all these things depend on their peers. When criticising bad research it's easier for everyone to blame the forking paths rather than the person walking them.”— What's Wrong with Social Science and How to Fix It: Reflections After Reading 2578 Papers
“policy-proximal disciplines generally showed limited rightward moderation between roughly 1970 and 1990, though policy-distal disciplines did not.”— The ideological orientation of academic social science research 1960–2024
“policy-proximal disciplines generally showed limited rightward moderation between roughly 1970 and 1990 ... Economics, political science, and public administration (“policy-proximal disciplines”) all became less leftwing in the 1970s.”— The Ideological Trajectory of the Social Sciences

Public trust in elite experts declined sharply after the election as many citizens viewed the confident forecasts as another example of institutional failure. Clinton’s campaign relied on the polling leads and conducted minimal campaigning in Wisconsin, Minnesota, and Maine, which later analyses linked to her loss in the Electoral College. Overconfidence produced widespread surprise when Trump won despite the 85 to 99 percent probabilities that had dominated coverage. [1][2]

Media treatment of both the election and Hurricane Irma forecasts created public complacency and, in the case of Irma, risked dangerous misreactions during evacuations. The Associated Press tweeted a precise landfall prediction for Irma that proved incorrect, and the tweet was deleted the next morning after criticism. Billions of dollars were spent on social science research that failed to replicate, and false policy implications from non-replicating studies distorted interventions in several fields. [3][6]

Ideological skew in social science shaped research agendas in ways that favored certain topics and framings, and experiments later showed that faculty ideology causally affected estimates on issues such as immigration. Journals protected high-profile researchers accused of fraud for years, which prolonged harm and further eroded confidence in the enterprise. The cumulative effect left many observers questioning whether the statistical models used in elections and policy rested on solid ground. [6][7]

Supporting Quotes (10)
“Average citizens, doing their best to remain informed, were completely dumbfounded. They may not have understood the technicalities of statistical science, but surely some chicanery was afoot. In an election largely cast as a referendum on societal elites, yet another form of elite expert authority appeared to have failed spectacularly.”— We Built Reality: How Social Science Infiltrated Culture, Politics, and Power
“She barely even campaigned in Maine, Wisconsin or Minnesota, and visited Michigan only in the last week of the race.”— Putting the Polling Miss of the 2016 Election in Perspective
“which gave Mrs. Clinton an 85 percent chance to win, alongside those from other organizations, which put Mrs. Clinton's chances at between 71 percent at FiveThirtyEight and greater than 99 percent at the Princeton Election Consortium.”— Putting the Polling Miss of the 2016 Election in Perspective
“But this was a ridiculous, even dangerous tweet: The forecast was nowhere near precise enough to distinguish Tampa from St. Pete.”— The Media Has A Probability Problem
“there’s a fairly widespread perception that meteorologists performed poorly with Irma, having overestimated the threat to some places and underestimated it elsewhere. [...] And they undoubtedly saved a lot of lives by giving people time to evacuate in places like the Florida Keys.”— The Media Has A Probability Problem
“We provided a lot of insight to our readers to not be complacent about the general election outcome, whereas I think that wasn’t true for a lot of places.”— Nate Silver says conventional wisdom, not data, killed 2016 election forecasts
“Rewarding impact without regard for the truth inevitably leads to disaster. ... basing further research on false results”— What's Wrong with Social Science and How to Fix It: Reflections After Reading 2578 Papers
“Paolo Macchiarini "left a trail of dead patients" but was protected for years by his university. Andrew Wakefield's famously fraudulent autism-MMR study took 12 years to retract.”— What's Wrong with Social Science and How to Fix It: Reflections After Reading 2578 Papers
“Recent randomized experiments have been used to delineate a causal relationship between faculty ideology and academic research... research teams composed of pro-immigration researchers estimated more positive impacts of immigration on public support for social programs.”— The ideological orientation of academic social science research 1960–2024
“all disciplines showed leftward movement between 1990 and 2024.”— The Ideological Trajectory of the Social Sciences

Trump’s victory on November 8, 2016, exposed the gap between the forecasts and reality as states where polls had shown solid Clinton leads flipped to the Republican column. The polling miss in the industrial Midwest proved to be among the largest state-level errors in decades, and voter file data later confirmed that white working-class turnout had been underestimated. Nate Silver’s 29 percent chance for Trump had been treated as an outlier, yet the actual outcome fell within the uncertainty he had highlighted. [1][2][5]

Post-election analyses questioned the reliability of polling aggregation and pointed to social desirability bias and high nonresponse rates as persistent problems that had not been solved. An analysis of replication rates in top journals found no improvement after 2011, with expected replication remaining near 54 percent through 2018. The LLM-based study of 600,000 abstracts revealed a persistent leftward tilt that intensified after 1990 and raised new doubts about the neutrality of the underlying social science. [6][7][9]

Mainstream outlets initially blamed the polls themselves rather than their interpretation of the probabilities, and some forecasters defended the models by noting that Trump’s win had fallen within the wider confidence intervals. The episode prompted a broader debate about whether social science had overpromised on its ability to predict complex political events. By the end of 2016, the assumption that statistical models could reliably deliver high-probability election forecasts had become contested among both practitioners and the public. [3][5][8]

Supporting Quotes (11)
“Yet as the first results of the vote came in on Election Day 2016, these forecasters changed their percentages with dizzying speed. Cohn’s website, for example, flipped from predicting in the afternoon an 85 percent chance of Clinton winning to predicting by nightfall a 95 percent chance of Trump winning.”— We Built Reality: How Social Science Infiltrated Culture, Politics, and Power
“In the following weeks the astounding blunders of the nation’s most popular election experts led to a public debate. Was the problem the methods of the forecasters, or was it the entire enterprise of forecasting human behavior more generally?”— We Built Reality: How Social Science Infiltrated Culture, Politics, and Power
“It was the biggest polling miss in a presidential election in decades.”— Putting the Polling Miss of the 2016 Election in Perspective
“The polls’ samples might have overrepresented well educated voters. The educational composition of the electorate is one of the biggest mysteries for pollsters.”— Putting the Polling Miss of the 2016 Election in Perspective
“To its credit, the AP deleted the tweet the next morning. [...] The outcome was well within the “cone of uncertainty,” so to speak.”— The Media Has A Probability Problem
“The polls pointed toward a competitive race. The national polls in particular were very close to where the race ended up.”— Nate Silver says conventional wisdom, not data, killed 2016 election forecasts
“you have this big chunk of the population, 15 or 20 percent, that were like, “Well, I’m not sure what to do. I’ve never really been in a situation where I dislike both candidates so much.””— Nate Silver says conventional wisdom, not data, killed 2016 election forecasts
“In reality there has been no discernible improvement.”— What's Wrong with Social Science and How to Fix It: Reflections After Reading 2578 Papers
“roughly 90 percent of politically relevant social science articles leaned left 1960–2024, and the mean political stance of every social science discipline was left-of-center every year during the period... all disciplines showed leftward movement between 1990 and 2024.”— The ideological orientation of academic social science research 1960–2024
“White voters without a college degree were especially unlikely to participate in polls”— What Low Response Rates Mean for Telephone Surveys
“the 1960s saw the social sciences move dramatically to the left (higher on this graph), then stabilize in mid-1970s to late 1980s, then rise steadily after the collapse of the Muscovite empire ... Then leftism exploded during the late Great Awokening. It may (or may not) have diminished in 2024.”— The Ideological Trajectory of the Social Sciences
