The False Assumption Registry was built to test a simple idea: that many of our current social and political problems are not random misfortunes but downstream consequences of bad policies, and those bad policies were built on false assumptions.
But the thesis goes one step further: false assumptions are not spontaneous errors. They originate with experts giving bad information. That information is amplified by media and institutions until it becomes unchallenged mainstream belief, something "everyone knows" but no one has verified. That bad common knowledge is the false assumption. Once it reaches that status, policy is built on top of it without scrutiny. The chain runs: bad expert claims → unchallenged common knowledge (the false assumption) → bad policy → real-world harm.
## The Deeper Pattern
Taken as a whole, the registry reveals something beyond a catalog of errors. Many of the false assumptions documented here share a peculiar structure: they are moral convictions disguised as empirical claims. The belief that all group differences are caused by discrimination, that punitive justice serves no social function, that national borders are economically irrational: these are not hypotheses that emerged from data. They are moral sentiments that went looking for scientific justification.
For most of Western history, moral claims about human dignity and the proper treatment of persons were grounded in philosophical and theological frameworks, what philosophers call teleological reasoning, the idea that human beings have a nature, a purpose, and a shared dignity independent of any measurable trait. An Aristotelian or Thomistic thinker could say, without contradiction, that human beings differ in capacities but share a common nature and inherent worth. Moral claims stood on their own foundations.
The Enlightenment gradually dismantled these foundations. The pivotal figure was David Hume, who in his Treatise of Human Nature (1739 to 1740) posed two challenges that would prove devastating. The first was what philosophers now call the is/ought problem: Hume observed that writers on morality imperceptibly shift from statements about what is the case to claims about what ought to be, without ever explaining how the one follows from the other. No set of factual premises, however extensive, can by itself yield a moral conclusion. You cannot get from “human groups differ in trait X” to “therefore group Y deserves less dignity” without smuggling in a moral premise that was never established by the data.
The second challenge was Hume’s theory of moral sentiments, and it cut even deeper. Hume argued that moral judgments are not deliverances of reason at all, but expressions of feeling, a kind of approval or disapproval that arises from what he called sympathy, our natural capacity to feel the pleasures and pains of others. When we call an action “virtuous” or “vicious,” we are not reporting a fact about the world; we are reporting a sentiment within ourselves, a felt response that we project onto the action and then mistake for a property of the action itself. “Reason is, and ought only to be the slave of the passions,” Hume wrote, “and can never pretend to any other office than to serve and obey them.” Reason, on its own, is motivationally inert; it can tell us what is true but not what we should care about. It can inform us about the consequences of an action, but the judgment that those consequences matter (that they are good or bad) comes from sentiment, not from logic. Morality, for Hume, is therefore rooted in human feeling, not in rational demonstration or divine command.
These two arguments (that facts cannot entail values, and that moral claims are grounded in feeling rather than reason) would echo through subsequent philosophy with increasing force. Kant tried to rescue moral objectivity by grounding it in pure practical reason: his categorical imperative attempted to derive moral duties from the very structure of rational agency, without appeal to sentiment or theology. But his system depended on contested premises about what rationality requires, and it never achieved the universal assent that would have been needed to replace the older foundations. Nietzsche drew the more radical conclusion that moral claims were expressions of will masquerading as truths, that the entire moral vocabulary of the West was a power play dressed in the language of objective duty.
By the early twentieth century, the logical positivists and emotivists had codified Hume’s intuition into an explicit and devastating doctrine. A. J. Ayer, in Language, Truth and Logic (1936), argued that moral statements lack the literal meaning that factual statements possess: they express no proposition that could be verified or falsified, and therefore have no truth-value at all. To say “murder is wrong” is not to state a fact about murder; it is to express a feeling of disapproval, rather like saying “murder, boo!” Charles Stevenson refined this into a more sophisticated theory: moral statements are not descriptions of the world but tools of persuasion, designed to influence the attitudes and behavior of others. When someone says “equality is good,” they are not reporting a discoverable truth; they are attempting to get you to share their attitude toward equality.
As the philosopher Alasdair MacIntyre argued in After Virtue (1981), this trajectory was not an accidental wrong turn but the inevitable outcome of the Enlightenment project of grounding morality in reason alone, stripped of tradition, theology, or any conception of human purpose. MacIntyre’s central insight was that the Enlightenment thinkers had inherited a moral vocabulary (words like “good,” “just,” “virtuous”) that originally made sense only within a teleological framework, one in which human beings had a telos, a purpose or function, and moral judgments evaluated how well a person fulfilled that purpose. Strip away the teleology, as the Enlightenment did, and the moral vocabulary survives but loses its rational foundation. People continue to use moral language, continue to argue passionately about justice and rights, but the arguments can never be resolved because the shared framework that once made resolution possible has been removed. What remains is what MacIntyre called emotivism, not merely a theory held by a few philosophers, but the dominant condition of modern moral discourse: a condition in which moral claims are really expressions of preference dressed in the language of objective truth, and moral debates become interminable because the competing positions rest on incommensurable premises that no shared rational framework can adjudicate.
Emotivism did not merely weaken moral reasoning; it made the misuse of science inevitable. Once moral claims lost their independent standing and became, in MacIntyre’s phrase, nothing more than expressions of attitude and feeling, people who still held deep convictions about equality, dignity, and justice had nowhere left to ground them. Philosophy could no longer certify a moral claim as objectively true. Theology had been sidelined. The only institution still widely regarded as a source of objective truth was empirical science. So moral convictions migrated there, not because science invited them, but because there was nowhere else to go.
This was not a conspiracy or even a conscious choice. It was a structural inevitability. A society that has dismantled every framework for making objective moral claims except science will use science to make moral claims. The result is predictable: empirical propositions that function as moral commitments, held not because the evidence is strong but because abandoning them feels like abandoning the moral value they encode.
## The Post-War Emergency
This substitution had been building for centuries, but the Second World War made it feel like a civilizational emergency. For all of recorded Western history, the prohibition against murder and the principle that every human being possesses inherent dignity had been treated as bedrock moral truths, grounded in religious commandment, natural law, or philosophical tradition. The Nazi regime abandoned both. It did not do so on a whim or out of simple cruelty; it did so on explicitly empirical grounds. Racial biology, presented as cutting-edge science, told the Nazis that certain groups were biologically inferior, subhuman in a literal, measurable sense. From this factual premise they derived a moral permission: if science says these people are not fully human, then the ancient prohibition against killing them does not apply. The traditional moral framework (that murder is wrong, that all persons share an equal claim to dignity) was not merely ignored but actively overridden by an empirical claim about human difference. The Holocaust was the most vivid possible demonstration of what happens when moral commitments are made to depend on what biology discovers rather than standing on their own foundations.
After the war, the horror was plain, but the damage could not simply be undone. The Nazis had opened a door that could not easily be closed: they had shown that if moral commitments rest on empirical claims, then a sufficiently determined regime can use rival empirical claims to overturn those commitments entirely. The natural response would have been to re-establish the moral prohibitions on independent foundations, to declare that murder is wrong and human dignity is inviolable regardless of what biology discovers about group differences, regardless of what any future science might say about human variation.
But this is precisely what the modern West could not do. Doing so would have required exactly the kind of robust philosophical or religious framework that the modern West no longer possessed. The Enlightenment had spent centuries eroding the authority of religious morality (the divine command tradition, natural law theory, the Thomistic synthesis) without ever constructing a secular replacement capable of grounding absolute moral claims independently of empirical facts. Kant had tried, but his system rested on contested premises about rational agency that never achieved universal assent. Hume had shown in the eighteenth century that you cannot derive an “ought” from an “is,” but no widely accepted philosophical tradition had taken up his challenge and built a foundation for human equality that did not quietly depend on the assumption of empirical sameness. The old religious foundations had been dismantled; the new secular foundations had never been built. The box was open, and there was no philosophical apparatus available to close it.
The post-war generation, lacking that foundation, drew a fateful lesson from the catastrophe. They concluded that the problem was the content of the Nazis’ empirical claims, specifically the assertion of biological differences between groups. The fix, as they saw it, was to establish the opposite empirical claims as unchallengeable orthodoxy. The UNESCO Statements on Race (1950 to 1967) were among the first and most explicit examples of this strategy: a deliberate effort to build a scientific consensus that would make a repeat of such horrors intellectually impossible.
As the historian Poul Duedahl has documented, this effort was not a straightforward exercise in scientific inquiry.[1] UNESCO’s first Director-General, the biologist Julian Huxley, explicitly prioritized the social sciences over the natural sciences, believing that disciplines whose practitioners had been “active in criticizing racism before and during World War II” were more likely to dismantle the idea of inequality. The panel convened to draft the first statement in 1949 was recruited almost entirely from anthropologists, psychologists, sociologists, and ethnographers who already perceived the race concept as a social construct, most of them students or intellectual descendants of Franz Boas at Columbia University. Physical anthropologists, the scientists most likely to defend race as a meaningful biological category, were deliberately excluded.
The physical anthropologist Ashley Montagu, who drafted the 1950 statement, went further than even UNESCO had intended. His draft attempted “a single, universal rejection of the concept of race, which he found scientifically indefensible,” and recommended replacing “race” with “ethnic group” entirely.[1] The published statement declared that race was “less a biological fact than a social myth” and was promoted under the headline “No biological justification for race discrimination, say world scientists.” The backlash from physical anthropologists was immediate and severe: they charged that the statement was an ideological attempt to eliminate the concept of race in order to promote universal brotherhood, not a dispassionate summary of the evidence. A second, more cautious statement was issued in 1952, retreating from the claim that race was a myth while still rejecting any link between race and mental traits. Two further statements followed in 1964 and 1967, each signed by carefully chosen panels of scientists, and each reflecting what Duedahl describes as “a dispute about whether the natural sciences or the social sciences should take precedence in determining the origins of human difference, of social division, and of the attribution of value.”
The statements had enormous real-world consequences. In the United States, UNESCO’s race publications were distributed in “re-education” workshops in schools and churches. Social scientists affiliated with UNESCO served as expert witnesses in the NAACP’s desegregation cases, and the Chief Justice specifically cited the first UNESCO statement in the reasoning behind Brown v. Board of Education (1954). In South Africa, the apartheid government accused UNESCO of interference in domestic matters and withdrew from the organization in 1956. The blank-slate view of human nature, developed by Boas and his students in earlier decades, had gone from being one school of thought to a civilizational imperative, one backed by the authority of a world body and enforced through education, law, and social pressure.[1]
But the UNESCO statements were only the beginning. The same pattern, encoding moral commitments as empirical claims and treating challenges to the data as attacks on the moral commitment, spread across one domain after another in the decades that followed: criminology, education, immigration economics, sex differences, child development, public health. In each case, a defensible moral conviction (people deserve equal dignity; the vulnerable deserve protection; strangers deserve fair treatment) was fused with a specific empirical claim that may or may not have been true. Once fused, the empirical claim inherited the moral protection of the conviction. Questioning the data became indistinguishable from questioning the value.
Hume’s is/ought problem was not merely an abstract philosophical puzzle; it kept reasserting itself in practice. In 1976, the Genetics Society of America (GSA), then the world’s largest society of geneticists, attempted to publish a resolution settling the IQ controversy. As the historian Davide Serpico has documented through analysis of unpublished drafts and private correspondence between GSA members, the three-year drafting process became a case study in the impossibility of keeping facts and values separate when no independent moral framework exists.[4]
The early drafts committed exactly the error Hume had identified two centuries earlier: they implied that equal treatment (an ought) followed from genetic equality (an is). The first draft stated that “neither theory nor practice in education or politics shall rest upon a premise of difference in mental capacity between races unless or until the reality of such a difference has been established.” The implication was damning: if such a difference were established, unequal treatment might be warranted. The committee had unwittingly made the moral case for equal dignity depend on a specific empirical outcome.
GSA members saw the problem immediately. Bernard Davis, a biologist at Harvard Medical School, observed that “wide publicity, and much governmental support, has been given to the view that equality of opportunity is best measured by ethnic parity in the distribution of all kinds of jobs and school admissions,” and warned that “if they believe in it as a consequence of an assumption about the distribution of genetic potential, that assumption is subject to scientific scrutiny.” John Turner, a population geneticist at the University of Leeds, put the logical problem in its starkest form: “To say that racial discrimination is wrong because there is no genetic difference is to invite the counter-proposition that racial discrimination is right because there is a genetic difference.”
When the membership pushed back (many geneticists defended the reliability of hereditarian data), the committee was forced through three increasingly cautious drafts, each retreating from the original fusion of fact and value. By the final version, the Resolution had explicitly separated the two: “Whether or not there are significant genetic inequalities in no way alters our ideal of political equality, nor justifies racism or discrimination in any form.” But by then the document had become so cautious and impartial that it said nothing of consequence. Serpico shows that this was not an accident but a symptom: the committee progressively realized that a scientific society could not simultaneously take a political stand and maintain scientific credibility, and the Resolution was quietly “buried” in a journal supplement where most GSA members never even saw it.
The historian William Provine diagnosed the underlying problem: scientists’ education was “inadequate” for framing the relationship between science and ethics, leaving them unable to separate the moral commitment to equal dignity from the empirical question of whether populations are identical. The GSA case was UNESCO repeated in miniature, a generation later: moral convictions that could have stood on independent philosophical foundations were instead made to depend on contested empirical claims, and when the claims proved more uncertain than expected, the moral authority of the enterprise collapsed along with them.
The motivation was noble. But the strategy contained a design flaw with consequences that are still unfolding: it made moral progress depend on specific empirical claims being true. When the evidence pointed elsewhere, the claims could not be revised without appearing to abandon the moral commitment behind them. A moral tripwire was wired into the empirical sciences, not once, but repeatedly, across dozens of fields.
This is not merely an external critique. Social psychology’s own authoritative reference work, the Handbook of Social Psychology, now explicitly documents the tension. In the sixth edition’s opening chapter, Dale T. Miller and Kristin Laurin identify four enduring tensions that have defined the field since its founding, the last of which (the pull between basic science, real-world relevance, and political advocacy) has grown especially heated.[3] They note that social psychologists can exaggerate the strength of policy-relevant research out of a desire to promote policies they consider morally preferable, and that the field’s pronounced liberal skew makes it naïve to think this overrepresentation does not influence the questions asked and the answers discovered. The chapter traces this entanglement back to the field’s founding: Kurt Lewin, the émigré psychologist who did more than anyone to shape modern social psychology, saw no boundary between scientific inquiry and social action. His vision of “action research” explicitly fused empirical investigation with the pursuit of social goals, the prototype for the pattern this thesis describes. The Handbook chapter also documents the more recent debate over whether science can be value-free at all, with philosophers of science such as McMullin (1982) and Douglas (2014) arguing that values inevitably shape which inferences scientists treat as warranted. When a field’s own flagship reference concedes that moral commitments have shaped its empirical conclusions, the fusion described here is not a conspiracy theory but a documented institutional reality.
## The Replication Crisis as Predicted Symptom
If the thesis above is correct (that moral commitments colonized empirical fields and acquired the protective status of sacred values), then the modern crisis in social science is not a surprise. It is a prediction. A field whose primary function has shifted from discovering truth to providing scientific authority for moral convictions will, over time, produce exactly the pattern we now observe: landmark findings that replicate poorly or not at all, effect sizes that shrink toward zero under rigorous re-testing, and an institutional culture that treats skepticism as heresy rather than as healthy science.
The numbers are stark. The Open Science Collaboration’s landmark 2015 study attempted to replicate 100 published results in psychology and found that only 36% produced statistically significant results the second time around; mean effect sizes in the replications were roughly half those originally reported. Subsequent large-scale projects confirmed the pattern: the Many Labs 2 project (2018) found that only 54% of 28 classic findings replicated, with several showing no detectable effect at all. The fields hardest hit were not random; they were precisely the subfields where the moral stakes were highest: social priming, stereotype threat, implicit bias, ego depletion, and the psychology of power and inequality.
Consider the casualties. Stereotype threat, the claim that merely reminding people of a negative group stereotype causes them to underperform, was one of social psychology’s most celebrated findings and a pillar of policy interventions across education. Meta-analyses now suggest the original effects were heavily inflated by publication bias and selective reporting; large pre-registered replications find effects near zero. The Implicit Association Test (IAT), which claimed to reveal hidden prejudice and became the basis of a multi-billion-dollar diversity training industry, has been shown to be a poor predictor of discriminatory behavior, with test-retest reliability so low that its creators have acknowledged it should not be used to diagnose individuals. Ego depletion, power posing, facial-feedback effects, social priming: one after another, findings that had been cited thousands of times, taught in every introductory textbook, and used to justify institutional policies turned out to rest on statistical artifacts, small unrepresentative samples, and analytical flexibility that bordered on fiction.
The pattern is not that science sometimes gets things wrong; every field has failed replications. The pattern is that the failures cluster in areas where the findings carried moral weight. Claims that served no particular moral narrative (basic cognitive effects, psychophysics, visual perception) replicated at much higher rates. It was the findings that mattered morally, the ones that proved discrimination was hidden, that willpower was a myth, that power corrupts perception, that group disparities had purely environmental explanations, that failed most dramatically. This is exactly what you would predict if the selection pressure on these findings was not “is this true?” but “does this support the right moral conclusion?” When moral authority rather than empirical accuracy becomes the criterion for what gets published, celebrated, and replicated, the result is a literature optimized for persuasion rather than truth, and a field that mistakes its own confidence for evidence.
The replication crisis, in other words, is not an indictment of the scientific method. It is an indictment of what happens when a field abandons the scientific method’s central discipline (the willingness to be proven wrong) because the findings have become load-bearing walls in a moral edifice. The social sciences did not fail despite their moral seriousness; they failed because of it. Moral authority and empirical authority require opposite dispositions: the first demands conviction, the second demands doubt. A field that serves both masters will eventually betray one of them, and in the social sciences, it was doubt that was sacrificed.
The populist revolts now reshaping the Western world (Trump, Brexit, the European far right) are usually explained as backlash against immigration, economic anxiety, or cultural displacement. Those explanations have merit. But underneath them is something more fundamental: a loss of trust in the institutions that claimed to be the guardians of both truth and morality, and that turned out to have been fusing the two in ways that corrupted both. The revolt is not just against specific policies; it is against the epistemological authority of the institutions that produced those policies. People can feel that they have been lied to, even when they cannot articulate exactly how. What they are sensing is the fusion.
## Social Consequences
If the thesis above is correct, then both of the dominant political reactions of the early 21st century (the progressive movement sometimes called “wokism” and the populist far right) are not aberrations but predictable consequences of the same underlying failure.
Consider what happens when policies built on false egalitarian assumptions fail to produce equal outcomes. If you have committed yourself to the empirical claim that all group differences are caused by discrimination, then persistent inequality after decades of reform does not refute the claim; it intensifies it. The gaps must mean that discrimination is deeper, more structural, and more pervasive than previously understood. The philosopher Nathan Cofnas has argued that this is precisely the mechanism behind the rise of what critics call “wokism”: it is the normal reaction to the failure of equalizing policies, given the background assumption that equality of outcome is the natural state absent oppression.[2] If the assumption is that all groups would perform identically in a fair system, then every persisting disparity is proof that the system is not yet fair, and the search for hidden barriers becomes ever more expansive, from overt discrimination to unconscious bias, systemic racism, microaggressions, and epistemic injustice. The progressive ratchet is not irrational within its premises; it is the logical consequence of refusing to revisit the foundational empirical claim.
The far right is the mirror image of the same failure. Where the progressive response to failed predictions is to double down on the framework and hunt for deeper oppression, the populist-right response is to reject the entire moral edifice. People who can see that the official explanations do not match observable reality, but who have no philosophical vocabulary for separating moral commitments from empirical claims, conclude that the moral commitments themselves were fraudulent. If “equality” was sold to them as an empirical fact and the fact turned out to be false, then equality itself must be a lie. This is how you get movements that do not merely question specific policies but reject the underlying values of human dignity and equal treatment, not because those values are indefensible, but because the only framework in which they were presented has collapsed.
Both reactions, in other words, are consequences of the same design flaw identified above: the fusion of moral commitments with empirical claims. The progressive reaction preserves the moral commitment by making the empirical claim unfalsifiable. The far-right reaction abandons the moral commitment because the empirical claim was falsified. Neither reaction is necessary. A society that had maintained independent philosophical foundations for human dignity (foundations that do not depend on any particular empirical finding about group similarities) would not face this dilemma. It could absorb unwelcome scientific findings without either denying the evidence or abandoning the values. That it cannot is the clearest sign that the fusion described in this thesis has real and continuing consequences.
The result is a distinctive failure mode visible throughout the registry: claims that are treated as empirical findings but are actually unfalsifiable moral commitments, protected from scrutiny not by the strength of the evidence but by the moral stigma attached to questioning them. Recognizing this pattern, the moment a factual claim becomes socially impermissible to test, may be the single most useful heuristic the registry offers.
## Examples
The table below illustrates the pattern with specific cases. In each row, the moral conviction on the left is defensible on its own terms. The empirical claim on the right is what it became once it was routed through science, and the third column shows why the scientific version resists correction.
| Moral Conviction | Scientific Disguise | Why It Resists Correction |
|---|---|---|
| All people deserve equal dignity regardless of group membership | Any statistical disparity between groups is proof of ongoing discrimination | Questioning whether the data supports that conclusion feels like questioning the commitment to equality itself |
| Women deserve equal treatment and opportunity | All observed behavioral and cognitive differences between the sexes are products of socialization, not biology | Conceding any innate difference appears to justify unequal treatment |
| People in the justice system deserve humane treatment and a path to redemption | Punishment has no deterrent effect; rehabilitation is always superior to incarceration | Questioning the empirical claim feels like advocating for cruelty |
| Strangers and foreigners deserve hospitality and fair treatment | Immigration has no negative effects on wages, public services, or social cohesion | Presenting contrary economic data looks like nativism or xenophobia |
This thesis is tested against the registry's data by an independent AI analysis on the Summary of Findings page.
## References
1. Duedahl, Poul. “Changing the concept of race: On UNESCO and cultural internationalism.” Global Intellectual History (2020): 1–21. doi:10.1080/23801883.2020.1830496
2. Cofnas, Nathan. “Why We Need to Talk About Race.” Aporia Magazine (2023). Cofnas argues that progressive racial politics (“wokism”) is the predictable consequence of the hereditarian hypothesis being true while society operates on the assumption that it is false: persistent gaps after equalizing policies are interpreted as proof of ever-deeper discrimination.
3. Miller, Dale T., & Laurin, Kristin. “History of Social Psychology: Four Enduring Tensions.” In D. T. Gilbert, S. T. Fiske, E. J. Finkel, & W. B. Mendes (Eds.), The Handbook of Social Psychology (6th ed.). Situational Press, 2025. doi:10.70400/DCSX1997
4. Serpico, Davide. “The Cyclical Return of the IQ Controversy: Revisiting the Lessons of the Resolution on Genetics, Race and Intelligence.” Journal of the History of Biology 54, no. 2 (2021): 199–228. doi:10.1007/s10739-021-09637-6