The base rate fallacy, also called base rate neglect, is an error that occurs when the conditional probability of some hypothesis H given some evidence E is assessed without taking adequate account of the “base rate” or “prior probability” of H. Suppose you know that an urn contains either 80 black balls and 20 white ones or 50 black and 50 white. Let these two hypotheses be H and H*, respectively. You randomly draw from the urn and get a black ball; call this evidence E. One might think that the probability of H given E is high simply because the probability of E given H is high (0.8). But suppose that you also know the prior probability of H is low, say 0.1; perhaps it was difficult for the urn-filler to obtain more than 50 black balls. In that case the probability of H given E can be low as well, since it depends not only on the likelihood of E under H but also on the prior of H and the overall probability of E. So, given that you draw a black ball, the chance that the urn had the 80/20 mix may still be quite low, while the probability that it had the 50/50 mix may remain much higher. To overlook or disregard the bearing of the low prior probability of H on the conditional probability of H given E is to commit the base rate fallacy.
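The urn numbers above can be run through Bayes' theorem directly. The sketch below assumes, as the text suggests, that H and H* exhaust the possibilities, so that P(H*) = 1 − P(H) = 0.9:

```python
# Bayes' theorem for the urn example, with the numbers from the text.
# H  : urn has 80 black / 20 white  ->  P(E | H)  = 0.8
# H* : urn has 50 black / 50 white  ->  P(E | H*) = 0.5
# Prior P(H) = 0.1; assuming H and H* are exhaustive, P(H*) = 0.9.

p_h = 0.1
p_hstar = 0.9
p_e_given_h = 0.8
p_e_given_hstar = 0.5

# Total probability of drawing a black ball (law of total probability):
p_e = p_e_given_h * p_h + p_e_given_hstar * p_hstar  # 0.08 + 0.45 = 0.53

# Posterior probability of H given the black draw:
p_h_given_e = p_e_given_h * p_h / p_e
print(round(p_h_given_e, 3))  # prints 0.151
```

Despite the high likelihood P(E | H) = 0.8, the posterior P(H | E) comes out at roughly 0.15: the low prior dominates, which is exactly the point the paragraph makes.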

For an intuitive example, suppose that Sue has won the town raffle and you hypothesize that she won because she bribed the judges to print multiple copies of her ticket and to discard many other tickets. After all, given that this hypothesis holds, the chance of Sue winning the raffle is high. But to think that this consideration alone, plus the fact that Sue won, makes it probable that Sue in fact bribed the judges in this way is to commit the base rate fallacy. Such reasoning ignores the prior probability that Sue pulls off such a bribe. Unless you have independent reasons for thinking the prior is high, or that successful bribes of this kind occur at a non-negligible rate, the fact that Sue won is not good evidence that this bribing scenario has occurred.
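The raffle case has the same Bayesian structure, though the text gives no numbers. The figures below are purely illustrative assumptions: a 0.001 prior that Sue pulls off such a bribe, a 0.9 chance of winning given a successful bribe, and a 0.01 chance of winning honestly (as if there were about 100 entrants):

```python
# Sue's raffle win, with hypothetical illustrative numbers (not from the text).
p_bribe = 0.001            # assumed prior that Sue successfully bribes the judges
p_win_given_bribe = 0.9    # assumed chance she wins if the bribe succeeded
p_win_given_fair = 0.01    # assumed chance she wins honestly (~100 entrants)

# Total probability that Sue wins:
p_win = (p_win_given_bribe * p_bribe
         + p_win_given_fair * (1 - p_bribe))

# Posterior probability of the bribery hypothesis given her win:
p_bribe_given_win = p_win_given_bribe * p_bribe / p_win
print(round(p_bribe_given_win, 3))  # prints 0.083
```

Even with a likelihood of 0.9, the posterior on these assumed numbers is only about 8%: the tiny base rate of successful bribes swamps the high likelihood, just as the paragraph argues.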