Imagine the following scenario:

There exists a drug test that is 99% accurate on both drug users and non-drug users: if 100 drug users are tested, 99 will correctly test positive and 1 will incorrectly test negative. If 100 non-drug users are tested, 99 will correctly test negative and 1 will incorrectly test positive (a false positive).

Now, imagine a group of 1000 people (workers, welfare recipients, whatever) whose rate of drug use is 0.5%. One individual from this group is chosen at random and tested. The test is positive. Most people would say that the probability of that individual being an actual drug user is 99%. The test is 99% accurate, right?

Wrong.

The probability that the tested person is really a drug user is only about 33%. In other words, it is more likely the person is *not* a drug user, even though the "99% accurate" test was positive. At first this may sound implausible. But it's not. Why?

The absolute number of non-drug users is much larger than the number of users, so the false positives (1% of the 99.5% who are non-users, or 0.995% of everyone tested) outweigh the true positives (99% of the 0.5% who are users, or 0.495%).

Substituting real numbers:

1000 individuals are tested.

There should be 995 non-users and 5 users.

From the 995 non-users, 0.01 × 995 ≈ 10 false positives (1% of 995).

From the 5 users, 0.99 × 5 ≈ 5 true positives (99% of 5).

The total of all positives is 10 + 5 = 15. Of these 15 positive results, only 5, about 33%, are genuine.
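The arithmetic above is an instance of Bayes' theorem: the chance that a positive test means actual drug use depends on the base rate, not just the test's accuracy. A minimal sketch (the function name is my own; the numbers are the scenario's 99% accuracy and 0.5% base rate):

```python
def positive_predictive_value(base_rate, sensitivity, specificity):
    """P(user | positive test) via Bayes' theorem.

    base_rate   -- fraction of the population who are users
    sensitivity -- P(positive | user)
    specificity -- P(negative | non-user)
    """
    true_positives = base_rate * sensitivity          # 0.005 * 0.99 = 0.00495
    false_positives = (1 - base_rate) * (1 - specificity)  # 0.995 * 0.01 = 0.00995
    return true_positives / (true_positives + false_positives)

# The scenario from the text: 99% accurate test, 0.5% base rate.
ppv = positive_predictive_value(0.005, 0.99, 0.99)
print(f"{ppv:.1%}")  # prints "33.2%"
```

Note that even a test with no false negatives at all would give nearly the same answer here; the low base rate, not the test's accuracy, is what drives the result.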

This is just one of many cases where mathematics is misused in public policy. Politicians and self-righteous people can claim "We should drug test population x, the tests are 99% right! Very few people will be falsely accused!" As I have demonstrated above, this is simply not true.

The same kind of misuse of mathematics appears in DNA sampling as well, with literal life-and-death consequences. See Misused Mathematics of DNA Sampling.