In statistical computations, intuition can be very misleading.

A Bad Penny Always Shows Up

To simulate what would happen with a bad (deformed, bent) penny that shows tails more often than heads, you could add more sides to the coin in line 5 of Listing 1:

my @sides  = qw( H H H T T T T );

Of the coin's seven sides, three now come up heads and four come up tails, so the coin shows tails with a probability of 4/7; the script then correspondingly computes (subject to random variation) a p-value like the following:

Rounds:  1000
Tails:    565
p-value:  0.0351
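The toss loop itself is easy to sketch. The following is a minimal Python rendition of the idea (the article's Listing 1 is a Perl script and additionally computes the p-value, which is omitted here; the seed is fixed only to make the output reproducible):

```python
import random

random.seed(42)  # fixed seed for reproducible output

# Bad penny: three heads sides, four tails sides -> P(tails) = 4/7
sides = ["H", "H", "H", "T", "T", "T", "T"]

rounds = 1000
tails = sum(random.choice(sides) == "T" for _ in range(rounds))

print(f"Rounds: {rounds}")
print(f"Tails:  {tails}")
```

With P(tails) = 4/7 ≈ 0.57, the tail count lands near 571 on average, give or take the usual random scatter.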

The p-value is approximately 3.5 percent (i.e., below the specified 5 percent threshold for the significance value). This casts serious doubt on the null hypothesis that the coin lands on both sides with the same likelihood.
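A p-value is the probability, under the null hypothesis, of a result at least as extreme as the one observed. For coin tosses, a one-sided version can be computed exactly with nothing but the standard library (a Python sketch for illustration; the article's Perl script may use a different test statistic, so its output need not match this function's values):

```python
from math import comb

def binom_p_value(n, k, p=0.5):
    """One-sided P(X >= k) when X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# Probability of at least 8 tails in 10 tosses of a fair coin
print(binom_p_value(10, 8))  # → 0.0546875 (56/1024)
```

A result below the chosen significance level (here, 5 percent) would prompt you to reject the fair-coin null hypothesis.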

Careful with Your Diagnosis

Experiments that test new medications or treatment procedures for their efficacy define the null hypothesis as "The medication has no effect," set the significance value to around 5 percent, and then sound the alarm if the p-value drops below this threshold in the experiment – that is, if there are suddenly good reasons to assume that the null hypothesis is incorrect. In this case, the miracle cure under test has, with a high degree of probability, really shown some positive treatment effect.

According to Alex Reinhart's recently published book on statistical blunders [3], however, studies commonly misinterpret the significance value after the fact, giving patients false hope or needlessly causing them to panic. These base rate fallacies [5] arise in the context of conditional probabilities: a certain result already has a certain a priori probability, and that prior probability needs to be factored into the computation.

The following experiment from Reinhart's book shows what for many people is an amazing deviation between popular opinion and precise science: A mammography returns the correct diagnosis for patients with breast cancer with a 90 percent probability. However, the test comes up with a diagnosis of breast cancer for approximately 7 percent of healthy patients, so that – in the case of positive findings – further diagnostic procedures are necessary for clarification. The question is now: Is this test suitable for effectively screening the population? If the mammography detects breast cancer, how great is the probability that a randomly selected woman really needs treatment?

Most people think about this for a while, subtract the 7 percent false positive rate from 100 percent in their heads, and end up with a result of around 93 percent. However, this assumption is totally wrong. What is the correct result? Maybe 70 percent? Or even 50 percent? The amazing truth is that only around 9 percent of the positive findings from mammographies performed on randomly selected women are correct breast cancer diagnoses.

Amazing Statistics

If your thought experiment led you to believe that the test accuracy was higher than it actually is, you probably fell into the typical base rate fallacy trap and forgot to consider in your calculations that, on average, only 0.8 percent of the women in a given population have breast cancer.

Of 1,000 women, 992 thus do not have breast cancer; because mammography returns the wrong diagnosis in 7 percent of these cases, around 70 healthy women receive a false positive. Of the eight women who do have breast cancer, the test correctly detects the condition in seven. In total, then, only seven of the 77 breast cancer findings after mammography are correct (i.e., approximately 9 percent). Given this low accuracy rate, it is inadvisable to perform across-the-board tests; instead, only certain high-risk groups should be tested, where even an inefficient test is still far better than no test at all.
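The head count above can be expressed compactly with Bayes' formula. Here is a minimal Python check, using only the probabilities quoted in the text:

```python
# Positive predictive value of mammography screening,
# using the numbers quoted in the article
sensitivity = 0.90   # P(positive test | cancer)
false_pos   = 0.07   # P(positive test | no cancer)
prevalence  = 0.008  # 0.8 percent of women have breast cancer

# Bayes' formula: P(cancer | positive) =
#   P(positive | cancer) * P(cancer) / P(positive)
p_positive = sensitivity * prevalence + false_pos * (1 - prevalence)
ppv = sensitivity * prevalence / p_positive

print(f"P(cancer | positive test) = {ppv:.1%}")  # → 9.4%
```

Computing with exact probabilities instead of rounded head counts gives 9.4 percent, matching the roughly 9 percent figure from the 1,000-women tally.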

