Calculating Probability
Perl – Think Again

To tackle mathematical problems involving conditional probabilities, math buffs rely on Bayes' formula or on discrete distributions generated by short Perl scripts.
The Monty Hall problem is loved by statisticians around the world [1]. You might be familiar with this puzzle, in which a game show host offers a contestant a choice of three doors – behind one door is a prize, but the other two doors only reveal goats. After the contestant chooses a door, the TV host opens one of the other doors, revealing a goat, and asks the contestant to reconsider (Figure 1). Who would have thought that the probabilities in a static television studio could change so dramatically just because the host opens a door without a prize?
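If in doubt, brute force helps: The following short simulation – a quick sketch for illustration, not one of the listings discussed later in this article – plays the game a large number of times and confirms that contestants who switch doors win about twice as often as those who stick with their first choice.

#!/usr/bin/perl
use strict;
use warnings;

# Monte Carlo sketch of the Monty Hall game (illustration
# only, not one of this article's listings)
my $runs = 100_000;
my ( $stay_wins, $switch_wins ) = ( 0, 0 );

for ( 1 .. $runs ) {
    my $prize  = int rand 3;    # door hiding the prize
    my $choice = int rand 3;    # contestant's first pick

      # host opens a goat door: neither picked nor winning
    my ($opened) = grep { $_ != $choice and $_ != $prize } 0 .. 2;

      # switching means taking the one remaining closed door
    my ($switched) = grep { $_ != $choice and $_ != $opened } 0 .. 2;

    $stay_wins++   if $choice == $prize;
    $switch_wins++ if $switched == $prize;
}

printf "stay: %.3f switch: %.3f\n",
  $stay_wins / $runs, $switch_wins / $runs;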

The human brain seems to struggle with conditional probabilities like these, which describe random events that come with preconditions. It's just like a task I set up in a Perl column 15 years ago, which involved a top hat with three cards inside. The first card is black on both the front and back, the second card is red on front and back, and the third card is red on one side and black on the other (Figure 2). One card is drawn. You can see the front, which is red. What is the probability of the back of the card also being red?

Counterintuitive
Most people immediately respond "50 percent," because there are apparently two equally probable cases: the red-and-black card and the red-and-red card – that is, the opposite side is black in one case and red in the other. To many people's surprise, however, the correct answer is "66 percent." The clue lies in the preconditions of the experiment: Among the draws that show a red front, the red-and-red card appears twice as often as the red-and-black one, because it meets the precondition of the experiment – the visible side is red – with both of its sides, whereas the red-and-black card meets it with only one.
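Again, brute force settles the matter. The following snippet – a quick sketch for illustration – draws a random card a million times, flips it to a random side, keeps only the draws that show a red front, and counts how often the back is red as well; the ratio converges on 2/3.

#!/usr/bin/perl
use strict;
use warnings;

# Simulation sketch of the three-card experiment;
# each card is a [front, back] pair
my @cards = ( [ "B", "B" ], [ "B", "R" ], [ "R", "R" ] );

my ( $red_front, $red_back ) = ( 0, 0 );

for ( 1 .. 1_000_000 ) {
    my @card = @{ $cards[ int rand 3 ] };
    @card = reverse @card if rand() < 0.5;    # random side faces up

    next if $card[0] ne "R";    # precondition: front is red
    $red_front++;
    $red_back++ if $card[1] eq "R";
}

printf "P(back red|front red) = %.3f\n", $red_back / $red_front;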
More than 250 years ago, the mathematician Thomas Bayes set out to capture such intuitively error-prone problems in an immutable formula. Bayes' theorem [2] describes, in its "diachronic interpretation," how the probabilities of hypotheses change over time as you encounter new data in an experiment. The formula
P(H|D) = P(H) × P(D|H)/P(D)
defines the probability P of a hypothesis H, for example, "I have drawn the red-and-red card," under the condition of newly emerging data D, such as "The front of the drawn card is red" (Figure 3).

On the right-hand side of the equation you see P(H), the probability of the hypothesis before any new data become available. Bayes multiplies it by P(D|H), that is, the probability that the data are observed under the hypothesis. How likely is it, for example, that the person drawing the cards will actually see a red front if they have drawn the red-and-red card?
The denominator on the right side, P(D), is the probability of the available data, independent of any hypothesis. How likely is it that the candidate will see a red front after drawing any card?
Clever Bayes
All three possible hypotheses – one for each card that can be drawn – are obviously equally likely in this experiment. In other words, P(H), the probability that the red-red card is drawn, is equal to 1/3, because it is drawn in one third of all cases. Under this hypothesis, the flip side of the drawn red-red card is trivially also red in exactly 100 percent of all cases; that is, P(D|H) = 1.
Regardless of the hypothesis of one drawn card, the probability of seeing a red card surface after drawing is 50 percent, that is, P(D) = 1/2. After all, there are six card sides in the hat, three of which are red and three black. If you substitute this into the Bayesian formula, the result for P(H|D) is thus (1/3 × 1)/(1/2) = 2/3. Thus, the formula predicts the experimentally demonstrable correct result of 66 percent and contradicts our intuition.
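If you distrust the algebra, a few lines of Perl reproduce the calculation; the values plugged in below are exactly those derived above.

#!/usr/bin/perl
use strict;
use warnings;

my $p_h   = 1/3;    # P(H): prior for the red-red card
my $p_d_h = 1;      # P(D|H): red front, given the red-red card
my $p_d   = 1/2;    # P(D): red front over all draws (3 of 6 sides)

printf "P(H|D) = %.4f\n", $p_h * $p_d_h / $p_d;    # prints 0.6667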
Probability, Discretely Programmed
Think Bayes [3], which was published about six months ago, not only explains the Bayesian formula but also describes a numerical method that lends itself to experiments with short software programs. To do this, you store the hypotheses, along with their probabilities, as a discrete distribution – in a Python dictionary or a Perl hash.
As additional data become available, the stored values are multiplied by the conditional probability of the incoming data, under the assumption that the respective hypothesis is true. Afterward, the program normalizes all values so that they add up to 1, or 100 percent.
The result is the probability of each hypothesis in the light of the incoming data. This process – and especially the final normalization step – can only be used, however, if:
- no more than one of the hypotheses is true (they are mutually exclusive), and
- there are no other options; that is, at least one of the hypotheses is true (they are collectively exhaustive).
Listing 1 [4] shows the implementation of the cards script, which prints the correct solution, 0.666. It uses the Distrib.pm module (Listing 2), which provides auxiliary functions for computing the statistical distribution. The probabilities for the hypotheses of drawing different cards (BB, BR, RR for black-black, black-red, red-red) are fed to the module in lines 7-9, each with a value of 1/3.
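The listing itself is not reproduced here, but based on the description, the cards script might look like the following sketch. The constructor and the method names hypo() and mult() are assumptions; only prob() appears in the text, and the line numbers of this sketch do not match the original listing.

#!/usr/bin/perl
use strict;
use warnings;
use Distrib;

my $d = Distrib->new();

  # priors: each card is drawn with probability 1/3
  # (hypo() is an assumed method name)
$d->hypo( "BB", 1/3 );
$d->hypo( "BR", 1/3 );
$d->hypo( "RR", 1/3 );

  # likelihood of a red front under each hypothesis
  # (mult() is an assumed method name)
$d->mult( "BB", 0 );
$d->mult( "BR", 0.5 );
$d->mult( "RR", 1 );

  # normalized posterior for the red-red card: 0.666...
print $d->prob( "RR" ), "\n";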
Lines 11-13 of Listing 1 define the incoming data (i.e., the probabilities that the front sides of the drawn cards are red) for each hypothesis and multiply the stored hypothesis values by them. Thus, the probability of the subject seeing a red front on the black-black card is zero. The probability is 50 percent for the black-red card, depending on which way the card faces coming out of the hat. The red-red card shows red in 100 percent of all cases, which is why line 13 sets the multiplier to a value of 1.

Finally, the call to the prob("RR") method returns the normalized probability of the hypothesis "red-red card," which is 2/3.
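The module behind it can be implemented in a few lines. The following is a minimal sketch that stores the hypothesis weights in a hash and normalizes on the fly when prob() is called; again, everything except prob() is an assumption, not the original API.

package Distrib;
use strict;
use warnings;

# Minimal sketch of a discrete distribution module; method
# names other than prob() are assumptions, not the original API
sub new {
    my ($class) = @_;
    return bless { dist => {} }, $class;
}

  # set the prior weight of a hypothesis
sub hypo {
    my ( $self, $name, $prob ) = @_;
    $self->{dist}{$name} = $prob;
}

  # multiply a hypothesis' weight by the likelihood of the data
sub mult {
    my ( $self, $name, $factor ) = @_;
    $self->{dist}{$name} *= $factor;
}

  # normalized probability: weight divided by sum of all weights
sub prob {
    my ( $self, $name ) = @_;
    my $total = 0;
    $total += $_ for values %{ $self->{dist} };
    return $self->{dist}{$name} / $total;
}

1;

Instead of rescaling all stored values after each update, this sketch simply divides by the running total inside prob(); the result is the same as the explicit normalization step described above.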