Car NOT Goat

Acid Test

The training phase starts in line 22 by calling model.fit(). It specifies 100 iterations (epochs) and a batch size of 100, which tells the neural network to adjust its internal weights only after it has collected the information from 100 training samples. Starting in line 25, the script visualizes whether the training was successful: For all possible door combinations, line 30 calls the predict() method to pick a door according to what the network has learned so far. Lo and behold, Figure 4 shows that the computer selects the alternative door every time, letting the candidate switch and thereby maximize their chances of winning, just as the mathematical proof prescribes.

Figure 4: The network has learned that the alternative door offers the most lucrative chance of winning.
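
The article's listing is not reproduced here, but a minimal self-contained sketch of the same approach might look like the following, assuming the inputs encode the candidate's first pick and the host's opened door as one-hot vectors and the label is the winning door. The simulate_games() helper and the layer sizes are assumptions for illustration, not the article's code:

# Sketch (assumed encoding, not the original listing): a small Keras
# network learns from simulated Monty Hall games which door wins.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

def simulate_games(n):
    """Simulate n games; inputs are one-hot pick/opened, label is prize door."""
    X, y = [], []
    for _ in range(n):
        prize = np.random.randint(3)   # door hiding the car
        pick = np.random.randint(3)    # candidate's first pick
        # Host opens a goat door that is neither the pick nor the prize
        opened = next(d for d in range(3) if d != pick and d != prize)
        vec = np.zeros(6)
        vec[pick] = 1                  # one-hot: first pick
        vec[3 + opened] = 1            # one-hot: door opened by the host
        X.append(vec)
        y.append(np.eye(3)[prize])     # one-hot: winning door
    return np.array(X), np.array(y)

X, y = simulate_games(10000)

model = Sequential([
    Dense(16, activation="relu", input_shape=(6,)),
    Dense(3, activation="softmax"),    # probability estimate per door
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

# As in the article: 100 epochs, weights updated after every 100 samples
model.fit(X, y, epochs=100, batch_size=100, verbose=0)

# For each pick/opened combination, predict() reveals the preferred door
for pick in range(3):
    for opened in range(3):
        if opened == pick:
            continue
        vec = np.zeros((1, 6))
        vec[0, pick] = 1
        vec[0, 3 + opened] = 1
        choice = int(np.argmax(model.predict(vec, verbose=0)))
        print(f"pick {pick}, host opens {opened} -> choose door {choice}")

In every combination, the argmax lands on the remaining third door, which is exactly the switching strategy shown in Figure 4.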

This is remarkable, because the network knows nothing about the underlying mathematics; it learns only from empirically obtained data. The input data is even somewhat ambiguous, because switching leads to success in only two thirds of all cases. If you forge the input data and hide the prize behind the alternative door every time, the network's internal success metric climbs all the way to 100 percent, and the network becomes absolutely sure of its choice.
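
Continuing the sketch above, forging the data only requires changing how the labels are generated; the forge_games() helper below is hypothetical:

# Forged data: the car always sits behind the alternative door, so
# switching wins every single game and the loss drops to (near) zero.
def forge_games(n):
    X, y = [], []
    for _ in range(n):
        pick = np.random.randint(3)
        opened = np.random.choice([d for d in range(3) if d != pick])
        alternate = next(d for d in range(3) if d not in (pick, opened))
        vec = np.zeros(6)
        vec[pick] = 1
        vec[3 + opened] = 1
        X.append(vec)
        y.append(np.eye(3)[alternate])  # prize: always the alternative door
    return np.array(X), np.array(y)

# Continue training the network on the forged games
X_forged, y_forged = forge_games(10000)
model.fit(X_forged, y_forged, epochs=100, batch_size=100, verbose=0)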

Even if we feed real-world data into the network, in which the candidate loses in one third of all cases despite switching, the network optimizes its approach and ends up switching most of the time, with the occasional outlier. If you vary the network's parameters, for example, the number of epochs or the number of neurons per layer, the results may vary as well, as the sketch below illustrates. As always with these kinds of problems, training an artificially intelligent system successfully is as much art as science, with plenty of wiggle room.
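
A hypothetical parameter sweep along those lines, reusing the simulated data from the first sketch, wraps model construction in a small helper and loops over a few settings:

# Hypothetical experiment: how stable is the learned strategy when the
# layer width and the number of epochs change?
def build_model(neurons):
    m = Sequential([
        Dense(neurons, activation="relu", input_shape=(6,)),
        Dense(3, activation="softmax"),
    ])
    m.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
    return m

for neurons in (4, 16, 64):
    for epochs in (10, 100):
        m = build_model(neurons)
        m.fit(X, y, epochs=epochs, batch_size=100, verbose=0)
        loss, acc = m.evaluate(X, y, verbose=0)
        print(f"{neurons:2d} neurons, {epochs:3d} epochs: accuracy {acc:.2f}")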

The Author

Mike Schilli works as a software engineer in the San Francisco Bay Area, California. Each month in his column, which has been running since 1997, he researches practical applications of various programming languages. He will gladly answer any questions sent to mschilli@perlmeister.com.
