Car NOT Goat

Presenter in a Bind

The presenter then has to open another door in the for loop starting at line 28 of Listing 1, but must not reveal the main prize. The object stores this door's index in the revealed attribute; the remaining third door's index then ends up in the alternate attribute. The for loop starting at line 50 iterates over 1,000 game shows, and the print() statement on line 55 outputs their results in CSV format: one line per show, giving the indices of the candidate's door, the presenter's door, the remaining door, and the winning door.
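Listing 1 itself is not reproduced here, but the show logic described above can be sketched in a few lines of plain Python. The function and variable names (play_show, revealed, alternate) are chosen for illustration and only loosely follow the attributes mentioned in the text:

```python
#!/usr/bin/env python3
# Minimal sketch of one game show round, assuming the rules described
# in the text; names like play_show() are illustrative, not Listing 1's.
import random

def play_show():
    doors = [0, 1, 2]
    prize = random.choice(doors)   # door hiding the car
    pick = random.choice(doors)    # candidate's first choice
    # presenter opens a door that is neither the pick nor the prize
    revealed = random.choice(
        [d for d in doors if d != pick and d != prize])
    # the remaining third door the candidate could switch to
    alternate = (set(doors) - {pick, revealed}).pop()
    return pick, revealed, alternate, prize

# CSV output, one line per show, as described in the text
print("pick,presenter,alternate,prize")
for _ in range(5):
    print(",".join(map(str, play_show())))
```

Running the loop 1,000 times instead of five reproduces the shows.csv training set the article's listings rely on.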

One-Hot Encoding

If the AI apprentice employs a neural network and, during the training phase, feeds in individual shows as 3-tuples, each paired with a one-element result tuple, the results will be disappointing, because door indices are not meaningful as numerical values; they stand for categories, with each door representing a different category. The AI expert therefore transforms such datasets into categories before the training run using one-hot encoding. If a dataset provides values for n categories, the one-hot encoder shapes individual records as n-tuples, each of which has one element set to 1 and the remaining elements set to 0.

Figure 3 shows an example of how an input series like [2,1,2,0,1,0] is converted into six one-hot-encoded matrix rows. The code in Listing 2 uses the to_categorical() function from the np_utils module of the keras.utils package to accomplish this. To return from one-hot encoding back to the original value later, use the argmax() method provided by numpy arrays.

Listing 2

onehot

#!/usr/bin/env python3
from keras.utils import np_utils
import numpy

X = numpy.array([2, 1, 2, 0, 1, 0])
print("org=", X)
# convert each value into a one-hot row vector
onehot = np_utils.to_categorical(X)
print("onehot=", onehot)
# argmax(1) recovers the original values from the rows
a = onehot.argmax(1)
print("back=", a)
Figure 3: One-hot encoding converts values into categories, setting exactly one element per tuple to 1.

Machine Learning

Armed with the input values, the three-layer neural network defined in Listing 3 can now be fed with learning data. Important: The network also encodes the output values according to the one-hot method and therefore needs not just a single neuron at its output, but three of them, because both the training values and, later on, the predicted values are 3-tuples, each indicating the winning door as a 1 in a sea of zeros.

Listing 3

learn

01 #!/usr/bin/env python3
02 from keras.models import Sequential
03 from keras.layers import Dense
04 from keras.utils import np_utils
05 import numpy
06
07 data = numpy.loadtxt("shows.csv",
08         delimiter=",", skiprows=1)
09 X = data[:,0:3]
10 Y = data[:,3]
11
12 categories=np_utils.to_categorical(Y)
13
14 model = Sequential()
15 model.add(Dense(10, input_dim=3,
16                 activation='relu'))
17 model.add(Dense(3, activation='relu'))
18 model.add(Dense(3, activation='sigmoid'))
19
20 model.compile(loss='binary_crossentropy',
21               optimizer='adam')
22 model.fit(X, categories, epochs=100,
23           batch_size=100, verbose=0)
24
25 test_data = numpy.array(
26   [[0,1,2], [0,2,1], [1,0,2],
27    [1,2,0], [2,0,1], [2,1,0]
28   ])
29
30 pred = model.predict(test_data)
31
32 for (idx,row) in enumerate(test_data):
33     print(row, pred[idx].argmax())

Listing 3 then reads the training data that Listing 1 saved in shows.csv (line 7). The first three elements of each line are the input data of the network (candidate door, presenter door, alternative door), and the last item indicates the index of the door hiding the main prize.
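The column split in lines 9 and 10 of Listing 3 is plain NumPy slicing; a toy example with two hypothetical CSV rows shows how the features and the target separate:

```python
import numpy as np

# two toy rows: candidate door, presenter door, alternate door, winner
data = np.array([[0, 1, 2, 0],
                 [1, 0, 2, 1]])

X = data[:, 0:3]  # first three columns: the network's input 3-tuples
Y = data[:, 3]    # last column: index of the winning door

print(X.shape)  # (2, 3)
print(Y)        # [0 1]
```

The same two slice expressions work unchanged on the full 1,000-row array that numpy.loadtxt() returns.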

Line 12 transforms the desired output values into categories in one-hot encoding; lines 14 to 18 build the neural network with an entry layer, a hidden layer, and an output layer. All layers are of the Dense type and thus fully connected in a brain-like style: each neuron links to all neurons of the adjacent layers. The Sequential class of the Keras package holds the layers together. Line 20 compiles the neural network model; Listing 3 specifies binary_crossentropy as the loss function and selects the adam algorithm as the optimizer, a combination well suited to categorization problems.

In the three-layer model, 10 neurons receive the input data in the input layer and input_dim=3 sets the data width to 3, since it consists of 3-tuples (values for three doors). The middle layer has three neurons, and the output layer also has three. The latter is, as mentioned above, the one-hot encoding of the results as categories.
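Because the output layer emits one score per door, the predicted winner is simply the position of the largest score in each row, recovered with argmax() as mentioned earlier. The raw scores below are hypothetical values standing in for what model.predict() might return:

```python
import numpy as np

# hypothetical raw network outputs for two test shows,
# one row of three door scores per show
pred = np.array([[0.1, 0.2, 0.9],
                 [0.8, 0.1, 0.05]])

# argmax along axis 1 picks the highest-scoring door per row,
# undoing the one-hot encoding of the result categories
winners = pred.argmax(axis=1)
print(winners)  # [2 0]
```

This is the same argmax() trick that Listing 2 uses to map one-hot rows back to their original values.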
