AI learning with decision trees

Who Drove?

As a practical AI application, let's look at a decision tree based on car driving data to determine who operated a vehicle. As in the previous Programming Snapshot [5], I prepared data collected by my Automatic adapters [6] in my two cars, which recorded when, where, and how fast the vehicles were moving.

The only thing the adapter doesn't know is who was driving, and since my wife and I alternately drive both cars, I'd like to teach an algorithm to guess the driver by looking at the trip data. To get the project off the ground with some sample data to learn from, I added the driver abbreviation M or A to selected trips in Listing 2 where I knew who was driving at the time.

Listing 2



Each line in the CSV file in Listing 2 represents a recorded drive; the second-to-last column, vehicle, indicates whether the commuter mule, my Honda Fit (1), or my sportier 1998 Acura Integra (2) was used. My wife rarely drives the latter, but she often drives the mule to work during the week (dow = 1-5), while I tend to take the Integra for a spin on the weekends (dow = 6-7).
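The real trip records are in Listing 2; purely for illustration, a few invented rows in that layout (column names as described in this article) might look like this:

```python
import io
import pandas as pd

# Invented sample rows in the Listing 2 layout; the last column (driver)
# is only filled in for trips where the driver was known
csv_data = """dow,miles,brakes,accels,speed,vehicle,driver
1,12.4,2,1,3,1,A
6,35.0,0,4,8,2,M
3,11.8,1,0,2,1,A
7,42.5,1,5,9,2,M
"""

df = pd.read_csv(io.StringIO(csv_data))
print(df.shape)  # → (4, 7)
```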

Whether in the Honda or the Acura, the speed column provided by the Automatic adapter seems to award me more points as a driver than my wife, for reasons that are beyond me. The brake and acceleration ratings (brakes and accels), on the other hand, seem to be evenly distributed. Are these criteria enough to teach the system to correctly guess the driver on newly recorded trips with unknown drivers?

Driving Experience

Listing 3 loads the CSV file into a Python Pandas dataframe. The Y list picks up the entries that were manually added to the driver column (M or A) as the desired results. X contains the two-dimensional list defined in line 9, in which the trip data is arranged in rows with the day of the week (dow: 1-7), the number of miles traveled (miles), hard braking and acceleration counters (brakes and accels), a speed rating (speed), and the vehicle ID (vehicle).

Listing 3


As in the more academic case discussed previously, Listing 3 also uses the sklearn class DecisionTreeClassifier; its fit() method processes the training data so that the model can later predict new results with predict().
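A minimal sketch of this training step, assuming the column names described above and invented sample values (the actual code is in Listing 3):

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Invented training trips in the Listing 2 layout (not the article's data)
labeled = pd.DataFrame(
    [[1, 12.4, 2, 1, 3, 1, "A"],
     [6, 35.0, 0, 4, 8, 2, "M"],
     [3, 11.8, 1, 0, 2, 1, "A"],
     [7, 42.5, 1, 5, 9, 2, "M"]],
    columns=["dow", "miles", "brakes", "accels", "speed", "vehicle", "driver"])

features = ["dow", "miles", "brakes", "accels", "speed", "vehicle"]
X = labeled[features].values  # one row of trip data per drive
Y = labeled["driver"].values  # desired results: "M" or "A"

clf = DecisionTreeClassifier(random_state=0)
clf.fit(X, Y)                 # training: derive decision rules from X/Y

# A short weekday trip in the Honda; every learned split points to "A"
print(clf.predict([[2, 10.0, 1, 0, 2, 1]]))  # → ['A']
```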

Somewhat surprisingly, the model guesses pretty well for the new trip records shown in Figure 4, even when some columns, like the mileage, contain values it has never seen before. For the same car (1: Honda Fit), the algorithm assigns the trip to M solely because of the driver's higher speed rating, and to A in the other case. During training, the decision tree evidently determined that this is the key feature distinguishing the two drivers.
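This effect can be reproduced with a small synthetic demonstration (invented values, not the article's data): if the speed rating is the only feature that separates the drivers in the training rows, the tree has to split on it, and two otherwise identical trips land on different sides:

```python
from sklearn.tree import DecisionTreeClassifier

# Synthetic training rows where only the speed rating separates the
# drivers; columns: dow, miles, brakes, accels, speed, vehicle
X_train = [[1, 12, 1, 1, 2, 1],   # A: low speed rating
           [3, 14, 2, 0, 3, 1],   # A
           [2, 11, 1, 1, 8, 1],   # M: high speed rating
           [5, 13, 0, 2, 9, 1]]   # M
Y_train = ["A", "A", "M", "M"]

clf = DecisionTreeClassifier(random_state=0)
clf.fit(X_train, Y_train)

# Same car (vehicle=1), unseen mileage; only the speed rating differs
print(clf.predict([[4, 99, 1, 1, 7, 1]]))  # → ['M']
print(clf.predict([[4, 99, 1, 1, 2, 1]]))  # → ['A']
```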

Figure 4: After the learning phase, the decision tree identifies the driver from their driving behavior.

For a first attempt, the process produces very good results; live data collected later on will show how reliable they really are. If improvements are needed, more training data will help produce a more accurate decision tree.


  1. Vo.T.H, Phuong, Martin Czygan, Ashish Kumar, and Kirthi Raman. Python: Data Analytics and Visualization. Packt Publishing, 2017
  2. Listings for this article:
  3. AlphaGo:
  4. Joshi, Prateek. Artificial Intelligence with Python. Packt Publishing, 2017
  5. "Programming Snapshot – Mileage AI" by Mike Schilli, Linux Magazine, issue 203, October 2017, pg. 56:
  6. "Programming Snapshot – Driving Data" by Mike Schilli, Linux Magazine, issue 202, September 2017, pg. 50:
