Getting Started with Analytics

Data Analytics for the Newbie

The huge emphasis on data and data analytics in the business world and the IT job market might have you thinking about brushing up on your database skills. And what better platform than Linux to obtain powerful and free tools to flex and train your data muscles?

This month, I am going to highlight some of the best tools at your disposal for data storage, data analysis, and business intelligence: all free and open source.

Databases

Before you can start analyzing data, you actually need to have some data on hand. That means a database – preferably a relational one.

If you had your sights set on a non-relational, NoSQL database solution, you might want to step back and catch your breath. NoSQL databases are unique because of their independence from the Structured Query Language (SQL) found in relational databases. Relational databases all use SQL as the domain-specific language for ad hoc queries, whereas non-relational databases have no such standard query language, so they can use whatever they want – including SQL. Non-relational databases also have their own APIs designed for maximum scalability and flexibility.

NoSQL databases are typically designed to excel in two specific areas: speed and scalability. But for the purposes of learning about data concepts and analysis, such super-powerful tools are pretty much overkill. In other words, you need to walk before you can run.

However, if you are just starting out, don’t worry – there’s plenty to learn.

By far the most common open source database is MySQL, which is now owned by Oracle. MySQL is ubiquitous on the Internet: the standard web server software run by over 65 percent of all websites is typically deployed as part of what’s known as a LAMP stack – the “M” of which stands for MySQL. MySQL epitomizes simplicity, too: skilled data jockeys can use the command line to construct databases and queries with blinding speed.

If the command line seems a bit daunting, I recommend you install a copy of MySQL Workbench, which is available from the MySQL site free of charge. Workbench enables you to manage MySQL databases visually, making it much easier to explore your data.

Depending on how you feel about Oracle, you might not be too thrilled to give even the venerable MySQL a try. Fortunately, you have some robust alternatives:

* MariaDB is the community fork of MySQL created and led by Monty Widenius, the original author of MySQL. MariaDB is designed to be highly compatible with MySQL, so you can swap it in as a drop-in replacement for MySQL if you want.

* Drizzle, another community fork, takes the opposite approach: whereas MariaDB has tried to remain true to MySQL, Drizzle has set out to improve on MySQL radically. Working with Drizzle is a bit different from working with MySQL, but not so different that you couldn’t migrate back and forth.

To install your own instance of a database, I recommend using your distribution’s packages first, because they are tested and all dependencies are handled for you. If your distro doesn’t carry these database packages in its repositories, updated packages are available at the sites linked above.

Naturally, you’re going to want to populate your database with some data – or at least some sample data to experiment with. The MySQL developers are well aware of this and have posted some handy sample databases to get you started. My personal favorite, though, is the Lahman Baseball Database. Even if you’re not a baseball fan, there’s plenty of rich data here to keep you analyzing for months.

To load the database using MySQL Workbench, click the New Server Instance link and connect to localhost. Once that operation is complete, click the Manage Import/Export link to open the Import/Export MySQL Data Wizard, point it at the *.sql file you downloaded (and extracted, if necessary), and import the data.

Once you have the database loaded, you can use the Workbench application to manage the tables and queries as you need.

Getting Down to Business

Once you have your data store set up, you can start analyzing the data using SQL queries right from the get-go. But for a more user-friendly experience, I recommend you get your hands on a data analysis tool designed specifically for this kind of work.

Currently, the tool I recommend you try is the open source community edition of Pentaho, a very robust business analysis tool that can plug into all kinds of databases – from MySQL all the way up to Hadoop clusters. Once connected to a data source, all you need to do is manipulate columns as needed to create reports, graphs, and other analytical views of your data.

Installing Pentaho isn’t terribly difficult, but you need to download and install a working instance of Java first. Ubuntu users might be tempted to use OpenJDK 6 instead of the old Sun version that no longer comes packaged with Ubuntu. Get the Sun version, because things got a little flaky on me when I tried the OpenJDK install. Pentaho can be run as a local or remote server.

Once installed, you can use the sample data source to build reports, data transformations, and graphs. A great tutorial for the Pentaho Community Edition is available that walks you through some of the basics with the sample data, and I suggest you give it a look.

R You Ready?

Pentaho is an excellent platform for pulling together business intelligence (BI), and for most enterprise uses, it might be enough to fulfill your needs (particularly since you can upgrade to a professional or enterprise edition).

However, if you are leaning more toward hard-core number crunching and analysis, you may want to try a more powerful tool, and few are more powerful than R.

R is kind of an odd duck: If you were to ask whether it is a statistical tool or a programming language, the answer would be a true – if confusing – “yes.”

The nickel definition of R is that it is a statistical language for performing high-level calculations. But it can also be used to build whole applications that run in the R environment.
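
To give you a taste of that dual nature, here is a minimal sketch that uses the mtcars dataset built into every R installation; the hp_per_ton function is purely an illustration I made up for this example:

data(mtcars)                                  # sample dataset that ships with base R

# Statistical tool: descriptive statistics and a quick linear model
summary(mtcars$mpg)                           # five-number summary plus mean of fuel economy
fit <- lm(mpg ~ wt + hp, data = mtcars)       # regress mpg on weight and horsepower
summary(fit)                                  # coefficients, R-squared, p-values

# Programming language: define and apply your own function
hp_per_ton <- function(hp, wt) hp / (wt / 2)  # wt is measured in 1,000-lb units
mtcars$hp_per_ton <- hp_per_ton(mtcars$hp, mtcars$wt)
head(mtcars[order(-mtcars$hp_per_ton), c("hp", "wt", "hp_per_ton")])

Typed line by line at the R prompt, that gives you descriptive statistics, a fitted regression model, and a new derived column in just a few keystrokes.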

R might seem insanely high end for someone just getting started in data analytics. Indeed, you could use any of a myriad of BI and analytics tools and never have to touch R. But R has three undeniable advantages: it’s free, it gets you intimately familiar with your data, and it can be applied to just about any source of data. That means you can use it to work with anything from a small relational database right up to a NoSQL big data cluster.
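
As a rough sketch of that flexibility, here is how you might pull the Lahman data you loaded into MySQL earlier straight into R. This assumes the DBI and RMySQL packages are installed from CRAN; the database name and credentials are placeholders for your own setup, and the Batting table and HR column follow the standard Lahman schema, so adjust the query if your copy differs:

library(DBI)                     # generic database interface; RMySQL supplies the MySQL driver

con <- dbConnect(RMySQL::MySQL(),
                 dbname   = "lahman",        # whatever you named the imported database
                 host     = "localhost",
                 user     = "youruser",      # placeholder credentials
                 password = "yourpassword")

# Ad hoc SQL straight from R: career home run leaders
leaders <- dbGetQuery(con, "
    SELECT playerID, SUM(HR) AS career_hr
    FROM Batting
    GROUP BY playerID
    ORDER BY career_hr DESC
    LIMIT 10")

print(leaders)                   # results arrive as an ordinary R data frame
dbDisconnect(con)

Because the query result comes back as a plain data frame, everything else in R – summary(), plotting, modeling – works on it immediately.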

Installing R on Linux is pretty simple: just visit the Comprehensive R Archive Network’s download page for Linux and pull the appropriate package. (SUSE Linux users can even use one-click install.)

Once you install R, I recommend you take the time to install R Commander, a nice little GUI for R that runs within the R environment. Once both are installed by way of your favorite package manager, start R with the R command and then, within R, enter:

library("Rcmdr")

Although a bit dated, a solid Journal of Statistical Software article from 2005 walks you through how to use R and the sample data within to get started.

If you want to learn how to use R from the R console, Carnegie Mellon University hosts a nice basic R tutorial as well.

Hitting the Books

If you’re not familiar with data analysis, don’t worry; plenty of resources are out there to help you start picking it up.

O’Reilly seems to be the go-to publisher right now for data analysis resources, with a whole host of useful books that I have personally been diving into to get my own skills up to speed.

* Head First Data Analysis: A Learner’s Guide to Big Numbers, Statistics, and Good Decisions (Michael Milton). Part of O’Reilly’s Head First imprint, this is an excellent introductory text on the concepts of data analysis and the tools you can use to start doing the work. Some of the software is Windows oriented, but you can implement many of the tasks with Linux equivalents.

* 25 Recipes for Getting Started with R (Paul Teetor). This cookbook-style book is actually a set of excerpts from Teetor’s larger R Cookbook. I haven’t gotten to the more comprehensive book yet, but 25 Recipes offers a nice taste of getting started with R.

* Data Analysis with Open Source Tools (Philipp Janert). If you have any kind of programming skill, this is a very good book with which to start. As a non-programmer, I find it a bit daunting, but as I ease into more data sets, I think the high-end data concepts in this book are going to be very helpful.

Wrapping Up

Many avenues are open when starting the data analysis journey, and I have mentioned just a few. Whichever route you take, getting started on Linux is an easy first step.
