Getting started with the R data analysis language

Number Game

Author(s): Rene Brunner

The R programming language is a universal tool for data analysis and machine learning.

The R language is one of the best solutions for statistical data analysis, and it is ideal for tasks such as data science and machine learning. Created by Ross Ihaka and Robert Gentleman at the University of Auckland in 1991, R is a GNU project similar to the S language, which was developed at Bell Labs in the 1970s.

R is an interpreted language. Input is either executed directly in the command-line interface or collected in scripts. The R language is open source and completely free. R, which runs on Linux, Windows, and macOS, has a large and active community that is constantly creating new, customized modules.

R was developed for statistics, and it comes with fast algorithms that let users analyze large datasets. There is a free and very well-integrated development environment named RStudio, as well as an excellent help system that is available in many languages.

The R language works with a library system, which makes it easy to install extensions as prebuilt packages. It is also very easy to integrate R with other well-known software tools, for example Tableau, SQL, and MS Excel. All of the libraries are available from a worldwide repository, the Comprehensive R Archive Network (CRAN) [1]. The repository contains over 10,000 packages for R, as well as important updates and the R source code.

The R language includes a variety of functions for managing data, creating and customizing data structures and types, and other tasks. R also comes with analysis functions, descriptive statistics, mathematical set and matrix operations, and higher-order functions such as Map() and Reduce(). In addition, R supports object-oriented programming with classes, methods, inheritance, and polymorphism.
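To get a feel for this functional style, here is a minimal sketch that applies base R's Map() and Reduce() to two small vectors:

# Multiply two vectors element-wise, then fold the results into a sum
squares <- Map(function(a, b) a * b, 1:4, 1:4)  # list(1, 4, 9, 16)
total <- Reduce(`+`, squares)                   # 30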

Installing R

You can download R from the CRAN website. The CRAN site also has installation instructions for various Linux distributions. It is a good idea to also use an IDE. In this article, I will use RStudio, which is the most popular IDE for R.

RStudio is available in two formats [2]. RStudio Desktop is a normal desktop application, and RStudio Server runs on a remote web server that gives users access to RStudio via a web browser. I used RStudio Desktop for the examples in this article.

When you launch RStudio Desktop after the install, you are taken to a four-panel view (Figure 1). On the left is an editor, where you can create an R script, and a console that lets you enter queries and display the output directly. Top right, the IDE shows you the environment variables and the history of executed commands. The visualizations (plots) are output at the bottom right. This is also where you can add packages and access the extensive help feature.

Figure 1: The main window of the RStudio IDE is divided into panels.

First Commands

When you type a command at the command prompt and press Enter, RStudio immediately executes that command and displays the results. The IDE prefixes the output with [1], which stands for the index of the first value in the result. Some commands return more than one value, and the results can fill several lines; each line then starts with the index of its first value.
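For example, a short expression typed at the prompt is echoed immediately, with the index prefix in front of the result:

> sqrt(2)
[1] 1.414214
> c(10, 20, 30)
[1] 10 20 30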

To get started, it is a good idea to take a look at R's data types and data structures. More advanced applications build on this knowledge; if you skip over it, you might be frustrated later. Plan some time for the learning curve. The basic data types in R are summarized in Table 1. Table 2 summarizes some R data structures.

Table 1

Data Types in R

Type                     Designation   Examples
Logical values           logical       TRUE and FALSE
Integers                 integer       1, 100, 101
Floating-point numbers   numeric       5.1, 100.1
Strings                  character     "a", "abc", "house"

Table 2

Data Structures in R

Vector: The basic data structure in R. A vector consists of a certain number of components of the same data type.

List: A list contains elements of different types, such as numbers, strings, vectors, matrices, or functions.

Matrix: Matrices do not form a separate object class in R but consist of a vector with added dimensions. The elements are arranged in a two-dimensional layout and have rows and columns.

Data frame: One of the most important data structures in R. This is a table in which each column contains the values of one variable and each row contains one set of values from each column.

Array: An array stores data in more than two dimensions. An array with the dimensions (2, 3, 4) creates four rectangular matrices, each with two rows and three columns.
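As a quick reference, the following sketch creates one instance of each structure from Table 2:

v <- c(1, 2, 3)                                  # vector: all elements share one type
l <- list(1, "a", TRUE)                          # list: mixed types are fine
m <- matrix(1:6, nrow = 2)                       # matrix: a vector with two dimensions
df <- data.frame(x = 1:3, y = c("a", "b", "c"))  # data frame: one variable per column
a <- array(1:24, dim = c(2, 3, 4))               # array: more than two dimensions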

To create an initial graph, you first need to define two vectors, x and y, as shown in the first two lines of Listing 1. The c() function stands for concatenate, but you could also think of it as collect or combine. You then pass the two vectors to the plot() function (last line of Listing 1); the col parameter defines the color of the points in the output. Figure 2 shows the results.

Listing 1

First Chart

x <- c(1, 3, 5, 8, 12)   # x coordinates
y <- c(1, 2, 2, 4, 6)    # y coordinates
plot(x, y, col = "red")  # scatter plot with red points
Figure 2: An initial, very simple chart in R. The coordinates of the data points were passed in as vectors.

Installing Packages

Each R package is hosted on CRAN, where R itself is also available, but you do not need to visit the website to download an R package. Instead, you can install packages directly at the R command line. The first thing you will want to do is fetch a library for visualizations: Enter the install.packages("ggplot2") command at the console prompt. Note that installing packages from source requires a working C compiler.

Setting up a package does not make its features available in R yet – it just puts the files on your storage medium. To use the package, you need to load it into the R session with the library("ggplot2") command. After a restart of R, the library is no longer active, and you need to load it again. Newcomers tend to overlook this step, which often leads to time-consuming troubleshooting.
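The complete sequence, which you will repeat for every additional package, therefore looks like this:

# Install once; R downloads the package from CRAN to disk:
install.packages("ggplot2")
# Load the package in every new R session before using it:
library("ggplot2")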

RStudio Scripts

A script is a plain text file in which you store the R code. You can open a script file in RStudio via the File menu.

RStudio has many built-in features that make working with scripts easier. First, you can run a single line of code by clicking the Run button or pressing Ctrl+Enter; R executes the line of code in which the cursor is located. If you highlight a complete section, R executes all of the highlighted code. Alternatively, you can run the entire script by clicking the Source button.

Data Analysis

A typical process in data analysis involves a series of phases. The first step in any data science project is to gather the right data from various internal and external sources. In practice, this step is often underestimated, which leads to problems with data protection, security, or technical access to interfaces.

Data cleaning or data preparation is a critical step in data analysis. The data collected from various sources might be disorganized, incomplete, or incorrectly formatted. If the quality of the data is not good, the findings will not be of much use to you later on. Data preparation usually takes the most time in the data analysis process.

After cleaning up the data, you need to visualize the data for a better understanding. Visualization is usually followed by hypothesis testing. The objective is to identify patterns in the dataset and find important potential features through statistical analysis.

After you draw insights from the data, a further step typically follows: You will want to predict how the data will evolve in the future. Prediction models are used for this purpose. Historical data is divided into training and validation sets, and the model is trained with the training dataset. You then verify the trained model using the validation dataset and evaluate its accuracy and efficiency.

Data Visualization

R has powerful graphics packages that help with data visualization. These tools produce graphics in a variety of formats, which can also be inserted into documents of popular office suites. The formats include bar charts, pie charts, histograms, kernel density charts, line charts, box plots, heat maps, and word clouds.

To quickly generate a couple of plots using the previously installed ggplot2 package, first load the package and create two vectors of equal length. The first is a set of x-values; the second is a set of y-values, generated by squaring the values of the x vector. Finally, output the graph with qplot() (Listing 2).

Listing 2

Sample Graph

> library(ggplot2)
> x <- c(-1, -0.8, -0.6, -0.4, -0.2, 0, 0.2, 0.4, 0.6, 0.8, 1)
> y <- x^2
> qplot(x, y)

The scatter plot is one of the chart types most commonly used in data analysis; you can create a scatter plot with the plot(x, y) function. You can pass in other parameters, such as main for the chart title, xlab for the x-axis label, and ylab for the y-axis label. Listing 3 uses a dataset supplied with R that originates from the US magazine Motor Trend in 1974 and covers 10 aspects of 32 vehicle models, including the number of cylinders, vehicle weight, and gasoline consumption. Load the dataset by typing:

data(mtcars)

Listing 3

Vehicle Data Example

> plot(mtcars$wt, mtcars$mpg, main = "Scatter chart", xlab = "Weight (wt)", ylab = "Miles per gallon (mpg)",
    pch = 20, frame = FALSE)
> fit <- lm(mpg ~ wt, data=mtcars)
> abline(fit, col="red")

The head(mtcars) command displays the first six rows of the dataset, giving you a quick impression of its structure.

Use the abline() function to add a regression line to the graph (Figure 3). To do this, lm() first calculates the linear regression between the fuel economy and the weight, which shows that there is a relationship. This is a negative correlation: The lighter a vehicle is, the farther it travels on the same amount of gasoline. The graph says nothing about the strength of the relationship, but summary(fit) provides a variety of characteristic values for the fit. These include a fairly high R-squared value, a statistical measure of how close the data points are to the regression line.

Figure 3: The regression line illustrates the relationship between vehicle weight and fuel economy.
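If you only need the R-squared value, you can extract it directly from the summary object; for this fit it works out to roughly 0.75:

fit <- lm(mpg ~ wt, data = mtcars)  # linear model: mpg as a function of weight
summary(fit)$r.squared              # roughly 0.75 for the mtcars data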

Histograms visualize the distribution of a single variable. A histogram shows how often a certain measured value occurs or how many measured values fall within a certain interval. The qplot() command automatically creates a histogram if you pass in only one vector to plot. For example, qplot(x) creates a simple histogram from x <- c(1, 2, 2, 3, 3, 4, 4, 4).
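In runnable form, with ggplot2 loaded and an explicit bin width (the binwidth argument is optional; qplot() otherwise picks a default):

library(ggplot2)
x <- c(1, 2, 2, 3, 3, 4, 4, 4)
qplot(x, binwidth = 1)  # a single vector makes qplot() draw a histogram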

The box plot, also known as a whisker diagram, is another type of chart. A box plot is a standardized method of displaying the distribution of data based on a five-value summary: minimum, first quartile (Q1), median, third quartile (Q3), and maximum. In addition, a box plot highlights outliers and reveals whether the data points are symmetrical and how closely they cluster.

In R, you can generate a box plot with qplot(), for example, using the sample data from mtcars. To use the cyl column as a category, its values first need to be converted from numeric to categorical variables, which is what the factor() command does (Listing 4).

Listing 4

Box plots

> qplot(factor(cyl), mpg, data = mtcars, geom = "violin", color = factor(cyl), fill = factor(cyl))

Thanks to the special display form that the geom="violin" parameter sets here, you can see at first glance that, for example, the vast majority of eight-cylinder engines can travel around 15 miles on a gallon of fuel, whereas the more frugal four-cylinder engines manage between 20 and 35 miles with the same amount (Figure 4).

Figure 4: Miles per gallon for 4-, 6-, and 8-cylinder vehicles.
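If you prefer a classic box plot with whiskers over the violin variant, you only need to swap the geom in Listing 4:

# Same data and grouping as Listing 4, but drawn as a box plot
qplot(factor(cyl), mpg, data = mtcars, geom = "boxplot", fill = factor(cyl))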

Data Cleanup

Data cleanup examples are difficult to generalize, because the actions you need to take depend heavily on the individual dataset. But there are a number of fairly common actions. For example, you might need to rename cryptically labeled columns. The recommended approach is to first standardize the designations and then change the column names with the colnames() command, passing the index of the column you want to rename in square brackets. The index of a particular column can also be found automatically (Listing 5, first line). If you do not want to overwrite the column captions of the original mtcars dataset, first copy the data to a new data frame with df <- mtcars.

Listing 5

Data Cleanup

> colnames(mtcars)[colnames(mtcars) == 'cyl'] <- 'cylinders'
> without.na <- na.omit(mtcars)
> without.duplicates <- unique(mtcars)

If the records have empty fields, this can lead to errors. That's why it is a good idea to resolve this potential problem at the start of the cleanup. Depending on how often missing values occur, you can either fill them with estimated values (imputation) or delete the affected rows. The command in the second line of Listing 5 removes all rows that contain at least one missing value (NA or NaN).
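As a minimal sketch of the imputation approach, you could replace missing values in a numeric column with the column mean instead of deleting rows (mtcars happens to contain no missing values, so this is purely illustrative):

df <- mtcars
df$mpg[is.na(df$mpg)] <- mean(df$mpg, na.rm = TRUE)  # mean imputation for one column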

Records also often contain duplicates. If a duplicate is the result of a technical error in data retrieval or in the source system, you should first try to correct that error. R provides an easy way to clean up the dataset and assign the result to a new, clean data frame with the unique() command (Listing 5, last line).

Predictive Modeling

In reality, there are a variety of prediction models with a wide range of parameters that provide better or worse results depending on the requirements and data. For an example, I'll use a dataset for irises (the flowers) – one of the best-known datasets for machine learning examples.

As an algorithm, I use a decision tree to predict the iris species from certain properties, for example, the length (Petal.Length) and width (Petal.Width) of the petals. To do this, I first need to load the data, which ships with R (Listing 6, line 1).

Listing 6

Prediction with Iris Data

01 > data(iris)
02 > n <- nrow(iris)
03 > n_train <- round(.70 * n)
04 > set.seed(101)
05 > train_indices <- sample(1:n, n_train)
06 > iris_train <- iris[train_indices, ]
07 > iris_test <- iris[-train_indices, ]
08 > install.packages("rpart")
09 > install.packages("rpart.plot")
10 > library(rpart)
11 > library(rpart.plot)
12 > iris_model <- rpart(formula = Species ~ ., data = iris_train, method = "class")
13 > rpart.plot(iris_model, type = 4)

The next thing to do is to split the data into training and test sets. The training data is used to train the model, whereas the test data checks the predictions and evaluates how well the model works. You would typically use about 70 percent of the data for training and the remaining 30 percent for testing. To do this, first determine the number of rows in the dataset with the nrow() function and multiply it by 0.7 (Listing 6, lines 2 and 3). Then randomly select the corresponding number of rows (line 5).

I have set a seed of 101 for the random selection in the example (line 4). If you set the same seed, you will get the same random values, which makes the example reproducible. Following this, split the data into iris_train for training and iris_test for validation (lines 6 and 7).

After splitting the data, you can train and evaluate the decision tree model. To do this, you need the rpart library; rpart.plot visualizes the decision tree (lines 8 to 11). Next, generate the decision tree from the training data. When doing so, specify the Species column as the target in the formula so that the model predicts the iris species (line 12).

One advantage of the decision tree is that it is relatively easy to see which parameters the model refers to. rpart.plot lets you visualize and read the parameters (line 13). Figure 5 shows that the iris species is setosa if the Petal.Length is less than 2.5. If the Petal.Length exceeds 2.5 and the Petal.Width is less than 1.7, then the species is probably versicolor. Otherwise, virginica is the most likely species.

Figure 5: Visualizing the decision tree model with the iris data.

The next step in the analysis process is to find out how accurate the results are. To do this, you need to feed the model data that it has not seen before; the previously created test data serves this purpose. Use predict() to generate predictions with the iris_model model based on the test data (Listing 7, line 1).

Listing 7

Accuracy Estimation

01 > iris_pred <- predict(object = iris_model, newdata = iris_test, type = "class")
02 > install.packages("caret")
03 > library(caret)
04 > confusionMatrix(data = iris_pred, reference = iris_test$Species)

There are a variety of metrics for determining the quality of a model. One of the best known tools is the confusion matrix, which compares the predicted classes with the actual ones. To compute it, first install the caret library (lines 2 and 3) – the installation will give you enough time for an extensive coffee break even on a fast computer. Then evaluate the iris_pred data (line 4).

The statistics show that the model operates with an accuracy of 93 percent. The next step would probably be to optimize the algorithm or find a different algorithm that offers greater accuracy.
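Incidentally, you can cross-check the accuracy value without caret: The share of correct predictions is a single expression.

mean(iris_pred == iris_test$Species)  # proportion of correct predictions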

You can now also imagine how this algorithm could be applied to other areas. For example, you could use environmental climate data (humidity, temperature, etc.) as the input, combine it with information on the type and number of defects in a machine, and use the decision tree to determine the conditions under which the machine is likely to fail.

Importing Data

If you want to analyze your own data now, you just need to import the data into R to get started. R lets you import data from different sources.

To import data from a CSV file, pass the file name (including the path if needed) to the read.table() function and optionally specify whether the file contains column names (header). You can also specify the separator character for the fields (sep) (Listing 8, first line).

Listing 8

Data Import

> df <- read.table("my_file.csv", header = FALSE, sep = ",")
> library(readxl)
> my_data <- read_excel("my_excel-file.xlsx")

If the data takes the form of an Excel spreadsheet, you can also import it directly. To do this, install the readxl package, load it, and use read_excel() (second and third lines of Listing 8) to import the data.
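For the common case of a CSV file that does have a header line, base R also offers convenience wrappers around read.table(); the file name here is just a placeholder:

df <- read.csv("my_file.csv")    # comma separator, header = TRUE by default
df2 <- read.csv2("my_file.csv")  # semicolon separator and decimal comma, common in Europe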

Conclusions

The R language is a powerful tool for analyzing and visualizing scientific data. This article showed how to install R, RStudio, and various R libraries. I also described the most important data structures in R and introduced some analysis methods, from visualization to predictive modeling. Now you can jump in and start using R for your own scientific data analyses.

The Author

Rene Brunner is the founder of Datamics, a consulting company for Data Science Engineering, and Chair of the Digital Technologies and Coding study program at the Macromedia University. With his online courses on Udemy and his "Data Science mit Milch und Zucker" podcast, he hopes to make data science and machine learning accessible to everyone.