Creating low-cost "learning clusters" for HPC
Starting out in the HPC world requires learning to write parallel applications and learning to administer and manage clusters. We take a look at some ways to get started.
On the Beowulf mailing list recently, Jim Lux started a discussion about "learning clusters" – that is, low-cost units that can be used to learn about clusters. People often wonder how to get started with clusters, usually in one of two ways: "I want to learn parallel programming" or "I want to learn how to build and administer clusters." Sometimes the question is simple curiosity: "What is a Beowulf cluster and how can I build one?"
In the past, you had just a few options, including getting access to a cluster at a nearby university, buying/borrowing/using some old hardware from various sources to build one yourself, or designing your own cluster by shopping for parts. However, with the rise of virtualization, you now have more system options than ever, particularly if your budget is limited.
In this article, I'll take a quick look at various system options for people who want to learn about clusters, focusing on the programming and administration aspects. These options range in price, learning curve, ease of use, complexity, and just plain fun, but the focus is to keep the cost down while you are learning. The one aspect of clusters that I'm not really going to focus on is performance. The goal is to learn, not to find the best price per billion (10^9) floating point operations per second ($/GFLOPS) or the fastest performance. Performance can come later, once you have learned the fundamentals.
Learning to Program
Usually developers get started writing programs for high-performance computing (HPC) clusters by learning MPI programming. MPI (Message Passing Interface) is the mechanism used to pass data between nodes (really, processes).
Typically, a parallel program starts the same program on all nodes, usually with one process per core, using tools that come with MPI implementations (i.e., mpiexec). For example, if you have two four-core systems, your application would likely use up to eight processes total – four per node. Your application starts with the desired number of processes on the various nodes, specified either by you or by the job scheduler (resource manager). MPI then opens up communications between each process, and they exchange data according to your application program. This is how you achieve parallelism – each process runs with its own set of data, and the processes communicate with each other to exchange data as needed.
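To make the model above concrete, here is a minimal sketch of building and launching an MPI "hello world." It assumes an MPI implementation such as Open MPI or MPICH is installed, providing the mpicc wrapper compiler and mpiexec launcher; the filename hello_mpi.c is just an illustrative choice.

```shell
# Write a tiny MPI program: every process reports its rank
# (its index within the job) and the total process count.
cat > hello_mpi.c <<'EOF'
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                  /* start the MPI runtime   */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* which process am I?     */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* how many processes?     */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();                          /* shut down cleanly       */
    return 0;
}
EOF

mpicc hello_mpi.c -o hello_mpi   # compile with the MPI wrapper compiler
mpiexec -np 4 ./hello_mpi        # launch four processes on this machine
```

Each of the four processes runs the same binary; only the rank differs, which is exactly the hook a real application uses to divide up its data.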
To get started, just search the web for "MPI tutorial," and you will find several options. To run your application, you need some hardware, and the simplest thing to do is use what you already have. Most likely you have at least a dual-core system, so run your application with two processes (the MPI parlance is np=2, or "number of processes = 2"). The kernel scheduler should put one process on each core. You can also run with more than two processes, but the kernel will time-slice (context switch) between them, which could cause a slowdown.
Moreover, you need to make sure you have enough memory so the system does not start swapping (hint: use vmstat to watch for swapping). Although it's best to run one process per core, it doesn't hurt to try more processes than physical cores while you are learning, provided you have enough memory. Remember that in this instance you're not looking for performance; you're only learning how to write applications. If you run your application on a single system with a single core and then run it on two cores, you would hope to see some improvement in performance, but that's not always true (and that's the subject of another article).
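A quick sketch of what that looks like in practice, assuming the hello_mpi binary from earlier (or any MPI program of yours) and a standard Linux vmstat:

```shell
# Deliberately oversubscribe: 8 processes on a 2-core machine.
# The kernel time-slices between them; it works, just more slowly.
mpiexec -np 8 ./hello_mpi

# In another terminal, report memory stats every 5 seconds.
# Non-zero values in the "si" (swap in) and "so" (swap out)
# columns mean the system has started swapping -- back off on
# the process count or the problem size if you see that.
vmstat 5
```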
If you want to extend your programming to larger systems, you will need something larger than your desktop or laptop computer. Later, I will present some options for this, but in the next section, I will talk about how you can use virtualization on your laptop or desktop computer to learn about cluster tools and management.
Virtual Machines on Your Computer
For this lesson, I'm going to assume you have access to a desktop or a laptop with some amount of memory (e.g., 2-4GB as a starting point) and a reasonably new version of Linux that supports virtualization. Although you could use Microsoft Windows, that is a road for others to map – I will focus on using Linux.
The goal is to spin up at least one, if not more, virtual machines (VMs) on your system and use them to simulate a cluster. I won't go into detail about creating a VM or starting it up because you have access to a number of tutorials on the web.
The advantage of using VMs is being able to spin up more VMs than you have real physical processors and letting the kernel time-slice between them. Because they are logically isolated from one another, you can think of VMs as different nodes in a cluster. With your "cluster simulator," you can learn to program for clusters or learn how to create and manage HPC clusters.
Several cluster toolkits are available that you can use to learn about building HPC clusters. One example is Warewulf. I've written a series of articles about how to build a cluster using Warewulf, but you might need to adjust a few things because the newer version is slightly different from the version used in those articles (the changes are small, in my opinion). The developers of Warewulf routinely use VMs on their laptops for development and testing, as do many developers, so it's not an unusual choice.
Once the cluster is configured, you can also run your MPI or parallel applications across the VMs. For maximum scalability, you would like to have one VM per physical core so that performance is not impeded by context switching; however, you can easily create more VMs than you have physical cores, as long as you have enough memory in the system for all the VMs to run. Fortunately, you can restrict the amount of memory for each VM, keeping the total memory of all VMs below a physical limit. (Don't forget the memory needed by the hypervisor system!)
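As one concrete way to cap per-VM memory, here is a sketch using QEMU/KVM directly. It assumes qemu-kvm is installed and that node1.qcow2 is a disk image you have already prepared from a Linux install; the memory size and port numbers are illustrative.

```shell
# Boot one "node" with its RAM capped at 768MB (-m 768) so that
# several VMs fit in a few gigabytes of host memory. User-mode
# networking forwards host port 2201 to the VM's SSH port 22.
qemu-system-x86_64 -enable-kvm -m 768 \
    -drive file=node1.qcow2,format=qcow2 \
    -net nic -net user,hostfwd=tcp::2201-:22 \
    -display none -daemonize

# Repeat with node2.qcow2 and port 2202, and so on, then log in:
ssh -p 2201 user@localhost
```

Tools such as virt-manager or VirtualBox give you the same memory cap through a GUI; the point is simply that the sum of the VM allocations must stay below physical RAM, with room left for the host.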
Just remember that you don't have to stop at one system when using VMs. You can take several systems and create VMs on each, growing the cluster to a greater number of systems. For example, if you have two quad-core systems, you can easily set up four VMs on each node, giving you a total of eight VMs. But, what if you want to go larger?
Send in the Clouds
Although you might not be a fan of the word "cloud," the concepts behind it have utility. In the case of HPC, clouds allow you to spin up a cluster fairly quickly, run some applications, and shut down the cluster fairly quickly. This capability is particularly useful while you are learning about clusters, because you can start with a few instances and then add more as you need them. And, once you are done, you can just shut it all down and never have to worry about it.
It's fairly obvious that Amazon EC2 is the thousand-pound gorilla in the cloud room. Amazon EC2 has been around for about six years, and Amazon has developed good tools and instructions on creating clusters using Amazon Web Services (AWS). Amazon also has a very nice video that explains how to build a cluster quickly using AWS.
In general, you can take a couple of routes when dealing with the cloud and clusters. The first is to install the OS and cluster tools on your various VMs using something like Warewulf, as mentioned earlier. This method is useful if you want to understand how cluster tools work and how you administer a system. If you just want a cluster up and working in the cloud, you have another option. StarCluster was developed at MIT specifically for building clusters in the cloud, namely Amazon's EC2. A very nice StarCluster How-To gets you up and running very quickly, and a nice slide deck on SlideShare has a video that explains how to get StarCluster up and running.
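A typical StarCluster session is only a few commands. This sketch assumes StarCluster is installed (e.g., via pip), your AWS credentials and a cluster template are already defined in ~/.starcluster/config, and "mycluster" is a name you choose:

```shell
starcluster start mycluster        # boot the master and worker instances on EC2
starcluster listclusters           # confirm the cluster is up
starcluster sshmaster mycluster    # log in to the master node and run jobs
starcluster terminate mycluster    # shut everything down (and stop billing)
```

The start/terminate cycle is the whole appeal: the cluster exists only while you are using it.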
A subtle point in the video is that Amazon has created pre-configured images called "Amazon Machine Images" (AMIs). AMIs are pre-configured OS images for VMs, primarily based on open source operating systems because of licensing issues. In addition to the Amazon-defined images, there are community-defined images and images that you create yourself. These images can save you a lot of time when installing an OS because the tools can be integrated into the image, and you can add data and applications to images as you like.
In addition to compute services, Amazon EC2 also provides storage, which comes in several forms, including local storage, Amazon Elastic Block Storage (EBS), Amazon S3, and Amazon Glacier. Local storage is a very interesting option, but remember that this storage goes away once the VMs are stopped. Using EC2, you can perhaps learn about building and installing Lustre, GlusterFS, Ceph, or other storage solutions, without having to buy a bunch of systems.
As an example, Robert Read from Intel recently experimented with building Lustre in Amazon EC2. The goal was to understand performance and how easy it is (or is not) to build Lustre within EC2. Read found that Lustre performed well and scaled fairly easily on AWS (you just add VMs). Additionally, the new distributed metadata capability (distributed namespace, DNE) is a good fit for AWS. However, Lustre on AWS currently needs a more dynamic failover capability, because many people run metadata servers (MDSs) and object storage servers (OSSs) in failover mode for better uptime.
Amazon Elastic Block Store (EBS) provides block-level storage volumes that attach to instances. They are network-attached (think iSCSI), with the option of being "optimized" so that throughput is improved to about 500-1,000Mbps. Most importantly, they are persistent, so if the VM is stopped, data is not lost. EBS volumes have a limit of 1TB each, but you can attach multiple volumes per instance. Discussing Amazon S3 and Glacier is a subject for another article at another time, but just know that they can be used to store data.
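For the curious, creating and attaching an EBS volume from the command line looks roughly like this. The sketch assumes the aws CLI is installed and configured; the size, availability zone, volume ID, and instance ID are all placeholders you would replace with your own values:

```shell
# Create a 100GB volume in the same availability zone as the instance.
aws ec2 create-volume --size 100 --availability-zone us-east-1a

# Attach it to a running instance as device /dev/sdf; inside the VM
# it can then be formatted and mounted like any other block device.
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 --device /dev/sdf
```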
Amazon organizes its large number of VM instance types into instance families:
- General-purpose (M1 and M3)
- Compute-optimized (C1 and CC2)
- Memory-optimized (M2 and CR1)
- Storage-optimized (HI1 and HS1)
- Micro instances (T1)
- GPU instances (CG1)
The VM instance types in each of these families vary in performance and cost. If you examine them, you will discover that Amazon gives you 750 free hours on micro instances, plus some storage.
You can use any number of VM instances to learn about clusters, although you should examine the pricing carefully because some fees apply for moving and storing data. However, using a cloud such as Amazon EC2 is a great way to get started learning about clusters.