Turn your desktop computer into a high-performance cluster with PelicanHPC

Run Tests

To run the tests, open a terminal and start GNU Octave by typing octave at the command line, which drops you into the Octave prompt.

From here you can run the various sample programs by typing their names. For example, typing kernel_example performs the kernel estimations.

Similarly, pea_example demonstrates the parallel implementation of the parameterized expectations algorithm, and mc_example2 runs a Monte Carlo test, the results of which are plotted in Figure 4.

Figure 4: Gnuplot plots the results of a Monte Carlo test example.

Creel also suggests that PelicanHPC can be used for molecular dynamics with the open source software GROMACS (GROningen MAchine for Chemical Simulations). Folding@home, the distributed computing project for studying protein folding, also uses GROMACS, and Creel believes that one could replicate a similar setup on a cluster created with PelicanHPC.

For users solely interested in learning about high-performance computing, Creel points to ParallelKnoppix, PelicanHPC's predecessor, the last version of which is still available for download [4].

Parallel Programming with PelicanHPC

One of the best uses for PelicanHPC is compiling and running parallel programs. If this is all you want PelicanHPC for, you don't really need the slave nodes, because the tools can compile your programs on the front-end node itself.

PelicanHPC includes several tools for writing and processing parallel code. OpenMPI provides compiler wrappers for building programs in C, C++, and Fortran. SciPy and NumPy [5] are Python libraries for scientific computing. PelicanHPC also has the MPI toolbox (MPITB) for Octave, which lets you call MPI library routines from within Octave.
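To see how little scaffolding a parallel program needs, consider the following minimal hello-world sketch (a generic illustration, not one of PelicanHPC's bundled programs). OpenMPI's mpicc wrapper builds it on the front-end node, and mpirun launches it:

/* hello_mpi.c - a minimal MPI program.
 * Build:  mpicc hello_mpi.c -o hello_mpi
 * Run:    mpirun -np 4 ./hello_mpi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's ID (rank) */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();                        /* shut down the MPI runtime */
    return 0;
}

Without slave nodes, all four processes simply run on the front end; with nodes attached, mpirun can spread them across the cluster.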

Passing the Buck

If you're new to parallel programming, you might not be aware of MPI (Message-Passing Interface), which is key to parallel computing. MPI isn't a programming language, but a specification, implemented as libraries, for writing message-passing parallel programs that run on a cluster. The communicating processes can run on a single local machine or across the various nodes of the cluster.
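To make the message-passing model concrete, here is a small sketch (again a generic illustration, not tied to PelicanHPC) in which the process with rank 0 sends a text message and the process with rank 1 receives it, using MPI's standard point-to-point routines:

/* ping.c - point-to-point message passing between two processes.
 * Build:  mpicc ping.c -o ping
 * Run:    mpirun -np 2 ./ping   (needs at least two processes)
 */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
    int rank;
    char buf[64];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        strcpy(buf, "greetings from rank 0");
        /* send the buffer (including the trailing '\0') to rank 1, tag 0 */
        MPI_Send(buf, strlen(buf) + 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* block until the message from rank 0 arrives */
        MPI_Recv(buf, sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received: \"%s\"\n", buf);
    }

    MPI_Finalize();
    return 0;
}

Whether the two ranks land on one machine or on two different nodes is decided at launch time by mpirun; the code itself doesn't change.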

Popular languages for writing MPI programs are C, C++, and Fortran. MPICH was the first implementation of the MPI 1.x specification. LAM/MPI is another implementation, which also covers significant parts of the MPI 2.x spec; it can pass messages via TCP/IP, shared memory, or InfiniBand. The most popular implementation of MPI is OpenMPI, which is developed and maintained by a consortium and combines the best of several projects, such as LAM/MPI. Many of the Top 500 supercomputers use it, including IBM's Roadrunner, currently the fastest.
