Turn your desktop computer into a high-performance cluster with PelicanHPC

Run Tests

To run the tests, open a terminal and start GNU Octave by typing octave at the command line, which brings you to the Octave prompt.

Here you can run various examples of sample code by typing in their names. For example, the kernel estimations are performed by typing kernel_example.

Similarly, pea_example shows the parallel implementation of the parameterized expectation algorithm, and mc_example2, shown in Figure 4, shows the result of the Monte Carlo test.

Figure 4: Gnuplot plots the results of a Monte Carlo test example.

Creel also suggests that PelicanHPC can be used for molecular dynamics with the open source software GROMACS (GROningen MAchine for Chemical Simulations). The distributed project for studying protein folding, Folding@home, also uses GROMACS, and Creel believes that one could replicate this setup on a cluster created by PelicanHPC.

Creel also suggests that users solely interested in learning about high-performance computing should look to ParallelKnoppix, the last version of which is still available for download [4].

Parallel Programming with PelicanHPC

One of the best uses for PelicanHPC is for compiling and running parallel programs. If this is all you want to use PelicanHPC for, you don't really need the slave nodes because the tools can compile your programs on the front-end node itself.

PelicanHPC includes several tools for writing and running parallel code. OpenMPI provides compiler wrappers for programs written in C, C++, and Fortran. SciPy and NumPy [5] are Python libraries for scientific computing. PelicanHPC also has the MPI toolbox (MPITB) for Octave, which lets you call MPI library routines from within Octave.

Passing the Buck

If you're new to parallel programming, you might not be aware of MPI (the Message-Passing Interface), which is key to parallel computing. It is a software system that allows you to write message-passing parallel programs that run on a cluster. MPI isn't a programming language but a library that passes messages between multiple processes. The processes can all run on a single local machine or be spread across the various nodes of the cluster.

Popular languages for writing MPI programs are C, C++, and Fortran. MPICH was the first implementation of the MPI 1.x specification. LAM/MPI is another implementation, which also covers significant parts of the MPI 2.x spec and can pass messages via TCP/IP, shared memory, or InfiniBand. The most popular implementation of MPI is OpenMPI, which is developed and maintained by a consortium and combines the best of several projects, such as LAM/MPI. Many of the Top 500 supercomputers use it, including IBM's Roadrunner, currently the fastest.

Read full article as PDF:

030-035_pelicanHPC.pdf (817.68 kB)

