Building high-performance clusters with LAM/MPI

Launching the Run-Time Environment

Once the lamhosts file is complete, you can use the lamboot command to start the LAM run-time environment on all cluster nodes:

/home/lamuser> lamboot -v /etc/lam/lamhosts

The output of the preceding command is shown in Listing 3.

Listing 3: lamboot Output
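For reference, the boot schema itself is a plain text file with one host per line and an optional cpu=N key for SMP nodes. A five-node file consistent with the process placement described later in this article might look like the following (the hostnames and CPU counts are illustrative):

# lamhosts - illustrative boot schema for a five-node cluster
# (n0, n3, and n4 are single-CPU nodes; n1 and n2 are dual-CPU SMPs)
node0.example.com
node1.example.com cpu=2
node2.example.com cpu=2
node3.example.com
node4.example.com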

If you have any problems with lamboot, the -d option will output enormous amounts of debugging information.
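For example:

/home/lamuser> lamboot -d /etc/lam/lamhosts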

The recon tool (see example output in Figure 1) verifies that the cluster is bootable. Although recon does not boot the LAM run-time environment, and a successful run does not guarantee that lamboot will succeed, it is a good tool for testing your configuration.

Figure 1: Verifying LAM/MPI clusters with the recon tool.
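recon reads the same boot schema that lamboot uses, so a typical invocation is:

/home/lamuser> recon -v /etc/lam/lamhosts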

Another important LAM tool is tping, which verifies the functionality of a LAM universe by sending a ping message between the LAM daemons that constitute the LAM environment.

tping commonly takes two arguments: the set of nodes to ping (in N notation) and how many times to ping them. If the number of times to ping is not specified, tping will continue until it is stopped (usually by hitting Ctrl+C). The command in Figure 2 pings all nodes in the LAM universe once.

Figure 2: Verifying LAM/MPI nodes with tping.
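The command in question names the node set N and a ping count of one:

/home/lamuser> tping N -c1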

Parallel Computation with LAM/MPI

Once your LAM universe and all nodes are up and running, you are ready to start your parallel computation journey. Although it is possible to compile MPI programs without booting LAM, administrators commonly boot LAM before compiling.

One of the basic rules of LAM/MPI is that the same compilers that were used to compile LAM should be used to compile and link user MPI programs. However, this requirement is largely invisible to administrators because each LAM installation provides wrapper compilers to perform compilation tasks.

Some examples of the wrapper compilers are mpicc, mpic++/mpiCC, and mpif77, which are provided to compile C, C++, and Fortran LAM/MPI programs, respectively, in a LAM/MPI environment.

With the following commands, you can compile C and Fortran MPI programs, respectively:

/home/lamuser> mpicc foo1.c -o foo1
/home/lamuser> mpif77 -O test1.f -o test1

Note, too, that any other compiler and linker flags (such as -g and -O) can be given to the wrapper compilers; they are passed through to the back-end compiler.

The wrapper compilers add the LAM/MPI-specific flags only when at least one command-line argument does not begin with a dash (-). For example, when you execute mpicc without any arguments, it simply invokes the back-end compiler (in this installation, GCC):

/home/lamuser> mpicc
gcc: no input files
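To see exactly which flags a wrapper adds, LAM's wrapper compilers accept the -showme option, which prints the full back-end command line (including the LAM include and library flags) without executing it:

/home/lamuser> mpicc -showme foo1.c -o foo1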

Now the resulting executables (test1 and foo1 in the preceding examples) are ready to run in the LAM run-time environment.

You have to copy the executable to all client nodes and ensure that lamuser has execute permission on it.
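Any ordinary file-transfer tool will do; for example, with scp (the hostname is illustrative):

/home/lamuser> scp foo1 node1.example.com:/home/lamuser/
/home/lamuser> ssh node1.example.com chmod +x /home/lamuser/foo1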

Now you are ready to start parallel execution. You can run all the processes of your parallel application on a single machine or across several machines (the master and client nodes of the LAM/MPI cluster). In fact, LAM/MPI lets you launch multiple processes on a single machine, regardless of how many CPUs it actually has.

You can use the -np option of the mpirun command to specify the number of processes to launch.
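For instance, if only the local node has been booted, the following command still launches four copies of foo1 on that one machine, oversubscribing its CPUs if necessary:

/home/lamuser> mpirun -np 4 foo1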

For example, you can start the LAM/MPI cluster on all nodes with lamboot and then compile the sample C program cpi.c (see Listing 4), which is available for download from the LAM/MPI website.

Listing 4: cpi.c
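cpi.c approximates pi by numerically integrating 4/(1+x^2) over [0,1], dividing the intervals among the MPI processes. The following is a minimal sketch in the spirit of that well-known example (not necessarily identical to the downloadable listing):

#include <mpi.h>
#include <stdio.h>
#include <math.h>

int main(int argc, char *argv[])
{
    const double PI25DT = 3.141592653589793238462643; /* reference value */
    const int n = 10000;   /* number of intervals */
    int myid, numprocs, i;
    double mypi, pi, h, sum, x;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    /* Each rank sums the midpoint-rule contributions of every
       numprocs-th interval, starting at its own rank. */
    h = 1.0 / (double)n;
    sum = 0.0;
    for (i = myid + 1; i <= n; i += numprocs) {
        x = h * ((double)i - 0.5);
        sum += 4.0 / (1.0 + x * x);
    }
    mypi = h * sum;

    /* Combine the partial sums into the final result on rank 0. */
    MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (myid == 0)
        printf("pi is approximately %.16f, error is %.16f\n",
               pi, fabs(pi - PI25DT));

    MPI_Finalize();
    return 0;
}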

To compile the program with the mpicc wrapper compiler, enter the following:

/home/lamuser> mpicc cpi.c -o cpi

I executed the resulting cpi executable with six parallel processes on a five-node LAM/MPI cluster:

/home/lamuser> mpirun -np 6 cpi

According to the lamhosts file defined previously, this command starts one copy of cpi on each of the n0 and n3 nodes and two copies each on n1 and n2.

The use of the C option with the mpirun command will launch one copy of the cpi program on every CPU that was listed in the boot schema (the lamhosts file):

/home/lamuser> mpirun -C cpi

The C option therefore serves as a convenient shorthand notation for launching a set of processes across a group of SMPs.

On the other hand, the use of the N option with mpirun will launch exactly one copy of the cpi program on every node in the LAM universe. Consequently, the use of N with the mpirun command tells LAM to disregard the CPU count.
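For example:

/home/lamuser> mpirun N cpi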

LAM Commands

A LAM universe is easy to manage, but it has no GUI-based management infrastructure, so most of the time administrators have to rely on the built-in LAM commands. Some of the important LAM commands are shown in Table 1.
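A few commands cover most day-to-day administration tasks; for example:

/home/lamuser> lamnodes       # list the nodes and CPUs in the running universe
/home/lamuser> mpitask        # show the status of running MPI processes
/home/lamuser> lamclean -v    # clean up leftover processes and messages after a failed run
/home/lamuser> lamhalt        # shut down the LAM run-time environment on all nodes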
