Building high-performance clusters with LAM/MPI

Launching the Run-Time Environment

Once the lamhosts file is complete, you can use the lamboot command to start the LAM run-time environment on all cluster nodes:

/home/lamuser> lamboot -v /etc/lam/lamhosts

The output of the preceding command is shown in Listing 3.

Listing 3

lamboot Output

01 LAM 7.0.6/MPI 2 C++/ROMIO - Indiana University
02
03 n0<1234> ssi:boot:base:linear: booting n0 (bravo1.cluster.com)
04 n0<1234> ssi:boot:base:linear: booting n1 (bravo2.cluster.com)
05 n0<1234> ssi:boot:base:linear: booting n2 (bravo3.cluster.com)
06 n0<1234> ssi:boot:base:linear: booting n3 (bravo4.cluster.com)
07 n0<1234> ssi:boot:base:linear: finished

If you have any problems with lamboot, the -d option will output enormous amounts of debugging information.

The recon tool (Figure 1 shows an example of its output) verifies that the cluster is bootable. Although recon does not boot the LAM run-time environment, and a successful run does not guarantee that lamboot will succeed, it is a good tool for testing your configuration.

Figure 1: Verifying LAM/MPI clusters with the recon tool.
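
A typical invocation looks something like this, pointing recon at the same boot schema that lamboot uses (the -v flag adds verbose output):

/home/lamuser> recon -v /etc/lam/lamhosts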

Another important LAM tool is tping, which verifies the functionality of a LAM universe by sending a ping message between the LAM daemons that constitute the LAM environment.

tping commonly takes two arguments: the set of nodes to ping (in N notation) and how many times to ping them. If the number of times to ping is not specified, tping will continue until it is stopped (usually by hitting Ctrl+C). The command in Figure 2 pings all nodes in the LAM universe once.

Figure 2: Verifying LAM/MPI nodes with tping.
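
On the command line, that looks something like the following (N addresses all nodes in the universe, and -c sets the number of pings):

/home/lamuser> tping N -c 1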

Parallel Computation with LAM/MPI

Once your LAM universe and all of its nodes are up and running, you are ready to start your parallel computation journey. Although it is possible to compile MPI programs without booting LAM, administrators commonly boot the LAM run-time environment before compiling.

One of the basic rules of LAM/MPI is that the same compilers that were used to compile LAM should be used to compile and link user MPI programs. However, this requirement is largely invisible in practice because each LAM installation provides wrapper compilers that handle compilation tasks.

Examples of the wrapper compilers are mpicc, mpic++/mpiCC, and mpif77, which compile C, C++, and Fortran MPI programs, respectively, in a LAM/MPI environment.

With the following commands, you can compile C and Fortran MPI programs, respectively:

/home/lamuser> mpicc foo1.c -o foo1
/home/lamuser> mpif77 -O test1.f -o test1

Note, too, that any other compiler and linker flags (such as -g and -O) can be passed to the wrapper compilers; they are then passed through to the back-end compiler.

The wrapper compilers add the LAM/MPI-specific flags only when the command line contains an argument that does not begin with a dash (-). For example, if you execute the mpicc command without any arguments, it simply invokes the back-end compiler (which in this case is nothing more than GCC):

/home/lamuser> mpicc
gcc: no input files
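
If you want to see exactly which flags a wrapper adds, the LAM wrapper compilers also accept a -showme argument, which prints the back-end command line (compiler, include paths, libraries, and so on) without actually compiling anything:

/home/lamuser> mpicc -showme foo1.c -o foo1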

Now the resulting executables (test1 and foo1 in the preceding examples) are ready to run in the LAM run-time environment.

You have to copy the executable to all of the client nodes and make sure that the lamuser account has execute permission on it.
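
If the nodes do not share a filesystem such as NFS, you can push the binary out with scp (assuming SSH access between the nodes is already set up), repeating for each entry in the lamhosts file:

/home/lamuser> scp foo1 bravo2.cluster.com:/home/lamuser/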

Now you are ready to start parallel execution. You can run all the processes of your parallel application on a single machine or spread them across several machines (the master and client nodes of the LAM/MPI cluster). In fact, LAM/MPI lets you launch multiple processes on a single machine regardless of how many CPUs are actually present.

The -np option of the mpirun command specifies how many processes to launch.

For example, you can start the LAM run-time environment on all nodes with lamboot and then compile the sample C program cpi.c (see Listing 4), which is available for download from the LAM/MPI website.

Listing 4

cpi.c

001 ------------------------------------
002 Sample MPI enabled C program "cpi.c"
003 ------------------------------------
004
005 /*
006  * Copyright (c) 2001-2002 The Trustees of Indiana University.
007  * All rights reserved.
008  * Copyright (c) 1998-2001 University of Notre Dame.
009  * All rights reserved.
010  * Copyright (c) 1994-1998 The Ohio State University.
011  * All rights reserved.
012  *
013  * This file is part of the LAM/MPI software package.  For license
014  * information, see the LICENSE file in the top level directory of the
015  * LAM/MPI source distribution.
016  *
017  * $HEADER$
018  *
019  * $Id: cpi.c,v 1.4 2002/11/23 04:06:58 jsquyres Exp $
020  *
021  * Portions taken from the MPICH distribution example cpi.c.
022  *
023  * Example program to calculate the value of pi by integrating f(x) =
024  * 4 / (1 + x^2).
025  */
026
027 #include <stdio.h>
028 #include <math.h>
029 #include <mpi.h>
030
031
032 /* Constant for how many values we'll estimate */
033
034 #define NUM_ITERS 1000
035
036
037 /* Prototype the function that we'll use below. */
038
039 static double f(double);
040
041
042 int
043 main(int argc, char *argv[])
044 {
045   int iter, rank, size, i;
046   double PI25DT = 3.141592653589793238462643;
047   double mypi, pi, h, sum, x;
048   double startwtime = 0.0, endwtime;
049   int namelen;
050   char processor_name[MPI_MAX_PROCESSOR_NAME];
051
052   /* Normal MPI startup */
053
054   MPI_Init(&argc, &argv);
055   MPI_Comm_size(MPI_COMM_WORLD, &size);
056   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
057   MPI_Get_processor_name(processor_name, &namelen);
058
059   printf("Process %d of %d on %s\n", rank, size, processor_name);
060
061   /* Do approximations for 1 to 100 points */
062
063   for (iter = 2; iter < NUM_ITERS; ++iter) {
064     h = 1.0 / (double) iter;
065     sum = 0.0;
066
067     /* A slightly better approach starts from large i and works back */
068
069     if (rank == 0)
070       startwtime = MPI_Wtime();
071
072     for (i = rank + 1; i <= iter; i += size) {
073       x = h * ((double) i - 0.5);
074       sum += f(x);
075     }
076     mypi = h * sum;
077
078     MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
079
080     if (rank == 0) {
081       printf("%d points: pi is approximately %.16f, error = %.16f\n",
082              iter, pi, fabs(pi - PI25DT));
083       endwtime = MPI_Wtime();
084       printf("wall clock time = %f\n", endwtime - startwtime);
085       fflush(stdout);
086     }
087   }
088
089   /* All done */
090
091   MPI_Finalize();
092   return 0;
093 }
094
095
096 static double
097 f(double a)
098 {
099   return (4.0 / (1.0 + a * a));
100 }

To compile the program with the mpicc wrapper compiler, enter the following:

/home/lamuser> mpicc cpi.c -o cpi

I executed the resulting cpi executable with six parallel processes on the four-node LAM/MPI cluster:

/home/lamuser> mpirun -np 6 cpi

According to the lamhosts file defined previously, this command starts one copy of cpi on each of the n0 and n3 nodes and two copies each on n1 and n2.
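
For reference, a boot schema that produces this layout would look something like the following sketch (the cpu=N key tells LAM how many CPUs to schedule on each node):

bravo1.cluster.com
bravo2.cluster.com cpu=2
bravo3.cluster.com cpu=2
bravo4.cluster.com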

The use of the C option with the mpirun command will launch one copy of the cpi program on every CPU that was listed in the boot schema (the lamhosts file):

/home/lamuser> mpirun -C cpi

The C option therefore serves as a convenient shorthand notation for launching a set of processes across a group of SMPs.

On the other hand, the N option with mpirun launches exactly one copy of the cpi program on every node in the LAM universe. In other words, N tells LAM to disregard the CPU count.
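
For example, the following command starts four cpi processes on this cluster, one per node:

/home/lamuser> mpirun N cpi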

LAM Commands

The LAM universe is easy to manage, but it has no GUI-based management infrastructure, so administrators mostly have to rely on the built-in LAM commands. Some of the important LAM commands are shown in Table 1.
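
For example, when a session is finished, lamhalt shuts down the LAM run-time environment on all nodes; if lamhalt hangs, lamwipe (or wipe, depending on your LAM version) forcibly cleans up the LAM daemons:

/home/lamuser> lamhalt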
