Building high-performance clusters with LAM/MPI

Conclusion

Many applications used in engineering, oil exploration, simulation, and scientific research require the power of parallel computation, and that is why developers continue to use LAM/MPI for building HPC applications.

Although the next-generation Open MPI implementation [6] includes many new features that are not present in LAM/MPI, LAM/MPI has a very large base of users who are quite happy with its reliability, scalability, and performance.

Improving Performance

In parallel computation scenarios, the main objective is usually to reduce total wall clock execution time rather than simply to reduce CPU time. Because so many different factors affect the overall run time, you cannot expect a linear improvement in performance just by adding more and more nodes.
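A quick way to keep an eye on this figure is to time the whole mpirun invocation from the shell; in the sketch below, the binary name and process count are placeholders for your own application:

    # "real" is the wall clock time that matters here; "user" and "sys"
    # only cover CPU time spent in the local mpirun process itself
    $ time mpirun -np 8 ./myapp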

One of the most important factors is the inherent parallelism present in the code (i.e., how well the problem is broken into pieces for parallel execution). From an infrastructure point of view, many additional factors can also contribute to improved performance.
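A handy way to quantify this limit is Amdahl's law: if a fraction p of the work can be parallelized, the best possible speedup on N nodes is

    S(N) = 1 / ((1 - p) + p/N)

With p = 0.9, for example, 16 nodes deliver a speedup of at most 1/(0.1 + 0.9/16) ≈ 6.4, no matter how fast the interconnect is.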

In most LAM/MPI cluster implementations, because client nodes have to communicate with each other through the MPI architecture, it is important to have a fast and dedicated network between nodes (e.g., gigabit Ethernet interfaces with bonding).
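As a rough sketch of such a setup (the interface names, address, and balance-rr mode are assumptions you will want to adapt to your distribution's network scripts):

    # /etc/modprobe.conf -- load the bonding driver for bond0
    alias bond0 bonding
    options bond0 mode=balance-rr miimon=100

    # bring up the bond and enslave two gigabit NICs
    ifconfig bond0 192.168.10.11 netmask 255.255.255.0 up
    ifenslave bond0 eth1 eth2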

Also, it is a good idea to create a separate VLAN for a private communication network so that no other traffic can contribute to performance degradation.
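On the node side, tagging the MPI interface into its own VLAN might look like the following sketch (the VLAN ID, interface name, and address are assumptions, and the matching VLAN must still be configured on your switch):

    # load the 802.1q module and create VLAN 10 on eth1
    modprobe 8021q
    vconfig add eth1 10
    ifconfig eth1.10 192.168.20.11 netmask 255.255.255.0 up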

If your application performs any kind of data mining (often the case in commercial implementations of LAM/MPI), disk I/O on the master and client nodes also affects performance. Moreover, because of the nature of parallel execution, the source data for data mining (or, in simpler implementations, the executables) must be available to all nodes for simultaneous read and write operations.
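For instance, a shared data tree exported over NFS from the master node could be declared as follows (the path and subnet are assumptions):

    # /etc/exports on the master node
    /cluster/data  192.168.10.0/24(rw,sync,no_root_squash)

    # re-export after editing
    exportfs -ra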

If you are using SAN-based external disks along with NFS, tuning the NFS parameters can yield a noticeable performance improvement. If you are using NAS storage subsystems and NFS/CIFS protocols to make shared data sources available to all nodes for simultaneous read/write, it is highly recommended that you dedicate a separate VLAN and Ethernet interface on each node to disk I/O from the NAS subsystem, so that storage traffic stays isolated from MPI traffic.
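As a starting point rather than a prescription, a tuned NFS mount on each client might use larger transfer sizes and TCP transport; the server name, path, and values below are assumptions to benchmark against your own workload:

    # /etc/fstab entry on each client node
    nas01:/cluster/data  /cluster/data  nfs  rsize=32768,wsize=32768,hard,intr,tcp  0  0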

Finally, cluster filesystems (such as GFS, GPFS, and the Veritas Cluster File System) can also help speed up disk I/O in large LAM/MPI implementations.

Infos

  1. LAM/MPI website: http://www.lam-mpi.org/
  2. C3: http://www.csm.ornl.gov/torc/C3/
  3. C3 download: http://www.csm.ornl.gov/torc/C3/C3softwarepage.shtml
  4. LAM/MPI download page: http://www.lam-mpi.org/7.1/download.php
  5. LAM runtime in Debian: http://packages.debian.org/lenny/lam-runtime
  6. Open MPI: http://www.open-mpi.org
  7. LAM/MPI User's Guide: http://www.lam-mpi.org/download/files/7.1.4-user.pdf
  8. Openshaw, Stan, and Ian Turton. High Performance Computing and the Art of Parallel Programming. ISBN: 0415156920
  9. Lafferty, Edward L., et al. Parallel Computing: An Introduction. ISBN: 0815513291

The Author

Khurram Shiraz is a technical consultant at GBM in Kuwait. In his nine years of IT experience, he has worked mainly with high-availability technologies and monitoring products such as HACMP, RHEL Clusters, and ITM. On the storage side, his main experience is with implementations of IBM and EMC SAN/NAS storage. His areas of expertise also include the design and implementation of high-availability parallel computing and DR solutions based on IBM pSeries, Linux, and Windows. You can reach him at kshiraz12@hotmail.com.
