Creating virtual clusters with Rocks
In the Rocks
Rocks offers an easy solution for clustering with virtual machines.
Rocks is a Linux distribution and cluster management system that allows for rapid deployment of Linux clusters on physical hardware or in virtual Xen containers. A Rocks cluster is easy to deploy, and it offers all the benefits of virtualization for the cluster member nodes. With a minimum of two physical machines, Rocks allows for simple and rapid cluster deployment and management, freeing the cluster administrator to focus on supporting grid computing and the distributed applications that make clustering an attractive option.
Included in the standard Rocks distribution are various open source high-performance distributed and parallel computing tools, such as Sun's Grid Engine, OpenMPI, and Condor. This powerful collection of advanced features is one reason why NASA, the NSA, IBM Austin Research Lab, the U.S. Navy, MIT, Harvard, and Johns Hopkins University are all using Rocks for some of their most intensive applications.
Why Virtualize a Cluster?
The arguments for deploying virtual clusters are the same arguments that justify any virtualization solution: flexibility, ease of management, and efficient hardware resource utilization. For example, in an environment in which 64-bit and 32-bit operating systems must run simultaneously, virtualization is a much more efficient solution than attempting to support two separate hardware platforms in a single cluster.
Before installing the cluster, make sure all of the necessary components are readily available. Rocks clusters can be configured in a multitude of different ways, with various network configurations. Rocks can be installed within virtual containers (VM containers) or directly on physical hardware. The example provided in this article assumes that you have at least two physical machines for deploying a front-end node and at least one VM container. The front-end node requires at least 1GB of RAM, and the VM container should have at least 4GB of RAM (Rocks requires a minimum of 1GB).
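If you want a quick sanity check before booting the installer, the memory minimums above can be verified from any running Linux system. This is just a convenience sketch (the thresholds mirror the figures quoted here, not any official Rocks probe):

```shell
# Pre-flight check: does this machine meet the RAM minimums?
# (1GB for a front-end node; 4GB recommended for a VM container)
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
mem_mb=$((mem_kb / 1024))
echo "Detected ${mem_mb} MB RAM"
if [ "$mem_mb" -ge 4096 ]; then
    echo "OK for a VM container"
elif [ "$mem_mb" -ge 1024 ]; then
    echo "OK for a front-end node only"
else
    echo "Below the Rocks minimum"
fi
```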
It is essential to ensure that the hardware is supported by the Rocks OS distribution. The Rocks OS is based on CentOS, so make sure your hardware complies with the CentOS/Red Hat Hardware Compatibility list. The general rule of thumb is to use widely supported, commodity hardware, especially when selecting network adapters and graphics adapters.
The basic Rocks network configuration assumes the presence of a public network and a private network for the VM container and its compute nodes. The front-end node should have two network interface cards, and the compute nodes require at least one card to connect to the private compute node network. Also, you will need a switch that connects the various VM containers to the front-end node. See Figure 1 for a sample Rocks network configuration.
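As a concrete illustration of this layout, a small deployment might use an address plan along the following lines (all addresses here are made up for the example; Rocks traditionally defaults the private cluster network to the 10.x range, and your site's public addressing will differ):

```text
Front end, eth0 (public):    192.168.1.10/24, gateway 192.168.1.1
Front end, eth1 (private):   10.1.1.1/24 (typical Rocks private default)
VM container 1 (private):    leased by the front end via DHCP/PXE
VM container 2 (private):    leased by the front end via DHCP/PXE
Switch:                      connects eth1 and all VM containers
```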
Preparing the Installation
Insert the Rocks DVD (or boot CD) and boot the system off of the CD/DVD drive. If you are using CDs, insert the Rocks Kernel/Boot CD first. Rocks will prompt for the various rolls. In the Rocks lexicon, a roll is a collection of software intended for a specific task. A base configuration requires the Kernel/Boot roll, Base roll, Web Server roll, and OS roll 1 and roll 2, as well as the Xen roll for cluster virtualization support.
The base configuration is not very exciting, so research the various rolls that are available and include the distributed and grid computing rolls as desired to really have fun with Rocks. Sun Grid Engine (SGE), Torque, and the high-performance computing (HPC) roll are good starting points for making the most out of a Rocks cluster.
A splash screen will prompt for a boot mode. To boot into the front-end installation, type frontend and press Enter. If this is not done within a few seconds, the Rocks installer will boot into a compute node installation. If this happens, reboot the system and type frontend at the prompt before it automatically boots again.
Once the Rocks install CD boots, it attempts to contact a DHCP server on each network interface; if no DHCP server responds on an interface, the installer prompts for a network configuration. Most likely, eth0 will get a lease, but eth1 (the private cluster network) will not have a DHCP server on it. In that case, either run a DHCP server on the private network as well or select manual configuration and enter the IPv4 address, gateway, and name server by hand. Once network connectivity is established, select OK to continue with the front-end installation.
A screen that says "Welcome to Rocks" will appear that lets you launch the installation off the DVDs, the CDs, or the network. The simplest approach is to download the DVD in advance and install from the DVD because it contains most of the rolls or software packages that are offered on the Rocks site.
With the Rocks installation DVD in the drive, click CD/DVD Based Roll, then select the rolls you want to install. A base Rocks system consists of the kernel, OS, web server, and base rolls. To configure a virtual cluster, the Xen roll is also required (Figure 2).
Now select the recommended rolls and click "Submit." The selected rolls will now appear on the left of the installation screen. Clicking Next begins the installation.
The cluster information screen provides identification for the cluster if it is registered with rocksclusters.org. Subsequent prompts ask for configuration information, such as the network settings for eth0 and eth1, the root password, the time zone, and the partitioning scheme.
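Once the front-end installation finishes, VM containers are registered by booting them while insert-ethers runs on the front end (choosing the VM Container appliance type), after which virtual clusters are carved out with the rocks command-line tool from the Xen roll. The following session is a sketch only; the exact subcommands and arguments vary between Rocks releases, and the IP address and node name are illustrative, so consult the Xen roll documentation for your version:

```shell
# On the front end: capture DHCP/PXE requests from booting container
# hosts and register them (select "VM Container" as the appliance type)
insert-ethers

# Confirm which rolls were installed
rocks list roll

# Allocate a virtual cluster hosted on the VM containers:
# one virtual front end plus two virtual compute nodes
# (public IP for the virtual front end is an example value)
rocks add cluster ip=192.168.1.11 num-computes=2

# Power on the virtual front end, then install its compute
# nodes from it just as you would on physical hardware
rocks start host vm frontend-0-0-0
```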