Administering virtual machines with MLN

Superclass

Listing 2 illustrates the use of a superclass. The superclass, which is called basic in Listing 2, provides a bundle of settings that are then available for host definitions. Listing 2 defines three hosts based on the superclass. The third host illustrates the technique of overriding a setting from the superclass. Of course, many more settings could be placed into the superclass or individual host stanzas as required. One project file can include multiple superclasses, and the superclasses can even be nested.

Substanzas, such as network interface definitions, that appear within both the superclass and a host definition are merged; directives in the host definition take precedence over those in the superclass.

Listing 2

Working with a Superclass

01 global { project italy }
02
03 superclass basic {
04    vmware
05    template rhel5.vmdk
06    memory 128M
07    network eth0 {
08       address dhcp
09       switch lan
10    }
11    service_host milano
12 }
13
14 host uno { superclass basic }
15 host due { superclass basic }
16 host tre {
17    superclass basic
18    memory 256M
19 }
20
21 switch lan { vmware }

Distributed Projects and Live Migration

So far, all the examples have been located on a single server. However, MLN supports distributed projects in which virtual machines are located on one or more remote hosts. The host and switch definitions in a distributed project require an additional service_host directive, which specifies the server location:

host rem-vm {
   vmware
   template rhel5.vmdk
   ...
   service_host bigvmbox
}

Superclasses can include the service_host directive as well (as shown in Listing 2).

For servers that will host virtual machines, the MLN daemon must be running. In addition, the MLN daemon's configuration file, /etc/mln/mln.conf, must define the server to be referenced in project files and must grant access to remote systems from which projects will be managed. For example, the MLN daemon configuration file on the target computer for the rem-vm virtual machine would contain directives like the following:

service_host bigvmbox
daemon_allow 192.168.10.*

In this case, the daemon_allow setting permits access only from systems on the local subnet.

The argument to service_host can be either a resolvable hostname or an IP address.

With MLN, it is a very small step from distributed projects to live VM migration in Xen environments.

To move virtual machines from one server to another, all you need to do is modify the service_host settings in the project file to point to the new destination and then rebuild the project with the mln upgrade command. For example, the following commands modify and rebuild the italy project shown in Listing 2, migrating its virtual machines from the server denoted by bigvmbox to the one denoted by newvmbox:

# vi italy.mln
Change bigvmbox to newvmbox
# mln upgrade -f italy.mln

Note that this command works even while the virtual machines in the project are running.
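
Alternatively, the edit can be scripted. The following sketch uses sed to switch the service_host target and then rebuilds the project; the hostnames come from the example above, and the minimal project file created here is just a stand-in for the full italy.mln of Listing 2:

```shell
# Demo stand-in for the real project file, pointing at the old server.
printf 'host rem-vm {\n   service_host bigvmbox\n}\n' > italy.mln

# Point the project at the new server; sed keeps a .bak backup copy.
sed -i.bak 's/service_host bigvmbox/service_host newvmbox/' italy.mln

# Rebuild so MLN live-migrates the running VMs (skipped here if the
# mln command is not installed).
if command -v mln >/dev/null 2>&1; then
    mln upgrade -f italy.mln
fi
```

The .bak copy makes it easy to roll the project file back if the migration has to be reversed.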

Live migration also requires a few preparatory steps:

  • All of the servers must run the MLN daemon. Access control for the daemon must grant access to the other servers involved in the migration.
  • The virtual machines must use shared storage to hold images that will be accessible to all relevant servers. The best solution is usually a storage area network (SAN) controlled by the logical volume manager (LVM), but you can also use a shared NFS directory for testing, especially when performance is not a consideration. The location is specified in the host definitions in the project (see Listing 3).

In addition, with a SAN, the /etc/mln/mln.conf file on all relevant servers must contain the san_path directive. Its argument specifies the location of the SAN: either the volume group name (matching the lvm_vg directive in the host definitions) or the local mount point for the SAN.
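
For example, the mln.conf on the server bigvmbox might combine the earlier access settings with a san_path entry (the volume group name vg_vms is an invented placeholder):

```
service_host bigvmbox
daemon_allow 192.168.10.*
san_path vg_vms
```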

Xen must be configured to allow migration on all relevant servers via the following entries in the Xen daemon's configuration file /etc/xen/xend-config.sxp:

(xend-address '')
(xend-relocation-hosts-allow '^localhost$ ^.*\\.ahania\\.com$')

The argument in the second directive is a single-quoted list of regular expressions specifying allowed hosts. In this case, we specify the local host and hosts anywhere within the ahania.com domain.

Listing 3

MLN with a SAN

01 host odysseus {
02      xen
03      lvm
04      lvm_vg volume-group-name
05      service_host scylla
06      ...
07    }
08
09 host ulysses {
10      xen
11      filepath /nfs-mount-point
12      service_host charybdis
13      ...
14 }

One Is Easy, and So Are 60

Consider the case of an operating system class taught to, say, 120 students. The students are organized in groups of two for lab work, with each group using a network of one Linux and one Windows virtual machine to solve the course exercises. The Linux virtual machine will have two network cards and share its connection to the LAN with the Windows virtual machine.

The challenge for the instructor is to create 60 of those mini-networks quickly, each with an individual public IP address and password.

The main philosophy for this task is "design once, deploy often." All 60 mini-networks must be as consistent as possible, and the system also must be easy to reconfigure, so if you later decide that the Windows virtual machines need more memory, you can modify all of them easily. Putting all of the virtual machines into a single project might seem like an easy way to accomplish this, but a single project would make it more difficult to manage one group's virtual machines separately.

A better solution is to use one project per student group. However, you do not need to write 60 complete project definitions. The #include statement lets you factor out the shared configuration, so each project file contains only the information unique to that project. For example, you can create 60 small project files that all point to the same main configuration file (see Listing 4).

Listing 4

Use of the #include Statement

01 global {
02    $gnum = 03
03    $userpasswd = unique-value-1
04    $rpasswd = unique-value-2
05    $vncpasswd = unique-value-3
06    project = os$[gnum]
07 }
08
09 #include oscourse.mln

The items beginning with a dollar sign are variable definitions, which will be used within the main configuration file.
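
Such a set of project files could be generated with a short shell script, sketched below. The os<NN>.mln filenames follow the project naming from Listing 4, but the passwd-style placeholder passwords are invented; a real deployment would substitute randomly generated values.

```shell
#!/bin/sh
# Generate one small project file per student group (60 groups).
# The placeholder passwords below are illustrative only -- substitute
# randomly generated values in real use.
i=1
while [ "$i" -le 60 ]; do
    gnum=$(printf '%02d' "$i")
    cat > "os$gnum.mln" <<EOF
global {
   \$gnum = $gnum
   \$userpasswd = user-passwd-$gnum
   \$rpasswd = root-passwd-$gnum
   \$vncpasswd = vnc-passwd-$gnum
   project = os\$[gnum]
}

#include oscourse.mln
EOF
    i=$((i + 1))
done
```

Each generated file mirrors Listing 4, differing only in the group number and the passwords.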

Listing 5 shows the main file, oscourse.mln. (Note that this file has no global stanza.)

Listing 5

oscourse.mln

01 superclass common {
02    xen
03    lvm
04    memory 256M
05    network eth0 {
06       switch lan
07       netmask 255.255.255.0
08    }
09    users {
10       osuser$[gnum] $userpasswd
11    }
12    root_passwd $rpasswd
13 }
14
15 switch lan { }
16
17 host ubuntu {
18    superclass common
19    template os_linux_1.ext3
20    memory 128M
21    network eth0 {
22       address 10.0.0.1
23    }
24    network eth1 {
25       address dhcp
26    }
27 }
28
29 host win {
30    superclass common
31    hvm
32    template winXP.template
33    vncpasswd $vncpasswd
34    vncdisplay 300
35 }

The hvm directive in the definition of the virtual machine running Windows XP indicates that it uses Xen full (hardware) virtualization, whereas the Linux virtual machine uses Xen paravirtualization. Because the Windows system is fully virtualized, its networking configuration is internal to the virtual machine itself (it uses the IP address 10.0.0.2), so MLN does not configure it, and its definition contains no network substanza. This configuration file also introduces the root_passwd keyword, as well as two keywords that set up the VNC display.

With a shell or Perl script, you can loop over the various project files, running mln create, mln upgrade, or both as needed, for example, when a customized template file used by the Linux virtual machine changes. You can also run the mln commands manually.
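
Such a loop can be as simple as the following sketch, which assumes the per-group files are named after their projects (os01.mln, os02.mln, and so on); the MLN_CMD variable is a hypothetical convenience that lets you dry-run the loop with echo:

```shell
#!/bin/sh
# Rebuild every group's project after a template or memory change.
# MLN_CMD defaults to mln; set MLN_CMD=echo for a dry run.
MLN_CMD=${MLN_CMD:-mln}

for f in os*.mln; do
    [ -f "$f" ] || continue    # skip the literal glob if nothing matches
    "$MLN_CMD" upgrade -f "$f"
done
```

Because mln upgrade only changes what differs from the running configuration, rerunning the loop after a small edit to oscourse.mln touches all 60 projects consistently.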

Read full article as PDF:

022-030_mln.pdf  (750.70 kB)
