Five OpenStack solutions tested

Red Hat Installation

Red Hat lists four requirements to proceed with installation: a RHEL 7 installation DVD, a network connection to the Internet, a computer, and 30 to 45 minutes of your time. Red Hat divides the installation process into six sections:

  • Equip the computer with RHEL 7
  • Register the system on the customer portal
  • Remove unneeded software repositories from the Yum configuration
  • Install a few auxiliary tools for the package manager
  • Adjust the repository entries
  • Install available patches and updates

This process can take some time, depending on how much software needs to be installed and updated. The final step is to disable NetworkManager and restart the system. The Getting Started Guide suggests installing PackStack [7] to get OpenStack up and running easily (Figure 1).
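Condensed into shell commands, the preparation looks something like the following sketch; the pool ID is a placeholder, and the repository names are assumptions that depend on your subscription:

subscription-manager register                       # register the system on the customer portal
subscription-manager attach --pool=<POOL_ID>        # placeholder pool ID
yum -y install yum-utils                            # auxiliary tools for the package manager
yum-config-manager --disable '*'                    # drop unneeded repositories from the Yum configuration
yum-config-manager --enable rhel-7-server-rpms rhel-7-server-openstack-5.0-rpms   # repo IDs are examples
yum -y update                                       # install available patches and updates
systemctl stop NetworkManager; systemctl disable NetworkManager
reboot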

Figure 1: Installing OpenStack in one fell swoop – thanks to PackStack.

The Puppet-based PackStack runs in the background, and the --allinone option lets you set everything up in one fell swoop, without preliminary considerations about the space you will need or the planned network setup. What helps beginners, however, also makes life easier for experienced OpenStack users: You can configure several computers with PackStack and customize the preconfigured default values to suit your own needs. For die-hard admins, instructions are supplied that lead them through a completely manual setup.
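In practice, both paths run through the same tool. A minimal sketch, assuming the packstack package is already installed from the Red Hat repositories:

packstack --allinone                      # single-node setup without any planning
packstack --gen-answer-file=answers.txt   # write a template containing the default values
vi answers.txt                            # customize networks, passwords, and node IP addresses
packstack --answer-file=answers.txt       # roll out the adjusted configuration

The answer file is also the route to multi-node setups: Enter the IP addresses of additional hosts, and PackStack configures them over SSH.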

Either way, the user sees a login screen at the end of the procedure. Fans of shell access should take a look at the /root/keystonerc_admin and /root/keystonerc_demo files. Thanks to additional services such as the Red Hat Enterprise Linux OpenStack Platform Installer and the included CirrOS image [8], the Red Hat installation is thankfully a non-event, especially in view of the complexity of the OpenStack stack.
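A quick smoke test from the shell might look like the following; nova and glance are the Icehouse-era command-line clients, and the image name is whatever PackStack registered:

source /root/keystonerc_admin   # sets OS_USERNAME, OS_PASSWORD, OS_AUTH_URL, and friends
glance image-list               # the bundled CirrOS image should appear here
nova list                       # no instances yet, but the API answers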

Red Hat Enterprise Linux OpenStack Platform Installer is a kind of control center for deploying OpenStack and its components. Puppet again runs in the background, actively supported by Foreman. A PXE service that boots the clients runs on the admin server. Unfortunately, we struggled with a node-provisioning failure [9] during our tests.

The cloud admin needs to study the documentation in detail before getting started, because decisions and configurations made here cannot always be corrected later. A Live version is also available; however, it is unclear whether this method is an option with RHEL 7 underpinnings. In our lab, the hard disk installation drew attention to itself by retroactively installing a large number of software packages; it should be less touchy. If you run

yum -y install rhel-osp-installer

you indicate that the server will later be used as the RHEL OSP control center.
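Once the package is in place, the control center itself is set up by a script of the same name; a sketch, assuming the interactive mode described in the Red Hat documentation:

rhel-osp-installer              # interactive setup of Foreman, Puppet, and the PXE/DHCP services

The answers given here feed the Foreman configuration that later provisions the discovered hosts.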

Use of the RHEL OSP installer is largely intuitive. After the PXE boot of a server, you will find it on the Discovered Hosts page (Figure 2). The next step is to set up a basic operating system and configuration using Puppet modules.

Figure 2: Future OpenStack machines waiting for their final installation with Red Hat.

SUSE Installation

SUSE, like Red Hat, describes the installation of its cloud comprehensively and in detail. However, a few considerations are necessary when starting for the first time. For example, SUSE Cloud expects at least three computers for an infrastructure. The OpenStack control center runs separately from the node that provides compute, storage, or both; this distinction is fairly artificial and unnecessary for your first steps. The third computer is the Admin server. It has two functions: as a PXE boot server that installs SLES on the future OpenStack computers and as a Crowbar [10] master, which builds the OpenStack deployment.

The documentation describes hardware requirements for the Admin and Control machines in great detail. Future cloud admins need to pay particular attention to the network configuration. This proves to be significantly more complex on SUSE than on Red Hat. You need to keep an eye on no fewer than five subnets (Figure 3), and the selected configuration cannot be changed again later. Good planning is required, because corrections only work if you start again from scratch.

Figure 3: The network configuration for SUSE Cloud requires advance planning.

Puzzling out the network configuration seems to be the be-all and end-all of installing the SUSE Cloud. This includes setting up the software repositories. In the simplest version, the Admin computer takes on the role of an SMT (Subscription Management Tool) server; otherwise, configuring a bastion host [11] is almost essential. However, things become much simpler after that: Simply start the corresponding computers, which receive their operating systems via the Admin server, and then wait for further instructions.
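A sketch of what this looks like on the Admin server, assuming the Crowbar-based layout and script names documented for this SUSE Cloud release:

vi /etc/crowbar/network.json    # define the subnets; they cannot be changed after the installation
screen install-suse-cloud       # sets up Crowbar, the PXE services, and (optionally) the SMT mirror

Only after this script has finished do you boot the future OpenStack nodes over the network.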

Now Crowbar comes into play: You can configure the computers belonging to the SUSE Cloud using what are known as barclamps [12], which encapsulate Crowbar functionality. Order is important: The documentation fails to mention that RabbitMQ must be set up before Keystone, a minor bug. Fortunately, the software is smarter than the documentation and gives the user the correct instructions (Figure 4). Otherwise, setting up the OpenStack cloud in SUSE proceeds exactly as specified in the installation instructions.

Figure 4: RabbitMQ must be running before Keystone.
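The same order can be maintained at the command line instead of the web interface; a sketch, assuming the crowbar CLI shipped on the Admin server:

crowbar rabbitmq proposal create default
crowbar rabbitmq proposal commit default   # the message queue has to be deployed first
crowbar keystone proposal create default
crowbar keystone proposal commit default   # only then does the Keystone barclamp apply cleanly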

In the minimal setup, the three computers are installed and ready for use at the end of the day. Our lab revealed no abnormalities. All further control is now the domain of the Control node: When a user enters the server name as a URL in their browser, a login screen appears. Although it has been customized by SUSE, its OpenStack origins are easy to see. Anyone who prefers to work at the command line will find the necessary shell variables stored under /root/.openrc.
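As on Red Hat, a short check from the Control node shows whether the deployment is alive; a sketch, again using the Icehouse command-line clients:

source /root/.openrc    # credentials written during the Crowbar deployment
nova service-list       # the compute services registered with the controller
cinder list             # the block storage API answers, even without volumes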

Apart from the necessary network considerations and the fact that the SUSE stack expects more than one computer, installing the SUSE Cloud is painless, with only a few problems running barclamps; anyone who also manages their servers with Crowbar gains new opportunities for synergy.

Features

SUSE Cloud is based on the OpenStack release Icehouse and dispenses with newer features such as those from Juno. Because almost all major IT environments already have a hypervisor infrastructure, or at least a corresponding strategy, SUSE fits in well: It supports KVM, VMware, Hyper-V, and Xen straight out of the box. Although LBaaS (load balancing as a service) and FWaaS (firewall as a service) are enabled and supported, Trove (DBaaS, database as a service) is only included as a technology preview.

Ceph (Firefly) is now also fully supported; the user can set up a Ceph network while installing the SUSE Cloud. The corresponding Crowbar barclamps are available and documented accordingly. Integrating an existing installation is, of course, also possible. The default configuration of SUSE Cloud does not enable the latest API version for all components (e.g., Nova and Cinder).

OpenStack from Red Hat is also based on Icehouse. Unlike SUSE, Red Hat supports Trove fully. Sahara (Big Data with Hadoop) is already included as a technology preview. The Red Hat version is slightly choosier on the hypervisor side: The user can only opt for KVM or VMware, and with KVM, it matters which RHEL version is running on the host. The Microsoft operating systems in the current version 7 are not certified.

Support for Gluster on the storage side is not a big surprise. The commercial version even expects Red Hat Storage Server; the integration of Inktank Ceph Enterprise (ICE) [13] is also the logical consequence of the distributor acquiring Inktank. As with SUSE, Red Hat also does not enable the latest version of the APIs for Nova and Cinder.
