Combine Ansible and Docker to streamline software deployment
Tutorials – Ansible
Streamline software deployment with Ansible and Docker containers.
Why do this?
- Quickly create environments that are reproducible on many machines and infrastructures.
- Automate your deployment for ease of reuse.
- Add the word DevOps to your CV and earn more pounds, euros, or dollars.
Computing is bigger than it's ever been. There are now more devices producing and processing more data than last year, and it's only going to get bigger in coming years. For every device in the hands of a user, back-end services are needed to keep everything running smoothly. The tools that have served administrators well for years now struggle to cope with server farms packed to the rafters with computers whirring away and needing maintenance.
Apt and Yum are great at installing software on one machine, but what if you need to update a package on a hundred machines? What if a configuration file needs to be updated but in a slightly different way on hundreds of machines?
Now imagine each of those machines is running a dozen containerized applications that can be launched or stopped depending on how much load is currently on the system. How do you manage now? We'll look at one option for keeping everything humming along smoothly – Ansible Container [1].
Ansible Container uses Docker to create containers that host your code and uses Ansible playbooks to set everything up inside these environments.
The first thing you need is all the software you'll be using to manage the environments you'll create. Ansible Container is written in Python and is available through PIP, but at the time of writing, this was throwing errors on some systems, so I opted to download straight from GitHub.
git clone https://github.com/ansible/ansible-container.git
cd ansible-container
sudo python ./setup.py install
The second bit of software you'll need is Docker, which will run behind the scenes and manage the containers themselves. You can run Ansible Container with either Docker Engine (normal Docker) or Docker Machine (which makes it easy to run containers in virtual machines). I opted for Docker Machine. Although it may seem a little over-complicated to run containers in separate virtual machines, it eases the setup for running locally.
The install is a little involved and differs between distros. The basic process will be the same everywhere, but the commands will vary. The following works on Ubuntu 16.04. If you're unsure what to do, the process is well documented for different distros [2].
Docker maintains its own repositories with the latest software versions. With everything changing rapidly, it's best to stay up to date, or things may not work, so let's add these repositories to the system. You'll need a couple of things to be able to add the Docker repository to APT. You can get these with:
sudo apt-get install apt-transport-https ca-certificates
Cryptographic keys enable APT to verify that the downloads are really coming from Docker. You can install them with:
sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
Now you're ready to add the Docker repository to APT. Create the file /etc/apt/sources.list.d/docker.list
with your favorite text editor (you'll need sudo permissions) and add the following line:
deb https://apt.dockerproject.org/repo ubuntu-xenial main
With everything set up, you can now grab the latest version of Docker from the repositories:
sudo apt-get update
sudo apt-get install docker-engine
The Docker service runs in the background controlling the containers that are running. The command-line tool sends instructions to this service about what you want to do, so you need to ensure that the background service is running before you can do anything with the client:
sudo service docker start
You can make sure that everything is running properly with:
sudo docker run hello-world
If everything has gone well, you should see a message with the following text:
Hello from Docker!
This message shows that your installation appears to be working correctly.
That's Docker Engine set up. Now let's move onto Docker Machine, which is available as a binary file. You can download it ready to run with:
sudo sh -c "curl -L https://github.com/docker/machine/releases/download/v0.7.0/docker-machine-$(uname -s)-$(uname -m) > /usr/local/bin/docker-machine"
You'll need to enable execute permissions before you can run this file. Add this with:
sudo chmod +x /usr/local/bin/docker-machine
There is one thing left to get, but don't despair, this one's easy to install. You should find VirtualBox in your distro's repositories. On Ubuntu, you can install it with:
sudo apt install virtualbox
After all that installing, you're now ready to get into Ansible Container.
First, create a directory for your new container (we'll call ours ansible-test), then cd into it:
mkdir ansible-test
cd ansible-test
You'll need a few files for your project, and using the ansible-container init command will set everything up for you.
This will create a subdirectory called ansible that contains the critical files for your project, and they contain some example code that's commented out. The two most important files are container.yml and main.yml (Figure 1). Open up container.yml and uncomment lines so that it looks like Listing 1.
Listing 1
container.yml
This file defines the containers you want to run. In this case, it's a single one called web that's based on the Ubuntu Trusty image, has port 80 bound to localhost port 80, and runs Apache through dumb-init.
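Based on that description, the uncommented file looks roughly like the following sketch (the image tag, version key, and exact dumb-init command here are assumptions for illustration, not the article's verbatim listing):

```yaml
# Sketch of a container.yml along the lines described above --
# image tag and command details are assumptions
version: "1"
services:
  web:
    image: ubuntu:trusty
    ports:
      - "80:80"
    command: ['/usr/bin/dumb-init', '/usr/sbin/apache2ctl', '-D', 'FOREGROUND']
```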
Now open main.yml and uncomment lines until it reads as shown in Listing 2.
Listing 2
main.yml
This is an Ansible playbook that tells Ansible what to do to configure the container correctly. Let's look at the format of this file. It's text based, using the YAML markup language. In essence, it's one big list with various items at various depths. A hyphen adds a new item to the list, and a two-space indent adds another level to the list. In this case, the main list has two items (hosts: all and hosts: web). These items are plays, and the hosts line tells Ansible which hosts to apply each play to. In this project, there is only one host (called web), but there can be many. The hosts: all line tells Ansible to apply this play to every host that it knows about.
Inside the play, there's a sublist called tasks, which is the sequence of things that you want Ansible to do. Tasks typically have a name and an action (the name is optional but useful). The action has to link to an Ansible module that defines what you want to do at each stage. In the playbook in Listing 2, there are two different types of action: raw, which is just a command to be run via SSH, and apt, which uses APT to install or manipulate packages. There are many modules that enable us to do other things (Figure 2).
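As a concrete sketch of that structure (the task names and packages here are illustrative assumptions, not the article's exact Listing 2):

```yaml
# Two plays: one applied to every host, one only to the web container.
# Package names are assumptions for illustration.
- hosts: all
  tasks:
    # raw runs a plain shell command over the connection --
    # useful before Python is available for regular modules
    - name: Make sure Python is available for Ansible modules
      raw: apt-get update && apt-get install -y python

- hosts: web
  tasks:
    # apt installs or manipulates packages through APT
    - name: Install Apache
      apt: name=apache2 state=present
```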
You can now build the container by moving back to the directory in which you ran the first ansible-container command and running:
ansible-container build
This may take a little while to run, but it should end by successfully building the machine. There will be some terminal output like:
Exporting built containers as images...
Committing image...
Exported web with image ID
Once this has completed, you can start the container with:
ansible-container run
This will spin up a new Docker container and bind it to port 80 so you can access the web server. Point your web browser to http://localhost and you should see an Apache page.
Up to the Clouds
Now we'll move on to a more complex example – installing a Nextcloud server. This will have exactly the same structure as before (with a playbook in main.yml that tells Ansible how to configure your machine), but there will be more to do to get the server up and running.
To begin, create a new directory and run ansible-container init to set up the files needed. Again, the majority of the configuration will be in the main.yml file. This starts in the same way as in the previous example (Listing 3).
Listing 3
More main.yml Config
You've got dumb-init installed now, which allows you to run a single command as you start the machine. However, your container will need to run two services (Apache and MySQL), so you'll need to create a script that dumb-init can run to start both of these (see Listing 4).
Listing 4
Run and Start Apache and MySQL
Two modules are used to create the init.sh file: one to create the file itself, and lineinfile, which you use to add the lines you need. lineinfile is far more powerful than this: it can search for and replace particular lines using regular expressions. However, you don't need that capability here. You might have noticed that the init.sh file is placed in the root directory and anyone can execute it.
We're side-stepping some issues by setting it up this way. To deploy this service properly, you wouldn't want to set it up like this, but then you probably wouldn't want to run the database in the container as well (at least not without some clustering setup to allow it to repopulate itself if the container is recreated). This setup is designed to test out software before deploying it to a permanent home.
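A hedged sketch of what such tasks could look like (the script path, file mode, and exact start-up lines are assumptions, not the article's Listing 4):

```yaml
# Create /init.sh and append the two service start-up lines.
# Paths, modes, and commands are assumptions for illustration.
- name: Create an empty, executable init script
  file: path=/init.sh state=touch mode=0755
- name: Start MySQL when the container boots
  lineinfile: dest=/init.sh line="service mysql start"
- name: Keep Apache in the foreground so the container stays up
  lineinfile: dest=/init.sh line="apachectl -D FOREGROUND"
```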
The next set of tasks grabs all the packages that you'll need (Listing 5).
Listing 5
Grab the Packages
Here you can see how to iterate through a list of items, with each option being inserted as {{ item }} in the command.
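The pattern looks something like this (the package list is an assumption; the article's Listing 5 may name different packages):

```yaml
# {{ item }} is replaced by each list entry in turn.
# Package names are assumptions for illustration.
- name: Install the packages the server needs
  apt: name={{ item }} state=present
  with_items:
    - apache2
    - mysql-server
    - php5
    - php5-mysql
```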
With everything set up, you only have to download the latest version of Nextcloud (Listing 6).
Listing 6
Download Nextcloud
This uses the unarchive module to pull the software from a remote repository straight into the web root on the host. The copy=no option to unarchive is needed because of a bug in Ansible when using remote_src with unarchive. The final task here makes sure that files in the web root are owned by the correct user.
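A hedged sketch of those two tasks (the download URL, web root path, and user name are assumptions, not the article's Listing 6):

```yaml
# Pull the Nextcloud tarball straight into the web root, then fix ownership.
# URL, paths, and user names are assumptions for illustration.
- name: Unpack Nextcloud into the web root
  unarchive:
    src: https://download.nextcloud.com/server/releases/latest.tar.bz2
    dest: /var/www/html
    copy: no
- name: Give the web server user ownership of the files
  file:
    path: /var/www/html/nextcloud
    owner: www-data
    group: www-data
    recurse: yes
```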
With main.yml set up, you just need to adjust container.yml. The only difference between this and the previous example is that you need to tell dumb-init to run the init.sh script rather than the Apache start script:
command: ['/usr/bin/dumb-init', '/init.sh']
Now you're ready to launch your containers with the same commands as before:
ansible-container build
ansible-container run
If you point your web browser to localhost/nextcloud, you'll be able to enter the details needed to finish the Nextcloud install. The database login is root with no password. Again, this isn't the most secure setup, but this database setup isn't designed for live use. The best setup depends a lot on your current system, whether you've already got a database server you can use, and the amount of load you're expecting on the system.
When the containers are running, you might want to interact with them – maybe just to have a poke around in the system, or maybe to add or remove something without having to rebuild the container. Ansible Container creates your containers, but they're run through Docker, so this is the tool you'll need to access them (see the box entitled "Why Not Dockerfiles?"). You can see a list of currently running Docker containers with:
Why Not Dockerfiles?
Docker does of course have its own provisioning system – Dockerfiles – that can be used to set up containers. However, the problem with Dockerfiles is that they can only be used with Docker. Effectively, they bundle everything up into a single technology stack for both your provisioning and hosting.
With Ansible Container, you separate this out. Docker is used to run the images, but the provisioning is done via Ansible playbooks. This means you can use Ansible Container to create your development environment. You can also deploy with Ansible Container if you wish; but, if this isn't the right choice for you, the same playbooks you used to build your containers can be used to set up virtual servers or physical servers.
By decoupling the provisioning from the hosting, Ansible Container gives you far more flexibility than Dockerfiles.
sudo docker ps
You can then attach a Bash session (as root) to the container using the ID listed in the first column of the output of the previous command with:
sudo docker exec -i -t <id> /bin/bash
With this, you have a container up and running Nextcloud, and you can connect to it via a terminal to take care of any admin (Figure 3). By using a combination of Ansible and Docker, you can easily customize your containers to perform almost any function and share the configurations online using Ansible Galaxy (see boxout). You can then deploy your handiwork on almost any infrastructure, giving you a great combination of ease, flexibility, and power.
Ansible Roles and Galaxy
In this tutorial, we've built a playbook that sets up our server. It works, and we can use it. However, it's not the most reusable script. For example, if you now wanted to set up another server that used MySQL and Apache, you'd have to copy or rewrite the part of the playbook that installs and enables this. In this example, everything's quite simple, so it wouldn't be much work to do this. In more involved installs, however, this becomes a lot of duplication of effort and code. Ansible has a couple of ways of reducing the duplication that can make it much easier to manage large numbers of different configurations.
- Tasklists. The simplest option for sharing work between projects is tasklists. These are just YAML files that contain tasks, and you can include them in playbooks with a line like: include: <tasklist-file>.
- Roles. Rather than thinking about the tasks you want to perform on a server, you can think of the roles you want the server to perform. For example, you might want a server to perform the role of a LAMP web server. Ansible allows you to encapsulate the tasks necessary for this to work as a role. These roles can then be added to hosts, and Ansible will work out what needs to happen to the server to enable it to perform that role.
- Galaxy. Well-defined roles are often not project-specific. For example, the LAMP web server will just need a few variable changes to make it applicable to almost any project that needs a server like this. Ansible Galaxy [3] is a website that enables Ansible users to share roles so that they can be easily added to different projects without everyone having to create them from scratch (Figure 4). If you need to set up a piece of open source software, there's a good chance that someone has already shared a role for this.
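For example, once a suitable role exists (written locally or installed from Galaxy), attaching it to a host is just a line or two in the playbook. The role and file names here are placeholder assumptions:

```yaml
# Reuse shared work: apply a role to the web host and pull in a tasklist.
# "lamp-server" and "common-tasks.yml" are placeholder names.
- hosts: web
  roles:
    - lamp-server
  tasks:
    - include: common-tasks.yml
```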
Infos
- Ansible Container: https://www.ansible.com/ansible-container
- Docker Machine: https://docs.docker.com/machine/install-machine/
- Galaxy: https://galaxy.ansible.com