Docker 101

Tutorials – Docker

Article from Issue 215/2018

You might think Docker is a tool reserved for gnarly sys admins, useful only to service companies that run complicated SaaS applications, but that is not true: Docker is useful for everybody.

Docker [1] manages and runs containers, which act like slimmed-down operating systems. A container is similar to a virtual machine, but it relies heavily on the underlying operating system (called the "host") to work. Instead of building a whole operating system with emulated hardware, its own kernel, and so on and so forth, a container uses everything it can from the underlying machine and, if it is well designed, implements only the bare essentials needed to run the application or service you want it to run.

Whereas virtual machines are designed to run everything a regular machine can run, containers are usually designed to run very specific jobs. That is why Docker is so popular for online platforms: You can have a blogging system in one container, a user forum in another, a store in another, and the database engine they all use in the background in another. Every container is perfectly isolated from the others. Docker allows you to link them up and pass information between them. If one goes down, the rest continue working; when the time comes to migrate to a new host, you just have to copy over the containers.

But there's more: Docker is building a library of images [2] that lets you enjoy whole services just by downloading and running them. These images are provided by the Docker company or shared by users and range from the very, very general, like a WordPress container [3], to the very, very niche, like a container that provides the framework for running a Minetest [4] server [5].

This means exactly what you think it means: Download the image, run it (with certain parameters), and your service is ready, madam – no dependency hunting, very little configuring, and not much more beyond hooking up the service to a database (running in another container) and setting your password as the service administrator.

Getting Started

To enjoy the marvels of Docker, first install it on your box. Most, if not all, of the main distributions have relatively modern versions of the Docker packages in their repositories. In Debian, Ubuntu, and other Debian-based distributions, look for a package called docker.io; in Fedora, openSUSE, Arch, Manjaro, Antergos, and others, it is simply docker. You will also find official and updated versions of the software for several systems at the Docker website [6].

Once Docker is downloaded and installed, check that the daemon is running:

systemctl status docker

If it is not, start it and enable it so that it runs every time you boot your machine:

sudo systemctl start docker
sudo systemctl enable docker
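
On systemd-based distributions, you can also do both things in one step with the --now flag, which starts the unit as it enables it:

```shell
# Enable the Docker daemon at boot and start it immediately
sudo systemctl enable --now docker
```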

Now that Docker is running, it is time to get some images.


An image is similar to an ISO image you would use to install a GNU/Linux operating system, except you don't need to burn it to a DVD or USB thumb drive.

You can use the docker utility to search for images like this:

docker search peertube

Docker will show you all the available images that contain the word "peertube" in their name or description (Figure 1). It will also show each image's rating given by users – more stars is better.
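
If the list is long, the standard --filter and --limit options of docker search can narrow it down:

```shell
# Show at most five images that have been rated with three stars or more
docker search --filter stars=3 --limit 5 peertube
```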

Figure 1: You can search Docker for images just as if you were using your software manager.

To install an image, you can pull it from a repository:

docker pull chocobozzz/peertube

This will download a PeerTube image (see the "What is PeerTube?" box) from Docker's repository and add it to your roster.

What is PeerTube?

PeerTube [7] is a video portal service akin to YouTube and Vimeo (Figure 2), but without any of the dumb restrictions of those closed and proprietary alternatives. It is called PeerTube because anyone can set up a server and join a federated network of PeerTube instances; any video a user uploads to one instance gets propagated to the other instances. All instances share the load of streaming the videos to visitors using P2P technology.

Figure 2: PeerTube is a much more democratic and freedom-respecting video platform than YouTube and Vimeo.

You can check that the image is now installed by running:

docker image list

Among other things, the list will give you a unique identifier (just in case you have two images with the same name) and will tell you how much space the image takes up on disk.

You could also just run the image, even before downloading it. The command

docker run chocobozzz/peertube

will have Docker look for the PeerTube image on your hard disk, and, if it can't find it, it will download it, drag in all the dependencies it needs (including other images, like an image for a PostgreSQL server), and run it (Figure 3, top).

Figure 3: Running an image not already on your hard disk makes Docker download it and then run it.

When you run an image, Docker creates a container with the software running inside it. In many cases, it will show the software's output so you can check that everything is working correctly (Figure 3, bottom). In this case, the output tells you that your PeerTube instance is running on localhost. However, if you visit http://localhost:80 with your browser, you probably won't see the PeerTube interface, because Docker sets up its own network for its containers.
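
An alternative to hunting down the container's IP is to publish the container's port on the host with the -p option when you start the container. Assuming the service listens on port 80 inside the container, as the output above suggests, something like this would expose it on the host's port 8080:

```shell
# Map port 80 inside the container to port 8080 on the host
docker run -p 8080:80 chocobozzz/peertube
```

With that mapping in place, http://localhost:8080 would reach the service directly.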

To know which IP PeerTube is running on, first list your running docker containers like this:

docker container list

This will give you a container ID (something like 8577b5867b93) and a name that Docker makes up by mashing together random words (something like hopeful_volhard) for your container. You can use either to identify your container and get some details using:

docker container inspect <container_id_or_name>

Toward the end of the output, you will see a line that says "IPAddress": followed by, well, an IP address. If you haven't changed Docker's default configuration, it will be something like 172.17.0.2. Point your browser at that, and … Voilà! PeerTube (Figure 4).

Figure 4: Setting up a PeerTube video platform requires virtually no work if you use a Docker image.

You can stop a container with stop:

docker stop <container_id_or_name>

And start it again with start:

docker start <container_id_or_name>

Using run creates a completely new container from your original image. If you made changes (like creating or modifying a file) inside a previous container based on the same image, those changes will not be in the new container. The "Getting Rid of Stuff" box explains how you can cleanly remove both containers and images.

Getting Rid of Stuff

List the images you have installed and use the ID of the one you want to remove to delete it:

docker image rm <id_number>

You may get an error informing you that the image is in use or needed by a certain container. Note that, even if all your containers are stopped, they are not necessarily removed; they are sitting there waiting to be restarted. You can see all your containers, even those that are not running, with:

docker container list --all

and then you can remove the offending container with:

docker container rm <container_id_or_name>

After that, you can go back and remove the image.
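
If you just want to sweep away everything that is stopped or unused, Docker has dedicated prune subcommands (both ask for confirmation before deleting anything):

```shell
# Remove all stopped containers
docker container prune

# Remove all images not used by any container
docker image prune --all
```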

Inside the Container

Finishing off the configuration of PeerTube would require an article of its own (watch this space!), so I'll move on to a more generic image for experimentation. Grab yourself a Linux distro image, like Ubuntu,

docker pull ubuntu

and run it with:

docker run -i -t ubuntu bash

After a few seconds, Docker will dump you into a shell within the container. Unpacking that last command line: the -i option tells Docker that you want an interactive exchange with the container, which means that the commands you type into the host's stdin (usually your shell) will be pushed into the container. The -t option tells Docker to emulate a terminal over which you can send the commands. You will often see both options combined as -it.

Next comes the ID or name of the image you want to interact with (ubuntu in this case). Finally, you pass the name of the command you want to run, in this case a Bash shell.

Find out what the name or ID of the container is (docker container list), and you can open a new shell in the running container using the exec command:

docker exec -it <container_id_or_name> bash

The instruction above logs you into the container, and you can install and remove software, edit files, start and stop services, and so on.

To stop the shell in the container, issue exit as you would to leave a regular shell. Once you log out from all the shells, and as long as no other processes are executing, your Ubuntu container will stop. Docker containers are designed to run one process and one process only. Although you can run more, Docker purists frown upon this and consider it suboptimal. When that unique process ends, Docker is designed to shut down the container.
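
You can see this one-process rule in action by running a one-shot command instead of a shell; the container lives exactly as long as the command does:

```shell
# The container starts, runs echo, and stops as soon as echo finishes
docker run ubuntu echo "Hello from the container"

# Nothing shows up here, because the container has already stopped...
docker container list

# ...but the stopped container is still around
docker container list --all
```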

However, if you want to keep a container running in the background (so you can have it run a non-interactive command sent to it from time to time), you can do this:

docker run -t -d <image_id_or_name>

As you saw above, -t tells Docker to create a faux terminal. The -d option stands for detached and tells Docker to run the container in the background.
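
Put together, a typical detached workflow looks like this; the shell variable CID is just a convenience for holding the container ID that docker run prints, not anything Docker requires:

```shell
# Start a detached Ubuntu container and remember its ID
CID=$(docker run -t -d ubuntu)

# Send it a command whenever you need to...
docker exec "$CID" ls /

# ...and stop it when you are done
docker stop "$CID"
```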

To run a command non-interactively in a running container and have the output appear under the command, enter

docker exec <container_id_or_name> ls

which will show the default working directory's contents. You can also show the contents of a directory that is not the default by adding the path, as you would with a regular ls command:

docker exec <container_id_or_name> ls </path/to/container/directory>

Talking of working directories, if you are not sure which is the container's current working directory, try this:

docker exec <container_id_or_name> pwd

Another thing you can do is share directories between the host and a container. For example:

docker run -it -v /home/<your_username>:/home/brian ubuntu bash

The -v option takes the path to a directory on the host (in this case, your own home directory) and maps it to a directory within the container. If either of these directories does not exist, Docker will try to create it for you.

Once you have shared your directory, from within the container, ls the /home/brian directory, and you will see the files from your own home directory. If you execute touch /home/brian/from_docker.txt from inside your container, you will see the file from_docker.txt pop up in your home directory on the outside.

This is very useful for when you want to use a Docker container to do some dirty work for you, like when you want to make an app for Android.


