Docker basics

Docker is a client-server application. Both the client and the server use the same binary, /usr/bin/docker; the difference comes from the flags passed when the binary is executed. You should consult the Docker documentation from time to time when searching for a particular command, and look at the parameters it accepts before executing it, using the --help flag.

The Docker daemon runs with root privileges and by default uses a Unix socket for communication between the server and the clients. Therefore, when you run docker commands you need root privileges: put sudo in front of every docker client command that you execute. If you get docker: command not found or something like /var/lib/docker/repositories: permission denied, you may have an incomplete Docker installation or insufficient privileges to access the Docker daemon on your machine.

The Docker daemon can also bind to a TCP socket with TLS enabled so that it can be contacted over the network, but for the purpose of this tutorial we will have the Docker daemon running on a Unix socket.

Getting docker images

Use the docker images command to list the currently available Docker images. Images are used as the root file system (/) of containers. For example, an image can contain a Fedora operating system with a web server or a web application. Images contain the necessary binaries, libraries and configuration files that are required for a certain application to run. This collection of files represents the single fundamental difference between different Linux distributions or versions of the same distribution. Since we started out fresh, there should be no images stored with the Docker daemon:
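A sketch of the listing step; on a fresh host the command prints only the column headers:

```shell
# List locally stored images; a fresh installation shows only a header row
# (REPOSITORY, TAG, IMAGE ID, ...) with no entries below it
sudo docker images
```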

Docker images are distributed using the Docker Hub, which is for Docker images what GitHub is for git repositories: namely a centralised place that facilitates collaboration and easy sharing of "recipes". Using the Docker client one can search for and pull Docker images from the Docker registry (Hub), which can then be used to build containers. Newly downloaded Docker images can also be used as a starting point for creating new images.

At this point we can pull an image from the Docker Hub and start our first container. We will start with a basic Fedora 22 Docker image.
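A sketch of the search step (the result set changes over time; the OFFICIAL column marks images maintained by the upstream project):

```shell
# Search the Docker Hub for images whose name contains "fedora"
sudo docker search fedora
```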

As you can see, there are many Docker images that contain the fedora keyword in their name, but we will use the official one. An official image is created by the developers of the company or project that owns that particular piece of software. Once we find an image that we are interested in, we can download it to our Docker host.
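The download and the subsequent listing can be sketched as follows (the image ID ded7cd95e059 referenced below is illustrative; your listing may differ):

```shell
# Download the official Fedora 22 image from the Docker Hub
sudo docker pull fedora:22

# Verify that the image is now stored locally
sudo docker images
```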

Listing the available images, we get image ID ded7cd95e059, which was tagged with different names: latest and 22. In Docker one can tag a particular image with different tags so that it can be easily referred to. There is no easy way to list the available tags for a particular repository. The first option is to go to the Docker Hub web page, search for the repository, and find the list of available tags there.

The second option is to use the REST API of Docker Hub to get this list:
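A sketch of such a request, assuming the v1 tags endpoint is still reachable (see the note below about the v2 migration):

```shell
# Fetch the list of tags for the ubuntu repository as JSON,
# then print only the "name" field of each tag
curl -s https://registry.hub.docker.com/v1/repositories/ubuntu/tags \
  | python3 -c 'import json, sys; print("\n".join(t["name"] for t in json.load(sys.stdin)))'
```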

What happens here is that we request the list of tags for the ubuntu repository by doing a simple HTTP GET request, then nicely format the JSON response, and finally display only the field that we are interested in, i.e. the name of the tag. After downloading the desired Docker image, we can fire up our first Docker container. Note that Docker Hub is currently migrating to v2 of the REST API, and some functionality might not work as expected.

Starting a Docker container

To start a new Docker container we need to use the Docker client command and instruct the Docker daemon to take the necessary steps to fire up our container. At this point we can see how the building blocks of Docker containers fit together. In order to have a running container, we need an image which will be used as the underlying "operating system" for the processes that run inside the container. In our case this is the fedora:22 image. Another important aspect is that in order to start a Docker container we need to tell it what process or processes to run inside the container.

A Docker container is in running mode as long as there is at least one process running inside it. Therefore, we cannot have a running Docker container without a running process inside it. An analogy can be made between the init process in a Linux operating system and the initial process that is used when we start a container. This initial process is the parent of all other processes that are started inside the container. As soon as this initial process is killed, the entire Docker container stops, along with all other processes running inside it.

Once you execute the docker run command you will notice that the prompt has changed and it should look similar to the following:
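A sketch of the command and the resulting prompt (the hostname 755ede9cd3d7 is illustrative; yours will differ):

```shell
# Start an interactive container from the fedora:22 image, running bash as PID 1
sudo docker run -it fedora:22 /bin/bash

# The prompt changes to something similar to:
# [root@755ede9cd3d7 /]#
```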

After executing the docker run command with the -it flags, basically requesting a console to the container, we end up with a prompt inside the container. The hostname of the container, in this case 755ede9cd3d7, is generated by the Docker daemon, but can also be enforced when starting the container by using the -h flag. Now, the question might be: how do we really know we are inside the container, and that the container is using the image that we requested initially? There are several ways to answer this:

  • First we do a cat /etc/redhat-release and we can see that the operating system is Fedora 22. Remember that the host operating system is a CentOS 7 machine.
  • We can open another terminal on the host machine and execute the docker ps command, which will list the running Docker containers.
  • We can also list the installed packages using rpm -qa in the terminal window of the container.

Another interesting point would be to get the list of running processes inside the container. In order to do this we need to install the procps package, which is missing from the default Fedora 22 image. At this point you will see that the ps aux command will output only two running processes:
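A sketch of these steps, run from inside the container (dnf is the package manager in Fedora 22):

```shell
# Inside the container: install the package that provides ps
dnf install -y procps

# List the running processes; only two show up: bash (PID 1) and ps itself
ps aux
```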

  • The initial process that started the container (in the listing above the one with PID 1)
  • The process corresponding to the command that we just executed (the ps aux command itself)
Compare the output of this command with the output you get by executing the same command on the host operating system. One fundamental difference between a container and a virtual machine is that the container can leverage the capabilities of the host kernel without the need to spawn a fully running operating system for the processes that run inside it. What this means is that you can run a simple process inside the container without any additional overhead.

Tracking container changes

What we have done so far is start a container using a fedora:22 image and install the procps package on top. Therefore, the image that the container is running on is different from the one we started with. In order to track the exact changes done to the root filesystem of the container we can use the following command:
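Run from the Docker host against the running container (755ede9cd3d7 is the illustrative container from above):

```shell
# Show files added (A), changed (C) or deleted (D) in the container's filesystem
sudo docker diff 755ede9cd3d7
```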

This outputs the list of newly added or modified files. None of these changes are saved anywhere yet, so when we kill the container we would also lose them. A desirable scenario would be to create a new image from the currently running container, so that the next time we start a container we won't have to install the procps package again. To achieve this, we use another Docker command that snapshots the current container and creates an image that we can reuse in the future.
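This can be sketched with docker commit; the repository name fedora22-procps is a made-up example:

```shell
# Snapshot the running container into a new, reusable image
sudo docker commit 755ede9cd3d7 fedora22-procps

# The new image now shows up alongside fedora:22
sudo docker images
```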

Working with volumes

The root filesystem which represents the image the container runs on is a Union File System, in the sense that it can record the modifications done to files, as we've seen earlier by using the docker diff command. This type of filesystem incurs an overhead which might not be acceptable in some cases. Therefore, we would like to avoid storing data which is frequently accessed or modified on the root filesystem. In order to achieve this we need to use data volumes. Data volumes persist data, even when containers are destroyed. They can also be shared and reused among containers. To achieve this we'll create a directory on the Docker host and mount it inside the container at startup.
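A sketch using the host directory /tmp/data_store (the path from the SELinux note below), mounted at a hypothetical /data inside the container:

```shell
# Create a directory on the Docker host
mkdir -p /tmp/data_store

# Mount it inside the container at /data using -v host_dir:container_dir
sudo docker run -it -v /tmp/data_store:/data fedora:22 /bin/bash
```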

Any data that was previously in the directory will be visible inside the container and any new data written from inside the container will be visible on the Docker host.


If SELinux is enforcing on the Docker host machine, then you need to add the proper label to the directory which is going to be mounted inside the container, so that processes running inside the container can access it: chcon -Rt svirt_sandbox_file_t /tmp/data_store/.

Exposing ports

In order to make services running inside containers accessible from outside, we need a mechanism to expose ports. This can be achieved by passing the -p option when starting the container.
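A sketch that maps port 80 on the Docker host to port 8080 inside the container (a service listening on 8080 inside the container is assumed):

```shell
# -p host_port:container_port — traffic to host port 80 forwards to container port 8080
sudo docker run -it -p 80:8080 fedora:22 /bin/bash

# From another terminal on the host, check the PORTS column of the listing
sudo docker ps
```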

The docker ps command lists the status of the running containers and we can see in the PORTS section the mapping that we just added. Therefore, any traffic that comes to the Docker host machine on port 80 is then forwarded to our container on port 8080.

Linking containers

When running a more complex service, exposing ports might just not be enough. Sometimes we need to run several daemons in different containers, and these daemons need to interact with each other. Among the many actions that the Docker daemon takes when starting a container, it is also responsible for allocating IP addresses to containers. We can retrieve the IP address of a running container by using the docker inspect command:
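A sketch, assuming a running container named db (the --format template extracts just the address from the full inspection output):

```shell
# Print only the IP address of the container named db
sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' db
```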

Once we start a container, the Docker daemon is aware of the mapping between the container name and its IP address. Let's assume now that we would like to start a second container that needs to contact, over the network, a service running inside the first container. All containers are allocated IP addresses from the same subnetwork. Therefore, if we know the IP address of the first container, we can reach it directly from the second one.

But this approach is not practical, since a process running inside the second container cannot discover on its own the IP address of the first running container. This is where we can use the --link option of the docker run command to link two containers. The link option takes the name of a running container, and the Docker daemon will insert the necessary information into the /etc/hosts file of the new container so that it knows the IP address of the containers it is linked against. The Docker daemon can do this since it has a global overview of all the running containers on the local machine.
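A sketch, assuming a first container started with --name db; the alias after the colon in --link is the hostname the second container will see:

```shell
# Start the first container with a fixed name, kept alive by a long-running process
sudo docker run -d --name db fedora:22 /bin/bash -c 'sleep infinity'

# Start a second container linked to it; "db" resolves via /etc/hosts
sudo docker run -it --link db:db fedora:22 /bin/bash

# Inside the second container:
# cat /etc/hosts   # contains an entry mapping "db" to the first container's IP
```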