Docker Succinctly®
by Elton Stoneman

CHAPTER 1

Introducing Docker

What is Docker?

Docker is an application platform. It lets you package your application with everything it needs, from the operating system upwards, into a single unit that you can share and run on any computer that has Docker. Docker runs your application in a lightweight, isolated component called a container.

It’s a simple proposition, but it is hugely powerful. The application package, called a Docker image, is typically only tens or hundreds of megabytes, so it’s cheap to store and fast to move. When you run a container from the image, it will start in seconds and the application process actually runs on the host, which means you can run hundreds of containers on a single machine. Images can be versioned, so you can be sure the software you release to production is exactly what you’ve tested, and the Docker tools can even scan images for security vulnerabilities, so you will know if your application is safe.

With Docker, you can build your application image and know that it will run in the same way on your development laptop, on a VM in an on-premise test lab, or on a cluster of machines in the cloud. It’s a facilitator for some of the most popular trends in software delivery. You can easily add a packaging step into your continuous integration process to generate a versioned image for every commit. You can extend that to continuous delivery, automatically deploying the latest image through environments to production. In Docker, the packaging process is where development and operations meet, which means it’s a great start for the transition to DevOps. And having a framework for orchestrating work between many containers gives you the foundation for microservice architectures.

Docker is open source and cross-platform, and one of its ecosystem’s most compelling aspects is the Docker Hub—a public registry where organizations and individuals share their own application container images. On the Hub, you’ll find official, supported images for popular technologies such as Nginx, MariaDB, and Redis alongside custom community images, and you can share your own images, too. Images on the Hub can be as simple as a Hello World app or as complex as a fully distributed Hadoop cluster, and because the images are usually open source, navigating the Hub is a great way to get started with Docker.
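
A quick way to get a feel for what's on the Hub is to search it from the command line. For example, docker search lists public images matching a term (the exact output columns vary slightly between Docker versions):

$ docker search nginx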

In this chapter, we’ll do just that—we’ll get Docker installed, and we’ll run some containers using images from the Docker Hub. We’ll see how easy it is to get up and running with Docker, and we’ll begin to understand the power of the platform. In the rest of this e-book, we’ll dig deeper and walk through all you’ll need to know in order to be comfortable using Docker in production.

Installing Docker

Docker is a single product with three components: the background server that does the work; the Docker client, a command-line interface for working with the server; and a REST API for client-server communication.

The client is cross-platform, which means you can run it natively on Linux, Windows, and OS X machines, and you can manage Docker running locally or on a remote machine. The Docker server runs on Linux and on the latest versions of Windows.

You don’t need to be a Linux guru to use Docker effectively. The Docker team has put together packages for Mac and Windows that make use of virtualization technology on the host, so your Docker server runs inside a Linux VM on your OS X or Windows machine (you run the client locally and talk to the server through the REST API exposed on the VM).

Note: The latest Docker for Mac and Docker for Windows packages require up-to-date operating system versions: OS X Yosemite or Windows 10. If you’re running an older version, you can still use Docker with the Docker Toolbox. It’s an older package that uses VirtualBox to run the Linux VM, but you use it in the same way.

Figure 1 shows the different options for running the Docker Engine on various operating systems.


Figure 1: Running Docker on Windows, Mac, and Linux

That figure may look complex, but it’s all wrapped up in simple installations. Knowing how Docker actually runs on your machine is a good idea, but the installer only takes a few minutes to download, and the installation itself is just a couple of clicks. Docker’s documentation is first-rate, and the Get started section on Docker Store includes detailed instructions for Mac, Windows, and Linux.

Note: On Windows, you’ll need to have Hardware Virtualization (VT-x) enabled in the BIOS to run the Docker Linux VM, and after installing you’ll need to reboot. You can switch between Windows containers and Linux containers—the examples in this book use Linux containers.

After you’ve installed Docker, simply launch a command-line window (or the Docker Terminal if you’re using Docker Toolbox) and you can start running Docker client commands. Code Listing 1 shows the output from running docker version, which gives you details on the installed version of Docker.

Code Listing 1: Checking the Version of Docker

$ docker version

Client:

 Version:      17.11.0-ce

 API version:  1.34 (downgraded from 1.35)

 Go version:   go1.9.2

 Git commit:

 Built:        Fri Nov 24 16:01:38 2017

 OS/Arch:      darwin/amd64

 Orchestrator: kubernetes

Server:

 Version:      17.11.0-ce

 API version:  1.34 (minimum version 1.12)

 Go version:   go1.8.5

 Git commit:   1caf76c

 Built:        Mon Nov 20 18:39:28 2017

 OS/Arch:      linux/amd64

 Experimental: true

Docker reports separately on the client and the server because you might be using a local client to manage a remote server, and those could be on different versions or different platforms. Docker is built using Go, but it ships as compiled binaries, which means you don’t need to install the Go runtime beforehand.
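
For example, you can point your local client at a remote Docker server by setting the DOCKER_HOST environment variable, provided the remote engine has been configured to accept remote connections. The address here is just a placeholder:

$ export DOCKER_HOST=tcp://remote-docker-host:2376   # placeholder address
$ docker version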

With Docker installed, you’re ready to start running some containers.

Running containers

Docker images are packaged applications. You can push them to a central store (called a registry) and pull them on any machine that has access to the registry. The image is a single logical unit that contains the application package. In order to start the app, you run a container from the image.
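
As a sketch of the registry workflow, you can pull an image explicitly before running it, and push images of your own once they’re tagged with your repository name (the repository name here is hypothetical):

$ docker image pull nginx:alpine
$ docker image push my-account/my-app:1.0   # hypothetical repository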

Images are typically built to run a single process. If your app needs to work with other services, you run those services in their own containers and orchestrate them so that all the containers can work together (which you’ll learn about in Chapter 5, Orchestrating Systems with Docker).

When you run a container from an image, it may be a short-lived app that runs some functionality and then ends; it may be a long-running app that runs like a background service; or it may be an interactive container that you can connect to as though it were a remote machine.

Hello World

Let’s start with the simplest container you can run. With Docker installed and an Internet connection, you can run the command in Code Listing 2 and see the Hello World container in action.

Code Listing 2: Running Hello World

$ docker container run hello-world

Unable to find image 'hello-world:latest' locally

latest: Pulling from library/hello-world

b04784fba78d: Pull complete

Digest: sha256:f3b3b28a45160805bb16542c9531888519430e9e6d6ffc09d72261b0d26ff74f

Status: Downloaded newer image for hello-world:latest

Hello from Docker!

...

You’ll see some helpful text written out, but don’t be underwhelmed that your first container merely writes to the console. When you run the command, there’s a lot happening:

  • Your local Docker client sends a request to the Docker server to run a container from the image called hello-world.
  • The Docker server checks to see if it has a copy of the image in its cache. If not, it will download the image from Docker Hub.
  • When the image is downloaded locally, the Docker server runs a container from the image, and sends the output back to the Docker client.

With this image, the process inside the container ends when the console output has been written, and Docker containers exit when there are no processes running inside. You can check that by getting a list of running containers from Docker using the container ls (container list) command. Because the hello-world container has ended, there are no running containers and the command output will be empty, as in Code Listing 3.

Code Listing 3: Checking for Running Containers

$ docker container ls

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

Tip: You can see all your containers, including the ones that have exited, by running docker container ls --all, which lists containers in any status.

This type of container, which executes some code and then exits, is a very useful pattern. You can use the approach for containers that script repetitive tasks, such as backing up data, creating infrastructure in the cloud, or processing a message from a message queue. But containers are equally well suited to long-running background processes.
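
As a minimal illustration of the task-container pattern, you can run a container that executes a single command and then exits; a real task container would run a backup or deployment script instead of echoing a message:

$ docker container run alpine echo "task complete"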

Hello Nginx

Nginx is a powerful, lightweight, open-source HTTP server. It’s been growing in popularity for many years—as a Web server, it has been progressively taking Internet market share from Apache and IIS. With the growth of Docker, Nginx has seen an acceleration in popularity because it’s easy to configure, builds into a very small image, and has many features that gel nicely with orchestrated container workloads.

The Docker Hub has an official Nginx image that is maintained and supported by the Nginx team. It comes in several variations, but they fundamentally do the same thing—start the Nginx server process listening on port 80, inside a container. With Code Listing 4, you can run the smallest version of the Nginx Docker image, which is based on Alpine Linux.

Code Listing 4: Running Nginx in a Container

$ docker container run nginx:alpine

Unable to find image 'nginx:alpine' locally

alpine: Pulling from library/nginx

019300c8a437: Pull complete

2425a41f485c: Pull complete

26e59859b15d: Pull complete

a69539b662c9: Pull complete

Digest: sha256:6cf0606c8010ed70f6a6614f8c6dfedbdb5e2d207b5dd4b0fab846bbc26f263e

Status: Downloaded newer image for nginx:alpine

When you run that image, a container will start in the foreground, running the Nginx process in your terminal so that you can’t run any other commands. The container is listening for HTTP requests on port 80, but that’s port 80 inside the container, so we can’t reach it from the host machine. This container isn’t doing much, so we can kill it by ending the process with Ctrl+C.

Docker supports long-running background processes, such as web servers, by allowing containers to run in detached mode, so the container keeps running in the background. Code Listing 5 runs a new container from the same Nginx image, which will run in the background with the --detach flag and with port 80 published with the --publish flag.

Code Listing 5: Running Nginx as a Background Container

$ docker container run --detach --publish 80:80 nginx:alpine

a840ccbfc8652cb6d52b5489146a59e8468747f3372e38426fe3deb40d84372a

That command publishes port 80 inside the container to port 80 on the host. Ports can’t be shared, so this will fail if you have another process listening on port 80. However, you can publish the container port to any free port on the host: --publish 8081:80 maps port 8081 on the host to port 80 in the container.
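
For example, if another process is already using port 80 on your host, you could run a second Nginx container mapped to a different host port:

$ docker container run --detach --publish 8081:80 nginx:alpine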

The output from Docker shows the unique ID of the new container, then control returns to the terminal. You can check if the container is running with the container ls command, as in Code Listing 6.

Code Listing 6: Listing the Background Container

$ docker container ls

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                         NAMES

a840ccbfc865        nginx:alpine        "nginx -g 'daemon ..."   47 seconds ago      Up 45 seconds       0.0.0.0:80->80/tcp          heuristic_roentgen

The output tells us a number of things: which image the container is running; a short form of the container ID, starting with a840 in this case, which Docker generates uniquely; the container name, heuristic_roentgen, which Docker assigns randomly unless we supply a name; and the command running in the container, nginx. This container is running in the background, Nginx is listening on port 80 inside it, and we’ve published that port, mapping it to port 80 on the host machine.
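
If you’d rather pick the container name yourself, you can pass the --name flag when you run it. The name and host port here are just examples:

$ docker container run --detach --publish 8080:80 --name web nginx:alpine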

When requests come in to port 80 on the host now, they will be routed to the container, and the response will come from the Nginx process running inside the container. On Linux, the Docker Engine is running directly on your host machine, and Docker for Mac and Docker for Windows use native network sharing, which means you can browse to http://localhost and see the Nginx welcome page, as in Figure 2.


Figure 2: Browsing the Web Server Inside the Container

Tip: On older versions of Mac and Windows (using the Docker Toolbox), the Docker server runs inside a Linux VM on VirtualBox, which has its own IP address, so you won’t use the localhost address. To access ports mapped from Docker containers, find the IP address of your Docker VM by running docker-machine ip. It will give you an address like 192.168.99.100, and that’s where you browse.

Docker is an ideal platform for long-running background services. The Nginx web server running in this container uses next to zero resources when no one is accessing the site. By default, though, the container has no resource limit, which means that under peak load the Nginx container can grab more resources, and the container process can use 100% of the host’s CPU and memory, just as if it were running directly on the host.

You can run hundreds of background containers on a modestly specified server this way. Provided the usage patterns are varied and the container loads don’t all peak at the same time, the host can happily share resources between all the containers.
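
If you do want to cap what a container can use, Docker lets you set limits when you run it. This is a sketch using the --memory and --cpus flags with arbitrary values (the --cpus flag needs a reasonably recent Docker version):

$ docker container run --detach --publish 8082:80 --memory 256m --cpus 0.5 nginx:alpine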

Hello Ubuntu

The last type of container is one you run interactively. It stays alive as long as you’re connected to it with the Docker CLI, and it behaves like a remote connection to a separate machine. You can use containers in this way to evaluate images, to use images as software tools, or to work through the steps when you’re building up your own image.

The majority of Docker Hub images use Linux as the base OS, and Ubuntu is one of the most popular base images. Canonical publishes the official Ubuntu image, and they have integrated Docker Hub with their release cycle so that the latest Ubuntu versions are available on the Hub. You can run an interactive Ubuntu container using Code Listing 7’s command.

Code Listing 7: Running an Interactive Ubuntu Container

$ docker container run --interactive --tty ubuntu:16.04

root@dafaf06d4ceb:/#

With the --interactive and --tty flags, Docker runs the container interactively with terminal emulation (it’s commonly abbreviated to -it). The container is still running on the Docker server, but the client maintains an open connection to it until you exit the container. We’re using the Ubuntu official image, but in the run command we’ve specified a particular version of the image—16.04—that gives us the current Long Term Support version of Ubuntu.
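
Using the abbreviated flags, the same command looks like this:

$ docker container run -it ubuntu:16.04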

If you’re used to working with Ubuntu, however, you’ll find that the version running in this container doesn’t behave in quite the same way as the full Ubuntu Server edition. Code Listing 8 shows that the normal Linux commands, such as ls and cat, work as expected in the container.

Code Listing 8: Linux Commands in the Ubuntu Container

root@dafaf06d4ceb:/# ls -l /tmp

total 0

root@dafaf06d4ceb:/# cat /etc/hosts

127.0.0.1       localhost

::1     localhost ip6-localhost ip6-loopback

fe00::0 ip6-localnet

ff00::0 ip6-mcastprefix

ff02::1 ip6-allnodes

ff02::2 ip6-allrouters

172.17.0.5      dafaf06d4ceb

Note: There are some interesting entries in the hosts file. Docker injects some runtime details about the container into that file in order to help with discoverability between containers. We’ll see more of that in Chapter 5, Orchestrating Systems with Docker.

The version of Ubuntu in the Docker image is a heavily stripped-down version of Ubuntu Server, which means some of the most basic utilities aren’t available. To edit the hosts file, we might expect to use the Nano text editor, but it’s not installed, and if we try to install it, we’ll find that the package lists in the image aren’t up to date either, as shown in Code Listing 9.

Code Listing 9: Missing Utilities in the Ubuntu Image

root@dafaf06d4ceb:/# nano /etc/hosts

bash: nano: command not found

root@dafaf06d4ceb:/# apt-get install nano

Reading package lists... Done

Building dependency tree

Reading state information... Done

E: Unable to locate package nano

You can still use the container like any other Ubuntu installation: you can update the package lists with apt-get update and then install whichever tools you like. But you’re only changing this instance of the container, not the underlying image. When you run the exit command, the container stops, and your changes aren’t saved. The next time you run a container from the Ubuntu image, it will be the same minimal version of the OS.
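
For example, inside the container you could update the package lists and install Nano. The changes only live for the lifetime of this container:

root@dafaf06d4ceb:/# apt-get update
root@dafaf06d4ceb:/# apt-get install -y nano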

The Docker Hub has many such images that are intended to be used as a base image for your own apps. Ubuntu, Alpine, and BusyBox are popular, and they’re deliberately minimal. Having less software installed means less bloat, which means the images are smaller, and it also means a reduced attack surface because there are fewer packages with potential vulnerabilities.

Application containers are not meant to be treated like VMs or physical servers: you wouldn’t normally connect to a running container to fix an issue or patch the OS. Containers are so cheap to build and run that you would update the image instead, using a newer version of the base image if it has patches, then create a new container and kill the old one. Not only does that fix the problem with your application, but it also gives you an updated image with the problem fixed for any future containers you run.
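
A typical replacement workflow, sketched here with hypothetical image and container names, is to pull the updated image, remove the old container, and run a new one from the new image:

$ docker image pull my-account/my-app:1.1        # hypothetical updated image
$ docker container rm -f old-app                 # force-remove the old container
$ docker container run --detach --name new-app my-account/my-app:1.1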

Summary

We’ve seen what Docker does, learned how to use the Docker client, and looked at how the Docker Engine runs on different platforms. We’ve walked through the main usage patterns with Docker, running short-lived task containers that do a single job and then exit; we’ve looked at long-running containers that keep background tasks running as long as the container is running; and we’ve examined interactive containers that exist as long as your client keeps an open connection with them.

The foundations of Docker are very simple—applications are packaged into images, images are used to run containers on the Docker server, and the Docker client manages the containers. There’s a lot more to learn about Docker, and in the rest of this e-book we’ll cover the features that have made Docker a revolutionary technology in software delivery.

In the next chapter, we’ll look at packaging your own applications as Docker images.
