docker in a nutshell

Hakan Eröztekin
18 min read · Jun 19, 2022


original photo by Juan Pablo Daniel

Welcome to the one-stop shop for (almost) all aspects of Docker.

In this article, we’re going to look at the features of Docker in sufficient detail. We will cover a lot of topics that are listed right after this section. It will be useful whether you’re refreshing your knowledge or just starting to learn Docker.

Throughout the article, we will build a project. By the end, we will have a project that uses docker compose and runs multiple containers (one built from our custom image), a custom network, and a volume. Don’t worry if these terms sound odd; they are simpler than they look, and we’ll cover them one by one.

You can quickly copy the commands in the images by clicking the source code text below them. The complete project code is available on Github. You can check similar articles I’ve written before on my Medium profile. You can also contact me on LinkedIn.

Feel free to drop a comment and let me know whether you liked it or not, your questions, and if there’s an incorrect/missing part in the article.

Alright, let’s start!

table of contents

why docker
docker terms
docker images
docker containers
demo time: pull and inspect a docker image then run it
containerizing an app
dockerfile
demo time: nginx Dockerfile with custom homepage
demo time: run a spring app as a container
docker compose
docker networking
demo time: docker networking
docker volumes
demo time: docker volumes
credits
what’s next

Two notes before we proceed,

  • Feel free to click the titles here; I provide navigation links at the end of each section to eliminate the hassle of scrolling back and forth to find the related section.
  • We’re not going to cover docker stacks and security right now. Docker stacks help us deploy our applications across multiple hosts (clusters). We can achieve the same thing with Kubernetes, which we’ll discuss in the upcoming article. However, we can also talk about them if you want to; just let me know.

why docker

virtual machines vs containers (source code)
  • in the old old days, there were physical servers. it was common to run only one application per server, which was a huge waste of resources. servers occupied a lot of physical space and were hard to scale.
  • virtual machines (VMs) made it possible to run multiple applications on one server. but they wasted resources too, since each guest OS consumed a share of them, and booting an application was still slow.
  • containers, such as docker containers, make efficient use of resources and are way faster. they use fewer resources because containers share the host OS, and they are faster because our applications and all their dependencies are packaged into small, lightweight units.

By the way, you see Hypervisor in such graphics a lot. This fancy term means the software layer that creates and runs VMs.

Both VMs and containers solve the “but it works on my machine” problem. Once you containerize your app you can run it practically anywhere.

» go back to table of contents

docker terms

Let’s take a look at the terms you hear a lot when using Docker.

  • docker container is all your app needs to run. it includes the application code, all the dependencies, and a part of the OS.
  • docker image is all your app needs to run, very similar to the container. it’s like a stopped container. we’ll talk more about this relationship in the next section.
  • docker desktop is software to use Docker on your Mac or Windows. it installs two tools: the docker engine and the docker client.
  • docker engine handles docker container lifecycle, network, and OS operations.
  • docker client is a gateway (a portal) that connects your terminal to the docker engine so you can use your terminal to manage your docker containers.
  • dockerfile describes how to build an image.
  • docker compose describes how Docker should build and deploy an app.
  • docker volume is a shared folder between the host(s) and the container(s).

Don’t worry if you don’t get the terms yet. We’ll talk through them in the article.

» go back to table of contents

docker images

A Docker image contains everything required for an application to run. It includes application code, dependencies, and a part of the OS. Images are similar to classes in OOP whereas containers are similar to objects. A container is a running instance of an image.

If you have an application’s Docker image, the only other thing you need to run that application is a computer running Docker.

image vs container (source code)
» go back to table of contents

docker containers

A container is a runtime instance of an image. Similar to the OOP class and object analogy, you can instantiate multiple containers from the same image.

Let’s run an ubuntu container.

The command template is shown below,

docker container run image-name application-name
docker container run (source code)
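
For the ubuntu example, the concrete command is probably along these lines (a sketch; the exact command is in the linked source code):

docker container run -it ubuntu /bin/bash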

We use the -it flag for an interactive terminal. With this, we can run commands inside the container. /bin/bash is the container’s main application, so when we exit the shell, the container will terminate.

Containers are designed to be immutable. Don’t update them by, for example, logging in and changing the configuration.

Data written inside a container lives in its writable layer on the Docker host’s filesystem and won’t survive the container being removed or failing. So, you should use volumes to store data. We will talk more about volumes later in the article.

» go back to table of contents
» skip the demo, jump to containerizing an app

demo time: pull and inspect a docker image then run it

In this demo, we will,

  • check out the details of Docker in your system
  • pull an image
  • inspect the image
  • run it as a container

If you don’t already have Docker installed, you can install Docker Desktop. Start it and use the following command to ensure it’s up and running.

docker version

An example output,

docker version (source code)

As you can see, we have a client that connects our terminal commands to the Docker engine, and the engine that does the actual work.

We’re going to pull the nginx image,

docker image pull nginx
docker image pull (source code)
  • if we don’t specify a version (e.g. docker pull nginx:1.22.0), it pulls the latest tag.
  • it pulls the image from the public Docker repository, i.e. Docker Hub.

List the images in our host with the following command,

docker image ls
docker image ls (source code)

Inspect the image,

docker image inspect nginx
docker image inspect (source code)

The trimmed output shows that the image consists of 6 layers. These layers are generated during image creation via the Dockerfile. They can be the OS (e.g. nginx uses debian), dependencies, and source code. Docker caches layers, so if you build an image whose layers are already in the cache, the cached layers are reused to speed up the build.

Let’s run it!

docker container run --name my-container -p 3000:80 -d nginx
docker container run (source code)

Let’s examine what we do here,

  • we run the container in the background with the -d flag. it’s not attached to our terminal, so we can keep using our terminal normally.
  • we give a custom name to our container with the --name flag. it’s useful for identifying the container.
  • by default, the nginx container exposes port 80. here, we map host port 3000 to container port 80 with the -p flag so we can reach the container from the host (and beyond).
docker host and container port mapping (source code)

Open up a new tab in the browser and hit localhost:3000 or 127.0.0.1:3000 to access the container.

page of localhost:3000

Voila! We can access our container via browser. nginx shows its default page. Let’s update it!

We can execute commands in our containers with the exec command. We will use bash to open a shell inside the container.

docker exec -it my-container bash
docker exec (source code)

nginx keeps the static content at /usr/share/nginx/html/index.html. Update it with a welcome message. You can copy the command from the source code. Then type exit to leave the container terminal.

update nginx homepage (source code)
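
Inside the container, a one-liner like this does the job (the message text is arbitrary):

echo "Welcome to my custom nginx page!" > /usr/share/nginx/html/index.html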

Refresh the page to see the results.

Updated homepage

If the page has not changed, try the incognito mode.

Check our container details,

docker container ls
docker container ls (source code)

Notice the name and the port mapping.

To remove the image, we should first remove the containers which use it. Our container is still running, so stop it first with docker container stop my-container, then remove it,

docker container rm my-container
docker container rm (source code)

Use the following command to remove the image,

docker image rm nginx
docker image rm (source code)

We can also verify the image is deleted by listing the images via the docker image ls command.

Well done, we’ve pulled an image, run it, accessed it in the browser, and even hacked into it and updated the contents. Updating the container like this is not recommended since containers are immutable by design. Also, if we need to show our custom page, how do we automate it so that our custom container shows that homepage by default?

The answer is Dockerfile, which is our next stop.

» go back to table of contents

containerizing an app

Containerizing an app means making your application run in a container. For that, we create a Dockerfile which describes how to build an image for our app, then build the image and run it as a container.

We will take a look at the Dockerfile, then in the demo section, we will provide a custom homepage for our nginx container.

» go back to table of contents

dockerfile

A Dockerfile is the starting point for creating a container image — it describes an application and tells Docker how to build it into an image.

The directory containing the application and dependencies is referred to as the build context.

Let’s look at the contents of the Dockerfile.

Dockerfile (source code)

Every Dockerfile starts with a base image. In our Dockerfile, we use ubuntu. Our app has a custom label and environment variable, and it prints directory contents when started. Every container needs the main app to run, so it’s essential to use CMD or ENTRYPOINT.

For the sake of simplicity, assume that we have a table-of-contents.txt file in the src folder.
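
Putting the description above together, the Dockerfile looks roughly like this (the exact label/variable values and the /app path are my assumptions; the real file is in the linked source code):

FROM ubuntu
LABEL ARTICLE_NAME="docker in a nutshell"
ENV PUBLISHING_AT="medium"
COPY ./src /app
WORKDIR /app
CMD ["ls", "-l"]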

Let’s build our image,

docker build -t my-awesome-app .

We tag our image as my-awesome-app and use the current directory (.) as the build context (where our Dockerfile is located).

docker build (source code)

Notice the [3/3] notation. This means our image has 3 layers: one for the base image, one for the COPY command, and one for the WORKDIR command. Not every Dockerfile instruction creates a layer; some only add metadata, like LABEL.

Inspect our image,

docker image inspect (source code)

The trimmed output shows the PUBLISHING_AT environment variable, the ARTICLE_NAME label, and three layers.

Let’s run our image.

docker container run my-awesome-app
docker container run (source code)

We can list running containers with the following command,

docker container ls

However, the list is empty: since our application’s main command (ls -l) has already finished executing, our container has terminated.

We can list the most recently created container, whatever its state, with the following command,

docker container ls -l
docker container ls -l (source code)

Congratulations. We’ve built our custom image and run it as a container.

» go back to table of contents
» skip the demos, jump to docker compose

demo time: nginx Dockerfile with a custom homepage

Let's containerize our nginx app. Remember what we’ve done in the demo before,

  • use nginx image
  • update contents
  • run the container

We had run the container before updating the contents, but normally we want to see our custom page from the very first page load. So we will update the content before running the container.

That’s exactly what we’re going to do with a Dockerfile. We use the nginx image and copy in our content.

Dockerfile for nginx (source code)
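
A minimal version of that Dockerfile would be something like this (assuming the custom page sits in an index.html next to the Dockerfile):

FROM nginx
COPY index.html /usr/share/nginx/html/index.html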

As you see, we did not use CMD or ENTRYPOINT. This is because we’re going to use the CMD and ENTRYPOINT provided by the nginx image. You can check those with the docker image inspect nginx command.

Build the image,

docker image build -t my-app .

Run it,

docker container run --name my-app -p 3000:80 my-app

The result,

result of localhost:3000

Great. We have a custom webpage ready to use with our container.

» go back to table of contents
» skip the next demo, jump to docker compose

demo time: run a spring app as a container

Let’s get our hands dirty with an example closer to the real world. We’re going to build an image for our Spring application and run it as a container. We are going to add more Docker features like volumes, network, and compose to our example throughout the article. But first, let’s containerize a simple Spring app.

Don’t worry if you’re not familiar with Spring or Java; following along with the article will give you enough information. In any case, practicing while reading is the best option.

create a simple Spring app

You can initialize a Spring app via Spring Initializr. Create an app named spring-app and create a simple controller.

You can also get the code from Github and switch to the branch demo/1-basic-spring-app

git clone https://github.com/hakaneroztekin/java-spring-with-docker.git
Example controller (source code)
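
The controller can be as small as the sketch below (the endpoint and message are assumptions; the real code is on the branch above):

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HelloController {

    // a single endpoint returning a plain-text greeting
    @GetMapping("/")
    public String hello() {
        return "Hello from the Spring app!";
    }
}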

Run mvn clean package for your application. This will generate a jar file at target/spring-app.jar that we’ll use to run our application.

Run the following command in your terminal,

java -jar target/spring-app.jar
Our app is started

We are going to take a similar step for our Dockerfile.

Dockerfile for Spring app (source code)
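
It boils down to copying the jar into a Java base image and running it (the base image tag is an assumption; the real file is in the repository):

FROM openjdk:17
COPY target/spring-app.jar spring-app.jar
ENTRYPOINT ["java", "-jar", "spring-app.jar"]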

Build our image,

docker image build -t spring-app .

And run it,

docker container run -d -p 5000:8080 --name my-spring-app spring-app

Wait a few seconds for it to boot up, and voila :)

Result of localhost:5000

Congratulations!

As you’ve seen, we need to generate our jar manually for our Dockerfile to work. We can also automate this with the Dockerfile.

multi-stage build (source code)

This looks cool but also frightening. It is simpler than it looks. This is called a multi-stage build. Multi-stage builds have multiple FROM instructions (one for each stage), and later stages can copy content from earlier ones. It is a good practice since it reduces the final image size.

In the Build stage, we generate a spring-app.jar file, the same as our maven clean package command. In the Package stage, we use the jar file as we did before.
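
A sketch of such a multi-stage Dockerfile (image tags and paths are assumptions; the real file is on the branch below):

# Build stage: compile the sources and package the jar (equivalent to mvn clean package)
FROM maven:3.8-openjdk-17 AS build
WORKDIR /app
COPY . .
RUN mvn clean package

# Package stage: copy only the jar from the build stage into the runtime image
FROM openjdk:17
COPY --from=build /app/target/spring-app.jar spring-app.jar
ENTRYPOINT ["java", "-jar", "spring-app.jar"]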

You can try it out by getting the code from Github if you haven’t already and switching to the branch demo/2-multi-stage-dockerfile

git clone https://github.com/hakaneroztekin/java-spring-with-docker.git

Are you tired of typing the port and name every time you run a container? We’ll get rid of that with Docker compose. We can also do much more with it, so let’s get into it.

» go back to table of contents

docker compose

Docker compose is a tool to deploy and manage multiple related containers at the same time. It works on a single Docker host. For production, deploying across multiple hosts (a cluster) is a better approach; Docker Stacks is one tool for that, and Kubernetes, which we’re going to cover in the next article, is one of the most popular alternatives.

Docker compose is configured with a docker-compose.yml file. It has four main top-level fields,

  • version defines the compose file format version
  • services defines containers and their configuration
  • networks defines networks
  • volumes defines volumes

Let’s see an example.

docker-compose.yml (source code)

This docker compose file,

  • Uses compose file version 3.7
  • Defines one service (container) named backend-service.
  • backend-service builds from the current directory (which includes the Dockerfile), and maps host port 4000 to container port 8080, as sketched below.
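
In other words, the file is roughly this (a sketch; the real file is in the repository):

version: "3.7"
services:
  backend-service:
    build: .
    ports:
      - "4000:8080"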

The equivalent command would be,

docker image build -t spring-app . && docker container run --name backend-service -p 4000:8080 spring-app

The command gets more complex when there are multiple services (containers) that have port mappings, volumes, and networks.

You can try it out by getting the code from Github if you haven’t already and switching to the branch demo/3-docker-compose-basic

git clone https://github.com/hakaneroztekin/java-spring-with-docker.git

Boot up the application with the following command,

docker compose up
docker compose up (source code)

You can access the app at localhost:4000 or 127.0.0.1:4000.

result page

Check the running compose projects,

docker compose ls

Stop the compose,

docker compose stop

Stopping the compose keeps the containers and networks. If you want to delete them,

docker compose down

Remove the stopped service containers,

docker compose rm

Great, now that we know about Docker Compose, we should talk about networks and volumes and then use them in our Docker compose in the demo.

» go back to table of contents

docker networking

Docker containers need to communicate with each other, this is where Docker networking comes into play. Fortunately, it is simple and easy to use.

There are three natively supported network drivers. It is possible to plug in 3rd party network drivers too.

  • bridge enables communication between containers on a single host
  • overlay enables communication between containers across multiple hosts (a cluster)
  • macvlan is used for integrating with external networks (such as non-containerized applications running on physical networks or VLANs)

We won’t discuss the details of overlay and macvlan since they’re a little too much detail for this article, and we’re going to talk about multi-host networking in the upcoming kubernetes in a nutshell article.

We’re going to create a network, attach two containers, and make them communicate.

Create a bridge network,

docker network create -d bridge my-bridge-network
creating a bridge network (source code)

Attach a container to the network,

docker container run -d --network my-bridge-network --name my-app alpine sleep 10m

Inspect the network,

docker network inspect my-bridge-network
docker network inspect (source code)

Run a second container,

docker container run -it --network my-bridge-network --name my-second-app alpine sh

Ping the first container by name,

ping my-app
accessing other containers

Voila. We’ve created a network and enabled communication between the containers on it.

» go back to table of contents
» skip the demo, jump to docker volumes

demo time: docker networking

So far we’ve created a docker-compose file that has a single service, and we know docker compose saves us from long commands. So let’s create two services and a network through Docker Compose.

That was our docker compose file from the previous demo,

docker-compose.yml

We will add a network with an arbitrary name and a redis service.

You can get the code if you haven’t already and switch to the branch demo/4-docker-compose-with-network

git clone https://github.com/hakaneroztekin/java-spring-with-docker.git
docker compose with a network (source code)
  • We define our network under the networks field.
  • Docker compose will start spring-service after booting up redis-service (thanks to depends_on). Note that this doesn’t guarantee that redis-service will be ready before spring-service. A sketch of the resulting file is below.
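
Put together, the compose file is roughly the sketch below (service and network names are guesses based on the bullets above; the branch has the real file):

version: "3.7"
services:
  spring-service:
    build: .
    ports:
      - "4000:8080"
    networks:
      - my-network
    depends_on:
      - redis-service
  redis-service:
    image: redis
    networks:
      - my-network
networks:
  my-network: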

That’s it for our docker compose file.

Now we should use redis in our project.

Add redis dependency to pom.xml,

pom.xml (source code)
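
The standard Spring Boot starter covers it (a sketch; the repository has the full pom.xml):

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>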

Add a model,

ClickCount model (source code)
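
A plausible shape for the model, using Spring Data Redis annotations (field names are assumptions; the real class is in the repository):

import org.springframework.data.annotation.Id;
import org.springframework.data.redis.core.RedisHash;

// stored in redis as a hash, keyed by id
@RedisHash("ClickCount")
public class ClickCount {

    @Id
    private String id;
    private int count;

    public int getCount() { return count; }
    public void setCount(int count) { this.count = count; }
}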

Create a repository,

ClickCountRepository (source code)
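
The repository only needs to extend CrudRepository (again, a sketch of what it likely looks like):

import org.springframework.data.repository.CrudRepository;

public interface ClickCountRepository extends CrudRepository<ClickCount, String> {
}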

If we were using redis installed on our computer, we’d access it via localhost, but now our Spring service will access it via the redis-service hostname. So we will update the hostname in application.properties (or application.yaml).

application.properties
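
Assuming Spring Boot 2.x property names and the default redis port, that amounts to:

spring.redis.host=redis-service
spring.redis.port=6379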

Lastly, we will update our controller. The best practice is to move the logic to a service layer, but for demonstration purposes, we will keep it simple.

Our app will store the click count in redis. When the page is loaded, the existing click count is fetched (or a new record is saved if none exists), incremented, and returned. So the count goes up each time the page is refreshed.

The main controller method,

printClickCount method (source code)

To keep the article simple we won’t cover the implementation details of getClickCountInDatabase() and incrementClickCountInDatabase(). Respectively, they get an existing key or save a new one with zero count, and increment the click count of the key in the database. If you want to see the implementation of the methods, click here.
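
Based on that description, the method is roughly the following (helper signatures and the endpoint are assumptions; the linked code is the reference):

@GetMapping("/")
public String printClickCount() {
    // fetch the existing count, or create a new record with zero count
    ClickCount clickCount = getClickCountInDatabase();
    // increment the count and persist it back to redis
    clickCount = incrementClickCountInDatabase(clickCount);
    return "Hello, this page is visited " + clickCount.getCount() + " times.";
}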

Run the app,

docker compose up

When you run the project and hit localhost:4000, you will see the output,

Hello, this page is visited 1 times.

When you refresh the page, the output should be,

Hello, this page is visited 2 times.

Great. We’ve created two containers, one built from our custom Dockerfile, and established a network between them. We’ve done this with the help of Docker compose so that we don’t have to write long commands.

There’s only one thing left to do.

If you stop the project with the following command,

docker compose stop

The containers created by Docker Compose will remain on the host, so our click count data will still exist in the redis container.

However, we can’t rely on this, since in modern applications containers come and go, whether due to code changes, scaling up with more containers, or unexpected errors. Thus, it’s not good practice to store data in the container.

When you use the following command, compose will remove the containers and networks,

docker compose down

And when you run the compose project again, you will see the click count data from before doesn’t exist anymore. The solution? Docker volumes, our next stop. We’ll make the data persistent in the upcoming and final demo.

» go back to table of contents

docker volumes

There are two ways of persisting data in Docker.

  • container’s storage is used for non-persistent (temporary) data. the data gets removed if you delete the container.
  • volumes are used for persistent (permanent) data. the data remains if you delete the container.

There are three points to note about volumes,

  • Volume is a shared directory between container(s) and host(s).
  • Multiple containers from different hosts can share the same volume.
  • Volumes can be connected to external storage systems.

Create a volume,

docker volume create my-volume

List the volumes,

docker volume ls
docker volume ls

Remove a volume,

docker volume rm my-volume

Let’s see volumes at work. By default, the redis image stores its data in the /data folder. Let’s attach a volume to a redis container.

docker container run -it --name my-redis --mount source=my-redis-volume,target=/data redis

Open another terminal, connect with redis-cli, and save a key.

docker exec -it my-redis redis-cli
saving to redis
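
Inside redis-cli, saving a key is a single command (the key and value are arbitrary):

SET article "docker in a nutshell"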

Stop the container (Ctrl+C in its terminal or docker container stop my-redis) and remove it,

docker container rm my-redis

Start a new container with the same volume,

docker container run -it --name my-redis2 --mount source=my-redis-volume,target=/data redis

Connect to the terminal to get the key,

docker exec -it my-redis2 redis-cli
reading from redis
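
Reading the key back from the new container:

GET article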

Congrats. We’ve created and attached a volume to a container, saved new data, and read it back through the volume from another container.

So far so good. In the demo, we’re going to use volumes in Docker compose.

» go back to table of contents
» skip the demo, jump to credits

demo time: docker volumes

We attached a volume to the redis container’s /data directory and made our data persistent. Let’s do the same thing with Docker compose. All we need to do is define a volume and use it in the container configuration.

This is the shortest and final demo, but if you want to see the working code, get it if you haven’t already and switch to the branch demo/5-docker-compose-with-volume

git clone https://github.com/hakaneroztekin/java-spring-with-docker.git
docker-compose.yml (source code)
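
The only change compared to the previous compose file is the volume (names are guesses; the branch above has the real file):

version: "3.7"
services:
  spring-service:
    build: .
    ports:
      - "4000:8080"
    networks:
      - my-network
    depends_on:
      - redis-service
  redis-service:
    image: redis
    networks:
      - my-network
    volumes:
      # mount the named volume over redis' data directory so data survives container removal
      - redis-data:/data
networks:
  my-network:
volumes:
  redis-data: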

That’s it! Now our click count is persistent. If you remove the containers with docker compose down and create new ones with docker compose up and hit localhost:4000 you will see the click count data from before is still alive.

» go back to table of contents

credits

Special thanks to Nigel Poulton. His book Docker Deep Dive is an amazing head-start to the Docker world. Some of the information and examples in this article are adapted from the book.

what’s next

That’s it for Docker. You can follow me on Medium, and jump next to kubernetes in a nutshell. You can also follow me on LinkedIn; I share posts when I publish new articles. Let me know whether you liked the article and what I can change to make it better. Thanks for reading.
