docker in a nutshell
Welcome to the one-stop shop for (almost) all aspects of Docker.
In this article, we're going to look at the core features of Docker in detail. The topics we'll cover are listed right after this section. It will be useful whether you're refreshing your knowledge or just starting to learn Docker.
Throughout the article, we will build a project. By the end, we will have a project that uses docker compose and has multiple containers (one with our custom image), a custom network, and a volume. Don't worry if these terms sound odd (they are), but they're simpler than they look and we'll cover them one by one.
You can quickly copy the commands in the images by clicking the source code text below them. The complete project code is available on Github. You can check similar articles I’ve written before on my Medium profile. You can also contact me on LinkedIn.
Feel free to drop a comment and let me know whether you liked it or not, your questions, and if there’s an incorrect/missing part in the article.
Alright, let’s start!
table of contents
∘ why docker
∘ docker terms
∘ docker images
∘ docker containers
∘ demo time: pull and inspect a docker image then run it
∘ containerizing an app
∘ dockerfile
∘ demo time: nginx Dockerfile with custom homepage
∘ demo time: run a spring app as a container
∘ docker compose
∘ docker networking
∘ demo time: docker networking
∘ docker volumes
∘ demo time: docker volumes
∘ credits
∘ what’s next
Two notes before we proceed,
- Feel free to click the titles here; I provide navigation links at the end of each section to save you the hassle of scrolling back and forth to find the related section.
- We're not going to cover docker stacks and security right now. Docker stacks help us deploy our applications across multiple hosts (a cluster). We can achieve the same thing with Kubernetes, which we'll discuss in the upcoming article. However, we can also talk about them if you want to, just let me know.
why docker
- in the old old days, there were physical servers. they could only run one application at a time, which was a huge waste of resources. they occupied a lot of physical space and were hard to scale.
- virtual machines (VMs) made it possible to run multiple applications on one server. but they wasted resources too, since each guest OS consumed resources of its own, and booting an application was still slow.
- containers, such as docker containers, make efficient use of resources and are way faster. they use fewer resources because containers share the host OS, and they're faster because our applications and all their dependencies are packaged into small, lightweight units.
By the way, you see Hypervisor in such graphics a lot. This fancy term refers to the software that creates and runs the VMs.
Both VMs and containers solve the “but it works on my machine” problem. Once you containerize your app you can run it practically anywhere.
» go back to table of contents
docker terms
Let’s take a look at the terms you hear a lot when using Docker.
- docker container is all your app needs to run. it includes the application code, all the dependencies, and a part of the OS.
- docker image is all your app needs to run, very similar to the container. it’s like a stopped container. we’ll talk more about this relationship in the next section.
- docker desktop is software to use Docker on your Mac or Windows machine. it installs two tools: the docker engine and the docker client.
- docker engine handles docker container lifecycle, network, and OS operations.
- docker client is a gateway (a portal) that connects your terminal to the docker engine so you can use your terminal to manage your docker containers.
- dockerfile describes how to build an image.
- docker compose describes how Docker should build and deploy an app.
- docker volume is a shared folder between the host(s) and the container(s).
Don’t worry if you don’t get the terms yet. We’ll talk through them in the article.
» go back to table of contents
docker images
A Docker image contains everything required for an application to run. It includes application code, dependencies, and a part of the OS. Images are similar to classes in OOP whereas containers are similar to objects. A container is a running instance of an image.
If you have an application’s Docker image, the only other thing you need to run that application is a computer running Docker.
» go back to table of contents
docker containers
A container is a runtime instance of an image. Similar to the OOP class and object analogy, you can instantiate multiple containers from the same image.
Let's run an ubuntu container.
The command template is shown below,
docker container run image-name application-name
We use the -it flag for an interactive terminal. With this, we can run commands inside the container. /bin/bash is the main application to run. Thus, when we exit the terminal, the container will terminate.
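Putting the pieces together, we can start an ubuntu container with an interactive bash shell like this,
docker container run -it ubuntu /bin/bash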
Containers are designed to be immutable. Don't update a container by, for example, logging in to it and changing its configuration.
Data written inside a container lives in its writable layer on the Docker host and won't survive if the container is removed or fails. So, you should use volumes to store data. We will talk more about volumes later in the article.
» go back to table of contents
» skip the demo, jump to containerizing an app
demo time: pull and inspect a docker image then run it
In this demo, we will,
- check out the details of Docker in your system
- pull an image
- inspect the image
- run it as a container
If you don't already have Docker installed, you can install Docker Desktop. Start it and use the following command to make sure it's up and running.
docker version
An example output,
As you can see, we have a client that connects our terminal commands to the Docker engine, and the engine that does the actual work.
We're going to pull the nginx image,
docker image pull nginx
- if we don't specify a version (e.g. docker pull nginx:1.22.0), it uses the latest image.
- it pulls the image from the public Docker registry, i.e. Docker Hub.
List the images in our host with the following command,
docker image ls
Inspect the image,
docker image inspect nginx
The trimmed output shows that the image consists of 6 layers. These layers are generated during image creation via the Dockerfile. They can contain the OS (e.g. nginx is based on debian), dependencies, and source code. Docker caches layers, so if you build an image whose layers are already in the cache, the cached layers are reused to speed up the build.
Let’s run it!
docker container run --name my-container -p 3000:80 -d nginx
Let’s examine what we do here,
- we run the container in the background with the -d flag. it's not attached to our terminal, so we can use our terminal normally.
- we give a custom name to our container with --name. it's useful for identifying the container.
- by default, the nginx container exposes port 80. here, we map the host port 3000 to the container port 80 to make the container accessible from outside.
Open up a new tab in the browser and hit localhost:3000
or 127.0.0.1:3000
to access the container.
Voila! We can access our container via browser. nginx shows its default page. Let’s update it!
We can execute commands in our containers with the exec command. We will use bash to connect to the container's terminal.
docker exec -it my-container bash
nginx keeps the static content at /usr/share/nginx/html/index.html. Update it with a welcome message. You can copy the command from the source code. Then type exit to leave the container terminal.
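For example, a one-liner like this inside the container does the job (the welcome text is up to you),
echo "welcome to my custom nginx page" > /usr/share/nginx/html/index.html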
Refresh the page to see the results.
If the page has not changed, try the incognito mode.
Check our container details,
docker container ls
Notice the name and the port mapping.
To remove the image, we should first remove the containers which use the image.
docker container rm my-container
Use the following command to remove the image,
docker image rm nginx
We can also verify the image is deleted by listing the images via docker image ls
command.
Well done, we've pulled an image, run it, accessed it in the browser, and even hacked into it and updated the contents. Updating a container like this is not recommended since containers are immutable by design. Also, if we need to show our custom page, how do we automate it so that our custom container shows that homepage by default?
The answer is Dockerfile, which is our next stop.
» go back to table of contents
containerizing an app
Containerizing an app means making your application run in a container. For that, we create a Dockerfile which describes how to build an image for our app, then we build the image and run it as a container.
We will take a look at Dockerfile, then in the demo section, we will provide a custom homepage for our nginx container.
» go back to table of contents
dockerfile
A Dockerfile is the starting point for creating a container image — it describes an application and tells Docker how to build it into an image.
The directory containing the application and dependencies is referred to as the build context.
Let’s look at the contents of the Dockerfile.
Every Dockerfile starts with a base image. In our Dockerfile, we use ubuntu. Our app has a custom label and environment variable, and it prints the directory contents when started. Every container needs a main app to run, so it's essential to use CMD or ENTRYPOINT.
For the sake of simplicity, assume that we have a table-of-contents.txt file in the src folder.
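A minimal Dockerfile along those lines is sketched below; the label and environment values and the paths are illustrative, the exact file is in the project code.
# start from the ubuntu base image
FROM ubuntu
# custom label (value illustrative)
LABEL ARTICLE_NAME="docker in a nutshell"
# custom environment variable (value illustrative)
ENV PUBLISHING_AT="medium"
# copy the src folder from the build context into the image
COPY ./src /app
# set the working directory for the main command
WORKDIR /app
# the main application: print the directory contents
CMD ["ls", "-l"]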
Let’s build our image,
docker build -t my-awesome-app .
We tag our image as my-awesome-app
and use the current directory (.) as the build context (where our Dockerfile
is located).
Notice the [3/3] notation. This means our image has 3 layers: one for the base image, one for the COPY command, and one for the WORKDIR command. Not every Dockerfile command creates a layer; some only add metadata, like LABEL.
Inspect our image,
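It's the same inspect command as before, only with our own image name,
docker image inspect my-awesome-app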
The trimmed output shows the PUBLISHING_AT environment variable, the ARTICLE_NAME label, and three layers.
Let’s run our image.
docker container run my-awesome-app
We can list running containers with the following command,
docker container ls
However, our container won't show up there: since our application's main command (ls -l) has finished, the container has terminated.
We can list all containers, including stopped ones, with the following command,
docker container ls -a
Congratulations. We’ve built our custom image and run it as a container.
» go back to table of contents
» skip the demos, jump to docker compose
demo time: nginx Dockerfile with a custom homepage
Let's containerize our nginx app. Remember what we’ve done in the demo before,
- use nginx image
- update contents
- run the container
Earlier, we ran the container first and updated the contents afterwards, but normally we want our custom page to show up from the very first page load. So this time we will update the content before running the container.
That's exactly what we're going to do with a Dockerfile: we use the nginx image and copy in our content.
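The whole Dockerfile can be as small as the sketch below, assuming our custom index.html sits next to the Dockerfile,
# use the official nginx image as the base
FROM nginx
# replace the default homepage with our custom page
COPY index.html /usr/share/nginx/html/index.html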
As you see, we did not use CMD or ENTRYPOINT. This is because we're going to use the CMD and ENTRYPOINT provided by the nginx base image. You can check those with the docker image inspect nginx command.
Build the image,
docker image build -t my-app .
Run it,
docker container run --name my-app -p 3000:80 my-app
The result,
Great. We have a custom webpage ready to use with our container.
» go back to table of contents
» skip the next demo, jump to docker compose
demo time: run a spring app as a container
Let’s get our hands dirty with an example closer to the real world. We’re going to build an image for our Spring application and run it as a container. We are going to add more Docker features like volumes, network, and compose to our example throughout the article. But first, let’s containerize a simple Spring app.
Don't worry if you're not familiar with Spring or Java. Following along with the article will give you enough information, though practicing while reading is the best option.
create a simple Spring app
You can initialize a Spring app via Spring Initializr. Create an app named spring-app and create a simple controller.
You can also get the code from Github and switch to the branch demo/1-basic-spring-app
git clone https://github.com/hakaneroztekin/java-spring-with-docker.git
Run mvn clean package for your application. This will generate a jar file at target/spring-app.jar that we'll use to run our application.
Run the following command in your terminal,
java -jar target/spring-app.jar
We are going to take a similar step for our Dockerfile.
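A sketch of such a Dockerfile is shown below; the base image tag is an assumption (pick one matching your Java version), and the exact file is in the repository.
# a Java runtime base image (the tag is an assumption, match your Java version)
FROM eclipse-temurin:17-jre
# copy the jar produced by mvn clean package into the image
COPY target/spring-app.jar spring-app.jar
# run the jar, just like we did in the terminal
ENTRYPOINT ["java", "-jar", "spring-app.jar"]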
Build our image,
docker image build -t spring-app .
And run it,
docker container run -d -p 5000:8080 --name my-spring-app spring-app
Wait a few seconds for it to boot up, and voila :)
Congratulations!
As you’ve seen, we need to generate our jar manually for our Dockerfile to work. We can also automate this with the Dockerfile.
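A multi-stage Dockerfile for this is sketched below; the image tags are assumptions, and the exact file is on the demo/2-multi-stage-dockerfile branch.
# Build stage: compile the project and produce the jar (image tag is an assumption)
FROM maven:3.8-eclipse-temurin-17 AS build
WORKDIR /app
COPY . .
RUN mvn clean package
# Package stage: copy only the jar into a slim runtime image
FROM eclipse-temurin:17-jre
COPY --from=build /app/target/spring-app.jar spring-app.jar
ENTRYPOINT ["java", "-jar", "spring-app.jar"]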
This looks cool but also frightening. It is simpler than it looks. This is called a multi-stage build. Multi-stage builds have multiple FROM instructions (one for each stage), and later stages can copy content from earlier ones. It is a good practice since it reduces the final image size.
In the Build stage, we generate the spring-app.jar file, the same as our mvn clean package command did. In the Package stage, we use the jar file as we did before.
You can try it out by getting the code from Github if you haven’t already and switching to the branch demo/2-multi-stage-dockerfile
git clone https://github.com/hakaneroztekin/java-spring-with-docker.git
Are you tired of typing the port and name every time you run a container? We'll get rid of that with Docker compose. We can also do much more with it, so let's get into it.
» go back to table of contents
docker compose
Docker compose is a tool to deploy and manage multiple related containers at the same time. It works on a single Docker host. For production, deploying across multiple hosts (a cluster) is usually needed; Docker Stacks is Docker's tool for that. Alternatively, Kubernetes is one of the most popular tools for the job, and we're going to cover it in the next article.
Docker compose is managed with a docker-compose.yml file. It has four main fields,
- version defines the compose file format version
- services defines containers and their configuration
- networks defines networks
- volumes defines volumes
Let’s see an example.
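A compose file along those lines looks roughly like this sketch (the exact file is on the demo/3-docker-compose-basic branch),
version: "3.7"
services:
  backend-service:
    build: .          # build the image from the Dockerfile in the current directory
    ports:
      - "4000:8080"   # map host port 4000 to container port 8080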
This docker compose file,
- Uses compose file version 3.7
- Defines one service (container) named backend-service
- backend-service uses the current directory, which includes the Dockerfile, and maps the host port 4000 to container port 8080.
The equivalent command would be,
docker image build -t spring-app . && docker container run --name backend-service -p 4000:8080 spring-app
The command gets more complex when there are multiple services (containers) that have port mappings, volumes, and networks.
You can try it out by getting the code from Github if you haven’t already and switching to the branch demo/3-docker-compose-basic
git clone https://github.com/hakaneroztekin/java-spring-with-docker.git
Boot up the application with the following command,
docker compose up
You can access the app at localhost:4000
or 127.0.0.1:4000
.
Check the running compose projects,
docker compose ls
Stop the compose,
docker compose stop
Stopping the compose keeps the containers and networks. If you want to delete them,
docker compose down
Remove the stopped containers of the compose project,
docker compose rm
Great, now that we know about Docker Compose, we should talk about networks and volumes and then use them in our Docker compose in the demo.
» go back to table of contents
docker networking
Docker containers need to communicate with each other, this is where Docker networking comes into play. Fortunately, it is simple and easy to use.
There are three natively supported network drivers. It is possible to plug in 3rd party network drivers too.
- bridge enables communication only within a single host
- overlay enables communication between multiple hosts (a cluster)
- macvlan is used for integrating with external networks (such as non-containerized apps running on physical networks or VLANs)
We won't discuss the details of overlay and macvlan since they involve a bit too much detail, and we're going to talk about multi-host networking in the upcoming kubernetes in a nutshell article.
We’re going to create a network, attach two containers and make containers communicate.
Create a bridge network,
docker network create -d bridge my-bridge-network
Attach a container to the network,
docker container run -d --network my-bridge-network --name my-app alpine sleep 10m
Inspect the network,
docker network inspect my-bridge-network
Run a second container,
docker container run -it --network my-bridge-network --name my-second-app alpine sh
Ping the first container by name,
ping my-app
Voila. We've created a network and enabled communication between containers on the network.
» go back to table of contents
» skip the demo, jump to docker volumes
demo time: docker networking
So far we've created a docker-compose file with a single service, and we've seen how docker compose saves us from long commands. So let's create two services and a network through Docker Compose.
That was our docker compose file from the previous demo,
We will add a network with an arbitrary name and a redis service.
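The updated compose file looks roughly like the sketch below; the service and network names are illustrative and follow the description that comes after.
version: "3.7"
services:
  spring-service:
    build: .            # our Spring app, built from the Dockerfile in the current directory
    ports:
      - "4000:8080"
    depends_on:
      - redis-service   # boot redis-service first (start order only, not readiness)
    networks:
      - my-network
  redis-service:
    image: redis        # official redis image from Docker Hub
    networks:
      - my-network
networks:
  my-network:           # an arbitrary name, defined under the networks field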
You can get the code if you haven’t already and switch to the branch demo/4-docker-compose-with-network
git clone https://github.com/hakaneroztekin/java-spring-with-docker.git
- We define our network under the networks field.
- Docker compose will start spring-service after booting up redis-service. Note that this doesn't guarantee that redis-service will be ready before spring-service starts.
That’s it for our docker compose file.
Now we should use redis in our project.
Add redis dependency to pom.xml,
Add a model,
Create a repository,
If we were using redis installed on our computer, we'd access it via localhost, but now our Spring app will access it via the redis-service hostname. So we will update the hostname in application.properties (or application.yaml).
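In application.properties that's a one-line change; the property name differs slightly between Spring Boot versions, so treat this as a sketch,
# Spring Boot 2.x
spring.redis.host=redis-service
# Spring Boot 3.x
spring.data.redis.host=redis-service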
Lastly, we will update our controller. The best practice is to move the logic to a service layer, but for demonstration purposes we will keep it simple.
Our app will store a click count in redis. When the page is loaded, the app fetches the click count: if it exists, it is returned; otherwise, a new record is saved. The count is also incremented each time the page is refreshed.
The main controller method,
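A sketch of that method is below; the ClickCount model name and its getter are assumptions, and the two helper methods are only called here (their implementations live in the repository),
@GetMapping("/")
public String clickCount() {
    // fetch the existing record, or create and save a new one with a zero count
    ClickCount clickCount = getClickCountInDatabase();
    // increment the count for this page load and persist it (exact signature may differ)
    clickCount = incrementClickCountInDatabase(clickCount);
    return "Hello, this page is visited " + clickCount.getCount() + " times.";
}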
To keep the article simple we won’t cover the implementation details of getClickCountInDatabase()
and incrementClickCountInDatabase()
. Respectively, they get an existing key or save a new one with zero count, and increment the click count of the key in the database. If you want to see the implementation of the methods, click here.
Run the app,
docker compose up
When you run the project and hit localhost:4000
, you will see the output,
Hello, this page is visited 1 times.
With the console output,
When you refresh the page, the output should be,
Hello, this page is visited 2 times.
With the console output,
Great. We've created two containers, one with our custom Dockerfile, and established a network between them. We've done this with the help of Docker compose so that we don't have to write long commands.
There’s only one thing left to do.
If you stop the project with the following command,
docker compose stop
The containers created by Docker Compose will remain on the host, so our click count data will still exist in the redis container.
However, we can't rely on this, since in modern applications containers come and go, whether because of code changes, scaling up with more containers, or unexpected errors. Thus, it's not good practice to store data inside the container.
When you use the following command, compose will remove the containers and networks,
docker compose down
And when you run the compose project again, you will see that the click count data from before doesn't exist anymore. The solution? Docker volumes, which is our next stop. We'll make the data persistent in the upcoming and last demo.
» go back to table of contents
docker volumes
There are two ways of persisting data in Docker.
- container’s storage is used for non-persistent (temporary) data. the data gets removed if you delete the container.
- volumes are used for persistent (permanent) data. the data remains if you delete the container.
There are three points to note about volumes,
- Volume is a shared directory between container(s) and host(s).
- Multiple containers from different hosts can share the same volume.
- Volumes can be connected to external storage systems.
Create a volume,
docker volume create my-volume
List the volumes,
docker volume ls
Remove a volume,
docker volume rm my-volume
Let's see volumes at work. By default, the redis image stores its data in the /data folder. Let's attach a volume to a redis container.
docker container run -it --name my-redis --mount source=my-redis-volume,target=/data redis
Open another terminal, connect to redis-cli, and save a key.
docker exec -it my-redis redis-cli
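Inside redis-cli, save a key and read it back; the key and value here are just examples,
SET my-key "docker volumes"
GET my-key
exit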
Remove the container (the -f flag stops it first if it's still running),
docker container rm -f my-redis
Start a new container with the same volume,
docker container run -it --name my-redis2 --mount source=my-redis-volume,target=/data redis
Connect to redis-cli again in the new container to read the key,
docker exec -it my-redis2 redis-cli
Congrats. We've created a volume, attached it to a container, saved new data, and read it back from a new container using the volume.
So far so good. In the demo, we’re going to use volumes in Docker compose.
» go back to table of contents
» skip the demo, jump to credits
demo time: docker volumes
We attached a volume to the redis container's /data directory and made our data persistent. Let's do the same thing with Docker compose. All we need to do is define a volume and use it in the container configuration.
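The relevant part of the compose file is sketched below (the volume name is illustrative, the other fields stay as in the previous demo, and the exact file is on the demo/5-docker-compose-with-volume branch),
services:
  redis-service:
    image: redis
    volumes:
      - my-volume:/data   # mount the named volume at redis's data directory
volumes:
  my-volume:              # the named volume that keeps our click count data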
This is the shortest and the final demo but if you want to see the working code, get the code if you haven’t already and switch to the branch demo/5-docker-compose-with-volume
git clone https://github.com/hakaneroztekin/java-spring-with-docker.git
That’s it! Now our click count is persistent. If you remove the containers with docker compose down
and create new ones with docker compose up
and hit localhost:4000
you will see the click count data from before is still alive.
» go back to table of contents
credits
Special thanks to Nigel Poulton. His book Docker Deep Dive is an amazing head-start to the Docker world. Some of the information and examples in this article are adapted from the book.
what’s next
That's it for Docker. You can follow me on Medium, and jump next to kubernetes in a nutshell. You can also follow me on LinkedIn, where I share posts when I publish new articles. Let me know whether you liked the article and what I can change to make it better. Thanks for reading.