Understanding what Docker is and why it is useful
Docker is a tool to package software.
It is useful for building, sharing, and running applications without having to worry about software dependencies or operating systems; it maximizes the portability of application development and deployment.
The main concept behind Docker is the container: containerized software always runs the same way, regardless of the infrastructure. Containers isolate software from its environment and ensure that it works uniformly despite differences between, for instance, development and staging environments.
As a containerization platform, it enables developers to package applications into containers. This makes it easy to automate the deployment of applications in different environments.
Classes of Docker Objects
A container is a standard unit of software.
“A container is a sandboxed process running on a host machine that is isolated from all other processes running on that host machine.”1
A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.
With containers we don’t have to worry about software dependencies being present in other environments. Apps can be deployed easily on a developer’s laptop, in a data center, or anywhere in the cloud, and we can be sure they will work the same everywhere.
```
| Application 1 | Application 2 | Application 3 | Application 4 |
|                     Host Operating System                     |
```
A Docker container image is a package of software.
A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.
A running container uses an isolated filesystem.1
Containers versus Images
The differences between Docker containers and Docker images can be seen in the following table:
| Docker container | Docker image |
|------------------|--------------|
| A container is a runnable instance of an image.1 | An image contains the container’s filesystem and other configuration for the container, such as environment variables, a default command to run, and other metadata. |
In general, container images become containers at runtime.
Docker images become Docker containers when they run on Docker Engine.
Docker Engine is a Container Runtime. It is a background service that manages Docker containers.
Containers versus Virtual Machines
Containers are an abstraction at the app layer that packages code and dependencies together.
Virtual machines (VMs) are an abstraction of physical hardware turning one server into many servers.
Virtual Machine scheme
The hypervisor allows multiple VMs to run on a single machine.
|  | Containers | Virtual machines |
|--|------------|------------------|
| Definition | Abstraction at the app layer | Abstraction of physical hardware, turning one server into many servers |
| OS kernel | Multiple containers can run on the same machine and share the OS kernel with other containers, each running as an isolated process in user space | Each VM includes a full copy of an operating system, the application, and the necessary binaries and system libraries. The hypervisor allows multiple VMs to run on a single machine |
| Footprint | Containers take up less space, can handle more applications, and require fewer VMs and operating systems | A full OS copy per VM makes them heavier |
Docker Engine has three main components: a server (the dockerd daemon), the APIs that programs use to talk to it, and the docker command-line client.

The Docker daemon, dockerd, is a persistent background process that manages Docker containers and handles container objects.

The daemon can be accessed through the command-line client:
```
$ docker --help

Usage:  docker [OPTIONS] COMMAND

A self-sufficient runtime for containers

Options:
      --config string      Location of client config files (default "/home/marcanuy/snap/docker/2893/.docker")
  -c, --context string     Name of the context to use to connect to the daemon (overrides DOCKER_HOST env var and default context set with "docker context use")
  -D, --debug              Enable debug mode
  -H, --host list          Daemon socket(s) to connect to
  -l, --log-level string   Set the logging level ("debug"|"info"|"warn"|"error"|"fatal") (default "info")
      --tls                Use TLS; implied by --tlsverify
      --tlscacert string   Trust certs signed only by this CA (default "/home/marcanuy/snap/docker/2893/.docker/ca.pem")
      --tlscert string     Path to TLS certificate file (default "/home/marcanuy/snap/docker/2893/.docker/cert.pem")
      --tlskey string      Path to TLS key file (default "/home/marcanuy/snap/docker/2893/.docker/key.pem")
      --tlsverify          Use TLS and verify the remote
  -v, --version            Print version information and quit

Management Commands:
  builder     Manage builds
  buildx*     Docker Buildx (Docker Inc., v0.10.4)
  compose*    Docker Compose (Docker Inc., v2.17.2)
  config      Manage Docker configs
  container   Manage containers
  context     Manage contexts
  image       Manage images
  manifest    Manage Docker image manifests and manifest lists
  network     Manage networks
  node        Manage Swarm nodes
  plugin      Manage plugins
  secret      Manage Docker secrets
  service     Manage services
  stack       Manage Docker stacks
  swarm       Manage Swarm
  system      Manage Docker
  trust       Manage trust on Docker images
  volume      Manage volumes

Commands:
  attach      Attach local standard input, output, and error streams to a running container
  build       Build an image from a Dockerfile
  commit      Create a new image from a container's changes
  cp          Copy files/folders between a container and the local filesystem
  create      Create a new container
  diff        Inspect changes to files or directories on a container's filesystem
  events      Get real time events from the server
  exec        Run a command in a running container
  export      Export a container's filesystem as a tar archive
  history     Show the history of an image
  images      List images
  import      Import the contents from a tarball to create a filesystem image
  info        Display system-wide information
  inspect     Return low-level information on Docker objects
  kill        Kill one or more running containers
  load        Load an image from a tar archive or STDIN
  login       Log in to a Docker registry
  logout      Log out from a Docker registry
  logs        Fetch the logs of a container
  pause       Pause all processes within one or more containers
  port        List port mappings or a specific mapping for the container
  ps          List containers
  pull        Pull an image or a repository from a registry
  push        Push an image or a repository to a registry
  rename      Rename a container
  restart     Restart one or more containers
  rm          Remove one or more containers
  rmi         Remove one or more images
  run         Run a command in a new container
  save        Save one or more images to a tar archive (streamed to STDOUT by default)
  search      Search the Docker Hub for images
  start       Start one or more stopped containers
  stats       Display a live stream of container(s) resource usage statistics
  stop        Stop one or more running containers
  tag         Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
  top         Display the running processes of a container
  unpause     Unpause all processes within one or more containers
  update      Update configuration of one or more containers
  version     Show the Docker version information
  wait        Block until one or more containers stop, then print their exit codes

Run 'docker COMMAND --help' for more information on a command.

To get more help with docker, check out our guides at https://docs.docker.com/go/guides/
```
Docker objects are various entities used to assemble an application in Docker.
The main classes of Docker objects are:
- Containers: standard units of software, managed using the Docker API or the command-line interface.
- Images: read-only templates used to build containers.
- Services: a Docker service makes it possible to scale containers across multiple Docker daemons. Together these daemons form a swarm, “a set of cooperating daemons that communicate through the Docker API”2.
A Docker registry is a repository for Docker images.
Docker clients connect to registries to download (“pull”) images for use or upload (“push”) images that they have built. Registries can be public or private. The main public registry is Docker Hub. Docker Hub is the default registry where Docker looks for images. 2
docker-compose is a tool for defining and running multi-container Docker applications.
It is configured using YAML3 files, which define the services to run and the containers to create at startup.
It is also possible to run a command on multiple containers at once.
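As a minimal sketch of what such a YAML file looks like (the service name, image, and port here are illustrative assumptions, not taken from this guide):

```yaml
# Hypothetical minimal compose file: one service built from
# the local Dockerfile, with one published port.
services:
  app:                         # service name (arbitrary)
    build: .                   # build the image from ./Dockerfile
    ports:
      - "127.0.0.1:3000:3000"  # host:container port mapping
```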
```
$ docker-compose --help

Usage:  docker compose [OPTIONS] COMMAND

Docker Compose

Options:
      --ansi string                Control when to print ANSI control characters ("never"|"always"|"auto") (default "auto")
      --compatibility              Run compose in backward compatibility mode
      --env-file stringArray       Specify an alternate environment file.
  -f, --file stringArray           Compose configuration files
      --parallel int               Control max parallelism, -1 for unlimited (default -1)
      --profile stringArray        Specify a profile to enable
      --project-directory string   Specify an alternate working directory (default: the path of the, first specified, Compose file)
  -p, --project-name string        Project name

Commands:
  build       Build or rebuild services
  config      Parse, resolve and render compose file in canonical format
  cp          Copy files/folders between a service container and the local filesystem
  create      Creates containers for a service.
  down        Stop and remove containers, networks
  events      Receive real time events from containers.
  exec        Execute a command in a running container.
  images      List images used by the created containers
  kill        Force stop service containers.
  logs        View output from containers
  ls          List running compose projects
  pause       Pause services
  port        Print the public port for a port binding.
  ps          List containers
  pull        Pull service images
  push        Push service images
  restart     Restart service containers
  rm          Removes stopped service containers
  run         Run a one-off command on a service.
  start       Start services
  stop        Stop services
  top         Display the running processes
  unpause     Unpause services
  up          Create and start containers
  version     Show the Docker Compose version information

Run 'docker compose COMMAND --help' for more information on a command.
```
Docker Swarm provides native clustering functionality for Docker containers, turning a group of Docker Engines into a single virtual Docker Engine.4
- Kubernetes https://kubernetes.io/
- A container management system originally developed by Google.
- “Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications.”
- Vagrant https://www.vagrantup.com/
- “Vagrant enables the creation and configuration of lightweight, reproducible, and portable development environments.”
- OpenVZ https://openvz.org/
- Open source container-based virtualization for Linux.
- “Multiple secure, isolated Linux containers (otherwise known as VEs or VPSs) on a single physical server enabling better server utilization and ensuring that applications do not conflict.”
- “Each container performs and executes exactly like a stand-alone server; a container can be rebooted independently and have root access, users, IP addresses, memory, processes, files, applications, system libraries and configuration files.”
- linuxcontainers https://linuxcontainers.org/
- Container and virtualization tools
- linuxcontainers.org is the umbrella project behind LXC, LXCFS, distrobuilder, libresource and lxcri.
- “The goal is to offer a distro and vendor neutral environment for the development of Linux container technologies.”
- “Focuses on providing containers and virtual machines that run full Linux systems. While VMs supply a complete environment, system containers offer an environment as close as possible to the one you’d get from a VM, but without the overhead that comes with running a separate kernel and simulating all the hardware.”
- podman https://podman.io/
- “Manage containers, pods, and images with Podman. Seamlessly work with containers and Kubernetes from your local environment.”
Containerize App Example Workflow
As an example of how Docker works, this is a simple guide that shows the steps needed to containerize an application.
1. Build the app’s container image
1.1 Create configuration
Create a Dockerfile with instructions on how to build the container image.
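A Dockerfile for a Node.js app might look roughly like this (a sketch: the base image matches the node:18-alpine image used later in this guide, but the yarn commands and the src/index.js entry point are assumptions):

```dockerfile
# Hypothetical Dockerfile for a Node.js app
FROM node:18-alpine              # base image
WORKDIR /app                     # working directory inside the image
COPY . .                         # copy the app source into the image
RUN yarn install --production    # install dependencies (assumed yarn project)
CMD ["node", "src/index.js"]     # default command when a container starts
EXPOSE 3000                      # document the port the app listens on
```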
1.2 Build the image
Build the image with:

docker build -t my-app .
```
$ docker build --help

Usage:  docker build [OPTIONS] PATH | URL | -

Build an image from a Dockerfile

Options:
      --add-host list           Add a custom host-to-IP mapping (host:ip)
      --build-arg list          Set build-time variables
      --cache-from strings      Images to consider as cache sources
      --cgroup-parent string    Optional parent cgroup for the container
      --compress                Compress the build context using gzip
      --cpu-period int          Limit the CPU CFS (Completely Fair Scheduler) period
      --cpu-quota int           Limit the CPU CFS (Completely Fair Scheduler) quota
  -c, --cpu-shares int          CPU shares (relative weight)
      --cpuset-cpus string      CPUs in which to allow execution (0-3, 0,1)
      --cpuset-mems string      MEMs in which to allow execution (0-3, 0,1)
      --disable-content-trust   Skip image verification (default true)
  -f, --file string             Name of the Dockerfile (Default is 'PATH/Dockerfile')
      --force-rm                Always remove intermediate containers
      --iidfile string          Write the image ID to the file
      --isolation string        Container isolation technology
      --label list              Set metadata for an image
  -m, --memory bytes            Memory limit
      --memory-swap bytes       Swap limit equal to memory plus swap: '-1' to enable unlimited swap
      --network string          Set the networking mode for the RUN instructions during build (default "default")
      --no-cache                Do not use cache when building the image
      --pull                    Always attempt to pull a newer version of the image
  -q, --quiet                   Suppress the build output and print image ID on success
      --rm                      Remove intermediate containers after a successful build (default true)
      --security-opt strings    Security options
      --shm-size bytes          Size of /dev/shm
      --squash                  Squash newly built layers into a single new layer
  -t, --tag list                Name and optionally a tag in the 'name:tag' format
      --target string           Set the target build stage to build.
      --ulimit ulimit           Ulimit options (default )
```
2. Start an app container (run)
Run the application in a container with docker run and the tag of the image previously created:

docker run -dp 127.0.0.1:3000:3000 my-app
```
$ docker run --help

Usage:  docker run [OPTIONS] IMAGE [COMMAND] [ARG...]

Run a command in a new container

Options:
      --add-host list                  Add a custom host-to-IP mapping (host:ip)
  -a, --attach list                    Attach to STDIN, STDOUT or STDERR
      --blkio-weight uint16            Block IO (relative weight), between 10 and 1000, or 0 to disable (default 0)
      --blkio-weight-device list       Block IO weight (relative device weight) (default )
      --cap-add list                   Add Linux capabilities
      --cap-drop list                  Drop Linux capabilities
      --cgroup-parent string           Optional parent cgroup for the container
      --cgroupns string                Cgroup namespace to use (host|private)
                                       'host':    Run the container in the Docker host's cgroup namespace
                                       'private': Run the container in its own private cgroup namespace
                                       '':        Use the cgroup namespace as configured by the default-cgroupns-mode option on the daemon (default)
      --cidfile string                 Write the container ID to the file
      --cpu-period int                 Limit CPU CFS (Completely Fair Scheduler) period
      --cpu-quota int                  Limit CPU CFS (Completely Fair Scheduler) quota
      --cpu-rt-period int              Limit CPU real-time period in microseconds
      --cpu-rt-runtime int             Limit CPU real-time runtime in microseconds
  -c, --cpu-shares int                 CPU shares (relative weight)
      --cpus decimal                   Number of CPUs
      --cpuset-cpus string             CPUs in which to allow execution (0-3, 0,1)
      --cpuset-mems string             MEMs in which to allow execution (0-3, 0,1)
  -d, --detach                         Run container in background and print container ID
      --detach-keys string             Override the key sequence for detaching a container
      --device list                    Add a host device to the container
      --device-cgroup-rule list        Add a rule to the cgroup allowed devices list
      --device-read-bps list           Limit read rate (bytes per second) from a device (default )
      --device-read-iops list          Limit read rate (IO per second) from a device (default )
      --device-write-bps list          Limit write rate (bytes per second) to a device (default )
      --device-write-iops list         Limit write rate (IO per second) to a device (default )
      --disable-content-trust          Skip image verification (default true)
      --dns list                       Set custom DNS servers
      --dns-option list                Set DNS options
      --dns-search list                Set custom DNS search domains
      --domainname string              Container NIS domain name
      --entrypoint string              Overwrite the default ENTRYPOINT of the image
  -e, --env list                       Set environment variables
      --env-file list                  Read in a file of environment variables
      --expose list                    Expose a port or a range of ports
      --gpus gpu-request               GPU devices to add to the container ('all' to pass all GPUs)
      --group-add list                 Add additional groups to join
      --health-cmd string              Command to run to check health
      --health-interval duration       Time between running the check (ms|s|m|h) (default 0s)
      --health-retries int             Consecutive failures needed to report unhealthy
      --health-start-period duration   Start period for the container to initialize before starting health-retries countdown (ms|s|m|h) (default 0s)
      --health-timeout duration        Maximum time to allow one check to run (ms|s|m|h) (default 0s)
      --help                           Print usage
  -h, --hostname string                Container host name
      --init                           Run an init inside the container that forwards signals and reaps processes
  -i, --interactive                    Keep STDIN open even if not attached
      --ip string                      IPv4 address (e.g., 172.30.100.104)
      --ip6 string                     IPv6 address (e.g., 2001:db8::33)
      --ipc string                     IPC mode to use
      --isolation string               Container isolation technology
      --kernel-memory bytes            Kernel memory limit
  -l, --label list                     Set meta data on a container
      --label-file list                Read in a line delimited file of labels
      --link list                      Add link to another container
      --link-local-ip list             Container IPv4/IPv6 link-local addresses
      --log-driver string              Logging driver for the container
      --log-opt list                   Log driver options
      --mac-address string             Container MAC address (e.g., 92:d0:c6:0a:29:33)
  -m, --memory bytes                   Memory limit
      --memory-reservation bytes       Memory soft limit
      --memory-swap bytes              Swap limit equal to memory plus swap: '-1' to enable unlimited swap
      --memory-swappiness int          Tune container memory swappiness (0 to 100) (default -1)
      --mount mount                    Attach a filesystem mount to the container
      --name string                    Assign a name to the container
      --network network                Connect a container to a network
      --network-alias list             Add network-scoped alias for the container
      --no-healthcheck                 Disable any container-specified HEALTHCHECK
      --oom-kill-disable               Disable OOM Killer
      --oom-score-adj int              Tune host's OOM preferences (-1000 to 1000)
      --pid string                     PID namespace to use
      --pids-limit int                 Tune container pids limit (set -1 for unlimited)
      --platform string                Set platform if server is multi-platform capable
      --privileged                     Give extended privileges to this container
  -p, --publish list                   Publish a container's port(s) to the host
  -P, --publish-all                    Publish all exposed ports to random ports
      --pull string                    Pull image before running ("always"|"missing"|"never") (default "missing")
      --read-only                      Mount the container's root filesystem as read only
      --restart string                 Restart policy to apply when a container exits (default "no")
      --rm                             Automatically remove the container when it exits
      --runtime string                 Runtime to use for this container
      --security-opt list              Security Options
      --shm-size bytes                 Size of /dev/shm
      --sig-proxy                      Proxy received signals to the process (default true)
      --stop-signal string             Signal to stop a container (default "SIGTERM")
      --stop-timeout int               Timeout (in seconds) to stop a container
      --storage-opt list               Storage driver options for the container
      --sysctl map                     Sysctl options (default map)
      --tmpfs list                     Mount a tmpfs directory
  -t, --tty                            Allocate a pseudo-TTY
      --ulimit ulimit                  Ulimit options (default )
  -u, --user string                    Username or UID (format: <name|uid>[:<group|gid>])
      --userns string                  User namespace to use
      --uts string                     UTS namespace to use
  -v, --volume list                    Bind mount a volume
      --volume-driver string           Optional volume driver for the container
      --volumes-from list              Mount volumes from the specified container(s)
  -w, --workdir string                 Working directory inside the container
```
2.1 Access app
The app frontend will be available at http://localhost:3000
3. List containers (ps)
```
$ docker ps
CONTAINER ID   IMAGE    COMMAND                  CREATED              STATUS              PORTS                      NAMES
d9cd87256445   my-app   "docker-entrypoint.s…"   About a minute ago   Up About a minute   127.0.0.1:3000->3000/tcp   vibrant_chatelet
```
3.1 Run commands when starting up a container (run)
To start a container from the ubuntu image running a command:

docker run -d ubuntu bash -c "shuf -i 1-10000 -n 1 -o /data.txt && tail -f /dev/null"
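The command handed to the container writes one random number to /data.txt and then runs tail -f /dev/null to keep the container’s main process alive (otherwise it would exit immediately). The shuf part can be tried directly on the host, no Docker needed:

```shell
# Pick one random integer between 1 and 10000 and write it to a file
shuf -i 1-10000 -n 1 -o /tmp/data.txt
cat /tmp/data.txt
```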
3.2 Exec commands in the container (exec)
Commands can be executed in already running container instances with exec, e.g.: docker exec <container-id> <command>
4. Update the application (build)

After updating the app’s source code, the container image needs to be updated too.
4.1 Build the image
We use the same build command that we used to generate the image: docker build -t my-app .
4.2 Stop and remove the old container

Then stop the old container and start a new one with the updated code.
Look for the container ID with docker ps
And then stop it: docker stop asdlfhasgn3k45
Then remove it: docker rm asdlfhasgn3k45
4.3 Start the new container image
Now start the container image with the updated code: docker run -dp 127.0.0.1:3000:3000 my-app
5. Share the app (hub)
Using Docker Hub https://hub.docker.com/ we can share the app. Docker Hub is a library and community for container images, and it is also the default Docker registry.
After creating the repository there, we log in from the console: docker login -u YOUR-USER-NAME
5.1 Create tag
Create a new name for the image with docker tag:

docker tag my-app YOUR-USER-NAME/my-app
5.2 Push the image

We push the image: docker push YOUR-USER-NAME/my-app
6. Persist data (volume)
Each container has its own filesystem, isolated from other containers, even when they are created from the same image.
Volumes provide the ability to connect specific filesystem paths of the container back to the host machine. If you mount a directory in the container, changes in that directory are also seen on the host machine. If you mount that same directory across container restarts, you’d see the same files.
There are two main types of volumes:
- named volumes
- bind mounts
6.1 Volume mount (named volumes)
“By creating a volume and attaching (often called “mounting”) it to the directory where you stored the data, you can persist the data. As your container writes to the my.db file, it will persist the data to the host in the volume.”5
To create a volume mount: docker volume create my-db
Start the app container with the --mount option, giving the volume name and the mount path:

docker run -dp 127.0.0.1:3000:3000 --mount type=volume,src=my-db,target=/etc/mys my-app
Now the app can access and persist data in the my-db volume.
To see where the volume is storing the data: docker volume inspect my-db
6.2 Bind mounts
Files can be shared between the host and the container with bind mounts, and changes will be immediately reflected on both sides.
Bind mounts are useful for developing software.
A bind mount is another type of mount, which lets you share a directory from the host’s filesystem into the container. When working on an application, you can use a bind mount to mount source code into the container. The container sees the changes you make to the code immediately, as soon as you save a file. This means that you can run processes in the container that watch for filesystem changes and respond to them.
To start an interactive bash session in the root directory of an ubuntu container with a bind mount: docker run -it --mount type=bind,src="$(pwd)",target=/src ubuntu bash
In the container’s /src directory we will have the files from the host’s current directory (pwd). Any file added, removed, or modified in that directory inside the container will also be changed in the host’s directory, and vice versa.
6.2.1 Development containers
Using bind mounts is common for local development setups. The advantage is that the development machine doesn’t need to have all of the build tools and environments installed. With a single docker run command, Docker pulls dependencies and tools.
To run a development container with a bind mount:
- Mount your source code into the container
- Install all dependencies
- Start nodemon to watch for filesystem changes
6.2.2 Mount source code into the container
```
docker run -dp 127.0.0.1:3000:3000 \
    -w /app --mount type=bind,src="$(pwd)",target=/app \
    node:18-alpine \
    sh -c "yarn install && yarn run dev"
```
We run a Docker container that:

- `-dp 127.0.0.1:3000:3000` - run in detached (background) mode and create a port mapping
- `-w /app` - sets the working directory, i.e. the current directory that the command will run from
- `--mount type=bind,src="$(pwd)",target=/app` - bind mount the current directory from the host into the `/app` directory in the container
- `node:18-alpine` - the image to use; note that this is the base image for the app
- `sh -c "yarn install && yarn run dev"` - start a shell using `sh` (alpine doesn’t have `bash`), run `yarn install` to install packages, and then run `yarn run dev` to start the development server. In `package.json`, the `dev` script starts `nodemon` to monitor changes in files.
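For reference, the dev script mentioned above could be wired up in the app’s package.json roughly like this (a sketch; the entry point and the nodemon version are assumptions):

```json
{
  "scripts": {
    "dev": "nodemon src/index.js"
  },
  "devDependencies": {
    "nodemon": "^2.0.20"
  }
}
```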
This can be monitored by watching the Docker logs: docker logs -f <container-id>
Now every change to a file in the source code directory is reflected in the running container right away, without having to run the docker build command to rebuild the image each time.
7. Multi-container apps (network)

If we need to add MySQL to the application stack instead of working with SQLite, we should create a dedicated container for it rather than adding it to the app’s source code container.
Reasons6 for keeping MySQL in a separate container from the source code:
- “to scale APIs and front-ends differently than databases”
- “Separate containers let you version and update versions in isolation.”
- To use another database in production or “to use a managed service for the database in production”.
- “Running multiple processes will require a process manager (the container only starts one process), which adds complexity to container startup/shutdown.”
To make isolated containers communicate with each other, they have to be in the same network.
“There are two ways to put a container on a network”6:
- “Assign the network when starting the container.”
- “Connect an already running container to a network.”
7.1.1 Create network and attach a MySQL container at startup
Create the network: docker network create my-app
Start a MySQL container and attach it to the network:
```
docker run -d \
    --network my-app --network-alias mysql \
    -v my-mysql-data:/var/lib/mysql \
    -e MYSQL_ROOT_PASSWORD=secret \
    -e MYSQL_DATABASE=mys \
    mysql:8.0
```
The volume is created automatically.
Try connecting to the database: docker exec -it <mysql-container-id> mysql -u root -p
7.1.2 Run the app with MySQL
Run the app in the same network as the MySQL container.

The hostname of the MySQL container is the alias we specified above with --network-alias:
```
docker run -dp 127.0.0.1:3000:3000 \
    -w /app -v "$(pwd):/app" \
    --network my-app \
    -e MYSQL_HOST=mysql \
    -e MYSQL_USER=root \
    -e MYSQL_PASSWORD=secret \
    -e MYSQL_DB=mys \
    node:18-alpine \
    sh -c "yarn install && yarn run dev"
```
Check what’s going on: docker logs -f <container-id>
8. Docker Compose

Docker Compose is a tool that was developed to help define and share multi-container applications. The big advantage of using Compose is that you can define your application stack in a file, keep it at the root of your project repository (so it is version controlled), and easily enable someone else to contribute to your project.
8.1 Create file
At the root of the project, create a compose.yaml file.
8.2 Services (containers)
“Define the list of services (or containers) we want to run as part of our application.”
“Define the service entry and the image for the container. We can pick any name for the service. The name will automatically become a network alias, which will be useful when defining our MySQL service.”
```yaml
services:
  app:
    image: node:18-alpine
    command: sh -c "yarn install && yarn run dev"
    ports:
      - 127.0.0.1:3000:3000
    working_dir: /app
    volumes:
      - ./:/app
    environment:
      MYSQL_HOST: mysql
      MYSQL_USER: root
      MYSQL_PASSWORD: secret
      MYSQL_DB: myapps
  mysql:
    image: mysql:8.0
    volumes:
      - myapp-mysql-data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: myapps

volumes:
  myapp-mysql-data:
```
8.3 Run Compose
“Start up the application stack using the docker compose up command. We’ll add the -d flag to run everything in the background.”
docker compose up -d
Look at the Compose logs: docker compose logs -f
8.4 Tear it all down
docker compose down