Docker Notes

What is Docker?

Some notes while learning Docker with the Docker Getting Started Tutorial.


Concept: a container is an isolated environment for building, running, deploying, and sharing applications. A container interacts with its own private filesystem and runs in its own namespaces; this private filesystem is provided by a Docker image.

Relevant commands

  • Images:
    • Build an image from the Dockerfile in the current directory: docker build -t <image[:tag]> .
    • Pull image from registry: docker image pull <image>
    • List images: docker image ls
  • Containers:
    • Run a container in detached mode with a port mapping (e.g. a web application built from an image): docker run -d -p <host_port>:<container_port> --name <container_name> <image>
    • Show available containers: docker ps -a
    • Stop a container: docker stop <container_name> or docker stop <container_id>
    • Start a stopped container: docker start <container_name> or docker start <container_id>
    • Start a container and run a specific command: docker container run -w <working_dir> <image> <command>
    • Run a command in a running container: docker exec <container_name> <command>
    • Remove a container: docker rm <container_name>, or docker rm -f <container_name> to stop and remove a container in one command.
  • Share a local image to Docker Hub:
    1. Login to Docker Hub: docker login
    2. Create a tag image referring to the local one: docker tag <source_image[:tag]> <Docker_ID>/<repository_name[:tag]>
    3. Push to Docker Hub: docker push <Docker_ID>/<repository_name[:tag]>

Working with images

We can create an image from a customized container, i.e. a container in which we have already performed the necessary actions, by committing it to an image. A much more powerful and practical way is to create an image from a Dockerfile.

Build a custom image from a container

  • Commit a container to an image: docker commit <container_id>
  • Tag an image: docker tag <image_id> <image_tag>

Build an image with a Dockerfile

A Dockerfile contains instructions for building an image. It is simply a text file.

  • Build an image from a Dockerfile: docker build -t <image[:tag]> .
  • Create a ‘Dockerfile’: (see the Dockerfile reference)
    • FROM: specifies the base image we are going to build on. It is encouraged to build the image on an Official Image.
    • WORKDIR: specifies the working directory for the subsequent instructions.
    • COPY: copies files from the host into the image filesystem.
    • CMD: specifies the default command to run when the container starts.
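
Putting the four instructions together, a minimal Dockerfile for a hypothetical Node.js app might look like the following sketch (the base image, paths, and start command are illustrative assumptions, not prescribed by the tutorial):

```dockerfile
# Build on an official base image
FROM node:18-alpine
# Working directory for the subsequent instructions
WORKDIR /app
# Copy the project files from the host into the image filesystem
COPY . .
# Default command to run when the container starts
CMD ["node", "src/index.js"]
```

Building and running it would then follow the commands above: docker build -t my-app . and docker run -d -p 3000:3000 --name my-app my-app.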

Each instruction in a Dockerfile that changes the content of the filesystem (e.g. installing software, copying files from the host into the image, etc.) adds a layer to the image.

Persistent data, sharing data between containers

Named volume

Data manipulated within a container is lost once the container is removed. Moreover, the data is confined/isolated within that particular container. Volumes are here to help, by connecting a specific path in the container’s filesystem to the host platform.

  • Create a volume: docker volume create <volume_name>
  • Run a container with a volume mount: docker run [OPTIONS] -v <volume_name>:<container_mount_point> <image> [COMMAND] (Note: if the volume has not been created yet, Docker will automatically create one for us.)
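
For example, data written by one container survives its removal and is visible to the next container that mounts the same volume. The volume and image names below are illustrative assumptions, and running this sketch requires a local Docker daemon:

```
docker volume create todo-db                   # create a named volume
docker run -d --name app1 -v todo-db:/etc/todos my-todo-app
docker rm -f app1                              # container gone, data kept
docker run -d --name app2 -v todo-db:/etc/todos my-todo-app   # sees the same data
```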

Bind mount

Bind mounts are of great help when we want to mount a specific local directory into a container, such as all the development libraries, data, and programs. With a bind mount, the container or target machine does not need to have all the build tools and environments installed.

  • Run a container with bind mount: docker run [OPTIONS] -v "<local_path>:<container_mount_point>" <image> [COMMAND]
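
For instance, a sketch of mounting the current project directory into a Node container, so edits on the host are seen immediately inside the container (the image, paths, and commands are assumptions for illustration; a local Docker daemon is required):

```
# Bind-mount the current directory as /app and use it as the working
# directory; the container then works directly on the host's source tree
docker run -it --rm -v "$(pwd):/app" -w /app \
    node:18-alpine sh -c "yarn install && yarn run dev"
```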

NOTE: To mount a specific local directory on Windows into a Linux container, File Sharing must be enabled for that directory. Go to Docker Dashboard > Settings > Resources > File sharing and add the directory you would like to share with a container.

Multi-Container Apps

For programs or systems with multiple components to handle, e.g. frontend, backend, database, etc., we can create multiple containers, each responsible for one job, and then use networking to connect them. Simply put, containers can communicate with each other if they are on the same network.

  • Create a network: docker network create <network_name>
  • Run an image with network: docker run --network <network_name> --network-alias <network_alias> [OPTIONS] <image>
  • To list available networks: docker network ls
  • To get the details of a network, such as IP address, gateway, etc.: docker network inspect <network_name>
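
For example, a sketch of starting a database container and an app container on the same user-defined network (the network name, images, and password are illustrative assumptions; a local Docker daemon is required). The app can then reach the database by its network alias:

```
docker network create todo-net
docker run -d --network todo-net --network-alias mysql \
    -e MYSQL_ROOT_PASSWORD=secret mysql:8
docker run -d --network todo-net my-todo-app   # can resolve the hostname "mysql"
docker network inspect todo-net                # lists both containers
```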

Combining all the steps into an automation process

With a YAML file that defines all the necessary services and information for the multi-container environment of an application stack, we can start the whole system with just one Docker Compose command. This also makes version control and sharing of the whole project much easier.

  1. Create a Docker Compose file named docker-compose.yml and place it at the root of the project.

  2. Define schema version:

    version: "3.7"
  3. Define the list of services/containers to run. For each service, pick a name; that name will also be its network alias. The content of each service is basically the set of parameters that would be passed to docker run. E.g.

        services:
          <service_name>:
            image: <image_name>
            command: <command>
            ports:
              - <host_port>:<container_port>
            working_dir: <working_dir>
            volumes:
              - <volume_name>:<container_mount_point>
  4. If we need to use a named volume, we also need to declare it at the top level of the file:

        services:
          <service_name>:
            # other definitions
            volumes:
              - <volume_name>:<container_mount_point>

        volumes:
          <volume_name>:
  5. Start the application: docker-compose up [-d]. The -d flag runs all the containers in the background.

  6. Stop and remove the containers and networks defined in the file; with -v, also remove the named volumes defined in the volumes section: docker-compose down -v.

    Note: Both the up and down commands should be run in the directory where the docker-compose.yml resides.
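
Putting the steps above together, a minimal docker-compose.yml for a hypothetical app with a MySQL database might look like the following sketch (the service names, images, ports, and credentials are illustrative assumptions):

```yaml
version: "3.7"

services:
  app:                          # service name doubles as the network alias
    image: node:18-alpine       # assumed base image for illustration
    command: node src/index.js
    ports:
      - 3000:3000
    working_dir: /app
    volumes:
      - ./:/app                 # bind mount of the project directory
    environment:
      MYSQL_HOST: mysql         # reach the database by its service name

  mysql:
    image: mysql:8
    volumes:
      - mysql-data:/var/lib/mysql   # named volume for persistence
    environment:
      MYSQL_ROOT_PASSWORD: secret   # illustrative only
      MYSQL_DATABASE: todos

volumes:
  mysql-data:                   # named volumes must be declared at top level
```

With this file in the project root, docker-compose up -d starts both containers on a shared network, and docker-compose down -v tears everything down including the mysql-data volume.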

Best practices for building images

Put frequently updated layers downstream

As mentioned previously in the section on building with a Dockerfile, an image is built in layers. When rebuilding an image, we want to minimize the build time. When a layer changes, Docker recreates that layer and all newer/downstream layers, while all older/upstream layers are reused from the cache. So for an efficient build process, we should put the external dependencies and other settings that rarely change at the beginning of the Dockerfile, and place the things that are updated frequently, such as the source code, towards the end of the file. We can also use a .dockerignore file to exclude any files that should not be copied into the image.
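
A sketch of this ordering for a hypothetical Node.js app (the file names and commands are illustrative assumptions): the rarely-changing dependency manifest is copied and installed first, and the frequently-edited source is copied last, so a code change only invalidates the final layers:

```dockerfile
FROM node:18-alpine
WORKDIR /app
# Dependencies change rarely: copy only the manifest and install first,
# so this layer stays cached across most rebuilds
COPY package.json yarn.lock ./
RUN yarn install --production
# Source code changes often: copy it last so only this layer
# and the ones after it are rebuilt
COPY src ./src
CMD ["node", "src/index.js"]
```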

Multi-stage builds

By defining several stages in a Dockerfile, we can separate the build-time dependencies from the run-time dependencies, so the final image contains only what is needed to run the application.

FROM <base_image_name> AS <stage_name>  # This is one stage
# other settings

FROM <another_base_image_name>
COPY --from=<stage_name> <source_files> <destination_files>
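
As a concrete sketch of the template above (a hypothetical Go app; the images, paths, and binary name are illustrative assumptions), the first stage carries the full compiler toolchain while the final image receives only the compiled binary:

```dockerfile
# Stage 1: build with the full Go toolchain
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN go build -o /bin/app .

# Stage 2: copy only the compiled binary into a small runtime image
FROM alpine:3.19
COPY --from=build /bin/app /bin/app
CMD ["/bin/app"]
```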
Leo Mak
Make the world a better place, piece by piece.