Docker Quick Guide


This document is a condensed paraphrase of the Getting Started tutorial included with Docker Desktop. The tutorial is ideal for those just learning how to use Docker; this document is meant as a cheat sheet for after one has completed the tutorial, an easier way to reference important commands and information.

Running an Existing Image

  • To run a container from an existing image: docker run -d -p 80:80 provider/image.
    • -d – Runs in detached mode.
    • -p 80:80 – Maps port 80 on the host to port 80 in the container.
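Put together, a minimal sketch of the run-and-verify flow; nginx stands in for an actual provider image and the container name quickguide-web is made up. The commands are gated behind a RUN_DEMO variable because they need a running Docker daemon:

```shell
# Sketch: run a published image detached with a port mapping.
# Gated behind RUN_DEMO=1 (needs a running Docker daemon);
# "quickguide-web" is a hypothetical container name, nginx a stand-in image.
if [ "${RUN_DEMO:-0}" = "1" ]; then
  docker run -d -p 80:80 --name quickguide-web nginx  # -d detached, -p host:container
  docker ps --filter name=quickguide-web              # confirm it is up
  docker rm -f quickguide-web                         # clean up
fi
status=ok
```

Set RUN_DEMO=1 to actually execute the commands.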

Creating an Image for Our Application

  • Within a folder containing our application we create a Dockerfile:
FROM node:12-alpine
WORKDIR /app
COPY . .
RUN yarn install --production
CMD ["node", "/app/src/index.js"]
  • FROM node:12-alpine – The base image we’ll be building on. (It is possible to build without a base image – e.g., when you are building a base image yourself – but generally this is a lot of unnecessary reinvention of the wheel.)
  • WORKDIR /app – Sets the working directory inside the image for the instructions that follow.
  • We copy all our app files into the image using COPY . .
  • We install our application’s dependencies using RUN yarn install --production.
  • When a container is started based on this image it will launch with the command node /app/src/index.js.
  • We can then build the container image: docker build -t name-of-image . (When performing this on a Windows host for a Linux container image, all files added to the image will have permissions set as -rwxr-xr-x.)
    • -t gives our image a human friendly name.
  • To start a container based on our image: docker run -dp 3000:3000 name-of-image.
    • Single-character switches can be combined into a single switch; here d is detached and p is port.
  • Navigate browser to http://localhost:3000.
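The build-then-run flow above as one sketch (name-of-image is the tutorial’s placeholder; gated behind RUN_DEMO because it needs a Docker daemon and a Dockerfile in the current directory):

```shell
# Build the image from the Dockerfile in the current directory,
# then start a container from it.
if [ "${RUN_DEMO:-0}" = "1" ]; then
  docker build -t name-of-image .        # -t: human-friendly image name
  docker run -dp 3000:3000 name-of-image # -d and -p combined into one switch
fi
status=ok
```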

Updating Our Application and Its Container

  • Make edits to application.
  • Build image: docker build -t name-of-image ..
  • Find the ID of the container by using docker ps.
  • Stop the container: docker stop container-id.
  • Remove the container: docker rm container-id.
    • Use the -f switch to bypass the need for using the stop command.
  • Start a new container from the updated image: docker run -dp 3000:3000 name-of-image.
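The whole rebuild/replace cycle can be scripted. This sketch uses docker ps -q --filter ancestor=... to look up the old container’s ID instead of reading it off docker ps by hand (RUN_DEMO gate as before):

```shell
# Rebuild the image, then replace any running container based on it.
if [ "${RUN_DEMO:-0}" = "1" ]; then
  docker build -t name-of-image .
  old_id=$(docker ps -q --filter "ancestor=name-of-image")
  [ -n "$old_id" ] && docker rm -f "$old_id"  # -f stops and removes in one step
  docker run -dp 3000:3000 name-of-image
fi
status=ok
```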

Sharing App

  • Use Docker Hub to host the image.
    • Log in.
    • Create Repository.
    • Name the repository name-of-image.
      • Ensure visibility is public.
    • Click Create.
  • Login to Docker Hub from CLI: docker login -u user-name.
  • Tag the image: docker tag name-of-image user-name/name-of-image.
  • Push the image to the repo: docker push user-name/name-of-image.
  • You can test your image by using Play with Docker.
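The share flow end to end (user-name and name-of-image are placeholders; needs a daemon and a Docker Hub account, hence the RUN_DEMO gate):

```shell
# Tag the local image with our Docker Hub namespace, then push it.
if [ "${RUN_DEMO:-0}" = "1" ]; then
  docker login -u user-name
  docker tag name-of-image user-name/name-of-image
  docker push user-name/name-of-image
fi
status=ok
```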


Persisting Data

  • Changes made in a container are not reflected in other containers started from the same image unless the image is rebuilt. File changes therefore do not persist by default beyond the life of a specific container (which should be treated as ephemeral).
  • By using Volumes, which connect a location in a container to a location on the host, one can persist files on the host and utilize them within containers.
  • Named Volumes
    • Create volume: docker volume create name-of-volume
    • Start a container with the -v switch to mount the named volume we just created: docker run -dp 3000:3000 -v name-of-volume:/path/to/file name-of-image
    • To find the location of the named volume use: docker volume inspect name-of-volume.
      • The location is actually within a VM used by Docker to run containers. You’d need to connect to that VM before being able to see the files.
    • If we specify a named volume that does not yet exist when we execute docker run, Docker will create it for us.
  • Bind Mounts
    • Lets us choose the exact host location that is mounted into the container.
    • Run:
docker run -dp 3000:3000 \
  -w /app -v ${PWD}:/app \
  repo:name-of-image \
  sh -c "yarn install && yarn run dev"
      • -w sets the working directory, that is the directory from which the command is being run.
      • repo:name-of-image defines the base image our container is running off of.
      • On Windows, replace ${PWD} (present working directory) with %cd%.
    • Local dev tools don’t need to be installed when using a bind mount, as they can be run inside the container using the docker exec command.
  • Additional volume types are available including SFTP, S3, etc.
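Named volume and bind mount side by side in one sketch (name-of-volume, the container paths, and the image names are placeholders; RUN_DEMO gate as before):

```shell
if [ "${RUN_DEMO:-0}" = "1" ]; then
  # Named volume: Docker manages where the data actually lives.
  docker volume create name-of-volume
  docker run -dp 3000:3000 -v name-of-volume:/path/to/file name-of-image
  docker volume inspect name-of-volume  # "Mountpoint" is inside Docker's VM

  # Bind mount: we choose the host path (here, the current directory).
  docker run -dp 3000:3000 \
    -w /app -v "${PWD}":/app \
    repo:name-of-image \
    sh -c "yarn install && yarn run dev"
fi
status=ok
```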


Viewing Logs

  • To see a container’s logs, run docker logs -f container-id.
    • -f switch tells docker to “follow” the log, that is, continue updating the console with the latest log messages.
    • To see logs for only a specific service add the name of the service to the end of the logs command, e.g., docker-compose logs -f app.
    • To exit, hit Ctrl+C.


Networking

  • By default, containers are isolated from all other systems. If placed on the same network, however, they can talk to each other.
  • Create a network: docker network create name-of-network.
  • We connect a container to an existing network like so: docker run --network name-of-network [other params].
    • We can use a --network-alias friendly-network-name to provide a friendly DNS name to our container.
  • To connect another container to the same network we use the same process: docker run --network name-of-network [other params].
    • Again, we can use --network-alias to set a friendly DNS name for our container.
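A sketch of two containers meeting on one network. The network name todo-app is made up; the MySQL image and environment variables mirror the tutorial’s app settings:

```shell
if [ "${RUN_DEMO:-0}" = "1" ]; then
  docker network create todo-app
  # The alias "mysql" becomes a DNS name other containers on this network resolve.
  docker run -d --network todo-app --network-alias mysql \
    -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_DATABASE=todos \
    mysql:5.7
  # The app then reaches the database simply via the host name "mysql".
  docker run -dp 3000:3000 --network todo-app \
    -e MYSQL_HOST=mysql -e MYSQL_USER=root \
    -e MYSQL_PASSWORD=secret -e MYSQL_DB=todos \
    name-of-image
fi
status=ok
```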

Multi-Container Applications

  • Docker Compose allows us to define in a YAML file multiple services spread across containers and then bring all of these up with a single command (or tear them down).
  • If on Linux, you’ll need to install docker-compose; it comes bundled with the Windows/Mac Docker Desktop/Toolbox packages.
  • We store our YAML in a file in the root of our project: docker-compose.yml.
  • Define the schema version we’ll be using in this YAML file, then the services that make up our stack:
version: "3.7"

services:
  app:
    image: node:12-alpine
    command: sh -c "yarn install && yarn run dev"
    ports:
      - 3000:3000
    working_dir: /app
    volumes:
      - ./:/app
    environment:
      MYSQL_HOST: mysql
      MYSQL_USER: root
      MYSQL_PASSWORD: secret
      MYSQL_DB: todos

  mysql:
    image: mysql:5.7
    volumes:
      - todo-mysql-data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: todos

volumes:
  todo-mysql-data:
  • We can start the application stack by running docker-compose up -d.
  • Note that the network is created automatically by default by Docker Compose for the stack.
  • Also note that the service name provided becomes the network (DNS) alias by default.
    • Note that the name of each container takes the form project-service-N, where N is 1 until you start spinning up multiple instances of a single service (e.g., two MySQL servers).
  • To tear down the stack: docker-compose down.
    • This removes the containers and network but not the volumes; add the --volumes flag to remove those as well.
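The compose lifecycle in one sketch (run from the directory containing docker-compose.yml; RUN_DEMO gate as before):

```shell
if [ "${RUN_DEMO:-0}" = "1" ]; then
  docker-compose up -d            # create the network + containers, detached
  docker-compose ps               # list the stack's containers
  docker-compose down --volumes   # tear down, including named volumes
fi
status=ok
```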

Image Building Best Practices

  • Use docker image history to see the layers in an image, for example: docker image history getting-started.
    • The base image is at the bottom of the output, the newest layer at the top.
    • You can see the commands that were run as well as the size of each layer.
    • If some output is truncated you can use --no-trunc to ensure full output is provided.
  • When a layer is changed, all downstream layers must be rebuilt.
    • We want to move things that are unlikely to change or rarely change into the earliest layers – for example a command to install dependencies – so that layers which are static don’t end up being rebuilt because they are downstream of more frequently changed layers.
    • For example, one might copy in a package.json and a yarn.lock file, run yarn install --production and only then copy in the actual application files.
    • One can also use multi-stage builds, which separate build-time dependencies from runtime dependencies and reduce image size by shipping only the bare essentials the app needs to run.
      • For example, one can use one image to handle the build and then copy only the production files into the production image, rather than everything related to the build (e.g., source files and tooling).
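Combining both ideas, a sketch of what a cache-friendly, multi-stage Dockerfile for the tutorial’s Node app might look like (the stage name build and the file layout are illustrative assumptions):

```dockerfile
# Stage 1: install dependencies and assemble the app. Copying
# package.json/yarn.lock first means the expensive `yarn install` layer
# is only rebuilt when the dependency list changes, not on every source edit.
FROM node:12-alpine AS build
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --production
COPY . .

# Stage 2: the shipped image carries only the runtime artifacts,
# none of the build tooling.
FROM node:12-alpine
WORKDIR /app
COPY --from=build /app ./
CMD ["node", "src/index.js"]
```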

Container Orchestration

  • Software such as Kubernetes, Swarm, Nomad, and ECS act as orchestrators of containers.
  • They receive (from us) an expected state (how many instances of container x, etc.) and then manage the machines in a cluster to ensure that the actual state conforms to the expected state.
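As a concrete illustration of “expected state,” a minimal sketch of a Kubernetes Deployment (all names are hypothetical) declaring three replicas; the orchestrator then starts or replaces containers until the actual state matches:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: getting-started
spec:
  replicas: 3                 # expected state: three running instances
  selector:
    matchLabels:
      app: getting-started
  template:
    metadata:
      labels:
        app: getting-started
    spec:
      containers:
        - name: app
          image: user-name/name-of-image
          ports:
            - containerPort: 3000
```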