Docker Context
This guide shows how contexts make it easy for a single Docker CLI to manage multiple Swarm clusters, multiple Kubernetes clusters, and multiple individual Docker nodes.
A single Docker CLI can have multiple contexts. Each context contains all of the endpoint and security information required to manage a different cluster or node. The docker context command makes it easy to configure these contexts and switch between them.
As an example, a single Docker client on your company laptop might be configured with two contexts: dev-k8s and prod-swarm. dev-k8s contains the endpoint data and security credentials to configure and manage a Kubernetes cluster in a development environment. prod-swarm contains everything required to manage a Swarm cluster in a production environment. Once these contexts are configured, you can use the top-level docker context use <context-name> command to easily switch between them.
For information on using Docker Context to deploy your apps to the cloud, see Deploying Docker containers on Azure and Deploying Docker containers on ECS.
Prerequisites
To follow the examples in this guide, you’ll need:
- A Docker client that supports the top-level context command
Run docker context to verify that your Docker client supports contexts.
You will also need one of the following:
- Docker Swarm cluster
- Single-engine Docker node
- Kubernetes cluster
The anatomy of a context
A context is a combination of several properties. These include:
- Name
- Endpoint configuration
- TLS info
- Orchestrator
The easiest way to see what a context looks like is to view the default context.
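Running docker context ls on a fresh install shows something like the following (output abridged and approximate; exact columns vary by Docker version):

```
$ docker context ls
NAME        DESCRIPTION                               DOCKER ENDPOINT               KUBERNETES ENDPOINT   ORCHESTRATOR
default *   Current DOCKER_HOST based configuration   unix:///var/run/docker.sock                         swarm
```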
This shows a single context called “default”. It’s configured to talk to a Swarm cluster through the local /var/run/docker.sock Unix socket. It has no Kubernetes endpoint configured.
The asterisk in the NAME column indicates that this is the active context. This means all docker commands will be executed against the "default" context unless overridden with environment variables such as DOCKER_HOST and DOCKER_CONTEXT, or on the command line with the --context and --host flags.
Dig a bit deeper with docker context inspect. In this example, we're inspecting the context called default.
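An abridged sketch of the output (field names follow the context format described below; exact values vary by setup):

```
$ docker context inspect default
[
    {
        "Name": "default",
        "Metadata": {
            "StackOrchestrator": "swarm"
        },
        "Endpoints": {
            "docker": {
                "Host": "unix:///var/run/docker.sock",
                "SkipTLSVerify": false
            }
        }
    }
]
```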
This context is using "swarm" as the orchestrator (metadata.stackOrchestrator). It is configured to talk to an endpoint exposed on a local Unix socket at /var/run/docker.sock (Endpoints.docker.Host), and requires TLS verification (Endpoints.docker.SkipTLSVerify is false).
Create a new context
You can create new contexts with the docker context create command.
The following example creates a new context called “docker-test” and specifies the following:
- Default orchestrator = Swarm
- Issue commands to the local Unix socket /var/run/docker.sock
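A sketch of the command (flag names per the docker context create reference; older clients may not support them):

```
$ docker context create docker-test \
    --default-stack-orchestrator=swarm \
    --docker host=unix:///var/run/docker.sock
```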
The new context is stored in a meta.json file below ~/.docker/contexts/. Each new context you create gets its own meta.json stored in a dedicated sub-directory of ~/.docker/contexts/.
Note: The default context behaves differently than manually created contexts. It does not have a meta.json configuration file, and it dynamically updates based on the current configuration. For example, if you switch your current Kubernetes config using kubectl config use-context , the default Docker context will dynamically update itself to the new Kubernetes endpoint.
You can view the new context with docker context ls and docker context inspect <context-name>.
The following can be used to create a context with Kubernetes as the default orchestrator, using the existing kubeconfig stored in /home/ubuntu/.kube/config. For this to work, you will need a valid kubeconfig file at /home/ubuntu/.kube/config. If your kubeconfig has more than one context, the current context (kubectl config current-context) is used.
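A sketch of such a command:

```
$ docker context create k8s-test \
    --default-stack-orchestrator=kubernetes \
    --kubernetes config-file=/home/ubuntu/.kube/config \
    --docker host=unix:///var/run/docker.sock
```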
You can view all contexts on the system with docker context ls .
The current context is indicated with an asterisk (“*”).
Use a different context
You can use docker context use to quickly switch between contexts.
The following command will switch the docker CLI to use the “k8s-test” context.
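```
$ docker context use k8s-test
```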
Verify the operation by listing all contexts and ensuring the asterisk (“*”) is against the “k8s-test” context.
docker commands will now target endpoints defined in the “k8s-test” context.
You can also set the current context using the DOCKER_CONTEXT environment variable. This overrides the context set with docker context use .
Use the appropriate command below to set the context to docker-test using an environment variable.
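A sketch for common shells:

```
# PowerShell
$env:DOCKER_CONTEXT='docker-test'

# Bash / Zsh
export DOCKER_CONTEXT=docker-test
```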
Run a docker context ls to verify that the “docker-test” context is now the active context.
You can also use the global --context flag to override the context specified by the DOCKER_CONTEXT environment variable. For example, the following will send the command to a context called "production".
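```
$ docker --context production container ls
```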
Exporting and importing Docker contexts
The docker context command makes it easy to export and import contexts on different machines with the Docker client installed.
You can use the docker context export command to export an existing context to a file. This file can later be imported on another machine that has the docker client installed.
By default, contexts are exported as native Docker contexts. You can export and import these using the docker context command. If the context you are exporting includes a Kubernetes endpoint, the Kubernetes part of the context is included in the export and import operations.
There is also an option to export just the Kubernetes part of a context. This produces a native kubeconfig file that can be manually merged with an existing ~/.kube/config file on another host that has kubectl installed. You cannot export just the Kubernetes portion of a context and then import it with docker context import; the only way to import the exported Kubernetes config is to manually merge it into an existing kubeconfig file.
Let’s look at exporting and importing a native Docker context.
Exporting and importing a native Docker context
The following example exports an existing context called “docker-test”. It will be written to a file called docker-test.dockercontext .
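```
$ docker context export docker-test
Written file "docker-test.dockercontext"
```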
Check the contents of the export file.
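For example:

```
$ cat docker-test.dockercontext
```

The export bundles the context metadata (name, endpoints, orchestrator) together with any TLS material.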
This file can be imported on another host using docker context import . The target host must have the Docker client installed.
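For example (output approximate):

```
$ docker context import docker-test docker-test.dockercontext
docker-test
Successfully imported context "docker-test"
```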
You can verify that the context was imported with docker context ls .
The format of the import command is docker context import <context-name> <context-file>.
Now, let’s look at exporting just the Kubernetes parts of a context.
Exporting a Kubernetes context
You can export a Kubernetes context only if the context you are exporting has a Kubernetes endpoint configured. You cannot import a Kubernetes context using docker context import .
These steps use the --kubeconfig flag to export only the Kubernetes elements of the existing k8s-test context to a file called k8s-test.kubeconfig. The cat command then shows that it's exported as a valid kubeconfig file.
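```
$ docker context export k8s-test --kubeconfig
Written file "k8s-test.kubeconfig"
```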
Verify that the exported file contains a valid kubectl config.
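```
$ cat k8s-test.kubeconfig
```

A valid kubeconfig begins with apiVersion: v1 and kind: Config, and lists clusters, contexts, and users.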
You can merge this with an existing ~/.kube/config file on another machine.
Updating a context
You can use docker context update to update fields in an existing context.
The following example updates the “Description” field in the existing k8s-test context.
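```
$ docker context update k8s-test --description "Test Kubernetes cluster"
k8s-test
```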
Build context
The docker build or docker buildx build commands build Docker images from a Dockerfile and a “context”.
A build’s context is the set of files located at the PATH or URL specified as the positional argument to the build command:
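```
# General form (per the docker build reference):
docker build [OPTIONS] PATH | URL | -
```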
The build process can refer to any of the files in the context. For example, your build can use a COPY instruction to reference a file in the context, or a RUN --mount=type=bind instruction for better performance with BuildKit. The build context is processed recursively: a PATH includes any subdirectories, and a URL includes the repository and its submodules.
PATH context
This example shows a build command that uses the current directory (.) as the build context:
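```
$ docker build .
```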
With the following Dockerfile (a sketch; only foo is referenced):
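```
# syntax=docker/dockerfile:1
FROM busybox
WORKDIR /src
COPY foo .
```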
And this directory structure (matching the description below):
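```
.
├── Dockerfile
├── bar
├── foo
└── node_modules
```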
The legacy builder sends the entire directory to the daemon, including bar and node_modules directories, even though the Dockerfile does not use them. When using BuildKit, the client only sends the files required by the COPY instructions, in this case foo .
In some cases you may want to send the entire context:
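```
# a Dockerfile line that requires the whole context
COPY . .
```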
You can use a .dockerignore file to exclude some files or directories from being sent:
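```
# .dockerignore: keep these out of the build context (a sketch)
node_modules
bar
```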
Warning
Avoid using your root directory, / , as the PATH for your build context, as it causes the build to transfer the entire contents of your hard drive to the daemon.
URL context
The URL parameter can refer to three kinds of resources:
Git repositories
When the URL parameter points to the location of a Git repository, the repository acts as the build context. The builder recursively pulls the repository and its submodules. A shallow clone is performed, so only the latest commits are pulled down, not the entire history. The repository is first pulled into a temporary directory on your host; after that succeeds, the directory is sent to the daemon as the context. The local copy gives you the ability to access private repositories using local user credentials, VPNs, and so forth.
Note
If the URL parameter contains a fragment, the system recursively clones the repository and its submodules using a git clone --recursive command.
Git URLs accept a context configuration parameter in the form of a URL fragment, separated by a colon (:). The first part represents the reference that Git will check out, and can be either a branch, a tag, or a remote reference. The second part represents a subdirectory inside the repository that will be used as the build context.
For example, run this command to use a directory called docker in the branch container:
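```
# user/myrepo is a placeholder repository
$ docker build https://github.com/user/myrepo.git#container:docker
```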
The following table represents all the valid suffixes with their build contexts:
| Build Syntax Suffix | Commit Used | Build Context Used |
|---|---|---|
| myrepo.git | refs/heads/<default branch> | / |
| myrepo.git#mytag | refs/tags/mytag | / |
| myrepo.git#mybranch | refs/heads/mybranch | / |
| myrepo.git#pull/42/head | refs/pull/42/head | / |
| myrepo.git#:myfolder | refs/heads/<default branch> | /myfolder |
| myrepo.git#master:myfolder | refs/heads/master | /myfolder |
| myrepo.git#mytag:myfolder | refs/tags/mytag | /myfolder |
| myrepo.git#mybranch:myfolder | refs/heads/mybranch | /myfolder |
By default, the .git directory is not kept on Git checkouts. You can set the BuildKit built-in build argument BUILDKIT_CONTEXT_KEEP_GIT_DIR=1 to keep it, which can be useful if you want to retrieve Git information during your build:
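```
# keep .git available to the build (repository URL is a placeholder)
docker build --build-arg BUILDKIT_CONTEXT_KEEP_GIT_DIR=1 \
  https://github.com/user/myrepo.git#main
```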
Tarball contexts
If you pass a URL to a remote tarball, the URL itself is sent to the daemon:
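```
$ docker build http://server/context.tar.gz
```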
The download operation will be performed on the host the daemon is running on, which is not necessarily the same host from which the build command is being issued. The daemon will fetch context.tar.gz and use it as the build context. Tarball contexts must be tar archives conforming to the standard tar UNIX format and can be compressed with any one of the xz , bzip2 , gzip or identity (no compression) formats.
Text files
Instead of specifying a context, you can pass a single Dockerfile in the URL or pipe the file in via STDIN . To pipe a Dockerfile from STDIN :
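```
$ docker build - < Dockerfile
```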
With PowerShell on Windows, you can run:
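```
Get-Content Dockerfile | docker build -
```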
If you use STDIN or specify a URL pointing to a plain text file, the system places the contents into a file called Dockerfile, and any -f, --file option is ignored. In this scenario, there is no context.
The following example builds an image using a Dockerfile that is passed through stdin. No files are sent as build context to the daemon.
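```
$ docker build -t myimage:latest - <<EOF
FROM busybox
RUN echo "hello world"
EOF
```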
Omitting the build context can be useful in situations where your Dockerfile does not require files to be copied into the image, and improves the build-speed, as no files are sent to the daemon.
Gotchas in Writing Dockerfile
Once you write build instructions into a Dockerfile, you can rebuild the same image with just the docker build command.
A Dockerfile is also useful for telling somebody else what job the container does. Your teammates can tell what the container is supposed to do just by reading the Dockerfile; they don't need to log in to the container and figure out what it is doing with the ps command.
For these reasons, you must use a Dockerfile when you build images. However, writing a Dockerfile is sometimes painful. In this post, I will share a few tips and gotchas in writing Dockerfiles so that you love the tool.
ADD and understanding context in Dockerfile
ADD is the instruction that adds local files to a Docker image. The basic usage is very simple. Suppose you want to add a local file called myfile.txt to /myfile.txt in the image.
Then your Dockerfile looks like this (a sketch; the base image is an assumption):
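```
FROM ubuntu
ADD myfile.txt /myfile.txt
```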
Very simple. However, if you want to add /home/vagrant/myfile.txt, you can't do this:
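```
FROM ubuntu
# fails: the file lives outside the build context
ADD /home/vagrant/myfile.txt /myfile.txt
```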
You get a "no such file or directory" error even though the file exists. Why? Because /home/vagrant/myfile.txt has not been added to the context of the Dockerfile. The context of a Dockerfile is the set of files and directories available to the Dockerfile instructions, and only files and directories in the context can be added during a build. Files and subdirectories under the current directory are added to the context. You can see this when you run the build command.
What's happening here is that the Docker client makes a tarball of the entries under the current directory and sends it to the Docker daemon. This is required because your Docker daemon may be running on a remote machine. That's why the build command says Uploading.
There is a pitfall, though. Since entries under the current directory are added to the context automatically, the build may upload huge files and take a long time even if you never add them to the image.
So the best practice is to place only the files and directories that you need to add to the image under the current directory.
Treat your container like a binary with CMD
By using the CMD instruction in your Dockerfile, your container acts like a single executable binary. Suppose you have these instructions in your Dockerfile (a sketch):
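```
FROM ubuntu
ADD run.sh /usr/local/bin/run.sh
RUN chmod +x /usr/local/bin/run.sh
CMD ["/usr/local/bin/run.sh"]
```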
When you build a container from this Dockerfile and run it with docker run -i run_image, it runs the /usr/local/bin/run.sh script and exits.
If you don't use CMD, you always have to pass the command as an argument: docker run -i run_image /usr/local/bin/run.sh.
This is not just cumbersome, but also considered to be a bad practice from the perspective of operation.
If you have a CMD instruction, the purpose of the container becomes explicit: all the container does is run that command.
But if you don't have the instruction, anybody except the person who made the container needs to rely on external documentation to know how to run the container properly.
So, in general, you should have a CMD instruction in your Dockerfile.
Difference between CMD and ENTRYPOINT
CMD and ENTRYPOINT are confusing.
Every command, whether passed as an argument to docker run or specified in the CMD instruction, is passed as an argument to the binary specified in ENTRYPOINT.
/bin/sh -c is the default entrypoint. So if you specify CMD date without specifying an entrypoint, Docker executes it as /bin/sh -c date.
By using an entrypoint, you can change the behaviour of your container at run time, which makes container operation a bit more flexible.
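A sketch of such an entrypoint (image name and base image are assumptions):

```
FROM ubuntu
ENTRYPOINT ["date"]
```

Running docker run date_image prints the current date, while docker run date_image -u passes -u through to the entrypoint and prints the UTC date.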
With the entrypoint above, the container prints out the current date in a different format depending on the arguments you pass.
exec format error
There is one caveat with the default entrypoint. For example, suppose you want to execute the following shell script.
A sketch of the script, /usr/local/bin/run.sh, deliberately written without a shebang:
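```
echo "hello, world"
```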
And a Dockerfile that runs it (a sketch):
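```
FROM ubuntu
ADD run.sh /usr/local/bin/run.sh
RUN chmod +x /usr/local/bin/run.sh
CMD ["/usr/local/bin/run.sh"]
```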
When you run the container, your expectation is that it prints hello, world. However, what you get is an error message that doesn't make sense: exec format error.
You see this message when you didn't put a shebang in your script; because of that, the default entrypoint /bin/sh -c does not know how to run the script.
To fix this, you can either add a shebang to the script
/usr/local/bin/run.sh
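```
#!/bin/sh
echo "hello, world"
```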
or you can specify the interpreter from the command line (a sketch):
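```
docker run -i run_image /bin/sh /usr/local/bin/run.sh
```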
Build caching: what invalidates the cache and what doesn't?
Docker creates a commit for each instruction in the Dockerfile. As long as you don't change an instruction, Docker assumes the image doesn't need to change, so it uses the cached image as the parent image for the next instruction. This is why docker build takes a long time the first time but finishes almost immediately the second time.
However, when the cache is used and what invalidates it is sometimes not very clear. Here are a few cases that I found worth noting.
Cache invalidation at one instruction invalidates the cache of all subsequent instructions
This is the basic rule of caching: if you cause cache invalidation at one instruction, subsequent instructions don't use the cache. For example (a sketch):
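```
FROM ubuntu
RUN apt-get update                      # newly inserted instruction
RUN apt-get install -y openssh-server   # unchanged, but no longer cached
```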
Since you added the RUN apt-get update instruction, all instructions after it have to be executed from scratch even if they are unchanged. This is inevitable, because Docker uses the image built by the previous instruction as the parent image when executing the next instruction. So, if you insert an instruction that creates a new parent image, all subsequent instructions cannot use the cache, because their parent image now differs.
Cache is invalidated even when adding commands that don't do anything
This invalidates caching. For example (a sketch):
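```
# before
RUN apt-get install -y openssh-server

# after: "true &&" changes nothing at run time,
# but the instruction text differs, so the cache is invalidated
RUN true && apt-get install -y openssh-server
```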
Even though the true command doesn't change anything in the image, Docker invalidates the cache.
Cache is invalidated when you add spaces between the command and its arguments inside an instruction
This invalidates the cache (a sketch):
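```
# before
RUN apt-get install -y openssh-server

# after: extra spaces between the command and its arguments
RUN apt-get     install -y openssh-server
```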
Cache is used when you add spaces around the command inside an instruction
The cache stays valid even if you add spaces around the command (a sketch):
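```
# before
RUN apt-get install -y openssh-server

# after: whitespace around the whole command is normalized,
# so the cached layer is still reused
RUN     apt-get install -y openssh-server
```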
Cache is used for non-idempotent instructions
This is a pitfall of build caching. By non-idempotent instructions, I mean commands that may return a different result each time they are executed. For example, apt-get update is not idempotent, because the content of the updates changes as time goes by. Consider this Dockerfile (a sketch):
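```
FROM ubuntu
RUN apt-get update
RUN apt-get install -y openssh-server
```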
Suppose you made this Dockerfile and created an image. Three months later, Ubuntu has published some security updates to their repository, so you rebuild the image with the same Dockerfile, hoping your new image includes the security updates. However, this doesn't pick up the updates: since no instructions or files changed, Docker uses the cache and skips running apt-get update.
If you don't want to use the cache, just pass the --no-cache option to docker build.
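```
# rebuild everything, ignoring cached layers (image name is a placeholder)
docker build --no-cache -t myimage .
```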
Instructions after ADD are never cached (only versions prior to 0.7.3)
If you use Docker before v0.7.3, watch out!
If you have a Dockerfile like the sketch below, RUN apt-get update and RUN apt-get install openssh-server will never be cached.
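```
FROM ubuntu
ADD rock.you /tmp/rock.you
RUN apt-get update
RUN apt-get install -y openssh-server
```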
The behavior changed in v0.7.3: Docker now caches even if you have an ADD instruction, but invalidates the cache if the file content changes.
Since you changed the rock.you file, instructions after the ADD don't use the cache.
Hack to run a container in the background
If you want to simplify the way you run containers, you should run your container in the background with docker run -d image your-command. Compared with docker run -i -t image your-command, the -d option is recommended because you can start the container with just one command and you don't need to detach the container's terminal by hitting Ctrl-P Ctrl-Q.
However, there is a problem with the -d option: your container immediately stops unless the command keeps running in the foreground.
Let me explain this using the case where you want to run the Apache service in a container. The intuitive way of doing this is (a sketch):
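```
FROM ubuntu
RUN apt-get install -y apache2
CMD ["apachectl", "start"]
```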
However, the container stops immediately after it is started. This is because apachectl exits once it detaches the Apache daemon.
Docker doesn't like this. Docker requires your command to keep running in the foreground; otherwise, it thinks your application has stopped and shuts down the container.
You can solve this by directly running the Apache executable with the foreground option (a sketch; the environment variables mirror what apachectl normally sets for you):
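```
FROM ubuntu
RUN apt-get install -y apache2
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
CMD ["/usr/sbin/apache2", "-D", "FOREGROUND"]
```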
Here we are manually doing what apachectl does for us and running the Apache executable directly. With this approach, Apache keeps running in the foreground.
The problem is that some applications do not run in the foreground. Also, we need to do extra work, such as exporting environment variables ourselves. How can we make it easier?
In this situation, you can add tail -f /dev/null to your command. By doing this, even if your main command runs in the background, the container doesn't stop, because tail keeps running in the foreground. We can use this technique in the Apache case (a sketch):
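```
FROM ubuntu
RUN apt-get install -y apache2
CMD apachectl start && tail -f /dev/null
```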
Much better, right? Since tail -f /dev/null does no harm, you can use this hack with any application.
What is context: . in docker-compose?
I am trying to learn Docker and created my first Docker Compose file. Basically, I am trying to create a Django web microservice and use a Compose file to manage it. Here is the code for my docker-compose.yml:
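A minimal sketch of such a Compose file (service name and port are assumptions):

```
version: "3"
services:
  web:
    build:
      context: .
    ports:
      - "8000:8000"
```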
I don't understand the use of context: . Can someone please explain it to me?
2 Answers
CONTEXT
Either a path to a directory containing a Dockerfile , or a url to a git repository.
When the value supplied is a relative path, it is interpreted as relative to the location of the Compose file. This directory is also the build context that is sent to the Docker daemon.
Compose builds and tags it with a generated name, and uses that image thereafter.
That means it's a path to the directory that contains the Dockerfile. Its purpose is to build a new image for the service.
The Dockerfile should exist in that directory (here, the same folder as the docker-compose.yml file).
context defines either a path to a directory containing a Dockerfile, or a URL to a git repository.
In your case, . is a relative path representing the current directory, where you run the docker-compose command and where Compose can find the Dockerfile (and obviously also the docker-compose.yaml file itself).
The Dockerfile can also be elsewhere using the dockerfile keyword like that in this example:
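A sketch (the docker/ subdirectory is an assumption):

```
services:
  web:
    build:
      context: .
      dockerfile: ./docker/Dockerfile
```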
The context keyword here still tells the build where to find the files the Dockerfile depends on.