To pass environment variables to a Docker container, you have three main methods:
(1) Using the -e flag:
docker run -e MY_VAR1="foo" -e MY_VAR2="fii" my_docker_image
(2) Using a .env file:
docker run --env-file=.env my_docker_image
(3) Mounting a volume:
docker run -v /path/on/host:/path/in/container my_docker_image
Let’s explore these three methods of passing environment variables to a container. To make things concrete, I prepared a short example in the prerequisite section below that we will reuse in the following sections.
Prerequisite
You have the following structure:
.
├── Dockerfile
└── scripts
    └── create_file.sh
The `create_file.sh` bash script contains the following lines:
#!/bin/bash
set -e
cat > dummy.txt <<EOL
here is my first var: $MY_VAR1
and here my second one: $MY_VAR2.
EOL
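Note in passing that the script relies on heredoc expansion: because the EOL delimiter is unquoted, the shell substitutes the variables when the heredoc is written. A quick sketch of the difference (plain shell, no Docker needed):

```shell
# The unquoted EOL delimiter lets the shell expand $MY_VAR1 inside the
# heredoc; quoting the delimiter ('EOL') keeps the text literal.
MY_VAR1="foo"

expanded=$(cat <<EOL
here is my first var: $MY_VAR1
EOL
)

literal=$(cat <<'EOL'
here is my first var: $MY_VAR1
EOL
)

echo "$expanded"   # here is my first var: foo
echo "$literal"    # here is my first var: $MY_VAR1
```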
and the Dockerfile is as follows:
FROM python
WORKDIR /app
COPY . /app
COPY --chmod=755 ./scripts/create_file.sh /app/scripts/create_file.sh
CMD /app/scripts/create_file.sh && cat dummy.txt
Using the -e flag
zsh> docker build -t my_docker_image .
zsh> docker run -e MY_VAR1="foo" -e MY_VAR2="fii" my_docker_image
here is my first var: foo
and here my second one: fii.
Note: the variables are substituted at run time, i.e. in the processes launched by the CMD instruction. If you ran the same script in a RUN instruction (i.e. at build time), the variables would not be substituted. See the example below:
FROM python
WORKDIR /app
COPY . /app
COPY --chmod=755 ./scripts/create_file.sh /app/scripts/create_file.sh
RUN /app/scripts/create_file.sh
CMD cat dummy.txt
Once built, the above image would produce the following output when executed:
zsh> docker build -t my_docker_image .
zsh> docker run -e MY_VAR1="foo" -e MY_VAR2="fii" my_docker_image
here is my first var:
and here my second one: .
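If you genuinely need the values during the build, the Dockerfile mechanism for that is a build argument. Here is a minimal sketch adapting our example (values are supplied with docker build --build-arg; this is an illustration, not the article's original setup):

```dockerfile
FROM python
WORKDIR /app
COPY --chmod=755 ./scripts/create_file.sh /app/scripts/create_file.sh
# Supplied at build time:
# docker build --build-arg MY_VAR1=foo --build-arg MY_VAR2=fii -t my_docker_image .
ARG MY_VAR1
ARG MY_VAR2
# Promote the build arguments to environment variables so the script sees them
ENV MY_VAR1=${MY_VAR1} MY_VAR2=${MY_VAR2}
RUN /app/scripts/create_file.sh
CMD cat dummy.txt
```

Keep in mind that ARG values can be recovered from the image history, so they are not a safe channel for secrets.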
It can however become cumbersome to pass all your variables on the command line, especially when you have many of them. In that case, it is handy to use an environment file instead.
Using a .env file
You can achieve the same result as before. Simply add a .env file at the root of your project containing the following lines:
MY_VAR1="foo"
MY_VAR2="fii"
You should now have the following structure:
.
├── Dockerfile
├── scripts
│   └── create_file.sh
└── .env
Then, simply run:
zsh> docker run --env-file=.env my_docker_image
here is my first var: "foo"
and here my second one: "fii".
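Notice that the quotes are now part of the values. Unlike the shell, docker run --env-file does not parse quoting: each line is split on the first = and the value is taken verbatim, quotes included. A rough shell re-implementation of that literal parsing (for illustration only, this is not Docker's actual code):

```shell
# Build the same .env file used above
printf 'MY_VAR1="foo"\nMY_VAR2="fii"\n' > .env

# Split each line on the first '=' and export the value verbatim,
# mimicking docker's literal handling of --env-file entries
while IFS='=' read -r key value; do
  export "$key=$value"
done < .env

echo "here is my first var: $MY_VAR1"   # here is my first var: "foo"
```

If you do not want the quotes in the container, simply write MY_VAR1=foo (unquoted) in the .env file.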
Note: you almost always want to add the .env file to your .gitignore so that it is never committed.
Mounting volumes
Sometimes you want to share files stored on your host system directly with the container. This can be useful, for instance, when you want to share configuration files with a server that you intend to run in a Docker container. With this method, the container can access directories located on the host.
So, let’s say you have the following structure:
.
├── Dockerfile
├── conf
│   └── dummy.txt
└── scripts
    └── create_file.sh
The content of the text file is as follows:
here is my first var: "foo"
and here my second one: "fii".
And the Dockerfile contains the following lines:
FROM python
WORKDIR /app
COPY . /app
CMD cat /conf/dummy.txt
You can therefore see the outcome:
zsh> docker build -t my_docker_image .
zsh> docker run -v /absolute/path/to/project/conf:/conf my_docker_image
here is my first var: "foo"
and here my second one: "fii".
Should you change the content of the dummy.txt file on the host, the output would also change the next time you run the container, without you needing to rebuild the image:
zsh> docker run -v /absolute/path/to/project/conf:/conf my_docker_image
here is my first var: "fuu"
and here my second one: "faa".
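As a side note, if the container only needs to read the shared files, you can append :ro to the volume mapping to make the mount read-only inside the container. Using $(pwd) is also a convenient way to satisfy the requirement that host paths in -v be absolute:

```shell
docker run -v "$(pwd)/conf:/conf:ro" my_docker_image
```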
Note: a container is a running instance of an image, and multiple containers can be created from the same image. An image is a blueprint, a template for containers, containing all the code, libraries and dependencies.
You should now be ready to go!