How can I include files from outside of Docker's build context using the "ADD" command in the Dockerfile?
From the Docker documentation:
The path must be inside the context of the build; you cannot ADD
../something/something, because the first step of a docker build is to
send the context directory (and subdirectories) to the docker daemon.
I do not want to restructure my whole project just to accommodate Docker in this matter. I want to keep all my Docker files in the same sub-directory.
Also, it appears Docker does not yet (and may not ever) support symlinks: Dockerfile ADD command does not follow symlinks on host #1676.
The only other thing I can think of is to include a pre-build step to copy the files into the Docker build context (and configure my version control to ignore those files). Is there a better workaround than that?
The best way to work around this is to specify the Dockerfile independently of the build context, using -f.
For instance, this command will give the ADD command access to anything in your current directory.
docker build -f docker-files/Dockerfile .
Update: Docker now allows having the Dockerfile outside the build context (fixed in 18.03.0-ce). So you can also do something like
docker build -f ../Dockerfile .
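For example, with a layout like this (paths hypothetical):
project/
  assets/data.txt
  docker-files/Dockerfile
the Dockerfile can contain ADD assets/data.txt /app/data.txt, and you build from the project root with:
cd project
docker build -f docker-files/Dockerfile .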
I often find myself utilizing the --build-arg option for this purpose. For example, after putting the following in the Dockerfile:
ARG SSH_KEY
RUN echo "$SSH_KEY" > /root/.ssh/id_rsa
You can just do:
docker build -t some-app --build-arg SSH_KEY="$(cat ~/file/outside/build/context/id_rsa)" .
But note the following warning from the Docker documentation:
Warning: It is not recommended to use build-time variables for passing secrets like github keys, user credentials etc. Build-time variable values are visible to any user of the image with the docker history command.
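If your Docker supports BuildKit, a safer variant of the same idea is a build secret, which is exposed to a single RUN step and never stored in the image layers (a sketch; the id name ssh_key is arbitrary):
# syntax=docker/dockerfile:1
RUN --mount=type=secret,id=ssh_key,target=/root/.ssh/id_rsa \
    echo "the key is available only during this RUN step"
built with:
DOCKER_BUILDKIT=1 docker build -t some-app --secret id=ssh_key,src=$HOME/.ssh/id_rsa .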
I spent a good amount of time trying to figure out a good pattern and how to best explain what's going on with this feature. I realized that the best way to explain it was as follows...
Dockerfile: will only see files under the context it is given
Context: a place in "space" where the files you want to share and your Dockerfile are copied to
So, with that said, here's an example of the Dockerfile that needs to reuse a file called start.sh
Dockerfile
Paths in it are always resolved relative to the context, which serves as the local reference for the paths you specify.
COPY start.sh /runtime/start.sh
Files
With this idea in mind, we can have multiple Dockerfiles building specific things, all of which need access to start.sh.
./all-services/
  start.sh
  service-X/Dockerfile
  service-Y/Dockerfile
  service-Z/Dockerfile
./docker-compose.yaml
Considering this structure and the files above, here's the docker-compose.yaml:
docker-compose.yaml
In this example, your shared context directory is the all-services directory.
Same mental model here: think of all the files under this directory as moved over to the so-called context.
Similarly, specify which Dockerfile you want to use from that same directory, using the dockerfile key.
The directory where your main content is located is the actual context to be set.
The docker-compose.yaml is as follows:
version: "3.3"
services:
service-A
build:
context: ./all-service
dockerfile: ./service-A/Dockerfile
service-B
build:
context: ./all-service
dockerfile: ./service-B/Dockerfile
service-C
build:
context: ./all-service
dockerfile: ./service-C/Dockerfile
all-services is set as the context; the shared file start.sh is available there, as is the Dockerfile specified by each dockerfile key.
Each service gets built its own way, sharing the start file!
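For instance, service-X/Dockerfile could then be as simple as this sketch (base image hypothetical; start.sh must be executable):
FROM alpine:3
COPY start.sh /runtime/start.sh
CMD ["/runtime/start.sh"]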
On Linux you can bind-mount other directories instead of symlinking them:
mount --bind olddir newdir
See https://superuser.com/questions/842642 for more details.
I don't know if something similar is available for other OSes.
I also tried using Samba to share a folder and remount it into the Docker context which worked as well.
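For the bind-mount approach, a sketch of the full workflow, assuming the extra files live in /data/shared:
mkdir -p ./context/shared
sudo mount --bind /data/shared ./context/shared
docker build -t my-image ./context
sudo umount ./context/shared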
If you read the discussion in issue 2745, not only may Docker never support symlinks, it may never support adding files outside your context. The design philosophy seems to be that files that go into a docker build should explicitly be part of its context, or come from a URL where they are presumably deployed with a fixed version, so that the build is repeatable with well-known URLs or files shipped with the docker container.
I prefer to build from a version controlled source - ie docker build
-t stuff http://my.git.org/repo - otherwise I'm building from some random place with random files.
fundamentally, no.... -- SvenDowideit, Docker Inc
Just my opinion, but I think you should restructure to separate the code and Docker repositories. That way the containers can be generic and pull in any version of the code at run time rather than at build time.
Alternatively, use Docker as your fundamental code deployment artifact and put the Dockerfile in the root of the code repository. If you go this route, it probably makes sense to have a parent Docker container for more general system-level details and a child container for setup specific to your code.
I believe the simpler workaround would be to change the 'context' itself.
So, for example, instead of giving:
docker build -t hello-demo-app .
which sets the current directory as the context. Let's say you wanted the parent directory as the context; just use:
docker build -t hello-demo-app ..
(Note that docker will then look for the Dockerfile in the parent directory unless you also pass -f.)
You can also create a tarball of what the image needs first and use that as your context.
https://docs.docker.com/engine/reference/commandline/build/#/tarball-contexts
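A sketch, assuming the needed files live outside the project; note that the Dockerfile must sit at the root of the tarball:
tar -czf context.tar.gz -C /path/outside/project .
docker build -t my-image - < context.tar.gz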
This behavior is given by the context directory that docker or podman uses to present the files to the build process.
A nice trick here is to change the context dir during the build instruction to the full path of the directory that you want to expose to the daemon.
e.g.:
docker build -t imageName:tag -f /path/to/the/Dockerfile /mysrc/path
Using /mysrc/path instead of . (the current directory), you'll use that directory as the context, so any files under it can be seen by the build process.
In this example you'll be exposing the entire /mysrc/path tree to the docker daemon.
When using this with docker, the user ID that triggered the build must have recursive read permissions to every single directory and file in the context dir.
This can be useful in cases where you have /home/user/myCoolProject/Dockerfile but want to bring files that aren't in the same directory into the container build context.
Here is an example of building using a context dir, but this time using podman instead of docker.
Let's take as an example a Dockerfile with a COPY or ADD instruction that copies files from a directory outside of your project, like:
FROM myImage:tag
...
...
COPY /opt/externalFile ./
ADD /home/user/AnotherProject/anotherExternalFile ./
...
In order to build this, with a container file located at /home/user/myCoolProject/Dockerfile, just do something like:
cd /home/user/myCoolProject
podman build -t imageName:tag -f Dockerfile /
A known use case for changing the context dir is using a container as a toolchain for building your source code.
e.g.:
podman build --platform linux/s390x -t myimage:mytag -f ./Dockerfile /tmp/mysrc
or it can be a relative path, like:
podman build --platform linux/s390x -t myimage:mytag -f ./Dockerfile ../../
Another example, this time with the global paths moved out of the Dockerfile:
FROM myImage:tag
...
...
COPY externalFile ./
ADD AnotherProject ./
...
Notice that now the full global paths for COPY and ADD are omitted from the Dockerfile command layers.
In this case the context dir must be the one where the files are: if both externalFile and AnotherProject are in the /opt directory, then the context dir for building it must be:
podman build -t imageName:tag -f ./Dockerfile /opt
Note when using COPY or ADD with a context dir in docker:
The docker client will "stream" all the files visible in the context dir tree to the daemon, which can slow down the build, and it requires the user to have recursive read permission over the context dir.
This can be especially costly when driving the build through the API. With podman, by contrast, the build starts immediately and doesn't need recursive permissions up front, because podman does not enumerate the entire context dir and doesn't use a client/server architecture.
For such cases it can be much more practical to use podman instead of docker when you run into these issues with a different context dir.
Some references:
https://docs.docker.com/engine/reference/commandline/build/
https://docs.podman.io/en/latest/markdown/podman-build.1.html
As described in this GitHub issue, the build actually happens in /tmp/docker-12345, so a relative path like ../relative-add/some-file is relative to /tmp/docker-12345. It would thus search for /tmp/relative-add/some-file, which is also shown in the error message.
It is not allowed to include files from outside the build directory, so this results in the "Forbidden path" message.
Using docker-compose, I accomplished this by creating a service that mounts the volumes that I need and committing the image of the container. Then, in the subsequent service, I rely on the previously committed image, which has all of the data stored at the mounted locations. You will then have to copy these files to their ultimate destination, as host-mounted directories do not get committed when running a docker commit command.
You don't have to use docker-compose to accomplish this, but it makes life a bit easier.
# docker-compose.yml
version: '3'
services:
  stage:
    image: alpine
    volumes:
      - /host/machine/path:/tmp/container/path
    # alpine has no bash by default, so use sh
    command: sh -c "cp -r /tmp/container/path /final/container/path"
  setup:
    image: stage
# setup.sh
# Start "stage" service
docker-compose up stage
# Commit changes to an image named "stage"
docker commit $(docker-compose ps -q stage) stage
# Start setup service off of stage image
docker-compose up setup
Create a wrapper docker build shell script that grabs the file, then calls docker build, then removes the file.
A simple solution not mentioned anywhere here, from my quick skim:
Have a wrapper script called docker_build.sh.
Have it create tarballs and copy large files into the current working directory.
Call docker build.
Clean up the tarballs, large files, etc.
This solution is good because (1) it doesn't have the security hole of copying in your SSH private key, and (2) it doesn't need a sudo bind mount, which is another security hole because it requires root permission to do the bind.
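A minimal sketch of such a wrapper (paths hypothetical):
#!/bin/sh
set -e
# stage the outside file into the build context
cp /somewhere/outside/big-file ./big-file
docker build -t my-image .
# clean up so the copy is never committed to version control
rm ./big-file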
I think a feature was added to buildx earlier this year to do just this.
If you have Dockerfile syntax 1.4+ and buildx 0.8+, you can do something like this:
docker buildx build --build-context othersource=../something/something .
Then in your Dockerfile you can use the --from flag to reference that named context:
COPY --from=othersource . /stuff
See this related post: https://www.docker.com/blog/dockerfiles-now-support-multiple-build-contexts/
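Putting it together, a minimal sketch (base image hypothetical; the syntax directive is what enables the 1.4 features):
# syntax=docker/dockerfile:1.4
FROM alpine:3
COPY --from=othersource . /stuff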
Workaround with hard links (ln without -s creates a hard link which, unlike a symlink, is part of the build context; note it only works on the same filesystem):
ln path/to/file/outside/context/file_to_copy ./file_to_copy
In the Dockerfile, simply:
COPY file_to_copy /path/to/file
I was personally confused by some answers, so I decided to explain it simply.
When you want to create an image, you should pass docker the context that your Dockerfile assumes.
I always select the project root as the context.
So, for example, if you use a COPY command like COPY . .,
the first dot (.) is the source path relative to the context, and the second dot (.) is the container's working directory.
Assuming the context is the project root, dot (.), and the code structure is like this:
sample-project/
  docker/
    Dockerfile
If you want to build image
and your path (the path where you run the docker build command) is /full-path/sample-project/,
you should do this
docker build -f docker/Dockerfile .
and if your path is /full-path/sample-project/docker/,
you should do this
docker build -f Dockerfile ../
An easy workaround might be to simply mount the volume (using the -v or --mount flag) to the container when you run it and access the files that way.
example:
docker run -v /path/to/file/on/host:/desired/path/to/file/in/container/ image_name
for more see: https://docs.docker.com/storage/volumes/
I had this same issue with a project and some data files that I wasn't able to move inside the repo context for HIPAA reasons. I ended up using two Dockerfiles. One builds the main application without the external data and publishes that image to an internal repo. A second Dockerfile then pulls that image, adds the data, and creates a new image, which is then deployed and never stored anywhere. Not ideal, but it worked for my purposes of keeping sensitive information out of the repo.
In my case, my Dockerfile is written like a template containing placeholders, which I replace with real values using my configuration file.
So I couldn't specify this file directly, but pipe it into the docker build like this:
sed "s/%email_address%/$EMAIL_ADDRESS/;" ./Dockerfile | docker build -t katzda/bookings:latest -f - .
Piping alone breaks COPY: if you run docker build - without -f, the Dockerfile comes from stdin and there is no build context at all, so COPY has nothing to copy from. The form above solves this with -f - (read the Dockerfile from stdin) while still passing . as the context.
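A sketch of the difference:
# Context AND Dockerfile from stdin: COPY has nothing to copy from
docker build -t katzda/bookings:latest - < Dockerfile
# Dockerfile from stdin, context from the current directory: COPY works
docker build -t katzda/bookings:latest -f - . < Dockerfile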
How to share TypeScript code between two Dockerfiles
I had this same problem, but for sharing files between two TypeScript projects. Some of the other answers didn't work for me because I needed to preserve the relative import paths between the shared code. I solved it by organizing my code like this:
api/
  Dockerfile
  src/
    models/
      index.ts
frontend/
  Dockerfile
  src/
    models/
      index.ts
shared/
  model1.ts
  model2.ts
  index.ts
.dockerignore
Note: After extracting the shared code into that top-level folder, I avoided needing to update the import paths, because I updated api/src/models/index.ts and frontend/src/models/index.ts to re-export from shared (e.g. export * from '../../../shared').
Since the build context is now one directory higher, I had to make a few additional changes:
Update the build command to use the new context:
docker build -f Dockerfile .. (two dots instead of one)
Use a single .dockerignore at the top level to exclude all node_modules. (eg **/node_modules/**)
Prefix the Dockerfile COPY commands with api/ or frontend/
Copy shared (in addition to api/src or frontend/src)
WORKDIR /usr/src/app
COPY api/package*.json ./ <---- Prefix with api/
RUN npm ci
COPY api/src api/ts*.json ./ <---- Prefix with api/
COPY shared /usr/src/shared <---- ADDED
RUN npm run build
This was the easiest way I could send everything to docker, while preserving the relative import paths in both projects. The tricky (annoying) part was all the changes/consequences caused by the build context being up one directory.
One quick and dirty way is to set the build context up as many levels as you need - but this can have consequences.
If you're working in a microservices architecture that looks like this:
./Code/Repo1
./Code/Repo2
...
You can set the build context to the parent Code directory and then access everything, but it turns out that with a large number of repositories, this can result in the build taking a long time.
An example situation could be that another team maintains a database schema in Repo1, and your team's code in Repo2 depends on it. You want to dockerise this dependency with some of your own seed data, without worrying about schema changes or polluting the other team's repository (depending on what the changes are, you may still have to change your seed-data scripts, of course).
The second approach is hacky but gets around the issue of long builds:
Create a sh (or ps1) script in ./Code/Repo2 to copy the files you need and invoke the docker commands you want, for example:
#!/bin/bash
# remove any stale copy, then copy the schema fresh
# (cp -r into an existing directory would nest it as ./db/schema/schema)
rm -rf ./db/schema
cp -r ../Repo1/db/schema ./db/schema
docker-compose -f docker-compose.yml down
docker container prune -f
docker-compose -f docker-compose.yml up --build
In the docker-compose file, simply set the context as the Repo2 root and use the content of the ./db/schema directory in your Dockerfile without worrying about the path.
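A sketch of the corresponding compose service (service name hypothetical):
services:
  db:
    build:
      context: .
      dockerfile: ./Dockerfile
The Dockerfile can then reference db/schema with a context-relative COPY.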
Bear in mind that you will run the risk of accidentally committing this directory to source control, but scripting cleanup actions should be easy enough.
I am working on VM instances from the Google Cloud Platform and I am using Docker for the first time, so please bear with me. I am trying to follow steps to build a container because it is supposed to be a certain way for a project. I am stuck here:
Create the directory named ~/keto (~/ refers to your home directory)
Create a file ~/keto/Dockerfile
Add the following content to ~/keto/Dockerfile and save
#Pull the keto/ssh image from Docker hub
FROM keto/ssh:latest
# Create a user and password with environment variables
ENV SSH_USERNAME spock
ENV SSH_PASSWORD Vulcan
#Copy a ssh public key from ~/keto/id_rsa.pub to spock .ssh/authorized_keys
COPY ./id_rsa.pub /home/spock/.ssh/authorized_keys
I was able to pull the keto/ssh image from Docker Hub
with no issues, but my problem is that I am unable to create the directory, and I am also stuck when it comes to creating the environment variables. Can anyone guide me on the correct approach to:
A) build a directory, and B) create the environment variables once I am done with the directory? I would really appreciate it a lot. Thank you.
# Pull the keto/ssh image from Docker Hub
FROM keto/ssh:latest
# Create a user and password with environment variables
ENV SSH_USERNAME=spock
ENV SSH_PASSWORD=Vulcan
# Create keto directory:
RUN mkdir ~/keto
# Copy an ssh public key from ~/keto/id_rsa.pub to spock's .ssh/authorized_keys
ADD ./id_rsa.pub /home/spock/.ssh/authorized_keys
You may find useful Docker's official documentation on how to create a Dockerfile and on how the ENV instruction has to be set.
I recommend always having a look at the image's Docker Hub page, in this case keto/ssh's, because it usually contains some guidance about the image we are going to build.
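To build it (tag name hypothetical), run the build from ~/keto so that id_rsa.pub is inside the context:
cd ~/keto
docker build -t keto-ssh .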
I am using AWS CodePipeline.
I have 2 CodeCommit repos, say source1 and source2.
I am using CodePipeline for CI/CD.
The codepipeline that I have created is using both CodeCommit repos, i.e. source1 and source2, in the codepipeline's source stage.
Now CodeBuild is also using both input sources, i.e. source1 and source2, in its input artifacts.
source1 is the primary and source2 the secondary input artifact.
I have a buildspec.yml file which uses the Dockerfile stored in the root directory of source1 to build the code.
Now the issue is, the Dockerfile is not able to copy source2's code into the container,
i.e.
say source1 has folder abc and source2 has folder xyz.
I am doing the below in the Dockerfile:
COPY ./abc /source1/abc/ <---- working
COPY ./xyz /source2/xyz/ <---- not working, getting the below error:
COPY failed: stat /var/lib/docker/tmp/docker-builder297252497/xyz: no such file or directory.
Then I tried the below in the Dockerfile:
COPY ./abc /source1/abc/ <---- working
COPY $CODEBUILD_SRC_DIR_source2/xyz /source2/xyz/ <---- not working, getting the same error
I also tried to cd into $CODEBUILD_SRC_DIR_source2 and then run the COPY command, but got the same error.
Afterwards, I tried printing PWD, CODEBUILD_SRC_DIR, and CODEBUILD_SRC_DIR_source2 in both the yaml file and the Dockerfile.
It yields the below output:
in the yaml file
echo $CODEBUILD_SRC_DIR --> /codebuild/output/src886/src/s3/00
echo $CODEBUILD_SRC_DIR_source2 --> /codebuild/output/src886/src/s3/01
echo $PWD --> /codebuild/output/src886/src/s3/00
in the Dockerfile
echo $CODEBUILD_SRC_DIR --> prints nothing
echo $CODEBUILD_SRC_DIR_source2 --> prints nothing
echo $PWD --> prints '/'
It seems like the Dockerfile doesn't have access to the CODEBUILD_SRC_DIR and CODEBUILD_SRC_DIR_source2 env variables.
Does anyone have any idea how I can access CODEBUILD_SRC_DIR_source2, or source2, in the Dockerfile, so that I can copy its files into the container and make the CodeBuild succeed?
Thanks in advance!
Adding an answer for anyone else who is facing the same issue.
Hope this will help someone!
The issue was with the build context passed to docker.
When there is only one repo as the input source, codebuild uses this directory as the pwd to build from --> CODEBUILD_SRC_DIR=/codebuild/output/src894561443/src
The source of the first repo (in the case of only one repo) is present in that same directory, i.e. CODEBUILD_SRC_DIR=/codebuild/output/src894561443/src,
and in the buildspec.yml file we had the following command to build the image:
docker build -t tag . (uses the Dockerfile present in the root directory of the first source)
But when we have multiple sources, codebuild stores the input artifacts like this:
CODEBUILD_SRC_DIR=/codebuild/output/src886/src/s3/00
CODEBUILD_SRC_DIR_source2=/codebuild/output/src886/src/s3/01
instead of CODEBUILD_SRC_DIR=/codebuild/output/src886/src/,
where CODEBUILD_SRC_DIR is the first input artifact (1st CodeCommit repo)
and CODEBUILD_SRC_DIR_source2 is the second input artifact (2nd CodeCommit repo).
In this case codebuild was using the directory CODEBUILD_SRC_DIR=/codebuild/output/src886/src/s3/00 as the pwd,
so in the below command the context was passed as dot '.' (the pwd):
docker build -t tag .
As a result only the first source was passed to docker, since CODEBUILD_SRC_DIR was the pwd, and docker failed to refer to the second source.
To fix this we passed the parent directory of CODEBUILD_SRC_DIR=/codebuild/output/src886/src/s3/00, i.e. /codebuild/output/src886/src/s3/,
in the docker build command, like this:
docker build -t tag -f $CODEBUILD_SRC_DIR/Dockerfile /codebuild/output/src886/src/s3/
and in the Dockerfile referred to source1 and source2 as below:
source1=./00
source2=./01
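So the COPY instructions became context-relative, e.g. (a sketch based on the layout above):
COPY ./00/abc /source1/abc/
COPY ./01/xyz /source2/xyz/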
and it worked!
I am trying to copy Java binaries from an already existing image over to a new image using a multistage Dockerfile.
After the image is built, I do see all the files in the new image, but when I execute java, it gives me "no such file or directory".
FROM quay.io/<private-repo>/node:12.8.0-slim
COPY --from=quay.io/<private-repo>/openjdk:8u212-jre-alpine /usr/lib/jvm/java-1.8-openjdk/ /usr/lib/jvm/java-8-openjdk-amd64/
# Setup JAVA_HOME, this is useful for docker commandline
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64
RUN export JAVA_HOME
ENV PATH $PATH:$JAVA_HOME/bin
RUN export JAVA_HOME will export the variable only for that specific RUN instruction's shell. If you log in using docker exec and check the JAVA_HOME value, it will not exist.
Similarly, ENV PATH $PATH:$JAVA_HOME/bin will only apply to that run instance.
If you want to have these variables available across multiple sessions, append the entries to /etc/profile.
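A sketch of appending those entries from within the Dockerfile (values taken from the question; note /etc/profile is only read by login shells, e.g. docker exec -it <container> sh -l):
RUN echo "export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64" >> /etc/profile && \
    echo "export PATH=\$PATH:\$JAVA_HOME/bin" >> /etc/profile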
I am using webfaction for my web deployment.
I have a Django app at: webapps/django_app/project_name/
I have a Git repo at: webapps/git_app/repos/my_repo.git
my_repo.git is a bare repository. It is not a working directory.
Whenever I push from my local development computer to the remote (webfaction --> my_repo.git), I want my django_app to get the pushed code.
I followed this post which works fine. But no explanation of how this works is given.
I have added these two lines in the post-receive hook in my_repo.git:
#!/bin/sh
GIT_WORK_TREE=/home/username/webapps/django/myproject git checkout -f
GIT_WORK_TREE=/home/username/webapps/django/myproject git reset --hard
What do these two lines actually do?
Moreover, my Django app folder is not a git repo; still, whenever a push is made to my_repo.git, the Django app gets updated. So how does this work?
When you are managing files locally with .git, you typically have two things:
Your git repository, which is contained in the .git directory, and
Your work tree, which is the set of files you are actually editing.
By default, the repository is a subdirectory of the work tree, but this is not a requirement. Setting the GIT_WORK_TREE environment variable directs git to use a different location for your checked-out files.
So the first line...
GIT_WORK_TREE=/home/username/webapps/django/myproject git checkout -f
...is asking git to check out the HEAD of the repository into /home/username/webapps/django/myproject.
The second line...
GIT_WORK_TREE=/home/username/webapps/django/myproject git reset --hard
...makes sure that /home/username/webapps/django/myproject does not have any local changes. reset --hard discards any changes to files that are tracked by git. By "local changes" I mean any changes that you or someone else has made to files in this directory; ideally, there won't be any, but if there were some, reset --hard makes sure that the modified files are overwritten with the version of the file stored in the repository.
For more details on any of the commands listed here, try running git <command> --help for the man page, or see The Git Book.