Why does google cloud build run differently for these two commands? - google-cloud-platform

We run these two commands (the first one is async and the other runs synchronously)
#async BUT does something funky and doesn't run the Dockerfile image as-is
gcloud alpha builds triggers run staging-deploy --branch master
# sync BUT runs the image the way it's supposed to run!!!
gcloud builds submit --config cloudbuild.yaml
Both use our cloudbuild.yaml:
steps:
- name: gcr.io/$PROJECT_ID/continuous-deploy
  args: ['${_SERVICE}', '${_DOWNLOAD_URL}']
  timeout: 1000s
substitutions:
  _SERVICE: none
  _DOWNLOAD_URL: none
timeout: 1100s
Our Dockerfile is very, very simple:
FROM gcr.io/google.com/cloudsdktool/cloud-sdk:alpine
RUN mkdir -p ./monobuild
COPY . ./monobuild/
WORKDIR "/monobuild"
#NOTE: This file in google cloud build trigger MUST be in root of monorepo BUT I don't know why
#NOTE: This command receives any arguments to docker
#ie. for "docker run {image} {args}", it receives the args
ENTRYPOINT ["./downloadAndExtract.sh"]
So, when I run the SECOND command, it uses the Docker image exactly as the Dockerfile defines. When I run the first command, it ignores all my Dockerfile setup and tries to run scripts from my git repo (which is very frustrating and not what I want).
We HAD this directory structure:
- gitroot
  - stagingDeploy
    - Dockerfile
    - deployStaging.sh  # part of Dockerfile
    - cloudbuild.yaml
  - prodDeploy
    - Dockerfile
    - prodDeploy.sh  # part of Dockerfile
    - cloudbuild.yaml
Of course, only the second command works with this directory structure. The first command CANNOT find deployStaging.sh until we ln -s stagingDeploy/deployStaging.sh from our git repo root, and since we have around five deploy directories, our git repo root is now fully polluted.
It is, to say the least, very frustrating, and we are not sure how to clean this up so that prodDeploy contains all the prod deploy scripts, stagingDeploy the staging ones, and the root is free of these files.
We now have a cluttered git repo root with a whole slew of files from various builds (sometimes accidentally conflicting, since files occasionally end up with the same names).
EDIT: There is not really much to share on the trigger configuration, as each trigger just points to its cloudbuild.yaml file.
thanks,
Dean

Related

How to use Dockerfile COPY command to copy files from parent directories [duplicate]

How can I include files from outside of Docker's build context using the "ADD" command in the Docker file?
From the Docker documentation:
The path must be inside the context of the build; you cannot ADD ../something/something, because the first step of a docker build is to send the context directory (and subdirectories) to the docker daemon.
I do not want to restructure my whole project just to accommodate Docker in this matter. I want to keep all my Docker files in the same sub-directory.
Also, it appears Docker does not yet (and may not ever) support symlinks: Dockerfile ADD command does not follow symlinks on host #1676.
The only other thing I can think of is to include a pre-build step to copy the files into the Docker build context (and configure my version control to ignore those files). Is there a better workaround for than that?
The best way to work around this is to specify the Dockerfile independently of the build context, using -f.
For instance, this command will give the ADD command access to anything in your current directory.
docker build -f docker-files/Dockerfile .
Update: Docker now allows having the Dockerfile outside the build context (fixed in 18.03.0-ce). So you can also do something like
docker build -f ../Dockerfile .
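For example, with a layout like the sketch below (config.yml is a hypothetical file), the COPY source path is resolved against the directory you pass as the context (here, the current directory), not against the Dockerfile's own folder:
# docker-files/Dockerfile
FROM alpine:3.18
COPY config.yml /etc/myapp/config.yml

# run from the project root so config.yml is inside the context
docker build -f docker-files/Dockerfile -t myapp .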
I often find myself utilizing the --build-arg option for this purpose. For example after putting the following in the Dockerfile:
ARG SSH_KEY
RUN echo "$SSH_KEY" > /root/.ssh/id_rsa
You can just do:
docker build -t some-app --build-arg SSH_KEY="$(cat ~/file/outside/build/context/id_rsa)" .
But note the following warning from the Docker documentation:
Warning: It is not recommended to use build-time variables for passing secrets like github keys, user credentials etc. Build-time variable values are visible to any user of the image with the docker history command.
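If the value really is a secret, a safer alternative (assuming you can use BuildKit; the id, paths and image name below are placeholders) is a build secret mount, which exposes the file only for the duration of a single RUN step and never stores it in an image layer or in docker history:
# syntax=docker/dockerfile:1
FROM alpine:3.18
# the key is only visible while this RUN step executes; nothing is written into a layer
RUN --mount=type=secret,id=ssh_key,target=/root/.ssh/id_rsa \
    ls -l /root/.ssh/id_rsa   # placeholder: use the key here (git clone, curl, etc.)

# pass the secret at build time:
#   docker build --secret id=ssh_key,src=$HOME/.ssh/id_rsa -t some-app .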
I spent a good amount of time trying to figure out a good pattern and how to better explain what's going on with this feature. I realized that the best way to explain it was as follows...
Dockerfile: Will only see files under its own relative path
Context: a place in "space" where the files you want to share and your Dockerfile will be copied to
So, with that said, here's an example of the Dockerfile that needs to reuse a file called start.sh
Dockerfile
Its COPY/ADD paths always resolve relative to the build context, which acts as the local reference for the paths you specify.
COPY start.sh /runtime/start.sh
Files
Considering this idea, we can have multiple Dockerfiles building specific things, but they all need access to start.sh.
./all-services/
  start.sh
  service-X/Dockerfile
  service-Y/Dockerfile
  service-Z/Dockerfile
./docker-compose.yaml
Considering this structure and the files above, here's a docker-compose.yaml.
docker-compose.yaml
In this example, your shared context directory is the all-services directory.
Same mental model here: think of all the files under this directory as being moved over to the so-called context.
Similarly, specify the Dockerfile you want built relative to that same directory; you can do that using the dockerfile key.
The directory where your main content is located is the actual context to be set.
The docker-compose.yaml is as follows:
version: "3.3"
services:
  service-X:
    build:
      context: ./all-services
      dockerfile: ./service-X/Dockerfile
  service-Y:
    build:
      context: ./all-services
      dockerfile: ./service-Y/Dockerfile
  service-Z:
    build:
      context: ./all-services
      dockerfile: ./service-Z/Dockerfile
all-services is set as the context; the shared file start.sh is copied there, as is the Dockerfile specified by each dockerfile key.
Each service gets built its own way, sharing the start file!
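For completeness, a minimal sketch of what ./service-X/Dockerfile could look like under this setup (the base image and in-container paths are placeholders); note that the COPY sources resolve against the shared ./all-services context, not against service-X/:
FROM alpine:3.18
# start.sh sits at the root of the shared context (./all-services)
COPY start.sh /runtime/start.sh
# service-specific files can still be copied using their sub-path
COPY service-X/ /app/
CMD ["/runtime/start.sh"]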
On Linux you can mount other directories instead of symlinking them
mount --bind olddir newdir
See https://superuser.com/questions/842642 for more details.
I don't know if something similar is available for other OSes.
I also tried using Samba to share a folder and remount it into the Docker context which worked as well.
If you read the discussion in issue 2745, not only may Docker never support symlinks, it may never support adding files from outside your context. It seems to be a design philosophy that files that go into a docker build should explicitly be part of its context, or come from a URL where they are presumably deployed with a fixed version, so that the build is repeatable with well-known URLs or files shipped with the docker container.
I prefer to build from a version controlled source - ie docker build -t stuff http://my.git.org/repo - otherwise I'm building from some random place with random files.
fundamentally, no.... -- SvenDowideit, Docker Inc
Just my opinion, but I think you should restructure to separate the code and Docker repositories. That way the containers can be generic and pull in any version of the code at run time rather than build time.
Alternatively, use Docker as your fundamental code deployment artifact and put the Dockerfile in the root of the code repository. If you go this route, it probably makes sense to have a parent Docker container for more general system-level details and a child container for setup specific to your code.
I believe the simpler workaround would be to change the 'context' itself.
So, for example, instead of giving:
docker build -t hello-demo-app .
which sets the current directory as the context, let's say you wanted the parent directory as the context, just use:
docker build -t hello-demo-app ..
You can also create a tarball of what the image needs first and use that as your context.
https://docs.docker.com/engine/reference/commandline/build/#/tarball-contexts
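A minimal sketch of the tarball approach (file names are placeholders): pack whatever the image needs, from wherever it lives, and feed the archive to docker build on stdin; the root of the tar becomes the root of the build context:
# the Dockerfile must sit at the top of the archive
tar -czf context.tar.gz -C /path/to/project Dockerfile src -C /path/elsewhere shared.conf
docker build -t hello-demo-app - < context.tar.gz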
This behavior is determined by the context directory that docker or podman uses to present the files to the build process.
A nice trick here is to change the context dir in the build instruction to the full path of the directory that you want to expose to the daemon.
e.g:
docker build -t imageName:tag -f /path/to/the/Dockerfile /mysrc/path
By using /mysrc/path instead of . (the current directory), you'll be using that directory as the context, so any files under it can be seen by the build process.
In this example you'll be exposing the entire /mysrc/path tree to the docker daemon.
When using this with docker, the user who triggered the build must have recursive read permissions on every directory and file in the context dir.
This can be useful in cases where you have /home/user/myCoolProject/Dockerfile but want to bring into this container build context files that aren't in the same directory.
Here is an example of building using a context dir, but this time using podman instead of docker.
Let's take as an example having inside your Dockerfile a COPY or ADD instruction which is copying files from a directory outside of your project, like:
FROM myImage:tag
...
...
COPY /opt/externalFile ./
ADD /home/user/AnotherProject/anotherExternalFile ./
...
In order to build this, with a container file located at /home/user/myCoolProject/Dockerfile, just do something like:
cd /home/user/myCoolProject
podman build -t imageName:tag -f Dockerfile /
Some known use cases for changing the context dir are when using a container as a toolchain for building your source code.
e.g:
podman build --platform linux/s390x -t myimage:mytag -f ./Dockerfile /tmp/mysrc
or it can be a relative path, like:
podman build --platform linux/s390x -t myimage:mytag -f ./Dockerfile ../../
Another example, this time using a global path:
FROM myImage:tag
...
...
COPY externalFile ./
ADD AnotherProject ./
...
Notice that now the full global path for the COPY and ADD is omitted in the Dockerfile command layers.
In this case the context dir must be relative to where the files are; if both externalFile and AnotherProject are in the /opt directory, then the context dir for building it must be:
podman build -t imageName:tag -f ./Dockerfile /opt
Note when using COPY or ADD with a context dir in docker:
The docker daemon will try to "stream" all the files visible in the context dir tree to the daemon, which can slow down the build, and it requires the user to have recursive read permissions on the context dir.
This behavior can be more costly, especially when using the build through the API. With podman, however, the build happens instantaneously and without needing recursive permissions, because podman does not enumerate the entire context dir and doesn't use a client/server architecture.
For such cases it can be far more interesting to use podman instead of docker when you face these issues with a different context dir.
Some references:
https://docs.docker.com/engine/reference/commandline/build/
https://docs.podman.io/en/latest/markdown/podman-build.1.html
As described in this GitHub issue, the build actually happens in /tmp/docker-12345, so a relative path like ../relative-add/some-file is relative to /tmp/docker-12345. It would thus search for /tmp/relative-add/some-file, which is also shown in the error message.
It is not allowed to include files from outside the build directory, so this results in the "Forbidden path" message.
Using docker-compose, I accomplished this by creating a service that mounts the volumes that I need and committing the image of the container. Then, in the subsequent service, I rely on the previously committed image, which has all of the data stored at the mounted locations. You will then have to copy these files to their ultimate destination, as host-mounted directories do not get committed when running a docker commit command.
You don't have to use docker-compose to accomplish this, but it makes life a bit easier
# docker-compose.yml
version: '3'
services:
  stage:
    image: alpine
    volumes:
      - /host/machine/path:/tmp/container/path
    # alpine does not ship bash, so use sh here
    command: sh -c "cp -r /tmp/container/path /final/container/path"
  setup:
    image: stage
# setup.sh
# Start "stage" service
docker-compose up stage
# Commit changes to an image named "stage"
docker commit $(docker-compose ps -q stage) stage
# Start setup service off of stage image
docker-compose up setup
Create a wrapper docker build shell script that grabs the file then calls docker build then removes the file.
A simple solution not mentioned anywhere here, from my quick skim:
- have a wrapper script called docker_build.sh
- have it create tarballs, copy large files to the current working directory
- call docker build
- clean up the tarballs, large files, etc.
A sketch of such a wrapper is below. This solution is good because (1) it doesn't have the security hole of copying in your SSH private key, and (2) it avoids the sudo bind solution, which has its own security hole because it requires root permission to do the bind.
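A minimal sketch of such a wrapper (paths and the image name are placeholders):
#!/bin/bash
set -e
# temporarily copy the out-of-context file into the build context
cp ../shared-assets/large-model.bin ./large-model.bin
trap 'rm -f ./large-model.bin' EXIT   # clean up even if the build fails
# the Dockerfile can now COPY large-model.bin like any other file
docker build -t my-image:latest .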
I think as of earlier this year a feature was added in buildx to do just this.
If you have Dockerfile 1.4+ and buildx 0.8+ you can do something like this:
docker buildx build --build-context othersource=../something/something .
Then in your Dockerfile you can use the --from flag to copy from that named context:
COPY --from=othersource . /stuff
See this related post https://www.docker.com/blog/dockerfiles-now-support-multiple-build-contexts/
Workaround with links:
ln path/to/file/outside/context/file_to_copy ./file_to_copy
In the Dockerfile, simply:
COPY file_to_copy /path/to/file
I was personally confused by some answers, so I decided to explain it simply.
You should pass the context you assumed in the Dockerfile to docker when you want to create the image.
I always select the root of the project as the context in the Dockerfile.
So, for example, if you use a COPY command like COPY . ., the first dot (.) is resolved against the context and the second dot (.) is the container working directory.
Assuming the context is the project root, dot (.), and the code structure is like this:
sample-project/
  docker/
    Dockerfile
If you want to build the image and your path (the path you run the docker build command from) is /full-path/sample-project/, you should do this:
docker build -f docker/Dockerfile .
and if your path is /full-path/sample-project/docker/, you should do this:
docker build -f Dockerfile ../
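Either way, the COPY source paths inside docker/Dockerfile resolve against the project root you passed as the context, not against the docker/ folder; a minimal sketch (requirements.txt is a hypothetical file at the project root):
FROM python:3
WORKDIR /app
# requirements.txt sits at sample-project/, i.e. at the root of the build context
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .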
An easy workaround might be to simply mount the volume (using the -v or --mount flag) to the container when you run it and access the files that way.
example:
docker run -v /path/to/file/on/host:/desired/path/to/file/in/container/ image_name
for more see: https://docs.docker.com/storage/volumes/
I had this same issue with a project and some data files that I wasn't able to move inside the repo context for HIPAA reasons. I ended up using two Dockerfiles. One builds the main application without the stuff I needed outside the container and publishes that to an internal repo. Then a second Dockerfile pulls that image, adds the data, and creates a new image which is then deployed and never stored anywhere. Not ideal, but it worked for my purposes of keeping sensitive information out of the repo.
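Roughly, that two-Dockerfile setup looks like this sketch (image names, registry and data path are placeholders):
# Dockerfile.app -- built from the code repo, pushed to the internal registry
FROM python:3
COPY . /app
# ... install/build the application; no sensitive data here ...

# Dockerfile.data -- built where the sensitive files live; deployed directly, never pushed to a shared registry
FROM internal-registry.example.com/team/app:latest
COPY data/ /app/data/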
In my case, my Dockerfile is written like a template containing placeholders which I replace with real values using my configuration file.
So I couldn't specify this file directly, but could pipe it into docker build like this:
sed "s/%email_address%/$EMAIL_ADDRESS/;" ./Dockerfile | docker build -t katzda/bookings:latest . -f -;
Normally the COPY command breaks when you pipe the Dockerfile, because running docker build - alone provides neither a build context nor a Dockerfile path. The invocation above solves that by keeping . as the context and passing -f - to read the Dockerfile from stdin, so COPY still works.
How to share typescript code between two Dockerfiles
I had this same problem, but for sharing files between two typescript projects. Some of the other answers didn't work for me because I needed to preserve the relative import paths between the shared code. I solved it by organizing my code like this:
api/
  Dockerfile
  src/
    models/
      index.ts
frontend/
  Dockerfile
  src/
    models/
      index.ts
shared/
  model1.ts
  model2.ts
  index.ts
.dockerignore
Note: After extracting the shared code into that top folder, I avoided needing to update the import paths because I updated api/src/models/index.ts and frontend/src/models/index.ts to export from shared (e.g. export * from '../../../shared').
Since the build context is now one directory higher, I had to make a few additional changes:
- Update the build command to use the new context: docker build -f Dockerfile .. (two dots instead of one)
- Use a single .dockerignore at the top level to exclude all node_modules (e.g. **/node_modules/**)
- Prefix the Dockerfile COPY commands with api/ or frontend/
- Copy shared (in addition to api/src or frontend/src)
WORKDIR /usr/src/app
COPY api/package*.json ./ <---- Prefix with api/
RUN npm ci
COPY api/src api/ts*.json ./ <---- Prefix with api/
COPY shared usr/src/shared <---- ADDED
RUN npm run build
This was the easiest way I could send everything to docker, while preserving the relative import paths in both projects. The tricky (annoying) part was all the changes/consequences caused by the build context being up one directory.
One quick and dirty way is to set the build context up as many levels as you need - but this can have consequences.
If you're working in a microservices architecture that looks like this:
./Code/Repo1
./Code/Repo2
...
You can set the build context to the parent Code directory and then access everything, but it turns out that with a large number of repositories, this can result in the build taking a long time.
An example situation could be that another team maintains a database schema in Repo1 and your team's code in Repo2 depends on this. You want to dockerise this dependency with some of your own seed data without worrying about schema changes or polluting the other team's repository (depending on what the changes are, you may still have to change your seed data scripts, of course).
The second approach is hacky but gets around the issue of long builds:
Create a sh (or ps1) script in ./Code/Repo2 to copy the files you need and invoke the docker commands you want, for example:
#!/bin/bash
rm -rf ./db/schema
mkdir -p ./db
cp -r ../Repo1/db/schema ./db/schema
docker-compose -f docker-compose.yml down
docker container prune -f
docker-compose -f docker-compose.yml up --build
In the docker-compose file, simply set the context as Repo2 root and use the content of the ./db/schema directory in your dockerfile without worrying about the path.
Bear in mind that you will run the risk of accidentally committing this directory to source control, but scripting cleanup actions should be easy enough.

Using 2 Dockerfiles in Cloud Build to re-use intermediary step image if CloudBuild fails

Cloud Build fails with Timeout Error (I'm trying to deploy CloudRun with Prophet). Therefore I'm trying to split the Dockerfile into two (saving the image in between in case it fails). I'd split the Dockerfile like this:
Dockerfile_one: python + prophet's dependencies
Dockerfile_two: image_from_Dockerfile_one + prophet + other dependencies
What should cloudbuild.yaml look like to:
if there is a previously built image available, skip the step; otherwise run the step with Dockerfile_one and save the image
use the image from step (1), add more dependencies to it, and save the image for deploy
Here is what cloudbuild.yaml looks like right now:
steps:
# create gcr source directory
- name: 'bash'
  args:
    - '-c'
    - |
      echo 'Creating gcr_source directory for ${_GCR_NAME}'
      mkdir _gcr_source
      cp -r cloudruns/${_GCR_NAME}/. _gcr_source
# Build the container image
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/${_GCR_NAME}', '.']
  dir: '_gcr_source'
# Push the container image to Container Registry
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/${_GCR_NAME}']
# Deploy container image to Cloud Run
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: gcloud
  args:
    - run
    - deploy
    - ${_GCR_NAME}
    - --image=gcr.io/$PROJECT_ID/${_GCR_NAME}
Thanks a lot!
You need to have 2 pipelines.
The first one creates the base image. That way, you can trigger it every time you need to rebuild this base image, possibly with a different lifecycle than your application's lifecycle. Something similar to this:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/<PROJECT_ID>/base-image', '-f', 'DOCKERFILE_ONE', '.']
images: ['gcr.io/<PROJECT_ID>/base-image']
Then, in your second Dockerfile, start from the base image and use a second Cloud Build pipeline to build, push and deploy it (as you do in the last 3 steps of your question):
FROM gcr.io/<PROJECT_ID>/base-image
COPY .....
....
...
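That second pipeline's cloudbuild.yaml would then look roughly like this sketch (it reuses the substitution and step pattern from the question; the DOCKERFILE_TWO name and any deploy flags such as region are placeholders to adjust):
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/${_GCR_NAME}', '-f', 'DOCKERFILE_TWO', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/${_GCR_NAME}']
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: gcloud
  args: ['run', 'deploy', '${_GCR_NAME}', '--image=gcr.io/$PROJECT_ID/${_GCR_NAME}']
images: ['gcr.io/$PROJECT_ID/${_GCR_NAME}']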
Not the answer, but as a workaround: if anybody has the same issue, using Python 3.8 instead of 3.9 worked for Cloud Build.
This is what the Dockerfile looks like:
# base image per the workaround: Python 3.8 instead of 3.9
FROM python:3.8

RUN pip install --upgrade pip wheel setuptools
# Install pystan (version specifiers quoted so ">" is not treated as a shell redirect)
RUN pip install "Cython>=0.22"
RUN pip install "numpy>=1.7"
RUN pip install pystan==2.19.1.1
# Install other prophet dependencies
RUN pip install -r requirements.txt
RUN pip install prophet
Though figuring out how to iteratively build images for CloudRun, would be really great.
Why did your Cloud Build fail with a Timeout Error?
While building images in Docker, it is important to keep the image size down. Often multiple Dockerfiles are created to handle the image size constraint. In your case, you were not able to reduce the image size and include only what is needed.
What can be done to rectify it?
As per this documentation, multi-stage builds (introduced in Docker 17.05) allow you to build your app in a first "build" container and use the result in another container, while using the same Dockerfile.
You use multiple FROM statements in your Dockerfile. Each FROM instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don’t want in the final image. To show how this works, follow this link.
You only need a single Dockerfile.
The result is the same tiny production image as before, with a significant reduction in complexity. You don’t need to create any intermediate images and you don’t need to extract any artifacts to your local system at all.
How does it work?
You can name your build stages. By default, the stages are not named, and you refer to them by their integer number, starting with 0 for the first FROM instruction. However, you can name your stages by adding an AS to the FROM instruction.
When you build your image, you don’t necessarily need to build the entire Dockerfile including every stage. You can specify a target build stage.
When using multi-stage builds, you are not limited to copying from stages you created earlier in your Dockerfile. You can use the COPY --from instruction to copy from a separate image, either using the local image name, a tag available locally or on a Docker registry, or a tag ID.
You can pick up where a previous stage left off by referring to it when using the FROM directive.
In the Google documentation, there is an example of a Dockerfile which uses multi-stage builds. The hello binary is built in a first container and injected into a second one. Because the second container is based on scratch, the resulting image contains only the hello binary and not the source file and object files needed during the build.
FROM golang:1.10 as builder
WORKDIR /tmp/go
COPY hello.go ./
RUN CGO_ENABLED=0 go build -a -ldflags '-s' -o hello
FROM scratch
CMD [ "/hello" ]
COPY --from=builder /tmp/go/hello /hello
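As mentioned above, you can also stop at a named stage with --target; for example, against the Dockerfile above:
# build only the first ("builder") stage
docker build --target builder -t hello-builder .
# build the whole Dockerfile to get the tiny final image
docker build -t hello .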
Here is a tutorial to understand how multi-stage builds work.

cloud run build and building docker images while building a docker image

Currently, I found out that a Google Cloud Build happens while the Docker image is being built (not, as I thought, that it would build my image and then execute my image to do all the building). That was in this post:
quick start in google cloud build
So, I have a Dockerfile that is really simple now, like so:
FROM gcr.io/google.com/cloudsdktool/cloud-sdk:alpine
RUN mkdir -p ./monobuild
COPY . ./monobuild/
WORKDIR "/monobuild"
RUN ["/bin/bash", "./downloadAndExtract.sh"]
and I have a single downloadAndExtract.sh that downloads any artifacts (zip files) from the last monobuild run that were built (only modified servers are built, OR servers that depend on changes in the last CI build, like downstream libraries). The first line just lists the URLs of the zip files I need in a file...
curl "https://circleci.com/api/v1.1/project/butbucket/Twilio/orderly/latest/artifacts?circle-token=$token" | grep -o 'https://[^"]*zip' > artifacts.txt
while read url; do
  echo "Downloading url=$url"
  zipFile=${url/*\//}
  projectName=${zipFile/.zip/}
  echo "Zip filename=$zipFile"
  echo "projectName=$projectName"
  wget "$url?circle-token=$token"
  mv "$zipFile?circle-token=$token" $zipFile
  unzip $zipFile
  rm $zipFile
  cd $projectName
  ./deployGcloud.sh
  cd ..
done <artifacts.txt
echo "DONE"
Of course, the deployGcloud script has these commands in it, so this means we are building Docker images WHILE building the Google Cloud Build Docker image (which still seems funny to me)...
docker build . --tag gcr.io/twix/authservice
docker push gcr.io/twix/authservice
gcloud run deploy staging-admin --region us-west1 --image gcr.io/twix/authservice --platform managed
BOTH docker commands seem to be erroring out with this:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
while the gcloud command seems to be very happy doing a deploy, but it just uses a previous image we deployed at that location.
So, how do I get around this error so my build will work, build N images, and deploy them all to Cloud Run?
Oh, I finally figured it out. Google has this weird thing in its config.yaml files of "use this docker image to run a curl command, then on the next step use this OTHER docker image to run some other command", and so on, using 5 different images. This is all very confusing, so instead I realized I had to figure out how to create my ONE docker image and just run it as a command. So I modified the above to have an ENTRYPOINT instead, and then docker build and docker push my image into Google. Then I have a cloudbuild.yaml with a single step that runs that image.
In this way, we can tweak our builds easily within our docker image that is just run. This is now way simpler than the complex model that Google had set up, as it becomes: do your build in the container however you like and install whatever tools you need in the one docker image.
i.e. beware the Google quick starts, which honestly IMHO really overcomplicate it compared to CircleCI and other systems (of course, that is just an opinion and to each their own).
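Concretely, the flow described above is roughly this sketch (it reuses the image name and single-step cloudbuild.yaml from the first question; "my-project" is a placeholder for your real project id):
# build and push the deploy image once; it has downloadAndExtract.sh as its ENTRYPOINT
docker build -t gcr.io/my-project/continuous-deploy .
docker push gcr.io/my-project/continuous-deploy

# cloudbuild.yaml is then a single step that just runs that image with args
# steps:
# - name: gcr.io/$PROJECT_ID/continuous-deploy
#   args: ['${_SERVICE}', '${_DOWNLOAD_URL}']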

Perform actions on server after CircleCI deployment

I have a Django project that I deploy on a server using CircleCI. The server is a basic cloud server, and I can SSH into it.
I set up the deployment section of my circle.yml file, and everything is working fine. I would like to automatically perform some actions on the server after the deployment (such as migrating the database or reloading gunicorn).
Is there a way to do that with CircleCI? I looked in the docs but couldn't find anything related to this particular problem. I also tried to put ssh user@my_server_ip after my deployment step, but then I get stuck and cannot perform any action. I can successfully SSH in, but the rest of the commands are not called.
Here is what my ideal circle.yml file would look like:
deployment:
  staging:
    branch: develop
    commands:
      - rsync --update ./requirements.txt user@server:/home/user/requirements.txt
      - rsync -r --update ./myapp/ user@server:/home/user/myapp/
      - ssh user@server
      - workon myapp_venv
      - cd /home/user/
      - pip install -r requirements.txt
I solved the problem by putting a post_deploy.sh file on the server, and putting this line in the circle.yml:
ssh -i ~/.ssh/id_myhost user@server 'post_deploy.sh'
It executes the instructions in the post_deploy.sh file, which is exactly what I wanted.
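The post_deploy.sh itself can then hold the server-side steps from the question, for example (paths, the venv activation and the gunicorn restart command are placeholders for your setup):
#!/bin/bash
set -e
cd /home/user/
source ~/.virtualenvs/myapp_venv/bin/activate   # or: workon myapp_venv
pip install -r requirements.txt
python myapp/manage.py migrate --noinput
sudo systemctl restart gunicorn                 # reload gunicorn however your server runs it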

How to update code in a docker container?

I've set up a Docker Django container and made its image using docker build, following the tutorial here. The tutorial shows how to make a basic Django application and mounts the application to /code, which, as I understand it, is contained in a data volume.
However, I want to understand how I will be able to update and develop this code, and be able to ship/deploy it. When I make a commit, it doesn't take into account any changes in the code, since the code is part of the data volume.
Is there any way I can make the Django code a part of the image, or update the image with the updated code?
In my experience Docker serves two purposes:
To be able to develop code in a containerized environment. This is very useful as I am now able to get new developers on my team ready to work in about 5 minutes. Previously, this could have taken anywhere from an hour to several hours for misc issues, especially on older projects.
To be able to package an application in a containerized environment. This is also a great time saver as the only requirement for the environment is to have Docker installed.
When you are developing your code you should mount the source/volume so that your changes are always reflected inside the container. When you want to package an app for deployment you should COPY the source into the container and package it appropriately.
Here is a docker-compose file I use to (1) build an image to develop against, (2) develop my code, and (3) ship it (I'm using spring boot):
version: '3.7'
services:
  dev:
    image: '${MVN_BUILDER}'
    container_name: '${CONTAINER_NAME}'
    ports:
      - '8080:8080'
    volumes:
      - './src:/build/src'
      - './db:/build/db'
      - './target:/build/target'
      - './logs:/build/logs'
    command: 'mvn spring-boot:run -Drun.jvmArguments="-Xmx512m" -Dmaven.test.skip=true'
  deploy:
    build:
      context: .
      dockerfile: Dockerfile-Deploy
      args:
        MVN_BUILDER: '${MVN_BUILDER}'
    image: '${DEPLOYMENT_IMAGE}'
    container_name: '${CONTAINER_NAME}'
    ports:
      - '8080:8080'
  maven:
    build:
      context: .
      dockerfile: Dockerfile
    image: '${MVN_BUILDER}'
    container_name: '${CONTAINER_NAME}'
I would run docker-compose build maven to build my base image. This is needed so that when I run my code in a container, all the dependencies are installed in the image. The Dockerfile for this essentially copies the pom.xml into the image and downloads the dependencies needed for the app. Note this would need to be performed any time dependencies change. Here is the Dockerfile to build that image, referenced in the maven service:
### BUILD a maven builder. This will contain all mvn dependencies and act as an abstraction for all mvn goals
FROM maven:3.5.4-jdk-8-alpine as builder
#Copy Custom Maven settings
#COPY settings.xml /root/.m2/
# create app folder for sources
RUN mkdir -p /build
RUN mkdir -p /build/logs
# The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile.
WORKDIR /build
COPY pom.xml /build
#Download all required dependencies into one layer
RUN mvn -B dependency:go-offline dependency:resolve-plugins
RUN mvn clean install
Next, I would run docker-compose up dev to start my dev service and begin to develop my application. This service mounts my code into the container and uses Maven to start a spring boot application. Anytime I change the code spring boot restarts the server and my changes are reflected.
Finally, once I am happy with my application I build an image that has my application packaged for deployment using docker-compose build deploy. I use a two-stage build process to first copy the source into a container and package it for deployment as a Jar then that Jar is put into the 2nd stage where I can simply run java -jar build/app.jar (in the container) to start my application and the first stage is removed. That's It! Now you can deploy this latest image anywhere Docker is installed.
Here is what that last Dockerfile (Dockerfile-Deploy) looks like:
ARG MVN_BUILDER
### Stage 1 - BUILD image
FROM $MVN_BUILDER as builder
COPY src /build/src
RUN mvn clean package -PLOCAL
### Stage 2 - Deploy Jar
FROM openjdk:8
RUN mkdir -p /build
COPY --from=builder /build/target/*.jar /build/app.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","build/app.jar"]
Here is the .env file in the same directory as the docker-compose file. I use it to abstract image/container names and simply bump up the version number in one place when a new image is needed.
MVN_BUILDER=some/maven/builder:0.1
DEPLOYMENT_IMAGE=some/deployment/spring:0.1
CONTAINER_NAME=spring-container
CONTAINER_NAME_DEBUG=spring-container-debug
I think it's too late to answer your question; however, it might be beneficial for others who come across this.
The tutorial you mentioned is a bit tricky for first-timers, so I changed the structure a little bit. I assume that you have a docker registry account (like Docker Hub) for publishing the images to. This is required if you want to access the image on a remote host (you can copy the actual image file, but that is not recommended).
creating a project
Assume that you are going to create a website with Django and dockerize it, first, you do:
django-admin startproject samplesite
It creates a directory samplesite that includes the following (I added requirements.txt):
db.sqlite3 manage.py requirements.txt samplesite
adding Dockerfile and docker-compose.yml
For the Dockerfile, as you can see, nothing is changed compared to the tutorial's Dockerfile.
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
However for the docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    image: yourUserNameOnDockerHub/mywebsite:0.1 # this line is added
    command: python manage.py runserver 0.0.0.0:8000
    #volumes:
    #  - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
docker-compose.yml is also almost identical to the one mentioned in the tutorial, with the volume commented out and one line added: image: yourUserNameOnDockerHub/mywebsite:0.1. This line allows us to track the built image and deploy it whenever we want. The volume mounting is not related to the code you write; it was put there to take out the dynamic content that is changed by Django (sqlite, uploaded files, etc.).
build and run
If you run docker-compose up the first time, everything works fine. However, because of the new line added, when you change the code after the first time, the changes will not be reflected in the running container. This is because upon each docker-compose up, compose will look for mywebsite:0.1 (which already exists) and does not build a new image; it creates a container based on the old one. As we need that image name and tag to publish/deploy our image, we need to instead use:
docker-compose up --build
It will re-build the image with the changes reflected. Every time you make some changes, run it and a new fresh image is created, which can be seen with docker images (note that although the name and tag remain unchanged, the change in image id shows that this is a new image):
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
yourUserNameOnDockerHub/mywebsite 0.1 033c9d2bfac0 7 seconds ago 974MB
publishing and deployment
If you have set up an account on Dockerhub (or any other registry) you can publish the image for later use or deployment on a remote server:
docker push yourUserNameOnDockerHub/mywebsite:0.1
If you want to deploy it on a remote host and want to use docker-compose again, Just change the docker-compose.yml to:
version: '3'
services:
  db:
    image: postgres
  web:
    image: yourUserNameOnDockerHub/mywebsite:0.1
    command: python manage.py runserver 0.0.0.0:8000
    #volumes:
    #  - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
Note that the build: . line is removed (as we are only going to run it). When developing locally, whenever you run docker-compose up --build a new image will be created and tagged, and a container based on it will run in the compose stack. If you are happy with the changes, you follow the publishing step to make it live on the server.
When you want to update an image, let's say due to your application code changes, you use COPY during your image build, so in the Dockerfile you do something like:
COPY /you/code/on/the/host /var/www
Also see my answer about "volumes" and image-building https://stackoverflow.com/a/39314602/3625317 to clarify why your code is missing in the build
In step 9 of the tutorial you set a volume. This volume will link your current directory and your container /code directory. In other words, they will be the same.
So any updates to your local files will change the files in your container as well. Remember that you will need to restart your app so the changes can take place.
Before you deploy your image, you will need to create a second docker-compose file. This file will remove the volume so the code stays inside the container and won't change from outside. You can follow the steps provided in the docker-compose documentation; a minimal sketch follows.
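A sketch of that second compose file (call it docker-compose.prod.yml, a hypothetical name): it is the same as the development one minus the volume, so the code baked into the image is what runs:
version: '3'
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"

# run it with: docker-compose -f docker-compose.prod.yml up --build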