How can I include files from outside of Docker's build context using the "ADD" command in the Dockerfile?
From the Docker documentation:
The path must be inside the context of the build; you cannot ADD
../something/something, because the first step of a docker build is to
send the context directory (and subdirectories) to the docker daemon.
I do not want to restructure my whole project just to accommodate Docker in this matter. I want to keep all my Docker files in the same sub-directory.
Also, it appears Docker does not yet (and may not ever) support symlinks: Dockerfile ADD command does not follow symlinks on host #1676.
The only other thing I can think of is to include a pre-build step to copy the files into the Docker build context (and configure my version control to ignore those files). Is there a better workaround than that?
The best way to work around this is to specify the Dockerfile independently of the build context, using -f.
For instance, this command will give the ADD command access to anything in your current directory.
docker build -f docker-files/Dockerfile .
Update: Docker now allows having the Dockerfile outside the build context (fixed in 18.03.0-ce). So you can also do something like
docker build -f ../Dockerfile .
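For example, with a hypothetical layout like the one sketched below, the shared file lives outside the Dockerfile's directory but still inside the context:
# hypothetical layout:
#   project/
#     shared.conf
#     docker-files/
#       Dockerfile
cd project
docker build -f docker-files/Dockerfile -t my-image .
# inside the Dockerfile, "ADD shared.conf /etc/shared.conf" now works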
I often find myself using the --build-arg option for this purpose. For example, after putting the following in the Dockerfile:
ARG SSH_KEY
RUN mkdir -p /root/.ssh && echo "$SSH_KEY" > /root/.ssh/id_rsa
You can just do:
docker build -t some-app --build-arg SSH_KEY="$(cat ~/file/outside/build/context/id_rsa)" .
But note the following warning from the Docker documentation:
Warning: It is not recommended to use build-time variables for passing secrets like github keys, user credentials etc. Build-time variable values are visible to any user of the image with the docker history command.
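That warning is easy to verify: the build-arg value ends up in the image metadata, so anyone with the image can recover it (image name taken from the example above):
docker history --no-trunc some-app | grep SSH_KEY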
I spent a good amount of time trying to figure out a good pattern and how to best explain what's going on with this feature. I realized that the best way to explain it was as follows...
Dockerfile: Will only see files under its own relative path
Context: a place in "space" where the files you want to share and your Dockerfile will be copied to
So, with that said, here's an example of the Dockerfile that needs to reuse a file called start.sh
Dockerfile
Paths in it are always resolved relative to the context, which acts as the local reference for the paths you specify.
COPY start.sh /runtime/start.sh
Files
With this idea in mind, we can have multiple Dockerfiles, each building something specific, but all of them needing access to start.sh.
./all-service/
   /start.sh
   /service-A/Dockerfile
   /service-B/Dockerfile
   /service-C/Dockerfile
./docker-compose.yaml
Considering this structure and the files above, here's the docker-compose.yaml.
In this example, your shared context directory is the all-service directory.
Same mental model here: think of all the files under this directory as being copied over to the so-called context.
Similarly, just specify the Dockerfile that you want copied to that same directory, using the dockerfile key.
The directory where your main content is located is what should be set as the context.
The docker-compose.yaml is as follows:
version: "3.3"
services:
  service-A:
    build:
      context: ./all-service
      dockerfile: ./service-A/Dockerfile
  service-B:
    build:
      context: ./all-service
      dockerfile: ./service-B/Dockerfile
  service-C:
    build:
      context: ./all-service
      dockerfile: ./service-C/Dockerfile
all-service is set as the context, the shared file start.sh is copied there, and so is the Dockerfile specified by each dockerfile entry.
Each one gets built its own way, sharing the start file!
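With that in place, each service can be built through compose or directly with docker; a hedged sketch of both invocations (tag names hypothetical):
docker-compose build service-A
docker build -f all-service/service-A/Dockerfile -t service-a all-service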
On Linux you can mount other directories instead of symlinking them
mount --bind olddir newdir
See https://superuser.com/questions/842642 for more details.
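A minimal sketch of the full round trip (paths hypothetical; note that mount requires root, unlike most other workarounds here):
sudo mount --bind /data/outside-files ./build-context/shared
docker build -t my-image ./build-context
sudo umount ./build-context/shared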
I don't know if something similar is available for other OSes.
I also tried using Samba to share a folder and remount it into the Docker context, which worked as well.
If you read the discussion in issue 2745, not only may Docker never support symlinks, it may never support adding files outside your context. The design philosophy seems to be that files that go into a docker build should explicitly be part of its context, or come from a URL where they are presumably deployed with a fixed version, so that the build is repeatable with well-known URLs or files shipped with the docker container.
I prefer to build from a version controlled source - ie docker build
-t stuff http://my.git.org/repo - otherwise I'm building from some random place with random files.
fundamentally, no.... -- SvenDowideit, Docker Inc
Just my opinion, but I think you should restructure to separate out the code and docker repositories. That way the containers can be generic and pull in any version of the code at run time rather than build time.
Alternatively, use docker as your fundamental code deployment artifact and put the Dockerfile in the root of the code repository. If you go this route, it probably makes sense to have a parent docker container for more general system-level details and a child container for setup specific to your code.
I believe the simpler workaround would be to change the context itself.
So, for example, instead of:
docker build -t hello-demo-app .
which sets the current directory as the context, say you wanted the parent directory as the context; just use:
docker build -t hello-demo-app ..
You can also create a tarball of what the image needs first and use that as your context.
https://docs.docker.com/engine/reference/commandline/build/#/tarball-contexts
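A hedged sketch of the tarball approach (file names hypothetical): tar's -C flag can pull in files from outside the project while the archive is built, and docker then reads the whole tarball as its context:
tar -czf context.tar.gz -C /path/to/project Dockerfile src -C /path/outside shared.conf
docker build -t my-image - < context.tar.gz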
This behavior is determined by the context directory that docker or podman uses to present the files to the build process.
A nice trick here is to change the context dir during the build instruction to the full path of the directory that you want to expose to the daemon.
e.g.:
docker build -t imageName:tag -f /path/to/the/Dockerfile /mysrc/path
Using /mysrc/path instead of . (the current directory), you'll be using that directory as the context, so any files under it can be seen by the build process.
In this example you'll be exposing the entire /mysrc/path tree to the docker daemon.
When using this with docker, the user who triggered the build must have recursive read permission to every single directory and file in the context dir.
This can be useful in cases where you have /home/user/myCoolProject/Dockerfile but want to bring files that aren't in the same directory into the container build context.
Here is an example of building using a context dir, but this time using podman instead of docker.
Let's take as an example a Dockerfile with a COPY or ADD instruction which copies files from a directory outside of your project, like:
FROM myImage:tag
...
...
COPY /opt/externalFile ./
ADD /home/user/AnotherProject/anotherExternalFile ./
...
In order to build this, with a container file located in the /home/user/myCoolProject/Dockerfile, just do something like:
cd /home/user/myCoolProject
podman build -t imageName:tag -f Dockerfile /
Some known use cases for changing the context dir are when using a container as a toolchain for building your source code.
e.g.:
podman build --platform linux/s390x -t myimage:mytag -f ./Dockerfile /tmp/mysrc
or it can be a relative path, like:
podman build --platform linux/s390x -t myimage:mytag -f ./Dockerfile ../../
Another example, this time omitting the global path:
FROM myImage:tag
...
...
COPY externalFile ./
ADD AnotherProject ./
...
Notice that now the full global path for COPY and ADD is omitted in the Dockerfile command layers.
In this case the context dir must be relative to where the files are; if both externalFile and AnotherProject are in the /opt directory, then the context dir for building must be:
podman build -t imageName:tag -f ./Dockerfile /opt
Note when using COPY or ADD with a context dir in docker:
The docker daemon will try to "stream" all the files visible in the context dir tree to the daemon, which can slow down the build, and it requires the user to have recursive read permission over the context dir.
This behavior can be even more costly when driving the build through the API. However, with podman the build starts immediately, without needing recursive permissions, because podman does not enumerate the entire context dir and does not use a client/server architecture.
For such cases, it can be much more practical to use podman instead of docker when you hit these issues with a different context dir.
Some references:
https://docs.docker.com/engine/reference/commandline/build/
https://docs.podman.io/en/latest/markdown/podman-build.1.html
As described in this GitHub issue, the build actually happens in /tmp/docker-12345, so a relative path like ../relative-add/some-file is relative to /tmp/docker-12345. It would thus search for /tmp/relative-add/some-file, which is also shown in the error message.
It is not allowed to include files from outside the build directory, so this results in the "Forbidden path" message.
Using docker-compose, I accomplished this by creating a service that mounts the volumes that I need and committing the image of the container. Then, in the subsequent service, I rely on the previously committed image, which has all of the data stored at the mounted locations. You will then have to copy these files to their ultimate destination, as host-mounted directories do not get committed when running a docker commit command.
You don't have to use docker-compose to accomplish this, but it makes life a bit easier.
# docker-compose.yml
version: '3'
services:
  stage:
    image: alpine
    volumes:
      - /host/machine/path:/tmp/container/path
    command: sh -c "cp -r /tmp/container/path /final/container/path"
  setup:
    image: stage
# setup.sh
# Start "stage" service
docker-compose up stage
# Commit changes to an image named "stage"
docker commit $(docker-compose ps -q stage) stage
# Start setup service off of stage image
docker-compose up setup
Create a wrapper docker build shell script that grabs the file, then calls docker build, then removes the file.
A simple solution not mentioned anywhere here, from my quick skim:
have a wrapper script called docker_build.sh
have it create tarballs, copy large files to the current working directory
call docker build
clean up the tarballs, large files, etc.
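For instance, a minimal sketch of such a wrapper (file names and paths hypothetical):
#!/bin/bash
# docker_build.sh: copy the outside file into the context, build, then clean up
set -e
cp /path/outside/context/big-file.tar ./big-file.tar
trap 'rm -f ./big-file.tar' EXIT   # remove the copy even if the build fails
docker build -t my-image .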
This solution is good because (1) it doesn't have the security hole of copying in your SSH private key, and (2) it doesn't require root permission, unlike the sudo mount --bind solution, which has its own security hole for that reason.
I think as of earlier this year a feature was added in buildx to do just this.
If you have Dockerfile 1.4+ and buildx 0.8+ you can do something like this:
docker buildx build --build-context othersource=../something/something .
Then in your Dockerfile you can use the --from flag to reference the named context:
COPY --from=othersource . /stuff
See this related post https://www.docker.com/blog/dockerfiles-now-support-multiple-build-contexts/
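Putting the two together, an end-to-end sketch under those version assumptions (image name hypothetical; the syntax line opts in to the 1.4 Dockerfile frontend):
# syntax=docker/dockerfile:1.4
FROM alpine
COPY --from=othersource . /stuff
built with:
docker buildx build --build-context othersource=../something/something -t my-image .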
Workaround with links:
ln path/to/file/outside/context/file_to_copy ./file_to_copy
In the Dockerfile, simply:
COPY file_to_copy /path/to/file
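Note that ln without -s creates a hard link, which docker does follow, but it only works when both paths are on the same filesystem. A sketch of the full flow (paths hypothetical):
ln /mnt/same-fs/file_to_copy ./file_to_copy
docker build -t my-image .
rm ./file_to_copy   # removes only the extra link; the original file is untouched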
I was personally confused by some answers, so I decided to explain it simply.
When you want to create an image, you pass docker the context against which the paths in your Dockerfile are resolved.
I always select the root of the project as the context.
So for example, if you use a COPY command like COPY . ., the first dot (.) is the source path relative to the context and the second dot (.) is the destination inside the container's working directory.
Assuming the context is the project root, dot (.), and the code structure is like this:
sample-project/
  docker/
    Dockerfile
If you want to build the image and your path (the path where you run the docker build command) is /full-path/sample-project/, you should do this:
docker build -f docker/Dockerfile .
and if your path is /full-path/sample-project/docker/, you should do this:
docker build -f Dockerfile ../
An easy workaround might be to simply mount the volume (using the -v or --mount flag) to the container when you run it and access the files that way.
example:
docker run -v /path/to/file/on/host:/desired/path/to/file/in/container/ image_name
For more, see: https://docs.docker.com/storage/volumes/
I had this same issue with a project and some data files that I wasn't able to move inside the repo context for HIPAA reasons. I ended up using two Dockerfiles. One builds the main application without the stuff I needed outside the container and publishes that to an internal repo. Then a second Dockerfile pulls that image, adds the data, and creates a new image which is then deployed and never stored anywhere. Not ideal, but it worked for my purposes of keeping sensitive information out of the repo.
In my case, my Dockerfile is written like a template containing placeholders which I replace with real values using my configuration file.
So I couldn't specify this file directly but pipe it into the docker build like this:
sed "s/%email_address%/$EMAIL_ADDRESS/;" ./Dockerfile | docker build -t katzda/bookings:latest . -f -;
But because of the pipe, the COPY command normally wouldn't work: with a plain docker build -, the Dockerfile is read from stdin and no build context is sent at all, which is the caveat. The invocation above solves this by combining -f - (read the Dockerfile from stdin) with an explicit . context.
How to share TypeScript code between two Dockerfiles
I had this same problem, but for sharing files between two TypeScript projects. Some of the other answers didn't work for me because I needed to preserve the relative import paths between the shared code. I solved it by organizing my code like this:
api/
  Dockerfile
  src/
    models/
      index.ts
frontend/
  Dockerfile
  src/
    models/
      index.ts
shared/
  model1.ts
  model2.ts
  index.ts
.dockerignore
Note: After extracting the shared code into that top folder, I avoided needing to update the import paths, because I updated api/src/models/index.ts and frontend/src/models/index.ts to re-export from shared (e.g. export * from '../../../shared').
Since the build context is now one directory higher, I had to make a few additional changes:
Update the build command to use the new context:
docker build -f Dockerfile .. (two dots instead of one)
Use a single .dockerignore at the top level to exclude all node_modules (e.g. **/node_modules/**).
Prefix the Dockerfile COPY commands with api/ or frontend/
Copy shared (in addition to api/src or frontend/src)
WORKDIR /usr/src/app
COPY api/package*.json ./ <---- Prefix with api/
RUN npm ci
COPY api/src api/ts*.json ./ <---- Prefix with api/
COPY shared /usr/src/shared <---- ADDED
RUN npm run build
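Equivalently, both images can be built from the top-level folder itself (a hedged sketch; image names hypothetical):
docker build -f api/Dockerfile -t my-api .
docker build -f frontend/Dockerfile -t my-frontend .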
This was the easiest way I could send everything to docker, while preserving the relative import paths in both projects. The tricky (annoying) part was all the changes/consequences caused by the build context being up one directory.
One quick and dirty way is to set the build context up as many levels as you need - but this can have consequences.
If you're working in a microservices architecture that looks like this:
./Code/Repo1
./Code/Repo2
...
You can set the build context to the parent Code directory and then access everything, but it turns out that with a large number of repositories, this can result in the build taking a long time.
An example situation could be that another team maintains a database schema in Repo1 and your team's code in Repo2 depends on this. You want to dockerise this dependency with some of your own seed data, without worrying about schema changes or polluting the other team's repository (depending on what the changes are, you may still have to change your seed data scripts, of course).
The second approach is hacky but gets around the issue of long builds:
Create a sh (or ps1) script in ./Code/Repo2 to copy the files you need and invoke the docker commands you want, for example:
#!/bin/bash
# refresh the local copy of the schema from the other repo
rm -rf ./db/schema
mkdir -p ./db/schema
cp -r ../Repo1/db/schema/. ./db/schema
docker-compose -f docker-compose.yml down
docker container prune -f
docker-compose -f docker-compose.yml up --build
In the docker-compose file, simply set the context as the Repo2 root and use the content of the ./db/schema directory in your Dockerfile without worrying about the path.
Bear in mind that you will run the risk of accidentally committing this directory to source control, but scripting cleanup actions should be easy enough.
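One small safeguard for that risk (path hypothetical): ignore the copied directory so it never lands in version control.
echo "db/schema/" >> .gitignore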
We run these two commands (the first one is async and the other runs synchronously)
#async BUT does something funky and doesn't run the Dockerfile image as-is
gcloud alpha builds triggers run staging-deploy --branch master
# sync BUT runs the image the way it's supposed to run!!!
gcloud builds submit --config cloudbuild.yaml
Both are using our cloudbuild.yaml:
steps:
- name: gcr.io/$PROJECT_ID/continuous-deploy
  args: ['${_SERVICE}', '${_DOWNLOAD_URL}']
  timeout: 1000s
substitutions:
  _SERVICE: none
  _DOWNLOAD_URL: none
timeout: 1100s
Our Dockerfile is very, very simple:
FROM gcr.io/google.com/cloudsdktool/cloud-sdk:alpine
RUN mkdir -p ./monobuild
COPY . ./monobuild/
WORKDIR "/monobuild"
#NOTE: This file in google cloud build trigger MUST be in root of monorepo BUT I don't know why
#NOTE: This command receives any arguments to docker
#ie. for "docker run {image} {args}", it receives the args
ENTRYPOINT ["./downloadAndExtract.sh"]
So, when I run the SECOND command, it uses the docker image exactly as the Dockerfile dictates. When I run the first command, it ignores all my Dockerfile stuff and tries to run scripts in my git repo (which is very frustrating and not what I want).
We HAD this directory structure
- gitroot
  - stagingDeploy
    - Dockerfile
    - deployStaging.sh # part of Dockerfile
    - cloudbuild.yaml
  - prodDeploy
    - Dockerfile
    - prodDeploy.sh # part of Dockerfile
    - cloudbuild.yaml
Of course, only the second command works with this directory structure. The first command CANNOT find deployStaging.sh until we ln -s stagingDeploy/deployStaging.sh from our git repo root; we have around 5 deploy directories, so now our git repo root is fully polluted.
It is, to say the least, very frustrating, and we are not sure how to clean this up so that prodDeploy contains all the prod deploy scripts, staging contains the staging ones, and we can get rid of all the root files.
Of course, we now have a cluttered git repo directory structure with a whole slew of files in the root directory from various builds (sometimes conflicting by accident as files end up with the same names).
EDIT: Not really much to share on the configuration of the triggers, as each one just points to the yaml file, is all.
thanks,
Dean
From the link below:
https://cloud.google.com/run/docs/quickstarts/build-and-deploy#shell_1
I am going through a tutorial on how to deploy an app on Cloud Run, and I keep having errors. See details below:
Quickstart: Build and Deploy
Directory: helloworld-shell
Files in Directory:
script.sh
invoke.go
Dockerfile
Code of each file
Script Code
script.sh
#!/bin/sh
echo Hello ${TARGET:=World}!
Invoke Code:
invoke.go
package main
import (
    "fmt"
    "log"
    "net/http"
    "os"
    "os/exec"
)
func handler(w http.ResponseWriter, r *http.Request) {
    log.Print("helloworld: received a request")
    cmd := exec.CommandContext(r.Context(), "/bin/sh", "script.sh")
    cmd.Stderr = os.Stderr
    out, err := cmd.Output()
    if err != nil {
        w.WriteHeader(500)
    }
    w.Write(out)
}

func main() {
    log.Print("helloworld: starting server...")
    http.HandleFunc("/", handler)
    port := os.Getenv("PORT")
    if port == "" {
        port = "8080"
    }
    log.Printf("helloworld: listening on %s", port)
    log.Fatal(http.ListenAndServe(fmt.Sprintf(":%s", port), nil))
}
Dockerfile
# Use the official Golang image to create a build artifact.
# This is based on Debian and sets the GOPATH to /go.
# https://hub.docker.com/_/golang
FROM golang:1.13 as builder
# Create and change to the app directory.
WORKDIR /app
# Retrieve application dependencies using go modules.
# Allows container builds to reuse downloaded dependencies.
COPY go.* ./
RUN go mod download
# Copy local code to the container image.
COPY invoke.go ./
# Build the binary.
# -mod=readonly ensures immutable go.mod and go.sum in container builds.
RUN CGO_ENABLED=0 GOOS=linux go build -mod=readonly -v -o server
# Use the official Alpine image for a lean production container.
# https://hub.docker.com/_/alpine
# https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
FROM alpine:3
RUN apk add --no-cache ca-certificates
# Copy the binary to the production image from the builder stage.
COPY --from=builder /app/server /server
COPY script.sh ./
# Run the web service on container startup.
CMD ["/server"]
After running the build cmd on the cloud shell, as below:
gcloud builds submit --tag gcr.io/ultra-complex-282611/helloworld
I keep getting output as below:
sunny#cloudshell:~/helloworld-shell (ultra-complex-282611)$ gcloud builds submit --tag gcr.io/ultra-complex-282611/helloworld-shell/script.sh
Creating temporary tarball archive of 3 file(s) totalling 1.8 KiB before compression.
Uploading tarball of [.] to [gs://ultra-complex-282611_cloudbuild/source/1595280001.548751-6f55216d642d438a82392a7ae1688fbe.tgz]
Created [https://cloudbuild.googleapis.com/v1/projects/ultra-complex-282611/builds/ec154f13-cc1e-4082-bcc0-e47804d201cb].
Logs are available at [https://console.cloud.google.com/cloud-build/builds/ec154f13-cc1e-4082-bcc0-e47804d201cb?project=413771885505].
---------------------------------------------------------------------------- REMOTE BUILD OUTPUT ----------------------------------------------------------------------------
starting build "ec154f13-cc1e-4082-bcc0-e47804d201cb"
FETCHSOURCE
Fetching storage object: gs://ultra-complex-282611_cloudbuild/source/1595280001.548751-6f55216d642d438a82392a7ae1688fbe.tgz#1595280009304068
Copying gs://ultra-complex-282611_cloudbuild/source/1595280001.548751-6f55216d642d438a82392a7ae1688fbe.tgz#1595280009304068...
/ [1 files][ 1.1 KiB/ 1.1 KiB]
Operation completed over 1 objects/1.1 KiB.
BUILD
Already have image (with digest): gcr.io/cloud-builders/docker
***** NOTICE *****
Alternative official `docker` images, including multiple versions across
multiple platforms, are maintained by the Docker Team. For details, please
visit https://hub.docker.com/_/docker.
***** END OF NOTICE *****
Sending build context to Docker daemon 5.632kB
Step 1/11 : FROM golang:1.13 as builder
1.13: Pulling from library/golang
e9afc4f90ab0: Already exists
989e6b19a265: Already exists
af14b6c2f878: Already exists
5573c4b30949: Already exists
d4020e2aa747: Already exists
78b4a3dfc225: Pulling fs layer
2ade102f7410: Pulling fs layer
2ade102f7410: Verifying Checksum
2ade102f7410: Download complete
78b4a3dfc225: Verifying Checksum
2ade102f7410: Download complete
78b4a3dfc225: Verifying Checksum
78b4a3dfc225: Download complete
78b4a3dfc225: Pull complete
2ade102f7410: Pull complete
Digest: sha256:ffb07735793859dc30a06503eb4cbc5c9523b1477ac55155c61a2285abd4c89d
Status: Downloaded newer image for golang:1.13
---> afae231e0b45
Step 2/11 : WORKDIR /app
---> Running in 5f7bae1883f6
Removing intermediate container 5f7bae1883f6
---> 3405fccd5cd0
Step 3/11 : COPY go.* ./
COPY failed: no source files were specified
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ERROR: (gcloud.builds.submit) build ec154f13-cc1e-4082-bcc0-e47804d201cb completed with status "FAILURE"
How else am I supposed to specify the source file?
I had the same issue. I solved it by creating the go.mod file (shown in the Go tab of the documentation) in the helloworld-shell directory, which allowed the build to succeed.
module github.com/knative/docs/docs/serving/samples/hello-world/helloworld-go
go 1.13
You should have the following files:
Dockerfile go.mod invoke.go script.sh
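If you prefer to generate the file rather than write it by hand, go mod init produces an equivalent go.mod (module path as above):
cd helloworld-shell
go mod init github.com/knative/docs/docs/serving/samples/hello-world/helloworld-go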
I am trying to set up continuous deployment of my Golang backend using the Google documentation, but when my trigger fires, it fails with the following error:
starting build "eba3ce39-caad-43f0-a255-0a3cacec4913"
FETCHSOURCE
Initialized empty Git repository in /workspace/.git/
From https://source.developers.google.com/p/my-porject/r/github_myusername_myproject.com
* branch 660796f575bae6860d6f96df60cfd631a730c3ae -> FETCH_HEAD
HEAD is now at 660796f cloudbuild.yaml
BUILD
Starting Step #0
Step #0: Already have image (with digest): gcr.io/cloud-builders/docker
Step #0: unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /workspace/Dockerfile: no such file or directory
Finished Step #0
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
My project file structure looks like:
project
  frontend
  backend
    main.go
    cloudbuild.yaml
    Dockerfile
where my cloudbuild.yaml looks like:
steps:
  # Build the container image
  - name: "gcr.io/cloud-builders/docker"
    args:
      [
        "build",
        "-t",
        "gcr.io/my-project/github.com/username/project.com:$COMMIT_SHA",
        ".",
      ]
  # Push the image to Container Registry
  - name: "gcr.io/cloud-builders/docker"
    args:
      [
        "push",
        "gcr.io/my-project/github.com/username/project.com:$COMMIT_SHA",
      ]
  # Deploy image to Cloud Run
  - name: "gcr.io/cloud-builders/gcloud"
    args:
      - "run"
      - "deploy"
      - "[SERVICE_NAME]"
      - "--image"
      - "gcr.io/my-project/github.com/username/project.com:$COMMIT_SHA"
      - "--region"
      - "us-central1"
      - "--platform"
      - "managed"
images:
  - gcr.io/my-project/github.com/username/project.com
and my Dockerfile looks like
# Use the official Golang image to create a build artifact.
# This is based on Debian and sets the GOPATH to /go.
# https://hub.docker.com/_/golang
FROM golang:1.13 as builder
# Create and change to the app directory.
WORKDIR /app
# Retrieve application dependencies.
# This allows the container build to reuse cached dependencies.
COPY go.* ./
RUN go mod download
# Copy local code to the container image.
COPY . ./
# Build the binary.
RUN CGO_ENABLED=0 GOOS=linux go build -mod=readonly -v -o server
# Use the official Alpine image for a lean production container.
# https://hub.docker.com/_/alpine
# https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
FROM alpine:3
RUN apk add --no-cache ca-certificates
# Copy the binary to the production image from the builder stage.
COPY --from=builder /app/server /server
# Run the web service on container startup.
CMD ["/server"]
I got the Dockerfile from Quickstart: Build and Deploy.
When you push to your GitHub repo, Cloud Build triggers and looks for the cloudbuild.yaml file. You can specify the cloudbuild.yaml location when you create the build trigger by editing the Configuration section and choosing "Cloud Build configuration file (yaml or json)", where you set the cloudbuild.yaml location. In your case, just make it backend/cloudbuild.yaml.
Now, that's not enough, because when the build starts, the docker build command from your first step kicks in. However, your docker build context is ., which is wrong: your whole repo was copied to GCP, and the build context is relative to the repo root, not to where the cloudbuild.yaml is.
To solve this issue, just change the docker build context to ./backend. Your final cloudbuild version should be something like:
steps:
  # Build the container image
  - name: "gcr.io/cloud-builders/docker"
    args:
      [
        "build",
        "-t",
        "gcr.io/my-project/github.com/username/project.com:$COMMIT_SHA",
        "./backend",
      ]
  # Rest of the steps ...
The Cloud Build trigger is currently pointing to /project/ while your directory structure is as follows:
project
  frontend
  backend
    main.go
    cloudbuild.yaml
    Dockerfile
When you execute the trigger, the repository is copied to /workspace/, so the Dockerfile cannot be found at the root of that directory.
You can move everything to the same working directory.
.
├── main.go
├── cloudbuild.yaml
└── Dockerfile
If you would like to keep your current directory structure, your Cloud Build trigger will need to point to /project/backend/ instead. Note that you can check your directory structure using the ls -la Linux command.
I have a pre-existing Golang project with the following folder structure (minimized for readability).
- postgre
  - service.go
- cmd
  - vano
    - main.go
  - vanoctl
    - main.go
- vano.go
Now, since my project's web server is in ./cmd/vano, I need to create a custom Buildfile and Procfile. So I did that.
Here is my Buildfile
make: ./build.sh
build.sh file:
#!/usr/bin/env bash
# Install dependencies.
go get ./...
# Build app
go build ./cmd/vano -o bin/application
and finally my Procfile:
web: bin/application
So now my folder structure looks like this:
- postgre
  - service.go
- cmd
  - vano
    - main.go
  - vanoctl
    - main.go
- vano.go
- Buildfile
- build.sh
- Procfile
I zip up the source using git:
git archive --format=zip HEAD > vano.zip
And upload it to AWS Elastic Beanstalk. However, I keep getting errors, and AWS errors don't seem to be the most readable. Here is my error:
Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
Error Message
[Instance: i-0d8f642474e3b2c68] Command failed on instance. Return code: 1 Output: (TRUNCATED)...' Failed to execute 'HOME=/tmp /opt/elasticbeanstalk/lib/ruby/bin/ruby /opt/elasticbeanstalk/lib/ruby/bin/foreman start --procfile /tmp/d20170213-1941-1baz0rh/eb-buildtask-0 --root /var/app/staging --env /var/elasticbeanstalk/staging/elasticbeanstalk.env'. Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/01_configure_application.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
Extra Error info:
Failed to execute 'HOME=/tmp /opt/elasticbeanstalk/lib/ruby/bin/ruby /opt/elasticbeanstalk/lib/ruby/bin/foreman start --procfile /tmp/d20170213-1941-1baz0rh/eb-buildtask-0 --root /var/app/staging --env /var/elasticbeanstalk/staging/elasticbeanstalk.env'
Another approach here, instead of using a Procfile etc., would be to cross-compile your binary (usually pretty painless in Go) and upload it that way, as per the simple instructions in the guide:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/go-environment.html
You can just compile it locally with:
GOARCH=amd64 GOOS=linux go build -o bin/application ./cmd/vano
Then upload a zip containing the application file and it should work, assuming your setup only requires this one binary to run.
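A hedged sketch of that flow (per the linked guide, the Go platform looks for a binary named application at the root of the source bundle):
GOARCH=amd64 GOOS=linux go build -o bin/application ./cmd/vano
cd bin && zip ../vano.zip application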