GCP Container Builder unable to evaluate symlinks - google-container-registry

Is Container Builder not able to handle Git Repos with symlinks?
Step #1 - "device-registry-php": unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /workspace/services/device-registry/build: no such file or directory

Related

AWS CodeBuild with Multi Docker Containers: unable to prepare context: unable to evaluate symlinks in Dockerfile path

So I am trying to deploy my multi-docker-container (frontend, backend, and Nginx containers) application to AWS Elastic Beanstalk. I am using CodeBuild to build the Docker images with a buildspec.yml file. The build fails when trying to build the first container (containerizing the frontend application). Kindly refer to the image attached for the error details.
It is basically saying it could not find the Dockerfile in the client directory, but the funny thing is that it exists, and it works as expected locally when I build the containers with docker-compose.
Here is the project directory:
buildspec.yml file:
For the benefit of others, the reason for the error is that the Dockerfile is missing from that location. Make sure you have the Dockerfile inside the directory (./client in this case). It has to be spelled exactly Dockerfile. If it's not there, check the source repo and ensure that the Dockerfile is committed.
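Since the original buildspec.yml and screenshot aren't reproduced here, the following is only a rough sketch of what the relevant build phase might look like for this layout; the image tag is a placeholder, and the point is simply that docker build fails with the "unable to evaluate symlinks" error when ./client/Dockerfile is missing or spelled differently:
version: 0.2
phases:
  build:
    commands:
      # Fails with "unable to prepare context: unable to evaluate symlinks in Dockerfile path"
      # if ./client does not contain a file named exactly "Dockerfile".
      - docker build -t frontend-app ./client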

Cloud build can't open requirements.txt

I want to set up a Cloud Build trigger so that each time I modify (commit and push) main.py, it executes test_mainpytest.py with pytest.
I have a project that looks like this:
My_Project\function_one\
  main.py
  deploy.yaml
  requirements.txt
  dir_pytest\
    test_mainpytest.py
My deploy.yaml contains these steps:
steps:
- name: 'python'
  args: ['pip3', 'install', '-r', 'My_Project/function_one/requirements.txt', '--user']
- name: 'python'
  args: ['python3', 'pytest', 'My_Project/function_one/dir_pytest/']
For the moment I just want to try to execute pytest using the trigger. When I execute the Cloud Build trigger, I get this error:
ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'My_Project/function_one/requirements.txt'
Also, my project is stored in a Google Cloud Source Repository.
Edit:
I tried to add dir to my steps, so it currently looks like this:
steps:
- name: 'python'
  dir: 'MyProject/function_one/'
  args: ['pip3', 'install', '-r', 'My_Project/function_one/requirements.txt', '--user']
- name: 'python'
  dir: 'MyProject/function_one/'
  args: ['python3', 'pytest', 'My_Project/function_one/dir_pytest/']
Yet I still get the error (I also tried to put dir after args, but it didn't change much).
I also noticed these two lines when executing the trigger in Cloud Build:
Initialized empty Git repository in /workspace/.git/
From https://source.developers.google.com/p/my_id_1234/r/My_Project
Should I use https://source.developers.google.com/p/my_id_1234/r/My_Project and add that path to my requirements.txt and my pytest directory?
Could you show your whole cloudbuild.yaml? If you are using a build trigger, the repository is imported directly in /workspace. If you are doing a git clone, then your repository is inside a directory with the name of the repository. The difference is:
/workspace/my-repository/My_Project/function_one/requirements.txt
versus
/workspace/My_Project/function_one/requirements.txt
If nothing else works, you can do ls -R to show you the directory structure within the build. Add this as a first build step:
# The step's name must be a container image; a plain image such as ubuntu works for listing files
- id: 'list recursively'
  name: 'ubuntu'
  args: ['ls', '-R']
Notice that Cloud Build uses a directory called /workspace as its working directory and persists its contents across steps. You can add the dir field within your cloudbuild.yaml file so that Cloud Build finds the requirements.txt file and then runs the tests.
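Putting that together, a sketch of a corrected deploy.yaml could look like the following, assuming the trigger imports the repository directly into /workspace (so My_Project sits at the workspace root); note that with dir set, the remaining paths are relative to that directory, and pytest is invoked via python3 -m pytest:
steps:
- name: 'python'
  dir: 'My_Project/function_one'
  # paths below are relative to dir
  args: ['pip3', 'install', '-r', 'requirements.txt', '--user']
- name: 'python'
  dir: 'My_Project/function_one'
  args: ['python3', '-m', 'pytest', 'dir_pytest/']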

Continuous deployment from git using Cloud Build

I am trying to make a build trigger for Cloud Run using this tutorial,
but I get the following error message:
Starting Step #0
Step #0: Already have image (with digest): gcr.io/cloud-builders/docker
Step #0: unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /workspace/Dockerfile: no such file or directory
Finished Step #0
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
Does anyone know why?
EDIT: My project repo is split into frontend and backend folders. I am just trying to deploy my backend folder, which contains a Go API.
I have followed the tutorial you provided and I encountered the same error message.
It seems like the steps specified inside the cloudbuild.yaml file require a Dockerfile to be present in the repository's root folder. Specifically, the following instruction builds the image from your . folder.
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/[SERVICE-NAME]:$COMMIT_SHA', '.']
There are two solutions to your problem. If you need to build a Docker image, simply creating the Dockerfile at the repository root will solve your issue. Another solution would be to not use a custom image. I have used the following cloudbuild.yaml file in order to deploy successfully:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args:
  - 'run'
  - 'deploy'
  - '[SERVICE-NAME]'
  - '--image'
  - 'gcr.io/cloudrun/hello'
  - '--region'
  - '[REGION]'
  - '--platform'
  - 'managed'
Notice how I'm still using a container image (gcr.io/cloudrun/hello).
-- edit
As explained by @guillaume-blaquiere, the tutorial takes for granted that your repository already works on Cloud Run. You should go through a Cloud Run tutorial before this one.
-- edit 2
A third solution that worked for the OP is to specify the path to the Dockerfile in the build instruction. That is done by changing the . directory to the relative directory that contains the Dockerfile.
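For a repository split into frontend and backend folders like the OP's, with the Dockerfile inside backend, that change might look like this (service name and project ID are placeholders):
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/[SERVICE-NAME]:$COMMIT_SHA', './backend']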
The error says /workspace/Dockerfile: no such file or directory
I suppose your repository does not contain a Dockerfile at its root.

Google Cloud Build Trigger failing with "ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1"

I am trying to set up continuous deployment of my Go backend using the Google documentation, but when my trigger fires, it fails with the following error:
starting build "eba3ce39-caad-43f0-a255-0a3cacec4913"
FETCHSOURCE
Initialized empty Git repository in /workspace/.git/
From https://source.developers.google.com/p/my-porject/r/github_myusername_myproject.com
* branch 660796f575bae6860d6f96df60cfd631a730c3ae -> FETCH_HEAD
HEAD is now at 660796f cloudbuild.yaml
BUILD
Starting Step #0
Step #0: Already have image (with digest): gcr.io/cloud-builders/docker
Step #0: unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /workspace/Dockerfile: no such file or directory
Finished Step #0
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
My project file structure looks like:
project
  frontend
  backend
    main.go
    cloudbuild.yaml
    Dockerfile
where my cloudbuild.yaml looks like:
steps:
  # Build the container image
  - name: "gcr.io/cloud-builders/docker"
    args:
      [
        "build",
        "-t",
        "gcr.io/my-project/github.com/username/project.com:$COMMIT_SHA",
        ".",
      ]
  # Push the image to Container Registry
  - name: "gcr.io/cloud-builders/docker"
    args:
      [
        "push",
        "gcr.io/my-project/github.com/username/project.com:$COMMIT_SHA",
      ]
  # Deploy image to Cloud Run
  - name: "gcr.io/cloud-builders/gcloud"
    args:
      - "run"
      - "deploy"
      - "[SERVICE_NAME]"
      - "--image"
      - "gcr.io/my-project/github.com/username/project.com:$COMMIT_SHA"
      - "--region"
      - "us-central1"
      - "--platform"
      - "managed"
images:
  - gcr.io/my-project/github.com/username/project.com
and my Dockerfile looks like
# Use the official Golang image to create a build artifact.
# This is based on Debian and sets the GOPATH to /go.
# https://hub.docker.com/_/golang
FROM golang:1.13 as builder
# Create and change to the app directory.
WORKDIR /app
# Retrieve application dependencies.
# This allows the container build to reuse cached dependencies.
COPY go.* ./
RUN go mod download
# Copy local code to the container image.
COPY . ./
# Build the binary.
RUN CGO_ENABLED=0 GOOS=linux go build -mod=readonly -v -o server
# Use the official Alpine image for a lean production container.
# https://hub.docker.com/_/alpine
# https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
FROM alpine:3
RUN apk add --no-cache ca-certificates
# Copy the binary to the production image from the builder stage.
COPY --from=builder /app/server /server
# Run the web service on container startup.
CMD ["/server"]
I got the Dockerfile from Quickstart: Build and Deploy.
When you push to your GitHub repo, Cloud Build triggers and looks for the cloudbuild.yaml file. You can specify the cloudbuild.yaml location when you create the build trigger by editing the Configuration section, choosing Cloud Build configuration file (yaml or json), and setting the cloudbuild.yaml location. In your case, just make it backend/cloudbuild.yaml.
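If you prefer the command line, one possible way to set that configuration path when creating the trigger is sketched below; the repository name, owner, and branch pattern are placeholders, and depending on your gcloud version the command may live under the beta component:
gcloud builds triggers create github \
  --repo-name=[REPO-NAME] \
  --repo-owner=[REPO-OWNER] \
  --branch-pattern="^master$" \
  --build-config=backend/cloudbuild.yaml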
Now, that alone is not enough, because when the build starts, the docker build command in your first step runs with a build context of .. That is wrong here, because your whole repository is copied into the build workspace, so the build context is relative to the repository root, not to the location of the cloudbuild.yaml file.
To solve this, just change the Docker build context to ./backend. The final version of your cloudbuild.yaml should look something like:
steps:
  # Build the container image
  - name: "gcr.io/cloud-builders/docker"
    args:
      [
        "build",
        "-t",
        "gcr.io/my-project/github.com/username/project.com:$COMMIT_SHA",
        "./backend",
      ]
  # Rest of the steps ...
The Cloud Build trigger is currently pointing to /project/ while your directory structure is as follows:
project
  frontend
  backend
    main.go
    cloudbuild.yaml
    Dockerfile
When you execute the trigger, the repository is copied to /workspace/, and thus the Dockerfile cannot be found at its root.
You can move everything to the same working directory.
.
├── main.go
├── cloudbuild.yaml
├── Dockerfile
If you would like to keep your current directory structure, your Cloud Build trigger will need to point to /project/backend/ instead. Note that you can check your directory structure using the ls -la Linux command.
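If it helps to see exactly what ends up in /workspace before the Docker step runs, a debug-only first step along these lines could be added; the ubuntu image is just an arbitrary choice for running ls:
steps:
  # Temporary step: list the workspace contents to verify where the Dockerfile landed
  - name: "ubuntu"
    args: ["ls", "-la", "/workspace"]
  # ... the docker build step follows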

Deploying golang app in cmd folder to AWS Beanstalk

I have a pre-existing Go project with the following folder structure (folders minimized for readability).
- postgre
  - service.go
- cmd
  - vano
    - main.go
  - vanoctl
    - main.go
vano.go
Now, since my project's web server is in ./cmd/vano, I need to create a custom Buildfile and Procfile. So I did that.
Here is my Buildfile
make: ./build.sh
build.sh file:
#!/usr/bin/env bash
# Install dependencies.
go get ./...
# Build app
go build ./cmd/vano -o bin/application
and finally my Procfile:
web: bin/application
So now my folder structure looks like this:
- postgre
  - service.go
- cmd
  - vano
    - main.go
  - vanoctl
    - main.go
vano.go
Buildfile
build.sh
Procfile
I zip up the source using git:
git archive --format=zip HEAD > vano.zip
And upload it to AWS Elastic Beanstalk. However, I keep getting errors, and AWS errors don't seem to be the most readable. Here is my error:
Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
Error Message
[Instance: i-0d8f642474e3b2c68] Command failed on instance. Return code: 1 Output: (TRUNCATED)...' Failed to execute 'HOME=/tmp /opt/elasticbeanstalk/lib/ruby/bin/ruby /opt/elasticbeanstalk/lib/ruby/bin/foreman start --procfile /tmp/d20170213-1941-1baz0rh/eb-buildtask-0 --root /var/app/staging --env /var/elasticbeanstalk/staging/elasticbeanstalk.env'. Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/01_configure_application.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
Extra Error info:
Failed to execute 'HOME=/tmp /opt/elasticbeanstalk/lib/ruby/bin/ruby /opt/elasticbeanstalk/lib/ruby/bin/foreman start --procfile /tmp/d20170213-1941-1baz0rh/eb-buildtask-0 --root /var/app/staging --env /var/elasticbeanstalk/staging/elasticbeanstalk.env'
Another approach here, instead of using a Procfile etc., would be to cross-compile your binary (usually pretty painless in Go) and upload it that way, as per the simple instructions in the guide:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/go-environment.html
You can just compile it locally with:
GOARCH=amd64 GOOS=linux go build -o bin/application ./cmd/vano
Then upload a zip of the application file and it should work, assuming your setup only requires this one binary to run.
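As a rough end-to-end example, assuming the Elastic Beanstalk Go platform and that the service only needs this single binary, the sequence could be:
# Cross-compile for the 64-bit Linux instances Elastic Beanstalk runs on
GOARCH=amd64 GOOS=linux go build -o bin/application ./cmd/vano
# Bundle just the compiled binary, named "application", at the root of the zip
cd bin && zip ../vano.zip application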