GCP Quickstart: Build and Deploy (Cloud Run Tutorial) - google-cloud-platform

From the link below:
https://cloud.google.com/run/docs/quickstarts/build-and-deploy#shell_1
I am going through a tutorial on how to deploy an app on Cloud Run, and I keep getting errors. See details below:
Quickstart: Build and Deploy
Directory: helloworld-shell
Files in Directory:
script.sh
invoke.go
Dockerfile
Code of each file
Script Code
script.sh
#!/bin/sh
echo Hello ${TARGET:=World}!
Invoke Code:
invoke.go
package main

import (
    "fmt"
    "log"
    "net/http"
    "os"
    "os/exec"
)

func handler(w http.ResponseWriter, r *http.Request) {
    log.Print("helloworld: received a request")
    cmd := exec.CommandContext(r.Context(), "/bin/sh", "script.sh")
    cmd.Stderr = os.Stderr
    out, err := cmd.Output()
    if err != nil {
        w.WriteHeader(500)
    }
    w.Write(out)
}

func main() {
    log.Print("helloworld: starting server...")
    http.HandleFunc("/", handler)

    port := os.Getenv("PORT")
    if port == "" {
        port = "8080"
    }

    log.Printf("helloworld: listening on %s", port)
    log.Fatal(http.ListenAndServe(fmt.Sprintf(":%s", port), nil))
}
Dockerfile
# Use the official Golang image to create a build artifact.
# This is based on Debian and sets the GOPATH to /go.
# https://hub.docker.com/_/golang
FROM golang:1.13 as builder
# Create and change to the app directory.
WORKDIR /app
# Retrieve application dependencies using go modules.
# Allows container builds to reuse downloaded dependencies.
COPY go.* ./
RUN go mod download
# Copy local code to the container image.
COPY invoke.go ./
# Build the binary.
# -mod=readonly ensures immutable go.mod and go.sum in container builds.
RUN CGO_ENABLED=0 GOOS=linux go build -mod=readonly -v -o server
# Use the official Alpine image for a lean production container.
# https://hub.docker.com/_/alpine
# https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
FROM alpine:3
RUN apk add --no-cache ca-certificates
# Copy the binary to the production image from the builder stage.
COPY --from=builder /app/server /server
COPY script.sh ./
# Run the web service on container startup.
CMD ["/server"]
After running the build command in Cloud Shell, as below:
gcloud builds submit --tag gcr.io/ultra-complex-282611/helloworld
I keep getting output as below:
sunny@cloudshell:~/helloworld-shell (ultra-complex-282611)$ gcloud builds submit --tag gcr.io/ultra-complex-282611/helloworld-shell/script.sh
Creating temporary tarball archive of 3 file(s) totalling 1.8 KiB before compression.
Uploading tarball of [.] to [gs://ultra-complex-282611_cloudbuild/source/1595280001.548751-6f55216d642d438a82392a7ae1688fbe.tgz]
Created [https://cloudbuild.googleapis.com/v1/projects/ultra-complex-282611/builds/ec154f13-cc1e-4082-bcc0-e47804d201cb].
Logs are available at [https://console.cloud.google.com/cloud-build/builds/ec154f13-cc1e-4082-bcc0-e47804d201cb?project=413771885505].
---------------------------------------------------------------------------- REMOTE BUILD OUTPUT ----------------------------------------------------------------------------
starting build "ec154f13-cc1e-4082-bcc0-e47804d201cb"
FETCHSOURCE
Fetching storage object: gs://ultra-complex-282611_cloudbuild/source/1595280001.548751-6f55216d642d438a82392a7ae1688fbe.tgz#1595280009304068
Copying gs://ultra-complex-282611_cloudbuild/source/1595280001.548751-6f55216d642d438a82392a7ae1688fbe.tgz#1595280009304068...
/ [1 files][ 1.1 KiB/ 1.1 KiB]
Operation completed over 1 objects/1.1 KiB.
BUILD
Already have image (with digest): gcr.io/cloud-builders/docker
***** NOTICE *****
Alternative official `docker` images, including multiple versions across
multiple platforms, are maintained by the Docker Team. For details, please
visit https://hub.docker.com/_/docker.
***** END OF NOTICE *****
Sending build context to Docker daemon 5.632kB
Step 1/11 : FROM golang:1.13 as builder
1.13: Pulling from library/golang
e9afc4f90ab0: Already exists
989e6b19a265: Already exists
af14b6c2f878: Already exists
5573c4b30949: Already exists
d4020e2aa747: Already exists
78b4a3dfc225: Pulling fs layer
2ade102f7410: Pulling fs layer
2ade102f7410: Verifying Checksum
2ade102f7410: Download complete
78b4a3dfc225: Verifying Checksum
78b4a3dfc225: Download complete
78b4a3dfc225: Pull complete
2ade102f7410: Pull complete
Digest: sha256:ffb07735793859dc30a06503eb4cbc5c9523b1477ac55155c61a2285abd4c89d
Status: Downloaded newer image for golang:1.13
---> afae231e0b45
Step 2/11 : WORKDIR /app
---> Running in 5f7bae1883f6
Removing intermediate container 5f7bae1883f6
---> 3405fccd5cd0
Step 3/11 : COPY go.* ./
COPY failed: no source files were specified
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ERROR: (gcloud.builds.submit) build ec154f13-cc1e-4082-bcc0-e47804d201cb completed with status "FAILURE"
How else am I supposed to specify the source file?

I had the same issue. I solved it by creating the go.mod file (shown in the Go tab of the documentation) in the helloworld-shell directory, which allowed the build to succeed.
module github.com/knative/docs/docs/serving/samples/hello-world/helloworld-go
go 1.13
You should have the following files:
Dockerfile go.mod invoke.go script.sh
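A minimal sketch of that fix from Cloud Shell, reusing the project ID from the question:
# Create go.mod inside helloworld-shell, then resubmit the build.
cat > go.mod <<'EOF'
module github.com/knative/docs/docs/serving/samples/hello-world/helloworld-go

go 1.13
EOF
gcloud builds submit --tag gcr.io/ultra-complex-282611/helloworld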

Related

Jenkins for C++ Project with Docker

I want to integrate my C++ project with Jenkins using Docker.
First I created a setup for Jenkins using:
Dockerfile:
FROM jenkins/jenkins:latest
USER root
RUN apt-get update && apt-get install -y git make g++
ENV JAVA_OPTS -Djenkins.install.runSetupWizard=false
ENV CASC_JENKINS_CONFIG /var/jenkins_home/casc.yaml
ENV JENKINS_ADMIN_ID=admin
ENV JENKINS_ADMIN_PASSWORD=password
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN jenkins-plugin-cli -f /usr/share/jenkins/ref/plugins.txt
COPY casc.yaml /var/jenkins_home/casc.yaml
plugins.txt
ant:latest
antisamy-markup-formatter:latest
authorize-project:latest
build-timeout:latest
cloudbees-folder:latest
configuration-as-code:latest
credentials-binding:latest
email-ext:latest
git:latest
github-branch-source:latest
gradle:latest
ldap:latest
mailer:latest
matrix-auth:latest
pam-auth:latest
pipeline-github-lib:latest
pipeline-stage-view:latest
ssh-slaves:latest
timestamper:latest
workflow-aggregator:latest
ws-cleanup:latest
casc.yaml
jenkins:
  securityRealm:
    local:
      allowsSignup: false
      users:
        - id: ${JENKINS_ADMIN_ID}
          password: ${JENKINS_ADMIN_PASSWORD}
  authorizationStrategy:
    globalMatrix:
      permissions:
        - "USER:Overall/Administer:admin"
        - "GROUP:Overall/Read:authenticated"
  remotingSecurity:
    enabled: true
security:
  queueItemAuthenticator:
    authenticators:
      - global:
          strategy: triggeringUsersAuthorizationStrategy
unclassified:
  location:
    url: http://127.0.0.1:8080/
So far, so good. I used docker build -t jenkins:jcasc . and docker run --name jenkins --rm -p 8080:8080 jenkins:jcasc and Jenkins starts normally. I entered the user name (admin) and the password and it worked.
There are, however, 2 issues that Jenkins displays:
-> It appears that your reverse proxy set up is broken.
-> Building on the built-in node can be a security issue. You should set up distributed builds. See the documentation.
I have ignored them so far, continued with my project, and installed Blue Ocean.
After installing Blue Ocean, I connected it to my GitHub repository (creating a token for this), and the pipeline was created, but it gives no result.
The log says:
Branch indexing
Connecting to https://api.github.com with no credentials, anonymous access
Jenkins-Imposed API Limiter: Current quota for Github API usage has 37 remaining (1 over budget). Next quota of 60 in 36 min. Sleeping for 4 min 43 sec.
Jenkins is attempting to evenly distribute GitHub API requests. To configure a different rate limiting strategy, such as having Jenkins restrict GitHub API requests only when near or above the GitHub rate limit, go to "GitHub API usage" under "Configure System" in the Jenkins settings.
My C++ project looks like this:
HelloWorld
├── HelloWorld.cpp
├── scripts
│   ├── Linux-Build.sh
│   └── Linux-Run.sh
└── Jenkinsfile
HelloWorld.cpp:
#include <iostream>

int main() {
    std::cout << "Hello World!" << std::endl;
    return 0;
}
Linux-Build.sh
#!/bin/bash
vendor/bin/premake/premake5 gmake2
make
Linux-Run.sh
#!/bin/bash
bin/Debug/HelloWorld
Jenkinsfile
pipeline {
    agent any

    stages {
        stage('Build') {
            steps {
                sh 'echo "Building..."'
                sh 'chmod +x scripts/Linux-Build.sh'
                sh 'scripts/Linux-Build.sh'
                archiveArtifacts artifacts: 'bin/Debug/*', fingerprint: true
            }
        }
        stage('Test') {
            steps {
                sh 'echo "Running..."'
                sh 'chmod +x scripts/Linux-Run.sh'
                sh 'scripts/Linux-Run.sh'
            }
        }
    }
}
So, where might the problem be? I don't know if it is in my C++ code or in the Docker configuration.
I am also a big fan of CMake rather than make, and I would prefer to use CMake here, but I am not sure how to integrate it (see the sketch after the edits below).
EDIT:
In the tutorial that I followed there was a "vendor" directory but the instructor didn't show what was in that directory.
EDIT:
The build took 4 minutes, but it eventually did something, and the error is:
+ scripts/Linux-Build.sh
scripts/Linux-Build.sh: line 3: vendor/bin/premake/premake5: No such file or directory
make: *** No targets specified and no makefile found. Stop.
script returned exit code 2
So, the problem is in the C++ project, not Docker.
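If switching to CMake, a minimal sketch of a replacement Linux-Build.sh (assuming a hypothetical CMakeLists.txt at the project root that declares the HelloWorld executable, and CMake >= 3.13) could look like:
#!/bin/bash
# Hypothetical CMake-based build, replacing the missing premake binary.
set -e
cmake -S . -B build -DCMAKE_BUILD_TYPE=Debug
cmake --build build
# The binary location then depends on CMakeLists.txt, e.g. build/HelloWorld.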

Running docker build with bazel genrule and a dockerfile

I have a monorepo with multiple languages and artifacts and I want to transition to Bazel.
We want to build docker images using our existing Dockerfiles, using a genrule - to avoid translating all dockerfiles to docker-rules (at least at this point).
We know it's not Bazel's best practice, but we assumed it can allow us easy transition.
I'm testing with this Dockerfile
FROM alpine:3.8
ENTRYPOINT ["echo"]
CMD ["Hello Bazel!"]
I tried following this post, but when running the docker build command (even outside of Bazel) I'm getting this -
> tar -czh . | docker build -t hello-bazel -
[+] Building 0.1s (2/2) FINISHED
=> [internal] load remote build context 0.0s
=> ERROR copy /context / 0.1s
------
> copy /context /:
------
failed to solve with frontend dockerfile.v0: failed to read dockerfile: Error processing tar file(gzip: invalid header):
I tried using a genrule with the basic docker build command -
genrule(
    name = "gc-hello-bazel",
    srcs = ["Dockerfile"],
    outs = ["imagesha.txt"],
    cmd = "docker build -t hello-bazel -f $(location Dockerfile) . > $@",
    tools = ["Dockerfile"],
)
But the build fails with
failed to solve with frontend dockerfile.v0: failed to read dockerfile: open Dockerfile: no such file or directory
In case it matters, this is my directory structure:
├── WORKSPACE
├── <some-root-directories>
└── <a-root-directory>
    └── <subdir>
        └── <subsubdir1>
            └── my_docker
                ├── Dockerfile
                └── BUILD.bazel
What am I doing wrong?
TL;DR: I'm looking for a working example of docker build with Dockerfile and Bazel's Genrule
This is not an exact answer to your question, as it uses a rule rather than a genrule, but I think it should solve your underlying problem.
Under bazelbuild/rules_docker there is a (non-hermetic) rule for building Docker images from a Dockerfile.
To use it, you will need to add the following to your WORKSPACE file:
# file: //WORKSPACE
http_archive(
    name = "io_bazel_rules_docker",
    sha256 = "b1e80761a8a8243d03ebca8845e9cc1ba6c82ce7c5179ce2b295cd36f7e394bf",
    urls = ["https://github.com/bazelbuild/rules_docker/releases/download/v0.25.0/rules_docker-v0.25.0.tar.gz"],
)
Then in your build file you can add the following:
# file: BUILD.bazel
load("@io_bazel_rules_docker//contrib:dockerfile_build.bzl", "dockerfile_image")

dockerfile_image(
    name = "my_non_hermetic_image",
    dockerfile = ":Dockerfile",
)
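Assuming the BUILD.bazel above lives in the my_docker package from the question, the image can then be built with something like (hypothetical package path, adjust to yours):
bazel build //<a-root-directory>/<subdir>/<subsubdir1>/my_docker:my_non_hermetic_image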

Why does google cloud build run differently for these two commands?

We run these two commands (the first one is async and the other runs synchronously)
#async BUT does something funky and doesn't run the Dockerfile image as-is
gcloud alpha builds triggers run staging-deploy --branch master
# sync BUT runs the image the way it's supposed to run!!!
gcloud builds submit --config cloudbuild.yaml
Both are using our cloudbuild.yaml:
steps:
  - name: gcr.io/$PROJECT_ID/continuous-deploy
    args: ['${_SERVICE}', '${_DOWNLOAD_URL}']
    timeout: 1000s
substitutions:
  _SERVICE: none
  _DOWNLOAD_URL: none
timeout: 1100s
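For reference, with the second command these substitutions can be overridden at submit time; the values below are made up:
# Hypothetical _SERVICE and _DOWNLOAD_URL values.
gcloud builds submit --config cloudbuild.yaml --substitutions=_SERVICE=myservice,_DOWNLOAD_URL=https://example.com/build.tar.gz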
Our Dockerfile is very very simple
FROM gcr.io/google.com/cloudsdktool/cloud-sdk:alpine
RUN mkdir -p ./monobuild
COPY . ./monobuild/
WORKDIR "/monobuild"
#NOTE: This file in google cloud build trigger MUST be in root of monorepo BUT I don't know why
#NOTE: This command receives any arguments to docker
#ie. for "docker run {image} {args}", it receives the args
ENTRYPOINT ["./downloadAndExtract.sh"]
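For example (hypothetical service name and URL), anything after the image name is forwarded to downloadAndExtract.sh:
# docker run {image} {args} -> ./downloadAndExtract.sh {args}
docker run gcr.io/my-project/continuous-deploy myservice https://example.com/build.tar.gz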
So, when I run the SECOND command, it uses the Docker image exactly as the Dockerfile specifies. When I run the first command, it ignores all my Dockerfile setup and tries to run scripts from my git repo (which is very frustrating and not what I want).
We HAD this directory structure:
gitroot
├── stagingDeploy
│   ├── Dockerfile
│   ├── deployStaging.sh  # part of Dockerfile
│   └── cloudbuild.yaml
└── prodDeploy
    ├── Dockerfile
    ├── prodDeploy.sh  # part of Dockerfile
    └── cloudbuild.yaml
Of course, only the second command works with this directory structure. The first command CANNOT find deployStaging.sh until we ln -s stagingDeploy/deployStaging.sh from our git repo root, and since we have around 5 deploy directories, our git repo root is now fully polluted.
This is, to say the least, very frustrating, and we are not sure how to clean it up so that prodDeploy contains all the prod deploy scripts, stagingDeploy the staging ones, and the root is rid of all these files.
As it stands, we have a cluttered git repo root with a whole slew of files from various builds (sometimes conflicting by accident, as files occasionally get the same names).
EDIT: There is not much to share on the trigger configuration, as each trigger just points to the yaml file.
thanks,
Dean

Google Cloud Build Trigger failing with "ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1"

I am trying to set up continuous deployment of my Golang backend using the Google documentation, but when my trigger fires, it fails with the following error:
starting build "eba3ce39-caad-43f0-a255-0a3cacec4913"
FETCHSOURCE
Initialized empty Git repository in /workspace/.git/
From https://source.developers.google.com/p/my-project/r/github_myusername_myproject.com
* branch 660796f575bae6860d6f96df60cfd631a730c3ae -> FETCH_HEAD
HEAD is now at 660796f cloudbuild.yaml
BUILD
Starting Step #0
Step #0: Already have image (with digest): gcr.io/cloud-builders/docker
Step #0: unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /workspace/Dockerfile: no such file or directory
Finished Step #0
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
My project file structure looks like:
project
├── frontend
└── backend
    ├── main.go
    ├── cloudbuild.yaml
    └── Dockerfile
where my cloudbuild.yaml looks like:
steps:
  # Build the container image
  - name: "gcr.io/cloud-builders/docker"
    args:
      [
        "build",
        "-t",
        "gcr.io/my-project/github.com/username/project.com:$COMMIT_SHA",
        ".",
      ]
  # Push the image to Container Registry
  - name: "gcr.io/cloud-builders/docker"
    args:
      [
        "push",
        "gcr.io/my-project/github.com/username/project.com:$COMMIT_SHA",
      ]
  # Deploy image to Cloud Run
  - name: "gcr.io/cloud-builders/gcloud"
    args:
      - "run"
      - "deploy"
      - "[SERVICE_NAME]"
      - "--image"
      - "gcr.io/my-project/github.com/username/project.com:$COMMIT_SHA"
      - "--region"
      - "us-central1"
      - "--platform"
      - "managed"
images:
  - gcr.io/my-project/github.com/username/project.com
and my Dockerfile looks like
# Use the official Golang image to create a build artifact.
# This is based on Debian and sets the GOPATH to /go.
# https://hub.docker.com/_/golang
FROM golang:1.13 as builder
# Create and change to the app directory.
WORKDIR /app
# Retrieve application dependencies.
# This allows the container build to reuse cached dependencies.
COPY go.* ./
RUN go mod download
# Copy local code to the container image.
COPY . ./
# Build the binary.
RUN CGO_ENABLED=0 GOOS=linux go build -mod=readonly -v -o server
# Use the official Alpine image for a lean production container.
# https://hub.docker.com/_/alpine
# https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
FROM alpine:3
RUN apk add --no-cache ca-certificates
# Copy the binary to the production image from the builder stage.
COPY --from=builder /app/server /server
# Run the web service on container startup.
CMD ["/server"]
I got the Dockerfile from Quickstart: Build and Deploy.
When you push to your GitHub repo, Cloud Build triggers and looks for the cloudbuild.yaml file. You can specify the cloudbuild.yaml location when you create the build trigger, by editing the Configuration section and choosing Cloud Build configuration file (yaml or json), where you set the cloudbuild.yaml location. In your case, just make it backend/cloudbuild.yaml.
Now, that's not enough, because when the build starts, a docker build command is initiated to build your image, as per your first step. However, your Docker build context is ., which is wrong here: your whole repo is copied to GCP, and the build context is relative to the repo root, not to where the cloudbuild.yaml is.
To solve this issue, just change the Docker build context to ./backend. Your final cloudbuild.yaml should be something like:
steps:
  # Build the container image
  - name: "gcr.io/cloud-builders/docker"
    args:
      [
        "build",
        "-t",
        "gcr.io/my-project/github.com/username/project.com:$COMMIT_SHA",
        "./backend",
      ]
  # Rest of the steps ...
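To sanity-check that context locally, the same build can be run from the repo root (arbitrary tag name):
# Mirrors the corrected step: the context is ./backend, which contains the Dockerfile.
docker build -t backend-test ./backend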
The Cloud Build trigger is currently pointing to /project/ while your directory structure is as follows:
project
├── frontend
└── backend
    ├── main.go
    ├── cloudbuild.yaml
    └── Dockerfile
When you execute the trigger, the repository is copied to /workspace/, and since the Dockerfile is not at its root, the build cannot find it there.
You can move everything to the same working directory.
.
├── main.go
├── cloudbuild.yaml
└── Dockerfile
If you would like to keep your current directory structure, your Cloud Build trigger will need to point to /project/backend/ instead. Note that you can check your directory structure using the ls -la Linux command.

Google Cloud Container Build trigger crashes during gradle build

I was trying to set up a build trigger for a Kotlin app that is built using Gradle. For that, I put together the following Dockerfile:
FROM gradle:jdk8 as builder
WORKDIR /home/gradle/project
COPY . .
WORKDIR ./Kuroji-Eventrouter-Server
RUN gradle shadowJar
FROM openjdk:8-jre-alpine
WORKDIR /app
COPY --from=builder /home/gradle/project/Kuroji-Eventrouter-Server/build/libs/kuroji-eventrouter-server-*-all.jar kuroji-eventrouter-server.jar
ENTRYPOINT ["java", "-jar", "kuroji-eventrouter-server.jar"]
That file works on my machine with docker build, and the app starts normally. On Google Container Registry, however, it crashes during the RUN gradle shadowJar task with a Gradle error:
Step 5/9 : RUN gradle shadowJar
---> Running in ddd190fc2323
Starting a Gradle Daemon (subsequent builds will be faster)
FAILURE: Build failed with an exception.

* What went wrong:
Could not create service of type ScriptPluginFactory using BuildScopeServices.createScriptPluginFactory().
> Could not create service of type CrossBuildFileHashCache using BuildSessionScopeServices.createCrossBuildFileHashCache().

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

BUILD FAILED in 3s
The command '/bin/sh -c gradle shadowJar' returned a non-zero code: 1
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: exit status 1
I tried building the image on Docker Hub and the same thing happened: https://hub.docker.com/r/usbpc/kuroji-eventrouter-server/builds/bnknnpqowwabdy82ydxiypc/
This is very confusing to me, as I thought containers should be able to run anywhere and not depend on the environment. What can I do to make Google build my container?
The problem was a file permission issue. Using the --stacktrace option, I found that the Gradle process didn't have permission to create a folder inside the sources.
The solution I would have liked is the --chown=gradle:gradle option on the COPY instruction; unfortunately, this is not supported on Google Cloud Build yet.
So the solution is to add USER root before executing the Gradle build.
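A sketch of the fixed builder stage, applying that workaround to the Dockerfile from the question:
FROM gradle:jdk8 as builder
# Run the build as root so Gradle can create directories inside the copied sources.
USER root
WORKDIR /home/gradle/project
COPY . .
WORKDIR ./Kuroji-Eventrouter-Server
RUN gradle shadowJar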