quick start in google cloud build - google-cloud-platform

I ran the quick start
https://cloud.google.com/cloud-build/docs/quickstart-build
and in the section "View the build details", I don't see the output of the quickstart.sh file anywhere. Where are the logs from actually running the quickstart.sh file?
Without any output from quickstart.sh, I am unsure how to log what is going on inside Docker so I can fix broken builds that run in Docker.

In this official tutorial, a docker container is built via Cloud Build, with only one executable bash script, which displays the current date:
#!/bin/sh
echo "Hello, world! The time is $(date)."
Here is the Dockerfile:
FROM alpine
COPY quickstart.sh /
CMD ["/quickstart.sh"]
This means quickstart.sh is never executed during the build phase, only when the container itself is run.
To see the output of the script, you should run the container (either locally on your computer, or via Cloud Shell):
$ docker run gcr.io/[PROJECT-ID]/quickstart-image:latest
Hello, world! The time is Sat Jun 13 05:10:41 UTC 2020.
If you want to execute a script during the build phase of the container, you should use the RUN instruction.
For example, let's create a second executable script called build.sh in the same directory:
#!/bin/sh
echo "Hello, build at $(date)."
Then, reference it in the Dockerfile:
FROM alpine
COPY quickstart.sh /
COPY build.sh /
RUN /build.sh
CMD ["/quickstart.sh"]
Now, we can build a new version of the container image:
gcloud builds submit --tag gcr.io/[PROJECT-ID]/quickstart-image
This time, the output of build.sh can be seen in the build details log in the Cloud Build console.
Of course, this is just a simple example to give you a quick answer. You may want to check all the other options available for writing a correct and clean Dockerfile, but that is not really specific to Cloud Build.
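As an aside, if you want the runtime output of quickstart.sh to show up in the Cloud Build log itself, one option is to run the freshly built image as an extra build step. Here is a minimal sketch of such a cloudbuild.yaml (the step layout is an assumption, not part of the official quickstart):
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/quickstart-image', '.']
# Run the image we just built; its stdout (the "Hello, world!" line)
# will appear in this step's section of the build log.
- name: 'gcr.io/cloud-builders/docker'
  args: ['run', 'gcr.io/$PROJECT_ID/quickstart-image']
images: ['gcr.io/$PROJECT_ID/quickstart-image']
You would then submit with gcloud builds submit --config cloudbuild.yaml instead of the --tag form.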

Related

Why does google cloud build run differently for these two commands?

We run these two commands (the first one is async and the other runs synchronously):
#async BUT does something funky and doesn't run the Dockerfile image as-is
gcloud alpha builds triggers run staging-deploy --branch master
# sync BUT runs the image the way it's supposed to run!!!
gcloud builds submit --config cloudbuild.yaml
Both are using our cloudbuild.yaml:
steps:
- name: gcr.io/$PROJECT_ID/continuous-deploy
  args: ['${_SERVICE}', '${_DOWNLOAD_URL}']
  timeout: 1000s
substitutions:
  _SERVICE: none
  _DOWNLOAD_URL: none
timeout: 1100s
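For reference, the _SERVICE and _DOWNLOAD_URL substitutions declared above (defaulting to none) can be overridden on the command line when submitting manually; the values below are just placeholders:
gcloud builds submit --config cloudbuild.yaml \
  --substitutions=_SERVICE=myservice,_DOWNLOAD_URL=https://example.com/artifact.zip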
Our Dockerfile is very, very simple:
FROM gcr.io/google.com/cloudsdktool/cloud-sdk:alpine
RUN mkdir -p ./monobuild
COPY . ./monobuild/
WORKDIR "/monobuild"
#NOTE: This file in google cloud build trigger MUST be in root of monorepo BUT I don't know why
#NOTE: This command receives any arguments to docker
#ie. for "docker run {image} {args}", it receives the args
ENTRYPOINT ["./downloadAndExtract.sh"]
Sooo, when I run the SECOND command, it completely uses the docker image, obeying the Dockerfile. When I run the first command, it ignores all my Dockerfile stuff and tries to run scripts in my git repo (which is very frustrating and not what I want).
We HAD this directory structure:
- gitroot
  - stagingDeploy
    - Dockerfile
    - deployStaging.sh  # part of Dockerfile
    - cloudbuild.yaml
  - prodDeploy
    - Dockerfile
    - prodDeploy.sh  # part of Dockerfile
    - cloudbuild.yaml
Of course, only the second command works with this directory structure. The first command CANNOT find deployStaging.sh until we ln -s stagingDeploy/deployStaging.sh from our git repo root, and since we have around 5 deploy directories, our git repo root is now fully polluted.
It is, to say the least, very frustrating, and we are not sure how to clean this up so that prodDeploy contains all the prod deploy scripts, stagingDeploy the staging ones, and the root is free of all these files.
Of course, we now have a cluttered git repo directory structure with a whole slew of files in the root directory from various builds (sometimes conflicting by accident, as files from different builds occasionally get the same names).
EDIT: There is not really much to share on the configuration of the triggers, as each one just points to its yaml file.
thanks,
Dean
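One way to keep each environment's scripts in their own directory without symlinking into the repo root is to give each deploy directory its own trigger pointing at its own config file. A hedged sketch, assuming a GitHub-hosted repo (the trigger, repo, and owner names here are placeholders):
# Hypothetical per-directory trigger: it points at stagingDeploy's own
# cloudbuild.yaml instead of one sitting at the repo root.
gcloud beta builds triggers create github \
  --name=staging-deploy \
  --repo-name=monorepo \
  --repo-owner=example-org \
  --branch-pattern='^master$' \
  --build-config=stagingDeploy/cloudbuild.yaml
Inside that config, a build step can also set dir: stagingDeploy so relative paths resolve inside that directory rather than at the repo root.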

cloud run build and building docker images while building a docker image

I recently found out that a Google Cloud Build happens while the docker image itself is being built (not, as I thought, that it would build my image and then execute my image to do all the building). That was covered in this post:
quick start in google cloud build
soooo, I have a Dockerfile that is real simple now, like so:
FROM gcr.io/google.com/cloudsdktool/cloud-sdk:alpine
RUN mkdir -p ./monobuild
COPY . ./monobuild/
WORKDIR "/monobuild"
RUN ["/bin/bash", "./downloadAndExtract.sh"]
and I have a single downloadAndExtract.sh that downloads any artifacts (zip files) built by the last monobuild run (only modified servers are built, OR servers that depend on changes in the last CI build, e.g. when downstream libraries change). The first line just lists the URLs of the zip files I need in a file...
curl "https://circleci.com/api/v1.1/project/butbucket/Twilio/orderly/latest/artifacts?circle-token=$token" | grep -o 'https://[^"]*zip' > artifacts.txt
while read -r url; do
  echo "Downloading url=$url"
  zipFile=${url/*\//}          # strip everything up to the last slash
  projectName=${zipFile/.zip/} # strip the .zip extension
  echo "Zip filename=$zipFile"
  echo "projectName=$projectName"
  wget "$url?circle-token=$token"
  mv "$zipFile?circle-token=$token" "$zipFile"
  unzip "$zipFile"
  rm "$zipFile"
  cd "$projectName"
  ./deployGcloud.sh
  cd ..
done <artifacts.txt
echo "DONE"
Of course, the deployGcloud.sh script has these commands in it, soooo this means we are building docker images WHILE building the Google Cloud Build docker image (which still seems funny to me)....
docker build . --tag gcr.io/twix/authservice
docker push gcr.io/twix/authservice
gcloud run deploy staging-admin --region us-west1 --image gcr.io/twix/authservice --platform managed
BOTH docker commands seem to be erroring out with this:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
while the gcloud command seems to be very happy doing a deploy, but it just uses a previous image we deployed at that location.
So, how to get around this error so my build will work and build N images and deploy them all to cloud run?
Oh, I finally figured it out. Google has this weird pattern in its cloudbuild.yaml files: use this docker image to run a curl command, then on the next step use this OTHER docker image to run some other command, and so on, using 5 different images. This is all very confusing, so instead I realized I had to figure out how to create my ONE docker image and just run it as a command. I modified the Dockerfile above to use an ENTRYPOINT instead, then docker build and docker push my image to Google. Then I have a cloudbuild.yaml with a single step that runs that image.
In this way, we can tweak our builds easily within our docker image, which is simply run. This is way simpler than the complex model Google had set up, as it becomes: do your build inside the container however you like, and install whatever tools you need in the one docker image.
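For example, a single-step cloudbuild.yaml along those lines might look like the sketch below (the image name is a placeholder; all the build and deploy logic lives inside that one image):
steps:
- name: gcr.io/twix/monobuild-deployer  # hypothetical image with downloadAndExtract.sh as its ENTRYPOINT
  args: ['${_SERVICE}', '${_DOWNLOAD_URL}']
timeout: 1200s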
i.e. beware the Google quick starts, which honestly, IMHO, really overcomplicate things compared to CircleCI and other systems (of course, that is just an opinion, and to each their own).

Google Cloud Container Build trigger crashes during gradle build

I was trying to set up a build trigger for a Kotlin app that is built using Gradle. For that, I put together the following Dockerfile:
FROM gradle:jdk8 as builder
WORKDIR /home/gradle/project
COPY . .
WORKDIR ./Kuroji-Eventrouter-Server
RUN gradle shadowJar
FROM openjdk:8-jre-alpine
WORKDIR /app
COPY --from=builder /home/gradle/project/Kuroji-Eventrouter-Server/build/libs/kuroji-eventrouter-server-*-all.jar kuroji-eventrouter-server.jar
ENTRYPOINT ["java", "-jar", "kuroji-eventrouter-server.jar"]
That file works on my machine with docker build, and it starts normally. On Google Cloud Container Builder, however, the RUN gradle shadowJar step crashes with a Gradle error:
Step 5/9 : RUN gradle shadowJar
 ---> Running in ddd190fc2323
Starting a Gradle Daemon (subsequent builds will be faster)

FAILURE: Build failed with an exception.

* What went wrong:
Could not create service of type ScriptPluginFactory using BuildScopeServices.createScriptPluginFactory().
> Could not create service of type CrossBuildFileHashCache using BuildSessionScopeServices.createCrossBuildFileHashCache().

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

BUILD FAILED in 3s
The command '/bin/sh -c gradle shadowJar' returned a non-zero code: 1
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: exit status 1
I tried building the image on Docker Hub and the same thing happened: https://hub.docker.com/r/usbpc/kuroji-eventrouter-server/builds/bnknnpqowwabdy82ydxiypc/
This is very confusing to me, as I thought containers should be able to run anywhere and not depend on the environment. What can I do to make Google build my container?
The problem was a file permission problem. Using the --stacktrace option, I found that the Gradle process didn't have permission to create a folder inside the sources.
The solution I would have liked to use is the --chown=gradle:gradle option on the COPY instruction; unfortunately, this is not supported in Google Cloud yet.
So the solution is to add USER root before executing the Gradle build.
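For clarity, a minimal sketch of the builder stage with that workaround applied (same paths as the Dockerfile above):
FROM gradle:jdk8 as builder
# Run the build as root so Gradle can create its cache and output
# directories inside the copied sources (works around the missing
# --chown support on COPY).
USER root
WORKDIR /home/gradle/project
COPY . .
WORKDIR ./Kuroji-Eventrouter-Server
RUN gradle shadowJar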

Extract unit test results from multi-stage Docker build (.NET Core 2.0)

I am building a .NET Core 2.0 web API and I am creating a Docker image. I am quite new to Docker so apologies if the question has been answered before.
I have the following Dockerfile for creating the image. In particular, I run the unit tests during the build process, and the results are output to ./test/test_results.xml (in a temporary container created during the build, I guess). My question is: how do I access these test results after the build has finished?
FROM microsoft/aspnetcore-build:2.0 AS build-env
WORKDIR /app
# Copy main csproj file for DataService
COPY src/DataService.csproj ./src/
RUN dotnet restore ./src/DataService.csproj
# Copy test csproj file for DataService
COPY test/DataService.Tests.csproj ./test/
RUN dotnet restore ./test/DataService.Tests.csproj
# Copy everything else (excluding elements in dockerignore)
COPY . ./
# Run the unit tests
RUN dotnet test --results-directory ./ --logger "trx;LogFileName=test_results.xml" ./test/DataService.Tests.csproj
# Publish the app to the out directory
RUN dotnet publish ./src/DataService.csproj -c Release -o out
# Build the runtime image
FROM microsoft/aspnetcore:2.0
WORKDIR /app
EXPOSE 5001
COPY --from=build-env /app/src/out .
# Copy test results to the final image as well??
# COPY --from=build-env /app/test/test_results.xml .
ENTRYPOINT ["dotnet", "DataService.dll"]
One approach that I have taken is to uncomment the line COPY --from=build-env /app/test/test_results.xml .. This puts test_results.xml in my final image. I can then extract the results and remove test_results.xml from the final image using the following PowerShell script:
$id=$(docker create dataservice)
docker cp ${id}:app/test_results.xml ./test/test_results.xml
docker start $id
docker exec $id rm -rf /app/test_results.xml
docker commit $id dataservice
docker rm -vf $id
This, however, seems ugly, and I am wondering whether there is a cleaner way to do it.
I was hoping that there was a way to mount a volume during docker build but it does not appear that this is going to be supported in the official Docker.
I am looking now at creating a separate image, solely for the unit tests.
Not sure if there is a recommended way of achieving what I want.
Thanks for your question - I needed to solve the same thing.
I added a separate container stage based on the results of the build. The tests and their output are all handled in there, so they never reach the final container. So build-env is used to build, then an intermediate test stage is based on that build-env image, and the final stage is based on the runtime image with the results of build-env copied in.
# ---- Test ----
# run tests and capture results for later use. This uses the results of the build stage
FROM build AS test
#Use label so we can later obtain this container from the multi-stage build
LABEL test=true
WORKDIR /
#Store test results in a file that we will later extract
RUN dotnet test --results-directory ../../TestResults/ --logger "trx;LogFileName=test_results.xml" "./src/ProjectNameTests/ProjectNameTests.csproj"
I added a shell script as a next step that then tags the image as projectname-test.
#!/bin/bash
# Find the intermediate test image by its label and give it a stable tag
id=$(docker images --filter "label=test=true" -q)
docker tag $id projectname-test:latest
After that, I basically do what you do, which is use docker cp to get the file out. The difference is that my test results were never in the final image, so I don't touch the final image.
Overall, I think the correct way to handle tests is probably to create a test image (based on the build image), run it with a mounted volume for the test results, and have it run the unit tests when the container starts. Having a proper image/container would also allow you to run integration tests, etc. This older article details a similar approach: https://blogs.infosupport.com/build-deploy-test-aspnetcore-docker-linux-tfs2015/
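To make that idea concrete, here is a hedged sketch (the stage, image, and path names follow the question's Dockerfile, but the exact layout is an assumption). The test stage gets an ENTRYPOINT instead of a RUN, so the tests execute when the container starts rather than during the build:
FROM build-env AS test
WORKDIR /app
# Run the tests at container start, writing results to a mount point
ENTRYPOINT dotnet test --results-directory /TestResults --logger "trx;LogFileName=test_results.xml" ./test/DataService.Tests.csproj
Then build up to that stage and run it with a host directory mounted for the results:
docker build --target test -t dataservice-test .
docker run --rm -v "$(pwd)/TestResults:/TestResults" dataservice-test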
This is an old question, but I ended up here looking for the same thing, so I will drop my tuppence worth in here for posterity!
Nowadays you can set DOCKER_BUILDKIT=1 to make Docker use BuildKit to build your images, which is quicker, caches better, and has an --output option that solves this problem for you. I've got a Golang-based example below, but this should work equally well for pretty much anything.
# syntax=docker/dockerfile:1.2
FROM golang:1.17 as deps
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY command ./command
COPY internal ./internal
# Tests
FROM deps as test
RUN --mount=type=cache,target=/root/.cache go test -v ./... 2>&1 | tee unit.out
FROM scratch as test-output
COPY --from=test /src/unit.out /unit.out
# Build
FROM deps as build
RUN go build ./...  # your build steps go here
This Dockerfile has a bunch of stages:
- copy across the requirements spec and install dependencies (for caching)
- run your tests (re-using the deps stage to avoid repeating ourselves)
- copy your test results to a scratch image
- build your image / binary / whatever you'd normally do, from deps
Now if you set DOCKER_BUILDKIT=1 somewhere and run:
docker build .
then docker builds your code / image but DOES NOT RUN THE TESTS! Because your build stage isn't linked to the test stages at all, it bypasses them entirely. However, you can now use the --target option to select the test-output build stage and the --output option to tell it where on your local disk to copy the result:
docker build --target test-output --output results .
Docker will then build the dependencies (or re-use the cache from the image build if you did that first), run your tests and copy the contents of your scratch image (i.e. your test report) into the results directory. Job done. :)
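If you'd rather not export the variable globally, it can be set just for the one invocation:
DOCKER_BUILDKIT=1 docker build --target test-output --output results .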
Edit: here is an article using this approach with a .NET app that explains the whole thing a bit better: https://kevsoft.net/2021/08/09/exporting-unit-test-results-from-a-multi-stage-docker-build.html

Cannot chmod file on Openshift online v3 : Operation not permitted

I am migrating a Django application from Openshift v2 to v3 (In case you don't know, RedHat is shutting down v2 on September 30th, see: https://blog.openshift.com/migrate-to-v3-v2-eol/)
So, I am following this blog post to help me: https://blog.openshift.com/migrating-django-applications-openshift-3/ . I am new to all the Docker / Kubernetes concepts the new version is built upon.
I was able to make some progress : I managed to get a successful build of my app. Yet it crashes at deployment time:
---> Running application from script (app.sh) ...
/usr/libexec/s2i/run: line 42: /opt/app-root/src/app.sh: Permission denied
Indeed, app.sh has lost its x permission. I log into the failing container in debug mode and see it:
> oc debug dc/<my app>
> (app-root)sh-4.2$ ls -l /opt/app-root/src/app.sh
-rw-rw-r--. 1 default root 127 Sep 6 21:20 /opt/app-root/src/app.sh
The blog post states "Ensure that the app.sh file is executable by running chmod +x app.sh.", which I did on my local repo. In any case, I want to do it again directly in the pod, but it doesn't work:
(app-root)sh-4.2$ chmod +x /opt/app-root/src/app.sh
chmod: changing permissions of ‘/opt/app-root/src/app.sh’: Operation not permitted
So, how can I set the x permission on app.sh? Thank you.
Without looking into more details: any S2I builder image will gladly use a custom-supplied run script to start the application in an alternative way.
Create .s2i/bin/ (mind the dot) in your source code directory, place the run script into it and rebuild the app in OpenShift - it will automatically use your custom run script upon deployment.
This is the preferred way of starting applications using custom commands in OpenShift.
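A minimal sketch of such a custom run script, assuming app.sh stays at the path shown above:
#!/bin/bash
# .s2i/bin/run - picked up automatically by the S2I builder at deployment.
# Launching via bash sidesteps the lost executable bit on app.sh.
exec /bin/bash /opt/app-root/src/app.sh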
Regarding your immediate problem, there is a very simple reason why you cannot change the permissions of the script: you were trying to modify the permissions in the deployed pod, not in the builder pod. Deployed pods run using different UIDs, usually somewhere in the range of 100000000, which definitely do not match the file ownership generated by the build. Hence: permission denied.
The root cause of your problem (app.sh losing executable permissions) must be in the way the build process installs those files, and indeed looking at the /usr/libexec/s2i/assemble script in the base image does seem to reveal the culprit. The last two lines are:
# set permissions for any installed artifacts
fix-permissions /opt/app-root
If you wanted to change this part of the build instead of using a custom run script, I suggest you then create .s2i/bin/assemble in your project's source code and make it look sort of like this:
#!/bin/bash
echo "Running stock build:"
${STI_SCRIPTS_PATH}/assemble
echo "Fixing the mess:"
chmod 755 /opt/app-root/src/app.sh
This will fix whatever the stock build process does to file permissions, and will do it using the same UID as the rest of the build, so file ownership shouldn't be an issue.
As I stumbled upon this issue myself, I've found a way to resolve it.
You have to make app.sh executable and push it to your repo as such.
If git does not track this modification, as it didn't for me, you have to use git update-index --chmod=+x app.sh for it to work.
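In other words, roughly:
# Mark the script executable locally and make sure git records the bit
chmod +x app.sh
git update-index --chmod=+x app.sh
git commit -m "Make app.sh executable"
git push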