Google Container Builder - image versioning best practice

What would be the general guidance on versioning when building a docker image using Google Cloud Builder?
I am specifically interested in the situation where the build is triggered by a code commit. Should the image tag reflect the version (as recommended by the Kubernetes API)? If so, how can I achieve that?

In your configuration, you can specify the image as (for example) gcr.io/$PROJECT_ID/my-image:${REVISION_ID}, and it will have a tag corresponding to the revision of the commit that triggered the build.
For documentation on the substitution syntax, see https://cloud.google.com/container-builder/docs/api/build-requests#substitutions
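As a concrete sketch, a minimal cloudbuild.yaml using the built-in substitution might look like this (my-image is a placeholder name):

```yaml
steps:
  # Build the image, tagging it with the revision of the commit that triggered the build
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-image:${REVISION_ID}', '.']
# Push the tagged image to Container Registry after the build succeeds
images:
  - 'gcr.io/$PROJECT_ID/my-image:${REVISION_ID}'
```

Note that ${REVISION_ID} is only populated for builds started by a trigger; for manually submitted builds you would have to pass a user-defined substitution instead.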

Related

Can you cache community builders?

I need to use Helm in my build pipeline. As described in the docs I downloaded the source of the Helm community builder and pushed the built image to GCR.
Now if I use the builder in my pipeline, it takes an absurd amount of time for Cloud Build to download the builder image from GCR and run the Helm commands.
Is there any way I could speed this process up? Can I somehow cache intermediate layers of a builder image?
With Cloud Build community images, there's a (disappointing) requirement that you must build the image yourself before you can use it.
If you're building the community image every time you use it, it's going to take more time than necessary.
IIUC, your caching solution is to decouple building the community image (and storing it in, e.g., a Google Container Registry to which you have access) from using that image in the Cloud Builds that run the Helm builder.
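As a sketch of that decoupled approach: build and push the Helm builder once (e.g. with gcloud builds submit from the helm/ directory of the cloud-builders-community repo), then have every pipeline reference the prebuilt image instead of rebuilding it. The release name, chart path, zone, and cluster below are placeholders:

```yaml
steps:
  # Pull the prebuilt Helm builder from your own registry; no rebuild needed
  - name: 'gcr.io/$PROJECT_ID/helm'
    args: ['upgrade', '--install', 'my-release', './chart']
    env:
      # The community Helm builder uses these to locate your GKE cluster
      - 'CLOUDSDK_COMPUTE_ZONE=us-central1-b'
      - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'
```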

SonarQube integration with GCP Cloud Build

I have a task to use SonarQube.
My builds are done using Google Cloud Build. How can I integrate SonarQube with Google Cloud Build?
Thanks for your help
You can use custom builders. After all, each build step is a container image:
Cloud builders are container images with common languages and tools installed in them. You can configure Cloud Build to run a specific command within the context of these builders.
The GCP documentation provides a guide on how to create a custom builder. Note that it's intended to be general and doesn't include any specific functionality that you might require. Nevertheless, it's a great starting point for understanding how custom builders work and for creating your own.
Aside from this approach, there is a community builder for SonarQube that you can use as a reference, or that might even suit your needs as-is.
Edit:
In case your question is about code analysis with SonarQube: the community builder is still relevant, as it allows you to run static code analysis for your project from sonarcloud.io.
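As a hedged sketch, a build step using that community builder might look like the following, assuming you have already built and pushed the sonar-scanner image to your own registry; the host URL, token substitution, and project key are placeholders:

```yaml
steps:
  # Run static analysis; properties are passed as standard sonar-scanner flags
  - name: 'gcr.io/$PROJECT_ID/sonar-scanner'
    args:
      - '-Dsonar.host.url=https://sonarcloud.io'
      - '-Dsonar.login=$_SONAR_TOKEN'   # user-defined substitution holding the token
      - '-Dsonar.projectKey=my-project'
      - '-Dsonar.sources=.'
```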

Copy a GCR image from one project to another

I aim to copy a GCR image from one project to another as soon as the image lands in the container registry of the first project. I am aware of the gcloud container images add-tag command, but I'm looking for a more automated option. Also, the second project, where the image has to be copied, is protected by VPC-SC. Any leads will be appreciated...
I understand that you are looking for the best way to mirror GCR images between two projects. Currently, you can follow the workaround in this document to copy the container images for your use case. At the moment, the only way to move an image between two registries is by pulling from one and pushing to the other, if you have the right permissions. There is a tool on GitHub, gcrane, that can automate this for you. However, for mirroring container images between two projects, a feature request has already been submitted, but there is no ETA.
According to the GCP documentation, if the project is protected by VPC-SC, Container Registry does not use the googleapis.com domain. To achieve this, Container Registry needs to be configured via private DNS or BIND to map to the restricted VIP separately from other APIs.
When a change is made to a container registry that you own, a Pub/Sub message can be published. You can use this Pub/Sub message as a trigger to perform work. My immediate thought would be to create a Cloud Function that is triggered by the arrival of the message and fires off a Cloud Build recipe (see the sketch after the references below). The Cloud Build would perform a docker pull of your original image, then a tag rename, and then a docker push. This would be 100% automated and would use components designed for CI/CD pipelines.
References:
Configuring Pub/Sub notifications
Cloud Build documentation
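The Cloud Build recipe fired by the Cloud Function could be as simple as this sketch; the source and destination image paths are placeholders, and in practice the Function would inject the actual image and tag from the Pub/Sub message:

```yaml
steps:
  # Pull the image from the source project's registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['pull', 'gcr.io/source-project/my-image:latest']
  # Retag it for the destination project
  - name: 'gcr.io/cloud-builders/docker'
    args: ['tag', 'gcr.io/source-project/my-image:latest', 'gcr.io/dest-project/my-image:latest']
  # Push to the destination project's registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/dest-project/my-image:latest']
```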

Separate URL for each git branch in Cloud Run

I am looking into Cloud Run to host my new app, and I am wondering if it is possible to generate a separate URL for each git branch.
I use Netlify to host my other app. When it is connected to GitHub (or other VCS services), it builds the source code in a branch and deploys it to a URL that is specific to the branch.
Can it be done easily or do I have to write some logic?
Or do you think AWS amplify or some other services are of better fit?
The concept of Cloud Run and URLs is quite simple:
https://<service-name>-<project-hash>-<region-code>.a.run.app
If your project and region are the same for all the branches, you simply have to deploy a different service for each branch to get a different URL.
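For example, a branch-triggered cloudbuild.yaml could derive the service name from the built-in $BRANCH_NAME substitution, as in this sketch (my-app and the region are placeholders, and it assumes your branch names are valid in a Cloud Run service name, i.e. lowercase letters, digits, and hyphens):

```yaml
steps:
  # Build and push an image tagged with the commit SHA
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA']
  # Deploy one Cloud Run service per branch => one URL per branch
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['run', 'deploy', 'my-app-$BRANCH_NAME',
           '--image', 'gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA',
           '--region', 'us-central1', '--platform', 'managed']
```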
That was for Cloud Run. Now, I'm not sure that Netlify is compliant with Cloud Run. I found no documentation on this.
This answer won't be directly useful to you, but I think it's relevant and worth mentioning.
The open source Knative API (and implementation) actually exposes a "tag" feature while splitting traffic between multiple revisions: https://github.com/knative/docs/blob/master/docs/serving/spec/knative-api-specification-1.0.md#traffictarget
This feature is not currently supported on Cloud Run fully managed, but it will be.
By tagging revisions this way, you could define tag: v1 and tag: v2 in your traffic configuration, and you would get new URLs like:
https://v1-SERVICE_NAME...run.app
https://v2-SERVICE_NAME...run.app
that go directly to these specific revisions.
And the interesting thing is, the revisions you specify in the traffic: block of the Service object do not have to receive any traffic (you can set their traffic percentage to 0), but a domain name like those shown above is still created for the inactive revisions of your app.
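In Knative terms, the traffic: block would look roughly like this sketch (the service and revision names are placeholders):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service
spec:
  # (container template omitted for brevity)
  traffic:
    - revisionName: my-service-00001
      tag: v1
      percent: 0     # no traffic, yet https://v1-my-service... is still created
    - revisionName: my-service-00002
      tag: v2
      percent: 100   # all live traffic goes to the v2 revision
```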
So once Cloud Run fully managed supports tag fields, you can actually achieve this, although it will be a less out-of-the-box experience than Netlify.

Google Container Builder: How to cache dependencies between two builds

We are migrating our container build process to Google Container Builder. We have multiple repos using Node or Scala.
With the current Container Builder features, is it possible to cache dependencies between two builds (e.g. node_modules, .ivy, ...)? It's really time (money) consuming to download everything each time.
I know it's possible to build a custom docker image with everything packaged within, but we would prefer to avoid this solution.
For example, can we mount a persistent volume for that purpose, as we used to do with DroneIO? Or, even better, automatically, like in Bitbucket Pipelines?
Thanks
GCB doesn't currently support mounting a persistent volume across builds.
In the meantime, the team recently published a document outlining some options for speeding up builds, which might be useful: https://cloud.google.com/container-builder/docs/speeding-up-builds
In particular, caching generated output to Google Cloud Storage and pulling it in at the beginning of your build might help in your case.
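A rough sketch of that pattern for a Node build, assuming a GCS bucket you own (my-build-cache is a placeholder); the first step tolerates a cold cache:

```yaml
steps:
  # Restore cached node_modules from GCS if present ('|| true' tolerates a miss)
  - name: 'gcr.io/cloud-builders/gsutil'
    entrypoint: 'bash'
    args: ['-c', 'gsutil cp gs://my-build-cache/node_modules.tar.gz . && tar -xzf node_modules.tar.gz || true']
  # Install runs much faster when the cache was restored
  - name: 'gcr.io/cloud-builders/npm'
    args: ['install']
  # Save the refreshed cache back to GCS for the next build
  - name: 'gcr.io/cloud-builders/gsutil'
    entrypoint: 'bash'
    args: ['-c', 'tar -czf node_modules.tar.gz node_modules && gsutil cp node_modules.tar.gz gs://my-build-cache/']
```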