Google Cloud Build not caching custom build steps? - google-cloud-platform

Is it possible to have Google Cloud Build cache custom build step images? It appears to re-download them every build regardless of latest vs specific tags used in the name, which makes things slower as opposed to faster.

This is not possible today. There are official Cloud Build build steps that are cached, but all custom build steps will be pulled each time. You can minimize the pull latency by using one of the official build steps as the base image of your custom build step.
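For example, a custom build step's Dockerfile might look like the sketch below: since the official builder's layers are the ones that get cached, basing your image on gcr.io/cloud-builders/gcloud means only the extra layers on top have to be pulled (the jq install is purely illustrative):

```Dockerfile
# Sketch of a custom build step that uses an official builder as its base.
# Only the layers added below the FROM line need to be downloaded at build time,
# since the official image's layers are the ones that get cached.
FROM gcr.io/cloud-builders/gcloud

# Install whatever extra tooling the custom step needs (jq is just an example).
RUN apt-get update && \
    apt-get install -y --no-install-recommends jq && \
    rm -rf /var/lib/apt/lists/*

ENTRYPOINT ["/bin/bash"]
```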

Related

Are incremental builds possible with copied files not built on the machine?

I'm having trouble setting up incremental builds in Azure DevOps. There are too many variables with workspace cleaning to ensure that I don't have to do a full build every time.
I had a thought that I could just always copy the built files to a location outside of the agents' purview, and then copy those files into my release directory before each build.
Would that allow for an incremental build?
You probably can 'fool' the incremental logic but you would be working against the tooling.
For an actual incremental build you need to build in the same place.
In the context of Azure DevOps, that means building the same job of the same pipeline on the same agent. You can't let the build move around between agents or even between work folders of the same agent. (It also means that your agent and the state of the agent work folder must be persistent across the builds.)
You can make the job, stage, or pipeline 'sticky' to one dedicated agent by using demands and capabilities.
Decide what will be on your dedicated agent. Will it be the entire pipeline or just a stage of the pipeline or just a job of a stage?
For the dedicated agent, create a capability that represents the build. Using the name of the pipeline (or pipeline+stage, or pipeline+stage+job, depending on what you chose) as the name of the capability is handy and self-documenting. You can create the capability in Azure DevOps as a 'user capability' of the agent.
Change your pipeline to add a demand on the custom capability. The demand can simply test that the capability exists. In a YAML pipeline, demands are configured in the pool definition, as in the sketch below.
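A minimal sketch of the YAML, assuming a self-hosted pool named Default and a user capability called IncrementalBuild.MyPipeline that you added to the dedicated agent (both names are placeholders):

```yaml
# azure-pipelines.yml (sketch)
pool:
  name: Default                      # your self-hosted agent pool
  demands:
    - IncrementalBuild.MyPipeline    # 'exists' demand: only the agent that has
                                     # this user capability can pick up the job

steps:
  - script: echo "Building incrementally on the dedicated agent"
```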
This is an easier and less brittle approach than trying to outsmart the incremental logic.
With this approach, all builds will be done in series on the one agent. If the build takes a long time (which may be the motivation for building incrementally) and the build is tied to one agent, the 'throughput' of builds will be limited. If a build's duration is 1 hour, there will be a maximum of 8 builds in an 8 hour work day.
Tying specific builds to specific agents is not the intent in Azure DevOps. For a monolithic legacy codebase where there is no notion of semantic versioning and immutable interfaces, you may have little choice. But a better way is to use package management. Instead of one big build, have multiple smaller builds that produce packages that are used by other builds. The challenge is that packages will not work well without some attention and discipline around versioning and keeping published interfaces and contracts unchanged.

Can you cache community builders?

I need to use Helm in my build pipeline. As described in the docs I downloaded the source of the Helm community builder and pushed the built image to GCR.
Now if I use the builder in my pipeline, it takes an absurd amount of time for Cloud Build to download the builder image from GCR and run the Helm commands.
Is there any way I could speed this process up? Can I somehow cache intermediate layers of a builder image?
With Cloud Build community images, there's a (disappointing) requirement that you must build the image for yourself before you may use it.
If you're building the community image every time you use it, it's going to take more time than necessary.
IIUC, your caching solution is to decouple building the community image (and storing it in, e.g., a Google Container Registry to which you have access) from using that image in the Cloud Builds that need the Helm builder.
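Concretely: run the community builder's own build once to produce and push the Helm image, and have every pipeline that needs Helm simply reference that pushed image. A sketch of the consuming cloudbuild.yaml, assuming the builder was pushed to gcr.io/$PROJECT_ID/helm and that the chart path, release name, zone, and cluster are placeholders for your setup:

```yaml
# cloudbuild.yaml that *uses* the prebuilt Helm builder (sketch).
# gcr.io/$PROJECT_ID/helm is assumed to have been built and pushed once,
# in a separate build; here it is only pulled, never rebuilt.
steps:
  - name: 'gcr.io/$PROJECT_ID/helm'
    args: ['upgrade', '--install', 'my-release', './chart']   # illustrative args
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=us-central1-b'      # placeholder zone
      - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'    # placeholder cluster
```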

Simultaneous Deploys from Github with Cloud Build for Multi-tenant architecture

My company is developing a web application and has decided that a multi-tenant architecture would be most appropriate for isolating individual client installs. An install would represent an organization (a nonprofit, for example) and not an individual user's account. Each install would be several Cloud Run applications bucketed to an individual GCP project.
We want to be able to take advantage of Cloud Build's GitHub support to deploy from our main branch in GitHub to each individual client install. So far, I've been able to get this setup working across two individual GCP projects, where Cloud Build runs in each project individually and deploys to the individual GCP project Cloud Runs at roughly the same time and with the same duration. (Cloud Build does some processing unique to each client install so the build processes in each install are not performing redundant work)
My specific question is: can we scale this deployment technique up? Is there any constraint preventing us from using Cloud Build in multiple GCP projects to deploy to our client installs, or will we hit issues as we continue to add more GCP projects? I know that so far this technique works for 2 installs, but will it work for 20, or 200?
You are limited to 10 concurrent builds per project, but if you run one Cloud Build per project, there are no known limitations or issues.

Google Container Builder: How to cache dependencies between two builds

We are migrating our container building process to Google Container Builder. We have multiple repos using Node or Scala.
With the current Container Builder features, is it possible to cache dependencies between two builds (e.g. node_modules, .ivy, ...)? It's really time- (and money-) consuming to download everything each time.
I know it's possible to build a custom Docker image with everything packaged inside, but we would prefer to avoid this solution.
For example, can we mount a persistent volume for that purpose, as we used to do with DroneIO? Or, even better, have it happen automatically, as in Bitbucket Pipelines?
Thanks
GCB doesn't currently support mounting a persistent volume across builds.
In the meantime, the team recently published a document outlining some options for speeding up builds, which might be useful: https://cloud.google.com/container-builder/docs/speeding-up-builds
In particular, caching generated output to Google Cloud Storage and pulling it in at the beginning of your build might help in your case.
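A minimal sketch of that pattern for a Node build, assuming a GCS bucket you own named gs://my-build-cache (the bucket name and the npm steps are placeholders):

```yaml
# cloudbuild.yaml (sketch): persist node_modules in GCS between builds.
steps:
  # Restore the cache if it exists; '|| true' keeps the very first build from failing.
  - name: 'gcr.io/cloud-builders/gsutil'
    entrypoint: 'bash'
    args: ['-c', 'gsutil cp gs://my-build-cache/node_modules.tar.gz . && tar xzf node_modules.tar.gz || true']

  # Install (fast when node_modules was restored) and build.
  - name: 'gcr.io/cloud-builders/npm'
    args: ['install']
  - name: 'gcr.io/cloud-builders/npm'
    args: ['run', 'build']

  # Save the refreshed cache for the next build.
  - name: 'gcr.io/cloud-builders/gsutil'
    entrypoint: 'bash'
    args: ['-c', 'tar czf node_modules.tar.gz node_modules && gsutil cp node_modules.tar.gz gs://my-build-cache/']
```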

Is it possible to dynamically get a build from a teamcity server using python?

We have regular builds that appear on a TeamCity server. I manually take them down and configure them for integration testing and so forth.
The build link location is of the following format:
http://TCServer.com/repository/download/constant/321812:id/BuildB.zip
Previous build could look like:
http://TCServer.com/repository/download/constant/321796:id/BuildA.zip
The url as far as "constant" never changes but the rest is dynamic.
Because the "Artifacts" links are popups, it's unclear how to get this link through scripting (I'm still wet behind the ears when it comes to this language).
Is there a python plugin for TC that may help in this regard?
There is a way to download all artifacts of a build in a single zip archive:
http://<server>/repository/downloadAll/<buildTypeId>/61158:id/artifacts.zip
You can also download all artifacts of the last finished/successful/pinned build (useful if you don't know build id):
http://<server>/repository/downloadAll/<buildTypeId>/latest.lastSuccessful/artifacts.zip
You can use latest.lastSuccessful, latest.lastFinished and latest.lastPinned locators.
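If you want to drive that from Python, a small sketch using requests could look like the following. The server URL and the constant path segment reuse the values from the question; the httpAuth prefix, the credentials, and the output filename are assumptions (use guestAuth instead if guest access is enabled):

```python
import requests

# Placeholders based on the URLs in the question; adjust for your server,
# build configuration, and credentials.
BASE_URL = "http://TCServer.com"
BUILD_CONF = "constant"   # the path segment that never changes

# Download every artifact of the last successful build as one zip archive.
url = f"{BASE_URL}/httpAuth/repository/downloadAll/{BUILD_CONF}/latest.lastSuccessful/artifacts.zip"

response = requests.get(url, auth=("user", "password"), stream=True)
response.raise_for_status()

with open("artifacts.zip", "wb") as archive:
    for chunk in response.iter_content(chunk_size=8192):
        archive.write(chunk)
```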
The backend for TeamCity artifacts actually uses Apache Ivy (and optionally also NuGet). You can pull artifacts directly out of it; I do this using Ant, following the example JetBrains gives:
http://confluence.jetbrains.com/display/TCD7/Artifact+Dependencies