Is "Cloud build Custom Worker" available now? - google-cloud-platform

I'm working on a project where I need to access VM instances from Cloud Build in order to download dependencies.
Cloud Build uses a worker pool that resides outside the VPC, so it isn't able to get through the firewall.
Can anyone suggest how to resolve this issue, or help me with creating a custom worker pool?

From the documentation:
gcloud alpha builds worker-pools create
This command is currently in ALPHA and may change without notice. If this command fails with API permission errors despite specifying the right project, you may be trying to access an API with an invitation-only early access whitelist.
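Custom worker pools have since become generally available as "private pools", so the command is no longer alpha-only. As a rough sketch only (the pool name, region, project and network below are placeholders, and flag names may vary between gcloud releases), creating a pool peered to your VPC looks something like:

# Create a private worker pool attached to a VPC network peered with Cloud Build
gcloud builds worker-pools create my-private-pool \
  --region=us-central1 \
  --peered-network=projects/my-project/global/networks/my-vpc

The pool is then referenced from the build itself (for example via the pool option in the build config), so builds run on workers that can reach resources inside the VPC.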

Related

How to use existing Compute Engine for Google Cloud Build?

How would you use an existing Compute Engine VM instance for a Google Cloud Build pipeline?
I know there has been a similar question in the past; however, the suggested answer (creating and then destroying a Compute Engine instance with every build) is not really what I want.
In settings, Cloud Build allows you to enable "service account permissions" for Compute Engine (Compute Instance Admin (v1)), but I've found no information on how to use that permission to run the build process on one of your predefined VM instances.
Or maybe I misunderstand the answer in the linked thread above, and
COMMAND=sudo supervisorctl restart
actually restarts supervisorctl on the existing VM? Any help would be appreciated.
You can't run a Cloud Build build on a GCE instance. The most customizable option you have is to run the build on a private pool, but even then the pool is fully managed and you never get access to the underlying VM.
Another option is to have Cloud Build start a powerful GCE instance via the GCE API, run your operations there, and then stop the instance.
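A minimal sketch of that pattern, assuming the build's service account has the Compute Instance Admin role and SSH access to the instance (the instance name, zone and script below are placeholders):

# Start the pre-existing instance, run the work over SSH, then stop it again
gcloud compute instances start build-runner --zone=us-central1-a
gcloud compute ssh build-runner --zone=us-central1-a --command="cd /opt/app && ./run-build.sh"
gcloud compute instances stop build-runner --zone=us-central1-a

Note that running gcloud compute ssh from a build step also requires SSH keys (or IAP) to be set up for the build's service account.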

gcloud builds submit fails while docker push + gcloud run deploy work just fine?

EDIT: The so-called duplicate question was way off, since (1) I could push another image, (2) I could not push a build image, and (3) the solution turned out to be totally different and ONLY related to pushing build images via Cloud Build. i.e. I beg to differ: this question IS different.
Running into some more Google Cloud security stuff. We currently deploy to Cloud Run like so:
docker build . --tag gcr.io/myproject/authservice
docker push gcr.io/myproject/authservice
gcloud run deploy staging-admin --region us-west1 --image gcr.io/myproject/authservice --platform managed
I did the Cloud Build quickstart but I am getting permission errors. The quickstart I followed is
https://cloud.google.com/cloud-build/docs/quickstart-build
The command I ran was
gcloud builds submit --tag gcr.io/myproject/quickstart-image
This is all in the same project, but submitting builds gets the same error over and over and over (I am not sure why it doesn't just exit on the first error).
The push refers to repository [gcr.io/myproject/quickstart-image]
e3831abe9997: Preparing
60664c29ef5a: Preparing
denied: Token exchange failed for project 'myproject'. Caller does not have permission 'storage.buckets.get'. To configure permissions, follow instructions at: https://cloud.google.com/container-registry/docs/access-control
Any ideas how to fix this so I can use Google Cloud Build?
Complementing the previous answer: as mentioned in this document, the "Storage Admin" role is necessary to perform actions in Container Registry.
Do you have the "roles/storage.admin" role? If not, add it and try again.
The Cloud Build service account has the format [project_number]@cloudbuild.gserviceaccount.com. Please add the role "roles/storage.admin" to it by following these steps:
Open the Cloud IAM page.
Select your Cloud project.
In the permissions table, locate the row with the email address ending with @cloudbuild.gserviceaccount.com. This is your Cloud Build service account.
Click on the pencil icon.
Select the role you wish to grant to the Cloud Build service account.
Click Save.
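Equivalently, from the CLI (the project ID and project number below are placeholders):

# Grant the Cloud Build service account the Storage Admin role
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:123456789@cloudbuild.gserviceaccount.com" \
  --role="roles/storage.admin"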
BE WARNED: I read the duplicate question post, but in my case
I can push images fine;
only the build one is failing, AND the solution I found is different from any of the other question's answers.
This was a VERY weird issue. The storage permission MUST be a red herring, because these permissions fixed the issue.
I found some documentation (on a Google GitHub repo that I can't seem to find again) about adding these permissions, AND a document on the TWO @cloudbuild.gserviceaccount.com accounts, AND you must add the permissions to the correct one!!!! One is owned by Google and you should not touch it.
In my case, the permission / token exchange failed error was caused by having the storage bucket used by Google Container Registry inside a VPC Service Perimeter.
This can be checked / confirmed via the VPC Service Controls logs - accessible easily from the troubleshooting page.
There is a (very clunky) way to get Cloud Build working to push images to a registry inside a VPC perimeter. It involves running a build worker pool and applying appropriate config + permissions to the perimeter etc.
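For example, once the private worker pool exists, adding its host project to the service perimeter can be done with gcloud; a sketch with a placeholder perimeter name, policy ID and project number:

gcloud access-context-manager perimeters update my-perimeter \
  --policy=1234567890 \
  --add-resources=projects/123456789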

How to get Google Cloud Build working inside VPC Perimeter?

I have a question that is confusing me a little. I have a project locked down at the org level through a perimeter fence. This is to whitelist IP ranges that can access a Cloud Storage bucket, because the user has no ability to authenticate through service accounts or APIs and requires streaming of data.
This is fine and working; however, I am confused about how to open up access to serverless environments inside GCP as well. The issue in question is Cloud Build. Since introducing the perimeter I can no longer run Cloud Build, due to a violation of VPC controls. Can anyone point me in the direction of how to enable this, as obviously whitelisting the entire Cloud Build IP range is not an option?
You want to create a Perimeter Bridge between the resources that you want to be able to access each other. You can do this in the console or using gcloud as noted in the docs that I linked.
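As a sketch, creating a bridge perimeter between two projects with gcloud might look like the following (the policy ID, bridge name and project numbers are placeholders):

gcloud access-context-manager perimeters create my-bridge \
  --policy=1234567890 \
  --title="my-bridge" \
  --perimeter-type=bridge \
  --resources=projects/111111111,projects/222222222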
The official documentation mentions that if you use VPC Service Controls, some services are not supported, Cloud Build for example; that is why the problem started right after you deployed the perimeter.
Hi all, so the answer is this.
What you want to do is set up one project that is locked down by the VPC perimeter and has no APIs available, to hold the IP-whitelisted storage bucket used for ingestion. Then you create a second project that has a VPC but does not disable the Cloud Storage APIs etc. From there you can read directly from the IP-whitelisted Cloud Storage bucket in the other project.
Hope this makes sense; I wanted to share back to the awesome guys above who put me on the right track.
Thanks again.
Cloud Build is now supported by VPC Service Controls; see the VPC Service Controls supported products and limitations page.

Deploy different resources using Deployment Manager?

I'm planning to use Deployment Manager to deploy a new project for each of our clients.
I'm just wondering: can I do the following using Deployment Manager, or put it into a script/YAML, so it deploys all the components at once from the command shell?
create a new GCP project
create a VPC for the client with a custom subnet assigned
create a VM and set the network to the custom VPC/subnet
create an App Engine app with different services using the YAML file
create storage buckets
create a Cloud SQL Postgres instance
What I have tried so far: I can deploy only the VM through Deployment Manager. I can create the other resources individually using the command line, but not through Deployment Manager in one single step.
Thanks for your help.
Deployment Manager should work perfectly for this type of setup. There are a few minor caveats though.
You need to have a project in place from which you can run Deployment Manager.
You will need to grant the Deployment Manager service account all the required permissions before creating the deployment (such as Project Creator at the org level). The service account is [PROJECT_NUMBER]@cloudservices.gserviceaccount.com
Next, you will want to declare each of the resources individually in your Deployment Manager manifest; luckily all of these resource types are supported by DM:
Projects to create the project.
(All of the following resources should make a reference to this resource to create a dependency, so that DM does not try to create them before the project exists, which would result in a failure.)
VPC and VMs: use something like this
(This includes adding GKE clusters at the end and a VPC peering you won't need, but it demonstrates the creation of a VPC, subnets, firewall rules and a VM.)
App Engine
GCS Bucket
SQL instance
As long as your overall config is less than 1 MB, you can place all these resources into a single config.
If you are new to DM, I recommend trying each of these resources individually to make sure that you have the syntax correct. Trying to debug syntax errors with multiple resources is much more difficult.
I also recommend using the --preview flag before creating or updating resources so that you can make sure that your configurations or changes will come into effect the way you planned.
Finally, you can either write all this directly into a YAML config, or you can create templates using either Jinja or Python 2, which can be imported into your config.yaml.
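For illustration only, here is a minimal sketch of a single config with two dependent resources (an auto-mode network and a VM that references it); the names, zone, machine type and image are placeholders, not values from the question:

# Write a minimal Deployment Manager config with a VPC and a VM that references it
cat > config.yaml <<'EOF'
resources:
- name: client-vpc
  type: compute.v1.network
  properties:
    autoCreateSubnetworks: true
- name: client-vm
  type: compute.v1.instance
  properties:
    zone: us-central1-a
    machineType: zones/us-central1-a/machineTypes/e2-small
    disks:
    - boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/family/debian-12
    networkInterfaces:
    - network: $(ref.client-vpc.selfLink)
EOF

# Preview first, then apply the previewed deployment
gcloud deployment-manager deployments create client-deployment --config=config.yaml --preview
gcloud deployment-manager deployments update client-deployment

The $(ref.client-vpc.selfLink) reference is what creates the dependency mentioned above, so DM will not try to create the VM before the network exists.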
Please take a look at the Deployment Manager Cloud Foundation Toolkit, which is a set of well-designed templates.

Kubernetes Engine unable to pull image from non-private / GCR repository

I was happily deploying to Kubernetes Engine for a while, but while working on an integrated cloud container builder pipeline, I started getting into trouble.
I don't know what changed. I cannot deploy to Kubernetes anymore, even in ways I did before without Cloud Builder.
The pod rollout process gives an error indicating that it is unable to pull from the registry, which seems weird because the images exist (I can pull them using the CLI) and I granted all possibly related permissions to my user and the Cloud Builder service account.
I get the error ImagePullBackOff and see this in the pod events:
Failed to pull image
"gcr.io/my-project/backend:f4711979-eaab-4de1-afd8-d2e37eaeb988":
rpc error: code = Unknown desc = unauthorized: authentication required
What's going on? Who needs authorization, and for what?
In my case, my cluster didn't have the Storage read permission, which is necessary for GKE to pull an image from GCR.
My cluster didn't have the proper permissions because I created the cluster through Terraform and didn't include the node_config.oauth_scopes block. When creating a cluster through the console, the Storage read permission is added by default.
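For comparison, if you create the cluster with gcloud rather than Terraform, you can pass the scopes explicitly; a sketch with a placeholder cluster name and zone, where devstorage.read_only is the scope GKE nodes need to pull from GCR:

gcloud container clusters create my-cluster \
  --zone=us-central1-a \
  --scopes=https://www.googleapis.com/auth/devstorage.read_only,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring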
The credentials in my project somehow got messed up. I solved the problem by re-initializing a few APIs including Kubernetes Engine, Deployment Manager and Container Builder.
The first time I tried this I didn't succeed, because to disable something you first have to disable all the APIs that depend on it. If you do this via the GCloud web UI, you'll likely see a list of services that are not all available for disabling in the UI.
I learned that using the gcloud CLI you can list all APIs of your project and disable everything properly.
Things worked after that.
The reason I knew things were messed up is that I had a copy of the same setup as a production environment, and there these problems did not exist. The development environment had been through a lot of iterations and messing around with credentials, so somewhere things got corrupted.
These are some examples of useful commands:
gcloud projects get-iam-policy $PROJECT_ID
gcloud services disable container.googleapis.com --verbosity=debug
gcloud services enable container.googleapis.com
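And, for listing all of the project's enabled APIs as mentioned above, something along the lines of:

gcloud services list --enabled --project=$PROJECT_ID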
More info here, including how to restore service account credentials.