Due to corporate restrictions, I'm supposed to host everything on GCP in Europe. The organisation I work for has set a restriction policy to enforce this.
When I deploy a Cloud Run service from source with gcloud beta run deploy --source . --region europe-west1, the command tries to store the temporary files in a storage bucket in the US, which is not allowed. The command then throws a 412 error.
➜ gcloud beta run deploy cloudrun-service-name --source . --platform managed --region=europe-west1 --allow-unauthenticated
This command is equivalent to running `gcloud builds submit --tag [IMAGE] .` and `gcloud run deploy cloudrun-service-name --image [IMAGE]`
Building using Dockerfile and deploying container to Cloud Run service [cloudrun-service-name] in project [PROJECT_ID] region [europe-west1]
X Building and deploying new service... Uploading sources.
- Uploading sources...
. Building Container...
. Creating Revision...
. Routing traffic...
. Setting IAM Policy...
Deployment failed
ERROR: (gcloud.beta.run.deploy) HTTPError 412: 'us' violates constraint 'constraints/gcp.resourceLocations'
I see the Artifact Registry Repository being created in the correct region, but not the storage bucket.
To bypass this, I have to first create a storage bucket named PROJECT_ID_cloudbuild in the correct region. Is there any other way to fix this?
The error message indicates that the bucket is forced to be created in the US regardless of the organisation policy restricting resources to Europe. As per this public issue tracker comment:
“Cloud build submit creates a [PROJECT_ID]_cloudbuild bucket in the US. This will of course not work when resource restrictions apply. What you can do as a workaround is to create that bucket yourself in another location. You should do this before your first cloud build submit.”
This is a known issue, and I found two workarounds that can help you achieve what you want.
The first workaround is to use “gcloud builds submit” with additional flags:
Create a new bucket with the name [PROJECT_ID]_cloudbuild in the preferred location.
Specify the non-default buckets with --gcs-source-staging-dir and --gcs-log-dir. These flags are required: if they are not set, a bucket will be created in the US.
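For example, a sketch assuming europe-west1 and the bracketed placeholder names from this question (the Artifact Registry image path is only illustrative):
# Create the staging bucket in the allowed region before the first build
gsutil mb -l europe-west1 gs://[PROJECT_ID]_cloudbuild
# Point both the source staging directory and the build logs at that bucket
gcloud builds submit . \
  --tag europe-west1-docker.pkg.dev/[PROJECT_ID]/[REPO]/[IMAGE] \
  --gcs-source-staging-dir=gs://[PROJECT_ID]_cloudbuild/source \
  --gcs-log-dir=gs://[PROJECT_ID]_cloudbuild/logs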
The second workaround is to use a cloudbuild.yaml and the “--gcs-source-staging-dir” flag:
Create a bucket in the region, dual-region or multi-region you want.
Create a cloudbuild.yaml that stores the build artifacts in that bucket.
You can find an example of the YAML file in the following external documentation; please note that I cannot vouch for its accuracy since it is not from GCP.
Run the command:
gcloud builds submit --gcs-source-staging-dir="gs://example-bucket/cloudbuild-custom" --config cloudbuild.yaml
Please try these workarounds and let me know whether they work for you.
Related
I'm trying to create a Compute Engine VM instance named sample in Google Cloud with an associated startup script, startup_script.sh. On startup, I would like to have access to files that I have stored in a Cloud Source Repository. As such, in this script, I clone a repository using
gcloud source repos clone <repo name> --project=<project name>
Additionally, startup_script.sh also runs commands such as
gcloud iam service-accounts keys create key.json --iam-account <account>
which creates .json credentials, and
EXTERNAL_IP=$(gcloud compute instances describe sample --format='get(networkInterfaces[0].accessConfigs[0].natIP)' --zone=us-central1-a)
to get the external IP of the VM within the VM. To run these commands without any errors, I found that I need partial or full access to multiple Cloud API access scopes.
If I manually edit the scopes of the VM after I've already created it to allow for this and restart it, startup_script.sh runs fine, i.e. I can see the results of each command completing successfully. However, I would like to assign these scopes upon creation of the VM and not have to manually edit scopes after the fact. I found in the documentation that in order to do this, I can run
gcloud compute instances create sample --image-family=ubuntu-1804-lts --image-project=ubuntu-os-cloud --metadata-from-file=startup-script=startup_script.sh --zone=us-central1-a --scopes=[cloud-platform, cloud-source-repos, default]
When I run this command in Cloud Shell, however, I can only add one scope at a time, e.g. --scopes=cloud-platform; if I try to enter multiple scopes as shown in the command above, I get
ERROR: (gcloud.compute.instances.create) unrecognized arguments:
cloud-source-repos,
default]
Adding multiple scopes as the documentation suggests doesn't seem to work. I get a similar error when I use the scope's URI instead of its alias.
Any obvious reasons as to why this may be happening? I feel this may have to do with the service account (or lack thereof) associated with the sample VM, but I'm not entirely familiar with this.
BONUS: Ideally I would like to run the VM creation cloud shell command in a cloudbuild.yaml file, which I have as
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: gcloud
  args: ['compute', 'instances', 'create', 'sample', '--image-family=ubuntu-1804-lts', '--image-project=ubuntu-os-cloud', '--metadata-from-file=startup-script=startup_sample.sh', '--zone=us-central1-a', '--scopes=[cloud-platform, cloud-source-repos, default]']
I can submit the build using
gcloud builds submit --config cloudbuild.yaml .
Are there any issues with the way I've setup this cloudbuild.yaml?
Adding multiple scopes as the documentation suggests doesn't seem to work
Please use the command with --scopes=cloud-platform,cloud-source-repos and not --scopes=[cloud-platform, cloud-source-repos, default]:
gcloud compute instances create sample --image-family=ubuntu-1804-lts --image-project=ubuntu-os-cloud --zone=us-central1-a --scopes=cloud-platform,cloud-source-repos
Created [https://www.googleapis.com/compute/v1/projects/wave25-vladoi/zones/us-central1-a/instances/sample].
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
sample us-central1-a n1-standard-1 10.128.0.17 35.238.166.75 RUNNING
Also consider @John Hanley's comment.
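Applied to the original command (startup-script metadata included), a sketch with the scope aliases comma-separated and no brackets or spaces:
gcloud compute instances create sample \
  --image-family=ubuntu-1804-lts \
  --image-project=ubuntu-os-cloud \
  --metadata-from-file=startup-script=startup_script.sh \
  --zone=us-central1-a \
  --scopes=cloud-platform,cloud-source-repos
The same comma-separated form applies to the '--scopes=...' string inside the cloudbuild.yaml args from the BONUS section; note that cloud-platform is already the broadest access scope, so listing additional aliases alongside it is largely redundant.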
I'm trying to figure out the absolute minimum set of IAM permissions I need to assign to a service key that will be used to run the following commands:
gcloud builds submit --tag gcr.io/MYPROJECT/MYNAME
gcloud run deploy --allow-unauthenticated --platform=managed --image gcr.io/MYPROJECT/MYNAME ...
I've had a lot of trouble figuring out IAM, so the more detailed instructions anyone can give me the better!
Here's what I've figured out so far (I ended up going with way more open permissions than I wanted): https://simonwillison.net/2020/Jan/21/github-actions-cloud-run/#google-cloud-service-key
I'm actually running these commands inside a Python script - relevant code is here: https://github.com/simonw/datasette/blob/07e208cc6d9e901b87552c1be2854c220b3f9b6d/datasette/publish/cloudrun.py#L134-L141
I understand you are running these commands with a service account, and your goal is to determine the minimal set of IAM permissions to assign to this service account so that it can build and deploy. I am going to list a minimal set of IAM roles (not IAM permissions).
To run gcloud builds submit --tag gcr.io/MYPROJECT/MYNAME, you need:
roles/cloudbuild.builds.editor to trigger the build
roles/storage.admin to push the image
To run gcloud run deploy --allow-unauthenticated --platform=managed --image gcr.io/MYPROJECT/MYNAME ... you need:
roles/run.admin (to deploy and allow allUsers to access the service)
roles/iam.serviceAccountUser (because the code will then run under a service account, so the service account used to deploy needs to also be able to "act as" the runtime service account)
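If helpful, a sketch of granting these roles to the deploy service account with gcloud (PROJECT_ID and SA_EMAIL are placeholders for your project ID and the service account's email):
# Build-related roles
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:SA_EMAIL" --role="roles/cloudbuild.builds.editor"
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:SA_EMAIL" --role="roles/storage.admin"
# Deploy-related roles
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:SA_EMAIL" --role="roles/run.admin"
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:SA_EMAIL" --role="roles/iam.serviceAccountUser"
Note that binding roles/iam.serviceAccountUser at the project level lets the account act as any service account in the project; granting it only on the runtime service account is tighter.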
When deploying a docker container image to Cloud Run, I can choose a region, which is fine. Cloud Run delegates the build to Cloud Build, which apparently creates two buckets to make this happen. The unexpected behavior is that buckets aren't created in the region of the Cloud Run deployment, and instead default to multi-region US.
How do I specify the region as "us-east1" so the cost of storage is absorbed by the "always free" tier? (Apparently US multi-region storage buckets store data in regions outside of the free tier limits, which resulted in a surprise bill - I am trying to avoid that bill.)
If it matters, I am also using Firebase in this project. I created the Firebase default storage bucket in the us-east1 region with the hopes that it might also become the default for other buckets, but this is not so. The final bucket list looks like this, where you can see the two buckets created automatically with the undesirable multi-region setting.
This is the shell script I'm using to build and deploy:
#!/bin/sh
project_id=$1
service_id=$2
if [ -z "$project_id" ]; then
echo "First argument must be the Google Cloud project ID" >&2
exit 1
fi
if [ -z "$service_id" ]; then
echo "Second argument must be the Cloud Run app name" >&2
exit 1
fi
echo "Deploying $service_id to $project_id"
tag="gcr.io/$project_id/$service_id"
gcloud builds submit \
--project "$project_id" \
--tag "$tag" \
&& \
gcloud run deploy "$service_id" \
--project "$project_id" \
--image "$tag" \
--platform managed \
--update-env-vars "GOOGLE_CLOUD_PROJECT=$project_id" \
--region us-central1 \
--allow-unauthenticated
As you mention, Cloud Build creates the bucket or buckets as multi-region because, when the Cloud Run service is created, only the flags and arguments needed to deploy the service are passed.
The documentation for the command gcloud builds submit mentions the following for the flag --gcs-source-staging-dir:
--gcs-source-staging-dir=GCS_SOURCE_STAGING_DIR
A directory in Google Cloud Storage to copy the source used for staging the build. If the specified bucket does not exist, Cloud Build will create one. If you don't set this field, gs://[PROJECT_ID]_cloudbuild/source is used.
As this flag is not set, the bucket is created as multi-region in the US. The same behavior applies to the --gcs-log-dir flag.
Now, to use a bucket in the region, dual-region or multi-region you want, use a cloudbuild.yaml together with the --gcs-source-staging-dir flag. You can do the following:
Create a bucket in the region, dual-region or multi-region you want. For example, I created a bucket called "example-bucket" in australia-southeast1.
Create a cloudbuild.yaml file. This is necessary to store the artifacts of the build in the bucket you want, as mentioned here. An example follows:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args:
  - 'run'
  - 'deploy'
  - 'cloudrunservice'
  - '--image'
  - 'gcr.io/PROJECT_ID/IMAGE'
  - '--region'
  - 'REGION_TO_DEPLOY'
  - '--platform'
  - 'managed'
  - '--allow-unauthenticated'
artifacts:
  objects:
    location: 'gs://example-bucket'
    paths: ['*']
Finally you could run the following command:
gcloud builds submit --gcs-source-staging-dir="gs://example-bucket/cloudbuild-custom" --config cloudbuild.yaml
The steps mentioned before can be adapted to your script. Please give it a try :) and you will see that even if the Cloud Run service is deployed in Asia, Europe or the US, the bucket specified before can be in another location.
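Adapted to the script in the question, the build step could look roughly like this (a sketch, assuming the cloudbuild.yaml above sits next to the source and that "example-bucket" is replaced with your own bucket):
gcloud builds submit \
  --project "$project_id" \
  --config cloudbuild.yaml \
  --gcs-source-staging-dir="gs://example-bucket/cloudbuild-custom"
Since that cloudbuild.yaml already runs gcloud run deploy, the separate gcloud run deploy step in the script would not be needed in that case.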
Looks like this is only possible by doing what you're mentioning in the comments:
Create a storage bucket in us-east1 as the source bucket ($SOURCE_BUCKET);
Create an Artifact Registry repo in us-east1 (example commands for both are sketched after the deploy command below);
Create the following cloudbuild.yaml:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'us-east1-docker.pkg.dev/$PROJECT_ID/my-repo/my-image', '.']
images:
- 'us-east1-docker.pkg.dev/$PROJECT_ID/my-repo/my-image'
Deploy with:
$ gcloud builds submit --config cloudbuild.yaml --gcs-source-staging-dir=gs://$SOURCE_BUCKET/source
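For the first two steps, a rough sketch of the bucket and repository creation (the bucket variable and "my-repo" follow the placeholders above):
gsutil mb -l us-east1 gs://$SOURCE_BUCKET
gcloud artifacts repositories create my-repo \
  --repository-format=docker \
  --location=us-east1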
More details here: https://cloud.google.com/artifact-registry/docs/configure-cloud-build
I think it should at least be possible to specify the Artifact Registry repo with the --tag option and have it be automatically created, but it currently rejects any domain that isn't gcr.io outright.
I'm trying to set up credentials for Kubernetes on my local machine.
gcloud container clusters get-credentials ***** --zone **** --project elo-project-267109
This command works fine when I try it from Cloud Shell, but I got this error when I ran it from my terminal:
ERROR: (gcloud.container.clusters.get-credentials) get-credentials requires edit permission on elo-project-267109
I've tried this command from the admin account, from the default service account, and also from a new service account with the Editor role assigned, and it still doesn't seem to work for me.
I am using macOS Mojave (10.14.6), and the gcloud SDK version installed on my system is 274.0.1.
I was able to resolve this issue locally, but I am actually trying to build a CI/CD pipeline from GitLab and the issue persists there; I have tried using the gcloud (279.0.0) image version.
I am new to both GitLab and gcloud, and I am trying to build a CI/CD pipeline for the first time.
Run gcloud auth list to see which account you are logged into.
You need to log in with the account that has the correct permissions for the action you're trying to perform.
To set the gcloud account: gcloud config set account <ACCOUNT>
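If the GitLab pipeline authenticates with a service account key rather than a user account, a sketch of the usual setup (the key file path, cluster name and zone are placeholders):
gcloud auth activate-service-account --key-file=key.json
gcloud config set project elo-project-267109
gcloud container clusters get-credentials CLUSTER_NAME --zone ZONE --project elo-project-267109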
It turned out to be an image version mismatch issue on GitLab.
So I'm trying to run a training job on Google Cloud's AI Platform for an image classifier written in TensorFlow from the command line:
gcloud ai-platform jobs submit training my_job \
--module-name trainer.final_task \
--staging-bucket gs://project_bucket \
--package-path trainer/
but I keep getting the ERROR: (gcloud.ai-platform.jobs.submit.training) User [myemail#gmail.com] does not have permission to access project [my_project] (or it may not exist): Permission denied on 'locations/value' (or it may not exist).
I don't get how this is possible as I own the project on gcloud (with that e-mail address) and am even expressly linked to it on the IAM policy bindings. Has anyone experienced this before?
EXTRA INFO:
I am using gcloud as an individual, there are no organisations involved. Hence the only members linked in IAM policy bindings are me and gcloud service accounts.
The code works perfectly when trained locally (using gcloud ai-platform local train) with the same parameters.
I encountered the same problem: an owner account was denied permission for training a job. I had accidentally set "central1" as the region when it had to be "us-central1". Hopefully this helps!
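In other words, make sure the full region name is used; a sketch based on the command in the question, with the --region flag assumed:
gcloud ai-platform jobs submit training my_job \
  --module-name trainer.final_task \
  --staging-bucket gs://project_bucket \
  --package-path trainer/ \
  --region us-central1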
I need a little more information to be sure, but such an error appears when you have a different project set in the gcloud SDK. Please verify that the project shown by gcloud config list project is the same as the project you want to use. If not, please run gcloud config set project [YOUR PROJECT]. You can verify the change with the list command again.
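A sketch of that check ([YOUR PROJECT] is the placeholder from above):
gcloud config list project                 # shows the currently active project
gcloud config set project [YOUR PROJECT]   # switch if it differs
gcloud config list project                 # confirm the change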
In my case, the issue was that my notebook was in one region and I was trying to deploy in another. After I changed the deployment location to match my notebook's location, it worked.