I uploaded a model with
gcloud beta ai models upload --artifact-uri
In the Docker container I access AIP_STORAGE_URI.
I see that AIP_STORAGE_URI is another Google Cloud Storage location, so I try to download the files using storage.Client(), but then it says that I don't have access:
google.api_core.exceptions.Forbidden: 403 GET https://storage.googleapis.com/storage/v1/b/caip-tenant-***-***-*-*-***?projection=noAcl&prettyPrint=false: custom-online-prediction@**.iam.gserviceaccount.com does not have storage.buckets.get access to the Google Cloud Storage bucket
I am running this endpoint with the default service account.
https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#artifacts
According to the above link:
The service account that your container uses by default has permission to read from this URI.
What am I doing wrong?
The reason behind the error is that the default service account Vertex AI uses has the "Storage Object Viewer" role, which excludes the storage.buckets.get permission. At the same time, the storage.Client() part of the code (for example, a client.get_bucket() call) issues a storage.buckets.get request against the Vertex AI managed bucket, which the default service account does not have permission to do.
To resolve the issue, I would suggest the following steps:
1. Change the custom code to access the bucket with the model artifacts in your project instead of using the environment variable AIP_STORAGE_URI, which points to the model location in the Vertex AI managed bucket (see the sketch after these steps).
2. Create your own service account and grant it all the permissions needed by the custom code. For this specific error, a role that includes the storage.buckets.get permission, e.g. Storage Admin ("roles/storage.admin"), has to be granted to the service account.
3. Provide the newly created service account in the "Service Account" field when deploying the model.
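For step 1, here is a minimal sketch of what the download code inside the container could look like, assuming the service account has object-level read access on the bucket (the /tmp/model destination is a placeholder). Note that client.bucket() only builds a local reference and never calls storage.buckets.get, unlike client.get_bucket():

import os
from google.cloud import storage

# AIP_STORAGE_URI has the form gs://<bucket>/<prefix>; point this at your own
# bucket instead if you follow step 1 above.
artifact_uri = os.environ["AIP_STORAGE_URI"]
bucket_name, _, prefix = artifact_uri[len("gs://"):].partition("/")

client = storage.Client()
# client.bucket() creates a reference without an API call, so only
# storage.objects.list and storage.objects.get are needed below.
bucket = client.bucket(bucket_name)

for blob in client.list_blobs(bucket, prefix=prefix):
    if blob.name.endswith("/"):  # skip "directory" placeholder objects
        continue
    destination = os.path.join("/tmp/model", os.path.relpath(blob.name, prefix))
    os.makedirs(os.path.dirname(destination), exist_ok=True)
    blob.download_to_filename(destination)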
Related
I have been granted a custom role with the permissions below:
bigquery.connections.delegate
bigquery.savedqueries.get
bigquery.savedqueries.list
logging.views.access
storage.buckets.get
storage.buckets.getIamPolicy
storage.buckets.list
storage.multipartUploads.abort
storage.multipartUploads.create
storage.multipartUploads.list
storage.multipartUploads.listParts
storage.objects.create
storage.objects.get
storage.objects.list
When I try to create a transfer job to bring data into a bucket/folder that I created,
this error pops up (Failed to obtain the location of the GCS bucket contributed Additional details: project-720965328418@storage-transfer-service.iam.gserviceaccount.com does not have storage.buckets.get access to the Google Cloud Storage bucket. Permission 'storage.buckets.get' denied on resource (or it may not exist).)
However, this project doesn't show any such service account (I can check this because I have a different account with the Owner role).
You are looking in the wrong place. Go to IAM & Admin -> IAM, which is a different screen. Then select the checkbox located at the top right, Include Google-provided role grants. If the service account still does not show up, click the GRANT ACCESS button and enter its email address.
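If you prefer to script the grant, and to scope it to the bucket itself rather than the whole project, here is a rough sketch with the Python storage client (the bucket name is a placeholder; the member is the transfer service agent from the error message; the account running this needs permission to set the bucket's IAM policy):

from google.cloud import storage

bucket_name = "my-transfer-destination-bucket"  # placeholder
# The Storage Transfer Service agent reported in the error message.
member = "serviceAccount:project-720965328418@storage-transfer-service.iam.gserviceaccount.com"

client = storage.Client()
bucket = client.bucket(bucket_name)

policy = bucket.get_iam_policy(requested_policy_version=3)
# roles/storage.admin on the bucket covers storage.buckets.get plus object access.
policy.bindings.append({"role": "roles/storage.admin", "members": {member}})
bucket.set_iam_policy(policy)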
I have a Cloud Function that accesses a storage bucket, and I have assigned the function a service account that has the following roles: Cloud Functions Developer and Storage Admin.
When I try to run the function using this service account with these roles, it works fine.
But when I try to fine-grain the access using IAM Conditions on the Storage Admin role, it gives me a "myserviceaccount.iam.gserviceaccount.com does not have storage.objects.get access to the Google Cloud Storage object" error.
The IAM condition I am using on the service account's Storage Admin role binding is:
resource.name.endsWith("us-west1-test") ||
resource.name.endsWith("europe-west2-test")
From my understanding this should work, because the storage bucket name ends in "us-west1-test", so I'm not sure why it's giving me this 403 Forbidden error. P.S. I added the second condition in case the resource it was trying to use was the europe-west2 function, but I have tried without it and it gives the same error.
Summary of resources and names below:
Name of function - func-europe-west2-test
Name of storage bucket - buc-us-west1-test
Roles assigned to service account - Cloud Functions Developer & Storage Admin
Appreciate any help or suggestions.
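For reference, a quick (hypothetical) illustration of what resource.name evaluates to for Cloud Storage requests may help in debugging the condition; bucket operations see the bucket's full resource name, while object operations such as storage.objects.get see the object's full resource name:

# Hypothetical resource names as seen by IAM Conditions for Cloud Storage.
bucket_resource = "projects/_/buckets/buc-us-west1-test"
object_resource = "projects/_/buckets/buc-us-west1-test/objects/some/file.txt"  # example object path

# The endsWith("us-west1-test") condition matches the bucket resource
# but not object resources, which is consistent with a 403 on storage.objects.get.
assert bucket_resource.endswith("us-west1-test")
assert not object_resource.endswith("us-west1-test")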
I have an automated build pipeline in Google Cloud Build:
- name: "gcr.io/cloud-builders/gsutil"
  entrypoint: gsutil
  args: ["-m", "rsync", "-r", "gs://my-bucket-main", "gs://my-bucket-destination"]
I gave the following roles to
xxxxxx@cloudbuild.gserviceaccount.com
Cloud Build Service Account
Cloud Functions Developer
Service Account User
Storage Admin
Storage Object Admin
But I get:
Caught non-retryable exception while listing gs://my-bucket-destination/: AccessDeniedException: 403 xxxxx@cloudbuild.gserviceaccount.com does not have storage.objects.list access to the Google Cloud Storage bucket.
Even if I add the Owner role to xxxxxx@cloudbuild.gserviceaccount.com, I get the same error. I do not understand how it is possible that Storage Admin and Storage Object Admin do not provide storage.objects.list access!
Even when I do this on my local machine, where gcloud points to the project, and I run gsutil -m rsync -r gs://my-bucket-main gs://my-bucket-destination, I still get:
Caught non-retryable exception while listing gs://my-bucket-destination/: AccessDeniedException: 403 XXXXX@YYYY.com does not have storage.objects.list access to the Google Cloud Storage bucket.
The XXXXX@YYYY.com account is the project owner, and I also gave it the "Storage Admin" and "Storage Object Admin" roles.
Any idea?
The service account is causing that error. My suggestion is to set the correct IAM roles for your service account at the bucket level.
There are two approaches to setting the service account's permissions on the two buckets:
1. Using Google Cloud Console:
Go to the Cloud Storage Browser page.
Click the Bucket overflow menu on the far right of the row associated with the bucket.
Choose Edit bucket permissions.
Click the +Add members button.
In the New members field, enter one or more identities that need access to your bucket.
Select a role (or roles) from the Select a role drop-down menu. The roles you select appear in the pane with a short description of the permissions they grant. You can choose Storage Admin role for full control of the bucket.
Click Save.
2. Using gsutil command:
gsutil iam ch serviceAccount:xxxxx@cloudbuild.gserviceaccount.com:objectAdmin gs://my-bucket-main
gsutil iam ch serviceAccount:xxxxx@cloudbuild.gserviceaccount.com:objectAdmin gs://my-bucket-destination
For full gsutil command documentation, you may refer to Using IAM with buckets.
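After running the commands, a quick way to confirm that the bindings were applied is to read each bucket's IAM policy back, for example with the Python client (the account running this needs storage.buckets.getIamPolicy, e.g. via Storage Admin):

from google.cloud import storage

client = storage.Client()
for bucket_name in ("my-bucket-main", "my-bucket-destination"):
    policy = client.bucket(bucket_name).get_iam_policy(requested_policy_version=3)
    print(f"--- {bucket_name} ---")
    for binding in policy.bindings:
        # Each binding lists a role and the members that hold it on this bucket.
        print(binding["role"], sorted(binding["members"]))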
I am trying to apply the role binding below to grant the Storage Admin Role to a GCP roleset in Vault.
resource "//cloudresourcemanager.googleapis.com/projects/{project_id_number}" {
roles = [
"roles/storage.admin"
]
}
I want to grant access at the project level, not to a specific bucket, so that the GCP roleset can read from and write to the Google Container Registry.
When I try to create this roleset in Vault, I get this error:
Error writing data to gcp/roleset/my-roleset: Error making API request.
URL: PUT http://127.0.0.1:8200/v1/gcp/roleset/my-roleset
Code: 400. Errors:
* unable to set policy: googleapi: Error 403: The caller does not have permission
My Vault cluster is running in a GKE cluster that has OAuth scopes for all Cloud APIs, I am the project owner, and the service account Vault is using has the following roles:
Cloud KMS CryptoKey Encrypter/Decrypter
Service Account Actor
Service Account Admin
Service Account Key Admin
Service Account Token Creator
Logs Writer
Storage Admin
Storage Object Admin
I have tried giving the service account both Editor and Owner roles, and I still get the same error.
Firstly, am I using the correct resource to create a roleset for the Storage Admin Role at the project level?
Secondly, if so, what could be causing this permission error?
I had previously recreated the cluster and skipped this step:
vault write gcp/config credentials=@credentials.json
Adding the key file fixed this.
There is also a chance that following the steps to create a custom role here and adding that custom role played a part.
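If it is unclear which identity Vault falls back to when gcp/config has no key file, a small Application Default Credentials check run from the same pod or node can show the account actually being used; this is only a debugging sketch, not part of the Vault setup:

import google.auth
from google.auth.transport.requests import Request

# Resolve Application Default Credentials the way a process without an explicit
# key file would, then print the project and identity they map to.
credentials, project_id = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
credentials.refresh(Request())
print("project:", project_id)
print("identity:", getattr(credentials, "service_account_email", "end-user credentials"))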
I am trying to build sonatype-nexus-community/nexus-blobstore-google-cloud but cannot succeed without the Project Owner IAM role in GCP.
If I understand everything correctly, the Storage Admin IAM role should be sufficient, at least according to the documentation:
https://github.com/sonatype-nexus-community/nexus-blobstore-google-cloud
I also tried Storage Admin + Service Account User + Service Account Token Creator, but could not succeed either.
The integration test fails with this message:
org.sonatype.nexus.blobstore.api.BlobStoreException: BlobId: e0eb4ae2-f425-4598-aa42-fc03fb2e53b2, com.google.cloud.datastore.DatastoreException: Missing or insufficient permissions.
In detail, the integration test creates a blob store, then tries to delete and then undelete it, using two different methods:
def "undelete successfully makes blob accessible"
def "undelete does nothing when dry run is true"
This is where the issue starts. Execution fails on delete:
assert blobStore.delete(blob.id, 'testing')
It's a separate question how to undelete something in Google Cloud Storage, which does not support undelete, only versioning.
Here is what the documentation says about permissions:
Google Cloud Storage Permissions
Next, you will need to create an account with appropriate permissions.
Of the predefined account roles, Storage Admin will grant the plugin to create any Google Cloud Storage Buckets you require and administer all of the objects within, but it will also have access to manage any other Google Cloud Storage Buckets associated with the project.
If you are using custom roles, the account will need:
(required) storage.objects.*
(required) storage.buckets.get
or storage.buckets.*.
The Storage Admin IAM role covers both storage.objects.* and storage.buckets.*, so I am not sure what causes the issue.
References:
https://cloud.google.com/storage/docs/access-control/iam-roles
https://cloud.google.com/storage/docs/access-control/iam-json
The integration test fails at the blob store delete attempt:
15:27:10.042 [main] DEBUG o.s.n.b.g.i.GoogleCloudBlobStore - Writing blob 2e22e0e9-1fef-4620-a66e-d672b75ef924 to content/vol-18/chap-33/2e22e0e9-1fef-4620-a66e-d672b75ef924.bytes
15:27:24.430 [main] DEBUG o.s.n.b.g.i.GoogleCloudBlobStore - Soft deleting blob 2e22e0e9-1fef-4620-a66e-d672b75ef924
at org.sonatype.nexus.blobstore.gcloud.internal.GoogleCloudBlobStoreIT.undelete successfully makes blob accessible(GoogleCloudBlobStoreIT.groovy:164)
Caused by: org.sonatype.nexus.blobstore.api.BlobStoreException: BlobId: 2e22e0e9-1fef-4620-a66e-d672b75ef924, com.google.cloud.datastore.DatastoreException: Missing or insufficient permissions., Cause: Missing or insufficient permissions.
at org.sonatype.nexus.blobstore.gcloud.internal.DeletedBlobIndex.add(DeletedBlobIndex.java:55)
at org.sonatype.nexus.blobstore.gcloud.internal.GoogleCloudBlobStore.delete(GoogleCloudBlobStore.java:276)
... 1 more
Could you please help me out if I am overlooking something?
A Datastore database needs to be created, and the Datastore Owner role needs to be added in addition to Storage Admin, Service Account User, and Service Account Token Creator.
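For completeness, here is a rough sketch of granting that extra role at the project level with the Python Resource Manager client (the project ID and service account email are placeholders); creating the Datastore database itself still has to be done in the console or with gcloud:

from google.cloud import resourcemanager_v3
from google.iam.v1 import iam_policy_pb2

project_id = "my-project"  # placeholder
member = "serviceAccount:nexus-blobstore@my-project.iam.gserviceaccount.com"  # placeholder

client = resourcemanager_v3.ProjectsClient()

# Read the current project-level policy, append the Datastore Owner binding,
# and write the policy back.
policy = client.get_iam_policy(
    request=iam_policy_pb2.GetIamPolicyRequest(resource=f"projects/{project_id}")
)
binding = policy.bindings.add()
binding.role = "roles/datastore.owner"
binding.members.append(member)
client.set_iam_policy(
    request=iam_policy_pb2.SetIamPolicyRequest(resource=f"projects/{project_id}", policy=policy)
)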