I can copy a file to Google Cloud Storage:
% gsutil -m cp audio/index.csv gs://passive-english/audio/
If you experience problems with multiprocessing on MacOS, they might be related to https://bugs.python.org/issue33725. You can disable multiprocessing by editing your .boto config or by adding the following flag to your command: `-o "GSUtil:parallel_process_count=1"`. Note that multithreading is still available even if you disable multiprocessing.
Copying file://audio/index.csv [Content-Type=text/csv]...
\ [1/1 files][196.2 KiB/196.2 KiB] 100% Done
Operation completed over 1 objects/196.2 KiB.
But I can't change its metadata:
% gsutil setmeta -h "Cache-Control:public, max-age=7200" gs://passive-english/audio/index.csv
Setting metadata on gs://passive-english/audio/index.csv...
AccessDeniedException: 403 Access denied.
I'm authenticating using a JSON key file:
% env | grep GOOGL
GOOGLE_APPLICATION_CREDENTIALS=/app-342xxx-2cxxxxxx.json
How can I grant access so that gsutil can change metadata for the file?
Update 1:
I gave the service account the Editor role and the Storage Object Admin permission.
Update 2:
I gave the service account the Owner role and the Storage Object Admin permission. Still no luck.
To update an object's metadata you need the IAM permission storage.objects.update.
That permission is contained in the roles:
Storage Object Admin (roles/storage.objectAdmin)
Storage Admin (roles/storage.admin)
To add the required role using the CLI:
gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} \
--member=serviceAccount:${GCP_SERVICE_ACCOUNT_EMAIL} \
--role=REPLACE_WITH_REQUIRED_ROLE  # e.g. roles/storage.objectAdmin
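To double-check that the binding took effect, you can filter the project's IAM policy for the service account (an optional sanity check, reusing the same placeholder variables as above):
gcloud projects get-iam-policy ${GCP_PROJECT_ID} \
--flatten="bindings[].members" \
--filter="bindings.members:serviceAccount:${GCP_SERVICE_ACCOUNT_EMAIL}" \
--format="table(bindings.role)"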
Using the Google Cloud Console GUI:
In the Cloud Console, go to the IAM & Admin -> IAM page.
Locate the service account.
Click the pencil icon on the right hand side.
Click ADD ROLE.
Select one of the required roles.
I tried updating the metadata and was able to edit it successfully without errors.
According to the documentation, you need the Owner role on the object to edit its metadata.
You can also refer to these documents: 1 & 2
Related
I have an automated build pipeline in Google Cloud Build:
- name: "gcr.io/cloud-builders/gsutil"
entrypoint: gsutil
args: ["-m","rsync","-r","gs://my-bucket-main","gs://my-bucket-destination"]
I gave the following permissions to
xxxxxx@cloudbuild.gserviceaccount.com:
Cloud Build Service Account
Cloud Functions Developer
Service Account User
Storage Admin
Storage Object Admin
But I get:
Caught non-retryable exception while listing gs://my-bucket-destination/: AccessDeniedException: 403 xxxxx@cloudbuild.gserviceaccount.com does not have storage.objects.list access to the Google Cloud Storage bucket.
Even if I add the Owner role to xxxxxx@cloudbuild.gserviceaccount.com I get the same error. I do not understand how it is possible that Storage Admin and Storage Object Admin do not provide storage.objects.list access!
Even when I do this on my local machine, where gcloud points to the project, and I run gsutil -m rsync -r gs://my-bucket-main gs://my-bucket-destination, I still get:
Caught non-retryable exception while listing gs://my-bucket-destination/: AccessDeniedException: 403 XXXXX@YYYY.com does not have storage.objects.list access to the Google Cloud Storage bucket.
The XXXXX@YYYY.com account is the owner, and I also gave it "Storage Admin" and "Storage Object Admin" access.
Any idea?
The service account is causing that error. My suggestion is to set the correct IAM roles for your service account at the bucket level.
There are two approaches to setting the service account's permissions on the two buckets:
1. Using Google Cloud Console:
Go to the Cloud Storage Browser page.
Click the Bucket overflow menu on the far right of the row associated with the bucket.
Choose Edit bucket permissions.
Click the +Add members button.
In the New members field, enter one or more identities that need access to your bucket.
Select a role (or roles) from the Select a role drop-down menu. The roles you select appear in the pane with a short description of the permissions they grant. You can choose Storage Admin role for full control of the bucket.
Click Save.
2. Using the gsutil command:
gsutil iam ch serviceAccount:xxxxx@cloudbuild.gserviceaccount.com:objectAdmin gs://my-bucket-main
gsutil iam ch serviceAccount:xxxxx@cloudbuild.gserviceaccount.com:objectAdmin gs://my-bucket-destination
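To confirm the bindings landed, you can dump each bucket's IAM policy (an optional check):
gsutil iam get gs://my-bucket-main
gsutil iam get gs://my-bucket-destination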
For full gsutil command documentation, you may refer here: Using IAM with buckets
I am not able to copy files from a VM to a Cloud Storage bucket on GCP.
Here is what I tried.
Created a VM instance and allowed full access to all Cloud APIs; when that did not work, gave full access individually, which still did not work.
Added a file to the VM.
Created a bucket.
Tried copying the file from the VM to the bucket.
Here is the snippet from the terminal:
learn_gcp@earthquakevm:~$ ls
test.text training-data-analyst
learn_gcp@earthquakevm:~$ gsutil cp test.text gs://kukroid-gcp
Copying file://test.text [Content-Type=text/plain]...
AccessDeniedException: 403 Provided scope(s) are not authorized
learn_gcp@earthquakevm:~$
My VM details:
My bucket details:
Can anyone suggest what I am missing and how to fix it?
Maybe your VM is still using cached credentials whose access scopes have not changed.
Try deleting the ~/.gsutil directory and running gsutil again.
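For example, roughly (reusing the file and bucket names from the question):
rm -rf ~/.gsutil
gsutil cp test.text gs://kukroid-gcp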
The error 403 Provided scope(s) are not authorized, shows that the service account you're using to copy doesn't have permission to write an object to the kukroid-gcp bucket.
Based on your screenshot, you are using the Compute Engine default service account, which by default does not have access to the bucket. To make sure your service account has the correct scopes, you can use curl to query the GCE metadata server:
curl -H 'Metadata-Flavor: Google' "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes"
You can add the result of that command to your post as additional information. If no scope gives you access to the storage bucket, then you will need to add the necessary scopes. You can read about scopes here, and you can see the full list of available scopes here.
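If the output shows no storage scope, one way to change the scopes on the existing VM is to stop it and update its access scopes; a rough sketch, where <instance> and <zone> are placeholders and the instance must be stopped before its scopes can be changed:
gcloud compute instances stop <instance> --zone <zone>
gcloud compute instances set-service-account <instance> --zone <zone> --scopes storage-rw
gcloud compute instances start <instance> --zone <zone>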
Another workaround is to create a new service account instead of using the default one. To do this, here is the step-by-step process, without deleting the existing VM:
Stop the VM instance
Create a new service account: IAM & Admin > Service Accounts > Add service account
Create the new service account with the Cloud Storage Admin role
Create a private key for this service account
After creating the new service account, go to the VM, click on its name, and then click Edit
Now, in editing mode, scroll down to the Service account section and select the new service account
Start your instance, then try to copy the file again
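The same can be done from the command line; a rough sketch, where storage-copy-sa, <project-id>, and <zone> are placeholder names and the VM name is taken from the prompt in the question:
gcloud iam service-accounts create storage-copy-sa
gcloud projects add-iam-policy-binding <project-id> \
--member serviceAccount:storage-copy-sa@<project-id>.iam.gserviceaccount.com \
--role roles/storage.admin
gcloud compute instances stop earthquakevm --zone <zone>
gcloud compute instances set-service-account earthquakevm --zone <zone> \
--service-account storage-copy-sa@<project-id>.iam.gserviceaccount.com \
--scopes cloud-platform
gcloud compute instances start earthquakevm --zone <zone>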
I'm creating a sink by running the following command (as an organization administrator):
gcloud logging sinks create vpc_flow_sink storage.googleapis.com/<storage_bucket_name> --include-children --organization=<organization_id> --log-filter='resource.type="gce_subnetwork" AND logName:"logs/compute.googleapis.com%2Fvpc_flows"'
The command executes successfully and outputs the following text:
Created [https://logging.googleapis.com/v2/organizations/<organization_id>/sinks/<sink_name>].
Please remember to grant serviceAccount:o<organization_id>-511237@gcp-sa-logging.iam.gserviceaccount.com the Storage Object Creator role on the bucket.
However, when I go to actually apply the permission to the storage bucket, I cannot find this account (in either the project or the organization). The account also does not appear when I run:
gcloud organizations get-iam-policy <organization_id>
When I describe the sink, the service account exists within the writerIdentity field:
gcloud beta logging sinks describe vpc_flow_sink --organization <organization_id>
...
writerIdentity: serviceAccount:o<organization_id>-511237@gcp-sa-logging.iam.gserviceaccount.com
...
For reference, to try to debug this issue, I've attached the following roles: Organization Role Administrator, Logging Admin, Owner, Project Owner, Organization Administrator, Storage Admin.
I am genuinely lost on what to do. How do I go about granting this account the role on the bucket?
When applying the permission to your export destination, don't copy:
serviceAccount:o<organization_id>-511237@gcp-sa-logging.iam.gserviceaccount.com
but instead just use everything after serviceAccount:
o<organization_id>-511237@gcp-sa-logging.iam.gserviceaccount.com
Google will then recognize the service account. However, I still cannot detect it via gcloud organizations get-iam-policy <organization_id>
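Alternatively, the grant can be applied from the command line with gsutil, using the writer identity exactly as it appears in the describe output (the bucket name is the placeholder from the question); note that on the command line the serviceAccount: prefix is kept:
gsutil iam ch \
serviceAccount:o<organization_id>-511237@gcp-sa-logging.iam.gserviceaccount.com:roles/storage.objectCreator \
gs://<storage_bucket_name>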
I want to copy an object from a Google Compute Engine instance to a Google Storage bucket using gsutil cp. Both belong to the same owner (me) and to the same project. I want to automate the whole machine, so authenticating manually is not an option.
I have activated the necessary permissions to use a service account on a Compute instance (details below) but when I try to gsutil cp a file to the bucket, I get an AccessDeniedException.
The error message complains about missing storage.objects.create or storage.objects.list permissions, depending on whether my bucket target path ends in a folder (gs://<bucket>/test/) or a file (gs://<bucket>/test.txt).
What I did to get permissions (I have already tried a lot, including creating redundant custom roles which I also assigned to the service account):
Start the instance:
gcloud compute instances create <instance> [...] \
--service-account <name>@<project>.iam.gserviceaccount.com \
--scopes cloud-platform,storage-full
Give the service account permissions on creation.
Give the service account permissions afterwards as well (just to be safe):
gcloud projects add-iam-policy-binding <project> \
--member serviceAccount:<name>@<project>.iam.gserviceaccount.com \
--role roles/storage.objectAdmin
Edit Storage bucket permissions for the service account:
gsutil iam ch \
serviceAccount:<name>@<project>.iam.gserviceaccount.com:roles/storage.objectAdmin \
gs://<bucket>
Edit Storage bucket access control list (owner permission):
gsutil acl ch -u <name>@<project>.iam.gserviceaccount.com:O gs://<bucket>
At some point enabled bucket-level IAM policies instead of per-object policies (just to be safe).
On the instance, use
gcloud auth activate-service-account --key-file <key>.json to authenticate the account.
However, no matter what I do, the error does not change and I am not able to write to the bucket. I can, however, read files from the bucket. I also get the same error on a local machine using the service account, so the problem is not related to instances.
As well as ensuring that the service account you're using has the appropriate permissions, you also need to ensure that the instance you're using has the appropriate scopes to access Google Cloud Storage, which it usually doesn't by default. You can either set the scopes to Allow full access to all Cloud APIs or set them individually if you'd prefer. You can find instructions on how to do so here.
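A quick way to see which scopes the instance currently has (just a check; <instance> and <zone> are placeholders):
gcloud compute instances describe <instance> --zone <zone> --format="yaml(serviceAccounts)"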
I created a new service account and executed the exact same steps... Now it works.
We are attempting to import an image into GCP with the following command
gcloud compute images import
under the context of a service account. When running this command, the message states that it wants to elevate the permissions of the service account to a "Service Account Actor". This role is deprecated (see https://cloud.google.com/iam/docs/service-accounts#the_service_account_actor_role), and the recommended alternative of granting the service account "Service Account User" and "Service Account Token Creator" does not work. What would be the correct role or set of roles for executing this command?
We are running the following version of the gcloud CLI:
Google Cloud SDK 232.0.0
alpha 2019.01.27
beta 2019.01.27
bq 2.0.40
core 2019.01.27
gsutil 4.35
kubectl 2019.01.27
Also, if this is not the correct forum for this type of question, please let me know which one is and I will gladly move it there.
If this is a one-time operation, upload the image to a bucket and execute gcloud compute images import from Cloud Shell, which will run using your user permissions (likely Owner). Reference the image in the shell like gs://my-bucket/my-image.vmdk
The instructions below will be necessary if you are forced to use a service account on a VM or another resource.
You'll need to (a) identify the active service account and (b) grant the roles/compute.admin role.
(a) Identify the service account
On the system running gcloud compute images import, run this command to identify the active service account:
gcloud auth list
Credentialed Accounts
ACTIVE ACCOUNT
* SERVICE_ACCOUNT@googlexxx.com
(b) Add the roles/compute.admin role
You'll need to add the role roles/compute.admin (once it is working, switch to a less-privileged role to follow the principle of least privilege).
Open a separate Google Cloud Shell or another shell where you are authenticated with an "owner" role.
Grant the roles/compute.admin role:
# replace this with the active service account identified above
ACTIVE_SERVICE_ACCOUNT=SERVICE_ACCOUNT@googlexxx.com
gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
--member="serviceAccount:${ACTIVE_SERVICE_ACCOUNT}" \
--role=roles/compute.admin
This is what worked for me (in my case, compute.admin was not enough):
# this project hosts the service account and the instance from which the service account runs `gcloud compute images import ...`
worker_project=my-playground-for-building-stuff
# this project hosts your images (it can be the same project as ${worker_project} if that's how you roll)
image_project=my-awesome-custom-images
# this bucket will host resources required by, and artifacts created by, Cloud Build during image creation (if you have already run `gcloud compute images import ...` as a normal user rather than a service account, the bucket probably already exists in your ${image_project})
cloudbuild_bucket=${image_project}-daisy-bkt-us
# this is your service account in your ${worker_project}
service_account=my-busy-minion-who-loves-to-work@${worker_project}.iam.gserviceaccount.com
for legacy_role in legacyBucketReader legacyBucketWriter; do
gsutil iam ch serviceAccount:${service_account}:${legacy_role} gs://${cloudbuild_bucket}
done
for role in editor compute.admin iam.serviceAccountTokenCreator iam.serviceAccountUser; do
gcloud projects add-iam-policy-binding ${image_project} --member serviceAccount:${service_account} --role roles/${role}
done
for api in cloudbuild cloudresourcemanager; do
gcloud services enable ${api}.googleapis.com --project ${worker_project}
done