I just created a new bucket under the default project "My First Project".
I accidentally deleted all permissions on the bucket. Is it possible for the default permissions to be restored?
I don't need the bucket so it can be deleted, but I don't have permission to do that either.
Update
To clarify, I own the project and bucket. No other user should have access.
Following suggestions by @gso_gabriel I have tried the following:
I can list objects in the bucket:
> gsutil ls -r gs://my-bucket-name/
gs://my-bucket-name/name-of-my-file
I cannot change the ACL:
> gsutil defacl set public-read gs://my-bucket-name/
Setting default object ACL on gs://my-bucket-name/...
AccessDeniedException: 403 my-email-address does not have storage.buckets.update access to the Google Cloud Storage bucket.
> gsutil acl set -R public-read gs://my-bucket-name/
Setting ACL on gs://my-bucket-name/name-of-my-file...
AccessDeniedException: 403 my-email-address does not have storage.objects.update access to the Google Cloud Storage object.
I think there is no ACL (see the last line):
> gsutil ls -L gs://my-bucket-name/
gs://my-bucket-name/name-of-my-file
Creation time: Wed, 10 Jun 2020 01:31:20 GMT
Update time: Wed, 10 Jun 2020 01:31:20 GMT
Storage class: STANDARD
Content-Length: 514758
Content-Type: application/octet-stream
Hash (crc32c): AD4ziA==
Hash (md5): W3aLFrdB/eF85IZux9UVfQ==
ETag: CIPc1uiM9ukCEAE=
Generation: 1591752680386051
Metageneration: 1
ACL: []
Update 2
The output from the gcloud command suggested by @gso_gabriel is:
> gcloud projects get-iam-policy my_project_ID
bindings:
- members:
- user:my-email-address
role: roles/owner
etag: BwWnsC5jgkw=
version: 1
I also tried the "Policy Troubleshooter" in the IAM & Admin section of the GCP console. It showed the following:
I can create buckets and objects in the project, e.g. storage.buckets.create is enabled.
I cannot delete buckets and objects in the project, e.g. storage.buckets.delete is disabled.
I cannot get the IAM policy on buckets and objects in the project, e.g. storage.buckets.getIamPolicy is disabled.
The "Roles" associated with the project include permissions in the Storage Admin group (see the Roles subsection in the IAM & Admin section of the GCP console), i.e. permissions such as storage.objects.delete are supposedly enabled, but the Policy Troubleshooter shows that they are not being granted.
As well explained here, if you are the owner of the bucket - or at least have access to the account that owns it - you should be able to modify its ACL and add the permissions back as they were.
Once you are logged in as the owner, you just need to run the command gsutil acl set -R public-read gs://bucketName to give users public read access to the bucket. You can also check the exact default permissions here. In case you are not sure which account is the owner, run the command below - as indicated here - which will return all accounts with permissions, including one marked as OWNER.
gsutil ls -L gs://your-bucket/your-object
The output should include something like this:
{
"email": "your-service-account#appspot.gserviceaccount.com",
"entity": "user-your-service-account#appspot.gserviceaccount.com",
"role": "OWNER"
}
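If the goal is to get back to something close to the original defaults rather than make everything public, the project-private canned ACL is the usual starting point. This is only a sketch, and it assumes you can authenticate as an account that still holds storage.buckets.update on the bucket:
# Restore the project-private canned ACL on the bucket, its default object ACL,
# and all existing objects.
gsutil acl set project-private gs://my-bucket-name
gsutil defacl set project-private gs://my-bucket-name
gsutil -m acl set -R project-private gs://my-bucket-name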
Let me know if the information helped you!
Related
I can copy a file to Google Cloud Storage:
% gsutil -m cp audio/index.csv gs://passive-english/audio/
If you experience problems with multiprocessing on MacOS, they might be related to https://bugs.python.org/issue33725. You can disable multiprocessing by editing your .boto config or by adding the following flag to your command: `-o "GSUtil:parallel_process_count=1"`. Note that multithreading is still available even if you disable multiprocessing.
Copying file://audio/index.csv [Content-Type=text/csv]...
\ [1/1 files][196.2 KiB/196.2 KiB] 100% Done
Operation completed over 1 objects/196.2 KiB.
But I can't change its metadata:
% gsutil setmeta -h "Cache-Control:public, max-age=7200" gs://passive-english/audio/index.csv
Setting metadata on gs://passive-english/audio/index.csv...
AccessDeniedException: 403 Access denied.
I'm authenticating using a JSON key file:
% env | grep GOOGL
GOOGLE_APPLICATION_CREDENTIALS=/app-342xxx-2cxxxxxx.json
How can I grant access so that gsutil can change metadata for the file?
Update 1:
I gave the service account the Editor and Storage Object Admin roles.
Update 2:
I gave the service account the Owner and Storage Object Admin roles. Still no luck.
To update an object's metadata you need the IAM permission storage.objects.update.
That permission is contained in the roles:
Storage Object Admin (roles/storage.objectAdmin)
Storage Admin (roles/storage.admin)
To add the required role using the CLI:
gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} \
  --member=serviceAccount:${GCP_SERVICE_ACCOUNT_EMAIL} \
  --role=REPLACE_WITH_REQUIRED_ROLE   # e.g. roles/storage.objectAdmin
Using the Google Cloud Console GUI:
In the Cloud Console, go to the IAM & Admin -> IAM page.
Locate the service account.
Click the pencil icon on the right hand side.
Click ADD ROLE.
Select one of the required roles.
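Whichever method you use, you can then verify which roles the service account actually holds at the project level. This is just a sketch using standard gcloud filtering, with the same placeholder variables as above:
# List the roles bound to the service account in the project's IAM policy.
gcloud projects get-iam-policy ${GCP_PROJECT_ID} \
    --flatten="bindings[].members" \
    --filter="bindings.members:serviceAccount:${GCP_SERVICE_ACCOUNT_EMAIL}" \
    --format="value(bindings.role)"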
I tried to update the metadata myself and was able to edit it successfully without errors.
According to the documentation, you need to have the Owner role on the object to edit its metadata.
You can also refer to these documents: 1 & 2
I have an automated build pipeline in Google Cloud Build:
- name: "gcr.io/cloud-builders/gsutil"
entrypoint: gsutil
args: ["-m","rsync","-r","gs://my-bucket-main","gs://my-bucket-destination"]
I gave the following roles to
xxxxxx@cloudbuild.gserviceaccount.com:
Cloud Build Service Account
Cloud Functions Developer
Service Account User
Storage Admin
Storage Object Admin
But I get :
Caught non-retryable exception while listing gs://my-bucket-destination/: AccessDeniedException: 403 xxxxx@cloudbuild.gserviceaccount.com does not have storage.objects.list access to the Google Cloud Storage bucket.
Even if I add the Owner role to xxxxxx@cloudbuild.gserviceaccount.com I get the same error. I do not understand how it is possible that Storage Admin and Storage Object Admin do not provide storage.objects.list access!
Even when I do this on my local machine, where gcloud is pointed at the project, and I run gsutil -m rsync -r gs://my-bucket-main gs://my-bucket-destination, I still get:
Caught non-retryable exception while listing gs://my-bucket-destination/: AccessDeniedException: 403 XXXXX@YYYY.com does not have storage.objects.list access to the Google Cloud Storage bucket.
The XXXXX@YYYY.com account is the owner, and I also gave it "Storage Admin" and "Storage Object Admin" access too.
Any ideas?
The service account is the one hitting that error. My suggestion is to set the correct IAM roles for your service account at the bucket level.
There are two approaches to set the service account's permissions on the two buckets:
1. Using Google Cloud Console:
Go to the Cloud Storage Browser page.
Click the Bucket overflow menu on the far right of the row associated with the bucket.
Choose Edit bucket permissions.
Click +Add members button.
In the New members field, enter one or more identities that need access to your bucket.
Select a role (or roles) from the Select a role drop-down menu. The roles you select appear in the pane with a short description of the permissions they grant. You can choose Storage Admin role for full control of the bucket.
Click Save.
2. Using gsutil command:
gsutil iam ch serviceAccount:xxxxx@cloudbuild.gserviceaccount.com:objectAdmin gs://my-bucket-main
gsutil iam ch serviceAccount:xxxxx@cloudbuild.gserviceaccount.com:objectAdmin gs://my-bucket-destination
For the full gsutil command documentation, you may refer to Using IAM with buckets.
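Afterwards you can confirm the bindings actually landed on each bucket (a quick check, reusing the bucket names from the question):
# Print the bucket-level IAM policy; the new objectAdmin binding should be listed.
gsutil iam get gs://my-bucket-main
gsutil iam get gs://my-bucket-destination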
I created a service account:
gcloud iam service-accounts create test01 --display-name "test01"
And I gave it full access to Cloud Storage:
gcloud projects add-iam-policy-binding project-name \
--member serviceAccount:test01@project-name.iam.gserviceaccount.com \
--role roles/storage.admin
This code works:
from google.cloud import storage
client = storage.Client()
buckets = list(client.list_buckets())
print(buckets)
bucket = client.get_bucket('bucket-name')
print(list(bucket.list_blobs()))
But my project has multiple buckets for different environments, and for security reasons I want to add access for only one bucket per user.
In the documentation I found this text:
When applied to an individual bucket, control applies only to the specified bucket and objects within the bucket.
How to apply roles/storage.admin to an individual bucket?
Update:
I tried ACLs, and there is a problem: I added access for the user:
gsutil iam ch \
serviceAccount:test01@project-name.iam.gserviceaccount.com:legacyBucketOwner \
gs://bucket-name
The user can list all files, add and create files, and view their own files.
But the user can't view files uploaded by other users.
Update 2:
I updated the default ACL:
gsutil defacl ch -u \
test01@project-name.iam.gserviceaccount.com:OWNER gs://bucket-name
I waited quite a while, created another file as another user, and it's still inaccessible to test01.
Solution:
I made it from scratch, and it works:
gsutil mb -p example-logs -c regional -l EUROPE-WEST2 gs://example-dev
gcloud iam service-accounts create test-dev --display-name "test-dev"
gcloud iam service-accounts create test-second --display-name "test-second"
# download 2 json keys from https://console.cloud.google.com/iam-admin/serviceaccounts
gsutil iam ch serviceAccount:test-dev@example-logs.iam.gserviceaccount.com:legacyBucketOwner gs://example-dev
gsutil iam ch serviceAccount:test-second@example-logs.iam.gserviceaccount.com:legacyBucketOwner gs://example-dev
gsutil defacl ch -u test-dev@example-logs.iam.gserviceaccount.com:OWNER gs://example-dev
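Note that the default object ACL only affects objects created after the change; for objects that already exist, something along these lines (a sketch reusing the bucket and account names from above) may also be needed:
# Grant OWNER on all existing objects to the other service account, recursively.
gsutil -m acl ch -R -u test-dev@example-logs.iam.gserviceaccount.com:OWNER gs://example-dev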
In order for a user to work with a bucket, that user must be granted authority to work with that bucket. This is achieved with permissions. Permissions can be bundled into roles, and when we give a user a role, the user gets all the permissions contained in that role.
For example, a user can be given the role "Storage Admin" and will then be able to perform work against all buckets in your project.
If that is too much, then you can choose NOT to give the user "Storage Admin", in which case the user will not be allowed to access any bucket. Obviously that is too restrictive. What you can then do is pick the individual buckets that you wish the user to access and, for each of those buckets, change the permissions of THOSE buckets. Within the permissions of a bucket you can name users and roles. For just THAT bucket, the named user will have the named role.
For more details see Creating and Managing Access Control Lists (ACLs).
You can apply Storage Admin to an individual bucket like below:
gsutil iam ch serviceAccount:service_account_email:admin gs://bucket_name
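If you prefer the newer gcloud CLI, an equivalent bucket-level binding looks roughly like this (a sketch with the same placeholder names as above):
# Same bucket-level binding via gcloud instead of gsutil.
gcloud storage buckets add-iam-policy-binding gs://bucket_name \
    --member=serviceAccount:service_account_email \
    --role=roles/storage.admin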
How do I set permissions in such a way that anyone can upload files to my bucket?
Here is an example that has these 3 features:
I can upload any file and download my file from anywhere.
But I am not able to download files uploaded by others.
However, I can delete files uploaded by others.
I would like to know how this bucket (abc) was set up and who owns it.
1) I can upload:
[root@localhost ~]# aws s3 cp test.txt s3://abc/
upload: ./test.txt to s3://abc/test.txt
2) I can list contents:
[root@localhost ~]# aws s3 ls s3://abc | head
PRE doubleverify-iqm/
PRE folder400/
PRE ngcsc/
PRE out/
PRE pd/
PRE pit/
PRE soap1/
PRE some-subdir/
PRE swoo/
2018-06-15 12:06:27 2351 0Sw5xyknAcVaqShdROBSfCfa7sdA27WbFMm4QNdUHWqf2vymo5.json
3) I can download my file from anywhere:
[root@localhost ~]# aws s3 cp s3://abc/test.txt .
download: s3://abc/test.txt to ./test.txt
4) But I am not able to download others' files:
[root@localhost ~]# aws s3 cp s3://abc/zQhAqmwIUfIeDnEEHpiaGhXuERgO3bR84jkjhbei1aLiV1758t.json .
fatal error: An error occurred (403) when calling the HeadObject operation: Forbidden
5) However, I can delete files not uploaded by me:
[root@localhost ~]# aws s3 rm s3://abc/zQhAqmwIUfIeDnEEHpiaGhXuERgO3bR84jkjhbei1aLiV1758t.json
delete: s3://abc/zQhAqmwIUfIeDnEEHpiaGhXuERgO3bR84jkjhbei1aLiV1758t.json
I am not sure how to set-up such a bucket.
It is not advisable to set up a bucket in this manner.
The fact that anyone can upload to the bucket means that somebody could store, potentially, TBs of data and you would be liable for the cost. For example, somebody could host large video files, using your bucket for free storage and bandwidth.
Similarly, it is not good security practice to grant permissions for anyone to list the contents of your bucket. They might find sensitive data that was not intended to be released.
It would also be unwise to allow anyone to delete objects from your bucket, because somebody could delete everything!
There are two primary ways to grant access to objects:
Bucket Policy
A Bucket Policy can grant permissions on the whole bucket, or specific paths within a bucket. For example, granting GetObject to the whole bucket means that anyone can download any object.
See: Bucket Policy Examples - Amazon Simple Storage Service
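As a concrete illustration, a policy along these lines (a sketch, reusing the bucket name abc from the question) would let anyone download any object, which is the GetObject case described above:
# Attach a bucket policy granting public read (s3:GetObject) on every object in the bucket.
aws s3api put-bucket-policy --bucket abc --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "PublicReadGetObject",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::abc/*"
  }]
}'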
Object-level permissions
Basic permissions can also be granted on a per-object basis. For example, when an object is copied to a bucket, the Access Control List (ACL) can specify who can access the object.
For example, this would grant ownership of the object to the bucket owner:
aws s3 cp foo.txt s3://my-bucket/foo.txt --acl bucket-owner-full-control
If the --acl is excluded, then the object 'belongs' to the identity that uploaded the file, which is why you were able to download your own file. This is not recommended, because it could lead to a situation where the bucket owner cannot access (and potentially cannot even delete!) the object.
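If an object was already uploaded without that ACL, the object's owner can usually hand control back after the fact; a sketch, reusing the object key from the question:
# Grant the bucket owner full control over an existing object (run as the object's owner).
aws s3api put-object-acl \
    --bucket abc \
    --key zQhAqmwIUfIeDnEEHpiaGhXuERgO3bR84jkjhbei1aLiV1758t.json \
    --acl bucket-owner-full-control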
Bottom line: Think about your security before implementing rules that grant other people, or anyone, permissions on your buckets.
I have tried to access files in a bucket and I keep getting access denied on the files. I can see them in the GCS console but cannot access them through it, and I cannot access them through gsutil either, running the command below.
gsutil cp gs://my-bucket/folder-a/folder-b/mypdf.pdf files/
But all this returns is AccessDeniedException: 403 Forbidden
I can list all the files and such, but not actually access them. I've tried adding my user to the ACL, but that still had no effect. All the files were uploaded from a VM through a FUSE mount, which worked perfectly, and then all access was lost.
I've checked these posts but none seem to have a solution thats helped me
Can't access resource as OWNER despite the fact I'm the owner
gsutil copy returning "AccessDeniedException: 403 Insufficient Permission" from GCE
gsutil cors set command returns 403 AccessDeniedException
Although this is quite an old question, I had a similar issue recently. After trying many options suggested here without success, I carefully re-examined my script and discovered I was getting the error as a result of a mistake in my bucket address gs://my-bucket. I fixed it and it worked perfectly!
This is quite possible. Owning a bucket grants FULL_CONTROL permission to that bucket, which includes the ability to list objects within that bucket. However, bucket permissions do not automatically imply any sort of object permissions, which means that if some other account is uploading objects and sets ACLs to be something like "private," the owner of the bucket won't have access to it (although the bucket owner can delete the object, even if they can't read it, as deleting objects is a bucket permission).
I'm not familiar with the default FUSE settings, but if I had to guess, you're using your project's system account to upload the objects, and they're set to private. That's fine. The easiest way to test that would be to run gsutil from a GCE host, where the default credentials will be the system account. If that works, you could use gsutil to switch the ACLs to something more permissive, like "project-private."
The command to do that would be:
gsutil acl set -R project-private gs://myBucketName/
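To confirm that object ACLs really are the culprit before changing anything, you can inspect the ACL on one of the affected objects (using the object path from the question):
# Show the ACL currently applied to the object; a missing entry for your account would explain the 403.
gsutil acl get gs://my-bucket/folder-a/folder-b/mypdf.pdf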
tl;dr The Owner (basic) role has only a subset of the GCS permissions present in the Storage Admin (predefined) role—notably, Owners cannot access bucket metadata, list/read objects, etc. You would need to grant the Storage Admin (or another, less privileged) role to provide the needed permissions.
NOTE: This explanation applies to GCS buckets using uniform bucket-level access.
In my case, I had enabled uniform bucket-level access on an existing bucket, and found I could no longer list objects, despite being an Owner of its GCP project.
This seemed to contradict how GCP IAM permissions are inherited (organization → folder → project → resource / GCS bucket), since I expected to have Owner access at the bucket level as well.
But as it turns out, the Owner permissions were being inherited as expected; they were simply insufficient for listing GCS objects.
The Storage Admin role has the following permissions which are not present in the Owner role: [1]
storage.buckets.get
storage.buckets.getIamPolicy
storage.buckets.setIamPolicy
storage.buckets.update
storage.multipartUploads.abort
storage.multipartUploads.create
storage.multipartUploads.list
storage.multipartUploads.listParts
storage.objects.create
storage.objects.delete
storage.objects.get
storage.objects.getIamPolicy
storage.objects.list
storage.objects.setIamPolicy
storage.objects.update
This explained the seemingly strange behavior. And indeed, after granting the Storage Admin role (whereby my user was both Owner and Storage Admin), I was able to access the GCS bucket.
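For reference, granting that role on just the affected bucket might look like this; a sketch with placeholder names, and note that with uniform bucket-level access enabled this is an IAM binding rather than an ACL:
# Grant Storage Admin on a single bucket to a specific user (placeholder user and bucket).
gsutil iam ch user:me@example.com:admin gs://my-bucket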
Footnotes
Though the documentation page Understanding roles omits the list of permissions for Owner (and other basic roles), it's possible to see this information in the GCP console:
Go to "IAM & Admin"
Go to "Roles"
Filter for "Owner"
Go to "Owner"
(See list of permissions)