AccessDeniedException: 403 Forbidden on GCS using owner account - google-cloud-platform

I have tried to access files in a bucket and I keep getting access denied on the files. I can see them in the GCS console but cannot access them there, and I cannot access them through gsutil either, running the command below.
gsutil cp gs://my-bucket/folder-a/folder-b/mypdf.pdf files/
But all this returns is AccessDeniedException: 403 Forbidden
I can list all the files and such but not actually access them. I've tried adding my user to the ACL but that still had no effect. All the files were uploaded from a VM through a FUSE mount, which worked perfectly, and then I just lost all access.
I've checked these posts but none seem to have a solution that's helped me:
Can't access resource as OWNER despite the fact I'm the owner
gsutil copy returning "AccessDeniedException: 403 Insufficient Permission" from GCE
gsutil cors set command returns 403 AccessDeniedException

Although this is quite an old question, I had a similar issue recently. After trying many options suggested here without success, I carefully re-examined my script and discovered I was getting the error as a result of a mistake in my bucket address gs://my-bucket. I fixed it and it worked perfectly!

This is quite possible. Owning a bucket grants FULL_CONTROL permission to that bucket, which includes the ability to list objects within that bucket. However, bucket permissions do not automatically imply any sort of object permissions, which means that if some other account is uploading objects and sets ACLs to be something like "private," the owner of the bucket won't have access to it (although the bucket owner can delete the object, even if they can't read it, as deleting objects is a bucket permission).
I'm not familiar with the default FUSE settings, but if I had to guess, you're using your project's system account to upload the objects, and they're set to private. That's fine. The easiest way to test that would be to run gsutil from a GCE host, where the default credentials will be the system account. If that works, you could use gsutil to switch the ACLs to something more permissive, like "project-private."
The command to do that would be:
gsutil acl set -R project-private gs://myBucketName/
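If you want to confirm what an object's current ACL actually is before changing anything, you can inspect a single affected object first (using the path from the question as an example):
# Show the ACL currently attached to one of the affected objects
gsutil acl get gs://my-bucket/folder-a/folder-b/mypdf.pdf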

tl;dr The Owner (basic) role has only a subset of the GCS permissions present in the Storage Admin (predefined) role—notably, Owners cannot access bucket metadata, list/read objects, etc. You would need to grant the Storage Admin (or another, less privileged) role to provide the needed permissions.
NOTE: This explanation applies to GCS buckets using uniform bucket-level access.
In my case, I had enabled uniform bucket-level access on an existing bucket, and found I could no longer list objects, despite being an Owner of its GCP project.
This seemed to contradict how GCP IAM permissions are inherited (organization → folder → project → resource / GCS bucket), since I expected to have Owner access at the bucket level as well.
But as it turns out, the Owner permissions were being inherited as expected; they were simply insufficient for listing GCS objects.
The Storage Admin role has the following permissions which are not present in the Owner role: [1]
storage.buckets.get
storage.buckets.getIamPolicy
storage.buckets.setIamPolicy
storage.buckets.update
storage.multipartUploads.abort
storage.multipartUploads.create
storage.multipartUploads.list
storage.multipartUploads.listParts
storage.objects.create
storage.objects.delete
storage.objects.get
storage.objects.getIamPolicy
storage.objects.list
storage.objects.setIamPolicy
storage.objects.update
This explained the seemingly strange behavior. And indeed, after granting the Storage Admin role (whereby my user was both Owner and Storage Admin), I was able to access the GCS bucket.
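For reference, granting the role from the command line might look like the following; my-project and me@example.com are placeholders for your own project ID and account:
# Grant the Storage Admin role at the project level (placeholders: my-project, me@example.com)
gcloud projects add-iam-policy-binding my-project \
    --member="user:me@example.com" \
    --role="roles/storage.admin"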
Footnotes
Though the documentation page Understanding roles omits the list of permissions for Owner (and other basic roles), it's possible to see this information in the GCP console:
Go to "IAM & Admin"
Go to "Roles"
Filter for "Owner"
Go to "Owner"
(See list of permissions)
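Alternatively, the same permission lists can be pulled from the CLI, which makes the comparison easier:
# List the permissions included in the basic Owner role
gcloud iam roles describe roles/owner
# Compare against the Storage Admin predefined role
gcloud iam roles describe roles/storage.admin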

Related

Does age of an IAM account affect object-level permissions in AWS S3?

I am working with Terraform and cannot initialise the working directory. For context, the bucket and state file were made by someone who has since left the company.
I have granted myself Write and List objects permissions, plus Read and Write Bucket ACL. The debug log shows that I am able to ListObject from the bucket, but I fail at GetObject with an AccessDenied error. Attempting to download the state file with the AWS CLI returns the same error, as expected. I am an admin and I am able to download the state file from the S3 console.
My co-worker who has the same permission set as me (admin) is able to download the state file via AWS CLI without issue and her IAM account was made before the terraform state bucket was made. Does the age of our IAM account affect access?
No, the age of an account does not affect the permissions attached to it in any way. You can't access the S3 bucket because either the role used by Terraform does not have the necessary permissions or the bucket policy explicitly denies access; chances are the role itself is simply missing the necessary permissions.
In order for Terraform to be able to work with a remote state in S3, the following permissions are required (source):
s3:ListBucket on arn:aws:s3:::mybucket
s3:GetObject on arn:aws:s3:::mybucket/path/to/my/key
s3:PutObject on arn:aws:s3:::mybucket/path/to/my/key
s3:DeleteObject on arn:aws:s3:::mybucket/path/to/my/key
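As a sketch, those permissions could be attached as an inline IAM policy with the AWS CLI; the user name and policy name below are placeholders, and the bucket/key should match your backend configuration:
aws iam put-user-policy \
  --user-name my-terraform-user \
  --policy-name terraform-state-access \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": "s3:ListBucket",
        "Resource": "arn:aws:s3:::mybucket"
      },
      {
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
        "Resource": "arn:aws:s3:::mybucket/path/to/my/key"
      }
    ]
  }'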

GCS limit bucket access to an existing service account

Usually there is a Compute Engine default service account that is created automatically by GCP. This account is used, for example, by VM agents to access different resources across GCP, and by default it has roles/editor permissions.
Suppose I want to create GCS bucket that can only be accessed by this default service account and no one else. I've looked into ACLs and tried to add an ACL to the bucket with this default service account email but it didn't really work.
I realized that I can still access bucket and objects in this bucket from other accounts that have for example storage bucket read and storage object read permissions and I'm not sure what I did wrong (maybe some default ACLs are present?).
My questions are:
Is it possible to limit access to just that default account? In that case who will not be able to access it?
What would be the best way to do it? (would appreciate a lot an example using Storage API)
There are still roles such as roles/storage.admin, and no matter what ACLs are put on the bucket, I could still access it if I had this role (or a higher role such as Owner), right?
Thanks!
I recommend you not use ACLs (and so does Google). It's better to switch the bucket to uniform bucket-level access (IAM only).
There are two downsides to ACLs:
Newly created files don't get your ACLs automatically, so you need to set them every time you create a new file.
It's difficult to know who has access and who doesn't with ACLs; the IAM service is better for auditing.
When you switch to uniform bucket-level access, the Owner, Viewer, and Editor roles no longer have access to buckets (the storage.admin permissions aren't included in these basic roles). That can solve all the unwanted access in one click. Otherwise, as John said, remove all the IAM permissions on the bucket and on the project that grant access to the bucket, except for your service account.
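A rough sketch of that switch plus a bucket-level grant for the service account; the bucket name is a placeholder, and the default Compute Engine service account normally has the form PROJECT_NUMBER-compute@developer.gserviceaccount.com:
# Switch the bucket to uniform bucket-level access so ACLs no longer apply
gsutil ubla set on gs://my-bucket
# Grant only the default Compute Engine service account access to objects in the bucket
gsutil iam ch serviceAccount:123456789012-compute@developer.gserviceaccount.com:roles/storage.objectAdmin gs://my-bucket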
You can control access to buckets and objects using Cloud IAM and ACLs.
For example, grant the service account WRITE access (R: READ, W: WRITE, O: OWNER) to the bucket using ACLs:
gsutil acl ch -u service-account@project.iam.gserviceaccount.com:W gs://my-bucket
To remove the service account's access to the bucket:
gsutil acl ch -d service-account@project.iam.gserviceaccount.com gs://my-bucket
If there are identities with roles such as roles/storage.admin at the project level, they will have access to all the GCS resources of the project. You might have to change those permissions to prevent them from having access.
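If you do need to strip such a project-level grant, removing one binding might look like this (the project ID, user, and role below are placeholders):
# Remove a project-level role binding that would otherwise grant bucket access
gcloud projects remove-iam-policy-binding my-project \
    --member="user:someone@example.com" \
    --role="roles/storage.admin"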

How do I give a GCP service account storage.buckets.list access with read only access?

I'm trying to do gsutil ls however that results in:
ubuntu@ip:~$ gsutil ls
AccessDeniedException: 403 xxxxxxxxxxxx@xxxxxxxxxx.iam.gserviceaccount.com does not have storage.buckets.list access to project xxxxxxxxxxxxxxx.
Can I give this permission with only read / viewer access IAM roles?
You certainly can. At a minimum, you can always create a custom role with exactly the permissions you want. You do this by clicking the Create Role button at the top of the roles tab. Then, once it is created, apply that role to your service account on the IAM page, like any other role.
Alternatively, you can use the same roles tab in the cloud console to search for that permission explicitly to see which roles contain it and see if any would work for you.
In this case, however, I don't see an obvious predefined role that is that limited. That said, you could look at Storage Legacy Bucket Reader (roles/storage.legacyBucketReader) as a starting point for a custom role: if you select this role on the roles tab, you can use 'Create Role from Selection' to build on it.
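A custom role can also be created from the CLI rather than the console. The role ID, project, and permission set below are just one possible read-oriented combination, not an official role definition:
# Create a custom role that can list buckets and read objects
gcloud iam roles create bucketListerReadOnly \
    --project=my-project \
    --title="Bucket Lister (read only)" \
    --permissions=storage.buckets.list,storage.objects.list,storage.objects.get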
The command gsutil ls lists the buckets in your project.
To list buckets you need the permission storage.buckets.list.
To list the objects in a bucket you need the permission storage.objects.list.
Neither of those permissions allows you to read an object. To read an object you need the permission storage.objects.get.
To only read an object, you do not need the list permissions. However, since you are running gsutil ls, you do need them.
There are several predefined roles that you can attach to your service account to grant the necessary permissions for gsutil.
Recommended:
roles/storage.objectViewer
Or the following two roles:
roles/storage.legacyObjectReader
roles/storage.legacyBucketReader
If you ONLY want to assign a role to read an object but not list them:
roles/storage.legacyObjectReader
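Granting the recommended role to the service account from the question might look like the following; the project ID and service account email are placeholders:
# Bind roles/storage.objectViewer to the service account at the project level
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:my-sa@my-project.iam.gserviceaccount.com" \
    --role="roles/storage.objectViewer"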
Read-only permissions for a GCP GCS bucket as of January 2022:
storage.buckets.get
storage.buckets.list
storage.objects.get
storage.objects.list
The "Viewer" role for the project is probably what you are looking for to view all the buckets in a project.
Otherwise, by giving only the "Storage Object Viewer" role, you can only view the content INSIDE the bucket, by going directly to the bucket's URL.

GCP, only list buckets where user has permissions

I am trying to figure out a way to allow a GCP user to list buckets, but only those where the user has permissions (through ACLs). The reason is that the number of buckets can be overwhelming and the user experience would not be the best. Any ideas?
Thanks!
I am trying to figure out a way to allow a GCP user to list buckets
but only those where the user has permissions (through ACL).
You cannot accomplish your goal using either Bucket ACLs or IAM permissions.
To list Google Cloud Storage buckets, you need the IAM permission storage.buckets.list.
This permission grants the IAM member account permission to list all buckets in the project. You cannot restrict this permission further to list only specific bucket names. This permission does not allow listing the objects in a bucket.
For a future design decision, you can use different projects and organize your buckets under projects. This will limit access to only IAM members of that project.
When you create a bucket you permanently define its name, location and the project it is part of. These characteristics cannot be changed later.
If you're using the CLI, you can write a script that gets the permissions for each listed bucket, and only displays it if the user account is in the permission list:
for bucket in $(gsutil ls); do
  if gsutil acl get "$bucket" | grep -q "$(gcloud config get-value account)"; then
    echo "$bucket"
  fi
done
Note that inherited permissions (e.g. at the project level) will not appear with this script.
This can't be accomplished with the console, but if you need a web interface listing only certain buckets, then you can build it yourself by calling the API and doing the same thing that the CLI script does.

In S3, is there a way to deny bucket owner accessing object?

Firstly, what is the difference between these two ACL options: private and bucket-owner-full-control? From the documentation, one is FULL_CONTROL for the 'owner', the other is FULL_CONTROL for 'both the object owner and the bucket owner'. So I thought that 'private' applies only to the object owner, not even to the bucket owner, and hence the bucket owner can't access the object. That turns out not to be true...
Secondly, is there a way to stop browsing data from S3 console at all?
Thanks.
An object is an item in a bucket.
An access policy allows more permissions than an ACL does; you use ACLs primarily to grant basic read/write permissions, similar to file system permissions.
For full control, there is the canned ACL bucket-owner-full-control:
Both the object owner and the bucket owner get FULL_CONTROL over the object. If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
The private canned ACL applies to both buckets and objects:
Owner gets FULL_CONTROL. No one else has access rights (default).
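For instance, an uploader can grant the bucket owner full control at upload time with the canned ACL; the bucket, key, and file names below are placeholders:
aws s3api put-object \
  --bucket my-bucket \
  --key reports/mypdf.pdf \
  --body ./mypdf.pdf \
  --acl bucket-owner-full-control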
To answer the second question: the only way to stop data from being browsed in the console is to log in as an IAM user who does not have permission to S3. If you are logged in as the root user (NOT RECOMMENDED), you will see all the AWS resources in the console.
Hope this helps and for reference read this.