How to check permissions on folders in S3? - amazon-web-services

I simply want to check the permissions that I have on buckets/folders/files in AWS S3. Something like:
ls -l
Sounds like it should be pretty easy, but I cannot find any information on the subject. I just want to know whether I have read access to some content, or whether I can load a file locally, without actually trying to load the data and having an "Error Code: 403 Forbidden" thrown at me.
Note: I am using Databricks and want to check the permissions from there.
Thanks!

You can check the permissions using the command:
aws s3api get-object-acl --bucket my-bucket --key index.html
The ACL for each object can vary across your bucket.
More documentation at:
https://docs.aws.amazon.com/cli/latest/reference/s3api/get-object-acl.html
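If the goal is just to find out whether you can read an object without downloading its data, a HEAD request is usually enough. A minimal sketch, assuming the AWS CLI is configured with your credentials (the bucket and key are placeholders):
# Returns the object's metadata if you have read access, or a 403/404 error otherwise,
# without transferring the object's contents
aws s3api head-object --bucket my-bucket --key path/to/file.csv
If boto3 is available in your Databricks notebook, the equivalent call there is head_object on an S3 client.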
Hope it helps.

There are several different ways to grant access to objects in Amazon S3.
Permissions can be granted on a whole bucket, or a path within a bucket, via a Bucket Policy.
Permissions can also be granted to an IAM User or Role, giving that specific user/role permissions similar to a bucket policy.
Then there are permissions on the object itself, such as making it publicly readable.
So, there is no simple way to say "what are the permissions on this particular object", because it depends on who you are. Policies can also restrict by IP address and time of day, so there isn't always a single answer.
You could use the IAM Policy Simulator to test whether a certain call (e.g. PutObject or GetObject) would work for a given user.
Some commands in the AWS Command-Line Interface (CLI) come with a --dryrun option that will simply test whether the command would have worked, without actually executing the command.
Or, sometimes it is just easiest to try to access the object and see what happens!
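For example, here is a hedged sketch of both CLI-based checks (the account ID, user name, bucket, and key are placeholders):
# Ask IAM whether a given principal would be allowed to call s3:GetObject on an object
aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::123456789012:user/my-user \
  --action-names s3:GetObject \
  --resource-arns arn:aws:s3:::my-bucket/index.html
# Dry-run an upload without actually transferring anything
aws s3 cp local-file.txt s3://my-bucket/some/prefix/ --dryrun
Note that simulate-principal-policy evaluates the principal's identity-based policies, so the result may differ if a bucket policy or object ACL is also involved.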

Related

Cloud IAM Conditions for GCS objects

I'm having problems using Cloud IAM Conditions to limit my service account to only have permissions to read certain files in a GCS bucket.
I'm trying to use the following condition:
resource.name.startsWith("projects/_/buckets/some-bucket/objects/fooItems%2f12345")
where I want to allow the service account to ONLY have READ access to files with the prefix fooItems/12345 inside the bucket some-bucket.
I.e., the following files should not be authorized:
gs://some-bucket/fooItems/555/f.txt
gs://some-bucket/fooItems/555/foo/s.log
while the following files should be authorized:
gs://some-bucket/fooItems/1234/f.txt
gs://some-bucket/fooItems/1234/foo/s.log
The problem I'm having is that even files such as gs://some-bucket/fooItems/555/* are readable.
I tried both with and without the encoded /, i.e.:
resource.name.startsWith("projects/_/buckets/some-bucket/objects/fooItems/12345")
and
resource.name.startsWith("projects/_/buckets/some-bucket/objects/fooItems%2f12345")
You should use ACLs instead of IAM permissions if you want to grant permissions only on some of the files in a bucket. This is called fine-grained access control.
To achieve this, change the access control of your bucket to fine-grained; then you can navigate to the objects you want to grant the service account access to and add it as documented.
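As a rough sketch (the service-account address is a placeholder), once the bucket uses fine-grained access control you could grant object-level read access with gsutil:
# Switch the bucket from uniform to fine-grained (per-object ACL) access control
gsutil uniformbucketlevelaccess set off gs://some-bucket
# Grant the service account READ on the existing objects under the prefix
gsutil acl ch -u my-sa@my-project.iam.gserviceaccount.com:R "gs://some-bucket/fooItems/12345/**"
Keep in mind that object ACLs apply per object, so newly uploaded objects under that prefix would need the same grant (or a matching default object ACL set with gsutil defacl).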

How to access aws s3 current bucketlist content info

I have been provided with the access key and secret key for an Amazon S3 bucket. No more details were provided other than to drop some files into some specific folder.
I downloaded the Amazon CLI and also the Amazon SDK. So far, there seems to be no way for me to check the bucket name or list the folders where I'm supposed to drop my files. Every single command seems to require knowledge of a bucket name.
Trying to list with aws s3 ls gives me the error:
An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied
Is there a way to list the content of my current location (I'm guessing the credentials I was given are linked directly to a bucket?). I'd like to see at least the folders where I'm supposed to drop my files, but the SDK client for the console app I'm building seems to always require a bucket name.
Was I provided incomplete info or limited rights?
Do you know the bucket name or not? If you don't, and you don't have permission for ListAllMyBuckets and GetBucketLocation on * and ListBucket on the bucket in question, then you can't get the bucket name. That's how it is supposed to work. If you know the bucket, then you can run aws s3 ls s3://bucket-name/ to get the objects in the bucket.
Note that S3 buckets don't have the concept of a "folder". It's user-interface "sugar" to make it look like folders and files. Internally, there is just the key and the object.
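For example, once you know the bucket name, you can list the keys under the prefix (the "folder") you were told to drop files into; the bucket and prefix below are placeholders, and both calls require ListBucket permission:
aws s3 ls s3://bucket-name/some/prefix/
# or, with the lower-level API:
aws s3api list-objects-v2 --bucket bucket-name --prefix some/prefix/ --delimiter /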
Looks like it was just not possible without enhanced rights or the actual bucket name. I was able to procure both later on from the client and complete the task. Thanks for the comments.

GCP, only list buckets where user has permissions

I am trying to figure out a way to allow a GCP user to list buckets, but only those where the user has permissions (through ACL). The reason is that the number of buckets can be overwhelming and the user experience would not be the best. Any ideas?
Thanks!
I am trying to figure out a way to allow a GCP user to list buckets
but only those where the user has permissions (through ACL).
You cannot accomplish your goal using either Bucket ACLs or IAM permissions.
To list Google Cloud Storage buckets, you need the IAM permission storage.buckets.list.
This permission grants the IAM member account permission to list all buckets in the project. You cannot restrict this permission further to list only specific bucket names. This permission does not allow listing the objects in a bucket.
For a future design decision, you can use different projects and organize your buckets under projects. This will limit access to only IAM members of that project.
When you create a bucket you permanently define its name, location and the project it is part of. These characteristics cannot be changed later.
If you're using the CLI, you can write a script that gets the permissions for each listed bucket, and only displays it if the user account is in the permission list:
for bucket in $(gsutil ls); do
  if gsutil acl get "$bucket" | grep -q "$(gcloud config get-value account)"; then
    echo "$bucket"
  fi
done
Note that inherited permissions (e.g. at the project level) will not appear with this script.
This can't be accomplished with the console, but if you need a web interface listing only certain buckets, then you can build it yourself by calling the API and doing the same thing that the CLI script does.
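As a rough sketch of that API-based approach from a shell, assuming you can call each bucket's getIamPolicy endpoint (like the ACL script above, it only finds direct bindings on the bucket, not inherited project-level roles):
ACCOUNT="$(gcloud config get-value account)"
TOKEN="$(gcloud auth print-access-token)"
for bucket in $(gsutil ls); do
  name="${bucket#gs://}"; name="${name%/}"
  # Fetch the bucket's IAM policy via the JSON API and look for the current account
  if curl -s -H "Authorization: Bearer ${TOKEN}" \
      "https://storage.googleapis.com/storage/v1/b/${name}/iam" | grep -q "${ACCOUNT}"; then
    echo "${bucket}"
  fi
done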

AWS - S3 - Create bucket which is already existing - through CLI

Through the AWS Console, if you try to create a bucket that already exists, the console will not allow you to create it again.
But through the CLI it will allow you to create it again: when you execute the make-bucket command with an existing bucket, it just shows a success message.
It's really confusing, as the CLI doesn't show an error. It is confusing that the two processes behave differently.
Any idea why this is the behavior and why the CLI doesn't throw any error for the same operation?
In a distributed system, when you ask to create a resource, most of the time it will be treated as an upsert; throwing an error back is a costly process.
If you want to check whether a bucket exists and whether you have the appropriate privileges, use the following command:
aws s3api head-bucket --bucket my-bucket
Documentation:
http://docs.aws.amazon.com/cli/latest/reference/s3api/head-bucket.html
This operation is useful to determine if a bucket exists and you have
permission to access it.
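For example, in a script you can branch on the command's exit status (the bucket name is a placeholder): head-bucket prints nothing and exits 0 on success, and exits non-zero on a 403/404.
if aws s3api head-bucket --bucket my-bucket 2>/dev/null; then
  echo "bucket exists and you can access it"
else
  echo "bucket is missing or access is denied"
fi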
Hope it helps.

AccessDeniedException: 403 Forbidden on GCS using owner account

I have tried to access files in a bucket and I keep getting access denied on the files. I can see them in the GCS console, but I cannot access them through it, and I cannot access them through gsutil either, running the command below.
gsutil cp gs://my-bucket/folder-a/folder-b/mypdf.pdf files/
But all this returns is AccessDeniedException: 403 Forbidden
I can list all the files and such, but not actually access them. I've tried adding my user to the ACL, but that still had no effect. All the files were uploaded from a VM through a FUSE mount, which worked perfectly, and then all access was just lost.
I've checked these posts, but none seem to have a solution that's helped me:
Can't access resource as OWNER despite the fact I'm the owner
gsutil copy returning "AccessDeniedException: 403 Insufficient Permission" from GCE
gsutil cors set command returns 403 AccessDeniedException
Although this is quite an old question, I had a similar issue recently. After trying many of the options suggested here without success, I carefully re-examined my script and discovered I was getting the error as a result of a mistake in my bucket address gs://my-bucket. I fixed it and it worked perfectly!
This is quite possible. Owning a bucket grants FULL_CONTROL permission to that bucket, which includes the ability to list objects within that bucket. However, bucket permissions do not automatically imply any sort of object permissions, which means that if some other account is uploading objects and sets ACLs to be something like "private," the owner of the bucket won't have access to it (although the bucket owner can delete the object, even if they can't read it, as deleting objects is a bucket permission).
I'm not familiar with the default FUSE settings, but if I had to guess, you're using your project's system account to upload the objects, and they're set to private. That's fine. The easiest way to test that would be to run gsutil from a GCE host, where the default credentials will be the system account. If that works, you could use gsutil to switch the ACLs to something more permissive, like "project-private."
The command to do that would be:
gsutil acl set -R project-private gs://myBucketName/
tl;dr The Owner (basic) role has only a subset of the GCS permissions present in the Storage Admin (predefined) role—notably, Owners cannot access bucket metadata, list/read objects, etc. You would need to grant the Storage Admin (or another, less privileged) role to provide the needed permissions.
NOTE: This explanation applies to GCS buckets using uniform bucket-level access.
In my case, I had enabled uniform bucket-level access on an existing bucket, and found I could no longer list objects, despite being an Owner of its GCP project.
This seemed to contradict how GCP IAM permissions are inherited (organization → folder → project → resource / GCS bucket), since I expected to have Owner access at the bucket level as well.
But as it turns out, the Owner permissions were being inherited as expected; they were simply insufficient for listing GCS objects.
The Storage Admin role has the following permissions which are not present in the Owner role: [1]
storage.buckets.get
storage.buckets.getIamPolicy
storage.buckets.setIamPolicy
storage.buckets.update
storage.multipartUploads.abort
storage.multipartUploads.create
storage.multipartUploads.list
storage.multipartUploads.listParts
storage.objects.create
storage.objects.delete
storage.objects.get
storage.objects.getIamPolicy
storage.objects.list
storage.objects.setIamPolicy
storage.objects.update
This explained the seemingly strange behavior. And indeed, after granting the Storage Admin role (whereby my user was both Owner and Storage Admin), I was able to access the GCS bucket.
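For reference, a hedged sketch of granting that role (the account, bucket, and project names are placeholders), either on the bucket with gsutil or on the whole project with gcloud:
# Grant Storage Admin on a single bucket (works with uniform bucket-level access)
gsutil iam ch user:me@example.com:roles/storage.admin gs://my-bucket
# Or grant it project-wide
gcloud projects add-iam-policy-binding my-project \
  --member="user:me@example.com" --role="roles/storage.admin"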
Footnotes
Though the documentation page Understanding roles omits the list of permissions for Owner (and other basic roles), it's possible to see this information in the GCP console:
Go to "IAM & Admin"
Go to "Roles"
Filter for "Owner"
Go to "Owner"
(See list of permissions)