I have a bucket called prod, with a directory tree that looks like this:
prod
  logs
  uploads
  user
    doc
      1
        a.jpg
        b.jpg
      2
        a.jpg
        b.jpg
  thing
    photos
      1
        a.jpg
        b.jpg
      2
        a.jpg
        b.jpg
Everything in thing/photos should be public. GET requests should be allowed for everyone, but PUT and POST requests should only be allowed when users upload a file through my app.
The user/doc directory, on the other hand, I want to be completely private. POST requests should be allowed for users who upload a file through my app, but the only person who should be able to GET those resources is an admin. That data is encrypted before it is stored, but I want to make sure that the folder is not accessible to the public or to other users of my app.
After reading "A deep dive into AWS S3 access controls – taking full control over your assets" and "ACLs - What Permissions Can I Grant?", I remain confused about how to accomplish what I want. The overlapping access controls leave me feeling bewildered, and I cannot find a tutorial that explains any of this in an action-oriented way. Given the number of data leaks caused by improperly set S3 bucket policies, it seems likely that I'm not the only person who misunderstands this.
How are your policies set? Do you have a link to a better tutorial than the ones I've found? Thank you!
Amazon S3 buckets are private by default. Therefore, access is only available if you specifically configure access.
Everything in thing/photos should be public
If you wish to make an entire bucket, or part of a bucket public, use a Bucket Policy.
Copying from @avlazarov's answer:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Example",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::examplebucket/thing/photos/*"]
    }
  ]
}
This means:
Allow anyone
To GetObject, meaning read an object
As long as it is in the bucket named examplebucket, in the thing/photos/ path
Please note that they will not be able to List the contents of the path, so they will need to know the exact name of the object they are retrieving.
Before adding a Bucket Policy, you will need to deactivate the Amazon S3 Block Public Access setting that prevents Bucket Policies from being added.
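As a minimal boto3 sketch of both steps, using the placeholder bucket name examplebucket from the policy above:

import boto3
import json

s3 = boto3.client("s3")

# Relax only the two Block Public Access flags that reject public
# bucket policies; keep the ACL-related protections switched on.
s3.put_public_access_block(
    Bucket="examplebucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": False,
        "RestrictPublicBuckets": False,
    },
)

# Attach the public-read policy for the thing/photos/ path.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "Example",
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["s3:GetObject"],
        "Resource": ["arn:aws:s3:::examplebucket/thing/photos/*"],
    }],
}
s3.put_bucket_policy(Bucket="examplebucket", Policy=json.dumps(policy))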
General rule: When granting public access, use a Bucket Policy.
The user/doc directory should be completely private
Amazon S3 buckets are private by default. Therefore, nothing needs to be done.
However, you then mention that the mobile app should have access. Such permissions should be granted via Identity and Access Management (IAM) settings.
Since you mention 'users', there is probably some authentication method being used by your app, presumably to a back-end service. Therefore, rather than putting IAM credentials directly in the app, the flow should be:
User logs into the app
The app sends the authentication information to a back-end service that authenticates the user (could be Cognito, login with Google or even just your own database)
If the user is verified, then the back-end service would generate temporary credentials using the AWS Security Token Service (STS). A policy can be attached to these credentials, granting sufficient permissions for the user and app for this particular session. For example, it could grant access to a path (sub-directory) so that the user can only access objects in their own sub-directory.
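As a rough sketch of that third step (the bucket name and the per-user prefix are assumptions for illustration, not from the question), the back-end could mint scoped temporary credentials like this:

import boto3
import json

def credentials_for(user_id):
    # Session policy restricting this user to their own prefix;
    # the path layout here is hypothetical.
    session_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::examplebucket/user/doc/{user_id}/*",
        }],
    }
    sts = boto3.client("sts")
    response = sts.get_federation_token(
        Name=f"app-{user_id}",           # appears in CloudTrail logs
        Policy=json.dumps(session_policy),
        DurationSeconds=3600,            # one-hour session
    )
    # AccessKeyId, SecretAccessKey and SessionToken for the app to use
    return response["Credentials"]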
It is preferable to only grant the app (and therefore the user) the minimal amount of permissions required to use the service. This avoids intentional or accidental problems that might be caused by providing too much access.
General rule: Only provide mobile apps the minimum permissions they require. Assume that accidents or intentional hacking will happen.
The only person who should be able to GET those resources is an Admin
When granting access to your own staff, use policies attached to an IAM User or IAM Group.
I would recommend:
Create an IAM Group for Admins
Attach an IAM policy to the Group that grants desired access
Create an IAM User for each of your staff admins
Put their IAM User in the IAM Group
This way, all admins (including future ones) will obtain appropriate access, and you can track what each IAM User did independently. Never have multiple staff share the same login. It would also be advisable to associate a Multi-Factor Authentication (MFA) device with each admin account, since the permissions could be dangerous if access were compromised. MFA can be as simple as running an authentication app on a phone that provides a number that changes every 30 seconds.
In fact, some companies only give the Admins 'normal' accounts (without superpowers). Then, if they need to do something extraordinary, they have the Admins temporarily switch to an IAM Role that gives 'admin' capabilities. This minimizes the chance of accidentally doing something that might have an impact on the system.
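A minimal boto3 sketch of the recommended setup (the group name, user name, and managed policy ARN are illustrative only):

import boto3

iam = boto3.client("iam")

# Create the group and attach a managed policy granting the desired access.
iam.create_group(GroupName="Admins")
iam.attach_group_policy(
    GroupName="Admins",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3FullAccess",
)

# One IAM User per staff member, placed into the group.
iam.create_user(UserName="alice")
iam.add_user_to_group(GroupName="Admins", UserName="alice")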
General rule: Use IAM to grant access to staff within your organization.
If you wish to learn more about IAM, I would highly recommend IAM videos from the annual AWS re:Invent conference. For a complete list of sessions, see: AWS re:Invent 2019 Sessions & Podcast Feed
Disclaimer: I'm assuming that your mobile app is not talking directly to S3, but instead you have a back-end API server that manages the S3 access.
When you PUT the objects from your app to thing/photos, you simply use "public-read" permissions, or the grant
<Grant>
  <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Group">
    <URI>http://acs.amazonaws.com/groups/global/AllUsers</URI>
  </Grantee>
  <Permission>READ</Permission>
</Grant>
from your second link. For user/doc, just keep things "private" for the owner of the bucket (your AWS account has a FULL_CONTROL grant), and then control access to the objects via application logic, e.g., only the admin can see the content, by proxying the "private" S3 objects via, let's say, presigned URLs.
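A short sketch of the presigned-URL approach with boto3 (the bucket and key are placeholders; in practice the back-end would check that the caller is an admin before handing out the URL):

import boto3

s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "examplebucket", "Key": "user/doc/1/a.jpg"},
    ExpiresIn=300,  # the URL stops working after five minutes
)
print(url)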
Finally, make sure that the user/role that accesses the bucket from your app has an attached policy like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": ["arn:aws:s3:::examplebucket/*", "arn:aws:s3:::examplebucket"]
    }
  ]
}
Another way to make thing/photos public to anyone is via a bucket policy, like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Example",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::examplebucket/thing/photos/*"]
    }
  ]
}
You're mostly confused because of the different kinds of options for controlling access – bucket policies and object-level ACLs.
According to the documentation (Granting permissions to multiple accounts with added conditions), it should be possible to grant access to all the users inside an account with an entry like:
"Principal": {"AWS": ["arn:aws:iam::111122223333:root", "arn:aws:iam::444455556666:root"]}
But unfortunately it is not working. When I put individual users there, access for that user from the other account works, but with the root option (meaning all users of the account) it does not.
but with the root option (meaning all users of the account) it does not
This is because the admins of these accounts also have to add permissions to IAM users/roles to access the bucket. In other words, adding arn:aws:iam::111122223333:root to a bucket policy is not enough. The individual IAM users or roles from 111122223333 also need IAM permissions to access the bucket.
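For example, here is a sketch of what an admin in 111122223333 might attach to one of their users (the user name, policy name, and bucket are placeholders):

import boto3
import json

# Run with credentials from account 111122223333.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::examplebucket",
            "arn:aws:s3:::examplebucket/*",
        ],
    }],
}
iam = boto3.client("iam")
iam.put_user_policy(
    UserName="cross-account-reader",
    PolicyName="bucket-access",
    PolicyDocument=json.dumps(policy),
)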
Usually there is a Compute Engine default service account that is created automatically by GCP. This account is used, for example, by VM agents to access different resources across GCP, and by default it has roles/editor permissions.
Suppose I want to create a GCS bucket that can only be accessed by this default service account and no one else. I've looked into ACLs and tried to add an ACL to the bucket with this default service account's email, but it didn't really work.
I realized that I can still access the bucket and objects in it from other accounts that have, for example, storage bucket read and storage object read permissions, and I'm not sure what I did wrong (maybe some default ACLs are present?).
My questions are:
Is it possible to limit access to just that default account? In that case who will not be able to access it?
What would be the best way to do it? (would appreciate a lot an example using Storage API)
There are still roles such as roles/storage.admin, and actually no matter what ACLs are put on the bucket, I could still access it if I had this role (or a higher role such as Owner), right?
Thanks!
I recommend that you not use ACLs (and Google recommends the same). It's better to switch the bucket to uniform bucket-level access (IAM policies only).
There are two downsides to ACLs:
Newly created files don't get the ACL automatically, so you need to set it every time you create a new file.
It's difficult to know who does and doesn't have access with ACLs. The IAM service is better for auditing.
When you switch to uniform bucket-level access, the basic Owner, Viewer, and Editor roles no longer grant access to buckets (roles/storage.admin isn't included in these primitive roles). That can solve all the unwanted access in one click. Otherwise, as John said, remove all the IAM permissions on the bucket and on the project that grant access to the bucket, except for your service account.
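Since you asked for a Storage API example, here is a sketch using the Python client (the bucket name and service account address are placeholders) that switches the bucket to uniform bucket-level access and grants only the default service account access to objects:

from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-bucket")

# Switch the bucket to uniform bucket-level access so ACLs are ignored.
bucket.iam_configuration.uniform_bucket_level_access_enabled = True
bucket.patch()

# Grant the default service account object access on this bucket only.
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append({
    "role": "roles/storage.objectAdmin",
    "members": {"serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com"},
})
bucket.set_iam_policy(policy)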
You can control access to buckets and objects using Cloud IAM and ACLs.
For example, grant the service account WRITE access (R: READ, W: WRITE, O: OWNER) to the bucket using ACLs:
gsutil acl ch -u service-account@project.iam.gserviceaccount.com:W gs://my-bucket
To remove the service account's access from the bucket:
gsutil acl ch -d service-account@project.iam.gserviceaccount.com gs://my-bucket
If there are identities with roles such as roles/storage.admin at the project level in IAM, they will have access to all the GCS resources of the project. You might have to change those permissions to keep them from having access.
I created an AWS user and granted them access to just a few things, as you can see in this screenshot:
And then they logged in and told me they can access my S3 and a few other services. I even had them test uploading a file to my S3.
So what did I do wrong?
Thanks.
Look at the policy for AWSLambdaFullAccess. Under "Action" you will see "s3:*". This gives the user full access to all S3 commands. Under "Resource" you will see "*". This gives the user full access to all resources (in this case, all S3 commands on all S3 resources).
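If you want to verify this yourself, a small boto3 sketch that prints each statement's actions and resources for the managed policy:

import boto3

iam = boto3.client("iam")
arn = "arn:aws:iam::aws:policy/AWSLambdaFullAccess"

# Fetch the default (active) version of the managed policy document.
version_id = iam.get_policy(PolicyArn=arn)["Policy"]["DefaultVersionId"]
document = iam.get_policy_version(
    PolicyArn=arn, VersionId=version_id
)["PolicyVersion"]["Document"]

# Look for "s3:*" under Action and "*" under Resource.
for statement in document["Statement"]:
    print(statement.get("Action"), statement.get("Resource"))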
I have tried to access files in a bucket and I keep getting access denied on the files. I can see them in the GCS console but cannot access them through it, and I cannot access them through gsutil either, running the command below.
gsutil cp gs://my-bucket/folder-a/folder-b/mypdf.pdf files/
But all this returns is AccessDeniedException: 403 Forbidden
I can list all the files and such but not actually access them. I've tried adding my user to the ACL, but that still had no effect. All the files were uploaded from a VM through a FUSE mount, which worked perfectly, and then I just lost all access.
I've checked these posts but none seem to have a solution that's helped me:
Can't access resource as OWNER despite the fact I'm the owner
gsutil copy returning "AccessDeniedException: 403 Insufficient Permission" from GCE
gsutil cors set command returns 403 AccessDeniedException
Although this is quite an old question, I had a similar issue recently. After trying many of the options suggested here without success, I carefully re-examined my script and discovered I was getting the error as a result of a mistake in my bucket address, gs://my-bucket. I fixed it and it worked perfectly!
This is quite possible. Owning a bucket grants FULL_CONTROL permission to that bucket, which includes the ability to list objects within that bucket. However, bucket permissions do not automatically imply any sort of object permissions, which means that if some other account is uploading objects and sets ACLs to be something like "private," the owner of the bucket won't have access to it (although the bucket owner can delete the object, even if they can't read it, as deleting objects is a bucket permission).
I'm not familiar with the default FUSE settings, but if I had to guess, you're using your project's system account to upload the objects, and they're set to private. That's fine. The easiest way to test that would be to run gsutil from a GCE host, where the default credentials will be the system account. If that works, you could use gsutil to switch the ACLs to something more permissive, like "project-private."
The command to do that would be:
gsutil acl set -R project-private gs://myBucketName/
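An equivalent sketch using the Python client, if you'd rather not shell out to gsutil (the bucket and object names are placeholders):

from google.cloud import storage

# Apply the predefined "projectPrivate" ACL to a single object.
client = storage.Client()
blob = client.bucket("myBucketName").blob("folder-a/folder-b/mypdf.pdf")
blob.acl.save_predefined("projectPrivate")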
tl;dr The Owner (basic) role has only a subset of the GCS permissions present in the Storage Admin (predefined) role—notably, Owners cannot access bucket metadata, list/read objects, etc. You would need to grant the Storage Admin (or another, less privileged) role to provide the needed permissions.
NOTE: This explanation applies to GCS buckets using uniform bucket-level access.
In my case, I had enabled uniform bucket-level access on an existing bucket, and found I could no longer list objects, despite being an Owner of its GCP project.
This seemed to contradict how GCP IAM permissions are inherited— organization → folder → project → resource / GCS bucket—since I expected to have Owner access at the bucket level as well.
But as it turns out, the Owner permissions were being inherited as expected; they were simply insufficient for listing GCS objects.
The Storage Admin role has the following permissions which are not present in the Owner role: [1]
storage.buckets.get
storage.buckets.getIamPolicy
storage.buckets.setIamPolicy
storage.buckets.update
storage.multipartUploads.abort
storage.multipartUploads.create
storage.multipartUploads.list
storage.multipartUploads.listParts
storage.objects.create
storage.objects.delete
storage.objects.get
storage.objects.getIamPolicy
storage.objects.list
storage.objects.setIamPolicy
storage.objects.update
This explained the seemingly strange behavior. And indeed, after granting the Storage Admin role (whereby my user was both Owner and Storage Admin), I was able to access the GCS bucket.
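A quick way to check this for yourself is Bucket.test_iam_permissions in the Python client, which echoes back only the permissions your current credentials actually hold (the bucket name here is a placeholder):

from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-uniform-bucket")
granted = bucket.test_iam_permissions([
    "storage.buckets.get",
    "storage.objects.list",
    "storage.objects.get",
])
print(granted)  # only the permissions you hold are returned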
Footnotes
Though the documentation page Understanding roles omits the list of permissions for Owner (and other basic roles), it's possible to see this information in the GCP console:
Go to "IAM & Admin"
Go to "Roles"
Filter for "Owner"
Go to "Owner"
(See list of permissions)
Thanks for previous replies.
Is it possible to restrict a user to viewing a particular folder in a bucket? To take an example, I have a bucket that contains two folders. User A should only have the privilege to view the first folder of the bucket; if he tries to access the other folder, it should show access denied. Is this possible to do in Amazon S3?
You can do this using AWS Identity and Access Management (IAM). You can use this to create multiple identities and assign various permissions to those identities.
Here's a relevant example taken from the Amazon docs:
Example 1: Allow each user to have a home directory in Amazon S3
In this example, we create a policy that we'll attach to the user
named Bob. The policy gives Bob access to the following home directory
in Amazon S3: my_corporate_bucket/home/bob. Bob is allowed to access
only the specific Amazon S3 actions shown in the policy, and only with
the objects in his home directory.
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject", "s3:GetObject", "s3:GetObjectVersion",
        "s3:DeleteObject", "s3:DeleteObjectVersion"
      ],
      "Resource": "arn:aws:s3:::my_corporate_bucket/home/bob/*"
    }
  ]
}
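Note that the example above covers only object actions. To also let the user list just their own folder (so other folders show access denied), you can add an s3:ListBucket statement with an s3:prefix condition. A sketch of attaching such a policy with boto3 (the bucket, folder, and user names are placeholders for your setup):

import boto3
import json

# "User A" may list and read only folder1/ of the bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::my-bucket",
            "Condition": {"StringLike": {"s3:prefix": ["folder1/*"]}},
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::my-bucket/folder1/*",
        },
    ],
}
iam = boto3.client("iam")
iam.put_user_policy(
    UserName="UserA",
    PolicyName="folder1-only",
    PolicyDocument=json.dumps(policy),
)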