AWS user can access stuff I didn't grant them access to - amazon-web-services

I created an AWS user and granted them access to just a few things as you can see on this screenshot:
And then they logged in and told me they can access my S3 and a few other services. I even had them test uploading a file to my S3.
So what did I do wrong?
Thanks.

Look at the policy for AWSLambdaFullAccess. Under "Action" you will see "s3:*". This gives the user full access to all S3 actions. Under "Resource" you will see "*". This gives the user full access to all resources (in this case, all S3 actions on all S3 resources).
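If the goal was Lambda administration without broad S3 rights, one option is to replace the managed policy with a custom one. A minimal sketch along those lines (the bucket name is a placeholder, and your Lambda workflow may need more actions than shown):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "lambda:*",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-deployment-bucket/*"
    }
  ]
}
```

This keeps the Lambda permissions but limits S3 access to objects in one specific bucket instead of "s3:*" on "*".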

Related

How do I let a user see a single bucket in the root s3 console?

What permissions do I set in a policy to allow a user to see a single bucket in the root s3 page in the console (https://s3.console.aws.amazon.com/s3/buckets)
I keep trying different things, but either they see all the buckets or none of them. I gave them permissions to manage the bucket, and if they put the bucket URL into their browser they can access it fine and upload stuff. But if they go to the root s3 page, it doesn't list any buckets.
It is not possible to control which buckets a user can see listed in the S3 Management Console.
If a user has permission to use the ListBuckets() command, then they will be able to see a listing of ALL buckets in that AWS Account.
However, there is a cheat...
You can give a user permissions to 'use' a specific Amazon S3 bucket (eg GetObject, PutObject, ListObjects), while not giving them permission to list the buckets. They will not be able to use the S3 Management Console to navigate to the bucket, but you can give them a URL that will take them directly to the bucket in the console, eg:
https://s3.console.aws.amazon.com/s3/buckets/BUCKET-NAME
This will let them see and use the bucket in the S3 Management Console, but they won't be able to see the names of any other buckets and they won't be able to navigate to their bucket via the 'root s3 page' that you mention. Instead, they will need to use that URL.
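A sketch of what such a policy might look like (BUCKET-NAME is a placeholder; note the deliberate absence of s3:ListAllMyBuckets, which is what the bucket listing on the root s3 page requires):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::BUCKET-NAME"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::BUCKET-NAME/*"
    }
  ]
}
```

The first statement lets them list objects within that one bucket; the second lets them read and write those objects.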

How to configure S3 bucket policies for my app

I have a bucket called prod, with a directory tree that looks like this:
prod
  logs
  uploads
  user
    doc
      1
        a.jpg
        b.jpg
      2
        a.jpg
        b.jpg
  thing
    photos
      1
        a.jpg
        b.jpg
      2
        a.jpg
        b.jpg
Everything in thing/photos should be public. GET requests should be allowed for everyone, but PUT and POST requests should only be allowed when users upload a file through my app.
The user/doc directory on the other hand, I want to be completely private. POST requests should be allowed for users who upload a file through my app, but the only person who should be able to GET those resources is an admin. That data is encrypted before it is stored, but I want to make sure that the folder is not accessible to the public or to other users of my app.
After reading A deep dive into AWS S3 access controls – taking full control over your assets and ACLs - What Permissions Can I Grant? I remain confused about how to accomplish what I want. The overlapping access controls leave me feeling bewildered, and I cannot find a tutorial that explains any of this in an action-oriented approach. Given the number of data leaks caused by improperly set S3 bucket policies, it seems likely that I'm not the only person who misunderstands this.
How are your policies set? Do you have a link to a better tutorial than the ones I've found? Thank you!
Amazon S3 buckets are private by default. Therefore, access is only available if you specifically configure access.
Everything in thing/photos should be public
If you wish to make an entire bucket, or part of a bucket public, use a Bucket Policy.
Copying from #avlazarov's answer:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Example",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::examplebucket/thing/photos/*"]
    }
  ]
}
This means:
Allow anyone
To GetObject, meaning read an object
As long as it is in the bucket named examplebucket, in the thing/photos/ path
Please note that they will not be able to list the contents of the path, so they will need to know the exact name of the object they are retrieving.
Before adding a Bucket Policy, you will need to deactivate the setting in Amazon S3 Block Public Access that prevents Bucket Policies from being added.
General rule: When granting public access, use a Bucket Policy.
The user/doc directory should be completely private
Amazon S3 buckets are private by default. Therefore, nothing needs to be done.
However, you then mention that the mobile app should have access. Such permissions should be granted via Identity and Access Management (IAM) settings.
Since you mention 'users', there is probably some authentication method being used by your app, presumably to a back-end service. Therefore, rather than putting IAM credentials directly in the app, the flow should be:
User logs into the app
The app sends the authentication information to a back-end service that authenticates the user (could be Cognito, login with Google or even just your own database)
If the user is verified, then the back-end service would generate temporary credentials using the AWS Security Token Service (STS). A policy can be attached to these credentials, granting sufficient permissions for the user and app for this particular session. For example, it could grant access to a path (sub-directory) so that the user can only access objects in their own sub-directory.
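The per-session scoping step can be sketched like this; the bucket name, path layout, and function name are hypothetical illustrations, not part of any AWS API:

```python
import json

def session_policy_for(user_id, bucket="examplebucket"):
    """Build a session policy limiting temporary STS credentials to
    the given user's own sub-directory (hypothetical path layout)."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                # Only objects under this user's prefix are reachable
                "Resource": "arn:aws:s3:::%s/user/doc/%s/*" % (bucket, user_id),
            }
        ],
    }
    return json.dumps(policy)

# The back-end would pass this JSON as the Policy parameter when
# requesting temporary credentials from STS (eg AssumeRole or
# GetFederationToken), so the resulting session cannot reach other
# users' objects.
print(session_policy_for("2"))
```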
It is preferable to only grant the app (and therefore the user) the minimal amount of permissions required to use the service. This avoids intentional or accidental problems that might be caused by providing too much access.
General rule: Only provide mobile apps the minimum permissions they require. Assume that accidents or intentional hacking will happen.
The only person who should be able to GET those resources is an Admin
When granting access to your own staff, use policies attached to an IAM User or IAM Group.
I would recommend:
Create an IAM Group for Admins
Attach an IAM policy to the Group that grants desired access
Create an IAM User for each of your staff admins
Put their IAM User in the IAM Group
This way, all admins (including future ones) will obtain appropriate access, and you can track what each IAM User did independently. Never have multiple staff use the same logins. It would also be advisable to associate a Multi-factor Authentication device to each admin account since the permissions could be dangerous if access was compromised. MFA can be as simple as running an authentication app on a phone that provides a number that changes every 30 seconds.
In fact, some companies only give the Admins 'normal' accounts (without superpowers). Then, if they need to do something extraordinary, they have the Admins temporarily switch to an IAM Role that gives 'admin' capabilities. This minimizes the chance of accidentally doing something that might have an impact on the system.
General rule: Use IAM to grant access to staff within your organization.
If you wish to learn more about IAM, I would highly recommend IAM videos from the annual AWS re:Invent conference. For a complete list of sessions, see: AWS re:Invent 2019 Sessions & Podcast Feed
Disclaimer: I'm assuming that your mobile app is not directly talking to S3, but that you instead have a backend API server that manages the S3 access.
When you PUT the objects from your app to thing/photos you simply use "public read" permissions, or
<Grant>
  <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Group">
    <URI>http://acs.amazonaws.com/groups/global/AllUsers</URI>
  </Grantee>
  <Permission>READ</Permission>
</Grant>
from your second link. For user/doc, just keep things "private" to the owner of the bucket (your AWS account has the FULL_CONTROL grant), and then control access to the objects via application logic, e.g. only the admin can see the content, by proxying the S3 "private" objects via, let's say, presigned URLs.
Finally, make sure that the user/role that accesses the bucket from your app has an attached policy like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": ["arn:aws:s3:::examplebucket/*", "arn:aws:s3:::examplebucket"]
    }
  ]
}
Another way to make thing/photos public to anyone is via a bucket policy, like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Example",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::examplebucket/thing/photos/*"]
    }
  ]
}
You're mostly confused because of the different kinds of access-control options – bucket policies and object-level ACLs.

How to see policies for S3 buckets which showing `Access Denied` error in AWS Console?

I've logged into an AWS account as the root user, but I'm unable to access some of the buckets in AWS. They are not shown in the S3 Console. I've accessed them by submitting the bucket name in the URL.
For example, let's call the bucket unaccessible-bucket
https://s3.console.aws.amazon.com/s3/buckets/unaccessible-bucket/?region=us-east-1&tab=overview
If I navigate to Permissions > Bucket Policy, I see the notice Access denied. I'm unable to download the files and unable to change the policy. I've tried with the AWS CLI also.
Can someone please tell me how to edit the policy.
As per our organisation's requirements, we have to add two new IAM users:
For one user, we have to grant access to all buckets, including this unaccessible-bucket.
For the other user, we have to grant access to only this unaccessible-bucket.
Please check the screenshot
Many Thanks.
Assuming that you are logged into the AWS Console as the root user.
If you cannot see an S3 bucket in the AWS console, then you do not own the bucket and it is owned by another account.
If you can see the bucket in the console then you own the bucket. If you cannot access the contents of the bucket then you will need to edit the S3 Bucket Policy and add the root user as a principal. Replace the account number with your own.
Add this statement (or modify) to your S3 Bucket Policy:
"Principal": { "AWS": "arn:aws:iam::123456789012:root" }
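In context, a full statement along those lines might look like this (the account number and bucket name are placeholders for your own values):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowRootUser",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:root" },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::unaccessible-bucket",
        "arn:aws:s3:::unaccessible-bucket/*"
      ]
    }
  ]
}
```

Both the bucket ARN and the /* object ARN are listed so that the statement covers bucket-level actions (like listing) as well as object-level actions.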

AccessDeniedException: 403 Forbidden on GCS using owner account

I have tried to access files in a bucket and I keep getting access denied on the files. I can see them in the GCS console but cannot access them through it, and cannot access them through gsutil either, running the command below.
gsutil cp gs://my-bucket/folder-a/folder-b/mypdf.pdf files/
But all this returns is AccessDeniedException: 403 Forbidden
I can list all the files and such, but not actually access them. I've tried adding my user to the ACL, but that still had no effect. All the files were uploaded from a VM through a FUSE mount, which worked perfectly, and then all access was just lost.
I've checked these posts, but none seem to have a solution that's helped me:
Can't access resource as OWNER despite the fact I'm the owner
gsutil copy returning "AccessDeniedException: 403 Insufficient Permission" from GCE
gsutil cors set command returns 403 AccessDeniedException
Although this is quite an old question, I had a similar issue recently. After trying many of the options suggested here without success, I carefully re-examined my script and discovered I was getting the error as a result of a mistake in my bucket address, gs://my-bucket. I fixed it and it worked perfectly!
This is quite possible. Owning a bucket grants FULL_CONTROL permission to that bucket, which includes the ability to list objects within that bucket. However, bucket permissions do not automatically imply any sort of object permissions, which means that if some other account is uploading objects and sets ACLs to be something like "private," the owner of the bucket won't have access to it (although the bucket owner can delete the object, even if they can't read it, as deleting objects is a bucket permission).
I'm not familiar with the default FUSE settings, but if I had to guess, you're using your project's system account to upload the objects, and they're set to private. That's fine. The easiest way to test that would be to run gsutil from a GCE host, where the default credentials will be the system account. If that works, you could use gsutil to switch the ACLs to something more permissive, like "project-private."
The command to do that would be:
gsutil acl set -R project-private gs://myBucketName/
tl;dr The Owner (basic) role has only a subset of the GCS permissions present in the Storage Admin (predefined) role—notably, Owners cannot access bucket metadata, list/read objects, etc. You would need to grant the Storage Admin (or another, less privileged) role to provide the needed permissions.
NOTE: This explanation applies to GCS buckets using uniform bucket-level access.
In my case, I had enabled uniform bucket-level access on an existing bucket, and found I could no longer list objects, despite being an Owner of its GCP project.
This seemed to contradict how GCP IAM permissions are inherited (organization → folder → project → resource / GCS bucket), since I expected to have Owner access at the bucket level as well.
But as it turns out, the Owner permissions were being inherited as expected; rather, they were simply insufficient for listing GCS objects.
The Storage Admin role has the following permissions which are not present in the Owner role: [1]
storage.buckets.get
storage.buckets.getIamPolicy
storage.buckets.setIamPolicy
storage.buckets.update
storage.multipartUploads.abort
storage.multipartUploads.create
storage.multipartUploads.list
storage.multipartUploads.listParts
storage.objects.create
storage.objects.delete
storage.objects.get
storage.objects.getIamPolicy
storage.objects.list
storage.objects.setIamPolicy
storage.objects.update
This explained the seemingly strange behavior. And indeed, after granting the Storage Admin role (whereby my user was both Owner and Storage Admin), I was able to access the GCS bucket.
Footnotes
Though the documentation page Understanding roles omits the list of permissions for Owner (and other basic roles), it's possible to see this information in the GCP console:
Go to "IAM & Admin"
Go to "Roles"
Filter for "Owner"
Go to "Owner"
(See list of permissions)

How to restrict folders to user

Thanks for previous replies.
Is it possible to restrict a user to viewing a particular folder in a bucket? For example, I have a bucket that contains 2 folders. User A has the privilege to view only the first folder of the bucket; if he tries to access the other folder, it should show access denied. Is this possible to do in Amazon S3?
You can do this using AWS Identity and Access Management (IAM). You can use this to create multiple identities and assign various permissions to those identities.
Here's a relevant example taken from the Amazon docs:
Example 1: Allow each user to have a home directory in Amazon S3
In this example, we create a policy that we'll attach to the user
named Bob. The policy gives Bob access to the following home directory
in Amazon S3: my_corporate_bucket/home/bob. Bob is allowed to access
only the specific Amazon S3 actions shown in the policy, and only with
the objects in his home directory.
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:DeleteObject",
        "s3:DeleteObjectVersion"
      ],
      "Resource": "arn:aws:s3:::my_corporate_bucket/home/bob/*"
    }
  ]
}
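As a side note, the same pattern can be generalized with the ${aws:username} policy variable, so that a single policy attached to an IAM group gives every user their own home directory (policy variables require "Version": "2012-10-17"):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my_corporate_bucket/home/${aws:username}/*"
    }
  ]
}
```

At evaluation time, ${aws:username} is substituted with the name of the IAM user making the request, so Bob can only reach home/bob/* and Alice only home/alice/*.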