Prevent S3 user from posting to public buckets?

Suppose a public bucket exists with a name very similar to my private bucket's. I want to prevent a user from misspelling the private bucket name and accidentally posting sensitive data to the public one.
I understand that it would be best practice to make the bucket name as unique as possible.
Clarification: I want to prevent a user from posting to ANY public S3 bucket

Publicly writable buckets are very rare. In fact, they are highly discouraged, both from a security perspective and from a cost perspective -- somebody could upload illegal files and use the bucket for file sharing, and the bucket owner would pay for it!
I would normally say that the chance of somebody being able to successfully upload to a random bucket is practically zero, but I suspect you are thinking of a case where an evil party might create a similarly-named bucket in the hope of collecting confidential data (similar to domain-name camping).
In that case, you can create a Deny policy on the user to prohibit access to ALL S3 buckets, except for the ones you specifically nominate:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Allow",
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::good-bucket1",
        "arn:aws:s3:::good-bucket1/*",
        "arn:aws:s3:::good-bucket2",
        "arn:aws:s3:::good-bucket2/*"
      ]
    },
    {
      "Sid": "NotOthers",
      "Effect": "Deny",
      "Action": [
        "s3:*"
      ],
      "NotResource": [
        "arn:aws:s3:::good-bucket1",
        "arn:aws:s3:::good-bucket1/*",
        "arn:aws:s3:::good-bucket2",
        "arn:aws:s3:::good-bucket2/*"
      ]
    }
  ]
}
This will work because the Deny attached to the IAM User overrides any Allow in a Bucket Policy. (Note that both the bucket ARN and the /* object ARN must be listed; otherwise the Deny would also catch object-level actions such as s3:PutObject on the good buckets.) The only downside is that you will need to specifically list the buckets you wish to include/exclude, because there is no way to write a rule that applies to "any public bucket".
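If it is useful, attaching that policy inline to the user can be scripted. A minimal boto3 sketch, where the user name upload-user, the policy name DenyOtherBuckets, and the file name are all placeholders:
import boto3

# Load the Deny policy shown above (placeholder file name).
with open("deny-other-buckets.json") as f:
    policy_document = f.read()

# Attach it inline to the IAM User who should be restricted.
iam = boto3.client("iam")
iam.put_user_policy(
    UserName="upload-user",          # placeholder user name
    PolicyName="DenyOtherBuckets",   # placeholder policy name
    PolicyDocument=policy_document,
)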

You have no control over the other bucket, so you can't prevent this happening.
To respond to it, I suppose you could periodically query that bucket (assuming it is publicly readable) in search of content that you think should have been uploaded to your bucket, though it's not clear what you would do at that point.
Alternatively, provide an upload page (perhaps statically hosted in your S3 bucket) and ask your users to initiate uploads through it (via a pre-signed POST or the AWS S3 JavaScript SDK), so they never type a bucket name and therefore cannot accidentally target the wrong bucket.
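For the POST approach, the page's backend can generate a pre-signed POST so the browser uploads straight to the correct bucket. A sketch using boto3, with the bucket name and key prefix as placeholders:
import boto3

s3 = boto3.client("s3")

# Generate a form the browser can POST directly to S3.
# "my-private-bucket" and the key prefix are placeholder names.
post = s3.generate_presigned_post(
    Bucket="my-private-bucket",
    Key="uploads/${filename}",
    ExpiresIn=3600,  # the form is valid for one hour
)

# post["url"] is the form's action attribute; post["fields"] become
# hidden inputs, so the user never sees or types a bucket name.
print(post["url"], post["fields"])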

Related

My S3 Bucket Policy only applies to some Objects

I'm having a really hard time setting up my bucket policy; it looks like it only applies to some objects in my bucket.
What I want is pretty simple: I store video files in the bucket and I want them to be downloadable exclusively from my website.
My approach is to block everything by default, and then add allow rules:
Give full rights to root and Alice user.
Give public access to files in my bucket from only specific referers (my websites).
Note:
I manually made all the objects 'public' and my settings for Block Public Access are all set to Off.
Can anyone see any obvious errors in my bucket policy?
I don't understand why my policy seems to only work for some files.
Thank you so much
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::MY_BUCKET/*",
      "Condition": {
        "StringNotLike": {
          "aws:Referer": [
            "https://mywebsite1.com/*",
            "https://mywebsite2.com/*"
          ]
        }
      }
    },
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::MY_BUCKET/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "https://mywebsite1.com/*",
            "https://mywebsite2.com/*"
          ]
        }
      }
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::426873019732:root",
          "arn:aws:iam::426873019732:user/alice"
        ]
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::MY_BUCKET",
        "arn:aws:s3:::MY_BUCKET/*"
      ]
    }
  ]
}
Controlling access via aws:Referer is not secure. It can be overcome quite easily. A simple web search will provide many tools that can accomplish this.
The more secure method would be:
Keep all objects in your Amazon S3 bucket private (do not "Make Public")
Do not use a Bucket Policy
Users should authenticate to your application
When a user wishes to access one of the videos, or when your application creates an HTML page that refers/embeds a video, the application should determine whether the user is entitled to access the object.
If the user is entitled to access the object, the application creates an Amazon S3 pre-signed URL, which provides time-limited access to a private object.
When the user's browser requests to retrieve the object via the pre-signed URL, Amazon S3 will verify the contents of the URL. If the URL is valid and the time limit has not expired, Amazon S3 will return the object (eg the video). If the time has expired, the contents will not be provided.
The pre-signed URL can be created in a couple of lines of code and does not require an API call back to Amazon S3.
The benefit of using pre-signed URLs is that your application determines who is entitled to view objects. For example, a user could choose to share a video with another user. Your application would permit the other user to view this shared video. It would not require any changes to IAM or bucket policies.
See: Amazon S3 pre-signed URLs
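For example, a minimal sketch with boto3, where the bucket and key names are placeholders:
import boto3

s3 = boto3.client("s3")

# Generate a time-limited URL for a private object. The signing
# happens locally, so no call is made to Amazon S3 at this point.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "MY_BUCKET", "Key": "videos/intro.mp4"},
    ExpiresIn=300,  # the link expires after five minutes
)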
Also, if you wish to grant access to an Amazon S3 bucket to specific IAM Users (that is, users within your organization, rather than application users), it is better to grant access on the IAM User than via an Amazon S3 bucket policy. If there are many users, you can create an IAM Group that contains multiple IAM Users, and then put the policy on the IAM Group. Bucket Policies should generally be used for granting access to "everyone" rather than to specific IAM Users.
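A sketch of what such a group policy might look like, assuming the internal users only need to list and read the bucket (the bucket name is a placeholder):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::MY_BUCKET"
    },
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::MY_BUCKET/*"
    }
  ]
}
Because this is an identity-based policy attached to the group, no Principal element is needed.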
In general, it is advisable to avoid using Deny policies since they can be difficult to write correctly and might inadvertently deny access to your Admin staff. It is better to limit what is being Allowed, rather than having to combine Allow and Deny.

Granting tens of thousands of AWS accounts access to a bucket?

We are a humble startup that mines data from the entire Internet and puts it in an Amazon S3 bucket to share with the world. For now we have 2TB of data, and soon we may reach the 20TB mark.
Our subscribers will be able to download all the data from the Amazon S3 bucket we have. Apparently we have to opt for Requester Pays for the bandwidth, unless we want to end up with some heartbreaking S3 bills.
Pre-signed URLs are not an option because they don't appear to allow auditing bandwidth usage in real time, and are thus vulnerable to download abuse.
After some research this seems to be the way to grant different AWS accounts the needed permissions to access our bucket:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Permissions to foreign account 1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ForeignAccount-ID-1:root"
      },
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::ourbucket"
      ]
    },
    {
      "Sid": "Permissions to foreign account 2",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ForeignAccount-ID-2:root"
      },
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::ourbucket"
      ]
    },
    {
      "Sid": "Permissions to foreign account 3",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ForeignAccount-ID-3:root"
      },
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::ourbucket"
      ]
    },
    ......
  ]
}
where ForeignAccount-ID-x is a subscriber's account ID, e.g. 2222-2222-2222.
However the issue is, we may potentially have tens of thousands or even more subscribers to this bucket.
Is this the right and efficient way to add permissions for them to access this bucket?
Would it pose any performance difficulties to this bucket considering each request would go through this mountainous bucket policy?
Any better solutions for this problem?
Your requirement for Amazon S3 Requester Pays Buckets is understandable, but leads to other limitations.
Users will need their own AWS account to authenticate; it will not work with federated logins such as Amazon Cognito. Pre-signed URLs are of no benefit either, because they too are generated from an AWS account.
Bucket policies are limited to 20KB and ACLs are limited to 100 grants.
So, this approach seems unlikely to work.
Another option would be to create a mechanism where your system can push content to another user's AWS account. They would need to provide a destination bucket and some form of access (eg an IAM Role that can be assumed) and your application could copy files to their bucket. However, this could be difficult for regularly-published data.
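A sketch of what that push mechanism might look like with boto3, assuming each subscriber provides a role your account may assume; all ARNs, bucket names, and keys below are placeholders:
import boto3

# Assume the role the subscriber created for us (placeholder ARN).
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::222222222222:role/DatasetDelivery",
    RoleSessionName="dataset-push",
)["Credentials"]

# Build an S3 client using the temporary credentials, then upload
# the data file into the subscriber's own bucket.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
s3.upload_file("data/latest.csv", "subscriber-bucket", "data/latest.csv")
The subscriber's role would only need s3:PutObject on their destination bucket, and the storage costs land on their side.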
Another option would be to allow access to the content only from within the same AWS Region. Thus, users would be able to read and process the data in AWS using services such as Amazon EMR. They could write applications on EC2 that access the data in Amazon S3. They would be able to copy the data to their own buckets. The only thing they cannot do is access the data from outside AWS. This would eliminate Data Transfer costs. The data could even be provided in multiple regions to serve worldwide users.
A final option would be to propose your dataset to the AWS Public Dataset Program, which will cover the cost of storage and data transfer for "publicly available high-value cloud-optimized datasets".

Bucket policy denying S3:DeleteBucket and S3:DeleteObject still deletes objects

I've applied the following bucket policy to a my-bucket.myapp.com S3 bucket:
{
  "Version": "2008-10-17",
  "Id": "PreventAccidentalDeletePolicy",
  "Statement": [
    {
      "Sid": "PreventAccidentalDelete",
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "s3:DeleteBucket",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::my-bucket.myapp.com",
        "arn:aws:s3:::my-bucket.myapp.com/*"
      ]
    }
  ]
}
Then in the console, when I attempt to delete the bucket (right-click, Delete) I get the error I'm expecting: Access Denied.
BUT, and here's the rub: it still deletes all of the objects that are in the bucket.
Why does this happen?
And it even happens with a versioned bucket. It just wipes all the versions and the objects are GONE.
Recommended best practice is not to use the root account apart from creating your initial IAM user, so that you can add restrictions to prevent exactly this kind of incident. Since someone may have a use-case that needs this behaviour programmatically, AWS does not want to put limits into the system as "safeguards". It's up to the user to follow best practice and implement the safeguards applicable to their situation.
The exact process by which Amazon authorizes actions on S3 objects is described here: http://docs.aws.amazon.com/AmazonS3/latest/dev/how-s3-evaluates-access-control.html
Section 2.A of that document describes the behaviour applied to the root account in the user context: "If the request is made using root credentials of an AWS account, Amazon S3 skips this step."

S3 IAM Policy to access other account

We need to create an IAM user that is allowed to access buckets in our clients' AWS accounts (provided that they have granted us access to those buckets).
We have created an IAM user in our account with the following inline policy:
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:PutObjectAcl",
        "s3:ListMultipartUploadParts",
        "s3:PutObject",
        "s3:ListBucketMultipartUploads",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::*"
    }
  ]
}
In addition to this, we will request that our clients use the following policy and apply it to their relevant bucket:
{
  "Version": "2008-10-17",
  "Id": "Policy1416999097026",
  "Statement": [
    {
      "Sid": "Stmt1416998971331",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::229569340673:user/our-iam-user"
      },
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:PutObjectAcl",
        "s3:ListMultipartUploadParts",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::client-bucket-name/*"
    },
    {
      "Sid": "Stmt1416999025675",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::229569340673:user/our-iam-user"
      },
      "Action": [
        "s3:ListBucketMultipartUploads",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::client-bucket-name"
    }
  ]
}
Whilst this all seems to work fine, the one major issue we have discovered is that our own inline policy also gives our-iam-user full access to all of our own internal buckets.
Have we mis-configured something, or are we missing something else obvious here?
According to AWS support, this is not the right way to approach the problem:
https://forums.aws.amazon.com/message.jspa?messageID=618606
I am copying the answer from them here.
AWS:
The policy you're using with your IAM user grants access to any Amazon S3 bucket. In this case this will include any S3 bucket in your account and any bucket in any other account, where the account owner has granted your user access. You'll want to be more specific with the policy of your IAM user. For example, the following policy will limit your IAM user access to a single bucket.
You can also grant access to an array of buckets, if the user requires access to more than one.
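(The example policy itself did not survive the copy from the forum; presumably it had the same shape as the inline policy above, scoped to a named bucket, something like:)
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:PutObjectAcl",
        "s3:ListMultipartUploadParts",
        "s3:PutObject",
        "s3:ListBucketMultipartUploads",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::client-bucket-name",
        "arn:aws:s3:::client-bucket-name/*"
      ]
    }
  ]
}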
Me:
Unfortunately, we don't know beforehand all of our client's bucket names when we create the inline policy. As we get more and more clients to our service, it would be impractical to keep adding new client bucket names to the inline policy.
I guess another option is to create a new AWS account used solely for the above purpose - i.e. this account will not itself own anything, and will only ever be used for uploading to client buckets.
Is this acceptable, or are there any other alternatives options open to us?
AWS:
Having a separate AWS account would provide clear security boundaries. Keep in mind that if you ever create a bucket in that other account, the user would inherit access to any bucket if you grant access to "arn:aws:s3:::*".
Another approach would be to use blacklisting (note that whitelisting, as suggested above, is a better practice).
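(Again, the example policy was lost in the copy; the shape would presumably be a broad Allow followed by an explicit Deny, something like the sketch below, where the internal bucket names are placeholders:)
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Deny",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::our-internal-bucket-1",
        "arn:aws:s3:::our-internal-bucket-1/*",
        "arn:aws:s3:::our-internal-bucket-2",
        "arn:aws:s3:::our-internal-bucket-2/*"
      ]
    }
  ]
}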
As you can see, the 2nd statement explicitly denies access to an array of buckets. This will override the Allow in the first statement. The disadvantage here is that by default the user will inherit access to any new bucket. Therefore, you'd need to be diligent about adding new buckets to the blacklist. Either approach will require you to maintain changes to the policy. Therefore, I recommend my previous policy (aka whitelisting), where you only grant access to the S3 buckets that the user requires.
Conclusion
For our purposes, the whitelisting/blacklisting approach is not acceptable because we don't know in advance all the buckets that will be supplied by our clients. In the end, we went the route of creating a new AWS account with a single user, and that account has no S3 buckets of its own.
The policy you grant to your internal user gives that user access to all S3 buckets for the APIs listed (the first policy in your question). This is unnecessary, as your clients' bucket policies will grant your user the privileges required to access their buckets.
To solve your problem, remove the user policy, or explicitly list your clients' buckets in the allowed [Resource] instead of using "*".

IAM Policy to list specific folders inside a S3 bucket for an user

I have the keys below under the bucket demo.for.customers:
demo.for.customers/customer1/
demo.for.customers/customer2/
Now I have 2 customers namely customer1 and customer2. This is what I want:
Grant them access to only demo.for.customers bucket.
customer1 should be able to access only demo.for.customers/customer1/ and customer2 should be able to access only demo.for.customers/customer2/.
And I am able to achieve this with the policy below. (I am creating a policy for each customer, hence I am pasting only the one for customer1.) I have defined this policy in IAM, not in S3.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": ["s3:ListAllMyBuckets", "s3:GetBucketLocation"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::*"]
    },
    {
      "Action": ["s3:ListBucket"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::demo.for.customers"],
      "Condition": {
        "StringEquals": {
          "s3:prefix": ["", "customer1/"],
          "s3:delimiter": ["/"]
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": ["s3:*"],
      "Resource": ["arn:aws:s3:::demo.for.customers/customer1/*"]
    }
  ]
}
Problem:
Customer1 is able to see all my buckets, although he can't access any of them. I don't want this; he should be able to see only demo.for.customers.
Customer1 is able to see demo.for.customers/customer2 as well, although he can't access it. This is highly unacceptable, as I don't want him to even see what other customer folders I have under this bucket.
QUESTIONS:
After doing a lot of googling, I came to know that there is no way to list specific buckets. Is this really true?
However, I have to find a way to list only specific folders inside a bucket for a given user. How to do that?
Thanks.
Regarding your problems:
Unfortunately there is no way to list only certain buckets. If the intent is just to allow access to the one known bucket, I would remove the first statement entirely as it does not add any value (the bucket is already known and would not need to be listed).
Can you show the code you are using to list the bucket contents? Based on what you've shown here I would expect customer1 to only be able to list the bucket contents at the root of their prefix and nowhere else.
Regarding your questions:
Yes, there is no way to list certain buckets. The list buckets API is an all or nothing operation.
This is done by prefix. What language are you using? We have a sample for the AWS Mobile SDKs that uses a Token Vending Machine to deliver per-user access to an S3 bucket.
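On the second problem specifically: the ListBucket statement above also allows an empty prefix ("" in the condition), and listing the bucket root is what exposes the customer2/ folder name. If your clients can be pointed straight at their own folder, a tighter statement (a sketch, untested) drops the empty prefix so customer1 can only list under customer1/:
{
  "Action": ["s3:ListBucket"],
  "Effect": "Allow",
  "Resource": ["arn:aws:s3:::demo.for.customers"],
  "Condition": {
    "StringLike": {
      "s3:prefix": ["customer1/", "customer1/*"]
    }
  }
}
The trade-off is that a root-level listing (prefix "") will now be denied, so tools that browse from the top of the bucket will fail rather than show an empty list.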