I have a static website hosted on an AWS S3 bucket. I am using a few different APIs (Google, Trello, etc.) and I am not sure how to keep some of these keys private, since I set up my bucket with the PublicReadForGetBucketObjects policy, which makes the entire website public. I have looked into AssumeRoleWithWebIdentity and permissions to restrict access, but I still cannot figure out how to make one of my files private. It seems to me that this is probably something easy, but I cannot find a way.
Here is what my bucket policy looks like:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "PublicReadForGetBucketObjects",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::[my bucket]/*"
    }
  ]
}
Thanks
The policy you have listed can apply permissions based on path. For example, setting the Resource to arn:aws:s3:::[my bucket]/public/* would make only the public sub-directory public (or, more accurately, any path that starts with /public/).
Similarly, a policy can also define a path to specifically Deny, which will override the Allow (so you could make everything public but then specifically deny certain files and paths).
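For illustration, here is a minimal sketch combining both statements; the /secrets/ path is a hypothetical placeholder for whatever you want to hide:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadForGetBucketObjects",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::[my bucket]/*"
    },
    {
      "Sid": "DenyPrivatePaths",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::[my bucket]/secrets/*"
    }
  ]
}

Note that an explicit Deny with Principal "*" also applies to your own IAM users, so scope it carefully if you still need to fetch those objects yourself.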
However, you mention that you would like to keep some files private, yet this is a static website with no compute component. It would not be possible for only 'some' of your website to access the desired objects, since all the logic takes place in your users' browsers rather than on a web server you control. A file is therefore either public or private, and the private files could not be accessed as part of the static website, which might not be what you are trying to achieve.
My web app allows different users to upload different files. Currently I am putting them all in one bucket, something like:
A12345-Something.zip
B67890-Lorem.zip
A12345-... is a file uploaded by user ID A12345.
B67890-... is a file uploaded by user ID B67890.
This is my S3 bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::xxxx/*"
    }
  ]
}
So far, this is all good: user A12345 can download the zip file by accessing https://xxxx.s3.ap-south-1.amazonaws.com/A12345-Something.zip
But the AWS console gives me a warning that this is a public bucket and that it is highly recommended not to make it public.
I am not sure, but it would indeed be very wrong if the policy above allowed someone to list all objects from all users in my bucket and then access them one by one.
I think I need a policy that only allows reading a specific object when the full path is provided (assuming only that user has access to that full path), but disallows listing of objects.
What should the policy look like?
The policy you've specified allows anyone to get any object, which means that if they have the full path they can retrieve that file publicly in the browser.
The s3:ListBucket permission (which governs the ListObjects API call) is what would allow people to list all of the objects in your S3 bucket, and it is not included in your policy.
If only specific users should be accessing this content, you should look at using signed URLs instead; this prevents someone guessing, or somehow gaining access to, a link you do not want them to have.
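As an illustration, here is a minimal sketch of generating a presigned URL with the AWS SDK for JavaScript v3; the bucket, key, and region are the examples from your question, and the expiry time is an arbitrary choice:

import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const client = new S3Client({ region: "ap-south-1" });

// Returns a time-limited download link for one user's file.
export async function signedDownloadUrl(): Promise<string> {
  const command = new GetObjectCommand({
    Bucket: "xxxx",               // the bucket name from the question
    Key: "A12345-Something.zip",  // the object owned by user A12345
  });
  // The URL works for 15 minutes, then expires; the bucket stays private.
  return getSignedUrl(client, command, { expiresIn: 900 });
}

Your server would generate this URL after authenticating the user, so the bucket itself can block all public access.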
This warning is in place to protect sensitive data from being left exposed to the world, which in recent times has caused large volumes of private company data to be leaked.
I created an Amazon S3 bucket to store only images from my website. I have more than 1 million images, all with public read access. Every time I log in, Amazon gives me this warning:
"This bucket has public access
You have provided public access to this bucket. We highly recommend that you never grant any kind of public access to your S3 bucket. "
I'm using the following bucket policy to allow images to be shown only on my site:
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests originated from www.example.com and example.com.br",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket.com/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "http://www.example.com/*",
            "http://www.example.com.br/*",
            "https://www.example.com/*",
            "https://www.example.com.br/*"
          ]
        }
      }
    }
  ]
}
How can I revoke the public access to the bucket and to my files and grant it only to my sites?
Thank you!
It's a scary warning meant to prevent people from leaking data unintentionally. There have been lots of cases in the news lately about companies accidentally setting permissions that allow public reads.
In your case you really do want these files to be publicly readable, so you can just ignore the warning. Your policy looks fine and still matches the documentation for public hosting.
You could theoretically put these images behind another server that streams them to the user if you really don't want anyone to be able to download them directly. That's not really any more secure, though.
If you do not want these files to be publicly available at all, just delete this policy from your bucket. In that case your website will not be able to serve the images.
Your policy looks good. You are providing a higher level of security than plain public access, via the Referer header, and you are not allowing the listing of objects.
Using S3 to provide common files such as CSS, JS and images is just so easy. However, with all of the accidental security problems, I usually recommend one of these approaches:
Turn on static website hosting for the bucket. This makes it very clear to future admins that the bucket is intended for public files. Also, I do not see the big warning messages for these buckets. Enable redirect requests.
Better, turn off all public access and use CloudFront with an Origin Access Identity. You receive all the benefits of CloudFront, tighter security, etc. (a sketch of the matching bucket policy follows the link below).
Using an Origin Access Identity to Restrict Access to Your Amazon S3 Content
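For reference, this is roughly the bucket policy that pairs with an OAI; E2EXAMPLE51Z is a placeholder identity ID and the bucket name is reused from the policy above:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontOAIReadOnly",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E2EXAMPLE51Z"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket.com/*"
    }
  ]
}

With this in place, objects are only reachable through your CloudFront distribution, never directly from the bucket.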
Suppose a public bucket exists that has a very similar name to my private bucket. I want to prevent a user from misspelling the private bucket's name and accidentally posting sensitive data to the public one.
I understand that it would be best practice to make the bucket name as unique as possible.
Clarification: I want to prevent a user from posting to ANY public S3 bucket
Publicly writable buckets are very rare. In fact, they are highly discouraged, both from a security perspective and from a cost perspective: somebody could upload illegal files and use the bucket for file sharing, and the bucket owner would pay for it!
I would normally say that the chance of somebody successfully uploading to a random bucket is practically zero, but I suspect you are thinking of a case where an evil party creates a similarly-named bucket in the hope of collecting confidential data (similar to domain-name squatting).
In that case, you can attach a Deny policy to the user that prohibits access to ALL S3 buckets except the ones you specifically nominate:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Allow",
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::good-bucket1",
        "arn:aws:s3:::good-bucket1/*",
        "arn:aws:s3:::good-bucket2",
        "arn:aws:s3:::good-bucket2/*"
      ]
    },
    {
      "Sid": "NotOthers",
      "Effect": "Deny",
      "Action": [
        "s3:*"
      ],
      "NotResource": [
        "arn:aws:s3:::good-bucket1",
        "arn:aws:s3:::good-bucket1/*",
        "arn:aws:s3:::good-bucket2",
        "arn:aws:s3:::good-bucket2/*"
      ]
    }
  ]
}
This will work because the Deny attached to the IAM user overrides any Allow in a bucket policy. Note that each bucket appears twice: the bucket ARN covers bucket-level actions such as listing, while the /* ARN covers the objects themselves. The only downside is that you will need to specifically list the buckets you wish to include/exclude, because there is no way to write a rule that applies to "any public bucket".
You have no control over the other bucket, so you can't prevent this from happening.
To respond to it, I suppose you could periodically query that bucket (assuming it is publicly readable) in search of content that should have been uploaded to your bucket, though it's not clear what you would do at that point.
Alternatively, provide your users with an upload page (perhaps statically hosted in your S3 bucket) and ask them to use that page to initiate uploads (via POST or the AWS S3 JavaScript SDK), so users never have to type a bucket name and hence cannot accidentally target the wrong bucket. A sketch of such an upload helper follows.
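For example, a minimal sketch with the AWS SDK for JavaScript v3; the bucket name, region, and key scheme here are hypothetical placeholders:

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

// The correct bucket name is hard-coded exactly once, so users never type it
// and cannot mistakenly send data to a look-alike bucket.
const BUCKET = "my-private-bucket";                    // placeholder bucket name
const client = new S3Client({ region: "ap-south-1" }); // placeholder region

export async function uploadForUser(userId: string, fileName: string, body: Uint8Array): Promise<void> {
  await client.send(new PutObjectCommand({
    Bucket: BUCKET,
    Key: `uploads/${userId}/${fileName}`,              // hypothetical key scheme
    Body: body,
  }));
}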
I have a sub-directory under a bucket with ~7000 sub-directories, and each of those sub-directories may have 1-100 files.
I need the files to be public, but I don't want anyone to be able to see the list of sub-directories, or even the list of files under a given directory.
I know I can set the ACL for the files to read-only, and then I think I can set the directory to private. But for this many files, I'm hoping there is an easier solution?
To allow everyone to get objects, but not allow anyone to list objects, you can apply a bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": [
        "arn:aws:s3:::mybucket/myfolder1/*",
        "arn:aws:s3:::mybucket/myfolder2/*"
      ]
    }
  ]
}
Note that anyone who discovers the URL of an object can retrieve that object.
The short answer is that you don't need to do anything special.
Just make your bucket public and give out links to the individual files.
S3 does not actually have directories and files; it is just a key -> object mapping. The "directories" are just a naming convention to help people; internally there is no hierarchy, as the listing sketch below illustrates.
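To see this under assumed names (mybucket, myfolder1/, us-east-1 are placeholders), here is how a client emulates directory browsing with the AWS SDK for JavaScript v3; S3 itself only matches key prefixes:

import { S3Client, ListObjectsV2Command } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });  // placeholder region

// "Directories" are emulated: Delimiter tells S3 to group keys by prefix.
export async function listFolder(): Promise<void> {
  const res = await client.send(new ListObjectsV2Command({
    Bucket: "mybucket",      // placeholder bucket name
    Prefix: "myfolder1/",    // the "directory" being browsed
    Delimiter: "/",
  }));
  console.log(res.CommonPrefixes);  // the apparent sub-directories
  console.log(res.Contents);        // the objects directly under the prefix
}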
There are 2 scenarios:
Users not going through AWS auth (hitting the public URL for the object) will not be able to see the bucket structure. Disable Read access for the Everyone group on the bucket itself, then enable Read access on the individual files within the bucket.
If you need to provide AWS credentials, grant only the s3:GetObject permission (http://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html).
For people with this issue who are still working with an empty bucket: what worked for me was to allow public ACLs for the bucket (from the public access configuration tab) and then pass { ACL: "public-read" } when uploading each file with the SDK's upload method, as sketched below. This way new files get public links, but the bucket itself stays restricted. The bad part is that previously uploaded files remain without the public-read ACL; the documentation says you can assign ACLs in batches, so maybe that is an option for non-empty buckets too.
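A minimal sketch of that upload call using the v3 SDK's PutObjectCommand; the bucket name, region, and key are hypothetical (the v2 equivalent is s3.upload({ ..., ACL: "public-read" })):

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });  // placeholder region

export async function uploadPublicImage(body: Uint8Array): Promise<void> {
  await client.send(new PutObjectCommand({
    Bucket: "my-bucket",        // placeholder: a bucket that allows public ACLs
    Key: "images/photo.jpg",    // placeholder key
    Body: body,
    ACL: "public-read",         // each new object gets its own public link
  }));
}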
I have a static website created with Amazon S3. The only permissions I have set are through the bucket policy provided in Amazon's tutorial:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Allow Public Access to All Objects",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example.com/*"
    }
  ]
}
Clearly, this policy enables the public to view any file stored in my bucket, which is what I want. My question is: is this policy alone enough to prevent other people from uploading files and/or hijacking my website? I wish for the public to be able to access any file in the bucket, but I want to be the only one with list, upload, and delete permissions. Is this the current behavior of my bucket, given that my bucket policy only addresses view permissions?
Have a look at this: http://docs.aws.amazon.com/IAM/latest/UserGuide/AccessPolicyLanguage_EvaluationLogic.html#policy-eval-basics
From that document:
When a request is made, the AWS service decides whether a given request should be allowed or denied. The evaluation logic follows these rules:
By default, all requests are denied. (In general, requests made using the account credentials for resources in the account are always allowed.)
An explicit allow overrides this default.
An explicit deny overrides any allows.
So as long as you don't explicitly allow other access, you should be fine. I have a static site hosted on S3 and I have the same access policy.