Say you want to host a static website on S3:
You create a bucket named your-website.com and enable static website hosting;
You add a CNAME in your domain's zone file to point to your S3 bucket.
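For illustration, the CNAME record might look like this (the TTL is a placeholder, and the region stays whatever your bucket uses; note that a CNAME at the zone apex is not valid DNS, which is why many providers offer ALIAS/ANAME records for that case):

```
your-website.com.  300  IN  CNAME  your-website.com.s3-website.your-region.amazonaws.com.
```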
Great. Everything works fine when you visit http://your-website.com. But you don't want the raw/"naked" endpoint to be accessible.
Is there any setting in the bucket to disable direct access to http://your-website.com.s3-website.your-region.amazonaws.com?
The reason is that having your website accessible through both http://your-website.com and http://your-website.com.s3-website.your-region.amazonaws.com would hurt your SEO (duplicate content).
You mention your major concern is SEO. For that purpose, you could use other techniques that are probably easier to implement than the one you initially asked about.
One of the main techniques to deal with duplicate content is to use rel=canonical, which is probably fairly easy to implement. For more information, see http://googlewebmastercentral.blogspot.com.br/2013/04/5-common-mistakes-with-relcanonical.html
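For example, each page served from the S3 endpoint could tell search engines which URL is authoritative with a tag like this (the URL below is a placeholder):

```html
<head>
  <!-- points search engines at the canonical URL for this content -->
  <link rel="canonical" href="http://your-website.com/some-page.html">
</head>
```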
If you insist on disabling access to the bucket unless the client connects through your CNAME, your best bet is to use CloudFront:
- disable the S3 website hosting option on your bucket;
- make the S3 bucket private (i.e., remove any bucket policies or ACLs allowing public read);
- create a CloudFront distribution and define your bucket as the origin;
- configure a CNAME on your distribution;
- change your DNS records to point to your distribution instead of the bucket;
- create an Origin Access Identity (OAI) on your distribution and grant that OAI access to your bucket.
Phew.
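A minimal bucket policy for that last step might look like this (the OAI ID and bucket name are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontOAIRead",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E2EXAMPLE"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::your-website.com/*"
    }
  ]
}
```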
By doing all this, there is no way for a user to access the content in your S3 bucket directly (unless, obviously, they have an access key/secret key with permission to read the bucket and send a signed request). The only way in will be through your domain.
For more detail on Origin Access Identity, see http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html
Related
How do I make an AWS S3 bucket public but restrict it to a specific domain and localhost:3000 (for testing purposes)?
Basically, the S3 files will be accessed by a React.js website, and we don't want the files to be accessible outside the www.example.com domain and localhost:3000.
I tried a couple of things, but they don't seem to work.
Bucket policy: not configured, and I'm not sure what to specify.
Let me know what changes are needed to make this work.
How to make an AWS S3 bucket public but restrict it to a specific domain
It's not possible. At best you could restrict access to a specific HTTP referer, but that's not bulletproof. AWS writes:
Therefore, do not use aws:Referer to prevent unauthorized parties from making direct AWS requests.
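For completeness, such a referer-based policy looks roughly like this (bucket name and domains are placeholders, matching the ones in the question); keep in mind the Referer header is trivially spoofed:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowGetFromReferer",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::your-bucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "https://www.example.com/*",
            "http://localhost:3000/*"
          ]
        }
      }
    }
  ]
}
```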
You need a proper authentication mechanism, and to place your website behind a login screen, if you want to control access to it.
We are implementing a static site hosted in S3 behind CloudFront with OAI. Both CloudFront and S3 can have logging enabled and I'm trying to determine whether it is necessary/practical to have logging enabled for both.
The only S3 access that would not come through CloudFront should be site deployments from our CI/CD pipeline, which may still be useful to log. For that same reason, S3 logs could also reveal any unintended access that came from neither CloudFront nor deployments. The downside, of course, is that there would be two sets of access logs that mostly overlap, adding to the monthly bill.
We will need to make a decision on this, but curious what the consensus is out there.
Thanks,
John
If you are using CloudFront with an Origin Access Identity (OAI), then your bucket can be private to the world.
Only the OAI, and any other principals you explicitly allow, will have read access; everyone else is denied access to the bucket and the files that make up the static website.
This means users around the world must come through CloudFront; direct access to the S3 bucket and its files will be denied.
So if you have implemented it right, you do not strictly need S3 access logging enabled.
However, the value of logging is often only known once you face an incident, so weigh the cost and make a decision.
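If you do decide to keep S3 server access logging on as well, the payload for the s3api put-bucket-logging call is small (bucket name and prefix below are placeholders):

```json
{
  "LoggingEnabled": {
    "TargetBucket": "my-log-bucket",
    "TargetPrefix": "s3-access/"
  }
}
```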
References:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html
I want to connect a CDN to an AWS S3 bucket, but the AWS documentation indicates that the bucket name must be the same as the CNAME. Therefore, it is very easy for others to guess the real S3 bucket URL.
For example,
- My domain: example.com
- My S3 Bucket name: image.example.com
- My CDN CNAME (image.example.com) points to image.example.com.s3.amazonaws.com
After that, people can access the CDN URL -> http://image.example.com to obtain the resources from my S3 bucket. However, under this restriction, people can easily guess my real S3 bucket URL from the CNAME (CNAME + s3.amazonaws.com).
So my question is: how can I hide my real S3 bucket URL? I don't want to expose it to anyone, to prevent attacks.
I am not sure I understand what you are asking for or what you are trying to do [hiding your bucket does not really help anything], but I will attempt to answer your question regarding "hiding" your bucket name. Before I answer, I would like to ask these two questions:
Why do you want to hide your S3 bucket url?
What kind of attacks are you trying to prevent?
You are correct that the S3 bucket name used to have to match your URL. This is no longer a requirement, as you can mask the S3 bucket using CloudFront. CloudFront, as you know, is AWS's CDN. The bucket name can thus be anything (e.g., a random string).
You can restrict access to the bucket so that only CloudFront can reach it. Data in the bucket is then replicated to edge locations and served from there. Even if someone knows the S3 URL, it will do them no good: access to the S3 bucket is restricted, and a bucket policy grants access to CloudFront and no one else.
Access restriction is done via origin access control, and while you can configure this manually with a bucket policy, you can also set a flag in CloudFront to do it on your behalf. More information is available here.
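With the newer origin access control, the generated bucket policy looks roughly like this (bucket name, account ID, and distribution ID below are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontServicePrincipal",
      "Effect": "Allow",
      "Principal": { "Service": "cloudfront.amazonaws.com" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::randomstring/*",
      "Condition": {
        "StringEquals": {
          "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE"
        }
      }
    }
  ]
}
```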
Use the CloudFront name in Route 53. Do not use a CNAME record; use an A record set up as an alias. For more information see this document.
If you are using a different DNS provider, AWS aliases will naturally not be available. I suggest moving the zone file from your other provider to AWS. If you cannot do this, then you can still use a CNAME. Again see here for more information.
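As a sketch, such an alias record looks like this in a Route 53 change batch (the distribution domain is a placeholder; Z2FDTNDATAQYW2 is the fixed hosted zone ID used for CloudFront alias targets):

```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "image.example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "d111111abcdef8.cloudfront.net",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```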
I suggest using your own domain name for CloudFront and setting up HTTPS. AWS offers certificates at no additional cost for use with AWS services. You can request a certificate for your domain name, validated either by a DNS entry or by email. To set this up, please see this document.
If you want to restrict access to specific files within AWS, you can use signed URLs. More information about that is provided here.
This is more of a theoretical question for AWS S3 website hosting.
Say I have a website hosted in S3. Obviously I want the content to be public, but I don't want people to be able to download the backend scripts, images, and CSS simply by changing the URL. I want to hide those folders, but if I deny GetObject access to them in the bucket policy, the application "breaks" because it can't reach those folders.
How can I secure my content to ensure the most security for my backend when it sits in an S3 bucket?
You need to serve the website via CloudFront with restricted origin access, better known as an Origin Access Identity (OAI). This gives the CloudFront distribution access to the S3 bucket.
More details can be found in the AWS docs: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html#private-content-creating-oai
In my application we have to open some PDF files in a new tab when the user clicks an icon, using a direct S3 bucket URL like this:
http://MyBucket.s3.amazonaws.com/Certificates/1.pdf?AWSAccessKeyId=XXXXXXXXXXXXX&Expires=1522947975&Signature=XXXXXXXXXXXXXXXXX
Somehow I feel this is not secure, as the user can see the bucket name, AWSAccessKeyId, Expires, and Signature. Is this still considered secure? Or is there a better way to handle this?
Allowing the user to see these parameters is not a problem, because:
- the AWSAccessKeyId can be public (do not confuse it with the SecretAccessKey);
- Expires and Signature are computed with your SecretAccessKey, so no one can manipulate them (AWS validates the signature against your secret key).
Since your objects and the bucket itself are not public, it is fine for the user to know your bucket name; a valid signature will always be needed to access the objects.
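To see why the Signature cannot be forged, here is a sketch of the legacy query-string signing scheme that produces URLs shaped like the one in the question (the bucket, key, and credentials below are made up; current SDKs use Signature Version 4 instead, but the idea is the same):

```python
import base64
import hashlib
import hmac
import time
from urllib.parse import quote

def legacy_presigned_url(bucket, key, access_key, secret_key, expires_in=3600):
    """Build a legacy (SigV2-style) S3 presigned GET URL.

    The Signature is an HMAC-SHA1 over the request description, keyed with
    the SecretAccessKey, so anyone who tampers with Expires (or anything
    else) invalidates it without ever learning the secret.
    """
    expires = int(time.time()) + expires_in
    # Legacy string-to-sign: verb, Content-MD5, Content-Type, Expires, resource
    string_to_sign = f"GET\n\n\n{expires}\n/{bucket}/{key}"
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(), hashlib.sha1).digest()
    signature = quote(base64.b64encode(digest).decode(), safe="")
    return (f"https://{bucket}.s3.amazonaws.com/{key}"
            f"?AWSAccessKeyId={access_key}&Expires={expires}&Signature={signature}")

# Hypothetical credentials for illustration only
url = legacy_presigned_url("MyBucket", "Certificates/1.pdf", "AKIAEXAMPLE", "not-a-real-secret")
print(url)
```

Only the access key ID and the derived signature appear in the URL; the secret key never leaves the server.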
But I have two suggestions for you: 1. use your own domain, so the bucket name is not visible (you can use the free SSL certificates provided by AWS if you use CloudFront); 2. use HTTPS instead of plain HTTP.
And if for any reason you absolutely don't want your users to see the AWS parameters, I suggest proxying access to S3 through your own API (though I consider that unnecessary).
I see you access over HTTP (no SSL). You can do virtual hosting with S3 for multiple domains:
https://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html
and create signed URLs based on your domain, and you are good to go.
If you are using SSL, you can use CloudFront and configure the CloudFront origin to point to your S3 bucket.
Hope it helps.