On Amazon S3, you can restrict access to buckets by domain.
But as far as I understand from a helpful StackOverflow user, you cannot do this on CloudFront. Why not? If I am correct, CloudFront only allows time-based restrictions or IP restrictions (--> so I would need to know the IPs of random visitors..?). Or am I missing something?
Here is a quote from S3 documentation that suggests that per-domain restriction is possible:
---> "To allow read access to these objects from your website, you can add a bucket policy that allows s3:GetObject permission with a condition, using the aws:referer key, that the get request must originate from specific webpages."
--> Is there a way to make this method work on CloudFront as well? Or why is something like this not available on CloudFront?
--> Is there a similar service where this is possible and easier to set up?
Using CloudFront together with AWS WAF (Web Application Firewall), you can restrict requests based on IP address, referrer, or domain.
Here is an AWS blog tutorial on preventing "hotlinking":
https://blogs.aws.amazon.com/security/post/Tx2CSKIBS7EP1I5/How-to-Prevent-Hotlinking-by-Using-AWS-WAF-Amazon-CloudFront-and-Referer-Checkin
In that example, WAF blocks requests whose Referer header does not match a specific domain.
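As a rough sketch of what such a rule looks like in AWS WAFv2, the following rule blocks any request whose Referer header does not start with the expected site URL. The rule name, metric name, and www.example.com are placeholders, and the exact JSON you need depends on your web ACL setup:

```json
{
  "Name": "block-hotlinking",
  "Priority": 0,
  "Statement": {
    "NotStatement": {
      "Statement": {
        "ByteMatchStatement": {
          "SearchString": "https://www.example.com/",
          "FieldToMatch": { "SingleHeader": { "Name": "referer" } },
          "TextTransformations": [ { "Priority": 0, "Type": "NONE" } ],
          "PositionalConstraint": "STARTS_WITH"
        }
      }
    }
  },
  "Action": { "Block": {} },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "block-hotlinking"
  }
}
```

The rule would be attached to a web ACL that is associated with the CloudFront distribution.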
Related
When you navigate to a file uploaded on S3, you'll see its URL in a format such as this (e.g. in this example the bucket name is example and the file is hello.txt):
https://example.s3.us-west-2.amazonaws.com/hello.txt
Notice that the region, us-west-2, is embedded in the domain.
I accidentally tried accessing the same URL without the region and noticed that it worked too:
https://example.s3.amazonaws.com/hello.txt
It seems much simpler to use these shorter URLs rather than the longer ones as I don't need to pass around the region.
Are there any advantages/disadvantages of excluding the region from the domain? Or are the two domains the same?
This is a deprecated feature of Amazon S3 known as global endpoints. Some regions support the global endpoint for backward-compatibility purposes. AWS recommends that you use the standard (regional) endpoint syntax going forward.
For regions that support the global endpoint, your request is redirected to the standard endpoint. By default, Amazon S3 routes global-endpoint requests to the us-east-1 region. For buckets in supported regions other than us-east-1, Amazon S3 updates the DNS record for future requests (note that DNS updates take 24-48 hours to propagate) and, in the meantime, redirects the request to the correct region with an HTTP 307 Temporary Redirect.
Are there any advantages/disadvantages of excluding the region from the domain? Or are the two domains the same?
The domains are not the same.
Advantages to using the legacy global endpoint: the URL is shorter.
Disadvantages: the request must be redirected and is, therefore, less efficient. Further, if you create a bucket in a region that does not support global endpoints, AWS will return an HTTP 400 Bad Request error response.
TLDR: It is a best practice to use the standard (regional) S3 endpoint syntax.
How do I make an AWS S3 bucket public but restrict it to a specific domain and localhost:3000 (for testing purposes)?
Basically, the S3 files will be accessed by a React website, and we don't want the files to be accessible outside the www.example.com domain and localhost:3000.
I tried a couple of things, but nothing seems to work.
Bucket policy: not configured, and I'm not sure what to specify.
Let me know the changes to be done to make it work.
How to make an AWS S3 bucket public but restrict it to a specific domain
It's not possible. At best you could restrict access to a specific HTTP Referer, but that is not bulletproof. AWS writes:
Therefore, do not use aws:Referer to prevent unauthorized parties from making direct AWS requests.
You need a proper authentication mechanism and should place your website behind a login screen if you want to control access to it.
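For completeness, a Referer-based bucket policy along the lines of the AWS documentation would look roughly like the following (example-bucket and the domains are placeholders; as noted above, the Referer header is trivially spoofed, so this is not a security boundary):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowGetFromKnownReferers",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "https://www.example.com/*",
            "http://localhost:3000/*"
          ]
        }
      }
    }
  ]
}
```

This only deters casual hotlinking; anyone can set the Referer header with curl or a script.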
I've set up a static site on AWS with Route 53, ACM, CloudFront, and S3. Although I can prevent direct access to the bucket's generated domain name via a bucket policy, so that access is only via my custom domain, e.g. www.example.com, I'm not sure how to do this for CloudFront; currently the website can also be accessed via a CloudFront domain name, e.g. 23324sdfff.cloudfront.net.
Is there a way to prevent access to the website via the cloudfront domain name so that traffic can only access the site directly via www.example.com?
I think you could achieve that using Lambda@Edge.
Specifically, you could create a function for the viewer-request event. The function would inspect the request and then decide whether to allow or deny it.
Sadly, I don't have a concrete example addressing your specific use case, but the AWS docs provide a number of examples that may be useful to you.
Maybe there is an easier way that doesn't involve Lambda, but at present I'm not aware of one.
I want to connect a CDN to an AWS S3 bucket, but the AWS documentation indicates that the bucket name must be the same as the CNAME. Therefore, it is very easy for others to guess the real S3 bucket URL.
For example,
- My domain: example.com
- My S3 Bucket name: image.example.com
- My CDN CNAME (image.example.com) will point to image.example.com.s3.amazonaws.com
After that, people can access the CDN URL (http://image.example.com) to obtain resources from my S3 bucket. But under this restriction, anyone can easily guess my real S3 bucket URL from the CNAME (CNAME + s3.amazonaws.com).
So, how can I hide my real S3 bucket URL? I don't want to expose it to anyone, in order to prevent attacks.
I am not sure I understand what you are asking for or what you are trying to do (hiding your bucket does not really help anything), but I will attempt to answer your question regarding "hiding" your bucket name. Before I answer, I would like to ask two questions:
Why do you want to hide your S3 bucket url?
What kind of attacks are you trying to prevent?
You are correct that the S3 bucket name used to have to match your URL. This is no longer a requirement, as you can mask the S3 bucket using CloudFront. CloudFront, as you know, is AWS's CDN, so the bucket name can be anything (e.g. a random string).
You can restrict access to the bucket such that only CloudFront can access it. Data in the bucket is then replicated to edge locations and served from there. Even if someone knows the S3 URL, it will do them no good: access to the bucket is restricted, and a policy grants access to CloudFront and no one else.
Access restriction is done via an origin access identity; while you can configure the bucket policy manually, you can also set a flag in CloudFront to do this on your behalf. More information is available here.
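The resulting bucket policy looks roughly like this (the bucket name and the origin access identity ID E1EXAMPLE are placeholders; CloudFront fills in the real identity when it updates the policy for you):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1EXAMPLE"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-random-bucket-name/*"
    }
  ]
}
```

With a policy like this in place (and public access otherwise blocked), direct requests to the bucket URL fail even if someone guesses the name.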
Use the CloudFront domain name in Route 53. Do not use a CNAME record; instead, use an A record set up as an alias. For more information, see this document.
If you are using a different DNS provider, AWS aliases will naturally not be available. I suggest moving the zone file from your other provider to AWS. If you cannot do this, then you can still use a CNAME. Again see here for more information.
I suggest using your own domain name for CloudFront and setting up HTTPS. AWS offers certificates at no additional cost for use within AWS. You can request a certificate for your domain name, validated either by a DNS entry or by email. To set this up, please see this document.
If you want to restrict access to specific files within AWS, you can use signed URLs. More information about that is provided here.
I'm using Amazon's Simple Storage Service (S3). I noticed that others, like Trello, are able to use a subdomain for their S3 links. In the following link, trello-attachments is the subdomain:
https://trello-attachments.s3.amazonaws.com/.../.../..../file.png
Where can I configure this?
You don't have to configure it.
All buckets work that way if there are no dots in the bucket name and it's otherwise a hostname made up of valid characters. If your bucket isn't in the "US-Standard" region, you may have to use the correct endpoint instead of ".s3.amazonaws.com" to avoid a redirect (or to make it work at all).
An ordinary Amazon S3 REST request specifies a bucket by using the first slash-delimited component of the Request-URI path. Alternatively, you can use Amazon S3 virtual hosting to address a bucket in a REST API call by using the HTTP Host header. In practice, Amazon S3 interprets Host as meaning that most buckets are automatically accessible (for limited types of requests) at http://bucketname.s3.amazonaws.com.
— http://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html
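The two addressing styles from the quote can be spelled out side by side (using the bucket and key from the Trello example above):

```python
# Path-style vs. virtual-hosted-style addressing for the same S3 object.
bucket, key = "trello-attachments", "file.png"

# Path style: bucket is the first path component of the Request-URI.
path_style = f"https://s3.amazonaws.com/{bucket}/{key}"

# Virtual-hosted style: bucket is carried in the Host header.
virtual_hosted = f"https://{bucket}.s3.amazonaws.com/{key}"

print(path_style)       # https://s3.amazonaws.com/trello-attachments/file.png
print(virtual_hosted)   # https://trello-attachments.s3.amazonaws.com/file.png
```

As the answer notes, the virtual-hosted form works automatically for any bucket whose name is a valid hostname with no dots.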