One domain to multiple S3 buckets based on geolocation - amazon-web-services

We want to serve the images in our application as fast as possible. As we already have an AWS setup, we would prefer to host our images in S3 buckets (but are open to alternatives).
The challenge is routing the request to the closest S3 bucket.
Right now we use Amazon Route 53 with a geolocation routing policy to route to the closest EC2 instance, which redirects to the respective bucket. We find this inefficient, as the request goes:
origin->DNS->EC2->S3 and would prefer
origin->DNS->S3. Is it possible to bind two static website S3 buckets to the same domain, where requests are routed based on geolocation?
PS: We have looked into CloudFront, but since many of the images are dynamic and are only viewed once, we would like the origin to be as close to the user as possible.

It's not possible to do this.
In order for an S3 bucket to serve files as a static website, the bucket name must match the domain that is being browsed. Due to this restriction, it's not possible to have more than one bucket serve files for the same domain because you cannot create more than one bucket with the same name, even in different regions.
CloudFront can be used to serve files from S3 buckets, and those S3 buckets don't need to have their names match the domain. So at first glance, this could be a workaround. However, CloudFront does not allow you to create more than one distribution for the same domain.
So unfortunately, as of this writing, geolocation routing directly to S3 buckets is not possible.
Edit for a deeper explanation:
Whether the DNS entry for your domain is a CNAME, an A record, or an ALIAS is irrelevant. The limitation is on the S3 side and has nothing to do with DNS.
A CNAME record will resolve example.com to s3.amazonaws.com to x.x.x.x and the connection will be made to S3. But your browser will still send example.com in the Host header.
When S3 serves files for webpages, it uses the Host header in the HTTP request to determine from which bucket the files should be served. This is because there is a single HTTP endpoint for S3. So, just like when your own web server is hosting multiple websites from the same server, it uses the Host header to determine which website you actually want.
Once S3 has the Host header, it compares it against the available buckets. S3 was designed so that the bucket name is what gets matched against the Host header.
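You can observe this behavior directly by connecting to an S3 website endpoint while presenting a different hostname. A minimal sketch in Python, assuming a hypothetical bucket/domain example.com:

```python
# Minimal sketch: S3 website hosting picks the bucket from the Host
# header, not from the connection target. "example.com" is hypothetical.
import http.client

# Connect to the regional S3 website endpoint itself...
conn = http.client.HTTPConnection("s3-website-us-east-1.amazonaws.com")

# ...but present "example.com" as the Host. S3 will look for a bucket
# named exactly "example.com" in this region to serve the request.
conn.request("GET", "/index.html", headers={"Host": "example.com"})

resp = conn.getresponse()
# 404 "NoSuchBucket" unless a bucket with that exact name exists here.
print(resp.status, resp.reason)
```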

So after a lot of research we did not find an answer to the problem. We did, however, update our setup. The scenario is that a user taps a button in an iOS app and then views some images. The request sent when the user taps the button is geo-routed to the nearest EC2 instance for faster performance. Instead of returning the same image links in the EU and the US, we updated the service so that a request from the US gets links to an American S3 bucket, and likewise for Europe. We also put up two CloudFront distributions, one in front of each S3 bucket, to increase speed.
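A minimal sketch of that server-side logic, with hypothetical distribution domains (each EC2 region builds links against its own CloudFront distribution):

```python
# Sketch of the geo-specific link generation described above.
# The distribution domains are hypothetical placeholders.
REGION = "us-east-1"  # set per instance, e.g. from instance metadata

# One CloudFront distribution in front of each regional image bucket.
IMAGE_HOSTS = {
    "us-east-1": "https://d1111111111111.cloudfront.net",  # US bucket
    "eu-west-1": "https://d2222222222222.cloudfront.net",  # EU bucket
}

def image_links(image_keys):
    """Return links served by the distribution nearest this instance."""
    host = IMAGE_HOSTS.get(REGION, IMAGE_HOSTS["us-east-1"])
    return [f"{host}/{key}" for key in image_keys]

print(image_links(["photos/123.jpg", "photos/456.jpg"]))
```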

Related

Redirect directories to individual S3 buckets?

I have a Laravel application hosted on a domain. I have several dynamic directories, i.e.:
example.com/directory
example.com/random
example.com/moon
I would like each of these directories to resolve to a different S3 bucket while masking the URL (I want to see the URLs above, not the S3 bucket URLs). What's the best way to accomplish this? I could possibly create a primary bucket, host example.com on it, and create routing rules on that bucket to redirect to the other buckets (I think). What do those routing rules look like? I was unable to find directions in the AWS documentation that show how to redirect to other buckets. Is there another, simpler way to go about this?
It's worth noting the Laravel application may not need to be involved in the actual routing so much as use the AWS SDK to dynamically configure the directories.
You have to use Route 53 along with S3 static website hosting.
For the detailed configuration of static website hosting in S3, you can take a look here.
After that, choose Route 53 as a service in the AWS Console.
Select your hosted zone and add a CNAME record set; in the Value field, enter the S3 website endpoint URL, and in the Name field, enter the URL that you want to point to the S3 bucket.
For using Route 53, you can read this AWS document.
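As a sketch, the same record can be created with boto3 (the zone ID, hostnames, and region here are hypothetical):

```python
import boto3

route53 = boto3.client("route53")

# CNAME the public hostname to the bucket's website endpoint. For S3
# website hosting to match the request, the bucket itself must be named
# "images.example.com" (all values here are placeholders).
route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "images.example.com",
            "Type": "CNAME",
            "TTL": 300,
            "ResourceRecords": [{
                "Value": "images.example.com.s3-website-us-east-1.amazonaws.com"
            }],
        },
    }]},
)
```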
The best way would be to create a CloudFront (CF) distribution with three different origins, one per bucket. A cache behavior for each path pattern then routes requests for that directory to the corresponding bucket.
example.com could be defined in Route53, with an Alias A record pointing to the CF distribution. The benefit of using CF with S3 is that not only can you speed up your website (CF is a CDN), but you can also keep your buckets and objects private:
Amazon S3 + Amazon CloudFront: A Match Made in the Cloud
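For example, a sketch of the two relevant DistributionConfig fragments (bucket names, IDs, and region are hypothetical; a complete distribution also needs CallerReference, Comment, Enabled, and a DefaultCacheBehavior, and each behavior needs a cache policy or legacy TTL settings):

```python
# One origin per bucket, addressed via its website endpoint so S3's
# index-document handling keeps working.
origins = {
    "Quantity": 3,
    "Items": [
        {
            "Id": f"s3-{name}",
            "DomainName": f"{name}.s3-website-us-east-1.amazonaws.com",
            "CustomOriginConfig": {
                "HTTPPort": 80,
                "HTTPSPort": 443,
                "OriginProtocolPolicy": "http-only",
            },
        }
        for name in ("directory-bucket", "random-bucket", "moon-bucket")
    ],
}

# One cache behavior per URL prefix: example.com/moon/* is served from
# moon-bucket while the browser URL never changes.
cache_behaviors = {
    "Quantity": 3,
    "Items": [
        {
            "PathPattern": f"/{path}/*",
            "TargetOriginId": f"s3-{bucket}",
            "ViewerProtocolPolicy": "redirect-to-https",
        }
        for path, bucket in [
            ("directory", "directory-bucket"),
            ("random", "random-bucket"),
            ("moon", "moon-bucket"),
        ]
    ],
}
```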

How to serve static web content from S3 backed by multiple buckets from different regions

I'm trying to serve static web content (HTML, CSS, and JS files) from S3 buckets. I know I can go to a bucket's Properties tab and choose Use this bucket to host a website in the Static website hosting box. I'm sure this step will still be part of the solution I'm looking for, but it won't be all of it.
Here's what I'm trying to accomplish:
Deploying the same content to multiple regions and based on availability and/or latency, provide the service to the client.
As for API Gateway, I know how to do this. I create the same API Gateway (alongside the underlying Lambda functions) and custom domain names in all the regions, then create the same domain name on Route 53 (as CNAME records) and choose Latency as the routing policy. One can also set up a health check for each record set so that the availability of the API Gateway and Lambda functions is checked periodically.
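For concreteness, here is roughly what one of those latency record sets looks like through boto3 (all values are placeholders):

```python
import boto3

route53 = boto3.client("route53")

# Latency-routed CNAME for the us-east-1 API; a twin record with
# Region="eu-west-1" and its own SetIdentifier covers Europe.
route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "api.example.com",
            "Type": "CNAME",
            "SetIdentifier": "api-us-east-1",
            "Region": "us-east-1",
            "TTL": 60,
            "ResourceRecords": [{
                "Value": "d-abc123.execute-api.us-east-1.amazonaws.com"
            }],
            "HealthCheckId": "00000000-0000-0000-0000-000000000000",
        },
    }]},
)
```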
Now I want to do the same for the S3 buckets and my static content, i.e. I want to deploy the same content to different regions and somehow make Route 53 route each request to the closest available bucket. Previously I was using CloudFront, but it seems to me that in this setup I can only introduce one bucket.
Does anyone know how I can serve my static content from multiple buckets? If you are going to suggest CloudFront, please tell me how you plan to use multiple buckets.
You can generate a certificate, set up a CloudFront distribution to grab the content from your bucket, and then point your domain to your distribution using Route53. You get free HTTPS, and you can also add several S3 buckets as origins for your distribution.
From AWS Docs:
After you configure CloudFront to deliver your content, here's what happens when users request your objects:
1. A user accesses your website or application and requests one or more objects, such as an image file and an HTML file.
2. DNS routes the request to the CloudFront edge location that can best serve the request—typically the nearest CloudFront edge location in terms of latency—and routes the request to that edge location.
3. In the edge location, CloudFront checks its cache for the requested files. If the files are in the cache, CloudFront returns them to the user. If the files are not in the cache, it does the following:
3a. CloudFront compares the request with the specifications in your distribution and forwards the request for the files to the applicable origin server for the corresponding file type—for example, to your Amazon S3 bucket for image files and to your HTTP server for the HTML files.
3b. The origin servers send the files back to the CloudFront edge location.
3c. As soon as the first byte arrives from the origin, CloudFront begins to forward the files to the user. CloudFront also adds the files to the cache in the edge location for the next time someone requests those files.
P.S. Keep in mind this is for static content only!
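For the "point your domain to your distribution" step, a boto3 sketch (the hosted zone and distribution domain are hypothetical; Z2FDTNDATAQYW2 is the fixed hosted zone ID CloudFront uses for alias records):

```python
import boto3

route53 = boto3.client("route53")

# Alias A record from the site name to the CloudFront distribution.
route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",  # your hosted zone (placeholder)
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": "Z2FDTNDATAQYW2",  # CloudFront's zone ID
                "DNSName": "d3abcdefgh1234.cloudfront.net",
                "EvaluateTargetHealth": False,
            },
        },
    }]},
)
```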
This is possible with CloudFront using Lambda@Edge to change the origin based on the answer from Route 53.
Please refer to this blog post for sample Lambda@Edge code to do this:
https://aws.amazon.com/blogs/apn/using-amazon-cloudfront-with-multi-region-amazon-s3-origins/
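In the spirit of that post, a minimal origin-request trigger in Python (bucket names and the EU country list are hypothetical; CloudFront must be configured to forward the CloudFront-Viewer-Country header, and the function must be deployed in us-east-1):

```python
# Lambda@Edge origin-request handler: swap the S3 origin per viewer country.
def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]

    # CloudFront adds this header when it is whitelisted for forwarding.
    country = headers.get("cloudfront-viewer-country",
                          [{"value": "US"}])[0]["value"]

    if country in {"DE", "FR", "GB", "IE", "NL", "ES", "IT"}:
        region, bucket = "eu-west-1", "my-content-eu"
    else:
        region, bucket = "us-east-1", "my-content-us"

    domain = f"{bucket}.s3.{region}.amazonaws.com"

    # Re-point the request at the chosen S3 origin; the Host header
    # must match the new origin's domain name.
    request["origin"]["s3"]["domainName"] = domain
    request["origin"]["s3"]["region"] = region
    headers["host"] = [{"key": "Host", "value": domain}]
    return request
```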

Use S3 for a website in multiple regions

I want to host my website on S3 (because it seems cheaper, and I don't have any server-side scripts). I have a domain, and I want my domain to link to my S3 website. So far, what I do is enable static website hosting on my S3 bucket and set a Route 53 record set's alias target to my S3 website. It's working, but it's not good enough: I want it to deal with multiple regions.
I know that Transfer Acceleration can auto-sync files to other regions so it's faster for those regions, but I don't know how to make it work with Route 53. I hear that some people use CloudFront to do that, but I don't quite understand how. And I don't want to manually create buckets in several regions and manually set up each region.
Do you have any suggestions for me?
If your goal is to reduce latency for users worldwide, then Amazon CloudFront is definitely the way to go.
Amazon CloudFront has over 100 edge locations globally, so it has more coverage than merely using AWS regions.
You simply create a CloudFront distribution, point it to your S3 bucket and then point your domain name to CloudFront.
Whenever somebody accesses your content, CloudFront will retrieve it from S3 and cache it in the edge location closest to that user. Then, other users who want the data will receive it from the local cache. Thus, your website appears fast for many users.
See also: Amazon CloudFront pricing
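A minimal boto3 sketch of "create a distribution, point it at your S3 bucket" (bucket name and region are hypothetical):

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

# Smallest-reasonable distribution in front of one S3 website bucket.
cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),  # any unique string
        "Comment": "static website",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "s3-site",
                "DomainName": "my-site.s3-website-us-east-1.amazonaws.com",
                "CustomOriginConfig": {
                    "HTTPPort": 80,
                    "HTTPSPort": 443,
                    "OriginProtocolPolicy": "http-only",
                },
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "s3-site",
            "ViewerProtocolPolicy": "redirect-to-https",
            # Legacy TTL settings; newer setups would use a cache policy.
            "ForwardedValues": {"QueryString": False,
                                "Cookies": {"Forward": "none"}},
            "MinTTL": 0,
        },
    },
)
```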

Have multiple subdomains refer to the same S3 bucket without HTTP redirect

I have a bucket called subdomain.domain.com that hosts code that should be used whenever users go to various subdomains.
e.g. going to:
- a.domain.com
- b.domain.com
- c.domain.com
Should go to the same bucket.
I've set the CNAME for all the subdomain URLs to point to the URL of the subdomain.domain.com bucket. The problem is that AWS tries to look for a bucket named a.domain.com instead of just going to the subdomain.domain.com bucket.
I've read some suggestions saying I can create a bucket like a.domain.com and have it redirect back to subdomain.domain.com, but I don't want the URL to change, and I'd like to be able to upload to just one bucket and have all the subdomains updated.
Some features that appear to be "missing" in S3 are actually designed into CloudFront, which complements S3. Pointing multiple domain names to a single bucket is one of those features. It isn't possible to do this with only S3 since, as you noticed, S3 matches the hostname with the bucket name.
Create a CloudFront distribution, defining each of the desired domain names as Alternate Domain Names.
For the origin server, type in the website endpoint hostname of the bucket, found in the S3 console. (Don't select the bucket from the dropdown list.)
Point the various hostnames to CloudFront in DNS.
CloudFront will translate the incoming hostnames so that S3 serves all the domains from a single bucket, the one you specified as the origin server.
Note that this configuration also allows you to optionally use SSL with your web hosting buckets, which is another feature that S3 relies on CloudFront to implement.
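A sketch of the two distribution fragments that matter here (hypothetical names; serving HTTPS across all the aliases also requires an ACM certificate that covers them):

```python
# All subdomains listed as aliases on a single distribution.
aliases = {
    "Quantity": 3,
    "Items": ["a.domain.com", "b.domain.com", "c.domain.com"],
}

# The origin is the bucket's *website endpoint*, typed in directly, so
# CloudFront sends the bucket's own name as the Host header to S3.
origin = {
    "Id": "s3-shared",
    "DomainName": "subdomain.domain.com.s3-website-us-east-1.amazonaws.com",
    "CustomOriginConfig": {
        "HTTPPort": 80,
        "HTTPSPort": 443,
        "OriginProtocolPolicy": "http-only",
    },
}
```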

How can I use one route53 record to point to two S3 buckets?

I have two buckets, a and b, with static websites enabled that redirect to the original buckets A and B. I created two Route 53 record sets (A records), slave-1 and slave-2, pointing to buckets a and b respectively. I then created a master record set (A record) with failover routing, with slave-1 as primary and slave-2 as secondary. When I try to access the S3 contents using the master record, I get a 404 'No Such Bucket.' Is there a way that I can get this setup to work? Are there any workarounds for configurations like this?
S3 only supports accessing a bucket either through one of the endpoint hostnames directly (such as example-bucket.s3.amazonaws.com) or via a DNS record pointing to the bucket endpoint, and in the latter case only when the name of the bucket matches the entire hostname presented in the Host: header (the hostname my-bucket.example.com works with a bucket named exactly "my-bucket.example.com").
If your tool will be signing requests for the bucket, there is no simple and practical workaround, since the signatures will not match on the request. (This technically could be done with a proxy that has knowledge of the keys and secrets, validates the original signature, strips it, then re-signs the request, but this is a complex solution.)
If you simply need to fetch content from the buckets, then use CloudFront. When CloudFront is configured in front of a bucket, you can point a domain name to CloudFront, and specify one or more buckets to handle the requests, based on pattern matching in the request paths. In this configuration, the bucket names and regions are unimportant and independent of the hostname associated with the CloudFront distribution.