AWS - Multiple S3 buckets for multiple subdomains

I've done my research, but I cannot find a specific example for this.
I have my main domain, domain.com. It works well with an S3 + CloudFront + Certificate Manager setup, but I would like to create multiple subdomains backed by individual S3 buckets.
My goal:
Main: domain.com
subdomain1: landing1.domain.com -> S3 landing1 bucket.
subdomain2: landing2.domain.com -> S3 landing2 bucket.
Overall plan is to use one general domain but I will have several static pages for different topics.
Can it work? How can I set this up? Any example or flow explanation would be appreciated!

Yes, you can do this. I'm using this setup in production. Let me explain my setup.
Let's say I have three different subdomains: one.example.com, two.example.com, three.example.com.
I created a separate CloudFront distribution for each of them and attached the same ACM certificate to each distribution.
Finally, I pointed them at the distributions via Route 53:
one.example.com -> one.cloudfront.net
two.example.com -> two.cloudfront.net
three.example.com -> three.cloudfront.net
It works with no problem.
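The Route 53 side of this can be sketched with boto3. This is a minimal sketch, not the answerer's actual code: the subdomain, distribution domain, and hosted-zone ID in the usage comment are placeholders. The `Z2FDTNDATAQYW2` value is the fixed hosted-zone ID AWS documents for CloudFront alias targets.

```python
# Builds the change batch that route53.change_resource_record_sets expects
# for an alias A record pointing a subdomain at a CloudFront distribution.

# Hosted-zone ID shared by all CloudFront distributions (fixed AWS value).
CLOUDFRONT_ZONE_ID = "Z2FDTNDATAQYW2"

def alias_record(subdomain: str, distribution_domain: str) -> dict:
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": subdomain,
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": CLOUDFRONT_ZONE_ID,
                    "DNSName": distribution_domain,
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    }

# Usage (requires AWS credentials; the hosted-zone ID below is hypothetical):
# import boto3
# route53 = boto3.client("route53")
# route53.change_resource_record_sets(
#     HostedZoneId="Z1EXAMPLE",
#     ChangeBatch=alias_record("one.example.com", "one.cloudfront.net"),
# )
```

One such record per subdomain, each pointing at its own distribution, reproduces the mapping above.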

You mentioned static pages, so I assume you just want to host a simple HTML site. If not, the following will not apply.
You need to set things up so that each bucket name is the same as the intended access URL.
So the bucket for https://landing1.domain.com should be named landing1.domain.com,
and the bucket for https://landing2.domain.com should be named landing2.domain.com.
Make sure static website hosting and public access are turned on for these buckets.
You will not need CloudFront, so this setup is simpler. If you still want to use CloudFront, that is also possible: just set up two separate CloudFront distributions, each with its custom domain added as an alternate domain name (CNAME).
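The bucket-side setup can be sketched with boto3. This is a minimal sketch under the answer's assumptions: the bucket name is a placeholder, and the account's Block Public Access settings must already be disabled for the policy to apply.

```python
import json

# Placeholder: the bucket name must equal the host you will browse to.
BUCKET = "landing1.domain.com"

# Website configuration for put_bucket_website.
website_config = {
    "IndexDocument": {"Suffix": "index.html"},
    "ErrorDocument": {"Key": "error.html"},
}

# Public-read bucket policy for put_bucket_policy (website access is anonymous).
def public_read_policy(bucket: str) -> str:
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    })

# Usage (requires credentials, and Block Public Access disabled on the bucket):
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_website(Bucket=BUCKET, WebsiteConfiguration=website_config)
# s3.put_bucket_policy(Bucket=BUCKET, Policy=public_read_policy(BUCKET))
```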

Related

Can we use 2 domains (for ex mywebsite1.com and mywebsite2.com) and use the same S3 website for both domains in AWS? How?

I have to use two domain names mywebsite1.com and mywebsite2.com to host the same S3 website. I own both the domains and I have programmed the mywebsite2.com to redirect the traffic to mywebsite1.com but that changes the URL to mywebsite1.com which I do not want. Any suggestions will be appreciated.
You can point two domains at the same S3 website by leveraging the following AWS services: S3, CloudFront and Certificate Manager.
You can refer to the following article, which explains the complete steps for achieving it.

Redirect directories to individual s3 buckets?

I have a Laravel application hosted on a domain. I have several dynamic directories, e.g.:
example.com/directory
example.com/random
example.com/moon
I would like each of these directories to resolve to a different S3 bucket while masking the URL (I want to see the URLs above, not the S3 bucket URLs). What's the best way to accomplish this? I could possibly create a primary bucket, host example.com on it, and create routing rules on that bucket to redirect to the other buckets (I think). What do those routing rules look like? I was unable to find directions in the AWS documentation that show how to redirect to other buckets. Is there another, simpler way to go about this?
It's worth noting that the Laravel application may not need to be involved in the actual routing so much as use the AWS SDK to dynamically configure the directories.
You have to use Route 53 along with S3 with static website hosting enabled.
For detailed configuration of static website hosting in S3, you can take a look here.
After that, choose Route 53 as a service in the AWS Console.
Select your hosted zone and add a CNAME record set: in the Value field enter the S3 bucket's website endpoint URL, and in the Name field enter the URL that you want to point at the S3 bucket.
For using Route 53 you can read this AWS document.
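As for the routing rules the question asks about: they live in the bucket's website configuration. A sketch in boto3's dict form (the same structure as the `RoutingRules` XML in the console); the host names and prefixes here are placeholders:

```python
# Website configuration with a routing rule that redirects requests under
# directory/ to another bucket's website endpoint (host name is a placeholder).
website_config = {
    "IndexDocument": {"Suffix": "index.html"},
    "RoutingRules": [
        {
            "Condition": {"KeyPrefixEquals": "directory/"},
            "Redirect": {
                "HostName": "directory-bucket.s3-website-us-east-1.amazonaws.com",
                "ReplaceKeyPrefixWith": "",
                "HttpRedirectCode": "302",
            },
        },
    ],
}

# Applied with (credentials required):
# import boto3
# boto3.client("s3").put_bucket_website(
#     Bucket="example.com", WebsiteConfiguration=website_config
# )
```

Note that routing rules issue HTTP redirects, so the browser's address bar changes; they do not mask the URL.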
The best way would be to create a CloudFront (CF) distribution with three different origins, one per bucket, and a cache behavior for each path pattern (/directory/*, /random/*, /moon/*) pointing at the matching origin.
example.com could be defined in Route 53 with an alias A record to the CF distribution. The benefit of using CF with S3 is that you not only can speed up your website (CF is a CDN), but you can also keep your buckets and objects private:
Amazon S3 + Amazon CloudFront: A Match Made in the Cloud
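The origin-per-bucket, behavior-per-path idea can be sketched as the relevant fragments of a CloudFront DistributionConfig. All IDs and bucket domains are placeholders, and many required DistributionConfig fields are omitted for brevity:

```python
# One origin per bucket (domain names are placeholders).
origins = [
    {"Id": "directory-origin", "DomainName": "directory-bucket.s3.amazonaws.com"},
    {"Id": "random-origin", "DomainName": "random-bucket.s3.amazonaws.com"},
    {"Id": "moon-origin", "DomainName": "moon-bucket.s3.amazonaws.com"},
]

# One cache behavior per path pattern, each routed to its matching origin.
cache_behaviors = [
    {"PathPattern": "/directory/*", "TargetOriginId": "directory-origin"},
    {"PathPattern": "/random/*", "TargetOriginId": "random-origin"},
    {"PathPattern": "/moon/*", "TargetOriginId": "moon-origin"},
]
```

Because CloudFront serves everything under example.com itself, the URL stays masked, unlike the S3 routing-rule approach.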

AWS unable to enforce https for S3 bucket

I have tried several tutorials on how to set up https via CloudFront, but nothing is working. I am hosting from an S3 bucket and the app works fine via the http protocol, but I need it to be https.
Does anyone have a very thorough tutorial on how to make this work?
Some tutorials explain how to set up a certificate, some explain how to use CloudFront to handle the distribution, and I even found a CloudFront tutorial explaining that not using the link from the CloudFront setup causes the certificate to be created in the wrong region, so I even tried that.
I have not found anything that explains exactly what needs to be done for this very common setup, so I am hoping that someone here has some helpful resources.
I think the main issue I had when setting up a CloudFront distribution for an S3 static website hosting bucket was the Origin Domain Name.
When you create a new distribution, under Origin Settings, the Origin Domain Name field works as a drop-down menu and lists your buckets. But picking a bucket from that list doesn't work for static website hosting. You need to specifically put the bucket's website endpoint there, for example:
mywebhostingbucket.com.s3-website-sa-east-1.amazonaws.com
And for custom domains, you must set up the CNAMEs under Distribution Settings, Alternate Domain Names (CNAMEs), and then make sure you have your custom SSL certificate in the us-east-1 region.
Then you can configure the alias record set for the CloudFront distribution.
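The endpoint format above can be captured in a small helper. This is a sketch, not an official API: older regions use a dash after `s3-website` (as in the example above), while some newer regions use a dot (`s3-website.<region>`), so check the endpoint shown in your bucket's static website hosting settings.

```python
# Builds the S3 *website* endpoint (dash form) that CloudFront needs as the
# Origin Domain Name; the REST endpoint from the drop-down will not work
# for static website hosting features like index documents and redirects.
def s3_website_endpoint(bucket: str, region: str) -> str:
    return f"{bucket}.s3-website-{region}.amazonaws.com"
```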
Here is a complete answer for setting up a site with https.
I had everything in this document completed:
https://docs.aws.amazon.com/AmazonS3/latest/dev/website-hosting-custom-domain-walkthrough.html
And it worked to get the site live via http, but in order to add https, I needed to do the following:
I had requested a certificate for whatever.com and tried several suggestions after that, but there were a couple of things missing.
To route traffic for the domain (whatever.com) to the CloudFront distribution, you will need to clear the current value of the A record and fill in the distribution's domain name as an alias.
Several documents that I viewed said to point the whatever.com S3 bucket at the www.whatever.com S3 bucket and use the second one to drive the site. Since CloudFront can serve multiple domain names, you can set both as alternate domain names (CNAMEs) on the distribution, but you will need to set an alias A record for both to the distribution AND request an ACM certificate that covers both domain names (with and without the www). Note that if you already have a certificate, you can't edit it to add a name, which means you'll need to request a new one that covers both whatever.com and www.whatever.com.
After all of this, I still got "Access Denied" when I went to my site. To fix this, I had to create a new origin in CloudFront with the Origin Domain Name set to the full website endpoint of the S3 bucket (without the http://), and then set the Default (*) behavior to the S3-Website-.....whatever.com origin.
After all of this, my site was accessible via http AND https. I hope this helps anyone who experienced this challenge.
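The certificate step can be sketched with boto3. A minimal sketch: the domain names are the question's placeholders, and the key details are that one certificate covers both names and that CloudFront requires it to live in us-east-1.

```python
# Parameters for acm.request_certificate: one certificate covering both the
# apex domain and the www name, validated via DNS.
request_params = {
    "DomainName": "whatever.com",
    "SubjectAlternativeNames": ["www.whatever.com"],
    "ValidationMethod": "DNS",
}

# Usage (requires credentials; region must be us-east-1 for CloudFront):
# import boto3
# acm = boto3.client("acm", region_name="us-east-1")
# acm.request_certificate(**request_params)
```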

How do I route multiple domain names to the same amazon S3 bucket location?

I have hosted a React application on an S3 bucket.
I want to access that same application with different domain names.
For example:
www.example.com, www.demoapp.com, www.reactapp.com
I want to access the S3-hosted application with different domains.
How can I do that?
Are there any other Amazon services I need to use?
If you have done something like this, please help me.
Use Amazon CloudFront and you should be able to do this: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/CNAMEs.html
You won't be able to do it as a standard S3 static website, but using CloudFront will make it possible.
You can achieve this using Amazon CloudFront distributions:
1) Create a new CloudFront distribution.
2) Select the bucket that you want to use as the origin.
3) Add the multiple domain names you want as alternate domain names (CNAMEs).
4) CloudFront will generate a new host name, something like
abcded.cloudfront.net
5) Add abcded.cloudfront.net as the value of a CNAME record on each of your original domains.
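Step 3 corresponds to the Aliases block of the distribution's config. A sketch with the question's domain names; the rest of the DistributionConfig (origins, behaviors, certificate) is omitted:

```python
# Aliases fragment of a CloudFront DistributionConfig: one distribution
# serving several alternate domain names. The attached ACM certificate
# must cover all of these names.
aliases = {
    "Quantity": 3,
    "Items": ["www.example.com", "www.demoapp.com", "www.reactapp.com"],
}
```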
If you are trying to route multiple domain names to the same S3 bucket, then you may be doing something wrong and should re-think your strategy.
Domain names are meant to be a unique way of identifying resources on the internet (in this case your S3 bucket), and there is usually no point in routing multiple domain names to the same S3 bucket.
If you really want to do this, then there are couple of options:
Option one:
The easy option is to point domain1.com at the S3 bucket and then have a redirect rule that redirects requests for domain2.com and domain3.com to domain1.com.
Option two:
Another way is to create three S3 buckets, duplicate the content across all three, and then use a different domain name to point at each one. This of course creates a maintenance nightmare: if you change the application in one bucket, you have to make the same change in all the others. If you are using Git to host your application, you can push to all three S3 buckets at once.
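Option one's redirect rule can be sketched as an S3 website configuration that redirects every request to another host. A minimal sketch; the bucket and host names are the answer's placeholders:

```python
# Website configuration for the domain2.com bucket: redirect all requests
# to domain1.com over HTTPS.
redirect_config = {
    "RedirectAllRequestsTo": {
        "HostName": "domain1.com",
        "Protocol": "https",
    }
}

# Applied with (credentials required):
# import boto3
# boto3.client("s3").put_bucket_website(
#     Bucket="domain2.com", WebsiteConfiguration=redirect_config
# )
```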
Again, all of the above is really nothing but a hack around the fact that domain names point to unique resources on the internet, and I can't really see why you would do something like this.
Hope this gives you some insight on how to resolve this issue.

One domain to multiple S3 buckets based on geolocation

We want to serve images in our application as fast as possible. As we already have an AWS setup, we prefer to host our images in S3 buckets (but are open to alternatives).
The challenge is routing each request to the closest S3 bucket.
Right now we use Amazon Route 53 with a geolocation routing policy to the closest EC2 instance, which redirects to the respective bucket. We find this inefficient, as the request goes
origin -> DNS -> EC2 -> S3, and we would prefer
origin -> DNS -> S3. Is it possible to bind two static-website S3 buckets to the same domain, where requests are routed based on geolocation?
PS: We have looked into CloudFront, but since many of the images are dynamic and are only viewed once, we would like the origin to be as close to the user as possible.
It's not possible to do this.
In order for an S3 bucket to serve files as a static website, the bucket name must match the domain that is being browsed. Due to this restriction, it's not possible to have more than one bucket serve files for the same domain because you cannot create more than one bucket with the same name, even in different regions.
CloudFront can be used to serve files from S3 buckets, and those S3 buckets don't need to have their names match the domain. So at first glance, this could be a workaround. However, CloudFront does not allow you to create more than one distribution for the same domain.
So unfortunately, as of this writing, geolocating is not possible from S3 buckets.
Edit for a deeper explanation:
Whether the DNS entry for your domain is a CNAME, an A record, or an ALIAS is irrelevant. The limitation is on the S3 side and has nothing to do with DNS.
A CNAME record will resolve example.com to s3.amazonaws.com to x.x.x.x, and the connection will be made to S3. But your browser will still send example.com in the Host header.
When S3 serves files for webpages, it uses the Host header in the HTTP request to determine which bucket the files should be served from. This is because there is a single HTTP endpoint for S3. So, just as when your own web server hosts multiple websites from the same machine, S3 uses the Host header to determine which website you actually want.
Once S3 has the Host you want, it compares it against the available buckets: the bucket name is matched against the Host header.
So after a lot of research we did not find an answer to the problem. We did, however, update our setup. The scenario is that a user clicks a button and views some images in an iOS app. The request when the user pushes the button is geo-routed to the nearest EC2 instance for faster performance. Instead of returning the same image links in the EU and the US, we updated it so that clicking in the US returns links to an American S3 bucket, and likewise for Europe. We also put up two CloudFront distributions, one in front of each S3 bucket, to increase speed.