I am trying to implement a blue/green AWS deployment of static files backed by S3, following this (oldish) whitepaper.
In short, the idea is to create two separate CloudFront distributions which point to two separate folders in an S3 bucket. One is "green" and one "blue". After deploying one or the other, you then switch traffic over from green to blue or vice versa using weighted routing.
That is all well and good but the problem comes with using your own domain and linking a certificate.
In order to get CloudFront to serve the S3 files properly (over HTTPS with a certificate on your own domain), you need to enter the FQDN in the "Alternate Domain Names (CNAMEs)" field when configuring the CloudFront distribution. However, you cannot use the same name in multiple CloudFront distributions.
Therefore, I would need to use a different URL per CloudFront distribution, e.g. blue.mydomain.com and green.mydomain.com.
However, if I do this, then weighted routing with a single A record in the associated Route 53 entry would not work, as the name must match the "CNAMEs" entered in the CloudFront distribution to prevent SSL errors. Am I missing something? I could add my own reverse proxy or something, but I really don't want to do that.
TL;DR it seems like this whitepaper is impossible to implement as-is?
You can use a single CloudFront distribution with two S3 buckets as origins and switch them while deploying the application. Another option is to modify the viewer request with a Lambda@Edge/CloudFront Function in order to route the request to the right origin or implement weighted routing.
Also, consider using *.domain_name for the blue distribution and app.domain_name for the other one, with an ACM certificate for *.domain_name. This allows you to use the same FQDN as an entry point for both.
Take into account that CloudFront is a highly available, global AWS service, so there is no point in including it in your blue/green deployment scheme. Lambda@Edge or CloudFront Functions can be really useful to switch between origins.
Here is an example of switching origins this way.
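A minimal sketch of the Lambda@Edge approach in Python, assuming the blue and green builds live under /blue and /green prefixes in a single bucket behind one distribution (the prefix names are placeholders, not anything the whitepaper mandates):

```python
# Origin-request Lambda@Edge handler: rewrites the object path so a single
# CloudFront distribution serves either the blue or the green build from
# one S3 bucket. /blue and /green are placeholder prefixes.
ACTIVE_PREFIX = "/green"  # flip to "/blue" (and invalidate the cache) to cut over

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    request["uri"] = ACTIVE_PREFIX + request["uri"]
    return request
```

A weighted split could be layered on top by picking the prefix at random, but remember that CloudFront caches by the viewer's URI, so a cached path will stick with whichever build answered first unless the chosen prefix is also made part of the cache key.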
I have done my research, but I cannot find a specific example for this.
I have my main domain, domain.com. It works well with the S3 + CloudFront + Certificate Manager setup, but I would like to create multiple subdomains for individual S3 buckets.
My goal:
Main: domain.com
subdomain1: landing1.domain.com -> S3 landing1 bucket.
subdomain2: landing2.domain.com -> S3 landing2 bucket.
The overall plan is to use one general domain, but I will have several static pages for different topics.
Can it work? How can I set this up? Any example/flow explanation would be appreciated!
Yes, you can do this. I'm using this setup in production. Let me explain my setup.
Let's say I have three different subdomains: one.example.com, two.example.com, and three.example.com.
I created a separate CloudFront distribution for each of them and attached the same ACM certificate to each distribution.
Finally, I pointed the subdomains at the distributions via Route 53:
one.example.com -> one.cloudfront.net
two.example.com -> two.cloudfront.net
three.example.com -> three.cloudfront.net
It works with no problem.
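If you manage the records programmatically, each of those mappings is just an alias A record. A boto3 sketch with a placeholder hosted zone ID and distribution hostnames (Z2FDTNDATAQYW2 is the fixed hosted zone ID used for all CloudFront alias targets):

```python
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z1234567890ABC"       # placeholder: hosted zone for example.com
CLOUDFRONT_ZONE_ID = "Z2FDTNDATAQYW2"   # fixed zone ID for CloudFront alias targets

def point_subdomain_at_distribution(subdomain, distribution_domain):
    """Create or update an alias A record, e.g. one.example.com -> one.cloudfront.net."""
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": subdomain,
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": CLOUDFRONT_ZONE_ID,
                        "DNSName": distribution_domain,
                        "EvaluateTargetHealth": False,
                    },
                },
            }]
        },
    )

point_subdomain_at_distribution("one.example.com", "one.cloudfront.net")
point_subdomain_at_distribution("two.example.com", "two.cloudfront.net")
```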
You mentioned static pages, so I assume you just want to host a simple HTML site. If not, the following will not apply.
You need to set things up so that the bucket name is the same as the intended access hostname.
So the bucket name for https://landing1.domain.com should be landing1.domain.com, and the one for https://landing2.domain.com should be landing2.domain.com.
Make sure Static Web Hosting and public access are turned on for these buckets.
You will not need CloudFront, so this setup is simpler. If you still want to use CloudFront, that is also possible; just set up two separate CloudFront distributions with custom CNAMEs.
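A boto3 sketch of the bucket-per-subdomain setup, with a hypothetical region and bucket name (the bucket name must equal the hostname); the DNS record for the subdomain then points at the bucket's website endpoint, e.g. landing1.domain.com.s3-website-eu-west-1.amazonaws.com:

```python
import json
import boto3

REGION = "eu-west-1"                 # hypothetical region
BUCKET = "landing1.domain.com"       # bucket name must equal the hostname
s3 = boto3.client("s3", region_name=REGION)

# Create the bucket (omit CreateBucketConfiguration for us-east-1).
s3.create_bucket(
    Bucket=BUCKET,
    CreateBucketConfiguration={"LocationConstraint": REGION},
)

# Turn on static website hosting.
s3.put_bucket_website(
    Bucket=BUCKET,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# Allow public reads; website hosting serves objects anonymously.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": False,
        "IgnorePublicAcls": False,
        "BlockPublicPolicy": False,
        "RestrictPublicBuckets": False,
    },
)
s3.put_bucket_policy(
    Bucket=BUCKET,
    Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        }],
    }),
)
```

One caveat: the S3 website endpoint speaks plain HTTP only, so if the subdomains must be reachable over HTTPS you are back to putting CloudFront with an ACM certificate in front of each bucket.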
I'm trying to serve static web content (HTML, CSS, and JS files) from S3 buckets. I know I can go to the bucket's properties tab and choose Use this bucket to host a website in the Static website hosting box. I'm sure this step will still be part of the solution I'm looking for, but it won't be all of it.
Here's what I'm trying to accomplish:
Deploy the same content to multiple regions and, based on availability and/or latency, serve the client from the best one.
As for API Gateway, I know how to do this. I should create the same API Gateway (alongside the underlying Lambda functions) and custom domain names in all the regions, then create the same domain name in Route 53 (as CNAME records) and choose Latency as the routing policy. One can also set up a health check for each record set so the availability of the API Gateway and Lambda functions is checked periodically.
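For reference, that latency-based DNS setup can be scripted as one record set per region, each with its own health check. A boto3 sketch where the zone ID, API Gateway target domains, and health check IDs are placeholders:

```python
import boto3

route53 = boto3.client("route53")
HOSTED_ZONE_ID = "Z1234567890ABC"  # placeholder hosted zone for example.com

def add_latency_record(region, target_domain, health_check_id):
    """One latency record per region; Route 53 answers with the lowest-latency healthy target."""
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "api.example.com",
                "Type": "CNAME",
                "TTL": 60,
                "SetIdentifier": f"api-{region}",
                "Region": region,
                "HealthCheckId": health_check_id,
                "ResourceRecords": [{"Value": target_domain}],
            },
        }]},
    )

add_latency_record("us-east-1", "d-abc123.execute-api.us-east-1.amazonaws.com", "hc-id-us")
add_latency_record("eu-west-1", "d-def456.execute-api.eu-west-1.amazonaws.com", "hc-id-eu")
```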
Now I want to do the same for the S3 bucket and my static content, i.e. deploy the same content to different regions and somehow make Route 53 route the request to the closest available bucket. Previously, I was using CloudFront, but it seems that in this setup I can only introduce one bucket.
Does anyone know how I can serve my static content from multiple buckets? If you are going to suggest CloudFront, please tell me how you plan to use multiple buckets.
You can generate a certificate, set up a CloudFront distribution to grab the content from your bucket, and then point your domain to your distribution using Route 53. You get HTTPS for free, and you can also add several S3 buckets as origins for your distribution.
From AWS Docs:
After you configure CloudFront to deliver your content, here's what happens when users request your objects:
1. A user accesses your website or application and requests one or more objects, such as an image file and an HTML file.
2. DNS routes the request to the CloudFront edge location that can best serve the request—typically the nearest CloudFront edge location in terms of latency—and routes the request to that edge location.
3. In the edge location, CloudFront checks its cache for the requested files. If the files are in the cache, CloudFront returns them to the user. If the files are not in the cache, it does the following:
3a. CloudFront compares the request with the specifications in your distribution and forwards the request for the files to the applicable origin server for the corresponding file type—for example, to your Amazon S3 bucket for image files and to your HTTP server for the HTML files.
3b. The origin servers send the files back to the CloudFront edge location.
3c. As soon as the first byte arrives from the origin, CloudFront begins to forward the files to the user. CloudFront also adds the files to the cache in the edge location for the next time someone requests those files.
P.S. Keep in mind this is for static content only!
This is possible with CloudFront, using Lambda@Edge to change the origin based on the answer from Route 53.
Please refer to this blog post for sample Lambda@Edge code to do this:
https://aws.amazon.com/blogs/apn/using-amazon-cloudfront-with-multi-region-amazon-s3-origins/
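The rough shape of that approach (a sketch of the idea, not the blog's exact code) is an origin-request handler that retargets the request at a bucket in another region. The bucket domain, region, and the failover decision below are all placeholders:

```python
# Origin-request Lambda@Edge handler that can swap the S3 origin to a
# second bucket in another region. All values are placeholders.
FAILOVER_BUCKET_DOMAIN = "my-site-eu-west-1.s3.eu-west-1.amazonaws.com"
FAILOVER_REGION = "eu-west-1"

def should_use_failover():
    # Placeholder for the routing decision; the blog post resolves a
    # Route 53 record inside the function to pick the region.
    return False

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    if should_use_failover():
        origin = request["origin"]["s3"]
        origin["domainName"] = FAILOVER_BUCKET_DOMAIN
        origin["region"] = FAILOVER_REGION
        # The Host header must match the new origin for S3 to accept the request.
        request["headers"]["host"] = [{"key": "Host", "value": FAILOVER_BUCKET_DOMAIN}]
    return request
```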
In API Gateway I've created one custom domain, foo.example.com, which creates a CloudFront distribution with that CNAME.
I also want to create a wildcard domain, *.example.com, but when attempting to create it, CloudFront throws an error:
CNAMEAlreadyExistsException: One or more of the CNAMEs you provided are already associated with a different resource
AWS in its docs states that:
However, you can add a wildcard alternate domain name, such as *.example.com, that includes (that overlaps with) a non-wildcard alternate domain name, such as www.example.com. Overlapping domain names can be in the same distribution or in separate distributions as long as both distributions were created by using the same AWS account.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/CNAMEs.html#alternate-domain-names-wildcard
So I might have misunderstood this; is it possible to accomplish what I've described?
This is very likely to be a side-effect of your API Gateway endpoint being configured as Edge Optimized instead of Regional, because with an edge-optimized API, there is a hidden CloudFront distribution provisioned automatically... however, the CloudFront distribution associated with your API is not owned by your account, but rather by an account associated with API Gateway.
Edge-optimized APIs are endpoints that are accessed through a CloudFront distribution that is created and managed by API Gateway.
— Amazon API Gateway Supports Regional API Endpoints
This creates a conflict that prevents the wildcard distribution from being created.
Subdomains that mask a wildcard are not allowed to cross AWS account boundaries, because this would potentially allow traffic for a wildcard distribution's matching domains to be hijacked by creating a more specific alternate domain name -- but, as you noted from the documentation, you can do this within your own account.
Redeploying your API as Regional instead of Edge Optimized is the likely solution. If you still want the edge optimization behavior, you can create another CloudFront distribution with that specific subdomain for use with the API. This would be allowed, because you would own the distribution. Regional APIs are still globally accessible.
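If you go the Regional route, the endpoint type of an existing edge-optimized REST API can be switched with a patch operation. A boto3 sketch with a placeholder API ID (double-check the patch path against the current API Gateway docs for your case):

```python
import boto3

apigw = boto3.client("apigateway")

# Placeholder REST API ID; switches the endpoint type from EDGE to REGIONAL.
apigw.update_rest_api(
    restApiId="a1b2c3d4e5",
    patchOperations=[{
        "op": "replace",
        "path": "/endpointConfiguration/types/EDGE",
        "value": "REGIONAL",
    }],
)
```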
Yes, it is. But keep in mind that CNAMEs set on CloudFront distributions are validated to be globally unique, and that includes the distributions API Gateway creates. So this means you (or some other account) already have that CNAME set up. Currently there is no way to look up where the conflict is; you may need to raise a ticket with AWS support if you can't find it yourself.
I have hosted a React application on an S3 bucket.
I want to access that same application with different domain names.
For example:
www.example.com, www.demoapp.com, www.reactapp.com
I want to access the S3-hosted application via each of these domains.
How can I do that?
Are there any other Amazon services I need to use?
If you have done something like this, then please help me.
Use Amazon CloudFront and you should be able to do this: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/CNAMEs.html
You won't be able to do it as a standard S3 static website, but using CloudFront will make it possible.
Using the Amazon CloudFront distribution service, we can achieve this:
1) Create a new CloudFront distribution.
2) Select the bucket you want to serve as the origin.
3) Add the multiple domain names you want as alternate domain names (CNAMEs) (see the sketch below).
4) CloudFront will generate a new hostname, something like abcded.cloudfront.net.
5) Add abcded.cloudfront.net as the CNAME record target for each of your original domains.
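If you prefer to script step 3, the aliases can be added to an existing distribution with boto3. This is a minimal sketch with a placeholder distribution ID and the example domains above; note that the distribution also needs an ACM certificate (issued in us-east-1) that covers those names before CloudFront will accept them.

```python
import boto3

cf = boto3.client("cloudfront")
DIST_ID = "E1234567890ABC"  # placeholder distribution ID

# Fetch the current config, add the extra domains as aliases, and write it back.
resp = cf.get_distribution_config(Id=DIST_ID)
config, etag = resp["DistributionConfig"], resp["ETag"]

domains = ["www.example.com", "www.demoapp.com", "www.reactapp.com"]
config["Aliases"] = {"Quantity": len(domains), "Items": domains}

cf.update_distribution(Id=DIST_ID, DistributionConfig=config, IfMatch=etag)
```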
If you are trying to route multiple domain names to the same S3 bucket then you are doing something wrong and you need to re-think your strategy.
Domain names are meant to be a unique way of identifying resources on the internet (in this case, your S3 bucket), and there is no point in routing multiple domain names to the same S3 bucket.
If you really want to do this, then there are a couple of options:
Option one:
The easy option is to point domain1.com to the S3 bucket and then have a redirect rule that redirects requests for domain2.com and domain3.com to domain1.com (see the sketch after this answer).
Option two:
Another way is to create three S3 buckets, duplicate the content across all three, and then point a different domain name at each one. This, of course, creates a maintenance nightmare: if you change the application in one bucket, you have to make the same change in all the others. If you are using Git to manage your application, you can script your deployment to push to all three S3 buckets at once.
Again, all of the above is really nothing but a hack to get around the fact that domain names point to unique resources on the internet, and I can't really see why you would do something like this.
Hope this gives you some insight on how to resolve this issue.
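For option one, the redirect itself can live in S3: each extra domain gets an empty bucket whose static-website configuration redirects everything to the primary domain. A boto3 sketch with hypothetical bucket and host names:

```python
import boto3

s3 = boto3.client("s3")

# Empty bucket named after the secondary domain; its only job is to redirect.
# Bucket and host names below are hypothetical.
s3.put_bucket_website(
    Bucket="www.domain2.com",
    WebsiteConfiguration={
        "RedirectAllRequestsTo": {
            "HostName": "www.domain1.com",
            "Protocol": "https",
        }
    },
)
```

The DNS record for www.domain2.com then points at that bucket's website endpoint (or at a CloudFront distribution in front of it if HTTPS is needed on the redirecting domain).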
We want to serve images in our application as fast as possible. As we already have an AWS setup, we prefer to host our images in S3 buckets (but are open to alternatives).
The challenge is routing the request to the closest S3 bucket.
Right now we use Amazon Route 53 with a geolocation routing policy to the closest EC2 instance, which redirects to the respective bucket. We find this inefficient, as the request goes:
origin->DNS->EC2->S3 and would prefer
origin->DNS->S3. Is it possible to bind two static website S3 buckets to the same domain, where requests are routed based on geolocation?
PS: We have looked into CloudFront, but since many of the images are dynamic and only viewed once, we would like the origin to be as close to the user as possible.
It's not possible to do this.
In order for an S3 bucket to serve files as a static website, the bucket name must match the domain that is being browsed. Due to this restriction, it's not possible to have more than one bucket serve files for the same domain because you cannot create more than one bucket with the same name, even in different regions.
CloudFront can be used to serve files from S3 buckets, and those S3 buckets don't need to have their names match the domain. So at first glance, this could be a workaround. However, CloudFront does not allow you to create more than one distribution for the same domain.
So unfortunately, as of this writing, geolocation routing directly to S3 buckets is not possible.
Edit for a deeper explanation:
Whether the DNS entry for your domain is a CNAME, an A record, or an ALIAS is irrelevant. The limitation is on the S3 side and has nothing to do with DNS.
A CNAME record will resolve example.com to s3.amazonaws.com to x.x.x.x and the connection will be made to S3. But your browser will still send example.com in the Host header.
When S3 serves files for webpages, it uses the Host header in the HTTP request to determine from which bucket the files should be served. This is because there is a single HTTP endpoint for S3. So, just like when your own web server is hosting multiple websites from the same server, it uses the Host header to determine which website you actually want.
Once S3 has the Host you want, it compares it against the available buckets; the bucket name is what gets matched against the Host header.
So after a lot of research we did not find an answer to the problem. We did, however, update our setup. The scenario is that a user taps a button and then views some images in an iOS app. The request issued when the user taps the button is geo-routed to the nearest EC2 instance for better performance. Instead of returning the same image links in the EU and the US, we updated it so that a request from the US gets links to an American S3 bucket, and likewise for Europe. We also put up two CloudFront distributions, one in front of each S3 bucket, to increase speed.