I have the following business continuity (BC) solution:
Static website that is deployed in S3 in two identical buckets in regions A (primary) and B (secondary).
CloudFront distribution that uses Origin Group to failover from the bucket in region A to the other one in region B.
The distribution is configured with the custom domain and certificates to provide access to the website.
Route 53 has "A" record connecting the custom domain to the CloudFront distribution.
The above solution works as intended, i.e. it provides failover in case of a failure of the primary site (e.g. an S3 outage).
What I am trying to figure out is the best way to ensure the availability of the CloudFront distribution, i.e. what happens if the region where the distribution is configured becomes compromised.
Originally I was thinking of creating another distribution, identical to the first one but configured in another region, and using Route 53 to fail over between them. Unfortunately this is not possible due to the one-to-one relationship between a CloudFront distribution and its custom domains (CNAMEs).
I would appreciate anyone's experience with creating a BC solution for the CloudFront distribution that uses custom domains.
Thank you,
gen
I'm automating a deployment of a static site to AWS, including the following:
1. Uploading static content to an S3 bucket.
2. Creating a hosted zone in Route 53.
3. Creating an SSL/TLS certificate in ACM.
4. Placing the certificate validation records in Route 53.
5. Creating a CloudFront distribution using the ACM certificate.
6. Adding Route 53 records to point to the CloudFront distribution.
Everything seems to be automatable, but there's a little snag: at step #5, creating the CloudFront distribution, the certificate has just been created and hasn't been validated yet, so creation of the CloudFront distribution fails. Moreover, even waiting for validation won't help, because to be validated the domain's DNS must be updated to point to the name servers for the hosted zone, which aren't known until step #2, when the hosted zone is created.
As I am using a third-party domain registrar, I don't know ahead of time what name servers to indicate for the domain until completion of step #2. But then step #5 cannot complete until the certificate is validated, which requires updating the name servers with my domain registrar. So I have a chicken-and-egg problem.
Is there any way to tell CloudFront to create the distribution even though the domain is "invalid" (not yet validated), so that after everything is finished I can go update my domain registrar with the name servers for the created hosted zone? Or must I stop this automated process in the middle after creation of the hosted zone, and then come back and complete it later after updating the name servers?
Any other ideas to work around this problem? (Yes, I know I can use Route 53 to register the domain as well, and then probably update the name servers automatically, but I want my automated tool to work with third-party registrars.)
I don't believe you will be able to configure CloudFront until the domain is registered and AWS can validate that you control the DNS.
There are a few security issues that AWS has been working to resolve - see https://aws.amazon.com/blogs/security/enhanced-domain-protections-for-amazon-cloudfront-requests/ - and it sounds like you will be prevented from doing what you are attempting to do.
I recommend your steps should be:
1. Create an S3 bucket.
2. Upload your static content to the bucket.
3. Create a "placeholder" CloudFront distribution using the bucket from step 1 (no certificate needed; this is your egg).
4. Create your hosted zone in Route 53 (old step 2).
5. Create an SSL/TLS certificate in ACM (old step 3).
6. Place the certificate validation records in Route 53 (old step 4).
7. Update the CloudFront distribution created in step 3 once your certificate is validated (your chicken!).
...and so on with the remaining automation steps.
The minimal requirements for creating a CloudFront web distribution are small. One bucket will do. Go into the console and try for yourself; it's simple. Once the distribution is created, it can sit idle until your other steps are complete with AWS and your third-party registrar. Then you update the distribution with your fully baked certificate. Because this is possible via the console, and all console actions are backed by API calls, you can automate it.
Take a look at the create CloudFront API call requirements compared to the update CloudFront API requirements. You can create a distribution without a valid certificate. However, for updates to that placeholder distribution you will need all of the required fields, including the ViewerCertificate.
Keep in mind when building your automated tool, if you get stuck with the API, try the action in the console to determine/reverse engineer the values needed to pass into the API calls.
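As a sketch of that create-then-update split, the two configs might be built like this. These dicts are trimmed to the fields relevant here -- a real DistributionConfig also needs CallerReference, DefaultCacheBehavior, and more -- and all bucket names, domains, and ARNs are placeholders:

```python
import copy

def placeholder_config(bucket_domain: str) -> dict:
    """Minimal shape of the placeholder distribution: default
    *.cloudfront.net certificate, no alternate domain names."""
    return {
        "Origins": {"Quantity": 1, "Items": [{
            "Id": "s3-origin",
            "DomainName": bucket_domain,  # e.g. my-bucket.s3.amazonaws.com
            "S3OriginConfig": {"OriginAccessIdentity": ""},
        }]},
        "Aliases": {"Quantity": 0, "Items": []},
        "ViewerCertificate": {"CloudFrontDefaultCertificate": True},
        "Enabled": True,
    }

def attach_certificate(config: dict, cert_arn: str, domain: str) -> dict:
    """Config for the later *update* call, once the ACM certificate
    is validated: add the alternate domain name and switch from the
    default certificate to the ACM one."""
    updated = copy.deepcopy(config)
    updated["Aliases"] = {"Quantity": 1, "Items": [domain]}
    updated["ViewerCertificate"] = {
        "ACMCertificateArn": cert_arn,
        "SSLSupportMethod": "sni-only",
        "MinimumProtocolVersion": "TLSv1.2_2021",
    }
    return updated

# Hypothetical usage with boto3 (dist_id/etag come from the create call):
# cfg = placeholder_config("example-bucket.s3.amazonaws.com")
# cloudfront.create_distribution(DistributionConfig=cfg)          # step 3
# ... old steps 4-6, wait for validation ...
# cloudfront.update_distribution(                                 # step 7
#     Id=dist_id, IfMatch=etag,
#     DistributionConfig=attach_certificate(cfg, cert_arn, "www.example.com"))
```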
I have a custom origin i.e. a web app on an EC2 instance. How do I decide whether I should go for:
a Cloudfront CDN
or,
deploy multiple instances in different regions and configure a Geolocation/proximity based routing policy
The confusion arises from the fact that both aim at routing the request to the nearest location based on where it originates: an edge location in the case of CloudFront, and a region-specific EC2 instance in the case of a multi-region deployment with a Route 53 geolocation-based policy.
There is no reason why you can't do both.
CloudFront automatically routes requests to an edge location nearest the viewer, and when a request can't be served from that location or the nearest regional cache, CloudFront does a DNS lookup for the origin domain name and fetches the content from the origin.
So far, I've only really stated the obvious. But up next is a subtle but important detail:
CloudFront does that origin server DNS lookup from a location that is near the viewer -- which means that if the origin domain name is a latency-based record set in Route 53, pointing to deployments in two or more EC2 regions, then the request CloudFront makes to "find" the origin will be routed to the origin deployment nearest the edge, which is also by definition going to be near to the viewer.
So a single, global CloudFront deployment can automatically and transparently select the best origin, using latency-based configuration for the backend's DNS configuration.
If the caching and transport optimizations provided by CloudFront do not give you the global performance you require, then you can deploy in multiple regions, behind CloudFront... being mindful, always, that a multi-region deployment is almost always a more complex environment, depending on the databases that are backing your application and how they are equipped to handle cross-region replication for reads and/or writes.
Including CloudFront as the front-end is also a better solution for fault tolerance among multiple regional deployments, because CloudFront correctly honors the DNS TTL on your origin server's DNS record, and if you have Route 53 health checks configured to take an unhealthy region out of the DNS response on the origin domain name, CloudFront will quickly stop sending further requests to it. Browsers are notoriously untrustworthy in this regard, sometimes caching a DNS answer until all tabs/windows are closed.
And if CloudFront is your front-end, you can offload portions of your logic to Lambda@Edge if desired.
You can use multi-region deployments for many reasons, mainly:
Proximity
Failover (in case the first region fails, requests can be sent to another region)
Multi-region Lambda deployment is clearly documented here. You can apply the same logic to other AWS resources too (DynamoDB, S3).
https://aws.amazon.com/blogs/compute/building-a-multi-region-serverless-application-with-amazon-api-gateway-and-aws-lambda/
You can also run Lambda@Edge to force all of your requests, or a split of them, to one region at the edge.
Hope it helps.
I am currently implementing canary releases and blue/green deployment for my static website on AWS S3. Basically, I created two S3 buckets (v1 and v2) and two CloudFront distributions (I didn't append the CNAME). Then I created two alias A records in Route 53 with a weighted routing policy of 50% each. However, I am being routed to v1 only, using both my laptop and my mobile to access my domain. I even asked my colleagues to open my domain, and they are being routed to v1 as well.
It really puzzles me: why is no user being routed to v2?
The assigned dyyyexample.cloudfront.net and dzzzexample.cloudfront.net hostnames that route traffic to your CloudFront distributions go to the same place. CloudFront can't see your DNS alias entries, so it is unaware of which alias was followed.
Instead, it looks at the TLS SNI and the HTTP Host header the browser sends. It uses this information to match with the Alternate Domain Name for your distribution -- with no change to the DNS.
Your site's hostname, example.com, is only configured as the Alternate Domain Name on one of your distributions, because CloudFront does not allow you to provision the same value on more than one distribution.
If you swap that Alternate Domain Name entry to the other distribution, all traffic will move to the other distribution.
In short, CloudFront does not directly and natively support Blue/Green or Canary.
The workaround is to use a Lambda@Edge trigger and a cookie to latch each viewer to one origin or the other. A Lambda@Edge origin request trigger allows the origin to be changed while the request is in flight.
There is an A/B testing example in the docs, but that example swaps out the path. See the Dynamic Origin Selection examples for how to swap out the origin. Combining the logic of these two allows A/B testing across two buckets (or any two alternate back-ends).
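Combining those two examples might look roughly like the following Python Lambda@Edge origin request handler. The bucket domain names, cookie name, and split ratio are all assumptions, and a separate viewer response trigger would still be needed to actually Set-Cookie and latch new viewers:

```python
import hashlib

BLUE = "site-blue.s3.us-east-1.amazonaws.com"    # hypothetical buckets
GREEN = "site-green.s3.us-east-1.amazonaws.com"
GREEN_SHARE = 0.5                                # fraction sent to green

def pick_group(request: dict) -> str:
    """Return 'blue' or 'green': honor an existing ab-group cookie,
    otherwise split deterministically on the client IP."""
    for header in request["headers"].get("cookie", []):
        for part in header["value"].split(";"):
            name, _, value = part.strip().partition("=")
            if name == "ab-group" and value in ("blue", "green"):
                return value
    digest = hashlib.sha256(request["clientIp"].encode()).digest()
    return "green" if digest[0] / 255 < GREEN_SHARE else "blue"

def handler(event, context):
    """Origin request trigger: swap the S3 origin before CloudFront
    fetches the object. The Host header must match the new origin."""
    request = event["Records"][0]["cf"]["request"]
    domain = GREEN if pick_group(request) == "green" else BLUE
    request["origin"]["s3"]["domainName"] = domain
    request["headers"]["host"] = [{"key": "Host", "value": domain}]
    return request
```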
What you are explaining should work if you make use of "overlapping aliases" in CloudFront. You configure one distribution to listen to app.example.com and the other one to *.example.com, and use Route 53 weighted routing for app.example.com.
However, weighted routing might not be an ideal solution for canary releases, due to DNS propagation/caching and the fact that it is not sticky.
As Michael suggests, you might want to look into having one CloudFront distribution and routing to bucket A/B using Lambda@Edge or CloudFront Functions.
Here is an example.
If all my endpoints are AWS services like ELB or S3, "Evaluate Target Health" can be used instead of failover records, correct? I can use multiple weighted, geo, or latency records, and if I enable "Evaluate Target Health" it also serves the purpose of failover: if one of the resources a record points to is not healthy, Route 53 will not send traffic to it.
The only use I see for failover records with custom health checks is for non-AWS resources, or if you have a more complex decision you want DNS to make beyond just ELB/S3/etc. service health.
EDIT: so it seems that while I can get active-active with "Evaluate Target Health" (on alias endpoints), if I want active-passive I have to use a failover policy. Is this correct?
Essentially, yes. Evaluating target health makes the records viable candidates for generating responses, only when healthy. Without a failover policy, they're all viable when they're all healthy.
If you do something like latency-based routing and you had two targets, let's say Ohio and London, then you'd essentially have a dual active/passive configuration with reversed roles -- Ohio active and London passive for viewers in North America, and the roles reversed for viewers in Europe. But if you want global active/passive, you'd need a failover policy.
Note that if you are configuring any kind of high-availability design using Route 53 and target health, your best bet is to do all of this behind CloudFront -- where the viewer always connects to CloudFront and CloudFront does the DNS lookup against Route 53 to find the correct origin based on whatever rules you've created. The reason for this is that CloudFront always respects the DNS TTL values. Browsers, for performance reasons, do not. Your viewers can find themselves stuck with DNS records for a dead target because their browsers don't flush their cached DNS lookups until all tabs in all windows are closed. For users like me, that almost never happens.
This also works with latency-based routes in Route 53 behind CloudFront, because CloudFront has already routed the viewer to its optimal edge, and when that edge does a lookup on a latency-based route in Route 53, it receives the answer that has the lowest latency from the CloudFront edge that's handling the request... so both viewer to CloudFront and CloudFront to origin routes are thus optimal.
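The active/passive failover records described above can be sketched as a Route 53 change batch like the following. The record name, alias zone IDs, and DNS names are placeholders, not real values:

```python
def failover_alias_records(name: str, primary: tuple, secondary: tuple) -> list:
    """Build an active/passive pair of alias A records. `primary`
    and `secondary` are (alias_hosted_zone_id, alias_dns_name)
    tuples for the two targets, e.g. two regional ELBs."""
    changes = []
    for role, (zone_id, dns_name) in (("PRIMARY", primary),
                                      ("SECONDARY", secondary)):
        changes.append({
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": name,
                "Type": "A",
                "SetIdentifier": f"{role.lower()}-target",
                "Failover": role,
                "AliasTarget": {
                    "HostedZoneId": zone_id,
                    "DNSName": dns_name,
                    # Pull an unhealthy target out of DNS answers:
                    "EvaluateTargetHealth": True,
                },
            },
        })
    return changes

# Hypothetical usage:
# route53.change_resource_record_sets(
#     HostedZoneId="Z0000000EXAMPLE",
#     ChangeBatch={"Changes": failover_alias_records(
#         "origin.example.com",
#         ("ZAAAA", "primary-elb.example.com"),
#         ("ZBBBB", "secondary-elb.example.com"))})
```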
Note also that failover routing to S3 with only DNS is not possible, because S3 expects the hostname to match the bucket name, and bucket names are global. An S3 failure is a rare event, but it has happened at least once. When it happened, the impact was limited to a single region, as designed. For a site to survive an S3 regional failure requires additional heroics involving either CloudFront and Lambda@Edge triggers, or EC2-based proxies that can modify the request as needed and send it to the alternate bucket in an alternate region.
Latency-based routing to buckets with Route 53 is also not possible, for the same reason, but can be accomplished by Lambda@Edge origin request triggers. These triggers are aware of the AWS region where a given invocation is running, and thus can swap origin servers based on location.
I've provisioned the following AWS resources to host a static website (via CloudFormation):
S3 bucket to store the website
CloudFront distribution for CDN (for reduced latency)
Route 53 A record sets that direct traffic to the CDN
I have two completely different websites that I'd like to A/B test to see traffic behavior and conversions. Is it possible to configure A/B testing using the resources that I've configured?
This is what I've tried so far.
Bringing up the same CloudFormation stack for the second website doesn't work, because CloudFront only allows a single distribution to have a particular CNAME. For example, if my website is example.com, only one CloudFront distribution can have that CNAME configured on it.
If I try to use the existing CloudFront distribution, there doesn't seem to be a way to split traffic between the two different websites hosted in different S3 buckets. Within the CloudFront distribution I need to create a second origin and then a behavior. The behavior requires a path + precedence, so all the traffic gets sent to one of the websites but not the other.
Is it not possible to configure A/B testing of static websites hosted in S3 with CloudFront?
You can use a Lambda@Edge function to manipulate the request and then serve a different page using CloudFront.
A/B Testing example:
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-examples.html#lambda-examples-a-b-testing