My S3 bucket (front-end code) and my EC2 instance (back-end code) are both linked to 'example.com'.
What I'd like to do is direct users to the pages stored in S3, unless they enter 'example.com/api/' in their browser.
In other words, 'example.com/api/' should go to the server, and every other route should go to the React app.
Which AWS service should I use: CloudFront, Route 53, or a load balancer? I am so confused. Thank you.
TL;DR: Create a single CloudFront distribution for your domain with separate cache behaviors for each of the paths you need to support. You can easily add additional applications, control caching, or write edge functions (using Lambda@Edge or CloudFront Functions) for any further customization you might need.
Details: CloudFront allows you to configure multiple origins (e.g. S3, EC2, API Gateway, ELB, custom URLs, and so forth) and then create cache behaviors (routes) that direct traffic to the appropriate origin.
In your case, you would create a cache behavior for /api/* that points to your EC2 origin, while the default cache behavior (think of this as your fallback route) points to your S3 origin for all requests that do not begin with /api/.
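For reference, here is a minimal CDK (TypeScript) sketch of that layout. The bucket, the back-end hostname, and the certificate ARN are placeholders, not anything from your setup:

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as acm from 'aws-cdk-lib/aws-certificatemanager';
import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
import * as origins from 'aws-cdk-lib/aws-cloudfront-origins';

export class SiteStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Bucket holding the React build output (hypothetical).
    const siteBucket = new s3.Bucket(this, 'FrontEndBucket');

    // Existing ACM certificate for example.com (must live in us-east-1 for CloudFront).
    const certificate = acm.Certificate.fromCertificateArn(
      this, 'SiteCert', 'arn:aws:acm:us-east-1:123456789012:certificate/REPLACE_ME');

    new cloudfront.Distribution(this, 'SiteDistribution', {
      domainNames: ['example.com'],
      certificate,
      // Default (fallback) behavior: everything that is not /api/* comes from S3.
      defaultBehavior: {
        origin: new origins.S3Origin(siteBucket),
      },
      additionalBehaviors: {
        // /api/* goes to the EC2 back end (placeholder hostname).
        '/api/*': {
          origin: new origins.HttpOrigin('ec2-backend.example.com', {
            // Use HTTPS_ONLY instead if the back end has its own certificate.
            protocolPolicy: cloudfront.OriginProtocolPolicy.HTTP_ONLY,
          }),
          allowedMethods: cloudfront.AllowedMethods.ALLOW_ALL,
          cachePolicy: cloudfront.CachePolicy.CACHING_DISABLED,
        },
      },
    });
  }
}
```

Note that CloudFront reaches a custom origin by DNS name over the public internet, so the EC2 instance needs a publicly resolvable hostname (or should sit behind an ALB).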
You can create a subdomain, say api.example.com, using Route 53; here is a document for that: https://aws.amazon.com/premiumsupport/knowledge-center/create-subdomain-route-53/
And for routing traffic to the newly created subdomain, go through this document: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-routing-traffic-for-subdomains.html
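If you go the subdomain route, the Route 53 record itself is tiny in CDK (TypeScript). The zone lookup and the IP address below are placeholders:

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as route53 from 'aws-cdk-lib/aws-route53';

export class ApiSubdomainStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Existing public hosted zone for example.com.
    // fromLookup requires `env` (account/region) to be set on the stack.
    const zone = route53.HostedZone.fromLookup(this, 'Zone', {
      domainName: 'example.com',
    });

    // api.example.com -> the back end's Elastic IP (placeholder address).
    new route53.ARecord(this, 'ApiRecord', {
      zone,
      recordName: 'api',
      target: route53.RecordTarget.fromIpAddresses('203.0.113.10'),
    });
  }
}
```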
Related
I'm moving my domain names from Cloudflare's DNS to AWS Route 53. In some cases I use Cloudflare's redirects for projects that are dead, so that their domains go to a page on another domain: https://projectx.com goes to https://example.com/projectx-is-no-more.
I want to replicate this in AWS and what I found so far is this:
Set up an S3 bucket with the redirect to the desired URL, https://example.com/projectx-is-no-more
Set up CloudFront for the domain, projectx.com
Generate the TLS cert for projectx.com and add it to CloudFront so it can serve both https and http.
Set up Route53 to resolve the domain name to CloudFront.
I set it up, it's working, I'm even using CDK so I'm not doing it manually. But I'm wondering if there's a way of setting up these redirects that requires less moving pieces. It sounds like such a redirect would be a common enough problem that maybe Route53 or CloudFront would have a shortcut. Are there any?
Update: using only S3 doesn't work, because S3 cannot serve https://projectx.com. S3 has no mechanism for responding to HTTPS requests for arbitrary domains; there is no way to add a TLS certificate (and keys) for another domain.
I checked for information and see only four possible solutions:
1. Set up CloudFront + S3 *
2. Set up an Application Load Balancer
3. Set up API Gateway + Lambda (a mock integration may be used instead of Lambda, which should reduce the cost)
4. Use GitHub Pages with a custom domain
* S3 website hosting supports only HTTP traffic, so we need to add CloudFront for HTTPS:
Amazon S3 does not support HTTPS access to the website. If you want to use HTTPS, you can use Amazon CloudFront to serve a static website hosted on Amazon S3.
In my opinion the 2nd option is super easy to set up, but running an ALB 24/7 is a little expensive. Lambda and API Gateway pricing, on the other hand, depends on the request count, and CloudFront also seems to be cheaper than an ALB.
So the better solution depends on how many requests you expect.
The 4th option depends on the GitHub platform (going beyond a pure-AWS scope), but it is absolutely free and supports custom domains and Let's Encrypt certificates out of the box.
You just need to create a repository with a static index.html file that performs the redirect.
You can do it without including CloudFront.
What you need to do is create an S3 bucket named projectx.com. In Properties, go to Static website hosting, enable it, and choose Redirect as the hosting type (adding the redirect target).
You will still need to set up Route 53, but you now add an alias record pointing to this projectx.com bucket instead of to CloudFront.
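A rough CDK (TypeScript) sketch of that bucket-only setup, assuming the hosted zone for projectx.com already exists. The bucket name must exactly match the domain, and (as the question's update notes) this serves the redirect over plain HTTP only:

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as route53 from 'aws-cdk-lib/aws-route53';
import * as targets from 'aws-cdk-lib/aws-route53-targets';

export class RedirectStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Bucket name must equal the domain for website-endpoint aliasing to work.
    const redirectBucket = new s3.Bucket(this, 'RedirectBucket', {
      bucketName: 'projectx.com',
      // Redirects every request to example.com (the request path is carried over);
      // for a fixed target path like /projectx-is-no-more you would use S3 routing
      // rules instead of this blanket redirect.
      websiteRedirect: {
        hostName: 'example.com',
        protocol: s3.RedirectProtocol.HTTPS,
      },
    });

    // Existing hosted zone for projectx.com (lookup requires `env` on the stack).
    const zone = route53.HostedZone.fromLookup(this, 'Zone', {
      domainName: 'projectx.com',
    });

    // Alias the apex record to the bucket's website endpoint instead of CloudFront.
    new route53.ARecord(this, 'ApexAlias', {
      zone,
      target: route53.RecordTarget.fromAlias(
        new targets.BucketWebsiteTarget(redirectBucket)),
    });
  }
}
```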
I have a load balancer (ALB) that handles my app's API requests (alb.domain.com). I also have an S3 bucket from which my static site is served via CloudFront (domain.com). I have configured a distribution so that /api goes to the ALB and the rest goes to S3, and all works fine. However, instead of routing my API requests through CloudFront (domain.com/api), I can also route them directly to the ALB (alb.domain.com/api), which also works fine. I don't need caching on my API requests, so I disabled it on the distribution anyway. What would be the point in making the requests via CloudFront, given that I am (1) introducing an extra hop in the route and (2) my requests are now counted towards CloudFront pricing? Are there any disadvantages to routing requests directly to the ALB and using CloudFront only for serving static files?
There are a number of benefits to using CloudFront in your scenario (and generally).
Firstly, routing through CloudFront gives the impression that this is a single domain: it removes the need to implement CORS and reduces complexity in any security configuration (such as CSP). Additionally, you can cache static requests, or those that change infrequently, in the future if you need to.
You also gain access to the edge network for routing, which benefits clients that are further away from the region. The user hits the closest edge location to them, which then routes and establishes a connection with the origin over the private AWS network (faster than traversing the public internet).
Additionally, security evaluations can happen at the edge: both WAF rules and Lambda@Edge functions are evaluated closer to the user, on AWS edge infrastructure rather than on your own resources.
So currently we have two EC2 instances (let's say A and B) and a CloudFront distribution.
If the user goes to www.appdomain.com/app, they should be routed to the CloudFront SPA page. However, if the user goes to www.appdomain.com, they should be routed to EC2 instance A, and if they go to www.appdomain.com/api, to EC2 instance B.
All of these applications must be on the same domain.
We have found out how to set path rules using an Application Load Balancer, but would like to know how to set this up with CloudFront as well.
Update:
So in summary, the question is: how do we route /app to CloudFront, and / and /api to EC2?
All of these applications must be on the same domain.
In this scenario, every request for that domain must pass through CloudFront first.
Your DNS record will need to point to CloudFront (not the ALB) and CloudFront is then responsible for routing the request to the appropriate target -- to an EC2 instance via an ALB, to an S3 bucket, to wherever you need the requests to go -- and each of these things is called a content origin.
Once the origins are specified by their individual domain name (not your site's domain name, but a domain name specifically for the resource in question), you define CloudFront path patterns to select which origin is to receive the request for each pattern (e.g. /api*).
Once your DNS is changed to point to CloudFront, all requests go there first, and are handed off to the next service, unless CloudFront has a cached copy of the requested object -- in which case, CloudFront will serve it from its cache, and nothing will be sent to the origin.
You can't route from ALB to CloudFront, but you can route from CloudFront to ALB.
You can't subdivide a domain into multiple, different path-based content origins without using a reverse proxy that is able to match the paths and fetch the content on behalf of the requester -- HTTP and DNS don't support such functionality. CloudFront, in addition to providing the CDN service, is also a reverse proxy.
ALB, of course, is also a reverse proxy, but does not support as many different types of content origins as CloudFront does -- ALB only supports EC2 instances, servers in your data center (in which case, ALB must have a VPN path in order to reach them), and Lambda functions as content origins. CloudFront can use literally anything as a content origin as long as it speaks HTTP/HTTPS and is accessible via the Internet. (To choose a somewhat random example, CloudFront can even use a service from another vendor -- like a Google Cloud Storage bucket -- as a content origin, if that was something you needed to do, for whatever reason... because these are accessible via HTTP across the public Internet.)
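As a concrete illustration of those path patterns, here is a hedged CDK (TypeScript) sketch for the www.appdomain.com layout above. The origin hostnames are placeholders; in practice the instances would usually sit behind an ALB so they have a stable, publicly resolvable name:

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
import * as origins from 'aws-cdk-lib/aws-cloudfront-origins';

export class AppDistributionStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Bucket holding the SPA build (hypothetical).
    const spaBucket = new s3.Bucket(this, 'SpaBucket');

    new cloudfront.Distribution(this, 'AppDistribution', {
      // Default behavior: anything that matches no other pattern -> instance A.
      defaultBehavior: {
        origin: new origins.HttpOrigin('instance-a.appdomain.com', {
          protocolPolicy: cloudfront.OriginProtocolPolicy.HTTP_ONLY,
        }),
        allowedMethods: cloudfront.AllowedMethods.ALLOW_ALL,
        cachePolicy: cloudfront.CachePolicy.CACHING_DISABLED,
      },
      additionalBehaviors: {
        // /app* -> the SPA in S3.
        '/app*': { origin: new origins.S3Origin(spaBucket) },
        // /api* -> instance B (placeholder hostname).
        '/api*': {
          origin: new origins.HttpOrigin('instance-b.appdomain.com', {
            protocolPolicy: cloudfront.OriginProtocolPolicy.HTTP_ONLY,
          }),
          allowedMethods: cloudfront.AllowedMethods.ALLOW_ALL,
          cachePolicy: cloudfront.CachePolicy.CACHING_DISABLED,
        },
      },
    });
  }
}
```

The custom domain and certificate are omitted here for brevity; your DNS record for www.appdomain.com then points at this distribution, as described above.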
How does CloudFront work with Route 53 routing policies?
So as I understand it, CloudFront is supposed to route requests to the nearest edge, which is in effect the Route 53 latency policy. If you have an R53 hosted zone entry for your CloudFront domain name, is this done by default if you leave the routing policy as simple, or do you need to set it explicitly yourself? And if you chose another policy type (failover, geolocation, etc.), would that override it?
You leave it as simple.
You don't have access to the necessary information to actually configure it yourself -- CloudFront returns an appropriate DNS response based on the location of the requester, from a single, simple DNS record. The functionality and configuration is managed transparently by the logic that powers the cloudfront.net domain, you set it and forget it, because there are no user-serviceable parts inside.
This is true whether you use an A-record Alias or a CNAME.
Any other configuration would not really make sense, because talking of failover or geolocation implies that you'd want to send traffic somewhere other than where CloudFront's algorithm would send it.
Now... there are cases when, behind CloudFront, you might want to use some of Route 53's snazzier options. Let's say you had app servers in multiple regions serving exactly the same content. Latency-based routing for the origin hostname (the one where CloudFront sends cache misses) would allow CloudFront to magically send requests to the app server closest to the CloudFront edge that serves each individual request. This would be unrelated to the routing from the browser to the edge, though.
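If it helps to see it, the "simple" record in question is just an alias to the distribution. A minimal CDK (TypeScript) sketch, assuming the hosted zone and distribution are defined elsewhere in your app:

```typescript
import { Construct } from 'constructs';
import * as route53 from 'aws-cdk-lib/aws-route53';
import * as targets from 'aws-cdk-lib/aws-route53-targets';
import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';

// Adds the apex DNS record for an existing CloudFront distribution.
export function addCdnAlias(
  scope: Construct,
  zone: route53.IHostedZone,
  distribution: cloudfront.IDistribution,
): route53.ARecord {
  return new route53.ARecord(scope, 'CdnAlias', {
    zone,
    // No latency/geo/failover settings: a plain (simple) alias record is all
    // CloudFront needs, since edge selection happens behind cloudfront.net.
    target: route53.RecordTarget.fromAlias(new targets.CloudFrontTarget(distribution)),
  });
}
```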
I have a CNAME (abc.com) pointed to my Elastic IP, and I need to create three EC2 instances (e.g. Instance1, Instance2, Instance3) for three different applications.
Now I want to achieve following results:
If a user hits "abc.com/App1", the request should be routed to Instance1. If a user hits "abc.com/App2", it should be routed to Instance2. If a user hits "abc.com/App3", it should be routed to Instance3.
All these instances should work independently, and if any of them goes down, it should not impact the others.
We can't use subdomains. I am trying to find a way to do this with ELB.
The Classic ELB does not offer path-based routing -- all instances connected to it receive a share of incoming requests.
CloudFront, however, does support path-based routing. You can configure each instance as a "custom origin" and configure which path patterns to route to it.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-values-specify.html#DownloadDistValuesPathPattern
Granted, this is not the "primary purpose" of CloudFront, but it works quite nicely in this application.
CloudFront is actually a caching reverse proxy CDN service, so if you go this route, you can also potentially relieve your back-end machines of some workload, or you can disable caching entirely by forwarding all the request headers to the origin and returning an appropriate Cache-Control: header from your instances.
A CloudFront distribution can be associated with a domain name in Route 53 in exactly the same way that an ELB can -- using Alias records.
Bonus: you can also easily pluck additional paths and route them directly to S3 to serve up static assets from an S3 bucket.
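Here is a hedged CDK (TypeScript) sketch of that arrangement, using the modern cache-policy settings rather than the legacy header-forwarding approach mentioned above. The instance hostnames, the assets bucket, and the /static/* path are placeholders for illustration:

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
import * as origins from 'aws-cdk-lib/aws-cloudfront-origins';

export class PathRoutingStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Bucket for static assets (the "bonus" path; hypothetical).
    const assetsBucket = new s3.Bucket(this, 'AssetsBucket');

    // Shared behavior settings: caching disabled, all HTTP methods allowed.
    const appBehavior = (origin: cloudfront.IOrigin): cloudfront.BehaviorOptions => ({
      origin,
      allowedMethods: cloudfront.AllowedMethods.ALLOW_ALL,
      cachePolicy: cloudfront.CachePolicy.CACHING_DISABLED,
    });

    // Each instance becomes a custom origin via its public DNS name (placeholders).
    const instance1 = new origins.HttpOrigin('instance1.abc.example.com', {
      protocolPolicy: cloudfront.OriginProtocolPolicy.HTTP_ONLY,
    });
    const instance2 = new origins.HttpOrigin('instance2.abc.example.com', {
      protocolPolicy: cloudfront.OriginProtocolPolicy.HTTP_ONLY,
    });
    const instance3 = new origins.HttpOrigin('instance3.abc.example.com', {
      protocolPolicy: cloudfront.OriginProtocolPolicy.HTTP_ONLY,
    });

    new cloudfront.Distribution(this, 'PathRoutingDistribution', {
      // Which origin catches unmatched paths is your choice; Instance1 is used
      // here purely as an example.
      defaultBehavior: appBehavior(instance1),
      additionalBehaviors: {
        '/App1*': appBehavior(instance1),
        '/App2*': appBehavior(instance2),
        '/App3*': appBehavior(instance3),
        // Bonus: route an extra path straight to S3 for static assets.
        '/static/*': { origin: new origins.S3Origin(assetsBucket) },
      },
    });
  }
}
```

With this in place, the abc.com record points at the distribution (via an alias record, as noted above), and each instance can fail independently without affecting the routing of the others.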