Are there any benefits to running a static website via S3/CloudFront versus running one via Express (on Fargate) sitting behind API Gateway, in terms of performance or any other considerations? The latter approach would allow API endpoints to be hosted via Express/API Gateway on the same domain.
Scaling is better in the case of CloudFront/S3.
CloudFront caches and serves the S3 content from edge locations.
CloudFront also gives you options to alter the request and response. Say you want to add CSP headers to every response going out from your domain, or you want to prevent open redirects from your domain's URLs: you can use origin-request and origin-response Lambda@Edge functions in CloudFront.
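For example, a minimal origin-response Lambda@Edge sketch in TypeScript (the header value is just a placeholder; deploy it in us-east-1 and attach it to the distribution's origin-response event):

```typescript
// Illustrative origin-response Lambda@Edge handler.
// The CloudFront event types come from @types/aws-lambda.
import { CloudFrontResponseHandler } from "aws-lambda";

export const handler: CloudFrontResponseHandler = async (event) => {
  const response = event.Records[0].cf.response;

  // Add a CSP header to every response leaving the distribution;
  // the policy value here is only an example.
  response.headers["content-security-policy"] = [
    { key: "Content-Security-Policy", value: "default-src 'self'" },
  ];

  return response;
};
```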
I'm moving my domain names from CloudFlare's DNS to AWS Route53, and in some cases I'm using CloudFlare's redirects for projects that are dead so that their domains go to a page on another domain, e.g. https://projectx.com goes to https://example.com/projectx-is-no-more.
I want to replicate this in AWS and what I found so far is this:
Set up an S3 bucket with the redirect to the desired URL, https://example.com/projectx-is-no-more
Set up CloudFront for the domain, projectx.com
Generate the TLS cert for projectx.com and add it to CloudFront so it can serve both https and http.
Set up Route53 to resolve the domain name to CloudFront.
I set it up and it's working; I'm even using CDK so I'm not doing it manually. But I'm wondering if there's a way of setting up these redirects that requires fewer moving pieces. Such a redirect seems like a common enough problem that maybe Route53 or CloudFront would have a shortcut. Are there any?
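For reference, a minimal sketch of this setup in CDK v2 (TypeScript; construct IDs are simplified, and the stack is assumed to deploy to us-east-1 so CloudFront can use the certificate):

```typescript
import { Stack, StackProps } from "aws-cdk-lib";
import { Construct } from "constructs";
import * as s3 from "aws-cdk-lib/aws-s3";
import * as acm from "aws-cdk-lib/aws-certificatemanager";
import * as cloudfront from "aws-cdk-lib/aws-cloudfront";
import * as origins from "aws-cdk-lib/aws-cloudfront-origins";
import * as route53 from "aws-cdk-lib/aws-route53";
import * as targets from "aws-cdk-lib/aws-route53-targets";

export class ProjectxRedirectStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // 1. Bucket that exists only to redirect. This sends every request to
    //    example.com (keeping the original path); redirecting everything to
    //    the fixed /projectx-is-no-more key needs S3 routing rules instead.
    const redirectBucket = new s3.Bucket(this, "RedirectBucket", {
      websiteRedirect: {
        hostName: "example.com",
        protocol: s3.RedirectProtocol.HTTPS,
      },
    });

    // 2 + 3. CloudFront in front of the bucket's *website endpoint*, with an
    //        ACM certificate for projectx.com (the cert must be in us-east-1).
    const zone = route53.HostedZone.fromLookup(this, "Zone", {
      domainName: "projectx.com",
    });
    const certificate = new acm.Certificate(this, "Cert", {
      domainName: "projectx.com",
      validation: acm.CertificateValidation.fromDns(zone),
    });
    const distribution = new cloudfront.Distribution(this, "Dist", {
      defaultBehavior: {
        origin: new origins.HttpOrigin(redirectBucket.bucketWebsiteDomainName, {
          protocolPolicy: cloudfront.OriginProtocolPolicy.HTTP_ONLY,
        }),
      },
      domainNames: ["projectx.com"],
      certificate,
    });

    // 4. Route53 alias pointing the domain at the distribution.
    new route53.ARecord(this, "Alias", {
      zone,
      target: route53.RecordTarget.fromAlias(new targets.CloudFrontTarget(distribution)),
    });
  }
}
```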
Update: using only S3 doesn't work because S3 cannot serve https://projectx.com. S3 has no way to respond to HTTPS requests for arbitrary domains; there's no way to add a TLS certificate (and key) for another domain.
I checked for information and see only four possible solutions:
① Set up CloudFront + S3 ※
② Set up an Application Load Balancer
③ Set up API Gateway + Lambda (a mock integration may be used instead of Lambda, which should reduce the service cost; sketched at the end of this answer)
④ Use GitHub Pages with a custom domain
※ S3 website hosting supports only HTTP traffic, so we need to add CloudFront for HTTPS:
Amazon S3 does not support HTTPS access to the website. If you want to use HTTPS, you can use Amazon CloudFront to serve a static website hosted on Amazon S3.
In my opinion, option ② is super easy to set up, but running an ALB 24/7 is a little bit expensive. Lambda and API Gateway pricing, on the other hand, depends on request count, and CloudFront seems to be cheaper than an ALB too.
So the better solution depends on how many requests you have.
Option ④ depends on the GitHub platform (a wider scope than AWS only), but it is absolutely free and supports custom domains and Let's Encrypt certificates out of the box.
You just need to create a repository with a static index.html file that performs the redirect.
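Regarding option ③, here is a rough CDK (TypeScript) sketch of how a mock integration can return the 301 without any Lambda; the API name and redirect target are placeholders:

```typescript
import { Construct } from "constructs";
import * as apigateway from "aws-cdk-lib/aws-apigateway";

// Answers every GET on the root resource with a 301 to the target URL.
export function addRedirectApi(scope: Construct): apigateway.RestApi {
  const api = new apigateway.RestApi(scope, "RedirectApi");

  api.root.addMethod(
    "GET",
    new apigateway.MockIntegration({
      // The mock integration needs a statusCode in the mapping template output.
      requestTemplates: { "application/json": '{"statusCode": 301}' },
      integrationResponses: [
        {
          statusCode: "301",
          responseParameters: {
            // Static mapping value for the Location header (note the quotes).
            "method.response.header.Location":
              "'https://example.com/projectx-is-no-more'",
          },
        },
      ],
    }),
    {
      methodResponses: [
        {
          statusCode: "301",
          responseParameters: { "method.response.header.Location": true },
        },
      ],
    }
  );

  return api;
}
```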
You can do it without including CloudFront.
What you need to do is create an S3 bucket named projectx.com. In Properties, go to Static website hosting, enable it, and choose Redirect as the hosting type (adding the redirection URL).
You will still need to set up Route53, but you will now add an alias to this projectx.com bucket instead of pointing to CloudFront.
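In CDK terms, that alias looks roughly like the sketch below (TypeScript; construct IDs are illustrative, and note that, per the update in the question, this serves the redirect over HTTP only):

```typescript
import { Construct } from "constructs";
import * as s3 from "aws-cdk-lib/aws-s3";
import * as route53 from "aws-cdk-lib/aws-route53";
import * as targets from "aws-cdk-lib/aws-route53-targets";

export function addBucketRedirect(scope: Construct, zone: route53.IHostedZone) {
  // The bucket name must exactly match the domain for the website-endpoint alias to work.
  const bucket = new s3.Bucket(scope, "RedirectBucket", {
    bucketName: "projectx.com",
    websiteRedirect: {
      hostName: "example.com",
      protocol: s3.RedirectProtocol.HTTPS,
    },
  });

  // Route53 alias pointing projectx.com straight at the bucket's website endpoint.
  new route53.ARecord(scope, "Alias", {
    zone,
    target: route53.RecordTarget.fromAlias(new targets.BucketWebsiteTarget(bucket)),
  });
}
```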
I'm new to AWS WAF and got stuck setting it up for an application hosted on a dedicated server. I didn't find any information on how to set it up without migrating to AWS servers, but I did find that WAF integrates with CloudFront. Still, I found only a little information explaining how to integrate this CDN with my web application. So, the main question is:
Is it possible to use AWS WAF with an application hosted on a dedicated server? And if it is possible, can you provide some guides and/or docs for setting it up?
Yes, you can use WAF with a server outside AWS.
WAF works with CloudFront, and CloudFront does not require the origin server to be in the AWS ecosystem.
When you create a distribution, you specify where CloudFront sends requests for the files. CloudFront supports using several AWS resources as origins. For example, you can specify an Amazon S3 bucket or a MediaStore container, a MediaPackage channel, or a custom origin, such as an Amazon EC2 instance or your own HTTP web server. (emphasis added)
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistS3AndCustomOrigins.html
Configuring CloudFront to work with your external server is no different than configuring it to work with a server in EC2. Your DNS entry (e.g. www.example.com) changes to point to CloudFront, and CloudFront connects to your server using a new name that you create (e.g. origin.example.com). CloudFront proxies requests through to your server, unless the edge location handling a given request happens to have access to a copy of the same resource that it cached while handling a previous request for the same page -- that's how CloudFront gets your content, by caching it as it handles requests that are passing through. (You don't pre-load any content into CloudFront.) If CloudFront has a cached copy, your server sees nothing, and CloudFront returns the object to the browser from its cache. But CloudFront isn't strictly a CDN, even though they market it that way. It is a global network of reverse proxies and high-reliability/low-latency transport.
You'll want to take steps to ensure that the web server rejects requests that didn't come through CloudFront. See Using Custom Headers to Restrict Access to Your Content on a Custom Origin as well as the list of CloudFront IP Addresses, which you could use in your web server's firewall.
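For the custom-header approach, here is a minimal Express/TypeScript sketch; the header name and the environment variable holding the shared secret are whatever you choose when configuring the origin custom header on the distribution:

```typescript
// Reject any request that didn't pass through CloudFront, based on a shared
// secret that CloudFront adds as an origin custom header.
import express from "express";

const app = express();
const ORIGIN_SECRET = process.env.CLOUDFRONT_ORIGIN_SECRET ?? "";

app.use((req, res, next) => {
  // "x-origin-verify" is an arbitrary header name you choose; CloudFront is
  // configured to add it (with the secret value) to every origin request.
  if (req.header("x-origin-verify") !== ORIGIN_SECRET) {
    res.status(403).send("Forbidden");
    return;
  }
  next();
});

app.get("/", (_req, res) => {
  res.send("Hello from the origin");
});

app.listen(8080);
```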
Once you have your site working through CloudFront, all you do is activate WAF on the distribution. CloudFront is very tightly integrated with WAF so that is a very simple change, once you have your WAF rules set up.
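If you manage the distribution in code, that last step is a single property; a CDK (TypeScript) sketch, assuming an empty allow-by-default web ACL (a CLOUDFRONT-scoped ACL has to be created in us-east-1):

```typescript
import { Construct } from "constructs";
import * as cloudfront from "aws-cdk-lib/aws-cloudfront";
import * as origins from "aws-cdk-lib/aws-cloudfront-origins";
import * as wafv2 from "aws-cdk-lib/aws-wafv2";

export function addProtectedDistribution(scope: Construct) {
  // Web ACL with CLOUDFRONT scope (add your managed or custom rules here).
  const webAcl = new wafv2.CfnWebACL(scope, "WebAcl", {
    scope: "CLOUDFRONT",
    defaultAction: { allow: {} },
    visibilityConfig: {
      cloudWatchMetricsEnabled: true,
      metricName: "site-web-acl",
      sampledRequestsEnabled: true,
    },
  });

  // CloudFront in front of the external (non-AWS) server, with WAF attached.
  return new cloudfront.Distribution(scope, "Dist", {
    defaultBehavior: {
      origin: new origins.HttpOrigin("origin.example.com"),
    },
    webAclId: webAcl.attrArn,
  });
}
```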
I have a website that I serve using Express running on an EC2 instance. But this EC2 instance serves only the static content (HTML, JS, CSS); the dynamic part comes from API Gateway. Right now, these two have different IPs (and domains), which means I have to deal with CORS problems when accessing API Gateway from the web pages. If I could somehow serve the static and dynamic content through the same address, that would be much better.
The way I see it, this can be done in two ways. I can serve both of them on the same host but on different ports, though I'm not sure whether that solves the CORS problem. Another way, which I'm sure will not face the CORS problem, is serving API Gateway under a specific sub-path, like http://example.com/api, while the static content is served from every other URL.
Does anyone know how I can do this? Is CloudFront what I need, or an Elastic Load Balancer?
Yes, CloudFront is what you need for this scenario.
Application Load Balancer can also do path-based routing, but it doesn't support API Gateway as a target.
By default, CloudFront can route requests under a single domain to the correct choice from up to 25 destinations, using up to 25 path patterns (both of these limits can be increased by request, but it sounds like for now you only need 2 of each: /api/* to the API, and the default * route to EC2). You can also leverage this setup to put some static content in an S3 bucket and take some load off the servers in EC2.
For this configuration, you will want to configure your API Gateway deployment with a regional endpoint, not an edge-optimized endpoint. This is because edge-optimized endpoints already use part of the CloudFront infrastructure (a part you have no ability to configure), so using an edge-optimized endpoint behind your own CloudFront distribution sends each request and response through the CloudFront network twice, increasing latency.
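A CDK (TypeScript) sketch of that routing; the host names, the API stage, and the cache/origin-request policies are assumptions you'd adjust:

```typescript
import { Construct } from "constructs";
import * as cloudfront from "aws-cdk-lib/aws-cloudfront";
import * as origins from "aws-cdk-lib/aws-cloudfront-origins";

export function addSiteDistribution(scope: Construct) {
  return new cloudfront.Distribution(scope, "SiteDist", {
    // Default * behavior: the Express/EC2 server with the static content.
    defaultBehavior: {
      origin: new origins.HttpOrigin("ec2-origin.example.com"),
    },
    additionalBehaviors: {
      // /api/* behavior: the API Gateway *regional* endpoint.
      "/api/*": {
        origin: new origins.HttpOrigin("abc123.execute-api.us-east-1.amazonaws.com", {
          // originPath prepends the stage, so /api/foo reaches the prod stage
          // as /prod/api/foo; the API's resources therefore live under /api.
          originPath: "/prod",
        }),
        allowedMethods: cloudfront.AllowedMethods.ALLOW_ALL,
        cachePolicy: cloudfront.CachePolicy.CACHING_DISABLED,
        // Forward viewer headers, query strings, and cookies, but not the Host
        // header, which API Gateway needs to set itself.
        originRequestPolicy: cloudfront.OriginRequestPolicy.ALL_VIEWER_EXCEPT_HOST_HEADER,
      },
    },
  });
}
```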
I'm currently extensively using the API Gateway as a source for CloudFront. My CloudFront serves other things as well, such as plain files from S3.
I've recently been looking into improving the current setup, and noticed the "Custom Domain Names" option in API Gateway.
From what I've understood, using it creates an unconfigurable CloudFront instance. I've not been able to find much information beyond that.
Are there any advantages to using API Gateway's Custom Domain Names over using a self-managed CloudFront instance?
When you use AWS CloudFront, you can configure different origins for the distribution, such as S3, API Gateway, etc., which allows you to serve different services through the same domain. For example, you can have mydomain.com point to index.html in S3 and mydomain.com/api/* point to API Gateway. This lets the frontend JavaScript access the API without the need for Cross-Origin Request support at API Gateway, which avoids the OPTIONS preflight request sent by the browser (if you have headers like Cookie, Authorization, etc.).
On the other hand, you can configure a Custom Domain Name on API Gateway. This allows you to define a custom domain as well as a custom SSL certificate using AWS Certificate Manager. The main difference is that, if you have a frontend application, you need to define two domains (or different subdomains): one for the frontend served from S3 and one for the API. When accessing the API from a different domain, CORS must be configured at API Gateway, and performance can suffer from the added latency.
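For comparison, a CDK (TypeScript) sketch of the Custom Domain Name option; the domain, certificate ARN, and base path are placeholders:

```typescript
import { Construct } from "constructs";
import * as acm from "aws-cdk-lib/aws-certificatemanager";
import * as apigateway from "aws-cdk-lib/aws-apigateway";

export function addApiDomain(scope: Construct, api: apigateway.RestApi) {
  // Certificate for the API's own (sub)domain, managed in ACM.
  const certificate = acm.Certificate.fromCertificateArn(
    scope,
    "ApiCert",
    "arn:aws:acm:us-east-1:123456789012:certificate/placeholder"
  );

  // API Gateway provisions (and manages) the edge-optimized CloudFront
  // distribution behind this domain; you don't get to configure it.
  const domain = new apigateway.DomainName(scope, "ApiDomain", {
    domainName: "api.mydomain.com",
    certificate,
    endpointType: apigateway.EndpointType.EDGE,
  });

  domain.addBasePathMapping(api, { basePath: "v1" });
  return domain;
}
```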
I have a domain formulagrid.com.
I am using AWS S3 to host it as a static website. My problem was that I wanted to redirect the www subdomain to the bare domain like so:
https://www.formulagrid.com -> https://formulagrid.com
http://www.formulagrid.com -> https://formulagrid.com
Amazon provides URL redirection from S3 bucket to S3 bucket if both are set up for static website hosting.
So what I had to do was set up two buckets:
formulagrid.com - actual website
www.formulagrid.com - exists solely to redirect to the actual website
This works perfectly fine if you're operating only over HTTP, but S3 has absolutely no support for HTTPS.
The way that one can use HTTPS to connect to an S3 static website is by setting up a CloudFront distribution in front of an S3 bucket. CloudFront, however, while it does provide HTTPS, mainly exists to function as a CDN.
Initially, I had a single CloudFront distribution setup in front of the S3 bucket holding the actual site. Everything seemed operational: the site was distributed over the CDN, it had HTTPS, and HTTP redirected to HTTPS.
There was one exception.
https://www.formulagrid.com was a completely broken page
After trying to find the source of the error for a while, I realized it was because www traffic wasn't going through the CDN, and trying to access S3 over HTTPS doesn't work.
Finally, what I ended up having to do was provision another distribution to sit in front of the www S3 bucket so it was accessible over HTTPS. This is where my concerns come in because, like I mentioned earlier, CloudFront's main purpose is to be a CDN.
It doesn't make any sense to me to have a CDN sit in front of a url that just redirects to another. Also it brings up the question of whether I would be double charged for every request that hits the www subdomain because it'd hit the other CloudFront distribution after being redirected.
This is frustrating because I'm trying to do a "serverless" architecture using Lambda, and having to provision an EC2 instance just to do url rewriting isn't something I want to do unless it's my last resort.
The solution would be trivial if Amazon offered any form of URL rewriting or if CloudFront itself did redirecting, but neither of these exist as far as I know (let me know if they do).
I'm new to AWS so I'm hoping someone with more experience can point me in the right direction.
You're thinking too narrowly -- there's nothing wrong with this setup.
The solution would be trivial if Amazon offered any form of URL rewriting
They do -- the empty bucket.
S3 has absolutely no support for HTTPS.
Not for web site hosted buckets, no... but CloudFront does.
CloudFront is not just a CDN. It's also an SSL offloader, Host: header rewriter, path prepender, geolocator, georestrictor, secure content gateway, http to https redirector, error page customizer, root page substituter, web application firewall, origin header injector, dynamic content gzipper, path-based multi-origin http request router, viewer platform identifier, DDoS mitigator, zone apex alias target... so don't get too hung up on "CDN" or on the fact that you're stacking one service in front of another -- CloudFront was designed, in large part, to complement S3. They each specialize in certain facets of storage and delivery.
So, you did it right... most of it, anyway. Create a bucket, configure it for web site hosting, set it to redirect all requests to another site (the non-www), and put a CloudFront distribution in front of it -- using the bucket's web site endpoint URL in CloudFront, not the one from the drop-down list -- configured with high TTLs so that CloudFront sends a minimal number of requests to S3, then add your (free!) SSL certificate from AWS Certificate Manager. HTTPS alternate domain routing: solved. No servers, no troubleshooting, and cheap. The only charges are for usage -- there is no background recurring charge as there would be with servers.
Extra credit: configure the redirecting CloudFront distribution for the cheapest rate tier. Redirects from more expensive locations will either be routed to a cheaper edge location or -- at CloudFront's option -- may be served out of a higher cost location but billed at the lower rate.
Note that most of the time, CloudFront should serve the redirects from S3 out of its cache... and when you configure a bucket to redirect all requests to another hostname, the redirect is a 301 permanent redirect -- which browsers are supposed to cache themselves.
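Putting those pieces together, the redirecting distribution could look something like this CDK (TypeScript) sketch; the TTLs and construct IDs are placeholders:

```typescript
import { Duration } from "aws-cdk-lib";
import { Construct } from "constructs";
import * as acm from "aws-cdk-lib/aws-certificatemanager";
import * as cloudfront from "aws-cdk-lib/aws-cloudfront";
import * as origins from "aws-cdk-lib/aws-cloudfront-origins";
import * as s3 from "aws-cdk-lib/aws-s3";

export function addWwwRedirect(scope: Construct, certificate: acm.ICertificate) {
  // Empty bucket whose only job is "redirect all requests to formulagrid.com".
  const wwwBucket = new s3.Bucket(scope, "WwwRedirectBucket", {
    websiteRedirect: {
      hostName: "formulagrid.com",
      protocol: s3.RedirectProtocol.HTTPS,
    },
  });

  // Cache the 301s at the edge for a long time so S3 sees very few requests.
  const longCache = new cloudfront.CachePolicy(scope, "RedirectCache", {
    minTtl: Duration.hours(1),
    defaultTtl: Duration.days(1),
    maxTtl: Duration.days(365),
  });

  return new cloudfront.Distribution(scope, "WwwRedirectDist", {
    defaultBehavior: {
      // The bucket's *website endpoint*, not the REST endpoint from the drop-down.
      origin: new origins.HttpOrigin(wwwBucket.bucketWebsiteDomainName, {
        protocolPolicy: cloudfront.OriginProtocolPolicy.HTTP_ONLY,
      }),
      cachePolicy: longCache,
    },
    domainNames: ["www.formulagrid.com"],
    certificate,
    // Cheapest rate tier, as suggested above.
    priceClass: cloudfront.PriceClass.PRICE_CLASS_100,
  });
}
```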