Serve file from the server where the file exists using AWS Route 53

I'm building a globally distributed streaming platform with Wowza servers. I've been reading an article on how to determine which AWS location is best for serving customers from a particular region, and I'd like to use that method to pick the ingest server based on geolocation or lowest latency.
On the other side, our CDN needs to pull from whichever server is being streamed to. Is there a way for Route 53 to select a server that doesn't return a 404 for the requested content?

You can do this, yes, using Route 53 health checks. That way, Route 53 can determine the appropriate endpoint that is "healthy" and serving up your content. I'm not sure what time gap would be involved, though; probably 30-90 seconds at least to detect an unhealthy endpoint and switch over.
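The answer above doesn't include code, but as a rough boto3 sketch (hostnames, zone ID, and content path are all hypothetical): create a health check that probes the actual content URL, then attach it to a latency-based (or failover) record. With a 30-second request interval and a failure threshold of 3, detection lands in the 30-90 second window mentioned above.

    import boto3

    route53 = boto3.client("route53")

    # Probe the real content path, so an origin that would return 404
    # for this stream is treated as unhealthy, not merely unreachable.
    hc = route53.create_health_check(
        CallerReference="stream-eu-1",  # any unique string
        HealthCheckConfig={
            "Type": "HTTP",
            "FullyQualifiedDomainName": "ingest-eu.example.com",  # hypothetical
            "Port": 80,
            "ResourcePath": "/live/stream.m3u8",  # hypothetical content path
            "RequestInterval": 30,  # seconds between probes
            "FailureThreshold": 3,  # failed probes before "unhealthy"
        },
    )

    # Attach the check to a latency-based record; Route 53 only returns
    # this endpoint while the health check passes.
    route53.change_resource_record_sets(
        HostedZoneId="Z0000000EXAMPLE",  # hypothetical hosted zone ID
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "pull.example.com",
                    "Type": "A",
                    "SetIdentifier": "eu-west-1",
                    "Region": "eu-west-1",  # latency-based routing
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "203.0.113.10"}],
                    "HealthCheckId": hc["HealthCheck"]["Id"],
                },
            }]
        },
    )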

Related

GCP: HTTPS termination. Why is the load balancer so expensive?

I want to use a GCP load balancer to terminate HTTPS and auto-manage HTTPS certificate renewal with Let's Encrypt.
The pricing calculator gives me $21.90/month for a single rule. Is this how much it would cost to do HTTPS termination for a single domain? Are there cheaper managed options on GCP?
Before looking at the price, or at another solution, look at what you need. Are you aware of the global load balancer's capabilities?
It offers you a single IP reachable from all over the globe and routes each request to the region closest to your user, reducing latency. If a region is down, or your backend is at capacity (health checks failing), the request is routed to the next closest region.
It lets you rewrite URLs, manage SSL certificates, cache your files in a CDN, scale with your traffic, and deploy a security layer on top of it (such as IAP), absorbing DDoS attacks without impacting your backend.
And the price is for 5 forwarding rules, not only one.
Now, of course, you can do it differently.
You can use a regional solution. These are often free or affordable, but you don't get all of the global load balancer's features.
If your backend is on Cloud Run or App Engine, those services already give you a managed HTTPS endpoint. Cloud Endpoints is a solution for Cloud Functions (and other API endpoints).
You can deploy and set up your own nginx with your SSL certificate on a Compute Engine instance (see the sketch below).
If you only want to serve static files, have a look at Firebase Hosting.
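For the nginx-on-Compute-Engine option, here is a minimal sketch of what self-managed TLS termination involves, using Python's standard library in place of nginx purely to keep the example short. The cert.pem and key.pem paths are assumptions, standing in for a certificate and key you manage yourself (for example, issued by Let's Encrypt via certbot).

    import ssl
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    # Assumed paths to a certificate chain and private key you renew
    # yourself (e.g. with certbot); nothing is managed for you here.
    CERT_FILE = "cert.pem"
    KEY_FILE = "key.pem"

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile=CERT_FILE, keyfile=KEY_FILE)

    # Serve the current directory over HTTPS (port 443 needs root).
    httpd = HTTPServer(("0.0.0.0", 443), SimpleHTTPRequestHandler)
    httpd.socket = context.wrap_socket(httpd.socket, server_side=True)
    httpd.serve_forever()

Unlike the global load balancer, renewal, scaling, and DDoS protection are now entirely your problem; that trade-off is the real price comparison.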

Is load testing with CloudFront (CDN) a good approach?

I have an application that needs to handle huge traffic. The previous version of the application handled nearly 2,000,000 requests in 15 minutes. That version had no CDN, so I needed to deploy nearly 50 containers each for the frontend and backend. I have now added a CDN in front of the application; I chose AWS CloudFront because the application is hosted on AWS.
Right now, I need to load test this new setup. If I run the load test against the CloudFront URL, will it show the same results that real users served by CloudFront will get?
If I instead load test against the load balancer URL and work out the number of servers required for the full load, will that be over-provisioning? Since CloudFront will serve my application from nearly 189 edge locations (per the AWS docs), are that many servers really required?
How can I find a relation between the amount of traffic that can be handled with and without CloudFront?
Load testing CloudFront itself is not the best idea; according to the CloudFront main page:
The Amazon CloudFront content delivery network (CDN) is massively scaled and globally distributed.
However, you could test the performance of your website with and without the CDN to see whether there is a benefit/ROI to using CloudFront. It doesn't come for free, and it might turn out that your application's performance is sufficient without the CDN integration.
Check out 6 CDN Load Testing Best Practices for more details.
Also make sure to add a DNS Cache Manager to your test plan to ensure that each JMeter thread (virtual user) independently resolves the underlying server address for the ELB; otherwise all the threads may end up hitting the same IP address.
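To see why that matters, here is a small standalone Python sketch (an illustration, not part of the JMeter setup; the hostname is a placeholder). An ELB's DNS name typically maps to several addresses, and a client that resolves it once and caches the answer pins its whole load onto one of them.

    import socket

    # Hypothetical ELB hostname; substitute your load balancer's DNS name.
    ELB_HOST = "my-app-123456789.us-east-1.elb.amazonaws.com"

    # An ELB DNS name usually resolves to multiple IPs (per enabled AZ).
    infos = socket.getaddrinfo(ELB_HOST, 443, proto=socket.IPPROTO_TCP)
    ips = sorted({info[4][0] for info in infos})
    print(f"{ELB_HOST} resolves to {len(ips)} addresses: {ips}")

    # Each virtual user should re-resolve (or rotate through these
    # addresses) rather than reuse one cached answer; otherwise the
    # whole test load lands on a single load balancer node.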
You can conduct the load test against the CloudFront URL, as this is the real user scenario.
Check that auto-scaling is enabled for the servers. You also need to monitor the load balancer during test execution to validate the traffic.
Also check how any security software or filters handle compression and caching headers on the requests; sometimes these security layers strip or ignore the headers, which impacts application performance in the AWS cloud.
Use AWS CloudWatch to monitor the servers.
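As a sketch of that monitoring step with boto3 (the load balancer identifier is hypothetical; adapt the namespace and dimensions to your setup), you can pull the request count an Application Load Balancer saw during a test window:

    from datetime import datetime, timedelta, timezone
    import boto3

    cloudwatch = boto3.client("cloudwatch")

    end = datetime.now(timezone.utc)
    start = end - timedelta(minutes=15)  # the 15-minute test window

    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/ApplicationELB",
        MetricName="RequestCount",
        Dimensions=[{
            "Name": "LoadBalancer",
            "Value": "app/my-alb/0123456789abcdef",  # hypothetical ALB
        }],
        StartTime=start,
        EndTime=end,
        Period=60,           # one datapoint per minute
        Statistics=["Sum"],
    )
    for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], int(point["Sum"]))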

Is it possible to use AWS Web Application Firewall with an application that is not hosted on AWS instances?

I'm new to AWS WAF and got stuck setting it up for an application hosted on a dedicated server. I couldn't find any information on how to set it up without migrating to AWS servers, but I did find that WAF integrates with CloudFront. Even then, I found only scraps of information explaining how to integrate that CDN with my web application. So, the main question is:
Is it possible to use AWS WAF with an application hosted on a dedicated server? And if it is possible, can you provide some guides and/or docs for setting it up?
Yes, you can use WAF with a server outside AWS.
WAF works with CloudFront, and CloudFront does not require the origin server to be in the AWS ecosystem.
When you create a distribution, you specify where CloudFront sends requests for the files. CloudFront supports using several AWS resources as origins. For example, you can specify an Amazon S3 bucket or a MediaStore container, a MediaPackage channel, or a custom origin, such as an Amazon EC2 instance or your own HTTP web server. (emphasis added)
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistS3AndCustomOrigins.html
Configuring CloudFront to work with your external server is no different from configuring it to work with a server in EC2. Your DNS entry (e.g. www.example.com) changes to point to CloudFront, and CloudFront connects to your server using a new name that you create (e.g. origin.example.com). CloudFront proxies requests through to your server, unless the edge location handling a given request happens to have a cached copy of the same resource, saved while handling a previous request; that's how CloudFront gets your content, by caching it as it handles requests passing through. (You don't pre-load any content into CloudFront.) If CloudFront has a cached copy, your server sees nothing, and CloudFront returns the object to the browser from its cache. CloudFront isn't strictly a CDN, even though it's marketed that way; it's a global network of reverse proxies and high-reliability/low-latency transport.
You'll want to take steps to ensure that your web server rejects requests that didn't come through CloudFront. See Using Custom Headers to Restrict Access to Your Content on a Custom Origin, as well as the list of CloudFront IP addresses, which you could use in your web server's firewall.
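To make that concrete, here is a hedged boto3 sketch that creates a distribution with an external custom origin and injects a secret header for the origin to verify. The hostnames, header value, and managed cache policy ID are assumptions to adapt, not values from the original answer.

    import time
    import boto3

    cloudfront = boto3.client("cloudfront")

    resp = cloudfront.create_distribution(
        DistributionConfig={
            "CallerReference": str(time.time()),  # any unique string
            "Comment": "Front a non-AWS web server",
            "Enabled": True,
            "Origins": {
                "Quantity": 1,
                "Items": [{
                    "Id": "my-origin",
                    "DomainName": "origin.example.com",  # your own server
                    "CustomOriginConfig": {
                        "HTTPPort": 80,
                        "HTTPSPort": 443,
                        "OriginProtocolPolicy": "https-only",
                    },
                    # Shared secret the origin checks to reject requests
                    # that try to bypass CloudFront.
                    "CustomHeaders": {
                        "Quantity": 1,
                        "Items": [{
                            "HeaderName": "X-Origin-Secret",
                            "HeaderValue": "replace-with-a-long-random-value",
                        }],
                    },
                }],
            },
            "DefaultCacheBehavior": {
                "TargetOriginId": "my-origin",
                "ViewerProtocolPolicy": "redirect-to-https",
                # AWS managed "CachingOptimized" policy ID; verify in your account.
                "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",
            },
        }
    )
    print(resp["Distribution"]["DomainName"])  # point your www DNS here

On the origin side, reject any request that arrives without the matching X-Origin-Secret value, optionally combined with a firewall rule that admits only the published CloudFront IP ranges.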
Once you have your site working through CloudFront, all you do is activate WAF on the distribution. CloudFront is very tightly integrated with WAF, so that is a very simple change once you have your WAF rules set up.
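That last step is small enough to sketch too. With WAFv2, a web ACL for CloudFront must be created in us-east-1 with Scope='CLOUDFRONT', and attaching it means writing its ARN into the distribution's config; the IDs below are placeholders.

    import boto3

    cloudfront = boto3.client("cloudfront")

    DISTRIBUTION_ID = "E1EXAMPLE"  # hypothetical distribution ID
    WEB_ACL_ARN = (  # hypothetical WAFv2 web ACL ARN (Scope=CLOUDFRONT)
        "arn:aws:wafv2:us-east-1:123456789012:global/webacl/my-acl/abcd-1234"
    )

    # Read-modify-write: fetch the current config and its ETag, set the
    # web ACL, then push the update back with the ETag as IfMatch.
    current = cloudfront.get_distribution_config(Id=DISTRIBUTION_ID)
    config = current["DistributionConfig"]
    config["WebACLId"] = WEB_ACL_ARN

    cloudfront.update_distribution(
        Id=DISTRIBUTION_ID,
        IfMatch=current["ETag"],
        DistributionConfig=config,
    )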

How to set bandwidth limit per domain on Google Kubernetes Engine

We are a website-builder SaaS platform where users build their FREE websites and connect their own domains.
We would like to set a limit per domain, for example 100 MB/month, so that when the bandwidth limit is hit we can restrict access to that domain with a message like "You have reached your limit".
This limit could be set, for example, via custom headers in the request for the specific domain, or maybe at the load balancer level.
We weren't able to find documentation on that topic, so decided to ask here!
Please help!
It's possible to use Pulse Virtual Traffic Manager to control the network traffic through your domains. It's available in the Google Marketplace for deployment directly into your infrastructure.
You can also consider using the HTTPS load-balancing features in combination with Cloud DNS forwarding and Cloud CDN (caching) to optimize the network traffic and redirection to the different domains.
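The answer points at products rather than mechanics, so purely for illustration, here is a minimal application-level sketch of per-domain byte accounting behind a WSGI app. Everything here is hypothetical: the in-memory store, the helper names, the 100 MB figure taken from the question; a real deployment would keep counters in shared storage such as Redis and reset them monthly.

    from collections import defaultdict

    LIMIT_BYTES = 100 * 1024 * 1024   # 100 MB/month, per the question
    usage = defaultdict(int)          # Host header -> bytes served so far

    def bandwidth_limiter(app):
        """Wrap a WSGI app, counting response bytes per Host header."""
        def middleware(environ, start_response):
            host = environ.get("HTTP_HOST", "unknown")
            if usage[host] >= LIMIT_BYTES:
                body = b"You have reached your limit"
                start_response("429 Too Many Requests",
                               [("Content-Type", "text/plain"),
                                ("Content-Length", str(len(body)))])
                return [body]

            def count(chunk):
                usage[host] += len(chunk)
                return chunk

            # The inner app is called eagerly; its chunks are counted
            # as they stream out to the client.
            return (count(chunk) for chunk in app(environ, start_response))
        return middleware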

Serving the static content (EC2) and REST services (API Gateway) from the same address

I have a website that I serve using Express running on an EC2 instance. But this EC2 instance serves only the static content (HTML, JS, CSS); the dynamic part comes from API Gateway. Right now, these two have different IPs (and domains), which means I have to deal with CORS problems when accessing API Gateway from the web pages. If I could somehow serve the static content and the dynamic content through the same address, that would be much better.
The way I see it, this can be done in two ways. I could serve both of them on the same host but on different ports, though I'm not sure whether that would solve the CORS problem. The way that I'm sure avoids the CORS problem is serving API Gateway under some specific sub-path, like http://example.com/api, while the static content is served from every other URL.
Does anyone know how I can do this? Is CloudFront what I need, or an Elastic Load Balancer?
Yes, CloudFront is what you need for this scenario.
Application Load Balancer can also do path-based routing, but it doesn't support API Gateway as a target.
By default, CloudFront can route requests under a single domain to the correct choice from up to 25 origins, using up to 25 path-matching patterns (both limits can be increased by request, but it sounds like for now you only need two of each: /api/* to the API, and the default * route to EC2). You can also leverage this setup to put some static content in an S3 bucket and take some load off the servers in EC2.
For this configuration, you will want to configure your API Gateway deployment with a regional endpoint, not an edge-optimized endpoint. Edge-optimized endpoints already use part of the CloudFront infrastructure (a part you have no ability to configure), so using an edge-optimized endpoint behind your own CloudFront distribution sends each request and response through the CloudFront network twice, increasing latency.
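As a sketch of what those two routes look like in a distribution config (boto3-style dicts; the hostnames are placeholders, the surrounding required DistributionConfig fields are omitted, and the managed cache policy ID should be verified against your account):

    # Two origins: an EC2/ELB hostname for static content and a
    # *regional* API Gateway endpoint for the API.
    origins = {
        "Quantity": 2,
        "Items": [
            {
                "Id": "static-ec2",
                "DomainName": "static.example.com",  # EC2 / ELB hostname
                "CustomOriginConfig": {
                    "HTTPPort": 80,
                    "HTTPSPort": 443,
                    "OriginProtocolPolicy": "https-only",
                },
            },
            {
                "Id": "api-gateway",
                # Regional endpoint hostname (not edge-optimized)
                "DomainName": "abc123.execute-api.us-east-1.amazonaws.com",
                "CustomOriginConfig": {
                    "HTTPPort": 80,
                    "HTTPSPort": 443,
                    "OriginProtocolPolicy": "https-only",
                },
            },
        ],
    }

    # /api/* goes to API Gateway; everything else falls through to the
    # default behavior, which targets the EC2 origin.
    cache_behaviors = {
        "Quantity": 1,
        "Items": [{
            "PathPattern": "/api/*",
            "TargetOriginId": "api-gateway",
            "ViewerProtocolPolicy": "https-only",
            # AWS managed "CachingDisabled" policy: don't cache API responses.
            "CachePolicyId": "4135ea2d-6df8-44a3-9df3-4b5a84be39ad",
        }],
    }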