Our current setup: the corporate network is connected to AWS via VPN; a Route53 entry points to an ELB, which forwards to an ECS service (both inside a private VPC subnet).
=> When you request the URL (from inside the corporate network), you see the web application. ✅
Now, what we want: when the ECS service is not running (maintenance, error, ...), users should directly get a maintenance page.
At the moment you will see the default AWS 503 error page. We want to serve a simple static HTML page with some maintenance information instead.
What we tried so far:
Using Route53 with failover to a CloudFront distribution serving an S3 bucket with the HTML (a minimal record sketch follows the list of drawbacks below)
This does work, but:
Route53 does not fail over very fast => until it switches to CloudFront, users still see the default AWS 503 page.
because this is a DNS failover and browsers (and proxies, local DNS caches, ...) cache resolved entries, users keep seeing the default AWS 503 page even after Route53 has switched. Only once the new IP address is resolved (which may take minutes, or until a browser or OS restart) do they see the maintenance page.
the same two problems in reverse: when the service is back up, users keep seeing the maintenance page much longer than they should.
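For reference, this is roughly how the primary side of that failover record was set up; a minimal sketch with placeholder values (hosted zone IDs, health check ID and ELB DNS name are not our real ones; the AliasTarget HostedZoneId must be the ELB's canonical hosted zone ID), assuming the secondary record is an alias to the CloudFront distribution:

# Primary failover record: served only while the attached health check passes.
cat > primary-failover.json <<'EOF'
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "app.example.corp.",
      "Type": "A",
      "SetIdentifier": "primary-elb",
      "Failover": "PRIMARY",
      "HealthCheckId": "11111111-2222-3333-4444-555555555555",
      "AliasTarget": {
        "HostedZoneId": "ZELBCANONICALID",
        "DNSName": "internal-my-elb-123456.eu-central-1.elb.amazonaws.com.",
        "EvaluateTargetHealth": true
      }
    }
  }]
}
EOF
aws route53 change-resource-record-sets --hosted-zone-id Z123EXAMPLE --change-batch file://primary-failover.json

Even with this in place, the client-side caching described above still applies, which is why this approach did not work for us.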
As this is not what we were looking for, we next tried:
Using CloudFront with two origins (our ELB and the failover S3 bucket) and a custom error page for 503.
This does not work, because CloudFront requires its origins to be publicly reachable and our ELB is in a private VPC subnet ❌
We could reconfigure our entire network environment to make it public and restrict access to the CloudFront IPs. While this would probably work, we see the following drawbacks:
Security is reduced: someone else could set up a CloudFront distribution with our web application as the origin and would have full access to it from outside our corporate network.
To overcome this, we would have to implement a secret header (sent from CloudFront to the application), which means having security code inside our application => why should our application handle that security? What if that code has a bug?
Our current environment is already up and running. We would have to change a lot, just for an error page, and end up with reduced security overall!
Use a second ECS service (e.g. HAProxy, nginx, Apache, ...) with our application as the backend and an error file for our maintenance page (a minimal nginx sketch follows the list of drawbacks below).
While this will work like expected, it also comes with some drawbacks:
The service is a single point of failure: when it is down, you cannot access the web application. To avoid that, you have to put it behind an ELB, run it in at least two AZs and (optionally) make it horizontally scalable to handle larger request volumes.
The service costs money! Maybe you only need one small instance with little memory and CPU, but it (probably) has to scale together with your web application when you have a lot of requests!
It feels like we are back in the 2000s and not in a cloud environment.
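For completeness, the kind of error-file setup we had in mind; a minimal nginx sketch with hypothetical values (the upstream address app.internal:8080 and the file paths are placeholders, not our real config):

# Proxy to the application; serve a static page whenever the upstream fails.
cat > /etc/nginx/conf.d/app.conf <<'EOF'
server {
    listen 80;

    location / {
        proxy_pass http://app.internal:8080;     # hypothetical backend address
        proxy_intercept_errors on;               # let nginx handle upstream error codes
        error_page 502 503 504 /maintenance.html;
    }

    # Static maintenance page, only reachable via error_page,
    # served with the original error status code.
    location = /maintenance.html {
        root /usr/share/nginx/html;
        internal;
    }
}
EOF
nginx -s reload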
So, long story short: Are there any other ways to implement a f*****g simple maintenance page while keeping our web application private and secure?
Related
I have been tasked with upgrading the servers / endpoints that our AWS API Gateway uses to respond to requests. We are using a VPC link for the integration request. These servers are hosted on AWS Elastic Beanstalk.
We only use two resources / methods in our API, /middleware and middleware-dev-4, that go through this VPC link.
As our customers rely heavily on our API, I cannot easily swap out the servers. I can create new servers and point the APIs to those, but I do not want any downtime in our API service. Can you recommend a way to make this API change without impacting our customers?
I've seen several examples using canary releases, but they seem to pertain to Lambda functions and not to VPC links with EC2 servers as endpoints.
Edit --
AWS responded and they agreed that I should add the new servers to the target group in the network load balancer and deregister the old ones.
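A rough sketch of that swap with the AWS CLI (the target group ARN and instance IDs below are placeholders): register the new instances first, confirm they are healthy, and only then deregister the old ones, so there is no gap in capacity.

TG_ARN=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/middleware/0123456789abcdef   # placeholder

# Register the new servers alongside the old ones.
aws elbv2 register-targets --target-group-arn "$TG_ARN" --targets Id=i-0newserver1 Id=i-0newserver2

# Wait until the new targets report "healthy" before removing anything.
aws elbv2 describe-target-health --target-group-arn "$TG_ARN" \
  --query 'TargetHealthDescriptions[].[Target.Id,TargetHealth.State]' --output table

# Deregister the old servers; the load balancer drains in-flight connections first.
aws elbv2 deregister-targets --target-group-arn "$TG_ARN" --targets Id=i-0oldserver1 Id=i-0oldserver2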
Do you need to update servers or endpoints?
If endpoints: API Gateway has stages, and your customers use endpoints published on some stage. Make your changes and publish a new stage; API Gateway will publish the new endpoints after a few seconds.
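A minimal sketch of that (the REST API ID and stage name are placeholders):

# Deploy the current API configuration to a new stage.
aws apigateway create-deployment --rest-api-id a1b2c3d4e5 --stage-name v2 --description "endpoints for the upgraded backend"
# Customers then call e.g. https://a1b2c3d4e5.execute-api.us-east-1.amazonaws.com/v2/middleware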
If servers: API Gateway has not much to do with it; it is up to you how you run the servers. Check AWS Elastic Beanstalk blue/green deployments:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.CNAMESwap.html
Because AWS Elastic Beanstalk performs an in-place update when you update your application versions, your application might become unavailable to users for a short period of time. To avoid this, perform a blue/green deployment. To do this, deploy the new version to a separate environment, and then swap the CNAMEs of the two environments to redirect traffic to the new version instantly.
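The swap itself is a single CLI call; a sketch with hypothetical environment names:

# Swap the CNAMEs of the old (blue) and new (green) environments.
aws elasticbeanstalk swap-environment-cnames --source-environment-name my-app-blue --destination-environment-name my-app-green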
By the way, check your DNS cache settings. DNS caching can lead to real issues and downtime. If you change a CNAME value in DNS, make sure your clients are not caching the old value for a long time. Check the current cache time (TTL) for that CNAME first; you will need that time period, so make sure to check. Lower the TTL to a minimum, like 1 or 5 minutes, then wait out the period that was originally set. Then do your blue/green deployment, update the CNAME, and wait for the DNS cache time to expire.
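To check how long clients may keep the old value, something like this shows the remaining TTL in seconds (the hostname is a placeholder):

# The second column of the answer line is the TTL.
dig +noall +answer api.example.com CNAME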
I'm running a backend app with several endpoints on Cloud Run (fully managed). My endpoints are public by nature, so I don't want to authenticate users through my client app hosted on Netlify.
What I do need is to restrict access to my endpoints so that other applications or malicious users can't abuse them. It is not about scaling; I just don't want to exceed the Free Tier limits, since this is a demo of an open-source application.
I've already set the concurrency and max-instance limits to the minimum, but this alone is not enough. There is also a product named Google Cloud Armor, but it seems expensive, not free.
I was expecting to have a simple built-in solution for this but couldn't find it.
What other solutions do I have? How can I restrict access to only the traffic coming from my website on Netlify?
You don't have a lot of solutions:
You don't want to authenticate your users -> so you have to rely on the technical layers
Netlify is a serverless hosting platform and you don't manage servers/IPs -> so you have to rely on the host name
To filter on the host name, you can use two products:
An external HTTPS load balancer only (about $15 per month) with URL/host matching.
The default URL lands on a dummy service
Only requests where the host matches your Netlify host name are routed to your backend
Cloud Armor on top of the external HTTPS load balancer ($15 + Cloud Armor policy x traffic volume). This time, the load balancer routes the default URL to the correct backend and Cloud Armor checks the request origin.
The problem is that this weak solution is easy to bypass. Perform a simple curl with the host set as a header, and the HTTPS load balancer and Cloud Armor will think it comes from the correct origin:
curl -H 'Host: myNetlifyHost.com' ....
The strongest protection is authentication. Google Cloud itself says: "Don't trust the network."
I'm using AWS S3 for front-end web hosting and AWS EC2 for back-end hosting.
The EC2 instance is behind an ELB and has scheduled maintenance, and I want to display a maintenance page while the instance is under maintenance.
The way I set it up is to let index.html "touch" some files on EC2; if the server is unavailable, it returns an HTTP 503 error. There is a 503.html in S3, and I want to display it when the 503 error happens.
I've tried creating a new CloudFront error page and creating S3 redirection rules, but neither of them is working. What is the best way to set up the maintenance page?
I've been searching for a quick way to do this. We need to return a 503 error to the world during a DB upgrade, but whitelist a few developer IPs so they can test it before we open back up to the public.
Found a one-stop solution:
Go to Load Balancers in the EC2 console and select the load balancer you would like to target. Below, you should see Listeners. Click on a listener and edit its rules. Create rules like the ones described below.
Now everyone gets a pretty maintenance page returned with a 503 status code, and only the two IP addresses in the first rule are able to browse to the site. Order is important: the two IP exceptions are on top, then evaluation goes down the list. The last (default) rule is always there.
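The same rules can be created with the AWS CLI; a minimal sketch with placeholder ARNs, IPs and maintenance HTML (not a drop-in command):

LISTENER_ARN=arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/0123456789abcdef/0123456789abcdef   # placeholder

# Rule 1: the whitelisted developer IPs still reach the application.
aws elbv2 create-rule --listener-arn "$LISTENER_ARN" --priority 1 \
  --conditions '[{"Field":"source-ip","SourceIpConfig":{"Values":["203.0.113.10/32","203.0.113.11/32"]}}]' \
  --actions '[{"Type":"forward","TargetGroupArn":"arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/app/0123456789abcdef"}]'

# Default action: everyone else gets the maintenance page with a 503 status code.
aws elbv2 modify-listener --listener-arn "$LISTENER_ARN" \
  --default-actions '[{"Type":"fixed-response","FixedResponseConfig":{"StatusCode":"503","ContentType":"text/html","MessageBody":"<html><body><h1>Down for maintenance</h1></body></html>"}}]'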
https://aws.amazon.com/about-aws/whats-new/2018/07/elastic-load-balancing-announces-support-for-redirects-and-fixed-responses-for-application-load-balancer/
Listener Rules for Your Application Load Balancer:
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/listener-update-rules.html
Note: there is a limit of 1024 characters for the fixed-response body.
It sounds like you are looking for the origin failover functionality (https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/high_availability_origin_failover.html), which lets you fail over to a second origin when requests to the first origin start failing. Alternatively, you could configure Route53 health checks and do the failover at the DNS level: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html
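For the origin failover route, the distribution needs both origins plus an origin group; a rough sketch of the relevant piece (origin IDs and distribution ID are placeholders, and the exact JSON shape should be checked against the docs):

# Fetch the current config, add an origin group, then push the change back.
aws cloudfront get-distribution-config --id E1234567890ABC > dist.json
# Fragment to merge into DistributionConfig.OriginGroups in dist.json:
# { "Quantity": 1, "Items": [{ "Id": "maintenance-failover",
#     "FailoverCriteria": { "StatusCodes": { "Quantity": 2, "Items": [502, 503] } },
#     "Members": { "Quantity": 2, "Items": [{ "OriginId": "primary-elb" }, { "OriginId": "maintenance-s3" }] } }] }
# Then point the default cache behavior's TargetOriginId at "maintenance-failover" and run
# aws cloudfront update-distribution with the edited config and the ETag from dist.json.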
I have an application that needs to handle huge traffic. The previous version of the application handled nearly 2,000,000 requests in 15 minutes. That version had no CDN, so I had to deploy nearly 50 containers each for the frontend and backend. Now I have added a CDN in front of the application; I chose AWS CloudFront because the application is hosted on AWS.
Right now, I need to load test this new application. If I run the load test against the CloudFront URL, will it show accurate results, since requests will be served by CloudFront?
If I load test against the load balancer URL and determine the number of servers required to handle the load, will that be over-provisioned? Since CloudFront serves my application from nearly 189 edge locations (per the AWS docs), are that many servers really required?
How can I find the relationship between the amount of traffic that can be handled with and without CloudFront?
Load testing CloudFront itself is not the best idea; according to the CloudFront main page:
The Amazon CloudFront content delivery network (CDN) is massively scaled and globally distributed.
However, you could test the performance of your website with and without the CDN to see whether there is a benefit/ROI to using CloudFront. It doesn't come for free, so you need to make sure it makes sense to use it; it might be the case that your application's performance is sufficient without the CDN integration.
Check out 6 CDN Load Testing Best Practices for more details.
Also make sure to add a DNS Cache Manager to your test plan so that each JMeter thread (virtual user) independently resolves the underlying server address for the ELB; otherwise all the threads might end up hitting the same IP address.
You can conduct the load test against the CloudFront URL, as this is the real user scenario.
Check that auto scaling is enabled for the servers. You also need to monitor the load balancer during test execution to validate the traffic.
Also check the security software/filter settings for compression and caching headers on the requests. Sometimes these security patches/filters ignore the headers, which impacts application performance in the AWS cloud.
Use AWS CloudWatch to monitor the servers.
I am maintaining an embedded database for a web app on an EC2 instance. Since this central server is single-threaded, it's very susceptible to DDoS (even a non-distributed attack would cripple it).
AWS has DDoS protection for its CDN, CloudFront, so I am wondering if I can use CloudFront as a layer of indirection around my EC2 instance for DDoS protection.
The problem is figuring out how to effectively prevent users from bypassing CloudFront and hitting the server directly. My questions:
Will users be able to trace the network path to get the IP of my EC2 instance, or will they only be able to see the API URL for CloudFront?
Is there a way to prevent traffic from reaching my EC2 instance if it didn't come through CloudFront? I see that there is an option to send a custom origin header from CloudFront, but this doesn't solve the problem by itself: I'd still have to process each request in my EC2 instance. Is there a way to configure input rules for my server that prevent it from processing non-CloudFront requests?
I am new to thinking about network architecture and security, so any and all advice is appreciated!
AWS Shield Standard is included automatically and transparently with Amazon CloudFront distributions, providing:
Active traffic monitoring, with network flow monitoring and automatic always-on detection.
Attack mitigation, with protection from common DDoS attacks (e.g. SYN floods, ACK floods, UDP floods, reflection attacks) and automatic inline mitigation; you can use AWS WAF in conjunction to mitigate layer 7 attacks.
To prevent users from bypassing CloudFront and directly accessing your EC2 instance, you can use security groups that whitelist the CloudFront IP address ranges. Since this list is subject to change, you can set up a Lambda function to update the security groups automatically whenever AWS changes the CloudFront IPs. For more information, refer to the article How to Automatically Update Your Security Groups for Amazon CloudFront and AWS WAF by Using AWS Lambda.
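If you would rather not run the Lambda, a sketch of the same idea using the AWS-managed CloudFront origin-facing prefix list (the prefix list name and its availability in your region are assumptions here, and the security group ID is a placeholder):

# Look up the CloudFront origin-facing managed prefix list (name assumed).
PL_ID=$(aws ec2 describe-managed-prefix-lists \
  --filters Name=prefix-list-name,Values=com.amazonaws.global.cloudfront.origin-facing \
  --query 'PrefixLists[0].PrefixListId' --output text)

# Allow HTTPS to the instance only from CloudFront edge ranges.
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --ip-permissions "IpProtocol=tcp,FromPort=443,ToPort=443,PrefixListIds=[{PrefixListId=$PL_ID,Description=cloudfront-only}]"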
If you are using an Application Load Balancer, you can whitelist a custom header and add it to the CloudFront origin configuration so that requests are only accepted when the header is present. (The check could also be added as web server header whitelisting, but then the HTTP requests are rejected only at the web server level, as you clearly identified.)
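A rough sketch of that header check done at the ALB (header name, secret value and ARNs are placeholders): CloudFront adds the custom origin header, the ALB forwards only requests that carry it, and everything else gets a fixed 403.

LISTENER_ARN=arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/0123456789abcdef/0123456789abcdef   # placeholder

# Forward only requests carrying the secret header that CloudFront injects.
aws elbv2 create-rule --listener-arn "$LISTENER_ARN" --priority 1 \
  --conditions '[{"Field":"http-header","HttpHeaderConfig":{"HttpHeaderName":"X-Origin-Verify","Values":["some-long-random-secret"]}}]' \
  --actions '[{"Type":"forward","TargetGroupArn":"arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/app/0123456789abcdef"}]'

# Anything that bypasses CloudFront (no header) is rejected at the load balancer.
aws elbv2 modify-listener --listener-arn "$LISTENER_ARN" \
  --default-actions '[{"Type":"fixed-response","FixedResponseConfig":{"StatusCode":"403","ContentType":"text/plain","MessageBody":"Forbidden"}}]'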
Also, you can add an AWS WAF configuration (at the ALB or CloudFront, whichever you use as the external interface) with rate limiting to prevent abuse; it is easy to set up and cost-effective.