In the logs for my Elastic Beanstalk app I keep seeing "GET /aetn-heartbeat.html HTTP/1.1" 404 158 "-" "Varnish/2.1+fastly (healthcheck)"
The load balancer is working fine, but my ELB environment constantly shows degraded health because of these 404 errors, which is confusing. My question is: do I ignore these 404s? Figure out a way to block these requests? Or is there a real issue that should be addressed?
AWS is an open cloud environment, by which I mean you can get all sorts of requests from all around the world. Maybe the IP address (or even the DNS name) assigned to your AWS EB environment was hard-coded in someone's application somewhere. Or, worse, someone is trying to hack in. This is part of why AWS promotes its shared responsibility model.
Since you can get all sorts of requests from anywhere, a better approach is to use AWS WAF and allow only the URLs you want to pass through. There are two common ways of using AWS WAF with Elastic Beanstalk:
Associate AWS WAF directly with the ALB
Use AWS WAF with Amazon CloudFront in front of the ELB
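For the ALB option, attaching an existing web ACL is a single API call. A minimal boto3 sketch (Python; both ARNs are placeholders you would substitute with your own):

    import boto3

    wafv2 = boto3.client('wafv2')

    # Placeholders: substitute the ARNs of your own web ACL and ALB.
    web_acl_arn = 'arn:aws:wafv2:us-east-1:123456789012:regional/webacl/my-acl/...'
    alb_arn = 'arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/...'

    # Associate the (REGIONAL-scope) web ACL with the load balancer.
    wafv2.associate_web_acl(WebACLArn=web_acl_arn, ResourceArn=alb_arn)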
Alternatively, you can handle this at the code level and return something other than a 404 for these requests if setting up AWS WAF is too much effort, but I'd still recommend AWS WAF.
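If you do handle it in code, the simplest fix is to answer the heartbeat path with a 200 instead of a 404. A minimal sketch, assuming a Python/Flask app (adapt to whatever framework your EB environment actually runs):

    from flask import Flask

    app = Flask(__name__)

    # Answer the Fastly health-check path with a 200 so it no longer
    # shows up as a 404 in the logs or degrades the environment health.
    @app.route('/aetn-heartbeat.html')
    def heartbeat():
        return 'OK', 200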
Related
I am running a custom PHP application behind an ALB on EC2. I am facing issues with a lot of fake requests that look like probes for a WordPress site; they load my site and make it unresponsive. Is there a way to create rules in WAF and attach them to the load balancer to block these bad requests? Any alternative suggestions/solutions are also appreciated.
The fake requests look like the following:
https://example.com/xxx.php
https://example.com/wp-login.php
https://example.com/dsfh.php
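Yes, AWS WAF (the wafv2 API, REGIONAL scope) can be associated with an ALB, and a rule can block requests by URI path. A boto3 sketch that blocks requests ending in wp-login.php (the names are placeholders; add similar statements or a regex pattern set for the other paths):

    import boto3

    wafv2 = boto3.client('wafv2')

    # Create a web ACL that allows everything by default and blocks
    # requests whose URI path ends with wp-login.php.
    wafv2.create_web_acl(
        Name='block-wordpress-probes',   # placeholder name
        Scope='REGIONAL',                # REGIONAL scope is required for ALBs
        DefaultAction={'Allow': {}},
        Rules=[{
            'Name': 'block-wp-login',
            'Priority': 1,
            'Statement': {
                'ByteMatchStatement': {
                    'SearchString': b'wp-login.php',
                    'FieldToMatch': {'UriPath': {}},
                    'TextTransformations': [{'Priority': 0, 'Type': 'LOWERCASE'}],
                    'PositionalConstraint': 'ENDS_WITH',
                },
            },
            'Action': {'Block': {}},
            'VisibilityConfig': {'SampledRequestsEnabled': True,
                                 'CloudWatchMetricsEnabled': True,
                                 'MetricName': 'BlockWpLogin'},
        }],
        VisibilityConfig={'SampledRequestsEnabled': True,
                          'CloudWatchMetricsEnabled': True,
                          'MetricName': 'BlockWordpressProbes'},
    )
    # Then attach it to the ALB with wafv2.associate_web_acl(...).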
Our current setup: the corporate network is connected via VPN to AWS, and a Route53 entry points to an ELB, which points to an ECS service (both inside a private VPC subnet).
=> When you request the URL (from inside the corporate network) you see the web application. ✅
Now, what we want is that when the ECS service is not running (maintenance, error, ...), users are shown a maintenance page directly.
At the moment they see the default AWS 503 error page. We want to serve a simple static HTML page with some maintenance information instead.
What we tried so far:
Using Route53 with failover to a CloudFront distribution serving an S3 bucket with the HTML
This does work, but:
Route53 does not fail over very quickly => until it switches to CloudFront, users still see the default AWS 503 page.
because this is a DNS failover and browsers (and proxies, local DNS caches, ...) cache resolved entries, users still see the default AWS 503 page even after Route53 has switched. Only once the new IP address is resolved (which may take minutes, or until a browser or OS restart) do they see the maintenance page.
the same as the two points above, but the other way around: when the service is running again, users see the maintenance page far longer than they should.
As this is not what we were looking for, we next tried:
Using CloudFront with two origins (our ELB and the failover S3 bucket) with a custom error page for 503.
This does not work, as CloudFront needs the origins to be publicly available and our ELB is in a private VPC subnet ❌
We could reconfigure our whole network environment to make it public and restrict access to CloudFront IPs. While this would probably work, we see the following drawbacks:
Security is decreased: someone else could set up a CloudFront distribution with our web application as the target and have full access to it from outside our corporate network.
To overcome this security issue, we would have to verify a secret header (sent from CloudFront to the application), which means having security code inside our application (see the sketch after this list) => why should our application handle that security? What if the code has a bug?
Our current environment is already up and running. We would have to change a lot just for an error page, and end up with reduced security overall!
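To make that concrete, the header check itself is small. A minimal sketch, assuming a Python/Flask application and a hypothetical X-Origin-Verify custom origin header configured on the CloudFront distribution:

    from flask import Flask, abort, request

    app = Flask(__name__)

    # Hypothetical shared secret; the same value would be configured as a
    # custom origin header (X-Origin-Verify) on the CloudFront distribution.
    ORIGIN_SECRET = 'change-me'

    @app.before_request
    def require_cloudfront():
        # Reject any request that did not come through our CloudFront.
        if request.headers.get('X-Origin-Verify') != ORIGIN_SECRET:
            abort(403)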
Use a second ECS service (e.g. HAProxy, nginx, Apache, ...) in front of our application, with an errorfile for our maintenance page.
While this would work as expected, it also comes with some drawbacks:
The proxy service is a single point of failure: when it is down, you cannot access the web application. To overcome this, you have to put it behind an ELB, run it in at least two AZs, and (optionally) make it horizontally scalable to handle larger request volumes.
The service costs money! Maybe you only need one small instance with little memory and CPU, but it (probably) has to scale together with your web application when you have a lot of requests.
It feels like we are back in the 2000s, not in a cloud environment.
So, long story short: Are there any other ways to implement a f*****g simple maintenance page while keeping our web application private and secure?
I'm using AWS S3 for front-end web hosting and AWS EC2 for back-end hosting.
The EC2 instance is behind an ELB and has scheduled maintenance, and I want to display a maintenance page while the EC2 instance is under maintenance.
The way I set it up is to have index.html "touch" some files on EC2; if the server is unavailable, it returns an HTTP 503 error. There is a 503.html in S3 that I want to display when the 503 error happens.
I've tried creating a new CloudFront error page and creating S3 redirection rules, but neither of them works. What is the best way to set up the maintenance page?
I've been searching for a quick way to do this. We need to return a 503 error to the world during a DB upgrade, but whitelist a few developer IPs so they can test it before we open back up to the public.
Found a one-stop solution:
Go to Load Balancers in EC2 and select the load balancer you would like to target. Below, you should see Listeners. Click on a listener and edit the rules. Create rules like the following:
Now everyone gets a pretty maintenance page returned with a 503 status code, and only the two IP addresses in the first rule are able to browse the site. Order is important: the two IP exceptions sit on top, then evaluation goes down the list. The last (default) rule is always there.
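As a rough code equivalent of those console steps, a boto3 sketch (Python; the listener and target group ARNs and the IP addresses are placeholders):

    import boto3

    elbv2 = boto3.client('elbv2')

    listener_arn = 'arn:aws:elasticloadbalancing:...:listener/app/my-alb/...'      # placeholder
    target_group_arn = 'arn:aws:elasticloadbalancing:...:targetgroup/my-app/...'   # placeholder

    # Rule 1: the whitelisted developer IPs are forwarded to the application.
    elbv2.create_rule(
        ListenerArn=listener_arn,
        Priority=1,
        Conditions=[{'Field': 'source-ip',
                     'SourceIpConfig': {'Values': ['198.51.100.10/32',
                                                   '198.51.100.11/32']}}],
        Actions=[{'Type': 'forward', 'TargetGroupArn': target_group_arn}],
    )

    # Default rule: everyone else gets the maintenance page with a 503.
    # MessageBody is limited to 1024 characters (see the note below).
    elbv2.modify_listener(
        ListenerArn=listener_arn,
        DefaultActions=[{'Type': 'fixed-response',
                         'FixedResponseConfig': {'StatusCode': '503',
                                                 'ContentType': 'text/html',
                                                 'MessageBody': '<h1>Down for maintenance</h1>'}}],
    )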
Announcement of fixed responses and redirects for Application Load Balancers:
https://aws.amazon.com/about-aws/whats-new/2018/07/elastic-load-balancing-announces-support-for-redirects-and-fixed-responses-for-application-load-balancer/
Listener Rules for Your Application Load Balancer:
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/listener-update-rules.html
Note: there is a maximum of 1024 characters in the fixed-response message body.
It sounds like you are looking for the origin failover functionality (https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/high_availability_origin_failover.html), which lets you fail over to a second origin when requests to the first origin start failing. Alternatively, you could configure Route53 health checks and do the failover at the DNS level (https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html).
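For the DNS-level option, a boto3 sketch of a failover record pair (the hosted zone ID, health check ID, and ELB/CloudFront DNS names are all placeholders; Z2FDTNDATAQYW2 is CloudFront's fixed alias hosted zone ID):

    import boto3

    r53 = boto3.client('route53')

    r53.change_resource_record_sets(
        HostedZoneId='Z0000000EXAMPLE',   # placeholder: your hosted zone
        ChangeBatch={'Changes': [
            # PRIMARY: the ELB, guarded by a Route53 health check.
            {'Action': 'UPSERT',
             'ResourceRecordSet': {
                 'Name': 'www.example.com.',
                 'Type': 'A',
                 'SetIdentifier': 'primary',
                 'Failover': 'PRIMARY',
                 'HealthCheckId': '11111111-2222-3333-4444-555555555555',  # placeholder
                 'AliasTarget': {'HostedZoneId': 'Z00000000ELB',  # placeholder: the ELB's zone ID
                                 'DNSName': 'my-elb-123.us-east-1.elb.amazonaws.com.',
                                 'EvaluateTargetHealth': True}}},
            # SECONDARY: the CloudFront/S3 maintenance page.
            {'Action': 'UPSERT',
             'ResourceRecordSet': {
                 'Name': 'www.example.com.',
                 'Type': 'A',
                 'SetIdentifier': 'secondary',
                 'Failover': 'SECONDARY',
                 'AliasTarget': {'HostedZoneId': 'Z2FDTNDATAQYW2',  # CloudFront's fixed zone ID
                                 'DNSName': 'd111111abcdef8.cloudfront.net.',
                                 'EvaluateTargetHealth': False}}},
        ]},
    )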
What is my indication that I am using AWS Certificate Manager correctly and that any remaining problems getting my site to load at https are due to a mistake I am making in my Apache configuration?
In AWS Certificate Manager, I see "Success! Your certificate was issued successfully." Does that mean there are no further steps for me to complete in the AWS console, and I need only get my Apache configuration correct to finish?
Currently, when I try to visit a URL at my site over http, it loads fine, but when I visit over https, the browser tries to load the page and it never finishes.
I have followed the instructions for creating an HTTPS listener, but still do not know if I am done with all necessary steps in AWS console. How would I know?
Edit: To clarify, I am using an Elastic Load Balancer (ELB), since the documentation indicated I need to use ELB with AWS Certificate Manager (ACM). However, I do not know how to determine whether I have configured everything in the AWS console that I need to in order to access the site over HTTPS.
Edit 2: This might come close to answering my question, but I don't know how to do it: "You can use curl, telnet etc from your local machine to verify 443 port status on ELB" -- #vivekyad4v.
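To expand on that comment: one quick way to check the 443 listener from a local machine is a small Python probe (both hostnames are placeholders):

    import socket
    import ssl

    # Placeholders: the ELB's DNS name and the hostname on the ACM certificate.
    elb_host = 'my-elb-123456.us-east-1.elb.amazonaws.com'
    cert_host = 'www.example.com'

    ctx = ssl.create_default_context()
    # If the TCP connect fails, the HTTPS listener is missing or a security
    # group is blocking port 443; if the handshake fails, the certificate or
    # listener configuration is wrong.
    with socket.create_connection((elb_host, 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=cert_host) as tls:
            print('TLS OK:', tls.version())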
ACM (AWS Certificate Manager) supports AWS resources like ELB, CloudFront, API Gateway, etc. You can add SSL certificates to these resources via the AWS console.
Currently, it doesn't support EC2: you cannot use an ACM certificate directly on an EC2 instance; you need a load balancer in front of it. Once you have a load balancer, SSL termination happens on the load balancer, not on the EC2 instance.
Once that is set up, you can change your Apache server config to redirect all HTTP requests to HTTPS.
Add certificate to ELB: https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-update-ssl-cert.html
Update Apache config: https://aws.amazon.com/premiumsupport/knowledge-center/redirect-http-https-elb/
No EC2 support: https://aws.amazon.com/certificate-manager/faqs/
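For reference, creating the HTTPS listener with the ACM certificate comes down to one call. A boto3 sketch, assuming an Application Load Balancer (all ARNs are placeholders):

    import boto3

    elbv2 = boto3.client('elbv2')

    # Placeholders: your load balancer, ACM certificate, and target group.
    elbv2.create_listener(
        LoadBalancerArn='arn:aws:elasticloadbalancing:...:loadbalancer/app/my-alb/...',
        Protocol='HTTPS',
        Port=443,
        Certificates=[{'CertificateArn': 'arn:aws:acm:us-east-1:123456789012:certificate/...'}],
        DefaultActions=[{'Type': 'forward',
                         'TargetGroupArn': 'arn:aws:elasticloadbalancing:...:targetgroup/my-app/...'}],
    )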
I have a WordPress blog that serves niche content on domain.com
On a different endpoint, domain.com/api/, I have a completely separate Node.JS API that has nothing to do with the WordPress site, but I want to serve it from the same domain.
It is worth mentioning that we prioritize performance and speed above all.
My thought was the following:
Set up two EC2 instances, one for the WordPress site and one for the API (maybe make the API a Lambda function?).
Set up an Application Load Balancer that knows how to route requests with a rule depending on the URL (sketched below).
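As an illustration, that routing rule would amount to a single call. A boto3 sketch (the ARNs are placeholders; the listener's default action would forward to the WordPress target group):

    import boto3

    elbv2 = boto3.client('elbv2')

    # Route /api/* to the Node.JS API target group; everything else falls
    # through to the listener's default action (the WordPress target group).
    elbv2.create_rule(
        ListenerArn='arn:aws:elasticloadbalancing:...:listener/app/my-alb/...',           # placeholder
        Priority=10,
        Conditions=[{'Field': 'path-pattern',
                     'PathPatternConfig': {'Values': ['/api/*']}}],
        Actions=[{'Type': 'forward',
                  'TargetGroupArn': 'arn:aws:elasticloadbalancing:...:targetgroup/node-api/...'}],  # placeholder
    )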
Is it the right way to go? Should I just use nginx as a reverse proxy and serve the Node.JS API on a local port?
I also want to use Elastic Beanstalk to save myself the headache of configuring the load balancer and the Auto Scaling group.
P.S. If anyone has any advice or good habits on how to build this (with an S3 bucket perhaps, over CloudFront, etc.), it would be more than welcome.
Thanks!