408 timeouts on my Amazon ELB - amazon-web-services

We are seeing a lot of 408 timeouts on our ELB access logs. Have come across this thread https://serverfault.com/questions/485063/getting-408-errors-on-our-logs-with-no-request-or-user-agent
and also https://forums.aws.amazon.com/thread.jspa?messageID=307846
These are just two sample threads I found but others suggest the same solutions with no joy.
Have set the web server timeout to be less than the ELB idle timeout, equal to it, and greater than it, with the same result: our logs are polluted with these 408s. A bigger problem, though, is that they also throw off the average latency of our ELB, which is what we trigger our auto scaler with.
We use Tomcat on our back-end instances. No logs appear on Tomcat to indicate a request was received, but the ELB still shows these as timed-out requests.
On our ELB access logs there is no back-end IP given for the 408s, so in my opinion the requests never reached an instance at all, but Amazon disagrees :(.
Has anyone had this problem and found a reliable solution for it?

Following the suggestion of milsonspt in the linked thread, I added a virtual host to my server that listens on a different port instead of 80, so all health checks are executed against that host (replace CUSTOM_PORT with whatever port you want the ELB health check to use).
# Dedicated listener for ELB health checks (replace CUSTOM_PORT with the port you chose)
Listen CUSTOM_PORT
<VirtualHost *:CUSTOM_PORT>
# Pipe health-check hits to their own rotated log, keeping them out of the main access log
CustomLog "|/usr/sbin/rotatelogs /var/log/httpd/access_log_elb_health_check_rotated_%Y-%m-%d-%H_%M_%S 10M" combined
</VirtualHost>
Make sure that the ELB does NOT have a listener on that port.
That configuration removed the 408 errors and logs all health checks in a separate file, so you get an uncluttered regular access log and a dedicated log for the health checks.
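If you go this route, the classic ELB's health check target has to point at the custom port as well. A rough sketch with the AWS CLI, assuming a classic ELB (the load balancer name and port 8081 are placeholders):
aws elb configure-health-check \
    --load-balancer-name my-load-balancer \
    --health-check Target=HTTP:8081/,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2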

This can happen when the ELB is waiting on the client for a complete request. If a partial request comes in with incomplete headers, the ELB just waits; it does nothing with the partial headers and eventually responds with a 408 REQUEST_TIMEOUT when the idle timeout expires on the TCP connection.
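For illustration (this example is not from the original answer), an "incomplete" request is one where the client opens a connection and sends something like
GET / HTTP/1.1
Host: example.com
and then goes silent without ever sending the blank line that ends the header block. The ELB holds the connection until its idle timeout expires and then writes the 408 entry itself, which matches the access log lines that show no back-end IP.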

Related

Getting 5xx error with AWS Application Load Balancer - fluctuating healthy and unhealthy target group

My web application on AWS EC2 + load balancer sometimes shows 500 errors. How do I know if the error is on the server side or the application side?
I am using a Route 53 domain and SSL on my URL. I set the ALB to redirect requests on port 80 to 443, and to forward requests on port 443 to the target group (the EC2). However, the target group sometimes returns a 5xx error code when handling a request. Please see the screenshots for the ALB metrics and configuration.
Target Group Metrics
Target Group Configuration
Load Balancer Metrics
Load Balancer Listeners
EC2 Metrics
Right now the web application is running unsteadily; sometimes it returns a 502 or 503 Service Unavailable (it seems like a connection timeout).
I have set the ALB idle timeout to 4000 secs.
ALB configuration
The application is using Nuxt.js + PHP7.0 + MySQL + Apache 2.4.54.
I have set the Apache prefork worker MaxClients number to 1000, which should be enough to handle the requests to the application.
The EC2 is a t2.large instance, and the CPU and memory look sufficient to handle the processing.
It seems like if I request the IP address directly instead of the domain, the number of 5xx errors is significantly reduced (but they still occur).
I also have a WordPress application hosted on this EC2 under a subdomain (CNAME). I have never encountered any 5xx errors on that subdomain, which makes me guess there might be errors in my application code rather than on the server side.
Is the 5xx error from my application or from the server?
I also tried to add another EC2 to the target group to see if I could have at least one healthy instance to handle the requests. However, the application uses a third-party API with a strict IP whitelist policy, and from my research the Elastic IP I got from AWS cannot be attached to 2 different EC2 instances.
First of all, if your application is prone to stutters, increase the health check retries and timeouts; that addresses the flapping healthy/unhealthy behaviour from your initial question.
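A rough sketch of loosening the target group's health check with the AWS CLI (the target group ARN is a placeholder, and the values are only examples to tune against how long your app actually stalls):
aws elbv2 modify-target-group \
    --target-group-arn arn:aws:elasticloadbalancing:region:account-id:targetgroup/my-tg/1234567890abcdef \
    --health-check-interval-seconds 30 \
    --health-check-timeout-seconds 10 \
    --healthy-threshold-count 2 \
    --unhealthy-threshold-count 5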
From what I see in your screenshots, most of your 5xx come from either the server or the application (you can obviously tell better which is the culprit, since you have access to their logs).
To answer your question about 5xx errors coming from the LB itself: this happens right after the LB kicks out an unhealthy instance and there is none to replace it (which shouldn't be the case, because you're supposed to have an ASG if you enable evaluation of target health for the LB); with no viable target it can't produce a meaningful response and answers with a 5xx.
This should be enough information for you to make adjustments and investigate the logs.

Google cloud load balancer causing error 502 - failed_to_pick_backend

I've got a 502 error when I use the Google Cloud load balancer with CDN. The thing is, I am pretty sure I must have done something wrong setting up the load balancer, because when I remove it my website runs just fine.
This is how I configured my load balancer (screenshot here).
Should I use an HTTP or HTTPS health check? When I set up an HTTPS health check, my website was up for a bit and then went down again.
I have checked this link; they seem to have the same problem, but the suggestions there are not working for me.
I have followed a tutorial from the OpenLiteSpeed forum to set Keep-Alive Timeout (secs) = 60 in the server admin panel and configured the instance to accept long-lived connections, but it is still not working for me.
I have added these 2 firewall rules, following the Google Cloud docs below, to allow the Google health check IPs, but it still didn't work:
https://cloud.google.com/load-balancing/docs/health-checks#fw-netlb
https://cloud.google.com/load-balancing/docs/https/ext-http-lb-simple#firewall
When checking the load balancer log messages, it shows an error saying failed_to_pick_backend. I have tried to re-configure the load balancer but it didn't help.
I just started to learn Google Cloud and my knowledge is really limited, it would be greatly appreciated if someone could show me step by step how to solve this issue. Thank you!
Posting an answer based on the OP's findings, to improve visibility.
The solution to the 502 - failed_to_pick_backend error was changing the load balancer from HTTP to TCP and, at the same time, changing the health check from HTTP to TCP as well.
After that the LB passes through all incoming connections as it should and the error disappeared.
Here's some more info about the various types of health checks and how to choose the correct one.
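For reference, the health-check half of that change looks roughly like this with gcloud (a sketch only; the resource names and port are placeholders, and --global assumes a global external load balancer):
# Create a TCP health check on the serving port
gcloud compute health-checks create tcp my-tcp-health-check --port=443
# Attach it to the backend service behind the load balancer
gcloud compute backend-services update my-backend-service \
    --health-checks=my-tcp-health-check --global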
The error message that you're facing is "failed_to_pick_backend".
This response code is generated when a GFE (Google Front End) was not able to establish a connection to a backend instance, or was not able to identify a viable backend instance to connect to.
I noticed in the image that your health check failed, causing the aforementioned error messages. A failing health check can be due to:
Web server software not running on backend instance
Web server software misconfigured on backend instance
Server resources exhausted and not accepting connections:
- CPU usage too high to respond
- Memory usage too high, process killed or can't malloc()
- Maximum amount of workers spawned and all are busy (think mpm_prefork in Apache)
- Maximum established TCP connections
Check whether the running services respond with a 200 (OK) to the health check probes, and verify your Backend Service timeout. The Backend Service timeout works together with the configured health check values to define the amount of time an instance has to respond before being considered unhealthy.
Additionally, you can use this troubleshooting guide to work through these error messages (including this one).
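Two gcloud commands that help with those checks (a sketch; the backend service name is a placeholder and --global assumes a global external load balancer):
# Show whether each backend currently passes its health check
gcloud compute backend-services get-health my-backend-service --global
# Raise the Backend Service timeout if the app legitimately needs longer to respond
gcloud compute backend-services update my-backend-service --timeout=60s --global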
Those experienced with Kubernetes from other platforms may be confused as to why their Ingresses are calling their backends "UNHEALTHY".
Health checks are not the same thing as Readiness Probes and Liveness Probes.
Health checks are an independent utility used by GCP's Load Balancers and perform the exact same function, but are defined elsewhere. Failures here will lead to 502 errors.
https://console.cloud.google.com/compute/healthChecks

HTTP ERROR 408 - After setting up Kubernetes, along with AWS ELB and NGINX Ingress

I am finding it extremely hard to debug this issue. I have a Kubernetes cluster set up along with services for the pods; they are connected to the Nginx ingress, which sits behind an AWS classic ELB, which in turn is connected to the AWS Route 53 DNS my domain name points at. Everything works fine with that, but I am facing an issue where my domains do not behave the way I would like them to.
My domains in the Nginx ingress rules are connected to a service that serves an "alive" page when hit with that domain; but when I do that I get this page instead.
Please help me figure out what to do to resolve this quickly, thanks in advance!
Talk to you soon
[screenshot: HTTP ERROR 408 page]
When you run web servers behind an ELB, you should know that they generate a lot of 408 responses due to the ELB's health checks.
Possible solutions (a combined Apache sketch for options 1, 2 and 5 follows at the end of this answer):
1. Set RequestReadTimeout header=0 body=0
This disables the 408 responses if a request times out.
2. Disable logging for the ELB IP addresses with:
SetEnvIf Remote_Addr "10\.0\.0\.5" exclude_from_log
CustomLog logs/access_log common env=!exclude_from_log
3. Set up a different port for the ELB health check.
4. Adjust your request timeout to be higher than 60 seconds (the ELB's default idle timeout).
5. Make sure that the idle timeout configured on the Elastic Load Balancer is slightly lower than the idle timeout configured for the Apache httpd running on each of the instances.
Take a look: amazon-aws-http-408, haproxy-elb, 408-http-elb.
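Putting options 1, 2 and 5 together, a rough Apache httpd sketch (assuming mod_reqtimeout and mod_setenvif are loaded; the 10.0.0. prefix stands in for your ELB's internal subnet, and 75 seconds is just an example value above the 60-second ELB default):
# 1. Stop mod_reqtimeout from answering the ELB's idle pre-opened connections with 408
RequestReadTimeout header=0 body=0
# 2. Keep ELB health-check traffic out of the main access log
SetEnvIf Remote_Addr "^10\.0\.0\." exclude_from_log
CustomLog logs/access_log common env=!exclude_from_log
# 5. Keep httpd's keep-alive timeout above the ELB idle timeout
KeepAlive On
KeepAliveTimeout 75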

How do I configure keep-alive time between ELB and server?

We are seeing 504 errors in our ELB logs, but there are no corresponding errors in the application logs. We have increased the idle timeout on the ELB and can see that no requests take more time than that.
Going through the AWS documentation, we found that we need to configure the keep-alive time on the EC2 instances to be equal to or greater than the ELB idle timeout, to keep the connection open between the ELB and the backend server.
We couldn't find any way to configure this keep-alive time between the ELB and the backend server. Any suggestions on how to do that would be helpful.
We are using tomcat-ebs for the backend servers.
You need to set keepAliveTimeout="xxxx" in your Tomcat connector settings to avoid tearing down idle connections.
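A rough server.xml sketch, assuming a plain HTTP/1.1 connector and the default 60-second ELB idle timeout (Tomcat's keepAliveTimeout is in milliseconds, so 65000 keeps the backend from closing connections the ELB still considers open):
<!-- keepAliveTimeout > ELB idle timeout (60000 ms by default); -1 for maxKeepAliveRequests means unlimited -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           keepAliveTimeout="65000"
           maxKeepAliveRequests="-1"
           redirectPort="8443" />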

AWS ELB - will it retry request if node fails?

I have an ELB and 3 nodes behind it.
Can someone please explain what the ELB will do in these scenarios:
Client Request -> ELB -> Node1 fails in the middle of the request (ELB timeout)
Client Request -> ELB -> Node1 times out (server timeout, and the health check hasn't kicked in yet)
In particular, I'm wondering: does the ELB retry the request on another node?
I made a test and it doesn't seem to, but maybe there's a setting that I've missed.
Thanks,
This may have been a matter of passage of time, but these days ELBs do retry requests that abort:
Either because of an Idle Timeout (60s by default);
Or because the instance went unhealthy due to failing health checks, with Connection Draining disabled (default is enabled)
However, this holds only if you haven't sent any response bytes yet. If you have sent incomplete headers, you will get a 408 Request Timeout. If you have sent full headers (but didn't send a body or got halfway through the body) the client will get the incomplete response as-is.
The experiments I've performed were all with a single HTTP request per connection. If you use Keep-Alive connections, the behavior might be different.
The AWS Elastic Load Balancing service uses health checks to identify healthy/unhealthy Amazon EC2 instances. If an instance is marked as Unhealthy, then no new traffic will be sent to that server. Once it is identified as Healthy, traffic will once again be sent to the instance.
If a request is sent to an instance and no response is received (either because the app fails or a timeout is triggered), the request will not be resent nor sent to another server. It would be up to the originator (e.g. a user or an app) to resend the request.