I've set up an Application Load Balancer + ECS stack and noticed the ALB is adding latency. Direct requests to the Fargate instance consistently take ~15 ms, but requests to the ALB's DNS name or public IP take ~170 ms.
Strangely, measuring the latency with desktop Safari and all the mobile browsers gives ~15 ms, but desktop Chrome, curl, and Postman all show ~170 ms.
Another odd behavior I have observed is that the ALB exhibits a warm-up effect: after some idle time the latency climbs to over 2 s, then converges back to ~170 ms after 1-3 requests. The Fargate instance shows no such behavior. These are the results:
Safari (to ALB, DNS)
Chrome (to ALB, DNS)
Chrome (to ALB, IP)
Chrome (to Fargate, IP)
curl (to ALB, DNS)
Can you help me identify the cause of the latency issue and suggest potential solutions?
Edit: It appears that the ALB may not be the cause of the issue, as the ALB log shows that request_processing_time, target_processing_time, and response_processing_time are all under 1 ms. What could be the source of the problem?
The issue looks quite similar to the ones in these posts; however, all my public subnets use the same internet gateway.
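One way to narrow this down is to time each phase of a request separately, similar to what `curl -w` reports. Below is a minimal sketch using only the Python standard library; the host and port are placeholders, and plain HTTP (no TLS) is assumed, so treat it as a rough diagnostic rather than a browser-equivalent measurement. If the extra ~155 ms shows up in the connect phase but disappears when a connection is reused, that would point at connection setup, which would also fit the browsers that keep connections alive seeing ~15 ms.

```python
# Rough per-phase request timer (sketch; host/port below are placeholders).
import socket
import time
from http.client import HTTPConnection

def time_request(host, port=80, path="/"):
    t0 = time.perf_counter()
    socket.getaddrinfo(host, port)          # DNS resolution
    t1 = time.perf_counter()
    conn = HTTPConnection(host, port, timeout=10)
    conn.connect()                          # TCP handshake
    t2 = time.perf_counter()
    conn.request("GET", path)
    resp = conn.getresponse()
    resp.read(1)                            # time to first byte of the body
    t3 = time.perf_counter()
    conn.close()
    return {"dns_ms": (t1 - t0) * 1e3,
            "connect_ms": (t2 - t1) * 1e3,
            "ttfb_ms": (t3 - t2) * 1e3}

# Example (hypothetical ALB DNS name):
# print(time_request("my-alb-123456.us-east-1.elb.amazonaws.com"))
```

Running this a few times in a row against both the ALB and the Fargate IP should show whether the warm-up effect lives in DNS, connection setup, or the response itself.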
Related
My web application on AWS EC2 + load balancer sometimes shows 500 errors. How do I know if the error is on the server side or the application side?
I am using a Route 53 domain with SSL on my URL. I set the ALB to redirect requests on port 80 to 443 and forward requests on port 443 to the target group (the EC2 instance). However, the target group sometimes returns a 5xx error code when handling requests. Please see the screenshots for the ALB metrics and configuration.
Target Group Metrics
Target Group Configuration
Load Balancer Metrics
Load Balancer Listeners
EC2 Metrics
Right now the web application is running unsteadily; sometimes it returns a 502 or 503 Service Unavailable (it seems like a connection timeout).
I have set the ALB idle timeout to 4,000 seconds.
ALB configuration
The application uses Nuxt.js + PHP 7.0 + MySQL + Apache 2.4.54.
I have set the Apache prefork MaxClients (MaxRequestWorkers in Apache 2.4) to 1000, which should be enough to handle the application's requests.
The EC2 is a t2.large instance, and the CPU and memory look sufficient to handle the processing.
It seems that if I request the IP address directly rather than the domain, the number of 5xx errors drops significantly (but they still occur).
I also have a WordPress application hosted on this EC2 instance under a subdomain (CNAME). I have never encountered any 5xx errors on that subdomain site, which makes me guess the errors might be in my application code rather than on the server side.
Is the 5xx error from my application or from the server?
I also tried adding another EC2 instance to the target group to see if there would be at least one healthy instance to handle the requests. However, the application uses a third-party API with a strict IP whitelist policy, and from my research the Elastic IP I got from AWS cannot be attached to two different EC2 instances.
First of all, if your application is prone to stutters, increase the health-check retries and timeout; that addresses the flapping health status from your initial question.
From what I see in your screenshots, most of your 5xx responses are due to either the server or the application (you obviously know better than I do which is the culprit, since you have access to their logs).
To answer your question about 5xx errors coming from the LB: this happens right after the LB kicks out an unhealthy instance. If there is none to replace it (which shouldn't be the case, because you're supposed to have an ASG if you enable target-health evaluation for the LB), the LB can't produce a meaningful response and falls back to a 5xx.
This should be enough information for you to adjust the configuration and dig into the logs.
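The first suggestion above, relaxing the health check, can be sketched with boto3. This is a sketch under assumptions: the target-group ARN must be supplied, the numbers are illustrative placeholders rather than recommendations, and `modify_target_group` is the elbv2 API call that applies them.

```python
# Sketch: build less aggressive health-check settings for an ALB target group.
# The ARN and threshold values are placeholders; tune them to your app.

def relaxed_health_check(target_group_arn, timeout_s=10, interval_s=30,
                         healthy=2, unhealthy=5):
    # Keys match the elbv2 ModifyTargetGroup API parameters.
    assert timeout_s < interval_s, "timeout must be below the interval"
    return {
        "TargetGroupArn": target_group_arn,
        "HealthCheckTimeoutSeconds": timeout_s,
        "HealthCheckIntervalSeconds": interval_s,
        "HealthyThresholdCount": healthy,
        "UnhealthyThresholdCount": unhealthy,
    }

def apply_health_check(params):
    # Applied separately so the settings can be reviewed first;
    # requires AWS credentials in the environment.
    import boto3
    return boto3.client("elbv2").modify_target_group(**params)
```

A longer timeout with more unhealthy-threshold retries gives a stuttering app room to recover before the LB deregisters it.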
I'm consistently being charged a surprisingly high amount for data transfer out (from Amazon to the Internet).
I looked into the Usage Reports of the past few months and found out that the Data Transfer Out was coming out of an Application Load Balancer (ALB) between the Internet and multiple nodes of my application (internal IPs).
I also noticed that DataTransfer-Out-Bytes is very close to DataTransfer-In-Bytes on the same load balancer, which is odd (coincidence?). I was expecting each response to be much smaller than its request.
So, I enabled flow logs in the ALB for a few minutes and found out the following:
Requests coming from the Internet (public IPs) in to ALB = ~0.47 GB;
Requests coming from ALB to application servers in the same availability zone = ~0.47 GB - ALB simply passing requests through to application servers, as expected. So, about the same amount of traffic.
Responses from application servers back into the same ALB = ~0.04 GB – As expected, responses generate way less traffic back into ALB. Usually a 1K request gets a simple “HTTP 200 OK” response.
Responses from ALB back to the external IP addresses = ~0.43 GB – this was mind-blowing. I was expecting ~0.04 GB, the same amount received from the application servers.
Unfortunately, the ALB does not allow me to use packet sniffers (e.g. tcpdump) to see what is actually coming in and out. Is there anything I'm missing? Any help will be much appreciated. Thanks in advance!
Ricardo.
I believe the next step in your investigation would be to enable ALB access logs and see whether you can correlate the "sent_bytes" field in the access log with your flow log or your bill.
For information on ALB access logs see: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-access-logs.html
There is more than one way to analyze the ALB access logs, but I've always been happy using Athena; please see: https://aws.amazon.com/premiumsupport/knowledge-center/athena-analyze-access-logs/
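For a quick check before setting up Athena, the per-request byte counters can be summed with a short script. This is a sketch assuming the documented space-separated ALB access-log format, where received_bytes and sent_bytes are the 11th and 12th fields; the totals can then be compared directly against the flow-log numbers above.

```python
# Sketch: total received_bytes / sent_bytes across ALB access-log lines.
import shlex

def byte_totals(lines):
    recv = sent = 0
    for line in lines:
        fields = shlex.split(line)   # shlex handles the quoted request/UA fields
        recv += int(fields[10])      # received_bytes (11th field)
        sent += int(fields[11])      # sent_bytes (12th field)
    return recv, sent

# Example usage with a downloaded log file (hypothetical path):
# with open("alb-access.log") as f:
#     print(byte_totals(f))
```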
I have a number of services deployed in ECS. They register with a Network Load Balancer (via a target group). The NLB is private, and is accessed via API Gateway + a VPC link.
Most of the time, requests to my services take ~4-5 seconds, but occasionally under 100 ms. The latter should be the norm: the actual requests are served by my Node instances in ~10 ms or less. I'm starting to dig into this, but I was wondering whether there is a common bottleneck in setups similar to mine.
Any insight would be greatly appreciated!
The answer to this was to enable Cross-Zone Load Balancing on my load balancers. This isn't immediately obvious and took two AWS support sessions to dig it up as the root cause.
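For reference, cross-zone load balancing is an attribute on the load balancer itself. A sketch of the corresponding elbv2 `ModifyLoadBalancerAttributes` call via boto3, where the ARN is a placeholder:

```python
# Sketch: enable cross-zone load balancing on an NLB.
# The load balancer ARN below is a placeholder.

def cross_zone_attributes(load_balancer_arn, enabled=True):
    return {
        "LoadBalancerArn": load_balancer_arn,
        "Attributes": [
            {"Key": "load_balancing.cross_zone.enabled",
             "Value": "true" if enabled else "false"},
        ],
    }

def apply(attrs):
    # Separate step so the change can be reviewed; needs AWS credentials.
    import boto3
    return boto3.client("elbv2").modify_load_balancer_attributes(**attrs)
```

Without cross-zone enabled, a request landing in an AZ with no (or busy) targets behind the NLB can behave very differently from one landing in a well-served AZ, which fits the intermittent slowness described above.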
So last week we noticed a spike in our ELB 400 errors, and when we dove into the logs we discovered something strange that we can't figure out.
A request url formatted like below is hitting the Application Load Balancer 40 times a second consistently.
https://autodeploy-xxxxxxxx.us-east-1.elb.amazonaws.com:443
The IPs seem to come from Amazon itself, which makes sense, since the URL above is not how we send traffic to our ELB, and the ports they are coming in on are not enabled on the ELB, hence the 400 errors. But this is significantly increasing our costs: our load balancer capacity units (LCUs) have doubled because of it, and we haven't added any service that should behave this way.
Sample IP: 34.207.214.113:21556
Any Ideas?
My site runs on AWS and uses ELB.
I regularly see 2K concurrent users, and during these times requests through my stack become slow, taking 30-50 s to get a response.
None of my servers or the database show significant load at these times, which leads me to believe the issue could be related to the ELB.
I have added some images of a busy day on my site, which shows graphs of my main ELB. Can you perhaps spot something that would give me insight into my problem?
Thanks!
UPDATE
The ELB in the screengrabs is my main ELB, which forwards to multiple Varnish cache servers. In my Varnish VCL I was sending misses for a couple of URLs, but Varnish has a queueing behavior, so what I ended up doing was setting a high TTL for these requests and returning hit_for_pass for them. This tells Varnish in vcl_recv that these requests should be passed to the backend immediately. Since doing this, the problem outlined above has been completely fixed.
Did you SSH into one of the servers? Maybe you are hitting a connection limit in Apache or whatever server you run. Also check the CloudWatch metrics of the EBS volumes attached to your instances; maybe they are causing an I/O bottleneck.