OK, so I have three ELBs and three subdomains in the same hosted zone. Each ELB load balances a different environment: one is prod, one is staging, and one is a second staging environment. I've got three CNAMEs configured in Route 53, each pointing to one of the ELBs, like this:
mysite.com = directs to ELB for prod (let's call it ProdELB)
staging.mysite.com = directs to StagingELB
newtest.staging.mysite.com = directs to NewStagingELB
The first two work fine; however, the last one won't work. It keeps getting mixed up with the second one. Whenever I type newtest.staging.mysite.com into my browser, the browser loads a page from staging.mysite.com instead, as though something is redirecting it to the second ELB instead of the third one. But there's nothing in my Route 53 configuration to tell it to do that.
This even happens if I load the ELB domain name directly; i.e., typing http://NewStagingELB.elb.aws.amazon.com in my browser also causes staging.mysite.com to load. Even loading one of the instance IPs directly causes my browser to load the staging.mysite.com site. What the heck is going on?
It's only the browser that does this; pinging newtest.staging.mysite.com returns the correct ELB. It's also not a cache or cookie issue or anything like that, because I've tried multiple browsers, including on my cell phone over data.
How do I get newtest.staging.mysite.com to actually direct to the right ELB?
This ended up being a software problem: the Connector in newtest's Tomcat server.xml had proxyName and proxyPort settings that were still pointing at the second environment, as sketched below.
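For anyone hitting the same thing, this is roughly what the offending piece of server.xml looked like (a minimal sketch; the hostname and port values are placeholders, not the real ones):

    <!-- conf/server.xml: proxyName/proxyPort override the host name and port
         that Tomcat reports in the URLs it generates, so every response ended
         up pointing at the staging environment. Values are placeholders. -->
    <Connector port="8080" protocol="HTTP/1.1"
               connectionTimeout="20000"
               proxyName="staging.mysite.com"
               proxyPort="80" />

Removing the proxyName and proxyPort attributes (or pointing them at newtest.staging.mysite.com) and restarting Tomcat fixed it.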
A resource on my webapp takes nearly a minute to load after a long stall, and this happens consistently. As shown below, only 3 requests on this page actually hit the server itself; the rest hit the memory or disk cache. The problem only seems to occur in Chrome; neither Safari nor Firefox exhibits this behavior.
I have implemented the Cache-Control: no-store suggestion from this SO question, but the problem persists: "request stalled for a long time occasionally in chrome".
Also included below is an example of what the response looks like once it finally does come in.
My app is hosted in AWS behind a Network Load Balancer which proxies to an EC2 instance running nginx and the app itself.
Any ideas what is causing this?
I encountered the exact same problem. We are using Elastic Beanstalk with a Network Load Balancer (NLB), with TLS termination at the NLB.
The feedback I got from AWS support is:
This problem can occur when a client connects to a TLS listener on a Network Load Balancer and does not send data immediately after completing the TLS handshake. The root cause is an edge case in the handling of new connections. Note that this only occurs if the Target Group for the TLS listener is configured to use the TCP protocol without Proxy Protocol v2 enabled.
They are working on a fix for this issue now.
Somehow, this problem can only be noticed when you are using the Chrome browser.
In the meantime, you have these two options as workarounds (a sketch of the first one follows below):
enable Proxy Protocol v2 on the Target Group, OR
configure the Target Group to use the TLS protocol for routing traffic to the targets
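For reference, here's a minimal sketch of the first workaround using the AWS SDK for Java v2 (the ARN is a placeholder; the same change can also be made from the console or CLI):

    import software.amazon.awssdk.services.elasticloadbalancingv2.ElasticLoadBalancingV2Client;
    import software.amazon.awssdk.services.elasticloadbalancingv2.model.ModifyTargetGroupAttributesRequest;
    import software.amazon.awssdk.services.elasticloadbalancingv2.model.TargetGroupAttribute;

    public class EnableProxyProtocolV2 {
        public static void main(String[] args) {
            try (ElasticLoadBalancingV2Client elb = ElasticLoadBalancingV2Client.create()) {
                // Placeholder ARN -- substitute your own target group's ARN.
                String targetGroupArn = "arn:aws:elasticloadbalancing:region:account:targetgroup/my-tg/abc123";
                // Turn on the proxy_protocol_v2.enabled attribute on the target group.
                elb.modifyTargetGroupAttributes(ModifyTargetGroupAttributesRequest.builder()
                        .targetGroupArn(targetGroupArn)
                        .attributes(TargetGroupAttribute.builder()
                                .key("proxy_protocol_v2.enabled")
                                .value("true")
                                .build())
                        .build());
            }
        }
    }

Keep in mind that once Proxy Protocol v2 is enabled, the targets behind the group must be configured to parse the proxy protocol header (e.g. nginx's proxy_protocol option on the listen directive), or connections will break.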
I know it's a late answer, but I'm writing it for anyone still seeking a solution.
TL;DR: In my case, enabling the cross-zone load balancing attribute of the NLB solved the problem.
Investigating with Wireshark, I figured out that Chrome was communicating with two different IPv4 addresses.
Sending packets to one of them always succeeded, and sending to the other always failed.
The two addresses actually corresponded to two Availability Zones.
By default, cross-zone load balancing is disabled when you choose an NLB (whereas the same attribute on an ALB is enabled by default).
Let's say there are two AZs, AZ-1 and AZ-2.
When you attach both AZs to an NLB, it has a node in each AZ.
The node in AZ-1 only routes traffic to instances that also belong to AZ-1; AZ-2 instances are ignored.
My modest app (hosted on Fargate) has just one app server (an ECS task), in AZ-2, so the NLB node in AZ-1 had nowhere to route traffic.
I'm not familiar with the details of TCP/IP or browser implementations, but as I understand it, the browser somehow selects the actual IP address after the DNS lookup.
If the AZ-2 node is selected in the case above, everything goes fine, but if the AZ-1 node is selected, the browser starts stalling.
Maybe Chrome selects an IP randomly while Safari and Firefox stick with one, which would explain why the problem only appears in Chrome.
After enabling cross-zone load balancing, the ECS task in AZ-2 became visible to the NLB node in AZ-1, and it now works fine in Chrome too. A sketch of how to enable the attribute is below.
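In case it helps, enabling the attribute can be done from the console or programmatically; here's a minimal sketch with the AWS SDK for Java v2 (the ARN is a placeholder):

    import software.amazon.awssdk.services.elasticloadbalancingv2.ElasticLoadBalancingV2Client;
    import software.amazon.awssdk.services.elasticloadbalancingv2.model.LoadBalancerAttribute;
    import software.amazon.awssdk.services.elasticloadbalancingv2.model.ModifyLoadBalancerAttributesRequest;

    public class EnableCrossZone {
        public static void main(String[] args) {
            try (ElasticLoadBalancingV2Client elb = ElasticLoadBalancingV2Client.create()) {
                // Placeholder ARN -- substitute your NLB's ARN.
                String nlbArn = "arn:aws:elasticloadbalancing:region:account:loadbalancer/net/my-nlb/abc123";
                // Turn on the load_balancing.cross_zone.enabled attribute on the NLB.
                elb.modifyLoadBalancerAttributes(ModifyLoadBalancerAttributesRequest.builder()
                        .loadBalancerArn(nlbArn)
                        .attributes(LoadBalancerAttribute.builder()
                                .key("load_balancing.cross_zone.enabled")
                                .value("true")
                                .build())
                        .build());
            }
        }
    }

Note that cross-zone traffic on an NLB can incur inter-AZ data transfer charges, which is presumably why it's off by default.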
I see two things that could be responsible for delays:
1) Usage of CDNs
If the resources that load slowly are served from CDNs (Content Delivery Networks), you should try downloading them to your own server and delivering them directly.
This can be a remarkable gain in speed, especially if you use HTTP/2, but also with HTTP/1. I have no experience with AWS, so I don't know how things are served there by default.
Your screenshot doesn't clearly show whether the resources are loaded from a CDN, but since they are scripts, I think that's a reasonable assumption.
2) Chrome’s resource scheduler
General description: https://blog.chromium.org/2013/04/chrome-27-beta-speedier-web-and-new.html
It's possible, or even probable, that this scheduler has changed since the article was published, but its effect is at least visible in your screenshot.
I think if you optimize the page with the help of https://www.webpagetest.org and the Chrome web tools (https://developers.google.com/web/tools/), you can solve any problems with the scheduler, as well as other speed problems and perhaps other issues too.
EDIT
3) Proxy issue
In general, it's possible that Chrome has either problems or reasons to delay because of the proxy server. The details can't be known without looking at the log files; you may have to make sure that log files are produced at all and that the log level is high enough to tell you about any problems (level Warning or even Info).
After going through Chrome's net-export logs, it seems I was running into this issue: https://bugs.chromium.org/p/chromium/issues/detail?id=447463.
I still don't have a solution for how to fix the problem though.
I am repeating a question that I posted at https://forums.aws.amazon.com/thread.jspa?threadID=275855&tstart=0 to reach more people.
Hi,
I am trying to deploy a REST service in AWS. The current architecture is:
Domain name (Route 53) -> Load Balancer -> single EC2 instance (bound to an Elastic IP). I use a TLS/SSL certificate issued by AWS Certificate Manager.
The instance is an Ubuntu 16.04 machine, and the service is implemented with bare Vert.x (i.e., no proxy server in front).
However, a 504 error (gateway timeout) occurs after a few requests in a series (each of which takes <1 s), and then the service stops responding; the requests no longer reach the server instance. It happens the same way whether I access the domain name or the load balancer directly, and I have confirmed that the exact same scenario works when I hit the instance directly by its URL.
I spun up a dummy server returning "hello world", and it works fine with the load balancer. The problem must be caused by some mismatch between the load balancer and the server code, but I can't figure out where to start.
I have checked several threads complaining about 504 errors and followed some of the instructions, but they did not help. In particular, I set the keep-alive option in Vert.x and made the idle timeout longer than the balancer's (roughly as sketched below). Since with direct communication the delays are never longer than the idle timeout, I don't believe that is the problem anyway. I have also checked the Security Groups and confirmed that the right ports are open. (The first few requests work, so that can't be the problem either.)
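For context, this is roughly how I configured the Vert.x side (a sketch; 75 seconds is just an example value chosen to exceed the ELB's default 60-second idle timeout):

    import java.util.concurrent.TimeUnit;
    import io.vertx.core.Vertx;
    import io.vertx.core.http.HttpServerOptions;

    public class Main {
        public static void main(String[] args) {
            Vertx vertx = Vertx.vertx();
            // Make the server's idle timeout longer than the load balancer's
            // (60 s by default), so the balancer closes idle connections first.
            HttpServerOptions options = new HttpServerOptions()
                    .setIdleTimeout(75)
                    .setIdleTimeoutUnit(TimeUnit.SECONDS);
            vertx.createHttpServer(options)
                    .requestHandler(req -> req.response().end("ok"))
                    .listen(8080);
        }
    }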
Do any of you have a sense of where I should start looking? Or, even better, know the source of the problem?
Thanks in advance.
EDIT: I just found the issue in some of the code. I've answered myself below. Thanks for reading!
Found the issue in my code. Some of the APIs (implemented by my colleague...) were not flushing the HTTP response buffer on the server.
In Vert.x Java, the missing call was resp.end().
It somehow worked with direct access, probably because the buffer got flushed at some point, but that flush apparently was never seen by the load balancer.
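To make it concrete, here's a minimal sketch of a Vert.x handler with the fix (the payload is made up for illustration):

    import io.vertx.core.Vertx;
    import io.vertx.core.http.HttpServerResponse;

    public class Server {
        public static void main(String[] args) {
            Vertx vertx = Vertx.vertx();
            vertx.createHttpServer()
                    .requestHandler(req -> {
                        HttpServerResponse resp = req.response();
                        resp.putHeader("content-type", "text/plain");
                        // Without this end() call the response is never completed,
                        // so the load balancer waits until it gives up with a 504.
                        resp.end("hello world");
                    })
                    .listen(8080);
        }
    }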
Hope nobody experiences this, but in case...
I have a problem with EC2 and Route 53. I set up my website using Node with the KeystoneJS framework. It runs fast on my local server and worked well before I bound the domain name with Route 53. But after I succeeded in binding the domain name, it loads very slowly. I used Pingdom to run a speed test and found the following problem:
View Pingdom website speed test result
On the first request, it waits at least 8 seconds. Does anyone know what's going wrong, and why I'm getting this? I don't think it's because of Node.js. My website is nexstartup.org (http://nexstartup.org).
When trying to resolve a hostname (e.g. using dig), the server almost always fails, saying ;; connection timed out; no servers could be reached. Around one in ten attempts works, usually after a long wait.
The strange thing is that the same behavior also occurs if I query a different DNS server (Google's).
My default nameserver is Amazon's (172.31.0.2), which I get automatically when the server connects using DHCP.
Pinging the IPs (8.8.8.8 & 172.31.0.2) also usually fails.
I've tried checking the VPC settings and security group settings but found nothing. The fact that it works every once in a while makes me even more confused.
The problem disappeared by itself after around 48 hours. I don't know how to analyze the issue further, so I'm closing this question. I can't think of anything in the server or AWS configuration that could have caused this, so I assume it was something in AWS's infrastructure.
Thanks
I'm currently deploying a WSO2 AS (Application Server) cluster and am facing a strange problem with URL mapping.
I have setup two worker nodes (named was0 and was1), a manager node (named mgt) and an ELB (named elb).
The installation seems to be working fine, as I'm able to call URLs mapped on the load balancer like the following: http://was0.domain/services/..., with was0.domain mapped to the load balancer IP on the machine accessing this address (outside the cluster).
When I call services on this endpoint, load balancing clearly happens, as I can see that my WSDL has endpoints based on both was0 and was1. The two worker nodes are correctly detected as application nodes on the ELB.
The problem I encounter, however, is that the was0-based URL works fine, but when I try to use the was1-based one, the load balancer returns a blank page, and I don't see any error in the logs. I have both hosts, was0 and was1, defined in my cluster configuration as application members for the AS.
If, from the ELB node, I access the was1-based web service directly on the AS, I can reach it without problem (so the service is working on the was1 node, and the node is detected and registered inside the cluster, but it is not accessible through the cluster).
In the end, this means a call works when round robin targets was0 and fails when it targets was1.
So I'm wondering whether I've understood the cluster behavior correctly: should it work for both application servers' mapped URLs, or is it normal that only the first one (was0) responds successfully? And how can I force the generated WSDL to return a valid endpoint URL?
From the documentation, I understood that I need to map the AS URLs on the ELB and that the ELB will then balance across all AS servers, but it doesn't seem to work that way.
Please tell me if you need any part of the configuration, a diagram, or an example; I didn't paste it here because it's quite big :)
For information, I had the same problem when balancing across two WSO2 ESB worker nodes, but I was able to solve it by forcing the WSDL URL prefix to the first node's URL (esb0) with the WSDLEPRPrefix setting in the ESB configuration (a sketch is below). As I don't have such a setting in WSO2 AS, I don't know how to control the URL returned in the WSDL.
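For reference, this is roughly what the ESB-side workaround looked like (a sketch from memory; in my setup the parameter went into the HTTP transport receiver in axis2.xml, and the hostname here is a placeholder):

    <!-- axis2.xml, inside the <transportReceiver name="http" ...> element:
         force generated WSDL endpoint URLs to use the esb0 prefix -->
    <parameter name="WSDLEPRPrefix" locked="false">http://esb0.domain:8280</parameter>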
Thank you in advance for your help,
BOUCNIAUX Benjamin