Website very slow to load using AWS EC2 and Route 53 - amazon-web-services

I've got a problem with EC2 and Route 53. I set up my website using Node with the KeystoneJS framework. It runs fast on my local server and worked well before I bound the domain name with Route 53. But after I successfully bound the domain name, it loads very slowly. I used Pingdom to run a speed test and found the following problem:
View Pingdom website speed test result
On the first request, it waits for at least 8 seconds. Does anyone know what's going wrong, and why I'm getting this? I don't think it's because of Node.js. My website is nexstartup.org (http://nexstartup.org).
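One way to narrow this down (a quick sketch, not from the original post; only the hostname comes from the question) is to time the DNS lookup, the TCP connect, and the wait for the first byte separately. If the DNS step is slow, the problem is on the Route 53/name-server side; if the first byte is slow, the server itself is the bottleneck:

    import socket
    import time

    host = "nexstartup.org"  # from the question

    t0 = time.monotonic()
    addr = socket.gethostbyname(host)       # DNS lookup (Route 53)
    t1 = time.monotonic()
    sock = socket.create_connection((addr, 80), timeout=30)
    t2 = time.monotonic()                   # TCP connect to the instance
    request = "GET / HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n" % host
    sock.sendall(request.encode())
    sock.recv(1)                            # block until the first response byte
    t3 = time.monotonic()
    sock.close()
    print("dns=%.2fs connect=%.2fs first_byte=%.2fs" % (t1 - t0, t2 - t1, t3 - t2))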

Related

504 Gateway Time-out from Google Cloud Platform, but only sometimes

I'm hosting a single-node Couchbase cluster in GCP and a Flask backend on an OpenShift cluster, which serves an Angular frontend. The problem is that when my Angular app calls a POST endpoint in Flask, it sometimes takes too long to connect to the VM (Couchbase), and Flask then has to return a "504 Gateway Time-out". But this happens only sometimes; at other times it works very well with proper speed, and I'm not able to troubleshoot it. The total data size is less than 100 MB, and everything is 100% memory-resident in Couchbase, so I don't think this is a problem with Couchbase itself, just connection latency to GCP.
My guess is that the first time your Flask backend connects to your VM, it takes longer than usual because it needs to establish the connection, authenticate, and possibly do other things depending on your use case.
This is a common problem when hosting your app on App Engine or something similar, and the solution there is to use "warm-up requests". These spin up the whole connection (and, in App Engine's case, the instance) and make a test connection, so that when the real request comes, everything is already set up.
So I suggest you look at how warm-up requests work and configure something similar between your Flask app and the VM: basically, a route in Flask whose only purpose is to establish a test connection with a test packet. That way the next connection will be up to speed, with no 504 errors.
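A minimal sketch of such a route (the endpoint name, host, and port below are placeholders, not from the question; with the Couchbase SDK you would instead run a cheap operation on the real client so its connection pool gets primed):

    import socket

    from flask import Flask

    app = Flask(__name__)

    COUCHBASE_HOST = "10.0.0.5"  # placeholder: your Couchbase VM's address
    COUCHBASE_PORT = 8091        # placeholder: the port your client connects to

    @app.route("/warmup")
    def warmup():
        # Open (and close) a connection so the network path is exercised
        # before a real request arrives; hit this route right after a
        # deploy or on a schedule.
        try:
            with socket.create_connection((COUCHBASE_HOST, COUCHBASE_PORT), timeout=5):
                pass
            return "warm", 200
        except OSError as exc:
            return "cold: %s" % exc, 503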
Try clearing the cache of the load balancer in the GCP console.
I already faced the same kind of issue and resolved it using the technique above.

Using SRV records to point subdomains

I'm using Google Cloud Platform to host my website.
I have spun up a VM with two Docker containers: one for the UI and the other for the backend.
I have added an A record pointing to my VM, and the webpage loads correctly, no problems.
I did a little digging on Google and wanted to point the backend to a specific subdomain, e.g. api.domain.com. Since it isn't running on port 80 but on 6009 instead, I added SRV records as well.
My SRV records look something like this.
DNS Name: _http._tcp.domain.com
TTL: 300
Data: 0 5 5006 api.domain.com
Why is it not working? Can anybody tell me what I've been doing wrong, or does it just take time to actually show up? It's been nearly 7 hours now.
I think you are misunderstanding what DNS SRV records are for. You can create any SRV record you want, but you will need clients that are programmed to look for your SRV record. Standard web browsers are not going to look up your SRV record to see whether you are using a port other than 80 (HTTP) or 443 (HTTPS).
If you are using custom software in your web server to access your API backend, then your software will need to look up the SRV record and resolve the entries, for example:
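Here is a small illustration using the third-party dnspython package (pip install dnspython); the record name matches the one in the question, and the choice among multiple answers is simplified to "lowest priority, then highest weight":

    import dns.resolver

    def lookup_srv(name):
        # Return (host, port) from the best SRV answer.
        answers = dns.resolver.resolve(name, "SRV")
        best = sorted(answers, key=lambda r: (r.priority, -r.weight))[0]
        return str(best.target).rstrip("."), best.port

    host, port = lookup_srv("_http._tcp.domain.com")
    print("API endpoint: http://%s:%d/" % (host, port))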

AWS, Load Balancer 504 error after a few requests

I am repeating a question that I posted at https://forums.aws.amazon.com/thread.jspa?threadID=275855&tstart=0
to reach more people.
Hi,
I am trying to deploy a REST service in AWS. The current architecture is:
Domain name (Route 53) -> Load Balancer -> Single EC2 instance (bound to an Elastic IP). I use a TLS/SSL certificate issued by AWS Certificate Manager.
The instance is an Ubuntu 16.04 machine, and the service is implemented with bare Vert.x (i.e., no proxy server in front).
However, a 504 error (gateway timeout) occurs after a few different requests in a series (each of which takes <1 s), and then it stops responding; the requests no longer reach the server instance. I checked that it happens the same way whether I access the domain name or the load balancer directly, and I have confirmed that the exact same scenario works with the direct URL.
I spun up a dummy server returning "hello world", and it works fine with the load balancer. The problem must be caused by some mismatch between the load balancer and the server code, but I can't figure out where to start.
I have checked several threads complaining about 504 errors and followed some of their instructions, but they did not work. In particular, I set the keep-alive option in Vert.x and made the idle timeout longer than the balancer's. Since the delays with direct communication are not longer than the idle timeout, I believe that is not the problem anyway. I have also checked the Security Groups and confirmed the right ports are open. (The first few requests work, so that must not be the problem either.)
Do any of you have a sense of where I should start looking? Or, even better, know the source of the problem?
Thanks in advance.
EDIT: I just found the issue in some of the code. I've answered myself below. Thanks for reading!
Found the issue in my code. Some of the APIs (implemented by my colleague...) were not flushing the HTTP response buffer on the server; in Vert.x Java, the missing call was resp.end(). It somehow worked with direct access, probably because the buffer got flushed at some point, but that flush apparently never satisfied the load balancer.
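For anyone curious, here is a small Python sketch of the same failure mode (illustrative only, not the actual Vert.x code): the handler writes a body but never signals that the response is complete, so a client, or a load balancer waiting for the full response, hangs until its idle timeout fires:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        protocol_version = "HTTP/1.1"  # keep-alive: the connection stays open

        def do_GET(self):
            self.send_response(200)
            self.end_headers()          # bug: no Content-Length is sent
            self.wfile.write(b"hello")
            self.wfile.flush()
            # The body is written, but nothing marks it as complete: the
            # analogue of forgetting resp.end() in Vert.x. A load balancer
            # keeps waiting and eventually returns a 504 to the caller.

    HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()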
Hope nobody experiences this, but in case...

Route 53 - Subdomain directing to wrong instance

OK, so I have three ELBs and three subdomains in the same hosted zone. Each ELB load-balances a different environment: one is prod, one is staging, and one is a second staging environment. I've got three CNAMEs configured in Route 53, each directing to one of the ELBs, like this:
mysite.com = directs to ELB for prod (let's call it ProdELB)
staging.mysite.com = directs to StagingELB
newtest.staging.mysite.com = directs to NewStagingELB
The first two work fine; however, the last one won't. It keeps getting mixed up with the second one: whenever I type newtest.staging.mysite.com into my browser, the browser loads a page from staging.mysite.com instead, as though it's somehow redirecting to the second ELB instead of the third. But there's nothing in my Route 53 configuration telling it to do that.
This even happens if I try to load the ELB domain name directly; i.e., typing http://NewStagingELB.elb.aws.amazon.com in my browser also causes staging.mysite.com to load. Even loading one of the instance IPs directly causes my browser to load the staging.mysite.com site. What the heck is going on?
It's only the browser that does this; pinging newtest.staging.mysite.com returns the correct ELB. It's also not a cache or cookie issue or anything like that, because I've tried multiple browsers, including my cell phone over mobile data.
How do I get newtest.staging.mysite.com to actually direct to the right ELB?
It ended up being a software-related problem: newtest's Tomcat server.xml Connector had proxyName and proxyPort settings that were still pointing at the second environment.
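For reference, the offending piece of Tomcat's conf/server.xml looked something like this (host name illustrative): proxyName/proxyPort override the host and port the application sees, so redirects get built against the old host. Removing the overrides, or pointing them at newtest.staging.mysite.com, fixes it.

    <Connector port="8080" protocol="HTTP/1.1"
               proxyName="staging.mysite.com"
               proxyPort="80" />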

wso2elb services mapping not working with all configured hostnames

I'm currently deploying a wso2as cluster and am facing a strange problem with URL mapping.
I have set up two worker nodes (named was0 and was1), a manager node (named mgt) and an ELB (named elb).
The installation seems to be working fine, as I'm able to call URLs mapped on the load balancer like the following: http://was0.domain/services/..., with was0.domain mapped to the load balancer IP on the station accessing this address (outside the cluster).
When I call services on this endpoint, load balancing works: I can see that my WSDL has endpoints based on was0 and was1, and the two worker nodes are correctly detected as application nodes on the ELB.
The problem I encounter, however, is that the was0-based URL works fine, but when I try the was1-based one, the load balancer returns a blank page, and I don't see any error in the logs. I have both hosts, was0 and was1, defined in my cluster configuration as application members for AS.
If, from the ELB node, I access the was1-based web service directly on the AS, I can reach it without problems (so the service is working on the was1 node, and the node is detected and registered inside the cluster, but it is not accessible through the cluster).
In the end, this results in one call working when round-robin targets was0 and the next call failing when it targets was1.
So I'm wondering whether I have understood the cluster behavior correctly: should it work for both application servers' mapped URLs, or is it normal that only the first one, was0, responds successfully? And how can I force the generated WSDL to return a valid endpoint URL?
What I understood from the documentation is that I need to map the AS URLs on the ELB, which will then balance across all AS servers, but it doesn't seem to work like that.
Please tell me if you need any part of the configuration, a diagram, or an example; I didn't paste it here because it's quite big :)
For information, I had the same problem when balancing across two wso2esb worker nodes, but was able to solve it by forcing the WSDL URL prefix to the first node's URL (esb0) with the WSDLEPRPrefix setting in the ESB configuration (see the snippet below). As I don't have such a setting in wso2as, I don't know how to make the URLs returned in the WSDL accessible.
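For comparison, on the ESB side that setting lives in repository/conf/axis2/axis2.xml, inside the HTTP transport receiver, something like this (the host is illustrative, and the listener class name differs between versions):

    <transportReceiver name="http"
                       class="org.apache.synapse.transport.passthru.PassThroughHttpListener">
        <!-- Forces generated WSDLs to advertise the load balancer URL -->
        <parameter name="WSDLEPRPrefix" locked="false">http://esb0.domain</parameter>
    </transportReceiver>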
Thank you in advance for your help,
BOUCNIAUX Benjamin