Currently we are running into a timeout problem. Our application is based on Jetty and uses Zeus for load balancing. maxIdleTime is set to its default value of 30000 (30 seconds) in jetty.xml. When a request/connection exceeds 30 seconds, the connection status changes to TIME_WAIT, but we get an HTTP 500 Internal Server Error on the browser side.
I guess the HTTP 500 error comes from Zeus, but I want to confirm this: how does Zeus handle the closed connection?
OR
Does the Jetty service send the 500 to Zeus? If so, how can I confirm this?
The surest way to work out what is happening here is to sniff the packets between the load balancer and the Jetty server with something like Wireshark (formerly Ethereal) or tcpdump. You can use the network tooling in Firebug or the Chrome developer tools to see what is happening on the browser side of the connection, and you can also turn on debug logging on the Jetty side to see specifically what it is doing.
Regardless, if you're hitting your timeout settings then you need to either increase them or decide on a proper strategy for dealing with them, assuming you don't want that 500 error in the browser.
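If the Jetty-side idle timeout is the one firing first, raising maxIdleTime is the usual knob. A sketch, assuming the pre-Jetty-9 connector syntax that the maxIdleTime setting implies (the connector class name and port depend on your Jetty version and setup, so treat this as illustrative):

```xml
<!-- jetty.xml: raise the connector idle timeout from the 30 s default -->
<Call name="addConnector">
  <Arg>
    <New class="org.eclipse.jetty.server.nio.SelectChannelConnector">
      <Set name="port">8080</Set>
      <!-- milliseconds; the default is 30000 -->
      <Set name="maxIdleTime">120000</Set>
    </New>
  </Arg>
</Call>
```

Note that this only moves the symptom: the load balancer's own timeout must be at least as large, or Zeus will still drop the connection first.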
I am using JMeter 5.1 to conduct performance testing (HTTP requests); the system under test is a web application on Google Cloud Platform. In JMeter I configured 10 threads (users), and got this error message for the failed requests: failed: Connection timed out (Connection timed out)
Checking the results in "View Results in Table", I found that Connect time = Sample time and there is no latency.
Under this condition, what should I try in order to find the root cause? Is there any analysis direction or method?
You can take a look at this link; it explains some causes related to your issue.
In addition, if you are using the GCP HTTP(S) Load Balancer, you should take a look at the Stackdriver logs. They will provide more clues about your issue. You may need to change the timeout values for your load balancer or your backend.
This is due to a TCP connection timeout: if JMeter fails to complete the TCP handshake within the configured bound, it fails the associated HTTP Request sampler.
Latency (also known as TTFB) is 0 because there is no response from the server at all.
The same goes for sent bytes and for header size in bytes.
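To illustrate the failure mode (a sketch; the unroutable address 10.255.255.1 and the 2-second timeout are arbitrary values chosen for the demo), this is what a failed TCP handshake looks like at the socket level. The connect attempt itself times out before any byte is exchanged, which is why latency and received bytes are all zero:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class ConnectTimeoutDemo {

    /** Attempts a TCP handshake and reports how it ended. */
    static String tryConnect(String host, int port, int timeoutMillis) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMillis);
            return "connected";
        } catch (SocketTimeoutException e) {
            // This is the failure JMeter reports as "Connection timed out"
            return "timeout";
        } catch (IOException e) {
            // e.g. the route is rejected outright before the timeout elapses
            return "failed: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        // 10.255.255.1 is typically unroutable, so the handshake never completes
        System.out.println(tryConnect("10.255.255.1", 80, 2000));
    }
}
```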
If response times over 2 minutes are acceptable and expected for your application, you can increase the Connect timeout; the relevant setting lives under the "Advanced" tab of the HTTP Request sampler (or, even better, HTTP Request Defaults).
For example, this is how you can increase the Connect timeout to 5 minutes:
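In the saved .jmx test plan this corresponds to the sampler's connect-timeout property, in milliseconds. A minimal fragment, assuming a stock HTTP Request Defaults element (the surrounding attributes vary slightly by JMeter version):

```xml
<ConfigTestElement guiclass="HttpDefaultsGui" testclass="ConfigTestElement"
                   testname="HTTP Request Defaults" enabled="true">
  <!-- Connect timeout in milliseconds: 5 minutes -->
  <stringProp name="HTTPSampler.connect_timeout">300000</stringProp>
  <stringProp name="HTTPSampler.response_timeout"></stringProp>
</ConfigTestElement>
```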
With regards to the root cause, the possible reasons are numerous, e.g.:
wrong protocol/port combination
the port is blocked by the OS firewall or a GCP firewall rule
the application or VM is overloaded and cannot respond
incorrect configuration of the application server/database/other middleware
etc.
We are facing the following problem. What we have:
1) "Our" ASMX web services, hosted on both port 80 and port 8080
2) "Their" unsupported, unknown solution which works with our web services. That is, it works as is and can hardly be modified. I don't know how it is implemented. It looks like WF hosted on IIS, but I can't say for sure. It is, however, a 100% .NET solution like ours.
So far this solution calls our services through WCF client proxies, using endpoints configured in settings. It is a standard WCF configuration:
K4 default
The problem with these endpoints is that they use the default HTTP port 80. We have load balancing, but round-robin balancing is only implemented on port 8080 (so for port 80 all requests go to 1 specific server instead of to 1 random node of the 3 for every request). So we asked them to change the endpoint ports in the configuration file and tested that it works. And it really does work on the test server k4-dv.
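For illustration only (the host, service path, contract, and endpoint names below are assumptions, since the actual configuration isn't shown), the requested change amounts to pointing the client endpoint at port 8080:

```xml
<!-- client app.config sketch: endpoint address on port 8080 instead of the default 80 -->
<system.serviceModel>
  <client>
    <endpoint address="http://k4:8080/OurService.asmx"
              binding="basicHttpBinding"
              contract="OurService.OurServiceSoap"
              name="OurServiceEndpoint" />
  </client>
</system.serviceModel>
```

basicHttpBinding is the usual choice for calling ASMX services from WCF proxies, since it speaks plain SOAP 1.1 over HTTP.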
However, when we tried to change the ports on the production server, everything stopped working. The error seen in the logs is the following:
Metadata error
followed by this:
HTTP 503 error
And some other errors, though I can't say whether they are related to each other:
Message contract
Timeout
So my question is fairly obvious: what is the reason for this error and what can be changed, taking into account that the test server works with both ports? What settings should we check?
The main problem is that I don't know the reasons for this error and have not managed to find a solution. And there are no people on "their side" who develop this solution. We only have "support" that can check something or change settings, but in this case they can only wait for a suggestion from us on how to fix the problem with port 8080. They don't know the reasons either.
I am currently using Jetty 9.1.4 on Windows.
When I deploy the war file without the hot deployment config and then restart the Jetty service, all client connections to my Jetty server wait during the 5-10 second startup process for the server to finish loading. Then clients are able to view the contents.
Now, with the hot deployment config on, the default Jetty 404 error page shows during that 5-10 second loading interval.
Is there any way I can make hot deployment behave the same as a complete restart, i.e. client connections wait instead of seeing the 404 error page?
Unfortunately, after talking with the Jetty developers on IRC (#jetty), this does not currently seem to be possible.
One solution I will try is two Jetty instances behind a load-balancing reverse proxy (e.g. nginx), taking one instance down for deployment.
Of course this immediately leads to new requirements (session persistence/sharing) which need to be handled. So in conclusion: much work to do in the Java world to get zero-downtime deployments.
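A minimal sketch of that proxy setup, assuming two local Jetty instances on ports 8081 and 8082 (both ports are hypothetical):

```nginx
# nginx reverse proxy in front of two Jetty instances
upstream jetty_backend {
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
    # mark one entry "down" (or stop that instance) while redeploying it
}

server {
    listen 80;
    location / {
        proxy_pass http://jetty_backend;
        # retry the other instance if one is being restarted
        proxy_next_upstream error timeout http_502 http_503;
    }
}
```

With only one instance serving at a time during a deploy, session persistence or sticky sessions become the next problem to solve, as noted above.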
Edit: I will try this; it seems like a simple enough solution: http://rafaelsteil.com/zero-downtime-deploy-script-for-jetty/ GitHub: https://github.com/rafaelsteil/jetty-zero-downtime-deploy
I am using turnserver (http://code.google.com/p/rfc5766-turn-server/) with the --alternate-server option for relaying media streams, and the pjnath library on the client side.
But when the TURN server returns a 300 error code (Try Alternate Server) to an ALLOCATION request, pjnath simply treats it as an error and doesn't connect to the alternate server.
So my question is: does pjnath support the ALTERNATE-SERVER option? Does it try to connect to the alternate server on a 300 error code?
Has anybody had a similar problem with pjnath? How can I make pjnath connect to the alternate server?
Any help will be appreciated.
Checking version 2.2.1, PJNATH does not support the 300 ALTERNATE-SERVER option.
However, rfc5766-turn-server supports two other load-balancing mechanisms:
DNS SRV based load-balancing
a network load-balancer server
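For the DNS SRV approach, a sketch of the records involved (example.com and the server names are placeholders): clients that resolve TURN servers via SRV pick among the returned targets, so two equally weighted records spread allocations across both servers.

```
; BIND zone file fragment: two TURN servers sharing load via SRV records
_turn._udp.example.com.  3600 IN SRV 0 50 3478 turn1.example.com.
_turn._udp.example.com.  3600 IN SRV 0 50 3478 turn2.example.com.
```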
I use WebRequest in a client to consume a web service on the Internet. Each request is triggered in a separate thread.
It works well when hosting the client in IIS, but most of the requests time out if the client is hosted in a Windows service.
When I tried to debug the problem using Fiddler, the WebRequest worked well, as all traffic went through 127.0.0.1:8888.
Without Fiddler, the traffic goes to the Internet directly through a random port, and the timeout problem hits again.
The Windows service runs under the Local System account.
Why do I get timeouts if the client is in a Windows service without using a proxy?
Update: my original question wasn't clear. The requests are made concurrently (or at very short intervals). This is to do with the connection limit in the ServicePoint class. By default only 2 connections are allowed to the same external destination; if the destination is local, the limit is int.MaxValue. That's why Fiddler can magically fix the problem with its proxy. So I manually set DefaultConnectionLimit to 100 and the requests went out on the wire.
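The same limit can also be raised declaratively in the application's .config file instead of in code; the address="*" wildcard applies it to all destinations:

```xml
<!-- app.config: raise the default 2-connections-per-host limit -->
<configuration>
  <system.net>
    <connectionManagement>
      <add address="*" maxconnection="100" />
    </connectionManagement>
  </system.net>
</configuration>
```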
Adjusting HttpWebRequest Connection Timeout in C#
The most common source of problems that are "magically" fixed by running Fiddler is .NET code that fails to call Close() on the object returned by GetResponseStream(). See http://www.telerik.com/automated-testing-tools/blog/13-02-28/help-running-fiddler-fixes-my-app.aspx for more details.