Server connection delay not recorded on some JMeter clients - web-services

While testing a web service, we set a connection delay of 5 seconds on the server, so you would expect JMeter to report response times >5000ms. On some clients this works as expected, but on others it doesn't.
On some clients JMeter reports a response time of (e.g.) 315ms, while other machines report 5315ms (which includes the 5-second delay). On the problem machines I also tested through SoapUI, which shows the same short response time, and through Firefox, which shows a response time of >5000ms.
Theoretically there shouldn't be a difference between the machines, but obviously there is; I just can't find what it is.

Use a Transaction Controller.
All your HTTP(S) requests should be children of the same Transaction Controller.
To include the delay time, check the following Transaction Controller option:
"Include duration of timer and pre-post processors in generated sample"
Hope this helps.
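For reference, if you edit the .jmx file by hand rather than through the GUI, that checkbox should map to a boolean property on the Transaction Controller element - something like the following (a sketch from memory; verify the property name against a plan saved by your own JMeter version):

<TransactionController guiclass="TransactionControllerGui" testclass="TransactionController" testname="Transaction Controller" enabled="true">
  <boolProp name="TransactionController.includeTimers">true</boolProp>
</TransactionController>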

Related

Limiting processing time of request for WCF or ASMX webservice

Let's say I have a web service (WCF and ASMX, .NET Framework 4.8) hosted on IIS 10. The web service has a method with this content:
public CustomerListResponse Get(CustomerListRequest request)
{
    // sleep for 1 hour, standing in for genuinely long-running work
    System.Threading.Thread.Sleep(TimeSpan.FromHours(1));
    return new CustomerListResponse();
}
The Thread.Sleep line is just to show that there is code that can, in some cases, take a long time.
What I'm looking for is a setting or some way to limit the allowed processing time, for example to one minute, with an error returned to the client. I want the processing to be killed by IIS/WCF/ASMX if the execution time exceeds one minute.
Unfortunately I couldn't find a way to do this in IIS. I also don't have access to the client code to set this limit on the other side; changes are possible only on the server side.
What I tried:
On the WCF binding there are a couple of properties - openTimeout="00:01:00" closeTimeout="00:01:00" sendTimeout="00:01:00" receiveTimeout="00:01:00" - I set them all, but it didn't work; the code can still run for a long time.
<httpRuntime targetFramework="4.8" executionTimeout="60" /> - this also didn't work.
I'm out of other ideas, but I believe there should be some way to control how long the server spends on processing.
You need to set the timeout on both client-side and server-side.
Client-side:
SendTimeout is used to initialize OperationTimeout, which manages the entire interaction of sending a message (including receiving a reply message in a request-reply case). This timeout also applies when a reply message is sent from the CallbackContract method.
OpenTimeout and CloseTimeout are used to open and close channels (when no explicit timeout value is passed).
ReceiveTimeout is not used.
Server-side:
Send, open, and close timeouts are the same as on the client side (for callbacks).
ReceiveTimeout is used by the ServiceFramework layer to initialize idle session timeouts.
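As an illustration, a server-side binding configuration in web.config with all four timeouts set to one minute might look like this (the binding name is illustrative; wire it up to your endpoint via bindingConfiguration):

<bindings>
  <basicHttpBinding>
    <!-- one-minute open/close/send/receive limits, matching the properties above -->
    <binding name="oneMinuteTimeouts"
             openTimeout="00:01:00"
             closeTimeout="00:01:00"
             sendTimeout="00:01:00"
             receiveTimeout="00:01:00" />
  </basicHttpBinding>
</bindings>

Note that these timeouts bound the messaging interaction, not the execution of your service method, which matches what you observed: the client will eventually get a timeout fault, but the server-side code keeps running. Aborting the operation itself would have to be done in your own service code.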

I load-tested my Moodle site, hosted on an AWS server (8 GB RAM), with 1000 users in JMeter and am getting ~0% errors - what could be the issue?

My Moodle site is hosted on an AWS server with 8 GB RAM. I carried out various tests on the server using JMeter (NFT), with anywhere from 15 to almost 1000 users, yet I am still not getting any errors (less than 0.3%). I am using the test scripts provided by Moodle itself. What could be the issue? Is there an issue with the script? I have attached a screenshot showing the report of the 1000-user test for reference.
If you're happy with the number of errors and the response times (the maximum response time is more than 1 hour, which is kind of too much for me), you can stop here and report the results.
However, I doubt that a real user will be happy to wait 1 hour to see the login page, so I would rather define some realistic pass/fail criteria - for example, I would expect the response time to be no more than 5 seconds. In that case you will have >60% failures, if that is what you're trying to achieve.
You can consider using the following test elements (see the sketch after this list):
Set reasonable response timeouts using HTTP Request Defaults, so that any request lasting longer than 5 seconds is terminated and marked as failed.
Or use a Duration Assertion; in this case JMeter will wait for the response and mark it as failed if the response time exceeds the defined duration.
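As a rough sketch of what these two elements look like in the .jmx file (property and GUI class names from memory; double-check against elements saved from your own JMeter version):

<ConfigTestElement guiclass="HttpDefaultsGui" testclass="ConfigTestElement" testname="HTTP Request Defaults" enabled="true">
  <!-- abort any request whose response takes longer than 5 seconds -->
  <stringProp name="HTTPSampler.response_timeout">5000</stringProp>
</ConfigTestElement>

<DurationAssertion guiclass="DurationAssertionGui" testclass="DurationAssertion" testname="Duration Assertion" enabled="true">
  <!-- mark the sample as failed if it took longer than 5000 ms -->
  <stringProp name="DurationAssertion.duration">5000</stringProp>
</DurationAssertion>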

Do ColdFusion Scheduled Tasks have a built-in request timeout?

I have several scheduled tasks that essentially perform the same type of functionality:
Request JSON data from an external API
Parse the data
Save the data to a database
The "Timeout (in seconds)" field in the Scheduled Task form is empty for each task.
Each CFM template has the following code at the top of the page:
<cfscript>
setting requesttimeout=299;
</cfscript>
However, I consistently see the following entries in the scheduled.log file:
"Information","DefaultQuartzScheduler_Worker-8","04/24/19","12:23:00",,"Task
default - Data - Import triggered."
"Error","DefaultQuartzScheduler_Worker-8","04/24/19","12:24:00",,"The
request has exceeded the allowable time limit Tag: cfhttp "
Notice that there is only a 1-minute difference between the start of the task and its timing out.
I know that, according to Charlie Arehart, the timeout error messages that are logged are usually not indicative of the actual cause/point of the timeout, and, in fact, I have run tests and confirmed that the CFHTTP calls generally run in a matter of 1-10 seconds.
Lastly, when I make the same request in a browser, it runs until the requesttimeout set in the CFM page is reached.
This leads me to believe that there is some "forced"/"built-in"/"unalterable" request timeout for Scheduled Tasks, or that they use the default timeout value for the server and/or application (which is set to 60 seconds for this server/application), yet I cannot find this documented anywhere.
If this is the case, is it possible to schedule a task in ColdFusion that runs longer than the forced request timeout?

Sustain an http connection while django processes a big request (20mins+)

I've got a Django site that produces a CSV download. The content of the CSV is dictated by user-defined parameters, and it's possible that users will set parameters that require significant thinking time on the server. I need a way of sustaining the HTTP connection so the browser doesn't throw an error. I've heard that it's possible to send intermittent HTTP headers to do this. Can anyone point me in the right direction to set this up on a Django site?
(Unfortunately I'm stuck with the possibility of slow reports - improving my SQL won't mitigate this.)
Don't do it online. Trigger an offline task, use a bit of JavaScript to repeatedly call a view that checks whether the task has finished, and redirect to the finished file when it's ready.
Instead of blocking the user and their browser for 20 minutes (which is not a good idea), do the time-consuming task in the background. When the task finishes and the result is generated, simply notify the user so they just need to download the ready result.
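A minimal sketch of that pattern with plain Django views and a background thread - the view names and the generate_csv helper are hypothetical, and for production you would normally hand the work to a real task queue (Celery, RQ, etc.) so it survives worker restarts:

import threading
import uuid

from django.core.cache import cache
from django.http import HttpResponse, JsonResponse


def start_report(request):
    # Kick off the slow CSV build and return a task id immediately.
    task_id = uuid.uuid4().hex
    params = request.GET.dict()
    cache.set(task_id, {"done": False}, timeout=3600)

    def worker():
        # generate_csv is a stand-in for the slow, user-parameterised report.
        csv_bytes = generate_csv(params)
        cache.set(task_id, {"done": True, "csv": csv_bytes}, timeout=3600)

    threading.Thread(target=worker, daemon=True).start()
    return JsonResponse({"task_id": task_id})


def report_status(request, task_id):
    # Polled by the JavaScript; serves the file once the task has finished.
    state = cache.get(task_id) or {}
    if state.get("done"):
        response = HttpResponse(state["csv"], content_type="text/csv")
        response["Content-Disposition"] = 'attachment; filename="report.csv"'
        return response
    return JsonResponse({"done": False})

The browser-side JavaScript then calls report_status every few seconds and navigates to it once done is true, so no single HTTP request ever has to stay open for 20 minutes.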

Django/Postgres performance worsening after repeatedly processing the same query

I am running Django on Apache. I have several client computers which call urllib2.urlopen() and send over some data that my server processes before immediately sending back a reply. However, while testing this I found a very tricky issue. I have one client repeatedly send the same data to be processed. The first time, it takes around ~20 seconds; the second time, about 40 seconds; the third time I get a 504 (gateway timeout) error, and if I try to send data some more, further 504 errors randomly pop up. I am pretty sure this is an issue with Postgres, as the function that processes the information makes many database calls, but I do not know why Postgres's performance would decline so much. I have tried several database optimization tricks, including this one (http://stackoverflow.com/questions/1125504/django-persistent-database-connection), to no avail.
Thanks in advance.
Edit: The requests are not coming in concurrently. They come in back to back, and each query involves a lot of SELECTs and JOINs, plus a few INSERTs and UPDATEs. The Apache error logs show that it is just a simple timeout: the function that processes the client-posted data takes over 90 seconds.
If it's really Postgres, then you should turn on the logging of slow statements in the Postgres configuration to find out exactly which statement is taking so much time.
This can be done by setting the configuration property log_min_duration_statement.
Details are in the manual:
http://www.postgresql.org/docs/current/static/runtime-config-logging.html#GUC-LOG-MIN-DURATION-STATEMENT
You say the function makes "many database calls", so I'd start with a very low threshold, or even 0 to log the duration of all statements; then you should be able to identify the slow ones.
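For example, in postgresql.conf (reload the server configuration afterwards):

# log every statement that runs for more than 200 ms;
# set to 0 to log the duration of all statements
log_min_duration_statement = 200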
It could also be a locking issue. Maybe the first call does not end its transaction properly, and subsequent calls run into a timeout while waiting for a resource.
You can verify this by checking the system view pg_locks after the first call.
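If you want to run that check from the Django side, here is a quick sketch using Django's raw cursor (run it in python manage.py shell right after the first call):

from django.db import connection

# List locks that were requested but not granted - a non-empty
# result suggests an earlier transaction is still holding a lock.
with connection.cursor() as cursor:
    cursor.execute(
        "SELECT locktype, relation::regclass, mode, pid "
        "FROM pg_locks WHERE NOT granted"
    )
    for row in cursor.fetchall():
        print(row)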
Have you checked the Apache error logs? Have you set Django's DEBUG = True or ADMINS = ('email@addr.com',) so you can get a detailed error report about the actual cause of the issue? If so, how about pasting some of that information here?
Why are you certain that it's Postgres? Have you done diagnostics to come to that conclusion? If so, please let us know.
Are you running Apache with mod_wsgi? How many processes and threads have you allocated to your Django application?
Also, 20 seconds to process the first transaction is a huge amount of time. Perhaps you could show us the view code that is causing the timeout; we may be able to help there.
I sincerely doubt that Postgres alone is causing the issue. It probably has something to do with the application code or the server configuration.