I have an ASP frontend that loads data from a FileMaker database, using XSL to perform simple queries. The problem is that the first page load takes 20 seconds +/- 200ms, the next few page refreshes within a minute of the first request take <200ms, and then the cycle starts over again.
Each page load makes only 2 XSL queries, and they execute fast after the first page load, so what is causing the delay on the first page load? I have caching turned up with a 100% hit rate and the number of connections set to 100. I've tried with XSL database sessions on and off, and with session times anywhere from 1 to 60 minutes, without any change.
The XSL loads from ASP use a GET request and add a Basic Authorization header to authenticate each time.
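For illustration, here is roughly what that request amounts to as a minimal Python sketch; the URL, query parameters, and credentials are placeholders of my own, and the real call is made from classic ASP, so this is only an analogue of the mechanism, not the production code.

# Minimal sketch of the XSL GET request with a Basic Authorization header.
# URL, query parameters, and credentials are placeholders, not the real values.
import base64
import urllib.request

url = "http://127.0.0.1/fmi/xsl/query.xsl?-db=mydb&-lay=mylayout&-findall"
credentials = base64.b64encode(b"webuser:secret").decode("ascii")

req = urllib.request.Request(url)
req.add_header("Authorization", "Basic " + credentials)   # Basic auth header sent on every request

with urllib.request.urlopen(req) as resp:
    body = resp.read()   # the XSL-transformed query result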
During fast page requests, the fmserver.exe and fmswpc.exe processes don't even flinch, but during a 20-second holdup I see fmserver jump to 30% CPU with a 3 MB I/O read a few seconds into the request, and occasionally fmswpc jump to 60% CPU.
If you're accessing the FileMaker server on the same machine, be sure to use '127.0.0.1' instead of 'localhost'.
Found the problem: for some reason it was the Authorization header that caused the lag. If I gave the guest account full access and removed that header, every request was fast. Go figure.
My Moodle site is hosted on an AWS server with 8 GB of RAM. I carried out various load tests on the server using JMeter (NFT), testing from 15 up to almost 1000 users, but I am still not getting any errors (less than 0.3%). I am using the scripts provided by Moodle itself. What could be the issue? Is there an issue with the script? I have attached a screenshot showing the report of the 1000-user test for reference.
If you're happy with that number of errors and those response times (the maximum response time is more than 1 hour, which is rather too much for me), you can stop here and report the results.
However, I doubt that a real user will be happy to wait 1 hour to see the login page, so I would rather define some realistic pass/fail criteria; for example, you might expect the response time to be no more than 5 seconds. Against that criterion you will see more than 60% failures.
You can consider using the following test elements:
Set reasonable response timeouts using HTTP Request Defaults, so that any request lasting longer than 5 seconds is terminated and marked as failed.
Or use a Duration Assertion; in this case JMeter will wait for the full response and mark it as failed if the response time exceeds the defined duration. (A rough non-JMeter analogue of both checks is sketched below.)
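Outside of JMeter, the same two checks look roughly like this; a minimal Python sketch using the requests library, with the 5-second threshold and the login URL as illustrative values only.

# Rough Python analogue of the two JMeter elements above; URL is a placeholder.
import requests

THRESHOLD = 5.0  # seconds, the example threshold used above

try:
    # Response timeout: abandon the request if the server doesn't answer in time
    # (what a response timeout in HTTP Request Defaults does).
    resp = requests.get("https://moodle.example.com/login/index.php", timeout=THRESHOLD)
except requests.Timeout:
    print("FAILED: no response within %.0f seconds" % THRESHOLD)
else:
    # Duration assertion: the response arrived, but flag it if it took too long.
    elapsed = resp.elapsed.total_seconds()
    verdict = "PASSED" if elapsed <= THRESHOLD else "FAILED"
    print("%s: responded in %.2f seconds" % (verdict, elapsed))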
I have a Python program which queries YouTube to get video details, using the version 3 API. I have m separate Python processes, and inside each one a multiprocessing pool of 10 worker processes.
from multiprocessing import Pool

songs_pool = Pool(processes=10)                     # 10 workers per Python process
return_pool = songs_pool.map(getVideo, songs_list)  # fetch details for each video in parallel
I get some client errors when the value of m is increased to more than 2 and the pool size is increased beyond 5: the requests come back with Forbidden errors. When I check the number of requests in the Google developers console analytics, it shows about 250 requests per second, but according to the documentation the limit is 3,000 requests per second. I don't understand why I am getting the client errors. Is there a way to avoid these errors and make the program run more quickly?
If m = 2 and the pool size is 10, I get no errors, but it takes a very long time to complete.
If I increase either value, I get client errors on roughly 5% of the total requests.
The per-user limit is 3,000 requests per second from a single IP address, and as soon as you go above that in a given second you'll start getting the Forbidden errors. The analytics you see in the developers console only report your average number of requests over a 5-minute period; so if you had zero requests for 4 minutes and then started running your routine, the console may show only 250 requests per second as an average, while your app is likely overrunning the limit during some individual seconds.
It seems you're handling it in the best way possible if speed is your concern: you'll want to run it just fast enough to get a very small number of errors, so you know you're staying right at your limit. Another option, though, might be to look into using etags; if you find yourself requesting info on the same videos a lot, an etag lets the API tell you whether or not any info has changed (and if the API responds that nothing has changed, the request doesn't count against either your quota or your requests/sec).
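For what it's worth, here is a minimal sketch of the etag idea, assuming the requests library; the API key handling, the etag_cache dict, and the chosen parts are my own illustration, not the asker's code.

# Minimal sketch of conditional requests with etags against the YouTube Data API v3.
# API key handling and the cache are illustrative only.
import requests

API_URL = "https://www.googleapis.com/youtube/v3/videos"
etag_cache = {}  # video_id -> (etag, cached response JSON)

def get_video(video_id, api_key):
    params = {"part": "snippet,statistics", "id": video_id, "key": api_key}
    headers = {}
    cached = etag_cache.get(video_id)
    if cached:
        headers["If-None-Match"] = cached[0]   # ask the API whether anything changed
    resp = requests.get(API_URL, params=params, headers=headers)
    if resp.status_code == 304 and cached:
        return cached[1]                       # unchanged: reuse the cached details
    resp.raise_for_status()
    data = resp.json()
    etag_cache[video_id] = (data.get("etag"), data)
    return data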
I'm developing a web service that measures site response time. It's a Django app, and as an initial test I pointed it at a couple of my other Django sites on the same VPS. The response time was small (~5ms) most of the time, but on a fairly regular (5 or 10 min) schedule, jumped to a much higher value (up to 400ms, despite no heavy DB load or cache).
Suspicious of my own timing methodology, I pointed it at a static site on the same VPS and got a consistently quick response. I then used nginx's $request_time and $upstream_response_time logging to find that it really was my Django apps that were giving the response-time variance.
I then pointed the app at a few other Django sites around the web and found similar results: there's a fast "baseline" response with fairly regular spikes to one or two remarkably repeatable slower times. For example, one has a 25ms baseline with ~200ms and ~400ms "steps".
Using the Django Debug Toolbar's Time tab on one of my sites, I can see this behaviour. Most F5 reloads are quick, but there's an occasional (fairly consistently one in ten) slow one, with all the time spent in the "request" section.
Any ideas?
Solved it: it was gunicorn's "max-requests" setting, spawning a new worker every X connections. When X is ten and I'm hitting a "quiet" site, the long response time is every ten hits.
That's why I only noticed the multi-modal times on some Django sites: the others were presumably busy or not using gunicorn.
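For anyone else hitting this, the relevant knobs live in gunicorn's configuration; here is a minimal gunicorn.conf.py sketch with illustrative values (not my site's actual config):

# gunicorn.conf.py -- illustrative values only
workers = 3

# Recycle each worker after it has served this many requests. On a quiet site,
# the recycle cost shows up as a regular slow response (every 10th hit if this is 10).
max_requests = 1000

# Stagger the restarts so all workers don't recycle on the same request.
max_requests_jitter = 50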
While testing a web service we set a connection delay of 5 seconds on the server, so you would expect JMeter to report response times of >5000ms. On some clients this works fine, as expected, but on others it doesn't.
On some clients JMeter reports a response time of e.g. 315ms, whilst other machines give 5315ms (which includes the 5-second delay). On the problem machines I also tested through SoapUI, which gives the same short response time, and through Firefox, which shows a response time of >5000ms.
Theoretically there shouldn't be a difference between the machines, but obviously there is. I just can't find what it is.
Please use a Transaction Controller.
All your HTTP(S) requests should be part of the same Transaction Controller.
In order to include the delay time, check/select the Transaction Controller property mentioned below:
"Include duration of timer and pre-post processors in generated sample"
Hope this helps.
I am running Django on Apache. I have several client computers which call urllib2.urlopen() and send over some data, which my server processes before immediately sending back a reply. However, while testing this I found a very tricky issue: I have one client repeatedly send the same data to be processed. The first time it takes around ~20 seconds, the second time about 40 seconds, and the third time I get a 504 (Gateway Timeout) error; if I try to send more data, further 504 errors randomly pop up. I am pretty sure this is an issue with Postgres, as the function that processes the information makes many database calls, but I do not know why the performance of Postgres would decline so much. I have tried several database optimization tricks, including this one (http://stackoverflow.com/questions/1125504/django-persistent-database-connection), to no avail.
Thanks in advance.
Edit: The requests are not coming in concurrently; they arrive back to back, and each query involves a lot of SELECTs and JOINs, plus a few INSERTs and UPDATEs. The Apache error logs show that it is just a simple timeout: the function that processes the client-posted data takes over 90 seconds.
If it's really Postgres, then you should turn on the logging of slow statements in the Postgres configuration to find out which statement exactly is taking so much time.
This can be done by setting the configuration property log_min_duration_statement.
Details are in the manual:
http://www.postgresql.org/docs/current/static/runtime-config-logging.html#GUC-LOG-MIN-DURATION-STATEMENT
You say the function makes "many database calls", so I'd start with a very low threshold, or even 0 to log the duration of every statement; then you should be able to identify the slow ones.
It could also be a locking issue. Maybe the first call does not end its transaction properly and subsequent calls run into a timeout while waiting for a resource.
You can verify this by checking the system view pg_locks after the first call.
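If it helps, here is a minimal Python sketch of both checks; it assumes psycopg2 and a superuser connection, and the DSN is a placeholder, so treat it as an outline rather than a drop-in script.

# Minimal sketch: enable statement-duration logging, then look for waiting locks.
# Assumes psycopg2 and a superuser connection; the DSN is a placeholder.
import psycopg2

conn = psycopg2.connect("dbname=mydb user=postgres")
conn.autocommit = True          # ALTER SYSTEM cannot run inside a transaction block
cur = conn.cursor()

# Log the duration of every statement (0 ms threshold); raise it later
# once the slow statements have been identified.
cur.execute("ALTER SYSTEM SET log_min_duration_statement = 0")
cur.execute("SELECT pg_reload_conf()")

# After the first slow request, check for locks that are still waiting.
cur.execute("SELECT pid, locktype, relation::regclass, mode, granted "
            "FROM pg_locks WHERE NOT granted")
for row in cur.fetchall():
    print(row)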
Have you checked the Apache error logs? Have you set Django's DEBUG = True or ADMINS = (('you', 'email@addr.com'),) so you can get a detailed error report about the actual cause of the issue? If so, how about pasting some of that information here.
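As a reminder, here is a minimal settings.py sketch of those two settings, with placeholder values of my own:

# settings.py -- placeholder values only
DEBUG = True   # show full tracebacks in the browser while debugging (never in production)

# With DEBUG = False, unhandled exceptions are e-mailed to these addresses instead.
ADMINS = [("Your Name", "you@example.com")]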
Why are you certain that it's Postgres? Have you done diagnostics to come to that conclusion? If so, please let us know.
Are you running Apache with mod_wsgi? How many processes and threads have you allocated to your Django application?
Also, 20 seconds to process the first transaction is a huge amount of time. Perhaps you could show us the view code that is causing the timeout; we may be able to help there.
I sincerely doubt that Postgres alone is causing the issue. It probably has something to do with application code or server configuration.