I am using Hunchentoot for a web app that will be a high-traffic, DB-driven application; it also depends on the WebSocket protocol and HTTP AJAX requests.
When I benchmark my app with ApacheBench as
ab -c 50 -n 1000
I get a "connection reset" error. Up to a concurrency of 40 the test completes, but above that it does not. How can one increase Hunchentoot's max-thread-count?
What are realistic numbers for concurrency and requests per unit time that I should plan against for a high-traffic web app, for example Reddit or Twitter?
You can pass in a number to the taskmaster with the :max-thread-count keyword.
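For example, a minimal sketch (the port number and thread count are placeholder values) for starting Hunchentoot with a bigger thread pool:

;; Assuming Quicklisp is installed.
(ql:quickload "hunchentoot")

;; Create the acceptor with an explicit taskmaster so the
;; thread limit can be raised above the default.
(defvar *acceptor*
  (make-instance 'hunchentoot:easy-acceptor
                 :port 8080
                 :taskmaster (make-instance
                              'hunchentoot:one-thread-per-connection-taskmaster
                              :max-thread-count 200)))

(hunchentoot:start *acceptor*)

If I remember the API correctly, the same taskmaster class also takes a :max-accept-count keyword to queue connections beyond the thread limit instead of rejecting them.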
Related
Is it possible to roughly estimate how many concurrent requests an API might receive?
Let's say it's a super simple API that just returns "hello" to a GET request, deployed on a 16 GB machine. In general, how many concurrent requests could it support before it starts to melt or say nah?
If it failed because of too many concurrent requests, what would happen?
Requests over the threshold would time out
Machine would crash
As PiRocks suggested, I ran an experiment:
Deployed a simple Node.js API app to Heroku (machine specs TBD - looking around to see if they even list them)
Signed up for a free account on loader.io
Unfortunately, the maximum for free is 10k requests over 15s, aka 666 QPS. That resulted in a 2ms average response time, no timeouts, and no errors. Might upgrade to see what it looks like from there.
Update: seems like 2K QPS is where I started to see errors. More details here
I have been doing load testing at my company for a long time, but TPS never passed 500 transactions per minute. I have a more challenging problem right now.
Problem:
My company will start a campaign and ask its customers a question; the first correct answer will be rewarded. Analysts expect 100,000 requests per second at the global maximum. (This doesn't seem realistic to me, but it is negotiable.)
Resources:
JMeter,
2 different service requests,
5 slave machines with 8 GB RAM each,
80 Mbps internet connection,
3.0 GHz CPUs,
a master computer with the same capabilities as the slaves.
Question:
How can I simulate this scenario, and is it even possible? What are the limitations? What should the load model look like? Are there any alternatives?
Any comment is appreciated.
Your load test always needs to represent real usage of the application by real users, so first of all carefully implement your test scenario to mimic a real human using a real browser, with all its behavior like:
cookies
headers
embedded resources (proper handling of images, scripts, styles, fonts, etc.)
cache
think times
etc.
Make sure your test follows JMeter Best Practices (a sample non-GUI invocation is shown after this list), i.e.:
being run in non-GUI mode
all listeners are disabled
JVM settings are optimised for maximum performance
etc.
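For instance, a typical non-GUI run looks like the following (the test plan and results file names are placeholders):

jmeter -n -t test-plan.jmx -l results.jtl

Here -n selects non-GUI mode, -t points at the test plan, and -l writes the raw results for offline analysis.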
Once done, you need to set up monitoring of your JMeter engines' health metrics (CPU, RAM, swap usage, network and disk IO, JVM stats, etc.) in order to see whether there is headroom to continue. The JMeter PerfMon Plugin is very handy, as its results can be correlated with the test metrics.
Start your test from 1 virtual user and gradually increase the load until you reach the target throughput, your application under test dies, or your JMeter engine(s) run out of resources, whichever comes first. Depending on the outcome you will either report success, report a defect, or request more hosts to use as JMeter engines / upgrade the existing hardware. A distributed run across your slaves is sketched below.
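Since you have a master and five slaves, JMeter's distributed mode can drive all the engines from the master; the hostnames below are placeholders:

jmeter -n -t test-plan.jmx -R slave1,slave2,slave3,slave4,slave5 -l results.jtl

The -R flag lists the remote engines to use; alternatively, -r runs the test on every host configured in the remote_hosts property of jmeter.properties.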
I'm trying to optimize a single-core, 1 GB RAM DigitalOcean VPS to handle more requests per second. After some tweaking (workers, gzip, etc.) it serves about 15 requests per second. I don't have anything to compare that with, but I think this number can be higher.
The stack works like this:
VPS -> Docker container -> nginx (ssl) -> Varnish -> nginx -> uwsgi (Django)
I'm aware that this is a long chain and that Docker might cause some overhead. However, almost all requests can be handled by Varnish.
These are my test results:
ab -kc 100 -n 1000 https://mydomain | grep 'Requests per second'
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests
Requests per second: 18.87 [#/sec] (mean)
I actually have 3 questions:
Am I correct that 18.87 requests per second is low?
For a simple Varnished Django blog app, what would be an adequate value (as an indication)?
I already applied the recommended tweaks (tuned for my system) from this tutorial. What more can I tweak, and how do I figure out the bottlenecks?
First, a note about Docker: it is not meant to run multiple processes in a single container. Docker is not a replacement for a VM; it simply allows you to run processes in isolation. So the Docker diagram should be:
VPS -> docker nginx container -> docker varnish container -> docker django container
To make your life with multiple Docker containers simpler, I would recommend using Docker Compose. It is not perfect, but it's an excellent start.
There is an old but still fundamentally relevant blog post about this. Note that some of its suggestions are no longer relevant, such as nsenter, since the docker exec command is now available, but most of the post is still correct.
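As a rough sketch of what a Compose file for this stack could look like (the build directories, names, and ports are placeholder assumptions, not a tested configuration):

version: "2"
services:
  nginx:
    build: ./nginx        # hypothetical directory with the nginx + SSL image
    ports:
      - "443:443"
    depends_on:
      - varnish
  varnish:
    build: ./varnish      # hypothetical directory with a Varnish image and VCL
    depends_on:
      - django
  django:
    build: ./django       # hypothetical directory with the uwsgi/Django image

Each service then runs a single process, and Compose wires up the networking between the containers so they can reach each other by service name.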
As for your performance issue: yes, 18 requests per second is pretty low. However, the problem most likely has nothing to do with nginx; it is probably in your Django application, and possibly in Varnish (though that is very unlikely).
To debug performance issues in Django, I would recommend django-debug-toolbar. Most issues in Django are caused by unnecessary SQL queries, and you can see them easily in the debug toolbar. To solve most of them you can use select_related() and prefetch_related() (see the sketch below). For more detailed analysis I would also recommend profiling your application; cProfile is a great start, and some IDEs like PyCharm include built-in profilers, so it's pretty easy to see which functions take the most time and optimize them. Finally, you can use third-party tools to profile your application: even a free New Relic account will give you quite a bit of information. Alternatively you can use Opbeat, which is the new cool kid on the block.
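To illustrate the select_related() point, here is a minimal sketch; the Author and Post models are hypothetical, not taken from your app:

# Hypothetical models, e.g. in some app's models.py.
from django.db import models

class Author(models.Model):
    name = models.CharField(max_length=100)

class Post(models.Model):
    author = models.ForeignKey(Author, on_delete=models.CASCADE)
    title = models.CharField(max_length=200)

# N+1 problem: 1 query for the posts plus 1 extra query per post,
# because each post.author access triggers a separate lookup.
names = [post.author.name for post in Post.objects.all()]

# Fix: select_related() pulls authors in with a SQL JOIN,
# so the same loop runs as a single query.
names = [post.author.name for post in Post.objects.select_related("author")]

prefetch_related() plays the same role for many-to-many and reverse foreign-key relations, where a JOIN is not possible and Django issues one batched extra query instead.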
Several PCs with Windows 7 and .NET 4.5.2 are using an application based upon the Invantive Web Service to access data located behind the web service server. However, for a few months the performance has been bad.
Even switching between tabs in the user interface is slow, taking multiple seconds instead of being instantaneous.
The problem was caused by Webroot SecureAnywhere, an anti-virus product. The Invantive Web Service uses HTTPS to communicate between the application and the web service servers.
Webroot SecureAnywhere does some form of deep packet inspection on HTTPS connections. The application in this case typically exchanges information on average every few seconds, using an HTTPS POST with a small payload (several hundred bytes).
Webroot SecureAnywhere inspects this HTTPS POST, and it takes approximately 750-1700 ms per HTTPS POST to analyze it and pass it through (measured on an i3 processor with Windows 7).
When Webroot SecureAnywhere was disabled, response times per HTTPS POST dropped from an average of 1,500 ms to 30 ms.
The long-term solution is either to abandon the use of Webroot SecureAnywhere, or for Webroot to improve the algorithm it uses for deep packet inspection of HTTPS POSTs.
The short-term solution is to add entries for all approved remote HTTPS sites to the Webroot SecureAnywhere whitelist.
I want to load test an enterprise server which is running 80 different SOAP web services. All web services need to be exercised at the same time, and I want to find the breaking point of the overall system.
I found some tools for testing a single web service, but not for testing all of them at the same time. Which tools can I use?
Our tool http://www.stresstimulus.com can test multiple web services at the same time. For each web service, create a separate test case: http://support.stresstimulus.com/display/doc40/Creating+Multiple+Test+Cases
To do so, issue a SOAP request and record it with the StresStimulus proxy, or import a Fiddler .saz capture file.
By default, all test cases have the same mix weight of 1, so load is spread between them evenly. To customize the load distribution, adjust the mix weights. After that, you can ramp up the load until you reach the breaking point. You'll receive a separate performance report for each web service, as well as a consolidated report with response times, errors, and other KPIs.