Node.js CPU utilisation statistics - profiling

NOTE: This is on Windows.
I have an application that is started with pm2 start index.js --name dvc -- config.json. I then opened a new command window and ran pm2 monit to monitor the application. To test the application, I am using the Runner option in Postman with the number of iterations set to 1000 and a delay of 0 ms.
In the pm2 monit window, the CPU % stays between 0 and 11%. In Task Manager, the node.exe process shows CPU % in the 20s. Process Explorer shows CPU utilisation close to the values reported by pm2 monit. So I am not able to conclude what the actual CPU utilisation is.
Can you please advise?

I would recommend looking into Windows Performance Monitor instead, as it exposes more precise counters:
Start Performance Monitor (e.g. type perfmon in the "Search" or "Run" box and press Enter)
Add a new counter (click the green plus sign)
Choose Process from the "Available counters" list and search for node
You should see charts for different counters (including, but not limited to, CPU usage)
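If you would rather cross-check those counters from a script, the sketch below (assuming psutil is installed via pip install psutil and that the pm2-managed process appears as node.exe) prints both conventions tools use: usage relative to a single core, which can exceed 100%, and usage normalised against total CPU capacity. Different tools pick different conventions and sampling intervals, which is one reason their numbers rarely line up exactly.

    import psutil

    # find the pm2-managed Node.js process(es); adjust the name if yours differs
    nodes = [p for p in psutil.process_iter(["name"]) if p.info["name"] == "node.exe"]

    for _ in range(30):                                # sample for ~30 seconds
        for proc in nodes:
            try:
                raw = proc.cpu_percent(interval=1.0)   # % of ONE core; can exceed 100
                norm = raw / psutil.cpu_count()        # % of total CPU capacity
                print(f"pid={proc.pid}  per-core={raw:5.1f}%  overall={norm:5.1f}%")
            except psutil.NoSuchProcess:
                pass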
Be aware of the following:
On multi-core processor systems you might need to monitor CPU usage for all cores in order to ensure that your application can be parallelised
Your 1000 iterations don't really create any load: Postman waits for the previous response before sending the next request, so your system only ever has one request in flight, and that request might even be served from a cache. If you would like to load test your application, consider a tool capable of sending requests concurrently; Apache JMeter would be a reasonable choice (see the sketch after this list for what concurrent load looks like). Check out the REST API Testing - How to Do it Right article for instructions on setting up JMeter for API load testing.
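For contrast, this minimal sketch fires the same 1000 requests but keeps up to 50 of them in flight at once (it assumes the requests library is installed, and the URL is a placeholder to point at your actual endpoint):

    import concurrent.futures
    import requests

    URL = "http://localhost:3000/"   # hypothetical endpoint; point at your API

    def hit(_):
        return requests.get(URL, timeout=10).status_code

    with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
        codes = list(pool.map(hit, range(1000)))

    print({c: codes.count(c) for c in set(codes)})   # histogram of status codes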

Related

speed up boot time of compute engine instances

I've written a simple batch file that starts Apache and sends a curl request to my server at start time. I am using Windows Server 2016 and an n-4 Compute Engine instance.
I've noticed that two identical machines require vastly different startup times. One sends the message in just 40 s, while the other takes almost 80 s. In the console both appear to start at the same time, but the reality is different, since the slower one is inaccessible via Remote Desktop tools for 80 s.
The second machine is made from a disk image of the first one. What factors contribute to the startup time? Where should I trim the fat?
The delay could occur if the instances are in different regions, or if the second instance has additional memory-intensive applications or extra customisations. The boot disk type of the instance also contributes to the boot time. Are you getting any information from the logs about this delay during startup? You could also compare traceroute results from both instances to see whether there is a delay at some point in the network.

JMeter: how to produce a large number of service requests per second, like 100,000 req/sec

I have been doing load testing at my company for a long time, but TPS has never passed 500 transactions per minute. I have a more challenging problem right now.
Problem:
My company will start a campaign that asks its customers a question, and the first correct answer will be rewarded. Analysts expect 100,000 requests per second at the global maximum (that doesn't seem realistic to me, but this is negotiable).
Resources:
JMeter
2 different service requests
5 slave machines with 8 GB RAM each
80 Mbps internet connection
3.0 GHz CPUs
A master computer with the same capabilities as the slaves
Question:
How can I simulate this scenario, and is it possible? What are the limitations? What should the load model be? Is there any alternative way to do it?
Any comment is appreciated.
Your load test always needs to represent real usage of the application by real users, so first of all carefully implement your test scenario to mimic a real human using a real browser, with everything that entails (see the sketch after this list), such as:
cookies
headers
embedded resources (proper handling of images, scripts, styles, fonts, etc.)
cache
think times
etc.
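Here is what some of those points mean in practice, sketched with Python requests; the same ideas map onto JMeter's HTTP Cookie Manager, HTTP Header Manager, HTTP Cache Manager and timers (the URLs, header value and form field are placeholders):

    import random
    import time
    import requests

    session = requests.Session()                          # persists cookies like a browser
    session.headers.update({"User-Agent": "Mozilla/5.0"}) # send browser-like headers

    page = session.get("https://example.com/quiz")        # load the page
    # a real browser would now fetch the images, scripts, styles and fonts
    # referenced by the page, serving repeat requests from its cache
    time.sleep(random.uniform(2, 5))                      # think time before answering
    session.post("https://example.com/quiz/answer", data={"answer": "42"})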
Make sure your test follows JMeter Best Practices, i.e.:
being run in non-GUI mode
all listeners are disabled
JVM settings are optimised for maximum performance
etc.
Once done, you need to set up monitoring of your JMeter engines' health metrics (CPU, RAM, swap usage, network and disk IO, JVM stats, etc.) in order to see whether there is headroom to continue. The JMeter PerfMon Plugin is very handy, as its results can be correlated with the test metrics.
Start your test from 1 virtual user and gradually increase the load until you reach the target throughput, your application under test dies, or the JMeter engine(s) run out of resources, whichever comes first. Depending on the outcome, you will either report success or a defect, or you will need to request more hosts to use as JMeter engines or upgrade the existing hardware.
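It is also worth sanity-checking the target against the stated resources before scaling out. A back-of-the-envelope calculation (the payload size here is an assumption; substitute your real request and response sizes):

    target_rps    = 100_000   # requests per second at peak
    link_mbps     = 80        # the stated internet connection
    payload_bytes = 2_000     # assumed average bytes on the wire per request/response pair

    link_bytes_per_sec = link_mbps * 1_000_000 / 8            # = 10,000,000 bytes/s
    budget_per_request = link_bytes_per_sec / target_rps      # = 100 bytes
    needed_mbps = target_rps * payload_bytes * 8 / 1_000_000  # = 1,600 Mbps

    print(f"bytes available per request: {budget_per_request:.0f}")
    print(f"bandwidth needed for {payload_bytes}-byte exchanges: {needed_mbps:,.0f} Mbps")

At 100,000 requests per second, an 80 Mbps link leaves roughly 100 bytes per request, so under these assumptions the network, not JMeter, is likely to be the first limitation you hit.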

JMeter: Low CPU usage but responses too slow

I am trying to load test web services with 1000 users using JMeter. I can see that CPU usage is around 30% once 1000 users are injected, but when it comes to responses, the maximum time taken is around 12 seconds.
My question is: if the CPU is not utilised 100%, shouldn't the maximum response time be no more than a few seconds?
1. It is good that you are monitoring CPU usage on the application server side. However, the devil may live somewhere else: for instance, the application may lack available RAM, swap intensively, or reach the limits of network or disk IO, so you should consider these metrics as well. The aforementioned metrics (and more) can be monitored using the JMeter PerfMon Plugin.
2. Basically the same as point 1, but applied to the JMeter side of things. JMeter tests are very resource-intensive, and if JMeter lacks resources it will send requests much more slowly. So make sure you monitor baseline OS health metrics on the JMeter machine(s) as well. Also, 1000 users is quite a high load; double-check that your test corresponds to JMeter Best Practices.
3. There may be a bottleneck in your application, i.e. it simply isn't capable of providing a good response time for 1000 concurrent users. Use a relevant profiler tool to detect the longest-running functions and investigate the root cause.

Configure uWSGI server for performance

I am deploying a uWSGI server for a Django app. Each request will have a latency of around 2 seconds. I need to handle 100 QPS. On a 4-core machine, how should I configure the number of processes and the number of threads? I tried playing with the values, but I do not understand what I am doing.
Go through the uWSGI Things to know page. 100 requests per second should be easily attainable with uWSGI.
Based on uWSGI behaviour I've experienced, I would recommend that you start with processes only and don't use any threads. With both processes and threads enabled, we observed what seemed to be an affinity for threads over processes: a single process handled all requests until its thread pool was fully occupied, and only then were requests handled by the next process. This resulted in poor utilisation of resources, as a single core was maxed out while all others sat idle. Turning off threading resulted in a massive performance boost for our particular use model.
Your experience may be different. The uWSGI authors stress that there isn't any magic config combination: it's completely dependent on your particular use case. You need to benchmark your app against various configurations to find the sweet spot. Additionally, unless you're able to use benchmarks that perfectly model your actual production load, you'll want to continue to monitor performance and methodically tweak settings after you deploy.
From the Things to know page:
There is no magic rule for setting the number of processes or threads to use. It is very much application and system dependent. Simple math like processes = 2 * cpucores will not be enough. You need to experiment with various setups and be prepared to constantly monitor your apps. uwsgitop could be a great tool to find the best values.
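As a starting point for that experimentation, Little's Law gives a rough lower bound on the concurrency you need. A minimal sketch, assuming the ~2-second latency holds under load (which your benchmarks should confirm):

    qps = 100         # target throughput
    latency_s = 2.0   # observed per-request latency

    # Little's Law: requests in flight = arrival rate x time in system
    in_flight = qps * latency_s   # = 200 requests being handled at any instant
    print(f"need capacity for ~{in_flight:.0f} concurrent requests")

Whether those ~200 slots come from processes alone or from a mix of processes and threads is exactly what the benchmarking above should decide.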

Is there a way to speed up recovery from a crash

I'm trying to find a way to make Calabash switch to the next Scenario after noticing a crash:
Retrying.. HTTPClient::ReceiveTimeoutError: (execution expired)
Retrying.. HTTPClient::ReceiveTimeoutError: (execution expired)
Failing... HTTPClient::ReceiveTimeoutError
Otherwise it can take up to half an hour before Calabash re-establishes the connection to the Simulator and starts the next Scenario.
half an hour before Calabash reestablish connection to Simulator
This is very unusual and typically indicates a problem with UIAutomation.
Have you seen the Hot Topics page? In particular:
NSLog output can cause apps to become unresponsive during testing.
My best guess is that instruments is hanging for some reason. Below, I provide details about the various variables, and their defaults, that influence launching the app and connecting to the Calabash server.
I don't think that adjusting any of the variables below will make any difference in your case.
Reporting Problems
In the future, please include the details found in the Report Problems section of the Calabash iOS Wiki Home Page.
Environment Variables
You can find documentation about all the Calabash iOS environment variables here.
There are several variables you can use to control how long Calabash will wait for a response.
In Calabash iOS, two things need to happen before tests can begin:
The instruments command-line tool must launch the app and respond that it has launched the app.
Calabash must establish a connection with the embedded server.
You can control how long run-loop waits for instruments to launch the app and report back using the UIA_TIMEOUT environment variable. The default is 10 seconds. Calabash tells run-loop to try 3 times, for a total of 30 seconds. Unfortunately, there are no public API docs for run-loop.
Then there are two environment variables that control how long Calabash will try to establish a connection with the embedded server:
CONNECT_TIMEOUT
MAX_CONNECT_RETRY
The default is to try to reconnect once every 3 seconds, 10 times, for a total of 30 seconds.
These two variables are also used every time a query or gesture is made; they determine how long Calabash waits for the server to reply.
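To make the interplay of the two variables concrete, here is a sketch of the retry logic in Python terms. This is an illustration only, not Calabash's actual implementation, and the port number is an assumption:

    import socket
    import time

    CONNECT_TIMEOUT = 3      # seconds per attempt (default)
    MAX_CONNECT_RETRY = 10   # attempts (default), i.e. 30 seconds worst case

    def connect_to_server(host="127.0.0.1", port=37265):  # assumed embedded-server port
        for attempt in range(1, MAX_CONNECT_RETRY + 1):
            try:
                return socket.create_connection((host, port), timeout=CONNECT_TIMEOUT)
            except OSError:
                # pace the retries at roughly one every CONNECT_TIMEOUT seconds
                time.sleep(CONNECT_TIMEOUT)
        raise TimeoutError("gave up connecting to the embedded Calabash server")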