How to read concurrently in JMeter? - concurrency

I have a question about concurrent users in JMeter.
If I use a setting like this, does that mean 500 threads will hit concurrently in the first second?
Then, after the first hit, will the threads repeatedly hit every minute?

It means that JMeter will start 500 threads and will keep them running for 20 minutes.
It doesn't mean that the threads will "repeatedly hit every minute"
All JMeter threads are independent. Once started, each thread (virtual user) executes Samplers from top to bottom. When there are no more loops to iterate or samplers to execute (or the test duration is exceeded), the thread is shut down.
So in your case 500 threads will be repeatedly executing Samplers without any delay (as long as you're not using Timers anywhere) for 20 minutes.
The resulting throughput (number of requests per second) mainly depends on the nature of your test plan and your application's response time; it can be checked using, for example, the Transactions per Second listener.
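As a rough back-of-the-envelope sketch (the average response time below is an assumed figure, not something from the question), this is the kind of load 500 threads with no timers can generate:
const threads = 500;
const testDurationSec = 20 * 60;   // the 20-minute test from the question
const avgResponseTimeSec = 0.5;    // assumed average response time, not a measurement
// With no timers, each thread completes roughly 1 / responseTime samples per second.
const requestsPerSecond = threads / avgResponseTimeSec;     // ~1000 requests/sec
const totalSamples = requestsPerSecond * testDurationSec;   // ~1,200,000 samples over 20 minutes
console.log({requestsPerSecond, totalSamples});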

The preview graph in Concurrency Thread Group is useful to understand how it works.
The preview graph reacts immediately to changes in the fields, showing you the planned concurrency schedule.
Target Concurrency: Number of threads available at the end of the ramp-up period.
Hold Target Rate Time: Duration of the test after the ramp-up period, i.e. how long the test runs at the Target Concurrency.
You have not set any ramp-up. Hence, JMeter will create all 500 threads as soon as the test is started and will run the test for 20 minutes.
Note: It is advisable to set a ramp-up to avoid
An unbearable load on the JMeter client machine (load agent) as soon as the test starts
An unrealistic, sudden load on the target server

Running background processes in Google Cloud Run

I have a lightweight server that runs cron jobs at a given time. As I understand it, Google Cloud Run only processes incoming requests and then becomes idle after a short time if there is no other request to process. Hence, it is not advisable to deploy that cron service to Cloud Run.
Out of curiosity, I deployed the following server that starts up and then prints a log every hour.
const express = require('express');
const app = express();
// Log a message every hour, independently of any incoming request.
setInterval(() => console.log('ping!'), 1000 * 60 * 60);
app.listen(process.env.PORT, () => {
  console.log('server listening');
});
I deployed it with a minimum and maximum instance count of 1. It has not received any requests, and when I checked back the next day, it was still printing the log precisely every hour. Was this a coincidence, or can I use this setup in production?
If you set the minimum instances to 1 and enable "CPU always allocated", then yes, you can perform compute-intensive background processing without CPU throttling (in your hello-world case, you can get by on the few CPU % allowed to an idle instance even without that option).
BUT, and the but is very important, you will pay for one Cloud Run instance that is always up. In addition, if you receive requests, the service can scale up and have more than one instance running. Does it make sense to have several instances running the same cron schedule (unless you set the maximum instances to 1)?
In the end, the best pattern is to host the scheduling outside, on Cloud Scheduler, and have it call your service to perform the task. It's serverless, it can handle several tasks in parallel, and it's scalable.
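A minimal sketch of that pattern in Node.js (the endpoint path and the hourly schedule are illustrative, not from the original question): the Cloud Run service only exposes an HTTP endpoint, and a Cloud Scheduler HTTP job calls it on the desired schedule.
const express = require('express');
const app = express();
// Endpoint for a Cloud Scheduler HTTP job (e.g. cron schedule "0 * * * *" for every hour).
// The path is a placeholder; in a real setup, restrict it with IAM/OIDC authentication.
app.post('/run-cron', (req, res) => {
  console.log('ping!');      // the actual scheduled work goes here
  res.status(204).send();    // respond so the instance can scale back down when idle
});
app.listen(process.env.PORT || 8080, () => console.log('server listening'));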
From my understanding, no.
From the documentation here, Google indicates that the CPU of idle instances is throttled to nearly zero. I suppose this means that very simple operations can still be performed (e.g. logging a string every hour). I guess you could test it more extensively by doing some more complex operations and evaluating their processing time.
Either way, I would not count on it in a production environment. There is no guarantee that a CPU "throttled to nearly zero" will be able to complete the operations you need within a reasonable delay.

Concurrency Thread Group showing more samples than defined

I am using the Concurrency Thread Group with the following values:
Target Concurrency: 200,
Ramp-Up Time: 5 min,
Ramp-Up Step Count: 10,
Hold Target Rate Time : 0 min,
Thread Iteration Limit: 1.
I am using a Throughput Controller as a child of the Concurrency Thread Group, with Total Executions, Throughput = 1, and "Per User" selected.
I have 5 HTTP Requests. What I expected is that each HTTP request should have 200 users, but it shows more than 300 users.
Can anyone tell me whether my expectation is wrong or my setup is wrong?
What is the best way to do this?
Your expectation is wrong. With regards to your setup - we don't know what you're trying to achieve.
Concurrency Thread Group maintains the defined concurrency so
JMeter will start with 20 users
In 30 seconds another 20 users will be kicked off so you will have 40 users
In 60 seconds another 20 users will arrive so you will have 60 users
etc.
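To make this stepping concrete, here is a rough sketch (plain JavaScript, not JMeter code, and only an approximation of the plugin's schedule) of the planned concurrency over time with the values from the question:
// Target Concurrency 200, Ramp-Up 300 s, 10 steps -> 20 extra threads every 30 s.
function plannedConcurrency(elapsedSec, targetConcurrency, rampUpSec, steps) {
  const threadsPerStep = targetConcurrency / steps;  // 20 threads per step
  const stepDurationSec = rampUpSec / steps;         // a new step every 30 s
  const stepsStarted = Math.min(steps, Math.floor(elapsedSec / stepDurationSec) + 1);
  return stepsStarted * threadsPerStep;
}
console.log(plannedConcurrency(0, 200, 300, 10));   // 20 users at the start
console.log(plannedConcurrency(30, 200, 300, 10));  // 40 users after 30 s
console.log(plannedConcurrency(60, 200, 300, 10));  // 60 users after 60 s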
Once started, the threads will execute the Sampler(s) from top to bottom (or according to the Logic Controllers), and the actual number of requests will depend on your application's response time.
Your "Thread Iteration Limit" setting allows each thread to loop only once, so a thread is stopped once it has executed all the samplers; however, the Concurrency Thread Group will kick off another thread to replace the finished one in order to maintain the defined concurrency.
If you want to limit the total number of executions to 200, you can go for a Throughput Controller, and this way you will have only 200 executions of its children.
Be aware that in the above setup your test will still run for 5 minutes; however, the threads will not execute any samplers after 200 total executions.

GAE - how to avoid service request timing out after 1 day

As I explained in this post, I'm trying to scrape tweets from Twitter.
I implemented the suggested solution with services, so that the actual heavy lifting happens in the backend.
The problem is that after about one day, I get this error
"Process terminated because the request deadline was exceeded. (Error code 123)"
I guess this is because the manual scaling has the requests timing out after 24 hours.
Is it possible to make it run for more than 24 hours?
You can't make a single request / task run for more than 24 hours, but you can split your work into different parts, each lasting up to a day. It's unwise to have a request run indefinitely; that's why App Engine closes requests after a certain time, to prevent idle or loopy requests that never end.
I would recommend having your task fire a call at the end to trigger the queuing of the next task; that way it's automatic and you don't have to queue a task manually every day. Make sure there's a cursor or some other way for your task to communicate its progress so it won't duplicate work.
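A minimal Node.js sketch of that chaining, assuming a Cloud Tasks queue called scrape-queue and a /tasks/scrape handler (the project, location, queue and path names are all made up for illustration):
const {CloudTasksClient} = require('@google-cloud/tasks');
const client = new CloudTasksClient();
// Called at the end of the current run: enqueue the next chunk of work,
// passing a cursor so the next task can resume where this one stopped.
async function enqueueNextChunk(cursor) {
  const parent = client.queuePath('my-project', 'us-central1', 'scrape-queue'); // placeholder names
  const task = {
    appEngineHttpRequest: {
      httpMethod: 'POST',
      relativeUri: '/tasks/scrape',
      headers: {'Content-Type': 'application/json'},
      body: Buffer.from(JSON.stringify({cursor})).toString('base64'),
    },
  };
  await client.createTask({parent, task});
}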

Get a constant RPS in JMeter

I have been trying to use JMeter to test my server. I have a CloudSearch endpoint on AWS. I have to test whether it can scale up to 25000 requests per second without failing. I have tried JMeter with a Constant Throughput Timer with throughput = 1500000 per second, running 1000 threads. I ran it for 10 minutes. But when I review the aggregate report, it shows an average of only 25 requests per second. How do I get an average of around 25000 requests per second?
The Constant Throughput Timer can only pause the threads to reach the specified "Target Throughput" value, so make sure you provide enough virtual users (threads) to generate the desired "requests per minute" value.
You don't have enough threads to achieve that many requests per second!
To get an average of ~25000 requests per second, you have to increase the number of threads.
Remember, the number of threads will impact results if your server faces slowdowns. If it does and you don't have enough threads, you will not be injecting the expected load and will end up with fewer transactions performed.
You need to increase the number of concurrent users to at least 25000 (this assumes a 1-second response time; if you have a 2-second response time, you will need 50000).
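To put rough numbers on it (a sketch based on Little's Law, assuming a stable average response time):
// Concurrent users needed ≈ target throughput × average response time.
function requiredThreads(targetRequestsPerSecond, avgResponseTimeSec) {
  return Math.ceil(targetRequestsPerSecond * avgResponseTimeSec);
}
console.log(requiredThreads(25000, 1)); // 25000 threads for a 1 s response time
console.log(requiredThreads(25000, 2)); // 50000 threads for a 2 s response time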
JMeter's default configuration is not suitable for high loads; it is fine for test development and debugging, but when it comes to the real load you need to consider some constraints, e.g.:
Run JMeter test in non-GUI mode
Increase JVM Heap Size and tune other parameters
Disable all listeners during the test
If the above tips don't help, follow the other recommendations from the 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure article or consider Distributed Testing.

Counting number of requests per second generated by JMeter client

This is how the application setup goes:
2 c4.8xlarge instances
10 m4.4xlarge JMeter clients generating load; each client used 70 threads
While conducting a load test on a simple GET request (a 685-byte page), I came across an issue of reduced throughput after some time of the test run. A throughput of about 18000 requests/sec is reached with 700 threads, remains at this level for 40 minutes, and then drops. The thread count remains 700 throughout the test. I have executed tests with different load patterns, but the results have been the same.
The application response time is considerably low throughout the test.
According to the ELB monitor, there is a reduction in the number of requests (and, I suppose, hence the lower throughput).
No errors are encountered during the test run. I also set a connect timeout on the HTTP request, but still no errors.
I discussed this issue with AWS support at length, and according to them I am not hitting any network limit during test execution.
Given that the number of threads remains constant during the test run, what are these threads doing? Is there a metric I can check to find out the number of requests generated (not hits/sec) by a JMeter client instance?
Testplan - http://justpaste.it/qyb0
Try adding the following Test Elements:
HTTP Cache Manager
and especially the DNS Cache Manager, as it might be that all your threads are hitting only one c4.8xlarge instance while the other one sits idle. See The DNS Cache Manager: The Right Way To Test Load Balanced Apps article for an explanation and details.
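As a quick sanity check (the hostname below is a placeholder), you can look at how many addresses the load balancer's hostname resolves to; a client that caches only one of them will keep sending all of its traffic to a single backend instance:
const dns = require('dns');
// List every A record returned for the ELB hostname; a load-balanced endpoint
// normally rotates several IPs across DNS answers.
dns.resolve4('my-elb-1234567890.us-east-1.elb.amazonaws.com', (err, addresses) => {
  if (err) throw err;
  console.log('ELB resolves to:', addresses);
});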