Concurrency Thread Group showing more samples than the defined concurrency

I am using a Concurrency Thread Group with the following values:
Target Concurrency: 200,
Ramp-Up Time: 5 min,
Ramp-Up Step Count: 10,
Hold Target Rate Time: 0 min,
Thread Iteration Limit: 1.
I am using a Throughput Controller as a child of the Concurrency Thread Group, with Total Executions, Throughput = 1, and Per User selected.
I have 5 HTTP Requests. What I expected is that each HTTP request should have 200 users, but it shows more than 300 users.
Can anyone tell me whether my expectation is wrong or my setup is wrong?
What is the best way to do this?

Your expectation is wrong. With regards to your setup - we don't know what you're trying to achieve.
Concurrency Thread Group maintains the defined concurrency so
JMeter will start with 20 users
In 30 seconds another 20 users will be started, so you will have 40 users
In 60 seconds another 20 users will arrive, so you will have 60 users
etc.
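The ramp-up arithmetic above can be sketched as a quick calculation (values taken from the question's configuration):

```python
# Sketch of the Concurrency Thread Group ramp-up schedule for the
# settings in the question: Target Concurrency = 200, Ramp-Up Time
# = 300 s, Ramp-Up Step Count = 10.
target, ramp_up_s, steps = 200, 300, 10

users_per_step = target // steps      # 20 users added at each step
step_interval = ramp_up_s / steps     # a new step every 30 seconds

schedule = [(i * step_interval, i * users_per_step)
            for i in range(1, steps + 1)]
for t, users in schedule:
    print(f"t={t:>5.0f}s  active users={users}")
# first step:  t=30s,  20 users
# final step:  t=300s, 200 users
```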
Once started, the threads will begin executing Sampler(s) from top to bottom (or according to the Logic Controllers), and the actual number of requests will depend on your application response time.
Your "Thread Iteration Limit" setting allows each thread to loop only once, so a thread will be stopped once it has executed all the samplers. However, the Concurrency Thread Group will kick off another thread to replace the finished one in order to maintain the defined concurrency.
If you want to limit the total number of executions to 200, you can go for a Throughput Controller, and this way you will have only 200 executions of its children.
Be aware that in the above setup your test will still be running for 5 minutes, however the threads will not be executing samplers after 200 total executions.

Related

Cloud function, 2nd gen concurrency?

They say 2nd gen is:
Concurrency: Process up to 1000 concurrent requests with a single function instance,
minimizing cold starts and improving latency when scaling.
but as far as I know, the previous version of Cloud Functions has a maximum of 3000 concurrent invocations for a single instance,
so is this kind of a downgrade?
Gen 1 functions can only handle 1 concurrent request at a time per instance. This means that while your code is processing one request, there is no possibility of a second request being routed to the same instance.
Gen 2 functions on the other hand can handle up to 1000 concurrent requests per function instance.
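For reference, per-instance concurrency is a deploy-time setting on 2nd-gen functions. A sketch of what that might look like (function name, runtime, and region are placeholders, not from the question):

```shell
# Illustrative only: deploying a 2nd-gen function with a per-instance
# concurrency limit. Placeholders: my-function, nodejs20, us-central1.
gcloud functions deploy my-function \
  --gen2 \
  --runtime=nodejs20 \
  --region=us-central1 \
  --trigger-http \
  --concurrency=1000   # up to 1000 concurrent requests per instance
```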

How to read concurrently in Jmeter?

I have a question about concurrent user in Jmeter.
If I use a setting like this, does that mean that in the first second 500 threads will hit concurrently?
Then, after the first hit, will the threads repeatedly hit every minute?
It means that JMeter will start 500 threads and will keep them running for 20 minutes.
It doesn't mean that the threads will "repeatedly hit every minute"
All JMeter threads are independent; once started, each thread (virtual user) executes Samplers from top to bottom. When there are no more loops to iterate or samplers to execute (or the test duration is exceeded), the thread is shut down.
So in your case 500 threads will be repeatedly executing Samplers without any delay (as long as you're not using Timers anywhere) for 20 minutes.
The actual concurrency (number of requests per second) mainly depends on the nature of your test plan and your application response time; it can be checked using e.g. the Transactions per Second listener.
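The relationship between thread count, response time, and throughput can be sketched with Little's law (a simplified model assuming no timers or think time):

```python
# Little's law sketch: steady-state throughput = threads / response_time.
# Simplified model: no timers, no think time, constant response time.
def throughput_rps(threads, response_time_s):
    """Approximate requests per second for a closed workload."""
    return threads / response_time_s

print(throughput_rps(500, 1.0))  # 500 req/s if responses take 1 s
print(throughput_rps(500, 2.0))  # 250 req/s if responses take 2 s
```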
The preview graph in Concurrency Thread Group is useful to understand how it works.
The preview graph reacts immediately to changes in the fields, showing you the planned concurrency schedule.
Target Concurrency: Number of threads available at the end of the ramp-up period.
Hold Target Rate Time: Duration of the test after the ramp-up period, i.e. how long the test runs at the Target Concurrency.
You have not set the ramp-up settings. Hence, JMeter will create all 500 threads as the test starts and will run the test for 20 minutes.
Note: It is advisable to set a ramp-up period to avoid:
An unbearable load on the JMeter client machine (load agent) as the test starts
An unrealistic, sudden load on the target server

Google cloud task queues not running in parallel

I have a project in google cloud where there are 2 task queues: process-request to receive requests and process them, send-result to send the result of the processed request to another server. They are both running on an instance called remote-processing
My problem is that I see the tasks being enqueued in send-result but they are only executed after the process-request queue is empty and has processed all requests.
This is the instance config:
instance_class: B4
basic_scaling:
  max_instances: 8
Here is the queue config:
- name: send-result
  max_concurrent_requests: 20
  rate: 1/s
  retry_parameters:
    task_retry_limit: 10
    min_backoff_seconds: 5
    max_backoff_seconds: 20
  target: remote-processing

- name: process-request
  bucket_size: 50
  max_concurrent_requests: 10
  rate: 10/s
  target: remote-processing
Clarification: I don't need the queues to run in a specific order, but I find it very strange that it looks like the instance only runs one queue at a time, so it will only run the tasks in another queue after it is done with the current queue.
over what period of time is this all happening?
how long does a process-request task take to run vs a send-result task
One thing that sticks out is that your rate for process-request is much higher than your rate for send-result. So maybe a couple of send-result tasks ARE squeezing through, but the queue then hits its rate cap and has to run process-request tasks instead.
Same note for bucket_size. The bucket_size for process-request is huge compared to its rate:
The bucket size limits how fast the queue is processed when many tasks
are in the queue and the rate is high. The maximum value for bucket
size is 500. This allows you to have a high rate so processing starts
shortly after a task is enqueued, but still limit resource usage when
many tasks are enqueued in a short period of time.
If you don't specify bucket_size for a queue, the default value is 5.
We recommend that you set this to a larger value because the default
size might be too small for many use cases: the recommended size is
the processing rate divided by 5 (rate/5).
https://cloud.google.com/appengine/docs/standard/python/config/queueref
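The interaction between rate and bucket_size can be sketched with a simple token-bucket model (a simplified illustration, not the actual Cloud Tasks implementation):

```python
# Rough token-bucket sketch of how rate and bucket_size interact.
# Simplified model for illustration; not the Cloud Tasks internals.
def tokens_available(rate_per_s, bucket_size, elapsed_s):
    """Tokens accrued after elapsed_s seconds, capped at bucket_size."""
    return min(bucket_size, rate_per_s * elapsed_s)

# process-request: rate=10/s, bucket_size=50 -> bursts of up to 50 tasks
print(tokens_available(10, 50, 1))   # 10 tasks dispatchable after 1 s
print(tokens_available(10, 50, 60))  # capped at 50, despite 600 accrued

# send-result: rate=1/s, default bucket_size=5 -> at most 5-task bursts
print(tokens_available(1, 5, 60))    # capped at 5
```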
Also, with max_instances: 8 set, does a big backlog of work build up in these queues?
Let's try two things:
set bucket_size and rate to be the same for both process-request and send-result. If that fixes it, then start fiddling with the values to get the desired balance
bump up max_instances (currently 8) to see if removing that bottleneck fixes it
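A sketch of what the first experiment could look like in queue.yaml (the equalized values are illustrative, not a recommendation):

```yaml
# Illustrative first experiment: give both queues the same rate and
# bucket_size, then tune from there. Values are placeholders.
- name: send-result
  bucket_size: 50
  max_concurrent_requests: 20
  rate: 10/s
  target: remote-processing

- name: process-request
  bucket_size: 50
  max_concurrent_requests: 10
  rate: 10/s
  target: remote-processing
```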

Concurrency and ultimate thread group Setup

I want to use Concurrency thread group, so I'm using this configuration
What I'm expecting is to send 10 requests over 5 seconds and hold them for 1 second, but the result after running my script is this: more than 10 HTTP requests are sent.
How can I make it send only 10 requests?
Thank you.
A similar behaviour happens with the Ultimate Thread Group.
You're not sending 10 requests in 5 seconds, you're launching 5 threads (virtual users) in 5 seconds, to wit JMeter will add 2 virtual users each second for 5 seconds and then hold the load for 1 second.
The actual number of requests which will be made depends on your application response time, higher response time - less requests, lower response time - more requests.
If you want to send exactly 10 requests in 5 seconds evenly distributed go for the following configuration:
Normal Thread Group with users * loops = 10, to wit:
10 users - 1 loop
5 users - 2 loops
etc.
Throughput Controller in Total Executions mode and Throughput set to 10
HTTP Request
Throughput Shaping Timer configured to send 2 requests per second
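The "users * loops = 10" combinations from the recipe above can be enumerated, and the shaping rate follows from simple division:

```python
# Enumerate the (users, loops) pairs whose product is 10, as in the
# answer's "users * loops = 10" recipe.
total_requests = 10
combos = [(u, total_requests // u)
          for u in range(1, total_requests + 1)
          if total_requests % u == 0]
print(combos)  # [(1, 10), (2, 5), (5, 2), (10, 1)]

# Evenly distributing 10 requests over 5 seconds gives the
# Throughput Shaping Timer rate:
print(total_requests / 5)  # 2.0 requests per second
```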

Get a Constant rps in Jmeter

I have been trying to use JMeter to test my server. I have a CloudSearch endpoint on AWS. I have to test whether it can scale up to 25000 requests per second without failing. I have tried JMeter with a Constant Throughput Timer with throughput = 1500000 per minute, running 1000 threads. I ran it for 10 minutes. But when I review the Aggregate Report, it shows an average of only 25 requests per second. How do I get an average of around 25000 requests per second?
The Constant Throughput Timer can only pause the threads to reach the specified "Target Throughput" value, so make sure you provide enough virtual users (threads) to generate the desired "requests per minute" value.
You don't have enough threads to achieve that many requests per second.
To get an average of ~25000 requests per second, you have to increase the number of threads.
Remember, the number of threads will impact results if your server faces slowdowns. If so, and you don't have enough threads, then you will not be injecting the expected load and will end up with fewer transactions performed.
You need to increase the number of concurrent users to be at least 25000 (it assumes 1 second response time, if you have 2 seconds response time - you will need 50000)
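The thread-count estimate above is just Little's law rearranged (a simplified model assuming no think time):

```python
# Threads needed for a target rate (Little's law, no think time assumed):
# threads = target_rps * response_time.
def threads_needed(target_rps, response_time_s):
    return int(target_rps * response_time_s)

print(threads_needed(25000, 1))  # 25000 threads at 1 s response time
print(threads_needed(25000, 2))  # 50000 threads at 2 s response time
```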
JMeter's default configuration is not suitable for high loads; it is good for test development and debugging. However, when it comes to the real load you need to consider some constraints, e.g.:
Run JMeter test in non-GUI mode
Increase JVM Heap Size and tune other parameters
Disable all listeners during the test
If the above tips don't help, follow the other recommendations from the 9 Easy Solutions for a JMeter Load Test "Out of Memory" Failure article, or consider Distributed Testing.
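The first two items on the list might look like this in practice (a sketch; file names, paths, and heap sizes are placeholders to adjust for your machine):

```shell
# Illustrative: run JMeter in non-GUI mode with a larger JVM heap.
# plan.jmx, results.jtl, report/ and the 4g heap are placeholders.
HEAP="-Xms4g -Xmx4g" jmeter -n -t plan.jmx -l results.jtl -e -o report/
# -n : non-GUI mode        -t : test plan file
# -l : results log file    -e -o : generate an HTML dashboard report
```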