How to set up Concurrency Thread Group concurrency

I have the following test plan to test the concurrent user load of a website.
The configuration is set as:
Target Concurrency = 10
Ramp Up Time = 1
Ramp Up Steps Count = 1
Hold Target Rate Time = 6
This is creating confusion: I expected it to send only 10 requests at a time within 1 second, but instead it sends the first 10 requests within 1 second and then keeps sending requests for 60 seconds.
Why is that?

Set Hold Target Rate Time to 1 sec to match your expectations.
The graph should reflect the settings you made.
Note: In the graph you shared, it is clearly visible that you set Hold Target Rate Time to 60 sec (reflected in the graph as well), which resulted in 60 seconds of execution after the ramp-up time.
Reference:
Refer to the Concurrency Thread Group section in the link.

As per the requirement of simulating 10 requests at a time in 1 second:
Target Concurrency = 10
Ramp Up Time = 1
Ramp Up Steps Count = 1
Hold Target Rate Time = 1
Set Hold Target Rate Time to however long you want the test to run,
e.g. 1 sec to run the test plan for 1 sec, 1 min to run it for 1 min.
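To illustrate why the test keeps running after the ramp-up, here is a rough Python sketch of how the Concurrency Thread Group schedules concurrency over time (a simplified model for illustration, not the plugin's actual implementation): the total test duration is roughly Ramp Up Time + Hold Target Rate Time, and the target concurrency is held for the whole hold period.

def planned_concurrency(t, target=10, ramp_up=1, steps=1, hold=60):
    # t: elapsed seconds; target: Target Concurrency; ramp_up: Ramp Up Time;
    # steps: Ramp Up Steps Count; hold: Hold Target Rate Time (all in seconds).
    if t < ramp_up:                      # ramping up in `steps` increments
        step = int(t / (ramp_up / steps)) + 1
        return int(target * step / steps)
    if t <= ramp_up + hold:              # holding the target concurrency
        return target
    return 0                             # test finished

# With hold = 60 the target of 10 is maintained until ~61 s,
# which is why requests keep flowing for a full minute.
for t in (0, 1, 30, 61, 62):
    print(t, planned_concurrency(t))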

Related

concurrency in AWS Lambda function

I am just looking for confirmation that I understand the Lambda settings correctly.
The minimum concurrency setting is 10, and my Lambda function takes 3000 ms.
So does that mean the maximum number of times my function can be called per minute is...
60 seconds / 3 seconds = 20 calls
20 x 10 concurrent = 200 function calls per minute?
Thanks
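The back-of-the-envelope calculation above can be written out as a small Python sketch (this is only the arithmetic from the question; real Lambda throughput also depends on burst limits and per-invocation overhead):

def max_invocations_per_minute(concurrency, duration_ms):
    # Upper bound if every concurrent execution environment is kept fully busy.
    calls_per_instance = 60_000 // duration_ms   # sequential calls per instance per minute
    return concurrency * calls_per_instance

# 60 s / 3 s = 20 calls per instance, x 10 concurrent = 200 calls per minute
print(max_invocations_per_minute(concurrency=10, duration_ms=3000))  # 200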

GCP PubSub retry backoff timing

I have a dead-letter policy configured for my Google Pub/Sub subscription as:
...
dead_letter_policy {
  dead_letter_topic     = foobar
  max_delivery_attempts = x
}
{
  "minimumBackoff": y,
  "maximumBackoff": z
}
...
Plugging in various values, I am not seeing the retries happen at the times I would expect. E.g.:
max_delivery_attempts: 5, minimumBackoff: 10 seconds, maximumBackoff: 300 seconds
Seconds between retries: 15, 17, 20, 29
max_delivery_attempts: 30, minimumBackoff: 5 seconds, maximumBackoff: 600 seconds
Seconds between retries: 12, 9, 9, 14, 15, 18, 24, 24, 45, 44, 58, 81, 82, 120, ..., and so on.
From this testing, it seems you need a high max attempts value to get actual exponential back-off? For my first data set, I would have expected the time between my last 2 attempts to be closer to 300. From my second data set, it seems this would only be the case if max attempts is set to the maximum value of 100. Is this assumption correct?
(also, this is a pull subscription)
Thanks
Related answer: How does the exponential backoff configured in Google Pub/Sub's RetryPolicy work?
The exponential backoff based on minimum_backoff and maximum_backoff roughly follows the equation mentioned in the question above (with a randomization factor). The relevant factors for your question are:
Maximum backoff is not part of the calculation when deriving the backoff interval. The maximum backoff setting is used to ensure we do not back off more than configured, even if the backoff interval computation results in a larger value. The rate of growth in interval duration still increases with each retry, as visible from your test.
The multiplication factor responsible for the growth in backoff interval is a system-internal detail, and clients should not depend on it.
If you want the maximum backoff to happen before the dead letter event occurs, I suggest starting with a higher minimum backoff configuration.
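As a rough illustration of that behaviour, the sketch below simulates exponential backoff with a randomized multiplier. The multiplier (2.0) and jitter range are made-up values for illustration only; as noted above, Pub/Sub's actual growth factor is internal and undocumented. The point is that maximum_backoff only caps each interval, so with few delivery attempts and a small minimum backoff the retries may never get near the cap.

import random

def simulate_backoff(min_backoff, max_backoff, attempts, multiplier=2.0, jitter=0.25):
    # Illustrative exponential backoff with jitter (not Pub/Sub's real algorithm).
    # Each interval grows from min_backoff by `multiplier`, gets +/- `jitter`
    # randomization, and is then capped at max_backoff.
    intervals = []
    base = min_backoff
    for _ in range(attempts - 1):          # the first delivery has no backoff
        randomized = base * random.uniform(1 - jitter, 1 + jitter)
        intervals.append(min(randomized, max_backoff))
        base = min(base * multiplier, max_backoff)
    return intervals

# 5 attempts, 10 s minimum, 300 s maximum: the cap is never reached,
# similar to the first data set in the question.
print([round(i) for i in simulate_backoff(10, 300, 5)])
# 30 attempts, 5 s minimum, 600 s maximum: later intervals hit the cap.
print([round(i) for i in simulate_backoff(5, 600, 30)])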

Cost efficiency for AWS Lambda Provisioned Concurrency

I'm running a system with lots of AWS Lambda functions. Our load is not huge; let's say a function gets 100k invocations per month.
For quite a few of the Lambda functions, we're using warm-up plugins to reduce cold start times. This is effectively a CloudWatch event triggered every 5 minutes to invoke the function with a dummy event, which is ignored but keeps that Lambda VM running. In most cases, this means one instance will be "warm".
I'm now looking at the native solution to the cold start problem, AWS Lambda Provisioned Concurrency, which at first glance looks awesome. But when I start calculating, either I'm missing something, or this will simply be a large cost increase for a system with only medium load.
Example, with prices from the eu-west-1 region as of 2020-09-16:
Consider function RAM M (GB), average execution time t (s), requests per month N (in millions), and provisioned concurrency C ("number of instances"):
Without provisioned concurrency
Cost per month = N⋅(16.6667⋅M⋅t + 0.20)
= $16.87 per million requests    # M = 1 GB, t = 1 s
= $1.87 per million requests     # M = 1 GB, t = 100 ms
= $1.69 per 100,000 requests     # M = 1 GB, t = 1 s
= $1686.67 per 100M requests     # M = 1 GB, t = 1 s
With provisioned concurrency
Cost per month = C⋅0.000004646⋅M⋅60⋅60⋅24⋅30 + N⋅(10.8407⋅M⋅t + 0.20) = 12.04⋅C⋅M + N⋅(10.84⋅M⋅t + 0.20)
= $12.04 + $11.04 = $23.08 per million requests      # M = 1 GB, t = 1 s, C = 1
= $12.04 + $1.28 = $13.32 per million requests       # M = 1 GB, t = 100 ms, C = 1
= $12.04 + $1.10 = $13.14 per 100,000 requests       # M = 1 GB, t = 1 s, C = 1
= $12.04 + $1104.07 = $1116.11 per 100M requests     # M = 1 GB, t = 1 s, C = 1
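As a sanity check on those figures, here is a small Python sketch of the two cost formulas, using the 2020 eu-west-1 prices quoted above ($0.0000166667 per GB-second on demand, $0.000004646 per GB-second of provisioned concurrency, $0.0000108407 per GB-second of duration with provisioned concurrency, and $0.20 per million requests):

SECONDS_PER_MONTH = 60 * 60 * 24 * 30

# eu-west-1 prices as of 2020-09-16, taken from the question
ON_DEMAND_GB_S       = 0.0000166667  # duration price without provisioned concurrency
PROVISIONED_GB_S     = 0.000004646   # price per GB-second of provisioned concurrency
PROVISIONED_DUR_GB_S = 0.0000108407  # duration price with provisioned concurrency
PER_REQUEST          = 0.0000002     # $0.20 per million requests

def cost_on_demand(million_req, mem_gb, dur_s):
    n = million_req * 1_000_000
    return n * (ON_DEMAND_GB_S * mem_gb * dur_s + PER_REQUEST)

def cost_provisioned(million_req, mem_gb, dur_s, concurrency):
    n = million_req * 1_000_000
    static = concurrency * PROVISIONED_GB_S * mem_gb * SECONDS_PER_MONTH
    return static + n * (PROVISIONED_DUR_GB_S * mem_gb * dur_s + PER_REQUEST)

# Reproduces the figures above: ~$16.87 vs ~$23.08 per million requests (M = 1 GB, t = 1 s, C = 1)
print(round(cost_on_demand(1, 1, 1), 2), round(cost_provisioned(1, 1, 1, 1), 2))
# ~$1686.67 vs ~$1116.11 at 100M requests per month (C = 1)
print(round(cost_on_demand(100, 1, 1), 2), round(cost_provisioned(100, 1, 1, 1), 2))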
There are obviously several factors to take into account here:
How many requests per month is expected? (N)
How much RAM does the function need? (M)
What is the average execution time? (t)
What is the traffic pattern: a few small bursts or even traffic? (This might mean C is low, high, or must be changed dynamically to follow peak hours, etc.)
In the end, though, my initial conclusion is that Provisioned Concurrency will only be a good deal if you have a lot of traffic. In my example, at 100M requests per month there's a substantial saving (however, at that traffic you would likely need a higher value of C as well; break-even at about C = 30). Even with C = 1, you need almost a million requests per month to cover the static costs.
Now, there are obviously other benefits of using the native solution (no ugly dummy events, no log pollution, a flexible number of warm instances, ...), and there are also probably other hidden costs of custom solutions (CloudWatch events, additional CloudWatch logging for dummy invocations, etc.), but I think they are pretty much negligible.
Is my analysis fairly correct or am I missing something?
I think of provisioned concurrency as something that eliminates cold starts rather than something that saves money. There is a bit of saving if you can keep the Lambda function running at 100% utilization all the time, but as you've calculated, it becomes quite expensive when the provisioned capacity sits idle.

How to get current time in CP when using Intervals for scheduling

I am trying to schedule tasks on different machines. These machines have dynamic available resources, for example:
Machine 1: max capacity 4 cores.
At T=t1 => available CPU = 2 cores;
At T=t2 => available CPU = 1 core;
Each interval has a fixed duration (e.g. 1 minute).
So in CPLEX, I have a cumulFunction to sum the resources used on a machine:
cumulFunction cumuls[host in Hosts] =
  sum(job in Jobs) pulse(itvs[job][host], requests[job]);
Now the problem is in the constraint:
forall(host in Hosts) {
  cumuls[host] <= ftoi(available_res_function[host](**<<Current Period>>**));
}
I can't find a way to get the current period so that I can compare the resources used against those available in that specific period.
PS: available_res_function is a stepFunction of the available resources.
Thank you so much for your help.
What you can do is add a set of pulses to your cumul function.
For instance, in the sched_cumul example you could change:
cumulFunction workersUsage =
  sum(h in Houses, t in TaskNames) pulse(itvs[h][t], 1);
into
cumulFunction workersUsage =
  sum(h in Houses, t in TaskNames) pulse(itvs[h][t], 1) + pulse(1, 40, 3);
if you want to express that 3 fewer workers are available between times 1 and 40.

SoapUI load test, calculate cnt in variance strategy

I am working with a SoapUI project and I have one question. In the following example I got 505 requests in 5 seconds with thread count = 5. I would like to understand how that count was calculated in this example.
For example, if I want 1000 requests in 1 minute, what settings should I use in the variance strategy?
Regards, Evgeniy
The variance strategy, as the name implies, varies the number of threads over time. Within the specified interval the threads will increase and decrease as per the variance value, thus simulating a realistic real-time load on the target web service.
How the variance is calculated: it is not calculated using the mathematical variance formula; it is just a multiplication. (If threads = 10 and variance = 0.5, then 10 * 0.5 = 5. The threads will be incremented and decremented by 5.)
For example:
Threads = 20
Variance = 0.8
Strategy = variance
Interval = 60
Limit = 60 seconds
The above will vary the thread count by 16 (because 20 * 0.8 = 16); that is, the thread count will increase to 36, decrease to 4, and end with the original 20 within the 60 seconds.
If your requirement is to start with 500 threads and hit 1000, set your variance to 2, and so on.
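A minimal sketch of that multiplication (illustrative only; SoapUI itself performs the actual ramping during the run):

def variance_range(threads, variance):
    # SoapUI variance strategy: the thread count is incremented and
    # decremented by threads * variance around the base value.
    swing = int(threads * variance)
    return threads - swing, threads + swing

# Threads = 20, variance = 0.8 -> swing of 16, so the count
# moves between 4 and 36 and returns to 20 within the interval.
print(variance_range(20, 0.8))   # (4, 36)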
Reference link:
Check the third bullet, "Simulating different types of load", on the SoapUI site.
Book for reference:
Web Services Testing with soapUI by Charitha Kankanamge