Running Concurrent thread execution - concurrency

I am using the JMeter tool for load testing. I want all threads to execute simultaneously (at the same time).
So I configured:
Number of Threads: 20
Ramp-up period: empty
Loop Count: 1
Then I ran JMeter.
After getting the result, I looked at it in View Results in Table.
The start times shown there indicate that the threads executed one by one, not simultaneously. I have included the image as well.
Could you tell me how to run the threads concurrently?
Starting Time for threads.

Just add a Synchronizing Timer to your Test Plan. It holds arriving threads until the configured number of them are waiting, then releases them all at once.

Related

How to execute code asynchronously without creating new threads

I am using Qt SQL, which is a blocking API, so I have to execute the SQL code in a separate thread (QtConcurrent::run) and return a QFuture. Something like this:
QFuture<QString> future = QtConcurrent::run( []() { /* some SQL code */ } );
auto watcher = new QFutureWatcher<QString>();
watcher->setFuture(future);
QObject::connect(watcher, &QFutureWatcher<QString>::finished,
    [future]() { /* code to execute after the future has finished */ });
But I have learned that threading is costly: every context switch is expensive. So it looks like a waste of CPU to create a new thread just to wait for a result from the MySQL server. My application is going to run on a single-core virtual machine on Google Cloud anyway. Is there any way I can execute Qt SQL code asynchronously without creating a new thread?
I was also wondering how other APIs like Qt Networking implement an asynchronous API without creating a new thread. Or am I wrong, and they do create a thread under the hood?
Many threaded applications run on a single core. Flushing the cache to run on a separate core is also expensive. Use the right tool for the job; there's nothing wrong with threads.
That said, if you really want to run on a single thread, use a work queue to keep track of async task progress. The libevent library does this for you, but there are others. You just run a polling loop, adding work onto the queue and executing callbacks when a task needs attention or completes.
By using QtConcurrent::run you have already solved one problem, the cost of creating a thread, because it uses a thread pool.
When it comes to context switches, first try to measure them with perf stat, and then optimize depending on the situation. If these are just simple queries, then the vast majority of context switches probably comes from the system, not your app.
Doing something asynchronously means that you can start a task and move on with your current code without waiting for results. But usually such a task, e.g. an SQL query, will spawn a thread/process or make a request to the OS.
Qt Networking, for example, issues a read request and the OS signals (via epoll) when data arrives. But on a single core the OS will interrupt your thread anyway.
If you have many small queries, you could try to optimize them into fewer queries, or do caching.

Dynamically Evaluate load and create Threads depending on machine performance

Hi, I have started working on a project where I use parallel computing to separate job loads among multiple machines, such as hashing and other forms of mathematical calculation. I am using C++.
It runs on a master/slave (or server/client, if you prefer) model, where every client connects to the server and waits for a job. The server can then take a job and split it depending on the number of clients:
1000 jobs -- > 3 clients
IE: client 1 --> calculate(0 to 333)
Client 2 --> calculate(334 to 666)
Client 3 --> calculate(667 to 999)
I want to further improve speed by creating multiple threads on every running client. But since the machines are almost certainly not going to have the same hardware, I cannot arbitrarily decide on a number of threads to run on every client.
I would like to know if any of you knows a way to evaluate the load a thread puts on the CPU and extrapolate the number of threads that can run concurrently on the machine.
There are two ways I see of doing this:
I start threads one by one, evaluating the CPU load every time and stopping when I reach a certain ceiling (50%, 75%, etc.), but this has the flaw that I would have to stop and re-split the job every time I start a new thread.
(And this is the more complex option:)
I run some kind of test thread, calculate its impact on the CPU base load, extrapolate the number of threads that can run on the machine, and then start threads and split the jobs accordingly.
Any idea or pointer is welcome. Thanks in advance!

Performance testing using JMeter with concurrent users

I have to do performance testing in JMeter with concurrent users. I am using the Synchronizing Timer and have set the ramp-up time to zero. When I check the report, it shows the start times, and the milliseconds vary across the users. Is this correct or not? If it's not correct, what should I do?
Looking at your screenshot, it appears that you have the Synchronizing Timer set to 600. If that's the case, then yes, it is correct. JMeter will release the threads "at the same time", but it will never be the exact same instant.
I would recommend monitoring the server side of your application to see how many concurrent users are in the system.

Running a batch script from a C++ application and checking whether it has an infinite loop

I'm launching a .bat file with system() from my software, and it can go into an infinite loop.
The question is: how can I detect this in my C++ application?
I'm using VS2010.
Thanks
You can create a thread and let that thread run your batch file, then set a timer with a timeout in the main thread to check whether the thread has finished. If it takes longer than the timeout period, stop it and assume the script is in an infinite loop.
I don't see any other way, because you practically can't inspect the batch file itself.
For threads, you can use Boost threads or Qt threads; there are many other thread libraries as well.

2 different task_group instances not running tasks in parallel

I wanted to replace the use of normal threads with the task_group class from PPL, but I ran into the following problem:
I have a class A with a task_group member.
I create 2 different instances of class A,
start a task in the task_group of the first A instance (using run), and
after a few seconds start a task in the task_group of the second A instance.
I expected the two tasks to run in parallel, but the second task waits for the first task to finish before starting.
This happens only in my application, where the tasks are started from a static function. I tried the same scenario in a test application, and there the tasks run in parallel correctly.
After spending several hours trying to figure this out, I switched back to normal threads.
Does anyone know why the Concurrency Runtime behaves this way, or how I can avoid it?
EDIT
The problem was that it was running on a single-core CPU, and the Concurrency Runtime optimizes for throughput. I wonder whether the Microsoft Parallel Patterns Library has the concept of an active object, or something along those lines, so that you can specify that the task you are about to launch is to be executed in parallel with the thread you start it from...
The response can be found here: http://social.msdn.microsoft.com/Forums/en/parallelcppnative/thread/85a84373-4c3d-4862-bff3-9a21ffe82493
For single-core machines, this is the expected default behavior. It can be changed.
By default, the number of tasks that can run in parallel equals the number of hardware threads (number of cores). This improves the raw throughput and efficiency of completing tasks.
However, there are a number of situations where a developer wants many tasks running in parallel regardless of the number of cores. In that case you have two options:
Oversubscribe locally.
In your example above, you would use:
void lengthyTask()
{
    concurrency::Context::Oversubscribe(true);
    // ...do a lengthy task (or a blocking task)
    concurrency::Context::Oversubscribe(false);
}
Oversubscribe the scheduler when you start the application:
SchedulerPolicy policy(1, MaxConcurrency, GetProcessorCount() * 2);
Scheduler::SetDefaultSchedulerPolicy(policy);