Recommended pattern for a queue accessed by multiple threads...what should the worker thread do? - c++

I have a queue of objects that is being added to by a thread A. Thread B is removing objects from the queue and processing them. There may be many threads A and many threads B.
I am using a mutex when the queue is being "push"ed to, and also when it is "front"ed and "pop"ped from, as shown in the pseudo-code below:
Thread A calls this to add to the queue:
void Add(object)
{
    mutex->lock();
    queue.push(object);
    mutex->unlock();
}
Thread B processes the queue as follows:
object GetNextTargetToWorkOn()
{
    object obj = NULL;
    mutex->lock();
    if (!queue.empty())
    {
        obj = queue.front();
        queue.pop();
    }
    mutex->unlock();
    return obj;
}
void DoTheWork(int param)
{
    while (true)
    {
        object obj;
        while ((obj = GetNextTargetToWorkOn()) == NULL)
            boost::thread::sleep(100ms); // sleep a very short time
        // do something with obj
    }
}
What bothers me is the while / get-object / sleep-if-no-object pattern. While there are objects to process it is fine, but while the thread is waiting for work there are two problems:
a) The while loop spins, consuming resources
b) the sleep means wasted time if a new object comes in to be processed
Is there a better pattern to achieve the same thing?

You're using spin-waiting; a better design is to use a monitor. You can read more about the details on Wikipedia.
And a cross-platform solution using std::condition_variable with a good example can be found here.
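As a concrete illustration, here is a minimal sketch of that monitor approach using std::condition_variable, shaped after the question's pseudo-code (the object type and the function names are placeholders, not the poster's real types):

#include <condition_variable>
#include <mutex>
#include <queue>

struct object { /* placeholder for the real work item */ };

std::queue<object*> workQueue;
std::mutex queueMutex;
std::condition_variable queueCv;

void Add(object* item)              // called by threads A
{
    {
        std::lock_guard<std::mutex> lock(queueMutex);
        workQueue.push(item);
    }
    queueCv.notify_one();           // wake one waiting worker
}

object* GetNextTargetToWorkOn()     // called by threads B; blocks until work exists
{
    std::unique_lock<std::mutex> lock(queueMutex);
    queueCv.wait(lock, [] { return !workQueue.empty(); });
    object* item = workQueue.front();
    workQueue.pop();
    return item;
}

The worker blocks inside wait() instead of polling, so it consumes no CPU while idle and wakes as soon as a producer calls notify_one().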

a) The while loop spins, consuming resources
b) the sleep means wasted time if a new object comes in to be processed
It has been my experience that the sleep you used actually 'fixes' both of these issues.
a) The resources consumed are a small amount of RAM and a remarkably small fraction of the available CPU cycles.
b) Sleep is not wasted time on the OS's I've worked on.
c) Sleep can affect 'reaction time' (aka latency), but has seldom been an issue (outside of interrupts.)
The time spent in sleep is likely to be several orders of magnitude longer than the time spent in this simple loop. i.e. It is not significant.
IMHO - this is an ok implementation of the 'good neighbor' policy of relinquishing the processor as soon as possible.
On my desktop, AMD64 Dual Core, Ubuntu 15.04, a semaphore enforced context switch takes ~13 us.
100 ms ==> 100,000 us .. that is 4 orders of magnitude difference, i.e. VERY insignificant.
In the 5 OS's (Linux, vxWorks, OSE, and several other embedded-system OS's) I have worked on, sleep (or its equivalent) is the correct way to relinquish the processor, so that it is not blocked from running another thread while the one thread is sleeping.
Note: it is conceivable that some OS's sleep might not relinquish the processor, so you should always confirm; I've not found one. But I admit I have not looked at / worked much on Windows.

Related

Execute Functions on an Interval Basis C++

So I have a Kinect program that has three main functions that collect data and save it. I want one of these functions to execute as much as possible, while the other two run maybe 10 times every second.
while (1)
{
    ...
    // multi-threading to make sure color and depth events are aligned -> get skeletal data
    if (WaitForSingleObject(colorEvent, 0) == 0 && WaitForSingleObject(depthEvent, 0) == 0)
    {
        std::thread first(getColorImage, std::ref(colorEvent), std::ref(colorStreamHandle), std::ref(colorImage));
        std::thread second(getDepthImage, std::ref(depthEvent), std::ref(depthStreamHandle), std::ref(depthImage));

        if (WaitForSingleObject(skeletonEvent, INFINITE) == 0)
        {
            first.join();
            second.join();

            std::thread third(getSkeletonImage, std::ref(skeletonEvent), std::ref(skeletonImage), std::ref(colorImage), std::ref(depthImage), std::ref(myfile));
            third.join();
        }

        //if (check == 1)
        //    check = 2;
    }
}
Currently my threads make all three functions run at the exact same time, but this slows down my computer a lot, and I only need to run 'getColorImage' and 'getDepthImage' maybe 5-10 times/second, whereas 'getSkeletonImage' I would want to run as much as possible.
I want 'getSkeletonImage' to run at max frequency (~30 times/second through the while loop) and 'getColorImage' and 'getDepthImage' to run time-synchronized (~5-10 times/second through the while loop).
What is a way I can do this? I am already using threads, but I need one to run consistently, and then the other two to join in intermittently essentially. Thank you for your help.
Currently, your main loop is creating the threads every iteration, which suggests each thread function runs once to completion. That introduces the overhead of creating and destroying threads every time.
Personally, I wouldn't bother with threads at all. Instead, in the main thread I'd do
void runSkeletonEvent(int n)
{
    for (int i = 0; i < n; ++i)
    {
        // wait the required time (i.e. until the next multiple of 1/30 second)
        skeletonEvent();
    }
}

// and, in your main function ....
while (termination_condition_not_met)
{
    runSkeletonEvent(3);
    colorEvent();

    runSkeletonEvent(3);
    depthEvent();
}
This interleaves the events, so skeletonEvent() runs six times for every time depthEvent() and colorEvent() are run. Just adjust the numbers as needed to get required behaviour.
You'll need to design the code for all the events so they don't run over time (if they do, all subsequent events will be delayed - there is no means to stop that).
The problem you'll then need to resolve is how to wait for the time to fire the skeleton event. A process of retrieving clock time, calculating how long to wait, and sleeping for that interval will do it. By sleeping (the thread yielding its time slice) your program will also be a bit better mannered (e.g. it won't be starving other processes of processor time).
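For the waiting step, a sketch along these lines would do (assuming C++11 <chrono> is available; the function and variable names here are illustrative, not from the question):

#include <chrono>
#include <thread>

// Sleep until the next multiple of 1/30 second, measured from a fixed start point.
// Because the deadline is absolute, jitter in one iteration does not accumulate.
void waitForNextTick(std::chrono::steady_clock::time_point start, long long& tickCount)
{
    using namespace std::chrono;
    const auto period = duration_cast<steady_clock::duration>(duration<double>(1.0 / 30.0));
    ++tickCount;
    std::this_thread::sleep_until(start + tickCount * period);
}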
One advantage is that, if data is to be shared between the "events" (e.g. all of the events modify some global data) there is no need for synchronisation, because the looping above guarantees that only one "event" accesses shared data at one time.
Note: your usage of WaitForSingleObject() indicates you are using Windows. Windows (except, arguably, CE in a weak sense) is not really a real-time system, so it does not guarantee precise timing. In other words, the actual intervals you achieve will vary.
It is still possible to restructure to use threads. From your description, there is no evidence you really need anything like that, so I'll leave this reply at that.

Many detached boost threads segfault

I'm creating boost threads inside a function with
while (trueNonceQueue.empty() && block.nNonce < std::numeric_limits<uint64_t>::max())
{
    if (block.nNonce % 100000 == 0)
    {
        cout << block.nNonce << endl;
    }

    boost::thread t(CheckNonce, block);
    t.detach();

    block.nNonce++;
}

uint64 trueNonce;
while (trueNonceQueue.pop(trueNonce))
    block.nNonce = trueNonce;
trueNonceQueue was created with boost::lockfree::queue<uint64> trueNonceQueue(128); in the global scope.
This is the function being threaded
void CheckNonce(CBlock block)
{
    if (block.CheckBlockSilently())
    {
        while (!trueNonceQueue.push(block.nNonce))
            ;
    }
}
I noticed that after it crashed my swap had grown marginally, which never happens unless I use a poor technique like this and leak memory; otherwise my memory usage consistently stays below 2 GB. I'm running Cinnamon on Ubuntu desktop with Chrome and a few other small programs open. I was not using the computer at the time this was running.
The segfault occurred after the 949900000th iteration. How can this be corrected?
CheckNonce execution time
I added the same modulus to CheckNonce to see if there was any lag. So far, there is none.
I will update if the detached threads start to lag behind the spawning while.
You should use a Thread Pool instead. This means spawning just enough threads to get work done without undue contention (for example you might spawn something like N-2 threads on an N-core machine, but perhaps more if some work may block on I/O).
There is not exactly a thread pool in Boost, but there are the parts you need to build one. See here for some ideas: boost::threadpool::pool vs. boost::thread_group
Or you can use a more ready-made solution like this (though it is a bit dated and perhaps unmaintained, not sure): http://threadpool.sourceforge.net/
Then the idea is to spawn the N threads, and then in your loop for each task, just "post" the task to the thread pool, where the next available worker thread will pick it up.
By doing this, you will avoid many problems, such as running out of thread stack space, avoiding inefficient resource contention (look up the "thundering herd problem"), and you will be able to easily tune the aggressiveness with which you use multiple cores on any system.
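For reference, here is a rough sketch of such a pool built from the Boost parts mentioned in the linked answer (io_service plus thread_group; newer Boost spells io_service as io_context). This is a sketch under those assumptions, not a drop-in replacement for the code above:

#include <boost/asio/io_service.hpp>
#include <boost/bind.hpp>
#include <boost/thread/thread.hpp>
#include <memory>

boost::asio::io_service io;
std::unique_ptr<boost::asio::io_service::work> keepAlive(
    new boost::asio::io_service::work(io));   // keeps run() from returning while idle
boost::thread_group workers;

void poolWorker() { io.run(); }                // each worker just runs posted jobs

void startPool(std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i)
        workers.create_thread(&poolWorker);
}

void stopPool()
{
    keepAlive.reset();                         // let run() return once queued jobs finish
    workers.join_all();
}

The spawning loop would then call io.post(boost::bind(CheckNonce, block)) instead of creating and detaching a thread per nonce, so at most n jobs run concurrently and the rest wait in the pool's queue.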

Porting threads to windows. Critical sections are very slow

I'm porting some code to Windows and found threading to be extremely slow. The task takes 300 seconds on Windows (with two Xeon E5-2670s, 8 cores each at 2.6 GHz = 16 cores) and 3.5 seconds on Linux (Xeon E5-1607, 4 cores at 3 GHz). Using VS2012 Express.
I've got 32 threads all calling EnterCriticalSection(), popping an 80-byte job off a std::stack, calling LeaveCriticalSection() and doing some work (250k jobs in total).
Before and after every critical section call I print the thread ID and current time.
The wait time for a single thread's lock is ~160ms
To pop the job off the stack takes ~3ms
Calling leave takes ~3ms
The job takes ~1ms
(roughly same for Debug/Release, Debug takes a little longer. I'd love to be able to properly profile the code :P)
Commenting out the job call makes the whole process take 2 seconds (still more than linux).
I've tried both QueryPerformanceCounter and timeGetTime(); both give approximately the same result.
AFAIK the job never makes any sync calls, but I can't explain the slowdown unless it does.
I have no idea why copying from a stack and calling pop takes so long.
Another very confusing thing is why a call to leave() takes so long.
Can anyone speculate on why it's running so slowly?
I wouldn't have thought the difference in processor would give a 100x performance difference, but could it be at all related to having dual CPUs (needing to sync between separate CPU sockets rather than between cores on one die)?
By the way, I'm aware of std::thread but want my library code to work with pre C++11.
edit
// in a while(hasJobs) loop...
EVENT qwe1 = {"lock", timeGetTime(), id};
events.push_back(qwe1);

scene->jobMutex.lock();

EVENT qwe2 = {"getjob", timeGetTime(), id};
events.push_back(qwe2);

hasJobs = !scene->jobs.empty();
if (hasJobs)
{
    job = scene->jobs.front();
    scene->jobs.pop();
}

EVENT qwe3 = {"gotjob", timeGetTime(), id};
events.push_back(qwe3);

scene->jobMutex.unlock();

EVENT qwe4 = {"unlock", timeGetTime(), id};
events.push_back(qwe4);

if (hasJobs)
    scene->performJob(job);
and the mutex class, with linux #ifdef stuff removed...
CRITICAL_SECTION mutex;
...

Mutex::Mutex()
{
    InitializeCriticalSection(&mutex);
}

Mutex::~Mutex()
{
    DeleteCriticalSection(&mutex);
}

void Mutex::lock()
{
    EnterCriticalSection(&mutex);
}

void Mutex::unlock()
{
    LeaveCriticalSection(&mutex);
}
Windows' CRITICAL_SECTION spins in a tight loop when you first enter it. It does not suspend the thread that called EnterCriticalSection unless a substantial period has elapsed in the spin loop. So having 32 threads contending for the same critical section will burn and waste a lot of CPU cycles. Try a mutex instead (see CreateMutex).
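For what it's worth, a kernel-mutex version of the poster's Mutex wrapper would look roughly like this (a sketch only; note that every lock/unlock then becomes a kernel call, which trades the spinning for per-call overhead):

#include <windows.h>

class Mutex
{
    HANDLE handle;
public:
    Mutex()  { handle = CreateMutex(NULL, FALSE, NULL); }    // unnamed, not initially owned
    ~Mutex() { CloseHandle(handle); }

    void lock()   { WaitForSingleObject(handle, INFINITE); }
    void unlock() { ReleaseMutex(handle); }
};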
It seems like your Windows threads are facing severe contention. They seem totally serialized. You have about 7 ms of total processing time in your critical section and 32 threads. If all the threads are queued up on the lock, the last thread in the queue wouldn't get to run until after sleeping about 217 ms. This is not too far off your 160 ms observed wait time.
So, if the threads have nothing else to do than to enter the critical section, do work, then leave the critical section, this is the behavior I would expect.
Try to characterize the Linux profiling behavior, and see whether the program behavior is really an apples-to-apples comparison.

C++11 Thread waiting behaviour: std::this_thread::yield() vs. std::this_thread::sleep_for( std::chrono::milliseconds(1) )

I was told when writing Microsoft-specific C++ code that writing Sleep(1) is much better than Sleep(0) for spinlocking, because Sleep(0) will use more CPU time; moreover, it only yields if there is another equal-priority thread waiting to run.
However, with the C++11 thread library, there isn't much documentation (at least that I've been able to find) about the effects of std::this_thread::yield() vs. std::this_thread::sleep_for( std::chrono::milliseconds(1) ); the second is certainly more verbose, but are they both equally efficient for a spinlock, or does it suffer from potentially the same gotchas that affected Sleep(0) vs. Sleep(1)?
An example loop where either std::this_thread::yield() or std::this_thread::sleep_for( std::chrono::milliseconds(1) ) would be acceptable:
void SpinLock( const bool& bSomeCondition )
{
    // Wait for some condition to be satisfied
    while( !bSomeCondition )
    {
        /* Either std::this_thread::yield() or
           std::this_thread::sleep_for( std::chrono::milliseconds(1) )
           is acceptable here. */
    }

    // Do something!
}
The Standard is somewhat fuzzy here, as a concrete implementation will largely be influenced by the scheduling capabilities of the underlying operating system.
That being said, you can safely assume a few things on any modern OS:
yield will give up the current timeslice and re-insert the thread into the scheduling queue. The amount of time that expires until the thread is executed again is usually entirely dependent upon the scheduler. Note that the Standard speaks of yield as an opportunity for rescheduling. So an implementation is completely free to return from a yield immediately if it desires. A yield will never mark a thread as inactive, so a thread spinning on a yield will always produce a 100% load on one core. If no other threads are ready, you are likely to lose at most the remainder of the current timeslice before you get scheduled again.
sleep_* will block the thread for at least the requested amount of time. An implementation may turn a sleep_for(0) into a yield. The sleep_for(1) on the other hand will send your thread into suspension. Instead of going back to the scheduling queue, the thread goes to a different queue of sleeping threads first. Only after the requested amount of time has passed will the scheduler consider re-inserting the thread into the scheduling queue. The load produced by a small sleep will still be very high. If the requested sleep time is smaller than a system timeslice, you can expect that the thread will only skip one timeslice (that is, one yield to release the active timeslice and then skipping the one afterwards), which will still lead to a cpu load close or even equal to 100% on one core.
A few words about which is better for spin-locking. Spin-locking is a tool of choice when expecting little to no contention on the lock. If in the vast majority of cases you expect the lock to be available, spin-locks are a cheap and valuable solution. However, as soon as you do have contention, spin-locks will cost you. If you are worrying about whether yield or sleep is the better solution here spin-locks are the wrong tool for the job. You should use a mutex instead.
For a spin-lock, the case that you actually have to wait for the lock should be considered exceptional. Therefore it is perfectly fine to just yield here - it expresses the intent clearly and wasting CPU time should never be a concern in the first place.
I just did a test with Visual Studio 2013 on Windows 7, 2.8GHz Intel i7, default release mode optimizations.
sleep_for(nonzero) appears to sleep for a minimum of around one millisecond and takes no CPU resources in a loop like:
for (int k = 0; k < 1000; ++k)
    std::this_thread::sleep_for(std::chrono::nanoseconds(1));
This loop of 1,000 sleeps takes about 1 second if you use 1 nanosecond, 1 microsecond, or 1 millisecond. On the other hand, yield() takes about 0.25 microseconds each but will spin the CPU to 100% for the thread:
for (int k = 0; k < 4,000,000; ++k)   // commas added for clarity
    std::this_thread::yield();
std::this_thread::sleep_for(std::chrono::nanoseconds(0)) seems to be about the same as yield() (test not shown here).
In comparison, locking an atomic_flag for a spinlock takes about 5 nanoseconds. This loop is 1 second:
std::atomic_flag f = ATOMIC_FLAG_INIT;
for (int k = 0; k < 200,000,000; ++k)
    f.test_and_set();
Also, a mutex takes about 50 nanoseconds, 1 second for this loop:
for (int k = 0; k < 20,000,000; ++k)
    std::lock_guard<std::mutex> lock(g_mutex);
Based on this, I probably wouldn't hesitate to put a yield in the spinlock, but I almost certainly wouldn't use sleep_for. If you think your locks will be spinning a lot and you are worried about CPU consumption, I would switch to std::mutex if that's practical in your application. Hopefully, the days of really bad performance of std::mutex on Windows are behind us.
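To make the "yield in the spinlock" option concrete, a minimal sketch might look like this (written with std::atomic<bool> for simplicity; std::atomic_flag, as used in the timing loop above, works the same way):

#include <atomic>
#include <thread>

class SpinLock
{
    std::atomic<bool> locked{false};
public:
    void lock()
    {
        // exchange() returns the previous value; keep yielding while it was already locked
        while (locked.exchange(true, std::memory_order_acquire))
            std::this_thread::yield();
    }
    void unlock()
    {
        locked.store(false, std::memory_order_release);
    }
};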
What you want is probably a condition variable. A condition variable with a conditional wake-up function is typically implemented like what you are writing, except that the sleep or yield inside the loop is replaced by a wait on the condition.
Your code would look like:
std::unique_lock<std::mutex> lck(mtx);
while (!bSomeCondition) {
    cv.wait(lck);
}
Or
std::unique_lock<std::mutex> lck(mtx);
cv.wait(lck, [&bSomeCondition]{ return bSomeCondition; });
(The predicate returns true when the wait should end, i.e. once bSomeCondition has been set, and the flag is captured by reference so the waiter sees updates.)
All you need to do is notify the condition variable on another thread when the data is ready. However, you cannot avoid a lock there if you want to use a condition variable.
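A small sketch of both sides of that hand-off, reusing the answer's names (mtx, cv and bSomeCondition are shared between the two threads):

#include <condition_variable>
#include <mutex>

std::mutex mtx;
std::condition_variable cv;
bool bSomeCondition = false;

void waitingThread()
{
    std::unique_lock<std::mutex> lck(mtx);
    cv.wait(lck, [] { return bSomeCondition; });   // blocks without burning CPU
    // Do something!
}

void signallingThread()
{
    {
        std::lock_guard<std::mutex> lock(mtx);
        bSomeCondition = true;                     // modify the shared state under the lock
    }
    cv.notify_one();                               // wake the waiter
}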
If you are interested in CPU load while using yield: it is very bad, except in one case (your application is the only thing running and you accept that it will basically eat all your resources).
Here is more explanation:
Running yield in a loop will ensure that the CPU releases execution of the thread, but as soon as the system comes back to the thread it will just repeat the yield. This can make the thread consume the full 100% load of a CPU core.
Running sleep() or sleep_for() is also a mistake. It will block thread execution, but you will have something like wait time on the CPU. Don't be mistaken, this IS a working CPU, just at the lowest possible priority. While this somehow works for simple usage examples (a CPU fully loaded on sleep() is half as bad as a fully loaded working processor), if you want to ensure application responsiveness you would want something like this third example, combining the two:
std::chrono::milliseconds duration(1);

while (true)
{
    if (!mutex.try_lock())
    {
        std::this_thread::yield();
        std::this_thread::sleep_for(duration);
        continue;
    }

    return;
}
Something like this will ensure that the CPU yields as soon as this operation is executed, and sleep_for() will also ensure that the CPU waits some time before even trying to execute the next iteration. This time can of course be adjusted dynamically (or statically) to suit your needs.
cheers :)

Overhead due to use of Events

I have a custom thread pool class, that creates some threads that each wait on their own event (signal). When a new job is added to the thread pool, it wakes the first free thread so that it executes the job.
The problem is the following: I have around 1000 loops, each of around 10,000 iterations, to do. These loops must be executed sequentially, but I have 4 CPUs available. What I try to do is to split the 10,000-iteration loops into 4 loops of 2,500 iterations, i.e. one per thread. But I have to wait for the 4 small loops to finish before going on to the next "big" iteration. This means that I can't bundle the jobs.
My problem is that using the thread pool and 4 threads is much slower than doing the jobs sequentially (having one loop executed by a separate thread is much slower than executing it directly in the main thread sequentially).
I'm on Windows, so I create events with CreateEvent() and then wait on one of them using WaitForMultipleObjects(2, handles, false, INFINITE) until the main thread calls SetEvent().
It appears that this whole event thing (along with the synchronization between the threads using critical sections) is pretty expensive !
My question is : is it normal that using events takes "a lot of" time ? If so, is there another mechanism that I could use and that would be less time-expensive ?
Here is some code to illustrate (some relevant parts copied from my thread pool class) :
// thread function
unsigned __stdcall ThreadPool::threadFunction(void* params) {
    // some housekeeping
    HANDLE signals[2];
    signals[0] = waitSignal;
    signals[1] = endSignal;

    do {
        // wait for one of the signals
        waitResult = WaitForMultipleObjects(2, signals, false, INFINITE);

        // try to get the next job parameters;
        if (tp->getNextJob(threadId, data)) {
            // execute job
            void* output = jobFunc(data.params);

            // tell thread pool that we're done and collect output
            tp->collectOutput(data.ID, output);
        }

        tp->threadDone(threadId);
    }
    while (waitResult - WAIT_OBJECT_0 == 0);

    // if we reach this point, endSignal was sent, so we are done !
    return 0;
}
// create all threads
for (int i = 0; i < nbThreads; ++i) {
    threadData data;
    unsigned int threadId = 0;
    char eventName[20];

    sprintf_s(eventName, 20, "WaitSignal_%d", i);

    data.handle = (HANDLE) _beginthreadex(NULL, 0, ThreadPool::threadFunction,
                                          this, CREATE_SUSPENDED, &threadId);
    data.threadId = threadId;
    data.busy = false;
    data.waitSignal = CreateEvent(NULL, true, false, eventName);

    this->threads[threadId] = data;

    // start thread
    ResumeThread(data.handle);
}
// add job
void ThreadPool::addJob(int jobId, void* params) {
    // housekeeping
    EnterCriticalSection(&(this->mutex));

    // first, insert parameters in the list
    this->jobs.push_back(job);

    // then, find the first free thread and wake it
    for (it = this->threads.begin(); it != this->threads.end(); ++it) {
        thread = (threadData) it->second;

        if (!thread.busy) {
            this->threads[thread.threadId].busy = true;
            ++(this->nbActiveThreads);

            // wake thread such that it gets the next params and runs them
            SetEvent(thread.waitSignal);
            break;
        }
    }

    LeaveCriticalSection(&(this->mutex));
}
This looks to me like a producer-consumer pattern, which can be implemented with two semaphores: one guarding against queue overflow, the other signalling a non-empty queue.
You can find some details here.
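Since the question is Windows-specific, a sketch of that two-semaphore scheme with Windows semaphores might look like the following ('slots' counts free space, 'items' counts queued jobs; the names and the void* job type are illustrative, and InitializeCriticalSection(&queueLock) must be called once at startup):

#include <windows.h>
#include <queue>

const LONG kCapacity = 128;
HANDLE slots = CreateSemaphore(NULL, kCapacity, kCapacity, NULL); // free slots left in the queue
HANDLE items = CreateSemaphore(NULL, 0, kCapacity, NULL);         // jobs currently queued
CRITICAL_SECTION queueLock;                                       // protects the queue itself
std::queue<void*> jobs;

void produce(void* job)
{
    WaitForSingleObject(slots, INFINITE);   // blocks if the queue is full
    EnterCriticalSection(&queueLock);
    jobs.push(job);
    LeaveCriticalSection(&queueLock);
    ReleaseSemaphore(items, 1, NULL);       // one more job available
}

void* consume()
{
    WaitForSingleObject(items, INFINITE);   // blocks if the queue is empty
    EnterCriticalSection(&queueLock);
    void* job = jobs.front();
    jobs.pop();
    LeaveCriticalSection(&queueLock);
    ReleaseSemaphore(slots, 1, NULL);       // one more free slot
    return job;
}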
Yes, WaitForMultipleObjects is pretty expensive. If your jobs are small, the synchronization overhead will start to overwhelm the cost of actually doing the job, as you're seeing.
One way to fix this is bundle multiple jobs into one: if you get a "small" job (however you evaluate such things), store it someplace until you have enough small jobs together to make one reasonably-sized job. Then send all of them to a worker thread for processing.
Alternately, instead of using signaling you could use a multiple-reader single-writer queue to store your jobs. In this model, each worker thread tries to grab jobs off the queue. When it finds one, it does the job; if it doesn't, it sleeps for a short period, then wakes up and tries again. This will lower your per-task overhead, but your threads will take up CPU even when there's no work to be done. It all depends on the exact nature of the problem.
Watch out, you are still asking for a next job after the endSignal is emitted.
for (;;) {
    // wait for one of the signals
    waitResult = WaitForMultipleObjects(2, signals, false, INFINITE);

    if (waitResult - WAIT_OBJECT_0 != 0)
        return 0;

    //....
}
Since you say that it is much slower in parallel than in sequential execution, I assume that your processing time for your internal 2500-iteration loop is tiny (in the few-microsecond range). Then there is not much you can do except review your algorithm to split larger chunks of processing; OpenMP won't help, and neither will any other synchronization technique, because they all fundamentally rely on events (spin loops do not qualify).
On the other hand, if your processing time of the 2500 loop iterations is larger than 100 microseconds (on current PCs), you might be running into limitations of the hardware. If your processing uses a lot of memory bandwidth, splitting it across four processors will not give you more bandwidth; it will actually give you less because of collisions. You could also be running into problems of cache cycling, where each of your top 1000 iterations will flush and reload the cache of the 4 cores. Then there is no single solution, and depending on your target hardware, there may be none.
If you are just parallelizing loops and using VS 2008, I'd suggest looking at OpenMP. If you're using Visual Studio 2010 beta 1, I'd suggest looking at the Parallel Patterns Library, particularly the "parallel for" / "parallel for each" APIs or the "task group" class, because these will likely do what you're attempting to do, only with less code.
Regarding your question about performance, here it really depends. You'll need to look at how much work you're scheduling during your iterations and what the costs are. WaitForMultipleObjects can be quite expensive if you hit it a lot and your work is small which is why I suggest using an implementation already built. You also need to ensure that you aren't running in debug mode, under a debugger and that the tasks themselves aren't blocking on a lock, I/O or memory allocation, and you aren't hitting false sharing. Each of these has the potential to destroy scalability.
I'd suggest looking at this under a profiler like xperf, the F1 profiler in Visual Studio 2010 beta 1 (it has 2 new concurrency modes which help see contention), or Intel's VTune.
You could also share the code that you're running in the tasks, so folks could get a better idea of what you're doing, because the answer I always get with performance issues is first "it depends" and second, "have you profiled it."
Good Luck
-Rick
It shouldn't be that expensive, but if your job takes hardly any time at all, then the overhead of the threads and sync objects will become significant. Thread pools like this work much better for longer-processing jobs or for those that use a lot of IO instead of CPU resources. If you are CPU-bound when processing a job, ensure you only have 1 thread per CPU.
There may be other issues, how does getNextJob get its data to process? If there's a large amount of data copying, then you've increased your overhead significantly again.
I would optimise it by letting each thread keep pulling jobs off the queue until the queue is empty. That way, you can pass a hundred jobs to the thread pool and the sync objects will be used just the once to kick off the thread. I'd also store the jobs in a queue and pass a pointer, reference or iterator to them to the thread instead of copying the data.
The context switching between threads can be expensive too. It is interesting in some cases to develop a framework you can use to process your jobs sequentially with one thread or with multiple threads. This way you can have the best of the two worlds.
By the way, what is your question exactly ? I will be able to answer more precisely with a more precise question :)
EDIT:
The events part can consume more than your processing in some cases, but it should not be that expensive unless your processing is really fast. In that case, switching between threads is expensive too, hence the first part of my answer about doing things sequentially ...
You should look for inter-threads synchronisation bottlenecks. You can trace threads waiting times to begin with ...
EDIT: After more hints ...
If I guess correctly, your problem is how to efficiently use all your computer's cores/processors to parallelize some essentially sequential processing.
Say you have 4 cores and 10000 loops to compute as in your example (in a comment). You said that you need to wait for the 4 threads to end before going on. Then you can simplify your synchronisation process. You just need to give your four threads the nth, nth+1, nth+2, nth+3 loops, wait for the four threads to complete, then go on. You should use a rendezvous or barrier (a synchronization mechanism that waits for n threads to complete). Boost has such a mechanism. You can look at the Windows implementation for efficiency. Your thread pool is not really suited to the task. The search for an available thread in a critical section is what is killing your CPU time, not the event part.
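As an illustration, here is a rough sketch of that barrier idea with boost::barrier: four persistent worker threads each take every 4th iteration of the inner loop and all meet at the barrier before the outer loop advances (outerCount, innerCount and doIteration are placeholder names, not from the question):

#include <boost/bind.hpp>
#include <boost/thread/barrier.hpp>
#include <boost/thread/thread.hpp>

const int kThreads   = 4;
const int outerCount = 1000;
const int innerCount = 10000;
boost::barrier rendezvous(kThreads);

void doIteration(int outer, int inner)
{
    // placeholder for the real per-iteration work
    (void)outer; (void)inner;
}

void worker(int id)
{
    for (int outer = 0; outer < outerCount; ++outer)
    {
        for (int inner = id; inner < innerCount; inner += kThreads)
            doIteration(outer, inner);

        rendezvous.wait();   // nobody starts outer+1 until all 4 have finished outer
    }
}

int main()
{
    boost::thread_group threads;
    for (int id = 0; id < kThreads; ++id)
        threads.create_thread(boost::bind(&worker, id));
    threads.join_all();
}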
It appears that this whole event thing (along with the synchronization between the threads using critical sections) is pretty expensive !
"Expensive" is a relative term. Are jets expensive? Are cars? or bicycles... shoes...?
In this case, the question is: are events "expensive" relative to the time taken for JobFunction to execute? It would help to publish some absolute figures: How long does the process take when "unthreaded"? Is it months, or a few femtoseconds?
What happens to the time as you increase the threadpool size? Try a pool size of 1, then 2 then 4, etc.
Also, as you've had some issues with thread pools here in the past, I'd suggest some debug code to count the number of times your thread function is actually invoked... does it match what you expect?
Picking a figure out of the air (without knowing anything about your target system, and assuming you're not doing anything 'huge' in code you haven't shown), I'd expect the "event overhead" of each "job" to be measured in microseconds. Maybe a hundred or so. If the time taken to perform the algorithm in JobFunction is not significantly MORE than this time, then your threads are likely to cost you time rather than save it.
As mentioned previously, the amount of overhead added by threading depends on the relative amount of time taken to do the "jobs" that you defined. So it is important to find a balance in the size of the work chunks that minimizes the number of pieces but does not leave processors idle waiting for the last group of computations to complete.
Your coding approach has increased the amount of overhead work by actively looking for an idle thread to supply with new work. The operating system is already keeping track of that and doing it a lot more efficiently. Also, your function ThreadPool::addJob() may find that all of the threads are in use and be unable to delegate the work, but it does not provide any return code related to that issue. If you are not checking for this condition in some way and are not noticing errors in the results, it means that there are always idle processors. I would suggest reorganizing the code so that addJob() does what it is named for -- adds a job ONLY (without finding or even caring who does the job) -- while each worker thread actively gets new work when it is done with its existing work.