Multithreading using threadpool - c++

I'm currently using the Boost threadpool with the number of threads equal to the number of cores. I have scheduled, say, 10 tasks using the pool's schedule function. For example,
suppose I have the function
void my_fun(std::vector<double>* my_vec){
    // Do something here
}
The argument 'my_vec' here is just used to do some temporary calculations. The main reason I pass it to the function is that I would like to reuse this vector when I call the function again.
Currently, I have the following:
// Create a vector of 10 vectors called my_vecs
// Create threadpool
boost::threadpool::pool tp(num_threads);
// Schedule tasks
for (int m = 0; m < 10; m++){
    tp.schedule(boost::bind(my_fun, &my_vecs.at(m)));
}
Here is my problem: I would like to replace the vector of 10 vectors with only 2 vectors. If I want to schedule 10 tasks and I have 2 cores, a maximum of 2 threads (tasks) will be running at any time. So I only want to use two vectors (one assigned to each thread) and reuse them to carry out my 10 tasks. How can I do this?
I hope this is clear. Thank You!

Probably boost::thread_specific_ptr is what you need. Below is how you may use it in your function:
#include <boost/thread/tss.hpp>

boost::thread_specific_ptr<std::vector<double> > tls_vec;

void my_fun()
{
    std::vector<double>* my_vec = tls_vec.get();
    if( !my_vec ) {
        my_vec = new std::vector<double>();
        tls_vec.reset(my_vec);
    }
    // Do something here with my_vec
}
It will reuse vector instances between tasks scheduled to the same thread. There might be more than 2 instances if there are more threads in the pool, but due to preemption mentioned in other answers you really need an instance per running thread, not per core.
You should not need to delete vector instances stored in thread_specific_ptr; those will be automatically destroyed when corresponding threads finish.
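For completeness, here is a rough sketch of how the scheduling side might then look, assuming the unofficial threadpool library from threadpool.sourceforge.net that the question uses. Each pool thread lazily creates its own vector on first use, so a 2-thread pool ends up with at most 2 vectors across all 10 tasks:
#include "threadpool.hpp"   // the unofficial Boost-based threadpool library

int main() {
    boost::threadpool::pool tp(2);   // two worker threads, as in the question
    for (int m = 0; m < 10; ++m)
        tp.schedule(my_fun);         // my_fun now takes no arguments
    tp.wait();                       // block until all 10 tasks have run
    return 0;
}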

I wouldn't limit the number of threads to the number of cores. Remember that multi-threaded programming predates multi-core processors: threads often block waiting on some resource, and while one is blocked the next thread can jump in and use the CPU.

Java has a FixedThreadPool, and it looks like Boost has something similar:
http://deltavsoft.com/w/RcfUserGuide/1.2/rcf_user_guide/Multithreading.html
Basically, a fixed thread pool spawns a fixed number of threads, and you can then queue tasks in the manager's queue.
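For what it's worth, more recent Boost releases (1.66 and later, so newer than this question) ship boost::asio::thread_pool, which behaves much like Java's FixedThreadPool. A minimal sketch:
#include <boost/asio/thread_pool.hpp>
#include <boost/asio/post.hpp>
#include <iostream>

int main() {
    boost::asio::thread_pool pool(4);   // fixed number of worker threads
    for (int i = 0; i < 10; ++i)
        boost::asio::post(pool, [i] { std::cout << "task " << i << "\n"; });
    pool.join();                        // wait for all queued tasks to finish
    return 0;
}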

While it's true that only two threads can be executing at the same time, on many threading systems the threads get time-sliced, so a thread can be pre-empted during the execution of its task. Hence a third (fourth, ...) thread will get a chance to work while the processing of the first and second is still incomplete.
I don't know about this particular threading implementation, but my guess is that it will allow (or run in environments supporting) pre-emptive scheduling. My way of thinking about threads is to keep it really simple and let each thread have its own resources.

Related

Spawn multiple std::thread and reuse it

I am a noob when it comes to threading and need some help / advice.
First of all, can you check if my understanding is correct in the following code:
std::vector<std::unique_ptr<Object>> totalObjects(512);
std::vector<Object*> objectsToUpdate(32);
std::vector<std::thread> threadsPool(32);
int nrObjectsToUpdate; // Varies between 1 and 32 for each update.

findObjectsToUpdate(totalObjects, objectsToUpdate, nrObjectsToUpdate);
for(int i = 0; i < nrObjectsToUpdate; i++)
    threadsPool[i] = std::thread(objectsToUpdate[i]->updateTask1());
// All tasks in this step must be completed before
// we can move on to the next, i.e. updateTask2();.
for(int i = 0; i < nrObjectsToUpdate; i++)
    threadsPool[i].join();
for(int i = 0; i < nrObjectsToUpdate; i++)
    threadsPool[i] = std::thread(objectsToUpdate[i]->updateTask2());
for(int i = 0; i < nrObjectsToUpdate; i++)
    threadsPool[i].join();
Should I spawn one thread for each updateTask1() and updateTask2()?
For each update, do I need to create std::thread() all over again, or can I simply reuse the threads with some member function?
If I create threads for updateTask1(), is it possible to reuse all thread objects for updateTask2()?, i.e. switching function pointer with some std::thread member function?
Let us say that we create 100 threads and we have a quad-core CPU (4 cores):
will all the CPU cores be busy until all the threads are completed?
I know at least that 4 cores means 4 threads can run at once.
Grateful for all the help and explanations that can be given.
The optimal number of threads to use is both application and hardware dependant, therefore how many threads you should spawn depends on your application.
For example, some applications might run well with multiple threads per core because the threads do not interfere with each other (thread X and thread Y on core 1, for example, don't fight for compute resources so there is an advantage gained with multiple threads per core). However, other applications might perform worse with multiple threads per core because using only one thread might require most of the core's resources, so then when using additional threads per core, the threads interfere. You should do some testing to find out what is the best thread configuration for your application. Multithreading is often not straightforward, and the performance results may be surprising.
There are a number of things which you can use to help with determining the number of threads and thread scheduling (you should still do the performance tests though).
You can use unsigned num_cpus = std::thread::hardware_concurrency(); to get the number of available CPUs. While you may know the number of cores for the CPU you're using, maybe you want to run it on another machine for which you don't know the number of cores.
Additionally there is processor affinity, which is essentially pinning certain threads to specific CPUs. By default the OS is allowed to schedule any of the spawned threads to any of the CPUs. Sometimes this results in multiple threads per CPU, and some CPUs not being utilised for some portion of the multi-threaded component. You can explicitly set specific threads to use specific CPUs using pthread_setaffinity_np as follows (do this for each thread you want to pin to a core):
#include <pthread.h>    // pthread_setaffinity_np (Linux-specific)
#include <iostream>     // std::cerr

cpu_set_t cpu_set;
CPU_ZERO(&cpu_set);
CPU_SET(i, &cpu_set);
int rc = pthread_setaffinity_np(threadsPool[i].native_handle(),
                                sizeof(cpu_set_t), &cpu_set);
// Check for error
if (rc != 0)
    std::cerr << "pthread_setaffinity_np error: " << rc << "\n";
If I create threads for updateTask1(), is it possible to reuse all thread objects for updateTask2()?, i.e. switching function pointer with some std::thread member function?
Yes you can do this. The logic in your program regarding the use of threads for updateTask1() and updateTask2() is correct, however, syntactically you have made errors when assigning the threads.
threadsPool[i] = std::thread(objectsToUpdate[i]->updateTask1());
is incorrect. You want to use a member function as the function to spawn for each thread, so you need to pass a pointer to the member function, as well as the object to bind to, followed by any additional arguments (for the sake of example, I'll say that the updateTask1 function takes the thread number i). The assignment of the threads should then look like this:
threadsPool[i] = std::thread(&Object::updateTask1, // Pointer to member function
                             objectsToUpdate[i],   // Object to bind to
                             i);                   // Additional argument -- thread number
You can then use the same syntax for updateTask2.
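For illustration, here is a small self-contained sketch of the whole pattern; the Object class below is just a stand-in, since the original class declaration isn't shown:
#include <thread>
#include <vector>
#include <iostream>

struct Object {
    void updateTask1(int threadNum) { std::cout << "task1 on " << threadNum << "\n"; }
    void updateTask2(int threadNum) { std::cout << "task2 on " << threadNum << "\n"; }
};

int main() {
    std::vector<Object> objects(4);
    std::vector<std::thread> threads(4);
    for (int i = 0; i < 4; ++i)                // pass the member function pointer
        threads[i] = std::thread(&Object::updateTask1, &objects[i], i);
    for (auto& t : threads) t.join();          // barrier between the task phases
    for (int i = 0; i < 4; ++i)                // reuse the same thread objects
        threads[i] = std::thread(&Object::updateTask2, &objects[i], i);
    for (auto& t : threads) t.join();
    return 0;
}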

Many detached boost threads segfault

I'm creating boost threads inside a function with
while(trueNonceQueue.empty() && block.nNonce < std::numeric_limits<uint64_t>::max()){
    if ( block.nNonce % 100000 == 0 )
    {
        cout << block.nNonce << endl;
    }
    boost::thread t(CheckNonce, block);
    t.detach();
    block.nNonce++;
}
uint64 trueNonce;
while (trueNonceQueue.pop(trueNonce))
    block.nNonce = trueNonce;
trueNonceQueue was created with boost::lockfree::queue<uint64> trueNonceQueue(128); in the global scope.
This is the function being threaded
void CheckNonce(CBlock block){
    if(block.CheckBlockSilently()){
        while (!trueNonceQueue.push(block.nNonce))
            ;
    }
}
I noticed that after it crashed, my swap usage had grown marginally, which never happens unless I leak memory through poor technique like this; otherwise, my memory usage stays consistently below 2 GB. I'm running Cinnamon on Ubuntu desktop with Chrome and a few other small programs open. I was not using the computer at the time this was running.
The segfault occurred after the 949900000th iteration. How can this be corrected?
CheckNonce execution time
I added the same modulus to CheckNonce to see if there was any lag. So far, there is none.
I will update if the detached threads start to lag behind the spawning while loop.
You should use a Thread Pool instead. This means spawning just enough threads to get work done without undue contention (for example you might spawn something like N-2 threads on an N-core machine, but perhaps more if some work may block on I/O).
There is not exactly a thread pool in Boost, but there are the parts you need to build one. See here for some ideas: boost::threadpool::pool vs. boost::thread_group
Or you can use a more ready-made solution like this (though it is a bit dated and perhaps unmaintained, not sure): http://threadpool.sourceforge.net/
Then the idea is to spawn the N threads, and then in your loop for each task, just "post" the task to the thread pool, where the next available worker thread will pick it up.
By doing this, you will avoid many problems, such as running out of thread stack space, avoiding inefficient resource contention (look up the "thundering herd problem"), and you will be able to easily tune the aggressiveness with which you use multiple cores on any system.
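To make this concrete, here is a rough sketch of that idea built on the question's own boost::lockfree::queue: a fixed set of worker threads pulls nonces from a bounded work queue, instead of one detached thread being spawned per nonce. CBlock, CheckNonce, and trueNonceQueue are the question's; everything else here is illustrative:
#include <boost/lockfree/queue.hpp>
#include <boost/thread.hpp>
#include <boost/atomic.hpp>
#include <boost/bind.hpp>
#include <cstdint>
#include <limits>

boost::lockfree::queue<uint64_t> workQueue(1024);   // nonces waiting to be tested
boost::atomic<bool> done(false);

void worker(CBlock block) {                 // each worker owns a private copy
    uint64_t nonce;
    while (!done || !workQueue.empty()) {
        if (workQueue.pop(nonce)) {
            block.nNonce = nonce;
            CheckNonce(block);              // the question's check, unchanged
        } else {
            boost::this_thread::yield();    // queue momentarily empty
        }
    }
}

void mine(CBlock block) {
    boost::thread_group workers;            // fixed-size pool of workers
    for (unsigned i = 0; i < boost::thread::hardware_concurrency(); ++i)
        workers.create_thread(boost::bind(worker, block));
    for (uint64_t n = 0; trueNonceQueue.empty() && n < std::numeric_limits<uint64_t>::max(); ++n)
        while (!workQueue.push(n))          // back-pressure when the queue is full
            boost::this_thread::yield();
    done = true;
    workers.join_all();                     // wait for the pool to drain
}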

Multi threading independent tasks

I have N tasks, which are independent (ie., write at different memory addresses) but don't take exactly the same time to complete (from 2 to 10 seconds, say). I have P threads.
I can divide my N tasks among my P threads and launch the threads. Ultimately, at the end, there will be a single thread remaining to complete the last few tasks, which is not optimal.
I can also launch P threads with 1 task each, WaitForMultipleObjects, and relaunch P threads etc. (that's what I currently do, as the overhead of creating threads is small compared to the task). However, this does not solve the problem either, there will still be P-1 threads waiting for the last one at some point.
Is there a way to launch threads, and as soon as the thread has finished its task, go on to the next available task until all tasks are completed ?
Thanks !
Yes, it's called thread pooling. It's a very common practice.
http://en.wikipedia.org/wiki/Thread_pool_pattern
Basically, you create a queue of tasks (function pointers with their arguments), and push the tasks there. You have N threads running which do the following loop (schematic code):
while (bRunning) {
    task = m_pQueue.pop();
    if (task) {
        executeTask(task);
    }
    else {
        // you can sleep a bit here if you want
    }
}
There are more elegant ways to implement it (avoiding sleeps, etc.), but this is the gist of it; one such variant is sketched below.
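For example, one of those more elegant variants replaces the sleep with a condition variable, so idle workers block until work arrives. This is a sketch using std::thread primitives; the same idea works with the boost equivalents:
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>

class TaskQueue {
    std::queue<std::function<void()>> tasks;
    std::mutex m;
    std::condition_variable cv;
    bool stopping = false;
public:
    void push(std::function<void()> t) {
        { std::lock_guard<std::mutex> lk(m); tasks.push(std::move(t)); }
        cv.notify_one();                       // wake one sleeping worker
    }
    void stop() {
        { std::lock_guard<std::mutex> lk(m); stopping = true; }
        cv.notify_all();
    }
    void workerLoop() {                        // each pool thread runs this
        for (;;) {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [this] { return stopping || !tasks.empty(); });
            if (tasks.empty()) return;         // stopping and fully drained
            auto task = std::move(tasks.front());
            tasks.pop();
            lk.unlock();                       // run the task outside the lock
            task();
        }
    }
};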

boost::thread: How to start all threads, but have only up to n running at a time?

In the boost::thread library, is there any mechanism to control how many threads (at most) are running at a time?
In my case, it would be most convenient to start N threads all at the same time (N may be hundreds or a few thousand):
std::vector<boost::thread*> vec;
for (int i = 0; i < N; ++i) {
    vec.push_back(new boost::thread(my_fct));
}
// all are running, now wait for them to finish:
for (int i = 0; i < N; ++i) {
    vec[i]->join();
    delete vec[i];
}
But I want Boost to transparently set a maximum of, say, 4 threads running at a time. (I'm sharing an 8-core machine, so I'm not supposed to run more than 4 at a time.)
Of course, I could take care of starting only 4 at a time myself, but the solution I'm asking about would be more transparent and most convenient.
I don't think Boost.Thread has this built in, but you can overlay Boost.Threadpool (not an official library) onto Boost.Thread, and that does allow you to control the thread count via its SizePolicy.
The default is a fixed-size pool which is what you want - specify the initial (and ongoing) thread count on the threadpool constructor.
What you really want, it seems, would be to only ever have 4 threads, each of which would process many jobs.
One way to implement this would be to spawn as many threads as you like, and then have the run-loop of each thread take tasks (typically function objects or pointers) from a thread-safe queue structure where you store everything that needs to be done.
This way you avoid the overhead from creating lots of threads, and still maintain the same amount of concurrency.
You could create a lock that can be acquired only n times at once. Each thread would then have to acquire the lock (blocking) before processing.
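That "lock you can acquire n times" is a counting semaphore. A minimal sketch (C++20 has std::counting_semaphore built in, but this hand-rolled version works anywhere; my_fct is the function from the question):
#include <condition_variable>
#include <mutex>

class Semaphore {
    std::mutex m;
    std::condition_variable cv;
    int count;
public:
    explicit Semaphore(int n) : count(n) {}
    void acquire() {                           // block until a slot is free
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this] { return count > 0; });
        --count;
    }
    void release() {                           // free a slot, wake a waiter
        { std::lock_guard<std::mutex> lk(m); ++count; }
        cv.notify_one();
    }
};

void my_fct();                                 // the question's task function

Semaphore gate(4);                             // at most 4 running at a time

void my_fct_gated() {                          // spawn all N threads with this
    gate.acquire();                            // first thing each thread does
    my_fct();
    gate.release();
}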

Overhead due to use of Events

I have a custom thread pool class, that creates some threads that each wait on their own event (signal). When a new job is added to the thread pool, it wakes the first free thread so that it executes the job.
The problem is the following: I have around 1000 loops, each of around 10'000 iterations, to do. These loops must be executed sequentially, but I have 4 CPUs available. What I try to do is to split each 10'000-iteration loop into 4 loops of 2'500 iterations, i.e. one per thread. But I have to wait for the 4 small loops to finish before going on to the next "big" iteration. This means that I can't bundle the jobs.
My problem is that using the thread pool and 4 threads is much slower than doing the jobs sequentially (having one loop executed by a separate thread is much slower than executing it directly in the main thread sequentially).
I'm on Windows, so I create events with CreateEvent() and then wait on one of them using WaitForMultipleObjects(2, handles, false, INFINITE) until the main thread calls SetEvent().
It appears that this whole event thing (along with the synchronization between the threads using critical sections) is pretty expensive!
My question is: is it normal that using events takes "a lot of" time? If so, is there another mechanism that I could use and that would be less time-expensive?
Here is some code to illustrate (some relevant parts copied from my thread pool class):
// thread function
unsigned __stdcall ThreadPool::threadFunction(void* params) {
    // some housekeeping
    HANDLE signals[2];
    signals[0] = waitSignal;
    signals[1] = endSignal;
    do {
        // wait for one of the signals
        waitResult = WaitForMultipleObjects(2, signals, false, INFINITE);
        // try to get the next job parameters;
        if (tp->getNextJob(threadId, data)) {
            // execute job
            void* output = jobFunc(data.params);
            // tell thread pool that we're done and collect output
            tp->collectOutput(data.ID, output);
        }
        tp->threadDone(threadId);
    }
    while (waitResult - WAIT_OBJECT_0 == 0);
    // if we reach this point, endSignal was sent, so we are done!
    return 0;
}
// create all threads
for (int i = 0; i < nbThreads; ++i) {
    threadData data;
    unsigned int threadId = 0;
    char eventName[20];
    sprintf_s(eventName, 20, "WaitSignal_%d", i);
    data.handle = (HANDLE) _beginthreadex(NULL, 0, ThreadPool::threadFunction,
                                          this, CREATE_SUSPENDED, &threadId);
    data.threadId = threadId;
    data.busy = false;
    data.waitSignal = CreateEvent(NULL, true, false, eventName);
    this->threads[threadId] = data;
    // start thread
    ResumeThread(data.handle);
}
// add job
void ThreadPool::addJob(int jobId, void* params) {
    // housekeeping
    EnterCriticalSection(&(this->mutex));
    // first, insert parameters in the list
    this->jobs.push_back(job);
    // then, find the first free thread and wake it
    for (it = this->threads.begin(); it != this->threads.end(); ++it) {
        thread = (threadData) it->second;
        if (!thread.busy) {
            this->threads[thread.threadId].busy = true;
            ++(this->nbActiveThreads);
            // wake thread so that it gets the next params and runs them
            SetEvent(thread.waitSignal);
            break;
        }
    }
    LeaveCriticalSection(&(this->mutex));
}
This looks to me like a producer-consumer pattern, which can be implemented with two semaphores: one guarding against queue overflow, the other signaling a non-empty queue.
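A rough sketch of that two-semaphore scheme, shown with C++20 std::counting_semaphore for brevity; Windows semaphores created with CreateSemaphore work the same way, and the Job type below is just a placeholder:
#include <semaphore>   // C++20
#include <mutex>
#include <queue>

struct Job { int id; };                         // placeholder job type

std::counting_semaphore<128> slotsFree(128);    // guards against queue overflow
std::counting_semaphore<128> itemsReady(0);     // guards against the empty queue
std::mutex qMutex;
std::queue<Job> jobQueue;

void produce(Job j) {
    slotsFree.acquire();                        // block while the queue is full
    { std::lock_guard<std::mutex> lk(qMutex); jobQueue.push(j); }
    itemsReady.release();
}

Job consume() {
    itemsReady.acquire();                       // block while the queue is empty
    Job j;
    { std::lock_guard<std::mutex> lk(qMutex); j = jobQueue.front(); jobQueue.pop(); }
    slotsFree.release();
    return j;
}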
Yes, WaitForMultipleObjects is pretty expensive. If your jobs are small, the synchronization overhead will start to overwhelm the cost of actually doing the job, as you're seeing.
One way to fix this is bundle multiple jobs into one: if you get a "small" job (however you evaluate such things), store it someplace until you have enough small jobs together to make one reasonably-sized job. Then send all of them to a worker thread for processing.
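As a sketch of the bundling idea (postToPool and processIteration below are placeholders for your pool's enqueue call and your per-item work; one pool job then amortizes the signaling cost over a whole chunk of iterations):
#include <algorithm>
#include <functional>

void postInChunks(int total, int chunk,
                  std::function<void(std::function<void()>)> postToPool,
                  std::function<void(int)> processIteration) {
    for (int start = 0; start < total; start += chunk) {
        int end = std::min(start + chunk, total);
        postToPool([=] {                        // one pool job per chunk
            for (int i = start; i < end; ++i)
                processIteration(i);            // the original per-item work
        });
    }
}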
Alternately, instead of using signaling you could use a multiple-reader single-writer queue to store your jobs. In this model, each worker thread tries to grab jobs off the queue. When it finds one, it does the job; if it doesn't, it sleeps for a short period, then wakes up and tries again. This will lower your per-task overhead, but your threads will take up CPU even when there's no work to be done. It all depends on the exact nature of the problem.
Watch out: you are still asking for a next job after the endSignal is emitted.
for( ;; ) {
    // wait for one of the signals
    waitResult = WaitForMultipleObjects(2, signals, false, INFINITE);
    if( waitResult - WAIT_OBJECT_0 != 0 )
        return 0;   // endSignal: leave the thread function
    //....
}
Since you say that it is much slower in parallel than in sequential execution, I assume that the processing time of your internal 2'500-iteration loops is tiny (in the few-microsecond range). Then there is not much you can do except review your algorithm to split off larger chunks of processing; OpenMP won't help, and neither will other synchronization techniques, because they all fundamentally rely on events (spin loops do not qualify).
On the other hand, if the processing time of the 2'500 loop iterations is larger than 100 microseconds (on current PCs), you might be running into limitations of the hardware. If your processing uses a lot of memory bandwidth, splitting it across four processors will not give you more bandwidth; it will actually give you less because of collisions. You could also be running into problems of cache cycling, where each of your top 1000 iterations flushes and reloads the cache of the 4 cores. Then there is no single solution, and depending on your target hardware, there may be none.
If you are just parallelizing loops and using VS 2008, I'd suggest looking at OpenMP. If you're using Visual Studio 2010 beta 1, I'd suggest looking at the Parallel Patterns Library, particularly the "parallel for" / "parallel for each" APIs or the "task group" class, because these will likely do what you're attempting to do, only with less code.
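For the loop-splitting pattern described in the question, the OpenMP version might look roughly like this (compile with /openmp in VS 2008; doIteration is a placeholder for the real work). The implicit barrier at the end of each parallel for replaces the hand-rolled event signaling:
#include <omp.h>

void doIteration(int i, int j);                // placeholder for the real work

void runAll(int outerSteps, int innerSize) {
    for (int i = 0; i < outerSteps; ++i) {     // the ~1000 sequential steps
        #pragma omp parallel for
        for (int j = 0; j < innerSize; ++j)    // the ~10'000 inner iterations
            doIteration(i, j);
        // implicit barrier here: every thread is done before the next i
    }
}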
Regarding your question about performance, here it really depends. You'll need to look at how much work you're scheduling during your iterations and what the costs are. WaitForMultipleObjects can be quite expensive if you hit it a lot and your work is small which is why I suggest using an implementation already built. You also need to ensure that you aren't running in debug mode, under a debugger and that the tasks themselves aren't blocking on a lock, I/O or memory allocation, and you aren't hitting false sharing. Each of these has the potential to destroy scalability.
I'd suggest looking at this under a profiler like xperf, the F1 profiler in Visual Studio 2010 beta 1 (it has 2 new concurrency modes which help to see contention), or Intel's VTune.
You could also share the code that you're running in the tasks, so folks could get a better idea of what you're doing, because the answer I always get with performance issues is first "it depends" and second, "have you profiled it."
Good Luck
-Rick
It shouldn't be that expensive, but if your job takes hardly any time at all, then the overhead of the threads and sync objects will become significant. Thread pools like this work much better for longer-processing jobs or for those that use a lot of IO instead of CPU resources. If you are CPU-bound when processing a job, ensure you only have 1 thread per CPU.
There may be other issues, how does getNextJob get its data to process? If there's a large amount of data copying, then you've increased your overhead significantly again.
I would optimise it by letting each thread keep pulling jobs off the queue until the queue is empty. That way, you can pass a hundred jobs to the thread pool and the sync objects will be used just once, to kick off the thread. I'd also store the jobs in a queue and pass a pointer, reference, or iterator to them to the thread instead of copying the data.
The context switching between threads can be expensive too. It is interesting in some cases to develop a framework you can use to process your jobs sequentially with one thread or with multiple threads. This way you can have the best of the two worlds.
By the way, what is your question exactly? I will be able to answer more precisely with a more precise question :)
EDIT:
The events part can consume more than your processing in some cases, but it should not be that expensive, unless your processing is really fast. In that case, switching between threads is expensive too, hence the first part of my answer about doing things sequentially...
You should look for inter-thread synchronisation bottlenecks. You can trace thread waiting times to begin with...
EDIT: After more hints ...
If I guess correctly, your problem is to efficiently use all your computer's cores/processors to parallelize some processing that is essentially sequential.
Say you have 4 cores and 10'000 loop iterations to compute, as in your example (in a comment). You said that you need to wait for the 4 threads to end before going on. Then you can simplify your synchronisation process: you just need to give your four threads the nth, nth+1, nth+2, and nth+3 slices, wait for the four threads to complete, and then go on. You should use a rendezvous or barrier (a synchronization mechanism that waits for n threads to complete). Boost has such a mechanism. You could also look at the Windows implementation for efficiency. Your thread pool is not really suited to the task: the search for an available thread in a critical section is what is killing your CPU time, not the event part.
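A rough sketch of that barrier-based arrangement using boost::barrier (doSlice is a placeholder for one worker's quarter of the inner loop): 4 long-lived threads each take their own slice, and nobody starts outer step i+1 until all four have passed the barrier for step i.
#include <boost/thread.hpp>
#include <boost/thread/barrier.hpp>
#include <boost/bind.hpp>

const int kWorkers = 4;
boost::barrier stepBarrier(kWorkers);          // rendezvous for the 4 workers

void doSlice(int step, int id);                // placeholder: one worker's
                                               // quarter of the inner loop

void worker(int id, int outerSteps) {
    for (int step = 0; step < outerSteps; ++step) {
        doSlice(step, id);
        stepBarrier.wait();                    // nobody starts step+1 early
    }
}

void runAll(int outerSteps) {
    boost::thread_group g;                     // 4 threads, created only once
    for (int id = 0; id < kWorkers; ++id)
        g.create_thread(boost::bind(worker, id, outerSteps));
    g.join_all();
}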
It appears that this whole event thing (along with the synchronization between the threads using critical sections) is pretty expensive!
"Expensive" is a relative term. Are jets expensive? Are cars? or bicycles... shoes...?
In this case, the question is: are events "expensive" relative to the time taken for JobFunction to execute? It would help to publish some absolute figures: How long does the process take when "unthreaded"? Is it months, or a few femtoseconds?
What happens to the time as you increase the threadpool size? Try a pool size of 1, then 2 then 4, etc.
Also, as you've had some issues with threadpools here in the past, I'd suggest some debugging to count the number of times your thread function is actually invoked... does it match what you expect?
Picking a figure out of the air (without knowing anything about your target system, and assuming you're not doing anything 'huge' in code you haven't shown), I'd expect the "event overhead" of each "job" to be measured in microseconds. Maybe a hundred or so. If the time taken to perform the algorithm in JobFunction is not significantly MORE than this time, then your threads are likely to cost you time rather than save it.
As mentioned previously, the amount of overhead added by threading depends on the relative amount of time taken to do the "jobs" that you defined. So it is important to find a balance in the size of the work chunks that minimizes the number of pieces but does not leave processors idle waiting for the last group of computations to complete.
Your coding approach has increased the amount of overhead work by actively looking for an idle thread to supply with new work. The operating system already keeps track of that, and does it a lot more efficiently. Also, your function ThreadPool::addJob() may find that all of the threads are in use and be unable to delegate the work, but it does not provide any return code related to that issue. If you are not checking for this condition in some way and are not noticing errors in the results, it means that there are always idle processors. I would suggest reorganizing the code so that addJob() does what it is named for -- adds a job ONLY (without finding or even caring who does the job) -- while each worker thread actively fetches new work when it is done with its existing work.