I want to implement a thread pool using the boost::thread class.
I am able to create a thread using the line below:
boost::thread Consumer_1(consume);
where Consumer_1 is the thread and consume is the function bound to it.
The above statement starts the thread as soon as it is executed.
Now I just want to create the thread and do the binding at run time.
I have not yet discovered a Boost method to delay this binding.
Can anyone help with this?
The binding can't be done later, for a fundamental reason: a thread of execution has to be executing something.
What you need to do is create a function that takes jobs, represented as boost::function objects, from a queue and executes them. Then run this function in one or more threads.
I am not sure Boost provides a thread-safe queue, but you can always use a regular std::deque with a boost::condition_variable for waking up the threads and a boost::mutex for locking the deque.
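A rough sketch of that queue-based approach, assuming free functions worker() and post_job() and a four-thread pool (these names and numbers are illustrative, not from any library):

#include <deque>
#include <boost/thread.hpp>
#include <boost/thread/condition_variable.hpp>
#include <boost/function.hpp>

// Shared state: a plain std::deque of jobs guarded by a mutex, plus a
// condition variable that wakes idle workers when a job arrives.
std::deque<boost::function<void()> > g_jobs;
boost::mutex g_mutex;
boost::condition_variable g_cond;

void consume() { /* the work from the question would go here */ }

void worker()
{
    for (;;)
    {
        boost::function<void()> job;
        {
            boost::unique_lock<boost::mutex> lock(g_mutex);
            while (g_jobs.empty())
                g_cond.wait(lock);   // releases the mutex while waiting
            job = g_jobs.front();
            g_jobs.pop_front();
        }
        job();                       // run the job outside the lock
    }
}

void post_job(const boost::function<void()>& job)
{
    boost::lock_guard<boost::mutex> lock(g_mutex);
    g_jobs.push_back(job);
    g_cond.notify_one();
}

int main()
{
    boost::thread_group pool;
    for (int i = 0; i < 4; ++i)
        pool.create_thread(&worker); // four threads, all running worker()

    post_job(&consume);              // bind work to the pool at run time
    pool.join_all();                 // never returns here: the sketch omits shutdown
}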
You might want to look at Boost.Asio too.
Due to fixed requirements, I need to execute some code in a specific thread and then return a result. The main thread initiating that action should be blocked in the meantime.
void background_thread()
{
    while (1)
    {
        request.lock();     // blocks until the main thread posts a request
        g_lambda();         // run the posted work
        response.unlock();  // signal completion to the main thread
        request.unlock();
    }
}

void mainthread()
{
    ...
    g_lambda = []()...;     // set up the work to run
    request.unlock();       // let the background thread run it
    response.lock();        // wait until it is done
    request.lock();
    ...
}
This should work. But it leaves us with a big problem: the background thread needs to start with the response mutex locked, and the main thread needs to start with the request mutex locked...
How can we accomplish that? I can't think of a good way. And isn't that an anti-pattern anyway?
Passing tasks to the background thread can be accomplished with a producer-consumer queue. A simple C++11 implementation that does not depend on third-party libraries would have a std::condition_variable which is waited on by the background thread and notified by the main thread, a std::queue of tasks, and a std::mutex to guard these.
Getting the result back to the main thread can be done with std::promise/std::future. The simplest way is to use std::packaged_task as the queue element type, so that the main thread creates a packaged_task, puts it into the queue, notifies the condition_variable, and waits on the packaged_task's future.
You would not actually need std::queue if you create tasks one at a time from a single thread; just one std::unique_ptr to a std::packaged_task would be enough. The queue adds the flexibility to add many background tasks at once.
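A minimal sketch of that scheme, assuming the task returns an int and using a done flag for shutdown (the names are illustrative):

#include <condition_variable>
#include <future>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

std::queue<std::packaged_task<int()> > g_tasks;   // tasks posted by the main thread
std::mutex g_mutex;
std::condition_variable g_cond;
bool g_done = false;

void background_thread()
{
    for (;;)
    {
        std::packaged_task<int()> task;
        {
            std::unique_lock<std::mutex> lock(g_mutex);
            g_cond.wait(lock, [] { return g_done || !g_tasks.empty(); });
            if (g_done && g_tasks.empty())
                return;
            task = std::move(g_tasks.front());
            g_tasks.pop();
        }
        task();  // runs the work and stores the result in the task's shared state
    }
}

int main()
{
    std::thread worker(background_thread);

    // Main thread: package the work, queue it, notify, and block on the future.
    std::packaged_task<int()> task([] { return 6 * 7; });
    std::future<int> result = task.get_future();
    {
        std::lock_guard<std::mutex> lock(g_mutex);
        g_tasks.push(std::move(task));
    }
    g_cond.notify_one();

    std::cout << "result: " << result.get() << '\n';  // blocks until the task has run

    {
        std::lock_guard<std::mutex> lock(g_mutex);
        g_done = true;
    }
    g_cond.notify_one();
    worker.join();
}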
Does Qt provide a synchronization primitive that behaves in much the same way as Concurrency::event from Microsoft's Concurrency Runtime?
Specifically, I would like wait() in thread A to return even if it does not call wait() until after thread B has already called wakeAll(), but before a "reset" function is called. Also, I'd like something where reset() and set() do not have to be called from the same thread.
Basically, if I did not need to have async operations run in a specific thread (in my case it is basically feeding tasks to an OpenGL rendering thread), QFuture and Qt Concurrent would be perfect.
If not specifically provided, is there a way to emulate that functionality with Qt?
Thanks!
I thought that I needed a QFuture a few times in the past as well, but always ended up using signals and slots to pass messages between the threads, carrying the data I would have put in the QFuture as an argument. Especially when there's a QEventLoop at the bottom of my thread.
Without an event loop I usually end up doing it manually with QWaitCondition, QMutex and QMutexLocker.
So sadly I would say that there isn't any higher-level class that would fit what you describe.
So now you have the mutex and the wait condition.
Simply add a boolean flag, which you access with the same mutex locked.
When you do wakeAll(), also set the flag to true. Before doing wait(), check the flag first and don't wait if it is true. Reset is then simply setting the flag to false.
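A minimal sketch of such an event class built on QMutex and QWaitCondition (the class name and interface are made up for illustration, roughly mirroring Concurrency::event):

#include <QMutex>
#include <QMutexLocker>
#include <QWaitCondition>

// Hypothetical manual-reset event: wait() returns immediately if set() has
// already been called, and set()/reset()/wait() may be called from any thread.
class ManualResetEvent
{
public:
    ManualResetEvent() : m_set(false) {}

    void set()
    {
        QMutexLocker lock(&m_mutex);
        m_set = true;
        m_cond.wakeAll();           // release every thread currently waiting
    }

    void reset()
    {
        QMutexLocker lock(&m_mutex);
        m_set = false;              // future wait() calls will block again
    }

    void wait()
    {
        QMutexLocker lock(&m_mutex);
        while (!m_set)
            m_cond.wait(&m_mutex);  // the mutex is released while blocked
    }

private:
    QMutex m_mutex;
    QWaitCondition m_cond;
    bool m_set;
};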
I'm trying to understand the different use cases, and the difference between the two ways of using threads.
This is a great tutorial I have read which explains boost::thread_group,
and here is the code I'm using:
boost::threadpool::pool s_ThreadPool(GetCoreCount());

CFilterTask task(pFilter,                                           // filter to run
                 boost::bind(&CFilterManagerThread::OnCompleteTask, // OnComplete sync callback:
                             this, _1, _2));                        // _1 will be the filter name,
                                                                    // _2 will be the error code

// schedule the new task - runs on the threadpool
s_ThreadPool.schedule(task);
This is the destructor:
s_ThreadPool.wait(0);
Can you please explain?
boost::thread_group is a convenience class for performing thread management operations on a collection of threads. For example, instead of having to iterate over std::vector<boost::thread>, invoking join() on each thread, the thread_group provides a convenient join_all() member function.
With boost::thread, regardless of whether it is managed by a boost::thread_group, the lifetime of the thread is often tied to the work the thread is doing. For example, if a thread is created to perform a computationally expensive calculation, then the thread can exit once the result has been calculated. If the work is short-lived, then the overhead of creating and destroying threads can affect performance.
On the other hand, a threadpool is a pattern where a number of threads service a number of tasks. The lifetime of a thread is not directly associated with the lifetime of a task. To continue with the previous example, the application would schedule the computationally expensive calculation to run within the thread pool. The work will be queued within the threadpool, and one of the threadpool's threads will be selected to perform it. Once the calculation has completed, the thread goes back to waiting for more work to be scheduled with the threadpool.
As shown in this threadpool example, a threadpool can be implemented with boost::thread_group to manage the lifetime of the threads, and boost::asio::io_service for task/work dispatching.
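A condensed sketch of that combination (the task function, the small run_service wrapper, and the counts are only examples, not part of any library):

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/scoped_ptr.hpp>
#include <boost/thread.hpp>
#include <iostream>

void expensive_calculation(int id)
{
    std::cout << "task " << id << " done\n";  // output from different threads may interleave
}

// Small wrapper so we can hand io_service::run() to thread_group::create_thread.
void run_service(boost::asio::io_service* io_service)
{
    io_service->run();
}

int main()
{
    boost::asio::io_service io_service;

    // The work object keeps io_service::run() from returning while the queue is empty.
    boost::scoped_ptr<boost::asio::io_service::work> work(
        new boost::asio::io_service::work(io_service));

    // thread_group manages the lifetime of the pool threads; each runs the io_service.
    boost::thread_group pool;
    for (unsigned i = 0; i < boost::thread::hardware_concurrency(); ++i)
        pool.create_thread(boost::bind(&run_service, &io_service));

    // Dispatch work: one of the pool threads will pick up each task.
    for (int id = 0; id < 8; ++id)
        io_service.post(boost::bind(&expensive_calculation, id));

    work.reset();     // allow run() to return once the queued work drains
    pool.join_all();
}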
A boost::thread object is not itself a thread; a new thread of execution is created when a functor is passed to it, and the thread exits when the functor returns.
We use a threadpool to minimize the cost of thread creation and destruction, but each thread in a threadpool is also destroyed when the supplied functor returns.
So what's the basic concept behind building a threadpool? Is there any permanent thread to which I can assign functors?
A thread pool is just a bunch of threads that are already running, and that are all running the same function. This function basically just waits on a queue, and when there is a "function" in the queue it extracts and executes it.
Pseudo-code:
void thread_pool_function()
{
    while (true)
    {
        wait_for_signal_that_queue_is_not_empty();
        function_to_call = queue.remove_top();
        unlock_queue_semaphore();
        function_to_call();
    }
}

create_thread(thread_pool_function);
create_thread(thread_pool_function);
create_thread(thread_pool_function);
create_thread(thread_pool_function);
In the "code" above there are now four threads, all initially waiting for something to be put in a "queue". When there is something in the queue, it extracts it, and calls it as a function.
This is probably the simplest way to implement a thread pool.
In addition to what @Joachim posted:
One way to flow-control such a system (and one I use a lot) is to use a 'pool queue' (a blocking producer-consumer queue) of tasks, created and filled at startup with a fixed number of task objects. Any thread that wants to issue a task has to get one from the pool first, and tasks are returned to the pool after completion handling. This limits the number of tasks in the system and, if the pool empties, requesting threads just have to wait, blocked on the empty pool, until some 'used' tasks come back in.
This works well, provides flow-control, prevents memory-runaway and eliminates continual task create/destroy. It's also easy to periodically display/write the pool queue depth on a timer, so you can see how 'busy' your app is, (and detect any leaks:).
Edit: Also, it removes the need for any bounded queues in the system. Unbounded queues are simpler and tend to need fewer system calls.
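A rough sketch of that pool-queue pattern in C++11 (the BlockingQueue and Task types are illustrative, not from any library):

#include <condition_variable>
#include <cstddef>
#include <memory>
#include <mutex>
#include <queue>

// Hypothetical task object, reused instead of being created per request.
struct Task { /* parameters, buffers, results... */ };

// Simple blocking queue: pop() waits until an item is available.
template <typename T>
class BlockingQueue
{
public:
    void push(T item)
    {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_queue.push(std::move(item));
        }
        m_cond.notify_one();
    }

    T pop()
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_cond.wait(lock, [this] { return !m_queue.empty(); });
        T item = std::move(m_queue.front());
        m_queue.pop();
        return item;
    }

private:
    std::queue<T> m_queue;
    std::mutex m_mutex;
    std::condition_variable m_cond;
};

// The 'pool queue': pre-filled with a fixed number of Task objects. Producers
// must obtain a Task from it before they can submit work, which bounds the
// number of in-flight tasks and avoids per-task allocation.
BlockingQueue<std::unique_ptr<Task> > g_task_pool;
BlockingQueue<std::unique_ptr<Task> > g_work_queue;

void init_pool(std::size_t count)
{
    for (std::size_t i = 0; i < count; ++i)
        g_task_pool.push(std::unique_ptr<Task>(new Task));
}

void submit_work()
{
    std::unique_ptr<Task> task = g_task_pool.pop();  // blocks if the pool is empty
    // ... fill in the task's parameters ...
    g_work_queue.push(std::move(task));
}

void worker_loop()
{
    for (;;)
    {
        std::unique_ptr<Task> task = g_work_queue.pop();
        // ... process the task, handle completion ...
        g_task_pool.push(std::move(task));           // return it to the pool
    }
}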
I am working on a networking program using C++ and I'd like to implement a pthread pool. Whenever I receive an event from the receiving socket, I will put the data into the thread pool's queue. I am thinking about creating 5 separate threads that will constantly check the queue to see if there is any incoming data to be processed.
This is a fairly straightforward topic, but I am not an expert, so I would like to hear anything that might help me implement this.
Please let me know of any tutorials, references, or problems I should be aware of.
Use Boost.Asio and have each thread in the pool invoke io_service::run().
Multiple threads may call io_service::run() to set up a pool of threads from which completion handlers may be invoked. This approach may also be used with io_service::post() to use a means to perform any computational tasks across a thread pool.

Note that all threads that have joined an io_service's pool are considered equivalent, and the io_service may distribute work across them in an arbitrary fashion.
Before I start:
Use boost::thread.
If you want to know how to do it with pthreads, then you need to use pthread condition variables. These allow you to suspend threads that are waiting for work without consuming CPU.
When an item of work is added to the queue, you signal the condition variable and one pthread will be released from the condition variable, allowing it to take an item from the queue. When the thread finishes processing the work item, it returns to the condition variable to await the next piece of work.
The main loop for the threads in the pool should look like this:
ThreadWorkLoop()                        // The function that all the pool threads run.
{
    while (poolRunning)
    {
        WorkItem = GetWorkItem();       // Get an item from the queue. This suspends until an item
        WorkItem->run();                // is available, then you can run it.
    }
}

GetWorkItem()
{
    Locker lock(mutex);                 // RAII: Lock/unlock mutex
    while (workQueue.size() == 0)
    {
        conditionVariable.wait(mutex);  // Waiting on a condition variable suspends a thread
    }                                   // until the condition variable is signalled.
                                        // Note: the mutex is unlocked while the thread is suspended.
    return workQueue.popItem();
}

AddItemToQueue(item)
{
    Locker lock(mutex);
    workQueue.pushItem(item);
    conditionVariable.signal();         // Release a thread from the condition variable.
}
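For reference, a minimal sketch of the same loop using the real pthread API (the WorkItem type and the queue are placeholders for whatever your program actually schedules):

#include <pthread.h>
#include <queue>

// Hypothetical work item: just a function pointer plus its argument.
struct WorkItem
{
    void (*run)(void*);
    void* arg;
};

std::queue<WorkItem> work_queue;
pthread_mutex_t      queue_mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t       queue_cond  = PTHREAD_COND_INITIALIZER;

void* thread_work_loop(void*)
{
    for (;;)
    {
        pthread_mutex_lock(&queue_mutex);
        while (work_queue.empty())
            pthread_cond_wait(&queue_cond, &queue_mutex);  // unlocks the mutex while waiting
        WorkItem item = work_queue.front();
        work_queue.pop();
        pthread_mutex_unlock(&queue_mutex);

        item.run(item.arg);                                // run the work outside the lock
    }
    return 0;
}

void add_item_to_queue(const WorkItem& item)
{
    pthread_mutex_lock(&queue_mutex);
    work_queue.push(item);
    pthread_mutex_unlock(&queue_mutex);
    pthread_cond_signal(&queue_cond);                      // wake one worker
}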
Have the receive thread push the data onto the queue and the 5 worker threads pop it. Protect the queue with a mutex and let them "fight" for the data.
You also want to have a usleep() or pthread_yield() in the worker threads' main loop.
You will need a mutex and a condition variable. The mutex will protect your job queue, and when the receiving thread adds a job to the queue, it will signal the condition variable. The worker threads will wait on the condition variable and will wake up when it is signaled.
Boost.Asio is a good solution.
But if you don't want to use it (or can't use it for whatever reason), then you'll probably want to use a semaphore-based implementation.
You can find a multithreaded queue implementation based on semaphores that I use here:
https://gist.github.com/482342
The reason for using semaphores is that you can avoid having the worker threads continually poll, and instead have them woken up by the OS when there is work to be done.
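For illustration only (this is not the code at the linked gist), a rough sketch of a semaphore-based queue, where a POSIX semaphore counts the queued items so workers sleep instead of polling:

#include <pthread.h>
#include <semaphore.h>
#include <queue>

// Minimal semaphore-guarded queue: the semaphore counts the items, so pop()
// sleeps in the kernel until something is pushed.
template <typename T>
class SemaphoreQueue
{
public:
    SemaphoreQueue()
    {
        sem_init(&m_items, 0, 0);
        pthread_mutex_init(&m_mutex, 0);
    }

    ~SemaphoreQueue()
    {
        sem_destroy(&m_items);
        pthread_mutex_destroy(&m_mutex);
    }

    void push(const T& item)
    {
        pthread_mutex_lock(&m_mutex);
        m_queue.push(item);
        pthread_mutex_unlock(&m_mutex);
        sem_post(&m_items);              // wake one waiting worker
    }

    T pop()
    {
        sem_wait(&m_items);              // blocks until an item is available
        pthread_mutex_lock(&m_mutex);
        T item = m_queue.front();
        m_queue.pop();
        pthread_mutex_unlock(&m_mutex);
        return item;
    }

private:
    std::queue<T>   m_queue;
    sem_t           m_items;
    pthread_mutex_t m_mutex;
};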