I have a program that starts a std::thread which does the following: sleep for X ms, execute a function, terminate.
create std::thread(Xms, &func)
wait Xms
then do func()
end
I was wondering if I could for example send a signal to my std::thread in order to instantly break the sleep and do func, then quit.
Do I need to send the signal to std::thread::id in order to perform this?
My thread is launched this way, with a lambda function:
template<typename T, typename U>
void execAfter(T func, U params, const int ms)
{
std::thread thread([=](){
std::this_thread::sleep_for(std::chrono::milliseconds(ms));
func(params);
});
thread.detach();
}
Using wait_for of std::condition_variable would be the way to go, if the thread model can't be changed. In the code snippet below, the use of the condition_variable is wrapped into a class of which objects have to be shared across the threads.
#include <iostream>
#include <atomic>
#include <condition_variable>
#include <thread>
#include <chrono>
class BlockCondition
{
private:
mutable std::mutex m;
std::atomic<bool> done;
mutable std::condition_variable cv;
public:
BlockCondition()
:
m(),
done(false),
cv()
{
}
void wait_for(int duration_ms)
{
std::unique_lock<std::mutex> l(m);
int ms_waited(0);
while ( !done.load() && ms_waited < duration_ms )
{
auto t_0(std::chrono::high_resolution_clock::now());
cv.wait_for(l, std::chrono::milliseconds(duration_ms - ms_waited));
auto t_1(std::chrono::high_resolution_clock::now());
ms_waited += std::chrono::duration_cast<std::chrono::milliseconds>(t_1 - t_0).count();
}
}
void release()
{
std::lock_guard<std::mutex> l(m);
done.store(true);
cv.notify_one();
}
};
void delayed_func(BlockCondition* block)
{
block->wait_for(1000);
std::cout << "Hello actual work\n";
}
void abortSleepyFunction(BlockCondition* block)
{
block->release();
}
void test_aborted()
{
BlockCondition b;
std::thread delayed_thread(delayed_func, &b);
abortSleepyFunction(&b);
delayed_thread.join();
}
void test_unaborted()
{
BlockCondition b;
std::thread delayed_thread(delayed_func, &b);
delayed_thread.join();
}
int main()
{
test_aborted();
test_unaborted();
}
Note that there might be spurious wakeups that abort the wait call prematurely. To account for that, we count the milliseconds actually waited and continue waiting until the done flag is set.
As was pointed out in the comments, this wasn't the smartest approach for solving your problem in the first place. As implementing a proper interruption mechanism is quite complex and extremely easy to get wrong, here are suggestions for a workaround:
Instead of sleeping for the whole timeout, simply loop over a sleep of fixed small size (e.g. 10 milliseconds) until the desired duration has elapsed. After each sleep you check an atomic flag to see whether interruption was requested. This is a dirty solution, but it is the quickest to pull off.
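A minimal sketch of that first workaround (the function name interruptibleDelay and its signature are illustrative, not part of the original code):
#include <atomic>
#include <chrono>
#include <thread>

// Sleeps in small fixed slices and bails out early if `interrupted` is set.
// Returns true if the full delay elapsed, false if it was interrupted.
bool interruptibleDelay(std::atomic<bool>& interrupted, std::chrono::milliseconds total)
{
    const std::chrono::milliseconds slice(10);
    std::chrono::milliseconds elapsed(0);
    while (elapsed < total)
    {
        if (interrupted.load())
            return false;                    // interruption requested
        std::this_thread::sleep_for(slice);  // short, fixed-size sleep
        elapsed += slice;
    }
    return true;                             // waited the whole duration
}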
Alternatively, supply each thread with a condition_variable and do a wait on it instead of doing the this_thread::sleep. Notify the condition variable to indicate the request for interruption. You will probably still want an additional flag to protect against spurious wakeups so you don't accidentally return too early.
OK, to solve this I came up with a new implementation; it's inspired by all your answers, so thanks a lot.
First I am going to add a BombHandler item to the main Game item. It will have an attribute containing all the Bomb items.
This BombHandler will be a singleton, containing a timerLoop() function which will execute in a thread (this way I only use ONE thread for xxx bombs, which is far more efficient).
The timerLoop() will usleep(50), then pass through all the std::list elements and call Bomb::incrTimer(), which increments their internal _timer attribute by 10ms indefinitely, and check for bombs that have to explode.
When they reach 2000ms, for instance, BombHandler::explode() will be called, exploding the bomb and deleting it.
If another bomb is in range, Bomb::touchByFire() will be called, setting that Bomb's internal _timer attribute to TIME_TO_EXPLODE (1950ms).
It will then be exploded 50ms later by BombHandler::explode().
Isn't this a nice solution?
Again, thanks for your answers! Hope this can help.
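A rough sketch of that single-timer-thread design, reusing the names from the description (Bomb, BombHandler, timerLoop, incrTimer); all details beyond those names are assumptions, not the actual game code:
#include <atomic>
#include <chrono>
#include <list>
#include <mutex>
#include <thread>

class Bomb {
public:
    void incrTimer(int ms) { _timer += ms; }
    bool shouldExplode() const { return _timer >= 2000; } // 2000ms fuse
    void touchByFire() { _timer = 1950; }                 // explodes ~50ms later
private:
    int _timer = 0; // elapsed time in ms
};

class BombHandler {
public:
    void timerLoop() {
        while (_running) {
            std::this_thread::sleep_for(std::chrono::milliseconds(50));
            std::lock_guard<std::mutex> lock(_mutex);
            for (auto it = _bombs.begin(); it != _bombs.end(); ) {
                it->incrTimer(50);
                if (it->shouldExplode()) {
                    // explode(*it) would go here, calling touchByFire() on
                    // bombs in range before erasing this one
                    it = _bombs.erase(it);
                } else {
                    ++it;
                }
            }
        }
    }
private:
    std::list<Bomb> _bombs;
    std::mutex _mutex;
    std::atomic<bool> _running{true};
};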
I am new to multithreading and I am working on a program that handles mouse movement. It consists of two threads: the main thread gets the input and stores the mouse position in a fixed location, and the child thread reads that location in a loop to get the value. How do I reduce CPU utilization? I am using condition variables to achieve this; is there a better way? It seems that adding a delay to the child thread would also work.
void Engine::InputManager::MouseMove(const MouseMoveEvent& ev)
{
cur_mouse_ev_.x_ = ev.x_;
cur_mouse_ev_.y_ = ev.y_;
cv_.notify_all();
}
void Engine::InputManager::ProcessInput(MouseMoveEvent* ev)
{
while (true)
{
cv_.wait(u_mutex_);
float dx = static_cast<float>(ev->x_ - pre_mouse_pos[0]) * 0.25f;
float dy = static_cast<float>(ev->y_ - pre_mouse_pos[1]) * 0.25f;
g_pGraphicsManager->CameraRotateYaw(dx);
pre_mouse_pos[0] = ev->x_;
pre_mouse_pos[1] = ev->y_;
}
}
Using std::condition_variable is a good and efficient way to achieve what you want.
However, your implementation has the following issue:
std::condition_variable suffers from spurious wakeups. You can read about it here: Spurious wakeup - Wikipedia.
The correct way to use a condition variable requires:
Adding a variable (a bool in your case) to hold the "condition" you are waiting for. The variable should be updated while holding the mutex.
Again under the lock: calling wait in a loop until the variable satisfies the condition you are waiting for. If a spurious wakeup occurs, the loop ensures you get back into the waiting state. By the way, the wait method has an overload that takes a predicate for the condition and does the looping for you.
You can see some code examples here:
Condition variable examples.
A minimal sample that demonstrates the flow:
#include <thread>
#include <mutex>
#include <condition_variable>
#include <chrono>
std::mutex mtx;
std::condition_variable cond_var;
bool ready{ false };
void handler()
{
{
std::unique_lock<std::mutex> lck(mtx);
cond_var.wait(lck, []() { return ready; }); // will loop internally to handle spurious wakeups
}
// Handle data ...
}
int main()
{
std::thread t(handler);
// Prepare data ...
std::this_thread::sleep_for(std::chrono::seconds(3));
{
std::unique_lock<std::mutex> lck(mtx);
ready = true;
}
cond_var.notify_all();
t.join();
}
You could try using a semaphore for (possibly) better performance; a sketch follows below.
Instead of 2 threads, you could try using coroutines (standard or your own) for lower memory consumption. A thread needs its own stack, which is typically several megabytes; a coroutine may not need anything extra.
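For the semaphore suggestion, a minimal sketch; this assumes C++20, where std::binary_semaphore became available (on C++11/14/17 you would emulate one with a mutex and condition_variable, as shown elsewhere on this page):
#include <chrono>
#include <iostream>
#include <semaphore>
#include <thread>

std::binary_semaphore wake_up{0}; // starts unavailable

void delayed_func()
{
    // try_acquire_for returns true if release() happened, false on timeout
    if (wake_up.try_acquire_for(std::chrono::milliseconds(1000)))
        std::cout << "woken early, skipping the work\n";
    else
        std::cout << "timed out, doing the work\n";
}

int main()
{
    std::thread t(delayed_func);
    wake_up.release(); // abort the wait; omit this line to let it time out
    t.join();
}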
I have been trying to figure out std::condition_variables and I am particularly confused by wait() and whether to use notify_all or notify_one.
First, I've written some code and attached it below. Here's a short explanation: Collection is a class that holds onto a bunch of Counter objects. These Counter objects have a Counter::increment() method, which needs to be called on all the objects, over and over again. To speed everything up, Collection also maintains a thread pool to distribute the work over, and sends out all the work with its Collection::increment_all() method.
These threads don't need to communicate with each other, and there are usually many more Counter objects than there are threads. It's fine if one thread processes more Counters than the others, just as long as all the work gets done. Adding work to the queue is easy and only needs to be done in the "main" thread. As far as I can see, the only bad thing that can happen is if other methods (e.g. Collection::printCounts) are allowed to be called on the counters in the middle of the work being done.
#include <iostream>
#include <thread>
#include <vector>
#include <mutex>
#include <condition_variable>
#include <queue>
class Counter{
private:
int m_count;
public:
Counter() : m_count(0) {}
void increment() {
m_count ++;
}
int getCount() const { return m_count; }
};
class Collection{
public:
Collection(unsigned num_threads, unsigned num_counters)
: m_shutdown(false)
{
// start workers
for(size_t i = 0; i < num_threads; ++i){
m_threads.push_back(std::thread(&Collection::work, this));
}
// instantiate counters
for(size_t j = 0; j < num_counters; ++j){
m_counters.emplace_back();
}
}
~Collection()
{
m_shutdown = true;
for(auto& t : m_threads){
if(t.joinable()){
t.join();
}
}
}
void printCounts() {
// wait for work to be done
std::unique_lock<std::mutex> lk(m_mtx);
m_work_complete.wait(lk); // q2: do I need a while loop?
// print all current counters
for(const auto& cntr : m_counters){
std::cout << cntr.getCount() << ", ";
}
std::cout << "\n";
}
void increment_all()
{
std::unique_lock<std::mutex> lock(m_mtx);
m_work_complete.wait(lock);
for(size_t i = 0; i < m_counters.size(); ++i){
m_which_counters_have_work.push(i);
}
}
private:
void work()
{
while(!m_shutdown){
bool action = false;
unsigned which_counter;
{
std::unique_lock<std::mutex> lock(m_mtx);
if(m_which_counters_have_work.size()){
which_counter = m_which_counters_have_work.front();
m_which_counters_have_work.pop();
action = true;
}else{
m_work_complete.notify_one(); // q1: notify_all
}
}
if(action){
m_counters[which_counter].increment();
}
}
}
std::vector<Counter> m_counters;
std::vector<std::thread> m_threads;
std::condition_variable m_work_complete;
std::mutex m_mtx;
std::queue<unsigned> m_which_counters_have_work;
bool m_shutdown;
};
int main() {
int num_threads = std::thread::hardware_concurrency()-1;
int num_counters = 10;
Collection myCollection(num_threads, num_counters);
myCollection.printCounts();
myCollection.increment_all();
myCollection.printCounts();
myCollection.increment_all();
myCollection.printCounts();
return 0;
}
I compile this on Ubuntu 18.04 with g++ -std=c++17 -pthread thread_pool.cpp -o tp && ./tp. I think the code accomplishes all of those objectives, but a few questions remain:
I am using m_work_complete.wait(lk) to make sure the work is finished before I start printing all the new counts. Why do I sometimes see this written inside a while loop, or with a second argument as a lambda predicate function? These docs mention spurious wake ups. If a spurious wake up occurs, does that mean printCounts could prematurely print? If so, I don't want that. I just want to ensure the work queue is empty before I start using the numbers that should be there.
I am using m_work_complete.notify_all instead of m_work_complete.notify_one. I've read this thread, and I don't think it matters--only the main thread is going to be blocked by this. Is it faster to use notify_one just so the other threads don't have to worry about it?
std::condition_variable is not really a condition variable; it's more of a synchronization tool for reaching a certain condition. What that condition is, is up to the programmer, and it should still be checked after each condition_variable wake-up, since the wake-up can be spurious, or "too early", i.e. before the desired condition has been reached.
On POSIX systems, condition_variable::wait() delegates to pthread_cond_wait, which is susceptible to spurious wake-up (see "Condition Wait Semantics" in the Rationale section). On Linux, pthread_cond_wait is in turn implemented via a futex, which is again susceptible to spurious wake-up.
So yes you still need a flag (protected by the same mutex) or some other way to check that the work is actually complete. A convenient way to do this is by wrapping the check in a predicate and passing it to the wait() function, which would loop for you until the predicate is satisfied.
notify_all unblocks all threads waiting on the condition variable; notify_one unblocks just one (or at least one, to be precise). If there is more than one waiting thread, and they are equivalent (i.e. either one can handle the condition fully), and the condition is sufficient to let just one thread continue (as in submitting a work unit to a thread pool), then notify_one is more efficient, since it won't unblock other threads unnecessarily only for them to notice there is no work to be done and go back to waiting. If you only ever have one waiter, there is no difference between notify_one and notify_all.
It's pretty simple: use notify_one() when:
There is no reason why more than one thread needs to know about the event (e.g., use notify_one() to announce the availability of an item that a worker thread will "consume", thereby making the item unavailable to other workers), AND
There is no wrong thread that could be awakened (e.g., you're probably safe if all of the threads are wait()ing in the same line of the same exact function).
Use notify_all() in all other cases.
I am using C++11 and I have a std::thread which is a class member, and it sends information to listeners every 2 minutes. Other than that it just sleeps. So, I have made it sleep for 2 minutes, then send the required info, and then sleep for 2 minutes again.
// MyClass.hpp
class MyClass {
public:
    ~MyClass();
    void RunMyThread();
private:
    std::thread my_thread;
    std::atomic<bool> m_running;
};
void MyClass::RunMyThread() {
my_thread = std::thread { [this] {
m_running = true;
while(m_running) {
std::this_thread::sleep_for(std::chrono::minutes(2));
SendStatusInfo(some_info);
}
}};
}
// Destructor
MyClass::~MyClass() {
m_running = false; // this wont work as the thread is sleeping. How to exit thread here?
}
Issue:
The issue with this approach is that I cannot exit the thread while it is sleeping. I understand from reading that I can wake it using a std::condition_variable and exit gracefully? But I am struggling to find a simple example which does the bare minimum as required in above scenario. All the condition_variable examples I've found look too complex for what I am trying to do here.
Question:
How can I use a std::condition_variable to wake the thread and exit gracefully while it is sleeping? Or are there any other ways of achieving the same without the condition_variable technique?
Additionally, I see that I need to use a std::mutex in conjunction with std::condition_variable? Is that really necessary? Is it not possible to achieve the goal by adding the std::condition_variable logic only to required places in the code here?
Environment:
Linux and Unix with compilers gcc and clang.
How can I use an std::condition_variable to wake the thread and exit gracefully while it was sleeping? Or are there any other ways of achieving the same without condition_variable technique?
No, not in standard C++ as of C++17 (there are of course non-standard, platform-specific ways to do it, and it's likely some kind of semaphore will be added to C++2a).
Additionally, I see that I need to use a std::mutex in conjunction with std::condition_variable? Is that really necessary?
Yes.
Is it not possible to achieve the goal by adding the std::condition_variable logic only to required places in the code piece here?
No. For a start, you can't wait on a condition_variable without locking a mutex (and passing the lock object to the wait function) so you need to have a mutex present anyway. Since you have to have a mutex anyway, requiring both the waiter and the notifier to use that mutex isn't such a big deal.
Condition variables are subject to "spurious wake ups" which means they can stop waiting for no reason. In order to tell if it woke because it was notified, or woke spuriously, you need some state variable that is set by the notifying thread and read by the waiting thread. Because that variable is shared by multiple threads it needs to be accessed safely, which the mutex ensures.
Even if you use an atomic variable for the shared variable, you still typically need a mutex to avoid missed notifications.
This is all explained in more detail in
https://github.com/isocpp/CppCoreGuidelines/issues/554
A working example for you using std::condition_variable:
struct MyClass {
MyClass()
: my_thread([this]() { this->thread(); })
{}
~MyClass() {
{
std::lock_guard<std::mutex> l(m_);
stop_ = true;
}
c_.notify_one();
my_thread.join();
}
void thread() {
while(this->wait_for(std::chrono::minutes(2)))
SendStatusInfo(some_info);
}
// Returns false if stop_ == true.
template<class Duration>
bool wait_for(Duration duration) {
std::unique_lock<std::mutex> l(m_);
return !c_.wait_for(l, duration, [this]() { return stop_; });
}
std::condition_variable c_;
std::mutex m_;
bool stop_ = false;
std::thread my_thread;
};
How can I use an std::condition_variable to wake the thread and exit gracefully while it was sleeping?
You use std::condition_variable::wait_for() instead of std::this_thread::sleep_for(); the former can be interrupted by std::condition_variable::notify_one() or std::condition_variable::notify_all().
Additionally, I see that I need to use a std::mutex in conjunction with std::condition_variable? Is that really necessary? Is it not possible to achieve the goal by adding the std::condition_variable logic only to required places in the code piece here?
Yes, it is necessary to use a std::mutex with std::condition_variable, and you should use it instead of making your flag std::atomic: despite the atomicity of the flag itself, you would still have a race condition in your code, and you would notice that the sleeping thread sometimes misses the notification if you did not use a mutex here.
There is a sad but true fact: what you are looking for is a signal, and POSIX threads do not have a true signalling mechanism.
Also, the only POSIX threading primitive associated with any sort of timing is the condition variable; this is why your online search led you to it, and since the C++ threading model is heavily built on the POSIX API, POSIX-compatible primitives are all you get in standard C++.
Unless you are willing to go outside of POSIX (you do not indicate a platform, but there are native platform ways to work with events which are free from those limitations, notably eventfd on Linux), you will have to stick with condition variables, and yes, working with a condition variable requires a mutex, since that is built into the API.
Your question doesn't specifically ask for code sample, so I am not providing any. Let me know if you'd like some.
Additionally, I see that I need to use a std::mutex in conjunction with std::condition_variable? Is that really necessary? Is it not possible to achieve the goal by adding the std::condition_variable logic only to required places in the code piece here?
std::condition_variable is a low level primitive. Actually using it requires fiddling with other low level primitives as well.
#include <chrono>
#include <condition_variable>
#include <mutex>

struct timed_waiter {
void interrupt() {
auto l = lock();
interrupted = true;
cv.notify_all();
}
// returns false if interrupted
template<class Rep, class Period>
bool wait_for( std::chrono::duration<Rep, Period> how_long ) const {
auto l = lock();
return !cv.wait_until( l,
std::chrono::steady_clock::now() + how_long,
[&]{
return interrupted; // stop waiting once interrupt() has been called
}
);
}
private:
std::unique_lock<std::mutex> lock() const {
return std::unique_lock<std::mutex>(m);
}
mutable std::mutex m;
mutable std::condition_variable cv;
bool interrupted = false;
};
Simply create a timed_waiter somewhere both the thread(s) that want to wait and the code that wants to interrupt can see it.
The waiting threads do
while(m_timer.wait_for(std::chrono::minutes(2))) {
SendStatusInfo(some_info);
}
To interrupt, call m_timer.interrupt() (say, in the destructor), then my_thread.join() to let the thread finish.
Live example:
#include <iostream>
#include <thread>

struct MyClass {
~MyClass();
void RunMyThread();
private:
std::thread my_thread;
timed_waiter m_timer;
};
void MyClass::RunMyThread() {
my_thread = std::thread {
[this] {
while(m_timer.wait_for(std::chrono::seconds(2))) {
std::cout << "SendStatusInfo(some_info)\n";
}
}};
}
// Destructor
MyClass::~MyClass() {
std::cout << "~MyClass::MyClass\n";
m_timer.interrupt();
my_thread.join();
std::cout << "~MyClass::MyClass done\n";
}
int main() {
std::cout << "start of main\n";
{
MyClass x;
x.RunMyThread();
using namespace std::literals;
std::this_thread::sleep_for(11s);
}
std::cout << "end of main\n";
}
Or are there any other ways of achieving the same without the condition_variable technique?
You can use std::promise/std::future as a simpler alternative to a bool/condition_variable/mutex in this case. A future is not susceptible to spurious wakes and doesn't require a mutex for synchronisation.
Basic example:
std::promise<void> pr;
std::thread thr{[fut = pr.get_future()]{
while(true)
{
if(fut.wait_for(std::chrono::minutes(2)) != std::future_status::timeout)
return;
}
}};
//When ready to stop
pr.set_value();
thr.join();
Or are there any other ways of achieving the same without condition_variable technique?
One alternative to a condition variable is to wake your thread up at much more regular intervals to check the "running" flag, and go back to sleep if it is still set and the allotted time has not yet expired:
#include <algorithm>
#include <atomic>
#include <chrono>
#include <thread>

void periodically_call(std::atomic_bool& running, std::chrono::milliseconds wait_time)
{
auto wake_up = std::chrono::steady_clock::now();
while(running)
{
wake_up += wait_time; // next signal send time
while(std::chrono::steady_clock::now() < wake_up)
{
if(!running)
break;
// sleep for just 1/10 sec (maximum)
auto pre_wake_up = std::chrono::steady_clock::now() + std::chrono::milliseconds(100);
pre_wake_up = std::min(wake_up, pre_wake_up); // don't overshoot
// keep going to sleep here until full time
// has expired
std::this_thread::sleep_until(pre_wake_up);
}
SendStatusInfo(some_info); // do the regular call
}
}
Note: You can make the actual wait time anything you want. In this example I made it 100 ms (std::chrono::milliseconds(100)). It depends how responsive you want your thread to be to a signal to stop.
For example, in one application I made it a whole second, because I was happy for my application to wait a full second for all the threads to stop before it closed down on exit.
How responsive you need it to be is up to your application. The shorter the wake up times the more CPU it consumes. However even very short intervals of a few milliseconds will probably not register much in terms of CPU time.
You could also use a promise/future pair so that you don't need to bother with a condition variable and/or an explicit thread:
#include <future>
#include <iostream>
#include <chrono>
#include <thread>
struct MyClass {
~MyClass() {
_stop.set_value();
}
MyClass() {
auto future = std::shared_future<void>(_stop.get_future());
_thread_handle = std::async(std::launch::async, [future] () {
std::future_status status;
do {
status = future.wait_for(std::chrono::seconds(2));
if (status == std::future_status::timeout) {
std::cout << "do periodic things\n";
} else if (status == std::future_status::ready) {
std::cout << "exiting\n";
}
} while (status != std::future_status::ready);
});
}
private:
std::promise<void> _stop;
std::future<void> _thread_handle;
};
int main() {
MyClass c;
std::this_thread::sleep_for(std::chrono::seconds(9));
}
I want to implement a thread that can accept function pointers from a main thread and execute them serially. My idea was to use a struct that keeps the function pointer and its object, and keep pushing it to a queue. This can be encapsulated in a class. The task thread can then pop from the queue and process it. I also need to synchronize it (so it doesn't block the main thread?), so I was thinking of using a semaphore. Although I have a decent idea of the structure of the program, I am having trouble coding this up, especially the threading and semaphore synchronization in C++11. It'd be great if someone could suggest an outline by which I can go about implementing this.
EDIT: The duplicate question answers the question about creating a thread pool. It looks like multiple threads are being created to do some work. I only need one thread that can queue function pointers and process them in the order they are received.
Check this code snippet; I have implemented it without using a class, though. See if it helps a bit. The condition variable could be avoided here, but I want the reader thread to poll only when there is a signal from the writer, so that CPU cycles in the reader aren't wasted.
#include <iostream>
#include <functional>
#include <mutex>
#include <thread>
#include <queue>
#include <chrono>
#include <condition_variable>
#include <atomic>
using namespace std;
typedef function<void(void)> task_t;
queue<task_t> tasks;
mutex mu;
condition_variable cv;
atomic<bool> stop{false}; // written by main, read by both worker threads
void writer()
{
while(!stop)
{
{
unique_lock<mutex> lock(mu);
task_t task = [](){ this_thread::sleep_for(chrono::milliseconds(100ms)); };
tasks.push(task);
cv.notify_one();
}
this_thread::sleep_for(chrono::milliseconds(500ms)); // writes every 500ms
}
}
void reader()
{
while(!stop)
{
unique_lock<mutex> lock(mu);
cv.wait(lock, []() { return stop || !tasks.empty(); }); // wake when there is work or we are stopping
while( !tasks.empty() )
{
auto task = tasks.front();
tasks.pop();
lock.unlock();
task();
lock.lock();
}
}
}
int main()
{
thread writer_thread([]() { writer();} );
thread reader_thread([]() { reader();} );
this_thread::sleep_for(chrono::seconds(3s)); // main other task
{
    lock_guard<mutex> lock(mu);
    stop = true;
}
cv.notify_all(); // wake the reader so it can observe stop and exit
writer_thread.join();
reader_thread.join();
}
Your problem has 2 parts. Storing the list of jobs and manipulating the jobs list in a threadsafe way.
For the first part, look into std::function, std::bind, and std::ref.
For the second part, this is similar to the producer/consumer problem. You can implement a semaphore using std::mutex and std::condition_variable.
There's a hint/outline. Now my full answer...
Step 1)
Store your function pointers in a queue of std::function.
std::queue<std::function<void()>>
Each element in the queue is a function that takes no arguments and returns void.
For functions that take arguments, use std::bind to bind the arguments.
void testfunc(int n);
...
int mynum = 5;
std::function<void()> f = std::bind(testfunc, mynum);
When f is invoked, i.e. f(), 5 will be passed as argument 1 to testfunc. std::bind copies mynum by value immediately.
You probably will want to be able to pass variables by reference as well. This is useful for getting results back from functions as well as passing in shared synchronization devices like semaphores and conditions. Use std::ref, the reference wrapper.
void testfunc2(int& n); // function takes n by ref
...
int a = 5;
std::function<void()> f = std::bind(testfunc2, std::ref(a));
std::function and std::bind can work with any callables--functions, functors, or lambdas--which is pretty neat!
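A small self-contained example tying these pieces together (testfunc and testfunc2 follow the fragments above; the rest is illustrative):
#include <functional>
#include <iostream>
#include <queue>

void testfunc(int n)   { std::cout << "testfunc(" << n << ")\n"; }
void testfunc2(int& n) { n *= 2; } // modifies the caller's variable

int main()
{
    std::queue<std::function<void()>> jobs;

    int mynum = 5;
    int a = 5;

    jobs.push(std::bind(testfunc, mynum));        // argument copied now
    jobs.push(std::bind(testfunc2, std::ref(a))); // argument passed by reference
    jobs.push([] { std::cout << "a lambda works too\n"; });

    while (!jobs.empty())
    {
        jobs.front()(); // invoke the stored callable
        jobs.pop();
    }
    std::cout << "a is now " << a << "\n"; // prints 10
}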
Step 2)
A worker thread dequeues while the queue is non-empty. Your code should look similar to the producer/consumer problem.
class AsyncWorker
{
...
public:
// called by main thread
void AddJob(std::function<void()> f)
{
{
std::lock_guard<std::mutex> lock(m_mutex);
m_queue.push(std::move(f));
++m_numJobs;
}
m_condition.notify_one(); // It's good style to call notify_one when not holding the lock.
}
private:
void worker_main()
{
while(!m_exitCondition)
doJob();
}
void doJob()
{
std::function<void()> f;
{
std::unique_lock<std::mutex> lock(m_mutex);
while (m_numJobs == 0)
m_condition.wait(lock);
if (m_exitCondition)
return;
f = std::move(m_queue.front());
m_queue.pop();
--m_numJobs;
}
f();
}
...
Note 1: The synchronization code (with m_mutex, m_condition, and m_numJobs) is essentially what you have to use to implement a semaphore in C++11; a sketch follows after these notes. What I did here is more efficient than using a separate semaphore class because only 1 lock is locked. (A semaphore would have its own lock and you would still have to lock the shared queue).
Note 2: You can easily add additional worker threads.
Note 3: m_exitCondition in my example is an std::atomic<bool>
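To illustrate Note 1, a counting semaphore built from a mutex and a condition variable might look roughly like this (a sketch, separate from the AsyncWorker above):
#include <condition_variable>
#include <mutex>

class Semaphore
{
public:
    explicit Semaphore(int count = 0) : m_count(count) {}

    void release() // a.k.a. signal / V
    {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            ++m_count;
        }
        m_condition.notify_one();
    }

    void acquire() // a.k.a. wait / P
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_condition.wait(lock, [this] { return m_count > 0; }); // handles spurious wakeups
        --m_count;
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_condition;
    int m_count;
};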
Actually setting up the AddJob function in a polymorphic way gets into C++11 variadic templates and perfect forwarding...
class AsyncWorker
{
...
public:
// called by main thread
template <typename FUNCTOR, typename... ARGS>
void AddJob(FUNCTOR&& functor, ARGS&&... args)
{
std::function<void()> f(std::bind(std::forward<FUNCTOR>(functor), std::forward<ARGS>(args)...));
{
std::lock_guard<std::mutex> lock(m_mutex);
m_queue.push(std::move(f));
++m_numJobs;
}
m_condition.notify_one(); // It's good style to call notify_one when not holding the lock.
}
I think it may work if you just used pass-by-value instead of using the forwarding references, but I haven't tested this, while I know the perfect forwarding works great. Avoiding perfect forwarding may make the concept slightly less confusing but the code won't be much different...
I would like to write a class that wraps around std::thread and behaves like a std::thread, but without actually allocating a thread every time I need to process something asynchronously. The reason is that I need to use multithreading in a context where I'm not allowed to dynamically allocate, and I also don't want the overhead of creating a std::thread.
Instead, I want a thread to run in a loop and wait until it can start processing. The client calls invoke, which wakes up the thread. The thread locks a mutex, does its processing and falls asleep again. A function join behaves like std::thread::join by blocking until the thread frees the lock (i.e. falls asleep again).
I think I got the class to run, but because of a general lack of experience in multithreading, I would like to ask if anybody can spot race conditions, or whether the approach I used is considered "good style". For example, I'm not sure if temporarily locking the mutex is a decent way to "join" the thread.
EDIT
I found another race condition: when calling join directly after invoke, there is no guarantee that the thread has already locked the mutex, and thus that join blocks its caller until the thread goes back to sleep. To prevent this, I had to add a check on the invoke counter.
Header
#pragma once
#include <thread>
#include <atomic>
#include <mutex>
#include <condition_variable>
#include <functional>
class PersistentThread
{
public:
PersistentThread();
~PersistentThread();
// set function to invoke
// locks if thread is currently processing _func
void set(const std::function<void()> &f);
// wakes the thread up to process _func and fall asleep again
// locks if thread is currently processing _func
void invoke();
// mimics std::thread::join
// locks until the thread is finished with it's loop
void join();
private:
// intern thread loop
void loop(bool *initialized);
private:
bool _shutdownRequested{ false };
int _invokeCounter{ 0 }; // pending invoke() requests, used by join() and loop()
std::mutex _mutex;
std::unique_ptr<std::thread> _thread;
std::condition_variable _cond;
std::function<void()> _func{ nullptr };
};
Source File
#include "PersistentThread.h"
PersistentThread::PersistentThread()
{
auto lock = std::unique_lock<std::mutex>(_mutex);
bool initialized = false;
_thread = std::make_unique<std::thread>(&PersistentThread::loop, this, &initialized);
// wait until _thread notifies, check bool initialized to prevent spurious wakeups
_cond.wait(lock, [&] {return initialized; });
}
PersistentThread::~PersistentThread()
{
{
std::lock_guard<std::mutex> lock(_mutex);
_func = nullptr;
_shutdownRequested = true;
// wake up and let join
_cond.notify_one();
}
// join thread,
if (_thread->joinable())
{
_thread->join();
}
}
void PersistentThread::set(const std::function<void()>& f)
{
std::lock_guard<std::mutex> lock(_mutex);
this->_func = f;
}
void PersistentThread::invoke()
{
std::lock_guard<std::mutex> lock(_mutex);
_cond.notify_one();
}
void PersistentThread::join()
{
bool joined = false;
while (!joined)
{
std::lock_guard<std::mutex> lock(_mutex);
joined = (_invokeCounter == 0);
}
}
void PersistentThread::loop(bool *initialized)
{
std::unique_lock<std::mutex> lock(_mutex);
*initialized = true;
_cond.notify_one();
while (true)
{
// wait until we get the mutex again
_cond.wait(lock, [this] {return _shutdownRequested || (this->_invokeCounter > 0); });
// shut down if requested
if (_shutdownRequested) return;
// process
if (_func) _func();
_invokeCounter--;
}
}
You are asking about potential race conditions, and I see at least one race condition in the shown code.
After constructing a PersistentThread, there is no guarantee that the new thread will acquire its initial lock in its loop() before the main execution thread returns from the constructor and enters invoke(). It is possible that the main execution thread enters invoke() immediately after the constructor is complete, ends up notifying nobody, since the internal execution thread hasn't locked the mutex yet. As such, this invoke() will not result in any processing taking place.
You need to synchronize the completion of the constructor with the execution thread's initial lock acquisition.
EDIT: your revision looks right; but I also spotted another race condition.
As documented in the description of wait(), wait() may wake up "spuriously". Just because wait() returned, doesn't mean that some other thread has entered invoke().
You need a counter, in addition to everything else, with invoke() incrementing the counter, and the execution thread executing its assigned duties only when the counter is greater than zero, decrementing it. This will guard against spurious wake-ups.
I would also have the execution thread check the counter before entering wait(), and enter wait() only if it is 0. Otherwise, it decrements the counter, executes its function, and loops back.
This should plug up all the potential race conditions in this area.
P.S. The spurious wake-up also applies to the initial notification, in your correction, that the execution thread has entered the loop. You'll need to do something similar for that situation, too.
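A minimal sketch of those suggestions applied to a cut-down version of the class (member names follow the poster's code; this is an illustration, not the reviewed implementation):
#include <condition_variable>
#include <functional>
#include <mutex>
#include <thread>

class PersistentThreadSketch
{
public:
    PersistentThreadSketch() : _thread(&PersistentThreadSketch::loop, this) {}

    ~PersistentThreadSketch()
    {
        {
            std::lock_guard<std::mutex> lock(_mutex);
            _shutdownRequested = true;
        }
        _cond.notify_one();
        _thread.join();
    }

    void set(std::function<void()> f)
    {
        std::lock_guard<std::mutex> lock(_mutex);
        _func = std::move(f);
    }

    void invoke()
    {
        {
            std::lock_guard<std::mutex> lock(_mutex);
            ++_invokeCounter; // recorded under the lock, so it cannot be missed
        }
        _cond.notify_one();
    }

private:
    void loop()
    {
        std::unique_lock<std::mutex> lock(_mutex);
        while (true)
        {
            // Wait only while there is nothing to do; the predicate guards
            // against spurious wake-ups and against invoke() calls issued
            // before this thread first reached the wait.
            _cond.wait(lock, [this] { return _shutdownRequested || _invokeCounter > 0; });
            if (_shutdownRequested) return;
            if (_func) _func();
            --_invokeCounter;
        }
    }

    bool _shutdownRequested = false;
    int _invokeCounter = 0;
    std::mutex _mutex;
    std::condition_variable _cond;
    std::function<void()> _func;
    std::thread _thread; // declared last so it starts after the other members exist
};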
I don't understand exactly what you're trying to ask. The style you used is nice.
It would be much safer to have the individual routines return bool status values and to check them, because a void return tells you nothing and you could end up stuck because of a bug. Check everything you can, since the thread runs under the hood: make sure the calls run correctly and that the processing really succeeded. You could also read some material about "Thread Pooling".