std::future::wait_for is causing a deadlock - c++

I use std::promise and std::shared_future (the multiple-waiter counterpart of std::future) in my code for asynchronous execution, and I found a case where a call to wait_for() with 100 ms (or any other duration) never returns (deadlock).
This is the piece of code (from my class) that is supposed to block the thread until someone sets the value of the std::promise associated with this std::shared_future:
bool wait_until_ready(unsigned int msec)
{
    if (_ready)
        return true;
    std::chrono::milliseconds span(msec);
    std::future_status fstat;
    fstat = _shared_fut.wait_for(span);
    if (fstat == std::future_status::ready)
    {
        _shared_fut.get();
        return true;
    }
    return false;
}
void init()
{
    unique_lock<mutex> lck(_mutex);
    _promise = promise<bool>();
    _shared_fut = _promise.get_future().share();
    _ready = false;
}
void set_ready()
{
    unique_lock<mutex> lck(_mutex);
    if (!_ready)
    {
        _ready = true;
        _promise.set_value(true);
    }
}
The call to wait_for() consistently causes a deadlock (no exception is thrown).
Has anyone experienced this kind of behavior from std::future before?
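For reference (not part of the original post), here is a minimal, self-contained sketch of the promise/shared_future pattern described above; in isolation, wait_for() returns future_status::ready as soon as the promise's value is set:
#include <chrono>
#include <future>
#include <iostream>
#include <thread>

int main()
{
    std::promise<bool> prom;
    std::shared_future<bool> fut = prom.get_future().share();

    // Producer thread sets the value after 50 ms.
    std::thread producer([&prom] {
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
        prom.set_value(true);
    });

    // Consumer waits up to 100 ms, analogous to wait_until_ready(100).
    if (fut.wait_for(std::chrono::milliseconds(100)) == std::future_status::ready)
        std::cout << "ready: " << fut.get() << '\n';
    else
        std::cout << "timeout\n";

    producer.join();
}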

Related

Handle mutex lock in callback c++

I've got a Timer class that can run with both an initial time and an interval. An internal function, internalQuit, performs thread.join() before the thread is started again in resetCallback. The thing is that each public function takes its own std::lock_guard on the mutex to protect the data from concurrent writes. I'm now running into an issue where using the callback to, for example, stop the timer means the mutex cannot be locked by stop(). I'm hoping to get some help on how to tackle this issue.
class Timer
{
public:
    Timer(string_view identifier, Function &&timeoutHandler, Duration initTime, Duration intervalTime);
    void start();
    void stop() // for example
    {
        std::lock_guard lock{mutex};
        running = false;
        sleepCv.notify_all();
    }
    void setInitTime();
    void setIntervalTime();
    void resetCallback(Function &&timeoutHandler)
    {
        internalQuit();
        {
            std::lock_guard lock{mutex};
            quit = false;
        }
        startTimerThread(std::forward<Function>(timeoutHandler));
    }
private:
    void internalQuit() // performs thread join
    {
        {
            std::lock_guard lock{mutex};
            quit = true;
            running = false;
            sleepCv.notify_all();
        }
        thread.join();
    }
    void mainLoop(Function &&timeoutHandler)
    {
        while (!quit)
        {
            std::unique_lock lock{mutex};
            // wait for running with sleepCv.wait()
            // handle initTimer with sleepCv.wait_until()
            timeoutHandler(); // callback
            // handle intervalTimer with sleepCv.wait_until()
            timeoutHandler(); // callback
        }
    }
    void startTimerThread(Function &&timeoutHandler)
    {
        thread = std::thread([&, timeoutHandler = std::forward<Function>(timeoutHandler)]() {
            mainLoop(timeoutHandler);
        });
    }
    std::thread thread{};
    std::mutex mutex{};
    std::condition_variable sleepCv{};
    // initTime, intervalTime and some booleans for updating with sleepCv.notify_all();
};
For testing this, I have the following test case in GTest. I expect the timer to be stopped from within the callback. Unfortunately, the timer hangs on acquiring the mutex lock in the stop() function.
std::atomic<int> callbackCounter;
void timerCallback()
{
    callbackCounter.fetch_add(1, std::memory_order_acq_rel);
}
TEST(timerTest, timerShouldStopWhenStoppedInNewCallback)
{
    std::atomic<int> testCounter{0};
    Timer<std::chrono::steady_clock> t{"timerstop", &timerCallback, std::chrono::milliseconds(0), std::chrono::milliseconds(100)};
    t.resetCallback([&] {
        testCounter += 1;
        t.stop();
    });
    t.start();
    sleepMilliSeconds(100);
    ASSERT_EQ(testCounter.load(), 1); // trigger due to original interval timeout
    sleepMilliSeconds(100);
    ASSERT_EQ(testCounter.load(), 1); // no trigger, because stopped in new callback
}
Removing all the mutexes from the public functions fixes the issue, but that could lead to race conditions on the variables being written. Hence each function takes a lock before writing to, for example, the booleans.
I've tried looking into std::move to move the thread into a different variable during resetCallback and then call join on that one. I'm also investigating std::recursive_mutex, but I have no experience with it.
void resetCallback(Function &&timeoutHandler)
{
    internalQuit();
    {
        std::lock_guard lock{mutex};
        quit = false;
    }
    auto prevThread = std::thread(std::move(this->thread));
    // didn't know how to continue from here, requiring more self-study.
    startTimerThread(std::forward<Function>(timeoutHandler));
}
This is a new subject for me; I've worked with mutexes and timers before, but only on relatively simple things.
Thank you in advance.
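Not part of the original post, but to make the failure mode concrete: mainLoop() invokes the callback while the std::mutex is still held, so a callback that calls stop() tries to lock the same non-recursive mutex again, which is undefined behaviour and in practice hangs. Below is a minimal sketch of the usual workaround, releasing the lock around the callback invocation (MiniTimer and its members are hypothetical stand-ins, not the poster's class):
#include <chrono>
#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <thread>

// Hypothetical, minimal timer-like loop illustrating the pattern:
// the callback is invoked with the mutex released, so it may call stop().
struct MiniTimer
{
    void stop()
    {
        std::lock_guard<std::mutex> lock{mutex};
        running = false;
        cv.notify_all();
    }

    void run(const std::function<void()>& callback)
    {
        std::unique_lock<std::mutex> lock{mutex};
        while (running)
        {
            cv.wait_for(lock, std::chrono::milliseconds(10));
            if (!running)
                break;
            lock.unlock();   // release before running user code
            callback();      // the callback may call stop() without deadlocking
            lock.lock();     // re-acquire before touching shared state again
        }
    }

    std::mutex mutex;
    std::condition_variable cv;
    bool running = true;
};

int main()
{
    MiniTimer t;
    std::thread worker([&] { t.run([&] { std::cout << "tick\n"; t.stop(); }); });
    worker.join();
}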

What is C++'s equivalent of WinAPI's MsgWaitForMultipleObjectsEx?

I'm making the transition from using native Win32 API calls to manage my thread's message queue to using my own C++ code. I have encountered a question which I can't fully answer.
Given the following code snippet
LRESULT QueueConsumeThread()
{
    MSG msg = { 0 };
    HANDLE hHandles[] = { hHandle1, hHandle2 };
    while (true)
    {
        DWORD dwRes;
        switch (dwRes = ::MsgWaitForMultipleObjects(_countof(hHandles), hHandles, FALSE, INFINITE, QS_ALLEVENTS))
        {
        case WAIT_OBJECT_0:
            DoSomething();
            break;
        case WAIT_OBJECT_0 + 1:
            DoSomething2();
            break;
        case WAIT_OBJECT_0 + _countof(hHandles):
            ATLASSERT(msg.message == WM_QUIT);
            return 1;
        }
    }
    return 1;
}
I have read in many sources that a particular thread should be associated with a single condition_variable, and also that using multiple condition_variables, or invoking wait_for() or wait_until(), doesn't sound very efficient.
The following source suggested implementing a safe_queue using condition variables. I guess that PeekMessage/GetMessage/MsgWaitForMultipleObjects works similarly, but what kind of data should each cell of the queue hold so that it can receive event signals?
Edit: I'm asking this as I have to write a cross-platform application.
Unlike Windows synchronization events (which can remain in a signalled state), std::condition_variable is decoupled from the state it guards. So the most natural approach is to define several condition flags and to wait on/signal them through a single condition_variable:
// Waiting side:
std::unique_lock<std::mutex> lock(m);
cv.wait(lock, []{ return ready1 || ready2 || ready3; });
if (ready1) { ... }
if (ready2) { ... }
if (ready3) { ... }

// Signalling side:
std::unique_lock<std::mutex> lock(m);
ready1 = true;
cv.notify_one();
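A slightly fuller sketch of the same idea, with the declarations filled in (the flag names and the event-handling bodies are assumptions, not part of the original answer):
#include <condition_variable>
#include <mutex>

std::mutex m;
std::condition_variable cv;
bool ready1 = false, ready2 = false, ready3 = false;

void consumer()
{
    std::unique_lock<std::mutex> lock(m);
    cv.wait(lock, [] { return ready1 || ready2 || ready3; });
    if (ready1) { ready1 = false; /* handle event 1 */ }
    if (ready2) { ready2 = false; /* handle event 2 */ }
    if (ready3) { ready3 = false; /* handle event 3 */ }
}

void signal_event1()
{
    {
        std::lock_guard<std::mutex> lock(m);
        ready1 = true;
    }
    cv.notify_one();
}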

Using a single Condition Variable to pause multiple threads

I have a program that starts N threads (std::async/std::future). I want the main thread to set up some data, then have all the threads run while the main thread waits for them to finish, and then loop the whole thing.
What I have at the moment is something like this:
int main()
{
    //Start N new threads (std::future/std::async)
    while (condition)
    {
        //Set Up Data Here
        //Send Data to threads
        {
            std::lock_guard<std::mutex> lock(mrun);
            bRun = true;
        }
        run.notify_all();
        //Wait for threads
        {
            std::unique_lock<std::mutex> lock(mrun);
            run.wait(lock, [] { return bDone; });
        }
        //Reset bools
        bRun = false;
        bDone = false;
    }
    //Get results from futures once complete
}
int thread()
{
    while (otherCondition)
    {
        std::unique_lock<std::mutex> lock(mrun);
        run.wait(lock, [] { return bRun; });
        bDone = true;
        //Do thread stuff here
        lock.unlock();
        run.notify_all();
    }
}
But I can't see any sign of either the main thread or the worker threads waiting for each other! Any idea what I am doing wrong, or how I can do this?
There are a couple of problems. First, you're setting bDone as soon as the first worker wakes up. Thus the main thread wakes immediately and begins readying the next data set. You want to have the main thread wait until all workers have finished processing their data. Second, when a worker finishes processing, it loops around and immediately checks bRun. But it can't tell if bRun == true means that the next data set is ready or if the last data set is ready. You want to wait for the next data set.
Something like this should work:
std::mutex mrun;
std::condition_variable dataReady;
std::condition_variable workComplete;
int nCurrentIteration = 0;
int nWorkerCount = 0;

int main()
{
    //Start N new threads (std::future/std::async)
    while (condition)
    {
        //Set Up Data Here
        //Send Data to threads
        {
            std::lock_guard<std::mutex> lock(mrun);
            nWorkerCount = N;
            ++nCurrentIteration;
        }
        dataReady.notify_all();
        //Wait for threads
        {
            std::unique_lock<std::mutex> lock(mrun);
            workComplete.wait(lock, [] { return nWorkerCount == 0; });
        }
    }
    //Get results from futures once complete
}
int thread()
{
    int nNextIteration = 1;
    while (otherCondition)
    {
        std::unique_lock<std::mutex> lock(mrun);
        dataReady.wait(lock, [&nNextIteration] { return nCurrentIteration == nNextIteration; });
        lock.unlock();
        ++nNextIteration;
        //Do thread stuff here
        lock.lock();
        if (--nWorkerCount == 0)
        {
            lock.unlock();
            workComplete.notify_one();
        }
    }
}
Be aware that this solution isn't quite complete. If a worker encounters an exception, then the main thread will hang (because the dead worker will never reduce nWorkerCount). You'll likely need a strategy to deal with that scenario.
Incidentally, this pattern is called a barrier.
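Not mentioned in the original answer, but if C++20 is available, std::barrier implements this pattern directly and replaces the hand-rolled bookkeeping above (the worker count, the iteration counter, and the two condition variables). A rough sketch, with the worker and iteration counts chosen arbitrarily:
#include <barrier>
#include <thread>
#include <vector>

int main()
{
    constexpr int N = 4;
    // N workers plus the main thread participate in each phase.
    std::barrier sync_point(N + 1);

    std::vector<std::jthread> workers;
    for (int i = 0; i < N; ++i)
        workers.emplace_back([&] {
            for (int iter = 0; iter < 3; ++iter)
            {
                sync_point.arrive_and_wait();  // wait for main to publish data
                // ... process data ...
                sync_point.arrive_and_wait();  // report completion of this iteration
            }
        });

    for (int iter = 0; iter < 3; ++iter)
    {
        // ... set up data ...
        sync_point.arrive_and_wait();  // release the workers
        sync_point.arrive_and_wait();  // wait until all workers are done
    }
}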

A race condition in a custom implementation of recursive mutex

UPD: It seems that the problem I describe below is non-existent. I have not been able to reproduce it for a week now, and I have started to suspect that it was caused by a compiler bug or by corrupted memory, because it no longer reproduces.
I tried to implement my own recursive mutex in C++, but for some reason it fails. I tried to debug it, but I got stuck. (I know there is a recursive mutex in std, but I need a custom implementation in a project where the STL is not available; this implementation was just a check of an idea.) I haven't thought about efficiency yet, but I don't understand why my straightforward implementation doesn't work.
First of all, here's the implementation of the RecursiveMutex:
class RecursiveMutex
{
    std::mutex critical_section;
    std::condition_variable cv;
    std::thread::id id;
    int recursive_calls = 0;
public:
    void lock() {
        auto thread = std::this_thread::get_id();
        std::unique_lock<std::mutex> lock(critical_section);
        cv.wait(lock, [this, thread]() {
            return id == thread || recursive_calls == 0;
        });
        ++recursive_calls;
        id = thread;
    }
    void unlock() {
        std::unique_lock<std::mutex> lock(critical_section);
        --recursive_calls;
        if (recursive_calls == 0) {
            lock.unlock();
            cv.notify_all();
        }
    }
};
The failing test is straightforward: it just runs two threads, both of which lock and unlock the same mutex (the recursive nature of the mutex is not exercised here). Here it is:
std::vector<std::thread> threads;
void initThreads(int num_of_threads, std::function<void()> func)
{
    threads.resize(num_of_threads);
    for (auto& thread : threads)
    {
        thread = std::thread(func);
    }
}
void waitThreads()
{
    for (auto& thread : threads)
    {
        thread.join();
    }
}
void test() {
    RecursiveMutex mutex;
    while (true) {
        int count = 0;
        initThreads(2, [&mutex, &count] () {
            for (int i = 0; i < 100000; ++i) {
                try {
                    mutex.lock();
                    ++count;
                    mutex.unlock();
                }
                catch (...) {
                    // Extremely rarely.
                    // Exception is "Operation not permitted"
                    assert(false);
                }
            }
        });
        waitThreads();
        // Happens often
        assert(count == 200000);
    }
}
With this code I see two kinds of errors:
Extremely rarely, I get an exception from RecursiveMutex::lock() with the message "Operation not permitted", thrown from cv.wait. As far as I understand, this exception is thrown when wait is called with a mutex that is not owned by the calling thread. But I lock the mutex right before calling wait, so this cannot be the case.
In most cases I just get the message "terminate called without an active exception" in the console.
My main question is what the bug is, but I would also be happy to learn how to debug and provoke race conditions in code like this in general.
P.S. I use Desktop Qt 5.4.2 MinGW 32 bit.
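Not part of the original question, but regarding "how to debug and provoke race conditions": one common technique is to widen the timing windows by running many more threads than cores and inserting yields around the operations under test. A rough sketch (the thread and iteration counts are arbitrary, and the RecursiveMutex class from above is assumed to be in scope):
#include <cassert>
#include <thread>
#include <vector>

void stress_test()
{
    RecursiveMutex mutex;
    long long count = 0;
    const int num_threads = 16;      // deliberately more threads than cores
    const int iterations = 100000;

    std::vector<std::thread> workers;
    for (int t = 0; t < num_threads; ++t)
        workers.emplace_back([&] {
            for (int i = 0; i < iterations; ++i)
            {
                mutex.lock();
                ++count;
                std::this_thread::yield();   // widen the window inside the lock
                mutex.unlock();
                std::this_thread::yield();   // and between lock/unlock pairs
            }
        });

    for (auto& th : workers)
        th.join();

    assert(count == static_cast<long long>(num_threads) * iterations);
}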

Win32 alternative to pthread

Is it possible to write this using standard Win32 CreateMutex-style code? I am just wondering whether I should introduce a new library into our application or whether I can find a way to write this myself. I just can't figure out how to do the wait inside a CriticalSection. This is my current working code with the pthread library.
T remove() {
    pthread_mutex_lock(&m_mutex);
    while (m_queue.size() == 0) {
        pthread_cond_wait(&m_condv, &m_mutex);
    }
    T item = m_queue.front();
    m_queue.pop_front();
    pthread_mutex_unlock(&m_mutex);
    return item;
}
For pre-VC-2012 support, the best alternative is Boost.Thread, which supports condition variables.
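For completeness (not part of the original answer), here is the same blocking remove() written against the C++11 standard primitives, which VC 2012 and later (or Boost.Thread, with the boost:: spelling) provide; the member names mirror the pthread version above:
#include <condition_variable>
#include <deque>
#include <mutex>

template <typename T>
class BlockingQueue
{
public:
    T remove()
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_condv.wait(lock, [this] { return !m_queue.empty(); });
        T item = m_queue.front();
        m_queue.pop_front();
        return item;
    }

    void add(T item)
    {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_queue.push_back(std::move(item));
        }
        m_condv.notify_one();
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_condv;
    std::deque<T> m_queue;
};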
Here's my attempt. This is not the best implementation of a conditional wait lock in win32, but I think it works. It could use careful code review scrutiny.
One caveat - it doesn't necessarily guarantee ordered fairness, since all the waiting threads may be initially blocked waiting for the event. The scheduler will resume all of those threads at that point (up to the subsequent blocking EnterCriticalSection call), but not necessarily in the same order in which the threads arrived at the remove() call to begin with. This likely isn't a big deal for most apps with only a handful of threads, but it's something most threading frameworks guarantee.
Other caveat - for brevity, I'm leaving out the important steps of checking the return value from all of these Win32 APIs.
CRITICAL_SECTION m_cs;
HANDLE m_event;

void Init()
{
    InitializeCriticalSection(&m_cs);
    m_event = CreateEvent(NULL, TRUE, FALSE, NULL); // manual reset event
}

void UnInit()
{
    DeleteCriticalSection(&m_cs);
    CloseHandle(m_event);
    m_event = NULL;
}

T remove()
{
    T item;
    bool fGotItem = false;
    while (fGotItem == false)
    {
        // wait for event to be signaled
        WaitForSingleObject(m_event, INFINITE);
        // wait for mutex to become available
        EnterCriticalSection(&m_cs);
        // inside critical section
        {
            // try to dequeue something - it's possible that the queue is empty because another
            // thread pre-empted us and got the last item in the queue before us
            size_t queue_size = m_queue.size();
            if (queue_size == 1)
            {
                // the queue is about to go empty
                ResetEvent(m_event);
            }
            if (queue_size > 0)
            {
                fGotItem = true;
                item = m_queue.front();
                m_queue.pop_front();
            }
        }
        LeaveCriticalSection(&m_cs);
    }
    return item;
}

void Add(T& item)
{
    // wait for critical section to become available
    EnterCriticalSection(&m_cs);
    // inside critical section
    {
        m_queue.push_back(item);
        SetEvent(m_event); // signal other threads that something is available
    }
    LeaveCriticalSection(&m_cs);
}
Windows Vista introduced native Win32 Condition Variable and Slim Reader/Writer Lock primitives for exactly this type of scenario, for example:
Using a critical section:
CRITICAL_SECTION m_cs;
CONDITION_VARIABLE m_condv;

InitializeCriticalSection(&m_cs);
InitializeConditionVariable(&m_condv);
...

void add(T item)
{
    EnterCriticalSection(&m_cs);
    m_queue.push_back(item);
    LeaveCriticalSection(&m_cs);
    WakeConditionVariable(&m_condv);
}

T remove()
{
    EnterCriticalSection(&m_cs);
    while (m_queue.size() == 0)
        SleepConditionVariableCS(&m_condv, &m_cs, INFINITE);
    T item = m_queue.front();
    m_queue.pop_front();
    LeaveCriticalSection(&m_cs);
    return item;
}
Using an SRW lock:
SRWLOCK m_lock;
CONDITION_VARIABLE m_condv;

InitializeSRWLock(&m_lock);
InitializeConditionVariable(&m_condv);
...

void add(T item)
{
    AcquireSRWLockExclusive(&m_lock);
    m_queue.push_back(item);
    ReleaseSRWLockExclusive(&m_lock);
    WakeConditionVariable(&m_condv);
}

T remove()
{
    AcquireSRWLockExclusive(&m_lock);
    while (m_queue.size() == 0)
        SleepConditionVariableSRW(&m_condv, &m_lock, INFINITE, 0);
    T item = m_queue.front();
    m_queue.pop_front();
    ReleaseSRWLockExclusive(&m_lock);
    return item;
}