I have a method:
void move_robot(const vector<vector<double> > &map) {
    // accumulate the footprint while the robot moves
    // Iterate through the path
    std::unique_lock<std::mutex> lck(mtx);
    for (unsigned int i=1; i < map.size(); i++) {
        while (distance(position, map[i]) > DISTANCE_TOLERANCE) {
            this->direction = unitary_vector(map[i], this->position);
            this->next_step();
            lck.unlock();
            this_thread::sleep_for(chrono::milliseconds(10)); // sleep for 10 ms
            lck.lock();
        }
        std::cout << "New position is x:" << this->position[0] << " and y:" << this->position[1] << std::endl;
    }
    this->moving = false;
    // notify to end
}
When the sleep and locks are included I get:
ASM generation compiler returned: 0
Execution build compiler returned: 0
Program returned: 143
Killed - processing time exceeded
However, if I comment out all the locks and the this_thread::sleep_for call, it works as expected.
I need the locks because I am dealing with other threads. The complete code is this one: https://godbolt.org/z/7ErjrG
I am quite stuck because the output does not provide much information.
You have not posted the code of next_step or the definition of mtx; this is important information.
std::mutex mtx;
void next_step() {
std::unique_lock<std::mutex> lck(mtx);
this->position[0] += DT * this->direction[0];
this->position[1] += DT * this->direction[1];
}
If you read the documentation for std::mutex, you will find:
A calling thread must not own the mutex prior to calling lock or try_lock.
And for std::unique_lock:
Locks the associated mutex by calling m.lock(). The behavior is undefined if the current thread already owns the mutex except when the mutex is recursive.
next_step, called from move_robot, violates this: it tries to lock a mutex that the calling thread already owns.
The related question here is Can unique_lock be used with a recursive_mutex?. There you get the fix:
std::recursive_mutex mtx;
std::unique_lock<std::recursive_mutex> lck(mtx);
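For illustration, a minimal stand-alone sketch (not the code from the question) showing that a std::recursive_mutex can be re-locked by the thread that already owns it, which is exactly the situation of move_robot calling next_step:
#include <mutex>
#include <thread>

std::recursive_mutex mtx;

void inner() {
    // The same thread locks again: fine with a recursive mutex, UB with std::mutex.
    std::unique_lock<std::recursive_mutex> lck(mtx);
}

void outer() {
    std::unique_lock<std::recursive_mutex> lck(mtx);
    inner();   // nested lock, analogous to move_robot calling next_step
}

int main() {
    std::thread t(outer);
    t.join();
}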
Related
Let's say I have one thread that continuously updates a certain object. During the update, the object must be locked for thread safety.
The second thread is more of an event-driven operation: if such a thread is spawned, I'd like the running update to finish its call and then immediately perform the event operation.
What I absolutely want to avoid is a situation where the event thread has to wait until it happens to be scheduled at a moment when the update thread is not holding the lock on the data it needs.
Is there any way I could use the threading/mutex tools in C++ to accomplish this? Or should I store the to-be-done operation in an unlocked variable and perform the operation on the update thread?
//// System.h
#pragma once
#include <mutex>
#include <iostream>
#include <chrono>
#include <thread>
class System {
private:
int state = 0;
std::mutex mutex;
public:
void update();
void reset(int e);
};
//////// System.cpp
#include "System.h"
void System::update() {
std::lock_guard<std::mutex> guard(mutex);
state++;
std::cout << state << std::endl;
std::this_thread::sleep_for(std::chrono::seconds(1));
}
void System::reset(int e) {
std::lock_guard<std::mutex> guard(mutex);
state = e;
std::cout << state << std::endl;
}
////// ThreadTest.h
#pragma once
#include <iostream>
#include "System.h"
void loop_update(System& system);
void reset_system(System& system);
int main();
////// ThreadTest.cpp
#include "ThreadTest.h"
void loop_update(System& system) {
while (true) system.update();
};
void reset_system(System& system) {
system.reset(0);
};
int main()
{
System system;
std::thread t1 = std::thread(loop_update, std::ref(system));
int reset = 0;
while (true) {
std::this_thread::sleep_for(std::chrono::seconds(10));
std::cout << "Reset" << std::endl;
reset_system(system);
}
}
The example gives the following output. You can clearly see a huge delay before the reset actually takes effect.
1
...
10
Reset
11
...
16
0
1
...
10
Reset
11
...
43
0
1
If I understand you correctly, you have two threads using the same mutex, but you want one thread to get higher priority than the other when acquiring the lock.
As far as I know, there is no way to guarantee such a preference using the standard tools alone. You can work around it if you don't mind the code of both threads knowing about it.
For example:
std::atomic<int> shouldPriorityThreadRun{0};
auto priorityThreadCode = [&shouldPriorityThreadRun](){
++shouldPriorityThreadRun;
auto lock = std::unique_lock{mutex};
doMyStuff();
--shouldPriorityThreadRun;
};
auto backgroundThreadCode = [&shouldPriorityThreadRun](){
while (true)
{
if (shouldPriorityThreadRun == 0)
{
auto lock = std::unique_lock{mutex};
doBackgroundStuff();
}
else
std::this_thread::yield();
}
};
If you have multiple priority threads, those can't have priority over each other.
If you don't like the yield, you could do fancier stuff with std::condition_variable, so you can inform other threads that the mutex is available. However, I believe it's good enough.
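For completeness, a self-contained sketch of how the two lambdas above could be wired up; the shared data, the mutex and the workloads are made-up stand-ins, not names from the question:
#include <atomic>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex mutex;        // stand-in for the mutex both threads share
int shared_value = 0;    // stand-in for the protected data

int main() {
    std::atomic<int> shouldPriorityThreadRun{0};

    std::thread background([&]() {
        for (int i = 0; i < 100000; ++i) {
            if (shouldPriorityThreadRun == 0) {
                std::lock_guard<std::mutex> lock(mutex);
                ++shared_value;                 // background work ("doBackgroundStuff")
            } else {
                std::this_thread::yield();      // step aside for the priority thread
            }
        }
    });

    std::thread priority([&]() {
        ++shouldPriorityThreadRun;              // announce intent before blocking on the mutex
        {
            std::lock_guard<std::mutex> lock(mutex);
            shared_value = 0;                   // priority work ("doMyStuff")
        }
        --shouldPriorityThreadRun;
    });

    priority.join();
    background.join();
    std::cout << shared_value << '\n';
}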
It should already work with your current approach.
The mutex is locking concurrent access to your data, so you can lock it within the first thread to update the data.
When the event routine / your second thread comes to execution, it always has to check whether the mutex is unlocked. Only if the mutex is unlocked can it lock the mutex and perform the tasks of the second thread.
If I understand your code correctly (I am not a C++ expert), the std::lock_guard<std::mutex> guard(mutex); seems to be locking the mutex for the entire duration of the update function...
And therefore other threads barely get a chance to acquire the mutex.
When the update thread finishes its job, it needs to unlock the mutex before entering the sleep state; then the reset thread gets a chance to take the lock without any delay. I also tried running your code on my machine and observed that it is still waiting for the lock; I don't know when it gets lucky enough to take it. I think in this case the ordering is simply unspecified.
2
3
4
5
6
7
8
9
10
Reset
11
12
13
14
15
16
17
18...
void System::update() {
    mutex.lock();
    state++;
    std::cout << state << std::endl;
    mutex.unlock();                                        // release before sleeping so reset() is not blocked
    std::this_thread::sleep_for(std::chrono::seconds(1));  // sleep outside the critical section
}
void System::reset(int e) {
    mutex.lock();
    state = e;
    std::cout << state << std::endl;
    mutex.unlock();
}
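Equivalently, the lock_guard from the original code can be kept but confined to a smaller scope, so the mutex is still released before the sleep (a sketch of the same idea):
void System::update() {
    {
        std::lock_guard<std::mutex> guard(mutex);           // hold the lock only while touching state
        state++;
        std::cout << state << std::endl;
    }                                                       // guard released here
    std::this_thread::sleep_for(std::chrono::seconds(1));   // sleep outside the critical section
}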
UPD: It seems that the problem I describe below is non-existent. I have not been able to reproduce it for a week now, and I have started to suspect that it was caused by a compiler bug or corrupted memory, because it no longer reproduces.
I tried to implement my own recursive mutex in C++, but for some reason it fails. I tried to debug it, but I got stuck. (I know that std provides a recursive mutex, but I need a custom implementation in a project where the standard library is not available; this implementation was just a proof of concept.) I haven't thought about efficiency yet, but I don't understand why my straightforward implementation doesn't work.
First of all, here's the implementation of the RecursiveMutex:
class RecursiveMutex
{
std::mutex critical_section;
std::condition_variable cv;
std::thread::id id;
int recursive_calls = 0;
public:
void lock() {
auto thread = std::this_thread::get_id();
std::unique_lock<std::mutex> lock(critical_section);
cv.wait( lock, [this, thread]() {
return id == thread || recursive_calls == 0;
});
++recursive_calls;
id = thread;
}
void unlock() {
std::unique_lock<std::mutex> lock( critical_section );
--recursive_calls;
if( recursive_calls == 0 ) {
lock.unlock();
cv.notify_all();
}
}
};
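As a quick aside (an illustration only, not part of the original post), the recursive behaviour itself could be exercised with something like this, using only the lock/unlock members defined above:
// Hypothetical nested-lock check: the same thread locks twice, then unlocks twice.
void nested_lock_demo(RecursiveMutex& m) {
    m.lock();
    m.lock();    // must not block: the calling thread already owns the mutex
    m.unlock();
    m.unlock();  // only after the outermost unlock may another thread acquire it
}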
The failing test is straightforward: it just runs two threads, both locking and unlocking the same mutex (the recursive nature of the mutex is not tested here). Here it is:
std::vector<std::thread> threads;
void initThreads( int num_of_threads, std::function<void()> func )
{
threads.resize( num_of_threads );
for( auto& thread : threads )
{
thread = std::thread( func );
}
}
void waitThreads()
{
for( auto& thread : threads )
{
thread.join();
}
}
void test () {
RecursiveMutex mutex;
while (true) {
int count = 0;
initThreads(2, [&mutex, &count] () {
for( int i = 0; i < 100000; ++i ) {
try {
mutex.lock();
++count;
mutex.unlock();
}
catch (...) {
// Extremely rarely.
// Exception is "Operation not permitted"
assert(false);
}
}
});
waitThreads();
// Happens often
assert(count == 200000);
}
}
In this code I have two kinds of errors:
Extremely rarely, I get an exception in RecursiveMutex::lock() with the message "Operation not permitted", thrown from cv.wait. As far as I understand, this exception is thrown when wait is called with a mutex that is not owned by the calling thread. At the same time, I lock it just above the call to wait, so this cannot be the case.
In most cases I just get the message "terminate called without an active exception" in the console.
My main question is what the bug is, but I'll also be happy to learn how to debug and provoke race conditions in code like this in general.
P.S. I use Desktop Qt 5.4.2 MinGW 32 bit.
I'm using a std::timed_mutex for the first time and it's not behaving the way I expect. It appears to fail immediately instead of waiting for the mutex. I'm providing the lock timeout in milliseconds (as shown here http://www.cplusplus.com/reference/mutex/timed_mutex/try_lock_for/). But the call to try_lock_for() fails right away.
Here's the class that handles locking and unlocking the mutex:
const unsigned int DEFAULT_MUTEX_WAIT_TIME_MS = 5 * 60 * 1000;
class ScopedTimedMutexLock
{
public:
ScopedTimedMutexLock(std::timed_mutex* sourceMutex, unsigned int numWaitMilliseconds=DEFAULT_MUTEX_WAIT_TIME_MS)
    : m_mutex(sourceMutex)
{
if( !m_mutex->try_lock_for( std::chrono::milliseconds(numWaitMilliseconds) ) )
{
std::string message = "Timeout attempting to acquire mutex lock for ";
message += Conversion::toString(numWaitMilliseconds);
message += "ms";
throw MutexException(message);
}
}
~ScopedTimedMutexLock()
{
m_mutex->unlock();
}
private:
std::timed_mutex* m_mutex;
};
And this is where it's being used:
void CommandService::Process( RequestType& request )
{
unsigned long callTime =
std::chrono::duration_cast< std::chrono::milliseconds >(
std::chrono::system_clock::now().time_since_epoch()
).count();
try
{
ScopedTimedMutexLock lock( m_classMutex, request.getLockWaitTimeMs(DEFAULT_MUTEX_WAIT_TIME_MS) );
// ... command processing code goes here
}
catch( MutexException& mutexException )
{
unsigned long catchTime =
std::chrono::duration_cast< std::chrono::milliseconds >(
std::chrono::system_clock::now().time_since_epoch()
).count();
cout << "The following error occured while attempting to process command"
<< "\n call time: " << callTime
<< "\n catch time: " << catchTime;
cout << mutexException.description();
}
}
Here's the console output:
The following error occured while attempting to process command
call time: 1131268914
catch time: 1131268914
Timeout attempting to acquire mutex lock for 300000ms
Any idea where this is going wrong? Is the conversion to std::chrono::milliseconds correct? How do I make try_lock_for() wait for the lock?
ADDITIONAL INFO: The call to try_lock_for() didn't always fail immediately. Many times the call acquired the lock and everything worked as expected. The failures I was seeing were intermittent. See my answer below for details about why this was failing.
The root cause of the problem is mentioned in the description for try_lock_for() at http://en.cppreference.com/w/cpp/thread/timed_mutex/try_lock_for. Near the end of the description it says:
As with try_lock(), this function is allowed to fail spuriously and
return false even if the mutex was not locked by any other thread at
some point during timeout_duration.
I naively assumed there were only two possible outcomes: (1) the function acquires the lock within the time period, or (2) the function fails after the wait time has elapsed. But there is another possibility, (3) the function fails after a relatively short time for no specified reason. TL;DR, my bad.
I solved the problem by rewriting the ScopedTimedMutexLock constructor to loop on try_lock() until the lock is acquired or the wait time limit is exceeded.
ScopedTimedMutexLock(std::timed_mutex* sourceMutex, unsigned int numWaitMilliseconds=DEFAULT_MUTEX_WAIT_TIME_MS)
    : m_mutex(sourceMutex)
{
const unsigned SLEEP_TIME_MS = 5;
bool isLocked = false;
unsigned long startMS = now();
while( now() - startMS < numWaitMilliseconds && !isLocked )
{
isLocked = m_mutex->try_lock();
if( !isLocked )
{
std::this_thread::sleep_for(
std::chrono::milliseconds(SLEEP_TIME_MS));
}
}
if( !isLocked )
{
std::string message = "Timeout attempting to acquire mutex lock for ";
message += Conversion::toString(numWaitMilliseconds);
message += "ms";
throw MutexException(message);
}
}
Where now() is defined like this:
private:
unsigned long now() {
return std::chrono::duration_cast< std::chrono::milliseconds >(
std::chrono::system_clock::now().time_since_epoch() ).count();
}
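An alternative, untested sketch of the same idea: keep the timed lock call but retry against an absolute deadline measured with std::chrono::steady_clock, so a spurious failure simply triggers another attempt. It reuses the MutexException and DEFAULT_MUTEX_WAIT_TIME_MS names from the code above.
ScopedTimedMutexLock(std::timed_mutex* sourceMutex,
                     unsigned int numWaitMilliseconds = DEFAULT_MUTEX_WAIT_TIME_MS)
    : m_mutex(sourceMutex)
{
    const auto deadline = std::chrono::steady_clock::now()
                        + std::chrono::milliseconds(numWaitMilliseconds);
    bool isLocked = m_mutex->try_lock_until(deadline);
    while( !isLocked && std::chrono::steady_clock::now() < deadline )
    {
        isLocked = m_mutex->try_lock_until(deadline);  // retry after a spurious failure
    }
    if( !isLocked )
    {
        throw MutexException("Timeout attempting to acquire mutex lock");
    }
}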
Just a little bump-up for those who come late. And many thanks for the help!
Got much the same behavior (only with std::shared_timed_mutex). After some digging I found out that both try_lock_for() and try_lock_until() fail immediately if the same thread already holds an exclusive lock on the mutex, basically saving you the time of testing broken code.
Tested with gcc-9, gcc-10, clang-10 and clang-12.
Did NOT test other possible combinations, such as requesting an exclusive lock on top of a shared lock, or requesting a shared lock on top of either an exclusive or a shared lock.
This code is a simplification of real project code. The main thread creates a worker thread and waits on a std::condition_variable until the worker thread has really started. In the code below, the std::condition_variable wakes up only after current_thread_state becomes ThreadState::Stopping - that is the second notification from the worker thread; in other words, the main thread did not wake up after the first notification, when current_thread_state became ThreadState::Started. The result is a deadlock. Why does this happen? Why does the std::condition_variable not wake up after the first thread_event.notify_all()?
int main()
{
std::thread thread_var;
struct ThreadState {
enum Type { Stopped, Started, Stopping };
};
ThreadState::Type current_thread_state = ThreadState::Stopped;
std::mutex thread_mutex;
std::condition_variable thread_event;
while (true) {
{
std::unique_lock<std::mutex> lck(thread_mutex);
thread_var = std::move(std::thread([&]() {
{
std::unique_lock<std::mutex> lck(thread_mutex);
cout << "ThreadFunction() - step 1\n";
current_thread_state = ThreadState::Started;
}
thread_event.notify_all();
// This code need to disable output to console (simulate some work).
cout.setstate(std::ios::failbit);
cout << "ThreadFunction() - step 1 -> step 2\n";
cout.clear();
{
std::unique_lock<std::mutex> lck(thread_mutex);
cout << "ThreadFunction() - step 2\n";
current_thread_state = ThreadState::Stopping;
}
thread_event.notify_all();
}));
while (current_thread_state != ThreadState::Started) {
thread_event.wait(lck);
}
}
if (thread_var.joinable()) {
thread_var.join();
current_thread_state = ThreadState::Stopped;
}
}
return 0;
}
Once you call the notify_all method, your main thread and your worker thread (after doing its work) both try to get a lock on the thread_mutex mutex. If your workload is insignificant, as in your example, the worker thread is likely to get the lock before the main thread and to advance the state to ThreadState::Stopping before the main thread ever observes ThreadState::Started. Since the main thread's wait condition only accepts Started and no further notification arrives, this results in a deadlock.
Try adding a significant work load, e.g.
std::this_thread::sleep_for( std::chrono::seconds( 1 ) );
to the worker thread. Deadlocks are far less likely now. Of course, this is not a fix for your problem; it merely illustrates it.
You have two threads racing: one writes current_thread_state twice, and the other reads it.
It is indeterminate whether the sequence of events is write-write-read or the write-read-write you expect; both are valid executions of your application.
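For what it's worth, here is a minimal self-contained sketch (not from the answers above) of one way to make the wait robust to either ordering: wait with a predicate that accepts any state the worker can have reached, not just Started.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

int main()
{
    enum class ThreadState { Stopped, Started, Stopping };
    ThreadState state = ThreadState::Stopped;
    std::mutex m;
    std::condition_variable cv;

    std::thread worker([&]() {
        { std::lock_guard<std::mutex> lk(m); state = ThreadState::Started; }
        cv.notify_all();
        // ... simulated work ...
        { std::lock_guard<std::mutex> lk(m); state = ThreadState::Stopping; }
        cv.notify_all();
    });

    {
        std::unique_lock<std::mutex> lk(m);
        // Any state other than Stopped proves the worker has started,
        // even if it already advanced to Stopping before we got here.
        cv.wait(lk, [&]() { return state != ThreadState::Stopped; });
    }
    std::cout << "worker observed as started\n";

    worker.join();
    return 0;
}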
I implemented the readers/writers problem in C++11. I'd like to know what's wrong with it, because these kinds of things are difficult to reason about on my own.
Shared database:
Readers can access database when no writers
Writers can access database when no readers or writers
Only one thread manipulates state variables at a time
The example has 3 readers and 1 writer, but you can also use 2 or more writers.
Code:
class ReadersWriters {
private:
int AR; // number of active readers
int WR; // number of waiting readers
int AW; // number of active writers
int WW; // number of waiting writers
mutex lock;
mutex m;
condition_variable okToRead;
condition_variable okToWrite;
int data_base_variable;
public:
ReadersWriters() : AR(0), WR(0), AW(0), WW(0), data_base_variable(0) {}
void read_lock() {
unique_lock<mutex> l(lock);
WR++; // no writers exist
// is it safe to read?
okToRead.wait(l, [this](){ return WW == 0; });
okToRead.wait(l, [this](){ return AW == 0; });
WR--; // no longer waiting
AR++; // now we are active
}
void read_unlock() {
unique_lock<mutex> l(lock);
AR--; // no longer active
if (AR == 0 && WW > 0) { // no other active readers
okToWrite.notify_one(); // wake up one writer
}
}
void write_lock() {
unique_lock<mutex> l(lock);
WW++; // no active user exist
// is it safe to write?
okToWrite.wait(l, [this](){ return AR == 0; });
okToWrite.wait(l, [this](){ return AW == 0; });
WW--; // no longer waiting
AW++; // now we are active
}
void write_unlock() {
unique_lock<mutex> l(lock);
AW--; // no longer active
if (WW > 0) { // give priority to writers
okToWrite.notify_one(); // wake up one writer
}
else if (WR > 0) { // otherwize, wake reader
okToRead.notify_all(); // wake all readers
}
}
void data_base_thread_write(unsigned int thread_id) {
for (int i = 0; i < 10; i++) {
write_lock();
data_base_variable++;
m.lock();
cout << "data_base_thread: " << thread_id << "...write: " << data_base_variable << endl;
m.unlock();
write_unlock();
std::this_thread::sleep_for(std::chrono::milliseconds(10));
}
}
void data_base_thread_read(unsigned int thread_id) {
for (int i = 0; i < 10; i++) {
read_lock();
m.lock();
cout << "data_base_thread: " << thread_id << "...read: " << data_base_variable << endl;
m.unlock();
read_unlock();
std::this_thread::sleep_for(std::chrono::milliseconds(10));
}
}
};
int main() {
// your code goes here
ReadersWriters rw;
thread w1(&ReadersWriters::data_base_thread_write, &rw, 0);
thread r1(&ReadersWriters::data_base_thread_read, &rw, 1);
thread r2(&ReadersWriters::data_base_thread_read, &rw, 2);
thread r3(&ReadersWriters::data_base_thread_read, &rw, 3);
w1.join();
r1.join();
r2.join();
r3.join();
cout << "\nThreads successfully completed..." << endl;
return 0;
}
Feedback:
1. It is missing all necessary #includes.
2. It presumes a using namespace std, which is bad style in declarations, as that pollutes all of your clients with namespace std.
3. The release of your locks is not exception safe:
write_lock();
data_base_variable++;
m.lock();
cout << "data_base_thread: " << thread_id << "...write: " << data_base_variable << endl;
m.unlock(); // leaked if an exception is thrown after m.lock()
write_unlock(); // leaked if an exception is thrown after write_lock()
4. The m.lock() wrapping of cout in data_base_thread_write is really unnecessary since write_lock() should already be providing exclusive access. However I understand that this is just a demo.
5. I think I see a bug in the read/write logic:
step   1    2    3    4    5    6
WR     0    1    1    1    0    0
AR     0    0    0    0    1    1
WW     0    0    1    1    1    0
AW     1    1    1    0    0    1
In step 1, thread 1 has the write lock.
In step 2, thread 2 attempts to acquire a read lock, increments WR, and blocks on the second okToRead, waiting for AW == 0.
In step 3, thread 3 attempts to acquire a write lock, increments WW, and blocks on the second okToWrite, waiting for AW == 0.
In step 4, thread 1 releases, the write lock by decrementing AW to 0, and signals okToWrite.
In step 5, thread 2, despite not being signaled, is awoken spuriously, notes that AW == 0, and grabs the read lock by setting WR to 0 and AR to 1.
In step 6, thread 3 receives the signal, notes that AW == 0, and grabs the write lock by setting WW to 0 and AW to 1.
After step 6, thread 2 owns the read lock and thread 3 owns the write lock, simultaneously. One way to avoid this is sketched at the end of this answer.
6. The class ReadersWriters has two functions:
It implements a read/write mutex.
It implements tasks for threads to execute.
A better design would take advantage of the mutex/lock framework established in C++11:
Create a ReaderWriter mutex with members:
// unique ownership
void lock(); // write_lock
void unlock(); // write_unlock
// shared ownership
void lock_shared();   // read_lock
void unlock_shared(); // read_unlock
The first two names, lock and unlock are purposefully the same names as those used by the C++11 mutex types. Just doing this much allows you to do things like:
std::lock_guard<ReaderWriter> lk1(mut);
// ...
std::unique_lock<ReaderWriter> lk2(mut);
// ...
std::condition_variable_any cv;
cv.wait(lk2); // wait using the write lock
And if you add:
bool try_lock();
Then you can also:
std::lock(lk2, <any other std or non-std locks>); // lock multiple locks
The lock_shared and unlock_shared names are chosen because of the std::shared_lock<T> type currently in the C++1y (we hope y is 4) working draft. It is documented in N3659.
And then you can say things like:
std::shared_lock<ReaderWriter> lk3(mut); // read_lock
std::condition_variable_any cv;
cv.wait(lk3); // wait using the read lock
I.e., by just creating a stand-alone ReaderWriter mutex type, with very carefully chosen names for the member functions, you get interoperability with the std-defined locks, condition_variable_any, and locking algorithms.
See N2406 for a more in-depth rationale of this framework.
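To make the suggested interface concrete, here is a rough sketch of such a stand-alone ReaderWriter type (an illustration only, with internals chosen for clarity rather than performance). Note that each wait checks a single combined predicate, which also avoids the two-step-wait bug described in point 5.
#include <condition_variable>
#include <mutex>

class ReaderWriter {
    std::mutex              mut_;
    std::condition_variable gate_;
    int                     active_readers_  = 0;
    int                     waiting_writers_ = 0;
    bool                    writer_active_   = false;
public:
    // unique ownership
    void lock() {                       // write_lock
        std::unique_lock<std::mutex> lk(mut_);
        ++waiting_writers_;
        gate_.wait(lk, [this] { return active_readers_ == 0 && !writer_active_; });
        --waiting_writers_;
        writer_active_ = true;
    }
    bool try_lock() {
        std::unique_lock<std::mutex> lk(mut_);
        if (active_readers_ != 0 || writer_active_)
            return false;
        writer_active_ = true;
        return true;
    }
    void unlock() {                     // write_unlock
        { std::lock_guard<std::mutex> lk(mut_); writer_active_ = false; }
        gate_.notify_all();
    }
    // shared ownership
    void lock_shared() {                // read_lock; writers get priority
        std::unique_lock<std::mutex> lk(mut_);
        gate_.wait(lk, [this] { return waiting_writers_ == 0 && !writer_active_; });
        ++active_readers_;
    }
    void unlock_shared() {              // read_unlock
        { std::lock_guard<std::mutex> lk(mut_); --active_readers_; }
        gate_.notify_all();
    }
};
With these member names, std::lock_guard<ReaderWriter>, std::unique_lock<ReaderWriter>, std::lock, and (in C++14) std::shared_lock<ReaderWriter> all work exactly as described above.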