Usage of boost::unique_lock::timed_lock - c++

#include <boost/thread.hpp>
#include <iostream>

void wait(int seconds)
{
    boost::this_thread::sleep(boost::posix_time::seconds(seconds));
}

boost::timed_mutex mutex;

void thread()
{
    for (int i = 0; i < 5; ++i)
    {
        wait(1);
        boost::unique_lock<boost::timed_mutex> lock(mutex, boost::try_to_lock);
        if (!lock.owns_lock())
            lock.timed_lock(boost::get_system_time() + boost::posix_time::seconds(1)); //<<<<
        std::cout << "Thread " << boost::this_thread::get_id() << ": " << i << std::endl;
        boost::timed_mutex *m = lock.release();
        m->unlock();
    }
}
Question> I have problems to understand the following lines:
if (!lock.owns_lock())
lock.timed_lock(boost::get_system_time() +
boost::posix_time::seconds(1));//<<<<
Here is my understanding. Assume lock.owns_lock() returns false, which means the current object does NOT own the lock on the lockable object, so the next line is executed. If the specified time elapses and the object still cannot get the lock, then timed_lock returns false. So the following line will be executed anyway???
std::cout << "Thread " << boost::this_thread::get_id() << ": " << i << std::endl;
Is this idea correct? I think the purpose of the code is to make sure the above line is executed iff the object holds the lock. But based on my understanding (which I guess is NOT correct), the above line always gets run!
Where is the problem?

You are right: the example does NOT guarantee the lock has been properly acquired before the protected code executes.
Given the explanation below the example:
The above program passes boost::try_to_lock as the second parameter to the constructor of boost::unique_lock. Whether or not the mutex has been acquired can be checked via the owns_lock() method afterwards. In case it has not - owns_lock() returns false - another function provided by boost::unique_lock is used: timed_lock() waits for a certain time to acquire the mutex. The given program waits for up to one second which should be more than enough time to acquire the mutex.
The example actually shows the three fundamental ways of acquiring a mutex: lock() waits until the mutex has been acquired. try_lock() does not wait but acquires the mutex if it is available at the time of the call and returns false otherwise. Finally, timed_lock() tries to acquire the mutex within a given period of time. As with try_lock(), success or failure is indicated by the return value of type bool.
the authors seem aware of the problem (given that they document the return value of timed_lock) but did not think a re-test of whether the lock had been acquired was needed (as shown by their remark that it "waits for up to one second which should be more than enough time to acquire the mutex").
One error in your understanding:
If after the specified time lapsed and the object still cannot get the lock, then the boost::timed_lock will return false.
This is not quite right: timed_lock does not wait passively and then test once; it 'continuously' tries to obtain the lock, and only gives up (returning false) once the specified time has expired.
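A minimal sketch of the missing re-test, using std::timed_mutex from the standard library as an analog of boost::timed_mutex (the function name and the one-second timeout are illustrative):

```cpp
#include <chrono>
#include <mutex>
#include <thread>

// Returns true only if the lock was actually acquired before touching the
// protected state; this is the check the Boost example skips.
bool print_under_lock(std::timed_mutex& m)
{
    std::unique_lock<std::timed_mutex> lock(m, std::try_to_lock);
    if (!lock.owns_lock() &&
        !lock.try_lock_for(std::chrono::seconds(1)))
    {
        return false;  // could NOT acquire in time: skip the protected code
    }
    // protected section: safe to touch shared state here
    return true;       // lock released by unique_lock's destructor
}
```

The key difference from the original example is that the timed attempt's boolean result is checked, so the protected section really runs iff the lock is held.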

You are right. The example doesn't properly handle the case where the mutex fails to lock. If you read closely just below that example you'll see this quoted there:
The above example uses various methods to illustrate some of the features provided by boost::unique_lock. Certainly, the usage of these features does not necessarily make sense for the given scenario; the usage of boost::lock_guard in the previous example was already adequate. This example is rather meant to demonstrate the possibilities offered by boost::unique_lock.

Related

Why doesn't mutex work without lock guard?

I have the following code:
#include <chrono>
#include <ctime>
#include <iostream>
#include <mutex>
#include <thread>

int shared_var {0};
std::mutex shared_mutex;

void task_1()
{
    while (true)
    {
        shared_mutex.lock();
        const auto temp = shared_var;
        std::this_thread::sleep_for(std::chrono::seconds(1));
        if (temp == shared_var)
        {
            // do something
        }
        else
        {
            const auto timenow = std::chrono::system_clock::to_time_t(std::chrono::system_clock::now());
            std::cout << std::ctime(&timenow) << ": Data race at task_1: shared resource corrupt \n";
            std::cout << "Actual value: " << shared_var << " Expected value: " << temp << "\n";
        }
        shared_mutex.unlock();
    }
}

void task_2()
{
    while (true)
    {
        std::this_thread::sleep_for(std::chrono::seconds(2));
        ++shared_var;
    }
}

int main()
{
    auto task_1_thread = std::thread(task_1);
    auto task_2_thread = std::thread(task_2);
    task_1_thread.join();
    task_2_thread.join();
    return 0;
}
shared_var is protected in task_1 but not in task_2.
What is expected:
I was expecting the else branch not to be entered in task_1, since the shared resource is locked.
What actually happens:
Running this code enters the else branch in task_1.
The expected outcome is obtained when I replace shared_mutex.lock(); with std::lock_guard<std::mutex> lock(shared_mutex); and shared_mutex.unlock(); with std::lock_guard<std::mutex> unlock(shared_mutex);
Questions:
What is the problem in my current approach?
Why does it work with lock_guard?
I am running the code on:
https://www.onlinegdb.com/online_c++_compiler
Suppose you have a room with two entries. One entry has a door the other not. The room is called shared_var. There are two guys that want to enter the room, they are called task_1 and task_2.
You now want to make sure somehow that only one of them is inside the room at any time.
task_2 can enter the room freely through the entry without a door. task_1 uses the door called shared_mutex.
Your question is now: can we ensure that only one guy is in the room by adding a lock to the door at the first entry?
Obviously not, because the second entry can still be used without you having any control over it.
If you experiment you might observe that without the lock you sometimes find both guys in the room, while after adding the lock you never catch them both inside. Though this is pure luck (bad luck actually, because it makes you believe that the lock helped). In fact the lock did not change much: the guy called task_2 can still enter the room while the other guy is inside.
The solution would be to make both go through the same door. They lock the door when going inside and unlock it when leaving the room. Putting an automatic lock on the door can be nice, because then the guys cannot forget to unlock the door when they leave.
Oh sorry, i got lost in telling a story.
TL;DR: In your code it does not matter if you use the lock or not. Actually also the mutex in your code is useless, because only one thread un/locks it. To use the mutex properly, both threads need to lock it before reading/writing shared memory.
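A minimal sketch of that fix, with std::mutex and the loops made finite so it terminates (names shortened from the question): both sides lock the same mutex around every access to the shared variable.

```cpp
#include <mutex>
#include <thread>

std::mutex m;
int shared_var = 0;

// The role task_2 plays in the question: a writer. The fix is that it now
// locks the same mutex as the reader before touching shared_var.
void writer()
{
    for (int i = 0; i < 1000; ++i)
    {
        std::lock_guard<std::mutex> lock(m);  // writer locks too
        ++shared_var;
    }
}

// The role task_1 plays: a reader. Reads also go through the lock.
int read_once()
{
    std::lock_guard<std::mutex> lock(m);
    return shared_var;
}
```

With every access serialized on the same mutex there is no data race, and two writer threads produce exactly 2000 increments.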
With UB (as data race), output is undetermined, you might see "expected" output, or strange stuff, crash, ...
What is the problem in my current approach?
In first sample, you have data race as you write (non-atomic) shared_var in one thread without synchronization and read in another thread.
Why does it work with lock_guard?
In the modified sample, you lock the same (non-recursive) mutex twice, which is also UB.
From std::mutex::lock:
If lock is called by a thread that already owns the mutex, the behavior is undefined
You just have two different behaviours for two different UBs (anything can happen in both cases).
A mutex lock does not lock a variable, it just locks the mutex so that other code cannot lock the same mutex at the same time.
In other words, all accesses to a shared variable need to be wrapped in a mutex lock on the same mutex to avoid multiple simultaneous accesses to the same variable, it's not in any way automatic just because the variable is wrapped in a mutex lock in another place in the code.
You're not locking the mutex at all in task2, so there is a race condition.
The reason it seems to work when you use std::lock_guard is that the guard holds the mutex lock until the end of the scope, which in this case is the end of the function.
Your function first locks the mutex with the lock guard named lock, then later in the same scope tries to lock the same mutex again with the guard named unlock. Since the mutex is already locked by the first guard, execution stops there and there is no further output: the program is, in effect, not running anymore.
If you output "ok" in your code at the point of the "//do something" comment, you'll see that you get the output once and then the program stops all output.
Note: as to whether this behaviour is guaranteed, see Jarod42's answer for much better info on that. As with most unexpected behaviour in C++, there is probably UB involved.
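For reference, the conventional pattern is a single guard per critical section; the guard unlocks by itself, so no second guard is needed. A minimal sketch (the helper name is illustrative):

```cpp
#include <mutex>

// One lock_guard per critical section: it locks on construction and unlocks
// when it goes out of scope. Constructing a second guard on the same
// non-recursive mutex in the same scope is the mistake in the question.
void increment_safely(std::mutex& m, int& value)
{
    std::lock_guard<std::mutex> lock(m);  // locks here
    ++value;
}                                         // unlocks here automatically
```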

Does wrapping a std::atomic_flag in a getter/setter void its "atomicity"?

Say I have a class that contains a std::atomic_flag as private member, exposed through a getter. Something like the following (pseudo-code):
class Thing
{
private:
    std::atomic_flag ready = ATOMIC_FLAG_INIT;
public:
    bool isReady()
    {
        return ready.test_and_set();
    }
};
My naive question is: does querying the flag through a method turn it into a non-atomic operation, since a function call is non-atomic (or is it?)? Should I instead make my ready flag a public member and query it directly?
No, it doesn't. The test_and_set() operation itself is atomic, so it doesn't matter how deep different threads' call-stacks are.
To demonstrate this, consider the base case where the atomic_flag object is "exposed" directly:
static atomic_flag flag = ATOMIC_FLAG_INIT;

void threadMethod() {
    bool wasFirst = !flag.test_and_set();
    if( wasFirst ) cout << "I am thread " << this_thread::get_id() << ", I was first!" << endl;
    else           cout << "I am thread " << this_thread::get_id() << ", I'm the runner-up" << endl;
}
If two threads enter threadMethod - one thread (t1) slightly before the other (t2) - then we can expect the console output to be the following (in the same order):
I am thread t1, I was first!
I am thread t2, I'm the runner-up
Now if both threads enter almost simultaneously, with t2 a microsecond ahead of t1, but t2 then becomes slower than t1 as it writes to stdout, the output would be:
I am thread t1, I'm the runner-up
I am thread t2, I was first!
...so the call to test_and_set was still atomic, even though the output is not necessarily in the expected order.
Now if you were to wrap flag in another method (not inlined, just to be sure), like so...
__declspec(noinline)
bool wrap() {
    return !flag.test_and_set();
}

void threadMethod() {
    bool wasFirst = wrap();
    if( wasFirst ) cout << "I am thread " << this_thread::get_id() << ", I was first!" << endl;
    else           cout << "I am thread " << this_thread::get_id() << ", I'm the runner-up" << endl;
}
...then the program would not behave any differently, because the true or false bool returned by test_and_set() is still stored on each thread's stack. Ergo, wrapping an atomic_flag does not change its atomicity.
The atomicity property of C++ atomics guarantees that an operation can not be broken in the middle. That is, for a second thread observing the atomic, it will either observe the state before the test_and_set or the state after the test_and_set. It is not possible for such a thread to sneak in a clear between the test and the set part.
However, this is only true for the operation itself. As soon as the test_and_set call has completed, all bets are off again. You should always assume that the thread executing the test_and_set might get pre-empted immediately after finishing that instruction, so you can not assume that any instruction executing after the test_and_set will still observe the same state.
As such, adding the function call does not get you into trouble here. Any instruction following the atomic one must assume that the state of the atomic variable could have changed in the meantime. Atomics take this into account by providing interfaces that are designed in a special way: For example, test_and_set returning the result of the test, because obtaining that information through a separate call would not be atomic anymore.
No, the isReady() method would work exactly the same as direct test_and_set() call, that is atomically.

Boost w/ C++ - Curious mutex behavior

I'm experimenting with Boost threads, since to my knowledge I can write a multi-threaded Boost application and compile it on Windows or Linux, while pthreads, which I'm more familiar with, is strictly for use on *NIX systems.
I have the following sample application, which is borrowed from another SO question:
#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/bind.hpp>
#include <unistd.h> // for usleep
#include <iostream>

#define NAP_DURATION (10000UL) // 10ms

boost::mutex io_mutex;

void count(int id)
{
    for (int i = 0; i < 1000; ++i)
    {
        boost::mutex::scoped_lock lock(io_mutex);
        std::cout << "Thread ID:" << id << ": " << i << std::endl;
        if (id == 1)
        {
            std::cout << "I'm thread " << id << " and I'm taking a short nap" << std::endl;
            usleep(NAP_DURATION);
        }
        else
        {
            std::cout << "I'm thread " << id << ", I drink 100 cups of coffee and don't need a nap" << std::endl;
        }
        std::cout << "Thread ID:" << id << ": " << i << std::endl;
        boost::thread::yield();
    }
}

int main(int argc, char* argv[])
{
    boost::thread thrd1( boost::bind(&count, 1));
    boost::thread thrd2( boost::bind(&count, 2));
    thrd1.join();
    thrd2.join();
    return 0;
}
I installed Boost on my Ubuntu 14.04 LTS system via:
sudo apt-get install libboost-all-dev
And I compile the above code via:
g++ test.cpp -lboost_system -lboost_thread -I"$BOOST_INCLUDE" -L"$BOOST_LIB"
I've run into what appears to be an interesting inconsistency. If I set a lengthy NAP_DURATION, say 1 second (1000000), it seems that only thread 1 ever gets the mutex until it completes its operations; it's very rare for thread 2 to get the lock before thread 1 is done, even when I set NAP_DURATION to just a few milliseconds.
When I've written similar such applications using pthreads, the lock would typically alternate more or less randomly between threads, since another thread would already be blocked on the mutex.
So, to the question(s):
Is this expected behavior?
Is there a way to control this behavior, such as making scoped locks behave like locking operations are queued?
If the answer to (2) is "no", is it possible to achieve something similar with Boost condition variables and not having to worry about lock/unlock calls failing?
Are scoped_locks guaranteed to unlock? I'm using the RAII approach rather than manually locking/unlocking because apparently the unlock operation can fail and throw an exception, and I'm trying to make this code solid.
Thank you.
Clarifications
I'm aware that putting the calling thread to sleep won't unlock the mutex, since it's still in scope, but the expected scheduling was along the lines of:
Thread1 locks, gets the mutex.
Thread2 locks, blocks.
Thread1 executes, releases the lock, and immediately attempts to lock again.
Thread2 was already waiting on the lock, gets it before thread1.
Is this expected behavior?
Yes and no. You shouldn't have any expectations about which thread will get a mutex, since it's unspecified. But it's certainly within the range of expected behavior.
Is there a way to control this behavior, such as making scoped locks behave like locking operations are queued?
Don't use mutexes this way. Just don't. Use mutexes only such that they're held for very short periods of time relative to other things a thread is doing.
If the answer to (2) is "no", is it possible to achieve something similar with Boost condition variables and not having to worry about lock/unlock calls failing?
Sure. Code what you want.
Are scoped_locks guaranteed to unlock? I'm using the RAII approach rather than manually locking/unlocking because apparently the unlock operation can fail and throw an exception, and I'm trying to make this code solid.
It's not clear what it is you're worried about, but the RAII approach is recommended.
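As a sketch of "held for very short periods": do the slow work (the nap) outside the critical section, so the other thread gets a chance to interleave. This uses std:: types as an analog of the Boost ones, with a vector standing in for the cout output so the effect is observable, and small illustrative loop counts:

```cpp
#include <chrono>
#include <mutex>
#include <thread>
#include <vector>

std::mutex mtx;
std::vector<int> lines;  // stands in for the interleaved cout output

void count(int id)
{
    for (int i = 0; i < 3; ++i)
    {
        {
            std::lock_guard<std::mutex> lock(mtx);  // held only for the push
            lines.push_back(id * 100 + i);
        }                                           // released BEFORE the nap
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
}
```

Because the lock is released before the sleep, the other thread can acquire it during the nap instead of being starved for the whole iteration.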
Why are you surprised, exactly ?
If you were expecting thread 2 to acquire the mutex while thread 1 is asleep, then yes, this is expected behaviour and your understanding was wrong, because your lock is still in scope during the sleep.
But if you are surprised by the lack of alternation between thread 1 and thread 2 at the end of each loop iteration, then have a look at this SO question about scheduling that seems "unfair".

Boost Mutex Scoped Lock

I was reading through a Boost Mutex tutorial on drdobbs.com, and found this piece of code:
#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/bind.hpp>
#include <iostream>

boost::mutex io_mutex;

void count(int id)
{
    for (int i = 0; i < 10; ++i)
    {
        boost::mutex::scoped_lock lock(io_mutex);
        std::cout << id << ": " << i << std::endl;
    }
}

int main(int argc, char* argv[])
{
    boost::thread thrd1(boost::bind(&count, 1));
    boost::thread thrd2(boost::bind(&count, 2));
    thrd1.join();
    thrd2.join();
    return 0;
}
Now I understand the point of a Mutex is to prevent two threads from accessing the same resource at the same time, but I don't see the correlation between io_mutex and std::cout. Does this code just lock everything within the scope until the scope is finished?
Now I understand the point of a Mutex is to prevent two threads from accessing the same resource at the same time, but I don't see the correlation between io_mutex and std::cout.
std::cout is a global object, so you can see that as a shared resource. If you access it concurrently from several threads, those accesses must be synchronized somehow, to avoid data races and undefined behavior.
Perhaps it will be easier for you to notice that concurrent access occurs by considering that:
std::cout << x
Is actually equivalent to:
::operator << (std::cout, x)
Which means you are calling a function that operates on the std::cout object, and you are doing so from different threads at the same time. std::cout must be protected somehow. But that's not the only reason why the scoped_lock is there (keep reading).
Does this code just lock everything within the scope until the scope is finished?
Yes, it locks io_mutex until the lock object itself goes out of scope (being a typical RAII wrapper), which happens at the end of each iteration of your for loop.
Why is it needed? Well, although in C++11 individual insertions into cout are guaranteed to be thread-safe, subsequent, separate insertions may be interleaved when several threads are outputting something.
Keep in mind that each insertion through operator << is a separate function call, as if you were doing:
std::cout << id;
std::cout << ": ";
std::cout << i;
std::cout << endl;
The fact that operator << returns the stream object allows you to chain the above function calls in a single expression (as you have done in your program), but the fact that you are having several separate function calls still holds.
Now looking at the above snippet, it is more evident that the purpose of this scoped lock is to make sure that each message of the form:
<id> ": " <index> <endl>
Gets printed without its parts being interleaved with parts from other messages.
Also, in C++03 (where insertions into cout are not guaranteed to be thread-safe) , the lock will protect the cout object itself from being accessed concurrently.
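A related sketch: if each message is formatted into a single string first, only one insertion reaches the stream per message, so under the C++11 guarantee the parts of one message cannot be interleaved with another thread's parts (the helper name is illustrative):

```cpp
#include <sstream>
#include <string>

// Build the whole "<id>: <index>\n" message up front, so the stream sees one
// insertion instead of four separate ones that could interleave.
std::string format_line(int id, int i)
{
    std::ostringstream oss;
    oss << id << ": " << i << '\n';
    return oss.str();
}

// usage: std::cout << format_line(id, i);
```

Note this only prevents interleaving within one message; a mutex (as in the example) is still the general tool when several statements must appear together.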
A mutex has nothing to do with anything else in the program (except a condition variable), at least at a higher level. A mutex has two effects: it controls program flow, preventing multiple threads from executing the same block of code simultaneously, and it ensures memory synchronization. The important issue here is that mutexes aren't associated with resources, and don't by themselves prevent two threads from accessing the same resource at the same time. A mutex defines a critical section of code, which can only be entered by one thread at a time. If all use of a particular resource is done in critical sections controlled by the same mutex, then the resource is effectively protected by the mutex. But the relationship is established by the coder, by ensuring that all use does take place in the critical sections.
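One common way the coder establishes that relationship is to bundle the resource and its mutex together, so the resource can only be reached through locked methods. A minimal sketch (the class is illustrative, not from the question):

```cpp
#include <mutex>

// The protected resource is private, so ALL access is forced through the
// critical sections below; the mutex<->resource association is now enforced
// by the class rather than by programmer discipline alone.
class Counter
{
    std::mutex m;
    int value = 0;
public:
    void increment()
    {
        std::lock_guard<std::mutex> lock(m);
        ++value;
    }
    int get()
    {
        std::lock_guard<std::mutex> lock(m);
        return value;
    }
};
```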

How is it possible to lock a GMutex twice?

I have a test program that I wrote to try and debug a GMutex issue that I am having and I cannot seem to figure it out. I am using the class below to lock and unlock a mutex within a scoped context. This is similar to BOOST's guard.
/// \brief Helper class used to create a mutex.
///
/// This helper Mutex class will lock a mutex upon creation and unlock it when deleted.
/// This class may also be referred to as a guard.
///
/// This class therefore allows scoped access to the Mutex's locking and unlocking
/// operations and is good practice, since it ensures that a Mutex is unlocked even if
/// an exception is thrown.
///
class cSessionMutex
{
    GMutex* apMutex;
    /// The object used for logging.
    mutable cLog aLog;
public:
    cSessionMutex (GMutex *ipMutex) : apMutex(ipMutex), aLog ("LOG", "->")
    {
        g_mutex_lock(apMutex);
        aLog << cLog::msDebug << "MUTEX LOCK " << apMutex << "," << this << cLog::msEndL;
    }
    ~cSessionMutex ()
    {
        aLog << cLog::msDebug << "MUTEX UNLOCK " << apMutex << "," << this << cLog::msEndL;
        g_mutex_unlock(apMutex);
    }
};
Using this class, I call it as follows:
bool van::cSessionManager::RegisterSession(const std::string &iSessionId)
{
    cSessionMutex lRegistryLock (apRegistryLock);
    // SOME CODE
}
where apRegistryLock is a member variable of type GMutex* and is initialized using g_mutex_new() before I ever call RegisterSession.
With this said, when I run the application with several threads, I sometimes notice at the beginning, when RegisterSession is called for the first few times, that the log (from the constructor above)
[DEBUG] LOG.-> - MUTEX LOCK 0x26abb40,0x7fc14ad7ae10
[DEBUG] LOG.-> - MUTEX LOCK 0x26abb40,0x7fc14af7ce10
is logged twice in a row with the same mutex but different guard instances, suggesting that the mutex is being locked twice, or that the second lock is simply being ignored - which is seriously bad.
Moreover, it is worth noting that I also checked whether these logs were initiated from the same thread using the g_thread_self() function; this returned two separate thread identifiers, suggesting that the mutex was locked twice from separate threads.
So my question is, how is it possible for this to occur?
If it's called twice in the same call chain in the same thread, this could happen. The second lock is typically (although not always) ignored; at least in pthreads it's possible to configure a mutex as recursive (counted).
What was happening in my case was that another thread was calling g_cond_timed_wait with the same mutex, but with the mutex unlocked. In that case g_cond_timed_wait unlocks a mutex that is not locked, leaving the mutex in an undefined state, which explains the behaviour described in this question: the mutex apparently being locked twice.