Multiple threads queuing for global lock should all return true once first lock acquired - c++

A similar problem is this one: Are threads waiting on a lock FIFO? However, in this problem, once the lock is acquired only one thread executes the protected code, and in the end all threads will have executed the code.
What I would like to do is to execute the protected code once, but for all threads queuing for the method call at that moment, return true.
Basically, the protected code is a global checkpoint, which is relevant for all threads waiting at that moment. That is, doing N consecutive checkpoints would achieve no more than a single one.
Note that while the checkpointing is done, there will be other calls to the method, which themselves need a new checkpoint call.
I believe what I want to do is "batch-wise" synchronized calls to the global function.
How can I achieve this in C++, perhaps with Boost?

You seem to be looking for try_lock().
Given some Boost.Thread Lockable, a call to Lockable::try_lock() will return true if it can acquire the lock at that moment, otherwise false if it cannot acquire the lock.
When your thread reaches a checkpoint, have it try to acquire this lock. If it fails, another thread is already in the function. If it succeeds, check some bool to see if the checkpoint has already been run. If it has been run, release the lock and continue. If it hasn't been run, keep the lock and run the checkpoint function and set the checkpoint bool to true.
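Following that description, a minimal sketch might look like this (all names are illustrative, not from any library; it uses std::mutex rather than Boost, and a thread that finds the lock busy simply reports false to the caller):

```cpp
#include <mutex>

// Sketch of the scheme described above. The first thread to reach the
// checkpoint runs it; a thread that finds the lock busy knows another
// thread is already in the function.
class Checkpoint {
public:
    int runs = 0; // illustrative counter so the effect is observable

    // Returns true once the checkpoint has been handled, false if
    // another thread currently holds the lock.
    bool reach() {
        if (!m_mutex.try_lock())
            return false;      // someone else is already in the function
        std::lock_guard<std::mutex> guard(m_mutex, std::adopt_lock);
        if (!m_done) {         // first arrival: run the protected code
            ++runs;            // ...the checkpoint work goes here
            m_done = true;
        }
        return true;           // checkpoint already done: just continue
    }

private:
    std::mutex m_mutex;
    bool m_done = false;
};
```

The adopt_lock guard just ensures the mutex acquired by try_lock() is released on every return path.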

What you seem to want looks like a barrier, which Boost provides. However, if that doesn't help you, you can build something with condition variables, which are also in Boost.

Here is pseudo-code for how I would do it. I am assuming the existence of a mutex class with lock() and unlock() operations.
// This forward declaration helps with declaration
// of the "friend" status for the nested class.
class DoItOnce;

class DoItOnce
{
private:
    bool m_amFirst;
    mutex m_mutex;
    friend class ::DoItOnce::Op;
public:
    DoItOnce()
    {
        m_amFirst = true;
        init(m_mutex);
    }
    ~DoItOnce() { destroy(m_mutex); }
    void reset()
    {
        m_mutex.lock();
        m_amFirst = true;
        m_mutex.unlock();
    }
    //--------
    // Nested class
    //--------
    class Op {
    public:
        Op(DoItOnce & sync)
            : m_sync(sync)
        {
            m_sync.m_mutex.lock();
            m_amFirst = m_sync.m_amFirst;
            m_sync.m_amFirst = false;
        }
        ~Op() { m_sync.m_mutex.unlock(); }
        bool amFirst() { return m_amFirst; }
    private:
        DoItOnce & m_sync;
        bool m_amFirst;
    }; // end of nested class
}; // end of outer class
Here is an example to illustrate its intended use. You will implement the doWork() operation and have all your threads invoke it.
class WorkToBeDoneOnce
{
private:
    DoItOnce m_sync;
public:
    bool doWork()
    {
        DoItOnce::Op scopedLock(m_sync);
        if (!scopedLock.amFirst()) {
            // The work has already been done.
            return true;
        }
        ... // Do the work
        return true;
    }
    void resetAmFirstFlag()
    {
        m_sync.reset();
    }
};
If you are confused by my use of the DoItOnce::Op nested class, then you can find an explanation of this coding idiom in my Generic Synchronisation Policies paper, which is available here in various formats (HTML, PDF and slides).
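As a side note (not part of the answer above): if you never need the reset() capability, C++11's std::call_once gives the same "only the first caller does the work" behaviour directly. A minimal sketch, with an illustrative counter:

```cpp
#include <mutex>

// One-shot variant of the DoItOnce idea using std::call_once.
// g_checkpointRuns is just an illustrative counter.
std::once_flag g_checkpointFlag;
int g_checkpointRuns = 0;

bool doWorkOnce() {
    std::call_once(g_checkpointFlag, []{
        ++g_checkpointRuns; // the work, executed exactly once
    });
    return true; // every caller returns true once the work is done
}
```

Unlike DoItOnce, a once_flag cannot be reset, so this only fits the single-checkpoint case.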

Related

How to write a blocking waitUntil() method for a pool that uses std::atomics

I have a validation class that uses a thread pool to process all of its jobs.
Now, when the user asks, I start a thread that feeds my validation class with jobs by reading from disk. And I am certain that at some point reading will be faster than processing. So I want to write a method that allows this thread to wait if there are more than, say, 1000 jobs being processed.
I already introduced an atomic that increases when a job is added and decreases when one is finished.
My attempts to add such a method have been less than pretty, and I know it must be possible to do better.
void Validator::waitUntilAvailable() {
    while (m_blocksInFlight > 1000) { // that's my atomic
        usleep(50000); // seems to be unavailable on Windows
    }
}
Would anyone here be able to assist in having a non-polling method to solve my issue?
Thank you.
There is a condition you would like to wait for, but no mechanism for waiting on it.
That mechanism is std::condition_variable and std::mutex. E.g.:
class Validator
{
    std::mutex m_mutex;
    std::condition_variable m_condition;
    std::atomic<int> m_blocksInFlight{0};

    bool available() const {
        return m_blocksInFlight.load(std::memory_order_relaxed) <= 1000;
    }

    void addJob() {
        ++m_blocksInFlight;
    }

    void jobFinished() {
        --m_blocksInFlight;
        // Only lock the mutex when a waiter may need waking.
        if(this->available()) {
            std::unique_lock<decltype(m_mutex)> lock(m_mutex);
            m_condition.notify_one();
        }
    }

    void waitUntilAvailable() {
        std::unique_lock<decltype(m_mutex)> lock(m_mutex);
        while(!this->available())
            m_condition.wait(lock);
    }
};
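A self-contained sketch of the same wait/notify pattern may make the interplay clearer (the names and counts here are illustrative, not from the question's code): the main thread blocks until a worker drains the in-flight count below the limit, with no polling loop.

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

int inFlight = 1500;           // pretend we are over the 1000-job limit
std::mutex m;
std::condition_variable cv;

int runDemo() {
    std::thread worker([]{
        {
            std::lock_guard<std::mutex> lock(m);
            inFlight = 900;    // work drained below the limit
        }
        cv.notify_one();       // wake the waiting producer
    });
    std::unique_lock<std::mutex> lock(m);
    cv.wait(lock, []{ return inFlight <= 1000; }); // no usleep polling
    lock.unlock();
    worker.join();
    return inFlight;
}
```

The predicate overload of wait() handles spurious wakeups and the "already below the limit" case automatically.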

Qt/C++: Recursive mutex, 'sync zones' and blocking signals

Firstly, I'd like to point out that I've looked this up but couldn't find the answer I'm looking for, or got confused by overly detailed answers.
I have a program which uses two threads. A Boolean value needs to be set and read in Thread A, but only read in Thread B.
Thread A:
Module::Module(){
}

void Module::foo(){
    mutex.lock();
    request = true; // 'request' is a member of Module
    mutex.unlock();
}

void Module::bar(){
    mutex.lock();
    if (request){
        mutex.unlock();
        // do stuff
    }else{
        mutex.unlock();
    }
}
Thread B:
Provider::Provider(){
    module = new Module; // pointer to the class 'request' lives in
}

void Provider::foo(){
    mutex.lock();
    if (module->request){
        mutex.unlock();
        // do stuff
    }else{
        mutex.unlock();
    }
}
My question might seem rather trivial, but it's bugged me. Thread A cannot read and write at the same time, so I'd argue a recursive mutex is not required for A. However, there is a small possibility foo() and bar() could get called simultaneously from Thread B (signals and slots). Does this mean I need a recursive mutex?
Also, is there any reason not to use a Qt::BlockingQueuedConnection? A colleague argued that this is dangerous as it sends calling threads to sleep until the signal has executed the slot, but is this not sort of the same as a mutex?
Furthermore, I've seen a post regarding structuring mutexes (pthread mutex locking variables used in statements). It mentions making local copies of values. If I were to employ something similar for Thread A, e.g.
mutex.lock();
requestCopy = request;
mutex.unlock();
...
if(requestCopy){
    // do stuff
}
Will this also block access to request wherever requestCopy is being used? I was looking to use this style in my code for simplicity, but would it still work if you read AND write in a thread?
Any help would be great.
From what you have shown, it looks like this (rewritten):
Some module (Thread A):
class Module {
private:
    bool request = false;
    QMutex m;
public:
    void set_request(bool b) {
        QMutexLocker lock(&m);
        request = b;
    }
    bool get_request() {
        QMutexLocker lock(&m);
        return request;
    }
    void bar() {
        if (get_request()) {
            // do stuff
        }
    }
};
Thread B:
class Provider {
public:
    Provider() {
        module = new Module();
    }
    void foo() {
        if (module->get_request()){
            // do stuff
        }
    }
private:
    Module *module;
};
If this is really the case (and everything is fine this way), there is no need for a recursive mutex.
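As an aside (a sketch, not part of the answer above): when the only shared state is a single Boolean flag like this, std::atomic<bool> removes the need for a mutex entirely.

```cpp
#include <atomic>

// Same interface as the rewritten Module, but lock-free:
// a lone Boolean flag needs no mutex at all.
class Module {
public:
    void set_request(bool b) { request.store(b); }
    bool get_request() const { return request.load(); }
private:
    std::atomic<bool> request{false};
};
```

This only holds while the flag is the sole piece of guarded state; once several variables must change together, the mutex comes back.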

Ensure that a thread doesn't lock a mutex twice?

Say I have a thread running a member method like runController in the example below:
class SomeClass {
public:
    SomeClass() {
        // Start controller thread
        mControllerThread = std::thread(&SomeClass::runController, this);
    }
    ~SomeClass() {
        // Stop controller thread
        mIsControllerInterrupted = true;
        // wait for thread to die.
        std::unique_lock<std::mutex> lk(mControllerThreadAlive);
    }
    // Both controller and external client threads might call this
    void modifyObject() {
        std::unique_lock<std::mutex> lock(mObjectMutex);
        mObject.doSomeModification();
    }
    //...
private:
    std::mutex mObjectMutex;
    Object mObject;
    std::thread mControllerThread;
    std::atomic<bool> mIsControllerInterrupted;
    std::mutex mControllerThreadAlive;
    void runController() {
        std::unique_lock<std::mutex> aliveLock(mControllerThreadAlive);
        while(!mIsControllerInterrupted) {
            // Say I need to synchronize on mObject for all of these calls
            std::unique_lock<std::mutex> lock(mObjectMutex);
            someMethodA();
            modifyObject(); // but calling modifyObject will then lock the mutex twice
            someMethodC();
        }
    }
    //...
};
And some (or all) of the subroutines in runController need to modify data that is shared between threads and guarded by a mutex. Some (or all) of them, might also be called by other threads that need to modify this shared data.
With all the glory of C++11 at my disposal, how can I ensure that no thread ever locks a mutex twice?
Right now, I'm passing unique_lock references into the methods as parameters as below. But this seems clunky, difficult to maintain, potentially disastrous, etc...
void modifyObject(std::unique_lock<std::mutex>& objectLock) {
    // We don't even know if this lock manages the right mutex...
    // so let's waste some time checking that.
    if(objectLock.mutex() != &mObjectMutex)
        throw std::logic_error("wrong mutex");
    // Lock mutex if not locked by this thread
    bool wasObjectLockOwned = objectLock.owns_lock();
    if(!wasObjectLockOwned)
        objectLock.lock();
    mObject.doSomeModification();
    // restore previous lock state
    if(!wasObjectLockOwned)
        objectLock.unlock();
}
Thanks!
There are several ways to avoid this kind of programming error. I recommend doing it on a class design level:
separate between public and private member functions,
only public member functions lock the mutex,
and public member functions are never called by other member functions.
If a function is needed both internally and externally, create two variants of the function, and delegate from one to the other:
public:
    // intended to be used from the outside
    int foobar(int x, int y)
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        return _foobar(x, y);
    }

private:
    // intended to be used from other (public or private) member functions
    int _foobar(int x, int y)
    {
        // ... code that requires locking
    }
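A self-contained version of this idiom might look as follows (the class and member names are made up for illustration): the public function takes the lock once, then calls the non-locking private helper as often as it likes.

```cpp
#include <mutex>

// Public members lock; private helpers assume the lock is already held.
class Counter {
public:
    int addTwice(int x) {
        std::lock_guard<std::mutex> lock(m_mutex);
        add_(x);
        add_(x); // no second lock attempt: we already hold the mutex
        return m_total;
    }
private:
    void add_(int x) { m_total += x; } // caller must hold m_mutex
    std::mutex m_mutex;
    int m_total = 0;
};
```

Because only public members ever touch the mutex, no call path can lock it twice in the same thread.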

C++ std::timed_mutex has recursive behaviour

I have a problem. I want to use a mutex in my program. What happens is this:
I construct an object that holds a std::timed_mutex. On creation, this object locks the mutex, because it should be unlocked later on. The same thread that created the mutex should then wait for that mutex while some other thread does work in the background. Joining the thread is not an option.
class A{
public:
    std::timed_mutex mutex;

    A(){
        mutex.lock();
    }

    bool waitForIt(int timeout){
        if(mutex.try_lock_for(std::chrono::milliseconds(timeout))){
            mutex.unlock();
            return true;
        }else{
            return false;
        }
    }
};
When calling waitForIt from the same thread, the program just runs through and instantly gets false, totally ignoring the timeout. (Yes, it's intended to unlock the mutex afterwards; it should mimic something like an event, so every waiting thread gets through.)
The documentation says this mutex has non-recursive behaviour. But testing revealed that, for example, I can call .lock() multiple times from the same thread without getting blocked. I can also call try_lock_for multiple times and get true every time! If I call lock once before the try_lock_fors, I always get false. Sadly, I need something that also blocks the same thread that locked the mutex, and I have no idea what to use. I'm programming on Linux, by the way, so maybe there is a native solution?
Also, I didn't find a semaphore in the standard library. I could use that instead of the mutex. Using my own implementation would be possible, but I don't know how to make my own semaphore. Any ideas?
As people don't seem to understand that it's not that simple:
class IObservable : public IInterface{
private:
    std::list<std::shared_ptr<IObserver>> observers;
public:
    virtual ~IObservable(){}
    void AddObserver(std::shared_ptr<IObserver> observer);
    void RemoveObserver(std::shared_ptr<IObserver> observer);
    void ClearObservers();
    void TellCompleted(bool wasCanceled = false, std::shared_ptr<void> status = 0);
    TYPEIDHASHFUNC(IObservable)
};
IObservable is the thing that threads can add observers to. The thing deriving from IObservable calls TellCompleted at the end of its actions.
class IObserver : public IInterface{
public:
    virtual ~IObserver(){}
    virtual CompleteResult Complete(bool wasCanceled, std::shared_ptr<void> status) = 0;
    virtual bool WaitForCompletion(int timeoutInMs) = 0;
    virtual bool IsCompleted() const = 0;
    virtual bool WasCanceled() const = 0;
    virtual std::shared_ptr<void> GetStatus() const = 0;
    virtual void Reset() = 0;
    TYPEIDHASHFUNC(IObserver)
};
IObserver is the observer that can be added to an IObservable. When the IObservable completes, Complete is called on each observer that was added to it.
class BasicObserver : public IObserver{
private:
    bool isCompleted;
    bool wasCanceled;
    CompleteResult completeResult;
    std::shared_ptr<void> status;
    std::timed_mutex mutex;
public:
    BasicObserver(CompleteResult completeResult);
    ~BasicObserver();
    CompleteResult Complete(bool wasCanceled, std::shared_ptr<void> status);
    bool WaitForCompletion(int timeoutInMs);
    bool IsCompleted() const;
    bool WasCanceled() const;
    std::shared_ptr<void> GetStatus() const;
    void Reset();
    TYPEIDHASHFUNC(BasicObserver)
};
This is one implementation of an observer. It holds the mutex and implements WaitForCompletion with the timeout. WaitForCompletion should block; when Complete is called, the mutex is unlocked. When the timeout expires, WaitForCompletion returns false.
BasicObserver::BasicObserver(CompleteResult completeResult):
    isCompleted(false),
    wasCanceled(false),
    completeResult(completeResult)
{
    std::thread createThread([this]{
        this->mutex.lock();
    });
    createThread.join();
}
BasicObserver::~BasicObserver(){
}

CompleteResult BasicObserver::Complete(bool wasCanceled, std::shared_ptr<void> status){
    this->wasCanceled = wasCanceled;
    this->status = status;
    isCompleted = true;
    mutex.unlock();
    return completeResult;
}

bool BasicObserver::WaitForCompletion(int timeoutInMs){
    std::chrono::milliseconds time(timeoutInMs);
    if(mutex.try_lock_for(time)){
        mutex.unlock();
        return true;
    }else{
        return false;
    }
}

bool BasicObserver::IsCompleted() const{
    return isCompleted;
}

bool BasicObserver::WasCanceled() const{
    return wasCanceled;
}

std::shared_ptr<void> BasicObserver::GetStatus() const{
    return status;
}

void BasicObserver::Reset(){
    isCompleted = false;
    wasCanceled = false;
    status = 0;
    std::chrono::milliseconds time(250);
    mutex.try_lock_for(time); // if this fails it might already be reset
}
//edit: solved by using a semaphore instead (sem_t from semaphore.h)
You could use a condition_variable, specifically wait_until or wait_for.
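For instance, a minimal one-shot "event" can be built on std::condition_variable (a sketch; the class and member names are illustrative). Unlike try_lock_for on a mutex the calling thread already owns, which is undefined behaviour, waiting on a condition variable from any thread is well defined:

```cpp
#include <chrono>
#include <condition_variable>
#include <mutex>

// One-shot event: signal() releases every current and future waiter.
class Event {
public:
    void signal() {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_signaled = true;
        m_cv.notify_all(); // wake every waiter, like the desired event
    }
    // Returns true if signalled within the timeout, false otherwise.
    bool waitFor(int timeoutInMs) {
        std::unique_lock<std::mutex> lock(m_mutex);
        return m_cv.wait_for(lock, std::chrono::milliseconds(timeoutInMs),
                             [this]{ return m_signaled; });
    }
private:
    std::mutex m_mutex;
    std::condition_variable m_cv;
    bool m_signaled = false;
};
```

This gives exactly the WaitForCompletion semantics above: blocking with a timeout, and every waiting thread getting through once the event fires.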
I would consider a redesign of your locking structure.
Why not have the lock held by the main thread, and unlock it when event x happens? If you need to block for a duration, I would just make the thread sleep.
Have all worker threads block on the mutex trying to acquire the lock; if they need to run concurrently, have them release the lock immediately once they acquire it.
Maybe use a second mutex to emulate event x.
i want to setup the lock from thread 1 then start a thread 2 that does something (wait for input from hardware in this case) and then wait for the mutex in thread 1. thread 2 then unlocks the mutex when i press the switch on the hardware. im using some kind of observer pattern. so i have something observable where i add an observer to (in this case the class A is the observer). at some point the observable tells all added observers that it completed its task and thus unlocks the mutex. as we have hardware here it could be that the hardware locks up or a sensor doesnt work. so i NEED a timeout. – fredlllll
EDIT - Maybe this would work?
Hold the lock in thread 1; after thread 2 gets input, it blocks on that lock. Have thread 1 release the lock after the timeout duration, perhaps sleeping a little to allow threads through, then acquire the lock again. Have thread 2 release lock 1 and then begin blocking on a second mutex after acquiring mutex 1; have the hardware switch unlock mutex 2, which causes thread 2 to lock mutex 2 and then unlock it. Have the hardware switch acquire mutex 2 again.

Trade-off between a recursive mutex and a more complex class?

I've heard that recursive mutexes are evil, but I can't think of the best way to make this class work without one.
class Object {
public:
    bool check_stuff() {
        lock l(m);
        return /* some calculation */;
    }
    void do_something() {
        lock l(m);
        if (check_stuff()) {
            /* ... */
        }
        /* ... */
    }
private:
    mutex m;
};
The function do_something() will deadlock if m isn't recursive, but what's the alternative, unless I have, say, two ways of performing the check, one of which doesn't lock?
That choice, as the link above suggests, ultimately makes the class more complex. Is there a neater solution that doesn't require a recursive mutex? Or is a recursive mutex sometimes not that evil?
If you need to expose check_stuff as public (as well as do_something which currently calls it), a simple refactoring will see you through without repeating any calculations:
class Object {
public:
    bool check_stuff() {
        lock l(m);
        return internal_check_stuff(l);
    }
    void do_something() {
        lock l(m);
        if (internal_check_stuff(l)) {
            /* ... */
        }
        /* ... */
    }
private:
    mutex m;
    bool internal_check_stuff(lock&) {
        return /* some calculation */;
    }
};
(as per #Logan's answer, passing a lock reference to the internal function is just a good idea to avoid forgetfulness from future maintainers;-).
Recursive locks are only needed when you call a method which locks from another one which also does (with the same lock), but a little refactoring (moving the common functionality into private methods that don't lock but take lock references as "reminders";-) removes the need.
No requirement to repeat any functionality -- just make public functions which expose exactly the same (locking) functionality that's also needed (in non-locking ways because the lock is already acquired) from other functions, into "semi-empty shells" that perform locking then call such private functions, which do all the necessary work. The other functions in the class that need such functionality but want to do their own locking clearly can just call the private ones directly.
Make check_stuff private, and not lock, and simply call it from do_something with the mutex locked. A common idiom to use to make this safer is to pass a reference to the lock to check_stuff:
bool check_stuff(const lock&)
{
    return /* some calculation */;
}

void do_something()
{
    lock l(m);
    if(check_stuff(l)) { // can't forget to lock, because you need to pass it in to call check_stuff
        ...
    }
}
check_stuff shouldn't lock and return a bool anyway; any value it returns is out of date by the time you look at it, making it pretty useless for consumers without access to the lock.