I am trying to implement very simple Windows events on Linux, only for my scenario: 3 threads, 1 main and 2 secondary. Each secondary thread raises one event via SetEvent, and the main thread waits for them. Example:
int main()
{
    pthread_t t1, t2;
    void* Events[2];
    Events[0] = CreateEvent();
    Events[1] = CreateEvent();
    pthread_create(&t1, NULL, Thread, Events[0]);
    pthread_create(&t2, NULL, Thread, Events[1]);
    WaitForMultipleObjects(2, Events, 30000); // 30 seconds timeout
    return 0;
}
void* Thread(void* Event)
{
    // Do something
    SetEvent(Event);
    // Do something
    return NULL;
}
So, to implement it, I use condition variables. But my question is: is this the right way, or am I doing something wrong? My implementation:
// Actually, this function returns a pointer to a struct with the mutex and cond;
// here I have just simplified the example.
void* CreateEvent(mutex, condition)
{
    pthread_mutex_init(mutex, NULL);
    pthread_cond_init(condition, NULL);
}
bool SetEvent (mutex, condition)
{
    pthread_mutex_lock(mutex);
    pthread_cond_signal(condition);
    pthread_mutex_unlock(mutex);
}
int WaitForSingleObject(mutex, condition, timeout)
{
    pthread_mutex_lock(mutex);
    pthread_cond_timedwait(condition, mutex, timeout);
    pthread_mutex_unlock(mutex);
}
// Call WaitForSingleObject for each event.
// Yes, I know that it's the wrong way, but it should work in my example.
int WaitForMultipleObjects(count, mutex[], condition[], timeout);
And all seems good, but I think a problem will appear when I call the WaitFor.. function in the main thread before SetEvent is called in a secondary thread. On Windows it worked well, but on Linux the only idea I have is the one described above.
Maybe you can tell me a better way to solve it? Thank you.
UPD: The timeout is very important, because one of the secondary threads may never reach SetEvent().
Basing this on the description of WaitForSingleObject:
The WaitForSingleObject function checks the current state of the specified object. If the object's state is nonsignaled, the calling thread enters the wait state until the object is signaled or the time-out interval elapses.
The difference between that behavior and the code is that the code will always wait on the condition variable, as it does not check a predicate. This introduces synchronization issues between the pthread_cond_timedwait and pthread_cond_signal calls.
The general idiom for signalling a condition variable is:
lock mutex
set predicate
unlock mutex
signal condition variable
And when waiting for a condition variable:
lock mutex
while ( !predicate )
{
    wait on condition variable
}
unlock mutex
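In C++ terms the same idiom looks like this (a minimal sketch; all the names here are illustrative, not from the code above):

#include <condition_variable>
#include <mutex>

std::mutex m;
std::condition_variable cv;
bool predicate = false; // the thing being waited for

void signal_side()
{
    {
        std::lock_guard<std::mutex> lock(m); // lock mutex
        predicate = true;                    // set predicate
    }                                        // unlock mutex
    cv.notify_one();                         // signal condition variable
}

void wait_side()
{
    std::unique_lock<std::mutex> lock(m);    // lock mutex
    while (!predicate)
        cv.wait(lock);                       // wait on condition variable
}                                            // unlock mutex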
Based on what you are trying to accomplish, a separate bool could be used as a predicate for each Event. By introducing a predicate, WaitForSingleObject will only wait on the condition variable if the Event has not been signaled. The code would look similar to the following:
bool SetEvent (mutex, condition)
{
    pthread_mutex_lock(mutex);                 // lock mutex
    bool& signalled = find_signal(condition);  // find predicate
    signalled = true;                          // set predicate
    pthread_mutex_unlock(mutex);               // unlock mutex
    pthread_cond_signal(condition);            // signal condition variable
}
int WaitForSingleObject(mutex, condition, timeout)
{
    int rc = 0;
    pthread_mutex_lock(mutex);                 // lock mutex
    bool& signalled = find_signal(condition);  // find predicate
    while (!signalled && rc != ETIMEDOUT)      // stop waiting once the timeout expires
    {
        rc = pthread_cond_timedwait(condition, mutex, timeout);
    }
    if (signalled)
    {
        signalled = false;                     // reset predicate
        rc = 0;
    }
    pthread_mutex_unlock(mutex);               // unlock mutex
    return rc;
}
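Pulling the pieces together, a self-contained version could look like the following. This is only a sketch: the Event struct layout and the millisecond conversion are my assumptions, and note that pthread_cond_timedwait takes an absolute deadline, not a relative timeout:

#include <errno.h>
#include <pthread.h>
#include <time.h>

struct Event {
    pthread_mutex_t mutex;
    pthread_cond_t  condition;
    bool            signalled; // the predicate
};

Event* CreateEvent()
{
    Event* ev = new Event;
    pthread_mutex_init(&ev->mutex, NULL);
    pthread_cond_init(&ev->condition, NULL);
    ev->signalled = false;
    return ev;
}

void SetEvent(Event* ev)
{
    pthread_mutex_lock(&ev->mutex);
    ev->signalled = true;                // set predicate under the mutex
    pthread_mutex_unlock(&ev->mutex);
    pthread_cond_signal(&ev->condition);
}

// Returns 0 if the event was signalled, ETIMEDOUT otherwise.
int WaitForSingleObject(Event* ev, unsigned timeout_ms)
{
    timespec abstime;
    clock_gettime(CLOCK_REALTIME, &abstime); // build the absolute deadline
    abstime.tv_sec  += timeout_ms / 1000;
    abstime.tv_nsec += (timeout_ms % 1000) * 1000000L;
    if (abstime.tv_nsec >= 1000000000L) {
        abstime.tv_sec  += 1;
        abstime.tv_nsec -= 1000000000L;
    }

    int rc = 0;
    pthread_mutex_lock(&ev->mutex);
    while (!ev->signalled && rc != ETIMEDOUT)
        rc = pthread_cond_timedwait(&ev->condition, &ev->mutex, &abstime);
    if (ev->signalled) {
        ev->signalled = false;               // auto-reset on a successful wait
        rc = 0;
    }
    pthread_mutex_unlock(&ev->mutex);
    return rc;
}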
There was a similar question on Stack Overflow already: WaitForSingleObject and WaitForMultipleObjects equivalent in linux
In addition you can use semaphores:
sem_t semOne ;
sem_t semTwo ;
sem_t semMain ;
In main thread:
sem_init(&semOne, 0, 0);
sem_init(&semTwo, 0, 0);
sem_init(&semMain, 0, 0);
...
sem_wait(&semMain);
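// (if main must wait for both threads, call sem_wait(&semMain) twice)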
// Thread 1
sem_wait(&semOne);
sem_post(&semMain);
// Thread 2
sem_wait(&semTwo);
sem_post(&semMain);
A detailed description and various examples can be found here: http://www.ibm.com/developerworks/linux/library/l-ipc2lin3/index.html
The previous link is no longer available. The most recent archived version at The Internet Archive's Wayback Machine is:
https://web.archive.org/web/20130515223326/http://www.ibm.com/developerworks/linux/library/l-ipc2lin3/index.html
I think a semaphore is the better solution here, because it can be used across processes.
You can wrap the interface: when no name is provided, initialize it with pthread primitives for intra-process use, which keeps resource usage down; when a name is given, initialize it with a named semaphore for inter-process use.
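For example, since the question says the timeout matters, sem_timedwait maps onto it directly. Here is a minimal sketch of a millisecond wrapper (the helper name is mine, not a standard API):

#include <errno.h>
#include <semaphore.h>
#include <time.h>

// Wait on a semaphore for up to timeout_ms milliseconds.
// Returns 0 if the semaphore was posted, ETIMEDOUT (or another errno) otherwise.
int sem_wait_ms(sem_t* sem, unsigned timeout_ms)
{
    timespec abstime;
    clock_gettime(CLOCK_REALTIME, &abstime); // sem_timedwait expects an absolute CLOCK_REALTIME deadline
    abstime.tv_sec  += timeout_ms / 1000;
    abstime.tv_nsec += (timeout_ms % 1000) * 1000000L;
    if (abstime.tv_nsec >= 1000000000L) {
        abstime.tv_sec  += 1;
        abstime.tv_nsec -= 1000000000L;
    }

    while (sem_timedwait(sem, &abstime) == -1) {
        if (errno == EINTR)
            continue;   // interrupted by a signal handler: retry
        return errno;   // ETIMEDOUT on timeout
    }
    return 0;
}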
Related
We have implemented a TaskRunner whose functions will be called by different threads to start, stop and post tasks. TaskRunner will internally create a thread, and if the queue is not empty, it will pop a task from the queue and execute it. Start() will check whether the thread is running; if not, it creates a new thread. Stop() will join the thread. The code is as below.
bool TaskRunnerImpl::PostTask(Task* task) {
    tasks_queue_.push_back(task);
    return true;
}
void TaskRunnerImpl::Start() {
    std::lock_guard<std::mutex> lock(is_running_mutex_);
    if(is_running_) {
        return;
    }
    is_running_ = true;
    runner_thread_ = std::thread(&TaskRunnerImpl::Run, this);
}
void TaskRunnerImpl::Run() {
    while(is_running_) {
        if(tasks_queue_.empty()) {
            continue;
        }
        Task* task_to_run = tasks_queue_.front();
        task_to_run->Run();
        tasks_queue_.pop_front();
        delete task_to_run;
    }
}
void TaskRunnerImpl::Stop() {
    std::lock_guard<std::mutex> lock(is_running_mutex_);
    is_running_ = false;
    if(runner_thread_.joinable()) {
        runner_thread_.join();
    }
}
We want to use condition variables now, otherwise the thread will keep checking whether the task queue is empty or not. We implemented it as below.
The thread function (Run()) will wait on the condition variable.
PostTask() will signal if someone posts a task.
Stop() will signal if someone calls stop.
Code is as below.
bool TaskRunnerImpl::PostTask(Task* task) {
    std::lock_guard<std::mutex> taskGuard(m_task_mutex);
    tasks_queue_.push_back(task);
    m_task_cond_var.notify_one();
    return true;
}
void TaskRunnerImpl::Start() {
    std::lock_guard<std::mutex> lock(is_running_mutex_);
    if(is_running_) {
        return;
    }
    is_running_ = true;
    runner_thread_ = std::thread(&TaskRunnerImpl::Run, this);
}
void TaskRunnerImpl::Run() {
    while(is_running_) {
        Task* task_to_run = nullptr;
        {
            std::unique_lock<std::mutex> mlock(m_task_mutex);
            m_task_cond_var.wait(mlock, [this]() {
                return !(is_running_ && tasks_queue_.empty());
            });
            if(!is_running_) {
                return;
            }
            if(!tasks_queue_.empty()) {
                task_to_run = tasks_queue_.front();
                task_to_run->Run();
                tasks_queue_.pop_front();
            }
        }
        if(task_to_run)
            delete task_to_run;
    }
}
void TaskRunnerImpl::Stop() {
    std::lock_guard<std::mutex> lock(is_running_mutex_);
    is_running_ = false;
    m_task_cond_var.notify_one();
    if(runner_thread_.joinable()) {
        runner_thread_.join();
    }
}
I have a couple of questions, as below. Can someone please help me understand these?
Condition variable m_task_cond_var is linked with mutex m_task_mutex. But Stop() already locks mutex is_running_mutex_ to guard is_running_. Do I need to lock m_task_mutex before signaling? I am not convinced why we should lock m_task_mutex here, as we are not protecting anything related to the task queue.
In the thread function (Run()), we are reading is_running_ without locking is_running_mutex_. Is this correct?
Do I need to lock m_task_mutex before signaling [In Stop]?
When the predicate being tested in the condition_variable::wait method depends on something happening in the signaling thread (which is almost always the case), then you should obtain the mutex before signaling. Consider the following possibility if you are not holding m_task_mutex:
1. The watcher thread (TaskRunnerImpl::Run) wakes up (via a spurious wakeup or by being notified from elsewhere) and obtains the mutex.
2. The watcher thread checks its predicate and sees that it is false.
3. The signaler thread (TaskRunnerImpl::Stop) changes the predicate to return true (by setting is_running_ = false;).
4. The signaler thread signals the condition variable.
5. The watcher thread waits to be signaled (bad): the signal has already come and gone, and since the predicate was false, the watcher begins waiting, possibly indefinitely.
The worst that can happen if you are holding the mutex when you signal is that the blocked thread (TaskRunnerImpl::Run) wakes up and is immediately blocked again when trying to obtain the mutex. This can have some performance implications.
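To make that concrete, here is a sketch of Stop() assuming is_running_ is protected by m_task_mutex (the same mutex the waiter uses), which is what the advice above amounts to:

void TaskRunnerImpl::Stop() {
    {
        std::lock_guard<std::mutex> lock(m_task_mutex); // same mutex Run() waits with
        is_running_ = false;                            // change the predicate under that mutex
    }
    m_task_cond_var.notify_one();                       // wake the runner thread
    if(runner_thread_.joinable()) {
        runner_thread_.join();
    }
}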
In [TaskRunnerImpl::Run], we are reading is_running_ without locking is_running_mutex_. Is this correct?
In general, no, even if it's of type bool. Because a boolean is typically implemented as a single byte, it's possible that one thread is writing to the byte while you are reading it, resulting in a torn read. In practice it will often appear to work, but it is still a data race. You should obtain the mutex before you read (and then release it immediately afterwards).
In fact, it may be preferable to use std::atomic<bool> instead of the bool + mutex combination (or std::atomic_flag if you want to get fancy), which will have the same effect but be easier to work with.
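A minimal sketch of that combination (the class name is illustrative; the point is that the flag itself no longer needs its own mutex, while the notify still happens under the condition variable's mutex to avoid the missed-wakeup race described above):

#include <atomic>
#include <condition_variable>
#include <mutex>

class RunnerStub {
    std::atomic<bool> is_running_{true};   // safe to read from any thread without a lock
    std::mutex m_task_mutex;
    std::condition_variable m_task_cond_var;

public:
    void Stop() {
        is_running_.store(false);
        std::lock_guard<std::mutex> lock(m_task_mutex); // hold the waiter's mutex while notifying
        m_task_cond_var.notify_one();
    }

    void WaitUntilStopped() {
        std::unique_lock<std::mutex> lock(m_task_mutex);
        m_task_cond_var.wait(lock, [this] { return !is_running_.load(); });
    }
};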
Do I need to lock m_task_mutex before signaling [In Stop]?
Yes, you do. You must change the condition under the same mutex and send the signal either while that mutex is still locked, or after it has been unlocked following the change. If you do not use the same mutex, or you send the signal before that mutex is locked, you create exactly the race condition that std::condition_variable was created to solve.
The logic is this:
The watching thread locks the mutex and checks the watched condition. If it has not happened yet, the thread goes to sleep and unlocks the mutex atomically. The signaling thread then locks the mutex, changes the condition and signals. If the signaling thread does that before the watching one locks the mutex, the watching one will see that the condition has happened and will not go to sleep. If the watching thread locks the mutex first, it will go to sleep and be woken when the signaling thread raises the signal.
Note: you can signal the condition variable before or after the mutex is unlocked; both variants are correct but may affect performance. It is, however, incorrect to signal before locking the mutex.
Condition variable m_task_cond_var is linked with mutex m_task_mutex. But Stop() already locks mutex is_running_mutex_ to guard is_running_. Do I need to lock m_task_mutex before signaling? I am not convinced why we should lock m_task_mutex here, as we are not protecting anything related to the task queue.
You overcomlicated your code and made things worse. You should use only one mutex in this case and it would work as intended.
In the thread function (Run()), we are reading is_running_ without locking is_running_mutex_. Is this correct?
On x86 hardware it may "work", but from the language's point of view this is undefined behavior (UB).
I'm having trouble with properly "swapping" locks. Consider this situation:
bool HidDevice::wait(const std::function<bool(const Info&)>& predicate)
{
    /* A method scoped lock. */
    std::unique_lock waitLock(this->waitMutex, std::defer_lock);

    /* A scoped, general access, lock. */
    {
        std::lock_guard lock(this->mutex);
        bool exitEarly = false;
        /* do some checks... */
        if (exitEarly)
            return false;

        /* Only one thread at a time can execute this method, however
           other threads can execute other methods or abort this one. Thus,
           the general access mutex "this->mutex" should be unlocked (to allow
           threads to call other methods) while at the same time "this->waitMutex"
           should be locked to prevent multiple executions of the code below. */
        waitLock.lock(); // How do I release "this->mutex" here?
    }

    /* do some stuff... */

    /* The main problem is with this event based OS function. It can
       only be called once with the data I provide, therefore I need to
       have 2 locks - one blocks multiple method calls (the usual stuff)
       and "waitLock" makes sure that only one instance of "osBlockingFunction"
       is running at a time. Since this is a thread blocking function,
       "this->mutex" must be unlocked at this point. */
    bool result = osBlockingFunction(...);

    /* In methods such as "close", "this->waitMutex" and others are then used
       to make sure that thread blocking methods have returned and I can safely
       modify related data. */

    /* do some more stuff... */
    return result;
}
How could I solve this "swapping" problem without overly complicating the code? I could unlock this->mutex before locking the other, however I'm afraid that in that nanosecond a race condition might occur.
Edit:
Imagine that 3 threads are calling wait method. The first one will lock this->mutex, then this->waitMutex and then will unlock this->mutex. The second one will lock this->mutex and will have to wait for this->waitMutex to be available. It will not unlock this->mutex. The third one will get stuck on locking this->mutex.
I would like to get the last 2 threads to wait for this->waitMutex to be available.
Edit 2:
Expanded example with osBlockingFunction.
It smells like the design/implementation should be a bit different, with a std::condition_variable cv on the HidDevice::wait and only one mutex. As you write, "other threads can execute other methods or abort this one"; the aborting thread would call cv.notify_one to "abort" this wait. cv.wait {enters the wait and unlocks the mutex} atomically, and on cv.notify it {exits the wait and locks the mutex} atomically. Like that, HidDevice::wait becomes simpler:
bool HidDevice::wait(const std::function<bool(const Info&)>& predicate)
{
    std::unique_lock<std::mutex> lock(this->m_Mutex); // Only one mutex.
    m_bEarlyExit = false;
    this->cv.wait(lock, /* spurious wake-up check */);
    if (m_bEarlyExit) // A bool data member used to abort.
        return false;
    /* do some stuff... */
}
My assumption (based on the name of the function) is that at /* do some checks... */ the thread waits until some condition becomes true.
"Abort" the wait, will be in the responsibility of other HidDevice function, called by the other thread:
void HidDevice::do_some_checks() /* do some checks... */
{
    if ( some checks )
    {
        if ( other checks )
            m_bEarlyExit = true;
        this->cv.notify_one();
    }
}
Something similar to that.
I recommend creating a little "unlocker" facility. This is a mutex wrapper with inverted semantics. On lock it unlocks and vice-versa:
template <class Lock>
class unlocker
{
    Lock& locked_;
public:
    unlocker(Lock& lk) : locked_{lk} {}

    void lock() { locked_.unlock(); }
    bool try_lock() { locked_.unlock(); return true; }
    void unlock() { locked_.lock(); }
};
Now in place of:
waitLock.lock(); // How do I release "this->mutex" here?
You can instead say:
unlocker temp{lock};
std::lock(waitLock, temp);
where lock is a unique_lock instead of a lock_guard holding mutex.
This will lock waitLock and unlock mutex as if by one uninterruptible instruction.
And now, after coding all of that, I can reason that it can be transformed into:
waitLock.lock();
lock.unlock(); // lock must be a unique_lock to do this
Whether the first version is more or less readable is a matter of opinion. The first version is easier to reason about (once one knows what std::lock does), but the second one is simpler. With the second, however, the reader has to think more carefully about the correctness.
Update
Just read the edit in the question. This solution does not fix the problem described there: the second thread will block the third (and following) threads from making progress in any code that requires mutex but not waitMutex, until the first thread releases waitMutex.
So in this sense, my answer is technically correct, but does not satisfy the desired performance characteristics. I'll leave it up for informational purposes.
I've got logic inside a dynamic library that runs some service requested by the main executable.
Upon calling start_service from the library code, some preparation is required before the service is ready, and during this time the main code should not try to access the service.
To notify the main code when the service is ready, I use a condition variable and signal the main executable.
I'd like to hear some advice about the best way to handle the situation when the library becomes serviceable BEFORE the main code waits on the condition variable. In that case, the wait can last forever...
Here's the service code:
extern pthread_cond_t cnd;
extern pthread_mutex_t mtx;

void start_service()
{
    // getting serviceable.
    pthread_cond_signal(&cnd);
    main_loop();
}
And here's the main code.
pthread_cond_t cnd;
pthread_mutex_t mtx;

int main()
{
    pthread_cond_init(&cnd, NULL);
    pthread_mutex_init(&mtx, NULL);
    std::thread mytrd(&start_service);
    ...
    pthread_mutex_lock(&mtx);
    pthread_cond_wait(&cnd, &mtx); // what happens if we get here after the signal was already sent?
    pthread_mutex_unlock(&mtx);
    ...
}
P.S. The desired behavior is that the main code avoids waiting on the condition variable if it has already been signaled.
You need a predicate, and you must enclose the wait in a check of that predicate:
pthread_mutex_lock(&mtx);
while( ! predicate ) pthread_cond_wait(&cnd, &mtx);
pthread_mutex_unlock(&mtx);
In this way, the thread doesn't even start to wait when it doesn't need to.
The second reason you need this is to handle spurious wakeups that may occur, i.e. pthread_cond_wait may return even if the other thread did not signal the condition.
The other thread then has to do the following (you need to lock the mutex to protect the predicate):
pthread_mutex_lock(&mtx);
predicate = true;
pthread_cond_signal(&cnd);
pthread_mutex_unlock(&mtx);
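Putting that together with the code from the question, a minimal single-file sketch could look like this (serviceable is the added predicate; main_loop is stubbed out here):

#include <pthread.h>
#include <thread>

pthread_cond_t cnd = PTHREAD_COND_INITIALIZER;
pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
bool serviceable = false; // the predicate

void main_loop() { /* the service loop from the question */ }

void start_service()
{
    // getting serviceable.
    pthread_mutex_lock(&mtx);
    serviceable = true;           // set the predicate under the mutex
    pthread_cond_signal(&cnd);
    pthread_mutex_unlock(&mtx);
    main_loop();
}

int main()
{
    std::thread mytrd(&start_service);
    pthread_mutex_lock(&mtx);
    while (!serviceable)                // no wait at all if already signaled
        pthread_cond_wait(&cnd, &mtx);
    pthread_mutex_unlock(&mtx);
    mytrd.join();
    return 0;
}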
You can use a semaphore with an initial count of 0, so main gets blocked until start_service() increases the semaphore count to 1. If start_service() increases the count to 1 before the blocking call in main, main will never enter the wait state. Something like below:
sem_t sem_mutex;

void start_service()
{
    // getting serviceable.
    sem_post(&sem_mutex); // increases count to 1
    main_loop();
}
int main()
{
    sem_init(&sem_mutex, 0, 0); // initially blocked.
    std::thread mytrd(&start_service);
    sem_wait(&sem_mutex);
    ...
}
I have two condition variables:
CondVar1
CondVar2
Used in two threads like this (pseudo-code):
// thread1 starts in 'waiting' mode, and then Thread2 signals
void Thread1()
{
    CondVar1->Wait();
    CondVar2->Signal();
}

void Thread2()
{
    CondVar1->Signal();
    CondVar2->Wait();
}
Can this cause a deadlock? Meaning: thread1 waits, thread2 signals; can thread1 then signal before thread2 enters Wait(), meaning thread2 will never return?
Thanks
You don't usually just wait on a condition variable. The common usage pattern is: hold a lock, check a variable that determines whether you can proceed or not, and if you cannot, wait on the condition:
// pseudocode
void push( T data ) {
Guard<Mutex> lock( m_mutex ); // Hold a lock on the queue
while (m_queue.full()) // [1]
m_cond1.wait(lock); // Wait until a consumer leaves a slot for me to write
// insert data
m_cond2.signal_one(); // Signal consumers that might be waiting on an empty queue
}
Some things to note: most libraries allow for spurious wakes in condition variables. While it is possible to implement a condition variable that avoids spurious wakes, the cost of the operations would be higher, so it is considered the lesser evil to require users to recheck the state before continuing (the while loop in [1]).
Some libraries, notably C++11, allow you to pass a predicate, and will implement the loop internally: cond.wait(lock, [&queue](){ return !queue.full(); } );
There are two situations that could lead to a deadlock here:
In normal execution, the one you described: it is possible that the variable is signaled before the thread reaches the call to Wait, so the signal is lost.
A spurious wake-up could happen, causing the first thread to leave the call to Wait before actually being signaled, hence signaling Thread 2, which is not yet waiting.
You should design your code as follows when using signaling mechanisms:
bool thread1Waits = true;
bool thread2Waits = true;
void Thread1()
{
while(thread1Waits) CondVar1->Wait();
thread2Waits = false;
CondVar2->Signal();
}
void Thread2()
{
thread1Waits = false;
CondVar1->Signal();
while(thread2Waits) CondVar2->Wait();
}
Of course, this assumes there are locks protecting the condition variables and that additionally thread 1 runs before thread 2.
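For illustration, here is one way that design might look with the locks written out in C++ (a single shared mutex is my assumption; the pseudo-code above does not name its locks). With the locks in place, the startup order of the two threads no longer matters:

#include <condition_variable>
#include <mutex>

std::mutex m;
std::condition_variable cv1, cv2;
bool thread1Waits = true;
bool thread2Waits = true;

void Thread1()
{
    std::unique_lock<std::mutex> lock(m);
    cv1.wait(lock, [] { return !thread1Waits; }); // predicate loop in one call
    thread2Waits = false;
    cv2.notify_one();
}

void Thread2()
{
    std::unique_lock<std::mutex> lock(m);
    thread1Waits = false;
    cv1.notify_one(); // Thread1 cannot miss this: it either saw the flag or is waiting
    cv2.wait(lock, [] { return !thread2Waits; });
}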
I'm wondering if there is a boost equivalent of ManualResetEvent? Basically, I'd like a cross-platform implementation... Or, could someone help me mimic ManualResetEvent's functionality using Boost::thread? Thanks guys
It's pretty easy to write a manual reset event when you have mutexes and condition variables.
What you will need is a field that represents whether your reset event is signalled or not. Access to the field will need to be guarded by a mutex - this includes both setting/resetting your event as well as checking to see if it is signaled.
When you are waiting on your event, if it is currently not signaled, you will want to wait on a condition variable until it is signaled. Finally, in your code that sets the event, you will want to notify the condition variable to wake up anyone waiting on your event.
class manual_reset_event
{
public:
    manual_reset_event(bool signaled = false)
        : signaled_(signaled)
    {
    }

    void set()
    {
        {
            boost::lock_guard<boost::mutex> lock(m_);
            signaled_ = true;
        }
        // Notify all because until the event is manually
        // reset, all waiters should be able to see event signalling
        cv_.notify_all();
    }

    void unset()
    {
        boost::lock_guard<boost::mutex> lock(m_);
        signaled_ = false;
    }

    void wait()
    {
        boost::unique_lock<boost::mutex> lock(m_); // condition_variable::wait needs a unique_lock
        while (!signaled_)
        {
            cv_.wait(lock);
        }
    }

private:
    boost::mutex m_;
    boost::condition_variable cv_;
    bool signaled_;
};
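A short usage sketch of the class above (the worker function and thread count are just for illustration):

#include <boost/thread.hpp>

manual_reset_event ev;

void worker()
{
    ev.wait();   // blocks until the event is set
    // ... safe to proceed: the event stays signaled until unset() ...
}

int main()
{
    boost::thread t1(worker), t2(worker);
    ev.set();    // wakes all waiters; new waiters return immediately until unset()
    t1.join();
    t2.join();
    return 0;
}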
IIRC, ManualResetEvents exist to allow one or more threads to wait on an object and be woken when the object is signaled. The "manual reset" part comes from the fact that the system does not automatically reset the event after it wakes waiting threads; you do that instead.
This sounds very similar to condition variables:
The general usage pattern is that one thread locks a mutex and then calls wait on an instance of condition_variable or condition_variable_any. When the thread is woken from the wait, then it checks to see if the appropriate condition is now true, and continues if so. If the condition is not true, then the thread then calls wait again to resume waiting.