How does the wait() call get invoked with notify in this code? - c++

I have the following C++ code that uses a condition variable for synchronization.
#include <iostream>
#include <condition_variable>
#include <mutex>
#include <thread>

int n = 4;
enum class Turn { FOO, BAR };
Turn turn = Turn::FOO;
std::mutex mut;
std::condition_variable cv;

void foo() {
    for (int i = 0; i < n; i++) {
        std::unique_lock<std::mutex> lock(mut);
        // wait for signal from bar & turn == FOO
        cv.wait(lock, [] { return turn == Turn::FOO; });
        std::cout << "foo" << std::endl;
        // unlock & signal bar
        lock.unlock();
        turn = Turn::BAR;
        cv.notify_one();
    }
}

void bar() {
    for (int i = 0; i < n; i++) {
        std::unique_lock<std::mutex> lock(mut);
        // wait for signal from foo & turn == BAR
        cv.wait(lock, [] { return turn == Turn::BAR; });
        std::cout << "bar" << std::endl;
        // unlock & signal foo
        lock.unlock();
        turn = Turn::FOO;
        cv.notify_one();
    }
}

int main() {
    std::thread thread_1(foo);
    std::thread thread_2(bar);
    thread_2.join();
    thread_1.join();
    return 0;
}
The output observed:
Question:
How would the cv.wait(lock, [] {return turn == Turn::FOO; }); inside the foo() get triggered in the beginning?
From what I read, the wait() call with the predicate would be equivalent to: while (!pred()) { wait(lock); }. The predicate is true at the beginning (the initial value of turn is Turn::FOO), but how would the wait call get a notify? Regarding wait(), I see this:
Atomically unlocks lock, blocks the current executing thread, and adds it to the list of threads waiting on *this. The thread will be unblocked when notify_all() or notify_one() is executed. It may also be unblocked spuriously. When unblocked, regardless of the reason, lock is reacquired and wait exits.
But I don't see how the other thread (the one running bar()) would get its notify_one() executed, since turn is still FOO.

How would the cv.wait inside the foo() get triggered in the beginning?
It would be triggered by the predicate evaluating to true. The equivalent loop:
while (!pred()) {
    wait(lock);
}
would not call wait() even once (the first time that line of code is visited, anyway): since turn starts out as Turn::FOO, the predicate is already true, so foo() proceeds without blocking and without needing any notify.
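As a minimal standalone sketch of that behaviour (the names m, cv and ready below are illustrative, not taken from the question's code), a predicated wait whose predicate is already true returns immediately, without any notify being involved:
#include <condition_variable>
#include <iostream>
#include <mutex>

int main() {
    std::mutex m;
    std::condition_variable cv;
    bool ready = true; // the predicate is already satisfied before wait() is called

    std::unique_lock<std::mutex> lock(m);
    // Equivalent to: while (!ready) cv.wait(lock);
    // ready is already true, so the loop body never runs and no notify is needed.
    cv.wait(lock, [&] { return ready; });
    std::cout << "wait() returned without any notify" << std::endl;
    return 0;
}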

Related

Correct way to check bool flag in thread

How can I check a bool variable in a class in a thread-safe way?
For example in my code:
// test.h
class Test {
    void threadFunc_run();
    void change(bool _set) { m_flag = _set; }
    ...
    bool m_flag;
};

// test.cpp
void Test::threadFunc_run()
{
    // called "Playing"
    while (m_flag == true) {
        for (int i = 0; i < 99999999 && m_flag; i++) {
            // do something .. 1
        }
        for (int i = 0; i < 111111111 && m_flag; i++) {
            // do something .. 2
        }
    }
}
I want to stop "Playing" as soon as the change(..) function is called from the external code.
This should also take effect while one of the for loops is still running.
From what I've found, there are types for making changes visible immediately, such as atomic or volatile.
If it doesn't have to be immediate, is there a better way to do this with a normal bool?
Actually, synchronizing threads safely requires more than a bool.
You will need a state, a mutex and a condition variable, like this.
The approach also allows for a quick reaction to stop() from within the loop.
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <future>
#include <mutex>
#include <thread>

class Test
{
private:
    // having just a bool to check the state of your thread is NOT enough.
    // your thread will have some intermediate states as well
    enum play_state_t
    {
        idle,     // initial state, not started yet (not scheduled by the OS thread scheduler yet)
        playing,  // running and doing work
        stopping, // request for stop is issued
        stopped   // thread has stopped (could also be checked by std::future synchronization).
    };

public:
    void play()
    {
        // start the play loop; the lambda is not guaranteed to have started
        // after the call returns (depends on thread scheduling of the underlying OS).
        // I use std::async since that has far superior synchronization with the calling thread;
        // the returned future can be used to pass both values & exceptions back to it.
        m_play_future = std::async(std::launch::async, [this]
        {
            // give a signal that the asynchronous function has really started
            set_state(play_state_t::playing);
            std::cout << "play started\n";

            // as long as the state is playing keep doing the work
            while (get_state() == play_state_t::playing)
            {
                // loop to show we can break fast out of it when stop is called
                for (std::size_t i = 0; (i < 100l) && (get_state() == play_state_t::playing); ++i)
                {
                    std::cout << ".";
                    std::this_thread::sleep_for(std::chrono::milliseconds(200));
                }
            }

            set_state(play_state_t::stopped);
            std::cout << "play stopped.\n";
        });

        // avoid race conditions: really wait for the thread
        // handling the async call to have started playing
        wait_for_state(play_state_t::playing);
    }

    void stop()
    {
        std::unique_lock<std::mutex> lock{ m_mtx }; // only wait on the condition variable while holding the lock
        if (m_state == play_state_t::playing)
        {
            std::cout << "\nrequest stop.\n";
            m_state = play_state_t::stopping;
            m_cv.wait(lock, [&] { return m_state == play_state_t::stopped; });
        }
    }

    ~Test()
    {
        stop();
    }

private:
    void set_state(const play_state_t state)
    {
        std::unique_lock<std::mutex> lock{ m_mtx };
        m_state = state;
        m_cv.notify_all(); // let other threads that are waiting on the condition variable wake up to check the new state
    }

    play_state_t get_state() const
    {
        std::unique_lock<std::mutex> lock{ m_mtx };
        return m_state;
    }

    void wait_for_state(const play_state_t state)
    {
        std::unique_lock<std::mutex> lock{ m_mtx };
        m_cv.wait(lock, [&] { return m_state == state; });
    }

    // for more info on condition variables
    // see : https://www.modernescpp.com/index.php/c-core-guidelines-be-aware-of-the-traps-of-condition-variables
    mutable std::mutex m_mtx;
    std::condition_variable m_cv; // a condition variable is not really a variable, more a signal for threads to wake up
    play_state_t m_state{ play_state_t::idle };
    std::future<void> m_play_future;
};

int main()
{
    Test test;
    test.play();
    std::this_thread::sleep_for(std::chrono::seconds(1));
    test.stop();
    return 0;
}
int main()
{
Test test;
test.play();
std::this_thread::sleep_for(std::chrono::seconds(1));
test.stop();
return 0;
}

C++ mutex lock declared within or outside for-loop

Given n, the code below aims to print "foo" and "bar" alternately.
For example,
Input: n = 2
Output: "foobarfoobar"
Explanation: "foobar" is being output 2 times.
The detailed code is listed here. There are two questions:
1) Which is better, declaring the lock outside or inside the for-loop?
2) When we declare the lock outside the for-loop, why can we NOT call lock.unlock() before cv.notify_one()? It produces a runtime error!
class FooBar {
private:
    mutex m;
    condition_variable cv;
    int n;
    bool flag = false; // for foo printed
public:
    FooBar(int n) {
        this->n = n;
    }
    void foo(function<void()> printFoo) {
        // question: can we put unique_lock<mutex> lock(m); here instead?
        for (int i = 0; i < n; i++) {
            unique_lock<mutex> lock(m); // question: why shall this lock be declared again and again in each for-loop iteration?
            cv.wait(lock, [&]() { return !flag; });
            // printFoo() outputs "foo". Do not change or remove this line.
            printFoo();
            flag = true;
            // question: can we call lock.unlock()? yes, it works b/c each for-loop iteration grabs a new lock! if we declare the lock outside the for-loop, then we can NOT unlock here? why?
            lock.unlock();
            cv.notify_one();
        }
    }
    void bar(function<void()> printBar) {
        for (int i = 0; i < n; i++) {
            unique_lock<mutex> lock(m);
            cv.wait(lock, [&]() { return flag; }); // wait until lambda returns true
            // printBar() outputs "bar". Do not change or remove this line.
            printBar();
            flag = false;
            lock.unlock();
            cv.notify_one();
        }
    }
};
It doesn't matter where you define the unique_lock (inside or outside the loop).
All you need to do is ensure that the mutex is locked when condition_variable::wait is called.
You got the runtime error because, when you define the unique_lock outside the loop,
unique_lock<mutex> lock(m); // mutex is LOCKED
for (int ...) {
    cv.wait(lock, ...); // problem is here in the second iteration
    lock.unlock();
    cv.notify_one();
}
the mutex is locked only in the first iteration of the for loop (it was locked in the ctor of unique_lock).
In the second iteration wait is called with an unlocked lock; since C++14 this leads to std::terminate being called (you can read about this behaviour here).
unique_lock has an overloaded constructor taking a std::defer_lock_t tag.
When this ctor is used, the passed mutex is not locked. Then, when you want to "open" the critical section on the given mutex, you have to call the lock function explicitly.
So, the two versions below do the same thing:
1)
unique_lock<mutex> lock(m, std::defer_lock); // mutex is un-locked
for (...)
{
    lock.lock();
    cv.wait(...); // LOCK is locked
    lock.unlock();
    cv.notify_one();
}
2)
for (...)
{
    unique_lock<mutex> lock(m); // is locked
    cv.wait(...);
    lock.unlock();
    cv.notify_one();
}

std::condition_variable::notify_one: does it wake multiple threads if some have false predicate?

I have a ring buffer that is used by readers and writers. I keep track of the number of entries in the ring buffer and do not allow overwriting entries that have not been read. I use std::condition_variable wait() and notify_one() to synchronize the readers and writers. Basically the condition on the readers is that the number of entries > 0. The condition on the writers is that the number of entries < capacity.
It all seems to work but there is one thing I don't understand. When a reader or writer calls notify_one(), it does not cause a context switch. I've read and understand that it works this way. However, in a case where a writer writes an entry that fills the buffer, the writer calls notify_one() and goes on to write another entry, at which point its predicate fails in its wait(). In this case I see that another writer may wake up and its predicate will fail as well. Then a reader will wake up, its predicate succeeds, and it can begin reading.
What I don't understand is why one notify_one() unblocks multiple threads. Does a wait() with a failed predicate not eat up the notify? I can't find anything that states this is the case.
I could call notify_all() just to be sure but it seems to be working with notify_one().
Here's the code.
#include <iostream>
#include <stdint.h>
#include <boost/circular_buffer.hpp>
#include <condition_variable>
#include <mutex>
#include <thread>

// ring buffer with protection for overwrites
template <typename T>
class ring_buffer {
public:
    ring_buffer(size_t size) {
        cb.set_capacity(size);
    }
    void read(T& entry) {
        {
            std::unique_lock<std::mutex> lk(cv_mutex);
            cv.wait(lk, [this] {
                std::cout << "read woke up, test=" << (cb.size() > 0) << std::endl;
                return 0 < cb.size(); });
            auto iter = cb.begin();
            entry = *iter;
            cb.pop_front();
            std::cout << "Read notify_one" << std::endl;
        }
        cv.notify_one();
    }
    void write(const T& entry) {
        {
            std::unique_lock<std::mutex> lk(cv_mutex);
            //std::cout << "Write wait" << std::endl;
            cv.wait(lk, [this] {
                std::cout << "write woke up, test=" << (cb.size() < cb.capacity()) << std::endl;
                return cb.size() < cb.capacity(); });
            cb.push_back(entry);
            std::cout << "Write notify_one" << std::endl;
        }
        cv.notify_one();
    }
    size_t get_number_entries() {
        std::unique_lock<std::mutex> lk(cv_mutex);
        return cb.size();
    }
private:
    boost::circular_buffer<T> cb;
    std::condition_variable cv;
    std::mutex cv_mutex;
};

void write_loop(ring_buffer<int>* buffer) {
    for (int i = 0; i < 100000; ++i) {
        buffer->write(i);
    }
}

void read_loop(ring_buffer<int>* buffer) {
    for (int i = 0; i < 50000; ++i) {
        int val;
        buffer->read(val);
    }
}

int main() {
    ring_buffer<int> buffer(1000);
    std::thread writer(write_loop, &buffer);
    std::thread reader(read_loop, &buffer);
    std::thread reader2(read_loop, &buffer);
    writer.join();
    reader.join();
    reader2.join();
    return 0;
}
I see the following in the output, where multiple threads appear to be woken up even though their predicates are false.
read woke up, test=0
read woke up, test=0
write woke up, test=1
You are seeing the initial test of the condition when each of your read threads checks if it should wait, or if the condition is already met.
From here, this overload of wait() is equivalent to
while (!pred()) {
    wait(lock);
}
So wait() is only called when the condition is false, and the condition must be checked first.
read woke up, test=0 // tests condition on reader1 thread, false, wait is called
read woke up, test=0 // tests condition on reader2 thread, false, wait is called
write woke up, test=1 // tests condition on writer thread, true, wait is not called
This might make it more obvious in a run where 2 values are written and each reader only reads a single value.

Using infinite loops in std::thread to increment and display a value

Consider the following simple code:
using ms = std::chrono::milliseconds;
int val = 0;
for (;;)
{
    std::cout << val++ << ' ';
    std::this_thread::sleep_for(ms(200));
}
We see that it prints consecutive numbers indefinitely, one every 0.2 seconds.
Now, I would like to implement the same logic using a helper class and multithreading. My aim is to be able to run something similar to this:
int main()
{
    Foo f;
    std::thread t1(&Foo::inc, f);
    std::thread t2(&Foo::dis, f);
    t1.join();
    t2.join();
}
where Foo::inc() will increment a member variable val of an object f by 1 and Foo::dis() will display the same variable.
Since the original idea consisted of incrementing and printing the value indefinitely, I would assume that both of those functions must contain an infinite loop. The problem that could occur is a data race: reading and incrementing the very same variable. To prevent that I decided to use std::mutex.
My idea of implementing Foo is as follows:
class Foo {
    int val;
public:
    Foo() : val{0} {}
    void inc()
    {
        for (;;) {
            mtx.lock();
            ++val;
            mtx.unlock();
        }
    }
    void dis()
    {
        using ms = std::chrono::milliseconds;
        for (;;) {
            mtx.lock();
            std::cout << val << ' ';
            std::this_thread::sleep_for(ms(200));
            mtx.unlock();
        }
    }
};
Obviously it's missing the mtx object, so the line
std::mutex mtx;
is written just under the #includes, declaring mtx as a global variable.
To my understanding, combining this class definition with the above main() function should produce two separate infinite loops, each of which first locks the mutex, either increments or displays val, and then unlocks the mutex so the other one can perform its action.
What actually happens is that, instead of displaying the sequence 0 1 2 3 4..., it simply displays 0 0 0 0 0.... My guess is that I am either using std::mutex::lock and std::mutex::unlock incorrectly, or my fundamental understanding of multithreading is lacking some basic knowledge.
The question is - where is my logic wrong?
How would I approach this problem using a helper class and two std::threads with member functions of the same object?
Is there a guarantee that the incrementation of val and the printing of it will each occur one after the other using this kind of logic? i.e. will there never be a situation where val is incremented twice before being displayed, or vice versa?
You are sleeping with the mutex locked, preventing the other thread from running most of the time.
void dis()
{
    using ms = std::chrono::milliseconds;
    for (;;) {
        mtx.lock();
        std::cout << val << ' ';
        std::this_thread::sleep_for(ms(200)); // this is still blocking the other thread
        mtx.unlock();
    }
}
Try this:
void dis()
{
    using ms = std::chrono::milliseconds;
    for (;;) {
        mtx.lock();
        std::cout << val << ' ';
        mtx.unlock(); // unlock to allow the other thread to progress
        std::this_thread::sleep_for(ms(200));
    }
}
Also, rather than using a global std::mutex you could add it as a member of your class.
If you want to synchronize the threads to produce an even output of numbers incrementing by exactly one each time, then you need something like a std::condition_variable so that each thread can signal the other when it has done its part of the job (thread 1 incrementing and thread 2 printing).
Here is an example:
class Foo {
    int val;
    std::mutex mtx;
    std::condition_variable cv;
    bool new_value; // flag when a new value is ready
public:
    Foo() : val{0}, new_value{false} {}
    void inc()
    {
        for (;;) {
            std::unique_lock<std::mutex> lock(mtx);
            // release the lock and wait until new_value has been consumed
            cv.wait(lock, [this]{ return !new_value; }); // wait for change in new_value
            ++val;
            new_value = true; // signal for the other thread there is a new value
            cv.notify_one(); // wake up the other thread
        }
    }
    void dis()
    {
        using ms = std::chrono::milliseconds;
        for (;;) {
            // a nice delay
            std::this_thread::sleep_for(ms(200));
            std::unique_lock<std::mutex> lock(mtx);
            // release the lock and wait until new_value has been produced
            cv.wait(lock, [this]{ return new_value; }); // wait for a new value
            std::cout << val << ' ' << std::flush; // don't forget to flush
            new_value = false; // signal for the other thread that the new value was used
            cv.notify_one(); // wake up the other thread
        }
    }
};

int main(int argc, char** argv)
{
    Foo f;
    std::thread t1(&Foo::inc, &f);
    std::thread t2(&Foo::dis, &f);
    t1.join();
    t2.join();
}
A mutex is not a signal. It is not fair. You can unlock then relock a mutex, and someone waiting for it can never notice.
All it guarantees is that exactly one thread has it locked.
Your task, splitting it into two threads, seems utterly pointless. Using sleep_for is also a bad idea, as printing takes an unknown amount of time, making the period between displays drift by an unpredictable amount.
You probably (A) do not want to do this, and failing that (B) want to use a condition variable. One thread increments the value every X time (based off a fixed start time, not based off delays of X), and then signals the condition variable. It holds no mutex while waiting.
The other thread waits on the condition variable and the counter value changing. When it wakes, it copies the counter, unlocks, prints once, updates the last value seen, then waits on the condition variable (and value changing) again.
A mild benefit to this is that if the io is ridiculously slow or blocking, the counter keeps incrementing, so other consumers can use it.
struct Counting {
    int val = -1; // optionally atomic
    std::mutex mtx;
    std::condition_variable cv;
    void counting() {
        while (true) {
            {
                auto l = std::unique_lock<std::mutex>(mtx);
                ++val; // even if atomic, val must be modified while or before the mtx is held and before the notify.
            }
            // or notify all:
            cv.notify_one(); // no need to hold lock here
            using namespace std::literals;
            std::this_thread::sleep_for(200ms); // ideally wait to an absolute time instead of delay here
        }
    }
    void printing() {
        int old_val = -1;
        while (true) {
            int new_val = [&] {
                auto lock = std::unique_lock<std::mutex>(mtx);
                cv.wait(lock, [&]{ return val != old_val; }); // only print if we have a new value
                return val;
            }(); // release lock, no need to hold it while printing
            std::cout << new_val << std::endl; // endl flushes. Note there are threading issues streaming to cout like this.
            old_val = new_val; // update last printed value
        }
    }
};
If one thread is printing and the other counting, you'll get basically what you want.
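A minimal driver for those two member functions might look like the following (this main and its includes are an illustrative addition, assuming the Counting struct above is in scope; they are not part of the answer itself):
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

// Counting is the struct defined just above.
int main() {
    Counting c;
    std::thread producer(&Counting::counting, &c); // bumps val roughly every 200 ms and notifies
    std::thread consumer(&Counting::printing, &c); // prints each new value exactly once
    producer.join(); // both loops run forever; a real program would add a stop condition
    consumer.join();
}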
When launching a thread with a member function, you need to pass the address of the object, not the object itself:
std::thread t2(&Foo::dis, &f);
Please note that this still won't print 1 2 3 4... You'll need the increment operation and the print to alternate exactly for that.
#include <thread>
#include <iostream>
#include <mutex>
#include <chrono>

std::mutex mtx1, mtx2;

class Foo {
    int val;
public:
    Foo() : val{0} { mtx2.lock(); }
    void inc()
    {
        for (;;) {
            mtx1.lock();
            ++val;
            mtx2.unlock();
        }
    }
    void dis()
    {
        using ms = std::chrono::milliseconds;
        for (;;) {
            mtx2.lock();
            std::cout << val << std::endl;
            std::this_thread::sleep_for(ms(200));
            mtx1.unlock();
        }
    }
};

int main()
{
    Foo f;
    std::thread t1(&Foo::inc, &f);
    std::thread t2(&Foo::dis, &f);
    t1.join();
    t2.join();
}
Also take a look at http://en.cppreference.com/w/cpp/thread/condition_variable

How to say to std::thread to stop?

I have two questions.
1) I want to launch some function with an infinite loop in a separate thread, to work like a server checking for messages. However, I want to be able to close it from the parent thread when I want. I'm confused about how to use std::future or std::condition_variable in this case. Or is it better to create some global variable and change it to true/false from the parent thread?
2) I'd like to have something like this. Why does this example crash at run time?
#include <iostream>
#include <chrono>
#include <cstdlib>
#include <mutex>
#include <thread>
#include <future>

std::mutex mu;
bool stopServer = false;

bool serverFunction()
{
    while (true)
    {
        // checking for messages...
        // processing messages
        std::this_thread::sleep_for(std::chrono::seconds(1));
        mu.lock();
        if (stopServer)
            break;
        mu.unlock();
    }
    std::cout << "Exiting func..." << std::endl;
    return true;
}

int main()
{
    std::thread serverThread(serverFunction);
    // some stuff
    system("pause");
    mu.lock();
    stopServer = true;
    mu.unlock();
    serverThread.join();
}
Why does this example crash at run time?
When you leave the inner loop of your thread, you leave the mutex locked, so the parent thread may be blocked forever if you use that mutex again.
You should use std::unique_lock or something similar to avoid problems like that.
You leave your mutex locked. Don't lock mutexes manually in 999/1000 cases.
In this case, you can use std::unique_lock<std::mutex> to create a RAII lock-holder that will avoid this problem. Simply create it in a scope, and have the lock area end at the end of the scope.
{
    std::unique_lock<std::mutex> lock(mu);
    stopServer = true;
}
in main and
{
    std::unique_lock<std::mutex> lock(mu);
    if (stopServer)
        break;
}
in serverFunction.
Now in this case your mutex is pointless. Remove it. Replace bool stopServer with std::atomic<bool> stopServer, and remove all references to mutex and mu from your code.
An atomic variable can safely be read/written to from different threads.
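A sketch of that simplified version might look like this (same shape as the question's code, with the mutex removed and the flag made atomic; the sleep in main is just a stand-in for system("pause")):
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

std::atomic<bool> stopServer{ false };

bool serverFunction()
{
    while (!stopServer)
    {
        // checking for messages...
        // processing messages
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
    std::cout << "Exiting func..." << std::endl;
    return true;
}

int main()
{
    std::thread serverThread(serverFunction);
    // some stuff
    std::this_thread::sleep_for(std::chrono::seconds(3)); // stand-in for system("pause")
    stopServer = true; // the atomic write is safely visible to the server thread
    serverThread.join();
}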
However, your code is still busy-waiting. The right way to handle a server processing messages is a condition variable guarding the message queue. You then stop it by front-queuing a stop server message (or a flag) in the message queue.
This results in a server thread that doesn't wake up and pointlessly spin nearly as often. Instead, it blocks on the condition variable (with some spurious wakeups, but rare) and only really wakes up when there are new messages or it is told to shut down.
#include <boost/optional.hpp>
#include <condition_variable>
#include <deque>
#include <mutex>
#include <utility>

template<class T>
struct cross_thread_queue {
    void push( T t ) {
        {
            auto l = lock();
            data.push_back(std::move(t));
        }
        cv.notify_one();
    }
    boost::optional<T> pop() {
        auto l = lock();
        cv.wait( l, [&]{ return halt || !data.empty(); } );
        if (halt) return {};
        T r = data.front();
        data.pop_front();
        return std::move(r); // returning to optional<T>, so we'll explicitly `move` here.
    }
    void terminate() {
        {
            auto l = lock();
            data.clear();
            halt = true;
        }
        cv.notify_all();
    }
private:
    std::mutex m;
    std::unique_lock<std::mutex> lock() {
        return std::unique_lock<std::mutex>(m);
    }
    bool halt = false;
    std::deque<T> data;
    std::condition_variable cv;
};
We use boost::optional for the return type of pop -- if the queue is halted, pop returns an empty optional. Otherwise, it blocks until there is data.
You can replace this with anything optional-like, even a std::pair<bool, T> where the first element says if there is anything to return, or a std::unique_ptr<T>, or a std::experimental::optional, or a myriad of other choices.
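For instance, with C++17 the boost dependency could be dropped by swapping pop() for a std::optional-returning version (a sketch of that substitution, not part of the original answer):
// Drop-in replacement for pop() in cross_thread_queue above
// (requires C++17 and #include <optional>).
std::optional<T> pop() {
    auto l = lock();
    cv.wait(l, [&] { return halt || !data.empty(); });
    if (halt) return std::nullopt; // queue was terminated
    T r = std::move(data.front());
    data.pop_front();
    return r;
}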
cross_thread_queue<int> queue;

bool serverFunction()
{
    while (auto message = queue.pop()) {
        // processing *message
        std::cout << "Processing " << *message << std::endl;
    }
    std::cout << "Exiting func..." << std::endl;
    return true;
}

int main()
{
    std::thread serverThread(serverFunction);
    // some stuff
    queue.push(42);
    system("pause");
    queue.terminate();
    serverThread.join();
}
live example.