unique lock and condition variable - explicitly calling unlock - c++

I found some example code which demonstrates how to use a condition variable:
#include <iostream>
#include <thread>
#include <mutex>
#include <condition_variable>
#include <deque>

using namespace std;

deque<int> qu;
mutex mu;
condition_variable cond;

void fun1()
{
    int count = 100;
    while (count > 0)
    {
        unique_lock<mutex> locker(mu);
        qu.push_front(count);
        locker.unlock(); // explicit unlock 1
        cond.notify_one();
        --count;
    }
}

void fun2()
{
    int data = 0;
    while (data != 1)
    {
        unique_lock<mutex> locker(mu);
        cond.wait(locker, [](){ return !(qu.empty()); });
        data = qu.back();
        qu.pop_back();
        locker.unlock(); // explicit unlock 2
        cout << "data: " << data << endl;
    }
}

int main()
{
    thread t1(fun1);
    thread t2(fun2);
    t1.join();
    t2.join();
    system("pause");
    return 0;
}
I think that explicitly calling unlock is not necessary. However, in fun1, calling it before notify_one might improve performance, right? And why is unlock called in fun2? The lock is released implicitly at the end of each iteration anyway, so doing it explicitly seems to make no sense.

std::unique_lock uses the RAII pattern, so you don't need to call unlock on the mutex explicitly. This also provides exception safety: if an exception is thrown after the mutex is locked and before it is explicitly unlocked, the mutex is still released automatically when the unique_lock goes out of scope.
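For illustration, here is a minimal sketch of that exception-safety point (the helper may_throw is hypothetical):

#include <mutex>
#include <stdexcept>

std::mutex m;

// hypothetical helper that might throw
void may_throw()
{
    throw std::runtime_error("something went wrong");
}

void worker()
{
    std::unique_lock<std::mutex> locker(m); // mutex locked here
    may_throw();                            // if this throws...
    // ...locker's destructor still unlocks the mutex during stack unwinding,
    // so no explicit unlock() is needed for correctness.
}

int main()
{
    try { worker(); } catch (const std::exception&) {}
    // 'm' is unlocked here and can be locked again without deadlock.
    std::lock_guard<std::mutex> check(m);
}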

It seems misleading to me. Locking a mutex is required to use a condition variable, and this example uses the same mutex for multiple shared objects (cond and qu).
I think it will not work properly if fun1 or fun2 runs on more than one thread.
The version below would be clearer:
mutex mu;
mutex mu_for_cv;
condition_variable cond;

void fun1()
{
    int count = 100;
    while (count > 0)
    {
        unique_lock<mutex> locker(mu);
        qu.push_front(count);
        {
            unique_lock<mutex> locker(mu_for_cv);
            cond.notify_one();
        }
        --count;
    }
}

void fun2()
{
    int data = 0;
    while (data != 1)
    {
        {
            unique_lock<mutex> locker(mu_for_cv);
            cond.wait(locker, [](){ return !(qu.empty()); });
        }
        unique_lock<mutex> locker(mu);
        if (!qu.empty())
        {
            data = qu.back();
            qu.pop_back();
            cout << "data: " << data << endl;
        }
    }
}
Also, it is better to check that the queue is not empty in fun2, to defend against spurious wakeups.

Related

Sharing Barriers across objects/threads

Let's say I have Object A and Object B. ObjA creates multiple ObjBs and keeps a pointer to each, then detaches a thread on each ObjB to do work. I want to implement a barrier in ObjA that only unlocks when all ObjBs have reached a certain internal condition within their work functions.
How can I create a barrier with a dynamic count within ObjA, and then make ObjB aware of that barrier so that it can arrive at it? I wanted to use std::barrier but I've had problems trying to do so.
So far I cannot make a std::barrier member variable in ObjA, because it requires a size that I will only know once ObjA is constructed. If I create the barrier inside the busy function of ObjA, then any signal function that ObjB calls on A won't have it in scope.
Is the best approach some homespun semaphore with busy waiting?
You can use a condition variable.
#include <iostream>
#include <condition_variable>
#include <thread>
#include <vector>

std::condition_variable cv;
std::mutex cv_m; // This mutex is used for two purposes:
                 // 1) to synchronize accesses to count
                 // 2) for the condition variable cv
int total_count = 10; // This is the count of ObjBs
int count = total_count;

void obj_b_signals()
{
    // Do something..
    bool certainCondition = true;
    // We have reached the condition..
    if (certainCondition) {
        {
            std::lock_guard<std::mutex> lk(cv_m);
            count--;
        }
        std::cerr << "Notifying...\n";
        cv.notify_one();
    }
}

int main()
{
    // ObjA logic
    std::vector<std::thread> threads;
    for (size_t i = 0; i < total_count; ++i) {
        threads.emplace_back(std::thread(obj_b_signals));
    }
    {
        std::unique_lock<std::mutex> lk(cv_m);
        std::cerr << "Waiting for ObjBs to reach a certain condition... \n";
        cv.wait(lk, []{ return count == 0; });
        std::cerr << "...finished waiting. count == 0\n";
    }
    // Do something else
    for (std::thread & t : threads) {
        t.join();
    }
}
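Alternatively, if C++20 is available, std::barrier takes its count as a run-time constructor argument, so it can be stored indirectly (for example in a std::optional) and sized once ObjA knows how many ObjBs it created. A minimal sketch under that assumption (the ObjA/ObjB names and structure here are illustrative, not the original classes):

#include <barrier>
#include <iostream>
#include <optional>
#include <thread>
#include <vector>

struct ObjA {
    std::optional<std::barrier<>> sync; // sized later, once the count is known
    std::vector<std::thread> workers;

    explicit ObjA(int num_b) {
        sync.emplace(num_b + 1);        // +1 so ObjA itself can wait on the phase
        for (int i = 0; i < num_b; ++i) {
            workers.emplace_back([this] {
                // ... ObjB work until its internal condition is met ...
                (void)sync->arrive();   // signal arrival, then keep working
            });
        }
    }

    void wait_for_all() {
        sync->arrive_and_wait();        // blocks until every ObjB has arrived
    }

    ~ObjA() {
        for (auto& t : workers) t.join();
    }
};

int main() {
    ObjA a(4);
    a.wait_for_all();
    std::cout << "all ObjBs reached the condition\n";
}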

Threading queue in c++

I'm currently working on a project and I'm struggling with threading and a queue at the moment. The issue is that all threads take the same item from the queue.
Reproducible example:
#include <iostream>
#include <queue>
#include <thread>
#include <vector>

using namespace std;

void Test(queue<string> queue){
    while (!queue.empty()) {
        string proxy = queue.front();
        cout << proxy << "\n";
        queue.pop();
    }
}

int main()
{
    queue<string> queue;
    queue.push("101.132.186.39:9090");
    queue.push("95.85.24.83:8118");
    queue.push("185.211.193.162:8080");
    queue.push("87.106.37.89:8888");
    queue.push("159.203.61.169:8080");
    std::vector<std::thread> ThreadVector;
    for (int i = 0; i <= 10; i++){
        ThreadVector.emplace_back([&]() { Test(queue); });
    }
    for (auto& t : ThreadVector){
        t.join();
    }
    ThreadVector.clear();
    return 0;
}
You are giving each thread its own copy of the queue. I imagine what you want is for all the threads to work on the same queue, and for that you will need some synchronization mechanism when multiple threads work on the shared queue, as std::queue is not thread-safe.
Edit: minor note: in your code you are spawning 11 threads, not 10.
Edit 2: OK, try this one to begin with:
#include <mutex>

std::mutex lock_work;
std::mutex lock_io;

void Test(queue<string>& queue){
    while (true) {
        string proxy;
        {
            std::lock_guard<std::mutex> lock(lock_work);
            if (queue.empty()) // check under the lock, so another consumer cannot empty the queue in between
                return;
            proxy = queue.front();
            queue.pop();
        }
        {
            std::lock_guard<std::mutex> lock(lock_io);
            cout << proxy << "\n";
        }
    }
}
Look at this snippet:
void Test(std::queue<std::string> queue) { /* ... */ }
Here you pass a copy of the queue object to the thread.
This copy is local to each thread and is destroyed when the thread exits, so in the end your program has no effect on the actual queue object that resides in main().
To fix this, you need to make the parameter a reference or a pointer:
void Test(std::queue<std::string>& queue) { /* ... */ }
This makes the parameter directly refer to the queue object present inside main() instead of creating a copy.
Now, the above code is still not correct: neither std::queue nor std::cout is thread-safe, so concurrent access from several threads is a data race. To prevent this, use a std::mutex:
// ...
#include <mutex>
// ...

// The mutex 'mut' protects the 'queue' object from data races amongst different threads.
// Additionally, 'io_mut' is used to protect the streaming operations done with 'std::cout'.
std::mutex mut, io_mut;

void Test(std::queue<std::string>& queue) {
    std::queue<std::string> tmp;
    {
        // Swap the shared queue with a local temporary object while holding the mutex
        std::lock_guard<std::mutex> lock(mut);
        std::swap(tmp, queue);
    }
    while (!tmp.empty()) {
        std::string proxy = tmp.front();
        {
            // Calls to 'std::cout' need to be synchronized as well
            std::lock_guard<std::mutex> lock(io_mut);
            std::cout << proxy << "\n";
        }
        tmp.pop();
    }
}
This synchronizes the threads and prevents any other thread from accessing queue while one thread is swapping it out.
Edit:
Alternatively, it would be much faster in my opinion to make each thread wait until it is notified of a push to the std::queue. You can do this with std::condition_variable:
// ...
#include <chrono>
#include <condition_variable>
#include <mutex>
// ...

std::mutex mut; // one mutex guards both the queue and the condition variable
std::condition_variable cond;

void Test(std::queue<std::string>& queue,
          std::chrono::milliseconds timeout = std::chrono::milliseconds{10}) {
    std::unique_lock<std::mutex> lock(mut);
    // Wait until 'queue' is not empty, or give up after 'timeout' so idle threads can return...
    cond.wait_for(lock, timeout, [&queue] { return !queue.empty(); });
    while (!queue.empty()) {
        std::string proxy = std::move(queue.front());
        std::cout << proxy << "\n";
        queue.pop();
    }
}
// ...
int main() {
    std::queue<std::string> queue;
    std::vector<std::thread> ThreadVector;
    for (int i = 0; i <= 10; i++)
        ThreadVector.emplace_back([&]() { Test(queue); });
    // Notify the waiting threads after each 'push()' call to 'queue'
    {
        std::unique_lock<std::mutex> lock(mut);
        queue.push("101.132.186.39:9090");
        cond.notify_one();
    }
    {
        std::unique_lock<std::mutex> lock(mut);
        queue.push("95.85.24.83:8118");
        cond.notify_one();
    }
    {
        std::unique_lock<std::mutex> lock(mut);
        queue.push("185.211.193.162:8080");
        cond.notify_one();
    }
    {
        std::unique_lock<std::mutex> lock(mut);
        queue.push("87.106.37.89:8888");
        cond.notify_one();
    }
    {
        std::unique_lock<std::mutex> lock(mut);
        queue.push("159.203.61.169:8080");
        cond.notify_one();
    }
    for (auto& t : ThreadVector)
        t.join();
    ThreadVector.clear();
}

C++ mutex lock declared within or outside for-loop

Given n, the code below aims to print "foo" and "bar" alternately.
For example,
Input: n = 2
Output: "foobarfoobar"
Explanation: "foobar" is being output 2 times.
The detailed code is listed here. There are two questions:
1) Which is better, declaring the lock outside or inside the for-loop?
2) When we declare the lock outside the for-loop, why can we NOT call lock.unlock() before cv.notify_one()? It produces a runtime error!
class FooBar {
private:
    mutex m;
    condition_variable cv;
    int n;
    bool flag = false; // true when "foo" has been printed

public:
    FooBar(int n) {
        this->n = n;
    }

    void foo(function<void()> printFoo) {
        // question: can we put unique_lock<mutex> lock(m); here instead?
        for (int i = 0; i < n; i++) {
            unique_lock<mutex> lock(m); // question: why must this lock be declared again in each iteration?
            cv.wait(lock, [&](){ return !flag; });
            // printFoo() outputs "foo". Do not change or remove this line.
            printFoo();
            flag = true;
            // question: can we call lock.unlock()? Yes, it works because each iteration grabs a new lock!
            // If we declare the lock outside the for-loop, then we can NOT unlock here - why?
            lock.unlock();
            cv.notify_one();
        }
    }

    void bar(function<void()> printBar) {
        for (int i = 0; i < n; i++) {
            unique_lock<mutex> lock(m);
            cv.wait(lock, [&](){ return flag; }); // wait until the lambda returns true
            // printBar() outputs "bar". Do not change or remove this line.
            printBar();
            flag = false;
            lock.unlock();
            cv.notify_one();
        }
    }
};
It doesn't matter where you define the unique_lock (inside or outside the loop).
All you need to do is ensure that the mutex is locked when condition_variable::wait is called.
You got a runtime error because, when you define the unique_lock outside the loop,
unique_lock<mutex> lock(m); // mutex is LOCKED
for (int ...) {
    cv.wait(...); // problem is here at the second iteration
    lock.unlock();
    cv.notify_one();
}
the mutex is locked only in the first iteration of the for-loop (it was locked in the constructor of unique_lock).
In the second iteration, wait is called with an unlocked lock; since C++14 this leads to std::terminate being called
(you can read about this behaviour here).
unique_lock has an overloaded constructor taking a std::defer_lock_t tag.
When this constructor is used, the passed mutex is not locked. Then, when you want to enter the critical section
on the given mutex, you need to call the lock function explicitly.
So, the two versions below do the same thing:
1)
unique_lock<mutex> lock(m, std::defer_lock); // mutex is un-locked
for (...)
{
    lock.lock();
    cv.wait(...); // the lock is held here
    lock.unlock();
    cv.notify_one();
}
2)
for (...)
{
    unique_lock<mutex> lock(m); // mutex is locked
    cv.wait(...);
    lock.unlock();
    cv.notify_one();
}

notify_all not working in C++ multithreading, causing deadlock

#include <iostream>
#include <mutex>
#include <condition_variable>
#include <thread>

using namespace std;

int num = 1;
#define NUM 20

condition_variable odd;
condition_variable even;
mutex mut;

void thread_odd()
{
    while (num < NUM - 1)
    {
        if (num % 2 != 1)
        {
            unique_lock<mutex> lock(mut);
            odd.wait(lock);
        }
        cout << "ODD : " << num << endl;
        num++;
        even.notify_all(); // Line X
    }
}

void thread_even()
{
    while (num < NUM)
    {
        if (num % 2 != 0)
        {
            unique_lock<mutex> lock(mut);
            even.wait(lock);
        }
        cout << "EVEN : " << num << endl;
        num++;
        odd.notify_all();
    }
}

int main()
{
    thread t1(thread_odd), t2(thread_even);
    t1.join();
    t2.join();
    return 0;
}
Above is a program to print ODD and EVEN numbers in a synchronized manner (one by one). The code works fine most of the time,
but it sometimes gets into a deadlock.
That happens when the odd thread hits notify_all but, before the even thread wakes up, the odd thread acquires the lock again and, finding its wait condition, goes back to waiting while the even thread still hasn't woken up,
leaving a deadlock. I tried replacing notify_all with notify_one,
but the problem still persists. Is a change in the design required?
Or is there anything I am missing completely?
As a general rule in a concurrent program, when you want to access a shared resource to read and modify it (in your case, the modulo operation on num is the read and num++ is the write), you need to obtain mutually exclusive access to that resource and not release it until you're done with it.
Your lock is released when it exits the if-statement scope, so you are not following this rule.
If you modify your code as follows, you won't deadlock:
#include <iostream>
#include <mutex>
#include <condition_variable>
#include <thread>

using namespace std;

int num = 1;
#define NUM 20

condition_variable odd;
condition_variable even;
mutex mut;

void thread_odd()
{
    while (num < NUM - 1)
    {
        unique_lock<mutex> lock(mut);
        while (num % 2 != 1) // a loop (rather than if) also guards against spurious wakeups
        {
            odd.wait(lock);
        }
        cout << "ODD : " << num << endl;
        num++;
        lock.unlock();
        even.notify_all(); // Line X
    }
}

void thread_even()
{
    while (num < NUM)
    {
        unique_lock<mutex> lock(mut);
        while (num % 2 != 0)
        {
            even.wait(lock);
        }
        cout << "EVEN : " << num << endl;
        num++;
        lock.unlock();
        odd.notify_all();
    }
}

int main()
{
    thread t1(thread_odd), t2(thread_even);
    t1.join();
    t2.join();
    return 0;
}
Notice how I am releasing the lock before notifying. In C++ this is not only possible (unlike in Java) but recommended, as it decreases the chance that the notifying thread greedily re-acquires the mutex before the woken thread gets it. You'll find some more insights into this last point here.

Alternative barrier to spinlock?

Let's say I have this function that multiple threads need to run in a sort of lock-step:
std::atomic<bool> go = false;

void func() {
    while (!go.load()) {} // sync barrier
    ...
}
I want to get rid of the spinlock and replace it with something mutex-based, since I have a lot of threads doing all kinds of stuff, and spin-waiting in a dozen threads is disastrous for overall throughput; it runs much quicker if I include Sleep(1) inside the spin loop, for example.
So is there something in the standard library similar to AllMemoryBarrierWithGroupSync() in HLSL, for example? Basically it would just put each thread to sleep at the barrier until all of them have reached it.
It sounds like you want to do exactly what a condition variable is good for.
bool go = false;
std::mutex mtx;
std::condition_variable cv;

void thread_func()
{
    {
        std::unique_lock<std::mutex> lock(mtx);
        cv.wait(lock, []{ return go; });
    }
    // Do stuff
}

void start_all()
{
    {
        std::unique_lock<std::mutex> lock(mtx);
        go = true;
    }
    cv.notify_all();
}
If you are willing to use experimental features, then latch or barrier will help you. Otherwise you might create your own similar construct using condition_variable, or condition_variable_any with shared_lock (a C++17 feature).
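For reference, a minimal sketch of the latch approach, assuming C++20's std::latch is available (std::experimental::latch from the Concurrency TS has the same shape): the workers block on the latch and the launcher counts it down once to release them all.

#include <iostream>
#include <latch>
#include <thread>
#include <vector>

std::latch go{1}; // one-shot "go" signal

void thread_func()
{
    go.wait();        // blocks until the latch counter reaches zero
    std::cout << '0'; // rest of the calculations
}

int main()
{
    std::vector<std::thread> threads;
    for (int i = 0; i < 5; ++i)
        threads.emplace_back(thread_func);
    go.count_down();  // releases every waiting thread at once
    for (auto& t : threads)
        t.join();
}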
Using shared_mutex to implement a barrier:
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <shared_mutex>
#include <thread>
#include <vector>

std::shared_mutex mtx;
std::condition_variable_any cv;
bool ready = false;

void thread_func()
{
    {
        std::shared_lock<std::shared_mutex> lock(mtx);
        cv.wait(lock, []{ return ready; });
    }
    std::cout << '0';
    // Rest of calculations
}

int main()
{
    std::vector<std::thread> threads;
    for (int i = 0; i < 5; ++i)
        threads.emplace_back(thread_func);
    std::this_thread::sleep_for(std::chrono::seconds(1));
    {
        std::unique_lock<std::shared_mutex> lock(mtx);
        std::cout << "Go\n";
        ready = true;
    }
    cv.notify_all();
    for (auto& t : threads)
        t.join();
    std::cout << "\nFinished\n";
}