Looking for critique of my reader/writer implementation [closed] - c++

I implemented the readers/writers problem in C++11. I'd like to know what's wrong with it, because these kinds of things are difficult to reason about on my own.
Shared database:
Readers can access database when no writers
Writers can access database when no readers or writers
Only one thread manipulates state variables at a time
The example has 3 readers and 1 writer, but it should also work with 2 or more writers.
Code:
class ReadersWriters {
private:
    int AR; // number of active readers
    int WR; // number of waiting readers
    int AW; // number of active writers
    int WW; // number of waiting writers
    mutex lock;
    mutex m;
    condition_variable okToRead;
    condition_variable okToWrite;
    int data_base_variable;
public:
    ReadersWriters() : AR(0), WR(0), AW(0), WW(0), data_base_variable(0) {}
    void read_lock() {
        unique_lock<mutex> l(lock);
        WR++; // no writers exist
        // is it safe to read?
        okToRead.wait(l, [this](){ return WW == 0; });
        okToRead.wait(l, [this](){ return AW == 0; });
        WR--; // no longer waiting
        AR++; // now we are active
    }
    void read_unlock() {
        unique_lock<mutex> l(lock);
        AR--; // no longer active
        if (AR == 0 && WW > 0) { // no other active readers
            okToWrite.notify_one(); // wake up one writer
        }
    }
    void write_lock() {
        unique_lock<mutex> l(lock);
        WW++; // no active user exist
        // is it safe to write?
        okToWrite.wait(l, [this](){ return AR == 0; });
        okToWrite.wait(l, [this](){ return AW == 0; });
        WW--; // no longer waiting
        AW++; // now we are active
    }
    void write_unlock() {
        unique_lock<mutex> l(lock);
        AW--; // no longer active
        if (WW > 0) { // give priority to writers
            okToWrite.notify_one(); // wake up one writer
        }
        else if (WR > 0) { // otherwise, wake readers
            okToRead.notify_all(); // wake all readers
        }
    }
    void data_base_thread_write(unsigned int thread_id) {
        for (int i = 0; i < 10; i++) {
            write_lock();
            data_base_variable++;
            m.lock();
            cout << "data_base_thread: " << thread_id << "...write: " << data_base_variable << endl;
            m.unlock();
            write_unlock();
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
        }
    }
    void data_base_thread_read(unsigned int thread_id) {
        for (int i = 0; i < 10; i++) {
            read_lock();
            m.lock();
            cout << "data_base_thread: " << thread_id << "...read: " << data_base_variable << endl;
            m.unlock();
            read_unlock();
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
        }
    }
};
int main() {
    ReadersWriters rw;
    thread w1(&ReadersWriters::data_base_thread_write, &rw, 0);
    thread r1(&ReadersWriters::data_base_thread_read, &rw, 1);
    thread r2(&ReadersWriters::data_base_thread_read, &rw, 2);
    thread r3(&ReadersWriters::data_base_thread_read, &rw, 3);
    w1.join();
    r1.join();
    r2.join();
    r3.join();
    cout << "\nThreads successfully completed..." << endl;
    return 0;
}

Feedback:
1. It is missing all necessary #includes.
2. It presumes a using namespace std, which is bad style in declarations, as that pollutes all of your clients with namespace std.
3. The release of your locks is not exception safe:
write_lock();
data_base_variable++;
m.lock();
cout << "data_base_thread: " << thread_id << "...write: " << data_base_variable << endl;
m.unlock(); // leaked if an exception is thrown after m.lock()
write_unlock(); // leaked if an exception is thrown after write_lock()
4. The m.lock() wrapping of cout in data_base_thread_write is unnecessary, since write_lock() should already provide exclusive access. However, I understand that this is just a demo.
5. I think I see a bug in the read/write logic:
step  1  2  3  4  5  6
WR    0  1  1  1  0  0
AR    0  0  0  0  1  1
WW    0  0  1  1  1  0
AW    1  1  1  0  0  1
In step 1, thread 1 has the write lock.
In step 2, thread 2 attempts to acquire a read lock, increments WR, and blocks on the second okToRead, waiting for AW == 0.
In step 3, thread 3 attempts to acquire a write lock, increments WW, and blocks on the second okToWrite, waiting for AW == 0.
In step 4, thread 1 releases the write lock by decrementing AW to 0, and signals okToWrite.
In step 5, thread 2, despite not being signaled, is awoken spuriously, notes that AW == 0, and grabs the read lock by setting WR to 0 and AR to 1.
In step 6, thread 3 receives the signal, notes that AW == 0, and grabs the write lock by setting WW to 0 and AW to 1.
So in step 6, thread 2 owns the read lock and thread 3 owns the write lock, simultaneously.
6. The class ReadersWriters has two responsibilities:
It implements a read/write mutex.
It implements tasks for threads to execute.
A better design would take advantage of the mutex/lock framework established in C++11:
Create a ReaderWriter mutex with members:
// unique ownership
void lock(); // write_lock
void unlock(); // write_unlock
// shared ownership
void lock_shared(); // read_lock
void unlock_shared(); // read_unlock
The first two names, lock and unlock, are purposely the same names as those used by the C++11 mutex types. Just doing this much allows you to do things like:
std::lock_guard<ReaderWriter> lk1(mut);
// ...
std::unique_lock<ReaderWriter> lk2(mut);
// ...
std::condition_variable_any cv;
cv.wait(lk2); // wait using the write lock
And if you add:
bool try_lock();
Then you can also:
std::lock(lk2, <any other std or non-std locks>); // lock multiple locks
The lock_shared and unlock_shared names are chosen because of the std::shared_lock<T> type currently in the C++1y (we hope y is 4) working draft. It is documented in N3659.
And then you can say things like:
std::shared_lock<ReaderWriter> lk3(mut); // read_lock
std::condition_variable_any cv;
cv.wait(lk3); // wait using the read lock
I.e., by just creating a stand-alone ReaderWriter mutex type, with very carefully chosen names for its member functions, you get interoperability with the std-defined locks, condition_variable_any, and locking algorithms.
See N2406 for a more in-depth rationale of this framework.

Related

How to let thread wait for specific other thread to unlock data c++

Let's say I have one thread that continuously updates a certain object. During the update, the object must be locked for thread safety.
The second thread is more of an event-style operation: when it is spawned, I'd like the running update to finish its call and then immediately perform the event operation.
What I absolutely want to avoid is a situation where the event thread has to wait until it gets lucky enough to be scheduled at a moment when the update thread doesn't hold the lock on the data it needs to access.
Is there any way I can use the threading/mutex tools in C++ to accomplish this? Or should I store the to-be-done operation in an unlocked variable and perform the operation on the update thread?
//// System.h
#pragma once
#include <mutex>
#include <iostream>
#include <chrono>
#include <thread>
class System {
private:
    int state = 0;
    std::mutex mutex;
public:
    void update();
    void reset(int e);
};
//////// System.cpp
#include "System.h"
void System::update() {
    std::lock_guard<std::mutex> guard(mutex);
    state++;
    std::cout << state << std::endl;
    std::this_thread::sleep_for(std::chrono::seconds(1));
}
void System::reset(int e) {
    std::lock_guard<std::mutex> guard(mutex);
    state = e;
    std::cout << state << std::endl;
}
////// ThreadTest.h
#pragma once
#include <iostream>
#include "System.h"
void loop_update(System& system);
void reset_system(System& system);
int main();
////// ThreadTest.cpp
#include "ThreadTest.h"
void loop_update(System& system) {
    while (true) system.update();
}
void reset_system(System& system) {
    system.reset(0);
}
int main()
{
    System system;
    std::thread t1 = std::thread(loop_update, std::ref(system));
    int reset = 0;
    while (true) {
        std::this_thread::sleep_for(std::chrono::seconds(10));
        std::cout << "Reset" << std::endl;
        reset_system(system);
    }
}
The example gives the following output. You can clearly see a huge delay before the reset actually takes effect.
1
...
10
Reset
11
...
16
0
1
...
10
Reset
11
...
43
0
1
If I understand you correctly, you have two threads using the same mutex, but you want one thread to get a higher preference than the other for acquiring the actual lock.
As far as I know, there is no way to ensure preference using the native tools. You can work around it if you don't mind the code of both threads knowing about it.
For example:
std::mutex mutex; // the mutex both threads share
std::atomic<int> shouldPriorityThreadRun{0};
auto priorityThreadCode = [&]() {
    ++shouldPriorityThreadRun;
    auto lock = std::unique_lock{mutex};
    doMyStuff();
    --shouldPriorityThreadRun;
};
auto backgroundThreadCode = [&]() {
    while (true)
    {
        if (shouldPriorityThreadRun == 0)
        {
            auto lock = std::unique_lock{mutex};
            doBackgroundStuff();
        }
        else
            std::this_thread::yield();
    }
};
If you have multiple priority threads, those can't have priority over each other.
If you don't like the yield, you could do fancier things with std::condition_variable, so you can inform other threads that the mutex is available. However, I believe this is good enough.
It should already work with your current approach.
The mutex locks concurrent access to your data, so you can lock it within the first thread to update the data.
When the event routine (your second thread) comes to execution, it has to check whether the mutex is unlocked; only then can it lock the mutex and perform its task.
If I understand your code correctly (I am not a C++ expert), the std::lock_guard<std::mutex> guard(mutex); keeps the mutex locked for the entire update function, including the sleep, and therefore other threads hardly get a chance to acquire it.
When the update thread finishes its job, it needs to unlock the mutex before entering the sleep; then the reset thread has a chance to take the lock without delay. I also tried running your code on my machine and observed that the reset thread keeps waiting for the lock; which thread acquires the mutex next is unspecified, so the delay can be arbitrarily long.
2
3
4
5
6
7
8
9
10
Reset
11
12
13
14
15
16
17
18...
void System::update() {
    mutex.lock();
    state++;
    std::cout << state << std::endl;
    mutex.unlock();
    std::this_thread::sleep_for(std::chrono::seconds(1));
}
void System::reset(int e) {
    mutex.lock();
    state = e;
    std::cout << state << std::endl;
    mutex.unlock();
}

Program returned: 143 when thread

I have a method:
void move_robot(const vector<vector<double> > &map) {
    // accumulate the footprint while the robot moves
    // Iterate through the path
    //std::unique_lock<std::mutex> lck(mtx);
    for (unsigned int i = 1; i < map.size(); i++) {
        while (distance(position, map[i]) > DISTANCE_TOLERANCE) {
            this->direction = unitary_vector(map[i], this->position);
            this->next_step();
            lck.unlock();
            this_thread::sleep_for(chrono::milliseconds(10)); // sleep for 10 ms
            lck.lock();
        }
        std::cout << "New position is x:" << this->position[0] << " and y:" << this->position[1] << std::endl;
    }
    this->moving = false;
    // notify to end
}
When the sleep and locks are included I get:
ASM generation compiler returned: 0
Execution build compiler returned: 0
Program returned: 143
Killed - processing time exceeded
Nevertheless, if I comment out all the locks and the this_thread::sleep_for, it works as expected.
I need the locks because I am dealing with other threads. The complete code is here: https://godbolt.org/z/7ErjrG
I am quite stuck because the output does not provide much information.
You have not posted the code of next_step or the definition of mtx; this is important information.
std::mutex mtx;
void next_step() {
    std::unique_lock<std::mutex> lck(mtx);
    this->position[0] += DT * this->direction[0];
    this->position[1] += DT * this->direction[1];
}
If you read the manual for std::mutex you find out:
A calling thread must not own the mutex prior to calling lock or try_lock.
And std::unique_lock:
Locks the associated mutex by calling m.lock(). The behavior is undefined if the current thread already owns the mutex except when the mutex is recursive.
next_step, called from move_robot, violates this: it tries to lock a mutex object already owned by the calling thread.
The relative topic for your question is Can unique_lock be used with a recursive_mutex?. There you get the fix:
std::recursive_mutex mtx;
std::unique_lock<std::recursive_mutex> lck(mtx);

boost::fiber scheduling - when and how

According to the documentation
the currently-running fiber retains control until it invokes some
operation that passes control to the manager
I can think of only one such operation: boost::this_fiber::yield, which may cause a control switch from one fiber to another. However, when I run something like
bf::fiber([](){std::cout << "Bang!" << std::endl;}).detach();
bf::fiber([](){std::cout << "Bung!" << std::endl;}).detach();
I get output like
Bang!Bung!
\n
\n
This means control was passed between the << operators from one fiber to another. How could that happen? Why? What is the general definition of control passing from fiber to fiber in the context of the boost::fiber library?
EDIT001:
Can't get away without code:
#include <boost/fiber/fiber.hpp>
#include <boost/fiber/mutex.hpp>
#include <boost/fiber/barrier.hpp>
#include <boost/fiber/condition_variable.hpp>
#include <boost/fiber/algo/algorithm.hpp>
#include <boost/fiber/algo/work_stealing.hpp>
#include <chrono>
#include <cmath>
#include <iostream>
#include <thread>
namespace bf = boost::fibers;
class GreenExecutor
{
    std::thread worker;
    bf::condition_variable_any cv;
    bf::mutex mtx;
    bf::barrier barrier;
public:
    GreenExecutor() : barrier{2}
    {
        worker = std::thread([this] {
            bf::use_scheduling_algorithm<bf::algo::work_stealing>(2);
            // wait till all threads joining the work stealing have been registered
            barrier.wait();
            mtx.lock();
            // suspend main-fiber from the worker thread
            cv.wait(mtx);
            mtx.unlock();
        });
        bf::use_scheduling_algorithm<bf::algo::work_stealing>(2);
        // wait till all threads have been registered with the scheduling algorithm
        barrier.wait();
    }
    template<typename T>
    void PostWork(T&& functor)
    {
        bf::fiber{std::move(functor)}.detach();
    }
    ~GreenExecutor()
    {
        cv.notify_all();
        worker.join();
    }
};
int main()
{
    GreenExecutor executor;
    std::this_thread::sleep_for(std::chrono::seconds(1));
    int i = 0;
    for (auto j = 0ul; j < 10; ++j) {
        executor.PostWork([idx{++i}]() {
            auto res = pow(sqrt(sin(cos(tan(idx)))), M_1_PI);
            std::cout << idx << " - " << res << std::endl;
        });
    }
    while (true) {
        boost::this_fiber::yield();
    }
    return 0;
}
Output
2 - 1 - -nan
0.503334 3 - 4 - 0.861055
0.971884 5 - 6 - 0.968536
-nan 7 - 8 - 0.921959
0.9580699
- 10 - 0.948075
0.961811
Ok, there were a couple of things I missed. First, my conclusion was based on a misunderstanding of how things work in boost::fiber.
The line in the constructor mentioned in the question,
bf::use_scheduling_algorithm<bf::algo::work_stealing>(2);
installs the scheduler in the thread where the GreenExecutor instance was created (the main thread). So when launching two worker fibers I was actually initiating two threads which would process the submitted fibers asynchronously, thus mixing the std::cout output. No magic: everything works as expected, and boost::this_fiber::yield is still the only option to pass control from one fiber to another.

Four Threads in Function

I have 4 threads that should enter the same function A.
I want to allow only two of them to perform it at a time.
I want to wait for all four and then perform function A.
How should I do it (in C++)?
A condition variable in C++ should suffice here.
This should work for allowing only 2 threads to proceed at once:
// globals
std::condition_variable cv;
std::mutex m;
int active_runners = 0;
void FunctionA()
{
    // do work
}
void ThreadFunction()
{
    // enter lock and wait until we can grab one of the two runner slots
    {
        std::unique_lock<std::mutex> lock(m); // enter lock
        while (active_runners >= 2) // evaluate the condition under a lock
        {
            cv.wait(lock); // release the lock and wait for a signal
        }
        active_runners++; // become one of the runners
    } // release lock
    FunctionA();
    // on return from FunctionA, notify everyone that there's one less runner
    {
        std::unique_lock<std::mutex> lock(m); // enter lock
        active_runners--;
        cv.notify_one(); // wake up anyone blocked on "wait"
    } // release lock
}

Linux 3.0: futex-lock deadlock bug?

// SubFetch(x, y) = atomically x -= y and return x (__sync_sub_and_fetch)
// AddFetch(x, y) = atomically x += y and return x (__sync_add_and_fetch)
// CompareWait(x, y) = futex(&x, FUTEX_WAIT, y): wait on x if x == y
// Wake(x, y) = futex(&x, FUTEX_WAKE, y): wake up y waiters
struct Lock
{
    Lock() : x(1) {}
    void lock()
    {
        while (true)
        {
            if (SubFetch(x, 1) == 0)
                return;
            x = -1;
            CompareWait(x, -1);
        }
    }
    void unlock()
    {
        if (AddFetch(x, 1) == 1)
            return;
        x = 1;
        Wake(x, 1);
    }
private:
    int x;
};
Linux 3.0 provides a system call called futex, upon which many concurrency utilities are based including recent pthread_mutex implementations. Whenever you write code you should always consider whether using an existing implementation or writing it yourself is the better choice for your project.
Above is an implementation of a Lock (a mutex; a counting semaphore with one permit) based upon futex and the semantics description in man futex(7).
It appears to contain a deadlock bug whereby, after multiple threads have locked and unlocked it a few thousand times, the threads can get into a state where x == -1 and all the threads are stuck in CompareWait, yet no one is holding the lock.
Can anyone see where the bug is?
Update: I'm a little surprised that the futex(7) semantics are so easy to get wrong. I completely rewrote Lock as follows... is this correct now?
// CompareAssign(x, y, z) atomically: if (x == y) { x = z; return true; } else return false;
struct Lock
{
    Lock() : x(0) {}
    void lock()
    {
        while (!CompareAssign(x, 0, 1))
            if (x == 2 || CompareAssign(x, 1, 2))
                CompareWait(x, 2);
    }
    void unlock()
    {
        if (SubFetch(x, 1) == 0)
            return;
        x = 0;
        Wake(x, 1);
    }
private:
    int x;
};
The idea here is that x has the following three states:
0: unlocked
1: locked & no waiters
2: locked & waiters
The problem is that you explicitly assign -1 to x if the SubFetch fails to acquire the lock. This races with the unlock.
Thread 1 acquires the lock. x==0.
Thread 2 tries to acquire the lock. The SubFetch sets x to -1, and then thread 2 is suspended.
Thread 1 releases the lock. The AddFetch sets x to 0, so the code then explicitly sets x to 1 and calls Wake.
Thread 2 wakes up and sets x to -1, and then calls CompareWait.
Thread 2 is now stuck waiting, with x set to -1, but there is no one around to wake it, as thread 1 has already released the lock.
The proper implementation of a futex-based Mutex is described in Ulrich Drepper's paper "Futexes are tricky"
http://people.redhat.com/drepper/futex.pdf
It includes not only the code but also a very detailed explanation of why it is correct. The code from the paper:
class mutex
{
public:
    mutex() : val(0) {}
    void lock()
    {
        int c;
        if ((c = cmpxchg(val, 0, 1)) != 0)
            do {
                if (c == 2 || cmpxchg(val, 1, 2) != 0)
                    futex_wait(&val, 2);
            } while ((c = cmpxchg(val, 0, 2)) != 0);
    }
    void unlock()
    {
        // NOTE: atomic_dec returns the value BEFORE the operation, unlike your SubFetch!
        if (atomic_dec(val) != 1) {
            val = 0;
            futex_wake(&val, 1);
        }
    }
private:
    int val;
};
Comparing the code in the paper with your code, I spot a difference.
You have
if (x == 2 || CompareAssign(x, 1, 2))
using the futex's value directly, whereas Drepper uses the return value from the previous CompareAssign(). That difference will probably affect performance only.
Your unlock code is different, too, but seems to be semantically equivalent.
In any case I would strongly advise you to follow Drepper's code to the letter. That paper has stood the test of time and received a lot of peer review. You gain nothing from rolling your own.
How about this scenario with three threads, A, B, and C.
The initial state of this scenario has:
thread A holding the lock
thread B not contending for the lock just yet
thread C in CompareWait()
x == -1 from when C failed to acquire the lock
A               B               C
==============  ==============  ==============
AddFetch()
(so x == 0)
                SubFetch()
                (so x == -1)
x = 1
                x = -1
Wake()
At this point, whichever of B or C is unblocked, it will not get a result of 0 from SubFetch(), so every thread ends up blocked in CompareWait() with x == -1 and no one holding the lock.