Lock for reading, writing and exclusivity - C++

I need to implement three different types of locking: for reading, for writing and for exclusive access. For example, there is an abstract object named table, and many transactions work with it from different threads.
A read lock allows data to be read simultaneously by different transactions, but if one of the transactions needs the table for writing, it has to wait until all read locks are released.
A write lock still allows any transaction to read from the table, but only the one transaction that owns the table for writing can write to it.
An exclusive lock means that only one transaction has access to the table, and the others wait until the lock is released.
I'm looking for how this can be implemented using WinAPI and C/C++. I am trying something like this:
class Table
{
    void LockWrite()
    {
        if (LockLW.IsLock())
            LockLW.wait_and_lock();
        //
        if (LockEx.IsLock())
            LockEx.wait();
    }
    void LockExclusive()
    {
        {
            // it is assumed that this is a thread-safe check
            if (readers != 0)
                FreeLockRead.wait();
        }
        // but there is a problem: at this point some transactions may have started reading again
        if (LockLW.IsLock())
            LockLW.wait_and_lock();
        if (LockEx.IsLock())
            LockEx.wait_and_lock();
    }
    void UnLockExclusive()
    {
        LockEx.unLock();
    }
    void LockRead()
    {
        if (LockEx.IsLock())
            LockEx.wait();
        // Set Lock Read
        // a problem: at this point one of the transactions may have acquired LockEx
        readers += 1;
    }
    void UnLockRead()
    {
        readers -= 1;
        if (readers == 0)
            FreeLockRead.pulse();
    }

    mutex LockLW;
    mutex LockEx;
    event FreeLockRead;
    atomic readers;
};

After studying the materials, I found the following solution:
#include <cassert>
#include <condition_variable>
#include <cstdint>
#include <mutex>

class Table
{
    enum eFlag
    {
        eFree      = 0,
        eRead      = 1,
        eWrite     = 2,
        eExclusive = 4
    };

    uint32_t                m_nReaders = 0;
    uint32_t                m_nFlag    = eFree;
    std::mutex              m_Mutex;
    std::condition_variable m_condition;

public:
    void LockForRead()
    {
        std::unique_lock<std::mutex> lk(m_Mutex);
        // wait until no transaction holds the table exclusively
        m_condition.wait(lk, [this]() { return !(m_nFlag & eExclusive); });
        m_nReaders += 1;
        m_nFlag |= eRead;
    }

    void UnLockForRead()
    {
        std::unique_lock<std::mutex> lk(m_Mutex);
        assert(m_nReaders != 0 && (m_nFlag & eRead));
        m_nReaders -= 1;
        if (m_nReaders == 0)
        {
            m_nFlag &= ~eRead;
            // notify_all: write and exclusive waiters wait on different predicates,
            // so waking only one thread could wake the wrong one and lose the signal
            m_condition.notify_all();
        }
    }

    void LockForExclusive()
    {
        std::unique_lock<std::mutex> lk(m_Mutex);
        // wait until the table is completely free
        m_condition.wait(lk, [this]() { return m_nFlag == eFree; });
        m_nFlag = eExclusive;
    }

    void UnLockForExclusive()
    {
        std::unique_lock<std::mutex> lk(m_Mutex);
        assert(m_nFlag == eExclusive);
        m_nFlag = eFree;
        m_condition.notify_all();
    }

    void LockForWrite()
    {
        std::unique_lock<std::mutex> lk(m_Mutex);
        // wait until nobody holds the table exclusively or for writing
        m_condition.wait(lk, [this]() { return !(m_nFlag & (eExclusive | eWrite)); });
        m_nFlag |= eWrite;
    }

    void UnLockForWrite()
    {
        std::unique_lock<std::mutex> lk(m_Mutex);
        assert(m_nFlag & eWrite);
        m_nFlag &= ~eWrite;
        // notify_all for the same reason as in UnLockForRead
        m_condition.notify_all();
    }
};
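For reference, here is a minimal usage sketch of the Table class above (assuming the Lock*/UnLock* methods are public, as written here; the printed messages are only illustrative):
#include <chrono>
#include <iostream>
#include <thread>
#include <vector>

int main()
{
    Table table;
    std::vector<std::thread> threads;

    // Several concurrent readers: they can all hold the read lock at the same time.
    for (int i = 0; i < 3; ++i)
        threads.emplace_back([&table, i] {
            table.LockForRead();
            std::cout << "reader " << i << " sees the table\n";
            std::this_thread::sleep_for(std::chrono::milliseconds(50));
            table.UnLockForRead();
        });

    // One writer: readers may continue, but only one writer at a time.
    threads.emplace_back([&table] {
        table.LockForWrite();
        std::cout << "writer modifies the table\n";
        table.UnLockForWrite();
    });

    // One exclusive owner: waits until the table is completely free.
    threads.emplace_back([&table] {
        table.LockForExclusive();
        std::cout << "exclusive owner has the table to itself\n";
        table.UnLockForExclusive();
    });

    for (auto& t : threads)
        t.join();
}
Note that readers only wait for the exclusive flag, so an exclusive request can starve while new readers keep arriving; whether that matters depends on your workload.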


Correct way to check bool flag in thread

How can I check a bool variable in a class in a thread-safe way?
For example, in my code:
// test.h
class Test {
    void threadFunc_run();
    void change(bool _set) { m_flag = _set; }
    ...
    bool m_flag;
};
// test.cpp
void Test::threadFunc_run()
{
    // called "Playing"
    while (m_flag == true) {
        for (int i = 0; i < 99999999 && m_flag; i++) {
            // do something .. 1
        }
        for (int i = 0; i < 111111111 && m_flag; i++) {
            // do something .. 2
        }
    }
}
I want to stop "Playing" as soon as the change(..) function is executed from external code.
The stop should also take effect while the for loops are running.
According to my search, there are variables that make changes visible immediately, such as atomic or volatile.
If not immediately, is there a better way than using a plain bool?
Actually, synchronizing threads safely requires more than a bool.
You will need a state, a mutex and a condition variable, like this.
This approach also allows a quick reaction to a stop request from within the loop.
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <future>
#include <mutex>
#include <thread>

class Test
{
private:
    // Having just a bool to check the state of your thread is NOT enough.
    // Your thread will have some intermediate states as well.
    enum play_state_t
    {
        idle,     // initial state, not started yet (not scheduled by the OS thread scheduler yet)
        playing,  // running and doing work
        stopping, // request for stop has been issued
        stopped   // thread has stopped (could also be checked via std::future synchronization)
    };

public:
    void play()
    {
        // Start the play loop. The lambda is not guaranteed to have started
        // by the time this call returns (depends on the thread scheduling of the underlying OS).
        // I use std::async since it has far superior synchronization with the calling thread:
        // the returned future can be used to pass both values and exceptions back to it.
        m_play_future = std::async(std::launch::async, [this]
        {
            // signal that the asynchronous function has really started
            set_state(play_state_t::playing);
            std::cout << "play started\n";

            // as long as the state is playing, keep doing the work
            while (get_state() == play_state_t::playing)
            {
                // loop to show we can break out of it quickly when stop is called
                for (std::size_t i = 0; (i < 100) && (get_state() == play_state_t::playing); ++i)
                {
                    std::cout << ".";
                    std::this_thread::sleep_for(std::chrono::milliseconds(200));
                }
            }

            set_state(play_state_t::stopped);
            std::cout << "play stopped.\n";
        });

        // To avoid race conditions, really wait for the thread
        // handling the async call to have started playing.
        wait_for_state(play_state_t::playing);
    }

    void stop()
    {
        std::unique_lock<std::mutex> lock{ m_mtx }; // only wait on the condition variable under the lock
        if (m_state == play_state_t::playing)
        {
            std::cout << "\nrequest stop.\n";
            m_state = play_state_t::stopping;
            m_cv.wait(lock, [&] { return m_state == play_state_t::stopped; });
        }
    }

    ~Test()
    {
        stop();
    }

private:
    void set_state(const play_state_t state)
    {
        std::unique_lock<std::mutex> lock{ m_mtx };
        m_state = state;
        m_cv.notify_all(); // let other threads that are waiting on the condition variable wake up to check the new state
    }

    play_state_t get_state() const
    {
        std::unique_lock<std::mutex> lock{ m_mtx };
        return m_state;
    }

    void wait_for_state(const play_state_t state)
    {
        std::unique_lock<std::mutex> lock{ m_mtx };
        m_cv.wait(lock, [&] { return m_state == state; });
    }

    // for more info on condition variables
    // see: https://www.modernescpp.com/index.php/c-core-guidelines-be-aware-of-the-traps-of-condition-variables
    mutable std::mutex m_mtx;
    std::condition_variable m_cv; // a condition variable is not really a variable, more a signal for threads to wake up
    play_state_t m_state{ play_state_t::idle };
    std::future<void> m_play_future;
};

int main()
{
    Test test;
    test.play();
    std::this_thread::sleep_for(std::chrono::seconds(1));
    test.stop();
    return 0;
}

why do we need a reference count in this Reentrant lock example?

Why do we need m_refCount in the example below? What would happen if we left it out, removed the if statement, and just kept its body?
class ReentrantLock32
{
    std::atomic<std::size_t> m_atomic;
    std::int32_t m_refCount;

public:
    ReentrantLock32() : m_atomic(0), m_refCount(0) {}

    void Acquire()
    {
        std::hash<std::thread::id> hasher;
        std::size_t tid = hasher(std::this_thread::get_id());
        // if this thread doesn't already hold the lock...
        if (m_atomic.load(std::memory_order_relaxed) != tid)
        {
            // ... spin wait until we do hold it
            std::size_t unlockValue = 0;
            while (!m_atomic.compare_exchange_weak(
                unlockValue,
                tid,
                std::memory_order_relaxed, // fence below!
                std::memory_order_relaxed))
            {
                unlockValue = 0;
                PAUSE();
            }
        }
        // increment reference count so we can verify that
        // Acquire() and Release() are called in pairs
        ++m_refCount;
        // use an acquire fence to ensure all subsequent
        // reads by this thread will be valid
        std::atomic_thread_fence(std::memory_order_acquire);
    }

    void Release()
    {
        // use release semantics to ensure that all prior
        // writes have been fully committed before we unlock
        std::atomic_thread_fence(std::memory_order_release);
        std::hash<std::thread::id> hasher;
        std::size_t tid = hasher(std::this_thread::get_id());
        std::size_t actual = m_atomic.load(std::memory_order_relaxed);
        assert(actual == tid);
        --m_refCount;
        if (m_refCount == 0)
        {
            // release lock, which is safe because we own it
            m_atomic.store(0, std::memory_order_relaxed);
        }
    }

    bool TryAcquire()
    {
        std::hash<std::thread::id> hasher;
        std::size_t tid = hasher(std::this_thread::get_id());
        bool acquired = false;
        if (m_atomic.load(std::memory_order_relaxed) == tid)
        {
            acquired = true;
        }
        else
        {
            std::size_t unlockValue = 0;
            acquired = m_atomic.compare_exchange_strong(
                unlockValue,
                tid,
                std::memory_order_relaxed, // fence below!
                std::memory_order_relaxed);
        }
        if (acquired)
        {
            ++m_refCount;
            std::atomic_thread_fence(std::memory_order_acquire);
        }
        return acquired;
    }
};
EDIT: The example is from the book "Game Engine Architecture, 3rd Edition" by Jason Gregory.
The count is needed to implement recursive locking. If it were not there, Release() would always unlock, no matter how many Acquire() calls preceded it, which is not what you expect or want in many cases.
Consider the following common pattern:
void helper_method() {
    Acquire();
    // Work #2
    Release();
}

void method() {
    Acquire();
    // Work #1
    helper_method();
    // Work #3
    Release();
}
One has to be careful if the lock is not recursive. In that case, #3 no longer runs under the lock, and you now have a hard-to-trace bug. It happens simply because the Release() in helper_method unlocked the lock; it did so in good faith, since it had locked it in the first place, not knowing the lock was already held before.
This is also the reason there are both std::mutex and std::recursive_mutex: locking the former twice from the same thread is UB (and, in my experience, it often deadlocks).
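As a side note, the same pattern is safe out of the box with the standard recursive mutex; a minimal sketch, reusing the helper_method/method names from the example above:
#include <mutex>

std::recursive_mutex m;

void helper_method() {
    std::lock_guard<std::recursive_mutex> lock(m); // re-locking from the owning thread is allowed
    // Work #2
}

void method() {
    std::lock_guard<std::recursive_mutex> lock(m);
    // Work #1
    helper_method(); // the inner guard only decrements the ownership count on destruction
    // Work #3 still runs under the lock
}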

c++ how can I write single-process named_mutex?

I need a class which will allow me to lock/unlock specific names (or simply indexes), and I don't want it to work across processes, so I can run multiple instances of my application. I also want to avoid system-specific APIs, just std or boost. (For simplicity's sake, let's say the maximum number of names/indexes used at the same time is 100.)
Unfortunately, I have no usage example for you; I am just interested in whether it is possible to make.
I tried to find anything like that, but all I found was boost::interprocess::named_mutex and some WinAPI functions, like CreateMutexW.
I also tried to write my own code (below), but it is definitely not perfect and has at least one potential bug.
So, does anyone have any suggestions, code ideas, or already existing classes?
Thanks in advance
class IndexMutex
{
public:
    void Lock(uint32_t id);
    void Unlock(uint32_t id);

private:
    struct IndexLock
    {
        static constexpr uint32_t unlocked = ~0u;

        void Lock(uint32_t id) {
            index_ = id;
            mutex_.lock();
        }
        void Unlock() {
            mutex_.unlock();
            index_ = unlocked;
        }
        bool IsLocked() const {
            return index_ != unlocked;
        }

        std::atomic<uint32_t> index_ = unlocked;
        std::mutex mutex_{};
    };

    std::array<IndexLock, 100> mutexes_{};
    std::mutex masterMutex_{};
};

void IndexMutex::Lock(uint32_t id)
{
    if (id == IndexLock::unlocked) {
        return;
    }

    const std::lock_guard<std::mutex> guard{ masterMutex_ };
    uint32_t possibleId = IndexLock::unlocked;

    for (uint32_t i = 0; i < mutexes_.size(); ++i) {
        if (mutexes_[i].index_ == id) {
            masterMutex_.unlock();
            // POTENTIAL BUG: TIME GAP
            mutexes_[i].Lock(id);
            return;
        }
        // Searching for an unlocked mutex at the same time.
        if (possibleId == IndexLock::unlocked && !mutexes_[i].IsLocked()) {
            possibleId = i;
        }
    }

    if (possibleId == IndexLock::unlocked) {
        throw std::runtime_error{ "No locks were found." };
    }

    // We are sure here that the mutex can't be locked,
    // because we were protected by the master mutex all that time.
    mutexes_[possibleId].Lock(id);
}

void IndexMutex::Unlock(uint32_t id)
{
    if (id == IndexLock::unlocked) {
        return;
    }

    const std::lock_guard<std::mutex> guard{ masterMutex_ };

    for (auto& lock : mutexes_) {
        if (lock.index_ == id) {
            lock.Unlock();
            return;
        }
    }

    throw std::runtime_error{ "No mutexes were found for the specified index." };
}
You want a reference-counted mutex map, protected by a master mutex. An implementation in terms of
std::map<int, std::pair<int, std::mutex>>
would do the job.
The lock operation works like this (untested pseudocode):
master.lock()
std::pair<int, std::mutex>& m = mymap[index]; //inserts a new one if needed
m.first++;
master.unlock();
m.second.lock();
The unlock operation:
master.lock();
std::pair<int, std::mutex>& m = mymap[index];
m.second.unlock();
m.first--;
if (m.first == 0) mymap.erase(index);
master.unlock();
No deadlocks! It is safe to first unlock the master and then lock the found mutex: even if another thread intervenes and unlocks that mutex, the reference count won't drop to zero, so the mutex will not be removed.
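Fleshed out a little, a sketch of that idea as a class could look like the following (untested; IdMutexMap, Entry and the member names are mine, not an existing API):
#include <cstdint>
#include <map>
#include <mutex>

// Reference-counted mutex map: one mutex per currently used id,
// created on demand and removed when the last user unlocks it.
class IdMutexMap
{
public:
    void Lock(uint32_t id)
    {
        std::mutex* m = nullptr;
        {
            std::lock_guard<std::mutex> guard(master_);
            auto& entry = map_[id];   // inserts { 0, mutex } if not present
            ++entry.refs;
            m = &entry.mutex;
        }
        // Blocking happens outside the master lock; the entry cannot
        // disappear because our reference keeps refs > 0.
        m->lock();
    }

    void Unlock(uint32_t id)
    {
        std::lock_guard<std::mutex> guard(master_);
        auto it = map_.find(id);
        if (it == map_.end())
            return;                   // or assert/throw: unlock without a matching lock
        it->second.mutex.unlock();
        if (--it->second.refs == 0)
            map_.erase(it);           // nobody references it any more, safe to drop
    }

private:
    struct Entry
    {
        int        refs = 0;
        std::mutex mutex;
    };
    std::map<uint32_t, Entry> map_;
    std::mutex master_;
};
The master lock is held only for the bookkeeping; the potentially blocking lock() on the per-id mutex happens outside it, so different ids never block each other.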

Thread synchronization problem in thread's procedure

I have a question. I add objects to a map, and in a thread I call the run() procedure for all elements of the map.
Do I understand correctly that there is a synchronization problem in the process procedure in this code? Can I add a mutex, given that this procedure is called in the thread?
class Network {
public:
    Network() {
        std::cout << "Network constructor" << std::endl;
    }

    void NetworkInit(const std::string& par1) {
        this->par1 = par1;
    }

    ~Network() {
        std::cout << "Network destructor" << std::endl;
        my_map.clear();
    }

    void addLogic(uint32_t Id, std::shared_ptr<Logic> lgc) {
        std::lock_guard<std::mutex> lk(mutex);
        my_map.insert(std::pair<uint32_t, std::shared_ptr<Logic>>(Id, lgc));
        cv.notify_one();
    }

    void removeLogic(uint32_t Id) {
        std::unique_lock<std::mutex> lk(mutex);
        cv.wait(lk, [this] { return !my_map.empty(); });
        auto p = this->my_map.find(Id);
        if (p != end(this->my_map)) {
            this->my_map.erase(Id);
        }
        lk.unlock();
    }

    /**
     * Start thread
     */
    void StartThread(int id = 1) {
        running = true;
        first = std::thread([this, id] { process(id); });
        first.detach();
    }

    /**
     * Stop thread
     */
    void StopThread() {
        running = false;
    }

private:
    std::thread first;
    std::atomic<bool> running{ true };

    void process(int id) {
        while (running) {
            for (const auto& it : my_map) {
                it.second->run();
            }
            std::this_thread::sleep_for(10ms);
        }
    }

private:
    std::mutex mutex;
    std::condition_variable cv;
    using MyMapType = std::map<uint32_t, std::shared_ptr<Logic>>;
    MyMapType my_map;
    std::string par1;
};
The first idea is to protect the map as a whole with a mutex that is released during run. This works for addLogic, because inserting into a map invalidates no iterators, but not for removeLogic, which might invalidate the very iterator value being used by process.
More efficient, lock-free approaches like hazard pointers may be applicable here, but the basic idea is to use a deferred-deletion list. Assuming that the intent of concurrent deletion is cancellation of the task (not merely cleanup after all work is completed), it's sensible to have the consumer thread check immediately before execution. Using a set (to correspond to your map) lets the deletion list be dynamic and those checks be efficient.
So have another mutex protect the deletion list and take it at the beginning of each iteration in process:
void addLogic(uint32_t Id, std::shared_ptr<Logic> lgc) {
    std::lock_guard<std::mutex> lk(mutex);
    my_map.insert(std::pair<uint32_t, std::shared_ptr<Logic>>(Id, lgc));
}

void removeLogic(uint32_t Id) {
    std::lock_guard<std::mutex> kg(kill_mutex);
    kill.insert(Id);
}

private:
    std::set<uint32_t> kill;
    std::mutex mutex, kill_mutex;

    void process(int id) {
        for (; running; std::this_thread::sleep_for(10ms)) {
            std::unique_lock<std::mutex> lg(mutex);
            for (auto i = my_map.begin(), e = my_map.end(); i != e;) {
                if (std::lock_guard<std::mutex>(kill_mutex), kill.erase(i->first)) {
                    i = my_map.erase(i);
                    continue; // test i != e again
                }
                lg.unlock();
                i->second->run();
                lg.lock();
                ++i;
            }
        }
    }
This code omits your condition_variable usage: it’s not necessary to wait before enqueuing something for deletion.
A solution built on low-level concurrency primitives usually does not scale and is not easy to maintain.
A better alternative would be to have a thread-safe "control" queue of map-update or worker-termination instructions.
Something like this:
enum Op {
    ADD,
    DROP,
    STOP
};

struct Request {
    Op op;
    uint32_t id;
    std::function<void()> action;
};

...

// the map which required protection in your code
std::map<uint32_t, std::function<void()>> subs;

// the request queue and its mutex (not very optimal, just to demonstrate the idea)
std::vector<Request> requests;
std::mutex mutex;

// the worker thread
std::thread worker([&]() {
    // the temporary buffer the requests are drained into from the queue before processing
    decltype(requests) buffer;
    // the main loop
    while (true) {
        // request collection (requires synchronization)
        {
            std::lock_guard<decltype(mutex)> const guard{ mutex };
            buffer.swap(requests);
        }
        // request processing
        for (auto&& request : buffer) {
            switch (request.op) {
            case ADD:
                subs[request.id] = std::move(request.action);
                break;
            case DROP:
                subs.erase(request.id);
                break;
            case STOP:
                goto endloop;
            }
        }
        // clear the processed requests so they are not swapped back into the queue
        buffer.clear();
        // map iteration
        for (auto&& entry : subs) {
            entry.second();
        }
    }
endloop:;
});
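The producer side then only needs to push requests under the same mutex; a minimal sketch (the enqueue helper is mine, not part of the code above):
// Hypothetical helper on the producer side: enqueue a request under the queue mutex.
auto enqueue = [&](Request r) {
    std::lock_guard<std::mutex> guard{ mutex };
    requests.push_back(std::move(r));
};

// Register two logic entries, drop one later, then stop the worker thread.
enqueue({ ADD, 1, [] { /* run() of logic #1 */ } });
enqueue({ ADD, 2, [] { /* run() of logic #2 */ } });
enqueue({ DROP, 1, {} });
enqueue({ STOP, 0, {} });

worker.join();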

Using boost condition variables

I am designing an asynchronous logger class as follows, but I am not sure whether I am using the boost condition variable in the right way. Can anyone comment on this? The processLogEntry method is the thread function, and I am using boost here.
void LogWriter::stopThread()
{
    mStop = true;
    mCond.notify_one();
    mThread->join();
}

void LogWriter::processLogEntry()
{
    while (!mStop)
    {
        boost::mutex::scoped_lock lock(mMutex);
        mCond.wait(lock);
        while (!q.empty())
        {
            // process begins
        }
    }
}

void LogWriter::addLogEntry()
{
    boost::mutex::scoped_lock lock(mMutex);
    // add it to the queue
    mCond.notify_one();
}
As has been pointed out, you must either make mStop atomic or guard all accesses to it with the mutex. Forget about volatile; it is not relevant to your purposes.
Furthermore, when waiting on a condition variable, a call to wait may return even if no notification function was called (a so-called spurious wake-up). As such, calls to wait need to be guarded by a predicate.
void LogWriter::stopThread()
{
    {
        boost::mutex::scoped_lock lock(mMutex);
        mStop = true;
        mCond.notify_one();
    }
    mThread->join();
}

void LogWriter::processLogEntry()
{
    for (;;) {
        boost::mutex::scoped_lock lock(mMutex);
        // We wait as long as we're not told to stop and
        // we don't have items to process
        while (!mStop && q.empty()) mCond.wait(lock);
        // Invariant: if we get here then
        // mStop || !q.empty() holds
        while (!q.empty())
        {
            // process begins
        }
        if (mStop) return;
    }
}
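For completeness, addLogEntry stays essentially as in the question; the key point is that the push and the notification happen while holding the same mutex. A sketch, assuming q is a std::queue-like member and LogEntry is a placeholder for your entry type:
void LogWriter::addLogEntry(LogEntry entry)
{
    boost::mutex::scoped_lock lock(mMutex);
    q.push(std::move(entry)); // add it to the queue under the lock
    mCond.notify_one();       // wake the processing thread; it rechecks the predicate
}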