Thread synchronization problem in thread's procedure - c++

I have a question. I add objects to a map and, in a thread, call the run() procedure for every element in the map.
Do I understand correctly that this code has a synchronization problem in the process procedure? Can I add a mutex, given that this procedure is called from the thread?
class Network {
public:
Network() {
std::cout << "Network constructor" << std::endl;
}
void NetworkInit(const std::string& par1) {
this->par1 = par1;
}
~Network() {
std::cout << "Network destructor" << std::endl;
my_map.clear();
}
void addLogic(uint32_t Id, std::shared_ptr<Logic> lgc) {
std::lock_guard<std::mutex> lk(mutex);
my_map.insert(std::pair<uint32_t, std::shared_ptr<Logic>>(Id, lgc));
cv.notify_one();
}
void removeLogic(uint32_t Id) {
std::unique_lock<std::mutex> lk(mutex);
cv.wait(lk, [this]{return !my_map.empty(); });
auto p = this->my_map.find(Id);
if (p != end(this->my_map)) {
this->my_map.erase(p);
}
lk.unlock();
}
/**
* Start thread
*/
void StartThread(int id = 1) {
running = true;
first = std::thread([this, id] { process(id); });
first.detach();
}
/**
* Stop thread
*/
void StopThread() {
running = false;
}
private:
std::thread first;
std::atomic<bool> running = ATOMIC_VAR_INIT(true);
void process(int id) {
while (running) {
for (const auto& it:my_map) {
it.second->run();
}
std::this_thread::sleep_for(std::chrono::milliseconds(10));
}
}
private:
std::mutex mutex;
std::condition_variable cv;
using MyMapType = std::map<uint32_t, std::shared_ptr<Logic> >;
MyMapType my_map;
std::string par1;
};

The first idea is to protect the map as a whole with a mutex that is released during run. This works for addLogic because inserting into a map invalidates no iterators, but not for removeLogic, which might invalidate the very iterator being used by process.
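For illustration, a rough, untested sketch of that first idea (using the member names from the question) might look like this:
// sketch of that first idea (not a fix): hold the map mutex while iterating,
// dropping it only around each run() call
void process(int id) {
    while (running) {
        std::unique_lock<std::mutex> lg(mutex);
        for (auto it = my_map.begin(); it != my_map.end(); ++it) {
            auto lgc = it->second;   // keep the Logic alive while unlocked
            lg.unlock();
            lgc->run();              // addLogic/removeLogic may run now
            lg.lock();
            // addLogic is harmless (map insertion invalidates no iterators),
            // but removeLogic may have erased *it, invalidating 'it' before ++it
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}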
More efficient, lock-free approaches like hazard pointers may be applicable here, but the basic idea is to use a deferred deletion list. Assuming that the intent of concurrent deletion is cancellation of the task (not merely cleanup after all work is completed), it's sensible to have the consumer thread check immediately before execution. Using a set (to correspond to your map) lets the deletion list be dynamic and those checks be efficient.
So have another mutex protect the deletion list and take it at the beginning of each iteration in process:
void addLogic(uint32_t Id, std::shared_ptr<Logic> lgc) {
std::lock_guard<std::mutex> lk(mutex);
my_map.insert(std::pair<uint32_t, std::shared_ptr<Logic>>(Id, lgc));
}
void removeLogic(uint32_t Id) {
std::lock_guard<std::mutex> kg(kill_mutex);
kill.insert(Id);
}
private:
std::set<uint32_t> kill;
std::mutex mutex,kill_mutex;
void process(int id) {
for(;running;std::this_thread::sleep_for(std::chrono::milliseconds(10))) {
std::unique_lock<std::mutex> lg(mutex);
for(auto i=my_map.begin(),e=my_map.end();i!=e;) {
// a temporary lock_guard holds kill_mutex just for this check; the comma operator yields erase()'s result as the condition
if(std::lock_guard<std::mutex>(kill_mutex),kill.erase(i->first)) {
i=my_map.erase(i);
continue; // test i!=e again
}
lg.unlock();
i->second->run();
lg.lock();
++i;
}
}
}
This code omits your condition_variable usage: it’s not necessary to wait before enqueuing something for deletion.

A solution built on low-level concurrency primitives usually does not scale and is not easy to maintain.
A better alternative would be to have a thread-safe "control" queue of map-update or worker-termination instructions.
Something like this:
enum Op {
ADD,
DROP,
STOP
};
struct Request {
Op op;
uint32_t id;
std::function<void()> action;
};
...
// the map which required protection in your code
std::map<uint32_t, std::function<void()>> subs;
// requests queue and its mutex (not very optimal, just to demonstrate the idea)
std::vector<Request> requests;
std::mutex mutex;
// the worker thread
std::thread worker([&](){
// the temporary buffer where requests are drained to from the queue before processing
decltype(requests) buffer;
// the main loop
while (true) {
// requests collection (requires synchronization)
{
std::lock_guard<decltype(mutex)> const guard {mutex};
buffer.swap(requests);
}
// requests processing
for(auto&& request: buffer) {
switch (request.op) {
case ADD:
subs[request.id] = std::move(request.action);
break;
case DROP:
subs.erase(request.id);
break;
case STOP: goto endloop;
}
}
// forget the processed requests so they are not swapped back into the queue next iteration
buffer.clear();
// map iteration
for (auto&& entry: subs) {
entry.second();
}
}
endloop:;
});
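For completeness, a caller might drive the worker roughly like this (untested sketch, reusing the Request, requests, mutex and worker names from the snippet above):
// hypothetical caller code: post requests, then shut the worker down
{
    std::lock_guard<std::mutex> guard{mutex};
    requests.push_back({ADD, 1, []{ std::cout << "logic 1 ran\n"; }});
}
// ... later: cancel it and stop the worker
{
    std::lock_guard<std::mutex> guard{mutex};
    requests.push_back({DROP, 1, {}});
    requests.push_back({STOP, 0, {}});
}
worker.join();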

Related

Correct way to check bool flag in thread

How can I check a bool variable in a class in a thread-safe way?
For example, in my code:
// test.h
class Test {
void threadFunc_run();
void change(bool _set) { m_flag = _set; }
...
bool m_flag;
};
// test.cpp
void Test::threadFunc_run()
{
// called "Playing"
while(m_flag == true) {
for(int i = 0; i < 99999999 && m_flag; i++) {
// do something .. 1
}
for(int i = 0; i < 111111111 && m_flag; i++) {
// do something .. 2
}
}
}
I want to stop "Playing" as soon as the change(..) function is executed in the external code.
It should also take effect while the for loops are still running.
From searching, I found there are variable types for recognizing changes immediately, such as atomic or volatile.
If it does not have to be immediate, is there a better way using a normal bool?
Actually, synchronizing threads safely requires more than a bool.
You will need a state, a mutex and a condition variable like this.
The approach also allows for quick reaction to stop from within the loop.
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <future>
#include <mutex>
class Test
{
private:
// having just a bool to check the state of your thread is NOT enough.
// your thread will have some intermediate states as well
enum play_state_t
{
idle, // initial state, not started yet (not scheduled by the OS thread scheduler yet)
playing, // running and doing work
stopping, // request for stop is issued
stopped // thread has stopped (could also be checked by std::future synchronization).
};
public:
void play()
{
// start the play loop; the lambda is not guaranteed to have started
// after the call returns (depends on the thread scheduling of the underlying OS)
// I use std::async since that has far superior synchronization with the calling thread
// the returned future can be used to pass both values & exceptions back to it.
m_play_future = std::async(std::launch::async, [this]
{
// give a signal the asynchronous function has really started
set_state(play_state_t::playing);
std::cout << "play started\n";
// as long as state is playing keep doing the work
while (get_state() == play_state_t::playing)
{
// loop to show we can break fast out of it when stop is called
for (std::size_t i = 0; (i < 100l) && (get_state() == play_state_t::playing); ++i)
{
std::cout << ".";
std::this_thread::sleep_for(std::chrono::milliseconds(200));
}
}
set_state(play_state_t::stopped);
std::cout << "play stopped.\n";
});
// to avoid race conditions, really wait for the
// thread handling the async call to have started playing
wait_for_state(play_state_t::playing);
}
void stop()
{
std::unique_lock<std::mutex> lock{ m_mtx }; // only wait on condition variable in lock
if (m_state == play_state_t::playing)
{
std::cout << "\nrequest stop.\n";
m_state = play_state_t::stopping;
m_cv.wait(lock, [&] { return m_state == play_state_t::stopped; });
}
};
~Test()
{
stop();
}
private:
void set_state(const play_state_t state)
{
std::unique_lock<std::mutex> lock{ m_mtx }; // only wait on condition variable in lock
m_state = state;
m_cv.notify_all(); // let other threads that are waiting on the condition variable wake up to check the new state
}
play_state_t get_state() const
{
std::unique_lock<std::mutex> lock{ m_mtx }; // only wait on condition variable in lock
return m_state;
}
void wait_for_state(const play_state_t state)
{
std::unique_lock<std::mutex> lock{ m_mtx };
m_cv.wait(lock, [&] { return m_state == state; });
}
// for more info on condition variables
// see : https://www.modernescpp.com/index.php/c-core-guidelines-be-aware-of-the-traps-of-condition-variables
mutable std::mutex m_mtx;
std::condition_variable m_cv; // a condition variable is not really a variable more a signal to threads to wakeup
play_state_t m_state{ play_state_t::idle };
std::future<void> m_play_future;
};
int main()
{
Test test;
test.play();
std::this_thread::sleep_for(std::chrono::seconds(1));
test.stop();
return 0;
}

Threading queue in c++

Currently working on a project, I'm struggling with threading and a queue at the moment; the issue is that all threads take the same item from the queue.
Reproducible example:
#include <iostream>
#include <queue>
#include <thread>
using namespace std;
void Test(queue<string> queue){
while (!queue.empty()) {
string proxy = queue.front();
cout << proxy << "\n";
queue.pop();
}
}
int main()
{
queue<string> queue;
queue.push("101.132.186.39:9090");
queue.push("95.85.24.83:8118");
queue.push("185.211.193.162:8080");
queue.push("87.106.37.89:8888");
queue.push("159.203.61.169:8080");
std::vector<std::thread> ThreadVector;
for (int i = 0; i <= 10; i++){
ThreadVector.emplace_back([&]() {Test(queue); });
}
for (auto& t : ThreadVector){
t.join();
}
ThreadVector.clear();
return 0;
}
You are giving each thread its own copy of the queue. I imagine that what you want is for all the threads to work on the same queue, and for that you will need some synchronization mechanism when multiple threads work on the shared queue, as std::queue is not thread safe.
edit: minor note: in your code you are spawning 11 threads not 10.
edit 2: OK, try this one to begin with:
std::mutex lock_work;
std::mutex lock_io;
void Test(queue<string>& queue){
for (;;) {
string proxy;
{
std::lock_guard<std::mutex> lock(lock_work);
if (queue.empty())
return; // check and pop under the same lock to avoid a race between threads
proxy = queue.front();
queue.pop();
}
{
std::lock_guard<std::mutex> lock(lock_io);
cout << proxy << "\n";
}
}
}
Look at this snippet:
void Test(std::queue<std::string> queue) { /* ... */ }
Here you pass a copy of the queue object to the thread.
This copy is local to each thread and gets destroyed when the thread exits, so in the end your program has no effect on the actual queue object that resides in the main() function.
To fix this, you need to either make the parameter take a reference or a pointer:
void Test(std::queue<std::string>& queue) { /* ... */ }
This makes the parameter directly refer to the queue object present inside main() instead of creating a copy.
Now, the above code is still not correct: queue is prone to data races, and neither std::queue nor std::cout is thread-safe, so either can be accessed by another thread while one thread is still in the middle of using it. To prevent this, use a std::mutex:
// ...
#include <mutex>
// ...
// The mutex protects the 'queue' object from being subjected to data-race amongst different threads
// Additionally 'io_mut' is used to protect the streaming operations done with 'std::cout'
std::mutex mut, io_mut;
void Test(std::queue<std::string>& queue) {
std::queue<std::string> tmp;
{
// Swap the actual object with a local temporary object while being protected by the mutex
std::lock_guard<std::mutex> lock(mut);
std::swap(tmp, queue);
}
while (!tmp.empty()) {
std::string proxy = tmp.front();
{
// Call to 'std::cout' needs to be synchronized
std::lock_guard<std::mutex> lock(io_mut);
std::cout << proxy << "\n";
}
tmp.pop();
}
}
This synchronizes each thread call and prevents access from any other threads while queue is still being accessed by a thread.
Edit:
Alternatively, it'd be much faster in my opinion to make each thread wait until one of them receives a notification of your push to std::queue. You can do this through the use of std::condition_variable:
// ...
#include <mutex>
#include <condition_variable>
// ...
std::mutex mut;
std::condition_variable cond;
bool done = false; // set by the producer once no more items will be pushed
void Test(std::queue<std::string>& queue) {
    std::unique_lock<std::mutex> lock(mut);
    for (;;) {
        // Wait until 'queue' has an item or the producer is finished
        cond.wait(lock, [&queue] { return done || !queue.empty(); });
        if (queue.empty())
            return; // done and nothing left to process
        std::string proxy = std::move(queue.front());
        queue.pop();
        std::cout << proxy << "\n";
    }
}
// ...
int main() {
    std::queue<std::string> queue;
    std::vector<std::thread> ThreadVector;
    for (int i = 0; i < 10; i++)
        ThreadVector.emplace_back([&]() { Test(queue); });
    // Push under the same mutex the consumers use, then notify one waiter per item
    for (const char* proxy : {"101.132.186.39:9090", "95.85.24.83:8118",
                              "185.211.193.162:8080", "87.106.37.89:8888",
                              "159.203.61.169:8080"}) {
        {
            std::lock_guard<std::mutex> lock(mut);
            queue.emplace(proxy);
        }
        cond.notify_one();
    }
    // Tell the remaining threads that no more work is coming so they can exit
    {
        std::lock_guard<std::mutex> lock(mut);
        done = true;
    }
    cond.notify_all();
    for (auto& t : ThreadVector)
        t.join();
    ThreadVector.clear();
}

c++ how can I write single-process named_mutex?

I need a class which will allow me to lock/unlock specific names (or simply indexes). I don't need it to work across processes, so I can run multiple instances of my application, and I want to avoid system-specific APIs, just std or boost. (For simplicity's sake, we can say the maximum number of names/indexes used at the same time is 100.)
Unfortunately I have no usage example for you; I am just interested in whether it is possible to make.
I tried to find something like that, but all I found was boost::interprocess::named_mutex and some WinAPI methods, like CreateMutexW.
I also tried to write my own code (below), but it is definitely not perfect and has at least one potential bug.
So, does anyone have any suggestions, code ideas, or already existing classes?
Thanks in advance
class IndexMutex
{
public:
void Lock(uint32_t id);
void Unlock(uint32_t id);
private:
struct IndexLock
{
static constexpr uint32_t unlocked = ~0u;
void Lock(uint32_t id) {
index_ = id;
mutex_.lock();
}
void Unlock() {
mutex_.unlock();
index_ = unlocked;
}
bool IsLocked() const {
return index_ != unlocked;
}
std::atomic<uint32_t> index_ = unlocked;
std::mutex mutex_{};
};
std::array<IndexLock, 100> mutexes_{};
std::mutex masterMutex_{};
};
void IndexMutex::Lock(uint32_t id)
{
if (id == IndexLock::unlocked) {
return;
}
const std::lock_guard<std::mutex> __guard{ masterMutex_ };
uint32_t possibleId = IndexLock::unlocked;
for (uint32_t i = 0; i < mutexes_.size(); ++i) {
if (mutexes_[i].index_ == id) {
masterMutex_.unlock();
// POTENTIAL BUG: TIME GAP
mutexes_[i].Lock(id);
return;
}
// Searching for unlocked mutex in the same time.
if (possibleId == IndexLock::unlocked && !mutexes_[i].IsLocked()) {
possibleId = i;
}
}
if (possibleId == IndexLock::unlocked) {
throw std::runtime_error{ "No locks were found." };
}
// We are sure here, that mutex can't be locked
// because we were protected by the muster mutex all that time.
mutexes_[possibleId].Lock(id);
}
void IndexMutex::Unlock(uint32_t id)
{
if (id == IndexLock::unlocked) {
return;
}
const std::lock_guard<std::mutex> __guard{ masterMutex_ };
for (auto& lock : mutexes_) {
if (lock.index_ == id) {
lock.Unlock();
return;
}
}
throw std::runtime_error{ "No mutexes there found by specified index." };
}
You want a reference counted mutex map, protected by a master mutex. An implementation in terms of
std::map<int, std::pair<int, std::mutex>>
would do the job.
The lock operation works like this (untested pseudocode):
master.lock()
std::pair<int, std::mutex>& m = mymap[index]; //inserts a new one if needed
m.first++;
master.unlock();
m.second.lock();
The unlock operation:
master.lock();
std::pair<int, std::mutex>& m = mymap[index];
m.second.unlock();
m.first--;
if (m.first==0) mymap.erase(index);
master.unlock();
No deadlocks! It is possible to first unlock the master and then lock the found mutex. Even if another thread intervenes and unlocks the mutex, the reference count won't drop to zero and the mutex will not be removed.
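For reference, an untested sketch of that idea as a class (member names are invented; the error handling from the question is omitted):
#include <cstdint>
#include <map>
#include <mutex>
#include <utility>

class IndexMutex {
public:
    void Lock(uint32_t id) {
        std::unique_lock<std::mutex> master(masterMutex_);
        auto& entry = map_[id];   // inserts {0, unlocked mutex} if absent
        ++entry.first;            // count how many threads want this id
        master.unlock();          // references into a std::map stay valid, and the
                                  // nonzero count keeps this entry from being erased
        entry.second.lock();
    }
    void Unlock(uint32_t id) {
        std::lock_guard<std::mutex> master(masterMutex_);
        auto it = map_.find(id);
        if (it == map_.end())
            return;
        it->second.second.unlock();
        if (--it->second.first == 0)
            map_.erase(it);       // last interested thread: drop the entry
    }
private:
    std::mutex masterMutex_;
    // id -> (reference count, per-id mutex)
    std::map<uint32_t, std::pair<int, std::mutex>> map_;
};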

How to say to std::thread to stop?

I have two questions.
1) I want to launch some function with an infinite loop that works like a server, checking for messages, in a separate thread. However, I want to be able to close it from the parent thread whenever I want. I'm confused about how to use std::future or std::condition_variable in this case. Or is it better to create some global variable and change it to true/false from the parent thread?
2) I'd like to have something like this. Why this one example crashes during the run time?
#include <iostream>
#include <chrono>
#include <thread>
#include <future>
std::mutex mu;
bool stopServer = false;
bool serverFunction()
{
while (true)
{
// checking for messages...
// processing messages
std::this_thread::sleep_for(std::chrono::seconds(1));
mu.lock();
if (stopServer)
break;
mu.unlock();
}
std::cout << "Exiting func..." << std::endl;
return true;
}
int main()
{
std::thread serverThread(serverFunction);
// some stuff
system("pause");
mu.lock();
stopServer = true;
mu.unlock();
serverThread.join();
}
Why this one example crashes during the run time?
When you leave the inner loop of your thread, you leave the mutex locked, so the parent thread may be blocked forever if you use that mutex again.
You should use std::unique_lock or something similar to avoid problems like that.
You leave your mutex locked. Don't lock mutexes manually in 999/1000 cases.
In this case, you can use std::unique_lock<std::mutex> to create a RAII lock-holder that will avoid this problem. Simply create it in a scope, and have the lock area end at the end of the scope.
{
std::unique_lock<std::mutex> lock(mu);
stopServer = true;
}
in main and
{
std::unique_lock<std::mutex> lock(mu);
if (stopServer)
break;
}
in serverFunction.
Now in this case your mutex is pointless. Remove it. Replace bool stopServer with std::atomic<bool> stopServer, and remove all references to mutex and mu from your code.
An atomic variable can safely be read/written to from different threads.
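For example, a minimal sketch of the question's code with that change applied (same names as the question; it still busy-waits, which the next paragraph addresses):
#include <atomic>
#include <chrono>
#include <cstdlib>
#include <iostream>
#include <thread>

std::atomic<bool> stopServer{false};

bool serverFunction()
{
    while (!stopServer)                // plain atomic load, no mutex required
    {
        // checking for messages...
        // processing messages
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
    std::cout << "Exiting func..." << std::endl;
    return true;
}

int main()
{
    std::thread serverThread(serverFunction);
    // some stuff
    std::system("pause");
    stopServer = true;                 // atomic store, seen by the server thread
    serverThread.join();
}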
However, your code is still busy-waiting. The right way to handle a server processing messages is a condition variable guarding the message queue. You then stop it by front-queuing a stop server message (or a flag) in the message queue.
This results in a server thread that doesn't wake up and pointlessly spin nearly as often. Instead, it blocks on the condition variable (with some spurious wakeups, but rare) and only really wakes up when there are new messages or it is told to shut down.
template<class T>
struct cross_thread_queue {
void push( T t ) {
{
auto l = lock();
data.push_back(std::move(t));
}
cv.notify_one();
}
boost::optional<T> pop() {
auto l = lock();
cv.wait( l, [&]{ return halt || !data.empty(); } );
if (halt) return {};
T r = data.front();
data.pop_front();
return std::move(r); // returning to optional<T>, so we'll explicitly `move` here.
}
void terminate() {
{
auto l = lock();
data.clear();
halt = true;
}
cv.notify_all();
}
private:
std::mutex m;
std::unique_lock<std::mutex> lock() {
return std::unique_lock<std::mutex>(m);
}
bool halt = false;
std::deque<T> data;
std::condition_variable cv;
};
We use boost::optional for the return type of pop -- if the queue is halted, pop returns an empty optional. Otherwise, it blocks until there is data.
You can replace this with anything optional-like, even a std::pair<bool, T> where the first element says if there is anything to return, or a std::unique_ptr<T>, or a std::experimental::optional, or a myriad of other choices.
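For instance, a pop() returning std::pair<bool, T> instead of boost::optional could look roughly like this (assuming T is default-constructible):
// returns {false, T{}} once the queue is halted, {true, item} otherwise
std::pair<bool, T> pop() {
    auto l = lock();
    cv.wait(l, [&]{ return halt || !data.empty(); });
    if (halt) return {false, T{}};
    T r = std::move(data.front());
    data.pop_front();
    return {true, std::move(r)};
}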
cross_thread_queue<int> queue;
bool serverFunction()
{
while (auto message = queue.pop()) {
// processing *message
std::cout << "Processing " << *message << std::endl;
}
std::cout << "Exiting func..." << std::endl;
return true;
}
int main()
{
std::thread serverThread(serverFunction);
// some stuff
queue.push(42);
system("pause");
queue.terminate();
serverThread.join();
}

Using boost condition variables

I am designing an asynchronous logger class as follows, but I am not sure whether I am using the boost condition variable in the right way. Can anyone comment on this? The processLogEntry method is the thread function, and I am using boost here.
void LogWriter::stopThread()
{
mStop = true;
mCond.notify_one();
mThread->join();
}
void LogWriter::processLogEntry()
{
while(!mStop)
{
boost::mutex::scoped_lock lock(mMutex);
mCond.wait(lock);
while(!q.empty())
{
// process begins
}
}
}
void LogWriter::addLogEntry()
{
boost::mutex::scoped_lock lock(mMutex);
// add it in the queue
mCond.notify_one();
}
As it has been pointed out, you must either make mStop atomic or guard all its accesses with the mutex. Forget about volatile, it's not relevant to your purposes.
Furthermore, when waiting on a condition variable, a call to wait may return even if no notification function was called (a so-called spurious wake-up). As such, calls to wait need to be guarded by a predicate that re-checks the condition.
void LogWriter::stopThread()
{
{
boost::mutex::scoped_lock lock(mMutex);
mStop = true;
mCond.notify_one();
}
mThread->join();
}
void LogWriter::processLogEntry()
{
for(;;) {
boost::mutex::scoped_lock lock(mMutex);
// We wait as long as we're not told to stop and
// we don't have items to process
while(!mStop && q.empty()) mCond.wait(lock);
// Invariant: if we get here then
// mStop || !q.empty() holds
while(!q.empty())
{
// process begins
}
if(mStop) return;
}
}