I am designing an asynchronous logger class as follows. However, I am not sure whether I am using the Boost condition variable in the right way. Can anyone comment on this? The processLogEntry method is the thread function, and I am using Boost.
void LogWriter::stopThread()
{
mStop = true;
mCond.notify_one();
mThread->join();
}
void LogWriter::processLogEntry()
{
while(!mStop)
{
boost::mutex::scoped_lock lock(mMutex);
mCond.wait(lock);
while(!q.empty())
{
// process begins
}
}
}
void LogWriter::addLogEntry()
{
boost::mutex::scoped_lock lock(mMutex);
// add it in the queue
mCond.notify_one();
}
As has been pointed out, you must either make mStop atomic or guard all of its accesses with the mutex. Forget about volatile; it is not relevant here.
Furthermore, when waiting on a condition variable, a call to wait may return even though no notification function was called (a so-called spurious wake-up). As such, calls to wait need to be guarded by a predicate loop.
void LogWriter::stopThread()
{
{
boost::mutex::scoped_lock lock(mMutex);
mStop = true;
mCond.notify_one();
}
mThread->join();
}
void LogWriter::processLogEntry()
{
for(;;) {
boost::mutex::scoped_lock lock(mMutex);
// We wait as long as we're not told to stop and
// we don't have items to process
while(!mStop && q.empty()) mCond.wait(lock);
// Invariant: if we get here then
// mStop || !q.empty() holds
while(!q.empty())
{
// process begins
}
if(mStop) return;
}
}
Related
How can I check a bool variable in a class in a thread-safe way?
For example in my code:
// test.h
class Test {
void threadFunc_run();
void change(bool _set) { m_flag = _set; }
...
bool m_flag;
};
// test.cpp
void Test::threadFunc_run()
{
// called "Playing"
while(m_flag == true) {
for(int i = 0; i < 99999999 && m_flag; i++) {
// do something .. 1
}
for(int i = 0; i < 111111111 && m_flag; i++) {
// do something .. 2
}
}
}
I want to stop "Playing" as soon as the change(..) function is executed from external code.
The change should also take effect while the for loops are running.
From what I have found, there are variable types that notice changes immediately, such as atomic or volatile.
If it does not have to be immediate, is there a better way using a plain bool?
Actually, synchronizing threads safely requires more than a bool.
You will need a state, a mutex and a condition variable like this.
The approach also allows for quick reaction to stop from within the loop.
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <future>
#include <mutex>
class Test
{
private:
// having just a bool to check the state of your thread is NOT enough.
// your thread will have some intermediate states as well
enum play_state_t
{
idle, // initial state, not started yet (not scheduled by the OS thread scheduler yet)
playing, // running and doing work
stopping, // request for stop is issued
stopped // thread has stopped (could also be checked by std::future synchronization).
};
public:
void play()
{
// start the play loop, the lambda is not guaranteed to have started
// after the call returns (depends on the thread scheduling of the underlying OS).
// I use std::async since that has far superior synchronization with the calling thread;
// the returned future can be used to pass both values & exceptions back to it.
m_play_future = std::async(std::launch::async, [this]
{
// give a signal the asynchronous function has really started
set_state(play_state_t::playing);
std::cout << "play started\n";
// as long as state is playing keep doing the work
while (get_state() == play_state_t::playing)
{
// loop to show we can break fast out of it when stop is called
for (std::size_t i = 0; (i < 100) && (get_state() == play_state_t::playing); ++i)
{
std::cout << ".";
std::this_thread::sleep_for(std::chrono::milliseconds(200));
}
}
set_state(play_state_t::stopped);
std::cout << "play stopped.\n";
});
// avoid race conditions: really wait for the
// thread handling the async call to have started playing
wait_for_state(play_state_t::playing);
}
void stop()
{
std::unique_lock<std::mutex> lock{ m_mtx }; // only wait on condition variable in lock
if (m_state == play_state_t::playing)
{
std::cout << "\nrequest stop.\n";
m_state = play_state_t::stopping;
m_cv.wait(lock, [&] { return m_state == play_state_t::stopped; });
}
};
~Test()
{
stop();
}
private:
void set_state(const play_state_t state)
{
std::unique_lock<std::mutex> lock{ m_mtx }; // lock while modifying the shared state
m_state = state;
m_cv.notify_all(); // let other threads that are waiting on the condition variable wake up to check the new state
}
play_state_t get_state() const
{
std::unique_lock<std::mutex> lock{ m_mtx }; // lock while reading the shared state
return m_state;
}
void wait_for_state(const play_state_t state)
{
std::unique_lock<std::mutex> lock{ m_mtx };
m_cv.wait(lock, [&] { return m_state == state; });
}
// for more info on condition variables
// see : https://www.modernescpp.com/index.php/c-core-guidelines-be-aware-of-the-traps-of-condition-variables
mutable std::mutex m_mtx;
std::condition_variable m_cv; // a condition variable is not really a variable, more a signal for threads to wake up
play_state_t m_state{ play_state_t::idle };
std::future<void> m_play_future;
};
int main()
{
Test test;
test.play();
std::this_thread::sleep_for(std::chrono::seconds(1));
test.stop();
return 0;
}
I need a class that will allow me to lock/unlock specific names (or simply indexes). It does not need to work across processes, so that I can run multiple instances of my application independently. I also want to avoid system-specific APIs and use just std or Boost. (For simplicity's sake, let's say the maximum number of names/indexes in use at the same time is 100.)
Unfortunately, I have no usage example for you; I am simply interested in whether this is possible to build.
I tried to find something like that, but all I found was boost::interprocess::named_mutex and some WinAPI functions, like CreateMutexW.
I also tried to write my own code (below), but it is definitely not perfect and has at least one potential bug.
So, does anyone have any suggestions, code ideas, or already existing classes?
Thanks in advance
class IndexMutex
{
public:
void Lock(uint32_t id);
void Unlock(uint32_t id);
private:
struct IndexLock
{
static constexpr uint32_t unlocked = ~0u;
void Lock(uint32_t id) {
index_ = id;
mutex_.lock();
}
void Unlock() {
mutex_.unlock();
index_ = unlocked;
}
bool IsLocked() const {
return index_ != unlocked;
}
std::atomic<uint32_t> index_ = unlocked;
std::mutex mutex_{};
};
std::array<IndexLock, 100> mutexes_{};
std::mutex masterMutex_{};
};
void IndexMutex::Lock(uint32_t id)
{
if (id == IndexLock::unlocked) {
return;
}
const std::lock_guard<std::mutex> __guard{ masterMutex_ };
uint32_t possibleId = IndexLock::unlocked;
for (uint32_t i = 0; i < mutexes_.size(); ++i) {
if (mutexes_[i].index_ == id) {
masterMutex_.unlock();
// POTENTIAL BUG: TIME GAP
mutexes_[i].Lock(id);
return;
}
// Searching for unlocked mutex in the same time.
if (possibleId == IndexLock::unlocked && !mutexes_[i].IsLocked()) {
possibleId = i;
}
}
if (possibleId == IndexLock::unlocked) {
throw std::runtime_error{ "No locks were found." };
}
// We are sure here that the mutex can't be locked,
// because we were protected by the master mutex all that time.
mutexes_[possibleId].Lock(id);
}
void IndexMutex::Unlock(uint32_t id)
{
if (id == IndexLock::unlocked) {
return;
}
const std::lock_guard<std::mutex> __guard{ masterMutex_ };
for (auto& lock : mutexes_) {
if (lock.index_ == id) {
lock.Unlock();
return;
}
}
throw std::runtime_error{ "No mutexes there found by specified index." };
}
You want a reference counted mutex map, protected by a master mutex. An implementation in terms of
std::map<int, std::pair<int, std::mutex>>
would do the job.
The lock operation works like this (untested pseudocode):
master.lock()
std::pair<int, std::mutex>& m = mymap[index]; //inserts a new one if needed
m.first++;
master.unlock();
m.second.lock();
The unlock operation:
master.lock();
std::pair<int, std::mutex>& m = mymap[index];
m.second.unlock();
m.first--;
if (m.first==0) mymap.erase(index);
master.unlock();
No deadlocks: it is safe to unlock the master first and only then lock the found mutex. Even if another thread intervenes and unlocks that mutex in the meantime, our increment keeps the reference count above zero, so the mutex will not be removed from the map.
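The pseudocode above maps to real code fairly directly. Here is a compilable sketch under that design; the class and member names (NamedLock, master_, map_) are illustrative, not from the answer:
#include <map>
#include <mutex>

class NamedLock {
public:
    void lock(int index) {
        std::mutex* m;
        {
            std::lock_guard<std::mutex> g(master_);
            auto& entry = map_[index]; // inserts {0, unlocked mutex} if missing
            ++entry.first;             // our reference keeps the entry alive
            m = &entry.second;         // map nodes have stable addresses
        }
        m->lock();                     // may block; master_ is already released
    }

    void unlock(int index) {
        std::lock_guard<std::mutex> g(master_);
        auto it = map_.find(index);
        if (it == map_.end()) return;  // unlock without a matching lock
        it->second.second.unlock();
        if (--it->second.first == 0)
            map_.erase(it);            // last user removes the entry
    }

private:
    std::mutex master_;
    std::map<int, std::pair<int, std::mutex>> map_;
};
Because std::map nodes never move, the pointer taken while holding master_ stays valid for as long as our reference count is held.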
I have a question. I add objects to a map, and in a thread I call the run() procedure for every element in the map.
Do I understand correctly that there is a synchronization problem in the process procedure in this code? Can I add a mutex, given that this procedure is called from the thread?
class Network {
public:
Network() {
std::cout << "Network constructor" << std::endl;
}
void NetworkInit(const std::string& par1) {
this->par1 = par1;
}
~Network() {
std::cout << "Network destructor" << std::endl;
my_map.clear();
}
void addLogic(uint32_t Id, std::shared_ptr<Logic> lgc) {
std::lock_guard<std::mutex> lk(mutex);
my_map.insert(std::pair<uint32_t, std::shared_ptr<Logic>>(Id, lgc));
cv.notify_one();
}
void removeLogic(uint32_t Id) {
std::unique_lock<std::mutex> lk(mutex);
cv.wait(lk, [this]{return !my_map.empty(); });
auto p = this->my_map.find(Id);
if (p != end(this->my_map)) {
this->my_map.erase(Id);
}
}
lk.unlock();
}
/**
* Start thread
*/
void StartThread(int id = 1) {
running = true;
first = std::thread([this, id] { process(id); });
first.detach();
}
/**
* Stop thread
*/
void StopThread() {
running = false;
}
private:
std::thread first;
std::atomic<bool> running = ATOMIC_VAR_INIT(true);
void process(int id) {
while (running) {
for (const auto& it:my_map) {
it.second->run();
}
std::this_thread::sleep_for(10ms);
}
}
private:
std::mutex mutex;
std::condition_variable cv;
using MyMapType = std::map<uint32_t, std::shared_ptr<Logic> >;
MyMapType my_map;
std::string par1;
};
The first idea is to protect the map as a whole with a mutex that is released while run() executes. This works for addLogic, because inserting into a map invalidates no iterators, but not for removeLogic, which might erase the element behind the very iterator that process is using.
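For illustration, here is a sketch of that first idea applied to the process() loop from the question (my_map, mutex, and running as in the question); the comments mark where a concurrent erase would break it:
// The map mutex is dropped while run() executes: safe against concurrent addLogic
// (insert does not invalidate iterators), but NOT against removeLogic erasing the
// element that 'it' currently refers to.
void process(int id) {
    while (running) {
        std::unique_lock<std::mutex> lk(mutex);
        for (auto it = my_map.begin(); it != my_map.end(); ++it) {
            auto task = it->second;  // copy the shared_ptr so the object outlives run()
            lk.unlock();             // release the lock for the (possibly long) run()
            task->run();
            lk.lock();               // 'it' may be dangling here if removeLogic erased it
        }
        lk.unlock();
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}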
More efficient, lock-free approaches like hazard pointers may be applicable here, but the basic idea is to use a deferred deletion list. Assuming that the intent of concurrent deletion is cancellation of the task (not merely cleanup after all work is completed), it's sensible to have the consumer thread check immediately before execution. Using a set (to correspond to your map) will let the deletion list be dynamic and those checks be efficient.
So have another mutex protect the deletion list and take it at the beginning of each iteration in process:
void addLogic(uint32_t Id, std::shared_ptr<Logic> lgc) {
std::lock_guard<std::mutex> lk(mutex);
my_map.insert(std::pair<uint32_t, std::shared_ptr<Logic>>(Id, lgc));
}
void removeLogic(uint32_t Id) {
std::lock_guard<std::mutex> kg(kill_mutex);
kill.insert(Id);
}
private:
std::set<uint32_t> kill;
std::mutex mutex,kill_mutex;
void process(int id) {
for(;running;std::this_thread::sleep_for(10ms)) {
std::unique_lock<std::mutex> lg(mutex);
for(auto i=my_map.begin(),e=my_map.end();i!=e;) {
if(std::lock_guard<std::mutex>(kill_mutex), kill.erase(i->first)) { // the temporary lock_guard holds kill_mutex for this whole expression
i=my_map.erase(i);
continue; // test i!=e again
}
lg.unlock();
i->second->run();
lg.lock();
++i;
}
}
}
This code omits your condition_variable usage: it’s not necessary to wait before enqueuing something for deletion.
The solution with low level concurrency primitives usually does not scale and is not easy to maintain.
A better alternative would be to have a thread-safe "control" queue of map update or worker termination instructions.
Something like this:
enum Op {
ADD,
DROP,
STOP
};
struct Request {
Op op;
uint32_t id;
std::function<void()> action;
};
...
// the map which required protection in your code
std::map<uint32_t, std::function<void()>> subs;
// requests queue and its mutex (not very optimal, just to demonstrate the idea)
std::vector<Request> requests;
std::mutex mutex;
// the worker thread
std::thread worker([&](){
// the temporary buffer where requests are drained to from the queue before processing
decltype(requests) buffer;
// the main loop
while (true) {
// requests collection (requires synchronization)
{
std::lock_guard<decltype(mutex)> const guard {mutex};
buffer.swap(requests);
}
// requests processing
for(auto&& request: buffer) {
switch (request.op) {
case ADD:
subs[request.id] = std::move(request.action);
break;
case DROP:
subs.erase(request.id);
break;
case STOP: goto endloop;
}
}
buffer.clear(); // drop the processed requests so they are not swapped back on the next iteration
// map iteration
for (auto&& entry: subs) {
entry.second();
}
}
endloop:;
});
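The snippet above does not show the producer side. As a sketch (assuming the requests, mutex, and worker objects defined above), requests could be enqueued like this; the enqueue helper is hypothetical:
// Producer-side helper: anything touching 'requests' must hold 'mutex'.
auto enqueue = [&](Request r) {
    std::lock_guard<std::mutex> guard{mutex};
    requests.push_back(std::move(r));
};

enqueue({ADD, 1, []{ std::cout << "running logic 1\n"; }}); // register a task
enqueue({DROP, 1, nullptr});                                // remove it again
enqueue({STOP, 0, nullptr});                                // ask the worker to exit
worker.join();                                              // wait for the worker thread to finish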
I have two questions.
1) I want to launch some function with an infinite loop that works like a server, checking for messages, in a separate thread, and I want to be able to close it from the parent thread whenever I choose. I'm confused about how to use std::future or std::condition_variable in this case. Or is it better to create some global variable and change it to true/false from the parent thread?
2) I'd like to have something like the code below. Why does this example crash at run time?
#include <iostream>
#include <chrono>
#include <thread>
#include <future>
std::mutex mu;
bool stopServer = false;
bool serverFunction()
{
while (true)
{
// checking for messages...
// processing messages
std::this_thread::sleep_for(std::chrono::seconds(1));
mu.lock();
if (stopServer)
break;
mu.unlock();
}
std::cout << "Exiting func..." << std::endl;
return true;
}
int main()
{
std::thread serverThread(serverFunction);
// some stuff
system("pause");
mu.lock();
stopServer = true;
mu.unlock();
serverThread.join();
}
Why does this example crash at run time?
When you break out of the loop in your thread, you leave the mutex locked, so the parent thread may block forever when it tries to use that mutex again.
You should use std::unique_lock or something similar to avoid problems like that.
You leave your mutex locked. Don't lock mutexes manually in 999/1000 cases.
In this case, you can use std::unique_lock<std::mutex> to create a RAII lock-holder that will avoid this problem. Simply create it in a scope, and have the lock area end at the end of the scope.
{
std::unique_lock<std::mutex> lock(mu);
stopServer = true;
}
in main and
{
std::unique_lock<std::mutex> lock(mu);
if (stopServer)
break;
}
in serverFunction.
Now, in this case, your mutex is pointless. Remove it. Replace bool stopServer with std::atomic<bool> stopServer, and remove all references to the mutex mu from your code.
An atomic variable can safely be read/written to from different threads.
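As a concrete sketch of that suggestion (the questioner's program with the mutex removed and stopServer made atomic; it still busy-waits, as noted next):
#include <atomic>
#include <chrono>
#include <cstdlib>
#include <iostream>
#include <thread>

std::atomic<bool> stopServer{ false };

bool serverFunction()
{
    while (!stopServer)
    {
        // checking for messages...
        // processing messages
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
    std::cout << "Exiting func..." << std::endl;
    return true;
}

int main()
{
    std::thread serverThread(serverFunction);
    // some stuff
    std::system("pause"); // as in the original example (Windows-specific)
    stopServer = true;
    serverThread.join();
}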
However, your code is still busy-waiting. The right way to handle a server processing messages is a condition variable guarding the message queue. You then stop it by front-queuing a stop server message (or a flag) in the message queue.
This results in a server thread that doesn't wake up and pointlessly spin nearly as often. Instead, it blocks on the condition variable (with some spurious wakeups, but rare) and only really wakes up when there are new messages or it is told to shut down.
template<class T>
struct cross_thread_queue {
void push( T t ) {
{
auto l = lock();
data.push_back(std::move(t));
}
cv.notify_one();
}
boost::optional<T> pop() {
auto l = lock();
cv.wait( l, [&]{ return halt || !data.empty(); } );
if (halt) return {};
T r = data.front();
data.pop_front();
return std::move(r); // returning to optional<T>, so we'll explicitly `move` here.
}
void terminate() {
{
auto l = lock();
data.clear();
halt = true;
}
cv.notify_all();
}
private:
std::mutex m;
std::unique_lock<std::mutex> lock() {
return std::unique_lock<std::mutex>(m);
}
bool halt = false;
std::deque<T> data;
std::condition_variable cv;
};
We use boost::optional for the return type of pop -- if the queue is halted, pop returns an empty optional. Otherwise, it blocks until there is data.
You can replace this with anything optional-like, even a std::pair<bool, T> where the first element says if there is anything to return, or a std::unique_ptr<T>, or a std::experimental::optional, or a myriad of other choices.
cross_thread_queue<int> queue;
bool serverFunction()
{
while (auto message = queue.pop()) {
// processing *message
std::cout << "Processing " << *message << std::endl;
}
std::cout << "Exiting func..." << std::endl;
return true;
}
int main()
{
std::thread serverThread(serverFunction);
// some stuff
queue.push(42);
system("pause");
queue.terminate();
serverThread.join();
}
I have x Boost threads that work at the same time. One producer thread fills a synchronised queue with calculation tasks. The consumer threads pop tasks off and calculate them.
Image Source: https://www.quantnet.com/threads/c-multithreading-in-boost.10028/
The user may quit the program during this process, so I need to shut down my threads properly. My current approach does not seem to work, since exceptions are thrown. The intent is that on shutdown all worker threads are killed and abandon their current task, no matter what they are doing. Could you please show me how you would kill those threads?
Thread Initialisation:
for (int i = 0; i < numberOfThreads; i++)
{
std::thread* thread = new std::thread(&MyManager::worker, this);
mThreads.push_back(thread);
}
Thread Destruction:
void MyManager::shutdown()
{
for (int i = 0; i < numberOfThreads; i++)
{
mThreads.at(i)->join();
delete mThreads.at(i);
}
mThreads.clear();
}
Worker:
void MyManager::worker()
{
while (true)
{
int current = waitingList.pop();
Object * p = objects.at(current);
p->calculateMesh(); //this task is internally locked by a mutex
try
{
boost::this_thread::interruption_point();
}
catch (const boost::thread_interrupted&)
{
// Thread interruption request received, break the loop
std::cout << "- Thread interrupted. Exiting thread." << std::endl;
break;
}
}
}
Synchronised Queue:
#include <queue>
#include <thread>
#include <mutex>
#include <condition_variable>
template <typename T>
class ThreadSafeQueue
{
public:
T pop()
{
std::unique_lock<std::mutex> mlock(mutex_);
while (queue_.empty())
{
cond_.wait(mlock);
}
auto item = queue_.front();
queue_.pop();
return item;
}
void push(const T& item)
{
std::unique_lock<std::mutex> mlock(mutex_);
queue_.push(item);
mlock.unlock();
cond_.notify_one();
}
int sizeIndicator()
{
std::unique_lock<std::mutex> mlock(mutex_);
return queue_.size();
}
private:
bool isEmpty() {
std::unique_lock<std::mutex> mlock(mutex_);
return queue_.empty();
}
std::queue<T> queue_;
std::mutex mutex_;
std::condition_variable cond_;
};
The thrown error call stack:
... std::_Mtx_lockX(_Mtx_internal_imp_t * * _Mtx) Line 68 C++
... std::_Mutex_base::lock() Line 42 C++
... std::unique_lock<std::mutex>::unique_lock<std::mutex>(std::mutex & _Mtx) Line 220 C++
... ThreadSafeQueue<int>::pop() Line 13 C++
... MyManager::worker() Line 178 C++
From my experience working with threads in both Boost and Java, trying to shut down threads externally is always messy. I've never been able to get that to work cleanly.
The best I've managed is to have a boolean flag, visible to all the consumer threads, that starts out true. When you set it to false, the threads simply return on their own. In your case, that check could easily go into the while loop you have.
On top of that, you're going to need some synchronization so that you can wait for the threads to return before you delete them; otherwise you can get some hard-to-define behavior.
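As a sketch of that idea applied to the worker from the question (the running flag and the dummy-task wake-up are assumptions, not part of the original code):
// Assumed member in MyManager: std::atomic<bool> running{ true };
// Note: a thread blocked inside waitingList.pop() will not notice the flag until an
// item arrives, so shutdown code should also push one dummy task per worker to wake it.
void MyManager::worker()
{
    while (running)
    {
        int current = waitingList.pop();  // may block until an item (or a dummy) arrives
        if (!running)
            break;                        // re-check after waking up
        Object* p = objects.at(current);
        p->calculateMesh();               // internally locked by a mutex, as before
    }
}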
An example from a past project of mine:
Thread creation
barrier = new boost::barrier(numOfThreads + 1);
threads = new detail::updater_thread*[numOfThreads];
for (unsigned int t = 0; t < numOfThreads; t++) {
//This object is just a wrapper class for the boost thread.
threads[t] = new detail::updater_thread(barrier, this);
}
Thread destruction
for (unsigned int i = 0; i < numOfThreads; i++) {
threads[i]->requestStop();//Notify all threads to stop.
}
barrier->wait();//The update request will allow the threads to get the message to shutdown.
for (unsigned int i = 0; i < numOfThreads; i++) {
threads[i]->waitForStop();//Wait for all threads to stop.
delete threads[i];//Now we are safe to clean up.
}
Some methods that may be of interest from the thread wrapper.
//Constructor
updater_thread::updater_thread(boost::barrier * barrier)
{
this->barrier = barrier;
running = true;
thread = boost::thread(&updater_thread::run, this);
}
void updater_thread::run() {
while (running) {
barrier->wait();
if (!running) break;
//Do stuff
barrier->wait();
}
}
void updater_thread::requestStop() {
running = false;
}
void updater_thread::waitForStop() {
thread.join();
}
Try moving the 'try' up (as in the sample below). If your thread is waiting for data (inside waitingList.pop()), then it may be waiting inside the condition variable's .wait(). That is an 'interruption point' and so may throw when the thread gets interrupted.
void MyManager::worker()
{
while (true)
{
try
{
int current = waitingList.pop();
Object * p = objects.at(current);
p->calculateMesh(); //this task is internally locked by a mutex
boost::this_thread::interruption_point();
}
catch (const boost::thread_interrupted&)
{
// Thread interruption request received, break the loop
std::cout << "- Thread interrupted. Exiting thread." << std::endl;
break;
}
}
}
Maybe you are catching the wrong exception class, which would mean the exception never gets caught.
I'm not too familiar with threads, but could the mix of std::thread and boost::thread be causing this?
Try catching the most general base exception and see what actually arrives.
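For instance, a diagnostic sketch (an assumption, not from the original answer): temporarily catch everything around the interruption point. Note that boost::thread_interrupted does not derive from std::exception, so a std::exception handler alone will not see it.
#include <boost/thread.hpp>
#include <iostream>

void diagnoseOneIteration()
{
    try
    {
        boost::this_thread::interruption_point(); // stands in for the loop body
    }
    catch (const boost::thread_interrupted&)
    {
        std::cout << "- thread interrupted\n";
    }
    catch (const std::exception& e)
    {
        std::cout << "- std::exception: " << e.what() << '\n';
    }
    catch (...)
    {
        std::cout << "- unknown exception\n";
    }
}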
I think this is the classic problem of reader and writer threads working on a common buffer. One of the most robust ways to solve it is to use mutexes and condition variables (signals). (I am not able to post the code here; please send me an email and I will send the code to you.)
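Since that answer could not include its code, here is a minimal sketch (with illustrative names such as Buffer, items_, done_) of the mutex-plus-condition-variable approach it alludes to, including a shutdown flag so consumers can exit cleanly:
#include <condition_variable>
#include <deque>
#include <mutex>

class Buffer {
public:
    void push(int v) {
        {
            std::lock_guard<std::mutex> lk(m_);
            items_.push_back(v);
        }
        cv_.notify_one();
    }
    // Returns false once shutdown() was called and the buffer is drained.
    bool pop(int& out) {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&] { return done_ || !items_.empty(); });
        if (items_.empty()) return false; // done_ is set and nothing is left
        out = items_.front();
        items_.pop_front();
        return true;
    }
    void shutdown() {
        {
            std::lock_guard<std::mutex> lk(m_);
            done_ = true;
        }
        cv_.notify_all(); // wake every blocked consumer
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::deque<int> items_;
    bool done_ = false;
};
A producer calls push(), consumers loop on pop() until it returns false, and whoever shuts the system down calls shutdown().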