I have the following mutex manager, which aims to lock/unlock a mutex given a topic name. I want to be able to lock/unlock mutexes depending on a specific tag (in this example a string). What I am doing is simply mapping a string to a mutex. The outside world then invokes MutexManager::lock with a tag name, and the MutexManager locks the corresponding mutex.
Is this the way to do it, or should I instead be creating a map of std::unique_lock<std::mutex>?
#include <iostream>
#include <string>
#include <unordered_map>
#include <mutex>

class MutexManager {
public:
    std::unordered_map<std::string, std::mutex> mutexes;

    std::unique_lock<std::mutex> lock_mutex(const std::string& name) {
        try {
            std::unique_lock<std::mutex> lock(mutexes.at(name));
            return lock;
        } catch (...) {
            std::cout << "Failed to acquire lock";
        }
        return {}; // return an empty (unassociated) lock if the lookup failed
    }

    void unlock_mutex(std::unique_lock<std::mutex> locked_mutex)
    {
        try {
            locked_mutex.unlock();
        } catch (...) {
            std::cout << "Failed to release lock.";
        }
    }

    void add_mutex(std::string topic) {
        mutexes[topic]; // is that really the solution?
    }
};

int main()
{
    MutexManager mutexManager;
    mutexManager.add_mutex("test");
    auto& mutexx = mutexManager.mutexes.at("test");
    return 0;
}
My concern with the above is the case where I have two threads, and thread 1 runs lock followed by thread 2:
thread 1:
mutexManager.lock("test");
thread 2:
mutexManager.lock("test");
Will thread 2 be blocked until thread 1 has released the lock? In other words, do the locks above target the same mutex, given that we use the same topic?
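As a quick sanity check, here is a minimal sketch (not part of the original post) of that scenario: both lookups of "test" return a reference to the same std::mutex stored in the map, so the second thread blocks until the first one releases it.

#include <chrono>
#include <iostream>
#include <mutex>
#include <string>
#include <thread>
#include <unordered_map>

std::unordered_map<std::string, std::mutex> mutexes;

void worker(int id)
{
    // at("test") returns a reference to the single mutex stored under "test",
    // so both threads contend on the same object.
    std::unique_lock<std::mutex> lock(mutexes.at("test"));
    std::cout << "thread " << id << " holds the lock\n";
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
}

int main()
{
    mutexes["test"]; // create the mutex before any thread looks it up
    std::thread t1(worker, 1);
    std::thread t2(worker, 2);
    t1.join();
    t2.join();
}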
Related
Is it possible to invoke a function on a specific thread, given the thread ID? I am currently on a different thread.
You need cooperation from the target thread; for instance, the target thread has to execute a loop at the top of which it waits on some sort of message box. Through that message box you give it a message that contains the function to be called and the arguments to use. Through the same mechanism, the function can produce a reply containing the result of the call.
But you can't just make a random thread that is running arbitrary code call your function. Although, never say never: there are tricks such as asynchronous POSIX signals (send a signal to a thread, which then inspects some datum telling it to call a function), but that approach is complicated by the limitations on what can safely be done from inside a signal handler.
In a debugger, you can stop all the threads, then "switch" to a particular one and evaluate expressions in its context, including function calls. That is also an approach that would be inadvisable to integrate into production code; you have no idea what state a stopped thread is in to be able to safely and reliably do anything in that thread.
One possible solution is to make the worker threads execute based on tasks (functions), i.e. you use a container to store the functions you'd like the worker thread to execute, and the worker thread's job is to execute the functions in the container.
Here's an example, hope it helps.
#include <iostream>
#include <list>
#include <functional>
#include <thread>
#include <mutex>
#include <atomic>
#include <condition_variable>

using namespace std;

void foo() {
    cout << "foo() is called" << endl;
}

template<typename T>
class TaskQueue {
public:
    void enqueue(T&& task) {
        unique_lock<mutex> l(m);
        tasks.push_back(move(task));
        cv.notify_one();
    }

    bool empty() { unique_lock<mutex> l(m); return tasks.empty(); }

    void setStop() { stop = true; unique_lock<mutex> l(m); cv.notify_one(); }

    void run() {
        T t;
        while (!stop) {
            {
                unique_lock<mutex> l(m);
                cv.wait(l, [&] { return !tasks.empty() || stop; });
                if (!tasks.empty()) {
                    t = move(tasks.front());
                    tasks.pop_front();
                }
                else
                    return;
            }
            t();
        }
    }

private:
    atomic<bool> stop{false};   // brace-initialised: std::atomic is not copyable
    mutex m;
    condition_variable cv;
    list<T> tasks;
};

int main() {
    TaskQueue<function<void(void)>> taskq;
    thread t(&TaskQueue<function<void(void)>>::run, &taskq);
    taskq.enqueue(foo);
    taskq.enqueue(foo);
    taskq.enqueue(foo);
    while (!taskq.empty()) {}   // busy-wait until the queue has been drained
    taskq.setStop();
    t.join();
}
I have a function that must not be called from more than one thread at the same time. Can you suggest some elegant assert for this?
You can use a thin RAII wrapper around std::atomic<>:
#include <atomic>

namespace {
    std::atomic<int> access_counter;

    struct access_checker {
        access_checker() { check = ++access_counter; }
        access_checker( const access_checker & ) = delete;
        ~access_checker() { --access_counter; }

        int check;
    };
}

void foobar()
{
    access_checker checker;
    // assert that checker.check == 1 and react accordingly
    ...
}
This is a simplified, single-use version to show the idea; it can be extended to cover multiple functions if necessary.
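For example, one way to extend it to several independently guarded functions is to give each function its own counter by templating the checker on a tag type. This is only a sketch of that idea; the names scoped_access_checker and foobar_tag are illustrative, not from the original answer.

#include <atomic>
#include <cassert>

// One counter per tag type, so each guarded function gets its own check.
template <typename Tag>
struct scoped_access_checker {
    scoped_access_checker() : check(++counter) {}
    scoped_access_checker(const scoped_access_checker&) = delete;
    ~scoped_access_checker() { --counter; }

    int check;
    static std::atomic<int> counter;
};

template <typename Tag>
std::atomic<int> scoped_access_checker<Tag>::counter{0};

struct foobar_tag {};

void foobar()
{
    scoped_access_checker<foobar_tag> checker;
    assert(checker.check == 1); // fires if another thread is already inside foobar()
    // ... function body ...
}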
Sounds like you need a mutex. Assuming you are using std::thread, you can look at the coding example at the following link, which specifically uses std::mutex: http://www.cplusplus.com/reference/mutex/mutex/
// mutex example
#include <iostream>       // std::cout
#include <thread>         // std::thread
#include <mutex>          // std::mutex

std::mutex mtx;           // mutex for critical section

void print_block (int n, char c) {
    // critical section (exclusive access to std::cout signaled by locking mtx):
    mtx.lock();
    for (int i=0; i<n; ++i) { std::cout << c; }
    std::cout << '\n';
    mtx.unlock();
}

int main ()
{
    std::thread th1 (print_block,50,'*');
    std::thread th2 (print_block,50,'$');

    th1.join();
    th2.join();

    return 0;
}
In the above code, print_block locks mtx, does what it needs to do, and then unlocks mtx. If print_block is called from two different threads, one thread will lock mtx first, and the other thread will block on mtx.lock() and be forced to wait until the first thread calls mtx.unlock(). This means only one thread at a time can execute the code between mtx.lock() and mtx.unlock().
This assumes that by "at the same time" you mean at the same literal time. If you only want one particular thread to be able to call the function, I would recommend looking into std::this_thread::get_id, which gets you the id of the current thread. An assert could be as simple as storing the owning thread in owning_thread_id and then calling assert(owning_thread_id == std::this_thread::get_id()).
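A minimal sketch of that thread-ownership assert, assuming the owner is whichever thread constructed the object; the Widget and owning_thread_id names are for illustration only.

#include <cassert>
#include <thread>

class Widget {
public:
    Widget() : owning_thread_id(std::this_thread::get_id()) {}

    void only_call_from_owner()
    {
        // Fires if any thread other than the one that created the Widget calls this.
        assert(owning_thread_id == std::this_thread::get_id());
        // ... do the actual work ...
    }

private:
    const std::thread::id owning_thread_id;
};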
I have a program which spawns multiple threads, each of which executes a long-running task. The main thread then waits for all worker threads to join, collects results, and exits.
If an error occurs in one of the workers, I want the remaining workers to stop gracefully, so that the main thread can exit shortly afterwards.
My question is how best to do this, when the implementation of the long-running task is provided by a library whose code I cannot modify.
Here is a simple sketch of the system, with no error handling:
void threadFunc()
{
    // Do long-running stuff
}

void mainFunc()
{
    std::vector<std::thread> threads;

    for (int i = 0; i < 3; ++i) {
        threads.push_back(std::thread(&threadFunc));
    }

    for (auto &t : threads) {
        t.join();
    }
}
If the long-running function executes a loop and I have access to the code, then
execution can be aborted simply by checking a shared "keep on running" flag at the top of each iteration.
std::mutex mutex;
bool error;

void threadFunc()
{
    try {
        for (...) {
            {
                std::unique_lock<std::mutex> lock(mutex);
                if (error) {
                    break;
                }
            }
        }
    } catch (std::exception &) {
        std::unique_lock<std::mutex> lock(mutex);
        error = true;
    }
}
Now consider the case when the long-running operation is provided by a library:
std::mutex mutex;
bool error;

class Task
{
public:
    // Blocks until completion, error, or stop() is called
    void run();
    void stop();
};

void threadFunc(Task &task)
{
    try {
        task.run();
    } catch (std::exception &) {
        std::unique_lock<std::mutex> lock(mutex);
        error = true;
    }
}
In this case, the main thread has to handle the error, and call stop() on
the still-running tasks. As such, it cannot simply wait for each worker to
join() as in the original implementation.
The approach I have used so far is to share the following structure between
the main thread and each worker:
struct SharedData
{
    std::mutex mutex;
    std::condition_variable condVar;
    bool error;
    int running;
};
When a worker completes successfully, it decrements the running count. If
an exception is caught, the worker sets the error flag. In both cases, it
then calls condVar.notify_one().
The main thread then waits on the condition variable, waking up if either
error is set or running reaches zero. On waking up, the main thread
calls stop() on all tasks if error has been set.
This approach works, but I feel there should be a cleaner solution using some
of the higher-level primitives in the standard concurrency library. Can
anyone suggest an improved implementation?
Here is the complete code for my current solution:
// main.cpp

#include <chrono>
#include <condition_variable>
#include <memory>
#include <mutex>
#include <thread>
#include <vector>

#include "utils.h"

// Class which encapsulates long-running task, and provides a mechanism for aborting it
class Task
{
public:
    Task(int tidx, bool fail)
        : tidx(tidx)
        , fail(fail)
        , m_run(true)
    {
    }

    void run()
    {
        static const int NUM_ITERATIONS = 10;
        for (int iter = 0; iter < NUM_ITERATIONS; ++iter) {
            {
                std::unique_lock<std::mutex> lock(m_mutex);
                if (!m_run) {
                    out() << "thread " << tidx << " aborting";
                    break;
                }
            }
            out() << "thread " << tidx << " iter " << iter;
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
            if (fail) {
                throw std::exception();
            }
        }
    }

    void stop()
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_run = false;
    }

    const int tidx;
    const bool fail;

private:
    std::mutex m_mutex;
    bool m_run;
};

// Data shared between all threads
struct SharedData
{
    std::mutex mutex;
    std::condition_variable condVar;
    bool error;
    int running;

    SharedData(int count)
        : error(false)
        , running(count)
    {
    }
};

void threadFunc(Task &task, SharedData &shared)
{
    try {
        out() << "thread " << task.tidx << " starting";
        task.run(); // Blocks until task completes or is aborted by main thread
        out() << "thread " << task.tidx << " ended";
    } catch (std::exception &) {
        out() << "thread " << task.tidx << " failed";
        std::unique_lock<std::mutex> lock(shared.mutex);
        shared.error = true;
    }
    {
        std::unique_lock<std::mutex> lock(shared.mutex);
        --shared.running;
    }
    shared.condVar.notify_one();
}

int main(int argc, char **argv)
{
    static const int NUM_THREADS = 3;

    std::vector<std::unique_ptr<Task>> tasks(NUM_THREADS);
    std::vector<std::thread> threads(NUM_THREADS);

    SharedData shared(NUM_THREADS);

    for (int tidx = 0; tidx < NUM_THREADS; ++tidx) {
        const bool fail = (tidx == 1);
        tasks[tidx] = std::make_unique<Task>(tidx, fail);
        threads[tidx] = std::thread(&threadFunc, std::ref(*tasks[tidx]), std::ref(shared));
    }

    {
        std::unique_lock<std::mutex> lock(shared.mutex);

        // Wake up when either all tasks have completed, or any one has failed
        shared.condVar.wait(lock, [&shared](){
            return shared.error || !shared.running;
        });

        if (shared.error) {
            out() << "error occurred - terminating remaining tasks";
            for (auto &t : tasks) {
                t->stop();
            }
        }
    }

    for (int tidx = 0; tidx < NUM_THREADS; ++tidx) {
        out() << "waiting for thread " << tidx << " to join";
        threads[tidx].join();
        out() << "thread " << tidx << " joined";
    }

    out() << "program complete";
    return 0;
}
Some utility functions are defined here:
// utils.h

#ifndef UTILS_H
#define UTILS_H

#include <iostream>
#include <mutex>
#include <thread>

#if __cplusplus <= 201103L
// Backport std::make_unique from C++14
#include <memory>
namespace std {

template<typename T, typename ...Args>
std::unique_ptr<T> make_unique(Args&& ...args)
{
    return std::unique_ptr<T>(new T(std::forward<Args>(args)...));
}

} // namespace std
#endif // __cplusplus <= 201103L

// Thread-safe wrapper around std::cout
class ThreadSafeStdOut
{
public:
    ThreadSafeStdOut()
        : m_lock(m_mutex)
    {
    }

    // Moveable so that out() can return by value (the user-declared destructor
    // suppresses the implicit move constructor).
    ThreadSafeStdOut(ThreadSafeStdOut &&) = default;

    ~ThreadSafeStdOut()
    {
        std::cout << std::endl;
    }

    template <typename T>
    ThreadSafeStdOut &operator<<(const T &obj)
    {
        std::cout << obj;
        return *this;
    }

private:
    static std::mutex m_mutex;
    std::unique_lock<std::mutex> m_lock;
};

std::mutex ThreadSafeStdOut::m_mutex;

// Convenience function for performing thread-safe output
ThreadSafeStdOut out()
{
    return ThreadSafeStdOut();
}

#endif // UTILS_H
I've been thinking about your situation for some time, and this may be of some help to you. You could try a couple of different methods to achieve your goal. There are two or three options that may be of use, or a combination of all three. I will at minimum show the first option, as I'm still learning and trying to master the concepts of template specializations as well as lambdas.
Using a Manager Class
Using Template Specialization Encapsulation
Using Lambdas.
Pseudo code of a Manager Class would look something like this:
class ThreadManager {
private:
    std::unique_ptr<MainThread> mainThread_;
    std::list<std::shared_ptr<WorkerThread>> lWorkers_;  // List to hold finished workers
    std::queue<std::shared_ptr<WorkerThread>> qWorkers_; // Queue to hold inactive and waiting threads.
    std::map<unsigned, std::shared_ptr<WorkerThread>> mThreadIds_; // Map to associate a WorkerThread with an ID value.
    std::map<unsigned, bool> mFinishedThreads_; // A map to keep track of finished and unfinished threads.

    bool threadError_; // Not needed if using exception handling

public:
    explicit ThreadManager( const MainThread& main_thread );

    void shutdownThread( const unsigned& threadId );
    void shutdownAllThreads();

    void addWorker( const WorkerThread& worker_thread );
    bool isThreadDone( const unsigned& threadId );

    void spawnMainThread() const; // Method to start main thread's work.
    void spawnWorkerThread( unsigned threadId, bool& error );

    bool getThreadError( unsigned& threadId ); // Returns true if a thread encountered an error, and passes back the ID of that thread.
};
Only for demonstration purposes did I use a bool value to determine if a thread failed, for simplicity of the structure; of course this can be substituted to your liking if you prefer to use exceptions, invalid unsigned values, etc.
Now, using a class of this sort would look something like this. Also note that a class of this type would be considered better if it were a Singleton-type object, since you wouldn't want more than one manager class while you are working with shared pointers.
SomeClass::SomeClass( ... ) {
    // This class could contain a private static smart pointer of this Manager Class
    // Initialize the smart pointer giving it new memory for the Manager Class and by passing it a pointer of the Main Thread object
    threadManager_ = new ThreadManager( main_thread ); // Wouldn't actually use raw pointers here unless you had a need to, but just shown for simplicity
}

SomeClass::addThreads( ... ) {
    for ( unsigned u = 1; u <= threadCount; u++ ) {
        threadManager_->addWorker( some_worker_thread );
    }
}

SomeClass::someFunctionThatSpawnsThreads( ... ) {
    threadManager_->spawnMainThread();

    bool error = false;
    for ( unsigned u = 1; u <= threadCount; u++ ) {
        threadManager_->spawnWorkerThread( u, error );

        if ( error ) { // This thread failed to start, shut down all threads
            threadManager_->shutdownAllThreads();
        }
    }

    // If all threads spawn successfully we can do a while loop here to listen if one fails.
    unsigned threadId;
    while ( threadManager_->getThreadError( threadId ) ) {
        // If the function passed to this while loop returns true and we end up here, it will pass the id value of the failed thread.
        // We can now go through a for loop and stop all active threads.
        for ( unsigned u = threadId + 1; u <= threadCount; u++ ) {
            threadManager_->shutdownThread( u );
        }

        // We have successfully shut down all threads
        break;
    }
}
I like the design of a manager class since I have used them in other projects, and they come in handy quite often, especially when working with a code base that contains many resources, such as a working game engine with assets like sprites, textures, audio files, maps, and game items. Using a manager class helps to keep track of and maintain all of those assets. The same concept can be applied to "managing" active, inactive, and waiting threads, so the manager knows how to handle and shut down all threads properly.
I would recommend using an exception handler if your code base and libraries support exceptions (as well as thread-safe exception handling), instead of passing bools around for errors. It is also good to have a Logger class that can write to a log file and/or a console window, giving an explicit message about which function the exception was thrown in and what caused it. A log message might look like this:
Exception Thrown: someFunctionNamedThis in ThisFile on Line# (x)
threadID 021342 failed to execute.
This way you can look at the log file and find out very quickly what thread is causing the exception, instead of using passed around bool variables.
The implementation of the long-running task is provided by a library whose code I cannot modify.
That means you have no way to synchronize the job done by the worker threads.
If an error occurs in one of the workers,
Let's suppose that you can really detect worker errors; some of them can be easily detected if reported by the library being used, while others cannot, e.g.:
the library code loops.
the library code exits prematurely with an uncaught exception.
I want the remaining workers to stop **gracefully**
That's just not possible.
The best you can do is write a thread manager that checks on worker thread status; if an error condition is detected, it just (ungracefully) "kills" all the worker threads and exits.
You should also consider detecting a looping worker thread (by timeout) and offering the user the option to kill it or continue waiting for the process to finish.
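A hedged sketch of what that timeout-based check might look like, reusing the SharedData struct from the question; the 30-second window and the helper name wait_for_workers are illustrative only.

#include <chrono>
#include <condition_variable>
#include <mutex>

// Returns true if all workers finished cleanly, false on error or apparent hang.
bool wait_for_workers(SharedData &shared)
{
    std::unique_lock<std::mutex> lock(shared.mutex);
    while (!shared.error && shared.running > 0) {
        if (shared.condVar.wait_for(lock, std::chrono::seconds(30)) ==
            std::cv_status::timeout) {
            // No worker signalled within the window: treat it as a possible hang
            // and let the caller decide whether to kill the workers or keep waiting.
            return false;
        }
    }
    return !shared.error;
}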
Your problem is that the long running function is not your code, and you say you cannot modify it. Consequently you cannot make it pay any attention whatsoever to any kind of external synchronisation primitive (condition variables, semaphores, mutexes, pipes, etc), unless the library developer has done that for you.
Therefore your only option is to do something that wrests control away from any code, no matter what it's doing. This is what signals do. For that, you're going to have to use pthread_kill(), or whatever the equivalent is these days.
The pattern would be that
The thread that detects an error needs to communicate that error back to the main thread in some manner.
The main thread then needs to call pthread_kill() for all the other remaining threads. Don't be confused by the name - pthread_kill() is simply a way of delivering an arbitrary signal to a thread. Note that signals like STOP, CONTINUE and TERMINATE are process-wide even if raised with pthread_kill(), not thread specific so don't use those.
In each of those threads you'll need a signal handler. On delivery of the signal to a thread the execution path in that thread will jump to the handler no matter what the long running function was doing.
You are now back in (limited) control, and can (probably, well, maybe) do some limited cleanup and terminate the thread.
In the meantime the main thread will have been calling pthread_join() on all the threads it's signaled, and those will now return.
My thoughts:
This is a really ugly way of doing it (and signals / pthreads are notoriously difficult to get right and I'm no expert), but I don't really see what other choice you have.
It'll be a long way from looking 'graceful' in source code, though the end user experience will be OK.
You will be aborting execution part way through running that library function, so if there's any clean up it would normally do (e.g. freeing up memory it has allocated) that won't get done and you'll have a memory leak. Running under something like valgrind is a way of working out if this is happening.
The only way of getting the library function to clean up (if it needs it) will be for your signal handler to return control to the function and letting it run to completion, just what you don't want to do.
And of course, this won't work on Windows (no pthreads, at least none worth speaking of, though there may be an equivalent mechanism).
Really the best way is going to be to re-implement (if at all possible) that library function.
I have a separate thread for audio in my application because it sounded like a good idea at the time, but now I am concerned about how other threads will communicate with the audio thread.
void audioThread() {
    while (!isCloseRequested) {
        if (audio.dogSoundRequested) {
            audio.playDogSound();
        }
    }
}

void otherThread() {
    audio.dogSoundRequested();
}
Would this be an efficient way to thread audio or do you see issues with this setup?
The issues at stake here seem to be:
1: how to make audio.dogSoundRequested and isCloseRequested thread-safe.
2: audioThread is busy-waiting (i.e. spinning indefinitely until audio.dogSoundRequested becomes true).
As others have suggested, you could use a mutex to protect both variables, but this is overkill; additionally, it's generally good form in audio code not to use blocking synchronisation, in order to avoid issues with priority inversion.
Instead, assuming you're using C++11 or C++14, you could use atomic variables, which are lightweight and don't (in most implementations) block:
#include <atomic>
...
std::atomic<bool> dogSoundRequested{false};
std::atomic<bool> isCloseRequested{false};
Reads and writes to the std::atomic have the same contract as for built-in types, but will generate code that ensures the read and write are atomic with respect to other threads, and that the results are synchronised with other CPUs.
In the case of audio.dogSoundRequested you want both of these effects, and in the case of isCloseRequested, that the result is immediately visible on other CPUs.
To solve the busy-waiting issue, use a condition variable to awake audioThread when there's something to do:
#include <condition_variable>

std::mutex m;
std::condition_variable cv;

void audioThread()
{
    while (!isCloseRequested)
    {
        {
            // std::condition_variable::wait needs a std::unique_lock, not a bare mutex;
            // the predicate also guards against spurious wake-ups.
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [&] { return audio.dogSoundRequested || isCloseRequested; });
            // wait() returns with the mutex held; it is released at the end of this scope.
        }
        if (audio.dogSoundRequested)
        {
            audio.playDogSound();
            audio.dogSoundRequested = false; // reset the request so the loop goes back to waiting
        }
    }
}

// Renamed from dogSoundRequested() to avoid clashing with the flag of the same name.
void requestDogSound()
{
    audio.dogSoundRequested = true;
    cv.notify_one();
}
In addition to the use of a mutex, here is a simple setup for multiple threads:
// g++ -o multi_threading -pthread -std=c++11 multi_threading.cpp

#include <iostream>
#include <string>
#include <thread>
#include <chrono>     // sleep_for / milliseconds
#include <exception>
#include <mutex>
#include <climits>    // min max of short int

void launch_consumer() {
    std::cout << "launch_consumer" << std::endl;
} // launch_consumer

void launch_producer(std::string chosen_file) {
    std::cout << "launch_producer " << chosen_file << std::endl;
} // launch_producer

// -----------

int main(int argc, char** argv) {
    std::string chosen_file = "audio_file.wav";

    std::thread t1(launch_producer, chosen_file);
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    std::thread t2(launch_consumer);

    // -------------------------

    t1.join();
    t2.join();

    return 0;
}
Rather than complicate the code with mutexes and condition variables, consider making a thread-safe FIFO. In this case, one that could have multiple writers and one consumer. Other threads of the application are the writers to this FIFO, the audioThread() is the consumer.
// NOP = no operation
enum AudioTask {NOP, QUIT, PLAY_DOG, ...};

class Fifo
{
public:
    bool can_push() const;  // is it full?
    void push(AudioTask t); // safely writes to the FIFO
    AudioTask pop();        // safely reads from the FIFO, if empty NOP
};
Now the audioThread() is a bit cleaner, assuming fifo and audio are application class members:
void audioThread()
{
    bool running = true;
    while (running)
    {
        auto task = fifo.pop();
        switch (task)
        {
            case NOP:      std::this_thread::yield(); break;
            case QUIT:     running = false;           break;
            case PLAY_DOG: audio.playDogSound();      break;
        }
    }
}
Finally, the calling code only needs to push tasks into the FIFO:
void request_dog_sound()
{
    fifo.push(PLAY_DOG);
}

void stop_audio_thread()
{
    fifo.push(QUIT);
    audio_thread.join();
}
This puts the details of the thread-safe synchronization inside the Fifo class, keeping the rest of the application cleaner.
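For reference, a hedged sketch of one way the Fifo above could be implemented, using a mutex-protected std::deque with a bounded capacity; the capacity of 64 is arbitrary.

#include <cstddef>
#include <deque>
#include <mutex>

class Fifo
{
public:
    bool can_push() const
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        return m_queue.size() < max_size;
    }

    void push(AudioTask t)
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_queue.push_back(t);
    }

    AudioTask pop()
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        if (m_queue.empty())
            return NOP;                 // an empty queue reads as "no operation"
        AudioTask t = m_queue.front();
        m_queue.pop_front();
        return t;
    }

private:
    static constexpr std::size_t max_size = 64;
    mutable std::mutex m_mutex;         // mutable so can_push() can stay const
    std::deque<AudioTask> m_queue;
};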
If you want to be sure that no other thread touches the playDogSound() function, use a mutex lock to lock the resource.
std::mutex mtx;

void audioThread() {
    while (!isCloseRequested) {
        if (audio.dogSoundRequested) {
            mtx.lock();
            audio.playDogSound();
            mtx.unlock();
        }
    }
}
I'm having the hardest time trying to wrap my head around how to allow threads to signal each other.
My design:
The main function creates a single master thread that coordinates a bunch of other worker threads. The main function also creates the workers, because the worker threads spawn and exit at intervals programmed in main. The master thread needs to be able to signal these worker threads (and signal_broadcast them all), and the worker threads have to signal the master back (pthread_cond_signal). Since each thread needs a pthread_mutex and pthread_cond, I made a Worker class and a Master class with these variables.
Now this is where I am stuck. C++ does not allow you to pass member functions as the pthread_create(...) handler, so I had to make a static handler inside the class and pass a pointer to the object, then reinterpret_cast it to use its class data...
void Worker::start() {
    pthread_create(&thread, NULL, &Worker::run, this);
}

void* Worker::run(void *ptr) {
    Worker* data = reinterpret_cast<Worker*>(ptr);
    // ... use data ...
    return NULL;
}
The problem I have with this (probably wrong) setup is that when I passed an array of worker pointers to the Master thread, it signals a different worker instance, because I think the cast did some sort of copy. So I tried static_cast, with the same behaviour.
I just need some sort of design where the Master and workers can pthread_cond_wait(...) and pthread_cond_signal(...) each other.
Edit 1
Added:
private:
Worker(const Worker&);
Still not working.
Edit: Fixed the potential race in all versions:
1./1b. Employs a semaphore built from a mutex + condition + counter, as outlined in "C++0x has no semaphores? How to synchronize threads?"
2. Uses a 'reverse' wait to ensure that a signal gets acknowledged by the intended worker.
I'd really suggest using the C++11-style <thread> and <condition_variable> to achieve this.
I have two (and a half) demonstrations. They each assume you have 1 master that drives 10 workers. Each worker awaits a signal before it does its work.
We'll use std::condition_variable (which works in conjunction with a std::mutex) to do the signaling. The difference between the first and second version will be the way in which the signaling is done:
1. Notifying any worker, one at a time:
1b. With a worker struct
2. Notifying all threads, coordinating which recipient worker is to respond
1. Notifying any worker, one at a time:
This is the simplest to do, because there's little coordination going on:
#include <vector>
#include <thread>
#include <mutex>
#include <algorithm>
#include <iostream>
#include <condition_variable>

using namespace std;

class semaphore
{   // see https://stackoverflow.com/questions/4792449/c0x-has-no-semaphores-how-to-synchronize-threads
    std::mutex mx;
    std::condition_variable cv;
    unsigned long count;
public:
    semaphore() : count() {}

    void notify();
    void wait();
};

static void run(int id, struct master& m);

struct master
{
    mutable semaphore sem;

    master()
    {
        for (int i = 0; i < 10; ++i)
            threads.emplace_back(run, i, ref(*this));
    }

    ~master() {
        for (auto& th : threads) if (th.joinable()) th.join();
        std::cout << "done\n";
    }

    void drive()
    {
        // do wakeups
        for (unsigned i = 0; i < threads.size(); ++i)
        {
            this_thread::sleep_for(chrono::milliseconds(rand() % 100));
            sem.notify();
        }
    }

private:
    vector<thread> threads;
};

static void run(int id, master& m)
{
    m.sem.wait();
    {
        static mutex io_mx;
        lock_guard<mutex> lk(io_mx);
        cout << "signaled: " << id << "\n";
    }
}

int main()
{
    master instance;
    instance.drive();
}

/// semaphore members
void semaphore::notify()
{
    lock_guard<mutex> lk(mx);
    ++count;
    cv.notify_one();
}

void semaphore::wait()
{
    unique_lock<mutex> lk(mx);
    while (!count)
        cv.wait(lk);
    --count;
}
1b. With a worker struct
Note: if you had worker classes where worker::run is a non-static member function, you can do the same with minor modifications:
struct worker
{
    worker(int id) : id(id) {}
    void run(master& m) const;

    int id;
};

// ...

struct master
{
    // ...
    master()
    {
        for (int i = 0; i < 10; ++i)
            workers.emplace_back(i);

        for (auto& w : workers)
            threads.emplace_back(&worker::run, ref(w), ref(*this));
    }

// ...

void worker::run(master& m) const
{
    m.sem.wait();
    {
        static mutex io_mx;
        lock_guard<mutex> lk(io_mx);
        cout << "signaled: " << id << "\n";
    }
}
A caveat
cv.wait() can suffer spurious wake-ups, in which the condition variable wasn't actually raised (e.g. in the event of OS signal handlers). This is a common thing to happen with condition variables on any platform.
The following approach fixes this:
2. Notifying all threads, coordinating which recipient worker
Use a flag to signal which thread was intended to receive the signal:
struct master
{
    mutable mutex mx;
    mutable condition_variable cv;
    int signaled_id; // ADDED

    master() : signaled_id(-1)
    {
Let's pretend that drive() got a lot more interesting and wants to signal all workers in a specific (random...) order:
    void drive()
    {
        // generate random wakeup order
        vector<int> wakeups(10);
        iota(begin(wakeups), end(wakeups), 0);
        random_shuffle(begin(wakeups), end(wakeups)); // note: removed in C++17; std::shuffle is the modern replacement

        // do wakeups
        for (int id : wakeups)
        {
            this_thread::sleep_for(chrono::milliseconds(rand() % 1000));
            signal(id);
        }
    }

private:
    void signal(int id) // ADDED id
    {
        unique_lock<mutex> lk(mx);
        std::cout << "signaling " << id << "\n";
        signaled_id = id; // ADDED put it in the shared field
        cv.notify_all();
        cv.wait(lk, [&] { return signaled_id == -1; });
    }
Now all we have to do is make sure that the receiving thread checks that its id matches:
m.cv.wait(lk, [&] { return m.signaled_id == id; });
m.signaled_id = -1;
m.cv.notify_all();
This puts an end to spurious wake-ups.
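For context, a sketch of how the version-2 worker's run() might look with those lines in place, assuming the mx, cv and signaled_id members shown above are accessible to the worker, as in the full listing linked below:

static void run(int id, master& m)
{
    unique_lock<mutex> lk(m.mx);

    // Wait until the master names this worker specifically.
    m.cv.wait(lk, [&] { return m.signaled_id == id; });

    // Acknowledge, so the master's signal() can return and move on.
    m.signaled_id = -1;
    m.cv.notify_all();

    cout << "signaled: " << id << "\n";
}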
Full code listings/live demos:
1. notify_one.cpp http://coliru.stacked-crooked.com/view?id=c968f8cffd57afc2a0c6777105203f85-03e740563a9d9c6bf97614ba6099fe92
1b. id. with worker struct: http://coliru.stacked-crooked.com/view?id=7bd224c42130a0461b0c894e0b7c74ae-03e740563a9d9c6bf97614ba6099fe92
2. notify_all.cpp http://coliru.stacked-crooked.com/view?id=1d3145ccbb93c1bec03b232d372277b8-03e740563a9d9c6bf97614ba6099fe92
It is not clear what your exact circumstances are, but it seems like you are using a container to hold your "Worker" instances that are created in main, and passing them to your "Master". If this is the case, there are a few remedies available to you. You need to pick one that is appropriate to your implementation.
Pass a reference to the container in main to the Master.
Change the container to hold (smart) pointers to Workers (a sketch of this option follows the list).
Make the container part of "Master" itself, so that it doesn't need to be passed to it.
Implement a proper destructor, copy constructor, and assignment operator for your Worker class (in other words, obey the Rule of Three).
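For instance, option 2 might look roughly like the following sketch; Worker, Master, setWorkers and num_workers are placeholders for your own types and interface, not names from the original code.

#include <memory>
#include <vector>

void create_workers(Master &master, int num_workers)
{
    // The container owns the Workers through shared_ptr, so handing it to the
    // Master shares the same objects instead of copying them.
    std::vector<std::shared_ptr<Worker>> workers;
    for (int i = 0; i < num_workers; ++i)
        workers.push_back(std::make_shared<Worker>());

    master.setWorkers(workers); // hypothetical setter; adapt to your Master interface
}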
Technically speaking, since pthread_create() is a C API, the function pointer that is passed to it needs to have C linkage (extern "C"). You can't make a method of a C++ class have C linkage, so you should define an external function:
extern "C" { static void * worker_run (void *arg); }
class Worker { //...
};
static void * worker_run (void *arg) {
return Worker::run(arg);
}
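For completeness, a sketch of how the question's start() might use this trampoline instead of &Worker::run, assuming Worker::run(void*) is a public static member as in the question:

void Worker::start() {
    // Pass the C-linkage trampoline; the object still travels through the void* argument.
    pthread_create(&thread, NULL, worker_run, this);
}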