Dividing work between a fixed number of threads with pthread - C++

I have n jobs that share no resources between them, and m threads. How can I efficiently divide the jobs among the threads so that no thread sits idle until everything is processed?
This is a prototype of my program:
class Job {
    // constructor and other stuff
    // ...
public:
    void doWork();
};

struct JobParams {
    int threadId;
    Job job;
};

void* doWorkOnThread(void* arg) {
    JobParams* j = static_cast<JobParams*>(arg); // cast argument
    cout << "Thread #" << j->threadId << " started" << endl;
    j->job.doWork();
    return (void*)0;
}
Then in my main file I have something like:
int main() {
    vector<Job> jobs; // let's say it has 17 jobs
    int numThreads = 4;
    pthread_t* threads = new pthread_t[numThreads];
    JobParams* jps = new JobParams[jobs.size()];
    for (size_t i = 0; i < jobs.size(); i++) {
        jps[i].job = jobs[i];
    }
    for (int i = 0; i < numThreads; i++) {
        pthread_create(&threads[i], NULL, doWorkOnThread, &jps[i]);
    }
    // another for loop and call join on the 4 threads...
    return 0;
}
How can I efficiently make sure that no thread is idle until all jobs are completed?

You'll need to add a loop that identifies which threads have completed and starts new ones, making sure you always have up to 4 threads running.
Here is a very basic way to do that. Using a sleep, as proposed, is a good start and will do the job (even if it adds an extra delay before you notice that the last thread has completed). Ideally, you would use a condition variable, notified by a thread when its job is done, to wake up the main loop (the sleep instruction would then be replaced by a wait on the condition); a sketch of that variant follows the example below.
#include <atomic>

struct JobParams {
    int threadId;
    Job job;
    std::atomic<bool> done; // flag to know when the job is done; could also be an attribute of the Job class!
};

void* doWorkOnThread(void* arg) {
    JobParams* j = static_cast<JobParams*>(arg); // cast argument
    cout << "Thread #" << j->threadId << " started" << endl;
    j->job.doWork();
    j->done = true; // signal that the job completed
    return (void*)0;
}
int main() {
    ....
    std::map<JobParams*, pthread_t> runningThreads; // to keep track of running jobs
    for (size_t i = 0; i < jobs.size(); i++) {
        jps[i].job = jobs[i];
        jps[i].done = false; // mark as not done yet
    }
    while (true)
    {
        vector<JobParams*> todo;
        for (size_t i = 0; i < jobs.size(); i++)
        {
            if (!jps[i].done)
            {
                if (runningThreads.find(&jps[i]) == runningThreads.end())
                    todo.push_back(&jps[i]); // job not started yet, mark it as to be done
                // else, a thread is already processing the job and has not completed it yet
            }
            else
            {
                if (runningThreads.find(&jps[i]) != runningThreads.end())
                {
                    // the thread just completed the job!
                    // let's join to wait for the thread to end cleanly
                    // I'm not familiar with pthread, hope this is correct
                    void* res;
                    pthread_join(runningThreads[&jps[i]], &res);
                    runningThreads.erase(&jps[i]); // not running anymore
                }
                // else, the job was already done and the thread was joined in a previous iteration
            }
        }
        if (todo.empty() && runningThreads.empty())
            break; // all jobs done
        // some jobs remain undone
        if (runningThreads.size() < (size_t)numThreads && !todo.empty())
        {
            // some new threads shall be started...
            size_t newThreadsToBeCreatedCount = numThreads - runningThreads.size();
            // make sure you don't end up with too many threads running
            if (todo.size() > newThreadsToBeCreatedCount)
                todo.resize(newThreadsToBeCreatedCount);
            for (JobParams* jobParam : todo)
            {
                pthread_create(&runningThreads[jobParam], NULL, doWorkOnThread, jobParam);
            }
        }
        // else: you already have 4 running jobs
        // sanity check that everything went as expected:
        assert(runningThreads.size() <= (size_t)numThreads);
        usleep(100 * 1000); // give a chance for some jobs to complete (100 ms), needs <unistd.h>
        // adjust the sleep duration if necessary
    }
}
Note: I'm not very familiar with pthread. Hope the syntax is correct.
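For reference, here is a minimal sketch of the condition-variable variant mentioned above (an illustration only, not tested; it reuses the JobParams struct from above, and the gDoneMutex / gJobDone / gNewlyDone names are made up for this sketch):
#include <pthread.h>

// Hypothetical globals for this sketch: a mutex/condition pair and a counter of
// jobs that finished since the main loop last rescanned.
pthread_mutex_t gDoneMutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  gJobDone   = PTHREAD_COND_INITIALIZER;
int gNewlyDone = 0;

void* doWorkOnThread(void* arg) {
    JobParams* j = static_cast<JobParams*>(arg);
    j->job.doWork();
    pthread_mutex_lock(&gDoneMutex);
    j->done = true;
    gNewlyDone++;
    pthread_cond_signal(&gJobDone); // wake up the main loop
    pthread_mutex_unlock(&gDoneMutex);
    return (void*)0;
}

// In the main loop, call this instead of usleep(100 * 1000):
void waitForAnyJobToFinish() {
    pthread_mutex_lock(&gDoneMutex);
    while (gNewlyDone == 0) // the loop guards against spurious wakeups
        pthread_cond_wait(&gJobDone, &gDoneMutex);
    gNewlyDone = 0; // the main loop will rescan all jobs
    pthread_mutex_unlock(&gDoneMutex);
}
With this in place, the main loop reacts as soon as any worker reports completion instead of waiting out the remainder of a 100 ms sleep.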

Related

QtConcurrent: why releaseThread and reserveThread cause deadlock?

In Qt 4.7 Reference for QThreadPool, we find:
void QThreadPool::releaseThread()
Releases a thread previously reserved by a call to reserveThread().
Note: Calling this function without previously reserving a thread
temporarily increases maxThreadCount(). This is useful when a thread
goes to sleep waiting for more work, allowing other threads to
continue. Be sure to call reserveThread() when done waiting, so that
the thread pool can correctly maintain the activeThreadCount().
See also reserveThread().
void QThreadPool::reserveThread()
Reserves one thread, disregarding activeThreadCount() and
maxThreadCount().
Once you are done with the thread, call releaseThread() to allow it to
be reused.
Note: This function will always increase the number of active threads.
This means that by using this function, it is possible for
activeThreadCount() to return a value greater than maxThreadCount().
See also releaseThread().
I want to use releaseThread() to make it possible to use a nested concurrent map, but in the following code it hangs in waitForFinished():
#include <QApplication>
#include <QMainWindow>
#include <QtConcurrentMap>
#include <QtConcurrentRun>
#include <QFuture>
#include <QThreadPool>
#include <QtTest/QTest>
#include <QFutureSynchronizer>
#include <cassert>
struct Task2 { // only calculation
typedef void result_type;
void operator()(int count) {
int k = 0;
for (int i = 0; i < count * 10; ++i) {
for (int j = 0; j < count * 10; ++j) {
k++;
}
}
assert(k >= 0);
}
};
struct Task1 { // will launch some other concurrent map
typedef void result_type;
void operator()(int count) {
QVector<int> vec;
for (int i = 0; i < 5; ++i) {
vec.push_back(i+count);
}
Task2 task;
QFuture<void> f = QtConcurrent::map(vec.begin(), vec.end(), task);
{
// without releaseThread before wait, it will hang directly
QThreadPool::globalInstance()->releaseThread();
f.waitForFinished(); // BUG: may hang there
QThreadPool::globalInstance()->reserveThread();
}
}
};
int main() {
QThreadPool* gtpool = QThreadPool::globalInstance();
gtpool->setExpiryTimeout(50);
int count = 0;
for (;;) {
QVector<int> vec;
for (int i = 0; i < 40 ; i++) {
vec.push_back(i);
}
// launch a task with nested map
Task1 task; // Task1 will have nested concurrent map
QFuture<void> f = QtConcurrent::map(vec.begin(), vec.end(),task);
f.waitForFinished(); // BUG: may hang there
count++;
// waiting most of thread in thread pool expire
while (QThreadPool::globalInstance()->activeThreadCount() > 0) {
QTest::qSleep(50);
}
// launch a task only calculation
Task2 task2;
QFuture<void> f2 = QtConcurrent::map(vec.begin(), vec.end(), task2);
f2.waitForFinished(); // BUG: may hang there
qDebug() << count;
}
return 0;
}
This code does not run forever; it hangs after many loops (1~10000), with all threads waiting on a condition variable.
My questions are:
Why does it hang?
Can I fix it and keep the nested concurrent map?
dev env:
Linux version 2.6.32-696.18.7.el6.x86_64; Qt4.7.4; GCC 3.4.5
Windows 7; Qt4.7.4; mingw 4.4.0
The program hangs because of a race condition in QThreadPool when you use a short expiryTimeout. Here is the analysis in detail:
The problem in QThreadPool - source
When starting a task, QThreadPool did something along the lines of:
QMutexLocker locker(&mutex);
taskQueue.append(task); // Place the task on the task queue
if (waitingThreads > 0) {
// there are already idle threads running; they are waiting on the 'runnableReady'
// QWaitCondition. Wake one of them up.
waitingThreads--;
runnableReady.wakeOne();
} else if (runningThreadCount < maxThreadCount) {
startNewThread(task);
}
And the thread's main loop looks like this:
void QThreadPoolThread::run()
{
QMutexLocker locker(&manager->mutex);
while (true) {
/* ... */
if (manager->taskQueue.isEmpty()) {
// no pending task, wait for one.
bool expired = !manager->runnableReady.wait(locker.mutex(),
manager->expiryTimeout);
if (expired) {
manager->runningThreadCount--;
return;
} else {
continue;
}
}
QRunnable *r = manager->taskQueue.takeFirst();
// run the task
locker.unlock();
r->run();
locker.relock();
}
}
The idea is that a thread will wait a given amount of time for a task, but if no task is added within that time, the thread expires and is terminated. The problem here is that we rely on the return value of runnableReady.wait(). If a task is scheduled at exactly the same time as the thread expires, the thread sees false and expires, but the main thread does not start any other thread. That can leave the application hanging, as the task will never be run.
The quick workaround is to use a long expiryTimeout (30000 ms is the default) and to remove the while loop that waits for the threads to expire.
Here is the modified main function; the program runs smoothly on Windows 7, with 4 threads used by default:
int main() {
QThreadPool* gtpool = QThreadPool::globalInstance();
//gtpool->setExpiryTimeout(50); <-- don't set the expiry Timeout, use the default one.
qDebug() << gtpool->maxThreadCount();
int count = 0;
for (;;) {
QVector<int> vec;
for (int i = 0; i < 40 ; i++) {
vec.push_back(i);
}
// launch a task with nested map
Task1 task; // Task1 will have nested concurrent map
QFuture<void> f = QtConcurrent::map(vec.begin(), vec.end(),task);
f.waitForFinished(); // BUG: may hang there
count++;
/*
// waiting most of thread in thread pool expire
while (QThreadPool::globalInstance()->activeThreadCount() > 0)
{
QTest::qSleep(50);
}
*/
// launch a task only calculation
Task2 task2;
QFuture<void> f2 = QtConcurrent::map(vec.begin(), vec.end(), task2);
f2.waitForFinished(); // BUG: may hang there
qDebug() << count ;
}
return 0;
}
@tungIt's answer is good enough. I found the Qt bug report and the fix commit, just for reference:
https://bugreports.qt.io/browse/QTBUG-3786
https://github.com/qt/qtbase/commit/a9b6a78e54670a70b96c122b10ad7bd64d166514#diff-6d5794cef91df41c39b5e7cc6b71d041

Thread pool on a queue in C++

I've been trying to solve a problem concurrently, which fits the thread pool pattern very nicely. Here I will try to provide a minimal representative example:
Say we have a pseudo-program like this:
Q : collection<int>
while (!Q.empty()) {
for each q in Q {
// perform some computation
}
// assign a new value to Q
Q = something_completely_new();
}
I'm trying to implement that in a parallel way, with n-1 workers and one main thread. The workers will perform the computation in the inner loop by grabbing elements from Q.
I tried to solve this using two condition variables: work, on which the master thread notifies the workers that Q has been assigned to, and another, work_done, on which the workers notify the master that the entire computation might be done.
Here's my C++ code:
#include <iostream>
#include <mutex>
#include <condition_variable>
#include <queue>
#include <thread>
using namespace std;
std::queue<int> Q;
std::mutex mut;
std::condition_variable work;
std::condition_variable work_done;
void run_thread() {
for (;;) {
std::unique_lock<std::mutex> lock(mut);
work.wait(lock, [&] { return Q.size() > 0; });
// there is work to be done - pretend we're working on something
int x = Q.front(); Q.pop();
std::cout << "Working on " << x << std::endl;
work_done.notify_one();
}
}
int main() {
// your code goes here
std::vector<std::thread *> workers(3);
for (size_t i = 0; i < 3; i++) {
workers[i] = new std::thread{
[&] { run_thread(); }
};
}
for (int i = 4; i > 0; --i) {
std::unique_lock<std::mutex> lock(mut);
Q = std::queue<int>();
for (int k = 0; k < i; k++) {
Q.push(k);
}
work.notify_all();
work_done.wait(lock, [&] { return Q.size() == 0; });
}
for (size_t i = 0; i < 3; i++) {
delete workers[i];
}
return 0;
}
Unfortunately, after compiling it on OS X with g++ -std=c++11 -Wall -o main main.cpp I get the following output:
Working on 0
Working on 1
Working on 2
Working on 3
Working on 0
Working on 1
Working on 2
Working on 0
Working on 1
Working on 0
libc++abi.dylib: terminating
Abort trap: 6
After a while of googling, it looks like a segmentation fault. It probably has to do with me misusing condition variables. I would appreciate some insight, both architectural (on how to approach this type of problem) and specific, as in what exactly I'm doing wrong here.
I appreciate the help
Your application was killed by std::terminate.
The body of your thread function is an infinite loop, so when these lines are executed
for (size_t i = 0; i < 3; i++) {
delete workers[i];
}
you are deleting threads that are still running (each thread is in a joinable state). When you call the destructor of a thread that is joinable, the following happens (from http://www.cplusplus.com/reference/thread/thread/~thread/):
If the thread is joinable when destroyed, terminate() is called.
So if you don't want terminate to be called, you should call the detach() method after creating the threads.
for (size_t i = 0; i < 3; i++) {
workers[i] = new std::thread{
[&] { run_thread(); }
};
workers[i]->detach(); // <---
}
Just because the queue is empty doesn't mean the work is done.
finished = true;
work.notify_all();
for (size_t i = 0; i < 3; i++) {
workers[i]->join(); // wait for the threads to finish
delete workers[i];
}
And we need some way to terminate the threads:
for (; !finished;) {
    std::unique_lock<std::mutex> lock(mut);
    work.wait(lock, [&] { return Q.size() > 0 || finished; });
    if (finished)
        return;
    // ... pop and process an item as before ...
}
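Putting the two fragments together, a complete corrected version might look roughly like this (a sketch of my own, not the answerer's code: it keeps the globals from the question, adds a finished flag protected by the same mutex, stores the threads by value, and joins the workers instead of detaching them):
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

std::queue<int> Q;
std::mutex mut;
std::condition_variable work;
std::condition_variable work_done;
bool finished = false; // protected by mut

void run_thread() {
    for (;;) {
        std::unique_lock<std::mutex> lock(mut);
        work.wait(lock, [&] { return !Q.empty() || finished; });
        if (finished)
            return;
        int x = Q.front(); Q.pop();
        std::cout << "Working on " << x << std::endl;
        if (Q.empty())
            work_done.notify_one(); // the master may be waiting for an empty queue
    }
}

int main() {
    std::vector<std::thread> workers;
    for (size_t i = 0; i < 3; i++)
        workers.emplace_back(run_thread);

    for (int i = 4; i > 0; --i) {
        std::unique_lock<std::mutex> lock(mut);
        Q = std::queue<int>();
        for (int k = 0; k < i; k++)
            Q.push(k);
        work.notify_all();
        work_done.wait(lock, [&] { return Q.empty(); });
    }

    {
        std::lock_guard<std::mutex> lock(mut);
        finished = true; // tell the workers to exit their loops
    }
    work.notify_all();
    for (auto& w : workers)
        w.join();
    return 0;
}
The key changes are the finished flag in both wait predicates and the final notify_all/join sequence, so the workers leave their loops cleanly before the thread objects are destroyed.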

SDL and C++: Waiting for multiple threads to finish

I am having trouble fixing the following problem:
I have 10 threads that don't need to interact with each other and can therefore all run simultaneously.
I create them in a loop.
However, I need to wait until all threads are done before I can continue with the code that starts after the for loop.
Here is the problem in pseudo code:
//start threads
for (until 10) {
SDL_Thread* thread = SDL_CreateThread();
}
//the rest of the code starts here, all threads need to be done first
What's the best way to get this done?
I need to stay platform-independent, which is why I try to use only SDL functions.
If there is another platform-independent solution for C++, I am fine with that too.
You can take the following approach:
#include <iostream>
#include <sstream>
#include <string>
#include <vector>
#include <unistd.h>      // for sleep()
#include "SDL_thread.h"
using namespace std;

const int THREAD_COUNT = 10;

static int ThreadFunction(void *ptr)
{
    // Some useful work done by the thread
    // Here it will sleep for 5 seconds and then exit
    sleep(5);
    return 0;
}

int main()
{
    vector<SDL_Thread*> threadIdVector;
    // create 'n' threads
    for (int count = 0; count < THREAD_COUNT; count++)
    {
        SDL_Thread *thread;
        stringstream ss;
        ss << "Thread::" << count;
        string tname = ss.str();
        thread = SDL_CreateThread(ThreadFunction, tname.c_str(), (void *)NULL);
        if (NULL != thread)
        {
            threadIdVector.push_back(thread);
        }
    }
    // iterate through the vector and wait for each thread
    int tcount = 0;
    vector<SDL_Thread*>::iterator iter;
    for (iter = threadIdVector.begin();
         iter != threadIdVector.end(); iter++)
    {
        int threadReturnValue;
        cout << "Main waiting for Thread : " << tcount++ << endl;
        SDL_WaitThread(*iter, &threadReturnValue);
    }
    cout << "All threads have finished execution. Main will proceed...." << endl;
    return 0;
}
I ran this program with standard POSIX library calls and it worked fine; then I replaced the POSIX calls with SDL calls. I do not have the SDL library installed, so you may have to adjust the code slightly to get it to compile.
Hope this helps.
You can implement a semaphore that is incremented for every running thread; when a thread is done, it decrements the semaphore, and your main program waits until it reaches 0.
There are plenty of examples of how a semaphore can be implemented, and that keeps it platform-independent.
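For illustration, here is a minimal sketch using SDL2's built-in counting semaphore. It counts completions up (each worker posts once when it finishes, and main waits for ten posts) rather than counting running threads down; the gDoneSem name and the detaching of the workers are my own choices for the sketch, not part of the answer above:
#include "SDL_thread.h"
#include "SDL_mutex.h"   // SDL_sem and the semaphore functions live here

static SDL_sem* gDoneSem = NULL;   // hypothetical name for this sketch

static int WorkerFunction(void* /*ptr*/)
{
    // ... do the real work here ...
    SDL_SemPost(gDoneSem);          // tell main that this thread is finished
    return 0;
}

int main()
{
    const int THREAD_COUNT = 10;
    gDoneSem = SDL_CreateSemaphore(0);   // counter starts at 0

    for (int i = 0; i < THREAD_COUNT; ++i)
    {
        SDL_Thread* t = SDL_CreateThread(WorkerFunction, "worker", NULL);
        if (t)
            SDL_DetachThread(t);         // we only care about the semaphore (SDL >= 2.0.2)
    }

    // block until every worker has posted exactly once
    for (int i = 0; i < THREAD_COUNT; ++i)
        SDL_SemWait(gDoneSem);

    SDL_DestroySemaphore(gDoneSem);
    // the rest of the code starts here, all threads are done
    return 0;
}
If you keep the SDL_Thread pointers and call SDL_WaitThread on each of them instead, you don't need the semaphore at all, which is essentially what the first answer does.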

One producer, two consumers acting on one 'queue' produced by producer

Preface: I'm new to multithreaded programming, and a little rusty with C++. My requirements are to use one mutex, and two conditions mNotEmpty and mEmpty. I must also create and populate the vectors in the way mentioned below.
I have one producer thread creating a vector of random numbers of size n*2, and two consumers inserting those values into two separate vectors of size n.
I am doing the following in the producer:
Lock the mutex: pthread_mutex_lock(&mMutex1)
Wait for consumer to say vector is empty: pthread_cond_wait(&mEmpty,&mMutex1)
Push back a value into the vector
Signal the consumer that the vector isn't empty anymore: pthread_cond_signal(&mNotEmpty)
Unlock the mutex: pthread_mutex_unlock(&mMutex1)
Return to step 1
In the consumer:
Lock the mutex: pthread_mutex_lock(&mMutex1)
Check to see if the vector is empty, and if so signal the producer: pthread_cond_signal(&mEmpty)
Else insert value into one of two new vectors (depending on which thread) and remove from original vector
Unlock the mutex: pthread_mutex_unlock(&mMutex1)
Return to step 1
What's wrong with my process? I keep getting segmentation faults or infinite loops.
Edit: Here's the code:
void Producer()
{
srand(time(NULL));
for(unsigned int i = 0; i < mTotalNumberOfValues; i++){
pthread_mutex_lock(&mMutex1);
pthread_cond_wait(&mEmpty,&mMutex1);
mGeneratedNumber.push_back((rand() % 100) + 1);
pthread_cond_signal(&mNotEmpty);
pthread_mutex_unlock(&mMutex1);
}
}
void Consumer(const unsigned int index)
{
for(unsigned int i = 0; i < mNumberOfValuesPerVector; i++){
pthread_mutex_lock(&mMutex1);
if(mGeneratedNumber.empty()){
pthread_cond_signal(&mEmpty);
}else{
mThreadVector.at(index).push_back[mGeneratedNumber.at(0)];
mGeneratedNumber.pop_back();
}
pthread_mutex_unlock(&mMutex1);
}
}
I'm not sure I understand the rationale behind the way you're doing things. In the usual consumer-provider idiom, the provider pushes as many items as possible into the channel, waiting only if there is insufficient space in the channel; it doesn't wait for empty. So the usual idiom would be:
provider (to push one item):
pthread_mutex_lock( &mutex );
while ( ! spaceAvailable() ) {
pthread_cond_wait( &spaceAvailableCondition, &mutex );
}
pushTheItem();
pthread_cond_signal( &itemAvailableCondition );
pthread_mutex_unlock( &mutex );
and on the consumer side, to get an item:
pthread_mutex_lock( &mutex );
while ( ! itemAvailable() ) {
pthread_cond_wait( &itemAvailableCondition, &mutex );
}
getTheItem();
pthread_cond_signal( &spaceAvailableCondition );
pthread_mutex_unlock( &mutex );
Note that for each condition, one side signals and the other waits. (I don't see any wait in your consumer.) And if there is more than one process on either side, I'd recommend using pthread_cond_broadcast rather than pthread_cond_signal.
There are a number of other issues in your code. Some of them look more like typos: you should copy/paste actual code to avoid this. Do you really mean to read and pop mGeneratedValues when you push into mGeneratedNumber, and check whether that is empty? (If you actually do have two different queues, then you're popping from a queue where no one has pushed.) And you don't have any loops waiting for the conditions; you keep iterating through the number of elements you expect (incrementing the counter each time, so you're likely to terminate long before you should). I can't see an infinite loop, but I can readily see an endless wait in pthread_cond_wait in the producer. I don't see a core dump offhand, but what happens when one of the processes terminates (probably the consumer, because it never waits for anything)? If it ends up destroying the mutex or the condition variables, you could get a core dump when another process attempts to use them.
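For concreteness, here is how that idiom could be applied to the names used in the question (a sketch only: the sizes, the globals, and the main function are invented so that it compiles on its own, and the shared vector is used as a one-slot channel, which is what the mEmpty/mNotEmpty pair suggests):
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <pthread.h>
#include <vector>

// Hypothetical globals standing in for the members used in the question.
const unsigned mNumberOfValuesPerVector = 10;
const unsigned mTotalNumberOfValues = 2 * mNumberOfValuesPerVector;

pthread_mutex_t mMutex1   = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  mEmpty    = PTHREAD_COND_INITIALIZER;
pthread_cond_t  mNotEmpty = PTHREAD_COND_INITIALIZER;

std::vector<int> mGeneratedNumber;                // the shared one-slot "channel"
std::vector<std::vector<int> > mThreadVector(2);  // one result vector per consumer

void* Producer(void*)
{
    srand(time(NULL));
    for (unsigned i = 0; i < mTotalNumberOfValues; ++i) {
        pthread_mutex_lock(&mMutex1);
        while (!mGeneratedNumber.empty())          // wait until the slot is free
            pthread_cond_wait(&mEmpty, &mMutex1);
        mGeneratedNumber.push_back((rand() % 100) + 1);
        pthread_cond_broadcast(&mNotEmpty);        // more than one consumer, so broadcast
        pthread_mutex_unlock(&mMutex1);
    }
    return NULL;
}

void* Consumer(void* arg)
{
    const unsigned index = *(unsigned*)arg;
    for (unsigned i = 0; i < mNumberOfValuesPerVector; ++i) {
        pthread_mutex_lock(&mMutex1);
        while (mGeneratedNumber.empty())           // wait until there is something to take
            pthread_cond_wait(&mNotEmpty, &mMutex1);
        mThreadVector.at(index).push_back(mGeneratedNumber.back());
        mGeneratedNumber.pop_back();
        pthread_cond_signal(&mEmpty);              // tell the producer the slot is free again
        pthread_mutex_unlock(&mMutex1);
    }
    return NULL;
}

int main()
{
    pthread_t prod, cons0, cons1;
    unsigned id0 = 0, id1 = 1;
    pthread_create(&prod,  NULL, Producer, NULL);
    pthread_create(&cons0, NULL, Consumer, &id0);
    pthread_create(&cons1, NULL, Consumer, &id1);
    pthread_join(prod,  NULL);
    pthread_join(cons0, NULL);
    pthread_join(cons1, NULL);
    std::cout << "consumer 0 got " << mThreadVector[0].size()
              << " values, consumer 1 got " << mThreadVector[1].size() << "\n";
    return 0;
}
Because both consumers wait in a while loop on mNotEmpty and the producer broadcasts, a consumer that wakes up and finds the vector already emptied by the other consumer simply goes back to waiting instead of reading from an empty vector.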
In the producer, call pthread_cond_wait only when the queue is not empty; otherwise you get blocked forever due to a race condition.
You might want to consider taking the mutex only after the condition is fulfilled, e.g.:
producer()
{
while true
{
waitForEmpty();
takeMutex();
produce();
releaseMutex();
}
}
consumer()
{
while true
{
waitForNotEmpty();
takeMutex();
consume();
releaseMutex();
}
}
Here is a solution to a problem similar to yours. In this program, the producer produces a number, writes it to an array (the buffer) and to a file, and then updates a status entry (in the status array). When data appears in the buffer, the consumers start to consume it (reading it and writing it to their own files) and update the status to record that it has been consumed. When the producer sees that both consumers have consumed the data, it overwrites it with a new value and continues. For convenience, I have restricted the code here to 2000 numbers.
// Producer-consumer //
#include <iostream>
#include <fstream>
#include <pthread.h>
#define MAX 100
using namespace std;
int dataCount = 2000;
int buffer_g[100];
int status_g[100];
void *producerFun(void *);
void *consumerFun1(void *);
void *consumerFun2(void *);
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t dataNotProduced = PTHREAD_COND_INITIALIZER;
pthread_cond_t dataNotConsumed = PTHREAD_COND_INITIALIZER;
int main()
{
for(int i = 0; i < MAX; i++)
status_g[i] = 0;
pthread_t producerThread, consumerThread1, consumerThread2;
int retProducer = pthread_create(&producerThread, NULL, producerFun, NULL);
int retConsumer1 = pthread_create(&consumerThread1, NULL, consumerFun1, NULL);
int retConsumer2 = pthread_create(&consumerThread2, NULL, consumerFun2, NULL);
pthread_join(producerThread, NULL);
pthread_join(consumerThread1, NULL);
pthread_join(consumerThread2, NULL);
return 0;
}
void *producerFun(void *)
{
//file to write produced data by producer
const char *producerFileName = "producer.txt";
ofstream producerFile(producerFileName);
int index = 0, producerCount = 0;
while(1)
{
pthread_mutex_lock(&mutex);
if(index == MAX)
{
index = 0;
}
if(status_g[index] == 0)
{
static int data = 0;
data++;
cout << "Produced: " << data << endl;
buffer_g[index] = data;
producerFile << data << endl;
status_g[index] = 5;
index ++;
producerCount ++;
pthread_cond_broadcast(&dataNotProduced);
}
else
{
cout << ">> Producer is in wait.." << endl;
pthread_cond_wait(&dataNotConsumed, &mutex);
}
pthread_mutex_unlock(&mutex);
if(producerCount == dataCount)
{
producerFile.close();
return NULL;
}
}
}
void *consumerFun1(void *)
{
const char *consumerFileName = "consumer1.txt";
ofstream consumerFile(consumerFileName);
int index = 0, consumerCount = 0;
while(1)
{
pthread_mutex_lock(&mutex);
if(index == MAX)
{
index = 0;
}
if(status_g[index] != 0 && status_g[index] != 2)
{
int data = buffer_g[index];
cout << "Cosumer1 consumed: " << data << endl;
consumerFile << data << endl;
status_g[index] -= 3;
index ++;
consumerCount ++;
pthread_cond_signal(&dataNotConsumed);
}
else
{
cout << "Consumer1 is in wait.." << endl;
pthread_cond_wait(&dataNotProduced, &mutex);
}
pthread_mutex_unlock(&mutex);
if(consumerCount == dataCount)
{
consumerFile.close();
return NULL;
}
}
}
void *consumerFun2(void *)
{
const char *consumerFileName = "consumer2.txt";
ofstream consumerFile(consumerFileName);
int index = 0, consumerCount = 0;
while(1)
{
pthread_mutex_lock(&mutex);
if(index == MAX)
{
index = 0;
}
if(status_g[index] != 0 && status_g[index] != 3)
{
int data = buffer_g[index];
cout << "Consumer2 consumed: " << data << endl;
consumerFile << data << endl;
status_g[index] -= 2;
index ++;
consumerCount ++;
pthread_cond_signal(&dataNotConsumed);
}
else
{
cout << ">> Consumer2 is in wait.." << endl;
pthread_cond_wait(&dataNotProduced, &mutex);
}
pthread_mutex_unlock(&mutex);
if(consumerCount == dataCount)
{
consumerFile.close();
return NULL;
}
}
}
There is only one remaining problem: the producer is not independent when producing, i.e. it needs to take a lock on the whole array (buffer) before it produces new data, and if the mutex is locked by a consumer it has to wait, and vice versa. I am still trying to find a way around that.

How do I reverse set_value() and 'deactivate' a promise?

I have a total n00b question here on synchronization. I have a 'writer' thread which assigns a different value 'p' to a promise at each iteration. I need 'reader' threads which wait for shared_futures of this value and then process them, and my question is how do I use future/promise to ensure that the reader threads wait for a new update of 'p' before performing their processing task at each iteration? Many thanks.
You can "reset" a promise by assigning it to a blank promise.
myPromise = promise< int >();
A more complete example:
#include <chrono>
#include <future>
#include <iostream>
#include <thread>
using namespace std;

promise< int > myPromise;
void writer()
{
for( int i = 0; i < 10; ++i )
{
cout << "Setting promise.\n";
myPromise.set_value( i );
myPromise = promise< int >{}; // Reset the promise.
cout << "Waiting to set again...\n";
this_thread::sleep_for( chrono::seconds( 1 ));
}
}
void reader()
{
int result;
do
{
auto myFuture = myPromise.get_future();
cout << "Waiting to receive result...\n";
result = myFuture.get();
cout << "Received " << result << ".\n";
} while( result < 9 );
}
int main()
{
std::thread write( writer );
std::thread read( reader );
write.join();
read.join();
return 0;
}
A problem with this approach, however, is that synchronization between the two threads can cause the writer to call promise::set_value() more than once between the reader's calls to future::get(), or future::get() to be called while the promise is being reset. These problems can be avoided with care (e.g. with proper sleeping between calls), but this takes us into the realm of hacking and guesswork rather than logically correct concurrency.
So although it's possible to reset a promise by assigning it to a fresh promise, doing so tends to raise broader synchronization issues.
A promise/future pair is designed to carry only a single value (or an exception). To do what you're describing, you probably want to adopt a different tool.
If you wish to have multiple threads (your readers) all stop at a common point, you might consider a barrier.
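One such alternative, sketched below, is a small monitor built from a mutex and a condition variable: the writer publishes each new value of 'p' together with a generation counter, and readers block until a generation newer than the one they last processed appears. The SharedValue type and all names in it are invented for this illustration (with C++20 you could also look at std::barrier or std::latch), so treat it as a starting point rather than a drop-in solution:
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

// Monitor holding the latest value and a generation counter.
struct SharedValue {
    std::mutex m;
    std::condition_variable cv;
    int value = 0;
    unsigned generation = 0;

    void publish(int v) {
        std::lock_guard<std::mutex> lock(m);
        value = v;
        ++generation;
        cv.notify_all();            // wake every waiting reader
    }

    int waitForNewerThan(unsigned& lastSeen) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [&] { return generation != lastSeen; });
        lastSeen = generation;
        return value;
    }
};

int main() {
    SharedValue p;

    auto reader = [&](int id) {
        unsigned lastSeen = 0;
        for (;;) {
            int v = p.waitForNewerThan(lastSeen);
            std::cout << "reader " << id << " processing " << v << "\n";
            if (v == 5) break;      // 5 is the last value the writer publishes
        }
    };

    std::thread r1(reader, 1), r2(reader, 2);
    for (int i = 1; i <= 5; ++i) {
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        p.publish(i);               // each publish "deactivates" the previous value
    }
    r1.join();
    r2.join();
    return 0;
}
Note that a slow reader may skip intermediate values and only ever see the latest one; if every value must be processed by every reader, you would need a queue per reader, or a proper barrier so the writer cannot advance until all readers have caught up.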
The following code demonstrates how the producer/consumer pattern can be implemented with future and promise.
There are two promise variables, used by a producer and a consumer thread. Each thread resets one of the two promise variables and waits for the other one.
#include <iostream>
#include <future>
#include <thread>
using namespace std;
// produces integers from 0 to 99
void producer(promise<int>& dataready, promise<void>& consumed)
{
for (int i = 0; i < 100; ++i) {
// do some work here ...
consumed = promise<void>{}; // reset
dataready.set_value(i); // make data available
consumed.get_future().wait(); // wait for the data to be consumed
}
dataready.set_value(-1); // no more data
}
// consumes integers
void consumer(promise<int>& dataready, promise<void>& consumed)
{
for (;;) {
int n = dataready.get_future().get(); // wait for data ready
if (n >= 0) {
std::cout << n << ",";
dataready = promise<int>{}; // reset
consumed.set_value(); // mark data as consumed
// do some work here ...
}
else
break;
}
}
int main(int argc, const char*argv[])
{
promise<int> dataready{};
promise<void> consumed{};
thread th1([&] {producer(dataready, consumed); });
thread th2([&] {consumer(dataready, consumed); });
th1.join();
th2.join();
std::cout << "\n";
return 0;
}