I'm having a problem where I have a main loop that needs to trigger async work and must not wait for it to finish. What I want is to check, on every iteration of the while loop, whether the async work is done.
This can be accomplished with the future.wait_for().
Since I don't want to block the main loop, I can use future.wait_for(0).
So far so good.
In addition, I'd like to verify that I received (or didn't receive) an answer within X ms.
I can do that by checking how long it has been since I launched the async job, and seeing which comes first: X ms passing or future_status::ready being returned.
My question - is this a good practice, or is there a better way to do it?
Some more information:
Since the main loop must launch many different async jobs, I end up with a lot of duplicated code: every launch needs to "remember" the timestamp at which it was launched, and every time I check whether an async job is ready, I need to recalculate the time difference for each job. This might be quite a hassle.
For now, this is an example of what I described (it might have build errors):
#include <chrono>
#include <future>

#define MAX_TIMEOUT_MS 30

bool myFunc()
{
    bool result = false;
    // do something for quite some time
    return result;
}

int main()
{
    using namespace std::chrono;

    int timeout_ms = MAX_TIMEOUT_MS;
    steady_clock::time_point start;
    bool async_return = false;
    std::future<bool> fut;   // declared outside the loop so it stays in scope across iterations
    std::future_status status = std::future_status::ready;
    int delta_ms = 0;

    while (true) {
        // On the first iteration, or once we have an answer, launch async again
        if (status == std::future_status::ready) {
            fut = std::async(std::launch::async, myFunc);
            start = steady_clock::now(); // record the start timestamp whenever we launch async()
        }
        // do something...
        status = fut.wait_for(std::chrono::seconds(0));
        // check how long since we launched async
        delta_ms = static_cast<int>(duration_cast<milliseconds>(steady_clock::now() - start).count());
        if (status != std::future_status::ready && delta_ms > timeout_ms) {
            break; // timed out without an answer
        } else if (status == std::future_status::ready) {
            async_return = fut.get(); // will not block here, the future is ready
            // and we do something with the result
        }
    }
    return 0;
}
One thing you might want to consider: If your while loop doesn't do any relevant work, and just checks for task completion, you may be doing a busy-wait (https://en.wikipedia.org/wiki/Busy_waiting).
This means you are wasting a lot of CPU time doing useless work. This may sound counter-intuitive, but it can negatively affect how quickly you detect task completion, even though you are constantly checking for it!
This can happen because this thread looks to the OS like it is doing a lot of work, so it may receive a high priority for processing, which can make the other threads (the ones doing your async jobs) look less important and take longer to complete. Of course, this is not set in stone and anything can happen, but it is still a waste of CPU if you are not doing any other work in that loop.
wait_for(0) is not the best option, since it can still suspend the execution of this thread even if the work is not ready yet, and it may take longer than you expect to resume (https://en.cppreference.com/w/cpp/thread/future/wait_for). std::future doesn't seem to have a truly non-blocking API yet (see "C++ async programming, how to not wait for future?"), but you can use other primitives, such as a mutex with try_lock (http://www.cplusplus.com/reference/mutex/try_lock/).
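For illustration only (this is not part of std::future's API, and the names reuse myFunc from the question), a genuinely non-blocking readiness check can be approximated by pairing the task with an atomic flag that the task sets when it finishes:

#include <atomic>
#include <future>
#include <iostream>

std::atomic<bool> done{false};   // the worker sets this when it has finished its work

bool myFunc()
{
    // ... long computation ...
    done.store(true, std::memory_order_release);
    return true;
}

int main()
{
    auto fut = std::async(std::launch::async, myFunc);
    while (!done.load(std::memory_order_acquire)) {
        // do other, useful work here instead of just spinning ...
    }
    // The flag is set, so the task body has finished; get() returns almost immediately.
    std::cout << std::boolalpha << fut.get() << '\n';
    return 0;
}

Note that get() may still block for the tiny window between the flag being set and the shared state becoming ready, but never for the duration of the work itself.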
That said, if your loop still does important work, this flow is OK to use. But you might want to have a queue of completed jobs to check instead of a single future. This queue would only be consumed by your main thread, and it can be implemented with a non-blocking, thread-safe "try_get" call that fetches the next completed job. As others commented, you may want to wrap your time-keeping logic in a job dispatcher class or similar.
Maybe something like this (pseudo code!):
struct WorkInfo {
time_type begin_at; // initialized on job dispatch
time_type finished_at;
// more info
};
thread_safe_vector<WorkInfo> finished_work;
void timed_worker_job() {
WorkInfo info;
info.begin_at = current_time();
do_real_job_work();
info.finished_at = current_time();
finished_work.push(info);
}
void main() {
...
while (app_loop)
{
dispatch_some_jobs();
WorkInfo workTemp;
while (finished_work.try_get(&workTemp)) // returns true if it fetched a completed job
{
handle_finished_job(workTemp);
}
}
...
}
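The thread_safe_vector and try_get above are pseudo code; as a rough illustration (assuming a mutex-protected queue is acceptable for your throughput), such a container could look like this:

#include <deque>
#include <mutex>

// Sketch of the "finished work" queue: push() is called by worker threads,
// try_get() is called by the main loop and never blocks.
template <typename T>
class ThreadSafeQueue {
public:
    void push(const T& item) {
        std::lock_guard<std::mutex> lock(mutex_);
        items_.push_back(item);
    }
    bool try_get(T* out) {               // returns true if it fetched a completed job
        std::lock_guard<std::mutex> lock(mutex_);
        if (items_.empty())
            return false;
        *out = items_.front();
        items_.pop_front();
        return true;
    }
private:
    std::mutex mutex_;
    std::deque<T> items_;
};

A lock-free queue could later be swapped in without changing the calling code.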
And if you are not familiar with them, I also suggest you read about Thread Pools (https://en.wikipedia.org/wiki/Thread_pool) and the Producer-Consumer pattern (https://en.wikipedia.org/wiki/Producer%E2%80%93consumer_problem).
The code below runs tasks async and checks later if they are finished.
I've added some fake work and waits to see the results.
#include <chrono>
#include <future>
#include <iostream>
#include <map>
#include <thread>
#include <vector>

using namespace std;

#define MAX_TIMEOUT_MS 30
struct fun_t {
size_t _count;
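// NOTE: these flags are written by the worker thread and read by main without synchronization; in real code they should be std::atomic<bool> to avoid a data race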
bool finished;
bool result;
fun_t () : _count (9999), finished (false), result (false) {
}
fun_t (size_t c) : _count (c), finished (false), result (false) {
}
fun_t (const fun_t & f) : _count (f._count), finished (f.finished), result (f.result) {
}
fun_t (fun_t && f) : _count (f._count), finished (f.finished), result (f.result) {
}
~fun_t () {
}
const fun_t & operator= (fun_t && f) {
_count = f._count;
finished = f.finished;
result = f.result;
return *this;
}
void run ()
{
for (int i = 0; i < 50; ++i) {
cout << _count << " " << i << endl;
std::this_thread::sleep_for(std::chrono::milliseconds(100));
}
result = true;
finished = true;
cout << " results: " << finished << ", " << result << endl;
}
operator bool () { return result; }
};
int main()
{
int timeout_ms = MAX_TIMEOUT_MS;
chrono::steady_clock::time_point start;
bool async_return = false;
std::future_status status = std::future_status::ready;
int delta_ms = 0;
std::map<size_t, fun_t> futs;
std::vector<std::future<void>> futfuncs;
size_t count = 0;
bool loop = true;
cout << "Begin --------------- " << endl;
while (loop) {
loop = false;
// On first time, or once we have an answer, launch async again
if (count < 3 && status == std::future_status::ready) {
//std::future<bool> fut = std::async (std::launch::async, myFunc);
futs[count] = std::move(fun_t(count));
//futs[futs.size() - 1].fut = std::async (std::launch::async, futs[futs.size() - 1]);
futfuncs.push_back (std::move(std::async(std::launch::async, &fun_t::run, &futs[count])));
}
// do something...
std::this_thread::sleep_for(std::chrono::seconds(2));
for (auto & f : futs) {
if (! f.second.finished) {
cout << " Not finished " << f.second._count << ", " << f.second.finished << endl;
loop = true;
} else {
bool aret = f.second;
cout << "Result: " << f.second._count << ", " << aret << endl;;
}
}
++count;
}
for (auto & f : futs) {
cout << " Verify " << f.second._count << ", " << f.second.finished;
if (f.second.finished) {
bool aret = f.second;
cout << "; result: " << aret;
}
cout << endl;
}
cout << "End --------------- " << endl;
return 0;
}
After removing lines (there are too many), you can see the tasks. The first number is the task id, the second is the iteration number.
Begin ---------------
0 0
0 1
0 2
Not finished 0, 0
1 0
0 20
1 1
Not finished 0, 0
Not finished 1, 0
2 0
1 20
0 40
2 1
0 49 // here task 0 ends
2 10
1 30
results: 1, 1 // "run" function ends
1 39
Result: 0, 1 // this is the verification "for"
Not finished 1, 0
Not finished 2, 0
results: 1, 1
Result: 0, 1
Result: 1, 1
Result: 2, 1
Verify 0, 1; result: 1
Verify 1, 1; result: 1
Verify 2, 1; result: 1
End ---------------
Related
Details:
In the program below, multiple threads (in this case only 2 for simplicity) listen out for the same value of 66 to be returned from the functions, following some logic in both functions that produces the result 66.
The threads use async, and the values of 66 are returned using futures. A while loop is used in an attempt to continually check the status of threads one and two to check if either of them have completed, in which case the fastest result from either of the threads is then fetched and used in some calculation.
Goal:
Out of the two threads, to detect which one of them is first to return the value of 66
As soon as a thread returns 66 (regardless of if the other thread has completed), the returned value is then made available in main() for some further simple arithmetic to be performed upon it
If a thread returns 66 and arithmetic is performed upon this value in main(), and then the other thread later on delivers 66 as well, this second returned value should not be used in any calculations
Please note: before deciding to post this question, the following resources have been consulted:
How to check if thread has finished work in C++11 and above?
Using Multithreading two threads return same value with different inputs?
C++ threads for background loading
Future returned by a function
Start multiple threads and wait only for one to finish to obtain results
Problems and Current Output:
Currently, the program always outputs that the first thread to finish is rf1, even if the code in function1 is substantially slower (e.g. a for loop with 1000 iterations in function1 and a for loop with 10 iterations in function2). This leads me to believe there is some sort of blocking behaviour somewhere that I may have introduced?
Program Attempt:
#include <future>
#include <iostream>
#include "unistd.h"
double function1() {
// Logic to retrieve _value
// ...
double _value = 66;
return _value;
}
double function2() {
// Logic to retrieve _value
// ...
double _value = 66;
return _value;
}
int main() {
double ret_value = 0;
auto rf1 = std::async(std::launch::async, function1);
auto status1 = rf1.wait_for(std::chrono::nanoseconds(1));
auto rf2 = std::async(std::launch::async, function2);
auto status2 = rf2.wait_for(std::chrono::nanoseconds(1));
while (true) {
if (status1 == std::future_status::ready) {
std::cout << "RF1 FINISHED FIRST" << std::endl;
// No longer need returned val from 2nd thread.
// Get result from 1st thread
ret_value = rf1.get();
break;
}
else if (status2 == std::future_status::ready) {
std::cout << "RF2 FINISHED FIRST" << std::endl;
// No longer need returned val from 1st thread.
// Get result from 2nd thread
ret_value = rf2.get();
break;
}
else if (status1 != std::future_status::ready) {
// RF1 not finished yet
status1 = rf1.wait_for(std::chrono::nanoseconds(1));
}
else if (status2 != std::future_status::ready) {
// RF2 not finished yet
status2 = rf2.wait_for(std::chrono::nanoseconds(1));
}
}
// Do some calculations on the quickest
// returned value
double some_value = ret_value + 40;
return 0;
}
Questions:
Q1. Can the program be modified in any way to detect the fastest thread to return so that the returned value of 66 can be used within main() for further calculations?
Q2. Has the while loop introduced any sort of blocking behaviour?
If anyone may be able to advise or point to some resources that could aid in solving this dilemma, it would be greatly appreciated. So far, it has been a challenge to find multithreading documentation that exactly matches this scenario.
EDIT:
Based on a helpful answer from #jxh, the else if conditions instructing the WHILE loop to continue waiting have been removed, as seen further below.
Furthermore, some logic has been added to function1 and function2 to see which one will finish first. As seen in the code, function1 has 98 iterations and function2 has 100 iterations, yet the output continually says that function2 has finished first:
#include <future>
#include <iostream>
#include "unistd.h"
double function1() {
// Logic to retrieve _value
for (int i = 0; i < 98; i++) {
std::cout << std::endl;
}
double _value = 66;
return _value;
}
double function2() {
// Logic to retrieve _value
for (int i = 0; i < 100; i++) {
std::cout << std::endl;
}
double _value = 66;
return _value;
}
int main() {
double ret_value = 0;
auto rf1 = std::async(std::launch::async, function1);
auto status1 = rf1.wait_for(std::chrono::nanoseconds(1));
auto rf2 = std::async(std::launch::async, function2);
auto status2 = rf2.wait_for(std::chrono::nanoseconds(1));
while (true) {
if (status1 == std::future_status::ready) {
std::cout << "RF1 FINISHED FIRST" << std::endl;
// No longer need returned val from 2nd thread.
// Get result from 1st thread
ret_value = rf1.get();
break;
}
else if (status2 == std::future_status::ready) {
std::cout << "RF2 FINISHED FIRST" << std::endl;
// No longer need returned val from 1st thread.
// Get result from 2nd thread
ret_value = rf2.get();
break;
}
status1 = rf1.wait_for(std::chrono::nanoseconds(1));
status2 = rf2.wait_for(std::chrono::nanoseconds(1));
}
// Do some calculations on the quickest
// returned value
double some_value = ret_value + 40;
return 0;
}
The logic in your code inside the while will keep calling only rf1.wait_for() until its status is ready, so status2 is never updated.
Since your ready checks will break out of the loop, you do not need to use else if to decide to do further waiting. Just do the two waits again, like you did before you entered the while.
status1 = rf1.wait_for(std::chrono::nanoseconds(1));
status2 = rf2.wait_for(std::chrono::nanoseconds(1));
You have updated your question and you changed the behavior of the called functions away from the original behavior. Instead of hashing over the changes, let's talk about the problem in general terms.
You are attempting to wait at a nanosecond resolution.
Your threads as currently implemented only differ by 2 iterations of a fairly trivial loop body.
The compiler is free to optimize the code in such a way that the functions could execute in nearly the same amount of time.
So, a 1 nanosecond peek back and forth on the futures is not a reliable way to determine which future was actually returned first.
To resolve a close finish, you could use a buzzer, like they do in game shows. The first to buzz is clearly indicated by a signal. Instead of trying to implement a real buzzer, we can mimic one with a pipe. And, the function buzzes in by writing a value to the pipe.
#include <unistd.h>   // pipe(), read(), write()

int photo_finish[2];
double function1() {
int id = 1;
//...
write(photo_finish[1], &id, sizeof(id));
double _value = 66;
return _value;
}
double function2() {
int id = 2;
//...
write(photo_finish[1], &id, sizeof(id));
double _value = 66;
return _value;
}
The waiting code then reads from the pipe, and observes the value. The value indicates which function completed first, and so the code then waits on the appropriate future to get the value.
pipe(photo_finish);
auto rf1 = std::async(std::launch::async, function1);
auto rf2 = std::async(std::launch::async, function2);
int id = 0;
read(photo_finish[0], &id, sizeof(id));
if (id == 1) {
double ret_value = rf1.get(); // this future finished first, so get() returns right away
//...
} else if (id == 2) {
double ret_value = rf2.get();
//...
}
Note that the sample code omits error handling for brevity.
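If you would rather stay in portable C++ and avoid the POSIX pipe, a similar "buzzer" can be approximated with an atomic variable. This is only a sketch (first_finisher and the spin loop are made up, not part of the answer above): whichever function wins the compare-exchange is recorded as the first finisher.

#include <atomic>
#include <future>
#include <iostream>
#include <thread>

std::atomic<int> first_finisher{0};   // 0 = nobody has buzzed in yet, otherwise the winner's id

double function1() {
    // ... logic to retrieve _value ...
    int expected = 0;
    first_finisher.compare_exchange_strong(expected, 1);  // buzz in if nobody has yet
    return 66;
}

double function2() {
    // ... logic to retrieve _value ...
    int expected = 0;
    first_finisher.compare_exchange_strong(expected, 2);
    return 66;
}

int main() {
    auto rf1 = std::async(std::launch::async, function1);
    auto rf2 = std::async(std::launch::async, function2);

    int id = 0;
    while ((id = first_finisher.load()) == 0) {
        std::this_thread::yield();        // spin until one function buzzes in
    }

    double ret_value = (id == 1) ? rf1.get() : rf2.get();
    std::cout << "RF" << id << " FINISHED FIRST, value " << ret_value + 40 << std::endl;
    return 0;
}

This version spins in main, so it has the busy-wait caveat discussed earlier in this thread; the pipe version blocks in read() instead.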
I have an asynchronous process running (using std::async) whose execution time I want to measure, and I want to kill it if it is taking too long. This process also returns a value after its execution; I would like to assign some default value as the result if it takes too long to compute. Any help/suggestions would be much appreciated!
#include <thread>
#include <future>
int compute(int val)
{
int result;
// do large computations
return result;
}
int main()
{
auto compute_thread = std::async(compute, 100);
// TODO: wait for result only for x milliseconds else assign some default value
int result = compute_thread.get();
// resume sequential code.
int final = result * 2;
}
Here is what my idea looks like (see the inline code comments):
#include <chrono>
#include <limits>

// Performs computations and exits when the computation takes
// longer than maxTime. If the execution times out, the
// function returns valueIfTooLong.
// If the computation completes in time, the function returns 0.
static int compute(int maxTime /*ms*/, int valueIfTooLong)
{
auto start = std::chrono::steady_clock::now();
for (short i = 0; i < std::numeric_limits<short>::max(); ++i)
{
auto now = std::chrono::steady_clock::now();
if (std::chrono::duration_cast<std::chrono::milliseconds>(now - start).count() > maxTime)
{
return valueIfTooLong;
}
}
return 0;
}
Usage of the function:
#include <future>
#include <iostream>

int main()
{
const auto valueIfTooLong = 111;
const auto waitingTime = 10; // ms.
auto compute_thread = std::async(std::launch::async, compute, waitingTime, valueIfTooLong);
// Wait for result only for waitingTime milliseconds else assign valueIfTooLong
int result = compute_thread.get();
if (result == valueIfTooLong)
{
std::cout << "The calculation was longer than "
<< waitingTime << "ms. and has been terminated" << '\n';
}
else
{
std::cout << "The calculation is done" << '\n';
}
return 0;
}
You can use
#include <chrono>
#include <future>

std::future<int> compute_thread;
int main()
{
auto timeToWait = std::chrono::system_clock::now() + std::chrono::minutes(1); // wait for a minute
compute_thread = std::async(std::launch::async, compute, 100); // force a real thread so wait_until() cannot return future_status::deferred
// TODO: wait for result only for x milliseconds else assign some default value
std::future_status status = compute_thread.wait_until(timeToWait);
if (status == std::future_status::ready) {
int final = compute_thread.get() * 2;
} else {
// you need another value based on what you're doing
}
}
Note: if your async job is a long computation, you may for example have another function that calculates the same thing less accurately...
In this case the async task is not killed. You only wait for its completion (if it finishes in time) and keep doing your job if the result is not ready... It's a way of not being blocked on compute_thread.wait().
Note 2: std::future<int> compute_thread is declared as a global because, if you do this in a function (other than main), you have to make sure that compute_thread outlives the function.
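Since there is no portable way to kill a std::async task from the outside, a common complement to the approach above is cooperative cancellation. The sketch below is illustrative only (the stop flag and the loop inside compute are assumptions, not part of the original code): the caller waits a bounded time, and on timeout asks the worker to stop and falls back to a default value.

#include <atomic>
#include <chrono>
#include <future>
#include <iostream>

// Hypothetical variant of compute() that periodically checks a stop flag.
int compute(int val, std::atomic<bool>& stop)
{
    int result = 0;
    for (int i = 0; i < 1000000 && !stop.load(); ++i) {
        result += i % val;               // a slice of the large computation
    }
    return result;
}

int main()
{
    std::atomic<bool> stop{false};
    auto compute_thread = std::async(std::launch::async, compute, 100, std::ref(stop));

    int result;
    if (compute_thread.wait_for(std::chrono::milliseconds(30)) == std::future_status::ready) {
        result = compute_thread.get();   // finished in time
    } else {
        stop = true;                     // ask the worker to bail out
        compute_thread.get();            // drain the future (returns once the worker notices the flag)
        result = -1;                     // hypothetical default value
    }
    std::cout << "result = " << result << '\n';
    return 0;
}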
I have n jobs with no shared resources between them, and m threads. I want to efficiently divide the jobs among the threads in such a way that there is no idle thread until everything is processed.
This is a prototype of my program:
class Job {
//constructor and other stuff
//...
public: void doWork();
};
struct JobParams{
int threadId;
Job job;
};
void* doWorksOnThread(void* job) {
JobParams* j = static_cast<JobParams*>(job); // cast the void* argument
cout << "Thread #" << j->threadId << " started" << endl;
j->job.doWork();
return (void*)0;
}
Then in my main file I have something like:
int main() {
vector<Job> jobs; // lets say it has 17 jobs
int numThreads = 4;
pthread_t* threads = new pthread_t[numThreads];
JobParams* jps = new JobParams[jobs.size()];
for(int i = 0; i < jobs.size(); i++) {
jps[i].job = jobs[i];
}
for(int i = 0; i < numThreads; i++) {
pthread_create(&threads[i], NULL, doWorksOnThread, &jps[i]);
}
//another for loop and call join on 4 threads...
return 0;
}
How can I efficiently make sure that there is no idle thread until all jobs are completed?
You'll need to add a loop to identify the threads that completed and then start new ones, making sure you always have up to 4 threads running.
Here is a very basic way to do that. Using a sleep, as proposed, could be a good start and will do the job (even if it adds an extra delay before you figure out that the last thread has completed). Ideally, you should use a condition variable, notified by a thread when its job is done, to wake up the main loop (the sleep instruction would then be replaced by a wait on the condition variable); a sketch of that variant follows after the code below.
struct JobParams{
int threadId;
Job job;
std::atomic<bool> done; // flag to know when job is done, could also be an attribute of Job class!
};
void* doWorksOnThread(void* job) {
JobParams* j = static_cast<JobParams*>(job); // cast the void* argument
cout << "Thread #" << j->threadId << " started" << endl;
j->job->doWork();
j->done = true; // signal job completed
return (void*)0;
}
int main() {
....
std::map<JobParams*, pthread_t> runningThreads; // to keep track of running jobs
for(int i = 0; i < jobs.size(); i++) {
jps[i].job = jobs[i];
jps[i].done = false; // mark as not done yet
}
while ( true )
{
vector<JobParams*> todo;
for( int i = 0; i < jobs.size(); i++ )
{
if ( !jps[i].done )
{
if ( runningThreads.find(&jps[i]) == runningThreads.end() )
todo.push_back( &jps[i] ); // job not started yet, mark as to be done
// else, a thread is already processing the job and did not complete it yet
}
else
{
if ( runningThreads.find(&jps[i]) != runningThreads.end() )
{
// thread just completed the job!
// let's join to wait for the thread to end cleanly
// I'm not familiar with pthread, hope this is correct
void* res;
pthread_join(runningThreads[&jps[i]], &res);
runningThreads.erase(&jps[i]); // not running anymore
}
// else, job was already done and thread joined from a previous iteration
}
}
if ( todo.empty() && runningThreads.empty() )
break; // done all jobs
// some jobs remain undone
if ( runningThreads.size() < numThreads && !todo.empty() )
{
// some new threads shall be started...
int newThreadsToBeCreatedCount = numThreads - runningThreads.size();
// make sure you don't end up with too many threads running
if ( todo.size() > newThreadsToBeCreatedCount )
todo.resize( newThreadsToBeCreatedCount );
for ( auto jobParam : todo )
{
pthread_t thread;
pthread_create(&thread, NULL, doWorksOnThread, jobParam);
runningThreads[jobParam] = thread;
}
}
// else: you already have 4 running jobs
// sanity check that everything went as expected:
assert( runningThreads.size() <= numThreads );
std::this_thread::sleep_for(std::chrono::milliseconds(100)); // give a chance for some jobs to complete (100 ms)
// adjust sleep duration if necessary
}
}
Note: I'm not very familiar with pthread. Hope the syntax is correct.
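As mentioned before the code, the 100 ms sleep can be replaced with a condition variable so that the main loop wakes up as soon as any job completes. A rough sketch of that variant (it reuses the JobParams struct above; the counter and helper function are illustrative):

#include <condition_variable>
#include <mutex>

std::mutex doneMutex;
std::condition_variable doneCond;
int completedSinceLastCheck = 0;         // bumped by workers, consumed by the main loop

// Modified worker: same as above, plus a notification when the job is done.
void* doWorksOnThread(void* job) {
    JobParams* j = static_cast<JobParams*>(job);
    j->job.doWork();
    j->done = true;                      // signal job completed
    {
        std::lock_guard<std::mutex> lock(doneMutex);
        ++completedSinceLastCheck;
    }
    doneCond.notify_one();               // wake the dispatcher loop
    return (void*)0;
}

// Called from the main loop where the sleep used to be.
void waitForAnyCompletion() {
    std::unique_lock<std::mutex> lock(doneMutex);
    doneCond.wait(lock, [] { return completedSinceLastCheck > 0; });
    completedSinceLastCheck = 0;         // something finished, go re-scan the job list
}

Because the wait uses a predicate, a completion that happens while the main loop is still scanning is not lost; the next wait returns immediately.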
I am trying to solve the following problem. I know there are multiple solutions, but I'm looking for the most elegant way (the least code) to solve it.
I have 4 threads; 3 of them try to write a unique value (0, 1, or 2) to a volatile integer variable in an infinite loop, and the fourth thread tries to read the value of this variable and print it to stdout, also in an infinite loop.
I'd like to sync the threads so that the thread that writes 0 runs, then the "print" thread, then the thread that writes 1, then the print thread again, and so on...
So what I finally expect to see in the output of the "print" thread is a sequence of zeros, then a sequence of 1s, then 2s, then 0s, and so on...
What is the most elegant and easy way to sync these threads?
This is the program code:
#include <windows.h>
#include <iostream>
using namespace std;

volatile int value;
int thid[4];

void WINAPI ThreadProc( LPVOID param ); // forward declaration so main() compiles

int main() {
HANDLE handle[4];
for (int ii=0;ii<4;ii++) {
thid[ii]=ii;
handle[ii] = (HANDLE) CreateThread( NULL, 0, (LPTHREAD_START_ROUTINE) ThreadProc, &thid[ii], 0, NULL);
}
WaitForMultipleObjects(4, handle, TRUE, INFINITE); // keep main() alive while the threads run
return 0;
}
void WINAPI ThreadProc( LPVOID param ) {
int h=*((int*)param);
switch (h) {
case 3:
while(true) {
cout << value << endl;
}
break;
default:
while(true) {
// setting a unique value to the volatile variable
value=h;
}
break;
}
}
Your problem can be solved with the producer-consumer pattern.
I was inspired by Wikipedia, so here is the link if you want some more details:
https://en.wikipedia.org/wiki/Producer%E2%80%93consumer_problem
I used a random number generator to produce the values for the volatile variable, but you can change that part.
Here is the code. It can be improved in terms of style (e.g. using C++11 <random> for the random numbers), but it produces what you expect.
#include <iostream>
#include <sstream>
#include <vector>
#include <stack>
#include <thread>
#include <mutex>
#include <atomic>
#include <condition_variable>
#include <chrono>
#include <stdlib.h> /* srand, rand */
using namespace std;
//random number generation
std::mutex mutRand;//mutex for random number generation (given that the random generator is not thread safe).
int GenerateNumber()
{
std::lock_guard<std::mutex> lk(mutRand);
return rand() % 3;
}
// print function for "thread safe" printing using a stringstream
void print(ostream& s) { cout << s.rdbuf(); cout.flush(); s.clear(); }
// Constants
//
const int num_producers = 3; //the three producers of random numbers
const int num_consumers = 1; //the only consumer
const int producer_delay_to_produce = 10; // in miliseconds
const int consumer_delay_to_consume = 30; // in miliseconds
const int consumer_max_wait_time = 200; // in miliseconds - max time that a consumer can wait for a product to be produced.
const int max_production = 1; // When producers has produced this quantity they will stop to produce
const int max_products = 1; // Maximum number of products that can be stored
//
// Variables
//
atomic<int> num_producers_working(0); // When there's no producer working the consumers will stop, and the program will stop.
stack<int> products; // The products stack, here we will store our products
mutex xmutex; // Our mutex, without this mutex our program will cry
condition_variable is_not_full; // to indicate that our stack is not full between the thread operations
condition_variable is_not_empty; // to indicate that our stack is not empty between the thread operations
//
// Functions
//
// Produce function, producer_id will produce a product
void produce(int producer_id)
{
while (true)
{
unique_lock<mutex> lock(xmutex);
int product;
is_not_full.wait(lock, [] { return products.size() != max_products; });
product = GenerateNumber();
products.push(product);
print(stringstream() << "Producer " << producer_id << " produced " << product << "\n");
is_not_empty.notify_all();
}
}
// Consume function, consumer_id will consume a product
void consume(int consumer_id)
{
while (true)
{
unique_lock<mutex> lock(xmutex);
int product;
if(is_not_empty.wait_for(lock, chrono::milliseconds(consumer_max_wait_time),
[] { return products.size() > 0; }))
{
product = products.top();
products.pop();
print(stringstream() << "Consumer " << consumer_id << " consumed " << product << "\n");
is_not_full.notify_all();
}
}
}
// Producer function, this is the body of a producer thread
void producer(int id)
{
++num_producers_working;
for(int i = 0; i < max_production; ++i)
{
produce(id);
this_thread::sleep_for(chrono::milliseconds(producer_delay_to_produce));
}
print(stringstream() << "Producer " << id << " has exited\n");
--num_producers_working;
}
// Consumer function, this is the body of a consumer thread
void consumer(int id)
{
// Wait until there is any producer working
while(num_producers_working == 0) this_thread::yield();
while(num_producers_working != 0 || products.size() > 0)
{
consume(id);
this_thread::sleep_for(chrono::milliseconds(consumer_delay_to_consume));
}
print(stringstream() << "Consumer " << id << " has exited\n");
}
//
// Main
//
int main()
{
vector<thread> producers_and_consumers;
// Create producers
for(int i = 0; i < num_producers; ++i)
producers_and_consumers.push_back(thread(producer, i));
// Create consumers
for(int i = 0; i < num_consumers; ++i)
producers_and_consumers.push_back(thread(consumer, i));
// Wait for consumers and producers to finish
for(auto& t : producers_and_consumers)
t.join();
return 0;
}
Hope that helps, tell me if you need more info or if you disagree with something :-)
And happy Bastille Day to all the French people!
If you want to synchronise the threads, you can use a sync object to hold each of the threads in a "ping-pong" or "tick-tock" pattern.
In C++11 you can use condition variables; the example below shows something similar to what you are asking for.
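For reference, here is a minimal sketch of that tick-tock idea applied to the question above, written with std::thread and std::condition_variable (the structure is illustrative, not the only way to do it): each writer waits for its turn, and the printer runs after every write.

#include <atomic>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

std::mutex m;
std::condition_variable cv;
int value = 0;
int turn = 0;                  // 0, 1, 2 = that writer's turn; 3 = printer's turn
std::atomic<bool> running{true};

void writer(int id)
{
    while (running) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [id] { return turn == id || !running; });
        if (!running) break;
        value = id;            // write this thread's unique value
        turn = 3;              // hand over to the printer
        cv.notify_all();
    }
}

void printer(int iterations)
{
    for (int i = 0; i < iterations; ++i) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return turn == 3; });
        std::cout << value << std::endl;
        turn = (value + 1) % 3;   // next writer's turn
        cv.notify_all();
    }
    std::lock_guard<std::mutex> lock(m);
    running = false;              // stop the writers
    cv.notify_all();
}

int main()
{
    std::vector<std::thread> threads;
    for (int id = 0; id < 3; ++id) threads.emplace_back(writer, id);
    threads.emplace_back(printer, 12);    // prints 0 1 2 0 1 2 ... and then stops
    for (auto& t : threads) t.join();
    return 0;
}

The question's version would loop forever; here the printer stops after a fixed number of values so the program can exit cleanly.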
I have a total n00b question here on synchronization. I have a 'writer' thread which assigns a different value 'p' to a promise at each iteration. I need 'reader' threads which wait for shared_futures of this value and then process them. My question is: how do I use future/promise to ensure that the reader threads wait for a new update of 'p' before performing their processing task at each iteration? Many thanks.
You can "reset" a promise by assigning it to a blank promise.
myPromise = promise< int >();
A more complete example:
#include <chrono>
#include <future>
#include <iostream>
#include <thread>
using namespace std;

promise< int > myPromise;
void writer()
{
for( int i = 0; i < 10; ++i )
{
cout << "Setting promise.\n";
myPromise.set_value( i );
myPromise = promise< int >{}; // Reset the promise.
cout << "Waiting to set again...\n";
this_thread::sleep_for( chrono::seconds( 1 ));
}
}
void reader()
{
int result;
do
{
auto myFuture = myPromise.get_future();
cout << "Waiting to receive result...\n";
result = myFuture.get();
cout << "Received " << result << ".\n";
} while( result < 9 );
}
int main()
{
std::thread write( writer );
std::thread read( reader );
write.join();
read.join();
return 0;
}
A problem with this approach, however, is that synchronization between the two threads can cause the writer to call promise::set_value() more than once between the reader's calls to future::get(), or future::get() to be called while the promise is being reset. These problems can be avoided with care (e.g. with proper sleeping between calls), but this takes us into the realm of hacking and guesswork rather than logically correct concurrency.
So although it's possible to reset a promise by assigning it to a fresh promise, doing so tends to raise broader synchronization issues.
A promise/future pair is designed to carry only a single value (or a single exception). To do what you're describing, you probably want to adopt a different tool.
If you wish to have multiple threads (your readers) all stop at a common point, you might consider a barrier.
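If C++20 is available, the barrier idea can be sketched directly with std::barrier (on older standards a condition variable can play the same role). This is only an illustration with made-up names: the writer publishes a new value of p, then the writer and all readers meet at the barrier before and after each processing step, so no reader ever runs on a stale value.

#include <barrier>      // C++20
#include <iostream>
#include <thread>
#include <vector>

int p = 0;                               // shared value, only written while the readers are waiting
constexpr int num_readers = 3;

int main()
{
    std::barrier sync_point(num_readers + 1);   // writer + readers all participate

    std::vector<std::thread> readers;
    for (int id = 0; id < num_readers; ++id) {
        readers.emplace_back([&, id] {
            for (int i = 0; i < 5; ++i) {
                sync_point.arrive_and_wait();   // wait until the writer has published p
                std::cout << "reader " << id << " sees p = " << p << '\n';
                sync_point.arrive_and_wait();   // tell the writer everyone is done reading
            }
        });
    }

    std::thread writer([&] {
        for (int i = 0; i < 5; ++i) {
            p = i;                              // publish a new value
            sync_point.arrive_and_wait();       // release the readers
            sync_point.arrive_and_wait();       // wait until all readers have processed it
        }
    });

    writer.join();
    for (auto& r : readers) r.join();
    return 0;
}

Each iteration the readers block until the writer has produced the next update, which is the behaviour asked about, without reusing a promise.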
The following code demonstrates how the producer/consumer pattern can be implemented with future and promise.
There are two promise variables, used by a producer and a consumer thread. Each thread resets one of the two promise variables and waits for the other one.
#include <iostream>
#include <future>
#include <thread>
using namespace std;
// produces integers from 0 to 99
void producer(promise<int>& dataready, promise<void>& consumed)
{
for (int i = 0; i < 100; ++i) {
// do some work here ...
consumed = promise<void>{}; // reset
dataready.set_value(i); // make data available
consumed.get_future().wait(); // wait for the data to be consumed
}
dataready.set_value(-1); // no more data
}
// consumes integers
void consumer(promise<int>& dataready, promise<void>& consumed)
{
for (;;) {
int n = dataready.get_future().get(); // wait for data ready
if (n >= 0) {
std::cout << n << ",";
dataready = promise<int>{}; // reset
consumed.set_value(); // mark data as consumed
// do some work here ...
}
else
break;
}
}
int main(int argc, const char*argv[])
{
promise<int> dataready{};
promise<void> consumed{};
thread th1([&] {producer(dataready, consumed); });
thread th2([&] {consumer(dataready, consumed); });
th1.join();
th2.join();
std::cout << "\n";
return 0;
}