Get return code from std::thread? [duplicate] - c++

Possible Duplicate:
C++: Simple return value from std::thread?
Is there any way to get the return code from a std::thread? I have a function which returns an integer, and I want to be able to get that return value when the thread is done executing.

No, that's not what std::thread is for.
Instead, use async to get a future:
#include <future>
int myfun(double, char, bool);
auto f = std::async(myfun, arg1, arg2, arg3); // f is a std::future<int>
// ...
int res = f.get();
You can use the wait_for member function of f (with zero timeout) to see if the result is ready.
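For completeness, here is a minimal runnable sketch (not from the original answer, with a placeholder myfun) that polls the future with a zero timeout before blocking on the result; std::launch::async is passed so that wait_for cannot keep reporting future_status::deferred:
#include <chrono>
#include <future>
#include <iostream>

int myfun(double d, char c, bool b) { return b ? static_cast<int>(d) + c : 0; }

int main()
{
    auto f = std::async(std::launch::async, myfun, 1.5, 'a', true);
    while (f.wait_for(std::chrono::seconds(0)) != std::future_status::ready)
    {
        // the result is not ready yet; do other work here
    }
    std::cout << f.get() << '\n'; // prints 98 (1 + 'a')
}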

As others have suggested, the facilities in <future> can be used for this. However, I object to the answer
No, you can't do this with std::thread
Here is one way to do what you want with std::thread. It is by no means the only way:
#include <thread>
#include <iostream>
int func(int x)
{
return x+1;
}
int main()
{
int i;
std::thread t([&] {i = func(2);});
t.join();
std::cout << i << '\n';
}
This will portably output:
3

Kerrek SB is correct with his answer, but I suggested adding another example (which he suggested should be an answer, so here it is).
I discovered recently that, at least in VC11, std::async will not release all of the thread's resources until the end of the application, making it possible to get memory-leak false positives (if you are monitoring leaks using, for example, Visual Leak Detector).
For most basic applications the rest of this answer is not worth reading; but if, like me, you need to check for memory leaks and cannot tolerate false positives (such as static data not released before the end of main), then this might help.
std::async is not guaranteed to run in a separate thread by default; it does so only if you pass std::launch::async as the first parameter. Otherwise the implementation decides what to do, which is why the VC11 implementation uses the Microsoft Concurrency Runtime task manager: the provided function is pushed as a task into a task pool, meaning threads are maintained and managed transparently. There are ways to explicitly terminate the task manager, but they are too platform-specific, which makes async a poor choice when you want to 1) be sure to launch a thread, 2) get a result later, and 3) be sure the thread is fully released when you get the result.
The alternative that does exactly that is to use std::packaged_task and std::thread in combination with std::future. The usage is very similar to std::async, just a bit more verbose (which means you can generalize it into a custom template function if you want).
#include <future> // std::packaged_task, std::future (there is no <packaged_task> header)
#include <thread> // std::thread

int myfun(double, char, bool);

std::packaged_task<int(double, char, bool)> task(myfun); // arguments are supplied when the task is invoked
auto f = task.get_future(); // f is a std::future<int>
First we create the task: basically an object containing both the function and the std::promise that will be associated with the future. std::packaged_task works mostly like an augmented version of std::function.
Now we need to execute the thread explicitly:
std::thread thread(std::move(task), arg1, arg2, arg3);
thread.detach();
The move is necessary because std::packaged_task is not copyable. Detaching the thread is appropriate here only because we synchronize through the future; otherwise you need to join the thread explicitly. If you do neither, the thread's destructor will call std::terminate().
// ...
int res = f.get(); // Synchronization and retrieval.
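Putting the fragments together, a minimal runnable sketch (with a placeholder myfun, since the original only declares it):
#include <future>
#include <iostream>
#include <thread>

int myfun(double d, char c, bool b) { return b ? static_cast<int>(d) + c : 0; }

int main()
{
    std::packaged_task<int(double, char, bool)> task(myfun);
    std::future<int> f = task.get_future();

    std::thread thread(std::move(task), 1.5, 'a', true);
    thread.detach();          // we synchronize through the future instead of joining

    int res = f.get();        // blocks until the task has produced its value
    std::cout << res << '\n';
}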

Here's an example using packaged_task:
#include <future>
#include <iostream>
#include <thread>
void task_waiter(std::future<int>&& f) {
std::future<int> ft = std::move(f);
int result = ft.get();
std::cout << result << '\n';
}
int the_task() {
return 17;
}
int main() {
std::packaged_task<int()> task(the_task);
std::thread thr(task_waiter, task.get_future());
task();
thr.join();
return 0;
}

Related

Why is the std::future returned from std::packaged_task different from the one returned by std::async?

I have learned that the future returned from std::async has a special shared state through which waiting on the returned future happens in the future's destructor. But when we use std::packaged_task, its future does not exhibit the same behavior.
To complete a packaged task, you have to explicitly call get() on the future object obtained from the packaged_task.
Now my questions are:
What could the internal implementation of the future be (std::async vs std::packaged_task)?
Why is the same behavior not applied to the future returned from std::packaged_task? Or, in other words, how is the same behavior prevented for a std::packaged_task future?
To see the context, please see the code below:
It does not wait for the countdown task to finish. However, if I un-comment // int value = ret.get();, it does finish the countdown, which is obvious because we are literally blocking on the returned future.
// packaged_task example
#include <iostream> // std::cout
#include <future> // std::packaged_task, std::future
#include <chrono> // std::chrono::seconds
#include <thread> // std::thread, std::this_thread::sleep_for
// count down taking a second for each value:
int countdown (int from, int to) {
for (int i=from; i!=to; --i) {
std::cout << i << std::endl;
std::this_thread::sleep_for(std::chrono::seconds(1));
}
std::cout << "Lift off!" <<std::endl;
return from-to;
}
int main ()
{
std::cout << "Start " << std::endl;
std::packaged_task<int(int,int)> tsk (countdown); // set up packaged_task
std::future<int> ret = tsk.get_future(); // get future
std::thread th (std::move(tsk),10,0); // spawn thread to count down from 10 to 0
// int value = ret.get(); // wait for the task to finish and get result
std::cout << "The countdown lasted for " << std::endl;//<< value << " seconds.\n";
th.detach();
return 0;
}
If I use std::async to execute the countdown task on another thread, it always finishes the task, whether or not I call get() on the returned future object.
// packaged_task example
#include <iostream> // std::cout
#include <future> // std::packaged_task, std::future
#include <chrono> // std::chrono::seconds
#include <thread> // std::thread, std::this_thread::sleep_for
// count down taking a second for each value:
int countdown (int from, int to) {
for (int i=from; i!=to; --i) {
std::cout << i << std::endl;
std::this_thread::sleep_for(std::chrono::seconds(1));
}
std::cout << "Lift off!" <<std::endl;
return from-to;
}
int main ()
{
std::cout << "Start " << std::endl;
std::packaged_task<int(int,int)> tsk (countdown); // set up packaged_task
std::future<int> ret = tsk.get_future(); // get future
auto fut = std::async(std::move(tsk), 10, 0);
// int value = fut.get(); // wait for the task to finish and get result
std::cout << "The countdown lasted for " << std::endl;//<< value << " seconds.\n";
return 0;
}
std::async has definite knowledge of how and where the task it is given is executed. That is its job: to execute the task. To do that, it has to actually put it somewhere. That somewhere could be a thread pool, a newly created thread, or in a place to be executed by whomever destroys the future.
Because async knows how the function will be executed, it has 100% of the information it needs to build a mechanism that can communicate when that potentially asynchronous execution has concluded, as well as to ensure that if you destroy the future, then whatever mechanism that's going to execute that function will eventually get around to actually executing it. After all, it knows what that mechanism is.
But packaged_task doesn't. All packaged_task does is store a callable object which can be called with the given arguments, create a promise with the type of the function's return value, and provide a means to both get a future and to execute the function that generates the value.
When and where the task actually gets executed is none of packaged_task's business. Without that knowledge, the synchronization needed to make future's destructor synchronize with the task simply can't be built.
Let's say you want to execute the task on a freshly-created thread. OK, so to synchronize its execution with the future's destruction, you'd need a mutex which the destructor will block on until the task thread finishes.
But what if you want to execute the task in the same thread as the caller of the future's destructor? Well, then you can't use a mutex to synchronize that, since it's all on the same thread. Instead, you need to make the destructor invoke the task. That's a completely different mechanism, and it is contingent on how you plan to execute.
Because packaged_task doesn't know how you intend to execute it, it cannot do any of that.
Note that this is not unique to packaged_task. No future created from a user-created promise object will have the special property of async's futures.
So the question really ought to be why async works this way, not why everyone else doesn't.
If you want to know that, it's because of two competing needs: async needed to be a high-level, brain-dead simple way to get asynchronous execution (for which synchronization-on-destruction makes sense), and nobody wanted to create a new future type that was identical to the existing one save for the behavior of its destructor. So they decided to overload how future works, complicating its implementation and usage.
@Nicol Bolas has already answered this question quite satisfactorily. So I'll attempt to answer it from a slightly different perspective, elaborating on the points already mentioned by @Nicol Bolas.
The design of related things and their goals
Consider this simple function which we want to execute, in various ways:
int add(int a, int b) {
std::cout << "adding: " << a << ", "<< b << std::endl;
return a + b;
}
Forget std::packaged_task, std::future and std::async for a while; let's take a step back and revisit how std::function works and what problem it causes.
case 1 — std::function isn't good enough for executing things in different threads
std::function<int(int,int)> f { add };
Once we have f, we can execute it, in the same thread, like:
int result = f(1, 2); //note we can get the result here
Or, in a different thread, like this:
std::thread t { std::move(f), 3, 4 };
t.join();
If we look carefully, we realize that executing f in a different thread creates a new problem: how do we get the result of the function? Executing f in the same thread does not have that problem, since we get the result as the returned value; but when executing it in a different thread, we have no way to get the result. That is exactly what std::packaged_task solves.
case 2 — std::packaged_task solves the problem which std::function does not solve
In particular, it creates a channel between threads through which the result can be sent to the other thread. Apart from that, it is more or less the same as std::function.
std::packaged_task<int(int,int)> f { add }; // almost same as before
std::future<int> channel = f.get_future(); // get the channel
std::thread t{ std::move(f), 30, 40 }; // same as before
t.join(); // same as before
int result = channel.get(); // problem solved: get the result from the channel
Now you see how std::packaged_task solves the problem left open by std::function. That does not mean, however, that a std::packaged_task has to be executed in a different thread. You can execute it in the same thread as well, just like std::function, and you still get the result from the channel.
std::packaged_task<int(int,int)> f { add }; // same as before
std::future<int> channel = f.get_future(); // same as before
f(10, 20); // execute it in the current thread !!
int result = channel.get(); // same as before
So fundamentally std::function and std::packaged_task are the same kind of thing: they simply wrap a callable entity, with one difference: std::packaged_task is multithreading-friendly, because it provides a channel through which it can pass the result to other threads. Neither of them executes the wrapped callable entity by itself. You need to invoke them, either in the same thread or in another thread, to execute the wrapped callable entity. So basically there are two kinds of thing in this space:
what is executed, i.e. regular functions, std::function, std::packaged_task, etc.
how/where it is executed, i.e. threads, thread pools, executors, etc.
case 3: std::async is an entirely different thing
It's a different thing because it combines what-is-executed with how/where-is-executed.
std::future<int> fut = std::async(add, 100, 200);
int result = fut.get();
Note that in this case the future has an associated executor, which means it will complete at some point, as someone is executing things behind the scenes. In the case of a future created by std::packaged_task, however, there is not necessarily an executor, and that future may never complete if the task is never given to one.
Hope that helps you understand how things work behind the scenes. See the online demo.
The difference between two kinds of std::future
Well, at this point it becomes pretty clear that there are two kinds of std::future which can be created:
One kind is created by std::async. Such a future has an associated executor and thus will complete.
The other kind is created by std::packaged_task or things like that. Such a future does not necessarily have an associated executor and thus may or may not complete.
Since in the second case the future does not necessarily have an associated executor, its destructor is not designed to wait for completion, because the future may never complete:
{
    std::packaged_task<int(int,int)> f { add };
    std::future<int> fut = f.get_future();
} // fut goes out of scope, but there is no point in waiting in its
  // destructor, as it cannot complete, because `f` was never given to any executor.
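As a small illustration (not from the original answer), wait_for on such a future keeps reporting a timeout until somebody actually runs the task:
#include <chrono>
#include <future>
#include <iostream>

int add(int a, int b) { return a + b; }

int main()
{
    std::packaged_task<int(int, int)> f { add };
    std::future<int> fut = f.get_future();

    // Nobody has executed the task yet, so the future cannot become ready.
    auto status = fut.wait_for(std::chrono::milliseconds(10));
    std::cout << (status == std::future_status::timeout ? "not ready\n" : "ready\n");

    f(1, 2);                        // act as the "executor" ourselves
    std::cout << fut.get() << '\n'; // 3
}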
Hope this answer helps you understand things from a different perspective.
The change in behaviour is due to the difference between std::thread and std::async.
In the first example, you created a daemon thread by detaching it. The point where you print std::cout << "The countdown lasted for " << std::endl; in your main thread may occur before, during, or after the print statements inside the countdown thread function. Because the main thread does not wait for the spawned thread, you will likely not even see all of the printouts.
In the second example, you launch the task with std::async and the default launch policy, and your implementation has evidently chosen std::launch::async (a deferred task that is never waited on would never run at all). The behaviour for std::async is:
If the async policy is chosen, the associated thread completion synchronizes-with the successful return from the first function that is waiting on the shared state, or with the return of the last function that releases the shared state, whichever comes first.
In this example, you have two futures referring to the same shared state. Before their destructors complete when exiting main, the async task must finish. Even if you had not kept any future, the temporary future returned from the call to std::async would be created and destroyed, and its destruction alone means the task completes before the main thread exits.
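To see that destructor synchronization in isolation, here is a minimal sketch (not part of the original post): even a std::async result that is never waited on forces the task to complete before the next statement runs, because the future's destructor waits.
#include <future>
#include <iostream>

int main()
{
    {
        auto tmp = std::async(std::launch::async, [] { std::cout << "task done\n"; });
    } // tmp's destructor waits here for the task to finish
    std::cout << "printed only after the task has completed\n";
}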
Here is a great blog post by Scott Meyers, clarifying the behaviour of std::future & std::async.
Related SO post.

boost::future::then() not returning future that blocks on destruction

I wrote this sample code to test boost::future continuations to use in my application.
#include <iostream>
#include <functional>
#include <unistd.h>
#include <exception>
#define BOOST_THREAD_PROVIDES_FUTURE
#define BOOST_THREAD_PROVIDES_FUTURE_CONTINUATION
#include <boost/thread/future.hpp>
void magicNumber(std::shared_ptr<boost::promise<long>> p)
{
sleep(5);
p->set_value(0xcafebabe);
}
boost::future<long> foo()
{
std::shared_ptr<boost::promise<long>> p =
std::make_shared<boost::promise<long>>();
boost::future<long> f = p->get_future();
boost::thread t([p](){magicNumber(p);});
t.detach();
return f;
}
void bar()
{
auto f = foo();
f.then([](boost::future<long> f) { std::cout << f.get() << std::endl; });
std::cout << "Should have blocked?" << std::endl;
}
int main()
{
bar();
sleep (6);
return 0;
}
When compiled, linked and run with boost version 1.64.0_1, I get the following output:
Should have blocked?
3405691582
But according to boost::future::then's documentation here, execution should block at f.then() in function bar(), because the temporary variable of type boost::future<void> should block at destruction, and the output should be
3405691582
Should have blocked?
In my application, though, the call to f.then() does block execution until the continuation has been invoked.
What is happening here?
Note that the only case in which a future would ever block in its destructor used to be documented as the one where it comes from std::async with a launch policy of launch::async.
See Why is the destructor of a future returned from `std::async` blocking?
The answer lists the many discussions that have taken place around this subject. The proposal N3776 made it into C++14:
This paper provides proposed wording to implement a positive SG1 straw poll to clarify that ~future and
~shared_future don’t block except possibly in the presence of async.
cppreference.com documents std::async
Your code never used async, so it would be surprising if any future derived from it blocked on destruction.
More generally, it is clear that the consensus is that blocking destruction is an unfortunate design wart, not something you'd expect to be introduced in newer extensions (such as .then continuations).
I can only assume this is a case of documentation error where the wording
The returned futures behave as the ones returned from boost::async, the destructor of the future object returned from then will block. This could be subject to change in future versions.
should be removed.
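If blocking is actually wanted, a more portable approach than relying on the destructor is to keep the future returned by then() and wait on it explicitly. A hedged sketch of bar(), assuming the foo() from the question:
void bar()
{
    auto f = foo();
    auto done = f.then([](boost::future<long> f) { std::cout << f.get() << std::endl; });
    done.wait(); // block explicitly instead of relying on a blocking destructor
    std::cout << "Now it has blocked" << std::endl;
}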

How to give the user some assigned time to answer?

Something like a stopwatch: give the person who is using my program about 30 seconds to answer, and if no answer is received, the program should exit.
Basically, the response shouldn't take more than the time given, otherwise the program will exit.
I found the answer by Axalo interesting, but fatally flawed by unfortunate minutiae of std::async and std::future. So I'm presenting an alternative that eschews std::async but otherwise follows Axalo's basic design.
When I run Axalo's answer on my platform (which is conforming in the pertinent details), if the client never answers, getInputWithin never returns or exits. The program just hangs. And if the client answers well within the timeout, getInputWithin returns with the correct answer, but doesn't do so until the timeout period has expired.
The reason for this problem is subtle. It is well described in Herb Sutter's excellent paper N3630. A ~std::future() can block if it was returned by std::async() and will block until the associated task is done. This feature was intentionally put into async/future, and in the eyes of some, makes future completely useless.
Axalo's r1 and r2 are such std::futures, whose destructors are supposed to block until the associated task is done. And this is why his solution hangs if the client never answers.
Below is an alternative answer which is built from thread, mutex, and condition_variable. It is otherwise very similar to Axalo's answer, but does not suffer from (what some consider) the design flaws of std::async.
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <memory>
#include <mutex>
#include <stdexcept>
#include <string>
#include <thread>
#include <tuple>
std::string
getInputWithin(std::chrono::seconds timeout)
{
auto sp = std::make_shared<std::tuple<std::mutex, std::condition_variable,
std::string, bool>>();
std::thread([sp]() mutable
{
std::getline(std::cin, std::get<2>(*sp));
std::lock_guard<std::mutex> lk(std::get<0>(*sp));
std::get<3>(*sp) = true;
std::get<1>(*sp).notify_one();
sp.reset();
}).detach();
std::unique_lock<std::mutex> lk(std::get<0>(*sp));
if (!std::get<1>(*sp).wait_for(lk, timeout, [&]() {return std::get<3>(*sp);}))
throw std::runtime_error("time out");
return std::get<2>(*sp);
}
int main()
{
std::cout << "please answer within 10 seconds...\n";
std::string answer = getInputWithin(std::chrono::seconds(10));
std::cout << answer << '\n';
}
Notes:
The timing always stays within the chrono type system. Prefer the type std::chrono::seconds to a scalar with a suggestive name (std::chrono::seconds timeout rather than int timeoutInSeconds).
We need to launch a std::thread to handle the read from std::cin, as Axalo demonstrated. However we are going to need a std::mutex and std::condition_variable for communication instead of using the convenience of std::future. Both the main thread and this auxiliary thread need to share ownership of these communication objects, and we don't know which will die first. If the client never responds, the auxiliary thread may live forever, creating an effective memory leak, which is another problem not solved herein. But at any rate, the easiest way to share ownership is to store the communication objects with a copied std::shared_ptr. Last one out turns out the lights.
Launch a std::thread that waits for std::cin and signals the main thread if it gets it. The signaling must be done with the mutex locked. Note that this thread can be (indeed must be) detached. The thread can not touch any memory that it does not own (because of the shared_ptr owning all referenced memory). If main exits while the auxiliary thread is running, the OS will bring the thread down gracefully with no UB.
The main thread then locks the mutex and does a wait_for on the condition_variable using the specified timeout and a predicate that checks whether the bool in the tuple has turned true. This wait_for will either return early with that bool set to true, or it will return with it set to false after timeout seconds. If they race (timeout and client answer at the same time), that is ok: either there will be a string there or not, and the bool in the tuple answers that question. While the main thread is executing the wait_for, the mutex is unlocked so the auxiliary thread can use it.
If the main thread returns and the bool in the tuple has not been set to true, then an exception is thrown. If this exception is not caught, std::terminate() will be called. Otherwise, the string in the tuple will have the client's response.
This approach is susceptible to the client being prompted many times and never answering, thus effectively growing memory held by shared_ptrs which never get destructed. Solving that problem is not something I know how to do in portable C++.
In C++14, a slight modification can be made to getInputWithin which reduces the chance of choosing the wrong member of the tuple. Since our tuple is composed of all different types, we can index it by type instead of by position:
std::string
getInputWithin(std::chrono::seconds timeout)
{
auto sp = std::make_shared<std::tuple<std::mutex, std::condition_variable,
std::string, bool>>();
std::thread([sp]() mutable
{
std::getline(std::cin, std::get<std::string>(*sp)); // here
std::lock_guard<std::mutex> lk(std::get<std::mutex>(*sp)); // here
std::get<bool>(*sp) = true; // here
std::get<std::condition_variable>(*sp).notify_one(); // here
sp.reset();
}).detach();
std::unique_lock<std::mutex> lk(std::get<std::mutex>(*sp)); // here
if (!std::get<std::condition_variable>(*sp).wait_for(lk, timeout,
[&]() {return std::get<bool>(*sp);})) // here
throw std::runtime_error("time out");
return std::get<std::string>(*sp); // here
}
That is, the lines marked // here have been changed with std::get<type>(*sp) as opposed to std::get<index>(*sp).
Update
In a fit of paranoia inspired by the good comment from TemplateRex below, I've added a call to sp.reset() as the last thing the aux thread does. This forces the main thread to be the one to destruct the tuple, eliminating the possibility that the aux thread could stall before destructing its local copy of sp, and let main blow through the atexit chain, and then have the aux thread wake up and run the tuple destructor.
There may be other reasons that exist to make the call to sp.reset() unnecessary. But by adding this preventative medicine, we don't have to worry about it.
If you don't want to use exit and kill the process you could do it this way:
#include <chrono>
#include <future>
#include <iostream>
#include <string>
#include <thread>

std::string getInputWithin(int timeoutInSeconds, bool *noInput = nullptr)
{
std::string answer;
bool exceeded = false;
bool gotInput = false;
auto r1 = std::async([&answer, &gotInput]()
{
std::getline(std::cin, answer);
gotInput = true;
});
auto r2 = std::async([&timeoutInSeconds, &exceeded]()
{
std::this_thread::sleep_for(std::chrono::seconds(timeoutInSeconds));
exceeded = true;
});
while(!gotInput && !exceeded)
{
std::this_thread::sleep_for(std::chrono::milliseconds(1));
}
if(gotInput)
{
if(noInput != nullptr) *noInput = false;
return answer;
}
if(noInput != nullptr) *noInput = true;
return "";
}
int main()
{
std::cout << "please answer within 10 seconds...\n";
bool noInput;
std::string answer = getInputWithin(10, &noInput);
return 0;
}
The nice thing about this is that you can now handle the missing input by using a default value or simply give the user a second chance, etc...
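As a usage sketch (assuming the getInputWithin above), falling back to a default answer instead of exiting might look like this:
int main()
{
    std::cout << "please answer within 10 seconds...\n";
    bool noInput;
    std::string answer = getInputWithin(10, &noInput);
    if (noInput)
        answer = "no answer given"; // fall back to a default instead of exiting
    std::cout << "answer: " << answer << '\n';
    return 0;
}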

C++ program unexpectedly blocks / throws

I'm learning about mutexes in C++ and have a problem with the following code (taken from N. Josuttis' "The C++ Standard Library").
I don't understand why it blocks / throws unless I add this_thread::sleep_for in the main thread (then it doesn't block and all three calls are carried out).
The compiler is cl.exe used from the command line.
#include <future>
#include <mutex>
#include <iostream>
#include <string>
#include <thread>
#include <chrono>
std::mutex printMutex;
void print(const std::string& s)
{
std::lock_guard<std::mutex> lg(printMutex);
for (char c : s)
{
std::cout.put(c);
}
std::cout << std::endl;
}
int main()
{
auto f1 = std::async(std::launch::async, print, "Hello from thread 1");
auto f2 = std::async(std::launch::async, print, "Hello from thread 2");
// std::this_thread::sleep_for(std::chrono::seconds(1));
print(std::string("Hello from main"));
}
I think what you are seeing is an issue with the conformance of the MSVC implementation of async (in combination with future). I believe it is not conformant. I am able to reproduce it with VS2013, but unable to reproduce the issue with gcc.
The crash is because the main thread exits (and starts to clean up) before the other two threads complete.
Hence a simple delay (the sleep_for) or .get() or .wait() on the two futures should fix it for you. So the modified main could look like;
int main()
{
auto f1 = std::async(std::launch::async, print, "Hello from thread 1");
auto f2 = std::async(std::launch::async, print, "Hello from thread 2");
print(std::string("Hello from main"));
f1.get();
f2.get();
}
Favour the explicit wait or get over the timed "sleep".
Notes on the conformance
There was a proposal from Herb Sutter to change whether the future returned from async waits for or blocks on its shared state. This may be the reason for the behaviour in MSVC; it could be seen as having implemented the proposal. I'm not sure what the final result of the proposal was, or how much of it was integrated into C++14. At least w.r.t. the blocking of the future returned from async, it looks like the MSVC behaviour did not make it into the specification.
It is interesting to note that the wording in §30.6.8/5 changed;
From C++11
a call to a waiting function on an asynchronous return object that shares the shared state created
by this async call shall block until the associated thread has completed, as if joined
To C++14
a call to a waiting function on an asynchronous return object that shares the shared state created
by this async call shall block until the associated thread has completed, as if joined, or else time
out
I'm not sure how the "time out" would be specified, I would imagine it is implementation defined.
std::async returns a future. Its destructor blocks if get or wait has not been called:
it may block if all of the following are true: the shared state was created by a call to std::async, the shared state is not yet ready, and this was the last reference to the shared state.
See std::futures from std::async aren't special! for a detailed treatment of the subject.
Add these 2 lines at the end of main:
f1.wait();
f2.wait();
This will make sure the threads finish before main exits.

Error about std::promise in C++

I am trying to pass my class instance into threads and then return the processed objects from the threads. I've googled about C++ multithreading and found that std::promise can be helpful.
However, I am stuck at the very beginning. Here is my code:
void callerFunc()
{
//...
std::promise<DataWareHouse> data_chunks;
// DataWareHouse is my customized class
//data_chunks has a vector<vector<double>> member variable
std::thread(&run_thread,data_chunks);
// ............
}
void run_thread(std::promise<DataWareHouse> data_chunks)
{
// ...
vector<vector<double>> results;
// ...
data_chunks.set_value(results);
}
The above code generates an error:
`error C2248: 'std::promise<_Ty>::promise' : cannot access private member declared in class 'std::promise<_Ty>'`
May I know what I am doing wrong and how to fix it?
Many thanks. :-)
Your first problem is that you are using std::thread; std::thread is a low-level class on top of which you should build higher abstractions. Threading was newly standardized in C++11, and not all of the rough edges have been filed off yet.
There are three different patterns for using threading in C++11 that might be useful to you.
First, std::async. Second, std::thread mixed with std::packaged_task. And third, dealing with std::thread and std::promise in the raw.
I'll illustrate the third, which is the lowest level and most dangerous, because that is what you asked for. I would advise looking at the first two options.
#include <future>
#include <vector>
#include <iostream>
typedef std::vector<double> DataWareHouse;
void run_thread(std::promise<DataWareHouse> data_chunks)
{
DataWareHouse results;
results.push_back( 3.14159 );
data_chunks.set_value(results);
}
std::future<DataWareHouse> do_async_work()
{
std::promise<DataWareHouse> data_chunks;
std::future<DataWareHouse> retval = data_chunks.get_future();
// DataWareHouse is my customized class
//data_chunks has a vector<vector<double>> member variable
std::thread t = std::thread(&run_thread,std::move(data_chunks));
t.detach(); // detach or join; otherwise std::thread's destructor calls std::terminate
return retval;
}
int main() {
std::future<DataWareHouse> result = do_async_work();
DataWareHouse vec = result.get(); // block and get the data
for (double d: vec) {
std::cout << d << "\n";
}
}
Live example
With std::async, you'd have a function returning DataWareHouse, and it would give you a std::future<DataWareHouse> directly.
With std::packaged_task<>, you would wrap your run_thread into a packaged_task that can be executed, and extract a std::future from it.
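For illustration, a hedged sketch of the std::async variant just described (reusing the DataWareHouse typedef from the example above); the function simply returns its result, and async wires up the promise/future pair for you:
#include <future>
#include <iostream>
#include <vector>

typedef std::vector<double> DataWareHouse;

DataWareHouse run_work()
{
    DataWareHouse results;
    results.push_back(3.14159);
    return results;               // the returned value becomes the future's value
}

int main()
{
    std::future<DataWareHouse> result = std::async(std::launch::async, run_work);
    for (double d : result.get()) // block and get the data
        std::cout << d << "\n";
}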
std::promise<> is not copyable, and in calling run_thread() you are implicitly trying to invoke the copy constructor. The error message is telling you that you cannot use the copy constructor since it is marked private.
You need to pass a promise by reference (std::promise<DataWareHouse> &). This is safe if callerFunc() is guaranteed not to return until run_thread() is finished with the object (otherwise you will be using a reference to a destroyed stack-allocated object, and I don't have to explain why that's bad).
You're trying to pass the promise to the thread by value; but you need to pass by reference to get the results back to the caller's promise. std::promise is uncopyable, to prevent this mistake.
std::thread(&run_thread,std::ref(data_chunks));
^^^^^^^^
void run_thread(std::promise<DataWareHouse> & data_chunks)
^
The error is telling you you cannot copy an std::promise, which you do here:
void run_thread(std::promise<DataWareHouse> data_chunks)
and here:
std::thread(&run_thread,data_chunks); // makes copy of data_chunks
You should pass a reference:
void run_thread(std::promise<DataWareHouse>& data_chunks);
// ^
And then pass an std::reference_wrapper to the thread, otherwise it too will attempt to copy the promise. This is easily done with std::ref:
std::thread(&run_thread, std::ref(data_chunks));
// ^^^^^^^^
Obviously data_chunks must remain alive until the thread has finished running, so you will have to join the thread in callerFunc().
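Putting that advice together, a minimal corrected sketch of callerFunc (with a stand-in DataWareHouse, since the real class isn't shown):
#include <future>
#include <thread>
#include <vector>

struct DataWareHouse
{
    std::vector<std::vector<double>> data; // stand-in for the asker's member variable
};

void run_thread(std::promise<DataWareHouse>& data_chunks)
{
    DataWareHouse results;
    results.data.push_back({1.0, 2.0, 3.0});
    data_chunks.set_value(results);        // fulfil the caller's promise
}

void callerFunc()
{
    std::promise<DataWareHouse> data_chunks;
    std::future<DataWareHouse> f = data_chunks.get_future();
    std::thread t(&run_thread, std::ref(data_chunks)); // pass by reference, not by value
    DataWareHouse result = f.get();        // safe: data_chunks outlives the thread's use of it
    t.join();                              // join before data_chunks goes out of scope
}

int main()
{
    callerFunc();
}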