#include <future>
#include <iostream>

int main()
{
    std::async(std::launch::async, [] { std::cout << "async..." << std::endl; while (1); });
    std::cout << "running main..." << std::endl;
}
In this code, only "async..." is printed, which means the code blocks at the std::async call. However, if I add a future and change the statement to:
std::future<bool> fut = std::async([]
{std::cout << "async..." << std::endl; while (1); return false; });
Then everything runs smoothly (it is not blocked). I am not sure why it happens this way; I thought std::async was supposed to run the task in a separate thread.
From cppreference.com:
If the std::future obtained from std::async is not moved from or bound to a reference, the destructor of the std::future will block at the end of the full expression until the asynchronous operation completes, essentially making code such as the following synchronous:
std::async(std::launch::async, []{ f(); }); // temporary's dtor waits for f()
std::async(std::launch::async, []{ g(); }); // does not start until f() completes
If I did get that right, it comes from these parts of the standard (N4527):
§30.6.6 [futures.unique_future]:
~future();
Effects:
— releases any shared state (30.6.4);
§30.6.4#5 [futures.state] (emphasis is mine):
When an asynchronous return object or an asynchronous provider is said to release its shared state, it means:
[...].
— these actions will not block for the shared state to become ready, except that it may block if all of the following are true: the shared state was created by a call to std::async, the shared state is not yet ready, and this was the last reference to the shared state.
Since you did not store the result of your first std::async call, the destructor of the returned std::future runs at the end of the full expression, and since all three conditions are met:
the std::future was created via std::async;
the shared state is not yet ready (because of your infinite loop);
this was the last reference to the shared state
...the call blocks.
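To see the difference, here is a minimal sketch (mine, with a finite task instead of the infinite loop, and std::launch::async spelled out): the unbound call blocks immediately, while the bound one defers any blocking to the future's destructor.
#include <chrono>
#include <future>
#include <iostream>
#include <thread>

int main()
{
    // Temporary future: destroyed at the end of this full expression.
    // All three conditions above hold, so the destructor blocks until
    // the lambda has finished.
    std::async(std::launch::async, [] {
        std::this_thread::sleep_for(std::chrono::seconds(1));
        std::cout << "first task done" << std::endl;
    });
    std::cout << "printed only after the first task" << std::endl;

    // Bound future: the call returns immediately, and the (possible)
    // blocking moves to fut's destructor at the end of main.
    auto fut = std::async(std::launch::async, [] {
        std::this_thread::sleep_for(std::chrono::seconds(1));
        std::cout << "second task done" << std::endl;
    });
    std::cout << "printed while the second task is still running" << std::endl;
}   // fut's destructor waits here for the second task to finish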
A related question:
I understand now that the future returned from std::async has a special shared state, which is why waiting on the returned future happens in the destructor of that future. But when we use std::packaged_task, its future does not exhibit the same behavior.
To complete a packaged task, you have to explicitly call get() on the future object obtained from the packaged_task.
Now my questions are:
What could the internal implementation of the future look like (std::async vs std::packaged_task)?
Why is the same behavior not applied to the future returned from std::packaged_task? Or, in other words, how is this behavior avoided for a std::packaged_task future?
To see the context, please see the code below.
It does not wait for the countdown task to finish. However, if I un-comment // int value = ret.get();, it finishes the countdown, which is obvious because we are literally blocking on the returned future.
// packaged_task example
#include <iostream>   // std::cout
#include <future>     // std::packaged_task, std::future
#include <chrono>     // std::chrono::seconds
#include <thread>     // std::thread, std::this_thread::sleep_for

// count down taking a second for each value:
int countdown (int from, int to) {
    for (int i=from; i!=to; --i) {
        std::cout << i << std::endl;
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
    std::cout << "Lift off!" << std::endl;
    return from-to;
}

int main ()
{
    std::cout << "Start " << std::endl;
    std::packaged_task<int(int,int)> tsk (countdown);   // set up packaged_task
    std::future<int> ret = tsk.get_future();            // get future
    std::thread th (std::move(tsk), 10, 0);              // spawn thread to count down from 10 to 0
    // int value = ret.get();                            // wait for the task to finish and get result
    std::cout << "The countdown lasted for " << std::endl; //<< value << " seconds.\n";
    th.detach();
    return 0;
}
If I use std::async to execute the countdown task on another thread, the task always finishes, whether or not I call get() on the returned future object.
// packaged_task example
#include <iostream>   // std::cout
#include <future>     // std::packaged_task, std::future
#include <chrono>     // std::chrono::seconds
#include <thread>     // std::thread, std::this_thread::sleep_for

// count down taking a second for each value:
int countdown (int from, int to) {
    for (int i=from; i!=to; --i) {
        std::cout << i << std::endl;
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
    std::cout << "Lift off!" << std::endl;
    return from-to;
}

int main ()
{
    std::cout << "Start " << std::endl;
    std::packaged_task<int(int,int)> tsk (countdown);   // set up packaged_task
    std::future<int> ret = tsk.get_future();            // get future
    auto fut = std::async(std::move(tsk), 10, 0);
    // int value = fut.get();                            // wait for the task to finish and get result
    std::cout << "The countdown lasted for " << std::endl; //<< value << " seconds.\n";
    return 0;
}
std::async has definite knowledge of how and where the task it is given is executed. That is its job: to execute the task. To do that, it has to actually put it somewhere. That somewhere could be a thread pool, a newly created thread, or a place where it will be executed by whoever destroys the future.
Because async knows how the function will be executed, it has 100% of the information it needs to build a mechanism that can communicate when that potentially asynchronous execution has concluded, as well as to ensure that if you destroy the future, then whatever mechanism that's going to execute that function will eventually get around to actually executing it. After all, it knows what that mechanism is.
But packaged_task doesn't. All packaged_task does is store a callable object which can be called with the given arguments, create a promise with the type of the function's return value, and provide a means to both get a future and to execute the function that generates the value.
When and where the task actually gets executed is none of packaged_task's business. Without that knowledge, the synchronization needed to make future's destructor synchronize with the task simply can't be built.
Let's say you want to execute the task on a freshly-created thread. OK, so to synchronize its execution with the future's destruction, you'd need a mutex which the destructor will block on until the task thread finishes.
But what if you want to execute the task in the same thread as the caller of the future's destructor? Well, then you can't use a mutex to synchronize that, since it all runs on the same thread. Instead, you need to make the destructor invoke the task. That's a completely different mechanism, and it is contingent on how you plan to execute the task.
Because packaged_task doesn't know how you intend to execute it, it cannot do any of that.
Note that this is not unique to packaged_task. All futures created from a user-created promise object will not have the special property of async's futures.
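For illustration only (this is my sketch, not code from the answer): a future backed by a plain std::promise is destroyed without waiting, even though the value has not been set yet.
#include <chrono>
#include <future>
#include <iostream>
#include <thread>

int main()
{
    std::promise<int> p;
    std::thread worker([&p] {
        std::this_thread::sleep_for(std::chrono::seconds(1));
        p.set_value(42);   // the shared state becomes ready here
    });

    {
        std::future<int> f = p.get_future();
    }   // f is destroyed while the worker is still sleeping; the destructor
        // only drops its reference to the shared state and returns at once,
        // because the state was not created by std::async

    std::cout << "future already gone, worker still running" << std::endl;
    worker.join();
}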
So the question really ought to be why async works this way, not why everyone else doesn't.
If you want to know that, it's because of two competing needs: async needed to be a high-level, brain-dead simple way to get asynchronous execution (for which synchronization-on-destruction makes sense), and nobody wanted to create a new future type that was identical to the existing one save for the behavior of its destructor. So they decided to overload how future works, complicating its implementation and usage.
@Nicol Bolas has already answered this question quite satisfactorily, so I'll attempt to answer it from a slightly different perspective, elaborating on the points he already made.
The design of related things and their goals
Consider this simple function which we want to execute, in various ways:
int add(int a, int b) {
    std::cout << "adding: " << a << ", " << b << std::endl;
    return a + b;
}
Forget std::packaged_task, std::future and std::async for a while; let's take a step back and revisit how std::function works and what problem it leaves unsolved.
case 1 — std::function isn't good enough for executing things in different threads
std::function<int(int,int)> f { add };
Once we have f, we can execute it, in the same thread, like:
int result = f(1, 2); //note we can get the result here
Or, in a different thread, like this:
std::thread t { std::move(f), 3, 4 };
t.join();
If we look carefully, we realize that executing f in a different thread creates a new problem: how do we get the result of the function? Executing f in the same thread does not have that problem: we get the result as the return value. But when it is executed in a different thread, we have no way to get the result. That is exactly what std::packaged_task solves.
case 2 — std::packaged_task solves the problem which std::function does not solve
In particular, it creates a channel between threads through which the result can be sent to the other thread. Apart from that, it is more or less the same as std::function.
std::packaged_task<int(int,int)> f { add }; // almost same as before
std::future<int> channel = f.get_future(); // get the channel
std::thread t{ std::move(f), 30, 40 }; // same as before
t.join(); // same as before
int result = channel.get(); // problem solved: get the result from the channel
Now you see how std::packaged_task solves the problem left by std::function. That does not mean, however, that std::packaged_task has to be executed in a different thread. You can execute it in the same thread as well, just like std::function, and you still get the result from the channel.
std::packaged_task<int(int,int)> f { add }; // same as before
std::future<int> channel = f.get_future(); // same as before
f(10, 20); // execute it in the current thread !!
int result = channel.get(); // same as before
So fundamentally, std::function and std::packaged_task are similar kinds of thing: they simply wrap a callable entity, with one difference: std::packaged_task is multithreading-friendly, because it provides a channel through which it can pass the result to other threads. Neither of them executes the wrapped callable entity by itself. One needs to invoke it, either in the same thread or in another thread, to execute the wrapped callable entity. So basically there are two kinds of thing in this space:
what is executed, i.e. regular functions, std::function, std::packaged_task, etc.
how/where it is executed, i.e. threads, thread pools, executors, etc.
case 3: std::async is an entirely different thing
It's a different thing because it combines what-is-executed with how/where-is-executed.
std::future<int> fut = std::async(add, 100, 200);
int result = fut.get();
Note that in this case the future created has an associated executor, which means it will complete at some point, as there is someone executing things behind the scenes. However, in the case of a future created by std::packaged_task, there is not necessarily an executor, and that future may never complete if the created task is never given to one.
Hope that helps you understand how things work behind the scenes. A minimal sketch combining the three cases is shown below.
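Here is that sketch, reusing the same add function from above (the variable names are mine, and this is only an illustration):
#include <functional>
#include <future>
#include <iostream>
#include <thread>

int add(int a, int b) {
    std::cout << "adding: " << a << ", " << b << std::endl;
    return a + b;
}

int main() {
    // case 1: std::function, result only available in the calling thread
    std::function<int(int,int)> f1 { add };
    int r1 = f1(1, 2);

    // case 2: std::packaged_task, result travels through the future channel
    std::packaged_task<int(int,int)> f2 { add };
    std::future<int> channel = f2.get_future();
    std::thread t { std::move(f2), 3, 4 };
    int r2 = channel.get();
    t.join();

    // case 3: std::async combines what-is-executed with how/where-is-executed
    std::future<int> fut = std::async(add, 5, 6);
    int r3 = fut.get();

    std::cout << r1 << " " << r2 << " " << r3 << std::endl;
}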
The difference between two kinds of std::future
Well, at this point it becomes pretty clear that there are two kinds of std::future that can be created:
One kind is created by std::async. Such a future has an associated executor and thus will complete.
The other kind is created by std::packaged_task or similar things. Such a future does not necessarily have an associated executor and thus may or may not complete.
Since in the second case the future does not necessarily have an associated executor, its destructor is not designed to wait for completion, because the task may never complete:
{
    std::packaged_task<int(int,int)> f { add };
    std::future<int> fut = f.get_future();
}   // fut goes out of scope, but there is no point in waiting in its
    // destructor, as it cannot complete: `f` is never given to any executor.
Hope this answer helps you understand things from a different perspective.
The change in behaviour is due to the difference between std::thread and std::async.
In the first example, you have created a daemon thread by detaching. The line std::cout << "The countdown lasted for " << std::endl; in your main thread may therefore print before, during or after the output of the countdown thread function. Because the main thread does not wait for the spawned thread, you will likely not even see all of the printouts.
In the second example, you launch the task with std::async using the default launch policy (std::launch::async | std::launch::deferred). The behaviour of std::async is:
If the async policy is chosen, the associated thread completion synchronizes-with the successful return from the first function that is waiting on the shared state, or with the return of the last function that releases the shared state, whichever comes first.
In this example, you have two futures: ret, obtained from the packaged_task, and fut, returned by std::async. Before fut's destructor completes when exiting main, the async task must finish. Even if you had not bound the returned future to a variable, the temporary future that gets created and destroyed (returned from the call to std::async) would mean that the task completes before the main thread exits.
Scott Meyers has a great blog post clarifying the behaviour of std::future and std::async.
I have the following very simple code:
#include <future>     // std::async
#include <iostream>   // std::cout
#include <thread>     // std::this_thread::sleep_for
#include <chrono>     // std::chrono::seconds

void TestSleep()
{
    std::cout << "TestSleep " << std::endl;
    std::this_thread::sleep_for(std::chrono::seconds(10));
    std::cout << "TestSleep Ok" << std::endl;
}

void TestAsync()
{
    std::cout << "TestAsync" << std::endl;
    std::async(std::launch::async, TestSleep);
    std::cout << "TestAsync ok!!!" << std::endl;
}

int main()
{
    TestAsync();
    return 0;
}
Since I use std::launch::async I expect that TestSleep() will be run asynchronously and I will have the following output:
TestAsync
TestAsync ok!!!
TestSleep
TestSleep Ok
But I actually get the output of a synchronous run:
TestAsync
TestSleep
TestSleep Ok
TestAsync ok!!!
Could you explain why, and how to make the TestSleep call truly asynchronous?
From the std::async reference's Notes section:
If the std::future obtained from std::async is not moved from or bound to a reference, the destructor of the std::future will block at the end of the full expression until the asynchronous operation completes, essentially making code ... synchronous
This is what happens here. Since you don't store the future that std::async returns, it is destroyed at the end of the full expression (the std::async call), and that destructor blocks until the thread finishes.
If you do e.g.
auto f = std::async(...);
then the destruction of f at the end of TestAsync will block, and the text "TestAsync ok!!!" should be printed before "TestSleep Ok".
std::async() returns an instance of std::future. If you look at the documentation for std::future's destructor, it says the following:
these actions will not block for the shared state to become ready, except that it may block if all of the following are true: the shared state was created by a call to std::async, the shared state is not yet ready, and this was the last reference to the shared state.
You aren't storing the return value of std::async() in a local variable, but that value is still created and must be destroyed. Since its destructor blocks until your function returns, this makes the call synchronous.
If you change TestAsync() to return the std::future created by std::async(), then it becomes asynchronous.
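A minimal sketch of that change, assuming the caller keeps the returned future alive (and with sleep replaced by std::this_thread::sleep_for for portability):
#include <chrono>
#include <future>
#include <iostream>
#include <thread>

void TestSleep()
{
    std::cout << "TestSleep " << std::endl;
    std::this_thread::sleep_for(std::chrono::seconds(10));
    std::cout << "TestSleep Ok" << std::endl;
}

std::future<void> TestAsync()
{
    std::cout << "TestAsync" << std::endl;
    auto f = std::async(std::launch::async, TestSleep);
    std::cout << "TestAsync ok!!!" << std::endl;   // no blocking destructor here
    return f;                                      // the future escapes the function
}

int main()
{
    auto f = TestAsync();
    // f is destroyed (or explicitly waited on) here, after "TestAsync ok!!!"
    // has already been printed.
    return 0;
}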
I'm using std::call_once in my code to initialize some shared variables only once. The calling code is inside a callback that is triggered by multiple threads.
What I'm interested to know, since I couldn't find it in the documentation, is whether std::call_once blocks, essentially as if there were a std::lock_guard instead.
In practice it looks like this is the case.
For example, the following will print "Done" before any print() is called:
#include <future>
#include <iostream>
#include <thread>
#include <mutex>
#include <chrono>

std::once_flag flag;

void print()
{
    for (int i = 0; i < 10; i++)
    {
        std::cout << "Hi, my name is " << std::this_thread::get_id()
                  << ", what?" << std::endl;
    }
}

void do_once()
{
    std::cout << "sleeping for a while..." << std::endl;
    std::this_thread::sleep_for(std::chrono::milliseconds(500));
    std::cout << "Done" << std::endl;
}

void work()
{
    std::call_once(flag, [](){ do_once(); });
    print();
}

int main()
{
    auto handle1 = std::async(std::launch::async, work);
    auto handle2 = std::async(std::launch::async, work);
    auto handle3 = std::async(std::launch::async, work);
    auto handle4 = std::async(std::launch::async, work);
    std::this_thread::sleep_for(std::chrono::milliseconds(1000));
}
I'm assuming this is indeed the case (since I don't see how it could be implemented otherwise), but is this behavior guaranteed? Or could a compiler decide that std::call_once will indeed be called only once, but allow other threads to continue and simply skip the call?
Yes, std::call_once is a blocking call. From [thread.once.callonce] we have:
Effects: An execution of call_once that does not call its func is a passive execution. An execution of call_once that calls its func is an active execution. An active execution shall call INVOKE (DECAY_COPY (std::forward<Callable>(func)), DECAY_COPY (std::forward<Args>(args))...). If such a call to func throws an exception the execution is exceptional, otherwise it is returning. An exceptional execution shall propagate the exception to the caller of call_once. Among all executions of call_once for any given once_flag: at most one shall be a returning execution; if there is a returning execution, it shall be the last active execution; and there are passive executions only if there is a returning execution. [ Note: passive executions allow other threads to reliably observe the results produced by the earlier returning execution. —end note ]
Synchronization: For any given once_flag: all active executions occur in a total order; completion of an active execution synchronizes with (1.10) the start of the next one in this total order; and the returning execution synchronizes with the return from all passive executions.
emphasis mine
This means that all calls to call_once will wait until the function passed to call_once completes. In your case, that means do_once() must have returned before any thread calls print().
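To make that concrete, here is a rough model of the blocking behaviour; this is my own sketch, not how the standard library actually implements std::call_once, and it ignores exceptional executions:
#include <mutex>
#include <utility>

// Every caller locks the same mutex, so callers that arrive while the
// "active execution" is still running wait for it to finish before they
// return, just like the passive executions described above.
struct once_model {
    std::mutex m;
    bool done = false;

    template <class F>
    void call(F&& f) {
        std::lock_guard<std::mutex> lock(m);
        if (!done) {
            std::forward<F>(f)();  // active execution
            done = true;           // later calls become passive executions
        }
    }
};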
While working with the threaded model of C++11, I noticed that
std::packaged_task<int(int,int)> task([](int a, int b) { return a + b; });
auto f = task.get_future();
task(2,3);
std::cout << f.get() << '\n';
and
auto f = std::async(std::launch::async,
[](int a, int b) { return a + b; }, 2, 3);
std::cout << f.get() << '\n';
seem to do exactly the same thing. I understand that there could be a major difference if I ran std::async with std::launch::deferred, but is there one in this case?
What is the difference between these two approaches, and more importantly, in what use cases should I use one over the other?
Actually, the example you just gave shows the difference if you use a rather long-running function, such as:
//! sleeps for one second and returns 1
auto sleep = [](){
    std::this_thread::sleep_for(std::chrono::seconds(1));
    return 1;
};
Packaged task
A packaged_task won't start on its own; you have to invoke it:
std::packaged_task<int()> task(sleep);
auto f = task.get_future();
task(); // invoke the function
// You have to wait until task returns. Since task calls sleep
// you will have to wait at least 1 second.
std::cout << "You can see this after 1 second\n";
// However, f.get() will be available, since task has already finished.
std::cout << f.get() << std::endl;
std::async
On the other hand, std::async with launch::async will try to run the task in a different thread:
auto f = std::async(std::launch::async, sleep);
std::cout << "You can see this immediately!\n";
// However, the value of the future will be available after sleep has finished
// so f.get() can block up to 1 second.
std::cout << f.get() << "This will be shown after a second!\n";
Drawback
But before you try to use async for everything, keep in mind that the returned future has a special shared state, which demands that future::~future blocks:
std::async(do_work1); // ~future blocks
std::async(do_work2); // ~future blocks
/* output: (assuming that do_work* log their progress)
do_work1() started;
do_work1() stopped;
do_work2() started;
do_work2() stopped;
*/
So if you want real asynchrony, you need to keep the returned future around, or accept that the destructor will block if circumstances change and you no longer care about the result:
{
    auto pizza = std::async(get_pizza);
    /* ... */
    if (need_to_go)
        return;           // ~future will block
    else
        eat(pizza.get());
}
For more information on this, see Herb Sutter's article async and ~future, which describes the problem, and Scott Meyers's std::futures from std::async aren't special, which describes the insights. Also note that this behavior was specified for C++14 and up, but is also commonly implemented in C++11.
Further differences
By using std::async you cannot run your task on a specific thread, whereas a std::packaged_task can be moved to another thread.
std::packaged_task<int(int,int)> task(...);
auto f = task.get_future();
std::thread myThread(std::move(task),2,3);
std::cout << f.get() << "\n";
Also, a packaged_task needs to be invoked before you call f.get(), otherwise your program will freeze, as the future will never become ready:
std::packaged_task<int(int,int)> task(...);
auto f = task.get_future();
std::cout << f.get() << "\n"; // oops!
task(2,3);
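One way out of that freeze, as a sketch: hand the task to another thread before calling get(), so the future can become ready while get() waits.
#include <future>
#include <iostream>
#include <thread>

int main() {
    std::packaged_task<int(int,int)> task([](int a, int b) { return a + b; });
    auto f = task.get_future();
    std::thread t(std::move(task), 2, 3);   // the task runs concurrently...
    std::cout << f.get() << "\n";           // ...so this get() eventually unblocks
    t.join();
}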
TL;DR
Use std::async if you want some things done and don't really care when they're done, and std::packaged_task if you want to wrap up things in order to move them to other threads or call them later. Or, to quote Christian:
In the end a std::packaged_task is just a lower level feature for implementing std::async (which is why it can do more than std::async if used together with other lower level stuff, like std::thread). Simply spoken a std::packaged_task is a std::function linked to a std::future and std::async wraps and calls a std::packaged_task (possibly in a different thread).
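To illustrate that quote, here is a hypothetical helper, naive_async, built only from std::packaged_task, std::bind and std::thread; it is a sketch, not the real std::async (it ignores launch policies and does not reproduce the blocking destructor of futures returned by std::async):
#include <functional>
#include <future>
#include <thread>
#include <utility>

// Wrap the callable and its arguments in a packaged_task and run it on a
// detached thread; hand back the packaged_task's future.
template <class F, class... Args>
auto naive_async(F f, Args... args) -> std::future<decltype(f(args...))>
{
    using R = decltype(f(args...));
    std::packaged_task<R()> task(std::bind(std::move(f), std::move(args)...));
    std::future<R> fut = task.get_future();
    std::thread(std::move(task)).detach();
    return fut;
}

// usage: auto fut = naive_async([](int a, int b) { return a + b; }, 2, 3);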
TL;DR
std::packaged_task allows us to get a std::future bound to some callable, and then control when and where this callable is executed, without needing that future object.
std::async enables the first, but not the second. Namely, it allows us to get the future for some callable, but then we have no control over when and where that callable is executed.
Practical example
Here is a practical example of a problem that can be solved with std::packaged_task but not with std::async.
Suppose you want to implement a thread pool. It consists of a fixed number of worker threads and a shared queue. But a shared queue of what? std::packaged_task is quite suitable here.
template <typename T>
class ThreadPool {
public:
    using task_type = std::packaged_task<T()>;

    std::future<T> enqueue(task_type task) {
        // could be passed by reference as well...
        // ...or implemented with perfect forwarding
        std::future<T> res = task.get_future();
        {
            std::lock_guard<std::mutex> lock(mutex_);
            tasks_.push(std::move(task));
        }
        cv_.notify_one();
        return res;
    }

    void worker() {
        while (true) {   // supposed to run forever, for simplicity
            task_type task;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                cv_.wait(lock, [this]{ return !this->tasks_.empty(); });
                task = std::move(tasks_.front());   // std::queue has front(), not top()
                tasks_.pop();
            }
            task();
        }
    }

    ...   // constructors, destructor,...

private:
    std::vector<std::thread> workers_;
    std::queue<task_type> tasks_;
    std::mutex mutex_;
    std::condition_variable cv_;
};
Such functionality cannot be implemented with std::async. We need to return a std::future from enqueue(). If we called std::async there (even with the deferred policy) and returned its std::future, we would have no way to execute the callable later in worker(). Note that you cannot create multiple futures for the same shared state (futures are non-copyable).
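For completeness, a hypothetical usage sketch, assuming the elided constructor spawns threads that run worker():
// build a packaged_task from any callable returning int and hand it to the pool
ThreadPool<int> pool;
auto fut = pool.enqueue(ThreadPool<int>::task_type([] { return 6 * 7; }));
std::cout << fut.get() << std::endl;   // prints 42 once some worker has run the task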
Packaged Task vs async
A packaged task holds a task [a function or function object] and a future/promise pair. When the task executes a return statement, it causes set_value(..) to be called on the packaged_task's promise.
Given future, promise and packaged_task, we can create simple tasks without worrying too much about threads [a thread is just something we give to run a task].
However, we need to consider how many threads to use, or whether a task is best run on the current thread or on another, etc. Such decisions can be handled by a thread launcher called async(), which decides whether to create a new thread, recycle an old one, or simply run the task on the current thread. It returns a future.
"The class template std::packaged_task wraps any callable target
(function, lambda expression, bind expression, or another function
object) so that it can be invoked asynchronously. Its return value or
exception thrown is stored in a shared state which can be accessed
through std::future objects."
"The template function async runs the function f asynchronously
(potentially in a separate thread) and returns a std::future that will
eventually hold the result of that function call."
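To make the first point concrete, a small sketch (names are mine) showing that the future becomes ready exactly when the wrapped function's return statement runs:
#include <chrono>
#include <future>
#include <iostream>

int main() {
    std::packaged_task<int()> task([] { return 7; });
    std::future<int> fut = task.get_future();

    // Nothing has executed the task yet, so the future is not ready.
    std::cout << std::boolalpha
              << (fut.wait_for(std::chrono::seconds(0)) == std::future_status::ready)
              << std::endl;                      // false

    task();   // the return statement effectively does set_value(7) on the promise

    std::cout << (fut.wait_for(std::chrono::seconds(0)) == std::future_status::ready)
              << std::endl;                      // true
    std::cout << fut.get() << std::endl;         // 7
}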
I have a question about the behavior of the std::async function with the std::launch::async policy and the std::future object returned from async.
In the following code, the main thread waits for the completion of foo() on the thread created by the async call.
#include <thread>
#include <future>
#include <iostream>
#include <chrono>

void foo()
{
    std::cout << "foo:begin" << std::endl;
    std::this_thread::sleep_for(std::chrono::seconds(10));
    std::cout << "foo:done" << std::endl;
}

int main()
{
    std::cout << "main:begin" << std::endl;
    {
        auto f = std::async(std::launch::async, foo);
        // dtor f::~f blocks until completion of foo()... why??
    }
    std::this_thread::sleep_for(std::chrono::seconds(2));
    std::cout << "main:done" << std::endl;
}
And I know http://www.stdthread.co.uk/doc/headers/future/async.html says
The destructor of the last future object associated with the asynchronous state of the returned std::future shall block until the future is ready.
My question is:
Q1. Does this behavior conform to the current C++ standard?
Q2. If Q1's answer is yes, which statements say that?
Yes, this is required by the C++ Standard. 30.6.8 [futures.async] paragraph 5, final bullet:
— the associated thread completion synchronizes with (1.10) the return from the first function that successfully detects the ready status of the shared state or with the return from the last function that releases the shared state, whichever happens first.
The destructor of the one and only std::future satisfies that condition, and so it has to wait for the completion of the thread.
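To see both alternatives of that bullet in action, here is a small sketch with a simplified foo (the couts from the question are omitted):
#include <chrono>
#include <future>
#include <thread>

void foo()
{
    std::this_thread::sleep_for(std::chrono::seconds(2));
}

int main()
{
    {
        auto f = std::async(std::launch::async, foo);
        f.wait();   // "first function that successfully detects the ready
                    // status": the waiting happens here, and ~future has
                    // nothing left to do
    }
    {
        auto f = std::async(std::launch::async, foo);
        // no wait()/get(): ~future is the "last function that releases the
        // shared state", so the waiting happens at the closing brace instead
    }
}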