std::async - Implementation dependent usage? - c++

I've been thinking about std::async and how one should use it with future compiler implementations. However, right now I'm a bit stuck on something that feels like a design flaw.
std::async is pretty much implementation dependent, with probably two variants of launch::async: one that launches each task in a new thread, and one that uses a thread pool/task scheduler.
However, depending on which of these variants is used to implement std::async, usage would vary greatly.
For the "thread-pool" based variant you would be able to launch a lot of small tasks without worrying much about overheads, however, what if one of the tasks blocks at some point?
On the other hand a "launch new thread" variant wouldn't suffer problems with blocking tasks, on the other hand, the overhead of launching and executing tasks would be very high.
thread pool: +low overhead, -must never block
launch new thread: +fine with blocking, -high overhead
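By way of illustration, a minimal sketch of the two usage patterns in question (the workload here is made up):

    #include <future>
    #include <vector>

    int main() {
        std::vector<std::future<int>> results;

        // Many small tasks: cheap on a pool-based implementation,
        // expensive if every call spawns a fresh thread.
        for (int i = 0; i < 100; ++i)
            results.push_back(std::async(std::launch::async, [i] { return i * i; }));

        // A single blocking task: harmless with one-thread-per-task,
        // but on a pool it ties up a worker for the whole wait.
        auto blocking = std::async(std::launch::async, [] {
            // e.g. wait on a socket, a condition variable, or another future
        });

        for (auto& f : results) f.get();
        blocking.get();
    }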
So basically, depending on the implementation, the way we use std::async would vary greatly. If we have a program that works well with one compiler, it might perform horribly with another.
Is this by design? Or am I missing something? Would you consider this, as I do, as a big problem?
In the current specification I am missing something like std::oversubscribe(bool) in order to enable implementation-independent usage of std::async.
EDIT: As far as I have read, the C++11 standard document gives no hint as to whether tasks sent to std::async may block or not.

std::async tasks launched with a policy of std::launch::async run "as if in a new thread", so thread pools are not really supported --- the runtime would have to tear down and recreate all the thread-local variables in between each task execution, which is not straightforward.
This also means that you can expect tasks started with a policy of std::launch::async to run concurrently. There may be a start-up delay, and there will be task-switching if you have more running threads than processors, but they should be running, and not deadlock just because one happens to wait for another.
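As a small illustration of that guarantee (a minimal sketch): one task may block waiting on a result another task produces, and with std::launch::async this must complete rather than deadlock.

    #include <cassert>
    #include <future>

    int main() {
        std::promise<int> p;
        std::future<int> fut = p.get_future();

        // Task a blocks until task b delivers a value. Since each
        // std::launch::async task runs as if on its own thread, this
        // must complete rather than deadlock.
        auto a = std::async(std::launch::async, [&] { return fut.get() + 1; });
        auto b = std::async(std::launch::async, [&] { p.set_value(41); });

        b.get();
        assert(a.get() == 42);
    }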
An implementation may choose to offer an extension that allows your tasks to run in a thread pool, in which case it is up to that implementation to document the semantics.

I would expect implementations to launch new threads, and leave the thread pool to a future version of C++ that standardizes it. Are there any implementations that use a thread pool?
MSVC initially used a thread pool based on their Concurrency Runtime. According to STL Fixes In VS 2015, Part 2, this has been removed. The C++ specification left some room for implementers to do clever things; however, I don't think it quite left enough room for this thread-pooling implementation. In particular, I think the spec still required that thread_local objects be destroyed and rebuilt, and thread pooling with ConcRT would not have supported that.
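To make the thread_local point concrete, here is a small sketch (the counter is illustrative): each std::launch::async task must observe its own freshly constructed thread-locals, so a pool that quietly reused threads without tearing them down would be observably non-conforming.

    #include <cassert>
    #include <future>

    thread_local int counter = 0;  // must start at 0 for each new thread

    int bump() { return ++counter; }

    int main() {
        // Each task runs "as if in a new thread", so each must see its own
        // newly constructed thread_local counter. A pool that reused a
        // thread without destroying and rebuilding thread_locals would
        // return 2 from the second call.
        assert(std::async(std::launch::async, bump).get() == 1);
        assert(std::async(std::launch::async, bump).get() == 1);
    }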

Related

Cancelling arbitrary jobs running in a thread_pool

Is there a way for a thread-pool to cancel a task underway? Better yet, is there a safe alternative for on-demand cancelling opaque function calls in thread_pools?
Killing the entire process is a bad idea and using native handle to perform pthread_cancel or similar API is a last resort only.
Extra
Bonus if the cancellation is immediate, but it's acceptable if the cancellation has some time-constraint "guarantees" (say, cancellation within 0.1 seconds of execution of the thread in question).
More details
I am not restricted to using Boost.Thread.thread_pool or any specific library. The only limitations are compatibility with C++14 and the ability to work on at least BSD- and Linux-based OSes.
The tasks are usually data-processing related, pre-compiled and loaded dynamically using C-API (extern "C") and thus are opaque entities. The aim is to perform compute intensive tasks with an option to cancel them when the user sends interrupts.
While launching, the thread_id for a specific task is known, so some API can be used to find more details if required.
Disclaimer
I know using native thread handles to cancel/exit threads is not recommended and is a sign of bad design. I also can't modify the functions to use boost::this_thread::interruption_point(), but I can wrap them in lambdas/other constructs if that helps. I feel like this is a rock-and-a-hard-place situation, so alternate suggestions are welcome, but they need to be minimally intrusive in existing functionality, and can be dramatic in their scope for the feature-set being discussed.
EDIT:
Clarification
I guess this should have gone in the "More details" section, but I want it to remain separate to show that the existing two answers are based on limited information. After reading the answers, I went back to the drawing board and came up with the following "constraints", since the question I posed was overly generic. If I should post a new question, please let me know.
My interface promises a "const" input (functional programming style non-mutable input) by using mutexes/copy-by-value as needed and passing by const& (and expecting thread to behave well).
I also misused the term "arbitrary", since the jobs aren't arbitrary (empirically speaking) and have the following constraints:
some tasks, which download from the internet, already use a condition variable
not violate const correctness
can spawn other threads, but they must not outlast the parent
can use mutex, but those can't exist outside the function body
output is via atomic<shared_ptr> passed as argument
pure functions (no shared state with outside) **
** can be a lambda binding a functor, in which case the function needs to make sure its data structures aren't corrupted (which is the case, as usually the state is one or two atomic<built-in-type> values). Usually the internal state is queried from an external DB (an architecture similar to cookie + web server, where the tab/browser can be closed at any time).
These constraints aren't written down as a contract or anything, but rather I generalized based on the "modules" currently in use. The jobs are arbitrary in terms of what they can do: GPU/CPU/internet all are fair play.
It is infeasible to insert a periodic check because of heavy library usage. The libraries (not owned by us) haven't been designed to periodically check a condition variable since it'd incur a performance penalty for the general case and rewriting the libraries is not possible.
Is there a way for a thread-pool to cancel a task underway?
Not at that level of generality, no, and also not if the task running in the thread is implemented natively and arbitrarily in C or C++. You cannot terminate a running task prior to its completion without terminating its whole thread, except with the cooperation of the task.
Better yet, is there a safe alternative for on-demand cancelling opaque function calls in thread_pools?
No. The only way to get (approximately) on-demand preemption of a specific thread is to deliver a signal to it (one that it is not blocking or ignoring) via pthread_kill(). If such a signal terminates the thread but not the whole process, then it does not automatically make any provision for freeing allocated objects or managing the state of mutexes or other synchronization objects. If the signal does not terminate the thread, then the interruption can produce surprising and unwanted effects in code not designed to accommodate such signal usage.
Killing the entire process is a bad idea and using native handle to perform pthread_cancel or similar API is a last resort only.
Note that pthread_cancel() can be blocked by the thread, and that even when not blocked, its effects may be deferred indefinitely. When the effects do occur, they do not necessarily include memory or synchronization-object cleanup. You need the thread to cooperate with its own cancellation to achieve these.
Just what a thread's cooperation with cancellation looks like depends in part on the details of the cancellation mechanism you choose.
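For illustration, here is a minimal sketch of one such mechanism: an atomic stop flag that the task polls at points of its own choosing (all names here are made up; C++20 later standardized essentially this shape as std::jthread/std::stop_token).

    #include <atomic>
    #include <chrono>
    #include <thread>

    std::atomic<bool> stop_requested{false};  // the cancellation mechanism

    // Stand-in for one bounded slice of the real job.
    void do_chunk_of_work() {
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }

    void task() {
        // Cooperation: the task polls the flag between slices, so the
        // slice length bounds the cancellation latency.
        while (!stop_requested.load(std::memory_order_relaxed))
            do_chunk_of_work();
        // Clean up owned resources here; nothing is released for us on exit.
    }

    int main() {
        std::thread t(task);
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        stop_requested = true;  // request cancellation...
        t.join();               // ...and wait for the task to notice and wind down
    }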
Cancelling a non-cooperative component that was not designed to be cancelled is only possible if that component has limited, constrained, managed interactions with the rest of the system:
the resources owned by the component should be managed externally (the system knows which component uses which resources)
all accesses should be indirect
the modifications of shared resources should be safe and reversible until completion
That would allow the system to clean up resources, stop operations, cancel incomplete changes...
None of these properties are cheap; all the properties of threads are the exact opposite of these properties.
Threads have only an implied concept of ownership, apparent solely in the running thread: once a thread is gone, determining what it owned is not possible.
Threads access shared objects directly. A thread can begin modifying shared objects; if it is cancelled in the middle of an operation, those modifications are left partial, ineffective, or incoherent.
Cancelled threads could leave locked mutexes around. At least subsequent accesses to these mutexes by other threads trying to access the shared object would deadlock.
Or they might find some data structure in a bad state.
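As a concrete illustration of the locked-mutex hazard, a sketch (deliberately non-RAII, platform-dependent, and not something to ship: cancellation behaviour varies by implementation, and locking a mutex abandoned by a dead thread is formally undefined behaviour):

    #include <pthread.h>
    #include <unistd.h>
    #include <mutex>
    #include <thread>

    std::mutex m;

    int main() {
        std::thread t([] {
            m.lock();      // deliberately no RAII guard: nothing unlocks on cancellation
            sleep(10);     // a POSIX cancellation point
            m.unlock();    // never reached if cancelled during sleep()
        });
        sleep(1);                            // let the worker take the lock
        pthread_cancel(t.native_handle());   // force-cancel the worker
        t.join();                            // join succeeds; the thread is gone
        m.lock();                            // blocks forever: the lock died with its owner
    }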
Providing safe cancellation for arbitrary, non-cooperative threads is not doable, even with very large-scale changes to thread synchronization objects; not even with a complete redesign of the thread primitives.
You would have to make threads almost into full processes to be able to do that; but then they wouldn't be called threads!

std::async() does not seem to really implement single-threaded asynchronous behaviour?

Context: I was looking at how asynchronous programming really works. After some investigation on the topic, the resulting idea was that there are two things to differentiate:
Concurrency (synchronous/asynchronous): About tasks
Multi-threading: About workers
Based on these concepts, we can identify 4 main ways to parallelize tasks. Rather than use a hundred words, I have made a drawing to illustrate this:
Note: The 4th column (Multi-threaded asynchronous) will not be considered here since it mixes multi-threading and asynchronous programming.
In C++, we have the template function std::async() to allow us to run a function asynchronously.
We can set the launch policy to one of the following (a short sketch follows the list):
std::launch::async: Run "asynchronously" in a separate thread.
std::launch::deferred: Run when the result is requested.
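A short sketch of the two policies side by side:

    #include <future>
    #include <iostream>

    int work() { return 42; }

    int main() {
        // Eager: work() starts on a separate thread right away.
        auto eager = std::async(std::launch::async, work);

        // Lazy: nothing runs until get(); work() then executes on *this* thread.
        auto lazy = std::async(std::launch::deferred, work);

        std::cout << eager.get() << '\n';  // may already have finished
        std::cout << lazy.get() << '\n';   // runs work() right here, once
    }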
Question: If we take a look at my drawing, the std::launch::async policy seems to behave as Multi-threaded synchronous, and the std::launch::deferred policy seems to behave as an isolated case of Single-threaded asynchronous (the function is executed in one shot when the result is requested).
But if I'm not mistaken, the idea behind Single-threaded asynchronous is that in case of waiting for a resource to be available or when struggling with some latency (disk access time, ...), the program should not keep blocking the main thread (and so wasting time) and go on to do the next task instead (and come back later to the previous one).
What I don't understand is that std::async() does not seem to allow this kind of behaviour. We can only either run the task synchronously in another thread or run it once and for all when the result is requested (as late as possible).
If we take a look at my drawing, the Single-threaded asynchronous method is not really implemented, since the function runs in one shot no matter whether it will have to wait for a resource or not. So we will still waste time in this case.
I'm wondering why? Is my understanding wrong? Is it an oversight in the std::async() implementation or is it intentional (by the standard)?
Edit: I'm not sure if it is the right place to ask this question since it is not really a "coding" issue/question.

Node C++ module vs libuv thread pool size

I've written a Node.js C++ module that makes use of NAN's AsyncWorker to expose async module functionality. Works great. However, I understand that AsyncWorker makes use of libuv's thread pool, which defaults to just 4 threads.
While this (or a #-of-cores based limitation) might make sense for CPU-heavy functions, some of my exposed functions may run relatively long, even though they don't use the CPU (network activity, etc). Therefore the thread pool might get all used up even though no computation-intensive work is going on.
The easy solution is to increase the thread pool size (UV_THREADPOOL_SIZE). However, I am concerned that this thread pool is used for other things as well, which might suffer from a performance hit due to too much parallelization (the libuv documentation states, "The threadpool is global and shared across all event loops...").
Is my concern valid? Is there a way to make use of a separate, larger thread pool only for certain AsyncWorkers that are long-running but not CPU-intensive, while leaving the common thread pool untouched?
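One commonly suggested alternative, sketched below under assumptions I should flag: run the long, non-CPU-bound calls on a dedicated std::thread instead of the shared pool, and wake the event loop with libuv's uv_async_send(), which is documented as safe to call from any thread. All names and the surrounding structure here are illustrative, and error handling is omitted. (Note also that UV_THREADPOOL_SIZE is read when the pool is first used, so it must be set before then.)

    #include <uv.h>
    #include <thread>

    uv_async_t done_signal;  // initialize on the event-loop thread

    void on_done(uv_async_t* /*handle*/) {
        // Runs on the event-loop thread: the safe place to call back into JS
        // (and eventually to uv_close() the handle).
    }

    void start_long_job(uv_loop_t* loop) {
        uv_async_init(loop, &done_signal, on_done);  // register the wake-up handle
        std::thread([] {
            // ... long, blocking, non-CPU-bound work, off both V8 and the pool ...
            uv_async_send(&done_signal);  // documented as safe from any thread
        }).detach();
    }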

What is the executor pattern in a C++ context?

The author of asio, Christopher Kohlhoff, is working on a library and proposal for executors in C++. His work so far includes this repo and docs. Unfortunately, the rationale portion has yet to be written. So far, the docs give a few examples of what the library does, but I feel like I'm missing something. Somehow this is more than a family of fancy invoker functions.
Everything I can find on Google is very Java specific and a lot of it is particular to specific frameworks so I'm having trouble figuring out what this "executor pattern" is all about.
What are executors in this context? What do they do? What are the canonical examples of when they would be helpful? What variations exist among executors? What are the alternatives to executors and how do they compare? In particular, there seems to be a lot of overlap with an event loop where the events are initial input events, execution events, and a shutdown event.
When trying to figure out new abstractions I usually find understanding the motivation key. So for executors, what are we trying to abstract and why? What are we trying to make generic? Without executors, what extra work would we have to do?
The most basic benefit of executors is separating the definition of a program's parallelism from how it's used. Java's executor model exists because, by and large, you don't actually know, when you're first writing code, what parallelism model is best for your scenario. You might have little to gain from parallelism and shouldn't use threads at all, you might do best with a long running dedicated worker thread for each core, or a dynamically scaling pool of threads based on current load that cleans up threads after they've been idle a while to reduce memory usage, context switches, etc., or maybe just launching a thread for every task on demand, exiting when the task is done.
The key here is it's nigh impossible to know which approach is best when you're first writing code. You may know where parallelism might help you, but in traditional threading, you end up intermingling the parallelism "configuration" (when and whether to create threads) with the use of parallelism (determining which functions to call with what arguments). When you do mix the code like this, it's a royal pain to do performance testing of different options, because each and every thread launch is independent, and must be updated separately.
The main benefit of the executor model is that the parallelism configuration is done in one place (where the executor is created), and the users of that executor don't have to know anything about it. They just submit work to the executor, receive a future, and at some later point, retrieve the result (blocking if necessary) from the future. If you want to experiment with other configurations, you change the one line defining the executor and run your code again. Even if you decide you need to use different parallelism models for different sections of your code, refactoring to add a second executor and change some of the users of the first executor to use the second is easy compared to manually rewriting the threading details of every site; as long as the executor's name is (relatively) unique, finding users and changing them to use a different one is pretty easy. Executors both simplify your code (by avoiding intermingling thread creation/management with the tasks the threads do) and simplify performance testing.
As a side-benefit, you also abstract away the complexities of transferring data into and out of a worker thread (the submit method encapsulates the former, the future's result method encapsulates the latter). std::async gets you some of this benefit, but with no real control over the parallelism involved (just a yes/no/maybe choice of whether to force a thread, force deferred execution in the current thread, or let the compiler/library decide, with no fine grained control over whether a thread pool is used, and if so, how it behaves). A true executor framework gives you the control std::async fails to provide, with similar ease of use.
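To make that concrete, here is a deliberately minimal executor sketch (not Kohlhoff's actual interface; all names are made up): callers submit work and get a future back, and swapping the parallelism model means changing the one line that picks the executor.

    #include <functional>
    #include <future>
    #include <thread>

    // One possible parallelism "configuration": one thread per task.
    // A pool-based executor would expose the same execute() interface.
    struct new_thread_executor {
        void execute(std::function<void()> f) {
            std::thread(std::move(f)).detach();
        }
    };

    // The "use" side: submit work, get a future, know nothing about threads.
    template <class Executor, class F>
    auto submit(Executor& ex, F f) {
        auto task = std::make_shared<std::packaged_task<decltype(f())()>>(std::move(f));
        auto fut = task->get_future();
        ex.execute([task] { (*task)(); });
        return fut;
    }

    int main() {
        new_thread_executor ex;  // swap in a pool executor here: a one-line change
        auto fut = submit(ex, [] { return 6 * 7; });
        return fut.get() == 42 ? 0 : 1;
    }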

Using asynchronous method vs thread wait

I have 2 versions of a function which are available in a C++ library which do the same task. One is a synchronous function, and another is of asynchronous type which allows a callback function to be registered.
Which of the below strategies is preferable for giving a better memory and performance optimization?
Call the synchronous function in a worker thread, and use mutex synchronization to wait until I get the result
Do not create a thread, but call the asynchronous version and get the result in callback
I am aware that worker thread creation in option 1 will cause more overhead. I want to know about the overhead caused by thread synchronization objects, and how it compares to the overhead caused by an asynchronous call. Does the asynchronous version of a function internally spin off a thread and use synchronization objects, or does it use some other technique, like talking directly to the kernel?
"Profile, don't speculate." (DJB)
The answer to this question depends on too many things, and there is no general answer. The role of the developer is to be able to make these decisions. If you don't know, try the options and measure. In many cases, the difference won't matter and non-performance concerns will dominate.
"Premature optimisation is the root of all evil, say 97% of the time" (DEK)
Update in response to the question edit:
C++ libraries, in general, don't get to use magic to avoid synchronisation primitives. The asynchronous vs. synchronous interfaces are likely to be wrappers around things you would do anyway. Processing must happen in a context, and if completion is to be signalled to another context, a synchronisation primitive will be necessary to do that.
Of course, there might be other considerations. If your C++ library is talking to some piece of hardware that can do processing, things might be different. But you haven't told us about anything like that.
The answer to this question depends on context you haven't given us, including information about the library interface and the structure of your code.
Use the asynchronous function, because it will probably do what you would otherwise do manually with the synchronous one, but in a less error-prone way.
Asynchronous: create a thread, do the work, call the callback when done.
Synchronous: create an event to wait on, create a worker thread, wait for the event; on the worker thread, call the sync version, transfer the result, and signal the event (sketched below).
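A sketch of what that hand-rolled synchronous wrapping looks like (sync_work stands in for the library's synchronous function, a hypothetical name; std::promise/std::future play the role of the "event"):

    #include <future>
    #include <thread>

    // Stand-in for the library's synchronous function.
    int sync_work() { return 42; }

    int main() {
        // Option 1 spelled out by hand: worker thread + synchronization.
        std::promise<int> done;
        auto result = done.get_future();
        std::thread worker([&] { done.set_value(sync_work()); });

        int value = result.get();  // wait for the worker to signal completion
        worker.join();

        // Option 2 is typically all of the above hidden inside the library:
        //   async_work([](int v) { /* consume v in the callback */ });
        return value == 42 ? 0 : 1;
    }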
You might consider that threads each have their own environment, so they use more memory than a non-threaded solution, all other things being equal.
Depending on your threading library there can also be significant overhead to starting and stopping threads.
If you need interprocess synchronization there can also be a lot of pain debugging threaded code.
If you're comfortable writing non-threaded code (i.e. you won't burn a lot of time writing and debugging it), then that might be the best choice.