Delayed Function Call - C++

What's the most elegant way of performing a delayed (and therefore also asynchronous) function call using C++11, lambdas and async? Suggested naming: delayed_async. The reason for asking is that I want a GUI alert light to be switched off after a given time (in this case one second) without blocking the main (wxWidgets main loop) thread, of course. I've used wxWidgets' wxTimer for this, and I find wxTimer rather cumbersome in this case. So that got me curious about how much more convenient this could be implemented using C++11's async instead. I'm aware that I need to protect the resources involved with mutexes when using async.

You mean something like this?
#include <iostream>
#include <chrono>
#include <thread>
#include <future>

int main()
{
    // Use async to launch a function (lambda) in parallel.
    // Keep the returned future alive: if the temporary future is discarded,
    // its destructor blocks right here and the call is no longer asynchronous.
    auto f = std::async(std::launch::async, [] () {
        // Use sleep_for to wait the specified time (or sleep_until).
        std::this_thread::sleep_for(std::chrono::seconds{1});
        // Do whatever you want.
        std::cout << "Lights out!" << std::endl;
    });

    std::this_thread::sleep_for(std::chrono::seconds{2});
    std::cout << "Finished" << std::endl;
}
Just make sure that you don't capture a variable by reference in the lambda.
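If you want the delayed_async naming suggested in the question, a minimal sketch could look like the following (assumptions: the caller keeps the returned future alive, since discarding it blocks in its destructor, and in a wxWidgets program the actual GUI update would still have to be marshalled back to the main thread, e.g. via CallAfter):

#include <chrono>
#include <future>
#include <iostream>
#include <thread>

// Run f asynchronously after the given delay and hand back the future.
template <typename Duration, typename F>
auto delayed_async(Duration delay, F f) -> std::future<decltype(f())>
{
    return std::async(std::launch::async, [delay, f]() -> decltype(f()) {
        std::this_thread::sleep_for(delay);
        return f();
    });
}

int main()
{
    // Switch the "alert light" off after one second without blocking here.
    auto pending = delayed_async(std::chrono::seconds{1},
                                 [] { std::cout << "Lights out!" << std::endl; });

    std::cout << "main keeps running" << std::endl;
    pending.wait();  // in a real GUI app you would keep the future alive instead
}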

Related

How to compose asynchronous operations?

I'm looking for a way to compose asynchronous operations. The ultimate goal is to execute an asynchronous operation, and either have it run to completion, or return after a user-defined timeout.
For exemplary purposes, assume that I'm looking for a way to combine the following coroutines [1]:
IAsyncOperation<IBuffer> read(IBuffer buffer, uint32_t count)
{
    auto&& result{ co_await socket_.InputStream().ReadAsync(buffer, count, InputStreamOptions::None) };
    co_return result;
}
with socket_ being a StreamSocket instance.
And the timeout coroutine:
IAsyncAction timeout()
{
    co_await 5s;
}
I'm looking for a way to combine these coroutines in a way, that returns as soon as possible, either once the data has been read, or the timeout has expired.
These are the options I have evaluated so far:
C++20 coroutines: As far as I understand P1056R0, there is currently no library or language feature "to enable creation and composition of coroutines".
Windows Runtime supplied asynchronous task types, ultimately derived from IAsyncInfo: Again, I didn't find any facilities that would allow me to combine the tasks the way I need.
Concurrency Runtime: This looks promising, particularly the when_any function template looks to be exactly what I need.
From that it looks like I need to go with the Concurrency Runtime. However, I'm having a hard time bringing all the pieces together. I'm particularly confused about how to handle exceptions, and whether cancellation of the respective other concurrent task is required.
The question is two-fold:
Is the Concurrency Runtime the only option (UWP application)?
What would an implementation look like?
[1] The methods are internal to the application. It is not required to have them return Windows Runtime compatible types.
I think the easiest would be to use the concurrency library. You need to modify your timeout to return the same type as the first method, even if it returns null.
(I realize this is only a partial answer...)
My C++ sucks, but I think this is close...
std::array<concurrency::task<IBuffer>, 2> tasks =
{
    concurrency::create_task([buffer, count] { return read(buffer, count).get(); }),
    concurrency::create_task([modifiedTimeout] { return modifiedTimeout.get(); })
};
concurrency::when_any(std::begin(tasks), std::end(tasks)).then([](std::pair<IBuffer, size_t> result)
{
    // do something with result.first
});
As suggested by Lee McPherson in another answer, the Concurrency Runtime looks like a viable option. It provides tasks that can be combined with others and chained up using continuations, and it integrates seamlessly with the Windows Runtime asynchronous model (see Creating Asynchronous Operations in C++ for UWP Apps). As a bonus, including the <pplawait.h> header provides adapters that allow concurrency::task class template instantiations to be used as C++20 coroutine awaitables.
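For illustration, here is a minimal sketch of what those adapters enable (assumption: MSVC with coroutine support enabled, e.g. /await on older toolsets; the function names are made up):

#include <ppltasks.h>
#include <pplawait.h>

// With <pplawait.h> included, concurrency::task can both be awaited and be
// used as a coroutine return type.
concurrency::task<int> answer_async()
{
    co_return 42;
}

concurrency::task<int> consume_async()
{
    // Suspends until answer_async's task completes, then resumes with its value.
    int value = co_await answer_async();
    co_return value + 1;
}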
I wasn't able to answer all of the questions, but this is what I eventually came up with. For simplicity (and ease of verification) I'm using Sleep in place of the actual read operation, and return an int instead of an IBuffer.
Composition of tasks
The ConcRT provides several ways to combine tasks. Given the requirements, concurrency::when_any can be used to create a task that completes as soon as any of the supplied tasks completes. When only 2 tasks are supplied as input, there's also a convenience operator (operator||) available.
Exception propagation
Exceptions raised from either of the input tasks do not count as a successful completion. When used with the when_any task, throwing an exception will not satisfy the wait condition. As a consequence, exceptions cannot be used to break out of combined tasks. To deal with this I opted to return a std::optional, and raise appropriate exceptions in a then continuation.
Task cancellation
This is still a mystery to me. It appears that once a task satisfies the wait condition of the when_any task, there is no requirement to cancel the respective other outstanding tasks. Once those complete (successfully or otherwise), they are silently dealt with.
Following is the code, using the simplifications mentioned earlier. It creates a task consisting of the actual workload and a timeout task, both returning a std::optional. The then continuation examines the return value, and throws an exception in case there isn't one (i.e. the timeout_task finished first).
#include <Windows.h>
#include <cstdint>
#include <iostream>
#include <optional>
#include <ppltasks.h>
#include <stdexcept>

using namespace concurrency;

task<int> read_with_timeout(uint32_t read_duration, uint32_t timeout)
{
    auto&& read_task
    {
        create_task([read_duration]
        {
            ::Sleep(read_duration);
            return std::optional<int>{42};
        })
    };
    auto&& timeout_task
    {
        create_task([timeout]
        {
            ::Sleep(timeout);
            return std::optional<int>{};
        })
    };
    auto&& task
    {
        (read_task || timeout_task)
        .then([](std::optional<int> result)
        {
            if (!result.has_value())
            {
                throw std::runtime_error("timeout");
            }
            return result.value();
        })
    };
    return task;
}
The following test code
int main()
{
    try
    {
        auto res1{ read_with_timeout(3000, 5000).get() };
        std::cout << "Succeeded. Result = " << res1 << std::endl;

        auto res2{ read_with_timeout(5000, 3000).get() };
        std::cout << "Succeeded. Result = " << res2 << std::endl;
    }
    catch (std::runtime_error const& e)
    {
        std::cout << "Failed. Exception = " << e.what() << std::endl;
    }
}
produces this output:
Succeeded. Result = 42
Failed. Exception = timeout

Using std::async for function call from thread?

I am running two parallel threads. One of the threads needs to make an asynchronous function call upon the fulfillment of a conditional statement. I have found that std::async performs asynchronous function calls using launch policies, but I have a few questions regarding them.
Is there a policy to make it wait for a conditional statement to happen? According to what I have understood from this post, there are a variety of wait_for and wait_until functions, but I have found that they take a time duration or time point; can these be suitably modified?
Will there be automatic destructor call at the end of the async function?
Will the function call affect the parent thread's functioning in any manner?
When you call std::async, you pass it the address of a function to call (along with any parameters you want to pass to that function).
It then creates a thread to execute that function asynchronously (strictly speaking, with the default launch policy the implementation is allowed to defer the call until the result is requested). It returns a future, which the parent thread can use to get the result from the child. Typical usage is something like this:
#include <string>
#include <future>
#include <iostream>
#include <chrono>
#include <thread>   // for std::this_thread::sleep_for

std::chrono::seconds sec(1);

int process() {
    std::cerr << "Doing something slow\n";
    std::this_thread::sleep_for(sec);
    std::cerr << "done\n";
    return 1;
}

int main(int argc, char **argv) {
    if (argc > 1) {
        auto func = std::async(process);
        std::cerr << "doing something else that takes a while\n";
        std::this_thread::sleep_for(sec);
        func.get();
    }
}
Note that we only have to use .get on the returned future to synchronize the threads. The sleep_for is just to simulate each thread doing something that takes at least a little while--if they finished too quickly, they wouldn't get a chance to really execute in parallel, since the first to run could finish and exit before the second got a chance to start running at all.
If you want to create explicit threads (i.e., create instances of std::thread), that's when you end up using wait_for and friends on a std::condition_variable (or can end up using them, anyway). With futures (i.e., what you create with std::async) you just use .get to wait for the thread to finish and retrieve whatever the thread function returned.
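As for waiting on the conditional statement itself: there is no launch policy for that. One approach (an assumption on my part, not something std::async provides) is to guard the condition with a std::condition_variable and start the work only once it holds:

#include <condition_variable>
#include <future>
#include <iostream>
#include <mutex>

std::mutex m;
std::condition_variable cv;
bool condition_met = false;   // the "conditional statement", guarded by m

int process()
{
    std::cerr << "Doing something slow\n";
    return 1;
}

int main()
{
    // The async task blocks until the condition holds, then runs process().
    auto func = std::async(std::launch::async, []
    {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return condition_met; });
        lock.unlock();
        return process();
    });

    // Meanwhile the parent thread eventually fulfils the condition.
    {
        std::lock_guard<std::mutex> lock(m);
        condition_met = true;
    }
    cv.notify_one();

    std::cout << "result: " << func.get() << '\n';
}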

Are futures a safe way to check for individual thread completion?

I've been toying around with Boost's futures and was wondering if they were an acceptable and safe way to check if an individual thread has completed.
I had never used them before so most of the code I wrote was based off of Boost's Synchronization documentation.
#include <iostream>
#include <boost/thread.hpp>
#include <boost/thread/future.hpp>

int calculate_the_answer_to_life_the_universe_and_everything()
{
    boost::this_thread::sleep(boost::posix_time::seconds(10));
    return 42;
}

int main()
{
    boost::packaged_task<int> task(calculate_the_answer_to_life_the_universe_and_everything);
    boost::unique_future<int> f(task.get_future());

    boost::thread th(boost::move(task));

    while (!f.is_ready())
    {
        std::cout << "waiting!" << std::endl;
        boost::this_thread::sleep(boost::posix_time::seconds(1));
    }

    std::cout << f.get() << std::endl;

    th.join();
}
This appears to wait for the calculate_the_answer_to_life_the_universe_and_everything() thread to return 42. Could something possibly go wrong with this?
Thanks!
Yes, futures are safe to use in that way, and the code is (at a quick glance) safe and correct.
There are other ways to do the same thing (e.g. using an atomic_flag, or mutex-protected data, or many others) but your code is a valid way to do it.
N.B. instead of f.is_ready() and this_thread::sleep(seconds(1)) you could use f.wait_for(seconds(1)), which would wake as soon as the result is made ready. That waits directly on the future, instead of checking the future, then waiting using a separate mechanism, then checking, then waiting with a separate mechanism etc.
And instead of packaged_task and thread you could use async.
Using C++11 names instead of boost ...
int main()
{
    auto f = std::async(std::launch::async, calculate_the_answer_to_life_the_universe_and_everything);

    while (f.wait_for(std::chrono::seconds(1)) == std::future_status::timeout)
        std::cout << "waiting!" << std::endl;

    std::cout << f.get() << std::endl;
}
I've been toying around with Boost's futures and was wondering if they were an acceptable and safe way to check if an individual thread has completed.
Futures are a mechanism for asynchronous evaluation, not a synchronization mechanism. Although some of the primitives do have synchronization properties (future<>::get), the library is not designed to synchronize, but rather to fire a task and ignore it until the result is needed.

How do I make a function asynchronous in C++?

I want to call a function which will be asynchronous (I will give a callback when this task is done).
I want to do this in a single thread.
This can be done portably with modern C++ or even with old C++ and some boost. Both boost and C++11 include sophisticated facilities to obtain asynchronous values from threads, but if all you want is a callback, just launch a thread and call it.
1998 C++/boost approach:
#include <iostream>
#include <string>
#include <boost/thread.hpp>

void callback(const std::string& data)
{
    std::cout << "Callback called because: " << data << '\n';
}

void task(int time)
{
    boost::this_thread::sleep(boost::posix_time::seconds(time));
    callback("async task done");
}

int main()
{
    boost::thread bt(task, 1);
    std::cout << "async task launched\n";
    boost::this_thread::sleep(boost::posix_time::seconds(5));
    std::cout << "main done\n";
    bt.join();
}
2011 C++ approach (using gcc 4.5.2, which needs this #define)
#define _GLIBCXX_USE_NANOSLEEP

#include <iostream>
#include <string>
#include <thread>
#include <chrono>

void callback(const std::string& data)
{
    std::cout << "Callback called because: " << data << '\n';
}

void task(int time)
{
    std::this_thread::sleep_for(std::chrono::seconds(time));
    callback("async task done");
}

int main()
{
    std::thread bt(task, 1);
    std::cout << "async task launched\n";
    std::this_thread::sleep_for(std::chrono::seconds(5));
    std::cout << "main done\n";
    bt.join();
}
As of C++11, plain C++ does have a concept of threads, but the most concise way to call a function asynchronously is to use the C++11 std::async facility along with futures. This ends up looking a lot like the way you'd do the same thing in pthreads, but it's 100% portable across OSes and platforms:
Say your function has a return value... int MyFunc(int x, int y)
#include <future>
Just do:
// This function is called asynchronously
std::future<int> EventualValue = std::async(std::launch::async, MyFunc, x, y);
Catch? How do you know when it's done? (The barrier.)
Eventually, do:
int MyReturnValue = EventualValue.get(); // block until MyFunc is done
Note it's easy to do a parallel for loop this way - just create an array of futures.
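As a sketch of that parallel-for idea (MyFunc here is just a stand-in for your own function):

#include <future>
#include <iostream>
#include <vector>

// Stand-in for your own MyFunc(int x, int y).
int MyFunc(int x, int y) { return x * y; }

int main()
{
    std::vector<std::future<int>> results;

    // Launch each iteration asynchronously (launch::async forces real concurrency).
    for (int i = 0; i < 8; ++i)
        results.push_back(std::async(std::launch::async, MyFunc, i, i + 1));

    // The "barrier": get() blocks until the corresponding iteration has finished.
    for (auto& f : results)
        std::cout << f.get() << '\n';
}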
You can't in plain C++. You'll need to use an OS-specific mechanism, and you need a point where execution is suspended in a way that allows the OS to execute the callback. E.g. for Windows, QueueUserAPC - the callback will be executed when you e.g. SleepEx or WaitForSingleObjectEx
The long answer involves implementing your own task scheduler and wrapping your "function" up into one or more tasks. I'm not sure you want the long answer. It certainly doesn't allow you to call something, completely forget about it, and then be notified when that thing is done; however if you are feeling ambitious, it will allow you to simulate coroutines on some level without reaching outside of standard C++.
The short answer is that this isn't possible. Use multiple threads or multiple processes. I can give you more specific information if you divulge what OS/platform you're developing for.
There are two bits to doing this.
Firstly, packing up the function call so that it can be executed later.
Secondly, scheduling it.
It is the scheduling which depends on other aspects of the implementation. If you know "when this task is done", then that's all you need - to go back and retrieve the "function call" and call it. So I am not sure this is necessarily a big problem.
The first part is then really about function objects, or even function pointers. The latter are the traditional callback mechanism from C.
For a FO, you might have:
class Callback
{
public:
    virtual void callMe() = 0;
};

You derive from this and implement it as you see fit for your specific problem. The asynchronous event queue is then nothing more than a list<> of callbacks:

std::list<Callback*> asyncQ; // Or shared_ptr or whatever.
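To make that concrete, here is a minimal sketch of a single thread draining such a queue (PrintDone is a made-up example callback, and shared_ptr is used instead of raw pointers for ownership):

#include <iostream>
#include <list>
#include <memory>

class Callback
{
public:
    virtual ~Callback() {}
    virtual void callMe() = 0;
};

// A made-up concrete callback for one specific task.
class PrintDone : public Callback
{
public:
    void callMe() { std::cout << "task finished\n"; }
};

int main()
{
    std::list<std::shared_ptr<Callback>> asyncQ;

    // Somewhere, a completed task enqueues its callback...
    asyncQ.push_back(std::make_shared<PrintDone>());

    // ...and at a convenient point the single thread drains the queue.
    while (!asyncQ.empty())
    {
        asyncQ.front()->callMe();
        asyncQ.pop_front();
    }
}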
I'm not sure I understand what you want, but if it's how to make use of a callback: It works by defining a function pointer, like this (untested):
// Define callback signature.
typedef void (*DoneCallback)(int reason, char *explanation);

// A method that takes a callback as argument.
void doSomeWorkWithCallback(DoneCallback done)
{
    ...
    if (done) {
        done(1, "Finished");
    }
}

//////
// A callback
void myCallback(int reason, char *explanation)
{
    printf("Callback called with reason %d: %s", reason, explanation);
}

/////
// Put them together
doSomeWorkWithCallback(myCallback);
As others have said, you technically can't in plain C++.
However, you can create a manager that takes your tasks and does time-slicing or time scheduling. With each function call, the manager uses a timer to measure how long the call took; if it took less time than scheduled, and the manager thinks it can fit in another call without going over the remaining time, it calls it again; if a call goes over its allotted time, that function gets less time to run in the next update. This involves building a somewhat complex system to handle it for you.
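A very rough sketch of that manager idea, with made-up names and a made-up time budget:

#include <chrono>
#include <deque>
#include <functional>
#include <iostream>

// Runs queued tasks on the single thread, but stops handing out work once
// the time budget for the current update is spent.
class Manager
{
public:
    void schedule(std::function<void()> task) { tasks_.push_back(std::move(task)); }

    // Run tasks until the queue is empty or the budget is exhausted.
    void update(std::chrono::milliseconds budget)
    {
        auto start = std::chrono::steady_clock::now();
        while (!tasks_.empty() &&
               std::chrono::steady_clock::now() - start < budget)
        {
            std::function<void()> task = std::move(tasks_.front());
            tasks_.pop_front();
            task();   // time spent here is accounted for on the next loop check
        }
    }

private:
    std::deque<std::function<void()>> tasks_;
};

int main()
{
    Manager mgr;
    for (int i = 0; i < 5; ++i)
        mgr.schedule([i] { std::cout << "task " << i << '\n'; });

    mgr.update(std::chrono::milliseconds(2));  // one "frame" of work
    mgr.update(std::chrono::milliseconds(2));  // leftovers run on the next frame
}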
Or, if you have a specific platform in mind, you could use threading, or create another process to handle the work.

Multithreading using the boost library

I wish to call a function multiple times simultaneously, using threads, so as to utilize the machine's capability to the fullest. This is an 8-core machine, and my requirement is to push the machine's CPU usage from 10% towards 100%.
My requirement is to use Boost. Is there any way I can accomplish this using the boost thread or threadpool library? Or some other way to do it?
Also, if I have to call multiple functions with different parameters each time (with separate threads), what is the best way to do this? [using boost or not using boost] and how?
#include <iostream>
#include <fstream>
#include <string.h>
#include <time.h>
#include <boost/thread/mutex.hpp>
#include <boost/bind.hpp>

using namespace std;
using boost::mutex;
using boost::thread;

int threadedAPI1( );
int threadedAPI2( );
int threadedAPI3( );
int threadedAPI4( );

int threadedAPI1( ) {
    cout << "Thread0" << endl;
}

int threadedAPI2( ) {
    cout << "Thread1" << endl;
}

int threadedAPI3( ) {
    cout << "Thread2" << endl;
}

int threadedAPI4( ) {
    cout << "Thread3" << endl;
}

int main(int argc, char* argv[]) {
    boost::threadpool::thread_pool<> threads(4);

    // start a new thread that calls the "threadLockedAPI" function
    threads.schedule(boost::bind(&threadedAPI1,0));
    threads.schedule(boost::bind(&threadedAPI2,1));
    threads.schedule(boost::bind(&threadedAPI3,2));
    threads.schedule(boost::bind(&threadedAPI4,3));

    // wait for the threads to finish
    threads.wait();

    return 0;
}
The above is not working and I am not sure why? :-(
I suggest that you read up on the documentation for the functions you use. From your comment in James Hopkin's answer, it seems like you don't know what boost::bind does, but simply copy-pasted the code.
boost::bind takes a function (call it f), and optionally a number of parameters, and returns a function which, when called, calls f with the specified parameters.
That is, boost::bind(threadedAPI1, 0)() (creating a function which takes no arguments and calls threadedAPI1() with the argument 0, and then calling that) is equivalent to threadedAPI1(0).
Since your threadedAPI functions don't actually take any parameters, you can't pass any arguments to them. That is just fundamental C++. You can't call threadedAPI1(0), but only threadedAPI1(), and yet when you call the function, you try (via boost::bind) to pass the integer 0 as an argument.
So the simple answer to your question is to simply define threadedAPI1 as follows:
int threadedAPI1(int i);
However, one way to avoid the boost::bind calls is to call a functor instead of a free function when launching the thread. Declare a class something like this:
struct threadedAPI {
    // Constructor takes the arguments you wish to pass to the thread,
    // and saves them in the class instance.
    threadedAPI(int i) : i(i) {}

    // The () operator is what actually runs when the thread starts; being a
    // regular member function, it can see the 'i' set by the constructor.
    void operator()() {
        cout << "Thread" << i << endl; // One reusable functor; pass a different `i` each time.
    }

private:
    int i;
};
Finally, depending on what you need, plain threads may be better suited than a threadpool. In general, a thread pool only runs a limited number of threads, so it may queue up some tasks until one of its threads finish executing. It is mainly intended for cases where you have many short-lived tasks.
If you have a fixed number of longer-duration tasks, creating a dedicated thread for each may be the way to go.
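For example, the functor above can be handed directly to dedicated boost::thread instances (a sketch; the class is repeated here so the example is self-contained, and the output lines may interleave since the stream is not synchronized per line):

#include <boost/thread.hpp>
#include <iostream>

struct threadedAPI {
    explicit threadedAPI(int i) : i(i) {}
    void operator()() { std::cout << "Thread" << i << std::endl; }
private:
    int i;
};

int main() {
    // One dedicated thread per task.
    boost::thread t0(threadedAPI(0));
    boost::thread t1(threadedAPI(1));
    boost::thread t2(threadedAPI(2));
    boost::thread t3(threadedAPI(3));

    t0.join();
    t1.join();
    t2.join();
    t3.join();
}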
You're binding parameters to functions that don't take parameters:
int threadedAPI1( );
boost::bind(&threadedAPI1,0)
Just pass the function directly if there are no parameters:
threads.schedule(&threadedAPI1)
If your interest is in using your processor efficiently, then you might want to consider Intel's Threading Building Blocks: http://www.intel.com/cd/software/products/asmo-na/eng/294797.htm. I believe it is designed specifically to utilise multi-core processors, while Boost threads leave control up to the user (i.e. TBB will thread differently on a quad core compared to a dual core).
As for your code, you are binding parameters to functions which don't take any. Why? You might also want to check the return code from schedule.