How to structure a "future inside a future" in C++

I am building a system where a top layer communicates with a driver layer, which in turn communicates with an I2C layer. I have put my I2C driver behind a message queue to make it thread safe and to serialize access to the I2C bus.
To return the reply to the driver, the I2C layer returns a std::future containing a byte buffer that is filled in when the I2C bus read actually happens.
All this works and I like it.
My problem is that I also want the driver to return a future to the top layer. However, this future then depends on the previous one: when the I2C future delivers a byte buffer, the driver has to interpret and condition those bytes to produce the higher-level answer. I am having trouble making this dependency "nice".
For example, I have a driver for a PCT2075 temperature sensor chip, and I would like to have a:
future<double> getTemperature()
method in that driver, but so far I can't think of a better way than to make an intermediate "future-holder" class and then return that:
class PCT2075
{
public:
    class TemperatureFuture
    {
    private:
        std::future<std::pair<std::vector<uint8_t>, bool>> temperatureData;

    public:
        TemperatureFuture(std::future<std::pair<std::vector<uint8_t>, bool>> f);

        template<class Clock, class Duration>
        std::future_status wait_until(const std::chrono::time_point<Clock, Duration>& timeout_time) const;
        void wait() const; // wait and wait_until just wait on the internal future
        double get();
    };

    TemperatureFuture getTemperature();
};
This structure works and I can go forward with it, but for some reason I am not super happy with it (though I can't quite explain why... :/ ).
So my questions are:
Is there some pattern that can make this better?
Would it make sense to let TemperatureFuture inherit directly from std::future? (I have heard that "do not inherit from std classes" is a good rule.)
Or is this just how you do it, and I should stop worrying about nothing?
P.S. I also have another method whose answer relies on two I2C reads, and thus two different futures. It is possible to rework this into a one-to-one dependency, but the current design handles the one-to-many variant, so it would be nice if a proposed alternative could too.

You are looking for an operation called then, which, as commenters note, is sadly missing even in C++20.
However, it's not hard to write a then yourself.
template<typename Fun, typename... Ins>
std::invoke_result_t<Fun, Ins...> invoke_future(Fun fun, std::future<Ins>... futs) {
    return fun(futs.get()...);
}

template<typename Fun, typename... Ins>
std::future<std::invoke_result_t<Fun, Ins...>> then(Fun&& fun, std::future<Ins>... futs) {
    return std::async(std::launch::deferred, invoke_future<Fun, Ins...>,
                      std::forward<Fun>(fun), std::move(futs)...);
}
I expect something like this wasn't standardised because it makes loads of assumptions about how the function should be run once the result is ready.
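To make this concrete, here is a compilable sketch of how this then could wire up the asker's future<double> getTemperature(). The helpers are restated so the sketch is self-contained, and the I2C stand-in (readTempRegister, the register bytes, the device address) is invented for illustration; only the scaling follows the PCT2075's 11-bit, 0.125 °C/LSB temperature format.

```cpp
#include <cassert>
#include <cstdint>
#include <future>
#include <utility>
#include <vector>

// Restatement of the answer's helpers so the sketch is self-contained.
template<typename Fun, typename... Ins>
std::invoke_result_t<Fun, Ins...> invoke_future(Fun fun, std::future<Ins>... futs) {
    return fun(futs.get()...);
}
template<typename Fun, typename... Ins>
std::future<std::invoke_result_t<Fun, Ins...>> then(Fun&& fun, std::future<Ins>... futs) {
    return std::async(std::launch::deferred, invoke_future<Fun, Ins...>,
                      std::forward<Fun>(fun), std::move(futs)...);
}

// Hypothetical stand-in for the I2C layer: yields the raw Temp register bytes.
using RawReply = std::pair<std::vector<uint8_t>, bool>;
std::future<RawReply> readTempRegister() {
    return std::async(std::launch::deferred, [] {
        return RawReply{{0x1A, 0x80}, true};   // 0x1A80 -> 26.5 C on a PCT2075
    });
}

// future<double> getTemperature(), built by chaining on the raw future.
std::future<double> getTemperature() {
    return then([](RawReply r) {
        int16_t raw = static_cast<int16_t>((r.first[0] << 8) | r.first[1]);
        return (raw >> 5) * 0.125;             // 11-bit value, 0.125 C/LSB
    }, readTempRegister());
}
```

The one-to-many case works the same way: pass two futures to then and a lambda taking two arguments.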

A future that just reinterprets the result of a previous future, or gathers multiple futures, is a good job for std::async(std::launch::deferred, ...). This doesn't launch any thread; it executes on request, when the future is waited on.
std::future<int> f1 = std::async([]() -> int { return 1; });
std::future<float> f2 = std::async(
    std::launch::deferred,
    [](std::future<int>&& f) -> float { return f.get(); },
    std::move(f1));
std::printf("%f\n", f2.get());
The downside is that certain features will not work on the deferred future, e.g. wait_until (it just reports std::future_status::deferred).
If, instead, you need to launch a new asynchronous action once the first future is ready (e.g. send another I2C message, or compute the higher-level result in a thread pool), C++ does not offer a better solution than making this part of your original task. For example, your I2C driver could accept a list of std::function callbacks.
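Here is a minimal sketch of that callback route. All names (I2cRequest, processRequest, the register bytes and device address) are invented for illustration; the point is that the driver bridges the I2C thread's callback back to a std::future via a std::promise.

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <future>
#include <memory>
#include <vector>

// The request carries a completion callback that the I2C thread invokes
// once the bus transaction is done.
struct I2cRequest {
    uint8_t deviceAddress;
    uint8_t reg;
    std::function<void(std::vector<uint8_t>)> onComplete;  // runs on I2C thread
};

// Toy synchronous stand-in for the queued I2C worker.
void processRequest(const I2cRequest& req) {
    req.onComplete({0x1A, 0x80});  // pretend the bus returned these bytes
}

std::future<double> getTemperature() {
    auto p = std::make_shared<std::promise<double>>();
    std::future<double> fut = p->get_future();
    I2cRequest req{0x48, 0x00, [p](std::vector<uint8_t> b) {
        int16_t raw = static_cast<int16_t>((b[0] << 8) | b[1]);
        p->set_value((raw >> 5) * 0.125);  // PCT2075: 11-bit, 0.125 C/LSB
    }};
    processRequest(req);  // in the real system: push onto the I2C queue
    return fut;
}
```

Note the shared_ptr to the promise: the callback may outlive the driver call, so the promise must be kept alive until the I2C thread fulfills it.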

Related

How to customize a coroutine state in boost asio coroutine?

The thing is that I would like to create a global instance that each coroutine could use separately to keep, for instance, a list of named scopes, e.g. for logging purposes,
so that when boost::asio::spawn is called, a new custom state would be attached to the newly run coroutine.
As a guess, as a workaround it could be done by means of a global std::unordered_map indexed by something similar to std::this_thread::get_id(), but for coroutines. Yet right now I'm not aware of anything like that.
It would be perfect if this could be accomplished via a custom asio::yield_context. It already keeps a cancellation_slot and an executor, so why can't it keep extra state? I have tried to dig into the Boost sources for yield_context, but I'm rather lost there, which is why I would appreciate some insights on this matter.
You need to implement await_transform for a custom type. That allows you to communicate with your promise type. Of course, that's an implementation detail of the library, so you haven't seen it yet.
Here's the await_transform for this_coro::executor_t:
// This await transformation obtains the associated executor of the thread of
// execution.
auto await_transform(this_coro::executor_t) noexcept
{
    struct result
    {
        awaitable_frame_base* this_;

        bool await_ready() const noexcept
        {
            return true;
        }

        void await_suspend(coroutine_handle<void>) noexcept
        {
        }

        auto await_resume() const noexcept
        {
            return this_->attached_thread_->get_executor();
        }
    };

    return result{this};
}
You can create your own awaitable type with your own promise type, which adds your custom state.
There's a non-trivial amount of code here that will be daunting to write. (It is to me.) You should probably dive in with a simpler coroutine tutorial (as "simple" as that can be, which is not very).
I've seen a number of good talks:
Gor Nishanov's from 2016, which I've personally watched and played along with, so I know it does a good job: https://www.youtube.com/watch?v=8C8NnE1Dg4A
A much more recent one by Andreas Fertig from 2022 (which I haven't seen): https://www.youtube.com/watch?v=8sEe-4tig_A
The same event had Andreas Weis's "Deciphering C++ Coroutines - A Diagrammatic Coroutine Cheat Sheet" (which can also be found elsewhere as "Deciphering C++ Coroutines - A Visual Approach").
There is also a blog post series by Raymond Chen that seems very apt. In particular, this installment should land you close to the mark: "C++ coroutines: Snooping in on the coroutine body".

How to implement nested protocols with boost::asio?

I'm trying to write a server that handles protocol A over protocol B.
Protocol A is HTTP or RTSP, and protocol B is a simple sequence of binary packets:
[packet length][...encrypted packet data...]
So I want to use things like that:
boost::asio::async_read_until(socket, inputBuffer, "\r\n\r\n", read_handler);
However, instead of socket I want to use some pseudo-socket connected to the protocol B handlers.
I have some ideas:
Forget about async_read, async_read_until, etc., and write two state machines for A and B.
Hybrid approach: async_read_* for protocol B, a state machine for A.
Make an internal proxy server.
I don't like (1) and (2) because
It's hard to decouple A from B (I want to be able to disable protocol B).
Ugly.
(3) just looks ugly :-)
So the question is: how do I implement this?
I have done something like your option (2) in the past: using async_read calls to read the header first, and then another async_read to read the length and forward the remaining data to a hand-written state machine. But I wouldn't necessarily recommend that to you. You might get zero-copy I/O for protocol B, but doing an I/O call to read the 4-8 byte header is quite wasteful when you know there is always data coming behind it. And the problem is that your network abstractions for the two layers will be different, so the decoupling problem that you mention really exists.
Using a fixed-length buffer, only calling async_read, and then processing the data with two nested state machines (like you are basically proposing in option (1)) works quite well. Each state machine would simply get pushed some newly received data (either directly from the socket or from the lower state machine) and process it. This means A would not be coupled to B here, as you could push the data to the A state machine directly from asio if the input/output data formats match.
Similar to this are the patterns used in the Netty and Facebook Wangle libraries, where you have handlers that get data pushed from a lower handler in the pipeline, perform their actions based on that input, and output their decoded data to the next handler. These handlers can be state machines but, depending on the complexity of the protocol, don't necessarily have to be. You can take some inspiration from that, e.g. look at the Wangle docs: https://github.com/facebook/wangle/blob/master/tutorial.md
If you don't want to push your data from one protocol handler to another but rather actively read it (most likely in an asynchronous fashion), you could also design some interfaces (like a ByteReader, which implements an async_read(...) method, or a PacketReader, which allows reading complete messages instead of bytes), implement them in your code (ByteReader also through asio), and use them on the higher level. Thereby you go from a push approach to data processing to a pull approach, which has some advantages and disadvantages.
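As a concrete illustration of the push style, here is a small self-contained framing layer for the [packet length][data] envelope (decryption omitted, names invented): it buffers partial reads and pushes each complete payload to the next handler, which could be the protocol A state machine.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <vector>

// Framing layer for protocol B: accumulates raw bytes, strips the
// 4-byte big-endian length prefix, and pushes complete payloads onward.
class FrameDecoder {
public:
    explicit FrameDecoder(std::function<void(std::vector<uint8_t>)> next)
        : next_(std::move(next)) {}

    // Called with whatever the socket produced; may contain partial frames.
    void onData(const uint8_t* data, size_t len) {
        buf_.insert(buf_.end(), data, data + len);
        for (;;) {
            if (buf_.size() < 4) return;                       // need length
            uint32_t n = (uint32_t(buf_[0]) << 24) | (uint32_t(buf_[1]) << 16)
                       | (uint32_t(buf_[2]) << 8)  |  uint32_t(buf_[3]);
            if (buf_.size() < 4 + n) return;                   // frame incomplete
            next_(std::vector<uint8_t>(buf_.begin() + 4, buf_.begin() + 4 + n));
            buf_.erase(buf_.begin(), buf_.begin() + 4 + n);
        }
    }

private:
    std::vector<uint8_t> buf_;
    std::function<void(std::vector<uint8_t>)> next_;
};
```

Disabling protocol B then just means wiring the socket's bytes straight into the next handler instead of through the FrameDecoder.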
I won't go over boost::asio, since this seems more a design-pattern question than a networking one.
I'd use the State Pattern. This way you could change protocol on the fly.
class net_protocol {
protected:
    socket sock;

public:
    net_protocol(socket _sock) : sock(_sock) {}
    virtual ~net_protocol() = default;
    virtual net_protocol* read(Result& r) = 0;
};

class http_protocol : public net_protocol {
public:
    http_protocol(socket _sock) : net_protocol(_sock) {}
    net_protocol* read(Result& r) override {
        boost::asio::async_read_until(sock, inputBuffer, "\r\n\r\n", read_handler);
        // set result, or have read_handler set it
        return this;
    }
};

class binary_protocol : public net_protocol {
public:
    binary_protocol(socket _sock) : net_protocol(_sock) {}
    net_protocol* read(Result& r) override {
        // read 4 bytes as an int size, then size bytes into a buffer, using boost::asio::async_read
        // set result, or have read_handler set it
        // change-strategy example:
        //if (change_strategy)
        //    return new http_protocol(sock);
        return this;
    }
};
You'd initialize the starting protocol with
std::unique_ptr<net_protocol> proto(new http_protocol(sock));
then you'd read with:
//Result result;
proto.reset(proto->read(result));
EDIT: the "if () return new protocol" returns are, in fact, a state machine.
If you are concerned about those async reads and thus can't decide which protocol to return, have the protocol classes call a notify method from their read_handler:
class caller {
    std::unique_ptr<net_protocol> protocol;
    boost::mutex io_mutex;

public:
    void notify_new_strategy(net_protocol* p) {
        boost::unique_lock<boost::mutex> scoped_lock(io_mutex);
        protocol.reset(p);
    }

    void notify_new_result(const Result r) { ... }
};
If you don't need to change the protocol on the fly, you have no need of State: read() could simply return a Result (or void, calling caller::notify_new_result(const Result) if async). You could still use the same approach (two concrete classes and an abstract one), and it would probably be something very close to the Strategy pattern.

c++ futures/promises like javascript?

I've been writing some javascript and one of the few things I like about the environment is the way it uses promises/futures to make handlers for asynchronous events.
In C++ you have to call .get() on a future, and it blocks until the result of the future is available, but in JavaScript you can write .then(fn) and it will call the function when the result is ready. Critically, it does this on the same thread as the caller at a later time, so there are no thread-synchronization issues to worry about, at least not the same ones as in C++.
I'm thinking of something like this in C++:
auto fut = asyncImageLoader("cat.jpg");
fut.then([](Image img) { std::cout << "Image is now loaded\n" << img; });
Is there any way to achieve this in C++? Clearly it will need some kind of event queue and event loop to handle dispatching the callbacks. I could probably eventually write the code to do most of this, but I wanted to see whether there is a way to achieve the goal easily using standard facilities.
A .then function for std::future was proposed for C++17; it ended up in the Concurrency TS as std::experimental::future::then, but it has not made it into the standard itself.
Boost's implementation of future (which is compliant with the current standard but provides additional features as extensions) already provides parts of that functionality in newer versions (1.53 or newer).
For a more well-established solution, take a look at the Boost.Asio library, which allows easy implementation of asynchronous control flows like those provided by future.then. Asio's concept is slightly more complicated, as it requires access to a central io_service object to dispatch asynchronous callbacks, and it requires manual management of worker threads. But in principle it is a very good match for what you asked for.
I don't like C++'s future, so I wrote a JavaScript-style promise library, here:
https://github.com/xhawk18/promise-cpp
/* Convert callback to a promise (Defer) */
Defer myDelay(boost::asio::io_service &io, uint64_t time_ms) {
    return newPromise([&io, time_ms](Defer &d) {
        setTimeout(io, [d](bool cancelled) {
            if (cancelled)
                d.reject();
            else
                d.resolve();
        }, time_ms);
    });
}

void testTimer(io_service &io) {
    myDelay(io, 3000).then([&] {
        printf("timer after 3000 ms!\n");
        return myDelay(io, 1000);
    }).then([&] {
        printf("timer after 1000 ms!\n");
        return myDelay(io, 2000);
    }).then([] {
        printf("timer after 2000 ms!\n");
    }).fail([] {
        printf("timer cancelled!\n");
    });
}

int main() {
    io_service io;
    testTimer(io);
    io.run();
    return 0;
}
Compared with a JavaScript promise:
Use newPromise instead of js's new Promise
Use lambda instead of js function
Use d.resolve instead of js's resolve
Use d.reject instead of js's reject
You can resolve/reject with any type of parameters, and need not worry about the trouble of <> in C++ templates.
While then is only proposed, you can implement your own infix then via the named-operator technique.
Create a struct then_t {}; and a static then_t then;. Now overload operator* on the left and on the right so that std::future<bool> *then* lambda creates a std::async that waits on the future, passes the result to the lambda, and then returns the lambda's return value.
This requires a lot of care and attention, as you have to carefully create copies to avoid dangling references, and mess around with rvalue and lvalue overloads to make it fully efficient.
The end syntax you get is:
auto fut = asyncLoader("cat.jpg");
fut *then* [&](Image img) { std::cout << "Image loaded: " << img; };
which is pretty close to what you want.
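A minimal compilable sketch of the technique (rvalue-only, copies taken deliberately so nothing dangles once the async task runs; a full version would add lvalue and shared_future overloads):

```cpp
#include <cassert>
#include <future>
#include <utility>

// The "named operator" trick: *then* is two overloaded operator* calls
// around a tag object.
struct then_t {};
static then_t then;

template <typename T>
struct then_lhs { std::future<T> fut; };

// Left side: future *then* captures the future into a helper object.
template <typename T>
then_lhs<T> operator*(std::future<T>&& f, then_t) {
    return then_lhs<T>{std::move(f)};
}

// Right side: helper * lambda launches a deferred task that waits on the
// future and feeds its value to the continuation.
template <typename T, typename F>
auto operator*(then_lhs<T>&& lhs, F f)
    -> std::future<decltype(f(std::declval<T>()))> {
    return std::async(std::launch::deferred,
        [](std::future<T> fut, F fn) { return fn(fut.get()); },
        std::move(lhs.fut), std::move(f));
}
```

Usage matches the syntax above: `auto r = std::move(fut) *then* [](int x) { return x * 2; };`.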
If you are really smart, you could even have it also support:
auto fut = asyncLoader("cat.jpg");
fut *then* [=] { std::cout << "Image loaded: " << fut.get(); };
which gets rid of some of the boilerplate and would be useful sometimes. This requires asyncLoader to return a std::shared_future instead of a future.
You could pass an object that, for example, implements a Runnable interface to the "then" method of the Future class. Once the Future has finished its work, it calls the "run" method of the passed object.
Take a look at https://github.com/Naios/continuable . It supports Javascript style .then(). It also supports exceptions with .fail() (instead of .catch()). There is a great talk about it here https://www.youtube.com/watch?v=l6-spMA_x6g
Use JavaScript-like Promises for C++20. It relies on C++20 coroutines, supports ES6 await/async semantics and, very importantly, supports move, so you can write wrappers for frameworks like asio (e.g. because asio::ip::tcp::socket cannot be copied).
Link: https://github.com/virgil382/JSLikePromise
The question is a bit old, but here is a JavaScript-like promise library (consisting of a single header that you simply need to include) that aims to do exactly what you ask for, together of course with some sort of async I/O library to implement the actual asyncImageLoader().
https://github.com/alxvasilev/cpp-promise

How to execute a method in another thread?

I'm looking for a solution for this problem in C or C++.
edit: To clarify: this is on a Linux system. Linux-specific solutions are absolutely fine; cross-platform is not a concern.
I have a service that runs in its own thread. This service is a class with several methods, some of which need to run in the service's own thread rather than in the caller's thread.
Currently I'm using wrapper methods that create a structure with input and output parameters, insert the structure into a queue, and either return (if a "command" is asynchronous) or wait for its execution (if a "command" is synchronous).
On the thread side, the service wakes, pops a structure from the queue, figures out what to execute, and calls the appropriate method.
This implementation works, but adding new methods is quite cumbersome: define a wrapper, a structure with parameters, and a handler. I was wondering whether there is a more straightforward way to code this kind of model: a class method that executes on the class's own thread instead of in the caller's thread.
edit - kind of conclusion:
It seems that there's no de facto way to implement what I asked that doesn't involve extra coding effort.
I'll stick with what I came up with: it ensures type safety, minimizes locking, allows sync and async calls, and its overhead is fairly modest.
On the other hand, it requires a bit of extra coding, and the dispatch mechanism may become bloated as the number of methods increases. Registering the dispatch methods at construction time, or having the wrappers do that work, seems to solve the issue, remove a bit of overhead, and also remove some code.
My standard reference for this problem is here.
Implementing a Thread-Safe Queue using Condition Variables
As @John noted, this uses Boost.Thread.
I'd be careful about the synchronous case you described here. It's easy to run into performance problems if the producer (the sending thread) waits for a result from the consumer (the service thread). What happens if you get 1000 async calls, filling up the queue with a backlog, followed by a sync call from each of your producer threads? Your system will 'play dead' until the queue backlog clears, freeing up those sync callers. Try to decouple them using async only, if you can.
There are several ways to achieve this, depending upon the complexity you want to accept. Complexity of the code is directly proportional to the flexibility desired. Here's a simple one (and quite well used):
Define a class corresponding to each piece of functionality your server exposes.
Each of these classes implements a function called execute and takes basic structures called input args and output args.
Inside the service, register these method classes at initialization time.
Once a request comes to the thread, it will have only two args, Input and Output, which are the base classes for the more specialized arguments required by the different method classes.
Then you write your service class as a mere delegator: it takes the incoming request and passes it on to the respective method class based on the ID or name of the method (set during initial registration).
I hope it makes sense; a very good example of this approach is XmlRpc++ (a C++ implementation of XmlRpc; you can get the source code from SourceForge).
To recap:
struct Input {
    virtual ~Input() = 0;
};

struct Output {
    virtual ~Output() = 0;
};

struct MethodInterface {
    virtual int32_t execute(Input* __input, Output* __output) = 0;
};

// Write specialized method classes taking specialized input/output classes

class MyService {
    void registerMethod(std::string __method_name, MethodInterface* __method);

    // external i/f
    int32_t execute(std::string __method, Input* __input, Output* __output);
};
You will still be using the queue mechanism, but you won't need any wrappers.
IMHO, if you want to decouple method execution from the thread context, you should use the Active Object pattern (AOP).
However, you will need to use the ACE framework, which supports many OSes, e.g. Windows, Linux, VxWorks.
You can find detailed information here.
Also, AOP is a combination of the Command, Proxy and Observer patterns; if you know their details, you may implement your own AOP. Hope it helps.
In addition to using Boost.Thread, I would look at boost::function and boost::bind. That said, it seems fair to pass untyped (void*) arguments to the target methods and let those methods cast to the correct type (a typical idiom for languages like C#).
Hey now Rajivji, I think you have it upside-down. Complexity of code is inversely proportional to flexibility. The more complex your data structures and algorithms are, the more restrictions you place on acceptable inputs and behaviour.
To the OP: your description seems perfectly general, and this is the only solution, although there are different encodings of it. The simplest may be to derive a class from:
struct Xqt { virtual void xqt(){} virtual ~Xqt(){} };
and then have a thread-safe queue of pointers to Xqt. The service thread just pops the queue into px, calls px->xqt(), and then deletes px. The most important derived class is this one:
struct Dxqt : Xqt {
    Xqt *delegate;
    Dxqt(Xqt *d) : delegate(d) {}
    void xqt() { delegate->xqt(); }
};
because "all problems in Computer Science can be solved by one more level of indirection", and in particular this class doesn't delete its delegate. This is much better than using a flag, for example, to determine whether the closure object should be deleted by the server thread.

C++ threaded class design from non-threaded class

I'm working on a library doing audio encoding/decoding. The encoder shall be able to use multiple cores (i.e. multiple threads, using the Boost library), if available. What I have right now is a class that performs all encoding-relevant operations.
The next step I want to take is to make that class threaded, so I'm wondering how to do this.
I thought about writing a thread class, creating n threads for n cores, and then calling the encoder with the appropriate arguments. But maybe this is overkill and there is no need for another class, so I'm going to make use of the plain "user interface" for thread creation.
I hope there are any suggestions.
Edit: I'm forced to use multiple threads for the pre-processing, which creates statistics of the input data using CUDA. So if there are multiple cards in a system, the only way to use them in parallel is to create multiple threads.
Example: 4 Files, 4 different calculation units (separate memories, unique device id). Each of the files shall be executed on one calculation unit.
What i have right now is:
class Encoder {
    [...]
public:
    void worker(T data, int devId);
    [...]
};

So I think the best way is to call worker from threads spawned in main():
boost::thread w1(&Encoder::worker, data0, 0);
boost::thread w2(&Encoder::worker, data1, 1);
boost::thread w3(&Encoder::worker, data2, 2);
boost::thread w4(&Encoder::worker, data3, 3);
and not to implement a thread-class.
Have a look at OpenMP, if your compiler supports it. It can be as easy as adding a compiler flag and spraying on a few #pragmas.
I think the problem is more at the design level. Can you elaborate a bit on what classes you have? I work on CUDA too, and usually one creates an interface (aka the Facade pattern) for the architecture-specific (CUDA) layer.
Edit: After reading the update I think you are doing the right thing.
Keep the Encoder logic inside the class and use plain boost::threads to execute different units of work. Just pay attention to thread safety inside Encoder's methods.
Your current suggestion only works if Encoder::worker is static; I assume that is the case. One concern would be whether your current implementation supports a way to gracefully abort an encoding job. I suppose there is some method in your code of the form:
while( MoreInputSamples ) {
    // Do more encoding
}
This may be modified with an additional condition that checks whether the job has received an abort signal. I work on video decoding a lot, and I like to have my decoder classes like this:
class Decoder {
public:
    void DoOneStepOfDecoding( AccessUnit & Input );
};
The output usually goes to some ring-buffer. This way, I can easily wrap this in both single-and multithreaded scenarios.
The preceding code
boost::thread w1(&Encoder::worker, data0, 0);
is not valid unless worker is static.
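If worker is a non-static member, you can still launch it directly by passing the object pointer as the first argument; boost::thread and std::thread use the same syntax. A minimal sketch (this Encoder is a stand-in for the asker's class, with each device writing to its own slot so no locking is needed):

```cpp
#include <cassert>
#include <thread>
#include <vector>

class Encoder {
public:
    // Non-static member: runs one unit of work for one device.
    void worker(std::vector<int> data, int devId) {
        long sum = 0;
        for (int v : data) sum += v;   // pretend "encoding"
        results[devId] = sum;          // one slot per device, no sharing
    }
    long results[4] = {0, 0, 0, 0};
};

void encodeAll(Encoder& enc) {
    // Object pointer first, then the member function's own arguments.
    std::thread w1(&Encoder::worker, &enc, std::vector<int>{1, 2}, 0);
    std::thread w2(&Encoder::worker, &enc, std::vector<int>{3, 4}, 1);
    w1.join();
    w2.join();
}
```

With boost::thread the equivalent is `boost::thread w1(&Encoder::worker, &enc, data0, 0);`.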
There is Boost.Task on the review schedule, which allows you to call any callable asynchronously, as follows:
boost::tasks::async(
    boost::tasks::make_task(&Encoder::worker, data0, 0));
This results in Encoder::worker being called on a default thread pool. The function returns a handle that allows you to know when the task has been executed.