How do I write my own async operations with (non-boost) Asio? - c++

I want to write my own asynchronous function based on Asio 1.19.2 (without boost).
The motivation is to create an interface that works with a JSON-RPC-like protocol (not JSON-RPC exactly).
On the socket it would look something like
--> { "method": "subtract", "params": [42, 23], "id": 1 }
<-- ... some irrelevant notifications, etc.
<-- { "result": 19, "id": 1 } <-- ID matches initial call
I have this figured out using std::futures (without Asio). That method signature is something like:
// near useless, because futures don't allow for continuation
std::future<std::string> json_rpc(std::string method, json::array args);
To implement that, I just needed to:
Keep the socket open in the background (it could also be passed into the method or whatever).
Store a set of pending promises.
When you call json_rpc, make a new promise, add it to the set, return the get_future().
Send the RPC on the socket.
If you get a { "result" ... } message on the socket, find the pending future with that ID and set_value() its promise with the result.
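The steps above can be sketched as a small promise registry. This is illustrative only: the class and member names are made up, JSON handling and the actual socket write are elided, and payloads are plain strings.

```cpp
#include <future>
#include <map>
#include <mutex>
#include <string>

// Illustrative sketch of the promise-registry approach described above.
class RpcClient {
public:
    // Register a promise under a fresh id, "send" the request on the
    // socket, and hand the caller the matching future.
    std::future<std::string> json_rpc(const std::string& method) {
        std::lock_guard<std::mutex> lock(mutex_);
        int id = next_id_++;
        auto& promise = pending_[id];  // default-constructs the promise
        // ... write {"method": method, "id": id} to the socket here ...
        return promise.get_future();
    }

    // Called by the read loop whenever a {"result", "id"} message arrives.
    void on_result(int id, const std::string& result) {
        std::lock_guard<std::mutex> lock(mutex_);
        auto it = pending_.find(id);
        if (it != pending_.end()) {
            it->second.set_value(result);  // wakes the waiting future
            pending_.erase(it);
        }
    }

private:
    std::mutex mutex_;
    int next_id_ = 1;
    std::map<int, std::promise<std::string>> pending_;
};
```

Unmatched ids (the "irrelevant notifications") simply find no entry in the map and are ignored.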
How do I translate this future-based approach into an asio-based one, that has a similar interface to e.g. async_read_until? In other words, I want to implement something like:
template <typename CompletionToken>
void async_json_rpc(std::string method, json::array args, CompletionToken&& completion_token);
I found this tutorial from Beast, but it uses Boost.Beast, seems outdated, and the only asynchronous part of it is where it calls async_read_some. My code is not really able to make use of these asio async_ "primitives", since my completion handler needs to be invoked at a time outside of the OS's control.
I also found these examples from the asio docs, but they have the same limitation of just wrapping other async_ calls. They also get increasingly unmanageably complex, for application code.
The core of my question, I suppose, is: I have an async function that basically works like it uses asio::use_future as its CompletionToken. How do I let it use any kind of CompletionToken? What is the promise.set_value() equivalent for an asio completion handler?
I'm very interested in simple to understand solutions, so that I'm not the only one on my team able to maintain these sorts of functions.
To clarify further (from my comments):
I already have "servicing completions from a service thread" implemented, but I only support futures (if I were writing an asio async method, it's as if I've only made the asio::use_future overload). I'm asking how to support CompletionHandlers in general, i.e. how to write the other async_* overloads that can take lambdas or yield_contexts.

I think you are asking how to implement the "strand"? I've done things similar to this.
In general, you need a worker thread that services a queue. It will loop and execute each item in the queue, but then sleep when it is empty. Adding something to the queue will signal the thread to wake up, if necessary.
A fancier variation is to use a thread pool and common queue, pruning excess threads if you have too many and creating more if there is too much work for the existing threads. Optimally this requires OS support (I've implemented this for Windows Completion Ports).
When you want to do something async, you put the promise on the queue rather than starting a dedicated thread for that.
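A minimal sketch of that worker-and-queue pattern (single worker, a condition variable for the wake-up signal; illustrative, not production code):

```cpp
#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>

// One worker thread draining a task queue; it sleeps when the queue is
// empty and wakes when a task is pushed.
class WorkerQueue {
public:
    WorkerQueue() : worker_([this] { run(); }) {}

    ~WorkerQueue() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            done_ = true;
        }
        cv_.notify_one();
        worker_.join();  // drains remaining tasks, then exits
    }

    void post(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            tasks_.push_back(std::move(task));
        }
        cv_.notify_one();  // wake the worker if it is sleeping
    }

private:
    void run() {
        std::unique_lock<std::mutex> lock(mutex_);
        for (;;) {
            cv_.wait(lock, [this] { return done_ || !tasks_.empty(); });
            if (done_ && tasks_.empty()) return;
            auto task = std::move(tasks_.front());
            tasks_.pop_front();
            lock.unlock();
            task();  // run outside the lock
            lock.lock();
        }
    }

    std::mutex mutex_;
    std::condition_variable cv_;
    std::deque<std::function<void()>> tasks_;
    bool done_ = false;
    std::thread worker_;  // declared last: starts after the other members
};
```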
Or... were you looking at coroutines, to do co_await calls in C++?

Strictly speaking, you can write custom async_ operations in Asio (that is what the async_result/async_initiate machinery is for), but Asio's core is a wrapper around epoll/kqueue/IOCP and is primarily geared toward non-blocking socket I/O.
The good news is that you may not need that machinery: both std::future<std::string> json_rpc(...) and void async_json_rpc(..., CompletionCallback) can be implemented using the normal Asio async_read API. You either resolve the promise or invoke your callback in the completion handler of the async_read that reads the RPC response from the socket.

Related

Design pattern to ensure on_close() is called once after all async r/w's are finished?

This question is asked from the context of Boost ASIO (C++).
Say you are using a library to do some async i/o on a socket, where:
you are always waiting to receive data
you occasionally send some data
Since you are always waiting to receive data (e.g. you trigger another async_read() from your completion handler), at any given time, you will either have:
an async read operation in progress
an async read operation in progress and an async write operation in progress
Now say you wanted to call some other function, on_close(), when the connection closes. In Boost ASIO, a connection error or cancel() will cause any outstanding async reads/writes to give an error to your completion handler. But there is no guarantee whether you are in scenario 1 or 2, nor is there a guarantee that the write will error before the read or vice versa. So to implement this, I can only imagine adding two variables called is_reading and is_writing, which are set to true by async_read() and async_write() respectively, and set to false by the completion handlers. Then, from either completion handler, when there is an error and I think the connection may be closing, I would check whether there is still an async operation in the opposite direction, and call on_close() if not.
The code, more or less:
atomic_bool is_writing;
atomic_bool is_reading;
...
void read_callback(error_code& error, size_t bytes_transferred)
{
    is_reading = false;
    if (error)
    {
        if (!is_writing) on_close();
    }
    else
    {
        process_data(bytes_transferred);
        async_read(BUF_SIZE); // this will set is_reading to true
    }
}

void write_callback(error_code& error, size_t bytes_transferred)
{
    is_writing = false;
    if (error)
    {
        if (!is_reading) on_close();
    }
}
Assume that this is a single-threaded app, but the thread is handling multiple sockets so you can't just let the thread end.
Is there a better way to design this? To make sure on_close() is called after the last async operation finishes?
One of the most common patterns is to use enable_shared_from_this and bind all completion handlers ("continuations") to it.
That way, when the async call chain ends (be it due to error or regular completion), the object the shared_ptr refers to is freed.
You can see many, many examples of mine using Asio/Beast on this site.
You can put your close logic in a destructor, or if that, too, involves async calls, you can post it on the same strand/chain.
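That lifetime trick can be sketched without Asio at all: every pending handler holds a copy of shared_from_this(), so the destructor, and with it on_close(), runs exactly once after the last handler is gone. The queue below is a hypothetical stand-in for io_context::post, and the names are illustrative:

```cpp
#include <functional>
#include <memory>
#include <vector>

// Stand-in for io_context::post: handlers queued for later execution.
std::vector<std::function<void()>> pending_handlers;

class Session : public std::enable_shared_from_this<Session> {
public:
    explicit Session(bool& closed) : closed_(closed) {}
    ~Session() { on_close(); }  // close logic lives in the destructor

    void start() {
        auto self = shared_from_this();  // the handler keeps the session alive
        pending_handlers.push_back([self] {
            // A read completion handler would go here. When the chain ends
            // (error or regular completion) we simply don't queue another
            // handler, and the last copy of `self` going away runs ~Session.
        });
    }

private:
    void on_close() { closed_ = true; }
    bool& closed_;
};
```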
Advanced Ideas
If your traffic is full-duplex and one side fails in a way that necessitates cancelling the other direction, you can post cancellation on the strand and the async call will abort (e.g. with error_code boost::asio::error::operation_aborted).
Even more involved would be to create a custom IO service, where the lifetime of certain "backend" entities is governed by "handle" types. This is probably often overkill, but if you are writing a foundational framework that will be used in a larger number of places, you might consider it. I think this is a good starter: How to design proper release of a boost::asio socket or wrapper thereof (be sure to follow the comment links).
You can leave error handling logic only inside read_callback.

Custom creation of QFuture

I've faced quite an odd problem with QtConcurrent, mostly because of strange programming desires; maybe it's just an XY problem, but...
So, there is my code trying to communicate with the database; backend code, actually (on Qt, yes). It has to work quickly and handle some requests, so I need a thread pool. Establishing a connection is, as is well known, a very time-consuming operation, so I need persistent database connections, which results in persistent threads (QSqlDatabase cannot be moved around between threads). It is also quite natural to want asynchronous request handling, hence the need for a simple way to pass requests to the persistent threads.
Nothing too complex; let's assume there already exists some boilerplate in a form like...
// That's what I want for now
QFuture<int> res = workers[i]->async(param1, param2);
// OR
// That's what I DO NOT want to get
workers[i]->async(param1, param2, [](QFuture<int> res) { // QFuture to pass exceptions
    // callback here
});
That can be done, for sure. Why not std::future? Well, it is much easier to use QFutureWatcher and its signals for notifications about a result's readiness. Pure C++ notification solutions are much more complex, and callbacks are also something that has to be dragged through the class hierarchy. Each worker interfaces a thread with DB connections, obviously.
Okay, all of that can be written, but... a custom thread pool would mean losing the QtConcurrent convenience, and there seem to be only risky ways to create a QFuture that could be returned by the custom worker. QThreadPool is of no use, because creating persistent runnables in it would be a whole big story. What's more, the boilerplate I've briefly described is going to be part of the project's core, used in many places; it's not something that can easily be replaced by a hundred hand-made thread managers.
In short: if I could construct a QFuture for my results, the problem would be solved.
Could anyone point me to a solution or a workaround? Would be grateful for any bright ideas.
UPD:
@VladimirBershov offered a good modern solution which implements the observer pattern. After some googling I've found a QPromise library. Of course, constructing a custom QFuture is still hacky and can only be done via the undocumented QFutureInterface class, but a "promise-like" solution still makes asynchronous calls neater, as far as I can judge.
You can use AsyncFuture library as a custom QFuture creation tool or ideas source:
AsyncFuture - Use QFuture like a Promise object
QFuture is used together with QtConcurrent to represent the result of an asynchronous computation. It is a powerful component for multi-threaded programming. But its usage is limited to the results of threads; it doesn't work with asynchronous signals emitted by a QObject. And it is a bit of trouble to set up the listener function via QFutureWatcher.
AsyncFuture is designed to enhance this functionality and offer a better way to use QFuture for asynchronous programming. It provides a Promise-like object interface. This project is inspired by AsynQt and RxCpp.
Features:
Convert a signal from QObject into a QFuture object
Combine multiple futures with different types into a single future object
Use Future like a Promise object
Chainable Callback - Advanced multi-threading programming model
Convert a signal from QObject into a QFuture object:
#include "asyncfuture.h"
using namespace AsyncFuture;

// Convert a signal from QObject into a QFuture object
QFuture<void> future = observe(timer, &QTimer::timeout).future();

/* Listen from the future without using QFutureWatcher<T> */
observe(future).subscribe([]() {
    // onCompleted. It is invoked when the observed future finishes successfully
    qDebug() << "onCompleted";
}, []() {
    // onCanceled
    qDebug() << "onCancel";
});
My idea is to use thread pools with a maximum of one thread available in each.
QThreadPool* persistentThread = new QThreadPool; // no need to write custom thread pool
persistentThread->setMaxThreadCount(1);
persistentThread->setExpiryTimeout(-1);
and then
QFuture<int> future_1 = QtConcurrent::run(persistentThread, func_1);
QFuture<int> future_2 = QtConcurrent::run(persistentThread, func_2);
func_2 will be executed after func_1 in the same single "persistent" thread.

QT c++ QFutures with signals without QConcurrent, like promises/observables?

I'm figuring out how to use Futures with non-blocking, event-driven code (in a separate thread or not; both), but how can I end the future from a slot (i.e. resolve the promise based on a signal)?
QByteArray RfidCardReader::startTask(QByteArray send)
{
    if (this->busy == false) {
        this->sendFrame(send);
        QObject::connect(this, &RfidCardReader::frameReady,
                         [=]() {/*this must be the startTask return*/ return this->int_read_buffer;});
    } else {
        throw 0; // Handle a queue instead
    }
}
QFuture<QByteArray> RfidCardReader::send(QByteArray passed_send)
{
    return QtConcurrent::run(QThreadPool::globalInstance(), this->startTask, passed_send);
}
Basically, what I want to do, using only one instance, is wrap a serial device (which is synchronous by nature) in a queue of Futures, but with only non-blocking code, using signals like &QIODevice::bytesWritten, &QIODevice::readyRead, etc. If there are better approaches to the problem, let me know; I would be glad to know the right way to write readable async code in Qt without blocking in separate threads.
A serial device is asynchronous by nature, and using the serial port concurrently from multiple threads is undefined behavior. You can certainly resolve futures from any thread, but there's nothing in Qt that will give you a future on the same thread. Recall that a QFuture is not a class that you can sensibly instantiate. The default-constructed class is useless.
To get an idea of how to handle asynchronous serial I/O, see for example this answer.
Then you can use the undocumented <QFutureInterface> header, and create your own implementation that can wrap higher-level aspects of your protocol, i.e. commands/requests. You could then group such futures, and use a single watcher to determine when they are done.
Your approach is quite interesting in fact, and I might develop a complete example.

Consume a std::future by connecting a QObject

I have some existing code that uses std::future/std::promise that I'd like to integrate with a Qt GUI cleanly.
Ideally, one could just:
std::future<int> future{do_something()};
connect(future, this, &MyObject::resultOfFuture);
and then implement resultOfFuture as a slot that gets one argument: the int value that came out of the std::future<int>. I've added this suggestion as a comment on QTBUG-50676. I like this best because most of my future/promises are not concurrent anyway, so I'd like to avoid firing up a thread just to wait on them. Also, type inference could then work between the future and the slot's parameter.
But it seems to me that this shouldn't be hard to implement using a wrapper Qt object (e.g., a version of QFutureWatcher that takes a std::future<int>). The two issues with a wrapper are:
the wrapper will have to be concrete in its result type.
the watcher would have to be concurrent in a thread?
Is there a best-practice to implement this sort of connection? Is there another way that can hook into the Qt main loop and avoid thread creation?
std::future is missing continuations. The only way to turn the result of a std::future asynchronously into a function call delivering the result is to launch a thread watching it, and if you want to avoid busy-waiting you need one such thread per std::future, as there is no way to lazy-wait on multiple futures at once.
There are plans to create a future with continuation (a then operation), but they are not in C++ as of C++17, let alone C++11.
You could write your own system of future/promise that mimics the interface of std::future and std::promise that does support continuations, or find a library that already did that.
A busy-wait solution that regularly checked if the future was ready could avoid launching a new thread.
In any case, std::experimental::then would make your problem trivial.
future.then([some_state](auto future) {
    try {
        auto x = future.get();
        // send message with x
    } catch (...) {
        // deal with exception
    }
});
You can write your own std::experimental::future or find an implementation to use yourself, but this functionality cannot be provided with a std::future without using an extra thread.
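The "extra thread per future" workaround mentioned above can be sketched like this. Note that then() here is a hypothetical free function, not the std::experimental interface; it mimics that shape by handing the whole future to the continuation so the continuation decides between get() and exception handling:

```cpp
#include <future>
#include <thread>
#include <utility>

// Attach a continuation to a std::future by burning one thread on the
// wait. Returns the watcher thread; the caller must join (or detach) it.
template <typename T, typename F>
std::thread then(std::future<T> fut, F continuation) {
    return std::thread(
        [fut = std::move(fut), continuation = std::move(continuation)]() mutable {
            // get() inside the continuation blocks until the promise is
            // resolved, then delivers the value or rethrows the exception.
            continuation(std::move(fut));
        });
}
```

One thread per pending future is exactly the cost the answer describes; there is no way to lazy-wait on many std::futures from a single thread.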

Using zmq socket in a callback function

I have a callback function which will be called in a thread that I don't have any access to or control over (a library created that thread, and requires me to expose the callback function to that thread). Since a zmq socket is not thread-safe, here is what I'm doing:
void callback () {
    zmq::socket_t tmp_sock(...); // create a socket which will be used only once
    ...
}
However, the callback is being invoked very frequently (hundreds of times per second). Is there a better solution to use the socket more efficiently? I ask because the ZeroMQ Guide says: If you are opening and closing a lot of sockets, that's probably a sign that you need to redesign your application.
Edit:
Based on @raffian's answer: a thread_local static variable (available since C++11) in the callback function works fine.
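That thread_local approach can be sketched in plain C++, with a counter standing in for the (expensive-to-construct) socket, since the point is just that construction happens once per thread:

```cpp
#include <atomic>
#include <thread>

// Counts how many times the "socket" gets constructed.
std::atomic<int> constructions{0};

struct FakeSocket {
    FakeSocket() { ++constructions; }  // stands in for an expensive connect
    void send() {}
};

void callback() {
    // Constructed on first use in each thread, then reused on every call;
    // destroyed when the thread exits.
    thread_local FakeSocket socket;
    socket.send();
}
```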
I asked the same question, but in Java:
The principles are the same: pre-initialize a pool of worker threads, each with a dedicated socket, ready to use for reading/writing. In the Java example, I use ThreadLocal; I suppose in C++ you can use #include <boost/thread/tss.hpp>. This approach is consistent with the ZeroMQ Guide: use sockets only in the threads that created them.
I'm not a C++ programmer, but if you use this approach, you'll have to do something like this:
void callback () {
    workerPool.doTask( new Task(args here));
    ...
}
Create a Task, with arguments, and send it to the workerPool, where it's assigned to a thread with a dedicated socket. You'll want to create the worker pool with enough threads to accommodate the load; nevertheless, concurrency shouldn't be a concern.
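That worker-pool shape can be sketched in plain C++. Resource here is a hypothetical stand-in for the per-thread zmq socket, and the class and method names are illustrative; the key property is that each Resource is created and used only by its owning worker thread, per the ZeroMQ threading rules:

```cpp
#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>
#include <vector>

// Stand-in for a zmq socket owned by one worker thread.
struct Resource { int uses = 0; };

class WorkerPool {
public:
    explicit WorkerPool(int n) {
        for (int i = 0; i < n; ++i)
            workers_.emplace_back([this] { run(); });
    }

    ~WorkerPool() {
        { std::lock_guard<std::mutex> lock(m_); done_ = true; }
        cv_.notify_all();
        for (auto& w : workers_) w.join();  // drains remaining tasks first
    }

    void doTask(std::function<void(Resource&)> task) {
        { std::lock_guard<std::mutex> lock(m_); tasks_.push_back(std::move(task)); }
        cv_.notify_one();
    }

private:
    void run() {
        Resource socket;  // dedicated to this thread, created once
        std::unique_lock<std::mutex> lock(m_);
        for (;;) {
            cv_.wait(lock, [this] { return done_ || !tasks_.empty(); });
            if (done_ && tasks_.empty()) return;
            auto task = std::move(tasks_.front());
            tasks_.pop_front();
            lock.unlock();
            task(socket);  // only this thread ever touches this Resource
            lock.lock();
        }
    }

    std::mutex m_;
    std::condition_variable cv_;
    std::deque<std::function<void(Resource&)>> tasks_;
    bool done_ = false;
    std::vector<std::thread> workers_;
};
```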