interrupting boost::asio::async_receive_from from another thread - c++

I am reading multicast input using async_receive_from. The idea is that when I detect a gap, I will notify another, helper thread to request and fetch the gap-filling messages. While this is in the works, the main thread will continue to receive and queue any incoming messages. This part I can implement: the helper thread can use WaitForSingleObject, and I can pass it the details through shared memory and signal an event to wake it up.
But once it completes its task, how do I get the helper thread to interrupt the async_receive_from in the initiating thread? And when the initiating thread comes out of the read, how does it know who interrupted it, so it knows what to do next?

Why are you using shared memory between threads?
That aside, the mechanism you should use for executing something in the context of the io_service that is managing the socket is post(). You can post any arbitrary handler to the io_service, and it will execute in that context. Quite easy, really... Because you are calling async_receive_from, it's not blocking, i.e. the io_service can dispatch other events, which is why the post will work.
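A minimal sketch of that approach, assuming the socket lives on an io_service run by the main thread (the names socket_, io_service_ and notify_gap_filled are placeholders, not from the question):

    #include <boost/asio.hpp>

    boost::asio::io_service io_service_;
    boost::asio::ip::udp::socket socket_(io_service_);

    // Called from the helper thread once it has fetched the gap-fill messages.
    void notify_gap_filled()
    {
        io_service_.post([] {
            // This lambda runs in the io_service thread, so touching the
            // socket here is safe. The pending async_receive_from completes
            // immediately with boost::asio::error::operation_aborted.
            socket_.cancel();
        });
    }

    void on_receive(const boost::system::error_code& ec, std::size_t /*bytes*/)
    {
        if (ec == boost::asio::error::operation_aborted) {
            // Interrupted via notify_gap_filled(); inspect shared state
            // (e.g. an atomic flag) to decide what to do next.
            return;
        }
        // ... process the datagram, then re-arm async_receive_from ...
    }

The operation_aborted error code is how the read "knows" it was interrupted; who interrupted it and why still has to be carried in shared state that you set before posting.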

Related

Beast websocket idiomatic shutdown?

I have a C++ program. The main thread creates a new thread that is dedicated solely to handling a websocket. This new thread reads and writes using, for example, Boost.Beast's async_read() calls. It is much like https://www.boost.org/doc/libs/1_69_0/libs/beast/example/websocket/server/async/websocket_server_async.cpp where each async call gives rise to another async call.
But what is the idiomatic way for the main thread to tell the websocket thread to shut down, given that there will likely always be some async read or write call outstanding, such as an async_read() idly waiting for the server to eventually send data? A shutdown would need to do something like cancel the remaining async_read() without introducing a race condition where the read starts just before the cancel.
Use boost::asio::post to post a lambda to the io_context (using the appropriate strand if necessary) which calls cancel on the underlying basic_socket. Pending operations will complete immediately with boost::asio::error::operation_aborted. Inside your completion handler you can check basic_socket::is_open to know whether or not you should attempt new asynchronous calls.
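A rough sketch of that, assuming a session class whose handlers all run on one strand (the member names are illustrative):

    #include <boost/asio.hpp>
    #include <boost/beast.hpp>

    namespace asio  = boost::asio;
    namespace beast = boost::beast;

    struct session {
        asio::strand<asio::io_context::executor_type> strand_;
        beast::websocket::stream<asio::ip::tcp::socket> ws_;
        beast::flat_buffer buffer_;

        explicit session(asio::io_context& ioc)
            : strand_(asio::make_strand(ioc)), ws_(ioc) {}

        void start_read() {
            ws_.async_read(buffer_, asio::bind_executor(strand_,
                [this](beast::error_code ec, std::size_t n) { on_read(ec, n); }));
        }

        void on_read(beast::error_code ec, std::size_t) {
            if (ec == asio::error::operation_aborted)
                return; // cancelled below; do not start another read
            // ... handle the message, then call start_read() again ...
        }

        // Called from the main thread to request shutdown.
        void request_stop() {
            asio::post(strand_, [this] {
                // Serialized with on_read/start_read by the strand, so a
                // read cannot sneak in between the check and the cancel.
                ws_.next_layer().cancel();
            });
        }
    };

Because the cancel runs on the same strand as the read handlers, there is no window in which a new async_read starts after the cancel has already happened.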

Behavior of boost::asio::io_service thread pool during uneven load

I am having a hard time finding out how exactly a thread pool built with boost::asio::io_service behaves.
The documentation says:
Multiple threads may call the run() function to set up a pool of
threads from which the io_service may execute handlers. All threads
that are waiting in the pool are equivalent and the io_service may
choose any one of them to invoke a handler.
I would imagine that when threads executing run() take a handler to execute, they execute it and then come back to wait for the next handler. While executing a handler, a thread is not considered waiting, and hence no new handlers are assigned to it. Is that correct? Or does the io_service assign work to threads without considering whether they are busy or not?
I am asking because in one project that we are using (OSRM), which uses a boost::asio::io_service-based thread pool to handle incoming HTTP requests, I noticed that long-running requests sometimes block other, fast requests, even though more threads and cores are available.
When executing a handler, a thread is not considered waiting, and hence no new handlers to execute are assigned to it. Is that correct?
Yes. It's a pull model queue.
A notable "apparent" exception is when strands are used: handlers wrapped in a strand do synchronize with other handlers running on that same strand.
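A tiny demo of that distinction, using the io_service-style API to match the question (names are illustrative):

    #include <boost/asio.hpp>
    #include <iostream>
    #include <thread>

    int main()
    {
        boost::asio::io_service io;
        boost::asio::io_service::strand strand(io);

        for (int i = 0; i < 4; ++i)
            strand.post([i] {
                // Even with two threads in the pool, at most one of these
                // runs at any moment: the strand serializes them.
                std::cout << "strand handler " << i << '\n';
            });

        std::thread t([&] { io.run(); }); // second pool thread
        io.run();                         // this thread joins the pool too
        t.join();
    }

Post the same handlers directly to io instead of through the strand and both threads may run them concurrently, which is the plain pull-model behaviour described above.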

ASIO Canonical way of running user-provided callbacks in the I/O Service Thread

I wrote a TCP client using ASIO that I would like to make a little bit more versatile by adding a user-defined callback for what happens when a packet is received. I am implementing a simple file transfer protocol along with a client protocol that talks to a server, and the only difference should be what happens when data is read.
ELO = Event Loop Owner, i.e. the thread running io_service::run()
When socket->async_read_some(...) is called from the ELO, the data is stored in a std::shared_ptr<char> buffer. I would like to pass this buffer to a user-defined callback thread with the signature std::function<void(std::shared_ptr<char>)>. However, I'm afraid that spawning a thread in a std::shared_ptr<std::thread> and detaching it is not the best way to go, because the stack of a detached thread is not unwound.
With some testing, I've found that if the user provides a callback that takes a mutex, there is a non-negligible chance that the main thread could exit without the mutex being unlocked (even when using std::lock_guard).
Is there any 'safe' way to call a callback in a new thread in an asynchronous program without blocking the event loop or violating thread safety?
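One hedged possibility (not from the question, and the class name is made up): rather than detaching threads, keep a dedicated worker io_service for user callbacks and post into it, so the worker can be joined and its stack unwound on shutdown:

    #include <boost/asio.hpp>
    #include <functional>
    #include <memory>
    #include <thread>

    class callback_pool {
        boost::asio::io_service io_;
        boost::asio::io_service::work work_{io_}; // keeps run() from returning
        std::thread worker_{[this] { io_.run(); }};
    public:
        // Safe to call from the ELO: returns immediately, never blocks it.
        void dispatch(std::function<void(std::shared_ptr<char>)> cb,
                      std::shared_ptr<char> buf)
        {
            io_.post([cb = std::move(cb), buf = std::move(buf)] { cb(buf); });
        }

        ~callback_pool() {
            io_.stop();     // abandons queued callbacks; drop work_ instead to drain them
            worker_.join(); // joined, so the callback thread's stack unwinds
        }
    };

Because the worker is joined rather than detached, a lock held inside a running callback is released before the main thread exits.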

Boost::Asio - wake up a thread when there are handlers to run

The common way to process Asio handlers is to have a thread (or several threads) either polling the io_service regularly (i.e. calling io_service::poll()) to run the handlers, or using io_service::run(), which blocks the thread until there is work to do, in which case the thread will run the required handlers and then either return or go back to sleep.
However, I want to make a system where a thread is not only responsible for running Asio handlers, but also needs to sync up with another thread using a condition variable. Basically, I want the thread to do all of these:
Wake up when there are Asio handlers that need to be processed (i.e. if I call io_service::poll(), one or more handlers will be processed).
Wake up when there is non-Asio work to be done, indicated by my condition variable.
Sleep otherwise.
In other words, I need a way for Asio to signal me that there are handlers ready to execute, without having to busy-wait or continuously poll. Ideally, Asio will somehow signal a thread when work is available, and that thread will in turn wake up my main worker thread, which will process Asio handlers. That worker thread will also be occasionally woken up by yet another thread, and will process other, non-Asio related work.
Is this even feasible, or should I reconsider how I am designing my system?
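Not from the thread, but one common workaround worth sketching: instead of a condition variable, funnel the non-Asio work into the same io_service with post(), so a single io_service::run() loop sleeps when idle and wakes for both kinds of work:

    #include <boost/asio.hpp>

    boost::asio::io_service io;
    boost::asio::io_service::work keep_alive(io); // stops run() returning when idle

    // The "yet another thread" calls this instead of notifying a condvar.
    void submit_non_asio_work()
    {
        io.post([] {
            // ... the non-Asio work executes here, interleaved with
            // ordinary Asio completion handlers ...
        });
    }

    int main()
    {
        // The worker thread: blocks when there is nothing to do, and wakes
        // for Asio completions and posted work alike, with no busy-waiting.
        io.run();
    }

This sidesteps the need for Asio to signal readiness externally, at the cost of requiring the non-Asio work to be expressible as posted handlers.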

How to execute async operations sequentially with c++ boost::asio?

I would like to have a way to add async tasks from multiple threads and execute them sequentially in a C++ boost::asio application.
Update: I would like to make a server-to-server communication with only one persistent socket between them, and I need to sequence the multiple requests through it. It needs to keep the incoming requests in a queue, fire the top one, wait for its response and pick up the next. I'm trying to avoid using ZeroMQ because it needs a dedicated thread.
Update2: OK, here is what I ended up with: the concurrent worker threads are "queued" for the use of the server-to-server socket with a simple mutex. The communication is blocking: write, wait for the response, read, then release the mutex. Simple, isn't it :)
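A rough sketch of that Update2 approach, with placeholder names and framing:

    #include <boost/asio.hpp>
    #include <mutex>
    #include <string>

    std::mutex socket_mutex;                        // "queues" the worker threads
    boost::asio::io_service io;
    boost::asio::ip::tcp::socket server_socket(io); // the one persistent socket

    std::string send_request(const std::string& request)
    {
        std::lock_guard<std::mutex> lock(socket_mutex);
        boost::asio::write(server_socket, boost::asio::buffer(request));
        char reply[1024];
        std::size_t n = server_socket.read_some(boost::asio::buffer(reply));
        return std::string(reply, n);               // mutex released on return
    }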
From the ASIO documentation:
Asynchronous completion handlers will only be called from threads that
are currently calling io_service::run().
If you're already calling io_service::run() from multiple threads, you can wrap your async calls in an io_service::strand as described here.
Not sure if I understand you correctly either, but what's wrong with the approach in the chat client example? Messages are posted to the io_service thread, queued while a write is in progress, and popped/sent in the write completion handler. If more messages were added in the meantime, the write handler launches the next async write.
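The pattern from that example, roughly (free functions for brevity; the queue must only be touched from the io_service thread):

    #include <boost/asio.hpp>
    #include <deque>
    #include <string>

    boost::asio::io_service io;
    boost::asio::ip::tcp::socket sock(io);
    std::deque<std::string> write_queue; // accessed only from the io thread

    void do_write();

    void on_write(const boost::system::error_code& ec, std::size_t)
    {
        if (ec) return;
        write_queue.pop_front();
        if (!write_queue.empty())
            do_write(); // chain the next queued write
    }

    void do_write()
    {
        // front() stays alive in the queue until on_write runs.
        boost::asio::async_write(sock, boost::asio::buffer(write_queue.front()),
                                 &on_write);
    }

    // Safe to call from any thread: the lambda runs in the io_service thread.
    void deliver(std::string msg)
    {
        io.post([m = std::move(msg)]() mutable {
            bool write_in_progress = !write_queue.empty();
            write_queue.push_back(std::move(m));
            if (!write_in_progress)
                do_write();
        });
    }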
Based on your comment to Sean, I also don't understand the benefit of having multiple threads calling io_service::run, since you can only execute one async_write/async_read on one persistent socket at a time, i.e. you can only call async_write again once the handler has returned. The number of calling threads might require you to lock the queue with a mutex, though.
AFAICT the benefit of having multiple threads calling io_service::run is to increase the scalability of a server that is serving multiple requests simultaneously.