Asio: how to get async data back into synchronous method - c++

I'm using asio for async io, but there are some times where I'd like to "escape" the async world and get my data back into the regular synchronous world.
For instance, consider a std::deque<string> _data that is used in my async process (a single thread always running in the background), and where I've created async functions to read from / write to it.
What is the "natural" way to read from this deque synchronously from another thread?
So far I've used atomics to do this but this feels a bit "wrong".
For example:
std::string getDataSync()
{
    std::atomic<int> signal = 0;
    std::string str;
    asio::post(io_context, [this, &signal, &str] {
        str = _data.front();
        _data.pop_front();
        signal = 1;
    });
    while (signal == 0) { }
    return str;
}
Is it ok to do this?
Does asio provide anything cleaner for this kind of operation?
Thanks

If you want to synchronize two threads, then you have to use synchronization primitives (like std::atomic). Asio doesn't provide more advanced primitives, but the STL (and Boost) is full of them. For your simple example, you might want to use std::future and std::promise to move the top item of the deque to another thread.
Here is a small example. I assume that you don't want to access the deque directly from the other thread, just the top item. I also assume that io_context::run() is being executed on another thread.
#include <boost/asio.hpp>
#include <future>
#include <iostream>
#include <string>
#include <thread>

// Stand-in for popping the front element of your deque.
inline std::string pop_from_queue() { return "hello world"; }

int main() {
    auto context = boost::asio::io_context{};
    auto promise = std::promise<std::string>{};
    auto result = promise.get_future();
    boost::asio::post(context,
                      [&promise] { promise.set_value(pop_from_queue()); });
    auto thread = std::thread{[&context] { context.run(); }};
    std::cout << result.get(); // blocking
    thread.join();
}
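If you would rather keep the question's getDataSync() shape instead of managing the promise by hand, a std::packaged_task can be posted directly, since posted handlers only need to be move-constructible. A sketch, assuming the same _data and io_context members as in the question:

std::string getDataSync()
{
    // The task runs on the io_context thread, so _data is only ever touched there.
    std::packaged_task<std::string()> task([this] {
        std::string str = _data.front();
        _data.pop_front();
        return str;
    });
    auto fut = task.get_future();
    asio::post(io_context, std::move(task)); // posted handlers may be move-only
    return fut.get();                        // blocks until the task has run
}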

Related

Benefits of using std::stop_source and std::stop_token instead of std::atomic<bool> for deferred cancellation?

When I run several std::threads in parallel and need to cancel the other threads in a deferred manner if one thread fails, I use a std::atomic<bool> flag:
#include <atomic>
#include <chrono>
#include <cstdlib>
#include <iostream>
#include <thread>

void threadFunction(unsigned int id, std::atomic<bool>& terminated) {
    srand(id);
    while (!terminated) {
        int r = rand() % 100;
        if (r == 0) {
            std::cerr << "Thread " << id << ": an error occurred.\n";
            terminated = true; // without this line we have to wait for the other thread to finish
            return;
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
}

int main()
{
    std::atomic<bool> terminated = false;
    std::thread t1(&threadFunction, 1, std::ref(terminated));
    std::thread t2(&threadFunction, 2, std::ref(terminated));
    t1.join();
    t2.join();
    std::cerr << "Both threads finished.\n";
    int k;
    std::cin >> k;
}
However, now I am reading about std::stop_source and std::stop_token.
I find that I can achieve the same as above by passing both a std::stop_source by reference and a std::stop_token by value to the thread function.
How would that be superior?
I understand that when using std::jthread the std::stop_token is very convenient if I want to stop threads from outside the threads.
I could then call std::jthread::request_stop() from the main program.
However in the case where I want to stop threads from a thread is it still better?
I managed to achieve the same thing as in my code using std::stop_source:
void threadFunction(std::stop_token stoken, unsigned int id, std::stop_source source) {
    srand(id);
    while (!stoken.stop_requested()) {
        int r = rand() % 100;
        if (r == 0) {
            std::cerr << "Thread " << id << ": an error occurred.\n";
            source.request_stop(); // without this line we have to wait for the other thread to finish
            return;
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
}

int main()
{
    std::stop_source source;
    std::stop_token stoken = source.get_token();
    std::thread t1(&threadFunction, stoken, 1, source);
    std::thread t2(&threadFunction, stoken, 2, source);
    t1.join();
    t2.join();
    std::cerr << "Both threads finished.\n";
    int k;
    std::cin >> k;
}
Using std::jthread would have resulted in more compact code:
std::jthread t1(&threadFunction, 1, source);
std::jthread t2(&threadFunction, 2, source);
But that did not seem to work.
It didn't work because std::jthread has a special feature: if the first parameter of the thread function is a std::stop_token, it fills in that token from an internal stop_source object.
What you ought to do is only pass a stop_source (by value, not by reference), and extract the token from it within your thread function.
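A sketch of that suggestion, reusing the names from the question:

// Pass the stop_source by value; every copy shares the same stop state.
void threadFunction(unsigned int id, std::stop_source source) {
    std::stop_token stoken = source.get_token();
    srand(id);
    while (!stoken.stop_requested()) {
        if (rand() % 100 == 0) {
            std::cerr << "Thread " << id << ": an error occurred.\n";
            source.request_stop(); // also stops the sibling thread
            return;
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
}

// Because the first parameter is no longer a std::stop_token, std::jthread does
// not try to inject its own internal token, so the compact form now works.
// ~jthread joins; both threads exit once either one calls source.request_stop().
std::stop_source source;
std::jthread t1(&threadFunction, 1, source);
std::jthread t2(&threadFunction, 2, source);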
As for why this is better than a reference to an atomic, there are a myriad of reasons. The first being that stop_source is a lot safer than a bare reference to an object whose lifetime is not under the local control of the thread function. The second being that you don't have to do std::ref gymnastics to pass parameters. This can be a source of bugs since you might accidentally forget to do that in some place.
The standard stop_token mechanism has features beyond just requesting and responding to a stop. Since the response to a stop happens at an arbitrary time after issuing it, it may be necessary to execute some code when the stop is actually requested rather than when it is responded to. The stop_callback mechanism allows you to register a callback with a stop_token. This callback will be called in the thread of the stop_source::request_stop call (unless you register the callback after the stop was requested, in which case it's called right when you register it). This can be useful in limited cases, and it's not simple code to write yourself. Especially when all you have is an atomic<bool>.
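A minimal illustration (assuming <stop_token> and <iostream> are included):

std::stop_source src;
std::stop_callback on_stop(src.get_token(), [] {
    // Runs in the thread that calls request_stop(), or immediately at
    // registration if the stop has already been requested.
    std::cout << "stop requested\n";
});
src.request_stop(); // prints right here, before any worker thread reacts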
And then there's simple readability. Passing a stop_source tells you exactly what is going on without having to even see the name of a parameter. Passing an atomic<bool> tells you very little from just the typename; you have to look at the parameter name or its usage in the function to know that it is for halting the thread.
Apart from being more expressive and communicating intentions better, stop_token and friends achieve something really important for jthread. To understand it you have to consider its destructor which looks something like this:
~jthread()
{
    if (joinable())
    {
        // Not only user code, but the destructor as well
        // will let your callback know it's time to go.
        request_stop();
        join();
    }
}
By encapsulating a stop_source, jthread facilitates what is called cooperative cancellation. As you've also noted, you never have to pass the stop_token to a jthread; just provide a callback that accepts the token as its first parameter. The class detects that your callback accepts a stop token and passes it a token from its internal stop_source when calling it.
What does this mean for cooperative cancellation? Safer termination of course! Since jthread will always attempt to join on destruction, it now has the means to prevent endless loops and deadlocks where two or more threads wait for each other to finish. By using stop_token your code can make sure that it can safely join when it's time to go.
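For completeness, the token injection described above looks like this:

std::jthread worker([](std::stop_token st) {
    while (!st.stop_requested()) {
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
});
// When 'worker' goes out of scope, ~jthread() calls request_stop() and then
// join(), so the loop above ends instead of deadlocking the destructor.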
However in the case where I want to stop threads from a thread is it still better?
Now regarding the feature you are requesting, that's what C# calls "linked cancellation". Yes, there are requests and discussions to add a parameter in the jthread constructor so that it can refer to an external stop source, but that's not yet available (and has many implications). Doing something similar purely with stop tokens would require a stop_callback to tie all cancellations together, but still it could be suboptimal (as shown in the link). The bottom line is that jthread needs stop_token, but in some cases you may not need jthread, especially if the following solution does not appeal to you:
std::stop_source ssource;
std::stop_callback cb{ssource.get_token(), [&] {
    t1.request_stop();
    t2.request_stop();
}};
ssource.request_stop(); // This stops both threads.
The good news is that if you don't fall into the suboptimal pattern described in the link (i.e. you don't need an asynchronous termination), then this functionality is easy to abstract into a utility, something like:
auto linked_cancellations = [](auto&... jthreads) {
    std::stop_source s;
    auto request_all = [&] { (jthreads.request_stop(), ...); };
    // std::stop_callback is neither copyable nor movable, so the pair has to be
    // built in place via piecewise construction instead of std::make_pair.
    return std::pair<std::stop_source, std::stop_callback<decltype(request_all)>>(
        std::piecewise_construct,
        std::forward_as_tuple(s),
        std::forward_as_tuple(s.get_token(), std::move(request_all)));
};
which you'd use as
auto [stop_source, cb] = linked_cancellations(t1, t2);
// or as many thread objects as you want to link ^^^
stop_source.request_stop(); // Stops all the threads that you linked.
Now if you want to control the linked threads from within the thread, I'd use the initial pattern (std::atomic<bool>), since having a callback with both a stop token and a stop source is somewhat confusing.

Any case of std::promise that can't be replaced by a single thread running sequential produce-then-consume?

Update 9th June 2020:
Consolidating all the comments and answers here, and after giving it some more thought, I have created a flowchart to help decide when to use std::promise/future, and what the trade-offs are.
Original post is as follows:
I have been thinking about the real benefit of the std::promise/future mechanism. Examples almost everywhere tout this pattern: a single-producer, single-consumer scenario where the producer notifies the consumer one time that the resource in question is ready for consumption:
#include <iostream>
#include <future>
#include <thread>

using namespace std::chrono_literals;

struct StewableFood {
    int tenderness;
};

void slow_cook_for_12_hours(std::promise<StewableFood>& promise_of_stew) {
    std::cout << "\nChef: Starting to cook ...";
    // Cook till 100% tender
    StewableFood food{ 0 };
    for (int i = 0; i < 10; ++i) {
        std::this_thread::sleep_for(10ms);
        food.tenderness = (i + 1) * 10;
        std::cout << "\nChef: Stewing ... " << food.tenderness << "%";
    }
    // Notify the person waiting on the promise of stew that the promise has been fulfilled.
    promise_of_stew.set_value(food);
    std::cout << "\nChef: Stew is ready!";
}

void wait_to_eat_stew(std::future<StewableFood>& potential_fulfilment_of_stew) {
    std::cout << "\nJoe: Waiting for stew ...";
    auto food = potential_fulfilment_of_stew.get();
    std::cout << "\nJoe: I have been notified that stew is ready. Tenderness " << food.tenderness << "%! Eat!";
}

int main()
{
    std::promise<StewableFood> promise_of_stew;
    auto potential_fulfilment_of_stew = promise_of_stew.get_future();
    std::thread async_cook(slow_cook_for_12_hours, std::ref(promise_of_stew));
    std::thread async_eat(wait_to_eat_stew, std::ref(potential_fulfilment_of_stew));
    async_cook.join();
    async_eat.join();
    return 0;
}
To me, all this asynchronicity serves no purpose, because ultimately, the consumer's blocking wait on future::get makes this kind of usage equivalent to a single-threaded one with sequential produce-then-consume. I initially thought my example above is contrived. But if we look at the one-time use only constraint of a std::promise/future pair (i.e. you cannot re-write to the original promise nor re-read from the original future), it then follows that the above example becomes the only viable use case, since:
The set-once constraint means there can be only one producer, and
The get-once constraint means there can be only one consumer, and
Inferred from the above 2 set/get-once constraints, there shall be no looping that causes re-use on the same promise/future.
If the usage pattern in the above example is indeed the only viable use case, it then follows that there is no advantage in using std::promise, compared to doing just:
void cook_stew_then_eat() {
    auto stew = slow_cook_for_12_hours();
    // wait 12 hours
    eat_stew(stew);
}

int main() {
    std::thread t(cook_stew_then_eat);
    t.join();
    return 0;
}
Now, this conclusion seems suspicious. I am quite sure there is a good use case for std::promise that cannot be replaced by a single-threaded, sequential produce-then-consume version.
Question: What is that use case(s)?
Note: It is tempting to speculate that perhaps std::promise/future somehow allows us to asynchronously do something else without waiting on the fulfilment - might that be the advantage? Definitely not, because we can achieve the identical effect by putting that "something else" (e.g. some important work) in another thread. To illustrate:
// cook and eat threads use std::promise/future
std::thread cook(...);
std::thread eat(...);
// Let's do important work on another thread
std::thread important_work(...);
cook.join();
eat.join();
important_work.join();
is identical to this solution that doesn't use std::promise/future:
// sequentially cook then eat, NO NEED to use std::promise/future
std::thread cook_then_eat(...);
// Let's do important work on another thread
std::thread important_work(...);
cook_then_eat.join();
important_work.join();
No, you are actually correct: the future/promise pattern can always be replaced with manual thread management (thread joins, condition variables and mutexes) if you are careful about synchronization and object lifetimes.
The primary benefit of future/promise pattern is abstraction. It hides lifetime management and synchronization of the shared state from you, freeing you from the burden of doing it yourself.
Once the producer has a promise it doesn't need to know anything else about the consuming side, and likewise for the consumer and future. This makes it possible to write more concise, less error prone, and less coupled code.
Also keep in mind that as of C++20 std::future still lacks continuations, which makes it a lot less powerful than it could be.
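To make the abstraction point concrete, here is a rough sketch of the machinery a hand-rolled single-shot replacement has to manage itself (no exception propagation or shared-state lifetime handling, which std::promise/std::future give you for free):

#include <condition_variable>
#include <mutex>
#include <optional>

template <typename T>
struct one_shot {              // roughly what std::promise/std::future manage for you
    std::mutex m;
    std::condition_variable cv;
    std::optional<T> value;

    void set(T v) {            // producer side, like promise::set_value
        { std::lock_guard lk(m); value = std::move(v); }
        cv.notify_one();
    }
    T get() {                  // consumer side, like future::get (blocking)
        std::unique_lock lk(m);
        cv.wait(lk, [&] { return value.has_value(); });
        return std::move(*value);
    }
};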
What is that use case(s)?
Any work that doesn't depend on the result of the promise can be done on other threads before waiting on the promise.
Let's extend your example to a stew competition
extern void slow_cook_for_12_hours(std::promise<StewableFood>& promise_of_stew);
extern Grade rate_stew(const StewableFood&);

std::map<Chef, Grade> judge_stew_competition(std::map<Chef, std::future<StewableFood>>& entries)
{
    std::map<Chef, Grade> results;
    for (auto& [chef, fut] : entries) { results[chef] = rate_stew(fut.get()); }
    return results;
}

int main()
{
    std::map<Chef, std::promise<StewableFood>> promises_of_stew = { ... };
    std::map<Chef, std::future<StewableFood>> fulfilment_of_stews;
    std::vector<std::thread> async_cook;

    for (auto& [chef, promise] : promises_of_stew)
    {
        fulfilment_of_stews[chef] = promise.get_future();
        async_cook.emplace_back(slow_cook_for_12_hours, std::ref(promise));
    }
    std::thread async_judge(judge_stew_competition, std::ref(fulfilment_of_stews));

    for (auto& thread : async_cook) { thread.join(); }
    async_judge.join();
    return 0;
}
Examples almost everywhere tout this pattern - a single producer, single consumer scenario where the producer notifies the consumer one-time that the resource in question is ready for consumption.
Maybe that is not a good example.
Another example is a task that requires resources/datasets from different providers, where only blocking calls are available to fetch them (or the non-blocking calls cannot easily be integrated into one event loop in your application). In this case your consumer thread launches all resource requests as std::async and waits until they all complete in parallel, rather than sequentially. Fetching all the datasets then takes max(times) rather than sum(times), where times is an array of each provider's response time.
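A sketch of that pattern, where Dataset, providers and fetch_dataset are hypothetical stand-ins for the blocking provider API:

// Launch every blocking fetch in parallel; total wait ~ max(times), not sum(times).
std::vector<std::future<Dataset>> pending;
for (const auto& provider : providers)
    pending.push_back(std::async(std::launch::async, fetch_dataset, provider));

std::vector<Dataset> datasets;
for (auto& f : pending)
    datasets.push_back(f.get()); // blocks only until that provider has answered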

How to implement async/await syntax properly using Boost.Asio

I'm trying to implement some network application using Boost.Asio. I have a problem with multiple layers of callbacks. In other languages that natively support async/await syntax, I can write my logic like this
void do_send(args...) {
    if (!endpoint_resolved) {
        await resolve_async(...); // results are stored in member variables
    }
    if (!connected) {
        await connect_async(...);
    }
    await send_async(...);
    await receive_async(...);
}
Right now I have to write it using multiple layers of callbacks
void do_send(args...) {
    if (!endpoint_resolved) {
        resolve_async(..., [captures...](args...) {
            if (!connected) {
                connect_async(..., [captures...](args...) {
                    send_async(..., [captures...](args...) {
                        receive_async(..., [captures...](args...) {
                            // do something
                        }); // receive_async
                    }); // send_async
                }); // connect_async
            }
        });
    }
}
This is cumbersome and error-prone. An alternative is to use std::bind to bind member functions as callbacks, but this does not solve the problem because either way I have to write complicated logic in the callbacks to determine what to do next.
I'm wondering if there are better solutions. Ideally I would like to write code in a synchronous way while I can await asynchronously on any I/O operations.
I've also checked std::async, std::future, etc. But they don't seem to fit into my situation.
Boost.Asio's stackful coroutines would provide a good solution. Stackful coroutines allow asynchronous code to be written in a manner that reads as synchronous. One can create a stackful coroutine via the spawn function. Within the coroutine, passing the yield_context as the handler to an asynchronous operation will start the operation and suspend the coroutine. The coroutine resumes automatically when the asynchronous operation completes. Here is the example from the documentation:
boost::asio::spawn(my_strand, do_echo);

// ...

void do_echo(boost::asio::yield_context yield)
{
    try
    {
        char data[128];
        for (;;)
        {
            std::size_t length =
                my_socket.async_read_some(
                    boost::asio::buffer(data), yield);
            boost::asio::async_write(my_socket,
                boost::asio::buffer(data, length), yield);
        }
    }
    catch (std::exception& e)
    {
        // ...
    }
}
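If a C++20 compiler is available, recent Boost.Asio releases (1.70 and later) also offer awaitable-based coroutines via co_spawn and use_awaitable, which read much like the async/await pseudocode in the question. A sketch, assuming socket_, resolver_ and the various buffers/strings are members as in the question:

boost::asio::awaitable<void> do_send()
{
    auto endpoints = co_await resolver_.async_resolve(
        host_, port_, boost::asio::use_awaitable);
    co_await boost::asio::async_connect(socket_, endpoints, boost::asio::use_awaitable);
    co_await boost::asio::async_write(socket_, boost::asio::buffer(request_),
                                      boost::asio::use_awaitable);
    std::size_t n = co_await socket_.async_read_some(
        boost::asio::buffer(reply_), boost::asio::use_awaitable);
    // handle the n bytes received ...
}

// Launched from ordinary code with:
// boost::asio::co_spawn(io_context_, do_send(), boost::asio::detached);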

Thread safety of boost::asio io_service and std::containers

I'm building a network service with boost::asio and I'm unsure about the thread safety.
io_service.run() is called only once from a thread dedicated for the io_service work
send_message() on the other hand can be called either by the code inside the second io_service handlers mentioned later, or by the mainThread upon user interaction. And that is why I'm getting nervous.
std::deque<message> out_queue;

// send_message will be called by two different threads
void send_message(MsgPtr msg){
    while (out_queue.size() >= 20){
        Sleep(50);
    }
    io_service_.post([this, msg]() { deliver(msg); });
}

// from my understanding, deliver will only be called by the thread which called io_service.run()
void deliver(const MsgPtr msg){
    bool write_in_progress = !out_queue.empty();
    out_queue.push_back(msg);
    if (!write_in_progress)
    {
        write();
    }
}

void write()
{
    auto self(shared_from_this());
    asio::async_write(socket_,
        asio::buffer(out_queue.front().header(), message::header_length),
        [this, self](asio::error_code ec, std::size_t /*length*/)
        {
            if (!ec)
            {
                asio::async_write(socket_,
                    asio::buffer(out_queue.front().data(),
                                 out_queue.front().paddedPayload_size()),
                    [this, self](asio::error_code ec, std::size_t /*length*/)
                    {
                        if (!ec)
                        {
                            out_queue.pop_front();
                            if (!out_queue.empty())
                            {
                                write();
                            }
                        }
                    });
            }
        });
}
Is this scenario safe?
A similar second scenario: when the network thread receives a message, it posts it into another asio::io_service which is also run by its own dedicated thread. This io_service uses an std::unordered_map to store callback functions etc.
std::unordered_map<int, eventSink> eventSinkMap_;
//...

// called by the main thread (GUI), writes a callback function object to the map
int IOReactor::registerEventSink(std::function<void(int, std::shared_ptr<message>)> fn, QObject* window, std::string endpointId){
    util::ScopedLock lock(&sync_);
    eventSink es;
    es.id = generateRandomId();
    // ....
    std::pair<int, eventSink> eventSinkPair(es.id, es);
    eventSinkMap_.insert(eventSinkPair);
    return es.id;
}

// called by the second thread, the network service thread, when a message was received
void IOReactor::onMessageReceived(std::shared_ptr<message> msg, ConPtr con)
{
    reactor_io_service_.post([=](){ handleReceive(msg, con); });
}

// should be called only by the one thread running reactor_io_service.run()
// read and write access to the map
void IOReactor::handleReceive(std::shared_ptr<message> msg, ConPtr con){
    util::ScopedLock lock(&sync_);
    auto es = eventSinkMap_.find(msg->requestId);
    if (es != eventSinkMap_.end())
    {
        auto fn = es->second.handler;
        auto ctx = es->second.context;
        QMetaObject::invokeMethod(ctx, "runInMainThread", Qt::QueuedConnection, Q_ARG(std::function<void(int, std::shared_ptr<msg::IMessage>)>, fn), Q_ARG(int, CallBackResult::SUCCESS), Q_ARG(std::shared_ptr<msg::IMessage>, msg));
        eventSinkMap_.erase(es);
    }
}
First of all: do I even need to use a lock here?
Of course both methods access the map, but they are not accessing the same elements (the receive handler cannot try to access or read an element that has not yet been registered/inserted into the map). Is that thread-safe?
First of all, a lot of context is missing (where is onMessageReceived invoked, and what is ConPtr?), and you have too many questions. I'll give you some specific pointers that will help you, though.
You should be nervous here:
void send_message(MsgPtr msg){
    while (out_queue.size() >= 20){
        Sleep(50);
    }
    io_service_.post([this, msg]() { deliver(msg); });
}
The check out_queue.size() >= 20 requires synchronization unless out_queue is thread safe.
The call to io_service_.post is safe, because io_service is thread safe. Since you have one dedicated IO thread, this means that deliver() will run on that thread. Right now, you need synchronization there too.
I strongly suggest using a proper thread-safe queue there.
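For illustration, a minimal bounded queue that could replace the sleep-and-poll loop might look like the sketch below (the capacity of 20 mirrors the original check):

#include <condition_variable>
#include <deque>
#include <mutex>

template <typename T>
class bounded_queue {
    std::mutex m_;
    std::condition_variable not_full_;
    std::deque<T> q_;
    std::size_t capacity_;
public:
    explicit bounded_queue(std::size_t capacity) : capacity_(capacity) {}

    void push(T v) {                    // called by producers; blocks while full
        std::unique_lock lk(m_);
        not_full_.wait(lk, [&] { return q_.size() < capacity_; });
        q_.push_back(std::move(v));
    }
    bool try_pop(T& out) {              // called on the IO thread; never blocks
        std::lock_guard lk(m_);
        if (q_.empty()) return false;
        out = std::move(q_.front());
        q_.pop_front();
        not_full_.notify_one();
        return true;
    }
};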
Q. first of all: Do I even need to use a lock here?
Yes you need to lock to do the map lookup (otherwise you get a data race with the main thread inserting sinks).
You do not need to lock during the invocation (in fact, that seems like a very unwise idea that could lead to performance issues or lockups). The reference remains valid due to the container's iterator/reference invalidation rules.
The deletion of course requires a lock again. I'd revise the code to do the lookup and erasure in one locked section, and invoke the sink only after releasing the lock. NOTE: you will have to think about exceptions here (in your code, when there is an exception during invocation, the sink doesn't get removed (ever?)). This might be important to you.
Live Demo
void handleReceive(std::shared_ptr<message> msg, ConPtr con){
    util::ScopedLock lock(&sync_);
    auto es = eventSinkMap_.find(msg->requestId);
    if (es != eventSinkMap_.end())
    {
        auto fn = es->second.handler;
        auto ctx = es->second.context;
        eventSinkMap_.erase(es); // invalidates es
        lock.unlock();
        // invoke in whatever way you require
        fn(static_cast<int>(CallBackResult::SUCCESS), std::static_pointer_cast<msg::IMessage>(msg));
    }
}

boost asio asynchronously waiting on a condition variable

Is it possible to perform an asynchronous (read: non-blocking) wait on a condition variable in boost::asio? If it isn't directly supported, any hints on implementing it would be appreciated.
I could implement a timer and fire a wakeup event every few ms, but this approach is vastly inferior. I find it hard to believe that condition variable synchronization is not implemented / documented.
If I understand the intent correctly, you want to launch an event handler when some condition variable is signaled, in the context of the asio thread pool? I think it would be sufficient to wait on the condition variable at the beginning of the handler, and io_service::post() itself back into the pool at the end, something of this sort:
#include <iostream>
#include <boost/asio.hpp>
#include <boost/thread.hpp>

boost::asio::io_service io;
boost::mutex mx;
boost::condition_variable cv;

void handler()
{
    boost::unique_lock<boost::mutex> lk(mx);
    cv.wait(lk);
    std::cout << "handler awakened\n";
    io.post(handler);
}

void buzzer()
{
    for (;;)
    {
        boost::this_thread::sleep(boost::posix_time::seconds(1));
        boost::lock_guard<boost::mutex> lk(mx);
        cv.notify_all();
    }
}

int main()
{
    io.post(handler);
    boost::thread bt(buzzer);
    io.run();
}
I can suggest a solution based on boost::asio::deadline_timer, which works fine for me. This is a kind of async event in the boost::asio environment.
One very important thing is that the 'handler' must be serialised through the same 'strand_' as 'cancel', because using 'boost::asio::deadline_timer' from multiple threads is not thread safe.
class async_event
{
public:
    async_event(
        boost::asio::io_service& io_service,
        boost::asio::strand<boost::asio::io_context::executor_type>& strand)
        : strand_(strand)
        , deadline_timer_(io_service, boost::posix_time::ptime(boost::posix_time::pos_infin))
    {}

    // 'handler' must be serialised through the same 'strand_' as 'cancel' or 'cancel_one'
    // because using 'boost::asio::deadline_timer' from multiple threads is not thread safe
    template<class WaitHandler>
    void async_wait(WaitHandler&& handler) {
        deadline_timer_.async_wait(handler);
    }
    void async_notify_one() {
        boost::asio::post(strand_, boost::bind(&async_event::async_notify_one_serialized, this));
    }
    void async_notify_all() {
        boost::asio::post(strand_, boost::bind(&async_event::async_notify_all_serialized, this));
    }

private:
    void async_notify_one_serialized() {
        deadline_timer_.cancel_one();
    }
    void async_notify_all_serialized() {
        deadline_timer_.cancel();
    }

    boost::asio::strand<boost::asio::io_context::executor_type>& strand_;
    boost::asio::deadline_timer deadline_timer_;
};
Unfortunately, Boost ASIO doesn't have an async_wait_for_condvar() method.
In most cases, you also won't need it. Programming the ASIO way usually means, that you use strands, not mutexes or condition variables, to protect shared resources. Except for rare cases, which usually focus around correct construction or destruction order at startup and exit, you won't need mutexes or condition variables at all.
When modifying a shared resource, the classic, partially synchronous threaded way is as follows:
Lock the mutex protecting the resource
Update whatever needs to be updated
Signal a condition variable, if further processing by a waiting thread is required
Unlock the mutex
The fully asynchronous ASIO way is though:
Generate a message, that contains everything, that is needed to update the resource
Post a call to an update handler with that message to the resource's strand
If further processing is needed, let that update handler create further message(s) and post them to the appropriate resources' strands.
If jobs can be executed on fully private data, then post them directly to the io-context instead.
Here is an example of a class some_shared_resource that receives a string state and triggers some further processing depending on the state received. Please note that all processing in the private method some_shared_resource::receive_state() is fully thread-safe, as the strand serializes all calls.
Of course, the example is not complete; some_other_resource needs a similar send_code_red() method to some_shared_resource::send_state().
#include <boost/asio.hpp>
#include <memory>
#include <string>
#include <utility>

using asio_context = boost::asio::io_context;
using asio_executor_type = asio_context::executor_type;
using asio_strand = boost::asio::strand<asio_executor_type>;

class some_other_resource;

class some_shared_resource : public std::enable_shared_from_this<some_shared_resource> {
    asio_strand strand;
    std::shared_ptr<some_other_resource> other;
    std::string state;

    void receive_state(std::string&& new_state) {
        std::string oldstate = std::exchange(state, new_state);
        if(state == "red" && oldstate != "red") {
            // state transition to "red":
            other->send_code_red(true);
        } else if(state != "red" && oldstate == "red") {
            // state transition from "red":
            other->send_code_red(false);
        }
    }

public:
    some_shared_resource(asio_context& ctx, const std::shared_ptr<some_other_resource>& other)
        : strand(ctx.get_executor()), other(other) {}

    void send_state(std::string&& new_state) {
        boost::asio::post(strand, [me = weak_from_this(), new_state = std::move(new_state)]() mutable {
            if(auto self = me.lock(); self) {
                self->receive_state(std::move(new_state));
            }
        });
    }
};
As you see, posting always into ASIO's strands can be a bit tedious at first. But you can move most of that "equip a class with a strand" code into a template.
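One possible shape for such a template, purely as a sketch (the name and interface are assumptions, not an Asio facility):

template <typename Derived>
class strand_holder {
protected:
    explicit strand_holder(boost::asio::io_context& ctx)
        : strand_(ctx.get_executor()) {}

    // Serialize any mutation of the derived object's state through the strand.
    // Derived is expected to inherit std::enable_shared_from_this<Derived>.
    template <typename F>
    void post_self(F&& f) {
        auto self = static_cast<Derived*>(this)->shared_from_this();
        boost::asio::post(strand_,
            [self, f = std::forward<F>(f)]() mutable { f(*self); });
    }

private:
    boost::asio::strand<boost::asio::io_context::executor_type> strand_;
};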
The good thing about message passing: as you are not using mutexes, you cannot deadlock yourself anymore, even in extreme situations. Also, using message passing, it is often easier to achieve a high level of parallelism than with classical multithreading. On the downside, moving and copying around all these message objects is time consuming, which can slow down your application.
A last note: using the weak pointer in the message formed by send_state() facilitates the reliable destruction of some_shared_resource objects. Otherwise, if A calls B and B calls C and C calls A (possibly only after a timeout or similar), using shared pointers instead of weak pointers in the messages would create cyclic references, which then prevent object destruction. If you are sure that you will never have cycles, and that processing messages from to-be-deleted objects doesn't pose a problem, you can use shared_from_this() instead of weak_from_this(), of course. If you are sure that objects won't get deleted before ASIO has been stopped (and all working threads joined back to the main thread), then you can also capture the this pointer directly instead.
FWIW, I implemented an asynchronous mutex using the rather good continuable library:
class async_mutex
{
    cti::continuable<> tail_{cti::make_ready_continuable()};
    std::mutex mutex_;

public:
    async_mutex() = default;
    async_mutex(const async_mutex&) = delete;
    const async_mutex& operator=(const async_mutex&) = delete;

    [[nodiscard]] cti::continuable<std::shared_ptr<int>> lock()
    {
        std::shared_ptr<int> result;
        cti::continuable<> tail = cti::make_continuable<void>(
            [&result](auto&& promise) {
                result = std::shared_ptr<int>((int*)1,
                    [promise = std::move(promise)](auto) mutable {
                        promise.set_value();
                    }
                );
            }
        );
        {
            std::lock_guard _{mutex_};
            std::swap(tail, tail_);
        }
        co_await std::move(tail);
        co_return result;
    }
};
Usage, e.g.:
async_mutex mutex;
...
{
    const auto _ = co_await mutex.lock();
    // only one lock per mutex-instance
}