How to call a handler? - c++

I don't understand how I can invoke the handler in case the io_context was stopped. Minimal example:
void my_class::async_get_one_scan(
    std::function<void(const boost::system::error_code& ec,
                       std::shared_ptr<my_chunked_packet>)> handler)
{
    asio::spawn(strand_, [this, handler] (asio::yield_context yield)
    {
        const auto work = boost::asio::make_work_guard(io_service_);
        my_chunk_buffer chunks;
        while (!chunks.full()) {
            std::array<uint8_t, 1000> datagram;
            boost::system::error_code ec;
            auto size = socket_.async_receive(asio::buffer(datagram), yield[ec]);
            if (!ec)
                process_datagram(datagram, size, chunks);
            else {
                handler(ec, nullptr);
                return;
            }
        }
        io_service_.post(std::bind(handler, boost::system::error_code{}, chunks.packet()));
    });
}
Debug asio output:
#asio|1532525798.533266|6*7|strand#01198ff0.dispatch
#asio|1532525798.533266|>7|
#asio|1532525798.533266|>0|
#asio|1532525798.533266|0*8|socket#008e345c.async_receive
#asio|1532525798.533266|<7|
#asio|1532525798.533266|<6|
#asio|1532525799.550640|0|socket#008e34ac.close
#asio|1532525799.550640|0|socket#008e345c.close
#asio|1532525799.551616|~8|
So the last async_receive() #8 is created; after |<6| io_context.stop() is called, and then I have no idea how to get the error_code out of the yield_context in order to call the handler.
Question #2: is this even a correct way to asynchronously read chunks of data in order to collect the whole packet?

By definition, io_context::stop prevents the event loop from executing further handlers. So there's no way to get the error code into the handler, because the handler simply doesn't get invoked.
You probably want to have a "soft-stop" function instead, where you stop admitting new async tasks to the io_context and optionally cancel any pending operations.
If pending operations could take too long, you will want to add a deadline timer that forces the cancellation at some threshold time interval.
The usual way to make the run loop exit is by releasing a work object. See https://www.boost.org/doc/libs/1_67_0/doc/html/boost_asio/reference/io_context__work.html
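For illustration, here is a minimal soft-stop sketch along those lines (the class name, member names and socket type are assumptions, not the asker's actual code): post the cancellation onto the io_context and release the work guard instead of calling stop(), so the pending async_receive completes with operation_aborted and the coroutine can still call the handler.
#include <boost/asio.hpp>

class scanner
{
public:
    explicit scanner(boost::asio::io_context& io)
        : io_(io), socket_(io), work_(boost::asio::make_work_guard(io)) {}

    void soft_stop()
    {
        boost::asio::post(io_, [this]
        {
            boost::system::error_code ignored;
            // The pending async_receive completes with boost::asio::error::operation_aborted,
            // so the coroutine's handler(ec, nullptr) branch still runs.
            socket_.cancel(ignored);
            // Allow io_context::run() to return once the remaining handlers have finished.
            work_.reset();
        });
    }

private:
    boost::asio::io_context& io_;
    boost::asio::ip::udp::socket socket_;
    boost::asio::executor_work_guard<boost::asio::io_context::executor_type> work_;
};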

Related

multithreading problem in boost asio example

I'm developing a TCP service, and I took an example from Boost.Asio as a starting point (https://www.boost.org/doc/libs/1_73_0/doc/html/boost_asio/example/cpp11/chat/chat_server.cpp). I'm worried about something: as I understand it, any time you want to send something you have to use the deliver function, which checks the state of the write_msgs_ queue and runs some operations on it (in my code write_msgs_ is a queue of std::byte-based structures):
void deliver(const chat_message& msg)
{
    bool write_in_progress = !write_msgs_.empty();
    write_msgs_.push_back(msg);
    if (!write_in_progress)
    {
        do_write();
    }
}
and inside the do_write() function you will see an asynchronous call wrapping a lambda function:
void do_write()
{
    auto self(shared_from_this());
    boost::asio::async_write(socket_,
        boost::asio::buffer(write_msgs_.front().data(),
            write_msgs_.front().length()),
        [this, self](boost::system::error_code ec, std::size_t /*length*/)
        {
            if (!ec)
            {
                write_msgs_.pop_front();
                if (!write_msgs_.empty())
                {
                    do_write();
                }
            }
            else
            {
                room_.leave(shared_from_this());
            }
        });
}
where the call keeps sending messages until the queue is empty.
Now, as I understand it, boost::asio::async_write makes the lambda function thread safe; but since write_msgs_ is also used in the deliver function, which sits outside the isolation given by the io_context, a mutex is needed. So, should I take a mutex every time the write queue is used, or is it cheaper to use boost::asio::post() to call the deliver function, so that write_msgs_ is only touched from the asynchronous calls?
something like this:
boost::asio::io_service srvc; // this somewhere
void deliver2(const chat_message& msg)
{
    srvc.post(std::bind(&chat_session::deliver, this, msg));
}

How to cancel a coroutine created by boost::asio::spawn

I use boost::asio::spawn to start a coroutine:
boost::asio::io_context ioc;
boost::asio::steady_timer timer(ioc);
boost::asio::ip::tcp::socket socket(ioc);
boost::asio::io_context::strand strand(ioc);
......
boost::asio::spawn(strand, [&](boost::asio::yield_context yield)
{
    while (true) {
        http::async_write(socket, req_indicators, yield);
        http::async_read(socket, buffer, res, yield);
        // do something with res
        timer.expires_after(interval);
        timer.async_wait(yield);
    }
});
......
Now I want to cancel this coroutine immediately (no waiting for timer to expire and check a flag) without stopping the entire ioc, because there are other coroutines running in the same strand.
I can call timer.cancel() and socket.close() together, but as the coroutine gets complicated, I have to keep track of everything that can cause a yield in the coroutine, which is tiresome and error prone. Is there a method to unconditionally stop a coroutine created by boost::asio::spawn, like Python's asyncio.task.cancel()?
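For what it's worth, a minimal sketch of the workaround described above (cancelling everything the coroutine might be suspended on, using the same variables as in the snippet) could look like this; each suspended operation then resumes with boost::asio::error::operation_aborted, which the coroutine sees either as an error code (with yield[ec]) or as a boost::system::system_error exception (with plain yield) and should use to exit its loop:
// Run the cancellation on the strand so it cannot interleave with the coroutine.
boost::asio::post(strand, [&]
{
    boost::system::error_code ignored;
    timer.cancel();        // wakes up a pending async_wait
    socket.close(ignored); // aborts pending async_read/async_write
});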

Thread safety of boost::asio io_service and std::containers

I'm building a network service with boost::asio and I'm unsure about the thread safety.
io_service.run() is called only once, from a thread dedicated to the io_service work.
send_message(), on the other hand, can be called either by the code inside the second io_service's handlers mentioned later, or by the main thread upon user interaction. And that is why I'm getting nervous.
std::deque<message> out_queue;
// send_message will be called by two different threads
void send_message(MsgPtr msg){
    while (out_queue->size() >= 20){
        Sleep(50);
    }
    io_service_.post([this, msg]() { deliver(msg); });
}
// from my understanding, deliver will only be called by the thread which called io_service.run()
void deliver(const MsgPtr msg){
    bool write_in_progress = !out_queue.empty();
    out_queue.push_back(msg);
    if (!write_in_progress)
    {
        write();
    }
}
void write()
{
    auto self(shared_from_this());
    asio::async_write(socket_,
        asio::buffer(out_queue.front().header(), message::header_length),
        [this, self](asio::error_code ec, std::size_t /*length*/)
        {
            if (!ec)
            {
                asio::async_write(socket_,
                    asio::buffer(out_queue.front().data(),
                        out_queue.front().paddedPayload_size()),
                    [this, self](asio::error_code ec, std::size_t /*length*/)
                    {
                        if (!ec)
                        {
                            out_queue.pop_front();
                            if (!out_queue.empty())
                            {
                                write();
                            }
                        }
                    });
            }
        });
}
Is this scenario safe?
A similar second scenario: when the network thread receives a message, it posts it into another asio::io_service, which is also run by its own dedicated thread. This io_service uses a std::unordered_map to store callback functions etc.
std::unordered_map<int, eventSink> eventSinkMap_;
//...
// called by the main thread (GUI), writes a callback function object to the map
int IOReactor::registerEventSink(std::function<void(int, std::shared_ptr<message>)> fn, QObject* window, std::string endpointId){
    util::ScopedLock lock(&sync_);
    eventSink es;
    es.id = generateRandomId();
    // ....
    std::pair<int, eventSink> eventSinkPair(es.id, es);
    eventSinkMap_.insert(eventSinkPair);
    return es.id;
}
// called by the second thread, the network service thread, when a message was received
void IOReactor::onMessageReceived(std::shared_ptr<message> msg, ConPtr con)
{
    reactor_io_service_.post([=](){ handleReceive(msg, con); });
}
// should be called only by the one thread running reactor_io_service.run()
// read and write access to the map
void IOReactor::handleReceive(std::shared_ptr<message> msg, ConPtr con){
    util::ScopedLock lock(&sync_);
    auto es = eventSinkMap_.find(msg->requestId);
    if (es != eventSinkMap_.end())
    {
        auto fn = es->second.handler;
        auto ctx = es->second.context;
        QMetaObject::invokeMethod(ctx, "runInMainThread", Qt::QueuedConnection, Q_ARG(std::function<void(int, std::shared_ptr<msg::IMessage>)>, fn), Q_ARG(int, CallBackResult::SUCCESS), Q_ARG(std::shared_ptr<msg::IMessage>, msg));
        eventSinkMap_.erase(es);
    }
}
First of all: do I even need to use a lock here?
Of course both methods access the map, but they are not accessing the same elements (the receive handler cannot try to access or read an element that has not yet been registered/inserted into the map). Is that thread-safe?
First of all, a lot of context is missing (where is onMessageReceived invoked, and what is ConPtr?), and you have too many questions. I'll give you some specific pointers that will help you, though.
You should be nervous here:
void send_message(MsgPtr msg){
    while (out_queue->size() >= 20){
        Sleep(50);
    }
    io_service_.post([this, msg]() { deliver(msg); });
}
The check out_queue->size() >= 20 requires synchronization unless out_queue is thread safe.
The call to io_service_.post is safe, because io_service is thread safe. Since you have one dedicated IO thread, this means that deliver() will run on that thread. Right now, you need synchronization there too.
I strongly suggest using a proper thread-safe queue there.
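For illustration only, a minimal bounded thread-safe queue along those lines might look like the sketch below (not a drop-in replacement for the asker's out_queue); the condition variable also replaces the Sleep(50) polling in send_message():
#include <condition_variable>
#include <deque>
#include <mutex>

template <typename T>
class bounded_queue {
public:
    explicit bounded_queue(std::size_t capacity) : capacity_(capacity) {}

    // Blocks while the queue is full, then appends the item.
    void push(T value) {
        std::unique_lock<std::mutex> lock(mutex_);
        not_full_.wait(lock, [this] { return items_.size() < capacity_; });
        items_.push_back(std::move(value));
    }

    // Non-blocking pop for the IO thread; wakes up a blocked producer.
    bool try_pop(T& out) {
        std::lock_guard<std::mutex> lock(mutex_);
        if (items_.empty()) return false;
        out = std::move(items_.front());
        items_.pop_front();
        not_full_.notify_one();
        return true;
    }

private:
    std::size_t capacity_;
    std::deque<T> items_;
    std::mutex mutex_;
    std::condition_variable not_full_;
};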
Q. First of all: do I even need to use a lock here?
Yes, you need to lock to do the map lookup (otherwise you get a data race with the main thread inserting sinks).
You do not need to lock during the invocation (in fact, that seems like a very unwise idea that could lead to performance issues or lockups). The reference remains valid thanks to the container's iterator/reference invalidation rules.
The deletion of course requires a lock again. I'd revise the code to do the lookup and the removal in one locked section, and invoke the sink only after releasing the lock. NOTE: you will have to think about exceptions here (in your code, when there is an exception during invocation, the sink doesn't get removed, ever). This might be important to you.
Live Demo
void handleReceive(std::shared_ptr<message> msg, ConPtr con){
    util::ScopedLock lock(&sync_);
    auto es = eventSinkMap_.find(msg->requestId);
    if (es != eventSinkMap_.end())
    {
        auto fn = es->second.handler;
        auto ctx = es->second.context;
        eventSinkMap_.erase(es); // invalidates es
        lock.unlock();
        // invoke in whatever way you require
        fn(static_cast<int>(CallBackResult::SUCCESS), std::static_pointer_cast<msg::IMessage>(msg));
    }
}

C++ wait for all async operation end

I have started N identical async operations (e.g. N requests to a database), and I need to do something after all of these operations end. How can I do this? (After each async operation ends, my callback will be called.)
I use C++14
Example
I use Boost.Asio to write some data to a socket:
for (int i = 0; i < N; ++i)
{
    boost::asio::async_write(
        m_socket,
        boost::asio::buffer(ptr[i], len[i]),
        [this, callback](const boost::system::error_code& ec, std::size_t)
        {
            callback(ec);
        });
}
So I need to know when all my writes end.
First of all, never call async_write in a loop. Each socket may have only one outstanding async_write and one outstanding async_read at any one time.
Boost already has provision for scatter/gather I/O.
This snippet should give you enough information to go on.
Notice that async_write can take a whole sequence of buffers, and it will fire the handler exactly once, after all the buffers have been written.
struct myclass {
    boost::asio::ip::tcp::socket m_socket;
    std::vector<std::vector<char>> pending_buffers;
    std::vector<std::vector<char>> writing_buffers;

    void write_all()
    {
        assert(writing_buffers.size() == 0);
        writing_buffers = std::move(pending_buffers);
        // build a scatter/gather buffer sequence covering all queued chunks
        std::vector<boost::asio::const_buffer> bufs;
        for (auto const& b : writing_buffers)
            bufs.push_back(boost::asio::buffer(b));
        boost::asio::async_write(
            m_socket,
            bufs,
            std::bind(&myclass::write_all_handler,
                      this,
                      std::placeholders::_1,
                      std::placeholders::_2));
    }

    void write_all_handler(const boost::system::error_code& ec, size_t bytes_written)
    {
        writing_buffers.clear();
        // send next load of data
        if (pending_buffers.size())
            write_all();
        // call your callback here
    }
};
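A hypothetical usage sketch for the struct above (the helper name queue_for_send is made up, and it assumes everything runs on the single io_context thread, so no extra locking is needed):
void queue_for_send(myclass& c, std::vector<char> data)
{
    c.pending_buffers.push_back(std::move(data));
    if (c.writing_buffers.empty())   // no async_write currently outstanding
        c.write_all();               // start one gathered write for all queued chunks
}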

Having a hard time understanding a few concepts with Boost ASIO TCP with async_read and async_write

I'm having a hard time understanding the correct way I should structure a TCP client when using async_read and async_write. The examples seem to do an async_read after connecting and then call async_write in the handler.
In the case of my client and server, when the client connects it needs to check a queue of messages to write and check whether anything needs to be read. One of the things I'm having a hard time with is understanding how this would work asynchronously.
What I envision is that in the async_connect handler, the thread would call async_write if anything is in the sendQueue and call async_read over and over. Or should it check if anything is available to be read before it does an async_read?
Below is an example of what I'm talking about.
void BoostTCPConnection::connectHandler()
{
    setRunning(true);
    while (isRunning())
    {
        // If the send queue has messages
        if (sendSize > 0)
        {
            // Calls to async_write
            send();
        }
        boost::shared_ptr<std::vector<char> > sizeBuffer(new std::vector<char>(4));
        boost::asio::async_read(socket_, boost::asio::buffer(*sizeBuffer),
            boost::bind(&BoostTCPConnection::handleReceive, shared_from_this(),
                boost::asio::placeholders::error, sizeBuffer));
    }
}
void BoostTCPConnection::handleReceive(const boost::system::error_code& error, boost::shared_ptr<std::vector<char> > sizeBuffer)
{
    if (error)
    {
        // Handle error
        return;
    }
    size_t messageSize(0);
    memcpy((void*)(&messageSize), (void*)sizeBuffer->data(), 4);
    boost::shared_ptr<std::vector<char> > message(new std::vector<char>(messageSize));
    // Will this create a race condition with other reads?
    // Should a regular read happen here?
    boost::asio::async_read(socket_, boost::asio::buffer(*message),
        boost::bind(&BoostTCPConnection::handleReceiveMessage, shared_from_this(),
            boost::asio::placeholders::error, message));
}
void BoostTCPConnection::handleReceiveMessage(const boost::system::error_code& error, boost::shared_ptr<std::vector<char> > rcvBuffer)
{
    if (error)
    {
        // Handle error
        return;
    }
    boost::shared_ptr<std::string> message(new std::string(rcvBuffer->begin(), rcvBuffer->end()));
    receivedMsgs_.push_back(message);
}
void BoostTCPConnection::handleWrite(const boost::system::error_code& error, size_t bytes_transferred)
{
    // Success
    if (error.value() == 0)
        return;
    // else handle error
}
Conceptually, async_read waits for data to be received. You should call it any time you want something to happen after data is received and a read isn't already pending. Similarly, async_write waits for data to be written. You should call it any time you need to write data and a write isn't already pending.
You should call async_read when you complete the connection. Before your async_read handler returns, it should probably call async_read again.
When you need to write to the connection, you should call async_write (if a write isn't already pending). In your async_write handler, if you still need to write more, you should call async_write again.
If no read is already pending, you can call async_read in your write handler, if you wish to resume reading after you finish writing. You can also just keep a read always pending. That's up to you.
You should not check if there's anything to read before calling async_read. The point of async_read is for it to complete when there's something to read. It's a smart way of waiting and doing other things in the meantime.
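To tie those points together, here is a rough sketch of that structure (class and member names are illustrative, and it assumes a single-threaded io_context; with multiple threads you would add a strand): exactly one async_read is always pending, and async_write is chained through an outgoing queue with at most one write in flight at a time.
#include <boost/asio.hpp>
#include <array>
#include <deque>
#include <memory>
#include <string>

class connection : public std::enable_shared_from_this<connection>
{
public:
    explicit connection(boost::asio::io_context& io) : socket_(io) {}

    boost::asio::ip::tcp::socket& socket() { return socket_; }

    void start()                 // call once the socket is connected
    {
        do_read();
    }

    void send(std::string msg)   // assumed to be called on the io_context thread
    {
        bool write_in_progress = !outbox_.empty();
        outbox_.push_back(std::move(msg));
        if (!write_in_progress)  // only start a write if none is pending
            do_write();
    }

private:
    void do_read()
    {
        auto self = shared_from_this();
        socket_.async_read_some(boost::asio::buffer(read_buf_),
            [this, self](boost::system::error_code ec, std::size_t n)
            {
                if (ec) return;          // handle/log the error as needed
                // ... consume read_buf_[0..n) ...
                do_read();               // re-arm: always keep one read pending
            });
    }

    void do_write()
    {
        auto self = shared_from_this();
        boost::asio::async_write(socket_, boost::asio::buffer(outbox_.front()),
            [this, self](boost::system::error_code ec, std::size_t /*length*/)
            {
                if (ec) return;
                outbox_.pop_front();
                if (!outbox_.empty())
                    do_write();          // chain the next queued message
            });
    }

    boost::asio::ip::tcp::socket socket_;
    std::array<char, 4096> read_buf_{};
    std::deque<std::string> outbox_;
};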