I'm testing a little protocol design of mine and having trouble getting a continuous async_read to work. My idea was to create a generic read handler that outputs the received data (testing) and then checks it to perform protocol-defined actions.
This means that I am calling a new async_read from within this generic handler, which for some reason blocks and my handler never returns, blocking further execution of my program.
The relevant code
void handle_read_server(const asio::error_code& error_code, size_t bytes_transferred, void* address)
{
    // [...]
    char HELO[4] = {'H','E','L','O'};
    if (*received.data() == *HELO)
    {
        cout << "[Protocol] got HELO";
        got_helo = true;
        short code_data;
        // This read is blocking my program from continuing its execution.
        asio::async_read(socket_server, asio::buffer(&code_data, 2), asio::transfer_all(),
                         std::bind(handle_read_server, placeholders::_1, placeholders::_2, &code_data));
    }
}
What I'm asking
What is causing the function to block here? Is there anything I can do apart from having an async_read thread run all the time, passing any received values to a stream in the server?
The async_* call does not, in fact, block.
You have Undefined Behaviour by passing the address of a local variable into the async operation/completion handler.
You have to ensure the buffer's lifetime extends till after the completion. The natural way to achieve that would be to make the buffer a member of the enclosing class.
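For illustration, here is a minimal sketch of that approach, assuming the protocol handling lives in a session class that owns the socket (the class and member names are made up for the example, they are not from your code):

#include <asio.hpp>
#include <functional>
#include <iostream>

class Session
{
public:
    explicit Session(asio::ip::tcp::socket socket) : socket_(std::move(socket)) {}

    void read_code()
    {
        // code_data_ is a member, so it is guaranteed to outlive the async operation.
        asio::async_read(socket_, asio::buffer(&code_data_, sizeof(code_data_)),
            std::bind(&Session::handle_read, this,
                      std::placeholders::_1, std::placeholders::_2));
    }

private:
    void handle_read(const asio::error_code& ec, std::size_t /*bytes_transferred*/)
    {
        if (!ec)
        {
            std::cout << "[Protocol] got code " << code_data_ << "\n";
            read_code(); // chain the next read; nothing here blocks
        }
    }

    asio::ip::tcp::socket socket_;
    short code_data_ = 0; // receive buffer with the same lifetime as the session
};

The same caveat applies to the Session object itself: it has to stay alive until the handler has run, which is why this is often combined with enable_shared_from_this.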
Related
This question is asked from the context of Boost ASIO (C++).
Say you are using a library to do some async i/o on a socket, where:
you are always waiting to receive data
you occasionally send some data
Since you are always waiting to receive data (e.g. you trigger another async_read() from your completion handler), at any given time, you will either have:
an async read operation in progress
an async read operation in progress and an async write operation in progress
Now say you wanted to call some other function, on_close(), when the connection closes. In Boost ASIO, a connection error or cancel() will cause any outstanding async reads/writes to return an error to your completion handler. But there is no guarantee whether you are in scenario 1 or 2, nor is there a guarantee that the write will error before the read or vice versa. So to implement this, I can only imagine adding two variables, is_reading and is_writing, which are set to true by async_read() and async_write() respectively, and set to false by the completion handlers. Then, from either completion handler, when there is an error and I think the connection may be closing, I would check whether there is still an async operation in the opposite direction, and call on_close() if not.
The code, more or less:
atomic_bool is_writing;
atomic_bool is_reading;

...

void read_callback(const error_code& error, size_t bytes_transferred)
{
    is_reading = false;
    if (error)
    {
        if (!is_writing) on_close();
    }
    else
    {
        process_data(bytes_transferred);
        async_read(BUF_SIZE); // this will set is_reading to true
    }
}

void write_callback(const error_code& error, size_t bytes_transferred)
{
    is_writing = false;
    if (error)
    {
        if (!is_reading) on_close();
    }
}
Assume that this is a single-threaded app, but the thread is handling multiple sockets so you can't just let the thread end.
Is there a better way to design this? To make sure on_close() is called after the last async operation finishes?
One of the most common patterns is to use enable_shared_from_this and bind all completion handlers ("continuations") to it.
That way, when the async call chain ends (be it due to error or regular completion), the object the shared_ptr refers to will be freed.
You can see many examples of mine using Asio/Beast on this site.
You can put your close logic in a destructor, or, if that too involves async calls, you can post it on the same strand/chain.
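A minimal sketch of that pattern, assuming a plain Boost.Asio TCP session (all names here are illustrative, not taken from your code):

#include <boost/asio.hpp>
#include <array>
#include <iostream>
#include <memory>

using boost::asio::ip::tcp;

class Session : public std::enable_shared_from_this<Session>
{
public:
    explicit Session(tcp::socket socket) : socket_(std::move(socket)) {}

    ~Session() { on_close(); } // runs once the last handler holding `self` has finished

    void start() { do_read(); }

private:
    void do_read()
    {
        auto self = shared_from_this(); // every pending operation keeps the session alive
        socket_.async_read_some(boost::asio::buffer(buf_),
            [this, self](boost::system::error_code ec, std::size_t n)
            {
                if (ec) return;   // chain ends, `self` is released, destructor fires
                process_data(n);
                do_read();        // continue the chain
            });
    }

    void process_data(std::size_t) { /* ... */ }
    void on_close() { std::cout << "connection closed\n"; }

    tcp::socket socket_;
    std::array<char, 1024> buf_{};
};

// usage: std::make_shared<Session>(std::move(socket))->start();

Because every outstanding handler (reads and writes alike) holds a shared_ptr, the destructor, and therefore on_close(), only runs after the last pending operation has completed, which is exactly the guarantee the is_reading/is_writing flags were meant to provide.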
Advanced Ideas
If your traffic is full-duplex and one side fails in a way that necessitates cancelling the other direction, you can post cancellation on the strand and the async call will abort (e.g. with error_code boost::asio::error::operation_aborted).
Even more involved would be to create a custom IO service, where the lifetime of certain "backend" entities is governed by "handle" types. This is probably often overkill, but if you are writing a foundational framework that will be used in a larger number of places, you might consider it. I think this is a good starter: How to design proper release of a boost::asio socket or wrapper thereof (be sure to follow the comment links).
You can keep the error-handling logic only inside read_callback.
I have an asio connection; the ioService runs in one thread (I have only one thread).
Smaller problem:
boost::asio::async_write(m_socket, boost::asio::buffer(requestStr.data(), requestStr.size()), handler);
The handler is never called, but the server receives the message and sends a reply, which I do receive.
The bigger problem:
boost::asio::async_read_until(m_socket, sbuf, '\n', sendingHandler);
It also doesn't call the handler. The sbuf is immediately filled and I can read it there, but I don't know the position of the delimiter. Therefore I need the handler to get the bytes_transferred parameter. (I'm not going to iterate over the buffer.)
I tried several things and managed to get the handler invoked once, but after a small refactor I no longer remember what made the difference. Any help? Thanks!
When I used sync messaging, everything was fine, but there is no timeout there.
EDIT:
If you know a nice way to find the delimiter, I don't need the handler.
In that case, I would send the message with a sync write and read asynchronously.
It won't be called there, because it is async. Asynchronous read and write methods never invoke their handlers from within the initiating call:
Regardless of whether the asynchronous operation completes immediately
or not, the handler will not be invoked from within this function.
Invocation of the handler will be performed in a manner equivalent to
using boost::asio::io_service::post().
You need to call io_service methods such as run() or run_one() yourself to execute the pending operations; that is when your callbacks will be invoked.
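Roughly, for a single-threaded program it looks like this (the connect step is omitted and the names are illustrative):

#include <boost/asio.hpp>
#include <iostream>
#include <string>

int main()
{
    boost::asio::io_service io;          // io_context in recent Boost versions
    boost::asio::ip::tcp::socket socket(io);
    // ... connect the socket here ...

    boost::asio::streambuf sbuf;
    boost::asio::async_read_until(socket, sbuf, '\n',
        [&](const boost::system::error_code& ec, std::size_t bytes_transferred)
        {
            if (ec) return;
            // bytes_transferred counts the bytes up to and including the '\n' delimiter
            std::istream is(&sbuf);
            std::string line;
            std::getline(is, line);
            std::cout << "got line: " << line << "\n";
        });

    io.run(); // handlers are only ever invoked from inside run()/run_one()/poll()
}

Note that the bytes_transferred value given to the async_read_until handler already tells you where the delimiter is, so you don't need to search the buffer yourself.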
Currently, I am using the usart_read_buffer_job() function provided by the ASF library. I placed this function inside the while(1) loop as below:
int main()
{
    /* Some pieces of code for initialization */
    while(1)
    {
        usart_read_buffer_job();
        while(1) // The second infinite loop
        {
            /* Some other pieces of code */
        }
    }
}
It works perfectly well for the first interrupt handler call. However, after returning from the handler, I was no longer able to trigger the interrupt handler again. The program kept running within the second infinite loop and was not able to execute usart_read_buffer_job() again, which is probably the cause of the handler's malfunction.
In this case, my purpose is to jump into the USART interrupt handler regardless of the number of infinite loops being executed in main(). Of course, without ASF, the issue could be solved by manually setting up the handler, but I still wonder how this issue could be solved with the functions provided by ASF.
Looking forward to getting the response from the community soon.
Thank you,
Thanks for the very quick response.
The code I am working on is confidential. Hence, I can only share the ASF library functions with you and explain briefly how they work.
In the ASF, we typically have two functions for handling the interrupt, namely usart_read_buffer_job and usart_read_job.
Before using these two functions, the handler callbacks are set up by two other functions:
usart_register_callback: Registers a callback function, which is implemented by the user.
usart_enable_callback: The callback function will be called from the interrupt handler when the conditions for the callback type are met.
And these two functions above are placed in the initialization code as shown in the question.
Then, depending on the design purpose, the handlers are called whenever a character or a group of characters is received via the UART peripheral, using usart_read_job or usart_read_buffer_job respectively.
usart_read_buffer_job: Sets up the driver to read from the USART to a given buffer. If registered and enabled, a callback function will be called.
usart_read_job: Sets up the driver to read data from the USART module to the data pointer given. If registered and enabled, a callback will be called when the receiving is completed.
You could find more details about these functions on http://asf.atmel.com/docs/latest/samd21/html/group__asfdoc__sam0__sercom__usart__group.html
In this case, assuming the main program stalls in some unexpected infinite loop, the handlers should still run at any time a command is received via the UART peripheral, and do some important tasks to resolve the problem, for example.
I hope this explanation makes my previous question clearer, and I hope to get a response from you all soon.
First of all, do not put an infinite loop inside another infinite loop!
If you find yourself doing this, it indicates a probable design flaw. Please revise your design.
(Let's call it the first rule)
Second, you seem to be using event-driven I/O (rather than polling) by registering a handler/callback.
Here is the second rule: you never call a handler yourself.
You register a callback function (handler) to be called when the event occurs.
If you are doing the initialization and configuration correctly, the code should work following this scheme:
void initialization()
{
    /* Device and other initialization */
    ...
    usart_register_callback(...);   /* Register usart_read_callback() here */
    usart_enable_callback(...);
}

int main()
{
    initialization();
    usart_read_buffer_job(...);     /* Start the first read into your receive buffer */
    while(1)
    {
        /* Some other pieces of code */
    }
}

void usart_read_callback(...)
{
    usart_write_buffer_job(...);    /* e.g. echo or write out the received data */
    usart_read_buffer_job(...);     /* Re-arm the read so the callback fires again */
}
usart_read_buffer_job() will only invoke the callback one time, so after the callback has been dealt with, you must invoke usart_read_buffer_job() again (perhaps at the end of the callback once the processing is finished, as in the scheme above).
Only one infinite loop can run unless you have some kind of separate tasks (such as in FreeRTOS), each with their own loop.
I'm still trying to understand the work of boost::asio C++ library.
According to the answer to my previous question, the async_write() method enqueues the message in the network stack and returns immediately. However, the documentation says it is wrong to do the following:
void dont_do_this()
{
    std::string msg = "Hello, world!";
    boost::asio::async_write(socket, boost::asio::buffer(msg), my_handler);
}
They insist that we need to ensure that the buffer for the operation is valid until the completion handler is called. The question is WHY? At the moment async_write returns, we've already put our message in the network stack, we don't need the buffer any longer, and the automatic variable msg can be destroyed without serious consequences. Where am I wrong?
async_write does not really queue the message in the network stack. Instead it queues the write in Boost's asynchronous task queue held by the io_service. The write to the network stack actually happens later, when you call run() on the io_service. In short, there is an intermediate queue.
In your case, boost::asio::buffer keeps a reference to msg, not a copy of it. If msg goes out of scope before your message is sent to the network stack, the buffer points to a dangling string.
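One common way around this, sketched under the assumption that a heap allocation per message is acceptable, is to tie the buffer's lifetime to the completion handler with a shared_ptr (the function name is made up):

#include <boost/asio.hpp>
#include <memory>
#include <string>

void do_this_instead(boost::asio::ip::tcp::socket& socket)
{
    // The string lives on the heap; the handler keeps it alive until the write completes.
    auto msg = std::make_shared<std::string>("Hello, world!");

    boost::asio::async_write(socket, boost::asio::buffer(*msg),
        [msg](const boost::system::error_code& ec, std::size_t /*bytes_transferred*/)
        {
            // msg is released here, only after the data has actually been written.
            if (ec) { /* handle the error */ }
        });
}

Making the buffer a member of the class that owns the socket works just as well, as long as that object outlives the operation.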
So I've made a socket class that uses boost::asio library to make asynchronous reads and writes. It works, but I have a few questions.
Here's a basic code example:
class Socket
{
public:
    void doRead()
    {
        m_sock->async_receive_from(boost::asio::buffer(m_recvBuffer), m_from,
            boost::bind(&Socket::handleRecv, this,
                        boost::asio::placeholders::error,
                        boost::asio::placeholders::bytes_transferred));
    }

    void handleRecv(const boost::system::error_code& e, std::size_t bytes)
    {
        if (e || !bytes)
        {
            handle_error();
            return;
        }
        // do something with the data read
        do_something(m_recvBuffer);
        doRead(); // read another packet
    }

protected:
    boost::array<char, 1024> m_recvBuffer;
    boost::asio::ip::udp::endpoint m_from;
};
It seems that the program will read a packet, handle it, then prepare to read another. Simple.
But what if I set up a thread pool? Should the next call to doRead() be before or after handling the read data? It seems that if it is put before do_something(), the program can immediately begin reading another packet, and if it is put after, the thread is tied up doing whatever do_something() does, which could possibly take a while. If I put the doRead() before the handling, does that mean the data in m_recvBuffer might change while I'm handling it?
Also, if I'm using async_send_to(), should I copy the data to be sent into a temporary buffer, because the actual send might not happen until after the data has fallen out of scope? i.e.
void send()
{
    char data[] = {1, 2, 3, 4, 5};
    m_sock->async_send_to(boost::asio::buffer(&data[0], 5), someEndpoint, someHandler);
} // "data" gets deallocated, but the write might not have happened yet!
Additionally, when the socket is closed, handleRecv() will be called with an error indicating it was interrupted. If I do
Socket* mySocket = new Socket()...
...
mySocket->close();
delete mySocket;
could it cause an error, because there is a chance that mySocket will be deleted before handleRecv() gets called/finished?
Lots of questions here, I'll try to address them one at a time.
But what if I set up a thread pool?
The traditional way to use a thread pool with Boost.Asio is to invoke io_service::run() from multiple threads. Beware this isn't a one-size-fits-all answer, though; there can be scalability or performance issues, but this methodology is by far the easiest to implement. There are many similar questions on Stack Overflow with more information.
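A minimal sketch of that (the thread count of 4 is arbitrary):

#include <boost/asio.hpp>
#include <thread>
#include <vector>

int main()
{
    boost::asio::io_service io;

    // Keeps run() from returning before any asynchronous work has been started.
    boost::asio::io_service::work work(io);

    // ... set up sockets and start the first async operations here ...

    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i)
        pool.emplace_back([&io] { io.run(); }); // each thread dispatches completion handlers

    for (auto& t : pool)
        t.join();
}

Keep in mind that with more than one thread, handlers for different outstanding operations can run concurrently, so shared state such as m_recvBuffer needs a strand or some other form of synchronization.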
Should the next call to doRead be before or after handling the read
data? It seems that if it is put before do_something(), the program
can immediately begin reading another packet, and if it is put after,
the thread is tied up doing whatever do_something does, which could
possibly take a while.
This really depends on what do_something() needs to do with m_recvBuffer. If you wish to invoke do_something() in parallel with doRead() using io_service::post() you will likely need to make a copy of m_recvBuffer.
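As a sketch, a modified handleRecv() along those lines might look like this. It assumes the class has gained an io_service reference m_io, and do_something_with() is a made-up stand-in for your processing function:

void handleRecv(const boost::system::error_code& e, std::size_t bytes)
{
    if (e || !bytes) { handle_error(); return; }

    // Copy out only the bytes belonging to this datagram...
    std::vector<char> packet(m_recvBuffer.begin(), m_recvBuffer.begin() + bytes);

    // ...so the receive buffer can be reused by the next read immediately.
    doRead();

    // Process the copy independently of the next read.
    m_io.post([packet = std::move(packet)]() { do_something_with(packet); });
}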
If I put the doRead() before the handling, does
that mean the data in m_recvBuffer might change while I'm handling it?
As I mentioned previously, yes, this can and will happen.
Also, if I'm using async_send_to(), should I copy the data to be sent
into a temporary buffer, because the actual send might not happen
until after the data has fallen out of scope?
As the documentation describes, it is up to the caller (you) to ensure the buffer remains in scope for the duration of the asynchronous operation. As you suspected, your current example invokes undefined behavior because data[] will go out of scope.
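One way to do that, sketched here assuming a heap allocation per datagram is acceptable, is to let the completion handler own the data via a shared_ptr (names are illustrative):

void send()
{
    // The vector lives on the heap and is owned by the handler until the send completes.
    auto data = std::make_shared<std::vector<char>>();
    *data = {1, 2, 3, 4, 5};

    m_sock->async_send_to(boost::asio::buffer(*data), someEndpoint,
        [data](const boost::system::error_code& ec, std::size_t /*sent*/)
        {
            // "data" is released here, after the datagram has been handed to the OS.
            if (ec) { /* handle the error */ }
        });
}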
Additionally, when the socket is closed, the handleRecv() will be called
with an error indicating it was interrupted.
If you wish to continue to use the socket, use cancel() to interrupt outstanding asynchronous operations. Otherwise, close() will work. The error passed to outstanding asynchronous operations in either scenario is boost::asio::error::operation_aborted.
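As a sketch, the receive handler from the example above could distinguish that case like so:

void Socket::handleRecv(const boost::system::error_code& e, std::size_t bytes)
{
    if (e == boost::asio::error::operation_aborted)
    {
        // cancel() or close() was called; do not start another read.
        return;
    }
    if (e || !bytes)
    {
        handle_error();
        return;
    }
    do_something(m_recvBuffer);
    doRead(); // keep reading only while the socket is still usable
}

Note that this still assumes the Socket object outlives the pending handler; deleting it immediately after close(), as in your snippet, would not be safe.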