asio underlying behavior in async_receive - c++

I have worked with the asio library on a few projects and have always managed to get it to work, but I feel there are some things about it that I have not entirely understood so far.
I am wondering how async_receive works.
I googled around a bit and had a look at the implementation, but didn't understand it very well. This is the way that I often use async communication:
socket.async_receive(receive_buffer, receiveHandler);
where receiveHandler is the function that will be called upon the arrival of data on the socket.
I know that the async_receive call returns immediately. So here are my questions:
Does async_receive create a new thread each time it is called?
If not, does it mean that there is a thread responsible for waiting for data, which calls the handler function when the data arrives? When does this thread get created?
If I were to turn this call into a recursive call by using a lambda function like this:
void cyclicReceive() {
    // Imagine the whole thing is in a class, so "this" means something here and
    // "receiveHandler" is still a valid function that gets called.
    socket.async_receive(receive_buffer,
        [this](const asio::error_code& error_code, const std::size_t num_bytes)
        {
            receiveHandler(error_code, num_bytes);
            cyclicReceive();
        });
}
is there any danger of stack overflow? If not, why not?
I tried to show a minimal example by removing unnecessary details, so the exact syntax might be a bit wrong.

Asio does not implicitly create any new threads. In general, it is based on a queue of commands. When you call io.run(), the framework takes commands from the queue and executes them until the queue is empty. All the async_ operations in Asio push new commands onto that internal queue.
Therefore there is no risk of stack overflow. The worst possible (but not really probable) scenario is an out-of-memory error when there is no space left for commands in the queue, which is very unlikely.
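To make that concrete, here is a minimal sketch of the recursive pattern from the question (class and member names are illustrative, and it assumes a socket that has already been connected). The key point is that the lambda is not called from inside async_receive; it is queued and later invoked from io_context::run() (io_service::run() in older Asio), so each re-arm returns to the event loop instead of growing the stack:

#include <array>
#include <cstddef>
#include <asio.hpp>

class Receiver {
public:
    explicit Receiver(asio::io_context& io) : socket_(io) {}

    void cyclicReceive() {
        // async_receive only queues the operation and returns immediately;
        // the lambda runs later, from whichever thread is inside io.run().
        socket_.async_receive(asio::buffer(receive_buffer_),
            [this](const asio::error_code& ec, std::size_t num_bytes) {
                if (ec) return;              // stop the chain on error
                receiveHandler(ec, num_bytes);
                cyclicReceive();             // re-arm: queues a new operation, no stack recursion
            });
    }

private:
    void receiveHandler(const asio::error_code&, std::size_t) { /* consume data */ }

    asio::ip::tcp::socket socket_;
    std::array<char, 1024> receive_buffer_{};
};

// Usage (after connecting the socket somewhere):
//   asio::io_context io;
//   Receiver r(io);
//   r.cyclicReceive();
//   io.run();   // handlers execute here, one after another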

Related

How to call a function repeatedly until a mock has been satisfied?

I'm writing a library with a C (not C++) interface that contains an event loop, call it processEvents. This should be called in a loop, and invokes user-defined callbacks when something has happened. The "something" in this case is triggered by an RPC response that is received in a different thread, and added to an event queue which is consumed by processEvents on the main thread.
So from the point of view of the user of my library, the usage looks like this:
void myCallback(void *userData) {
    // ...
}

int main() {
    setCallback(&myCallback, NULL);
    requestCallback();
    while (true) {
        processEvents(); /* Eventually calls myCallback, but not immediately. */
        doSomeOtherStuff();
    }
}
Now I want to test, using Google Test and Google Mock, that the callback is indeed called.
I've used MockFunction<void()> to intercept the actual callback; this is called by a C-style static function that casts the void *userData to a MockFunction<void()> * and calls it. This works fine.
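For context, that trampoline looks roughly like this (the function name is made up for illustration):

#include <gmock/gmock.h>

using testing::MockFunction;

// C-style trampoline: the library only knows a function pointer plus a
// void *userData, so we cast userData back to the mock and invoke it.
extern "C" void callbackTrampoline(void *userData) {
    static_cast<MockFunction<void()> *>(userData)->Call();
}

// In the test:
//   MockFunction<void()> myMockFunction;
//   setCallback(&callbackTrampoline, &myMockFunction);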
The trouble is: the callback doesn't necessarily happen on the first call of processEvents; all I know is that it happens eventually if we keep calling processEvents in a loop.
So I guess I need something like this:
while (!testing::Mock::AllExpectationsSatisfied() && !timedOut()) {
    processEvents();
}
But this fictional AllExpectationsSatisfied doesn't seem to exist. The closest I can find is VerifyAndClearExpectations, but it makes the test fail immediately if the expectations aren't met on the first try (and clears them, to boot).
Of course I could make this loop run for a full second or so, which would make the test green, but also needlessly slow.
Does anyone know a better solution?
If you are looking for efficient synchronization between threads, check out std::condition_variable. Until the next event comes in, your implementation with a while loop will keep on spinning, using up CPU resources while doing nothing useful.
Instead, it would make better sense to suspend the execution of your code, freeing up processing time for other threads, until an event comes in, and then signal to the suspended thread to resume its work. Condition variables do just that. For more information, check out the docs.
Furthermore, you might be interested in looking into std::future and std::promise, which basically encapsulate the pattern of waiting for something to come asynchronously. Find more details here.
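As a generic sketch of the condition-variable pattern described above (variable and function names are purely illustrative), the waiting thread sleeps instead of spinning until the other thread signals it:

#include <chrono>
#include <condition_variable>
#include <mutex>

std::mutex m;
std::condition_variable cv;
bool eventArrived = false;

// Called from the thread that receives the RPC response:
void onRpcResponse() {
    {
        std::lock_guard<std::mutex> lock(m);
        eventArrived = true;
    }
    cv.notify_one();
}

// Called from the thread that wants to wait without busy-waiting:
bool waitForEvent() {
    std::unique_lock<std::mutex> lock(m);
    // Sleeps until notified, or until the one-second timeout expires.
    return cv.wait_for(lock, std::chrono::seconds(1), [] { return eventArrived; });
}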
After posting the question, I thought of using a counter that is decremented by each mock function invocation. But @PetrMánek's answer gave me a better idea. I ended up doing something like this:
MockFunction<void()> myMockFunction;
// Machinery to wire callback to invoke myMockFunction...
Semaphore semaphore; // Implementation from https://stackoverflow.com/a/4793662/14637
EXPECT_CALL(myMockFunction, Call())
    .WillRepeatedly(Invoke(&semaphore, &Semaphore::notify));
do {
    processEvents();
} while (!semaphore.try_wait());
(I'm using a semaphore rather than std::condition_variable because (1) it isn't subject to spurious wakeups and (2) it can be used in case I expect multiple callback invocations.)
Of course this still needs an overall timeout so a failing test won't hang forever. An optional timeout could also be added to try_wait() to make this more CPU-efficient. These improvements are left as an exercise for the reader ;)
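For completeness, a minimal semaphore covering just the two operations used above might look like this (a sketch; a fuller version, like the linked one, would also support blocking waits):

#include <mutex>

class Semaphore {
public:
    void notify() {
        std::lock_guard<std::mutex> lock(mutex_);
        ++count_;
    }

    // Returns true (and consumes one notification) if notify() has been
    // called more times than try_wait() has succeeded; false otherwise.
    bool try_wait() {
        std::lock_guard<std::mutex> lock(mutex_);
        if (count_ == 0)
            return false;
        --count_;
        return true;
    }

private:
    std::mutex mutex_;
    unsigned count_ = 0;
};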

Async send automatic variable using boost::asio. Is it possible?

I'm still trying to understand how the boost::asio C++ library works.
According to the answer to my previous question, the async_write() method enqueues the message in the network stack and returns immediately. However, the documentation says it is wrong to do such a thing:
void dont_do_this()
{
    std::string msg = "Hello, world!";
    boost::asio::async_write(socket, boost::asio::buffer(msg), my_handler);
}
They insist that we need to ensure that the buffer for the operation is valid until the completion handler is called. The question is WHY? At the moment async_write returns, we've already put our message in the network stack and we don't need the buffer any longer, so the automatic variable msg can be destroyed without serious consequences. Where am I wrong?
async_write does not really queue the message in the network stack. Instead it queues the write in Boost's asynchronous task queue held by the io_service. The write to the network stack actually happens later, when you call run on the io_service. In short, there is an intermediate queue.
In your case, boost::asio::buffer keeps a reference to msg and not a copy of it. If msg goes out of scope before your message is sent to the network stack, the buffer is pointing to a dangling string.
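A common way to satisfy the lifetime requirement is to let the completion handler own the data, for example through a shared_ptr. A minimal sketch (assuming a connected socket):

#include <memory>
#include <string>
#include <boost/asio.hpp>

void do_this_instead(boost::asio::ip::tcp::socket& socket)
{
    auto msg = std::make_shared<std::string>("Hello, world!");
    boost::asio::async_write(socket, boost::asio::buffer(*msg),
        // The lambda holds a copy of the shared_ptr, so the string stays
        // alive until the completion handler has run.
        [msg](const boost::system::error_code& /*ec*/, std::size_t /*bytes*/) {
            // msg may be released now; the write has completed or failed.
        });
}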

Call method right after blocking call

I'm using a third party library which has a blocking function, that is, it won't return until it's done; I can set a timeout for that call.
Problem is, that function puts the library in a certain state. As soon as it enters that state, I need to do something from my own code. My first solution was to do that in a separate thread:
void LibraryWrapper::DoTheMagic(){
    //...
    boost::thread EnteredFooStateNotifier( &LibraryWrapper::EnterFooState, this );
    ::LibraryBlockingFunction( timeout_ );
    //...
}

void LibraryWrapper::EnterFooState(){
    ::Sleep( 50 ); //Ensure ::LibraryBlockingFunction is called first
    //Do the stuff
}
Quite nasty, isn't it? I had to put in the Sleep call because ::LibraryBlockingFunction must definitely be called before the stuff I do below, or everything will fail. But waiting 50 milliseconds is quite a poor guarantee, and I can't wait longer because this particular task needs to be done as fast as possible.
Isn't there a better way to do this? Consider that I don't have access to the Library's code. Boost solutions are welcome.
UPDATE: Like one of the answers says, the library API is ill-defined. I sent an e-mail to the developers explaining the problem and suggesting a solution (i.e. making the call non-blocking and sending an event to a registered callback notifying the state change). In the meantime, I set a timeout high enough to ensure stuff X is done, and set a delay high enough before doing the post-call work to ensure the library function was called. It's not deterministic, but works most of the time.
Would using boost future clarify this code? To use an example from the boost future documentation:
int calculate_the_answer_to_life_the_universe_and_everything()
{
    return 42;
}

boost::packaged_task<int> pt(calculate_the_answer_to_life_the_universe_and_everything);
boost::unique_future<int> fi = pt.get_future();
boost::thread task(boost::move(pt)); // launch task on a thread

// In your example, now would be the time to do the post-call work.

fi.wait(); // wait for it to finish
Although you will still presumably need a bit of a delay in order to ensure that your function call has happened (this bit of your problem seems rather ill-defined - is there any way you can establish deterministically when it is safe to execute the post-call state change?).
The problem as I understand it is that you need to do this:
1. Enter a blocking call.
2. After you have entered the blocking call but before it completes, you need to do something else.
3. You need to have finished #2 before the blocking call returns.
From a purely C++ standpoint, there's no way you can accomplish this deterministically, that is, without understanding the details of the library you're using.
But I noticed your timeout value. That might provide a loophole, maybe.
What if you:
1. Enter the blocking call with a timeout of zero, so that it returns immediately.
2. Do your other stuff, either in the same thread or synchronized with the main thread, perhaps using a barrier.
3. After #2 is verified to be done, enter the blocking call again, with the normal non-zero timeout.
This will only work if the library's state changes when you enter the blocking call with a zero timeout.
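In code, the proposed sequence might look something like this (purely a sketch, reusing the names from the question; whether the zero-timeout call actually changes the library's state is exactly the open question):

void LibraryWrapper::DoTheMagic(){
    // 1. Enter the blocking call with a zero timeout so it returns immediately
    //    (hoping this already puts the library into the desired state).
    ::LibraryBlockingFunction( 0 );

    // 2. Do the work that must happen once the library is in that state.
    EnterFooState();

    // 3. Enter the blocking call again with the real timeout.
    ::LibraryBlockingFunction( timeout_ );
}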

boost::asio async_read doesn't receive data or doesn't use callback

I am trying to receive data from a server application using boost asio's async_read() free function, but the callback I set for when data is received is never called.
The client code is like this:
Client::Client()
{
    m_oIoService.run(); // member boost::asio::io_service
    m_pSocket = new boost::asio::ip::tcp::socket(m_oIoService);

    // Connection to the server
    [...]

    // First read
    boost::asio::async_read(*m_pSocket,
        boost::asio::buffer((void*)&m_oData, sizeof(m_oData)),
        boost::bind(&Client::handleReceivedData, this,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
}
I tried with small data (a short string) and I can't get it to work. When I use the synchronous read function (boost::asio::read()) with the same first two parameters, everything works perfectly.
Am I missing something with the use of the io_service? I am still unsure about how it works.
boost::asio::io_service::run() is a blocking call. Now, in your example it may or may not return immediately. If it doesn't, you are blocked even before you create the socket, and never call read, so you cannot expect a callback. If it does, the dispatch loop has already exited, so no callbacks are ever delivered.
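Here is a sketch of an ordering that does deliver the callback: queue the async_read first, then let run() block and dispatch it. (The separate start() method is my own addition; member names are taken from your code.)

Client::Client()
    : m_pSocket(new boost::asio::ip::tcp::socket(m_oIoService))
{
    // Connection to the server
    // [...]
}

void Client::start()
{
    // Queue the asynchronous read first...
    boost::asio::async_read(*m_pSocket,
        boost::asio::buffer((void*)&m_oData, sizeof(m_oData)),
        boost::bind(&Client::handleReceivedData, this,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));

    // ...then block and dispatch: handleReceivedData is invoked from inside
    // run() once the data has arrived.
    m_oIoService.run();
}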
Read more about boost::asio::io_service::run(). I recommend you check out the documentation, including the tutorial, examples, and reference. It is worth going through it in full to understand the concepts.
Hope it helps!
P.S.: On a side note, your code is not exception safe. Beware that if the constructor of a class fails with an exception, the destructor of that class instance is never called. Thus, you may leak at least m_pSocket if its type is not one of the "smart pointers". You should consider making it exception safe, moving the code to another method that is called by the user, or even wrapping this functionality in a free function.
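For instance, declaring the socket as a smart-pointer member avoids the leak even if the constructor later throws (a sketch; member names follow the question):

#include <memory>
#include <boost/asio.hpp>

class Client {
    boost::asio::io_service m_oIoService;
    // If the constructor throws after this member has been initialized, the
    // unique_ptr is still destroyed and deletes the socket; a raw pointer
    // would leak because ~Client() never runs.
    std::unique_ptr<boost::asio::ip::tcp::socket> m_pSocket;

public:
    Client()
        : m_pSocket(new boost::asio::ip::tcp::socket(m_oIoService))
    {
        // connection and async_read setup...
    }
};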

Boost::Asio : io_service.run() vs poll() or how do I integrate boost::asio in mainloop

I am currently trying to use boost::asio for some simple TCP networking for the first time, and I already came across something I am not really sure how to deal with. As far as I understand, the io_service.run() method is basically a loop which runs until there is nothing left to do, which means it will run until I release my little server object. Since I already have some sort of mainloop set up, I would rather update the networking loop manually from there, just for the sake of simplicity, and I think io_service.poll() would do what I want, sort of like this:
void myApplication::update()
{
    myIoService.poll();
    // do other stuff
}
This seems to work, but I am still wondering if there is a drawback to this method, since it does not seem to be the common way of dealing with boost::asio's io services. Is this a valid approach, or should I rather run io_service.run() in an extra thread so it doesn't block?
Using io_service::poll instead of io_service::run is perfectly acceptable. The difference is explained in the documentation:

The poll() function may also be used to dispatch ready handlers, but without blocking.
Note that io_service::run will block if there's any work left in the queue:

The work class is used to inform the io_service when work starts and finishes. This ensures that the io_service object's run() function will not exit while work is underway, and that it does exit when there is no unfinished work remaining.
whereas io_service::poll does not exhibit this behavior; it just invokes ready handlers. Also note that you will need to invoke io_service::reset before any subsequent invocation of io_service::run or io_service::poll.
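To illustrate the work object mentioned above (a minimal sketch using the older io_service names; the threading details are illustrative):

#include <memory>
#include <thread>
#include <boost/asio.hpp>

int main()
{
    boost::asio::io_service io;

    // While the work object exists, io.run() will not return, even when the
    // handler queue is momentarily empty.
    auto work = std::make_unique<boost::asio::io_service::work>(io);

    std::thread t([&io] { io.run(); });

    // ... start async operations, post handlers ...

    work.reset(); // let run() return once the remaining handlers finish
    t.join();
}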
A drawback is that you'll create a busy loop.

while (true) {
    myIoService.poll();
}

will use 100% CPU. myIoService.run() will use 0% CPU.
myIoService.run_one() might do what you want but it will block if there is nothing for it to do.
A loop like this lets you poll, doesn't busy-wait, and resets as needed. (I'm using the more recent io_context that replaced io_service.)
while (!exitCondition) {
    if (ioContext.stopped()) {
        ioContext.restart();
    }
    if (!ioContext.poll()) {
        if (stuffToDo) {
            doYourStuff();
        } else {
            std::this_thread::sleep_for(std::chrono::milliseconds(3));
        }
    }
}