Running unit tests seems to block io_service indefinitely - C++

I have a class that uses asio::generic::stream_protocol::socket to connect to domain sockets (asio::local::stream_protocol::endpoint) and TCP sockets (asio::ip::tcp::endpoint).
To test that class I have a series of unit tests in a single file using the Catch framework.
I've suddenly come across a problem: when running the tests, they get stuck. Passing -DASIO_ENABLE_HANDLER_TRACKING in the compiler flags, I can see that execution gets stuck on async_connect. This does not happen if I comment out all tests but one, no matter which. If I have two tests, whether they connect to domain sockets, TCP sockets, or one of each, I get a blockage.
The output from Asio varies from run to run, but here is an example:
$ tests/unit_tests
#asio|1478248907.301230|0*1|deadline_timer#0x7f96f1c07ad8.async_wait
#asio|1478248907.301276|0*2|resolver#0x7f96f1c07ac0.async_resolve
#asio|1478248907.301322|>1|ec=system:0
#asio|1478248907.301328|<1|
#asio|1478248907.302052|>2|ec=system:0,...
#asio|1478248907.302186|2*3|socket#0x7f96f1c07a20.async_connect
#asio|1478248907.302302|<2|
#asio|1478248907.302468|>3|ec=system:0
#asio|1478248907.302481|<3|
#asio|1478248907.302551|0*4|socket#0x7f96f1c07a20.async_send
#asio|1478248907.302611|>4|ec=system:0,bytes_transferred=23
#asio|1478248907.302617|<4|
#asio|1478248907.302621|0*5|socket#0x7f96f1c07a20.async_receive(null_buffers)
#asio|1478248907.356478|>5|ec=system:0,bytes_transferred=0
#asio|1478248907.356547|<5|
#asio|1478248907.356622|0|socket#0x7f96f1c07a20.close
#asio|1478248907.372967|0|deadline_timer#0x7f96f1c07ad8.cancel
#asio|1478248907.372981|0|resolver#0x7f96f1c07ac0.cancel
#asio|1478248907.373509|0*6|deadline_timer#0x7f96f1d00468.async_wait
#asio|1478248907.373526|0*7|resolver#0x7f96f1d00450.async_resolve
#asio|1478248907.374910|>7|ec=system:0,...
#asio|1478248907.374946|7*8|socket#0x7f96f1d003b0.async_connect
#asio|1478248907.375014|<7|
#asio|1478248907.375127|>8|ec=system:0
#asio|1478248907.375135|<8|
My question is: what is the problem with running unit tests that open and close connections? If this is a no-no, how do you write unit tests that use async_connect?

io_service has run, run_one, poll and poll_one methods, and these are what actually execute the completion handlers. Boost.Asio may have internal threads of its own, but those are not guaranteed to be in the right state to call your handlers; handlers only run inside threads that are calling one of those methods. Hence, even in a unit test you must figure out which thread is going to call the completion handlers.
Secondly, run runs to completion, and then returns. From your description (first test succeeds, second fails) it sounds like you did call run but did not reset and re-run the io_service.
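A minimal sketch of that restart pattern, assuming standalone Asio and one io_service reused across test cases (all names here are illustrative, not from the question):

#include <asio.hpp>

asio::io_service io; // illustrative: one io_service shared by every test

void run_one_test_case()
{
    // ... start this test's async_connect / async_send operations ...
    io.run();   // dispatches handlers until no work remains, then returns
    io.reset(); // without this, the next run() returns immediately and
                // the following test's handlers are never invoked
}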

The problem seemed to be related to the way I was iterating through the output of a tcp::resolver.
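The question doesn't show the faulty loop, but for reference, the canonical way to walk resolver results in that era of Asio uses a default-constructed iterator as the end sentinel; a minimal sketch (host and port are placeholders):

#include <asio.hpp>
#include <iostream>

int main()
{
    asio::io_service io;
    asio::ip::tcp::resolver resolver(io);
    asio::ip::tcp::resolver::query query("example.com", "80");

    asio::ip::tcp::resolver::iterator it = resolver.resolve(query);
    asio::ip::tcp::resolver::iterator end;   // default-constructed = end
    for (; it != end; ++it)                  // compare against the sentinel,
        std::cout << it->endpoint() << "\n"; // never iterate past it
}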

Related

How can I verify a signal in a unit test of a Camunda process?

We use the camunda-bpm-assert and camunda-bpm-assert-scenario libraries for unit testing Camunda processes (testing of .bpmn files).
But I couldn't find any approach for signal testing: how can we verify that a signal with the right name and variables was thrown during test execution?
I'd appreciate any ideas.
It's a workaround, but if this is important to test, you could deploy another process in your test scenario that receives the signal and has a follow-up task (or execution listener) that records the call and its variables, thus allowing you to assert on these.

C++ Boost Unit Test: How to manually finish unit test with success?

Currently I'm coding a network lib based on Boost.Asio.
I want to automatically test my lib with a kind of loopback echo test.
The problem is that the server runs continuously, so the test never ends.
My idea is to do some EQUAL tests with the response data and to manually stop the unit test with success. If a timeout occurs instead, the test should stop with a failure (I know, it's more an integration test than a unit test)...
Is there a Boost Unit Test macro to manually stop the test with success?
Thanks!
You can just leave the test function; that will count as success. If you want to explicitly "set" the result, you can use BOOST_CHECK(true) for success or BOOST_CHECK(false) for failure. There are variants of these macros with an additional message to be printed on failure (BOOST_CHECK_MESSAGE).
The test framework itself is single threaded and the individual tests run one after the other. Each test function has to end by either an explicit return statement or execution "falling off the end".
If you call a function that does not return by itself but needs some trigger to do so, you need to somehow schedule this trigger before calling the function. This might require starting a thread, waiting there for a certain time and then sending the trigger to cause the blocking function to return.
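For the Asio case, one way to schedule that trigger without a separate thread is a deadline_timer on the same io_service; a minimal sketch, in which the echo setup and the five-second limit are illustrative:

#define BOOST_TEST_MODULE loopback_echo
#include <boost/test/included/unit_test.hpp>
#include <boost/asio.hpp>

BOOST_AUTO_TEST_CASE(echo_roundtrip)
{
    boost::asio::io_service io;
    bool timed_out = false;

    // Fail-safe trigger: stop the io_service if the echo takes too long.
    boost::asio::deadline_timer timeout(io, boost::posix_time::seconds(5));
    timeout.async_wait([&](const boost::system::error_code& ec) {
        if (!ec) { timed_out = true; io.stop(); } // skipped if cancelled
    });

    // ... start the loopback echo here; after comparing the echoed data
    // with BOOST_CHECK_EQUAL, call timeout.cancel() so run() can return ...

    io.run();                 // returns when work finishes or on stop()
    BOOST_CHECK(!timed_out);  // returning normally then counts as success
}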

Boost ASIO Network Timing Issue

I am using boost::asio to implement network programming and running into timing issues. The issue currently occurs mostly on the client side.
The protocol begins with the server returning a date-time string to the user, which the client reads. Up to that part it works fine. But what I also want is to be able to write commands to the server, which then processes them. To accomplish this I use the io_service.post() function as shown below.
io_service.post(boost::bind()); // the bound function calls the async_write() method.
For some reason the write attempt happens before the initial client/server communication, when the socket has not been created yet, and I get a bad socket descriptor error.
Now the io_service's run method is indeed called in another thread.
When I place a sleep(2) call before the post method, it works fine.
Is there a way to synchronize this, so that the socket is created before any posted calls are executed?
When creating the socket and establishing the connection using boost::asio, you can supply a handler to be called when these operations have either completed or failed. So you should trigger your "posted call" from the success callback, as sketched below.
Relevant methods and classes are :
boost::asio::ip::tcp::resolver::async_resolve(...)
boost::asio::ip::tcp::socket::async_connect(...)
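A minimal sketch of that ordering, issuing the write only from inside the connect handler (the address, port, and payload are placeholders):

#include <boost/asio.hpp>
#include <iostream>

using boost::asio::ip::tcp;

int main()
{
    boost::asio::io_service io;
    tcp::socket socket(io);
    tcp::endpoint server(boost::asio::ip::address::from_string("127.0.0.1"), 5555);

    socket.async_connect(server, [&](const boost::system::error_code& ec) {
        if (ec) { std::cerr << "connect: " << ec.message() << "\n"; return; }
        // The socket is connected here, so issuing the write is now safe.
        boost::asio::async_write(socket, boost::asio::buffer("date\n", 5),
            [](const boost::system::error_code&, std::size_t) {});
    });

    io.run(); // the connect handler always runs before the write begins
}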
I think the link below will give you some help:
http://www.boost.org/doc/libs/1_42_0/doc/html/boost_asio/reference/io_service.html

Most efficient way to handle a client connection (socket programming)

In every tutorial and example I have seen on the internet for Linux/Unix socket programming, the server-side code always involves an infinite loop that checks for client connections.
Example:
http://www.thegeekstuff.com/2011/12/c-socket-programming/
http://tldp.org/LDP/LG/issue74/tougher.html#3.2
Is there a more efficient way to structure the server-side code so that it does not involve an infinite loop, or to write the infinite loop in a way that takes up fewer system resources?
The infinite loop in those examples is already efficient. The call to accept() is a blocking call: the function does not return until a client is connecting to the server. Execution of the thread that called accept() is halted, and it does not take any processing power.
Think of accept() as a call to join() or a wait on a mutex/lock/semaphore.
Of course, there are many other ways to handle incoming connections, but those other ways deal with the blocking nature of accept(). This function is difficult to cancel, so there exist non-blocking alternatives which allow the server to perform other actions while waiting for an incoming connection. One such alternative is using select(), as sketched below. Other alternatives are less portable, as they involve low-level operating system calls to signal the connection through a callback function, an event, or some other asynchronous mechanism handled by the operating system...
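A minimal sketch of that select() alternative (listen_fd is assumed to be an already bound, listening socket; error handling is trimmed):

#include <sys/select.h>
#include <sys/socket.h>

// Wait up to one second for a pending connection, so the caller can do
// other work between attempts instead of blocking forever in accept().
int try_accept(int listen_fd)
{
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(listen_fd, &readfds);

    timeval tv{1, 0}; // one-second timeout
    int ready = select(listen_fd + 1, &readfds, nullptr, nullptr, &tv);
    if (ready > 0 && FD_ISSET(listen_fd, &readfds))
        return accept(listen_fd, nullptr, nullptr); // will not block now
    return -1; // timed out (or failed); do other work and call again
}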
For C++ you could look into Boost.Asio. You could also look into asynchronous I/O functions; there is also SIGIO.
Of course, even when using these asynchronous methods, your main program still needs to sit in a loop, or the program will exit.
The infinite loop is there to maintain the server's running state, so that when a client connection is accepted, the server won't quit immediately afterwards; instead it'll go back to listening for another client connection.
The accept() call is a blocking one - that is to say, it waits until a connection arrives. It does this in an extremely efficient way, using zero system resources (until a connection is made, of course) by making use of the operating system's network drivers, which trigger an event (or hardware interrupt) that wakes the listening thread up.
Here's a good overview of what techniques are available - The C10K problem.
When you are implementing a server that listens for a possibly unbounded number of connections, there is IMO no way around some sort of infinite loop. Usually this is not a problem at all, because when your socket is not marked as non-blocking, the call to accept() will block until a new connection arrives. Due to this blocking, no system resources are wasted.
Other libraries that provide an event-based system are ultimately implemented in the way described above; the basic blocking loop is sketched below.
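A minimal sketch of that blocking loop (listen_fd is assumed to be listening already; handle_client is a hypothetical per-connection handler):

#include <sys/socket.h>
#include <unistd.h>

// Hypothetical per-connection handler: drain one read and echo it back.
void handle_client(int fd)
{
    char buf[256];
    ssize_t n = read(fd, buf, sizeof buf);
    if (n > 0)
        write(fd, buf, n);
}

void serve_forever(int listen_fd)
{
    for (;;) {
        // accept() sleeps in the kernel: the loop body only executes when
        // a client actually connects, so no CPU is consumed while idle.
        int client = accept(listen_fd, nullptr, nullptr);
        if (client < 0)
            continue; // interrupted or transient error; try again
        handle_client(client);
        close(client);
    }
}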
In addition to what has already been posted, it's fairly easy to see what is going on with a debugger. You will be able to single-step through until you execute the accept() line, upon which the 'single-step' highlight will disappear and the app will run on - the next line is not reached. If you put a breakpoint on the next line, it will not fire until a client connects.
We need to follow best practice for writing client-server programs. The best guide I can recommend at this time is The C10K Problem. There are specific techniques to follow here: we can go for select, poll or epoll. Each has its own advantages and disadvantages.
If you are running your code on a recent kernel version, then I would recommend going for epoll; a minimal sketch follows below.
If you are using select, poll or epoll, then you will be blocked until you get an event/trigger, so your server will not spin in an infinite loop consuming CPU time.
In my personal experience, epoll is the best way to go, as I observed that the load on my server machine with 80k ACTIVE connections was much lower compared with select and poll. The load average of my server machine was just 3.2 with 80k active connections :)
On testing with poll, I found my server's load average went up to 7.8 on reaching 30k active client connections :(.
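A minimal epoll sketch of that idea (listen_fd is assumed to be an already listening socket; error handling is trimmed):

#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

void serve(int listen_fd)
{
    int epfd = epoll_create1(0);
    epoll_event ev{};
    ev.events = EPOLLIN;
    ev.data.fd = listen_fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

    epoll_event events[64];
    for (;;) {                                    // still a loop, but it
        int n = epoll_wait(epfd, events, 64, -1); // sleeps until an event
        for (int i = 0; i < n; ++i) {
            if (events[i].data.fd == listen_fd) {
                // New connection: register the client socket too.
                int client = accept(listen_fd, nullptr, nullptr);
                if (client >= 0) {
                    epoll_event cev{};
                    cev.events = EPOLLIN;
                    cev.data.fd = client;
                    epoll_ctl(epfd, EPOLL_CTL_ADD, client, &cev);
                }
            } else {
                char buf[4096];
                ssize_t r = read(events[i].data.fd, buf, sizeof buf);
                if (r <= 0)
                    close(events[i].data.fd); // peer gone; epoll drops it
                // ... otherwise handle the received data ...
            }
        }
    }
}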

scala specs don't exit when testing actors

I'm trying to test some actors using Scala specs. I run the test in IDEA or Maven (as JUnit) and it does not exit. Looking at the code, my test finishes, but some internal threads (the scheduler) are hanging around. How can I make the test finish?
Currently this is only possible by causing the actor framework's scheduler to forcibly shut down:
scala.actors.Scheduler.impl.shutdown
However, the underlying implementation of the scheduler has been changing in patch releases lately, so this may be different or may not quite work with the version you are on. In 2.7.7 the default scheduler appears to be an instance of scala.actors.FJTaskScheduler2, for which this approach should work; however, if you end up with a SingleThreadedScheduler it will not, as its shutdown method is a no-op.
This will only work if your actors are not waiting on a react at that time.