Currently I'm writing a network lib based on Boost.Asio.
I want to automatically test my lib with a kind of loopback echo test.
The problem is that the server runs continuously, so the test never ends.
My idea is to do some equality checks on the response data and then manually stop the unit test with success. If a timeout occurs instead, the test should stop with failure (I know, that makes it more of an integration test than a unit test)...
Is there a Boost Unit Test macro to manually stop the test with success?
Thanks!
You can just leave the test function; that counts as success. If you want to explicitly "set" the result, you can use BOOST_CHECK(true) for success or BOOST_CHECK(false) for failure. There are variants of these macros that take an additional message to be printed on failure (BOOST_CHECK_MESSAGE).
The test framework itself is single threaded and the individual tests run one after the other. Each test function has to end by either an explicit return statement or execution "falling off the end".
If you call a function that does not return by itself but needs some trigger to do so, you need to somehow schedule this trigger before calling the function. This might require starting a thread, waiting there for a certain time and then sending the trigger to cause the blocking function to return.
We use camunda-bpm-assert and camunda-bpm-assert-scenario libs for Camunda processes unit testing (testing of .bpmn).
But I couldn't find any approach for signal testing: how can we verify that a signal with the right name and variables was thrown during test execution?
Appreciate any ideas.
It's a workaround, but if this is important to test, you could deploy another process in your test scenario that receives a signal and has a follow up task (or execution listener) that records the call and the variables, thus allowing you to assert on these.
I have a class that uses asio::generic::stream_protocol::socket to connect to domain (asio::local::stream_protocol::endpoint) and TCP sockets (asio::ip::tcp::endpoint).
To test that class I have a series of unit tests in a single file using the Catch framework.
I've suddenly come across a problem: when running the tests, they get stuck. Adding -DASIO_ENABLE_HANDLER_TRACKING to the compiler flags, I can see that execution gets stuck on async_connect. This does not happen if I comment out all tests but one, no matter which. With two tests, whether they connect to domain sockets, TCP sockets, or one of each, they block.
Asio's output varies between runs, but here is an example:
$ tests/unit_tests
#asio|1478248907.301230|0*1|deadline_timer#0x7f96f1c07ad8.async_wait
#asio|1478248907.301276|0*2|resolver#0x7f96f1c07ac0.async_resolve
#asio|1478248907.301322|>1|ec=system:0
#asio|1478248907.301328|<1|
#asio|1478248907.302052|>2|ec=system:0,...
#asio|1478248907.302186|2*3|socket#0x7f96f1c07a20.async_connect
#asio|1478248907.302302|<2|
#asio|1478248907.302468|>3|ec=system:0
#asio|1478248907.302481|<3|
#asio|1478248907.302551|0*4|socket#0x7f96f1c07a20.async_send
#asio|1478248907.302611|>4|ec=system:0,bytes_transferred=23
#asio|1478248907.302617|<4|
#asio|1478248907.302621|0*5|socket#0x7f96f1c07a20.async_receive(null_buffers)
#asio|1478248907.356478|>5|ec=system:0,bytes_transferred=0
#asio|1478248907.356547|<5|
#asio|1478248907.356622|0|socket#0x7f96f1c07a20.close
#asio|1478248907.372967|0|deadline_timer#0x7f96f1c07ad8.cancel
#asio|1478248907.372981|0|resolver#0x7f96f1c07ac0.cancel
#asio|1478248907.373509|0*6|deadline_timer#0x7f96f1d00468.async_wait
#asio|1478248907.373526|0*7|resolver#0x7f96f1d00450.async_resolve
#asio|1478248907.374910|>7|ec=system:0,...
#asio|1478248907.374946|7*8|socket#0x7f96f1d003b0.async_connect
#asio|1478248907.375014|<7|
#asio|1478248907.375127|>8|ec=system:0
#asio|1478248907.375135|<8|
My question is: what is the problem with running unit tests that open and close connections? If this is a no-no, how do you write unit tests that use async_connect?
io_service has run, run_one, poll and poll_one methods, which are what actually execute the completion handlers. Boost.Asio may have its own internal threads, but the state of those threads may not be suitable for calling your handlers. Hence, even in a unit test, you must decide which thread is going to call the completion handlers.
Secondly, run runs to completion and then returns. From your description (first test succeeds, second fails) it sounds like you did call run but did not reset() and re-run the io_service.
The problem seemed to be related to the way I was iterating through the output of a tcp::resolver.
I have some unit tests that all execute things asynchronously. It turns out that even though I call expectAsync a number of times inside my unit tests (sometimes involving multiple nested asynchronous calls to expectAsync), the unit test still exits and calls the tearDown method, which effectively cuts off the infrastructure my asynchronous tests are running on. What I want is for each test to run and wait until all the expectations, async or not, have completed before it continues on to the next test. Is this possible to achieve? The reason my unit tests have been passing up to now is that the clean-up code in tearDown was also executing asynchronously, but ideally it should work whether it cleans up asynchronously or immediately.
We need to see your code to be able to pinpoint the exact problem.
Most likely you are not calling expectAsync enough. At any point where your code is waiting for an asynchronous callback, there must be at least one outstanding expectAsync function still waiting to be called.
You can cut it all down to one expectAsync call by creating a "done" function that you call whenever your test is completed:
test("ladadidadida", () {
  var done = expectAsync(() {});
  var first;
  something
      .then((a) { first = a; return somethingElse(); })
      .then((b) { expect(first, b); done(); });
});
I would like to write tests that work as follows:
start an asynchronous test
after this test is done, start the next asynchronous test
do that for an arbitrary number of tests
Setting QUnit.config.reorder to false does not prevent the tests from being started before the previous one has finished.
asyncTest('test1',function(){}); // all tests are started back on back
asyncTest('test2',function(){}); // but I would like to start them
asyncTest('test3',function(){}); // one after the other
I know that tests should be atomic, but in this case that would lead to one huge test, which itself may become error prone, so I would like to split it up.
Right now I am »packing« each test into a wrapping function and calling that function after the previous test is done, but that is somewhat awkward, and I would like to know what best practice exists for this.
Cheers!
I found a solution to reset these globals: reload the entire script before each test, which I did using the QUnit.testStart() callback. Now I have both atomic tests and freshly initialised variables in each test.
I'm trying to test some actors using Scala Specs. I run the test in IDEA or Maven (as JUnit) and it does not exit. Looking at the code, my test finishes, but some internal threads (the scheduler) are hanging around. How can I make the test finish?
Currently this is only possible by causing the actor framework's scheduler to forcibly shut down:
scala.actors.Scheduler.impl.shutdown
However, the underlying implementation of the scheduler has been changing in patch releases lately, so this may be different, or may not quite work with the version you are on. In 2.7.7 the default scheduler appears to be an instance of scala.actors.FJTaskScheduler2, for which this approach should work; however, if you end up with a SingleThreadedScheduler it will not, as its shutdown method is a no-op.
Note that this will only work if your actors are not waiting on a react at that time.