Distributed unit testing / scenario-based unit testing with Boost.Test

I am developing unit test cases for an application using the Boost.Test library. Some APIs can be tested directly.
But there are APIs which require interaction between test machines. For example, executing a certain API on machine 1 should trigger an API on test machine 2, and its response needs to be used again on machine 1 for the call to complete successfully.
How can I synchronize this? Does Boost provide other libraries for this interaction? If there are any other approaches, kindly suggest them.
Thanks in advance for your time and help.

There are two kinds of tests you can write for this interaction:
Unit test - using mocks/fakes you can fake the calls from the first component and fake the calls from the second component back. This way you can test the internal logic of the first component - for example, make sure that if no response is returned, a time-out exception is raised.
Integration/acceptance test - create both components as part of the test, configure them, and trigger the call from the first component.
In both kinds of tests you might need to use events and WaitForSingleObject to make sure that the test won't end before the response has returned.
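As a rough sketch of that last point, here is what the waiting could look like with Boost.Test; fakeRemoteCall is a made-up stand-in for whatever API triggers machine 2, and a portable std::condition_variable plays the role of WaitForSingleObject:

// Sketch only: fakeRemoteCall() is a hypothetical stand-in for the API
// that triggers machine 2; the real call would go over the network.
#define BOOST_TEST_MODULE DistributedApiTests
#include <boost/test/included/unit_test.hpp>

#include <chrono>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <string>
#include <thread>

// Fake remote call: answers "OK" from another thread after a short delay,
// mimicking the response coming back from the second test machine.
std::thread fakeRemoteCall(std::function<void(const std::string&)> onResponse)
{
    return std::thread([onResponse] {
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        onResponse("OK");
    });
}

BOOST_AUTO_TEST_CASE(response_from_machine2_completes_the_call)
{
    std::mutex m;
    std::condition_variable cv;
    bool responded = false;
    std::string response;

    std::thread remote = fakeRemoteCall([&](const std::string& answer) {
        std::lock_guard<std::mutex> lock(m);
        response = answer;
        responded = true;
        cv.notify_one();
    });

    // Block until the response arrives, or give up after a timeout instead
    // of letting the test end while the interaction is still in flight.
    bool arrived = false;
    {
        std::unique_lock<std::mutex> lock(m);
        arrived = cv.wait_for(lock, std::chrono::seconds(5),
                              [&] { return responded; });
    }
    remote.join();   // always reap the worker before the test ends

    BOOST_REQUIRE_MESSAGE(arrived, "timed out waiting for machine 2");
    BOOST_CHECK_EQUAL(response, "OK");
}

The same shape works with a Windows event: create it with CreateEvent, signal it with SetEvent inside the response callback, and wait on it in the test body with WaitForSingleObject and a timeout.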

Related

Unit testing my service and mocking dependencies

I have a service which has two dependencies. One of them is the $http service, responsible for making Ajax calls to my REST API.
Inside my service I have this function:
this.getAvailableLoginOptions = function() {
    return $http.get(path.api + '/security/me/twoFA/options').then(function (resp) {
        return new TwoFaLoginOptions(resp.data);
    });
};
This function gets some options from my API and returns an object.
Now, how should I unit test this function properly? Normally I would just mock the $http service so that when the get function is called with a string parameter ending with '/security/me/twoFA/options' I would return a valid response with options.
But what if some other developer comes in and refactors this function so that it now takes the options from another source, e.g. another API or the browser's local storage, but the function still works perfectly because it returns what it is supposed to?
So what really is unit testing? Should we test every function as a black box, assuming that if we give some input then we expect some particular output, OR should we test it as a white box, looking at every line of code inside the function and mocking everything - in which case the test becomes strongly dependent on all the dependencies and the way I use them?
Is it possible to write a unit test which checks that my function works properly no matter what algorithm or source of data is used to implement it? Or is it actually part of unit testing to check that my function really uses a dependency in a particular way (in addition to testing the function's logic)?
But what if some other developer comes in and refactors this function so now it takes the options from another source
This is exactly why Dependency Injection is used: it makes it simpler to manage dependencies between objects, because coherent functionality can be broken off into separate contracts (interfaces), which solves the problem of swapping out object dependencies at runtime or compile time.
For example, you could have an ITwoFaLoginOptions interface with multiple implementations (service, local storage, etc.), and then you'd mock the interface's get method.
Should we test every function as a black box [...] OR should we test it as a white box?
In general, unit testing is considered white-box testing: you mock the dependencies you have in order to obtain predefined responses that help you reach various code paths, while also asserting that those dependencies were called with the expected parameters. System (or integration) tests, on the other hand, take a black-box approach - for example, calling the service like a client and asserting against the response/DB.
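To make the white-box idea concrete, here is a small framework-agnostic sketch of the same pattern, written in C++ rather than AngularJS since most of this page is about C++ test frameworks; in the Angular case you would typically get the same effect with angular-mocks ($httpBackend) or a Jasmine spy. All names below are invented for illustration:

#include <cassert>
#include <string>

// Hypothetical contract for fetching two-factor login options; real
// implementations might call a REST API or read local storage.
struct ITwoFaLoginOptions {
    virtual ~ITwoFaLoginOptions() = default;
    virtual std::string get(const std::string& path) = 0;
};

// The unit under test only depends on the interface (dependency injection).
class LoginService {
public:
    explicit LoginService(ITwoFaLoginOptions& options) : options_(options) {}
    std::string availableOptions() {
        return options_.get("/security/me/twoFA/options");
    }
private:
    ITwoFaLoginOptions& options_;
};

// Hand-rolled mock: records the call (white-box assertion) and returns a
// canned response (stubbed data).
struct MockOptions : ITwoFaLoginOptions {
    std::string lastPath;
    std::string get(const std::string& path) override {
        lastPath = path;
        return "{\"sms\":true}";
    }
};

int main() {
    MockOptions mock;
    LoginService service(mock);

    assert(service.availableOptions() == "{\"sms\":true}");   // black-box: output for an input
    assert(mock.lastPath == "/security/me/twoFA/options");    // white-box: how the dependency was used
    return 0;
}

The first assertion is the black-box check; the second is the white-box check. Keeping the second one is a trade-off: it catches misuse of the contract, but it couples the test to the current implementation.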

Unit testing activities marked as @ManualActivityCompletion

I have been working on creating unit tests which run local versions of a workflow. I followed this guide for the initial setup. With that setup, I have been able to successfully execute and test a workflow. The issue arises when I attempt to unit test an activity implementation that is marked as @ManualActivityCompletion. It appears that the manual completion activities return normally within unit tests (not waiting for a completion/failure call).
I'm wondering if it is even possible to unit test manual completion activities in this way. My guess is that it is not, since I have seen no mention of it and I don't see any way to create a test ManualActivityCompletionClient. In that case, I'm wondering if anyone has any suggestions on how to unit test manual completion activities in a local workflow. I have attempted to create workarounds using different threads and sync points, but it is useful to test with the actual behavior of completing/failing activities (exceptions that are thrown, etc.). It may be worth mentioning that I have been able to write successful integration tests for manual completion activities.
Any help is greatly appreciated.
To test workflow logic that invokes an activity marked with @ManualActivityCompletion, just mock the client-side interface of that activity directly. As the client executes in the asynchronous context of the workflow, you can use Promises and the WorkflowClock to implement your tests.

VS2012 - Disable parallel test runs

I've got some unit tests (C++) running in the Visual Studio 2012 test framework.
From what I can tell, the tests are running in parallel. In this case the tests are stepping on each other - I do not want to run them in parallel!
For example, I have two tests in which I have added breakpoints and they are hit in the following order:
Test1 TEST_CLASS_INITIALIZE
Test2 TEST_CLASS_INITIALIZE
Test2 TEST_METHOD
Test1 TEST_METHOD
If the init for Test1 runs first then all of its test methods should run to completion before anything related to Test2 is launched!
After doing some internet searches I am sufficiently confused. Everything I am reading says Visual Studio 2012 does not run tests concurrently by default, and you have to jump through hoops to enable it. We certainly have not enabled it in our project.
Any ideas on what could be happening? Am I missing something fundamental here?
Am I missing something fundamental here?
Yes.
You should never assume that another test case will work as expected. This means it should never be a concern whether the tests execute sequentially or in parallel.
Of course, there are test cases that expect some fundamental piece of code to work; this might be your own code or part of the framework/library you work with. In those cases, the programmer should know what data or object to expect as a result.
This is where Mock Objects come into play. Mock objects allow you to mimic a part of the code and ensure that the object provides exactly what you expect, so you don't rely on other (time-consuming) services, such as HTTP requests, file streams, etc.
When a project becomes complex, the setup takes a fair number of lines and code starts duplicating. The solution to this is Setup and TearDown methods. The naming convention differs from framework to framework: Setup might be called beforeEach or TestInitialize, and TearDown can also appear as afterEach or TestCleanup. The names for NUnit, MSTest and xUnit.net can be found on the xUnit.net CodePlex page.
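Since the question here is about C++ tests in the Visual Studio framework, a minimal sketch of per-test Setup/TearDown using CppUnitTest.h might look like this (the test class and its data member are made up for illustration):

#include "CppUnitTest.h"
#include <vector>

using namespace Microsoft::VisualStudio::CppUnitTestFramework;

TEST_CLASS(SetupTeardownSketch)
{
public:
    // Runs before every TEST_METHOD in this class.
    TEST_METHOD_INITIALIZE(SetUp)
    {
        m_data.assign(3, 0);          // pretend this is expensive shared setup
    }

    // Runs after every TEST_METHOD in this class.
    TEST_METHOD_CLEANUP(TearDown)
    {
        m_data.clear();
    }

    TEST_METHOD(EachTestStartsFromTheSameState)
    {
        Assert::IsTrue(m_data.size() == 3);
        m_data.push_back(42);         // local mutation...
    }

    TEST_METHOD(MutationsDoNotLeakBetweenTests)
    {
        Assert::IsTrue(m_data.size() == 3);   // ...is reset again by SetUp
    }

private:
    std::vector<int> m_data;
};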
A simple example application:
it should read a config file
it should verify if config file is valid
it should update user's config
The way I would go about building and testing this (a short sketch follows at the end of this answer):
have a method to read the config and a second one to verify it
have a getter/setter for the user's settings
test that the read method returns the desired result (object, string or however you've designed it)
create a mock config of the kind you expect from the read method and test whether the verify method accepts it
at this point, create multiple mock configs that exercise all the scenarios you care about, and fix the code accordingly; this is what drives up code coverage
create a mock of an accepted config, use the setter to update the user's config, then use the getter to check that it was set correctly
This is a basic principle of Test-Driven Development (TDD).
If the test suite is set up as described and all tests pass, all these parts, connected together, should work as intended. Additional tests, for example end-to-end (E2E) tests, aren't strictly needed; I use them only to ensure that the whole application flow works and to catch errors easily (e.g. an HTTP connection error).
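As promised above, here is a rough sketch of the read/verify steps from that list, again using the Visual Studio C++ test framework; ConfigReader, its key=value format and the required keys are all invented for illustration:

#include "CppUnitTest.h"
#include <map>
#include <sstream>
#include <string>

using namespace Microsoft::VisualStudio::CppUnitTestFramework;

// Hypothetical unit under test: reads "key=value" lines and verifies
// that the required keys are present.
struct ConfigReader
{
    static std::map<std::string, std::string> read(const std::string& text)
    {
        std::map<std::string, std::string> config;
        std::istringstream in(text);
        std::string line;
        while (std::getline(in, line)) {
            const size_t eq = line.find('=');
            if (eq != std::string::npos)
                config[line.substr(0, eq)] = line.substr(eq + 1);
        }
        return config;
    }

    static bool isValid(const std::map<std::string, std::string>& config)
    {
        return config.count("user") != 0 && config.count("theme") != 0;
    }
};

TEST_CLASS(ConfigReaderTests)
{
public:
    TEST_METHOD(ReadReturnsKeyValuePairs)
    {
        const auto config = ConfigReader::read("user=alice\ntheme=dark\n");
        Assert::IsTrue(config.at("user") == "alice");
    }

    TEST_METHOD(VerifyAcceptsAndRejectsMockConfigs)
    {
        // One mock config per scenario: a valid one and a truncated one.
        Assert::IsTrue(ConfigReader::isValid(ConfigReader::read("user=alice\ntheme=dark")));
        Assert::IsFalse(ConfigReader::isValid(ConfigReader::read("user=alice")));
    }
};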

Can I get log output only for failures with boost unit tests

I have some logging in my application (it happens to be log4cxx but I am flexible on that), and I have some unit tests using the boost unit test framework. When my unit tests run, I get lots of log output, from both the passing and failing tests (not just boost assertions logged, but my own application code's debug logging too). I would like to get the unit test framework to throw away logs during tests that pass, and output logs from tests that fail (I grew to appreciate this behaviour while using python/nose).
Is there some standard way of doing this with the boost unit test framework? If not, are there some start of test/end of test hooks that I could use to buffer my logs and conditionally output them to implement this behaviour myself?
There are start-of-test and end-of-test hooks that you can use for this purpose. To set up these hooks, define a subclass of boost::unit_test::test_observer, create an instance of it that persists throughout the entire test run (either a static global object or one managed by a BOOST_TEST_GLOBAL_FIXTURE), and then pass that instance to boost::unit_test::framework::register_observer.
The method to override with a start-of-test hook is test_unit_start, and the method to override with an end-of-test hook is test_unit_finish. However, these hooks fire for test suites as well as for individual test cases, which may be an issue depending on how the hooks are set up.
The test_unit_finish hook also doesn't explicitly tell you whether a given test actually passed, and there doesn't seem to be one clear and obvious way to get that information. There is a boost::unit_test::results_collector singleton, which has a results() method; if you pass it the test_unit_id of the test unit provided to test_unit_finish, you get a test_results object that has a passed() method. I can't see a way to obtain the test_unit_id that is clearly part of the public API - you can just directly access the p_id member, but that could always change in a future Boost version. Alternatively, you could manually track whether each test is passing or failing using the assertion_result, exception_caught, test_unit_aborted, and test_unit_timed_out hooks of the test_observer subclass (assertion_result indicates a failure of the current test whenever its argument is false, and every other hook indicates a failure if it is called at all).
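Here is a sketch of that observer approach, with the caveats above: the exact virtual signatures and the BOOST_TEST_GLOBAL_FIXTURE macro vary a little between Boost versions, and hooking your logger's output into the buffer is left out because it depends on the logging library (log4cxx or otherwise):

#define BOOST_TEST_MODULE LogOnlyFailures
#include <boost/test/included/unit_test.hpp>
#include <boost/test/results_collector.hpp>

#include <iostream>
#include <sstream>

// Buffers "logs" per test case and prints them only when the case failed.
struct LogOnFailureObserver : boost::unit_test::test_observer
{
    std::ostringstream buffer;   // point your logging appender/sink here

    void test_unit_start(boost::unit_test::test_unit const& tu) override
    {
        if (tu.p_type == boost::unit_test::TUT_CASE)
            buffer.str("");      // fresh buffer for every test case
    }

    void test_unit_finish(boost::unit_test::test_unit const& tu,
                          unsigned long /*elapsed*/) override
    {
        if (tu.p_type != boost::unit_test::TUT_CASE)
            return;              // the hook fires for suites as well

        // p_id is not an advertised part of the public API (see above),
        // but it is what results_collector expects.
        boost::unit_test::test_results const& r =
            boost::unit_test::results_collector_t::instance().results(tu.p_id);

        if (!r.passed())
            std::cerr << buffer.str();   // emit buffered logs for failures only
    }
};

LogOnFailureObserver g_observer;

// Register the observer for the whole run via a global fixture.
struct ObserverRegistrar
{
    ObserverRegistrar()  { boost::unit_test::framework::register_observer(g_observer); }
    ~ObserverRegistrar() { boost::unit_test::framework::deregister_observer(g_observer); }
};
BOOST_TEST_GLOBAL_FIXTURE(ObserverRegistrar);

// The tests write to the buffer directly here as a stand-in for real log output.
BOOST_AUTO_TEST_CASE(quiet_when_passing) { g_observer.buffer << "noise\n"; BOOST_CHECK(true); }
BOOST_AUTO_TEST_CASE(noisy_when_failing) { g_observer.buffer << "useful context\n"; BOOST_CHECK(false); }

With this in place, a passing test stays silent and a failing one prints whatever was buffered while it ran.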
According to the Boost.Test documentation, you can run your test executable with --log_level=error. This will produce log output only for failing test cases.
I checked that it works using a BOOST_CHECK(false) in an otherwise correctly running project with a few thousand unit tests.
Running with --log_level=all gives the result of all assertions. I verified, by piping the output to wc -l, that the number of lines in the log is exactly the same as the number of assertions in the tests (that number is also reported by --report_level=detailed). You could of course also grep the log for the strings error or failed.

Inconsistent unit tests - failing in test suite, passing separated

I have unit tests for Zend Framework controllers, extending Zend_Test_PHPUnit_ControllerTestCase.
The tests are dispatching an action, which forwards to another action, like this:
// AdminControllerTest.php
public function testAdminAction()
{
    $this->dispatch('/admin/index/index');
    // forwards to login page
    $this->assertModule('user');
    $this->assertController('profile');
    $this->assertAction('login');
    $this->assertResponseCode(401);
}
// NewsControllerTest.php
public function testIndexAction()
{
    $this->dispatch('/news/index/index');
    $this->assertModule('news');
    $this->assertController('index');
    $this->assertAction('index');
    $this->assertResponseCode(200);
}
Both tests pass when they are run separately.
When I run them in the same test suite, the second one fails.
Instead of dispatching /news/index/index, the previous request is dispatched (user module).
How can I trace this bug? It looks like I have some global state somewhere in the application, but I'm unable to debug this. How can I dump the objects between the tests in the suite? setUpBeforeClass/tearDownAfterClass are static, so they don't give much information about the object instances.
I know this is a kind of guess-what question. It's hard to provide reliable data here because it would take too much space, so feel free to ask for details.
The whole unit test setup is more or less like described in: Testing Zend Framework MVC Applications - phly, boy, phly or Testing Zend Framework Controllers « Federico Cargnelutti.
Solution:
I've determined the issue (after a little nap). The problem was not in the unit test setup, but in the tested code.
I use different ACL objects based on the module name. Which one to use was determined by a static call to an action helper, which cached the result in a private static variable to speed things up. This cache only kicked in when the tests were run as a suite. I just need more unit tests for this code :)
(I'm sorry for such a rubbish post, but I was stuck on this for a day and I hoped someone else had experienced a similar kind of Heisenbug with unit tests in general.)
You may try clearing the request and response objects before dispatching each action, like this:
$this->resetRequest()
->resetResponse()
->dispatch('/news/index/index');