How to determine the scope of a test - unit testing

How can I determine the best way to test a feature?
For example:
I have an email validation implemented as a function that returns an object containing errors. Should I test this with a unit test that exercises the function directly, or should I create an e2e test that checks the UI shows the proper validation message?

You could do both. Each of them tests different things.
In this case you should start with unit tests.
You want quick feedback from the test, and testing the function directly is the best feedback you can get. If the test fails, you can be pretty sure it is related to the function under test. With unit tests you can easily test all paths and error conditions of the function.
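A minimal sketch of such a test in mocha/chai style; the validateEmail module and the shape of its error object are hypothetical stand-ins for the questioner's function:

const { expect } = require('chai');
const { validateEmail } = require('./validateEmail'); // hypothetical module under test

describe('validateEmail', () => {
    it('returns no errors for a valid address', () => {
        const result = validateEmail('user@example.com');
        expect(result.errors).to.be.an('array').that.is.empty;
    });

    it('reports an error for an address without an @ sign', () => {
        const result = validateEmail('user.example.com');
        expect(result.errors).to.not.be.empty;
    });
});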
The e2e test checks that all the layers (which in the end call your function) work together. If it fails there could be an issue on any(!) layer. It would be more work to find the cause of the problem. That's why you do not test the details of the function from an e2e test. Why involve all the layers if you can check the function directly?
You then write more unit(!) tests on the ui side using stub data for your object. You don't call the function; you pass the stub data to the ui and check that it is handled correctly. For example, you could test the success case without errors, another case with errors, and so on.
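A sketch of what such a ui-side test could look like; renderValidationMessage is an invented helper, and the point is that the test feeds stub data in rather than calling the real validation function:

const { expect } = require('chai');
const { renderValidationMessage } = require('./emailField'); // hypothetical ui helper

describe('email field ui', () => {
    it('renders nothing when the stub result has no errors', () => {
        const stubResult = { errors: [] }; // stub data, the real function is never called
        expect(renderValidationMessage(stubResult)).to.equal('');
    });

    it('renders the message when the stub result has an error', () => {
        const stubResult = { errors: ['invalid email'] };
        expect(renderValidationMessage(stubResult)).to.contain('invalid email');
    });
});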
Having done that you know that the function works and that the ui handles the result of the function.
Now you could write e2e tests.
Whether e2e tests make sense depends heavily on your context. In my experience they are a lot of work with little return, but this may be different in your context. I fear you have to find out for yourself if it pays off :-)

Related

Verify unit tests are actually being run, not just skipped over because they're poorly written

We're using sinon and mocha for our unit tests. I've run into the problem several times where I write some tests (maybe in a promise, a callback, or a stub) and the assertions inside them never get hit, so they aren't actually testing anything.
I can't imagine being the only one who has done this, so I was wondering what people do to verify that the test they wrote is actually being run.
As an example:
// Stuff...
let myStub = sinon.stub(className, "classMethod", (result) => {
    // THIS will never be run.
    expect(result).to.be.equal(5);
});
expect(myStub.callCount).to.be.equal(0);
// Stuff...
The test won't complain that the callback wasn't run, because we have the callCount check; but in reality we never call anything on className that would cause classMethod to be invoked and its argument to be tested against 5.
Any common solutions? I couldn't think of search terms for this.
Thanks!
I've never used sinon or mocha. That said, the easiest approach to validate that a test does what you're expecting is to make sure it fails if its conditions aren't met.
In TDD, you'd follow the Red, Green, Refactor cycle which helps you to avoid this problem because you always experience the test failing and then being 'fixed' by the new code that you implement.
If I have to write tests for existing code, then I will typically write the test I feel is needed (which would pass), then change the behaviour of the code to validate that it 'breaks the test'. If it doesn't break the test, then the test isn't any good so I go back and fix the test. I then rollback the change to the code and validate that the test passes again.
Validating that the tests can fail takes a little longer than just writing them to pass, however the fail cycle helps to confirm that the tests are actually worthwhile and working as expected.
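Applied to the questioner's example, that means actually exercising the stubbed method and asserting that it was called. A sketch, assuming a recent sinon where the fake implementation is attached via callsFake; className and the trigger function are placeholders standing in for the question's code:

const sinon = require('sinon');
const { expect } = require('chai');

// Modern sinon attaches the fake via callsFake; the three-argument
// form used in the question is from older sinon versions.
const myStub = sinon.stub(className, 'classMethod').callsFake((result) => {
    expect(result).to.be.equal(5);
});

// Actually exercise the code path that calls classMethod; without
// this line the callback above is dead code.
triggerClassMethod(); // hypothetical: whatever makes className call classMethod

// Assert the stub WAS called; this fails loudly if the callback never ran.
expect(myStub.callCount).to.be.equal(1);

myStub.restore();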

Unit testing my service and mocking dependencies

I have a service which has two dependencies. One of them is the $http service, responsible for making Ajax calls to my REST API.
Inside my service I have this function:
this.getAvailableLoginOptions = function () {
    return $http.get(path.api + '/security/me/twoFA/options').then(function (resp) {
        return new TwoFaLoginOptions(resp.data);
    });
};
This function gets some options from my API and returns an object.
Now, how should I unit test this function properly? Normally I would just mock the $http service so that when the get function is called with a string parameter ending with '/security/me/twoFA/options' I would return a valid response with options.
But what if some other developer comes in and refactors this function so that it takes the options from another source, e.g. another API or the browser's local storage, while the function still works perfectly because it returns what it is supposed to?
So what is unit testing, really? Should we test every function as a black box, assuming that for a given input we expect a particular output, OR should we test it as a white box, looking at every line of code inside the function and mocking everything, so that the test is strongly dependent on all the dependencies and the way I use them?
Is it possible to write a unit test which checks that my function works properly no matter what algorithm or data source is used to implement it? Or is checking that my function uses a dependency in a particular way (in addition to testing the function's logic) actually part of unit testing?
But what if some other developer comes in and refactors this function so now it takes the options from another source
This is exactly why Dependency Injection is used: it makes it simple to manage dependencies between objects. Coherent functionality can be broken off into separate contracts (interfaces), which solves the problem of swapping out an object's dependencies at runtime or compile time.
For example, you could have an ITwoFaLoginOptions contract with multiple implementations (HTTP service, local storage, etc.), and then mock the contract's get method in your tests.
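JavaScript has no interfaces, but the same idea can be sketched with interchangeable objects that share a method shape. Everything below, names included, is illustrative rather than taken from the question's codebase:

// Two interchangeable providers of the same contract:
// anything with a get() method returning a promise of option data.
function HttpTwoFaOptions($http, path) {
    this.get = function () {
        return $http.get(path.api + '/security/me/twoFA/options')
            .then(function (resp) { return resp.data; });
    };
}

function LocalStorageTwoFaOptions($q) {
    this.get = function () {
        return $q.resolve(JSON.parse(localStorage.getItem('twoFaOptions')));
    };
}

// The consumer depends only on the contract, so a test can inject
// a trivial fake: { get: function () { return $q.resolve(fakeData); } }
function LoginService(twoFaOptionsSource) {
    this.getAvailableLoginOptions = function () {
        return twoFaOptionsSource.get().then(function (data) {
            return new TwoFaLoginOptions(data);
        });
    };
}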
Should we test every function as a black box [...] OR should we test it as a white box?
In general, unit testing is considered white box testing: you mock the dependencies in order to obtain predefined responses that let you reach the various code paths, and you also assert that these dependencies were called with the expected parameters. System (or integration) tests take a black box approach, for example calling the service like a client and asserting against the response or the database.
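For the concrete function in the question, a white box unit test could look roughly like this. It is a sketch using angular-mocks' $httpBackend; the module name myApp and service name loginService are assumptions:

describe('loginService.getAvailableLoginOptions', function () {
    var loginService, $httpBackend;

    beforeEach(module('myApp')); // hypothetical module name
    beforeEach(inject(function (_loginService_, _$httpBackend_) {
        loginService = _loginService_; // hypothetical service name
        $httpBackend = _$httpBackend_;
    }));

    afterEach(function () {
        $httpBackend.verifyNoOutstandingExpectation();
        $httpBackend.verifyNoOutstandingRequest();
    });

    it('wraps the API response in a TwoFaLoginOptions object', function () {
        $httpBackend.expectGET(/\/security\/me\/twoFA\/options$/)
            .respond(200, { methods: ['sms', 'totp'] });

        var options;
        loginService.getAvailableLoginOptions().then(function (o) { options = o; });
        $httpBackend.flush(); // resolves the pending request and the promise

        expect(options instanceof TwoFaLoginOptions).to.equal(true);
    });
});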

VS2012 - Disable parallel test runs

I've got some unit tests (c++) running in the Visual Studio 2012 test framework.
From what I can tell, the tests are running in parallel. In this case the tests are stepping on each other - I do not want to run them in parallel!
For example, I have two tests in which I have added breakpoints and they are hit in the following order:
Test1 TEST_CLASS_INITIALIZE
Test2 TEST_CLASS_INITIALIZE
Test2 TEST_METHOD
Test1 TEST_METHOD
If the init for Test1 runs first then all of its test methods should run to completion before anything related to Test2 is launched!
After doing some internet searches I am sufficiently confused. Everything I am reading says Visual Studio 2012 does not run tests concurrently by default, and you have to jump through hoops to enable it. We certainly have not enabled it in our project.
Any ideas on what could be happening? Am I missing something fundamental here?
Am I missing something fundamental here?
Yes.
You should never assume that another test case will work as expected. This means that it should never be a concern whether the tests execute sequentially or in parallel.
Of course there are test cases that expect some fundamental piece of code to work; this might be your own code or part of the framework/library you work with. When it comes to this, the programmer should know what data or object to expect as a result.
This is where Mock Objects come into play. Mock objects allow you to mimic a part of the code and assure that it provides exactly what you expect, so you don't rely on other (time-consuming) services, such as HTTP requests, file streams, etc.
When a project becomes complex, the setup takes a fair number of lines and code starts duplicating. The solution to this is Setup and TearDown methods. The naming convention differs from framework to framework: Setup might be called beforeEach or TestInitialize, and TearDown can also appear as afterEach or TestCleanup. The names for NUnit, MSTest and xUnit.net can be found on the xUnit.net codeplex page.
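A sketch of the idea in mocha terms; the in-memory database helper is invented for illustration:

const { expect } = require('chai');

describe('user repository', () => {
    let db;

    beforeEach(() => {
        db = createInMemoryDb(); // hypothetical helper: fresh state for every test
        db.insert({ id: 1, name: 'alice' });
    });

    afterEach(() => {
        db.close(); // tear down so one test cannot leak state into the next
    });

    it('finds an existing user', () => {
        expect(db.find(1).name).to.equal('alice');
    });

    it('returns undefined for a missing user', () => {
        expect(db.find(2)).to.equal(undefined);
    });
});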
A simple example application:
it should read a config file
it should verify if config file is valid
it should update user's config
The way I would go about building and testing this:
have a method to read the config and a second one to verify it
have a getter/setter for the user's settings
test that the read method returns the desired result (object, string, or however you've designed it)
create a mock config of the kind you expect from the read method and test that the verify method accepts it
at this point, you should create multiple mock configs that cover all possible scenarios, see that the code works for each of them, and fix it accordingly. This is closely related to code coverage.
create a mock object of an accepted config and use the setter to update the user's config, then use the getter to check that it was set correctly (see the sketch after this list)
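A sketch of those last steps; verifyConfig and UserSettings are invented names for illustration:

const { expect } = require('chai');

describe('config handling', () => {
    // Mock configs covering different scenarios.
    const validConfig = { theme: 'dark', fontSize: 14 };
    const invalidConfig = { fontSize: 'huge' }; // wrong type on purpose

    it('accepts a valid config', () => {
        expect(verifyConfig(validConfig)).to.equal(true);
    });

    it('rejects an invalid config', () => {
        expect(verifyConfig(invalidConfig)).to.equal(false);
    });

    it('updates the user settings from an accepted config', () => {
        const user = new UserSettings(); // hypothetical settings holder
        user.setConfig(validConfig);
        expect(user.getConfig()).to.deep.equal(validConfig);
    });
});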
This is a basic principle of Test-Driven Development (TDD).
If the test suite is set up as described and all tests pass, all these parts, connected together, should work perfectly. Additional tests, for example End-to-End (E2E) tests, aren't necessarily needed; I use them only to assure that the whole application flow works and to catch errors easily (e.g. an HTTP connection error).

Can I get log output only for failures with boost unit tests

I have some logging in my application (it happens to be log4cxx but I am flexible on that), and I have some unit tests using the boost unit test framework. When my unit tests run, I get lots of log output, from both the passing and failing tests (not just boost assertions logged, but my own application code's debug logging too). I would like to get the unit test framework to throw away logs during tests that pass, and output logs from tests that fail (I grew to appreciate this behaviour while using python/nose).
Is there some standard way of doing this with the boost unit test framework? If not, are there some start of test/end of test hooks that I could use to buffer my logs and conditionally output them to implement this behaviour myself?
There are start of test and end of test hooks that you can use for this purpose. To set up these hooks you need to define a subclass of boost::unit_test::test_observer, create an instance of the class that will persist throughout the entire test (either a static global object or a BOOST_TEST_GLOBAL_FIXTURE), and then pass the class to boost::unit_test::framework::register_observer.
The method to override with a start of test hook is test_unit_start, and the method to override with an end of test hook is test_unit_finish. However, these hooks fire for test suites as well as for individual test cases, which may be an issue depending on how the hooks are set up.
The test_unit_finish hook also doesn't explicitly tell you whether a given test actually passed, and there doesn't seem to be one clear and obvious way to get that information. There is a boost::unit_test::results_collector singleton, which has a results() method; if you pass it the test_unit_id of the test unit provided to test_unit_finish, you get a test_results object that has a passed() method. I can't really see a way to get the test_unit_id that is clearly part of the public API: you can directly access the p_id member, but that could always change in a future boost version.
You could also manually track whether each test is passing or failing using the assertion_result, exception_caught, test_unit_aborted, and test_unit_timed_out hooks of the test_observer subclass (assertion_result indicates a failure of the current test whenever its argument is false; each of the other hooks indicates a failure if it is called at all).
According to the Boost.Test documentation, you can run your test executable with --log_level=error. This way only failing test cases produce log output.
I checked that it works using a BOOST_CHECK(false) on an otherwise correctly running project with a few thousand unit tests.
Running with --log_level=all gives the result of all assertions. I checked, by piping the output to wc -l, that the number of lines in the log is exactly the number of assertions in the tests (that number is also reported by --report_level=detailed). You could of course also grep the log for the strings error or failed.

Inconsistent unit tests - failing in test suite, passing separated

I have unit tests for Zend Framework controllers extending Zend_Test_PHPUnit_ControllerTestCase.
The tests are dispatching an action, which forwards to another action, like this:
// AdminControllerTest.php
public function testAdminAction()
{
    $this->dispatch('/admin/index/index');
    // forwards to login page
    $this->assertModule('user');
    $this->assertController('profile');
    $this->assertAction('login');
    $this->assertResponseCode(401);
}

// NewsControllerTest.php
public function testIndexAction()
{
    $this->dispatch('/news/index/index');
    $this->assertModule('news');
    $this->assertController('index');
    $this->assertAction('index');
    $this->assertResponseCode(200);
}
Both of the tests pass when they are run separately. When I run them in the same test suite, the second one fails: instead of dispatching /news/index/index, the previous request is dispatched (user module).
How can I trace this bug? It looks like I have some global state somewhere in the application, but I'm unable to debug this. How can I dump the objects between the tests in the suite? setUpBeforeClass/tearDownAfterClass are static, so they don't tell me much about the object instances.
I know this is a kind of guess-what question. It's hard to provide reliable data here because it would take too much space, so feel free to ask for details.
The whole unit test setup is more or less as described in: Testing Zend Framework MVC Applications - phly, boy, phly or Testing Zend Framework Controllers « Federico Cargnelutti.
Solution:
I've found the issue (after a little nap). The problem was not in the unit test setup but in the tested code.
I use different ACL objects based on the module name. Which one to use was determined by a static call to an action helper, which cached the result in a private static variable to speed things up. The cache only kicked in when the tests were run as a suite. I just need more unit tests for this code :)
(I'm sorry for such a rubbish post, but I was stuck on this for a day and I hoped someone else had experienced a similar kind of Heisenbug with unit tests in general.)
You may try clearing the request and response objects before dispatching each action, like this:
$this->resetRequest()
->resetResponse()
->dispatch('/news/index/index');