Unit testing my service and mocking dependencies

I have a service which has two dependencies. One of them is the $http service, responsible for making Ajax calls to my REST API.
Inside my service I have this function:
this.getAvailableLoginOptions = function() {
    return $http.get(path.api + '/security/me/twoFA/options').then(function (resp) {
        return new TwoFaLoginOptions(resp.data);
    });
};
This function gets some options from my API and returns an object.
Now, how should I unit test this function properly? Normally I would just mock the $http service so that, when get is called with a URL ending in '/security/me/twoFA/options', it returns a valid response with options.
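Concretely, the mock I have in mind would look something like this sketch using angular-mocks' $httpBackend ('myApp', 'myService' and the response payload are placeholders):
describe('getAvailableLoginOptions', function() {
    var myService, $httpBackend;
    beforeEach(module('myApp'));                        // module name is a placeholder
    beforeEach(inject(function(_myService_, _$httpBackend_) {
        myService = _myService_;
        $httpBackend = _$httpBackend_;
    }));
    it('resolves with the options built from the API response', function() {
        // Any GET whose URL ends with the expected path gets a canned payload.
        $httpBackend.whenGET(/\/security\/me\/twoFA\/options$/)
            .respond(200, { sms: true, totp: false });  // payload shape is made up
        var options;
        myService.getAvailableLoginOptions().then(function(result) {
            options = result;
        });
        $httpBackend.flush();                           // resolves the pending request and the promise
        expect(options.sms).toBe(true);                 // assumes TwoFaLoginOptions keeps the fields
    });
});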
But what if some other developer comes along and refactors this function so that it takes the options from another source, e.g. a different API or the browser's local storage, while the function still works perfectly because it returns what it is supposed to return?
So what is unit testing, really? Should we test every function as a black box, assuming that for a given input we expect a particular output, OR should we test it as a white box, looking at every line of code inside the function and mocking everything, even though the test will then depend heavily on all the dependencies and on the way I use them?
Is it possible to write a unit test which checks that my function works properly no matter what algorithm or source of data is used to implement it? Or is it actually part of unit testing to check that my function uses a dependency in a particular way (in addition to testing the function's logic)?

But what if some other developer comes along and refactors this function so that it takes the options from another source
This is exactly why Dependency Injection is used: it makes it simple to manage dependencies between objects, to break coherent functionality off into separate contracts (interfaces), and thus to swap out an object's dependencies at runtime or compile time.
For example, you could have an ITwoFaLoginOptions contract with multiple implementations (HTTP service, local storage, etc.), and then mock the contract's get method in your tests.
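In AngularJS terms (where there are no interfaces), the contract can simply be a named service the consumers inject; the names below are illustrative, not from the question:
// One implementation of the contract; a local-storage variant could be registered
// under the same name 'twoFaLoginOptionsSource' in a different module.
angular.module('myApp').service('twoFaLoginOptionsSource', function($http, path) {
    this.get = function() {
        return $http.get(path.api + '/security/me/twoFA/options').then(function(resp) {
            return new TwoFaLoginOptions(resp.data);
        });
    };
});
// A consumer's test then mocks the contract, not $http:
beforeEach(module('myApp', function($provide) {
    $provide.value('twoFaLoginOptionsSource', {
        get: jasmine.createSpy('get')   // configure its return value per test
    });
}));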
Should we test every function as a black box [...] OR should we test it as a white box?
In general, unit testing is considered white-box testing (you mock the dependencies in order to obtain predefined responses that help you reach various code paths, while also asserting that those dependencies have been called with the expected parameters), while system (or integration) tests use a black-box approach (for example calling the service like a client would and asserting against the response/DB).

Related

How to determine the scope of a test

How can I determine the best way to test a feature?
For example:
I have email validation performed by a function that returns an object with errors. Should I cover this with a unit test that calls the function directly, or should I create an e2e test that checks the UI shows the proper validation message?
You could do both. Each of them tests different things.
In this case you should start with unit tests.
You want to get (quick) feedback from the test, and testing the function directly is the best feedback you can get. If the test fails you can be pretty sure it is related to the function under test. With unit tests you can easily test all paths and error conditions of the function.
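For example (validateEmail and the exact shape of its result are made up for illustration):
describe('validateEmail', function() {
    it('returns no errors for a valid address', function() {
        expect(validateEmail('user@example.com').errors).toEqual([]);
    });
    it('reports an error when the @ is missing', function() {
        expect(validateEmail('user.example.com').errors.length).toBeGreaterThan(0);
    });
});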
The e2e test checks that all the layers (which in the end call your function) work together. If it fails there could be an issue on any(!) layer. It would be more work to find the cause of the problem. That's why you do not test the details of the function from an e2e test. Why involve all the layers if you can check the function directly?
You then write more unit(!) tests on the UI side using stub data for your object. You don't call the function. Pass the stub data to the UI and check that it is handled correctly depending on the stub data. For example you could test the success case without errors and another case with errors, and so on.
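A sketch of such a UI-side unit test, where renderValidationMessage is a placeholder for your UI-layer code and the validation result is plain stub data:
describe('validation message rendering', function() {
    it('shows the error contained in the (stubbed) validation result', function() {
        var stubResult = { errors: ['invalid format'] };  // stub data, the real validator is never called
        var html = renderValidationMessage(stubResult);   // placeholder for your UI function
        expect(html).toContain('invalid format');
    });
});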
Having done that you know that the function works and that the UI handles the result of the function.
Now you could write e2e tests.
Whether e2e tests make sense depends heavily on your context. In my experience they are a lot of work with little return. This may be different in your context. I fear you have to find out for yourself whether they pay off :-)

mocking a class used by a Gradle plugin when testing

I'm writing a Gradle plugin that interacts with an external HTTP API. This interaction is handled by a single class (let's call it ApiClient). I'm writing some high-level tests that use Gradle TestKit to simulate an entire build that uses the plugin, but I obviously don't want them to actually hit the API. Instead, I'd like to mock ApiClient and check that its methods have been called with the appropriate arguments, but I'm not sure how to actually inject the mocked version into the plugin. The plugin is instantiated somewhere deep within Gradle, and gets applied to the project being executed using its void apply(Project project) method, so there doesn't appear to be a way to inject a MockApiClient object.
Perhaps one way is to manually instantiate a Project, apply() the plugin to it (at which point, I can inject the mocked object because I have control over plugin instantiation), and then programmatically execute a task on the project, but how can I do that? I've read the Gradle API documentation and haven't seen an obvious way.
A worst-case solution will be to pass in a debug flag through the plugin extension configuration, which the plugin will then use to determine whether it should use the real ApiClient or a mock (which would print some easily grep-able messages to the STDOUT). This isn't ideal, though, since it's more fuzzy than checking the arguments actually passed to the ApiClient methods.
Perhaps you could split your plugin into a few different plugins (a rough sketch follows below):
my-plugin-common - All the common stuff
my-plugin-real-services - Adds the "real" services to the model (eg RealApiClient)
my-plugin-mock-services - Adds "mock" services to the model (eg MockApiClient)
my-plugin - Applies my-plugin-real-services and my-plugin-common
my-plugin-mock - Applies my-plugin-mock-services and my-plugin-common
In the real world, people will only ever apply: 'my-plugin'
For testing you could apply: 'my-plugin-mock'
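A rough sketch of the two aggregating plugins (the class names and the way services are contributed are illustrative):
import org.gradle.api.Plugin
import org.gradle.api.Project

// my-plugin: what users apply in the real world
class MyPlugin implements Plugin<Project> {
    void apply(Project project) {
        project.pluginManager.apply(MyPluginCommon)        // shared tasks, extension, wiring
        project.pluginManager.apply(MyPluginRealServices)  // contributes the real ApiClient
    }
}

// my-plugin-mock: what the TestKit builds apply
class MyPluginMock implements Plugin<Project> {
    void apply(Project project) {
        project.pluginManager.apply(MyPluginCommon)
        project.pluginManager.apply(MyPluginMockServices)  // contributes MockApiClient instead
    }
}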

VS2012 - Disable parallel test runs

I've got some unit tests (C++) running in the Visual Studio 2012 test framework.
From what I can tell, the tests are running in parallel. In this case the tests are stepping on each other - I do not want to run them in parallel!
For example, I have two tests in which I have added breakpoints and they are hit in the following order:
Test1 TEST_CLASS_INITIALIZE
Test2 TEST_CLASS_INITIALIZE
Test2 TEST_METHOD
Test1 TEST_METHOD
If the init for Test1 runs first then all of its test methods should run to completion before anything related to Test2 is launched!
After doing some internet searches I am sufficiently confused. Everything I am reading says Visual Studio 2012 does not run tests concurrently by default, and you have to jump through hoops to enable it. We certainly have not enabled it in our project.
Any ideas on what could be happening? Am I missing something fundamental here?
Am I missing something fundamental here?
Yes.
You should never assume that another test case will work as expected. This means it should never matter whether the tests execute sequentially or in parallel.
Of course there are test cases that expect some fundamental piece of code to work; this might be your own code or part of the framework/library you work with. When it comes to this, the programmer should know what data or object to expect as a result.
This is where Mock Objects come into play. Mock objects allow you to mimic a part of the code and ensure that the object provides exactly what you expect, so you don't rely on other (time-consuming) services such as HTTP requests, file streams, etc.
When a project becomes complex, the setup takes a fair number of lines and code starts duplicating. The solution to this is Setup and TearDown methods. The naming convention differs from framework to framework: Setup might be called beforeEach or TestInitialize, and TearDown can also appear as afterEach or TestCleanup. The names used by NUnit, MSTest and xUnit.net can be found on the xUnit.net CodePlex page.
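For the Visual Studio C++ framework used here, the per-test hooks look roughly like this (the class name and fixture are illustrative):
#include "CppUnitTest.h"
using namespace Microsoft::VisualStudio::CppUnitTestFramework;

TEST_CLASS(ConfigTests)
{
public:
    TEST_METHOD_INITIALIZE(SetUp)       // runs before every TEST_METHOD in this class
    {
        // build the fixture each test needs, e.g. a fresh in-memory config
    }
    TEST_METHOD_CLEANUP(TearDown)       // runs after every TEST_METHOD in this class
    {
        // release the fixture so the tests stay independent of each other
    }
    TEST_METHOD(ReadsConfig)
    {
        Assert::IsTrue(true);           // placeholder assertion
    }
};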
A simple example application:
it should read a config file
it should verify if config file is valid
it should update user's config
The way I would go about building and testing this:
have a method to read config and second one to verify it
have a getter/setter for user's settings
test the read method to see whether it returns the desired result (an object, a string, or however you've designed it)
create a mock config like the one you expect the read method to produce, and test that the verify method accepts it (see the sketch after this list)
at this point, you should create multiple mock configs that cover all the possible scenarios, check that verification handles each of them, and fix the code accordingly. This is also what drives your code coverage.
create a mock object of an accepted config and use the setter to update the user's config, then use the getter to check whether it was set correctly
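Sticking with the C++ syntax from the question, the mock-config steps could look roughly like this inside a TEST_CLASS (Config, its fields and VerifyConfig are illustrative names, not a real API):
TEST_METHOD(VerifyAcceptsValidConfig)
{
    Config mockConfig;                  // hand-built config, no file I/O involved
    mockConfig.userName = L"alice";
    Assert::IsTrue(VerifyConfig(mockConfig));
}
TEST_METHOD(VerifyRejectsConfigWithoutUserName)
{
    Config mockConfig;                  // mandatory field left empty on purpose
    Assert::IsFalse(VerifyConfig(mockConfig));
}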
This is a basic principle of Test-Driven Development (TDD).
If the test suite is set up as described and all tests pass, all these parts, connected together, should work perfectly. Additional test, for example End-to-End (E2E) testing isn't necessarily needed, I use them only to assure that whole application flow works and to easily catch the error (e.g. http connection error).

How do I correctly instantiate a new controller when using Ninject?

I created a demo Web API application that utilizes Ninject. The application works fine as I can run it, navigate to the defined route, and get the data I'm expecting. Now I want to begin adding unit tests to test the ApiController.
How do I instantiate a new ApiController? I'm using var sut = new DogsController(); but that results in an error, "... does not contain a constructor that takes 0 arguments". It's true that I don't have a constructor that takes 0 arguments, but Ninject should be taking care of that for me, correct? How do I resolve this?
You will have wired Ninject into the Web API application, not your unit test project. As a result, Ninject will not be creating the dependencies for your controller, or even the controller itself, since you are creating it explicitly (in the Web API application, the framework creates your controller for you).
You could wire Ninject into your unit test project, but that would not be the correct thing to do. You should be creating your controller in your tests with a known state, so you should either be passing in known dependencies, or passing in some form of mock dependencies.
A DI container is not some piece of magic that transforms your code every time you write "new Something()". In your unit test you are newing up the controller by hand (which is good practice btw), but this means you will have to supply the constructor with the proper fake versions of the abstractions the constructor expects.
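A minimal sketch, assuming DogsController takes a single repository-style dependency (IDogRepository, Dog and the Get() action are assumptions, and MSTest is used as the test framework):
using System.Collections.Generic;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// A hand-rolled fake; a mocking library such as Moq would work just as well.
public class FakeDogRepository : IDogRepository
{
    public IEnumerable<Dog> GetAll()
    {
        return new[] { new Dog { Name = "Rex" } };
    }
}

[TestClass]
public class DogsControllerTests
{
    [TestMethod]
    public void Get_ReturnsDogsFromTheRepository()
    {
        var sut = new DogsController(new FakeDogRepository());  // no Ninject involved here
        var result = sut.Get();
        Assert.AreEqual(1, result.Count());
    }
}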

distributed unit testing/scenario based unit testing with boost.test

I am developing unit test cases for an application using Boost.test libraries. There are certain APIs which can directly be tested.
But, there are APIs which require interaction between test machines. So for example, execution of a certain API in machine 1 should trigger an API in test machine 2 and its response needs to be used again in machine 1 for successful completion.
How can I synchronize this ? Does Boost provide other libraries for this interaction? If there are any other approaches, kindly suggest them.
Thanks in advance for your time and help.
There are two kinds of tests you can write for this interaction:
Unit test - using mocks/fakes you can fake the calls from the first component and fake the calls from the second component back. This way you can test the internal logic of the first component - for example, make sure that a time-out exception is raised if no response is returned.
Integration/acceptance test - create both components as part of the test, configure them, and trigger the call from component one.
In both kinds of tests you might need to use events and WaitForSingleObject to make sure that the test doesn't end before the response has returned.
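A minimal sketch of that waiting pattern with Boost.Test, using std::future as a portable stand-in for an event handle plus WaitForSingleObject (the faked "machine 2" is just a promise the test fulfils itself):
#define BOOST_TEST_MODULE InteractionTests
#include <boost/test/included/unit_test.hpp>
#include <chrono>
#include <future>

BOOST_AUTO_TEST_CASE(response_arrives_before_the_timeout)
{
    std::promise<int> responseArrived;           // fulfilled by the faked "machine 2"
    std::future<int> response = responseArrived.get_future();

    // In the real test, calling the API on component 1 with a fake transport would
    // eventually fulfil the promise; here it is fulfilled directly for brevity.
    responseArrived.set_value(42);

    // Wait with a timeout so a missing response fails the test instead of hanging it.
    BOOST_REQUIRE(response.wait_for(std::chrono::seconds(5)) == std::future_status::ready);
    BOOST_CHECK_EQUAL(response.get(), 42);
}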