Does Jest automatically restore mocked modules between test files?

Does Jest automatically restore mocked modules between test files? For example, if I call jest.mock('some_module') in one file, do I need to make sure I call jest.unmock('some_module') after all the tests in that file have run?
The documentation doesn't make it clear to me whether that happens.

You don't have to reset the mocks: the tests are run in parallel, and every test file runs in its own sandbox. Even mocking JavaScript globals like Date or Math.random affects only the test file doing the mocking.
The only problem we have had so far was mocking process.env.NODE_ENV, which affected other tests running at the same time, but resetting it after the test run solved the problem.
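A minimal sketch of that kind of reset, using Jest's beforeAll/afterAll hooks (the 'production' value is just an illustrative override):

    // Snapshot the original value before any test in this file overrides it.
    const originalNodeEnv = process.env.NODE_ENV;

    beforeAll(() => {
      // Illustrative override for the tests in this file.
      process.env.NODE_ENV = 'production';
    });

    afterAll(() => {
      // Restore the original value once this file's tests are done.
      process.env.NODE_ENV = originalNodeEnv;
    });

    test('sees the overridden NODE_ENV', () => {
      expect(process.env.NODE_ENV).toBe('production');
    });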

Related

Unit testing code that depends on dart:html

I'm trying to write unit tests for a piece of code that imports dart:html, and I ended up with a test class that uses useHtmlConfiguration();
Do I really have to do this? It seems that every time I run my tests, they run in a browser, dart2js gets called, and it takes much longer than if I were testing on the Dart VM. I tried it with Dartium and it also recompiles.
In fact, the only reason my code uses dart:html is that it uses HttpRequest from that library. In the end I might just put an interface in front of the class doing the HTTP request and mock it, but I was wondering whether there is an efficient way to get a good (read: quick) feedback loop without having to call dart2js every time I want to run my tests.
If your code imports dart:html, then that code, and any tests that import it, can only be run in the browser.
I don't know why dart2js is called. You can run tests in Dartium or content_shell --dump-render-tree (headless Dartium) as Dart code, without transpiling to JS first.
You might prefer the http package, which provides an abstraction over HttpRequest that should work on both client and server (I haven't tested it myself this way yet).
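A hedged sketch of that abstraction, using the MockClient helper from the current package:http (the API shown here is today's; the package version available when this was asked may have differed):

    import 'package:http/http.dart' as http;
    import 'package:http/testing.dart';

    // The code under test depends on the http.Client abstraction instead of
    // dart:html's HttpRequest, so it can run on the Dart VM without a browser.
    Future<String> fetchGreeting(http.Client client) async {
      final response = await client.get(Uri.parse('https://example.com/greeting'));
      return response.body;
    }

    void main() async {
      // In a test, substitute a MockClient that never touches the network.
      final client = MockClient((request) async => http.Response('hello', 200));
      print(await fetchGreeting(client)); // prints "hello"
    }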

VS2012 - Disable parallel test runs

I've got some unit tests (C++) running in the Visual Studio 2012 test framework.
From what I can tell, the tests are running in parallel. In this case the tests are stepping on each other - I do not want to run them in parallel!
For example, I have two tests in which I have added breakpoints and they are hit in the following order:
Test1 TEST_CLASS_INITIALIZE
Test2 TEST_CLASS_INITIALIZE
Test2 TEST_METHOD
Test1 TEST_METHOD
If the init for Test1 runs first then all of its test methods should run to completion before anything related to Test2 is launched!
After doing some internet searches I am sufficiently confused. Everything I am reading says Visual Studio 2012 does not run tests concurrently by default, and you have to jump through hoops to enable it. We certainly have not enabled it in our project.
Any ideas on what could be happening? Am I missing something fundamental here?
Am I missing something fundamental here?
Yes.
You should never assume that another test case will work as expected. This means it should never matter whether the tests execute synchronously or asynchronously.
Of course there are test cases that expect some fundamental piece of code to work; this might be your own code or part of the framework/library you work with. When it comes to this, the programmer should know what data or object to expect as a result.
This is where mock objects come into play. Mock objects allow you to mimic a part of the code and ensure that the object provides exactly what you expect, so you don't rely on other (time-consuming) services such as HTTP requests, file streams, etc.
When a project becomes complex, the setup takes a fair number of lines and code starts to duplicate. The solution to this is Setup and TearDown methods. The naming convention differs from framework to framework: Setup might be called beforeEach or TestInitialize, and TearDown can also appear as afterEach or TestCleanup. The names used by NUnit, MSTest and xUnit.net can be found on the xUnit.net CodePlex page.
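In the C++ framework the question uses (Visual Studio's CppUnitTestFramework), the per-test hooks look roughly like this; the class and method names are only illustrative:

    #include "CppUnitTest.h"

    using namespace Microsoft::VisualStudio::CppUnitTestFramework;

    TEST_CLASS(ConfigTests)
    {
    public:
        TEST_METHOD_INITIALIZE(SetUp)
        {
            // Runs before every TEST_METHOD in this class (the "Setup" hook).
        }

        TEST_METHOD_CLEANUP(TearDown)
        {
            // Runs after every TEST_METHOD in this class (the "TearDown" hook).
        }

        TEST_METHOD(SanityCheck)
        {
            Assert::AreEqual(4, 2 + 2);
        }
    };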
A simple example application:
it should read a config file
it should verify if config file is valid
it should update user's config
The way I would go about building and testing this:
have a method to read the config and a second one to verify it
have a getter/setter for the user's settings
test whether the read method returns the desired result (an object, a string, or however you've designed it)
create a mock config of the kind you expect from the read method and test whether the verify method accepts it
at this point you should create multiple mock configs covering all possible scenarios, check that the code handles each of them, and fix it accordingly (how much of the code your tests exercise is what code coverage measures)
create a mock object of an accepted config and use the setter to update the user's config, then use the getter to check whether it was set correctly (a sketch of these steps follows at the end of this answer)
This is a basic principle of Test-Driven Development (TDD).
If the test suite is set up as described and all tests pass, all these parts, connected together, should work perfectly. Additional tests, for example End-to-End (E2E) tests, aren't necessarily needed; I use them only to make sure the whole application flow works and to catch errors easily (e.g. an HTTP connection error).
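As a concrete illustration of the mock-config steps above, here is a hedged sketch in the question's C++/MSTest setting; IConfigSource, ConfigReader and the rest are hypothetical names, not part of any framework:

    #include <string>
    #include "CppUnitTest.h"

    using namespace Microsoft::VisualStudio::CppUnitTestFramework;

    // Hypothetical seam: the reader depends on an abstract source rather than
    // reading the file system directly, so tests can substitute a mock.
    struct IConfigSource
    {
        virtual ~IConfigSource() {}
        virtual std::string Read() const = 0;
    };

    // Hypothetical code under test: the "read" and "verify" parts from the list above.
    class ConfigReader
    {
    public:
        explicit ConfigReader(const IConfigSource& source) : m_source(source) {}
        std::string Read() const { return m_source.Read(); }
        bool IsValid() const { return !m_source.Read().empty(); }
    private:
        const IConfigSource& m_source;
    };

    // Hand-rolled mock returning a canned config, so no real I/O happens.
    struct MockConfigSource : IConfigSource
    {
        explicit MockConfigSource(const std::string& content) : m_content(content) {}
        std::string Read() const { return m_content; }
        std::string m_content;
    };

    TEST_CLASS(ConfigReaderTests)
    {
    public:
        TEST_METHOD(AcceptsNonEmptyConfig)
        {
            MockConfigSource source("user=alice");
            Assert::IsTrue(ConfigReader(source).IsValid());
        }

        TEST_METHOD(RejectsEmptyConfig)
        {
            MockConfigSource source("");
            Assert::IsFalse(ConfigReader(source).IsValid());
        }
    };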

Can I get log output only for failures with boost unit tests

I have some logging in my application (it happens to be log4cxx but I am flexible on that), and I have some unit tests using the boost unit test framework. When my unit tests run, I get lots of log output, from both the passing and failing tests (not just boost assertions logged, but my own application code's debug logging too). I would like to get the unit test framework to throw away logs during tests that pass, and output logs from tests that fail (I grew to appreciate this behaviour while using python/nose).
Is there some standard way of doing this with the boost unit test framework? If not, are there some start of test/end of test hooks that I could use to buffer my logs and conditionally output them to implement this behaviour myself?
There are start-of-test and end-of-test hooks that you can use for this purpose. To set up these hooks you need to define a subclass of boost::unit_test::test_observer, create an instance of it that persists throughout the entire test run (either a static global object or one owned by a BOOST_TEST_GLOBAL_FIXTURE), and then pass that instance to boost::unit_test::framework::register_observer.
The method to override with a start-of-test hook is test_unit_start, and the method to override with an end-of-test hook is test_unit_finish. However, these hooks fire for test suites as well as for individual test cases, which may be an issue depending on how the hooks are set up.
The test_unit_finish hook also doesn't explicitly tell you whether a given test actually passed, and there doesn't seem to be one clear and obvious way to get that information. There is a boost::unit_test::results_collector singleton, which has a results() method, and if you pass it the test_unit_id of the test unit provided to test_unit_finish, you get a test_results object that has a passed() method. I can't really see a way to get the test_unit_id that is clearly part of the public API; you can just directly access the p_id member, but that could always change in a future Boost version.
You could also manually track whether each test is passing or failing using the assertion_result, exception_caught, test_unit_aborted, and test_unit_timed_out hooks from the test_observer subclass (assertion_result indicates a failure of the current test whenever its argument is false, and each of the other hooks indicates a failure if it is called at all).
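A rough sketch of that observer-based approach (the buffering calls are placeholders to fill in with your logging framework, and access to the results_collector singleton and the global-fixture macro vary a little between Boost versions):

    #include <boost/test/unit_test.hpp>
    #include <boost/test/results_collector.hpp>

    namespace utf = boost::unit_test;

    // Decides at the end of each test case whether to flush or throw away
    // whatever the application logged while that case was running.
    struct LogGate : utf::test_observer
    {
        virtual void test_unit_start( utf::test_unit const& tu )
        {
            if( tu.p_type == utf::TUT_CASE )
            {
                // start buffering application log output here
            }
        }

        virtual void test_unit_finish( utf::test_unit const& tu, unsigned long /*elapsed*/ )
        {
            if( tu.p_type != utf::TUT_CASE )
                return; // these hooks also fire for suites

            // p_id is a data member rather than documented public API, so this
            // access could break in a future Boost version.
            utf::test_results const& r = utf::results_collector.results( tu.p_id );

            if( r.passed() )
            {
                // discard the buffered log output
            }
            else
            {
                // flush the buffered log output so the failure has context
            }
        }
    };

    // Register the observer once for the whole run via a global fixture
    // (BOOST_TEST_GLOBAL_FIXTURE in recent Boost versions).
    struct LogGateFixture
    {
        LogGateFixture()  { utf::framework::register_observer( gate ); }
        ~LogGateFixture() { utf::framework::deregister_observer( gate ); }
        LogGate gate;
    };

    BOOST_GLOBAL_FIXTURE( LogGateFixture );

With the buffering filled in (for example a log4cxx appender that writes to an in-memory buffer), this gives roughly the nose-style behaviour described in the question.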
According to the Boost.Test documentation, run your test executable with --log_level=error. This will log only the failing test cases.
I checked that it works by adding a BOOST_CHECK(false) to an otherwise correctly running project with a few thousand unit tests.
Running with --log_level=all gives the result of every assertion. I checked, by piping the output to wc -l, that the number of lines in the log is exactly the same as the number of assertions in the tests (that number is also reported by --report_level=detailed). You could of course also grep the log for the strings error or failed.

log4net fails to find thread id when running under unit tests

I have log4net which writes entries like:
<conversionPattern value="[%date{yyyy-MM-dd HH:mm:ss}] [%property{machineName}] [%property{pid}] [%thread] [%-5level]: %message%newline"/>
It all works fine except when running unit tests. If I do not mock the logger and the tests use the real object, then instead of a thread id I get:
Agent: adapter run thread for test 'Log4NetLogger_TestLoggingMachineNamePrinted' with id '84e27809-f2b8-45b4-a2e1-ce305d20bc0c'
So obviously log4net gets confused when it is being used from a test runner. If I run the app normally, I get a normal thread id.
Does anyone know a workaround for this? I am using MSTest; the same behaviour happens with both the MSTest test runner and the R# test runner.
Thank you in advance for reading my question.
George
Adding a reference to log4net in the unit tests project may do the trick (see this answer).
Having said that, you probably don't need logging in this case (unless these are really integration tests), so it is best to use a stub instead of your real logger object.
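A minimal sketch of such a stub, assuming the code under test talks to a small logging interface of your own rather than to log4net directly (all names here are hypothetical, not log4net's API):

    // Hypothetical logging seam owned by the application, not log4net's API.
    public interface IAppLogger
    {
        void Info(string message);
        void Error(string message, System.Exception exception);
    }

    // No-op stub for unit tests: nothing is formatted or written, so the
    // [%thread] pattern (and the odd MSTest thread name) never comes into play.
    public sealed class StubLogger : IAppLogger
    {
        public void Info(string message) { }
        public void Error(string message, System.Exception exception) { }
    }

The tests then inject StubLogger where the production code would receive the log4net-backed implementation.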

MbUnit SetUp & Teardown thread safety?

First time poster, long time lurker. Figured it's about time I start getting actively involved. So here's a question I've spend all weekend trying to find an answer to.
I'm working on writing a bunch of acceptance tests using Selenium and MbUnit, using the DegreeOfParallelism attribute that MbUnit offers.
My Setup and Teardown methods respectively start and destroy a Selenium session, based on the assumption that each method runs, in isolation, in the context of the test that is about to be invoked.
However, I'm seeing that the Teardown method is not guaranteed to run in the correct context, with the result that the state of another test that is currently running gets changed. This manifests itself as the Selenium session of a random test being shut down. If I simply prefix and suffix my test bodies with that code (both one-liners), everything works correctly.
Is there any way to ensure that the Setup and Teardown methods do not run in the wrong context/thread?
Thanks in advance.