MbUnit SetUp & Teardown thread safety? - concurrency

First-time poster, long-time lurker. I figured it's about time I started getting actively involved, so here's a question I've spent all weekend trying to find an answer to.
I'm writing a set of acceptance tests with Selenium and MbUnit, using the DegreeOfParallelism attribute that MbUnit offers.
My SetUp method starts a new Selenium session and my TearDown method destroys it, on the assumption that each runs in the context of the test it brackets.
However, it appears that the TearDown method is not guaranteed to run in the correct context, so it ends up changing the state of another test that is still running. This manifests as the Selenium session of a random test being shut down. If I simply prefix and suffix my test bodies with the same code (both one-liners), everything works correctly.
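In code, the workaround looks roughly like this (a sketch only; FirefoxDriver and the URL stand in for however the real session and pages are set up):

using MbUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

[TestFixture]
public class AcceptanceTests
{
    [Test]
    public void HomePage_Loads()
    {
        // The "prefix" one-liner: the test owns its own session...
        IWebDriver driver = new FirefoxDriver();
        try
        {
            driver.Navigate().GoToUrl("http://localhost/");
            Assert.IsTrue(driver.Title.Contains("Home"));
        }
        finally
        {
            // ...and the "suffix" one-liner tears it down, so no other
            // thread's TearDown can touch it.
            driver.Quit();
        }
    }
}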
Is there any way to ensure that the SetUp and TearDown methods do not run in the wrong context/thread?
Thanks in advance.

Related

Ember acceptance tests fail when running all at once

I have problems with acceptance tests (Ember 0.10.0). Tests run successfully if I run them one by one (passing the test ID in the URL), but when I try to run them all at once they fail, because of some async problems I think (such as trying to click on an element which has not been rendered yet). Has anybody faced that? Here's the gist with an example of one of my tests.
P.S. I tried upgrading the versions of qunit, ember-qunit, and ember-cli-qunit, but the problem still exists.
UPD 1
Here's the screenshot: https://pp.vk.me/c627830/v627830110/e718/tAwcDMJ0J4g.jpg
UPD 2
I simplified the tests as much as I could, and now they pass about 50 percent of the time. I mean, I run all the tests and they are marked as passed; I run them all again and they fail. That blows my mind.
Common reasons for failing are:
Some resource that is used by more than one test isn't reset properly between tests. Typical shared resources are: databases, files, environment settings, locks. This is the most probable cause.
Some asynchronous work gets different timing and doesn't complete in time, and you use a timer instead of a more reliable way of waiting for completion (see the sketch below).
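For the timing case, the usual fix in an Ember acceptance test is to lean on the async test helpers rather than timers. A rough sketch (the route and selectors are made up):

test('clicking save shows a confirmation', function(assert) {
  visit('/settings');
  click('.save-button');
  // andThen() runs its callback only after routing, rendering and
  // pending run-loop work have settled, unlike a fixed setTimeout
  andThen(function() {
    assert.equal(find('.confirmation').length, 1);
  });
});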

py.test: dump stuck background threads at the end of the tests

I am using pytest to run my project's Python unit tests.
For some reason, the test runner sometimes does not exit after printing the test stats. I suspect this is because some tests open background threads and some dangling threads are not cleaned up properly in the teardown. As this does not occur every time, it is harder to pin down what exactly is happening.
I am hoping to find a way to make pytest display which threads are still alive after it prints the failed and passed tests. Some ideas I came up with:
Run a custom hook after the tests are finished - does py.test support any such hooks?
Some other way (a custom py.test wrapper script)?
Another alternative would be to just print a thread dump at the end of each teardown.
Python 3.4.
Try using the pytest-timeout plugin... after a timeout occurs, it will dump all threads and exit the process.
If you would like to implement custom code yourself, take a look at pytest hooks. You could, for instance, use the pytest_runtest_teardown hook to write custom teardown code.
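A minimal conftest.py sketch along those lines (pytest_runtest_teardown and pytest_sessionfinish are real hooks; the dump logic itself is just an illustration):

# conftest.py
import sys
import threading
import traceback

def pytest_runtest_teardown(item, nextitem):
    # After each test's teardown, report any non-daemon thread still alive.
    for thread in threading.enumerate():
        if thread is threading.main_thread() or thread.daemon:
            continue
        print("dangling thread after %s: %s" % (item.name, thread.name))

def pytest_sessionfinish(session, exitstatus):
    # After the whole run, dump a stack trace for every live thread
    # to show where a stuck thread is blocked.
    for thread_id, frame in sys._current_frames().items():
        print("thread %s:" % thread_id)
        traceback.print_stack(frame)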

VS2012 - Disable parallel test runs

I've got some unit tests (C++) running in the Visual Studio 2012 test framework.
From what I can tell, the tests are running in parallel. In this case the tests are stepping on each other - I do not want to run them in parallel!
For example, I have two tests in which I have added breakpoints and they are hit in the following order:
Test1 TEST_CLASS_INITIALIZE
Test2 TEST_CLASS_INITIALIZE
Test2 TEST_METHOD
Test1 TEST_METHOD
If the init for Test1 runs first then all of its test methods should run to completion before anything related to Test2 is launched!
After doing some internet searches I am sufficiently confused. Everything I am reading says Visual Studio 2012 does not run tests concurrently by default, and you have to jump through hoops to enable it. We certainly have not enabled it in our project.
Any ideas on what could be happening? Am I missing something fundamental here?
Am I missing something fundamental here?
Yes.
You should never assume that another test case will work as expected. This means that it should never matter whether the tests execute synchronously or asynchronously.
Of course, there are test cases that expect some fundamental piece of code to work; this might be your own code or a part of the framework/library you work with. When it comes to this, the programmer should know what data or object to expect as a result.
This is where mock objects come into play. Mock objects allow you to mimic a part of the code and assure that it provides exactly what you expect, so you don't rely on other (time-consuming) services, such as HTTP requests, file streams, etc.
You can read more here.
When a project becomes complex, the setup takes a fair number of lines and code starts duplicating. The solution to this is Setup and TearDown methods. The naming convention differs from framework to framework: Setup might be called beforeEach or TestInitialize, and TearDown can also appear as afterEach or TestCleanup. The names for NUnit, MSTest and xUnit.net can be found on the xUnit.net CodePlex page.
A simple example application:
it should read a config file
it should verify that the config file is valid
it should update the user's config
The way I would go about building and testing this:
have a method to read the config and a second one to verify it
have a getter/setter for the user's settings
test whether the read method returns the desired result (an object, a string, or however you've designed it)
create a mock config of the form you're expecting from the read method and test whether the verify method accepts it
at this point, you should create multiple mock configs that cover all the possible scenarios, see whether the code handles each of them, and fix it accordingly (this is what code coverage is about)
create a mock object of an accepted config, use the setter to update the user's config, then use the getter to check that it was set correctly
This is a basic principle of Test-Driven Development (TDD).
If the test suite is set up as described and all tests pass, all these parts, connected together, should work perfectly. Additional tests, for example End-to-End (E2E) tests, aren't strictly needed; I use them only to assure that the whole application flow works and to catch errors easily (e.g. an HTTP connection error).
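A hedged MSTest sketch of the mock-based approach above, using the Moq library; IConfigReader, ConfigService and the file name are made-up names for illustration:

using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

// Hypothetical abstraction over the real (slow) file access.
public interface IConfigReader
{
    string Read(string path);
}

// Hypothetical class under test.
public class ConfigService
{
    private readonly IConfigReader _reader;
    public ConfigService(IConfigReader reader) { _reader = reader; }

    public bool IsValid(string path)
    {
        var raw = _reader.Read(path);
        return !string.IsNullOrEmpty(raw) && raw.Contains("=");
    }
}

[TestClass]
public class ConfigServiceTests
{
    [TestMethod]
    public void IsValid_AcceptsWellFormedConfig()
    {
        // The mock stands in for the file system, so this test never
        // depends on another test's state or on execution order.
        var reader = new Mock<IConfigReader>();
        reader.Setup(r => r.Read("app.config")).Returns("key=value");

        var service = new ConfigService(reader.Object);

        Assert.IsTrue(service.IsValid("app.config"));
    }
}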

log4net fails to find thread id when running under unit tests

I have log4net which writes entries like:
<conversionPattern value="[%date{yyyy-MM-dd HH:mm:ss}] [%property{machineName}] [%property{pid}] [%thread] [%-5level]: %message%newline"/>
It all works fine except when running unit tests. If I do not mock the logger and the tests use the real object, then instead of a thread id I get:
Agent: adapter run thread for test 'Log4NetLogger_TestLoggingMachineNamePrinted' with id '84e27809-f2b8-45b4-a2e1-ce305d20bc0c'
So obviously log4net gets confused when it is used from a test runner. If I run the app normally, I get a normal thread id.
Does anyone know a workaround for this? I am using MSTest; the same behaviour happens with both the MSTest test runner and the R# test runner.
Thank you in advance for reading my question.
George
Adding a reference to log4net in the unit tests project may do the trick (see this answer).
Having said that, you probably don't need logging in this case (unless these are really integration tests), so it is best to use a stub instead of your real logger object.
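For example, a minimal stub, assuming the code under test logs through some ILogger abstraction of your own rather than calling log4net directly (the interface here is made up):

using System.Collections.Generic;

// Hypothetical logging abstraction used by the application code.
public interface ILogger
{
    void Info(string message);
}

// Records messages in memory so tests can assert on them without
// ever touching log4net or its pattern layout.
public class StubLogger : ILogger
{
    public readonly List<string> Messages = new List<string>();

    public void Info(string message)
    {
        Messages.Add(message);
    }
}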

Inconsistent unit tests - failing in a test suite, passing in isolation

I have unit tests for Zend Framework controllers extending Zend_Test_PHPUnit_ControllerTestCase.
The tests are dispatching an action, which forwards to another action, like this:
// AdminControllerTest.php
public function testAdminAction()
{
    $this->dispatch('/admin/index/index');
    // forwards to login page
    $this->assertModule('user');
    $this->assertController('profile');
    $this->assertAction('login');
    $this->assertResponseCode(401);
}

// NewsControllerTest.php
public function testIndexAction()
{
    $this->dispatch('/news/index/index');
    $this->assertModule('news');
    $this->assertController('index');
    $this->assertAction('index');
    $this->assertResponseCode(200);
}
Both of the tests pass when they are run separately.
When I run them in the same test suite, the second one fails.
Instead of dispatching /news/index/index, the previous request is dispatched (user module).
How can I trace this bug? It looks like I have some global state somewhere in the application, but I'm unable to debug this. How can I dump the objects between the tests in the suite? setUpBeforeClass/tearDownAfterClass are static, so they don't give much data about the object instances.
I know this is a kind of guess-what question. It's hard to provide reliable data here, because it would take up too much space, so feel free to ask for details.
The whole unit test setup is more or less as described in Testing Zend Framework MVC Applications - phly, boy, phly or Testing Zend Framework Controllers « Federico Cargnelutti.
Solution:
I've determined the issue (after a little nap). The problem was not in the unit test setup, but in the tested code.
I use different ACL objects based on the module name. Which one to use was determined by a static call to an action helper, which cached the result in a private static variable to speed things up. This cache only came into play when the tests ran as a suite. I just need more unit tests for this code :)
(I'm sorry for such a rubbish post, but I was stuck on this for a day and I hoped someone else had experienced this kind of Heisenbug with unit tests in general.)
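To make the pitfall concrete, here is a reduced sketch (Acl_Helper and its contents are made-up stand-ins for the project-specific helper):

class Acl_Helper
{
    private static $acl = null;

    public static function getAcl($moduleName)
    {
        // The first call wins: within one test suite process, every
        // later dispatch silently receives the ACL that was built for
        // the first test's module.
        if (self::$acl === null) {
            self::$acl = self::buildAcl($moduleName);
        }
        return self::$acl;
    }

    private static function buildAcl($moduleName)
    {
        $acl = new Zend_Acl();
        // ... add module-specific roles and rules here ...
        return $acl;
    }
}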
You may try clearing the request and response objects before dispatching each action, like this:
$this->resetRequest()
     ->resetResponse()
     ->dispatch('/news/index/index');