py.test: dump stuck background threads at the end of the tests

I am using pytest to run my project's Python unit tests.
For some reason, sometimes the test runner does not exit after printing the test stats. I suspect this is because some tests open background threads and some dangling threads are not cleaned up properly in the tear down. As this does not occur every time, it is harder to pin down exactly what is happening.
I am hoping to find a way to make pytest display which threads are still alive after it prints the failed and passed tests. Some ideas I came up with:
Run a custom hook after the tests are finished - does py.test support any such hooks?
Some other way (a custom py.test wrapping script)
Another alternative would be to simply print a thread dump at the end of each tear down.
Python 3.4.

Try using the pytest-timeout plugin: after a timeout occurs, it will dump all threads and exit the process.
If you would like to implement custom code yourself, take a look at pytest hooks. I guess you could use the pytest_runtest_teardown hook to write custom tear-down code.
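To dump threads once at the very end of the run (rather than after every test), a session-level hook works too. A minimal sketch in a conftest.py, assuming only the standard library; the hook name pytest_sessionfinish is a real pytest hook, everything else here is illustrative:

```python
# conftest.py -- a sketch: after the whole session finishes, report any
# thread (other than the main thread) that is still alive, then print
# every thread's current stack to stderr for debugging.
import sys
import threading
import traceback

def pytest_sessionfinish(session, exitstatus):
    main = threading.main_thread()
    leftover = [t for t in threading.enumerate() if t is not main]
    if leftover:
        print("\n*** %d thread(s) still alive after the test run ***" % len(leftover))
        for t in leftover:
            print("  %s (daemon=%s)" % (t.name, t.daemon))
        # sys._current_frames() maps thread id -> current stack frame.
        for thread_id, frame in sys._current_frames().items():
            print("\n--- stack of thread %s ---" % thread_id, file=sys.stderr)
            traceback.print_stack(frame)
```

threading.main_thread() requires Python 3.4+, which matches the version in the question.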

Related

Ember acceptance tests fail when running all at once

I have problems with acceptance tests (Ember 0.10.0). The thing is, the tests run successfully if I run them one by one (passing the test ID in the URL), but when I try to run them all at once, they fail, I think because of some async problems (such as trying to click on an element which has not been rendered yet). Has anybody faced this? Here's the gist with an example of one of my tests.
P.S. I tried to upgrade the versions of qunit, ember-qunit and ember-cli-qunit, but the problem still exists.
UPD 1
Here's the screenshot: https://pp.vk.me/c627830/v627830110/e718/tAwcDMJ0J4g.jpg
UPD 2
I simplified the tests as much as I could, and now they pass about 50 percent of the time. I mean, I run all the tests and they are marked as passing; I run them all again and they fail. That blows my mind.
Common reasons for failing are:
Some resource that is used by more than one test isn't reset properly between tests. Typical shared resources are: databases, files, environment settings, locks. This is the most probable cause.
Some asynchronous work gets different timing and doesn't complete in time, and you use a timer instead of a more reliable way to wait for completion.

Execute custom method when the test execution is halted

We are in a situation where the database used as our test environment db must be kept clean. This means that every test has a cleanup method which runs after each execution and deletes from the db all the data that was needed for the test.
We use SpecFlow, and keeping the db clean is achievable with this tool as long as the test execution is not halted. But while developing the test cases it happens that the test execution is halted, so the generated data in the db is not cleaned up.
The question came up: what happens when I press "Halt execution" in VS 2013? How does VS stop the execution? What method will be called? Is it possible to customize it?
SpecFlow uses the MSTest framework, and there is no option to change it.
I don't know how practical this is going to be for you, but as I see it you have a couple of options:
Run the cleanup code at the start and end of the test
Create a new database for every test
The first is the simplest and will ensure that when you stop execution in VS it won't impact the next test run (of the same test) as any remaining data will be cleaned up when the test runs.
The second is more complicated to set up (and slower when it runs) but means that you can run your tests in parallel (so is good if you use tools like NCrunch), and they won't interfere with each other.
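The first option can be sketched in a few lines. The snippet below is in Python rather than MSTest/SpecFlow (the idea carries over unchanged), and FakeDb and its method names are hypothetical stand-ins for the real database layer:

```python
# Sketch of option 1: clean the db both BEFORE and AFTER each test.
# If a run is halted mid-test, the orphaned data is removed by the
# pre-clean of the next run instead of lingering forever.
class FakeDb:
    """Hypothetical stand-in for the real test database."""
    def __init__(self):
        self.rows = []

    def delete_test_data(self):
        # Assume test data is identifiable, e.g. by a naming convention.
        self.rows = [r for r in self.rows if not r.startswith("test-")]

db = FakeDb()

def run_test(body):
    db.delete_test_data()       # pre-clean: removes leftovers from a halted run
    try:
        body()
    finally:
        db.delete_test_data()   # post-clean: the normal teardown path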
What I have done in the past is make the DB layer switchable, so you can run the tests against in-memory data most of the time, and then switch to the DB once in a while to check that the actual reading and writing isn't broken.
This isn't too onerous if you use EF6 and can switch the IDbSet<T> for some other implementation backed by an in-memory IQueryable<> implementation.

Is there a way to access all of fs.watch listeners for a process?

I'm trying to implement a unit testing platform (an automated unit test runner) in a way that tests can be debugged, and this involves clearing as many resources as possible between test runs, for example require.cache.
The problem I've been running into is that FSWatcher instances, if any are created by the unit tests or their associated code, are duplicated on each test run, creating an obvious memory leak and printing big red warnings in the console. Is there a way to locate them from within the process so I can close them?
http://nodemanual.org/latest/nodejs_ref_guide/fs.FSWatcher.html
You can call close() on an FSWatcher.

tools for running test binaries

I'm looking for a tool that can run many instances of a unit test (a normal Unix binary) concurrently. I also need the tool to gather any core dumps and stop on failure. The ability to allow some failures would be a bonus.
The idea is to stress test a multi-threaded application with a large number of test processes running concurrently. A single unit test crashes very seldom, so I want to run many of them at the same time to maximize my chances of catching the bug.
Extra credit if the tool can be daemonized to constantly run a set of binaries, with the ability to control it from outside.
UPDATE:
I ended up implementing a test driver with Python (runs multiple tests concurrently, restarting a test if it completes successfully). The test driver can be signaled to stop by creating a stamp file. This test driver is in turn invoked by a buildbot builder and stopped when a new revision is published. This approach seems to work reasonably well.
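The core of such a driver fits in a few dozen lines. A minimal sketch under stated assumptions: the function name, instance count, and the stamp-file convention are illustrative, and core-dump collection is left out:

```python
# Sketch of a stress-test driver: run N copies of a test command
# concurrently, restart each copy when it exits successfully, and stop
# once a stamp file appears or any instance fails.
import os
import subprocess
import sys
import time

def stress(cmd, instances=8, stamp="stop.stamp"):
    procs = [subprocess.Popen(cmd) for _ in range(instances)]
    while not os.path.exists(stamp):
        for i, p in enumerate(procs):
            rc = p.poll()
            if rc is None:
                continue                          # still running
            if rc != 0:                           # non-zero (or negative on a
                for q in procs:                   # POSIX signal/crash): abort
                    q.terminate()
                sys.exit("an instance failed with exit code %s" % rc)
            procs[i] = subprocess.Popen(cmd)      # success: restart it
        time.sleep(0.1)
    for p in procs:                               # stamp file seen: wind down
        p.terminate()
```

Signaling the driver to stop is then just `touch stop.stamp`, which is easy to trigger from a buildbot step.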
Valgrind, maybe?
http://valgrind.org/

MbUnit SetUp & Teardown thread safety?

First time poster, long time lurker. Figured it's about time I start getting actively involved. So here's a question I've spend all weekend trying to find an answer to.
I'm working on writing a bunch of acceptance tests using Selenium and MbUnit, using the DegreeOfParallelism attribute that MbUnit offers.
My Setup and Teardown methods respectively start a new Selenium session and destroy it, based on the assumption that each method runs in the context of the test that is about to be invoked.
However, I'm seeing that the Teardown method is not guaranteed to run in the correct context, resulting in the state of another test that is currently running being changed. This manifests itself as the Selenium session of a random test getting shut down. If I simply prefix and suffix my test bodies with the setup/teardown code (both one-liners), everything works correctly.
Is there any way to ensure that the Setup and Teardown methods do not run in the incorrect context/thread?
Thanks in advance.