I have continuous integration on drone.io for my Dart projects. Normally there aren't any issues with this beyond actual bugs in my code, but on my latest run all the tests pass and the test suite reports that it completed successfully, yet the drone.io test runner never exits; it just keeps running until it times out and reports the build as failed. Has anyone else seen anything similar to this, or know how to fix it? Here is the build; if you kick off a new build from the big-refactor-and-enhancement branch, that is where it shows this odd behaviour.
After a quick look at your code, I would bet that the server launched under the covers is never shut down. You should add a close() method to it and call it in _tearDown().
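Something along these lines: a minimal sketch with made-up names, shown with the current test package (the idea is the same with the older unittest library and a custom _tearDown()). The point is that an HTTP server whose listen socket is never closed keeps the Dart VM alive, so the process never exits even though every test has passed.

```dart
import 'dart:io';

import 'package:test/test.dart';

// Hypothetical stand-in for the server your tests start behind the scenes.
class TestServer {
  HttpServer? _server;

  int get port => _server!.port;

  Future<void> start() async {
    _server = await HttpServer.bind(InternetAddress.loopbackIPv4, 0);
    _server!.listen((request) => request.response.close());
  }

  // Without this, the open listen socket keeps the VM alive and the test
  // process never exits, even though every test has passed.
  Future<void> close() async {
    await _server?.close(force: true);
    _server = null;
  }
}

void main() {
  final server = TestServer();

  setUp(server.start);
  tearDown(server.close); // release the socket so the runner can exit

  test('server is listening', () {
    expect(server.port, greaterThan(0));
  });
}
```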
I have a large unit test suite written in C++ using Google Test.
I have recently made a change to the codebase which may affect different parts of the system, so various tests will probably now fail or even crash. I would like to run the entire suite once (which unfortunately takes a long time to complete), collect the list of failed tests, and fix them one by one.
However, whenever a test crashes (e.g. with a segmentation fault), as opposed to simply failing logically, GTest seems to stop and execute no more tests.
I can then fix the crashed test, but rerunning the entire suite will again take a long time.
Is there a way to tell GTest to resume executing the rest of the tests after a test has crashed?
Or, alternatively, is there at least a way to launch GTest starting from a particular test (assuming the order of the tests is always the same)?
If you need to test whether an assertion is triggered when an API is used incorrectly, gtest provides something called a death test.
If your test crashes with a segmentation fault, you should fix that ASAP! You can temporarily disable a test by adding the DISABLED_ prefix to its name, or by adding GTEST_SKIP() to the test body. Alternatively, there is the command-line argument --gtest_filter=<colon-separated positive patterns>[:-<colon-separated negative patterns>]. There is no way to recover from a segmentation fault, so the test suite cannot continue.
If you use gcc or clang (MSVC has this feature experimentally), you can enable AddressSanitizer to quickly detect memory issues in the code under test; that will help you fix them faster.
There are also nice IDE plugins for gtest that should help you track which tests were run, which failed, and which crashed.
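A minimal sketch of those options (the API under test and the test names are made up; GTEST_SKIP() needs googletest 1.10 or later):

```cpp
#include <cstdio>
#include <cstdlib>

#include <gtest/gtest.h>

// Hypothetical API under test: aborts on invalid input.
void ParseConfig(const char* text) {
  if (text == nullptr) {
    std::fprintf(stderr, "ParseConfig: null input\n");
    std::abort();
  }
}

// Death test: the statement runs in a child process, so the expected
// crash is verified without taking down the rest of the suite.
TEST(ParseConfigDeathTest, RejectsNullInput) {
  EXPECT_DEATH(ParseConfig(nullptr), "null input");
}

// Temporarily disable a known-crashing test by renaming it; gtest still
// compiles it but reports it as DISABLED instead of running it.
TEST(ParseConfigTest, DISABLED_HandlesHugeInput) {
  // original (crashing) body stays unchanged
}

// Or skip it at runtime with a note.
TEST(ParseConfigTest, HandlesUnicodeInput) {
  GTEST_SKIP() << "crashes with SIGSEGV, tracked separately";
  // nothing below this line executes
}

// A crashing test can also be excluded without touching the code, e.g.:
//   ./my_tests --gtest_filter=-ParseConfigTest.HandlesUnicodeInput

int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
}
```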
Google Test is not able to do what you need out of the box. I'd suggest you write a simple test runner that:
Runs the test executable with --gtest_list_tests to get a list of all tests.
Loops through all the tests, printing the test number and running the test executable with --gtest_filter=FooTest.Bar so that each loop iteration invokes exactly one test.
Skips the required number of iterations, so that after the test with number N is fixed you can resume the run from number N.
You only need to write such a runner script once, and it shouldn't be hard.
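For what it's worth, here is a minimal sketch of such a runner as a Python script (the test binary path and the resume index are placeholders):

```python
#!/usr/bin/env python3
"""Sketch of a 'run each test in its own process, resume from N' runner."""
import subprocess
import sys

TEST_EXE = sys.argv[1] if len(sys.argv) > 1 else "./my_tests"  # placeholder
START_FROM = int(sys.argv[2]) if len(sys.argv) > 2 else 0      # resume index


def list_tests(exe):
    """Parse `exe --gtest_list_tests` into fully qualified 'Suite.Test' names."""
    out = subprocess.run([exe, "--gtest_list_tests"],
                         capture_output=True, text=True, check=True).stdout
    tests, suite = [], ""
    for line in out.splitlines():
        if not line.strip() or line.startswith("Running main()"):
            continue
        name = line.split("#")[0].rstrip()  # drop trailing gtest comments
        if not line.startswith(" "):        # suite line, e.g. "FooTest."
            suite = name.strip()
        else:                               # test line, e.g. "  Bar"
            tests.append(suite + name.strip())
    return tests


def main():
    tests = list_tests(TEST_EXE)
    for i, test in enumerate(tests):
        if i < START_FROM:
            continue
        print(f"[{i}/{len(tests)}] {test}", flush=True)
        # Each test gets its own process, so a segfault in one test
        # cannot stop the remaining tests from running.
        result = subprocess.run([TEST_EXE, f"--gtest_filter={test}"])
        if result.returncode != 0:
            print(f"  -> FAILED or CRASHED (exit code {result.returncode})")


if __name__ == "__main__":
    main()
```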
Ran into a really odd issue yesterday while doing my Jasmine tests (which usually run headless, but can be debugged in Chrome). A test that normally passes seems to fail when I reach a specific total test count (678), but succeeds again the moment I have more than that. I reduced the number of tests so that I was only running that one test suite and could reproduce the same problem at 177 tests, which I did by taking a very simple non-failing test and duplicating it a bunch more times.
I'm not seeing any other issues (e.g. a page-reload error), and even stranger is that the test that supposedly fails doesn't match the line number Jasmine spits out as the offending line (which actually belongs to the following test). When I manually step through, it's obvious that the spy IS called, and I do believe I'm handling the async stuff correctly, as the code involves promises.
I know that isn't super specific, but I'm curious whether anyone has run into this before and has ideas for how to proceed in debugging it.
I came to the same conclusion that Sulthan did in his comment above, but it turned out to be a problem with where I was calling my expects relative to where I was calling done() in some tests that involved async calls/promises. It seems the number of tests being run created timing differences that finally exposed these problems.
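The pattern looked roughly like this (a made-up sketch, not the actual suite): when done() fires before the promise settles, the expectation runs after Jasmine has moved on, and the failure gets attributed to whatever spec is running at that moment, which also explains the mismatched line number.

```javascript
// Hypothetical promise-returning call standing in for the real service.
function loadWidget() {
  return new Promise(function (resolve) {
    setTimeout(function () { resolve('widget'); }, 10);
  });
}

describe('widget loader', function () {
  // Broken pattern: done() is called before the promise settles, so the
  // expectation runs after the spec has already been reported as finished.
  it('loads the widget (flaky)', function (done) {
    var onLoad = jasmine.createSpy('onLoad');
    loadWidget().then(onLoad).then(function () {
      expect(onLoad).toHaveBeenCalled();
    });
    done(); // too early
  });

  // Safer pattern: signal completion only after the expectations have run.
  it('loads the widget (stable)', function (done) {
    var onLoad = jasmine.createSpy('onLoad');
    loadWidget()
      .then(onLoad)
      .then(function () {
        expect(onLoad).toHaveBeenCalled();
        done();
      })
      .catch(done.fail);
  });
});
```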
I have a test project I'm using to practice deploying to Azure, located here:
https://github.com/EdLichtman/HelloAzureCI
When I use ReSharper to run the NUnit tests, all of them pass except for the environment-specific test case, as should be expected.
However, when I run deploy.cmd on my local computer, all 4 tests fail with "Object reference not set to an instance of an object."
One of my unit tests is just Assert.AreEqual(1, 1), and even that throws a NullReferenceException, which leads me to think that Assert itself is somehow not an instance of an object.
Why is this happening? Can anyone else reproduce it?
There are a few odd things here, but the main one is that you are trying to run NUnit 3.7.1 tests using an NUnit V2 console runner (2.6.2). This is never going to work. My suggestion is that whenever you have trouble with running NUnit in a remote environment or using a third-party runner, you fall back to using the console runner locally. Even if that's not your preferred mode of working, you will usually be able to figure out what's wrong more easily if you eliminate as many middlemen as possible.
If you actually want to run under the vstest console, then you need to install the nunit3-vs-adapter NuGet package and point to its location on your command line. Note that the adapter, even though it is ours, constitutes another middleman, so debugging with nunit3-console is still a good choice.
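For example, something along these lines (package versions and paths are illustrative, not taken from the repo):

```bat
rem Run the tests with the NUnit 3 console runner directly
nuget install NUnit.ConsoleRunner -OutputDirectory packages
packages\NUnit.ConsoleRunner.3.7.0\tools\nunit3-console.exe path\to\HelloAzureCI.Tests.dll

rem Or, if you must use vstest.console, point it at the NUnit 3 adapter
vstest.console.exe path\to\HelloAzureCI.Tests.dll ^
    /TestAdapterPath:"<folder that contains NUnit3.TestAdapter.dll>"
```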
I have a huge number of test cases running during the TFS build process.
Is there a way to rerun, on my local machine, just the test cases that fail on TFS? Maybe via configuration or an extension?
My problem is that it takes quite a while to run all the tests again, so I would like to run just those that fail.
The second problem is that the TFS build sometimes fails tests which work locally, so I'd like to figure out which ones I really broke.
I've never seen anything like this. I do think it would be possible to write a VS extension that pulls the test results from TFS, creates a test list file with all the failed tests, and then loads that in VS to rerun only the failed tests.
I wrote a simple extension and it wasn't that bad - http://dotnetcatch.com/2014/09/08/parameterizationpreview-visual-studio-extension/
I've tried the exact same thing. However, rerunning the tests locally didn't change anything: the tests still passed locally (even after >1000 tries) but failed sporadically on TFS. (For that part I just put the tests in a for loop.)
Check your log on TFS, or post it here. The log should tell you what failed in the tests, and perhaps the failing tests need to be reconsidered or refactored. Just because they pass locally doesn't mean they are right, if that makes sense. So my suggestion would be: check the log, rewrite the tests, and try again.
I have problems with acceptance tests (ember 0.10.0). The thing is, the tests run successfully if I run them one by one (passing the test ID in the URL), but when I try to run them all at once, they fail, I think because of some async problems (such as trying to click an element which has not been rendered yet). Has anybody faced this? Here's the gist with an example of one of my tests.
P.S. I tried upgrading the versions of qunit, ember-qunit, and ember-cli-qunit, but the problem still exists.
UPD 1
Here's the screenshot: https://pp.vk.me/c627830/v627830110/e718/tAwcDMJ0J4g.jpg
UPD 2
I simplified the tests as much as I could and now they pass about 50 percent of the time. I mean, I run all the tests and they are marked as passed, then I run them all again and they fail. That blows my mind.
Common reasons for failing are:
Some resource that is used by more than one test isn't reset properly between tests. Typical shared resources are: databases, files, environment settings, locks. This is the most probable cause.
Some asynchronous work gets different timing and doesn't complete in time, and you are using a timer instead of a more reliable way to wait for completion (see the sketch below).
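For the Ember case specifically, here is a minimal sketch of that second point, using the classic global acceptance-test helpers from the ember-cli 0.10 era (the route and selectors are made up):

```javascript
// Flaky: waits a fixed amount of time and hopes rendering has finished.
// Passes when the test runs alone, fails once the full suite slows things down.
test('shows the items (flaky)', function (assert) {
  var done = assert.async();
  visit('/items');
  setTimeout(function () {
    assert.ok(find('.item').length, 'items rendered');
    done();
  }, 100);
});

// Reliable: andThen() resolves only after routing, rendering and pending
// run-loop work have settled, however slow the test machine happens to be.
test('shows the items (stable)', function (assert) {
  visit('/items');
  andThen(function () {
    assert.ok(find('.item').length, 'items rendered');
  });
});
```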