I have a huge number of test cases running during the TFS build process.
Is there a way to rerun on my local machine all the test cases that fail on TFS? Maybe via configuration or an extension?
My problem is that it takes quite a while to run all the tests again, so I would like to run just those that fail.
The second problem is that the TFS build sometimes fails tests that work locally, so I'd like to figure out which ones I really broke.
I've never seen anything like this. I do think it would be possible to write a VS extension to pull the test results from TFS and create a test list file with all the failed tests and then load that in VS to rerun only the failed tests.
I wrote a simple extension and it wasn't that bad - http://dotnetcatch.com/2014/09/08/parameterizationpreview-visual-studio-extension/
I've tried the exact same thing. However, rerunning the tests locally didn't change anything: the tests still passed locally (even after more than 1000 tries) but failed sporadically on TFS. (For that part I just put the tests in a for-loop.)
Check your log on TFS, or post it here. The log should tell you what failed in the tests, and maybe the failing tests need to be reconsidered or refactored. Just because they pass locally doesn't mean they are right, if that makes sense. So check the log, rewrite the tests and try again, would be my suggestion.
Related
I have a large unit test suite written in C++ using Google Test.
I have recently made a change to the codebase which may affect different parts of the system, so various tests should now probably fail or even crash. I would like to run the entire suite once (which unfortunately takes a long time to complete), summarize the list of failed tests, and fix them one by one.
However, whenever a test crashes (e.g. with a segmentation fault), as opposed to simply failing an assertion, GTest seems to stop and executes no more tests.
I can then fix the crashed test, but rerunning the entire suite again takes a long time.
Is there a way to tell GTest to resume executing the rest of the tests after a test has crashed?
Or, alternatively, at least a way to launch GTest starting from a particular test (assuming the order of the tests is always the same)?
If you need to test that an assertion is triggered when the API is used incorrectly, then gtest provides something called a DEATH TEST.
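A minimal sketch of a death test, assuming a hypothetical ProcessBuffer() function that is documented to abort on a null argument:

```cpp
#include <cstdlib>
#include <gtest/gtest.h>

// Hypothetical function under test: documented to abort() on a null argument.
void ProcessBuffer(const char* buf) {
  if (buf == nullptr) std::abort();
}

// A death test runs the statement in a child process, so the expected crash
// is verified without taking down the rest of the suite.
TEST(ProcessBufferDeathTest, AbortsOnNull) {
  EXPECT_DEATH(ProcessBuffer(nullptr), "");
}
```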
If your test crashed because of a segmentation fault, you should fix this ASAP! You can disable a test temporarily by adding the DISABLED_ prefix to its name, or by adding GTEST_SKIP() in the test body. Alternatively, there is also the command line argument --gtest_filter=<colon separated positive patterns>[:-<colon separated negative patterns>]. There is no way to recover from a segmentation fault, so the test suite can't continue.
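A small illustration of those options (the suite and test names are made up):

```cpp
#include <gtest/gtest.h>

// Renaming a test with the DISABLED_ prefix compiles it but skips it at runtime.
TEST(ParserTest, DISABLED_HandlesHugeInput) {
  // ... body that currently segfaults ...
}

// GTEST_SKIP() bails out at runtime and reports the test as skipped.
TEST(ParserTest, HandlesUnicode) {
  GTEST_SKIP() << "skipped until the crash in the tokenizer is fixed";
  // ... the rest of the test is not executed ...
}
```

A negative --gtest_filter pattern excludes a test without touching the code, e.g. ./my_tests --gtest_filter=-ParserTest.HandlesHugeInput.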
If you use gcc or clang (MSVC has this feature experimentally), you can enable AddressSanitizer to quickly detect memory issues in the code under test. You will be able to fix those issues faster.
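For example (the compile line is just one common setup and may differ per toolchain), an off-by-one read like the one below is reported as a heap-buffer-overflow with a full stack trace when built with -fsanitize=address:

```cpp
// Build roughly like: g++ -g -fsanitize=address buggy_test.cc -lgtest -lgtest_main -pthread
#include <gtest/gtest.h>
#include <vector>

TEST(BufferTest, OutOfBoundsRead) {
  std::vector<int> v(4, 0);
  // Off-by-one read: without ASan this may "pass" or segfault at random;
  // with ASan it aborts immediately with a heap-buffer-overflow report.
  int x = v.data()[4];
  EXPECT_EQ(x, 0);
}
```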
There are also useful IDE plugins for gtest; these should help you track which tests were run, which failed and which crashed.
Google Test is not able to do what you need by itself. I'd suggest you write a simple test runner that:
Runs the test executable with --gtest_list_tests to get a list of all tests.
Loops through all the tests, printing the test number and running the test executable with --gtest_filter=FooTest.Bar so that only one test is invoked in each loop iteration.
Skips the required number of iterations and resumes from test number N once the test with number N has been fixed.
You only need to write such a runner script once, and it shouldn't be hard; a rough sketch is given below.
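A rough, untested sketch of such a runner (in C++ using POSIX popen; the binary name ./my_tests and the output parsing are assumptions, not part of the original answer):

```cpp
#include <cstdio>
#include <cstdlib>
#include <iostream>
#include <string>
#include <vector>

int main(int argc, char** argv) {
  const std::string exe = "./my_tests";                 // assumed test binary
  const int start = argc > 1 ? std::atoi(argv[1]) : 0;  // resume from test N

  // 1. Collect "Suite.Test" names from --gtest_list_tests output.
  std::vector<std::string> tests;
  std::string suite;
  FILE* pipe = popen((exe + " --gtest_list_tests").c_str(), "r");
  if (!pipe) return 1;
  char line[512];
  while (fgets(line, sizeof(line), pipe)) {
    std::string s(line);
    while (!s.empty() && (s.back() == '\n' || s.back() == '\r')) s.pop_back();
    if (s.empty()) continue;
    if (s[0] != ' ') suite = s;                     // "SuiteName." line
    else tests.push_back(suite + s.substr(2));      // "  TestName" line
  }
  pclose(pipe);

  // 2. Run each test in its own process; a segfault only kills that child,
  //    so the loop keeps going and the failing test number stays visible.
  for (std::size_t i = start; i < tests.size(); ++i) {
    std::cout << "[" << i << "] " << tests[i] << std::endl;
    int rc = std::system((exe + " --gtest_filter=" + tests[i]).c_str());
    if (rc != 0) std::cout << "    FAILED or CRASHED (status " << rc << ")\n";
  }
  return 0;
}
```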
I have problems with acceptance tests (ember 0.10.0). The thing is, the tests run successfully if I run them one by one (passing the test ID in the URL), but when I try to run them all at once, they fail because of some async problems, I think (such as trying to click on an element which has not been rendered yet). Has anybody faced that? Here's the gist with an example of one of my tests.
P.S. I tried upgrading the versions of qunit, ember-qunit and ember-cli-qunit, but the problem still exists.
UPD 1
Here's the screenshot: https://pp.vk.me/c627830/v627830110/e718/tAwcDMJ0J4g.jpg
UPD 2
I simplified the tests as much as I could and now they pass about 50 percent of the time. I mean, I run all the tests and they are marked as passing; I run them all again and they fail. That blows my mind.
Common reasons for failing are:
Some resource that is used by more than one test isn't reset properly between tests. Typical shared resources are: databases, files, environment settings, locks. This is the most probable cause.
Some asynchronous work gets different timing and doesn't complete in time, and the test waits on a fixed timer instead of a more reliable completion signal (a minimal sketch of the difference follows below).
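The timing point is language-agnostic; here is a small sketch (in C++, with made-up names, not taken from the original thread) of the difference between sleeping for a fixed interval and actually waiting for completion:

```cpp
#include <cassert>
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>

int main() {
  std::mutex m;
  std::condition_variable cv;
  bool done = false;

  // Background work of unpredictable duration (stands in for async rendering, I/O, ...).
  std::thread worker([&] {
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    { std::lock_guard<std::mutex> lock(m); done = true; }
    cv.notify_one();
  });

  // Flaky: a fixed sleep assumes the work finishes within 10 ms.
  // std::this_thread::sleep_for(std::chrono::milliseconds(10));
  // assert(done);  // fails whenever the worker is slower than the guess

  // Reliable: block until the worker actually signals completion.
  {
    std::unique_lock<std::mutex> lock(m);
    cv.wait(lock, [&] { return done; });
  }
  assert(done);

  worker.join();
  return 0;
}
```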
Basically I created a new test file in a particular package with some bare-bones test structure: no actual tests, just an empty struct type that embeds suite.Suite and a function that takes a *testing.T and calls suite.Run() on that struct. This immediately caused all our other tests to start failing non-deterministically.
The failures were database unique-key integrity violations on inserts and deletes into a single Postgres DB. This leads me to believe that the tests were being run concurrently, without our setup methods being called to prepare the environment properly between tests.
Needless to say, the moment I move this test file to another package, everything magically works!
Has anyone else run into this problem before and can possibly provide some insights?
What I've found from my own use is that go test runs a single package's test cases sequentially (unless t.Parallel() is called), but if you supply multiple packages (go test ./foo ./bar ./baz), each package's tests run in parallel with the other packages'. This definitely caused similar headaches with database testing for me.
As it turns out, this is a problem rooted in how go test works and has nothing to do with testify. Our tests were being run on ./..., which causes the underlying go test tool to run the tests of each package in parallel, as justinas pointed out. After digging around more on StackOverflow (here and here) and reading through testify's active issue on this problem, it seems that the best immediate solution is to use the -p=1 flag (e.g. go test -p=1 ./...) to limit the number of packages run in parallel.
However, it is still unexplained why the tests consistently passed prior to adding these new packages. A hunch is that the packages/test files were previously sorted and run in an order where concurrency wasn't an issue.
I have continuous integration on drone.io for my Dart projects. Normally there aren't any issues with this, apart from actual bugs in my code, but now my latest tests are all passing and the test suite reports that it completed successfully, yet the drone.io test runner never exits; it just keeps running until it times out and reports the build as failed. Has anyone else seen anything similar to this, or know how to fix it? Here is the build; if you kick off a new build from the big-refactor-and-enhancement branch, that is where it shows this odd behaviour.
After a quick look at your code, I would bet that the server launched under the covers is not shut down. You should add a close() method to it and call that in _tearDown().
I have noticed that if I have a set of regression tests and decide to change a property on one of my objects (a DTO) from int to decimal, for example, I make all the other changes and the tests pass as normal. But if the project is under source control (VSS specifically), this small change causes something strange to happen...
Similar to this question
Testing in Visual Studio Succeeds Individually, Fails in a Set
But a little different: I can make this change and run my tests, and any test that has an assert around this new data type will fail; but if I then click "debug checked tests" and it runs through the previously failed tests, they pass. No changes to the test code, etc.
Does anyone know why this might be happening? I hate to work outside of source control but if my tests are not reliable ... why have them at all in this case ... and I live for testing code :P
Given the age of the question, I doubt it's still an issue for you, but I wonder if you have bin or obj folders under source control, or an assembly that is in them?
If you do, then when you compile the app (before MSTest runs), the source-controlled assemblies will be read-only and won't get overwritten by the compiler, so your tests will run against out-of-date binaries.