Automated test case execution - when to stop

We have around 100 test cases for our system, and we are trying to build an automated test suite for it.
Say the 25th test fails while running the suite. Should our automated test system bail out there and stop execution, or should it just mark that test as failed and continue with test cases 26 onwards (that is, every test cycle executes all 100 test cases irrespective of any failed test cases)?
Of course, if the system needs to be reset after a failed test case (for example, no. 25) in order to execute test cases 26 onwards, that will be taken care of.
Thanks
James

If your tests are independent, you should finish all of them. This way you can monitor the system's stability and see all of the problems at once, without re-running the suite countless times.

If this is running without human intervention, say as part of some automated build, I would want to attempt all tests.
However, there are scenarios where you are in the mode of fixing problems, and there it might save a human's time to just stop. If it's easy, I'd offer a "stop on first failure" option.
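To make the trade-off concrete, here is a minimal sketch of such a runner in C++; the TestCase type, the ResetSystem() hook and the stop_on_first_failure flag are illustrative names, not part of any particular framework:

#include <functional>
#include <iostream>
#include <string>
#include <vector>

struct TestCase {
  std::string name;
  std::function<bool()> run;  // returns true if the test passed
};

void ResetSystem() { /* hypothetical hook: bring the system back to a known state */ }

// Runs every test, or bails out at the first failure when stop_on_first_failure
// is set; either way the system is reset after a failure, as described above.
int RunSuite(const std::vector<TestCase>& tests, bool stop_on_first_failure) {
  int failures = 0;
  for (const TestCase& test : tests) {
    const bool passed = test.run();
    std::cout << (passed ? "PASS " : "FAIL ") << test.name << "\n";
    if (!passed) {
      ++failures;
      ResetSystem();
      if (stop_on_first_failure) break;  // "fix one problem at a time" mode
    }
  }
  return failures;  // non-zero means at least one test failed
}

The default (continue and count failures) suits unattended builds; the early break is only worth exposing for interactive debugging sessions.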

Related

Tell Google Test to resume executing the rest of the tests after a crashed test

I have a large unit test suite written in C++ using Google Test.
I have recently made a change to the codebase which may affect different parts of the system, so various tests should now probably fail or even crash. I would like to run the entire suite once (which unfortunately takes a long time to complete), summarize the list of failed tests, and fix them one by one.
However, whenever a test crashes (e.g. with a segmentation fault), as opposed to simply failing logically, it seems that GTest stops and executes no more tests.
I can then fix the crashed test, but rerunning the entire suite will take a long time.
Is there a way to tell GTest to resume executing the rest of the tests after a test has crashed?
Or, alternatively, at least a way to launch GTest starting from a particular test (assuming the order of the tests is always the same)?
If you need to test whether an assertion is triggered when the API is used incorrectly, gtest provides something called a death test.
If your test crashed because of a segmentation fault, you should fix it ASAP! You can disable a test temporarily by adding the DISABLED_ prefix to its name, or by adding GTEST_SKIP() in the test body. Alternatively, there is also the command line argument --gtest_filter=<colon separated positive patterns>[:-<colon separated negative patterns>]. There is no way to recover from a segmentation fault, so the test suite can't continue.
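For example (FooTest and the test names here are placeholders, not from the original suite):

#include <gtest/gtest.h>

// Prefixing the test name with DISABLED_ compiles the test but does not run it.
TEST(FooTest, DISABLED_CrashesWithSegfault) {
  // ... body that currently crashes ...
}

// Alternatively, skip it at run time; the test is reported as skipped, not failed.
TEST(FooTest, AlsoCrashes) {
  GTEST_SKIP() << "Skipped until the segmentation fault is fixed";
  // ... body that currently crashes ...
}

The same test can also be excluded from a single run with --gtest_filter=-FooTest.AlsoCrashes.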
If you use gcc or clang (MSVC has this feature experimentally), you can enable AddressSanitizer to quickly detect memory issues in the code under test, which will help you fix those issues faster.
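For example, with gcc or clang the build might look something like this (the file and binary names are assumptions):

# -fno-omit-frame-pointer gives more readable stack traces in the sanitizer report
g++ -std=c++17 -g -fsanitize=address -fno-omit-frame-pointer my_tests.cpp -lgtest -lgtest_main -pthread -o tests
./tests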
There are also IDE plugins for gtest that help you track which tests were run, which failed and which crashed.
Google Test is not able to do what you need. I'd suggest you write a simple test runner that:
Runs the test executable with --gtest_list_tests to get a list of all tests.
Loops through all tests, prints the test number, and runs the test executable with --gtest_filter=FooTest.Bar to invoke only one test in each loop iteration.
After the test with number N has been fixed, the loop skips the first N iterations and resumes from there.
You only need to write such a runner once, and it shouldn't be hard; a sketch follows below.
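A minimal sketch of such a runner, written here in C++ with POSIX popen() (a shell or Python script would do just as well); the ./tests binary name is an assumption:

#include <cstdio>
#include <cstdlib>
#include <iostream>
#include <string>
#include <vector>

// Parses the output of --gtest_list_tests into fully qualified "Suite.Test" names.
std::vector<std::string> ListTests(const std::string& exe) {
  std::vector<std::string> tests;
  FILE* pipe = popen((exe + " --gtest_list_tests").c_str(), "r");  // POSIX
  if (!pipe) return tests;
  char buf[512];
  std::string suite;
  while (fgets(buf, sizeof(buf), pipe)) {
    std::string line(buf);
    while (!line.empty() && (line.back() == '\n' || line.back() == '\r')) line.pop_back();
    // Strip the trailing "  # ..." comment gtest adds for typed/parameterized tests.
    const size_t comment = line.find("  #");
    if (comment != std::string::npos) line = line.substr(0, comment);
    if (line.empty()) continue;
    if (line[0] != ' ') {                      // "FooTest." - a suite line
      if (line.back() == '.') suite = line;
    } else {                                   // "  Bar" - a test line, indented
      tests.push_back(suite + line.substr(2));
    }
  }
  pclose(pipe);
  return tests;
}

int main(int argc, char** argv) {
  const std::string exe = "./tests";                         // assumed gtest binary
  const size_t start = argc > 1 ? std::stoul(argv[1]) : 0;   // resume index after a fix
  const std::vector<std::string> tests = ListTests(exe);
  for (size_t i = start; i < tests.size(); ++i) {
    std::cout << "[" << i << "] " << tests[i] << std::endl;
    // One test per child process, so a crash only kills that single run.
    std::system((exe + " --gtest_filter=" + tests[i]).c_str());
  }
  return 0;
}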

What settings need to be set to get Impacted Tests results in Azure DevOps for MSTest?

I want to get Impacted Tests results with MSTest but am not getting the expected result. I have followed all the instructions written here: https://learn.microsoft.com/en-us/azure/devops/pipelines/test/test-impact-analysis?view=azure-devops
This is the VSTS log file; here you can see all the configuration done for Test Impact Analysis.
This is the test result image, where I cannot see Impacted Tests results.
My main branch is "Build Development" and the child branch is "Mstest_UT". We have rebased it, but I still did not get the impacted result as expected.
After doing some research I learned that Impacted Tests results are only produced if all test cases pass, so I made sure of that too, but still did not get such a result.
[TestMethod]
public void GetAboutTideContent_Passing_Valid_Data()
{
    iAboutTideEditorRepository.Setup(x => x.GetAboutTideContent(It.IsAny<ApplicationUser>())).Returns(new AboutTideEditor() { });
    ResponseData<AboutTideEditor> actual = aboutTideService.GetAboutTideContent(It.IsAny<ApplicationUser>());
    Assert.AreEqual(ProcessStatusEnum.Success, actual.Status);
}
I am writing a mock test in MSTest, and I am expecting an Impacted Tests result.
From what I understand from the link you provided, you should use this type of test from the start of your project ("growth and maturation of the test" hints at some kind of learning ability in the software). If you are introducing the test halfway through, the program might already be locked into performing particular tests in a certain way (MS tooling sometimes remains something of a "black box"). If that is the case, you should override/reset it and run from the start without having the program or the user pre-select (detailed) tests. This of course might set you back several hours of testing, but weigh that against spending and losing even more time searching for what goes wrong; that cost keeps growing, and minimizing it is what matters. Check also the graph provided on the linked page; it is very informative about the order of actions (e.g. step 6).
In your first screenshot there is a difference in the parallel setup (consider also the bullets below). The screenshot states that some DLL files are not found in the test assembly. If there is a possibility to produce a test log, you might want to check that too, to see what typos might have occurred.
From the page:
At present, TIA is not supported for:
Multi-machine topology (where the test is exercising an app deployed to a different machine)
Data driven tests
Test Adapter-specific parallel test execution
.NET Core
UWP
In short: reset the whole test and run "fresh" to see if the errors persist.

Run Functional Tests step in Vnext TFS 15 is awfully slow

Running functional tests in TFS 15 with vNext is awfully slow compared to the old system with MTM and test environments. It takes about 10 minutes after the initial test start before the first tests are run, and while running, the tests take longer than normal.
The distribution of tests is also slightly unhappy: tests get distributed at the beginning of the test run, so if one machine is finished while the other one still has 5 long test runs queued, that doesn't make sense. The bucket-size approach was a far more intelligent system.
Is there a way to improve this? We have updated to RC2 and we are not happy with the test outcome. It feels like the test task is a bottleneck.
OK, so the case is that the tests get distributed at the start of the test run and the test runner itself works differently now, unlike the bucket system that handed out the tests one after another.
Also, the tests only report whether they failed or finished AFTER they are all done, so if 200 tests get distributed, the outcome only shows when all 200 are done.
Kind of awkward.

Repeat subset of tests determined programmatically at runtime with gtest framework

Why is orthogonal, but for clarity: I created a TimeMonitor event listener that, at the end of each test, compares the elapsed time with a policy and fails the test if it took longer.
It works great with one exception: from time to time the system gets into a weird state and some of the tests take longer because of that. Note that my bar for unit tests is 15 ms, so it is not so hard for this to happen.
I have had this before, and the way I solved it was to keep a record and wait until the same test exceeded the limit several times before failing it. This has several flaws, the major one being the need to persist the data.
I think it will work better if I simply do two (or more) passes: in the first pass I collect the tests that exceeded their time, and in passes 2-N I repeat them to confirm or reject the problem.
My question is: how? What do I need to do (if possible) to programmatically collect a subset of tests and rerun them? Do I need to remove tests from testing::UnitTest::GetInstance(), or should I create another UnitTest?
A reference to something similar, like retrying failed tests, would be great.
I know the following does not directly answer your question, but I believe that a suggestion of a different approach is justified. I would suggest doing test execution time analysis from a separate process to simplify things and avoid changing the program that runs the tests. This way, you can be certain that you have not influenced the execution time of your tests by inserting additional code that keeps track of tests whose execution time exceeds the threshold you have defined. Also, you won't be needing to modify state of UnitTest objects and other details of googletest implementation, which is harder to understand and potentially dangerous.
Output of the executable that runs your test suite already provides you with execution time for each test. Write a script that runs your test suite executable once and parses that output to determine which tests take too long to execute (this can be easily achieved in some higher level language like Python). Then, if the script has found some tests that are suspicious, it re-runs the test suite executable 2-N times by specifying --gtest_filter command line parameter to it. For example:
tests.exe --gtest_filter=*test1*:*test2*:...:*testN*
This way, only suspicious tests will be re-run and you will be able to determine whether any of them are indeed problematic.
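If you go the parsing route, the per-test timing is already in the default console output, on lines such as "[       OK ] FooTest.Bar (3 ms)". A rough sketch of the extraction step, here as a small C++ program that reads the captured console output from stdin; the 15 ms budget comes from the question, the rest is an assumption:

#include <iostream>
#include <regex>
#include <string>
#include <vector>

// Reads gtest console output from stdin and prints a --gtest_filter value
// covering every test that exceeded the time budget.
int main() {
  // Matches "[       OK ] Suite.Test (12 ms)" and the FAILED variant.
  const std::regex line_re(R"(\[\s*(?:OK|FAILED)\s*\]\s+(\S+)\s+\((\d+) ms\))");
  const long budget_ms = 15;  // assumed per-test budget from the question
  std::vector<std::string> slow;
  std::string line;
  while (std::getline(std::cin, line)) {
    std::smatch m;
    if (std::regex_search(line, m, line_re) && std::stol(m[2]) > budget_ms)
      slow.push_back(m[1]);
  }
  std::string filter;
  for (const std::string& name : slow)
    filter += (filter.empty() ? "" : ":") + name;
  std::cout << "--gtest_filter=" << filter << std::endl;  // feed this to the re-run
  return 0;
}

Usage would be something along the lines of piping the test run's output into this program and then re-running the test executable with the printed filter.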
If you do not want to use the values provided by googletest, you can modify your TimeMonitor to output the test execution time and parse those values. However, maybe it would be best to remove it and be 100% sure you are not influencing the execution time of the tests.
Hope this helps!
The solution is actually simple (once you know it). Disclaimer: not tested with every possible corner case.
In (almost) pseudo-code:
// First pass: the time monitor only observes and builds a filter of the too-long tests.
testing::InitGoogleTest(&argc, argv);
testing::UnitTest::GetInstance()->listeners().Append(time_monitor);  // attach the time monitor
int result = RUN_ALL_TESTS();
// Second pass: rerun only the too-long tests, this time reporting them as errors.
if (result == 0 && time_monitor->has_too_long_tests()) {
    time_monitor->set_report_errors(true);
    testing::GTEST_FLAG(filter) = time_monitor->filter();
    result = RUN_ALL_TESTS();
}
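For reference, a minimal sketch of what such a TimeMonitor listener could look like; the class layout, the method names and the 15 ms budget are assumptions matching the pseudo-code above, not the original poster's code:

#include <string>
#include <vector>
#include <gtest/gtest.h>

class TimeMonitor : public testing::EmptyTestEventListener {
 public:
  bool has_too_long_tests() const { return !slow_.empty(); }
  void set_report_errors(bool report) { report_errors_ = report; }

  // Builds a --gtest_filter pattern like "Suite.Test:Suite.Other".
  std::string filter() const {
    std::string f;
    for (const std::string& name : slow_) f += (f.empty() ? "" : ":") + name;
    return f;
  }

 private:
  void OnTestEnd(const testing::TestInfo& info) override {
    const auto elapsed_ms = info.result()->elapsed_time();  // milliseconds
    if (elapsed_ms <= 15) return;                           // assumed time budget
    const std::string name =
        std::string(info.test_suite_name()) + "." + info.name();  // test_case_name() in older gtest
    if (report_errors_)
      ADD_FAILURE() << name << " exceeded the time budget (" << elapsed_ms << " ms)";
    else
      slow_.push_back(name);                                // first pass: just remember it
  }

  bool report_errors_ = false;
  std::vector<std::string> slow_;
};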

Ember acceptance tests fail when running all at once

I have problems with acceptance tests (Ember 0.10.0). The thing is, the tests run successfully if I run them one by one (passing the test ID in the URL), but when I try to run them all at once, they fail, I think because of some async problems (such as trying to click on an element which has not been rendered yet). Has anybody faced that? Here's the gist with an example of one of my tests.
P.S. I tried to upgrade the versions of qunit, ember-qunit and ember-cli-qunit, but the problem still exists.
UPD 1
Here's the screenshot: https://pp.vk.me/c627830/v627830110/e718/tAwcDMJ0J4g.jpg
UPD 2
I simplified the tests as much as I could and now they pass about 50 percent of the time. I mean, I run all the tests and they are marked as passing; then I run them all again and they fail. That blows my mind.
Common reasons for failing are:
Some resource that is used by more than one test isn't reset properly between tests. Typical shared resources are databases, files, environment settings and locks. This is the most probable cause.
Some asynchronous work gets different timing and doesn't complete in time, and you are using a timer instead of a more reliable way to wait for completion.