What settings are needed to get Impacted Test results in Azure DevOps for MSTest unit testing?

I want to get Impacted test results for MSTest, but I am not getting the expected result. I have followed all the instructions written here: https://learn.microsoft.com/en-us/azure/devops/pipelines/test/test-impact-analysis?view=azure-devops
These are the VSTS log files; here you can see all the configuration done for Impact Analysis.
This is the test result image, where I cannot see Impacted results.
My main branch is "Build Development" and the child branch is "Mstest_UT". We rebased it, but I still did not get the impacted results I expected.
After doing some research I learned that Impacted test results are produced only if all test cases pass, so I made sure of that too, but I still did not get such a result.
[TestMethod]
public void GetAboutTideContent_Passing_Valid_Data()
{
    // Arrange: stub the repository to return an empty AboutTideEditor for any user.
    iAboutTideEditorRepository.Setup(x => x.GetAboutTideContent(It.IsAny<ApplicationUser>())).Returns(new AboutTideEditor());

    // Act: note that It.IsAny<T>() outside of a Setup expression evaluates to default(T), i.e. null here.
    ResponseData<AboutTideEditor> actual = aboutTideService.GetAboutTideContent(It.IsAny<ApplicationUser>());

    // Assert
    Assert.AreEqual(ProcessStatusEnum.Success, actual.Status);
}
I am writing mock tests in MSTest, and I am expecting Impacted test results.

From what I understand from the link you provided, you should use this type of testing from the start of your project (the phrase "growth and maturation of the test" hints at some kind of learning ability in the software). If you are enabling the feature halfway through, the program might already be locked into performing particular tests in a certain way (MS tooling sometimes keeps to "black box" approaches). If that is the case, you should override/reset it and run from the start, without having the program or user select (detailed) tests. This might of course set you back several hours of testing, but weigh that against spending and losing more time searching for what goes wrong; the cost keeps mounting, so it is essential to minimize it. Also check the graph provided on the linked page; it is very informative about the order of actions (e.g. step 6).
In your first screenshot there is a difference in the parallel setup (consider also the bullets below). The screenshot states that some DLL files were not found in the test assembly. If there is a possibility to produce a test log, you may want to check that too, to see what typos might have occurred.
From the page:
At present, TIA is not supported for:
Multi-machine topology (where the test is exercising an app deployed to a different machine)
Data driven tests
Test Adapter-specific parallel test execution
.NET Core
UWP
In short: reset the whole test setup and run fresh to see if the errors persist.

Related

Collect and run all junit tests in parallel with each test class in its own JVM (parallelization by class, not by method)

Problem
I have a bunch of JUnit tests (many with custom runners such as PowerMockRunner or JUnitParamsRunner), all under some root package tests (they are in various subpackages of tests at various depths).
I'd like to collect all the tests under package tests and run each test class in a different JVM, in parallel. Ideally, the parallelization would be configurable, but a default of number_of_cores is totally fine as well. Note that I do not want to run each method in its own JVM, but each class.
Background
I'm using PowerMock combined with JUnitParams via the annotations @RunWith(PowerMockRunner.class) and @PowerMockRunnerDelegate(JUnitParamsRunner.class) for many of my tests. I have ~9000 unit tests which complete in an "ok" amount of time, but I have an 8-core CPU and the system is heavily underutilized with the default single-test-at-a-time runner. As I run the tests quite often, the extra time adds up, and I really want to run the test classes in parallel.
Note that, unfortunately, in a good number of the tests I need to mock static methods, which is part of the reason I'm using PowerMock.
What I've Tried
Having to mock static methods makes it impossible to use something like com.googlecode.junittoolbox.ParallelSuite (which was my initial solution), since it runs everything in the same JVM and the static mocking gets all interleaved and messed up. Or so it seems to me, at least, based on the errors I get.
I don't know the JUnit stack at all, but after poking around, it appears that another option might be to write and inject my own RunnerBuilder. I'm not sure I can even spawn another JVM process from within a RunnerBuilder, though, so that seems unlikely. I think the proper solution would be some kind of harness that lives as a Gradle task.
I also just discovered some Android Studio (IntelliJ) test options, but the only available fork option is method, which is not what I want. I am currently exploring this route, so perhaps I will figure it out, but I thought I'd ask the community in parallel since I haven't had much luck yet.
UPDATE: I finally got Android Studio (IntelliJ) to collect all my tests using the option Test Kind: All in directory (for some reason the package option did not search recursively) and picking fork mode Class. However, this still runs each found test class sequentially, and there are no options that I can see for parallelization. This is so close to what I want, but not quite... :(
Instead of using IntelliJ's (Android Studio's) built-in JUnit run configurations, I noticed that Android Studio ships with a bunch of pre-built Gradle tasks, some of which refer to testing. Those, however, exhibited the same sequential-execution problem. I then found "Run parallel test task using gradle" and added the following statement to my root build.gradle file:
subprojects {
    tasks.withType(Test) {
        maxParallelForks = Runtime.runtime.availableProcessors()
    }
}
This works great; my CPU is now pegged at 100% (for most of the run; as the number of outstanding test classes drops below the number of available processors, utilization obviously goes down).
The downside to this solution is that it does not integrate with Android Studio's (IntelliJ's) pretty JUnit runner UI. So while the Gradle task is progressing I cannot really see the rate of test completion, etc. At the end of the task execution, it just spits out the total runtime and a link to a generated HTML report. This is a minor point and I can totally live with it, but it would be nice if I could figure out how to improve the solution to use the JUnit runner UI.
Maybe this was not possible when the question was posted, but now you can do it easily in Android Studio.
I am using the Gradle build tools: 'com.android.tools.build:gradle:2.2.3'
And I added the following in my root build.gradle file:
allprojects {
    // ...
    tasks.withType(Test) {
        maxParallelForks = Runtime.runtime.availableProcessors()
    }
}
Now I have multiple Gradle Test Executor runners for my tests. The more cores your running machine has, the more executors you get!
Thanks for sharing your original answer!
It may sound counterintuitive, but running a lower number of forks may actually be faster than running on all available cores.
For me this setup is 30 s faster (1:50 instead of 2:20) for the same tests, compared to using all available processors (8-core CPU, 16 threads):
subprojects {
    tasks.withType(Test) {
        maxParallelForks = Runtime.runtime.availableProcessors().intdiv(2) ?: 1
    }
}

Repeat subset of tests determined programmatically at runtime with gtest framework

The reason why is orthogonal to the question, but for clarity: I created a TimeMonitor event listener that, at the end of each test, compares the elapsed time against a policy and fails the test if it takes longer.
It works great, with one exception: from time to time the system gets into a weird state and some of the tests take longer because of that. Note that my bar for unit tests is 15 ms, so this is not so hard to hit.
I have had this before, and the way I solved it was to create a record and wait until the same test exceeded its limit several times before failing it. This has several flaws, the major one being the need to persist the data.
I think it will work better if I simply do two (or more) passes. In the first pass I collect the tests that exceeded their time, and in passes 2-N I repeat them to confirm or reject the problem.
My question is: how? What do I need to do (if it is possible) to programmatically collect a subset of tests and rerun them? Do I need to remove tests from testing::UnitTest::GetInstance(), or should I create another UnitTest?
A reference to something similar, such as retrying failed tests, would be great.
I know the following does not directly answer your question, but I believe a suggestion of a different approach is justified. I would suggest doing the test execution time analysis from a separate process, to simplify things and avoid changing the program that runs the tests. This way you can be certain that you have not influenced the execution time of your tests by inserting additional code that tracks tests whose execution time exceeds your threshold. You also won't need to modify the state of UnitTest objects and other details of the googletest implementation, which is harder to understand and potentially dangerous.
Output of the executable that runs your test suite already provides you with execution time for each test. Write a script that runs your test suite executable once and parses that output to determine which tests take too long to execute (this can be easily achieved in some higher level language like Python). Then, if the script has found some tests that are suspicious, it re-runs the test suite executable 2-N times by specifying --gtest_filter command line parameter to it. For example:
tests.exe --gtest_filter=*test1*:*test2*:...:*testN*
This way, only the suspicious tests will be re-run, and you will be able to determine whether any of them are indeed problematic.
If you do not want to use the values provided by googletest, you can modify your TimeMonitor to output the test execution time and parse those values. However, maybe it would be best to remove it and be 100% sure you are not influencing the execution time of the tests.
Hope this helps!
The solution is actually simple (once you know it). Disclaimer: not tested with every possible corner case.
In (tidied) pseudocode:
// First pass: the TimeMonitor only observes and builds a filter of the too-slow tests.
// (SetObserveOnly/HasTooLongTests/Filter are the custom listener's own methods.)
timeMonitor->SetObserveOnly(true);
testing::InitGoogleTest(&argc, argv);
testing::UnitTest::GetInstance()->listeners().Append(timeMonitor);  // attach the time monitor
int result = RUN_ALL_TESTS();
if (result == 0 && timeMonitor->HasTooLongTests()) {
    // Second pass: re-run only the suspicious tests, this time reporting errors.
    timeMonitor->SetObserveOnly(false);
    ::testing::GTEST_FLAG(filter) = timeMonitor->Filter();
    result = RUN_ALL_TESTS();
}

Ember acceptance tests fail when running all at once

I have problems with acceptance tests (Ember 0.10.0). The thing is, the tests run successfully if I run them one by one (passing the test ID in the URL), but when I try to run them all at once they fail, I think because of some async problems (such as trying to click on an element which has not been rendered yet). Has anybody faced that? Here's the gist with an example of one of my tests.
P.S. I tried to upgrade the versions of qunit, ember-qunit, and ember-cli-qunit, but the problem still exists.
UPD 1
Here's the screenshot: https://pp.vk.me/c627830/v627830110/e718/tAwcDMJ0J4g.jpg
UPD 2
I simplified the tests as much as I could, and now they pass about 50 percent of the time: I run all the tests and they are marked as successful, then I run them all again and they fail. That blows my mind.
Common reasons for failing are:
Some resource that is used by more than one test isn't reset properly between tests. Typical shared resources are: databases, files, environment settings, locks. This is the most probable cause.
Some asynchronous work gets different timing and doesn't complete in time, and you use a timer instead of more reliable ways to wait for completion (see the sketch below).
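The question is about Ember, but the second point is framework-agnostic: poll for the condition with a timeout instead of sleeping a fixed interval. A minimal sketch of such a wait helper in C# (the WaitUntil name and the page.ElementIsRendered usage are invented for illustration):

using System;
using System.Diagnostics;
using System.Threading;

public static class TestWait
{
    // Polls a condition until it holds or the timeout expires, instead of
    // guessing a fixed sleep that may be too short when the system is loaded.
    public static void WaitUntil(Func<bool> condition, TimeSpan timeout)
    {
        var stopwatch = Stopwatch.StartNew();
        while (!condition())
        {
            if (stopwatch.Elapsed > timeout)
                throw new TimeoutException("Condition not met within " + timeout);
            Thread.Sleep(25);  // short poll interval
        }
    }
}

// Usage in a test: TestWait.WaitUntil(() => page.ElementIsRendered, TimeSpan.FromSeconds(5));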

VS2012 - Disable parallel test runs

I've got some unit tests (C++) running in the Visual Studio 2012 test framework.
From what I can tell, the tests are running in parallel. In this case the tests are stepping on each other; I do not want to run them in parallel!
For example, I have two tests in which I have added breakpoints and they are hit in the following order:
Test1 TEST_CLASS_INITIALIZE
Test2 TEST_CLASS_INITIALIZE
Test2 TEST_METHOD
Test1 TEST_METHOD
If the init for Test1 runs first then all of its test methods should run to completion before anything related to Test2 is launched!
After doing some internet searches I am sufficiently confused. Everything I am reading says Visual Studio 2012 does not run tests concurrently by default, and you have to jump through hoops to enable it. We certainly have not enabled it in our project.
Any ideas on what could be happening? Am I missing something fundamental here?
Am I missing something fundamental here?
Yes.
You should never assume that another test case will work as expected. This means it should never be a concern whether the tests execute synchronously or asynchronously.
Of course there are test cases that expect some fundamental piece of code to work; this might be your own code or part of the framework/library you work with. When it comes to this, the programmer should know what data or object to expect as a result.
This is where mock objects come into play. Mock objects allow you to mimic a part of the code and assure that it provides exactly what you expect, so you don't rely on other (time-consuming) services such as HTTP requests, file streams, etc.
You can read more here.
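For instance, with Moq and MSTest in C# (IWeatherClient is a hypothetical interface invented for illustration), a mock stands in for a service that would otherwise make an HTTP request:

using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

// Hypothetical interface for a service that would normally call an HTTP API.
public interface IWeatherClient
{
    int GetTemperature(string city);
}

[TestClass]
public class WeatherClientTests
{
    [TestMethod]
    public void Mock_Replaces_The_Real_Http_Call()
    {
        var client = new Mock<IWeatherClient>();
        client.Setup(c => c.GetTemperature("Oslo")).Returns(21);  // canned answer, no network

        Assert.AreEqual(21, client.Object.GetTemperature("Oslo"));
    }
}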
When a project becomes complex, the setup takes a fair number of lines and code starts duplicating. The solution to this is Setup and TearDown methods. The naming convention differs from framework to framework: Setup might be called beforeEach or TestInitialize, and TearDown can also appear as afterEach or TestCleanup. Names for NUnit, MSTest, and xUnit.net can be found on the xUnit.net CodePlex page.
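In MSTest, for example, that looks like the following minimal sketch (the list fixture is invented for illustration):

using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class UserListTests
{
    private List<string> users;  // shared fixture, rebuilt for every test

    [TestInitialize]  // the Setup method: runs before each test method
    public void Setup()
    {
        users = new List<string> { "alice", "bob" };
    }

    [TestCleanup]  // the TearDown method: runs after each test method
    public void Cleanup()
    {
        users.Clear();
    }

    [TestMethod]
    public void Fixture_Contains_Seeded_User()
    {
        CollectionAssert.Contains(users, "alice");
    }
}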
A simple example application:
it should read a config file
it should verify if config file is valid
it should update user's config
The way I would go about building and testing this (a sketch follows the list):
have a method to read config and second one to verify it
have a getter/setter for user's settings
test read method if it returns desired result (object, string or however you've designed it)
create mock config which you're expecting from read method and test if method accepts it
at this point, you should create multiple mock configs that cover all possible scenarios, see if the code works for each of them, and fix it accordingly; this is also what improves code coverage
create a mock object of an accepted config and use the setter to update the user's config, then use the getter to check that it was set correctly
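To make those steps concrete, here is a minimal sketch in C# with MSTest and Moq; the ConfigReader and IFileSystem names and members are invented for illustration:

using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

// Hypothetical abstraction over file access so the read step can be mocked.
public interface IFileSystem
{
    string ReadAllText(string path);
}

public class ConfigReader
{
    private readonly IFileSystem fileSystem;
    public ConfigReader(IFileSystem fileSystem) { this.fileSystem = fileSystem; }

    public string Read(string path) => fileSystem.ReadAllText(path);

    // Toy validity check; a real verifier would parse and inspect the content.
    public bool IsValid(string content) => !string.IsNullOrWhiteSpace(content);
}

[TestClass]
public class ConfigReaderTests
{
    [TestMethod]
    public void Read_Returns_File_Content()
    {
        var fs = new Mock<IFileSystem>();
        fs.Setup(x => x.ReadAllText("app.config")).Returns("key=value");  // mock config

        var reader = new ConfigReader(fs.Object);

        Assert.AreEqual("key=value", reader.Read("app.config"));
    }

    [TestMethod]
    public void IsValid_Rejects_Empty_Config()
    {
        var reader = new ConfigReader(Mock.Of<IFileSystem>());
        Assert.IsFalse(reader.IsValid(""));
    }
}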
This is a basic principle of Test-Driven Development (TDD).
If the test suite is set up as described and all tests pass, all these parts connected together should work perfectly. Additional tests, for example End-to-End (E2E) tests, aren't strictly needed; I use them only to assure that the whole application flow works and to easily catch errors (e.g. HTTP connection errors).

Execute custom method when the test execution is halted

We are in a situation where the database used as our test environment DB must be kept clean. This means every test has a cleanup method which runs after each execution and deletes from the DB all the data that the test needed.
We use SpecFlow, and keeping the DB clean is achievable with this tool as long as the test execution is not halted. But while developing the test cases it happens that the execution is halted, so the generated data in the DB is not cleaned up.
The question came up: what happens when I press "Halt execution" in VS 2013? How does VS stop the execution? What method is called? Is it possible to customize it?
SpecFlow uses the MSTest framework here, and there is no option to change that.
I don't know how practical this is going to be for you, but as I see it you have a couple of options:
Run the cleanup code at the start and end of the test
Create a new database for every test
The first is the simplest and will ensure that stopping execution in VS won't impact the next test run (of the same test), as any leftover data will be cleaned up when the test next runs (see the sketch after these options).
The second is more complicated to set up (and slower when it runs), but it means that you can run your tests in parallel (so it is good if you use tools like NCrunch), and they won't interfere with each other.
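For the first option, SpecFlow hooks make this straightforward. A minimal sketch, assuming a hypothetical TestDataCleaner helper that deletes the rows a scenario creates:

using TechTalk.SpecFlow;

[Binding]
public class DatabaseCleanupHooks
{
    // Running the same cleanup before AND after each scenario means that a
    // halted run only leaves stale data until the next run, which removes it.
    [BeforeScenario]
    public void CleanBeforeScenario()
    {
        TestDataCleaner.DeleteTestData();  // hypothetical helper
    }

    [AfterScenario]
    public void CleanAfterScenario()
    {
        TestDataCleaner.DeleteTestData();
    }
}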
What I have done in the past is make the DB layer switchable, so you can run the tests against in-memory data most of the time and then switch to the DB once in a while to check that the actual reading and writing isn't broken.
This isn't too onerous if you use EF6 and can switch the IDbSet<T> for some other implementation backed by an in-memory IQueryable<> implementation.
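A minimal sketch of that kind of switch, using a plain repository interface instead of EF types (all names invented for illustration):

using System.Collections.Generic;
using System.Linq;

// Production code would implement this on top of an EF6 DbContext/IDbSet<T>;
// tests swap in the in-memory version below.
public interface IRepository<T>
{
    IQueryable<T> Query();
    void Add(T item);
}

public class InMemoryRepository<T> : IRepository<T>
{
    private readonly List<T> items = new List<T>();
    public IQueryable<T> Query() { return items.AsQueryable(); }
    public void Add(T item) { items.Add(item); }
}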