Run Functional Tests step in vNext TFS 15 is awfully slow - unit-testing

Running functional tests in TFS 15 with vNext is awfully slow compared to the old system with MTM and test environments. It takes about 10 minutes after the initial test start before the first tests actually begin, and while running, the tests take longer than normal.
The distribution of tests is also unfortunate: tests get distributed at the beginning of the test run, so one machine can be finished while the other still has 5 long tests queued, which makes no sense. The old bucket-size approach was a far more intelligent system.
Is there a way to improve this? We have updated to RC2 and we are not happy with the test outcome; the test task feels like a bottleneck.

OK, so the situation is that the tests get distributed at the start of the test run, and the test runner itself now works differently from the bucket system, which handed out tests one after another.
Also, the tests only report failed or finished AFTER they are done, so if 200 tests get distributed, the outcome only shows once all 200 are complete.
Kind of awkward.

What are the settings to be set to get Impacted Test results in Azure DevOps for MSTest

I want to get Impacted test results in MSTest but am not getting the expected result. I have followed all the instructions written here: https://learn.microsoft.com/en-us/azure/devops/pipelines/test/test-impact-analysis?view=azure-devops
Here are the VSTS log files, where you can see all the configuration done for Impact Analysis.
Here is the test result image, where I cannot see any Impacted results.
My main branch is "Build Development" and the child branch is "Mstest_UT". We have rebased it, but I still did not get the impacted results as expected.
After doing some research I learned that Impacted test results appear only if all test cases pass, so I made sure of that too, but still did not get such a result.
[TestMethod]
public void GetAboutTideContent_Passing_Valid_Data()
{
    // Arrange: stub the repository (a Moq mock) to return an empty entity
    iAboutTideEditorRepository.Setup(x => x.GetAboutTideContent(It.IsAny<ApplicationUser>())).Returns(new AboutTideEditor() { });
    // Act (note: It.IsAny<T>() outside of a Setup/Verify yields default(T), i.e. a null user)
    ResponseData<AboutTideEditor> actual = aboutTideService.GetAboutTideContent(It.IsAny<ApplicationUser>());
    // Assert
    Assert.AreEqual(ProcessStatusEnum.Success, actual.Status);
}
I am writing a mock test in MSTest, and I am expecting Impacted test results.
From what I understand from the link you provided, you should use this type of test from the start of your project ("growth and maturation of the test" hints at some kind of learning ability in the software). If you're introducing the test halfway through, the program may already be locked into running particular tests in a certain way (MS tools sometimes remain black boxes). If that is the case, you should reset it and run from the start, without the program or the user having pre-selected (detailed) tests. This, of course, might set you back several hours of testing, but weigh that against losing even more time searching for what goes wrong. Also check the graph provided on the linked page; it is very informative about the order of actions (e.g. step 6).
In your first screenshot (the black console capture) there is a difference in the parallel setup (consider also the bullets below). It states that some DLL files are not found in the "test assembly". If you can produce a test log, check that too, to see what typos might have occurred.
From the page:
At present, TIA is not supported for:
Multi-machine topology (where the test is exercising an app deployed to a different machine)
Data driven tests
Test Adapter-specific parallel test execution
.NET Core
UWP
In short: reset the whole test and run "fresh" to see if the errors persist.
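For completeness: on the pipeline side, TIA is switched on via the VSTest task itself. A minimal YAML sketch, assuming the VSTest@2 task (the assembly pattern and the rebaseline interval are illustrative values):
- task: VSTest@2
  inputs:
    testSelector: 'testAssemblies'
    testAssemblyVer2: '**\*Tests.dll'   # illustrative pattern
    runOnlyImpactedTests: true          # enables Test Impact Analysis
    runAllTestsAfterXBuilds: 50         # periodically re-run the full suite as a safety net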

Collect and run all JUnit tests in parallel with each test class in its own JVM (parallelization by class, not by method)

Problem
I have a bunch of JUnit tests (many with custom runners such as PowerMockRunner or JUnitParamsRunner), all under some root package tests (they are in various subpackages of tests at various depths).
I'd like to collect all the tests under package tests and run each test class in a different JVM, in parallel. Ideally, the parallelization would be configurable, but a default of number_of_cores is totally fine as well. Note that I do not want to run each method in its own JVM, but each class.
Background
I'm using PowerMock combined with JUnitParams via the annotations @RunWith(PowerMockRunner.class) and @PowerMockRunnerDelegate(JUnitParamsRunner.class) for many of my tests. I have ~9000 unit tests which complete in an "ok" amount of time, but I have an 8-core CPU and the system is heavily underutilized with the default single-test-at-a-time runner. As I run the tests quite often, the extra time adds up, and I really want to run the test classes in parallel.
Note that, unfortunately, in a good number of the tests I need to mock static methods, which is part of the reason I'm using PowerMock.
What I've Tried
Having to mock static methods makes it impossible to use something like com.googlecode.junittoolbox.ParallelSuite (which was my initial solution) since it runs everything in the same JVM and the static mocking gets all interleaved and messed up. Or at least, so it seems based on the errors I get.
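For reference, that initial attempt looked roughly like this (a sketch; the suite class name is illustrative, and the wildcard pattern follows junit-toolbox's own SuiteClasses annotation):
import com.googlecode.junittoolbox.ParallelSuite;
import com.googlecode.junittoolbox.SuiteClasses;
import org.junit.runner.RunWith;

// Runs matching test classes on parallel threads -- but all inside ONE JVM,
// which is what lets PowerMock's static mocking interleave and break.
@RunWith(ParallelSuite.class)
@SuiteClasses("**/*Test.class")   // wildcard pattern, resolved relative to this suite's package
public class AllTestsSuite {
}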
I don't know the JUnit stack at all, but after poking around, it appears another option might be to write and inject my own RunnerBuilder -- though I'm not sure I could even spawn another JVM process from within a RunnerBuilder; it seems unlikely. I think the proper solution would be some kind of harness that lives as a Gradle task.
I also JUST discovered some Android Studio (IntelliJ) test options, but the only available fork option is method, which is not what I want. I am currently exploring this route, so perhaps I will figure it out, but I thought I'd ask the community in parallel since I haven't had much luck yet.
UPDATE: I finally managed to get Android Studio (IntelliJ) to collect all my tests using Test Kind: All in directory (for some reason the package option did not search recursively) and fork mode Class. However, this still runs each collected test class sequentially, and I see no options for parallelization. This is so close to what I want, but not quite... :(
Instead of using IntelliJ's (Android Studio) built-in JUnit run configurations, I noticed that Android Studio ships a bunch of pre-built Gradle tasks, some of which relate to testing. Those, however, exhibited the same sequential execution problem. I then found Run parallel test task using gradle and added the following to my root build.gradle file:
subprojects {
    tasks.withType(Test) {
        maxParallelForks = Runtime.runtime.availableProcessors()
    }
}
This works great; my CPU is now pegged at 100% for most of the run (utilization obviously drops once the number of outstanding test classes falls below the number of available processors).
The downside to this solution is that it does not integrate with Android Studio's (IntelliJ) pretty JUnit runner UI. So while the Gradle task is progressing, I cannot really see the rate of test completion, etc. At the end of the task execution, it just spits out the total runtime and a link to a generated HTML report. This is a minor point and I can totally live with it, but it would be nice if I could figure out how to improve the solution to use the JUnit runner UI.
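One note on the original one-JVM-per-class requirement: maxParallelForks only controls how many test JVMs run concurrently, while Gradle's forkEvery setting controls how many test classes each forked JVM executes before it is replaced. A sketch combining the two, trading JVM startup overhead for full isolation:
subprojects {
    tasks.withType(Test) {
        maxParallelForks = Runtime.runtime.availableProcessors()
        forkEvery = 1   // restart the forked JVM after every test class
    }
}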
Maybe this was not possible when the question was posted, but now you can do it easily in Android Studio.
I am using the Gradle build tools: 'com.android.tools.build:gradle:2.2.3'
And I added the following to my root build.gradle file:
allprojects {
    // ...
    tasks.withType(Test) {
        maxParallelForks = Runtime.runtime.availableProcessors()
    }
}
Now I have multiple Gradle Test Executor runners for my tests. The more cores your machine has, the more executors you get!
Thanks for sharing your original answer!
It may sound counterintuitive, but running a lower number of forks can actually be faster than running on all available cores.
For me this setup is 30s faster (1:50 instead of 2:20) for the same tests, compared to using all available processors (8-core CPU, 16 threads):
subprojects {
    tasks.withType(Test) {
        // use half the processors; intdiv(2) yields 0 on a single-core
        // machine, so the elvis operator falls back to 1 fork
        maxParallelForks = Runtime.runtime.availableProcessors().intdiv(2) ?: 1
    }
}

Run UnitTest several times - not just loop it

I have a VS unit test (MSTest) covering some multithreading code. Without the required locks, the code fails the test 15 out of 30 runs, i.e. half the runs. I fix the code and the test passes 30 times.
Now I want the test framework to run the test N times, not just once, to be sure it can't pass merely out of luck. There doesn't seem to be any way to do that with an attribute on the test, so I put a loop INSIDE the test to run the test body N times.
I remove the fixes to the code (locks, etc.) and run the test (which loops N times) - and bam, it passes. I run it again (looping N times) and it fails... I'm back where I started - it still fails only half the time (even though it does N loops through the test body on each run).
What I really want is not to loop inside the test, but to have the test framework load the test, run it, and unload it N times (as I did by hand originally). How can I do that? (Why isn't there simply a test attribute for this, like [TestRepeatCount=5]?)
What you really need is a Load Test. Add your unit test(s) to it and configure how it should run.
You can set the total number of tests, the number of concurrent tests, etc.
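As a lighter-weight alternative to a Load Test (not the full load/run/unload cycle the question asks for, but it does give N separately reported executions), MSTest v2's data-driven attributes can repeat a test. A minimal sketch, with illustrative row values and a hypothetical helper standing in for the original test body:
[DataTestMethod]
[DataRow(1)]
[DataRow(2)]
[DataRow(3)]   // one DataRow per desired repetition
public void ThreadingCode_IsSafe(int run)
{
    // RunThreadingScenario() is a hypothetical stand-in for the test body;
    // each DataRow executes and is reported as its own test result
    RunThreadingScenario();
}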

Automated Test case Execution - when to stop

We have around 100 test cases for our system. We are trying to build an automated test suite for it.
Say that while running the tests, the 25th test fails. Should our automated test system bail out and stop execution there, or should it just mark that test as failed and continue with test case 26 onwards (that is, every test cycle executes all 100 test cases irrespective of any failures)?
Of course, if the system needs to be reset after a failed test case (for example no. 25) in order to execute test cases 26 onwards, that will be taken care of.
Thanks
James
If your tests are independent, you should finish all of them. This way you can monitor system stability and see all of the problems at once, without re-running the tests countless times.
If this is running without human intervention, say as part of some automated build, I would want to attempt all tests.
However, there are scenarios where you're in problem-fixing mode and stopping immediately might save a human's time. If it's easy, I'd offer a "Stop on first failure" option.
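A minimal sketch of such an option in a hand-rolled harness (the test model here is illustrative: each case is a named action that throws on failure):
using System;
using System.Collections.Generic;

static class SuiteRunner
{
    public static void Run(IEnumerable<(string Name, Action Body)> tests, bool stopOnFirstFailure)
    {
        foreach (var (name, body) in tests)
        {
            try
            {
                body();                                // system reset-and-run would happen here
                Console.WriteLine($"PASS {name}");
            }
            catch (Exception e)
            {
                Console.WriteLine($"FAIL {name}: {e.Message}");
                if (stopOnFirstFailure) break;         // fail fast to save a human's time in fix mode
            }
        }
    }
}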

Using Post-Build Event To Execute Unit Tests With MS Test in .NET 2.0+

I'm trying to set up a post-build event in .NET 3.5 that will run a suite of unit tests with MSTest. I found a post that shows how to call a bat file using MbUnit, but I'd like to know whether anyone has done this kind of thing with MSTest.
If so, I would be interested in a sample of what the bat file would look like.
We were using NUnit in the same style and decided to move to MSTest. When doing so, we just added the following to the post-build event of the applicable MSTest project:
CD $(TargetDir)
"$(DevEnvDir)MSTEST.exe" /testcontainer:$(TargetFileName)
The full set of MSTest command line options can be found at the applicable MSDN site.
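If you'd rather keep this in a standalone bat file, as the question asks, a minimal sketch (the script name and argument order are illustrative; MSTest returns a non-zero exit code when tests fail, which in turn fails the build). The post-build event would call it like this:
call "$(ProjectDir)run_tests.bat" "$(TargetDir)" "$(TargetFileName)" "$(DevEnvDir)"
And run_tests.bat itself:
REM run_tests.bat <target-dir> <test-assembly> <devenv-dir>  -- illustrative script
CD /D %~1
"%~3MSTEST.exe" /testcontainer:%~2
IF ERRORLEVEL 1 EXIT /B 1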
Personally, I would not recommend running unit tests as part of the compilation process. Instead, consider something like ReSharper (plus the appropriate unit test runner, or whatever they are called these days) or some other GUI runner.
Instead of doing it in a post-build event, which will run every time you compile, I would look at setting up a continuous integration server such as CruiseControl.NET. It will give you a tight feedback cycle without blocking your work by running tests every time you build your application.
If you want to run the set of tests you are currently developing, Anton's suggestion of using ReSharper will work great. You can create a subset of tests to execute whenever you wish, and it's smart enough to compile for you if needed. While you're there picking up the demo, if you don't already have a license, also take a look at TeamCity, another CI server that shows promise.
If you want to use this method to control build quality, you'll probably find that as the number of tests grows, you no longer want to wait for 1000 tests to run each time you press F5 to test a change.