Azure DevOps Pipeline: Tests that pass locally fail on the pipeline

Our team is trying to implement an Azure DevOps pipeline, but we are having some issues. For background, we are running the tests on our self-hosted Windows server. We got all of the front-end tests to work, but now we are having trouble getting the back-end tests to pass.
While the majority of the tests pass just fine, we have a few instances where things are not working as expected. Although the tests should be able to run in parallel, I turned that off to verify it wasn't the issue. I've also tried running them in isolation without any noticeable change.
The errors we get are just simple assert failures, like the one below, that don't give us much information.
Error Message:
Assert.AreEqual failed. Expected:<True>. Actual:<False>.
But the line numbers point to a mock setup like the one below, which I don't understand.
mockQueryHandler.Setup(x => x.Handle(It.IsAny<FindQuery>())).Returns(info);
Currently, the test class giving us the most problems is the one that tests our event alert functionality. The first test will pass, but the subsequent ones fail. However, we have the pipeline set up to rerun any failed tests three times before giving up, so on every rerun the next test will pass. The only thing I can think of is that our auto mocks are giving us trouble for some reason. We set them up in accordance with
AutoMock - How to unit test with Keyed Registrations?
We mock it out as follows:
var IAbstractEventFactoryMock = new Mock<IAbstractEventFactory>();
using (var mock = AutoMock.GetLoose(builder => builder
    .RegisterInstance(IAbstractEventFactoryMock.Object)
    .Keyed<IAbstractEventFactory>("EventLogFactory")
    .Keyed<IAbstractEventFactory>("ArpEventLogFactory")))
{
    ...
}
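One thing I am considering, sketched below with hypothetical class and method names rather than our exact code, is making sure the mock is recreated for every test (for example in [TestInitialize]) so that no setups or recorded calls can leak from one test into the next, since shared mock state would match the "first test passes, the rest fail" pattern:

using Autofac.Extras.Moq;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

[TestClass]
public class EventAlertTests // hypothetical test class name
{
    // Recreated before every test so no setups or recorded calls carry over.
    private Mock<IAbstractEventFactory> _factoryMock;

    [TestInitialize]
    public void TestInitialize()
    {
        _factoryMock = new Mock<IAbstractEventFactory>();
    }

    [TestMethod]
    public void Handles_Event_Alert() // hypothetical test
    {
        using (var mock = AutoMock.GetLoose(builder => builder
            .RegisterInstance(_factoryMock.Object)
            .Keyed<IAbstractEventFactory>("EventLogFactory")
            .Keyed<IAbstractEventFactory>("ArpEventLogFactory")))
        {
            // resolve the system under test from `mock` and assert as usual
        }
    }
}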
Below is our .runsettings file
<RunSettings>
  <RunConfiguration>
    <ResultsDirectory>.\TestResults</ResultsDirectory>
  </RunConfiguration>
  <MSTest>
  </MSTest>
</RunSettings>
Below is the YAML we are using to run the tests
steps:
- task: VSTest@2
  displayName: 'Run All Tests'
  inputs:
    testAssemblyVer2: |
      **\*ID_Test*.dll
      !**\*TestAdapter.dll
      !**\obj\**
    searchFolder: '$(System.DefaultWorkingDirectory)/ID_Test'
    runSettingsFile: 'ID_Test/.runsettings'
    runInParallel: false
    runTestsInIsolation: true
    codeCoverageEnabled: false
    testRunTitle: 'Unit Tests'
    platform: '$(BuildPlatform)'
    configuration: '$(BuildConfiguration)'
    diagnosticsEnabled: true
    rerunFailedTests: true
  continueOnError: true
Again, these tests all pass without problems locally, and they eventually pass if I run them enough times on the pipeline, so I don't think the problem is in the code itself. It seems that the tests aren't getting set up or cleaned up properly. I am primarily looking for ideas on other ways I can configure my YAML or runsettings to try to fix this issue. Thank you in advance for any help, and let me know if I can provide any additional information.

Turned out this was just an issue with our tests. In a few places we used the current time, which was fine locally, but execution time on the build server was much longer, which caused a few failures. Testing the pipeline on an agent whose execution time was similar to running the tests locally helped me discover that it wasn't an issue with the pipeline itself.
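For anyone who hits the same class of failure, here is a minimal sketch of one way to make such tests deterministic, assuming C#/MSTest; the ISystemClock, SystemClock, and FixedClock names below are made up for illustration and are not our actual types. The idea is to inject the current time instead of reading DateTime.UtcNow (or DateTime.Now) directly, so the agent's speed can no longer change the outcome.

using System;

// Hypothetical clock abstraction so tests can pin "now" to a fixed value.
public interface ISystemClock
{
    DateTime UtcNow { get; }
}

// Used in production: simply forwards to the real clock.
public class SystemClock : ISystemClock
{
    public DateTime UtcNow => DateTime.UtcNow;
}

// Used in tests: always returns the same instant, so the elapsed time
// between test setup and the assertion no longer matters.
public class FixedClock : ISystemClock
{
    private readonly DateTime _now;
    public FixedClock(DateTime now) => _now = now;
    public DateTime UtcNow => _now;
}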

You can use the Visual Studio Test Platform Installer task to run tests without needing a full Visual Studio installation. You can try adding this task before the VSTest task and selecting an appropriate version. Then, in the VSTest task, choose "Installed by Tools Installer" under Test platform version.

I had the same problem, and I found out why it fails on the DevOps pipeline: the DateTime format is different from the one on my local machine. For example:
date.ToShortDateString() generates a string based on the current culture's format. I changed it to use an explicit format instead: date.ToString("dd/MM/yyyy", CultureInfo.InvariantCulture).
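A small self-contained sketch of the difference, assuming .NET (the date value and program are made up for illustration; the outputs shown are what the standard formats typically produce under different cultures):

using System;
using System.Globalization;

class CultureDemo
{
    static void Main()
    {
        var date = new DateTime(2020, 1, 31);

        // Culture-dependent: e.g. "1/31/2020" on an en-US agent, "31/01/2020" on en-GB.
        Console.WriteLine(date.ToShortDateString());

        // Culture-invariant: always "31/01/2020", regardless of the agent's locale.
        Console.WriteLine(date.ToString("dd/MM/yyyy", CultureInfo.InvariantCulture));
    }
}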
Hope this will help.

Related

Collect and run all junit tests in parallel with each test class in its own JVM (parallelization by class, not by method)

Problem
I have a bunch of JUnit tests (many with custom runners such as PowerMockRunner or JUnitParamsRunner), all under some root package tests (they are in various subpackages of tests at various depths).
I'd like to collect all the tests under package tests and run each test class in a different JVM, in parallel. Ideally, the parallelization would be configurable, but a default of number_of_cores is totally fine as well. Note that I do not want to run each method in its own JVM, but each class.
Background
I'm using PowerMock combined with JUnitParams via the annotations @RunWith(PowerMockRunner.class) and @PowerMockRunnerDelegate(JUnitParamsRunner.class) for many of my tests. I have ~9000 unit tests which complete in an "ok" amount of time, but I have an 8-core CPU and the system is heavily underutilized with the default single-test-at-a-time runner. As I run the tests quite often, the extra time adds up, and I really want to run the test classes in parallel.
Note that, unfortunately, in a good number of the tests I need to mock static methods which is part of the reason I'm using PowerMock.
What I've Tried
Having to mock static methods makes it impossible to use something like com.googlecode.junittoolbox.ParallelSuite (which was my initial solution) since it runs everything in the same JVM and the static mocking gets all interleaved and messed up. Or so it seems to me at least based on the errors I get.
I don't know the JUnit stack at all, but after poking around, it appears that another option might be to write and inject my own RunnerBuilder. However, I'm not sure I can even spawn another JVM process from within a RunnerBuilder; it seems unlikely. I think the proper solution would be some kind of harness that lives as a Gradle task.
I also JUST discovered some Android Studio (IntelliJ) test options, but the only available fork option is method, which is not what I want. I am currently exploring this solution, so perhaps I will figure it out, but I thought I'd ask the community in parallel since I haven't had much luck yet.
UPDATE: I was finally able to get Android Studio (IntelliJ) to collect all my tests using Test Kind: All in directory (for some reason the package option did not search recursively) and picking fork mode Class. However, this still runs each discovered test class sequentially, and there are no options that I can see for parallelization. This is so close to what I want, but not quite... :(
Instead of using IntelliJ's (Android Studio's) built-in JUnit run configurations, I noticed that Android Studio ships a bunch of pre-built Gradle tasks, some of which relate to testing. Those, however, exhibited the same sequential-execution problem. I then found Run parallel test task using gradle and added the following to my root build.gradle file:
subprojects {
    tasks.withType(Test) {
        maxParallelForks = Runtime.runtime.availableProcessors()
    }
}
This works great; my CPU is now pegged at 100% for most of the run (utilization obviously drops once the number of outstanding test classes falls below the number of available processors).
The downside to this solution is that it does not integrate with Android Studio's (IntelliJ's) pretty JUnit runner UI. So while the Gradle task is progressing I cannot really see the rate of test completion, etc. At the end of the task execution, it just spits out the total runtime and a link to a generated HTML report. This is a minor point and I can totally live with it, but it would be nice if I could figure out how to improve the solution to use the JUnit runner UI.
Maybe this was not possible when the question was posted, but now you can do it easily in Android Studio.
I am using the Gradle build tools: 'com.android.tools.build:gradle:2.2.3'
And I added the following to my root build.gradle file.
allprojects {
    // ...
    tasks.withType(Test) {
        maxParallelForks = Runtime.runtime.availableProcessors()
    }
}
Now I have multiple Gradle Test Executor runners for my tests. The more cores your machine has, the more executors you get!
Thanks for sharing your original answer!
It may sound counterintuitive, but running a lower number of forks can actually be faster than running on all available cores.
For me this setup is 30 seconds faster (1:50 instead of 2:20) for the same tests, compared to using all available processors (8-core CPU, 16 threads):
subprojects {
    tasks.withType(Test) {
        maxParallelForks = Runtime.runtime.availableProcessors().intdiv(2) ?: 1
    }
}

Run unit tests which fail during TFS-Build

I have a huge number of test cases running during the TFS build process.
Is there a way to rerun, on my local machine, all the test cases that fail on TFS? Maybe via configuration or an extension?
My problem is that it takes quite a while to run all the tests again, so I would like to run just the ones that fail.
The second problem is that the TFS build sometimes fails tests that work locally, so I'd like to figure out which ones I really broke.
I've never seen anything like this. I do think it would be possible to write a VS extension that pulls the test results from TFS, creates a test list file with all the failed tests, and then loads that in VS to rerun only the failed tests.
I wrote a simple extension and it wasn't that bad: http://dotnetcatch.com/2014/09/08/parameterizationpreview-visual-studio-extension/
I've tried the exact same thing. However, rerunning the tests locally didn't change anything; the tests still passed locally (even after more than 1000 tries) but failed sporadically on TFS. (For that part I just put the tests in a for-loop.)
Check your log on TFS (or post it here); the log should tell you what failed in the tests, and the failing tests may need to be reconsidered or refactored. The fact that they pass locally doesn't mean they are right, if that makes sense. So my suggestion would be: check the log, rewrite the tests, and try again.

Dart unittest suite never ending on drone.io

I have continuous integration on drone.io for my Dart projects. Normally there aren't any issues with this, apart from actual bugs in my code, but my latest tests are all passing and the test suite reports that it completed successfully, yet the drone.io test runner never exits; it just keeps running until it times out and reports the build as failed. Has anyone else run into anything similar, or does anyone know how to fix it? Here is the build; if you kick off a new build from the big-refactor-and-enhancement branch, that is where it shows this odd behaviour.
After a quick look at your code, I would bet that the server launched under the covers is not shut down. You should add a close() method to it and call it in _tearDown().

TFS 2010 Team Build overnight unit tests have stopped completing (due to assertion failures?)

The automated build completes (when 'Disable Tests' is set to true in the build definition), but when I enable tests the build doesn't complete.
I'm building as Debug/AnyCPU. I copied and pasted the MSTest command line and ran it in a shell on the build server, and I got some assertion failures. Thus, I think the server is waiting for a response to ignore/retry these assertions; does anyone know how I can fix this?
If you want to use assertions in your unit testing, I would recommend using the unit test framework's Assert class instead of Debug.Assert.
See this method for more details:
http://msdn.microsoft.com/en-us/library/microsoft.visualstudio.testtools.unittesting.assert.fail.aspx
You have different ways to assert (AreEqual, AreNotEqual, IsTrue, etc.).
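As a rough sketch of the difference (the class and method below are invented purely for illustration): Debug.Assert in a Debug build pops an assertion dialog, which on an unattended build server just hangs waiting for input, while the MSTest Assert methods simply mark the test as failed and let the run continue.

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CalculationTests // hypothetical example class
{
    [TestMethod]
    public void Total_Is_Sum_Of_Line_Items()
    {
        var total = 2 + 3; // stand-in for the real calculation under test

        // Debug.Assert(total == 5) would block an unattended build server
        // waiting for someone to dismiss the assertion dialog.
        // These MSTest assertions just fail the test and keep the run going:
        Assert.AreEqual(5, total);
        Assert.IsTrue(total > 0);
        Assert.AreNotEqual(0, total);
    }
}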
Hope this helps.
Unit test using a release build on your server; a release build won't have any Debug.Assert calls in it.
What do you mean by "the build doesn't complete"? The build log will tell you what the most recent action was. You might want to set the build's logging level to Detailed to see more information. Also, one issue might be that you have configured the build to fail if tests fail. In that case you could add the Ignore attribute to those tests:
[TestMethod]
[Ignore]
public void TestMethodThatFails()
{
    // ...
}
Of course you should fix those tests rather than ignore them, but that is not the subject of this question.

Using Post-Build Event To Execute Unit Tests With MS Test in .NET 2.0+

I'm trying to set up a post-build event in .NET 3.5 that will run a suite of unit tests with MSTest. I found this post that shows how to call a bat file using MbUnit, but I want to see if anyone has done this type of thing with MSTest.
If so, I would be interested in a sample of what the bat file would look like.
We were using NUnit in the same style and decided to move to MSTest. When doing so, we just added the following to the Post-Build event of the applicable MSTest project:
CD $(TargetDir)
"$(DevEnvDir)MSTEST.exe" /testcontainer:$(TargetFileName)
The full set of MSTest command line options can be found at the applicable MSDN site.
Personally, I would not recommend running unit tests as part of the compilation process. Instead, consider something like ReSharper (plus the appropriate unit test runner, or whatever they call these nowadays) or some other GUI runner.
Instead of doing it in a post-build event, which will run every time you compile, I would look at setting up a continuous integration server like CruiseControl.NET. It will give you a tight feedback cycle without blocking your work by running tests every time you build your application.
If you want to run the set of tests you are currently developing, Anton's suggestion of using ReSharper will work great. You can create a subset of tests to execute whenever you wish, and it's smart enough to compile for you if it needs to. While you're there picking up the demo (if you don't already have a license), also pick up TeamCity; it is another CI server that shows some promise.
If you want to use this method to control build quality, you'll probably find that as the number of tests grows, you no longer want to wait for 1000 tests to run each time you press F5 to test a change.