Slow startup time for @MicronautTest - unit-testing

I have been building a Java 8 project with Micronaut and thought I would try using @MicronautTest for the unit tests. I found that each class I annotate with @MicronautTest adds about 2 minutes 30 seconds to the time it takes to run the tests. I saw the same behavior in my own project, in the Micronaut Java example project, and in a simple project I created with mn create-app.
I'm just curious whether there is something else in my environment on this computer that could be slowing this down. If I start the whole app, it comes up and starts handling requests in under 10 seconds.
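For reference, this is the shape of test being measured; a minimal sketch assuming Micronaut Test 1.x with JUnit 5 on Java 8 (the class name is hypothetical, and the @MicronautTest package moved in later versions). Every class annotated this way boots its own application context, which is where the per-class startup cost comes from:

import io.micronaut.runtime.server.EmbeddedServer;
import io.micronaut.test.annotation.MicronautTest;
import org.junit.jupiter.api.Test;

import javax.inject.Inject;

import static org.junit.jupiter.api.Assertions.assertTrue;

@MicronautTest // boots a full application context for this test class
class AppStartupTest {

    @Inject
    EmbeddedServer server; // requesting this also starts the HTTP server

    @Test
    void contextAndServerAreRunning() {
        assertTrue(server.isRunning());
    }
}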

Related

Run Functional Tests step in vNext TFS 15 is awfully slow

Running functional tests in TFS 15 with vNext, compared to the old system with MTM and test environments, is awfully slow. It takes something like 10 minutes after the initial test start before the first tests begin, and while running, the tests take longer than normal.
The distribution of tests is also somewhat "unhappy": tests are distributed once at the beginning of the test run, so one machine can be finished while the other still has 5 long test runs queued, which doesn't make sense. The bucket-size approach was a far more intelligent system.
Is there a way to improve this? We have updated to RC2 and we are not happy with the test outcome. It feels like the test task is a bottleneck.
OK, so the situation is that the tests are distributed at the start of the test run and the test runner itself now works differently, unlike the bucket system, which handed tests out one after another.
Also, tests only show whether they failed or finished AFTER they are done, so if 200 tests are distributed, the outcome only appears once all 200 are complete.
Kinda awkward...

Collect and run all JUnit tests in parallel with each test class in its own JVM (parallelization by class, not by method)

Problem
I have a bunch of JUnit tests (many with custom runners such as PowerMockRunner or JUnitParamsRunner), all under a root package tests (they are in various subpackages of tests at various depths).
I'd like to collect all the tests under the package tests and run each test class in a different JVM, in parallel. Ideally the parallelism would be configurable, but a default of number_of_cores is totally fine as well. Note that I do not want to run each method in its own JVM, only each class.
Background
I'm using PowerMock combined with JUnitParams via the annotations @RunWith(PowerMockRunner.class) and @PowerMockRunnerDelegate(JUnitParamsRunner.class) for many of my tests. I have ~9000 unit tests which complete in an "ok" amount of time, but I have an 8-core CPU and the system is heavily underutilized with the default one-test-at-a-time runner. As I run the tests quite often, the extra time adds up, and I really want to run the test classes in parallel.
Note that, unfortunately, in a good number of the tests I need to mock static methods, which is part of the reason I'm using PowerMock.
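For concreteness, here is a sketch of the shape these tests take; SomeUtility is a hypothetical stand-in for a class whose static methods need mocking:

import junitparams.JUnitParamsRunner;
import junitparams.Parameters;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.powermock.api.mockito.PowerMockito;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;
import org.powermock.modules.junit4.PowerMockRunnerDelegate;

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.when;

// Hypothetical class under test, standing in for real code with static methods.
class SomeUtility {
    static int doubleIt(int x) { return x * 2; }
}

@RunWith(PowerMockRunner.class)                   // PowerMock drives the class...
@PowerMockRunnerDelegate(JUnitParamsRunner.class) // ...and delegates to JUnitParams
@PrepareForTest(SomeUtility.class)                // required for static mocking
public class SomeUtilityTest {

    @Test
    @Parameters({"1, -2", "10, -20"})
    public void staticCallCanBeStubbed(int input, int stubbed) {
        PowerMockito.mockStatic(SomeUtility.class);
        when(SomeUtility.doubleIt(input)).thenReturn(stubbed);

        // The stub, not the real implementation, answers the call.
        assertEquals(stubbed, SomeUtility.doubleIt(input));
    }
}

Because the stubbing rewires class-level state, two such classes sharing one JVM can see each other's stubs, which is consistent with the interleaving described below.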
What I've Tried
Having to mock static methods makes it impossible to use something like com.googlecode.junittoolbox.ParallelSuite (which was my initial solution), since it runs everything in the same JVM and the static mocking gets interleaved and messed up. Or so it seems to me, at least, based on the errors I get.
I don't know the JUnit stack at all, but after poking around, it appears another option might be to write and inject my own RunnerBuilder. I'm not sure I could even spawn another JVM process from within a RunnerBuilder, though; it seems unlikely. I think the proper solution would be some kind of harness that lives as a Gradle task.
I also JUST discovered some Android Studio (IntelliJ) test options, but the only available fork option is method, which is not what I want. I am currently exploring this route, so perhaps I will figure it out, but I thought I'd ask the community in parallel since I haven't had much luck yet.
UPDATE: I finally got Android Studio (IntelliJ) to collect all my tests using Test Kind: All in directory (for some reason the package option did not search recursively) and fork mode Class. However, this still runs each discovered test class sequentially, and I see no options for parallelization. This is so close to what I want, but not quite... :(
Instead of using IntelliJ's (Android Studio's) built-in JUnit run configurations, I noticed that Android Studio ships a bunch of pre-built Gradle tasks, some of which relate to testing. Those, however, exhibited the same sequential-execution problem. I then found Run parallel test task using gradle and added the following to my root build.gradle file:
subprojects {
    tasks.withType(Test) {
        maxParallelForks = Runtime.runtime.availableProcessors()
    }
}
This works great; my CPU is now pegged at 100% for most of the run (once the number of outstanding test classes drops below the number of available processors, utilization obviously goes down).
The downside to this solution is that it does not integrate with Android Studio's (IntelliJ's) pretty JUnit runner UI. While the Gradle task is progressing, I cannot really see the rate of test completion, etc. At the end of the task execution, it just spits out the total runtime and a link to a generated HTML report. This is a minor point and I can totally live with it, but it would be nice if I could figure out how to make the solution use the JUnit runner UI.
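If console feedback is enough, one possible tweak (a sketch using Gradle's standard testLogging options, not something from the original answer) is to stream per-test results to the console as the forks finish:

subprojects {
    tasks.withType(Test) {
        maxParallelForks = Runtime.runtime.availableProcessors()
        // Print each test's result as it completes so progress is
        // visible in the console, even without the IDE's runner UI.
        testLogging {
            events "passed", "failed", "skipped"
        }
    }
}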
Maybe this was not possible when the question was posted, but now you can do it easily in Android Studio.
I am using the Gradle build tools com.android.tools.build:gradle:2.2.3, and I added the following to my root build.gradle file:
allprojects {
    // ...
    tasks.withType(Test) {
        maxParallelForks = Runtime.runtime.availableProcessors()
    }
}
Now I have multiple Gradle Test Executor runners for my tests. The more cores your machine has, the more executors you get!
Thanks for sharing your original answer!
It may sound counterintuitive, but running a lower number of forks can actually be faster than running on all available cores, presumably because half of those "processors" are hyperthreads and each fork is a whole JVM with its own startup and memory overhead.
For me this setup is 30s faster (1:50 instead of 2:20) for the same tests, compared to using all available processors (8-core CPU, 16 threads):
subprojects {
    tasks.withType(Test) {
        maxParallelForks = Runtime.runtime.availableProcessors().intdiv(2) ?: 1
    }
}

How Can I Increase the Timeout for Visual Studio Tests?

I'm working on a pretty sizable suite of tests for some code I'm writing (in Visual Studio 2012). For the most part, running the unit tests is no big deal. But I'm also including a lot of integration tests, which have more external infrastructure dependencies. The number of tests, combined with resetting the infrastructure dependencies between tests, has resulted in a rather lengthy run for the full suite (around 45 minutes at the moment).
Running the tests themselves is no big deal: unit tests will run on check-in, integration tests nightly. However, I'm running into an issue when trying to analyze code coverage for all of the tests. No code coverage results are created, and the output window says the following:
This request operation sent to net.pipe://megara/vstest.discoveryengine/14108 did not receive a reply within the configured timeout (00:30:00). The time allotted to this operation may have been a portion of a longer timeout. This may be because the service is still processing the operation or because the service was unable to send a reply message. Please consider increasing the operation timeout (by casting the channel/proxy to IContextChannel and setting the OperationTimeout property) and ensure that the service is able to connect to the client.
I'm not sure where it's directing me here. I don't use IContextChannel for anything; all of the test running is built into Visual Studio. So I don't really know where or how I can increase any kind of timeout. Does anybody know where I should look?
Try changing the time-out values in your solution's .testsettings file.
If you don't have one, you can add it to the solution by right-clicking the solution -> Add New Item -> Test Settings. In there you can set time-outs for individual tests (the default is 30 minutes) or for the entire test run.
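For reference, a rough sketch of where those timeouts live in a .testsettings file; values are in milliseconds, and the exact schema may differ by Visual Studio version, so treat this as an approximation:

<TestSettings name="Local" xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
  <Execution>
    <!-- runTimeout caps the whole test run; testTimeout caps each individual test -->
    <Timeouts runTimeout="7200000" testTimeout="3600000" />
  </Execution>
</TestSettings>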
It's not clear if this is the root cause or not, but it is worth ruling out.
Old topic, but perhaps some new info: I didn't have any luck setting the timeout in the .testsettings file with Visual Studio 2015. No matter what I set it to in the test settings, my tests would stop after 30 minutes.
There is now a [Timeout(milliseconds)] attribute which can be applied to individual test methods, e.g. [Timeout(3600000)] for an hour. This is even better than .testsettings, since you can fine-tune individual tests to make sure they aren't taking longer than expected.
Unfortunately, I could not get this attribute to have any effect when setting it higher than 30 minutes while a .testsettings file was in use, even if the .testsettings file defined a higher timeout. Values lower than 30 minutes were honored, but higher values would still stop at 30 minutes, regardless of what the .testsettings said.
After I removed the .testsettings file, the Timeout attributes worked as expected: each test runs up to whatever timeout I set.
If you have trouble getting the Timeout attribute to work, try removing .testsettings.

Should I write unit tests as console apps first?

I'm debugging a set of WCF services. Initially, I created some unit tests, but since I'm using threading I often receive "Aborted" or "Stopped" tests without any clear explanation why (this is a known bug in Visual Studio).
I found it extremely challenging to debug the services when I couldn't even read the log output, so I quickly wrote a custom Assert class and converted all the unit tests to console applications. This way I was able to immediately fix a huge number of simple problems that had been hard or impossible to track down before.
So I'm wondering whether it is a good idea to write unit tests as (fully automated) console apps first and convert them to real tests (ones that execute when launching unit tests in VS) later.
If you want to stick with the standalone console app, you can use a one-size-fits-all approach:
Change the application type of the MSTest (or NUnit) project to "Console Application".
Add a public static void Main() that calls the unit tests you are interested in.
The resulting exe can run on its own, or it can run inside the unit-test IDE.
I prefer a standalone console runner, as described in how-do-i-use-mstest-without-visual-studio.

Using Post-Build Event To Execute Unit Tests With MS Test in .NET 2.0+

I'm trying to set up a post-build event in .NET 3.5 that will run a suite of unit tests with MSTest. I found a post that shows how to call a bat file using MbUnit, but I'm wondering whether anyone has done this type of thing with MSTest.
If so, I would be interested in a sample of what the bat file would look like.
We were using NUnit in the same style and decided to move to MSTest. When doing so, we just added the following to the Post-Build event of the applicable MSTest project:
CD $(TargetDir)
"$(DevEnvDir)MSTEST.exe" /testcontainer:$(TargetFileName)
The full set of MSTest command-line options can be found on the applicable MSDN page.
Personally, I would not recommend running unit tests as part of the compilation process. Instead, consider something like ReSharper (plus the appropriate unit test runner, or whatever they call these nowadays) or some other GUI runner.
Instead of doing it in a post-build event, which will happen every time you compile, I would look at setting up a continuous integration server like CruiseControl.NET. It will give you a tight feedback cycle without blocking your work by running tests every time you build your application.
If you want to run the set of tests you are currently developing, Anton's suggestion of using ReSharper will work great. You can create a subset of tests to execute whenever you wish, and it's smart enough to compile for you if it needs to. While you're there picking up the demo, if you don't already have a license, pick up TeamCity as well. It is another CI server that shows some promise.
If you want to use this method to control build quality, you'll probably find that as the number of tests grows, you no longer want to wait for 1000 tests to run every time you press F5 to test a change.