I'm creating unit tests using CustomWebApplicationFactory. My scenario is the following:
Two test classes (Test1 and Test2). Both use a CustomWebApplicationFactory pointing to a .NET Core API Startup.
If I run all tests from class Test1, everything is fine. Same if I run all tests from class Test2.
The problem comes when I run all tests at the same time: tests crash for various reasons.
I suspect the CustomWebApplicationFactory instances are sharing something between the two test classes when they run at the same time.
CustomWebApplicationFactory doesn't spin up two instances of the Startup API. I'm not sure what is happening, but it seems both test classes are sharing static fields or something similar.
Is there any way to start two completely different instances? Or another way to do this?
My goal is to have multiple test classes, all pointing to the same API, and to run all tests at the same time without errors between them.
Thanks
I have faced the same issue multiple times with the WebApplicationFactory. I know you mention you would like to run all the tests at the same time, but the solution that has worked for me is to only run the tests sequentially.
You can add an AssemblyInfo.cs file to your integration test project at:
<ProjectPath>\Properties\AssemblyInfo.cs
With this inside:
using Xunit;
[assembly: CollectionBehavior(DisableTestParallelization = true)]
This will force your tests to run sequentially and should clear the errors you see.
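For completeness, a minimal sketch of how each test class might consume the factory through xUnit's IClassFixture; the CustomWebApplicationFactory<Startup> generic parameter and the route are illustrative assumptions about your setup:

using System.Net.Http;
using System.Threading.Tasks;
using Xunit;

public class Test1 : IClassFixture<CustomWebApplicationFactory<Startup>>
{
    private readonly HttpClient _client;

    public Test1(CustomWebApplicationFactory<Startup> factory)
    {
        // xUnit creates one factory (and so one in-memory TestServer) per test class.
        _client = factory.CreateClient();
    }

    [Fact]
    public async Task Get_Root_ReturnsSuccess()
    {
        var response = await _client.GetAsync("/");
        response.EnsureSuccessStatusCode();
    }
}

With parallelization disabled as above, the classes run one after another and can no longer interfere through shared static state.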
I have implemented unit tests for my MVC Application using the NUnit Framework.
The unit testing project contains multiple [TestFixture] classes that need to be tested. I'm using a TestFixture for each module in my MVC project, but all of the modules are inside a single namespace.
I would like to be able to test a single module through its [TestFixture] using the unit test tool in FinalBuilder (automation) or manually, instead of testing the whole unit test project.
My total test case count is around 2000, made up of 20 modules each with about 100 test cases. Any change made to one of the modules or its TestFixture means that only that module's tests need to be run, which minimises the time spent waiting for unrelated tests to complete.
As far as manual testing goes, in the NUnit GUI you can simply select a test fixture and run it. I haven't used FinalBuilder, but assuming it's similar to the nunit-console application, you have a couple of options.
You can pass arguments to specify particular test cases, for example:
nunit-console some.dll /run=SomeTestFixture
Or, you can mark up your test fixtures with categories and then tell the test runner to only include tests in those categories, so in your code:
[TestFixture]
[Category("SomeTestCategory")]
public class SomeTestClass
{
    // test methods...
}
Then in your nunit call:
nunit-console some.dll /include=SomeTestCategory
That said, assuming that your tests are actually unit tests, 2000 doesn't seem like that many and they shouldn't really be taking all that long to run...
When using WebStorm as a test runner, every unit test is run. Is there a way to specify running only one test? Even running only one test file would be better than the current situation of running all of them at once. Is there a way to do this?
I'm using Mocha.
Not currently possible; please vote for WEB-10067.
You can double up the i on it or the d on describe and the runner will run only that test/suite. If you prefix it with x it will exclude it.
There is a plugin called ddescribe that gives you a GUI for this.
You can use the --grep <pattern> command-line option in the Extra Mocha options box on the Mocha "Run/Debug Configurations" screen. For example, my Extra Mocha options line says:
--timeout 5000 --grep findRow
All of your test *.js files, and the files they require, still get loaded, but the only tests that get run are the ones that match that pattern. So if the parts you don't want to execute are tests, this helps you a lot. If the slow parts of your process automatically get executed when your other modules get loaded with require, this won't solve that problem. You also need to go into the configuration options to change the pattern every time you want to run tests matching a different one, but this is quick enough that it definitely saves me time versus letting all my passing tests run every time I want to debug one failing test.
When you have a Mocha run configuration set up, you can run only the tests within a scope by using .only on either the describe or the it clauses.
I had some problems getting it to work all the time; when it went crazy and kept running all my tests, ignoring the .only or .skip, I added the path to one of the files containing unit tests to the extra Mocha options, just like in the example for the Node setup, and suddenly the .only feature started working again, regardless of which file the tests were in.
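For illustration, a minimal sketch (the suite and test names are hypothetical):

describe('findRow', function () {
  // Only this test runs; its siblings in the suite are skipped.
  it.only('returns the matching row', function () {
    // ...
  });

  it('ignores case when matching', function () {
    // ...
  });
});

describe.only works the same way at the suite level.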
I have a big mess with 100 tests in one class, and I run all of them by clicking "Test Project (...)". They run in a random order, and I would like them to run in a specific order - from beginning to end, in the same order I wrote them. In Eclipse this isn't a problem because Eclipse just works that way; how do I do it in NetBeans?
Any help will be appreciated.
Edit (in response to answers): the test order is only needed to keep the log clear. The tests are independent.
If your tests need to run in a specific order, something is wrong with your design.
Two tests that need to run one after another are really one test. Consider this before searching for a solution.
Check this: https://blogs.oracle.com/mindless/entry/controlling_the_order_of_junit
Having tests depend on other tests is, 99.9% of the time, a very bad idea. Unit tests should be independent of each other; otherwise you might get a cascade of errors, or (even worse) one test failing because of something another test did some time before.
If you still want to go through this pain, you'll need to use a different unit testing framework (such as TestNG - see dependsOnMethods) which supports test dependencies.
JUnit doesn't support this feature because it's seen by many as a bad practice (for very good reasons).
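For reference, a minimal TestNG sketch of dependsOnMethods (the method names are hypothetical):

import org.testng.annotations.Test;

public class OrderedTests {

    @Test
    public void createRecord() {
        // Runs first.
    }

    // Runs only after createRecord has passed; skipped if it fails.
    @Test(dependsOnMethods = "createRecord")
    public void updateRecord() {
        // ...
    }
}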
The next JUnit release will support ordering of test methods. The standard Maven Surefire Plugin supports ordering of test methods already.
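This landed as @FixMethodOrder in JUnit 4.11; a sketch (the method names are hypothetical):

import org.junit.FixMethodOrder;
import org.junit.Test;
import org.junit.runners.MethodSorters;

// Runs the test methods in lexicographic order of their names.
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class OrderedLogTests {

    @Test
    public void step1_openConnection() { /* ... */ }

    @Test
    public void step2_sendCommand() { /* ... */ }
}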
NetBeans has good integration with Ant build files. You could write a specific Ant target that executes each test in order.
I have a problem with executing tests in JUnit. Imagine you have one test case class with, for example, 100 tests, no test suite and no main program - the test case class tests a device on a COM port. The JUnit project is in NetBeans. I want to run tests - but not all of them at the same time; I would like to choose which tests to run before the actual testing.
Once I saw something like that in Eclipse - but it wasn't my project, and I don't know how it was done or how to do the same thing in NetBeans. It was a separate window popping up before the tests ran. In this window there were checkboxes with the names of the methods with the @Test annotation, and you could choose the tests you wanted to run and click Run - so it let you run exactly what you wanted.
Does anyone know how to do this in NetBeans? Is there a library or plugin for it?
Any help will be appreciated.
You can take a look at Run single test from a JUnit class using command-line. It does allow you to specify which test you want to run, given a class with multiple test cases in it. Being command-line, you can then script your own test suite that runs the specific ones you want.
I also noticed your other question, Junit: changing sequence of test running. With the scripting approach you can actually control the order of your testing.
This approach does not take advantage of Eclipse's or NetBeans' JUnit test runners, though, so it is a very specific workaround.
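A minimal sketch of that scripting approach with JUnit 4 (the class and method names are hypothetical):

import org.junit.runner.JUnitCore;
import org.junit.runner.Request;
import org.junit.runner.Result;

public class SingleTestRunner {
    public static void main(String[] args) {
        // Run exactly one @Test method from a class containing many tests.
        Request request = Request.method(DeviceComPortTest.class, "testHandshake");
        Result result = new JUnitCore().run(request);
        System.out.println("Failures: " + result.getFailureCount());
    }
}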
NetBeans nowadays supports running single tests.
I have a test setup where I have many very similar unit tests that I need to run. For example, there are about 40 stored procedures that need to be checked for existence in the target environment. However, I'd like all the tests to be grouped by their business unit, so there'd be 40 instances of a very similar TestMethod in 40 separate classes. Kinda lame. One other thing: each group of tests needs to be in its own solution, so Business Unit A will have a solution called Tests.BusinessUnitA.
I'm thinking that I can set this all up by passing a configuration object (with the name of the stored proc to check, among other things) to a TestRunner class.
The problem is that I'm losing the atomicity of my unit tests. I wouldn't be able to run just one of the tests, I'd have to run all the tests in the TestRunner class.
This is what the code looks like at this time. Sure, it's nice and compact, but if Test 8 fails, I have no way of running just Test 8.
// Hand-rolled runner: reflect over TestRunner and invoke every [TestMethod].
TestRunner runner = new TestRunner(config, this.TestContext);
var runnerType = typeof(TestRunner);
var methods = runnerType.GetMethods()
    .Where(x => x.GetCustomAttributes(typeof(TestMethodAttribute), false).Any())
    .ToArray();
foreach (var method in methods)
{
    // Every method is invoked; there is no way to run just one of them.
    method.Invoke(runner, null);
}
So I'm looking for suggestions for making a group of unit tests that takes a configuration object but won't require me to generate many, many TestMethods. This looks like it might require code generation, but I'd like to solve it without that.
Can you use Data Driven Tests?
http://msdn.microsoft.com/en-us/library/ms182519(VS.80).aspx
You can then exclude rows that previously passed...
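A minimal sketch of what that could look like with MSTest's CSV data source; the file name, column name, and the Database.ProcedureExists helper are all hypothetical:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class StoredProcTests
{
    public TestContext TestContext { get; set; }

    // procs.csv is assumed to list the ~40 procedure names,
    // one per row, under a StoredProcName column.
    [TestMethod]
    [DeploymentItem("procs.csv")]
    [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV",
                "|DataDirectory|\\procs.csv", "procs#csv",
                DataAccessMethod.Sequential)]
    public void StoredProcedure_Exists()
    {
        string procName = TestContext.DataRow["StoredProcName"].ToString();
        // Replace with your real existence check against the target environment.
        Assert.IsTrue(Database.ProcedureExists(procName), procName);
    }
}

Each data row is reported individually in the run details, so one failing procedure is visible without writing a separate TestMethod per procedure.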
My solution to this was to put all the tests in their own assembly, decorate them with attributes, then reflect through the assembly to execute the tests. It made more sense than data-driven tests.