Here's the example: why does it show me duplicated tests for every version, for example tests for .NET Core 6.0 and the same test again with no version? What do I need to do if I want to use only .NET Core, so I don't get duplicates there?
I don't even have an idea what to do in that case; I've tried changing the build process, etc.
I'm a new NUnit user, using NUnit 3.9 under Visual Studio Community 2017. I'm using it on a pet open-source library project, and it's going well now that I've got the hang of it.
The library accesses a publicly available government website via a documented API. Most of my tests use local data, so that I have a stable bed to compare against, and so that I can test without going out to the website every time.
I would like to set it up so that normally, the tests that hit the server do not run. I run the tests over and over as I tweak the code, and just as a matter of courtesy, don't want to bang on the server. Also, I'd like to be able to test even when the remote system is down or when I don't have Internet access.
Is there any way to group or tag my tests so that normally only the ones using local data run, but that I can still, when necessary, run the ones that exercise the server access? Either specifying "run these" or "exclude these" would be fine.
I've grouped the tests into two different classes, UnitTestOffline.cs and UnitTestOnline.cs, and was hoping I could somehow run the tests on a class-by-class basis, but haven't found a way to do that.
You'll get better answers if you say specifically how you run your tests, since there are a number of ways to do it. Since you mention VS2017, I'm going to assume that you are using the NUnit 3 VS Adapter, but let us know if you are using some other approach.
In the VS adapter, use the dropdown to display your tests by class. Right click on the class for which you want to run tests and run them.
If you decide to categorize tests using the CategoryAttribute, you can display tests by "trait" in Visual Studio. As before, right click on the group you want to run tests for and run them.
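For instance, a minimal sketch (the category name "Online" and the test name here are my own placeholders):

using NUnit.Framework;

[TestFixture]
public class UnitTestOnline
{
    [Test]
    [Category("Online")] // shows up as a "trait" in the Visual Studio test explorer
    public void ServerReturnsExpectedData()
    {
        // ... a test that actually hits the remote API would go here ...
        Assert.Pass();
    }
}

Tests without the attribute stay ungrouped, so excluding "Online" leaves your local-data tests running.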
If you get a lot of tests, you might want to put your unit tests in one assembly and your integration tests in another. In that case, display the tests by project, right click on the project you want and run them.
All of this can also be done using the nunit3-console command-line runner. To select by class or category, use the --where option. To select by assembly, you merely enter the name of the assembly you want on the command line.
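For example, assuming the category name "Online" from the sketch above and a hypothetical assembly MyLibrary.Tests.dll:

nunit3-console MyLibrary.Tests.dll --where "cat != Online"

runs everything except the online tests; --where "cat == Online" selects only them, and --where "class == Some.Namespace.UnitTestOnline" selects a single class.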
Seems like you want to categorize your tests (unit tests, integration tests, ...) and run only the unit tests... you could use [Category] for that.
In the NUnit GUI or console runner you can then include/exclude categories and run only the ones you want.
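For example, with the NUnit 2.x console runner (assuming a category named "Integration"; substitute your own name):

nunit-console MyTests.dll /exclude:Integration

runs everything that is not in the Integration category, and /include:Integration does the opposite.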
Visual Studio's test filtering would probably work as well.
Also try one of the solutions suggested here.
We've customized a product which includes its own phpunit test suite. In Jenkins, I have two jobs set up: the first runs our own test suite that covers our customizations, and the second runs the existing core unit tests.
The core unit tests were not designed to be run on a customized version, so failures are expected. Out of the ~5000 tests, 81 fail. What I'd like to set up in Jenkins is to have the build marked as a failure only if the number of failed tests changes from the previous build.
I've looked at the Performance plugin but the documentation seems sparse and I'm trying to find something that matches our use case.
Any suggestions?
You should have a look at the xUnit plugin: https://wiki.jenkins-ci.org/display/JENKINS/xUnit+Plugin
It handles a thresholding mechanism (I specified this requirement for the xunit plugin when my team developed it).
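As a sketch of how the thresholds can be wired up (this assumes a recent version of the xunit plugin used from a pipeline job; the report pattern and threshold value are my own placeholders):

xunit(
    tools: [ PHPUnit(pattern: 'build/logs/phpunit.xml') ],
    thresholds: [
        // mark the build failed only when new failures appear relative to the previous run
        failed(failureNewThreshold: '0')
    ]
)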
Hope this helps.
But you want to associate the failure with a change... hmm, that may be more complex. I'd have to ask whether such a thing should be developed.
Is there any tool that analyzes test reports of particular unit test runs and shows differences between them? Basically, I'm interested in a "graph of progress":
12 Aug 2012 10:00: 48/50 tests passed. Failed tests: "MyTest13", "MyTest43".
12 Aug 2012 10:02: 47/50 tests passed. "MyTest13" now passed, but "MyTest2" and "MyTest22" started failing.
NUnit is preferable; however, the unit testing framework is not that important.
I'm looking for a completely automated tool, so that I can set it to run after each build and instantly look at the results and compare them with previous results. The closest thing I've found is nunit-results plus a hand-written batch file that calls NUnit (with a specified XML report path) and nunit-results as a post-build action. However, the HTML file it produces is not that informative.
I'm really surprised that none of the popular unit testing software is capable of storing test run information and analyzing series of runs in bulk. I've tried ReSharper, NUnit GUI, and Gallio and haven't found anything useful.
I would be glad for a solution that does not require a setup of a complicated CI server. My projects are typically small, but I need a tool like this for every one of them.
I don't know what your threshold is for "complicated CI server", but Jenkins is pretty easy to setup, and with the NUnit Plugin ought to give you what you're after:
This plugin makes it possible to import NUnit reports from each build into Jenkins, so they are displayed with a trend graph and details about which tests failed.
If you are interested in a "Graph of progress", I'd go for a way more simple (IMHO) approach and use NCrunch. It shows you your tests status as you code, without stopping for test runs. See my answer here for more details.
I googled and found the helpful references below. Currently I want to run all of the following from the command line (for ease and speed of execution):
A specific test (i.e. a test written as a method marked [TestMethod()])
All tests in a class
All tests impacted by my current TFS pending changes
All tests
All tests except the ones marked with [TestCategory("some-category")]
I'm not sure how to write correct commands for the needs above.
References:
the MSTest.exe http://msdn.microsoft.com/en-us/library/ms182487.aspx
the MSTest.exe's detailed options http://msdn.microsoft.com/en-us/library/ms182489.aspx
obtaining the result http://msdn.microsoft.com/en-us/library/ms182488.aspx
[Edit]
After a while, I found the useful tips below.
Run Visual Studio unit tests using MSTest.exe, located (in my case) at %ProgramFiles%\Microsoft Visual Studio 10.0\Common7\IDE\MSTest.exe.
Use /testcontainer:Path\To\Your\TestProjectAssembly.dll to indicate where your tests are coded. You can specify multiple /testcontainer options if required.
Use /test:TestFilter to filter the tests to run. Note that this filter is applied to the full test method name (i.e. FullNamespace.ClassName.MethodName).
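Putting those tips together into a single invocation (the assembly and test names here are hypothetical):

"%ProgramFiles%\Microsoft Visual Studio 10.0\Common7\IDE\MSTest.exe" /testcontainer:MyProject.Tests.dll /test:MyProject.Tests.CalculatorTests.AddsTwoNumbers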
Currently I have answers for some of my needs:
A specific test (i.e. a test written as a method marked [TestMethod()])
Use MSTest.exe /testcontainer:TheAssemblyContainingYourSpecificTest /test:TheSpecificTestName
All tests in a class
Use MSTest.exe /testcontainer:TheAssemblyContainingYourClass /test:TheClassNameWithFullNamespace
Note that /test: is a substring filter applied against the fully qualified test name, which is why the class name (with its namespace) works here.
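Because of the substring matching, /test:CalculatorTests (a hypothetical name) would also match a class named AdvancedCalculatorTests. If that is a concern, MSTest also has a /unique option, which runs a test only when the filter finds exactly one match:

MSTest.exe /testcontainer:MyProject.Tests.dll /test:CalculatorTests.AddsTwoNumbers /unique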
The others are still unknown. Please discuss if you know how.
For number 4, to run all tests in an assembly it's simply:
mstest /testcontainer:YourCompiledTestAssembly.dll
For question 5, all tests except the ones marked with [TestCategory("some-category")], use:
mstest.exe /testcontainer:yourTests.dll /category:"!some-category"
If you need to exclude more than one category, use
mstest.exe /testcontainer:yourTests.dll /category:"!group1&!group2"
Reference: /category filter
You might be interested in the Gallio bundle. It provides a free, common automation platform for running your tests (MSTest, MbUnit, NUnit, xUnit, etc.) with various test runners (GUI, command line, PoSh, plugins for third-party tools, etc.).
In particular you may want to use Gallio.Echo which is a nice command line test runner:
The Gallio test runners also have filtering capabilities for running only a subset of your unit tests (e.g. per category, per fixture, etc.).
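From memory, the filter is passed on the command line, something along these lines (assembly and category names are hypothetical, and you should double-check the exact syntax against the Gallio.Echo help):

Gallio.Echo.exe MyTests.dll /f:"Category: UnitTests"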
(Adding this due to errors I've encountered.)
To run all tests, just use vstest.console.exe .\x64\Release\UnitTesting.dll
vstest.console.exe is not deprecated so you will not need the /nologo suppression.
If needed, it also has a filter option: --TestCaseFilter|/TestCaseFilter:
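For example, to reproduce the "all tests except one category" case from above (the category name is hypothetical):

vstest.console.exe .\x64\Release\UnitTesting.dll /TestCaseFilter:"TestCategory!=some-category"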
I am using the Boost 1.34.1 unit test framework. (I know the version is ancient, but right now updating or switching frameworks is not an option for technical reasons.)
I have a single test module (#define BOOST_TEST_MODULE UnitTests) that consists of three test suites (BOOST_AUTO_TEST_SUITE( Suite1 );) which in turn consist of several BOOST_AUTO_TEST_CASE()s.
My question:
Is it possible to run only a subset of the test module, i.e. limit the test run to only one test suite, or even only one test case?
Reasoning:
I integrated the unit tests into our automake framework, so that the whole module is run on make check. I wouldn't want to split it up into multiple modules, because our application generates lots of output and it is nice to see the test summary at the bottom ("X of Y tests failed") instead of spread across several thousand lines of output.
But a full test run is also time-consuming, and the output of the test you're looking for is likewise drowned out; thus, it would be nice if I could somehow limit the scope of the tests being run.
The Boost documentation left me pretty confused and none the wiser; anyone around who might have a suggestion? (Some trickery allowing to split up the test module while still receiving a usable test summary would also be welcome.)
Take a look at the --run_test parameter - it should provide what you're after.
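For example (assuming the test binary is called UnitTests; the suite name is the one from the question, the case name is made up):

./UnitTests --run_test=Suite1              # run only the tests in Suite1
./UnitTests --run_test=Suite1/MyTestCase   # run a single test case inside Suite1

If I remember correctly, wildcards such as --run_test=Suite* are accepted as well.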