I have a legacy project and my nightly build fails. I can't figure out how to fix it. The problem is that the test cases depend on their execution order.
Environment:
We use TFS 2015 build definitions. Basically, I have a definition with a Visual Studio Build task followed by a Visual Studio Test task. The Visual Studio Test task is also overridden with a PowerShell file, and I see that in the TFS nightly build process my predefined VS Ordered Test settings are ignored.
Important
Test cases are sequence dependent (as I said, this is a legacy project).
Problem
What is interesting is that the build log files always show test execution in the sequence I define in the VS Ordered Test, but in the TFS 2015 detailed report the test result sequence is always different. So I can't figure out what affects the test case execution order in TFS. I am also not sure whether the tests are executed in parallel or sequentially (as far as I can see, neither TFS nor the PowerShell script indicates that test cases should run in parallel).
I have 2 questions:
Does the PowerShell script break the conditions defined in Visual Studio?
What is the best way to define the test execution order so that it is actually taken into account?
Actually, those test methods run in the order that you defined in the Ordered Test file during the TFS build process. The build log already shows the correct sequence.
Just like you mentioned above, in the test result page the order is not the same as what you defined in the Ordered Test, but you can see that in front of each test method there is an order number. You could download the test result file to check again, and you will find that those test methods ran in the correct order.
In higher versions, like TFS 2017, you can click the column title 'Test' to sort by that order.
You could also add a 'Date started' column to see which test method ran first.
Our team uses Google Test for automated testing. Most of our tests pass consistently, but a few seem to fail ~5% of the time due to race conditions, network time-outs, etc.
We would like the ability to mark certain tests as "flaky". A flaky test would be automatically re-run if it fails the first time, and will only fail the test suite if it fails both times.
Is this something Google Test offers out-of-the-box? If not, is it something that can be built on top of Google Test?
You have several options:
Use --gtest_repeat for the test executable:
The --gtest_repeat flag allows you to repeat all (or selected) test methods in a program many times. Hopefully, a flaky test will eventually fail and give you a chance to debug.
You can mimic tagging your tests by adding "flaky" somewhere in their names and then use the --gtest_filter option to repeat only those tests. Below are some examples from the Google documentation; a short naming sketch follows them:
$ foo_test --gtest_repeat=1000
Repeat foo_test 1000 times and don't stop at failures.
$ foo_test --gtest_repeat=-1
A negative count means repeating forever.
$ foo_test --gtest_repeat=1000 --gtest_break_on_failure
Repeat foo_test 1000 times, stopping at the first failure. This
is especially useful when running under a debugger: when the test
fails, it will drop into the debugger and you can then inspect
variables and stacks.
$ foo_test --gtest_repeat=1000 --gtest_filter=Flaky.*
Repeat the tests whose name matches the filter 1000 times.
See here for more info.
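As a rough sketch of the naming idea (the test and suite names here are made up, not from the question), grouping flaky tests under a suite literally called Flaky makes them selectable with the Flaky.* filter shown above:

#include <gtest/gtest.h>

// Hypothetical tests: the suite name "Flaky" is what --gtest_filter=Flaky.*
// matches, so only these tests get repeated when hunting for flakiness.
TEST(Flaky, NetworkTimeout) {
    // ... a check that occasionally fails because of a race or timeout ...
    SUCCEED();
}

// Not matched by the Flaky.* filter; runs only in normal invocations.
TEST(Arithmetic, Addition) {
    EXPECT_EQ(4, 2 + 2);
}

int main(int argc, char** argv) {
    ::testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();
}

With that naming in place, foo_test --gtest_repeat=1000 --gtest_filter=Flaky.* hammers only the flaky group.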
Use bazel to build and run your tests:
Rather than tagging your tests in the test files, you can tag them in the bazel BUILD files.
You can tag each test individually using the cc_test rule.
You can also define a set of tests (using test_suite) in the BUILD file and tag them together (e.g. "small", "large", "flaky", etc). See here for an example.
Once you tag your tests, you can use simple commands like this:
% bazel test --test_tag_filters=performance,stress,-flaky //myproject:all
The above command will run all tests in myproject that are tagged performance or stress and are not tagged flaky.
See here for documentation.
Using Bazel is probably cleaner because you don't have to modify your test files, and you can quickly change your test tags if things change.
See this repo and this video for examples of running tests using bazel.
I want to exclude some tests from my continuous integration build but I haven't found a way to do so.
One of the things I've tried was to set the priority of those tests to -2 and then, in the build, specify Minimum Test Priority = -1, but it still ran those tests.
Any help would be greatly appreciated.
Instead of using "Test Lists" that have been described, you should use the "Test Category" method. The test lists & VSMDI functionality have actually been deprecated in Visual Studio 2010 and Microsoft may remove the feature completely in a future version of Visual Studio.
If you'd like some more information about how to use test categories especially with your automated build process, check out this blog post: http://www.edsquared.com/2009/09/25/Test+Categories+And+Running+A+Subset+Of+Tests+In+Team+Foundation+Server+2010.aspx
You can also exclude test categories from running by specifying the ! (exclamation point) character in front of the category name to further define your filter.
If you are using MSTest you can create a Test List for the tests that you need in your continuous integration build.
With MSTest, you can simply create two test projects (assemblies) and specify only one of them in the build configuration to use for testing. In MSBuild, this was the way to go. For the new WF-based build definitions, I currently don't have a sample at hand:
<ItemGroup>
<!-- TEST ARGUMENTS
If the RunTest property is set to true then the following test arguments will be used to run
tests. Tests can be run by specifying one or more test lists and/or one or more test containers.
To run tests using test lists, add MetaDataFile items and associated TestLists here. Paths can
be server paths or local paths, but server paths relative to the location of this file are highly
recommended:
<MetaDataFile Include="$(BuildProjectFolderPath)/HelloWorld/HelloWorld.vsmdi">
<TestList>BVT1;BVT2</TestList>
</MetaDataFile>
To run tests using test containers, add TestContainer items here:
<TestContainer Include="$(OutDir)\AutomatedBuildTests.dll" />
<TestContainer Include="$(SolutionRoot)\TestProject\WebTest1.webtest" />
<TestContainer Include="$(SolutionRoot)\TestProject\LoadTest1.loadtest" />
Use %2a instead of * and %3f instead of ? to prevent expansion before test assemblies are built
-->
</ItemGroup>
<PropertyGroup>
<RunConfigFile>$(SolutionRoot)\LocalTestRun.testrunconfig</RunConfigFile>
</PropertyGroup>
Tip: To use a generic build definition, we name all our test projects "AutomatedBuildTests", i.e. there is no difference between solutions. This way the build definition can be included in any existing build definition (or even be a common one) and always executes the right set of tests. It would be an easy task to prepend an "if exists" check so that a build definition only runs tests when a test assembly is present. We do not do this, because we want a build error when no test assembly is found: we absolutely want tests to run with all the builds that use this definition.
My preference would be to use a Test List as above, but some people have had issues merging/editing the .vsmdi files... We ended up with separate solutions and use a pattern match to execute all tests in the appropriate DLL.
In Visual Studio 2012 and later you can configure your build definition using the Test case filter setting.
This setting is part of your build definition.
Open the build definition and navigate to the Process tab. In section 3. Test you can define multiple test sources. For each test source you can specify a Test case filter.
You can find the details in this MSDN article: Running selective unit tests in VS 2012 RC using TestCaseFilter
I have copied the supported operators and some examples from this article:
The operators supported in the RC are:
1. = (equals)
2. != (not equals)
3. ~ (contains or substring; only for string values)
4. & (and)
5. | (or)
6. ( ) (parentheses for grouping)
Expressions can be created using these operators as any valid logical condition. & (and) has higher precedence than | (or) when the expression is evaluated.
E.g.
"TestCategory=NAR|Priority=1"
"Owner=vikram&TestCategory!=UI"
"FullyQualifiedName~NameSpace.Class"
"(TestCategory!=UI&(Priority=1|Priority=2))|(TestCategory=UI&Priority=1)"
Another possibility would be to have some test sources in one build definition and other (i.e. more or fewer) test sources in other build definitions.
We have managed to get Jenkins to correctly parse the XML output from our tests, including the error information when there is one, so that the error that occurred can be seen directly in the test case in Jenkins.
What we would like to do is have Jenkins keep a log output, which is basically the console output, associated with each test case. This would enable anyone to see the actual console output of each test case, whether it failed or not.
I haven't seen a way to do this.
* EDIT *
Clarification - I want to be able to see the actual test output directly in the Jenkins interface, the same way it is shown when there is an error, but for the whole output. I don't want Jenkins to merely keep the file as an artifact.
* END OF EDIT *
Can anyone help us with this?
In the Publish JUnit test result report (Post-build Actions) tick the Retain long standard output/error checkbox.
If checked, any standard output or error from a test suite will be
retained in the test results after the build completes. (This refers
only to additional messages printed to console, not to a failure stack
trace.) Such output is always kept if the test failed, but by default
lengthy output from passing tests is truncated to save space. Check
this option if you need to see every log message from even passing
tests, but beware that Jenkins's memory consumption can substantially
increase as a result, even if you never look at the test results!
This is simple to do - just ensure that the output file is included in the list of artifacts for that job and it will be archived according to the configuration for that job.
Not sure if you have solved it yet, but I just did something similar using Android and Jenkins.
What I did was use http://code.google.com/p/the-missing-android-xml-junit-test-runner/ to run the tests in the Android emulator. This creates the necessary JUnit-formatted XML files on the emulator file system.
Afterwards, simply use 'adb pull' to copy the files over and configure Jenkins to parse the results. You can also archive the XML files as artifacts if necessary.
If you simply want to display the content of the result in the log, you can use 'Execute Shell' command to print it out to the console, where it will be captured in the log file.
Since Jenkins 1.386 there has been a change, mentioned in the changelog, allowing you to Retain long standard output/error in each build configuration. So you just have to check that checkbox in the post-build actions.
http://hudson-ci.org/changelog.html#v1.386
When using a declarative pipeline, you can do it like so:
junit testResults: '**/build/test-results/*/*.xml', keepLongStdio: true
See the documentation: keepLongStdio corresponds to the "Retain long standard output/error" checkbox described above, so the full standard output/error of passing tests is kept in the test results instead of being truncated.
I'm trying to add some unit tests to an existing code base using Visual Studio 2010's unit test generator. However, in some cases when I open a class and right-click --> Create Unit Tests..., after I select the methods to generate tests for, it creates what is essentially a blank test. Are there situations where this can happen? In every case I select at least one public method to gen tests for, and all it generates is this:
using TxRP.Controllers; //The location of the code to be tested
using Microsoft.VisualStudio.TestTools.UnitTesting;
That's it. Nothing else. Strange, right?
I should note that this is all MVC 2 controller code, and I have been able to gen tests for other controllers with no problem, and all my controllers follow pretty much the same format. No error seems to be thrown, as it gens the empty page happily and adds it to the project as if everything is just fine.
Has anyone had experience with the same type of thing happening, and was there any answer found as to why?
UPDATE:
There is in fact an error during generation:
While trying to generate your tests, the following errors occurred:
Value cannot be null.
Parameter name: key
After some research, the only possible solution I found is that this error occurs if you're trying to generate tests into a test file that already exists. However, this solution is not working for me...
If you try to generate tests for a class which already has existing tests in another file in the project, it will just generate an empty file as described above. Changing the filename is not sufficient, nor is using a different location within the project. Basically it seems to enforce the one-testfile-per-class convention across the entire project.
This problem is caused by the previously generated test file having been moved to a folder other than the root folder in the test project.
Resolution
1. Move the test file into the test project root folder.
2. Generate the new tests.
3. Move the test file back to the folder location you want in the test project.
I have no clue why they don't call it a BUG! In typical enterprise-level software development it is more than a coincidence that multiple people generate unit tests for different methods of the same class at different points in time.
We always end up with this error and it is not helping us in any way! It feels as if the "Create Unit Tests" context menu has little use!
Error description:
"While trying to generate your tests, the following errors occurred:
Value cannot be null.
Parameter name: key
"
In CppUnit we run unit tests as part of the build, in a post-build step. We will be running multiple tests as part of this. If any test case fails, the post-build step should not stop; it should go ahead and run all the test cases and report a summary of how many test cases passed and failed. How can we achieve this?
Thanks!
The question is specific enough. You need a test runner. Encapsulate each test in its own behavior and class, and keep the test project separate from the code under test. Afterwards just configure your XmlOutputter. You can find an excellent example of how to do this on the yolinux website: http://www.yolinux.com/TUTORIALS/CppUnit.html
We use this approach to compile the test projects for our main projects and check whether everything is ok. After that it all becomes a matter of maintaining your test code.
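For reference, here is a minimal sketch of such a runner (assuming your suites register themselves via CPPUNIT_TEST_SUITE_REGISTRATION; the output file name is made up). It executes every registered test rather than stopping at the first failure, and only the process exit code tells the post-build step whether anything failed:

#include <cppunit/extensions/TestFactoryRegistry.h>
#include <cppunit/ui/text/TestRunner.h>
#include <cppunit/XmlOutputter.h>
#include <fstream>

int main() {
    CppUnit::TextUi::TestRunner runner;
    // Pull in every suite registered via CPPUNIT_TEST_SUITE_REGISTRATION.
    runner.addTest(CppUnit::TestFactoryRegistry::getRegistry().makeTest());

    // Write an XML summary (passed/failed per test) that the build can parse.
    std::ofstream xml("test-results.xml");
    runner.setOutputter(new CppUnit::XmlOutputter(&runner.result(), xml));

    // run() executes all registered tests; it does not abort on the first failure.
    bool allPassed = runner.run();
    return allPassed ? 0 : 1;  // the post-build script decides how to react
}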
Your question is too vague for a precise answer. Usually, a unit test engine returns a code to signal that it has failed (like a non-zero return code in a Linux shell) or generates an output file with the results, and the calling system handles this. If you have written that calling system yourself (some home-made scripts), you have to add an option to continue test execution even if an error occurred. If you are using a tool such as a continuous integration server, then you have to go through its documentation and find the option that lets you continue when tests fail.
A workaround is to write a script that returns an "OK" result even if the unit tests fail, but then you lose some automatic verification...
Be more specific if you want more clues.
my2c
I would just write your tests this way: instead of using the CPPUNIT_ASSERT macros or whatever, write them in regular C++ with some way of logging errors.
You could use a macro for this too of course. Something like:
LOGASSERT( some_expression )
could be defined to execute some_expression and log the expression together with __FILE__ and __LINE__ if it fails. You can of course also log exceptions, including expected exceptions that are not thrown, simply by writing the checks in your tests (with macros if you want to log the offending expression with __FILE__ and __LINE__).
If you are writing macros I would advise you to limit the content of your macro to calling an inline function with extra parameters.
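A minimal sketch of that idea (LOGASSERT and log_failure are illustrative names, not part of any library): the macro only captures the expression text plus __FILE__ and __LINE__, and hands the actual work to an inline function so the macro body stays trivial.

#include <iostream>

// Hypothetical helper: records a failed check without aborting the test run.
// A real harness would also bump a failure counter for the final summary.
inline void log_failure(const char* expr, const char* file, int line) {
    std::cerr << file << ":" << line << ": check failed: " << expr << '\n';
}

// The macro stays tiny: evaluate the expression once, then pass the
// bookkeeping (expression text, file, line) to the inline function.
#define LOGASSERT(expr)                              \
    do {                                             \
        if (!(expr)) {                               \
            log_failure(#expr, __FILE__, __LINE__);  \
        }                                            \
    } while (0)

int main() {
    LOGASSERT(2 + 2 == 4);  // passes, nothing is logged
    LOGASSERT(1 == 2);      // fails, is logged, execution continues
    return 0;
}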