TFS - order of automated tests to execute - unit-testing

Suppose I have system tests A and B, where A inserts a record into a database and B tries to modify it. When A fails, B will fail as well. A and B are written as "unit tests" (test methods), and A and B are also test cases in TFS, automated and linked to these "unit tests". I put both of them into a test plan and test suite. I want to execute them with the "Run Functional Tests" step.
How can I tell TFS to execute them in the right order?
What is the best practice for developing tests like these?

You could create an ordered test, which is a container that holds other tests and guarantees that they run in a specific order.
For how to create an ordered test, you can refer to this tutorial.
In TFS, you can follow the steps below to run an ordered test:
Add an ordered test file to your test project and use it to define the testing order.
In your build definition, add a Run Functional Tests task. Change the Test Assembly setting as in the picture below.
In the Test Drop location I have the complete project, and in the Executions folder I have the ordered test. Hope it helps.
Update
Ordering is supported for manual tests, but not for automated tests. If you need ordering support for automated tests, please vote for this UserVoice item:
enable changing the order of test cases on the web gui and let them be tested in this order for automated tests
https://visualstudio.uservoice.com/forums/330519-team-services/suggestions/13489221-enable-changing-the-order-of-test-cases-on-the-web

Related

What is the difference between test cases coverage and just a console application coverage?

I am having a hard time running code coverage since most of the tools (including the Visual Studio one) require unit tests.
I don't understand why I need to create unit tests; why can't I simply run code coverage with my own console exe application?
Just press F5 and get the report, without putting effort into creating unit tests or whatever.
Thanks
In general, with a good test coverage tool, coverage data is collected for whatever causes execution of the code. You should be able to exercise the code by hand and get coverage data, and/or execute the code via a test case and get coverage data.
There's a question of how you track the coverage information back to a test. Most coverage tools just collect coverage data; some other mechanism is needed to associate that coverage data with a set of tests or a specific test. [Note that none of this discussion cares whether your tests pass. It's up to you whether you want to track coverage data for failed tests or not; this information is often useful when trying to find out why a test failed.]
You can associate coverage data with tests by convention; Semantic Designs' test coverage tools (from my company) do this. After setting up the coverage collection, you run any set of tests, by any means you desire. Each program execution produces a test coverage vector with a date stamp. You can combine the coverage vectors for such a set into a single set (the UI helps you do this, or you can do it in a batch script by invoking the UI as a command-line tool), and then you associate the set of tests you ran with the combined coverage data. Some people associate individual tests with the coverage data collected by the individual test execution. (Doing this allows you to later discover whether one test covers everything another test covers, implying you don't need the second one.)
You can be more organized: you can configure your automated tests to each exercise the code and automatically store the generated vector away in a place unique to the test. This just means adding a bit of scripting to each automated test. Some test coverage tools come with unit test mechanisms that have a way to do this. (Our tools don't insist on a specific test framework, so they work with any framework.)
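As an illustration of the "store the vector in a place unique to the test" idea (not how any particular tool does it, just a minimal JUnit 4 sketch assuming a hypothetical coverage tool that rewrites coverage/latest.vector whenever a test finishes; all names and paths are invented):

    import org.junit.Rule;
    import org.junit.Test;
    import org.junit.rules.TestWatcher;
    import org.junit.runner.Description;

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;

    public class CoveragePerTestExample {

        // Hypothetical file the coverage tool refreshes after each test run.
        private static final Path RAW_VECTOR = Paths.get("coverage", "latest.vector");

        // After every test, copy the fresh vector to a file named after the test,
        // so each test's coverage can be inspected or compared later.
        @Rule
        public TestWatcher coverageArchiver = new TestWatcher() {
            @Override
            protected void finished(Description description) {
                try {
                    Path perTest = Paths.get("coverage",
                            description.getClassName() + "." + description.getMethodName() + ".vector");
                    Files.createDirectories(perTest.getParent());
                    Files.copy(RAW_VECTOR, perTest, StandardCopyOption.REPLACE_EXISTING);
                } catch (IOException e) {
                    // Archiving problems should not fail the test itself.
                    System.err.println("Could not archive coverage vector: " + e.getMessage());
                }
            }
        };

        @Test
        public void exampleTest() {
            // ... exercise the code under test here ...
        }
    }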

How to Force a Google Test Case to Run Last

Our team has a very mature suite of Google Test (GTest) test cases. The test cases, via a custom test environment, build up a test report in addition to the standard JUnit XML output that GTest produces on its own.
I would like to add one final test that ensures that the Google Test suite produced its test report after all other tests in the suite execute. In other words, I would like to force which test executes last so it can write the custom output and then verify that it was properly written, failing if it was not.
The solution should work even if Google Test is executing tests in random order. Can I force one test to run last? Or can I write a test that GTest won't automatically discover, call it myself from my main(), and have its results rolled into the rest of them?
I see no way to do this with the current GTest API, but thought it was worth asking.
This is probably closest to what you're looking for.
https://github.com/google/googletest/blob/master/docs/advanced.md#sharing-resources-between-tests-in-the-same-test-suite
Perhaps you can use the destruction of the static object to collect information about all tests that ran.
However, beware of forks.
I would really recommend writing your own main(), forking the test process, and waiting for the child to finish so you can collect data from it.

Running Isolated Tests Without Relying on Terminal? IntelliJ IDEA

I understand that I should use the Surefire plugin for unit tests and Failsafe for integration tests. I can run unit tests with mvn test and integration tests with mvn verify, but this annoys me for two reasons:
I'd prefer to be able to select any test class (or method in that class) and run it individually with a simple click, rather than typing it into the terminal every time.
The terminal returns the test results in ugly black/white paragraphs, requiring me to sift through them. I'd much prefer to have the results returned in a visually organized manner, similar to what I get if I right-click on the test class in IntelliJ and click 'Run DemoTest'. This produces:
I find the error results much easier to sift through; for example, it shows red/green @Test results on the left, and on the right it cleanly organizes the error into
Expected : 3
Actual : 1
I'm sure there are advantages to using the terminal for automated test runs later on, closer to production, but during development I don't find the terminal conducive to my tinkering.
How do I benefit from IntelliJ's visual feedback of test results, while simultaneously ensuring unit & integration tests are run separately, and preserving my freedom to pick and choose which test classes and test methods I can run at any time?
I'm assuming I can't have my cake and eat it too. Please explain.
If you are using the IntelliJ "Maven Projects" view, you can very easily toggle the execution of Maven integration tests on and off.
Via "Run/Debug Configurations" you can create test executions that match your requirement for a comfortable UI.
After these steps, there is a new entry in the "Run/Debug Configurations" drop-down list. When you start the new JUnit test configuration, the defined tests are executed and the results are presented in exactly the same manner as the screenshot in your question.
The options in my second screenshot allow a very flexible definition of the scope. You don't have to go to every Java file and click the green arrows in the editor view.
This configuration isn't tied to any Maven configuration, and you can use it at any time in your coding process.
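Not part of the answer above, but one common convention for keeping the unit/integration split visible to both Maven and the IDE is JUnit 4 categories; a minimal sketch (the class and category names are invented):

    import org.junit.Test;
    import org.junit.experimental.categories.Category;

    import static org.junit.Assert.assertEquals;

    // Marker interface used only to tag slow integration tests.
    interface IntegrationTest {}

    public class OrderServiceTest {

        @Test
        public void calculatesTotal() {
            // Fast unit test: runs in every configuration.
            assertEquals(3, 1 + 2);
        }

        @Test
        @Category(IntegrationTest.class)
        public void persistsOrderToDatabase() {
            // Slow test touching real infrastructure; only runs when the
            // IntegrationTest category is selected.
        }
    }

An IntelliJ JUnit run configuration can then select (or exclude) that category as its test kind, and Surefire/Failsafe can filter on the same category via their groups/excludedGroups settings, so the IDE view and the Maven phases stay in sync.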

How can I have Jenkins fail a build only when the number of test failures changes?

We've customized a product that includes its own PHPUnit test suite. In Jenkins, I have two jobs set up: the first runs our own test suite that covers our customizations, and the second job runs the existing core unit tests.
The core unit tests were not designed to be run on a customized version, so failures are expected. Out of the ~5000 tests, 81 fail. What I'd like to set up in Jenkins is to have the build marked as a failure only if the number of failed tests changes from the previous build.
I've looked at the Performance plugin but the documentation seems sparse and I'm trying to find something that matches our use case.
Any suggestions?
You should have a look at the xUnit plugin: https://wiki.jenkins-ci.org/display/JENKINS/xUnit+Plugin
It handles a thresholding mechanism (I specified this requirement for the xUnit plugin when my team developed it).
Hope this helps.
But you want to associate the failures with a change... Hmm, that may be more complex; I'd have to ask whether such a thing should be developed.

JUnit: running chosen tests instead of all of them

I have a problem with executing tests in JUnit. Imagine you have one test case class with, for example, 100 tests, no test suite and no main program - the test case class tests a device on a COM port. The JUnit project is in NetBeans. I want to run tests - but not all of them at the same time; I would like to choose the tests to run before the actual testing.
Once I saw something like that in Eclipse - but it wasn't my project, and I don't know how it was done or how to do the same thing in NetBeans. It was a separate window popping up before running the tests. In this window there were checkboxes with the names of the methods carrying the @Test annotation, and you could choose the tests you wanted to run and click run - so it let you run exactly what you wanted.
Does anyone know how to do this in NetBeans? Is there any library or plugin for it?
Any help will be appreciated.
You can take a look at Run single test from a JUnit class using command-line. It allows you to specify which test you want to run given a class with multiple test cases in it. Being command-line based, you can then script your own test suite that runs the specific ones you want.
I also noticed your other question, Junit: changing sequence of test running. With the scripting approach you can actually control the order of your testing.
This approach does not take advantage of Eclipse's or NetBeans' JUnit test runners though, so it is a very specific workaround.
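To make the scripting approach concrete, here is a minimal JUnit 4 sketch (the test class and method names are placeholders) that runs only the tests you pick, in the order you list them:

    import org.junit.Test;
    import org.junit.runner.JUnitCore;
    import org.junit.runner.Request;
    import org.junit.runner.Result;
    import org.junit.runner.notification.Failure;

    public class ChosenTestsRunner {

        // Stand-in for your own test class that exercises the device on the COM port.
        public static class DeviceComPortTest {
            @Test public void testOpenPort() { /* ... */ }
            @Test public void testSendCommand() { /* ... */ }
            @Test public void testSlowDiagnostics() { /* not selected below */ }
        }

        public static void main(String[] args) {
            // Choose the tests (and their order) before the actual run.
            String[] chosenMethods = {"testOpenPort", "testSendCommand"};

            JUnitCore core = new JUnitCore();
            for (String method : chosenMethods) {
                Result result = core.run(Request.method(DeviceComPortTest.class, method));
                for (Failure failure : result.getFailures()) {
                    System.err.println(failure);
                }
            }
        }
    }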
NetBeans nowadays supports running single tests: