I have created a few coded UI tests for the application I am working on. I followed this post as guidance: How to run coded UI Tests from MTM, and also this post on how to create a fake build: How to Create a Fake Build Definition and a Fake Build.
So after I set up the infrastructure, I attempted to run the tests from Microsoft Test Manager (MTM). While MTM doesn't break or throw errors, the result it reports back is that it cannot find the coded UI test recordings.
Upon reviewing the contents of the first link (How to run coded UI tests from MTM), I noticed a small piece of text saying "You must create a build definition that just has a share location added that is where your assemblies for your tests are located."
What exactly does that mean? How do I do this? My build definition drops the assemblies in \\machine\share, so that's where I copied the coded UI tests, but I still get the same result.
Is there anything I am missing?
Thanks,
Martin
Well, I don't like to leave questions unanswered, so I thought I'd come back to this one.
The build definition's drop folder is what tells MTM where to look for the coded UI test recordings/DLLs.
For this to work, you simply have to build your tests/coded UI tests and put the resulting assembly and resources in the drop folder defined in the build definition.
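For example (the project and build folder names below are placeholders; use whatever drop location your build definition actually points at), after building the coded UI test project you can copy its output into that drop folder:
xcopy /Y /I CodedUITests\bin\Debug\*.* \\machine\share\FakeBuild\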
Cheers!
I have a .NET Core project that builds successfully using VSTS. The issue is that unit tests aren't being discovered when building the project. I know this is similar to this post, but I just wanted to bring up more details in case someone has a good idea after seeing this description.
This is a summary of the logs:
##[warning]Project file(s) matching the specified pattern were not found.
##[section]Finishing: Test.
I'm concerned about the minimatch pattern used here. It seems it is looking for a Tests folder and then any file that ends in .csproj.
The default agent queue is Hosted VS2017, as indicated by #starain-MSFT in a previous post.
The solution structure, shown in the image below, is pretty basic:
A .Net Core project with a model class.
A MS Unit Test Project (That contains a reference to the mentioned class).
A [TestClass] with a single [TestMethod] that passes the test.
Well, it turned out that my concern was the key factor in solving my issue.
I did a little reverse engineering with an MVC project; the default minimatch pattern is different for that type of project: **\$(BuildConfiguration)\*test*.dll !**\obj\**
You can learn more about minimatch here.
So I just wanted to look for a .csproj file whose name contains the word Tests, therefore I changed the pattern to **/*Tests*.csproj instead of **/*Tests/*.csproj.
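To illustrate the difference with made-up paths: the original pattern requires a folder whose name ends in Tests, while the new pattern only requires the project file name to contain Tests:
**/*Tests/*.csproj matches ModelTests/ModelTests.csproj but not UnitTestProject/Model.Tests.csproj
**/*Tests*.csproj matches both of those paths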
Now I'm able to see that my unit tests are being executed right away when there is a new build.
I hope my issue and its resolution help save other people's time!
I understand that I should use the Surefire plugin for unit tests and Failsafe for integration tests. I can run unit tests with mvn test and integration tests with mvn verify, but this annoys me for two reasons:
I'd prefer to be able to select any test class (or method in that class) and run it individually with a simple click, rather than typing it into the terminal every time.
The terminal returns the test results in ugly black-and-white paragraphs, requiring me to sift through them. I'd much prefer to have the results returned in a visually organized manner, similar to what I get if I right-click on the test class in IntelliJ and click 'Run DemoTest'. This produces:
I find the error results much easier to sift through; for example, it shows red/green @Test results on the left, and on the right it cleanly organizes the error into
Expected : 3
Actual   : 1
I'm sure there are advantages to using the terminal for automated test runs later on, closer to production, but during development I don't find the terminal conducive to my tinkering.
How do I benefit from IntelliJ's visual feedback of test results, while simultaneously ensuring unit & integration tests are run separately, and preserving my freedom to pick and choose which test classes and test methods I can run at any time?
I'm assuming I can't have my cake and eat it too. Please explain.
If you are using the IntelliJ "Maven Projects" view, you can very easily toggle the execution of Maven's integration tests on and off.
Via "Run/Debug Configurations" you can create test executions that match your reqirement for a comfortable UI.
After these steps, there is a new entry in the "Run/Debug Configurations" drop-down list. When you start the new JUnit test configuration, the defined tests are executed and the results are presented in exactly the same manner as in the screenshot in your question.
The options in my second screenshot allow a very flexible definition of the scope. You don't have to go to every Java file and click the green arrows in the editor view.
These configurations aren't tied to any Maven configuration, and you can use them at any time in your coding process.
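As a side note, if you do occasionally want to run a single class or method from the terminal as well, Surefire and Failsafe accept test filters (the class and method names below are invented):
mvn test -Dtest=DemoTest#addsNumbers
mvn verify -Dit.test=DemoIT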
We have a TFS gated check-in which uses the MSTest workflow activity to run our unit tests. We came across some issues recently because the result folder that the MSTest activity creates is too long, so some of our unit tests are now failing because of that. It looks like it uses a pattern like <user>_<machine_name> <date> <time>_<platform>_<config>, so we see very lengthy directory names like "tfsbuild_machine123 2015-09-10 10_00_00_Any CPU_Debug". I did some digging into the workflow and its options but couldn't identify where this pattern comes from. I'd appreciate it if someone could point me to where it comes from and how I can change it so we get more room for our unit tests.
I assume that you're referring to the test part of the Build Summary page, like:
As far as I know, the Summary part of the Build Summary page is actually a SummaryFactory type which derives from IBuildDetailFactory; it is not defined in the TFS build process template. The SummaryFactory class contains functions such as CreateSections and CreateNodes, which are used to create nodes on the Summary page, for example a hyperlink with the format <user>_<machine_name> <date> <time>_<platform>_<config>. However, SummaryFactory is an internal class, so you can't use it in your own program, nor customize the test hyperlink format.
For your issue, I would still like to check the detailed error message to see what's wrong with it.
I have searched for a while and can't find an answer on here for the first time ever.
I have a solution which contains multiple C# projects. Two of these projects are unit test projects. I am building this solution with Jenkins, attempting to run all the unit tests, then packaging it for an internal NuGet server.
The issue I am having is that I am running the tests for one unit test project after the other, and I think this is causing the second set of tests to save its results over the first set: I can see in the console output that both sets of tests ran and passed, but the Test Results page in Jenkins only shows the second set.
I am running these with a Windows batch command using nunit-console, like this:
nunit-console.exe MyFirstProject.UnitTests/bin/Debug/MyFirstProject.UnitTests.dll
nunit-console.exe MySecondProject.UnitTests/bin/Debug/MySecondProject.UnitTests.dll
Is there a better way I can run these so that all test results are recorded?
Thanks!
Figured it out. For anyone who runs into this, it's quite simple: you just have to put both (or however many) assemblies in the same command, like this:
nunit-console.exe MyFirstProject.UnitTests/bin/Debug/MyFirstProject.UnitTests.dll MySecondProject.UnitTests/bin/Debug/MySecondProject.UnitTests.dll
Now all my test results show up!
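As a side note, if your Jenkins job publishes an NUnit XML report, the same combined command can also write one explicitly via the /xml switch (the result file name here is just an example):
nunit-console.exe MyFirstProject.UnitTests/bin/Debug/MyFirstProject.UnitTests.dll MySecondProject.UnitTests/bin/Debug/MySecondProject.UnitTests.dll /xml=nunit-result.xml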
As an addition for anyone else who has this type of issue:
To get one report from two groups of tests run from the same assembly, define two Windows Batch Command build steps, each with its own category and result file:
"%nunit%\nunit-console.exe" "%WORKSPACE%\MyTests.sln" /include:TestCategory1 /xml=nunit-result1.xml
and
"%nunit%\nunit-console.exe" "%WORKSPACE%\MyTests.sln" /include:TestCategory2 /xml=nunit-result2.xml
Add a post-build action to publish the NUnit test result report, and define the XML filename as *.xml.
The NUnit plugin will concatenate the different test results into one result that can be viewed and emailed.
This approach will also solve the OP's problem.
When using WebStorm as a test runner, every unit test is run. Is there a way to run only one test? Even running only one test file would be better than the current situation of running them all at once.
I'm using Mocha.
Not currently possible; please vote for WEB-10067.
You can double up the i on it or the d on describe and the runner will run only that test/suite. If you prefix them with x, they will be excluded.
There is a plugin called ddescribe that gives you a GUI for this.
You can use the --grep <pattern> command-line option in the Extra Mocha options box on the Mocha "Run/Debug Configurations" screen. For example, my Extra Mocha options line says:
--timeout 5000 --grep findRow
All of your test *.js files, and the files they require, still get loaded, but the only tests that get run are the ones that match that pattern. So if the parts you don't want to execute are tests, this helps you a lot. If the slow parts of your process get executed automatically when your other modules are loaded with require, this won't solve that problem. You also need to go into the configuration options to change the pattern every time you want to run tests matching a different one, but this is quick enough that it definitely saves me time versus letting all my passing tests run every time I want to debug one failing test.
If you have a Mocha run configuration set up, you can run the tests within a scope by using .only on either the describe or the it clauses.
I had some problems getting it to work consistently; when it went crazy and kept running all my tests, ignoring the .only or .skip, I added to the Extra Mocha options the path to one of the files containing unit tests, just like in the example for the Node setup, and suddenly the .only feature started working again regardless of which file the tests were in.
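For reference, a minimal sketch of what .only looks like in a Mocha spec file (the suite and test names are made up):
// only this suite runs; everything else in the loaded spec files is skipped
describe.only('findRow', function () {
  it('returns the expected row', function () {
    // assertions go here
  });
});

// or narrow it down to a single test instead
it.only('returns the expected row', function () {
  // ...
});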