We have a TFS gated check-in that uses the MSTest workflow activity to run our unit tests. We recently ran into issues because the results folder that the MSTest activity creates is too long, and some of our unit tests are now failing because of that. It looks like it uses a pattern like <user>_<machine_name> <date> <time>_<platform>_<config>, so we see very lengthy directory names like "tfsbuild_machine123 2015-09-10 10_00_00_Any CPU_Debug". I did some digging into the workflow and its options but couldn't identify where this pattern comes from. I would appreciate it if someone could point me to where it is defined and how I can change it so we get more room for our unit tests.
I assume that you're referring to the test part of the Build Summary page.
As far as I know, the Summary part of the Build Summary page is actually a SummaryFactory type that derives from IBuildDetailFactory; it is not defined in the TFS build process template. The SummaryFactory class contains functions such as CreateSections and CreateNodes, which are used to create nodes on the Summary page, for example a hyperlink with the format <user>_<machine_name> <date> <time>_<platform>_<config>. However, SummaryFactory is an internal class, so you can't use it in your own program, nor customize the test hyperlink format.
For your issue, I still would like to check the detailed error message to see what's wrong with it.
I have a .NET Core project that builds successfully using VSTS. The issue is that unit tests aren't being discovered when the project is built. I know this is similar to this post, but I wanted to bring up more details in case someone has a good idea after seeing this description.
This is a summary of the logs:
##[warning]Project file(s) matching the specified pattern were not found.
##[section]Finishing: Test.
I'm concerned about the minimatch pattern used here. It seems it is looking for a Tests folder and then any file that ends in .csproj.
The default agent queue is Hosted VS2017, as indicated by #starain-MSFT in the previous post.
The solution structure is pretty basic:
A .NET Core project with a model class.
An MS Unit Test project (that contains a reference to the mentioned class).
A [TestClass] with a single [TestMethod] that passes.
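For reference, here is a minimal sketch of what that test project contains. The class and member names are invented for illustration, and the model class is inlined so the snippet is self-contained (in the real solution it lives in the referenced .NET Core project):

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    namespace MyApp.Tests
    {
        // Inlined stand-in for the model class that actually lives in the .NET Core project.
        public class Person
        {
            public string FirstName { get; set; }
            public string LastName { get; set; }
            public string FullName => $"{FirstName} {LastName}";
        }

        [TestClass]
        public class PersonTests
        {
            [TestMethod]
            public void FullName_CombinesFirstAndLastName()
            {
                var person = new Person { FirstName = "Ada", LastName = "Lovelace" };
                Assert.AreEqual("Ada Lovelace", person.FullName);
            }
        }
    }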
Well, it turned out that my concern was the key factor in solving my issue.
I did a little reverse engineering with an MVC project; the default minimatch pattern is different for that type of project: **\$(BuildConfiguration)\*test*.dll !**\obj\**
You can learn more about minimatch here.
I just wanted to look for a .csproj file whose name contains the word Tests, so I changed the pattern to **/*Tests*.csproj instead of **/*Tests/*.csproj.
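To see the practical difference between the two patterns, here is a rough way to check them locally. This uses the Microsoft.Extensions.FileSystemGlobbing NuGet package as an approximation of the build task's minimatch behaviour, and the example paths are invented:

    using System;
    using Microsoft.Extensions.FileSystemGlobbing;

    class GlobCheck
    {
        static void Main()
        {
            // Invented example paths, relative to the repository root.
            var paths = new[]
            {
                "src/Model.Tests/Model.Tests.csproj",  // file name contains "Tests"
                "test/UnitTests/Calculator.csproj"     // only the folder name contains "Tests"
            };

            foreach (var pattern in new[] { "**/*Tests/*.csproj", "**/*Tests*.csproj" })
            {
                var matcher = new Matcher();
                matcher.AddInclude(pattern);

                foreach (var path in paths)
                {
                    // Match(string) evaluates a single candidate file path against the pattern.
                    bool hit = matcher.Match(path).HasMatches;
                    Console.WriteLine($"{pattern,-22} {path,-40} -> {hit}");
                }
            }
        }
    }

The first pattern keys on the parent folder name ending in Tests, while the second keys on the project file name containing Tests, which is what I actually wanted.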
Now I'm able to see that my unit tests are being executed right away when there is a new build.
I hope that my issue and its resolution help save other people's time!
I'm looking for a piece (or a set) of software that allows me to store the outcome (ok/failed) of an automated test along with additional information: the test protocol (to see the exact reason for a failure) and the device state at the end of a test run (as a compressed archive). The results should be accessible via a web UI.
I don't need fancy pie charts or colored graphs. A simple table is enough. However, the user should be able to filter for specific test runs and/or specific tests. The test runs should have a sane name (like the version of the software that was tested, not just some number).
Currently the build system runs unit tests based on cmake/ctest, whose results should be included. Furthermore, integration testing will be done in the future, where the actual tests will run on embedded hardware controlled via network by a shell script or similar. The format of the test results is therefore flexible and could be something like subunit or TAP, if that helps.
I have played around with Jenkins, which is said to be great for automated testing, but the plugins I tried in order to make that work don't seem to interact well. To be specific: the test results analyzer plugin doesn't show tests imported with the TAP plugin, and the names of the test runs are just a meaningless build number, although I used the Job Name Setter plugin to set a sensible job name. The filtering options are limited, too.
My somewhat uneducated guess is that I'll stumble upon similar issues if I try other tools of the same class as Jenkins.
Is anyone aware of a solution for my described testing scenario? Lightweight/open source software is preferred.
Let's say the language I use is Java and I make a specific commit: is there a way to tell the code coverage of that commit, or even whether it was measured at all? When a user commits to the repository, I want to have multiple statistics about the commit, including the unit test coverage for that specific commit. Is this even possible? Are there already such tools out there, or do I need to think about something custom?
I am using Git hosted under Gitblit, and I have already seen Gitblit's hook mechanism, so it is just a matter of how to do this.
I am working on that same problem now, and I've come up with this approach:
My projects are Java and Maven based. Every one of them inherits from a common parent.
In the common parent pom I've set the jacoco-maven-plugin at integration-test phase to calculate the unit tests coverage. The produced report is stored in the same project along with the source code.
Every time a developer commits to the Source Control System, he/she also commits the report, so that I can query the coverage state for every commit that has been made.
This is already done and working. My next step will be to develop a tool to query the Source Control System (which is SVN in my case) to get the coverage statistics: by date, project, and even by user.
Note that the report generated by Jacoco is absolute, not incremental. If you want it incremental (as I want, too), you have to compute the difference between one report and the previous one.
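The diff itself is just a matter of reading the aggregate counters out of two committed reports and subtracting. As an illustration only, here is a small C# sketch that compares the report-level LINE counter of two jacoco.xml files; the file paths are hypothetical, and in a Maven/Java setup you would more likely write the equivalent in Java:

    using System;
    using System.Linq;
    using System.Xml;
    using System.Xml.Linq;

    class CoverageDiff
    {
        // Reads the report-level <counter type="LINE" missed="..." covered="..."/> element
        // from a jacoco.xml report and returns covered / (covered + missed).
        static double LineCoverage(string reportPath)
        {
            // Jacoco reports carry a DOCTYPE, so tell the reader to ignore the DTD.
            var settings = new XmlReaderSettings { DtdProcessing = DtdProcessing.Ignore };
            using (var reader = XmlReader.Create(reportPath, settings))
            {
                var report = XDocument.Load(reader).Root;
                var counter = report.Elements("counter")
                                    .First(c => (string)c.Attribute("type") == "LINE");
                double covered = (double)counter.Attribute("covered");
                double missed = (double)counter.Attribute("missed");
                return covered / (covered + missed);
            }
        }

        static void Main(string[] args)
        {
            // Hypothetical arguments: path to the previous commit's report and the current one.
            double previous = LineCoverage(args[0]);
            double current = LineCoverage(args[1]);
            Console.WriteLine($"Previous: {previous:P1}  Current: {current:P1}  Delta: {(current - previous):P1}");
        }
    }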
To query SVN I'm currently using org.tmatesoft.svnkit:svnkit:1.3.8.
The same procedure can be applied to other code metrics: rules compliance, etc.
I am working on an existing web application (written in .NET) which, surprisingly, has a few bugs in it. We track outstanding issues in a bug tracker (JIRA, in this case), and we have some test libraries in place already (written in NUnit).
What I would like to be able to do is, when closing an issue, to be able to link that issue to the unit-/integration-test that ensures that a regression does not occur, and I would like to be able to expose that information as easily as possible.
There are a few things I can think of off-hand, that can be used in different combinations, depending on how far I want to go:
copy the URL of the issue and paste it as a comment in the test code;
add a Category attribute to the test and name it Regressions, so I can select regression tests explicitly and run them as a group (but how to automatically report on which issues have failed regression testing?);
make the issue number part of the test case name;
create a custom Regression attribute that takes the URI of the issue as a required parameter (a sketch of this follows the list);
create a new custom field in the issue tracker to store the name (or path) of the regression test(s);
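To make the Regression attribute idea (and the Regressions category) concrete, here is a rough sketch of how it could look with NUnit; the attribute name, fixture, and issue URL are all invented, and it simply builds on NUnit's PropertyAttribute so the issue link ends up in the test result XML:

    using NUnit.Framework;

    // Tags a test as the regression test for a specific tracked issue. Because it derives from
    // PropertyAttribute, the issue URL is recorded as a property in the NUnit result XML, which a
    // report can later pull out to show which issues have failing regression tests.
    public class RegressionAttribute : PropertyAttribute
    {
        public RegressionAttribute(string issueUrl) : base("Issue", issueUrl) { }
    }

    [TestFixture]
    public class InvoiceTests
    {
        [Test]
        [Category("Regressions")]
        [Regression("https://jira.example.com/browse/PROJ-123")] // hypothetical issue URL
        public void Total_IsNotNegative_WhenAllLinesAreRefunded()
        {
            // ...exercise the code path that originally contained the bug...
            Assert.Pass();
        }
    }

The Regressions category lets these tests be included or excluded as a group (for example with the NUnit 3 console's --where "cat == Regressions"), and the recorded Issue property gives a hook for reporting failed regressions back against the tracker.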
The ideal scenario for me would be that I can look at the issue tracker and see which issues have been closed with regression tests in place (a gold star to that developer!), and to look at the test reports and see which issues are failing regression tests.
Has anyone come across, or come up with, a good solution to this?
I fail to see what makes regression tests different from any other tests. Why would you want to run only regression tests, or everything except regression tests? If a regression or non-regression test fails, it means that specific functionality is not working, and the product owner has to decide how critical the problem is. If you stop differentiating tests, then simply do code reviews and don't allow any commits without tests.
In case we want to see what tests have been added for specific issues, we go to the issue tracking system. There are all the commits, ideally only one (thanks to squashing); the issue tracker is connected to Git, so we can easily browse the changed files.
In case we want to see (for whatever reason) what issue is related to some specific test or line of code, we just give tests a meaningful business name, which helps in finding any related information. If the problem is more technical and we know we may need the specific issue, then we simply add the issue number in a comment. If you want to automate retrieval, just standardize the format of the comment (see the sketch below).
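For example, if the agreed format were a comment like // Issue: PROJ-123 next to the test (purely an invented convention here), a small scanner could build the test-file-to-issue mapping; a rough sketch:

    using System;
    using System.IO;
    using System.Text.RegularExpressions;

    class IssueCommentScanner
    {
        static void Main(string[] args)
        {
            // Assumed convention: a comment of the form "// Issue: PROJ-123" in the test source.
            var issuePattern = new Regex(@"//\s*Issue:\s*(?<key>[A-Z]+-\d+)");
            string testRoot = args.Length > 0 ? args[0] : ".";

            foreach (string file in Directory.EnumerateFiles(testRoot, "*Tests.cs", SearchOption.AllDirectories))
            {
                foreach (Match match in issuePattern.Matches(File.ReadAllText(file)))
                {
                    string key = match.Groups["key"].Value;
                    Console.WriteLine($"{key}\t{Path.GetFileName(file)}");
                }
            }
        }
    }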
Another helpful technique is to structure your program correctly. If every piece of functionality has its own package, and tests are packaged and named meaningfully, then it's also easy to find any related code.
I have made a few coded UI tests to test the application I am working on. I followed this post as guidance: How to run coded UI Tests from MTM, and also this post on how to create a fake build: How to Create a Fake Build Definition and a Fake Build.
So after I set up the infrastructure, I attempted to run the tests from Microsoft Test Manager (MTM). While MTM doesn't break or throw errors, the result it reports back to me is that it cannot find the Coded UI Test recordings.
Upon looking at the contents of the first link (How to run coded UI tests from MTM), I noticed there was a small piece of text saying "You must create a build definition that just has a share location added that is where your assemblies for your tests are located."
What exactly does that mean? How do I do this? My build definition drops the assemblies in \\machine\share, so that's where I copied the coded UI tests, but I still get the same result.
Is there anything I am missing?
Thanks,
Martin
Well, I don't like to leave questions unanswered, so I thought I'd come back to this one.
The build definition's drop folder is what defines where MTM is going to look for the coded UI test recordings/DLLs.
In order for this to work, one simply has to build and compile their tests/coded UI tests and put the resulting assemblies and resources in the folder defined in the build definition.
Cheers!