I am trying to display unit test results and code coverage reports from Visual Studio 2012 in the CruiseControl.NET build reports. The pieces are the following:
MSBuild - to build the project
vstest.console.exe - to execute visual studio unit tests and code coverage tools via the command line
custom console application to convert coverage report to XML
My problem is how to control the output file name for vstest.console.exe. I am not finding any way to do this. My only solution at this point is to write a custom script that finds the coverage file and the TRX file and renames them to known values, so that CruiseControl.NET can then find the files properly.
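Something along these lines is what I have in mind for that rename step. It is only a sketch: it assumes the results land in a TestResults folder under the working directory, the test assembly name is a placeholder, and the same idea would apply to the .coverage file.

rem Run the tests; vstest.console.exe drops a timestamped .trx into .\TestResults
vstest.console.exe MyTests.dll /Logger:trx /EnableCodeCoverage

rem Copy the newest .trx to a fixed name that CruiseControl.NET can be pointed at
for /f "delims=" %%f in ('dir /b /o-d TestResults\*.trx') do (
    copy /y "TestResults\%%f" "TestResults\results.trx"
    goto trxdone
)
:trxdone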
Any help would be appreciated. Thanks.
The command line looks very limiting:
http://msdn.microsoft.com/en-us/library/vstudio/jj155796.aspx
"The TRX logger doesn't support any parameters (unlike the TFS publisher logger)."
From:
Specifying results filename for vstest.console.exe
Here is a general tip.
One way to think of CC.NET is like this: It's a big, fancy "Msbuild.Exe" executor.
So if you can write up your logic in an MSBuild (.proj) file, you can get CC.NET to call it.
1. CC.NET calls a source-control retrieve task.
2. In that retrieve, there is a .proj file.
3. You get CC.NET to call "msbuild.exe MySolutionBuild.proj"
4. Have the .proj file run Unit-Tests, create xml, create artifacts (.zip(s) or .msi(s), etc)
5. After the build, have CC.NET pull in the results (usually xml with File-Merge) and have CC.NET send out emails (publishers).
If you do it this way and you ever move to TFS (or Jenkins or another CI server), you'll minimize the transition effort.
If you rely very heavily on CC.NET proprietary commands, you can get it to work, but it's harder to maintain IMHO.
Take a look at this post
http://rubenwillems.blogspot.be/2011/09/setting-up-ccnet-in-combination-with.html
It shows the steps needed.
Look at steps 4 and 5 there; they cover unit testing and coverage.
I have a TFS 2015 build which builds one of our applications (it's an ASP.NET Web API application). As part of the build process it runs our unit tests.
I have a Visual Studio Test build step which runs these unit tests and they all pass okay.
I then run dotCover from this same build to determine code coverage (no we don't use the built-in code coverage as we don't have enterprise licences). However, when run from dotCover all the same unit tests fail.
I use a script step to run a batch file which invokes dotCover as follows.
E:\JetBrains\Installations\dotCover05\dotCover.exe analyse coverage.xml /LogFile=dotcover.log
The dotCover log file doesn't seem to give any indications as to why the unit tests have failed.
Any ideas why the unit tests pass when run from the Visual Studio Test build step and then fail when run from dotCover?
It seems that the problem is related to the fact that my build uses an XML file to hold certain data values. This XML file is found by VSTest when run under TFS but not by dotCover.
When dotCover runs, it creates a TestResults folder into which it copies all the files required to run the unit tests. All files are copied except the XML file. I have set the file to "Copy always", so I can't understand why this file isn't copied. I tried copying the file manually from a batch file, but the folder structure is created by dotCover, so it doesn't exist until I actually run the code coverage.
The solution is to decorate my test classes with the DeploymentItem() attribute.
[TestClass]
[DeploymentItem("File.xml")]
public class MyTests { /* class name is illustrative; the attributes are the fix */ }
This has resolved my problem.
Make sure your build service account has enough permissions to run dotCover.exe. Judging by your path E:\JetBrains\Installations\dotCover05\dotCover.exe, it seems you didn't install dotCover for all users on the build agent; an all-users install should live under %ProgramFiles(x86)%, not %LOCALAPPDATA%\JetBrains\Installations.
Try the CoreInstructionSet parameter in your dotCover invocation as a workaround for your situation.
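With the command from the question it would look roughly like this (the parameter name is taken from the suggestion above; the exact syntax may differ between dotCover versions, so treat this as a sketch and check the console runner's help):

E:\JetBrains\Installations\dotCover05\dotCover.exe analyse coverage.xml /LogFile=dotcover.log /CoreInstructionSet=x86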
After doing this, try running the build again.
If the failure is caused by the file copying, you can also try this method: Shadow-copying in dotCover: if your NUnit tests fail during continuous testing.
I am looking for a tool, either command-line or GUI, which copies the changed assemblies from the solution to a separate folder, so that a new build afterwards does not influence the test run. Afterwards it should execute a configurable set of tests (only certain assemblies, filtered by certain TestCategories). When it is finished, the test results should be shown.
Is there a tool or a set of tools which does these tasks? MSTest.exe can run the tests but does not copy all the necessary assemblies.
Using the MSTest command-line tool (or the Visual Studio runner) in combination with a post-build step that copies the assemblies locally is not a good option, because it would slow down every build, and I will not run the tests locally every time I build the solution. I could write a little script which copies the necessary assemblies locally beforehand, but what I was hoping for is a tool which does all this without me having to write a script.
Copy files
To copy assemblies from the command line you can use the standard copy command. Copying just the "changed" assemblies is harder, unless you're doing incremental builds.
xcopy or copy will suffice here.
MSBuild is the other tool that you can use to copy the files. You can create a post-build event or a custom target that doesn't run inside Visual Studio; it would only run when you set a specific condition or call the target explicitly from the command line.
The handy fact is that MSBuild at least knows which files are required for the build. Do note, though, that MSBuild may not know exactly what is needed to run your tests. Certain config files and dependencies of third-party references may be needed too, but not be part of the project.
I'm not aware of any other tools. I'd personally opt to write a simple script, as it isn't hard to do and maintain.
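A bare-bones sketch of such a script, with the paths purely as placeholders, could look like this:

rem Mirror the current build output into a separate folder so a later build can't disturb the test run
robocopy "C:\src\MySolution\Tests.Unit\bin\Release" "C:\TestDrop" /MIR
if %errorlevel% geq 8 exit /b 1
rem Then run the tests from C:\TestDrop as shown in the next section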
Run the tests
There is the command-line tool vstest.console.exe, which will happily run MSTest-based tests.
The syntax looks like this:
vstest.console.exe /TestCaseFilter:[ expression ] assemblyone.dll assembly2.dll
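For example, to run everything in those two assemblies except tests tagged with a particular category (the category name is only an example):

vstest.console.exe assemblyone.dll assembly2.dll /TestCaseFilter:"TestCategory!=Integration"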
Note 1
Every time you run MSBuild to build your project, even if all the MSIL generated is the same, the assembly will be different, as the compiler assigns a new unique GUID (the MVID) to the file.
So unless your build is incremental and detects that there is no need to actually build the file to begin with, it's going to generate unique files every time.
Note 2
Indeed, as others mention, continuous integration tools like TFS Build or TeamCity can help you build and run your tests and create a nice report for you.
More advanced tools such as Microsoft Release Management or Octopus Deploy can run your tests during a deployment workflow when you're doing continuous delivery or, even better, continuous deployment.
I think you are describing a continuous integration system.
You can use MS Team Build or the online version of it from visualstudio.com.
The other popular option is TeamCity.
Both of these allow you to configure them so that they trigger a build that runs your tests (or a subset of tests based on categories). If the build or a test fails, you have the option to react to this (e.g. reject the check-in).
I have been banging my head on a brick wall that seems to be easily worked around for everyone except me.
I want to set up CSS and JS compression using a standard build on Team Foundation Server 2010. Below is what I've tried so far, without success. I am looking for a magic helping hand to guide me into setting this up the way professionals (SO is full of them) believe it should be done.
http://yuicompressor.codeplex.com/releases/view/46679 (download demo using post-build events)
This method looked promising as it did exactly as promised when you build your project in Visual Studio.
My msbuild Post-build command:
$(MSBuildBinPath)\msbuild.exe
"$(ProjectDir)MSBuild\MSBuildSettings.xml"
/p:CssOutputFile="$(TargetDir)..\Content\StylesSheetFinal.css"
/p:JavaScriptOutputFile="$(TargetDir)..\Scripts\JavaScriptFinal.js"
However when the build is run by TFS I get a lot of errors like these:
D:\Builds\3\CKB 2010_Build_CP\Sources\CKB 2010\My.Name.Space\MSBuild\MSBuildSettings.xml (61): Failed to save the compressed text into the output file [D:\Builds\3\CKB 2010_Build_CP\Binaries..\Content\StylesSheetFinal.css]. Please check the path/file name and make sure the file isn't magically locked, read-only, etc..
So clearly the problem is that the syntax in the post-build command is wrong, but I can't figure out how to make it work for both local and TFS builds.
Update 2011-08-17
As noted by Edward Thompson, I've tried adding a backslash to the path:
$(MSBuildBinPath)\msbuild.exe
"$(ProjectDir)MSBuild\MSBuildSettings.xml"
/p:CssOutputFile="$(TargetDir)\..\Content\styles.min.css"
/p:JavaScriptOutputFile="$(TargetDir)\..\Scripts\scripts.min.js"
And the result is this:
Failed to save the compressed text into the output file [D:\Builds\3\CKB 2010_Build_CP\Binaries\\..\Content\styles.min.css]. Please check the path/file name and make sure the file isn't magically locked, read-only, etc..
The problem is the difference in values with which TFS and Visual Studio run the msbuild command.
These are the steps I have taken to get proper YuiCompressor integration with Visual Studio 2010 and Team Foundation Server 2010.
In your desired project add a folder named 'MSBuild'
In this folder you should extract the files you downloaded from the YuiCompressor project on CodePlex
Set the properties of these files like this:
Now open the MSBuildSettings.xml file and edit it according to the scripts and css files you want to have compressed. I have uploaded mine on pastebin since pasting it here caused problems with the editor.
Add the following postbuild event to your project. Note that the paths can differ for your environment.
IF "$(BuildingInsideVisualStudio)"=="" $(MSBuildBinPath)\msbuild.exe
"$(ProjectDir)MSBuild\MSBuildSettings.xml"
/p:CssOutputFile="$(TargetDir)_PublishedWebsites\$(ProjectName)\Content\styles.min.css"
/p:JavaScriptOutputFile="$(TargetDir)_PublishedWebsites\$(ProjectName)\Scripts\scripts.min.js"
IF "$(BuildingInsideVisualStudio)"=="true"
$(MSBuildBinPath)\msbuild.exe
"$(ProjectDir)MSBuild\MSBuildSettings.xml"
/p:CssOutputFile="$(TargetDir)..\Content\styles.min.css"
/p:JavaScriptOutputFile="$(TargetDir)..\Scripts\scripts.min.js"
Build the project and see if the files were created as expected.
Perform a check-in and watch the TFS build create the compressed files for you.
For debugging the TFS build, you'll find the details in the MSBuild log, which is linked inside the normal TFS build log.
I hope this helps someone out there. I couldn't find a decent guide anywhere so now there is one here! If you have other suggestions, feel free to add them or post them in the comments.
One thing that sticks out at me is that you're using $(TargetDir)..\ - which expands to \Binaries..\. I suspect that you don't have a Binaries.. directory, and that this is supposed to be $(TargetDir)\..\ (i.e., the parent of the Binaries directory).
I am using VC++ 2005 and 2008 on a project. Now I want to see if the unit test cases cover all the code, and I found a problem. We use Boost.Test for unit testing, and each file is designed to test a particular function or method. Each file is compiled into a separate executable.
I am able to view the results per executable in Visual Studio. What I am really interested in is to view the overall code coverage by all the tests combined. Is there a way to combine the code coverage results?
I don't know about Visual Studio's test coverage tools.
Our SD C++ Test Coverage Tool will combine test coverage vectors from a single instrumented set of source code, no matter how many times you compile/link it (as long as you don't change the source of the code being tested). This tool can be obtained for the Visual Studio dialect(s) of C++. SD's test coverage tools for other languages have this same property.
C++ Coverage Validator can combine results from different code coverage sessions. You can combine sessions interactively using the GUI or from the command line (so you can automate things).
Alternatively you could set up the automatic merging to a central session and get every code coverage session automatically merged into the central session.
As far as autotest is concerned, how do you do autotest for C++ programs? Are there any autotest frameworks that can be used for unit tests and integration tests?
Are you talking Autotest à la Ruby Autotest? If so, maybe Watchr would work for you. Yes, you would need to install the Ruby runtime on your development machine, but it looks like it can trigger pretty much anything that can be done on the command line when the file system changes. For example, if you wanted Watchr to build and run your C++ tests anytime a .c/.cpp/.h/.hpp file in your source tree changed you could do something like this:
watch('src/.*\.(h|cpp|hpp|c)$') {system "build/buildAndRunTests.bat"}
This particular command obviously makes some assumptions about how your build process is set up (and obviously that you're on Windows), but that should be the gist of it. Our team configures our unit test projects with a post-build event that automatically runs the built unit test binary, so we can just trigger that part of our build process within the buildAndRunTests.bat script and have it print the results to the command-line. It might take some tweaking but it looks like Watchr may be a good choice. I'll update this response when I give it a shot (hopefully early next week).
UPDATE: I just tried this with one of my C# projects and got it working there. So theoretically it should work with C++ projects as well.
autotest.watchr:
watch('./.*/.*\.cs$') {system "cd build && buildAndRunTests.bat && cd ..\\"}
Note the $ at the end of the regular expression. This is important because there are a lot of artifacts generated in the source tree at build time and if any of them match the string .cs it will trigger another run, effectively causing an infinite loop. Conceivably the same thing will happen if you generate/modify any source files at build time so you may have to find a way to compensate.
buildAndRunTests.bat:
pushd ..\
rem Build test project
"C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE\devenv.com" Tests.Unit\Tests.Unit.csproj /rebuild Release
popd
rem Navigate to the directory containing the built files
pushd ..\Tests.Unit\bin\Release
rem Run the tests through nunit-console
..\..\..\Dependencies\NUnit-2.5.5-bin\net-2.0\nunit-console.exe Tests.Unit.dll /run=Tests.Unit
popd
Then, in a separate console window just navigate to your project directory and run the following command (assumes autotest.watchr is at the top of your project tree, see below):
watchr autotest.watchr
Now, when any .cs files change in the source tree it will run the buildAndRunTests.bat script automatically. This is just an example from my local machine so it likely won't work verbatim on yours, but you should be able to tweak it to your needs.
This is the directory structure for reference:
/Project
/build
buildAndRunTests.bat
/Tests.Unit
/Dependencies
/NUnit-2.5.5-bin
/net-2.0
nunit-console.exe
autotest.watchr
I hope this helps.
You can use NUnit to achieve this, but there may be better ways. With NUnit you are writing test classes in managed C++/CLI which is calling your C++ code, which presumably runs as unmanaged. So for this option, some of your C++ code now runs as managed just for the sake of using NUnit. One may debate the "purity" of this approach. Another problem with this is attaching a debugger to NUnit (of course with both managed/native enabled) and trying to step through the managed C++/CLI bits in a sensible manner. Despite this, our office has been using NUnit for C++ unit and integration testing for a while now.
Just saw @Patrick's answer about CppUnit; I will have to look at that.
The xUnit family can be used for unit tests. It exists for plain C++ code (CppUnit) and for .NET code (NUnit).
Boost has a test library (Boost.Test) you can have a look at, among many others.
The last time I did some work in Qt, I used Qt's QTestLib for unit tests. It worked well for my lo-fi needs. http://doc.qt.nokia.com/4.6/qtestlib-manual.html