Run MsTest separately from Visual Studio - unit-testing

I am looking for a tool, either command line or GUI, which copies the changed assemblies from the solution to a separate folder, so that a new build afterwards does not influence the test run. Afterwards it should execute a configurable set of tests (only certain assemblies, filtered by certain TestCategories). When it is finished, the test results should be shown.
Is there a tool or a set of tools which does these tasks? MsTest.exe can run the tests but does not copy all the necessary assemblies.
Using the MsTest command line tool (or the Visual Studio runner) combined with a post-build step that copies the assemblies locally is not a good option, because it would slow down every build, and I will not run the tests locally every time I build the solution. I could write a little script which copies the necessary assemblies locally beforehand, but what I was hoping for is a tool which does all this without me having to write a script.

Copy files
To copy assemblies from the command line you can use the standard copy tools; xcopy or copy will suffice here. Copying only the "changed" assemblies is harder, unless you're doing incremental builds.
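For example, a minimal sketch using xcopy's /D switch, which skips files that are not newer than the copy already in the destination (the paths are placeholders for your own layout):
rem Copy test assemblies and config files to a side folder; /D skips unchanged files.
xcopy "MySolution\Tests\bin\Release\*.dll" "C:\testrun\" /D /Y /I
xcopy "MySolution\Tests\bin\Release\*.config" "C:\testrun\" /D /Y /I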
MsBuild is the other tool that you can use to copy the files. You can create a post-build event or a custom target that doesn't run inside Visual Studio; it would only run when you set a specific condition or call the target explicitly from the command line.
The handy fact is that MsBuild at least knows which files are required to build your project. Do note, though, that MsBuild may not know exactly what is needed to run your tests: certain config files and dependencies of 3rd party references may be needed too, yet not be part of the project.
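As a rough sketch, such a target could live in the test project's project file and be invoked explicitly with msbuild MyTests.csproj /t:CopyForTestRun. The target name and destination folder are illustrative, and @(ReferenceCopyLocalPaths) assumes the standard managed build targets have run:
<!-- Copies the built test assembly plus its copy-local references,
     skipping files that have not changed since the last copy. -->
<Target Name="CopyForTestRun" DependsOnTargets="Build">
  <Copy SourceFiles="$(TargetPath);@(ReferenceCopyLocalPaths)"
        DestinationFolder="C:\testrun"
        SkipUnchangedFiles="true" />
</Target>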
I'm not aware of any other tools. I'd personally opt to write a simple script, as it isn't hard to do and maintain.
Run the tests
There is the command-line runner vstest.console.exe, which will happily run MsTest-based tests.
The syntax looks like this:
vstest.console.exe assemblyone.dll assembly2.dll /TestCaseFilter:"<expression>"
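For example, to run only the tests in a given category (the assembly names and category here are placeholders; in recent Visual Studio versions the runner lives under Common7\IDE\CommonExtensions\Microsoft\TestWindow in the Visual Studio installation folder):
vstest.console.exe Tests.Unit.dll Tests.Integration.dll /TestCaseFilter:"TestCategory=Nightly"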
Note 1
Every time you run MsBuild to build your project, even if all the MSIL generated is the same, your assembly will be different, as the compiler assigns a new unique GUID (the module version ID) to the file.
So unless your build is incremental and detects that there is no need to actually build the file to begin with, it's going to generate unique files every time.
Note 2
Indeed, as others mention, Continuous Integration tools like TFS Build or TeamCity can help you build and run your tests and create a nice report for you.
More advanced tools such as Microsoft Release Management or Octopus Deploy can run your tests during a deployment workflow when you're doing continuous delivery or, even better, continuous deployment.

I think you are describing a continuous integration system.
You can use MS Team Build or the online version of it from visualstudio.com.
The other popular option is TeamCity.
Both of these allow you to configure them so that they trigger a build that runs your tests (or a subset of tests based on categories). If the build fails or if a test fails, you have the option to react to this (e.g. reject the check-in).

Related

TFS Build - Sending MSBuild Proj Values into the vstest runtime

We have unit tests that are built and run during our TFS Build process. It is a very large project with a complex build. There are parameters used in the msbuild .proj files that get passed down to child projects, etc.
Sometimes the unit test runtime needs some of these .proj parameters (which can only be known at build time) in order to function correctly.
My predecessor managed this by creating a file at build time using post-build events (e.g. ECHO SomethingINeedToKnow=True >> somefile) in the unit test project's vcxproj file.
Then at runtime, the unit test DLL's AssemblyInitialize event looks for this file and parses the needed values, injecting them into the test run. It's really quite ingenious.
However, the senior architects do not like hacks and they want everything to be done the Microsoft way, if at all possible.
So my question is this: is there a native, Microsoft-sanctioned way to pass values inherited by the vcxproj at build time into the unit test runtime?
I think the answer is no, and that the current solution is the best solution, but I want to make sure.
p.s. The code under test is typically unmanaged C++ and the unit test projects are managed C++ using namespace Microsoft::VisualStudio::TestTools::UnitTesting (10.0 I believe)
I know that supplying run-time parameters to tests can be achieved through a vNext build. I am just not sure how to send .proj values into the vstest runtime.
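For reference, the vNext mechanism works through a .runsettings file; a minimal sketch (the parameter name and value here are illustrative) looks like this, with the values readable from TestContext at run time, assuming a test framework/adapter that surfaces them:
<RunSettings>
  <TestRunParameters>
    <Parameter name="SomethingINeedToKnow" value="True" />
  </TestRunParameters>
</RunSettings>
A build definition can override these parameter values, but whether that reaches far enough back into the .vcxproj build-time values is exactly the open question.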

Teamcity NUnit Tests - No assemblies found

I am trying to get TeamCity set up for a project. I want to run a scheduled build that includes a step where NUnit Tests are run.
My NUnit build step looks like this:
Runtime: NUnit-2.6.3 v4.0 MSIL
Run tests on: **/Tests/*.dll
Execute: If all previous steps finished successfully
But every time I run the build I get an error saying:
No assemblies were found.
Why is this happening and how can I fix it? Also, conceptually, this build step will happen BEFORE the project is actually built. But how will there be any Test DLLs (assemblies) unless the project is built in the first place?
I would suggest that you build your projects before running the tests. The most common way to achieve this is to have separate configurations for building the code and for running the tests.
Your build configuration would generate artifacts (most likely containing the assemblies).
The test run configuration would extract this artifact package via an artifact dependency, and then in its build step run the tests from the specific assemblies.
This is the most common approach, and with it you do not have to worry about files in the file system; TeamCity's snapshot and artifact dependencies will take care of this (when properly configured).
If you need an example how to achieve this, let me know.
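As a rough sketch (all names here are illustrative): in the build configuration, publish the test assemblies as an artifact with an artifact path such as
Tests/bin/Release => test-assemblies.zip
and in the test configuration add an artifact dependency that extracts the package, e.g.
test-assemblies.zip!** => tests
Then point the NUnit build step at tests/*.dll.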

How to unit test WIX merge modules?

I am building merge modules with WiX. The batch files which call the WiX tools to generate the merge modules from *.wxs files are run by my daily build.
I am trying to figure out how I could automate the testing of these merge modules. Things I would like to test are whether the merge module installs the required files, whether the versions of the files are correct, etc.
One idea I have is to write a script (maybe VBScript) to install the merge module at a temporary location and check if it has installed everything correctly. However, I am not sure if this is a correct way to do it.
Are there any standard ways of writing unit tests for merge modules? Any ideas around how to go about this are welcome.
When you test an installer, the primary goals are to verify that:
1. When installing the msi file, msiexec reports success (i.e. return code 0).
2. After running the installer, your application can be started and works as expected.
The first point should be easy enough to do, though if you want to keep the test automated you can only test the non-interactive install. Here is an example how to do that from a batch file or on the command line:
msiexec /i myinstaller.msi /passive || echo ERROR: non-zero return code!
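For unattended runs on a build machine you may prefer a fully silent install plus a verbose log to diagnose failures:
msiexec /i myinstaller.msi /qn /l*v install.log || echo ERROR: non-zero return code!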
The second point is a bit more complicated. I think the best way to achieve this is to build some integration tests into your application, and invoke those tests after the install:
"c:\program files\mystuff\app.exe" /selftest || echo ERROR: non-zero return code!
In your case you're testing a merge module rather than an entire installer, but the same approach can be used. You will just have to do the additional work of building a "self test" application and an installer for it which consumes your merge module.
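A minimal sketch of such a consuming installer in WiX (the IDs, names, and file paths are placeholders, and the usual Product/Package boilerplate is omitted):
<!-- Wraps the merge module under test in a throwaway MSI. -->
<Directory Id="TARGETDIR" Name="SourceDir">
  <Directory Id="ProgramFilesFolder">
    <Directory Id="INSTALLDIR" Name="MergeModuleTest">
      <Merge Id="ModuleUnderTest" SourceFile="MyModule.msm"
             Language="1033" DiskId="1" />
    </Directory>
  </Directory>
</Directory>
<Feature Id="TestFeature" Level="1">
  <MergeRef Id="ModuleUnderTest" />
</Feature>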
I've often thought about this but haven't come up with anything that I like. First, I would call this integration testing, not strictly unit testing. Second, the problem of "right files" and "right versions" is difficult to define.
I'm often tempted to say WiX/MSI is just data that defines what the installer is to do. It's declarative in nature and therefore, by definition, correct. It's tempting to want to create yet another set of data that cross-checks the implementation of the installer, but what exactly does that accomplish that the first data set didn't already represent? It's hard enough sometimes to maintain what files go into an application, let alone to maintain a second list of files.
I continue to think about this and wonder if there's an approach that would make sense but at this point I just do my normal MSI validation.
You could use scripts or another small console program that will do the job, just like you suggested.
With your build process you could also build a basic setup that just uses the merge module. Your script could just install this, run the other script or console app that will check if all the files are in place, that they have the correct version, that all the registry keys are installed, etc. After all the output is gathered your main script would just uninstall everything. You could also run the check program after uninstalling to be sure that everything is gone and that the uninstall works correctly. I would recommend this if, for example, you have custom actions set for install and uninstall.
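The checks themselves can stay very simple; for example, a batch sketch along these lines (the paths, file names, and registry keys are placeholders):
rem Verify that an expected file was installed.
if not exist "%ProgramFiles%\MyApp\MyLib.dll" echo ERROR: MyLib.dll missing
rem Verify that an expected registry value exists.
reg query "HKLM\Software\MyCompany\MyApp" /v Version || echo ERROR: registry value missing
Version checks are easier from a small console app that reads the file version info directly.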
Ideally this whole install / uninstall process should be done on a separate machine, or a virtual one, in order to avoid messing up the build server.
You'll have some work to do with all these scripts, but once you have them, you'll be able to reuse them with little change for any future merge module projects or plain setup projects.
Hope this helps,
Adrian.

How do you do C++ autotest?

As far as autotest is concerned, how do you do autotest for C++ programs? Are there any autotest frameworks that can be used for unit tests and integration tests?
Are you talking about Autotest à la Ruby Autotest? If so, maybe Watchr would work for you. Yes, you would need to install the Ruby runtime on your development machine, but it looks like it can trigger pretty much anything that can be done on the command line when the file system changes. For example, if you wanted Watchr to build and run your C++ tests anytime a .c/.cpp/.h/.hpp file in your source tree changed, you could do something like this:
watch('src/(.*)\.(h|hpp|c|cpp)$') {system "build/buildAndRunTests.bat"}
This particular command obviously makes some assumptions about how your build process is set up (and obviously that you're on Windows), but that should be the gist of it. Our team configures our unit test projects with a post-build event that automatically runs the built unit test binary, so we can just trigger that part of our build process within the buildAndRunTests.bat script and have it print the results to the command-line. It might take some tweaking but it looks like Watchr may be a good choice. I'll update this response when I give it a shot (hopefully early next week).
UPDATE: I just tried this with one of my C# projects and got it working there, so theoretically it should work with C++ projects as well.
autotest.watchr:
watch('./.*/.*\.cs$') {system "cd build && buildAndRunTests.bat && cd ..\\"}
Note the $ at the end of the regular expression. This is important because there are a lot of artifacts generated in the source tree at build time and if any of them match the string .cs it will trigger another run, effectively causing an infinite loop. Conceivably the same thing will happen if you generate/modify any source files at build time so you may have to find a way to compensate.
buildAndRunTests.bat:
pushd ..\
rem Build test project
"C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE\devenv.com" Tests.Unit\Tests.Unit.csproj /rebuild Release
popd
rem Navigate to the directory containing the built files
pushd ..\Tests.Unit\bin\Release
rem Run the tests through nunit-console
..\..\..\Dependencies\NUnit-2.5.5-bin\net-2.0\nunit-console.exe Tests.Unit.dll /run=Tests.Unit
popd
Then, in a separate console window, just navigate to your project directory and run the following command (this assumes autotest.watchr is at the top of your project tree; see below):
watchr autotest.watchr
Now, when any .cs files change in the source tree it will run the buildAndRunTests.bat script automatically. This is just an example from my local machine so it likely won't work verbatim on yours, but you should be able to tweak it to your needs.
This is the directory structure for reference:
/Project
    /build
        buildAndRunTests.bat
    /Tests.Unit
    /Dependencies
        /NUnit-2.5.5-bin
            /net-2.0
                nunit-console.exe
    autotest.watchr
I hope this helps.
You can use NUnit to achieve this, but there may be better ways. With NUnit you are writing test classes in managed C++/CLI which is calling your C++ code, which presumably runs as unmanaged. So for this option, some of your C++ code now runs as managed just for the sake of using NUnit. One may debate the "purity" of this approach. Another problem with this is attaching a debugger to NUnit (of course with both managed/native enabled) and trying to step through the managed C++/CLI bits in a sensible manner. Despite this, our office has been using NUnit for C++ unit and integration testing for a while now.
Just saw @Patrick's answer about CppUnit; I will have to look at that.
The xUnit family can be used for unit tests. It exists for plain C++ code (CppUnit) and for .Net code (NUnit).
Boost has a test library (Boost.Test) you can have a look at, among many others.
Last time when I did some work in Qt, I've used Qt's QTestLib for unit tests. It did work well for my lo-fi needs. http://doc.qt.nokia.com/4.6/qtestlib-manual.html

Running unit tests on Team Foundation Server (TFS) builds

What are the steps to get Team Foundation Server running unit tests when a given build runs?
What are the caveats / pitfalls / workarounds a dev or sysadmin should be aware of when setting up a TFS server to do this for the first time?
What are common troubleshooting steps for unit test problems during builds?
It depends on which version of TFS you are running, so I will assume it is 2008.
Firstly, you must have Team Edition for Testers installed on the computer that will act as your build agent, as stated in How To: Create a Build Definition
There are a couple of ways to tell Team Build to run tests for your build.
Unit tests can be run from a defined Test List within the Solution being built. This list is referenced by the build definition and all tests within the chosen list(s) are executed. More info here
Wildcard test execution is also available by defining a wildcard mask (e.g. Test*.dll) that instructs Team Build to run any tests present in assemblies that match the mask. This is done when defining the build definition as well.
Things to note:
If you intend to use the wildcard method and want to enable code coverage for your test configuration, you must add the following to your build definition file to enable it.
<RunConfigFile>$(SolutionRoot)\TestRunConfig.testrunconfig</RunConfigFile>
See my previous question on this for more info here
If you don't want to use test configs (a pain in the ass to manage), just run all the tests in a .dll by adding this to the build config:
<ItemGroup>
<TestContainerInOutput Include="MyProject.UnitTests.dll" />
</ItemGroup>
The whole process is smooth and fairly simple. You can inspect the unit tests that failed on the build server by opening the test result file locally (a bit of a pain), but generally you'll just run the unit tests locally and see the results immediately.
If you are used to NUnit, you may opt to sort the tests by class name; it gives a similar view.
Be careful with code coverage: it makes complete copies of your binaries on compile. If your binaries are sufficiently large and you compile often, it will eat through drive space quickly.
http://msdn.microsoft.com/en-us/library/cc981972(v=vs.90).aspx
I like this documentation as it gives you a complete walkthrough, from:
Creating the project
Creating the unit test project
To configuring Team Build to run the unit tests