How to unit test WiX merge modules?

I am building merge modules with WiX. The batch files that call the WiX tools to generate the merge modules from *.wxs files are run by my daily build.
I am trying to figure out how I could automate the testing of these merge modules. Things I would like to test include whether the merge module installs the required files, whether the versions of the files are correct, etc.
One idea I have is to write a script (maybe VBScript) to install the merge module at a temporary location and check whether it has installed everything correctly. However, I am not sure if this is the right way to do it.
Are there any standard ways of writing unit tests for merge modules? Any ideas around how to go about this are welcome.

When you test an installer, the primary goals are to verify that:
1. When installing the msi file, msiexec reports success (i.e. return code 0).
2. After running the installer, your application can be started and works as expected.
The first point should be easy enough to do, though if you want to keep the test automated you can only test the non-interactive install. Here is an example of how to do that from a batch file or on the command line:
msiexec /i myinstaller.msi /passive || echo ERROR: non-zero return code!
The second point is a bit more complicated. I think the best way to achieve this is to build some integration tests into your application, and invoke those tests after the install:
"c:\program files\mystuff\app.exe" /selftest || echo ERROR: non-zero return code!
In your case you're testing a merge module rather than an entire installer, but the same approach can be used. You will just have to do the additional work of building a "self test" application and an installer for it which consumes your merge module.
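For example, here is a minimal sketch of that extra work using the WiX 3 command-line tools, assuming a hypothetical testsetup.wxs that pulls your merge module in via a Merge element:
candle.exe testsetup.wxs
light.exe testsetup.wixobj -out testsetup.msi
msiexec /i testsetup.msi /passive || echo ERROR: install failed!
"c:\program files\mystuff\app.exe" /selftest || echo ERROR: self test failed!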

I've often thought about this but haven't come up with anything that I like. First, I would call this integration testing, not strictly unit testing. Second, the problem of "right files" and "right versions" is difficult to define.
I'm often tempted to say WiX/MSI is just data that defines what the installer is to do. It's declarative in nature and therefore, by definition, correct. It's tempting to want to create yet another set of data that cross-checks the implementation of the installer, but what exactly does that accomplish that the first data set didn't already represent? It's hard enough sometimes to maintain the list of files that go into an application, let alone a second list of the same files.
I continue to think about this and wonder if there's an approach that would make sense but at this point I just do my normal MSI validation.

You could use scripts or a small console program to do the job, just as you suggested.
With your build process you could also build a basic setup that just consumes the merge module. Your script could install this, then run another script or console app that checks whether all the files are in place, whether they have the correct versions, whether all the registry keys are installed, etc. After all the output is gathered, your main script would uninstall everything. You could also run the check program after uninstalling, to be sure that everything is gone and that the uninstall works correctly. I would recommend this if, for example, you have custom actions that run on install and uninstall.
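A hedged batch sketch of such a main script; the setup name and the check program (checkfiles.exe and its switch) are placeholders for whatever you build:
msiexec /i testsetup.msi /qn || echo ERROR: install failed!
rem check files, versions and registry keys while the product is installed
checkfiles.exe || echo ERROR: installed state is wrong!
msiexec /x testsetup.msi /qn || echo ERROR: uninstall failed!
rem run the checks again to verify that everything is gone
checkfiles.exe /expectabsent || echo ERROR: leftovers found after uninstall!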
Ideally this whole install / uninstall process should be done on a separate machine, or a virtual one, in order to avoid messing up the build server.
You'll have some work to do with all these scripts, but once you have them you'll be able to reuse them, with minor changes, for any future merge module projects or plain setup projects.
Hope this helps,
Adrian.

Run MsTest separately from Visual Studio

I am looking for a tool, either command line or GUI, which copies the changed assemblies from the solution to a separate folder, so that a new build afterwards does not influence the test run. Afterwards it should execute a configurable set of tests (only certain assemblies, filtering on certain TestCategories). When it has finished, the test results should be shown.
Is there a tool or a set of tools which does these tasks? MsTest.exe can run the tests but does not copy all the necessary assemblies.
Using the MsTest command-line tool (or the Visual Studio runner) in combination with a post-build step that copies the assemblies locally is not a good option, because it would slow down every build, and I will not run the tests locally every time I build the solution. I could write a little script which copies the necessary assemblies beforehand, but what I was hoping for is a tool which does all this without me having to write a script.
Copy files
To copy assemblies from the command line you can use the standard copy command. Copying only "changed" assemblies is harder, unless you're doing incremental builds.
xcopy or copy will suffice here.
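For example (paths are placeholders), xcopy with the /D switch and no date copies only those source files that are newer than their counterparts in the destination, which approximates copying just the changed assemblies:
xcopy "Tests.Unit\bin\Release\*" "C:\testdrop\" /D /Y /I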
MsBuild is the other tool that you can use to copy the files. You can create a post-build event or a custom target that doesn't run inside Visual Studio; it would only run when you set a specific condition or call the target explicitly from the command line.
The handy fact is that MsBuild at least knows which files are required to build the project. Do note, though, that MsBuild may not know exactly what is needed to run your tests. Certain config files and dependencies of 3rd-party references may be needed too without being part of the project.
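As an alternative sketch, you can let MsBuild itself drop the build output into a separate folder by overriding the standard OutDir property from the command line (the project path is a placeholder; note that OutDir needs the trailing backslash):
msbuild Tests.Unit\Tests.Unit.csproj /t:Build /p:OutDir=C:\testdrop\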
I'm not aware of any other tools. I'd personally opt to write a simple script, as it isn't hard to do and maintain.
Run the tests
There is the command-line runner vstest.console.exe, which will happily run MsTest-based tests.
The syntax looks like this:
vstest.console.exe assemblyone.dll assembly2.dll /TestCaseFilter:"<expression>"
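For instance, to run only the tests in one assembly that carry a specific category (the assembly and category names here are hypothetical):
vstest.console.exe Tests.Unit.dll /TestCaseFilter:"TestCategory=Nightly"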
Note 1
Every time you run MsBuild to build your project, even if all of the generated MSIL is the same, the output assembly will be different, as the compiler assigns a new unique GUID (the module version ID) to the file.
So unless your build is incremental and detects that there is no need to actually build the file to begin with, it's going to generate unique files every time.
Note 2
Indeed, as others mention, Continuous Integration tools like TFS Build or TeamCity can help you build and run your tests and create a nice report for you.
More advanced tools such as Microsoft Release Management or Octopus Deploy can run your tests during a deployment workflow when you're doing continuous delivery or, even better, continuous deployment.
I think you are describing a continuous integration system.
You can use MS Team Build or the online version of it from visualstudio.com.
The other popular option is TeamCity.
Both of these allow you to configure them so that they trigger a build that runs your tests (or a subset of tests based on categories). If the build fails or if a test fails, you have the option to react to this (e.g. reject the check-in).

Is there a way to hash a library to ensure that scripts using the library are updated to the latest version?

For instance, I have created a WebDriver boilerplate that I use across multiple projects, and across multiple workstations on the same project; however, since I treat it as a library, I don't commit it to the projects' repos. For instance:
I have "Project X" running on my desktop, laptop and work computer. I update my boilerplate code on my laptop and commit the changes to the boilerplate repo, then make some modifications to the Project X test cases and commit those. Later, when I pull from the Project X repo to my laptop, I make some code changes and run my WebDriver tests, which can take about 5-10 minutes. After, say, 10 minutes I realise the tests have all run and some have failed because I forgot to update the library.
One manual way of dealing with this might be to have a library version number which is also referenced in the test case code; however, that is also a manual step that could be forgotten.
At the moment I'm leaning towards the library providing a function to generate a hash of itself, which the test case code would then need to run first; if the hash mismatches, I know instantly that my test cases should be using a newer library.
What methods are common in this scenario?
Have you considered using the version control commit SHA? Git also has a way of dealing with this, submodules: http://git-scm.com/book/en/v2/Git-Tools-Submodules.
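A minimal sketch of pinning the boilerplate as a submodule (the repository URL and path are placeholders):
git submodule add https://example.com/you/webdriver-boilerplate.git lib/boilerplate
git submodule update --init
rem later, to move the pin to the latest boilerplate commit:
git submodule update --remote lib/boilerplate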
Usually languages have a dependency management tool; for Python I believe it's pip, and usually those tools have a way to lock versions and update with one command. See "Install specific git commit with pip".
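If the boilerplate lives in a Git repository and your tests are in Python, a hedged example of pinning an install to an exact commit (the URL and SHA are placeholders):
pip install git+https://example.com/you/webdriver-boilerplate.git@1a2b3c4d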

Using googletest to run unit tests for multiple tested modules at once

I'm working on a large C++ project which contains more than 50 libraries and executables. I am starting to add googletest tests for each of these modules. I have read that Google recommends putting the tests in executables, not in libraries, to make life easier. Creating a separate executable for each separate component, I would get more than 50 test executables, and in order to run them all at once I would need to create an external script which would also need to combine their output into a single one.
Is that the recommended thing to do?
Or should I create a library with the tests of each separate module and link all these libraries into a single test executable? But then running the tests for a single module becomes less convenient: I would need to build all the tests and tell the main test executable, through the --gtest_filter flag, which tests should be executed this time.
It would really help me to hear how other people do this and what is the best practice here.
Thank you
[...] and in order to run them all at once I would need to create an external script which would also need to combine their output into a single one.
Maybe it's not actually necessary to combine the output into a single file. For example with Jenkins you can specify a wildcard pattern for the Google Test output files.
So if you actually just want to see the Google Test results in Jenkins (or Hudson or whatever CI tool you use), this might be a possible solution:
You would run all of your test executables from a simple script (or even from a Make rule), with the parameter --gtest_output=xml: followed by a directory name (i.e. ending with a slash). Every test executable will then write its own XML file into that directory, and you can then configure your CI tool to read all the files from that directory.
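As a sketch, such a batch script could look like this (the directory layout and the *_tests.exe naming convention are assumptions; %%t becomes %t if typed directly on the command line):
mkdir results
for %%t in (bin\*_tests.exe) do %%t --gtest_output=xml:results\ || echo ERROR: %%t failed!
Every executable then leaves its own XML file in results\, which the CI tool can pick up with a wildcard.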

run tests automatically after sources changed

I use CxxTest for unit-testing my C++ code. I would like tests to be run (simply, by make tests) each time I change and save my code. I know that for Python there is Nosy, which enables that. Is there some generic program that would allow this for CxxTest or any other unit-testing framework?
Put simply, I just need to run a single command on file changes. It wouldn't be difficult to write a script like that, but maybe there is such a tool already :)
On Linux you can use a filesystem monitoring daemon like incron to run a command (e.g. make tests) every time a file is changed in a directory (a so-called IN_CLOSE_WRITE event).
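A minimal incrontab entry for that (paths are placeholders; edit the table with incrontab -e):
/home/me/project/src IN_CLOSE_WRITE make -C /home/me/project tests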
You could use TeamCity for this. It can monitor your code repository and run automated builds and unit tests when changes are detected. It includes a decent web-style interface and emailing capability to notify you of build/test failures.
It can also be configured for both Windows and Linux builds.
http://www.jetbrains.com/teamcity/
If that's a bit heavyweight for you, then you should be able to configure your build process to run the tests for you (e.g. edit your makefile on Linux), but obviously this would still mean manually kicking off a build when you make changes (which I guess you'd probably do anyway).
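A minimal sketch of such a makefile hook, where the generated test runner name is a placeholder; making tests depend on the build means the code is always rebuilt before the tests run:
tests: all
	./test_runner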
Annoyed that there was no ready and simple solution, I just created a simple tool, changerun, to run commands when files change. I hope someone will find it useful :)

Have you done C++ autotest, and how do you do it?

How do you do autotest for C++ programs? Are there any autotest frameworks that can be used for unit tests and integration tests?
Are you talking about Autotest à la Ruby Autotest? If so, maybe Watchr would work for you. Yes, you would need to install the Ruby runtime on your development machine, but it looks like it can trigger pretty much anything that can be done on the command line when the file system changes. For example, if you wanted Watchr to build and run your C++ tests anytime a .c/.cpp/.h/.hpp file in your source tree changed, you could do something like this:
watch('src/.*\.(h|hpp|c|cpp)$') {system "build/buildAndRunTests.bat"}
This particular command obviously makes some assumptions about how your build process is set up (and obviously that you're on Windows), but that should be the gist of it. Our team configures our unit test projects with a post-build event that automatically runs the built unit test binary, so we can just trigger that part of our build process within the buildAndRunTests.bat script and have it print the results to the command-line. It might take some tweaking but it looks like Watchr may be a good choice. I'll update this response when I give it a shot (hopefully early next week).
UPDATE: I just tried this with one of my C# projects and got it working there, so theoretically it should work with C++ projects as well.
autotest.watchr:
watch('./.*/.*\.cs$') {system "cd build && buildAndRunTests.bat && cd ..\\"}
Note the $ at the end of the regular expression. This is important because a lot of artifacts are generated in the source tree at build time, and if any of them matches the string .cs it will trigger another run, effectively causing an infinite loop. Conceivably the same thing will happen if you generate or modify any source files at build time, so you may have to find a way to compensate.
buildAndRunTests.bat:
pushd ..\
rem Build test project
"C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE\devenv.com" Tests.Unit\Tests.Unit.csproj /rebuild Release
popd
rem Navigate to the directory containing the built files
pushd ..\Tests.Unit\bin\Release
rem Run the tests through nunit-console
..\..\..\Dependencies\NUnit-2.5.5-bin\net-2.0\nunit-console.exe Tests.Unit.dll /run=Tests.Unit
popd
Then, in a separate console window, just navigate to your project directory and run the following command (this assumes autotest.watchr is at the top of your project tree; see below):
watchr autotest.watchr
Now, when any .cs file changes in the source tree, the buildAndRunTests.bat script will run automatically. This is just an example from my local machine, so it likely won't work verbatim on yours, but you should be able to tweak it to your needs.
This is the directory structure for reference:
/Project
/build
buildAndRunTests.bat
/Tests.Unit
/Dependencies
/NUnit-2.5.5-bin
/net-2.0
nunit-console.exe
autotest.watchr
I hope this helps.
You can use NUnit to achieve this, but there may be better ways. With NUnit you write test classes in managed C++/CLI which call your C++ code, which presumably runs as unmanaged. So, with this option, some of your C++ code now runs as managed just for the sake of using NUnit. One may debate the "purity" of this approach. Another problem is attaching a debugger to NUnit (with both managed and native debugging enabled, of course) and trying to step through the managed C++/CLI bits in a sensible manner. Despite this, our office has been using NUnit for C++ unit and integration testing for a while now.
Just saw @Patrick's answer about CppUnit; I will have to look at that.
The xUnit family can be used for unit tests. It exists for plain C++ code (CppUnit) and for .NET code (NUnit).
Boost has a test library (Boost.Test) you can have a look at, among many others.
The last time I did some work in Qt, I used Qt's QTestLib for unit tests. It worked well for my lo-fi needs. http://doc.qt.nokia.com/4.6/qtestlib-manual.html