TFS Build - Sending MSBuild Proj Values into the vstest runtime - c++

We have unit tests that are built and run during our TFS Build process. It is a very large project with a complex build. There are parameters in the MSBuild .proj files that get passed down to child projects, etc.
Sometimes the unit test runtime needs some of these .proj parameters (which can only be known at build time) in order to function correctly.
My predecessor managed this by creating a file at build time using post-build events (e.g. ECHO SomethingINeedToKnow=True >> somefile ) in the unit test project's vcxproj file.
Then at runtime the unit test DLL's AssemblyInitialize handler looks for this file, parses the needed values, and injects them into the test runtime. It's really quite ingenious.
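Roughly, the runtime half of that pattern looks like this (a minimal sketch only, assuming the post-build event writes key=value lines to a BuildParams.txt next to the test DLL; the file and class names are illustrative, not from the actual codebase):

using namespace System;
using namespace System::IO;
using namespace System::Reflection;
using namespace System::Collections::Generic;
using namespace Microsoft::VisualStudio::TestTools::UnitTesting;

[TestClass]
public ref class BuildParameters
{
public:
    // Values parsed from the file the post-build event produced.
    static Dictionary<String^, String^>^ Values = gcnew Dictionary<String^, String^>();

    [AssemblyInitialize]
    static void ReadBuildParameters(TestContext^ /*context*/)
    {
        String^ dir = Path::GetDirectoryName(Assembly::GetExecutingAssembly()->Location);
        String^ file = Path::Combine(dir, "BuildParams.txt");
        if (!File::Exists(file)) return;

        for each (String^ line in File::ReadAllLines(file))
        {
            array<String^>^ parts = line->Split('=');
            if (parts->Length == 2)
                Values[parts[0]->Trim()] = parts[1]->Trim();
        }
    }
};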
However, the senior architects do not like hacks and they want everything to be done the Microsoft way, if at all possible.
So my question is this: is there a native, Microsoft-sanctioned way to pass values inherited by the vcxproj at build time into the unit test runtime?
I think the answer is no, and that the current solution is the best solution, but I want to make sure.
p.s. The code under test is typically unmanaged C++ and the unit test projects are managed C++ using namespace Microsoft::VisualStudio::TestTools::UnitTesting (10.0 I believe)

I know supplying run-time parameters to tests can be achieved through vNext build. I'm just not sure how to send .proj values into the vstest runtime.

Related

Run MsTest separately from Visual Studio

I am looking for a tool, either command line or GUI, which copies the changed assemblies from the solution to a separate folder, so that a new build afterwards does not influence the test run. It should then execute a configurable set of tests (only certain assemblies, filtering on certain TestCategories). When it is finished, the test results should be shown.
Is there a tool or a set of tools which does these tasks? MsTest.exe can run the tests, but it does not copy all the necessary assemblies.
Using the MsTest command-line tool (or the Visual Studio runner) in combination with a post-build step to copy the assemblies locally is not a good option, because it would slow down every build, and I will not run the tests locally every time I build the solution. I could write a little script which copies the necessary assemblies locally beforehand, but what I was hoping for is a tool which does all this without me having to write a script.
Copy files
To copy assemblies from the command line you can use the standard copy commands. Copying just the "changed" assemblies is harder, unless you're doing incremental builds.
xcopy or copy will suffice here.
MsBuild is the other tool that you can use to copy the files. You can create a post-build event or a custom target that doesn't run inside Visual Studio. It would only run when you set a specific condition or call the target explicitly from the commandline.
The handy fact is that MsBuild at least knows which files are required to build the project. Do note, though, that MsBuild may not know exactly what is needed to run your tests; certain config files and dependencies of 3rd-party references may be needed too, even though they are not part of the project.
I'm not aware of any other tools. I'd personally opt to write a simple script, as it isn't hard to do and maintain.
Run the tests
There is the command-line runner vstest.console.exe, which will happily run MsTest-based tests.
The syntax looks like this:
vstest.console.exe /TestCaseFilter:"<expression>" assemblyone.dll assembly2.dll
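For example, to run everything except tests tagged with a particular category (the assembly and category names here are just placeholders):
vstest.console.exe Tests.Unit.dll /TestCaseFilter:"TestCategory!=Integration"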
Note 1
Every time you run MsBuild to build your project, even if all the MSIL generated is the same, your assembly will be different, as the compiler embeds a new unique GUID (the module version ID) in the file.
So unless your build is incremental and detects that there is no need to actually build the file to begin with, it's going to generate unique files every time.
Note 2
Indeed, as others mention, continuous integration tools like TFS Build or TeamCity can help you build and run your tests and create a nice report for you.
More advanced tools such as Microsoft Release Management or Octopus Deploy can run your tests during a deployment workflow when you're doing continuous delivery or, even better, continuous deployment.
I think you are describing a continuous integration system.
You can use MS Team Build or the online version of it from visualstudio.com.
The other popular option is TeamCity.
Both of these can be configured so that they trigger a build that runs your tests (or a subset of tests based on categories). If the build fails, or if a test fails, you have the option to react to this (e.g. reject the check-in).

Does build include running Unit Tests?

I used to work in Java. I was using Ant/Maven to make the build process easier.
I used to write the unit tests in the JUnit framework. When I said build, all the unit tests also ran.
So there, build meant compiling the source code and running the unit tests against that compiled code.
Now I have started working with C++ on a new project in Visual C++. Here, when I say build, only the source is compiled and linked with libraries, but the unit tests are not run.
So now I am confused about the actual definition of the build process.
Does the build process include running tests as well, or is it just compiling and linking the source code?
Ant and Maven "know" about unit tests. If an executable is marked as test, it will be run during the build process.
Visual Studio, AFAIK, does not have this concept. Even if an executable is written using some *Unit framework, Visual Studio will build it but not run it (by default). It is possible to run tests during the build process, but you need to add a post-build step to the test program's project to run the executable.
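For example (a sketch, assuming the test project builds a self-running console executable), a post-build event as simple as
"$(TargetPath)"
will run the freshly built test program, and a non-zero exit code from it will fail the build.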
Disclaimer: The last version I used was VS 2003.NET; things might have improved (or gotten worse) since then.
Nowadays, I use the build tool with the greatest flexibility: make. In the project I'm currently working on, there's a build target named 'all' which builds every executable and test program, a target named 'check' which runs every test program (and builds them if needed), a target named 'coverage' which computes code coverage, and a target named 'run' which does all of the above. This way I get to choose what to run.
I hope that you can see from this that "what does the build process include" depends on how the build system is set up. There isn't a universally defined concept of "build process".

Is it possible to perform unit testing on dll's methods without an executable during build process?

I have 63 DLLs with various C++ methods in each. I want to validate the output of some of the methods with fixed input values. I'm wondering if it is possible to do unit testing on the DLL itself during the compilation/build process.
So that building the DLL gives the results of the unit testing in the Output window of Visual Studio.
I know that I can validate this scenario by creating an executable file and calling the methods. But is it possible without an executable file?
As others have said, testing "during compilation" does not make sense, so I'm assuming you mean testing during the build process, which is different and of course possible using post-build steps etc.
You don't specify which version of Visual Studio you use, but if you have VS2012, there is an MSDN article that describes exactly how to do what you describe. See the linked article for the full instructions.
Taking your question verbatim, the answer is "no", because you can't test a DLL when you haven't even finished compiling it. Also, you need some kind of executable to load that DLL, so either you load it with a scripting language (Python with ctypes comes to mind) or you create an executable.
Calling that from a post-compile step in Visual Studio, as suggested by shivakumar is probably the only way to get the results into the output window. I personally prefer running this from an external build script, but I'm also cross-compiling a lot and I can't run things from a post-compile step there. This also makes it easier to debug the unit tests when something fails.
You have to wait for compilation to complete so that there are no compilation errors in the code.
In the post-build event you can add batch files which will run your unit test modules and validate the binaries generated after compilation.
You are asking for a thing that does not make sense. When you say "compiling" that means a very specific thing: invoking the compiler, before invoking the linker. But C++ code (and C++ unit tests) do not work like that. The compiler must finish compiling both your production code and your tests, and the object files must then be linked into libraries, executables, or both. A test framework must then execute the test code which calls your production code in order to get results. None of these steps are optional in C++.
Instead, you probably intended to ask if you could run the unit tests as part of the build (not compile). And the answer to that is an emphatic "yes!"
I'm guessing that your solution is likely structured into 63 or more individual DLL projects. For each production DLL you are going to test, such as Foo.DLL, I recommend you add a new FooTest project, with the unit test code added to the FooTest project. In FooTest, create a project dependency upon the Foo project, which will force FooTest to build after building Foo. In the FooTest project you would have two kinds of code modules: classes containing your unit tests, and a FooTest.cpp that would house the main() entrypoint of the FooTest.EXE program, invoking the testing framework, and outputting the results to the console.
Create your FooTest.cpp so that it's a console program. If you format your test executable's output so that it matches the output of the Visual Studio compiler, as in "filename.cpp(lineNo) : error: description of failure", Visual Studio will automatically navigate to the file and line if you click on it. Unit test frameworks such as CppUnit may already have a "CompilerOutputter" class that will properly format the output to match your compiler's errors.
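A minimal FooTest.cpp along those lines might look like this (a sketch assuming CppUnit; other frameworks offer equivalents):

#include <iostream>
#include <cppunit/extensions/TestFactoryRegistry.h>
#include <cppunit/ui/text/TestRunner.h>
#include <cppunit/CompilerOutputter.h>

int main()
{
    // Collect every suite registered via CPPUNIT_TEST_SUITE_REGISTRATION.
    CppUnit::TextUi::TestRunner runner;
    runner.addTest(CppUnit::TestFactoryRegistry::getRegistry().makeTest());

    // Format failures like compiler errors so Visual Studio can click-navigate to them.
    runner.setOutputter(new CppUnit::CompilerOutputter(&runner.result(), std::cerr));

    // Return non-zero on failure so a post-build step (or CI) fails the build.
    bool ok = runner.run();
    return ok ? 0 : 1;
}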
In your FooTest project, you also need to set the input to the FooTest linker so that it can link in the production code you are trying to test. In the properties of the FooTest project, go to the Linker/Input tab and add the path to your Foo project's OBJ files to the Additional Dependencies. The line I use looks like this: $(SolutionDir)Foo\Debug\obj*.obj
In the Build Events properties of the FooTest project, invoke your new FooTest.EXE as a post-build step. Then, every time you click build, your code will be built and your unit tests will be executed. The project dependency will ensure that if you change your Foo code, you will compile, link, and execute the FooTest tests. And the console output ensures that your test results will appear as clickable output in your IDE.
You could create 63 separate unit test executables, or you could create one all-encompassing unit test executable. That's entirely your choice. If you are looking to make the builds and links happen quicker, you will probably want to have the separate executables; even though it's a bit more individual configuration work, you do it only once, and after that you retain the benefits of quick builds for small changes.
Now you're ready to do some serious coding.

How do you do C++ autotest?

As far as autotest is concerned, how do you do autotest for C++ programs? Are there any autotest frameworks that can be used for unit tests and integration tests?
Are you talking Autotest ala Ruby Autotest? If so, maybe Watchr would work for you. Yes, you would need to install the Ruby runtime on your development machine, but it looks like it can trigger pretty much anything that can be done on the command line when the file system changes. For example, if you wanted Watchr to build and run your C++ tests anytime a .c/.cpp/.h/.hpp file in your source tree changed you could do something like this:
watch('src/(.*)\.(h|cpp|hpp|c)$') {system "build/buildAndRunTests.bat"}
This particular command obviously makes some assumptions about how your build process is set up (and obviously that you're on Windows), but that should be the gist of it. Our team configures our unit test projects with a post-build event that automatically runs the built unit test binary, so we can just trigger that part of our build process within the buildAndRunTests.bat script and have it print the results to the command-line. It might take some tweaking but it looks like Watchr may be a good choice. I'll update this response when I give it a shot (hopefully early next week).
UPDATE: I just tried this with one of my C# projects and got it working there. So theoretically it should work with C++ projects as well.
autotest.watchr:
watch('./.*/.*\.cs$') {system "cd build && buildAndRunTests.bat && cd ..\\"}
Note the $ at the end of the regular expression. This is important because there are a lot of artifacts generated in the source tree at build time and if any of them match the string .cs it will trigger another run, effectively causing an infinite loop. Conceivably the same thing will happen if you generate/modify any source files at build time so you may have to find a way to compensate.
buildAndRunTests.bat:
pushd ..\
rem Build test project
"C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE\devenv.com" Tests.Unit\Tests.Unit.csproj /rebuild Release
popd
rem Navigate to the directory containing the built files
pushd ..\Tests.Unit\bin\Release
rem Run the tests through nunit-console
..\..\..\Dependencies\NUnit-2.5.5-bin\net-2.0\nunit-console.exe Tests.Unit.dll /run=Tests.Unit
popd
Then, in a separate console window, just navigate to your project directory and run the following command (this assumes autotest.watchr is at the top of your project tree, see below):
watchr autotest.watchr
Now, when any .cs files change in the source tree it will run the buildAndRunTests.bat script automatically. This is just an example from my local machine so it likely won't work verbatim on yours, but you should be able to tweak it to your needs.
This is the directory structure for reference:
/Project
/build
buildAndRunTests.bat
/Tests.Unit
/Dependencies
/NUnit-2.5.5-bin
/net-2.0
nunit-console.exe
autotest.watchr
I hope this helps.
You can use NUnit to achieve this, but there may be better ways. With NUnit you are writing test classes in managed C++/CLI which is calling your C++ code, which presumably runs as unmanaged. So for this option, some of your C++ code now runs as managed just for the sake of using NUnit. One may debate the "purity" of this approach. Another problem with this is attaching a debugger to NUnit (of course with both managed/native enabled) and trying to step through the managed C++/CLI bits in a sensible manner. Despite this, our office has been using NUnit for C++ unit and integration testing for a while now.
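As a rough sketch of what those managed C++/CLI test classes look like (the native Add function and its header are hypothetical stand-ins for your production code):

// NativeMathTests.cpp (compiled with /clr in the test project)
#include "NativeMath.h"   // hypothetical header declaring the unmanaged function: int Add(int, int);

using namespace NUnit::Framework;

[TestFixture]
public ref class NativeMathTests
{
public:
    [Test]
    void AddReturnsSum()
    {
        // C++ interop lets the managed test call straight into the unmanaged code.
        Assert::AreEqual(5, Add(2, 3));
    }
};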
Just saw @Patrick's answer about CppUnit; I will have to look at that.
The xUnit family can be used for unit tests. It exists for plain C++ code (CppUnit) and for .NET code (NUnit).
Boost has a test library (Boost.Test) you can have a look at, among many others.
The last time I did some work in Qt, I used Qt's QTestLib for unit tests. It worked well for my lo-fi needs. http://doc.qt.nokia.com/4.6/qtestlib-manual.html

Organizing unit testing for existing code

I recently received a new task: to maintain and improve existing code written in C++ with MS Visual Studio. The code builds into an exe file (not a DLL). I would like to add unit tests for the code, and the problem I encountered is how to organize my testing projects. Basically I want to have two projects: one would be the original project I received, and the second the testing project.
I saw on the Internet that when the subject being tested is built into a DLL it's usually quite easy: you statically link the lib built by the main project into your testing project, and you have access to the functions being tested. But how can this be done when the subject under test is an exe file?
Surely you can arrange the solution into projects that share code, where one project outputs to exe and the other(s) to DLL?
Whatever the project deliverable is, unit testing is testing the smallest units: the functions. A unit test typically follows the triple-A (AAA) pattern: Arrange (create the environment for the test), Act (invoke the method under test), Assert (verify the method behaved as expected).
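A minimal, framework-agnostic illustration of the pattern (the Greet function is hypothetical):

#include <cassert>
#include <string>

// Hypothetical unit under test.
std::string Greet(const std::string& name) { return "Hello, " + name; }

void GreetReturnsGreeting()
{
    // Arrange: create the environment for the test.
    std::string name = "World";

    // Act: invoke the method under test.
    std::string result = Greet(name);

    // Assert: verify the method behaved as expected.
    assert(result == "Hello, World");
}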
There are several possible project structures: modify the project so that it compiles into a DLL, a production executable and a unit test program. The executable's source has to be as small as possible, possibly just a main() function that creates an Application object. It is also possible to have three projects: one for the DLL, one for the application and a third one for the tests.
An alternative is to embed the unit tests inside the executable and to have a means to invoke them, e.g. with a special --unit-test parameter.
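A sketch of that alternative (the function names are placeholders):

#include <cstring>

int RunAllTests();       // defined alongside the embedded test code
int RunApplication();    // the normal production entry point

int main(int argc, char* argv[])
{
    // With --unit-test, run the embedded tests instead of the application;
    // a non-zero return value signals test failure to the caller (e.g. the build).
    if (argc > 1 && std::strcmp(argv[1], "--unit-test") == 0)
        return RunAllTests();

    return RunApplication();
}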