I have an odd situation. I have a suite of unit tests that pass on my dev machine. They pass on the build machine if run from visual studio. But 5 of them reliably fail during the automated build. There is nothing noteworthy about the ones that fail that I can see (and I've stared at them a long time). Anyone seen anything like this? Is there a way to see the test output in the Team Build log? All I get is Passed or Failed messages, but not the Assert message.
Thanks!
You should be able to get the actual .trx file from either the build result screen or from the drop location. You can open that in Visual Studio and see the error message, stack trace, etc.
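If you want the assert message without hunting down the .trx, you can also rerun the suite from the command line; a sketch (the test container name is an assumption):

mstest /testcontainer:MyTests.dll /resultsfile:results.trx /detail:errormessage

The /detail:errormessage switch makes MSTest print the failure message for each test alongside the Passed/Failed line.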
One possibility is that the tests depend on certain file paths or dependent libraries that aren't there in the CI build; Team Build only copies libraries that are either referenced by your test assemblies or explicitly labeled as deployment items, so if you do any reflection-based or other dynamic type loading, that could be the cause.
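If a test depends on a file that isn't a plain assembly reference, marking it as a deployment item tells the test runner (and Team Build) to copy it. A minimal sketch, where the plugin path and type name are assumptions:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class PluginTests
{
    // The deployment step copies this file next to the test assembly at run time.
    [DeploymentItem("Plugins\\MyPlugin.dll")]
    [TestMethod]
    public void LoadsPluginViaReflection()
    {
        // Reflection loads bypass the reference-based copying, hence the attribute above.
        var asm = System.Reflection.Assembly.LoadFrom("MyPlugin.dll");
        Assert.IsNotNull(asm.GetType("MyPlugin.EntryPoint"));
    }
}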
We use Azure DevOps as our repository, and our solution has multiple projects, including a web application, a SQL Server database project, a unit test project, and multiple class libraries. When committing changes, the build fails in Azure DevOps because it cannot compile the unit test project. The problem is that locally you can still launch your solution even with syntax errors in the unit tests. If you look at your Error List you will notice them, but is there any way to either force the solution launch to fail when the unit tests have syntax errors, or prevent Team Explorer commits if there are any errors listed in the Error List?
It seems there is no way to do this. On one hand, the Visual Studio build helps you find problems in your code; if you could not launch the solution while there is an error, how would you find and fix it? On the other hand, it is better to build your solution and fix the errors before you commit.
Besides, there is no way to prevent committing your code: the commit command just captures a snapshot of the currently staged changes; it does nothing with your source code itself.
I am looking for a tool, either command line or GUI, which copies the changed assemblies from the solution to a separate folder, so that a new build afterwards does not influence the test run. Afterwards it should execute a configurable set of tests (only certain assemblies, filtering on certain TestCategories). When it is finished, the test results should be shown.
Is there a tool or a set of tools which does these tasks? MSTest.exe can run the tests, but it does not copy all the necessary assemblies.
Using the MSTest command-line tool (or the Visual Studio runner) in combination with a post-build step that copies the assemblies locally is not a good option, because it would slow down every build, and I will not run the tests locally every time I build the solution. I could write a little script which copies the necessary assemblies beforehand, but what I was hoping for is a tool which does all this without me having to write a script.
Copy files
To copy assemblies from the command line you can use the standard copy command; either copy or xcopy will suffice here. Copying only the "changed" assemblies is harder, unless you're doing incremental builds.
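For example, a minimal sketch (the paths are assumptions): with /D and no date, xcopy copies only source files that are newer than their counterparts in the destination, which approximates "changed assemblies only":

xcopy /D /Y /I MySolution\Tests\bin\Release\*.dll C:\TestRun\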
MSBuild is the other tool that you can use to copy the files. You can create a post-build event or a custom target that doesn't run inside Visual Studio; it would only run when you set a specific condition or call the target explicitly from the command line.
The handy fact is that MSBuild at least knows which files are required to build the project. Do note, though, that MSBuild may not know exactly what is needed to run your tests; certain config files and dependencies of third-party references may be needed too, yet not be part of the project.
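For illustration, a sketch of such a custom target (the target name, property, and destination folder are assumptions; AfterTargets requires MSBuild 4.0 or later):

<Target Name="CopyTestAssemblies" AfterTargets="Build" Condition="'$(CopyForTests)' == 'true'">
  <ItemGroup>
    <!-- Everything the build put into the output folder, dependencies included. -->
    <TestBinaries Include="$(OutputPath)**\*.*" />
  </ItemGroup>
  <!-- Copy to a separate folder so a later build does not influence the test run. -->
  <Copy SourceFiles="@(TestBinaries)" DestinationFolder="C:\TestRun\%(RecursiveDir)" />
</Target>

You would then opt in explicitly with: msbuild MyTests.csproj /p:CopyForTests=true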
I'm not aware of any other tools. I'd personally opt to write a simple script, as it isn't hard to do and maintain.
Run the tests
There is the command-line runner vstest.console.exe, which will happily run MSTest-based tests.
The syntax looks like this:
vstest.console.exe assemblyone.dll assembly2.dll /TestCaseFilter:[expression]
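For example (the assembly name and category are assumptions), to run only the tests that are not in the Integration category:

vstest.console.exe MyTests.dll /TestCaseFilter:"TestCategory!=Integration"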
Note 1
Every time you run MSBuild to build your project, even if all the generated MSIL is the same, the assembly will be different, because the compiler assigns a new unique GUID (the MVID, or module version ID) to the file.
So unless your build is incremental and detects that there is no need to actually build the file to begin with, it's going to generate unique files every time.
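You can observe this yourself (a sketch; the assembly name is an assumption, and ildasm ships with the Windows SDK): disassemble the output of two successive builds and compare the MVID lines:

ildasm /text MyAssembly.dll | findstr MVID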
Note 2
Indeed, as others mention, continuous integration tools like TFS Build or TeamCity can help you build and run your tests and create a nice report for you.
More advanced tools such as Microsoft Release Management or Octopus Deploy can run your tests during a deployment workflow when you're doing continuous delivery or, even better, continuous deployment.
I think you are describing a continuous integration system.
You can use MS Team Build or the online version of it from visualstudio.com.
The other popular option is TeamCity.
Both of these allow you to configure them so that they trigger a build that runs your tests (or a subset of tests based on categories). If the build fails or a test fails, you have the option to react to this (e.g. reject the check-in).
I am trying to get TeamCity set up for a project. I want to run a scheduled build that includes a step where NUnit Tests are run.
My NUnit build step looks like this:
Runtime: NUnit-2.6.3 v4.0 MSIL
Run tests on: **/Tests/*.dll
Execute: If all previous steps finished successfully
But every time I run the build I get an error saying:
No assemblies were found.
Why is this happening and how can I fix it? Also, conceptually, this build step will happen BEFORE the project is actually built. But how will there be any Test DLLs (assemblies) unless the project is built in the first place?
I would suggest that you build your projects before running tests. The most common way to achieve this is to have separate configurations for building the code and for running the tests.
Your build configuration would generate artefacts (most likely containing the assemblies).
The test run configuration would extract this artefact package via an artefact dependency; then, in a build step, you run the tests from the specific assemblies.
This is the most common approach, and using it you do not have to worry about files in the file system. TeamCity's snapshot dependencies and artefact dependencies will take care of this (when properly configured).
If you need a fuller example of how to achieve this, let me know.
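In the meantime, a rough sketch of such a setup (all configuration names, paths, and patterns here are assumptions):

Build configuration "Build":
  Step 1: MSBuild, build MySolution.sln, configuration Release
  Artefact paths: **/bin/Release/** => assemblies.zip

Build configuration "Tests":
  Snapshot dependency on "Build"
  Artefact dependency on "Build": assemblies.zip!** => testbin
  Step 1: NUnit runner, run tests on: testbin/**/*Tests*.dll

Because the NUnit step now points at extracted artefacts instead of at **/Tests/*.dll in a checkout that was never built, the "No assemblies were found" error goes away.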
I have 63 DLLs with various C++ methods in each. I want to validate the output of some of the methods with fixed input values. I'm wondering if it is possible to do unit testing in the DLL itself during the compilation/build process.
That is, the build of a DLL would show the results of the unit testing in the Output window of Visual Studio.
I know that I can validate this scenario by creating executable file and calling the methods. But, is it possible without executable file?
As others have said, testing "during compilation" does not make sense, so I'm assuming you mean testing during the build process, which is different and of course possible, e.g. using post-build steps.
You don't specify which version of Visual Studio you use, but if you have VS2012, there is an MSDN article that describes exactly how to do what you describe; see the linked article for the full instructions.
Taking your question verbatim, the answer is "no", because you can't test a DLL when you haven't even finished compiling it. Also, you need some kind of executable to load that DLL, so either you load it with a scripting language (Python with ctypes comes to mind) or you create an executable.
Calling that from a post-compile step in Visual Studio, as suggested by shivakumar is probably the only way to get the results into the output window. I personally prefer running this from an external build script, but I'm also cross-compiling a lot and I can't run things from a post-compile step there. This also makes it easier to debug the unit tests when something fails.
You have to wait for compilation to complete so that there are no compilation errors in the code.
In the post-build event you can add batch files which will run your unit test modules and validate the binaries generated by the compilation, as sketched below.
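A minimal sketch of such a post-build event (the test executable name is an assumption; Visual Studio fails the build when a post-build command returns a non-zero exit code):

REM Post-build event command line of the test project
"$(TargetDir)FooTests.exe"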
You are asking for a thing that does not make sense. When you say "compiling" that means a very specific thing: invoking the compiler, before invoking the linker. But C++ code (and C++ unit tests) do not work like that. The compiler must finish compiling both your production code and your tests, and the object files must then be linked into libraries, executables, or both. A test framework must then execute the test code which calls your production code in order to get results. None of these steps are optional in C++.
Instead, you probably intended to ask if you could run the unit tests as part of the build (not compile). And the answer to that is an emphatic "yes!"
I'm guessing that your solution is likely structured into 63 or more individual DLL projects. For each production DLL you are going to test, such as Foo.DLL, I recommend you add a new FooTest project, with the unit test code added to the FooTest project. In FooTest, create a project dependency upon the Foo project, which will force FooTest to build after building Foo. In the FooTest project you would have two kinds of code modules: classes containing your unit tests, and a FooTest.cpp that would house the main() entrypoint of the FooTest.EXE program, invoking the testing framework, and outputting the results to the console.
Create your FooTest.cpp so that it's a console program. If you format your test executable's output so that it matches the output of the Visual Studio compiler, as in "filename.cpp(lineNo) : error: description of failure", Visual Studio will automatically navigate to the file and line if you click on it. Unit test frameworks such as CppUnit may already have a "CompilerOutputter" class that will properly format the output to match your compiler's errors.
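As a sketch of such an entry point, assuming CppUnit (the FooTest naming follows the example above):

#include <iostream>
#include <cppunit/CompilerOutputter.h>
#include <cppunit/extensions/TestFactoryRegistry.h>
#include <cppunit/ui/text/TestRunner.h>

int main()
{
    // Pick up every suite registered via CPPUNIT_TEST_SUITE_REGISTRATION.
    CppUnit::TextUi::TestRunner runner;
    runner.addTest(CppUnit::TestFactoryRegistry::getRegistry().makeTest());

    // Format failures as "file(line) : ..." so Visual Studio's Output window
    // makes them clickable.
    runner.setOutputter(new CppUnit::CompilerOutputter(&runner.result(), std::cout));

    // Return non-zero on failure so a post-build step fails the build.
    return runner.run() ? 0 : 1;
}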
In your FooTest project, you also need to set the input to the FooTest linker so that it can link in the production code you are trying to test. In the properties of the FooTest project, go to the Linker/Input tab and add the path to your Foo project's OBJ files to the Additional Dependencies. The line I use looks like this: $(SolutionDir)Foo\Debug\obj\*.obj
In the Build Events properties of the FooTest project, invoke your new FooTest.EXE as a post-build step. Then, every time you click build, your code will be built and your unit tests will be executed. The project dependency will ensure that if you change your Foo code, you will compile, link, and execute the FooTest tests. And the console output ensures that your test results will appear as clickable output in your IDE.
You could create 63 separate unit test executables, or you could create one all-encompassing unit test executable. That's entirely your choice. If you are looking to make the builds and links happen quicker, you will probably want to have the separate executables; even though it's a bit more individual configuration work, you do it only once, and after that you retain the benefits of quick builds for small changes.
Now you're ready to do some serious coding.
I have a project that builds fine if I build it manually, but it fails with CC.NET.
The error that shows up on CC.NET is basically related to an import that's failing because a file was not found: one of the projects (a C++ DLL) tries to import a DLL built by another project. The DLL should be in the right place, since there's a dependency between the projects; indeed, when I build manually, everything works fine. (Note that when I say manually, I am getting everything fresh from the source code repository, then invoking a Rebuild from VS2005 to simulate the CC.NET automation.)
It looks like dependencies are ignored when the build is automated through CC.NET.
I am building in Release MinDependency mode.
Any help would be highly appreciated!
Can you change CC to use msbuild instead of devenv? That seems like the optimal solution to me, as it means the build is the same in both situations.
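A sketch of the corresponding ccnet.config task (the paths, solution name, and MSBuild version are assumptions):

<msbuild>
  <executable>C:\Windows\Microsoft.NET\Framework\v3.5\MSBuild.exe</executable>
  <workingDirectory>C:\Builds\MyProject</workingDirectory>
  <projectFile>MySolution.sln</projectFile>
  <targets>Rebuild</targets>
  <buildArgs>/p:Configuration=Release /v:minimal</buildArgs>
</msbuild>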
After a long investigation, my current understanding is that the problem is related to the fact that I am using devenv to build through CruiseControl.NET, whereas when I build manually Visual Studio is using MSBuild.
Basically this causes dependencies to be ignored (because of some msbuild command arg that I am not reproducing using devenv).
I think the fact that the dependencies are set between C++ projects is also relevant to some extent, since on other occasions I have been able to build properly with CC.NET when setting dependencies between .NET projects and C++ projects.
In order to figure out exactly what is generating this different behavior, I'd have to follow this lead.
I'd like to hear other people's opinions on this.
Try building it from the command line and see what happens.
My guess would be that the user the service is configured to run as has different permissions and/or environment variables than you do when actually running it. If you are on the same physical box, it compiles fine from Visual Studio, and you are also using Visual Studio in CruiseControl (not MSBuild), then it is almost assuredly the user. If, however, you are using MSBuild in CruiseControl, there is a huge set of differences between how MSBuild (2.0) compiles a C++ solution and how Visual Studio compiles it. If you must use MSBuild on C++ solutions, try v3.5; it has much more support for C++ solutions.
I wonder if CC.Net is building with different environment variables, such that the necessary library directories aren't properly added to the path.
Is there any specific error message in the CC.Net build log as to why that particular DLL import failed? Could not find file? Permissions? Look in the detailed CC.Net build log for the failure and see where it differs from a normal command-line build.
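One quick way to compare the two environments (a sketch; the output path is an assumption) is to add a step to the CC.Net project that dumps the build user's environment, then diff it against the same command run from your own shell:

cmd /c set > C:\Builds\ccnet-env.txt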
I've run into instances where my solution builds if I open it in the IDE and compile, but fails if I run from a command line (either msbuild or devenv.) In each case, the problem was due to a bad reference - likely from paths not matching between your local box and the build server. You see it compile in the IDE correctly because VisualStudio, when opening a solution, will attempt to auto-resolve broken paths. When it does this, it won't tell you about it and usually won't change your solution and project files (which is what you'd hope for.)
Try opening your solution file and/or project files in a text editor and make sure all relative paths are valid.
As Alex said, I think that your problem is that the CC.NET service runs as a local user account. Unfortunately, some of the C++ environment variables are per user and will not be carried over to the default build environment. In my case it was the lib and include directories defined in Tools -> Options -> Projects and Solutions -> VC++ Directories. This same issue evidently causes other problems and is called out in this article as a yellow block.
My solution was to create a new user (BuildUser) on the build machine specifically for building. The key was to then log in as BuildUser and set up the environment. Finally, I changed the CC.NET service to login as BuildUser and restarted it.
VC2003 seems to have an inconsistency between dependencies and input libraries.
An example:
ProjectA --> A.lib
ProjectB --> B.exe
In Properties-->Linker-->Additional Input Libraries, A.lib is specified.
In Project Dependencies, ProjectA is unchecked (why it is not automatic is still a mystery to me)
When cleaning ProjectB, A.lib is not deleted, nor is it rebuilt when ProjectB is compiled; because a stale A.lib is still sitting on disk, the build appears to succeed on your local machine.
CC.NET starts from scratch, so the build fails because A.lib was never built in the first place. Checking ProjectA in ProjectB's Project Dependencies makes a from-scratch build produce A.lib first.