NetBeans-generated Makefile ignores test return codes - C++

I have a C++ project in NetBeans using generated Makefiles. I set up a job in Jenkins (continuous integration server) to run the tests configured in NetBeans. Now Jenkins runs the tests and captures their output, but it considers the build successful even when a test fails.
I'm using the Boost Unit Test Framework which of course returns a non-zero code on failure as any proper *nix program would. So I wondered why Jenkins didn't understand when a test failed. Then I found this in the generated Makefile-Debug.mk from NetBeans:
# Run Test Targets
.test-conf:
#if [ "${TEST}" = "" ]; \
then \
${TESTDIR}/TestFiles/f1 || true; \
${TESTDIR}/TestFiles/f2 || true; \
else \
./${TEST} || true; \
fi
So it seems like they deliberately ignore the return value of all tests. But this doesn't make sense, because then what are your tests testing?
I tried to find a setting in NetBeans to say "Let failing tests break the build" but didn't find anything. I also tried to find a bug in the NetBeans tracker for this but didn't see any in my brief search.
Is there any other reasonable solution? I want Jenkins to fail my build if any test fails. Right now it only fails if a test fails to build, but if it builds and fails to run, success is reported.

It turns out that NetBeans (up to version 8 at least) cannot support this. What I did to work around it is to do make build-tests rather than make test in Jenkins, followed by a loop over all the generated test files (TestFiles/f* in the build directory) to run them.
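A minimal sketch of what that Jenkins "Execute shell" step can look like; the build directory layout below is a placeholder, so point TESTDIR at wherever your NetBeans configuration puts the generated TestFiles:
#!/bin/sh
# Placeholder: the directory NetBeans generates the test binaries into for your configuration/platform
TESTDIR=build/Debug/GNU-Linux/tests

# Build the test binaries without going through the generated "test" target (which swallows failures)
make build-tests

# Run each generated test binary ourselves and remember whether any of them failed
status=0
for t in "$TESTDIR"/TestFiles/f*; do
    echo "Running $t"
    "$t" || status=1
done

# A non-zero exit code here is what makes Jenkins mark the build as failed
exit $status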
This is a major shortcoming in NetBeans' Makefile generator, as it is fundamentally incompatible with running tests outside of NetBeans itself. Thanks to @HEKTO for the link which led me to this page about writing NetBeans testing plugins: http://wiki.netbeans.org/CND69UnitTestsPluginTutotial
What that page tells you is basically that NetBeans relies on parsing the textual output of tests to determine success or failure. What it doesn't tell you is that NetBeans generates defective Makefiles which ignore critical failures in tests, including aborts, segmentation faults, assertion failures, uncaught exceptions, etc. It assumes you will use a test framework that it knows about (which is only CppUnit), or manually write magic strings at the right moments in your test programs.
I thought about taking the time to write a NetBeans unit test plugin for the Boost Unit Test Framework, but it won't help Jenkins at all: the plugins are only used when tests are run inside NetBeans itself, to display pretty status indicators.

Related

Rust tests fail to even run

I'm writing a project to learn how to use Rust and I'm calling my project future-finance-labs. After writing some basic functions and verifying the app can be built I wanted to include some tests, located in aggregates/mod.rs. [The tests are in the same file as the actual code as per the documentation.] I'm unable to get the tests to run despite following the documentation to the best of my ability. I have tried to build the project using PowerShell as well as Bash. [It fails to run on Fedora Linux as well]
Here is my output on Bash:
~/future-finance-labs$ cargo test -- src/formatters/mod.rs
Finished test [unoptimized + debuginfo] target(s) in 5.98s
Running target/debug/deps/future_finance_labs-16ed066e1ea3b9a1
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Using PowerShell I get the same output with some errors like the following:
error: failed to remove C:\Users\jhale\AppData\Local\Packages\CanonicalGroupLimited.UbuntuonWindows_79rhkp1fndgsc\LocalState\rootfs\home\jhale\future-finance-labs\target\debug\build\mime_guess-890328c8763afc22\build_script_build-890328c8763afc22.build_script_build.c22di3i8-cgu.0.rcgu.o: The system cannot find the path specified. (os error 3)
After my initial excitement at the prospect of writing a few tests that passed on the first attempt, I quickly realized that all the green was not a sign of success, but rather of a failure to even run the tests. I just want to run the unit tests. Running cargo test alone, without a separate file argument, fails as well. Why can't I run any test in this project with my current setup?
It can't find your tests because the Rust compiler doesn't know about them. You need to add mod aggregates to main.rs:
mod aggregates;

fn main() {
    println!("Hello, world!");
}
After you do that, you'll see that your aggregates/mod.rs doesn't compile for many reasons.
And as Mihir was trying to say, you need to use the name of the test, not the name of the file to run a specific test:
cargo test min_works
cargo test aggregates
See also:
How do I “use” or import a local Rust file?
Rust Book: Controlling How Tests Are Run

When to run unit-tests?

We are currently setting up our build process within an automated continuous integration environment and are facing a fundamental question: when should the unit tests run?
One way would be to run the unit tests with every build task, so that as soon as one unit test fails, the whole build fails. This has the advantage that developers are always forced to keep the unit tests green, as they are otherwise not able to run the application. On the other hand, you are constantly distracted by fixing tests during development, which might force you to work in very small iterations. Besides that, the time to run your application always increases, because you have to wait for the tests every time.
The other way would be to let the CI server run the tests after each new commit and simply let the developer know that something went wrong. This leaves the developer free to decide when to take care of the unit tests, but other developers on the same branch might suffer, because they cannot be sure that all parts of the software work as expected and have to check for themselves whether the failing tests also affect their work.
So do you have any best practices or recommendations on when is a good time to run the tests?
BTW: of course we also run bigger integration tests, which are handled in a separate CI process.
Short answer: run all unit tests on the build server for every commit, on every branch. Assuming your unit tests don't take a really long time to run, there really is no downside to this. As for running all unit tests on every build task locally, that would be overkill. Developers should have the discipline to decide when to run the tests and when not to.
You want to know as soon as possible when something is wrong so you can fix it promptly. You also want to know all of the tests that fail rather than just the first test that fails. When there are multiple issues it would be a pretty annoying workflow to only fix the one issue and then have to commit, push, and wait for the build to run again to see if there are more issues.
Your build process should have two targets: build and test. test should be the default target when nothing else is specified. The tests can't run until the project has been built, so the build target is a dependency of test. So (supposing you use make): make or make test will build and test, while make build will just build the project.
Now, if you're using some IDE, you could consider running the tests in some separate way "outside" of the IDE. So maybe add a third target, ide, and let the IDE build that one. It could then have the build target as a normal dependency and, as a last step, spawn a new job in the background to do the testing in its own terminal window, something like (under Linux): ( xterm -e ./run-tests & ).
And if you're developing outside of an IDE (like I do), then just have a separate terminal run the build and tests. As soon as testing starts, you know the build process has finished, so you can already run your application, even though the tests are still running.
Just to demonstrate this (and as a proof of concept for having the tests run in the background), I created a trivial test case:
bodo.c:
#include <stdio.h>

int main(int argc, char * argv[]) {
    printf("Hallo %s", argc > 1 ? argv[1] : "Welt");
    return 0;
}
Makefile:
test: build run-tests

ide: build run-tests-background

run-tests-background:
	( xterm -e ./run-tests --wait & )

run-tests:
	./run-tests

build: bodo

bodo: bodo.o

bodo.o: bodo.c

.PHONY: run-tests run-tests-background
run-tests:
#! /bin/sh
retval=true

if test "$(./bodo)" != "Hallo Welt"
then
    echo "Test failed []"
    retval=false
fi

if test "$(./bodo Bodo)" != "Hallo Bodo"
then
    echo "Test failed [Bodo]"
    retval=false
fi

# This third check is intentionally wrong, to demonstrate a failing test
if test "$(./bodo Fail)" != "Hallo Bodo"
then
    echo "Test failed [Fail]"
    retval=false
fi

sleep 5 # Simulate some more tests

if $retval
then
    echo "All tests succeeded ;)"
else
    echo "Some tests failed :("
fi

if test "$1" = "--wait"
then
    read -p "Press ENTER to close" enter
fi

if $retval
then
    exit 0
else
    exit 2
fi
Usage:
Build the project but do not run the tests
make build
Build the project and run the tests in the current terminal
make
Build the project and run the tests in a separate terminal. Make will return once the build process has completed and the tests have started
make ide
And two helpers, which are not supposed to be run by hand:
Only run the tests in the current terminal (this will fail if the project wasn't built yet)
make run-tests
Only run the tests in a separate terminal (this will fail if the project wasn't built yet). Make will return immediately
make run-tests-background

Can I use mach to do unit test for thunderbird on windows?

I've been doing some Thunderbird development lately, and want to do some unit testing for my XPCOM module. I noticed that there is a tool called mach in Mozilla which can automatically run a set of test cases at one time, but when I run "mach help" on Windows, it shows "is_platform_supported - Must have a Firefox, Android or B2G build."
Since I'm using Thunderbird, does that mean I can't use mach to run unit tests for Thunderbird? If not (which I hope), how can I change my config to use that tool?
Any replies will be appreciated!
Thanks in advance.
At the moment, mozilla/mach xpcshell-test does not work (see Bug 934170). As a workaround, use the following:
First, switch to the object directory after building Thunderbird.
To run all xpcshell tests, use: make xpcshell-tests
To run a single xpcshell test, use (e.g.): make xpcshell-tests TEST_PATH=mailnews/news/test/unit/test_server.js
To run all xpcshell tests in a given directory, use (e.g.): make xpcshell-tests TEST_PATH=mailnews/news

Not getting any test results with xunitmultiprocess

I am running tests through Jenkins on a windows box. In my "Execute Windows Batch command" portion of the project configuration I have the following command:
nosetests --nocapture --with-xunitmp --eval-attr "%APPLICATION% and priority<=%PRIORITY% and smoketest and not dev" --processes=4 --process-timeout=2000
The post build actions have "Publish JUnit test result report" with the Test report XMLs path being:
trunk\automation\selenium\src\nosetests.xml
When I do a test run, the nosetests.xml file is created, however it is empty, and I am not getting any Test Results for the build.
I am not really sure what is wrong here.
EDIT 1
I ran the tests with just --with-xunit and REM'd out the --processes option, and got test results. Does anyone know of problems with xunitmp not working in a Windows environment?
EDIT 2
I uninstalled and reinstalled nose and nose_xunitmp, to no avail.
The nosetests plugin for parallelizing tests and the plugin for producing XML output are incompatible. Enabling them at the same time will produce exactly the result you got.
If you want to keep using nosetests, you need to execute the tests sequentially or find other means of parallelizing them, e.g. by executing multiple parallel nosetests commands, which is what I do at work.
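For illustration, a rough sketch of that multi-process approach on a POSIX shell (the test directories and report file names are made up; the same idea works as separate Jenkins build steps or a batch file):
#!/bin/sh
# Hypothetical split of the suite into independent chunks, each writing its own XML report
nosetests tests/api --with-xunit --xunit-file=results-api.xml &
pid_api=$!
nosetests tests/ui --with-xunit --xunit-file=results-ui.xml &
pid_ui=$!

# Propagate a failure from either run; Jenkins can then publish results-*.xml
status=0
wait $pid_api || status=1
wait $pid_ui || status=1
exit $status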
Alternatively you can use another test runner like nose2 or py.test which do not have this limitation.
Apparently the problem is indeed Windows and how it handles threads. We attempted several tests outside of our Windows Jenkins server and they do not work either. Stupid Windows.

Teamcity running build steps even when tests fail

I am having problems with TeamCity, where it proceeds to run build steps even if the previous ones were unsuccessful.
The final step of my Build configuration deploys my site, which I do not want it to do if any of my tests fail.
Each build step is set to only execute if all previous steps were successful.
In the Build Failure Conditions tab, I have checked the following options under Fail build if:
-build process exit code is not zero
-at least one test failed
-an out-of-memory or crash is detected (Java only)
This doesn't work - even when tests fail TeamCity deploys my site, why?
I even tried to add an additional build failure condition that will look for specific text in the build log (namely "Test Run Failed.")
When viewing a completed test in the overview page, you can see the error message against the latest build:
"Test Run Failed." text appeared in build log
But it still deploys it anyway.
Does anyone know how to fix this? It appears that the issue has been around for a long time; see here.
Apparently there is a workaround:
So far we do not consider this feature as very important as there is
an obvious workaround: the script can check the necessary condition
and do not produce the artifacts as configured in TeamCity.
e.g. a script can move the artifacts from a temporary directory to the
directory specified in the TeamCity as publish artifacts from just
before the finish and in case the build operations were successful.
But it is not clear to me exactly how to do that, and it doesn't sound like the best solution either. Any help appreciated.
Edit: I was also able to work around the problem with a snapshot dependency, where I would have a separate 'deploy' build that was dependent on the test build, and now it doesn't run if tests fail.
This was useful for setting the dependency up.
This is a known problem as of TeamCity 7.1 (cf. http://youtrack.jetbrains.com/issue/TW-17002) which has been fixed in TeamCity 8.x+ (see this answer).
TeamCity distinguishes between a failed build and a failed build step. While a failing unit test will fail the build as a whole, unfortunately TeamCity still considers the test step itself successful because it did not return a non-zero error code. As a result, subsequent steps will continue running.
A variety of workarounds have been proposed, but I've found they either require non-trivial setup or compromise on the testing experience in TeamCity.
However, after reviewing a suggestion from @arex1337, we found an easy way to get TeamCity to do what we want. Just add an extra PowerShell build step after your existing test step that contains the following inline script (replacing YOUR_TEAMCITY_HOSTNAME with your actual TeamCity host/domain):
$request = [System.Net.WebRequest]::Create("http://YOUR_TEAMCITY_HOSTNAME/guestAuth/app/rest/builds/%teamcity.build.id%")
$xml = [xml](new-object System.IO.StreamReader $request.GetResponse().GetResponseStream()).ReadToEnd()
Microsoft.PowerShell.Utility\Select-Xml $xml -XPath "/build" | % { $status = $_.Node.status }
if ($status -eq "FAILURE") {
    throw "Failing this step because the build itself is considered failed. This is our way to workaround the fact that TeamCity incorrectly considers a test step to be successful even if there are test failures. See http://youtrack.jetbrains.com/issue/TW-17002"
}
This inline PowerShell script just uses the TeamCity REST API to ask whether or not the build itself, as a whole, is considered failed (the variable %teamcity.build.id% will be replaced by TeamCity with the actual build id when the step is executed). If the build as a whole is considered failed (say, due to a test failure), then this PowerShell script throws an error, causing the process to return a non-zero error code, which results in the individual build step itself being considered unsuccessful. At that point, subsequent steps can be prevented from running.
Note that this script uses guestAuth, which requires the TeamCity guest account to be enabled. Alternatively, you can use httpAuth instead, but you'll need to update the script to include a TeamCity username and password (e.g. http://USERNAME:PASSWORD@YOUR_TEAMCITY_HOSTNAME/httpAuth/app/rest/builds/%teamcity.build.id%).
So, with this additional step in place, all subsequent steps set to execute "Only if all previous steps were successful" will be skipped if there are any previous unit test failures. We're using this to prevent automated deployment if any of our NUnit tests are not successful until JetBrains fixes the problem.
Thanks to @arex1337 for the idea.
Just to prevent confusion: this issue is fixed in TeamCity v8.x, so we don't need those workarounds now.
You can specify the step execution policy via the Execute step option:
Only if build status is successful - before starting the step, the build agent requests the build status from the server, and skips the step if the status is failed.
https://confluence.jetbrains.com/display/TCD8/Configuring+Build+Steps
Of course you need to fail the build if at least one unit test failed:
https://confluence.jetbrains.com/display/TCD8/Build+Failure+Conditions
On the Build Failure Conditions page, in the Fail build if area, specify when TeamCity will fail builds:
at least one test failed: check this option to mark the build as failed if at least one test fails.
This is (as you have found) a known issue with TeamCity; there is a set of linked issues in their issue tracker. This issue is hopefully scheduled to be resolved in the next release of TeamCity (version 8.x).
In the meantime, the way we identified to resolve the issue (for version 6.5.5) was to download the test results file as part of the later steps. This was then parsed to check for any test failures, returning an error code and hence breaking the build properly (and performing any cleanup we needed as part of that failure), which would probably work for you.
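As an illustration only, such a check might look like the sketch below on a POSIX shell; the results file name and the attribute being searched for are assumptions that depend on your NUnit version and on how the earlier step saves the file:
#!/bin/sh
# Placeholder: wherever the earlier build step downloaded/saved the test results
RESULTS=TestResult.xml

if [ ! -f "$RESULTS" ]; then
    echo "Results file $RESULTS not found, failing the step"
    exit 1
fi

# Count test cases marked as failed; adjust the pattern to match what your results file actually contains
failures=$(grep -c 'success="False"' "$RESULTS")

if [ "$failures" -gt 0 ]; then
    echo "Detected $failures failed test(s), failing this build step"
    exit 1
fi

echo "No test failures detected"
exit 0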
A TeamCity build failure does not mean that it will stop the build; it will still publish the artifacts if your build provides the build output files as required by TeamCity. It will only update the build status accordingly.
But you can very well stop the build process by modifying your build script to stop the build on test case failure. If you are using MSBuild, then ContinueOnError="false" will do that.
In the end, I was able to solve the problem with a snapshot dependency, where I would have a separate 'deploy' build that was dependent on the test build, and now it doesn't run if tests fail.
This was useful for setting the dependency up.