Running only changed or failed tests with CMake/CTest?

I work on a large code base that has close to 400 test executables, with run times varying between 0.001 seconds and 1800 seconds. When some bit of code changes, CMake intelligently rebuilds only the targets that have changed, which often takes less time than the test run itself.
The only way I know around this is to manually filter for the tests you know you want to run. My intuition says that I would want to re-run any test suite that does not have a successful run stored, either because it failed or because it was recompiled.
Is this possible? If so, how?

The ctest command accepts several options that affect which set of tests is run, e.g. "-R" (filter tests by name) and "-L" (filter tests by label). Using dashboard-related options, you can probably also choose which tests to run.
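For example (the test names and labels here are illustrative; --rerun-failed requires a reasonably recent CTest):
# run only tests whose names match a regular expression
ctest -R 'network_.*'
# run only tests carrying a given label
ctest -L integration
# re-run just the tests that failed in the previous run
ctest --rerun-failed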
As for generating values for these options according to the changed executables, you could write a program or script that checks the modification times of the executables and/or parses the last log file to find failed tests.
Another way to run only the changed executables is to wrap each test in an additional script, which runs the executable only if some condition is satisfied.
On Linux, the wrapper script could be implemented as follows:
test_wrapper.sh:
#!/bin/sh
# test_wrapper.sh <test_name> <executable> <params..>
# Runs the executable given as the second argument, with the parameters
# given as further arguments.
#
# If the environment variable LAST_LOG_FILE is set,
# checks that this file is older than the executable.
#
# If the environment variable LAST_LOG_FAILED_FILE is set,
# checks that the test name is listed in this file.
#
# The test executable is run only if at least one of these checks succeeds,
# or if no check is performed at all.
check_succeed=
check_performed=
if [ -n "$LAST_LOG_FILE" ]; then
check_performed=1
executable=$2
if [ ! ( -e "$LAST_LOG_FILE" ) ]; then
check_succeed=1 # Log file is absent
elif [ "$LAST_LOG_FILE" -ot "$executable" ]; then
check_succeed=1 # Log file is older than executable
fi
fi
if [ -n "$LAST_LOG_FAILED_FILE" ]; then
check_performed=1
testname=$1
if [ ! ( -e "$LAST_LOG_FAILED_FILE" ) ]; then
# No failed tests at all
elif grep ":${testname}\$" "$LAST_LOG_FAILED_FILE" > /dev/null; then
check_succeed=1 # Test has been failed previously
fi
fi
if [ -n "$check_performed" -a -z "$check_succeed" ]; then
echo "Needn't to run test."
exit 0
fi
shift 1 # remove the `testname` argument
exec "$@" # run the test executable with its original arguments and exit status
A CMake function for adding a wrapped test:
CMakeLists.txt:
# Similar to add_test(), but test is executed with our wrapper.
function(add_wrapped_test name command)
if(name STREQUAL "NAME")
# Complex add_test() command flow: NAME <name> COMMAND <command> ...
set(other_params ${ARGN})
list(REMOVE_AT other_params 0) # COMMAND keyword
# Actual `command` argument
list(GET other_params 0 real_command)
list(REMOVE_AT other_params 0)
# If `real_command` is a target, need to translate it to path to executable.
if(TARGET ${real_command})
# Generator expressions are perfectly OK here.
set(real_command "$<TARGET_FILE:${real_command}>")
endif()
# `command` is actually value of 'NAME' parameter
add_test("NAME" ${command} "COMMAND" /bin/sh <...>/test_wrapper.sh
${command} ${real_command} ${other_params}
)
else() # Simple add_test() command flow
add_test(${name} /bin/sh <...>/test_wrapper.sh
${name} ${command} ${ARGN}
)
endif()
endfunction(add_wrapped_test)
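For example, assuming an executable target named my_test (hypothetical), the keyword form can be registered like this; the function translates the target name into the path of the built executable and prepends the wrapper:
add_executable(my_test my_test.cpp)
add_wrapped_test(NAME my_test COMMAND my_test --some-arg)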
When you want to run only those tests whose executables have changed since the last run, or which failed last time, run
LAST_LOG_FILE=<build-dir>/Testing/Temporary/LastTest.log \
LAST_LOG_FAILED_FILE=<build-dir>/Testing/Temporary/LastTestsFailed.log \
ctest
All other tests will be reported as passed automatically: their wrappers exit with status 0 without running the executables.

Related

How to build list of variables in make runs and use it for post-processing

I need to call submakes recursively with different variable settings. During this, I need to build a list of the variable settings. After all submakes are complete, I need to check the results of all the submakes using the list built up.
tests:
echo "Testcase 1 $(testname)..."; \
$(MAKE) -e TESTCASE=1 guimode=no run > test.tc1.log; \ # must save variable TESTCASE_LIST = {1} or similar
$(MAKE) -e TESTCASE=2 guimode=no run > test.tc2.log; \ # must append to variable TESTCASE_LIST = {1 2}
$(MAKE) -e TESTCASE=3 guimode=no run > test.tc3.log; \ # must append to variable TESTCASE_LIST = {1 2 3}
echo "Completed Tests at time $(realtime) ..."; \
$(MAKE) check_test_results; # must run through results of tests 1,2,3 and get data
check_test_results:
for testcase in $(TESTCASE_LIST); do something; done
A sub-make is a child process, so it cannot pass environment variables back to its parent. I suggest simply examining the exit codes and processing them inside the tests recipe, something like:
# bash is required for the array syntax used below
SHELL := /bin/bash
.ONESHELL:
tests:
$(MAKE) TEST=1 ... && TESTCASE_LIST+=(1)
$(MAKE) TEST=2 ... && TESTCASE_LIST+=(2)
...
# check results
echo "Successful tests: $${TESTCASE_LIST[@]}"

if condition to folder branch in Jenkinsfile

I have a branch folder "feature-set"; under this folder there are multiple branches.
I need to run the script below in my Jenkinsfile, with a condition: if this build runs from any branch under the "feature-set" folder (i.e. "feature-set/..."), then run the script.
the script is:
sh """
if [ ${env.BRANCH_NAME} = "feature-set*" ]
then
echo ${env.BRANCH_NAME}
branchName='${env.BRANCH_NAME}' | cut -d'\\/' -f 2
echo \$branchName
npm install
ng build --aot --output-hashing none --sourcemap=false
fi
"""
The current output shows that the condition never matches:
[ feature-set/swat5 = feature-set* ]
Any help?
I would rewrite this to be primarily Jenkins/Groovy syntax and only drop to shell when required.
Based on the info you provided, I assume your env.BRANCH_NAME always looks like `feature-set/<branch>`.
// Echo first so we can see value if condition fails
echo(env.BRANCH_NAME)
// startsWith() is preferable to contains() for the current use case
if ( (env.BRANCH_NAME).startsWith('feature-set') ) {
// Split branch string into list based on delimiter
List<String> parts = (env.BRANCH_NAME).tokenize('/')
/**
* Grab everything minus the first part
* This handles branches that include additional '/' characters
* e.g. 'feature-set/feat/my-feat'
*/
branchName = parts[1..-1].join('/')
echo(branchName)
sh('npm install && ng build --aot --output-hashing none --sourcemap=false')
}
This seems to be more on the shell side. Since you are planning to use a shell if-condition, the below worked for me.
Administrator1@XXXXXXXX:
$ if [[ ${BRANCH_NAME} = feature-set* ]]; then echo "Success"; fi
Success
Remove the quotes around the pattern and use double brackets "[[ ]]" instead of single ones.
With "[[ ]]", the unquoted right-hand side is treated as a glob pattern, so feature-set* matches any branch name starting with "feature-set".
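Applied to the original Jenkinsfile, the shell-side check could then look like this (a sketch; it assumes the agent's shell is bash, since [[ ]] pattern matching is a bash feature):
sh """#!/bin/bash
if [[ "${env.BRANCH_NAME}" == feature-set/* ]]; then
branchName=\$(echo "${env.BRANCH_NAME}" | cut -d'/' -f2)
echo "\$branchName"
npm install
ng build --aot --output-hashing none --sourcemap=false
fi
"""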

VisualSVN post-commit hook with batch file

I'm running VisualSVN on a Windows server.
I'm trying to add a post-commit hook to update our staging project whenever a commit happens.
In VisualSVN, if I type the command in the hook/post-commit dialog, everything works great.
However, if I make a batch file with the exact same command, I get an error that says the post-commit hook has failed. There is no additional information.
My command uses absolute paths.
I've tried putting the batch file in the VisualSVN/bin directory, I get the same error there.
I've made sure VisualSVN has permissions for the directories where the batch file is.
The only thing I can think of is that I'm not calling it correctly from VisualSVN. I'm just replacing the svn update command in the hook/post-commit dialog with the batch file name ("c:\VisualSVN\bin\my-batch-file.bat"). I've tried it with and without the path (without the path it doesn't find the file at all).
Do I need to use a different syntax in the SVNCommit dialog to call the batch file? And what about within the batch file? (It just has my svn update command, and it works when I run the batch file from the command line.)
Ultimately I want to use a batch file because I want to do a few more things after the commit.
In VisualSVN Server, select the repository > Properties > Hooks > Post-commit hook.
Here is the code I use for sending an email and then running a script containing the commands I want to customize:
"%VISUALSVN_SERVER%\bin\VisualSVNServerHooks.exe" ^
commit-notification "%1" -r %2 ^
--from support#domainname.com --to "support#domainname.com" ^
--smtp-server mail.domainname.com ^
--no-diffs ^
--detailed-subject
--no-html
set PWSH=%SystemRoot%\System32\WindowsPowerShell\v1.0\powershell.exe
%PWSH% -command $input ^| C:\ServerScripts\SVNScripts\post-commit-wp.ps1 %1 %2
if errorlevel 1 exit %errorlevel%
The script file is located at C:\ServerScripts\SVNScripts\post-commit-wp.ps1, and I pass in two VisualSVN variables as %1 and %2:
%1 = serverpathwithrep (the repository path on the server)
%2 = revision number
The script is written in Windows PowerShell:
# PATH TO SVN.EXE
$svn = "C:\Program Files\VisualSVN Server\bin\svn.exe"
$pathtowebistesWP = "c:\websites-wp\"
# STORE HOOK ARGUMENTS INTO FRIENDLY NAMES
$serverpathwithrep = $args[0]
$revision = $args[1]
# GET DIR NAME ONLY FROM REPO-PATH STRING
# EXAMPLE: C:\REPOSITORIES\DEVHOOKTEST
# RETURNS 'DEVHOOKTEST'
$dirname = ($serverpathwithrep -split '\\')[-1]
# Combine ServerPath with Dir name
$exportpath = -join($pathtowebistesWP, $dirname);
# BUILD URL TO REPOSITORY
$urepos = $serverpathwithrep -replace "\\", "/"
$url = "file:///$urepos/"
# --------------------------------
# SOME TESTING SCRIPTS
# --------------------------------
# STRING BUILDER PATH + DIRNAME
$name = -join($pathtowebistesWP, "testscript.txt");
# CREATE FILE ON SERVER
New-Item $name -ItemType file
# APPEND TEXT TO FILE
Add-Content $name $pathtowebistesWP
Add-Content $name $exportpath
# --------------------------------
# DO EXPORT REPOSITORY REVISION $REVISION TO THE ExportPath
&"$svn" export -r $revision --force "$url" $exportpath
I added comments to explain what each line does. In a nutshell, the script:
Gets all the parameters
Builds a local directory path
Runs svn export
Places the files in a website/publish directory.
It's a simple way of deploying your newly committed code to a website.
Did you try executing the batch file with the 'call' command? I mean:
call C:\Script\myscript.bat
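In the post-commit hook dialog that would be something like the following, forwarding the repository path and revision that Subversion passes to the hook (the batch file path is the one from the question):
call "C:\VisualSVN\bin\my-batch-file.bat" %1 %2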
I was trying the same thing and found that you must also have the script (the .bat file, that is) in the hooks folder.

Autoconf macro for Boost MPI?

I'm searching an autoconf macro to use in my configure.ac that checks for Boost MPI.
It's not hard to find a couple of them on the Internet, but none of the ones I tried worked as expected.
What ax_boost_mpi.m4 do you use?
EDIT: I'll explain my requirements better. I need the macro to tell me whether Boost MPI is available (defining HAVE_BOOST_MPI), to store the compiler and linker flags somewhere, and to switch the compiler from the normal C++ compiler to an available mpiCC or mpic++.
If Boost MPI is not found, I'd like to be able to choose whether to stop the configuration process with an error or to continue using g++ without HAVE_BOOST_MPI defined.
As a plus, it should define an MPIRUN variable to allow running some checks.
I'm unaware of a turnkey solution here, but that doesn't mean one's unavailable.
With some work, you could probably adapt http://www.gnu.org/software/autoconf-archive/ax_mpi.html#ax_mpi and http://github.com/tsuna/boost.m4 to do what you want: the former digs up the MPI compiler and the latter checks for Boost. You'd have to add a Boost MPI check to boost.m4, as it doesn't have one, and you'd also have to add your own MPIRUN-searching mechanism.
If you find a solution and/or roll your own, please do share.
# ===========================================================================
#
# SYNOPSIS
#
# AX_BOOST_MPI
#
# DESCRIPTION
#
# Test for MPI library from the Boost C++ libraries. The macro
# requires a preceding call to AX_BOOST_BASE, AX_BOOST_SERIALIZATION
# and AX_MPI. You also need to set CXX="$MPICXX" before calling the
# macro.
#
# This macro calls:
#
# AC_SUBST(BOOST_MPI_LIB)
#
# And sets:
#
# HAVE_BOOST_MPI
#
# LICENSE
#
# Based on Boost Serialize by:
# Copyright (c) 2008 Thomas Porschberg <thomas@randspringer.de>
#
# Copyright (c) 2010 Mirko Maischberger <mirko.maischberger@gmail.com>
#
# Copying and distribution of this file, with or without modification, are
# permitted in any medium without royalty provided the copyright notice
# and this notice are preserved. This file is offered as-is, without any
# warranty.
#serial 1
AC_DEFUN([AX_BOOST_MPI],
[
AC_ARG_WITH([boost-mpi],
AS_HELP_STRING([--with-boost-mpi@<:@=special-lib@:>@],
[use the MPI library from boost - it is possible to
specify a certain library for the linker
e.g. --with-boost-mpi=boost_mpi-gcc-mt-d-1_33_1 ]),
[
if test "$withval" = "no"; then
want_boost="no"
elif test "$withval" = "yes"; then
want_boost="yes"
ax_boost_user_mpi_lib=""
else
want_boost="yes"
ax_boost_user_mpi_lib="$withval"
fi
],
[want_boost="yes"]
)
if test "x$want_boost" = "xyes"; then
AC_REQUIRE([AC_PROG_CC])
CPPFLAGS_SAVED="$CPPFLAGS"
CPPFLAGS="$CPPFLAGS $BOOST_CPPFLAGS"
AC_MSG_WARN(BOOST_CPPFLAGS $BOOST_CPPFLAGS)
export CPPFLAGS
LDFLAGS_SAVED="$LDFLAGS"
LDFLAGS="$LDFLAGS $BOOST_LDFLAGS"
export LDFLAGS
LIBS_SAVED="$LIBS"
LIBS="$LIBS $BOOST_SERIALIZATION_LIB"
export LIBS
AC_CACHE_CHECK(whether the Boost::MPI library is available,
ax_cv_boost_mpi,
[AC_LANG_PUSH([C++])
AC_COMPILE_IFELSE(AC_LANG_PROGRAM([[@%:@include <boost/mpi.hpp>
]],
[[int argc = 0;
char **argv = 0;
boost::mpi::environment env(argc,argv);
return 0;
]]),
ax_cv_boost_mpi=yes, ax_cv_boost_mpi=no)
AC_LANG_POP([C++])
])
if test "x$ax_cv_boost_mpi" = "xyes"; then
AC_DEFINE(HAVE_BOOST_MPI,,[define if the Boost::MPI library is available])
BOOSTLIBDIR=`echo $BOOST_LDFLAGS | sed -e 's/@<:@^\/@:>@*//'`
if test "x$ax_boost_user_mpi_lib" = "x"; then
for libextension in `ls $BOOSTLIBDIR/libboost_mpi*.{so,a}* 2>/dev/null | grep -v python | sed 's,.*/,,' | sed -e 's;^lib\(boost_mpi.*\)\.so.*$;\1;' -e 's;^lib\(boost_mpi.*\)\.a*$;\1;'` ; do
ax_lib=${libextension}
AC_CHECK_LIB($ax_lib, exit,
[BOOST_MPI_LIB="-l$ax_lib"; AC_SUBST(BOOST_MPI_LIB) link_mpi="yes"; break],
[link_mpi="no"])
done
if test "x$link_mpi" != "xyes"; then
for libextension in `ls $BOOSTLIBDIR/boost_mpi*.{dll,a}* 2>/dev/null | grep -v python | sed 's,.*/,,' | sed -e 's;^\(boost_mpi.*\)\.dll.*$;\1;' -e 's;^\(boost_mpi.*\)\.a*$;\1;'` ; do
ax_lib=${libextension}
AC_CHECK_LIB($ax_lib, exit,
[BOOST_MPI_LIB="-l$ax_lib"; AC_SUBST(BOOST_MPI_LIB) link_mpi="yes"; break],
[link_mpi="no"])
done
fi
else
for ax_lib in $ax_boost_user_mpi_lib boost_mpi-$ax_boost_user_mpi_lib; do
AC_CHECK_LIB($ax_lib, exit,
[BOOST_MPI_LIB="-l$ax_lib"; AC_SUBST(BOOST_MPI_LIB) link_mpi="yes"; break],
[link_mpi="no"])
done
fi
if test "x$link_mpi" != "xyes"; then
AC_MSG_ERROR(Could not link against $ax_lib !)
fi
fi
LIBS="$LIBS_SAVED"
CPPFLAGS="$CPPFLAGS_SAVED"
LDFLAGS="$LDFLAGS_SAVED"
fi
])
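A minimal configure.ac sketch wiring the macro up in the order its description assumes (the project name, Boost version and fallback behaviour are illustrative):
AC_INIT([myproject], [1.0])
AC_PROG_CXX
AC_LANG([C++])
# AX_MPI (from the autoconf archive) sets MPICXX when an MPI wrapper is found
AX_MPI([have_mpi=yes], [have_mpi=no])
AX_BOOST_BASE([1.40])
AX_BOOST_SERIALIZATION
if test "x$have_mpi" = "xyes"; then
CXX="$MPICXX"
AX_BOOST_MPI
fi
AC_CONFIG_FILES([Makefile])
AC_OUTPUT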
This comment is a bit late, but I will add it here so that others searching for the same topic can find it. I had personally been looking for a function integrated into boost.m4 that defined variables similar to those of the other Boost libraries (BOOST_MPI_LDFLAGS, BOOST_MPI_LIBS). I finally just added one and submitted a pull request here:
https://github.com/tsuna/boost.m4/pull/50
This uses the MPICXX variable for CXX/CXXCPP if it is already defined (by ax_mpi.m4, acx_mpi.m4, etc), otherwise it uses the existing CXX/CXXCPP.

How to run TAP::Harness tests written in Guile?

The usual approach of
test:
$(PERL) "-MExtUtils::Command::MM" "-e" "test_harness($(TEST_VERBOSE), '$(INCDIRS)')" $(TEST_FILES)
fails to run Guile scripts, because it passes the extra parameter "-w" to Guile.
One possible approach is to set up your project as follows.
Your directory structure is as follows:
./project Your project files
./project/t/*.t Your unit test scripts
./project/t/scripts/* Auxiliary scripts used by your unit tests
Your ./project/Makefile contains the following:
PERL = /usr/bin/perl
TEST_LIBDIRS = ./lib
RUN_GUILE_TESTS = ./t/scripts/RunGuileTests.pl
TEST_FILES = ./t/*.t
test:
$(PERL) -I$(TEST_LIBDIRS) $(RUN_GUILE_TESTS) $(TEST_FILES)
Your ./project/t/scripts/RunGuileTests.pl contents are:
#!/usr/bin/perl -w
# Run Guile tests - filenames are given as arguments to the script.
use TAP::Harness;
my @tests = @ARGV;
my %args = (
verbosity => 0,
timer => 1,
show_count => 1,
exec => ['/usr/bin/guile', '-s'],
);
my $harness = TAP::Harness->new( \%args );
$harness->runtests(@tests);
# End of RunGuileTests.pl
Your Guile test scripts should start with:
#!/usr/bin/guile -s
!#
; Description of your tests
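For reference, a minimal Guile test script that emits TAP output by hand might look like this (a sketch; real suites would more likely use a proper test library):
#!/usr/bin/guile -s
!#
; Print the TAP plan, then one "ok"/"not ok" line per check.
(display "1..2\n")
(if (= (+ 1 1) 2)
(display "ok 1 - addition works\n")
(display "not ok 1 - addition works\n"))
(if (string=? (string-append "foo" "bar") "foobar")
(display "ok 2 - string-append works\n")
(display "not ok 2 - string-append works\n"))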