My autotools project has a couple of unit tests.
One of these tests (filereader) needs to read a file (data/test1.bin).
Here's my filesystem layout:
- libfoo/tests/filereader.c
- libfoo/tests/data/test1.bin
and my libfoo/tests/Makefile.am:
AUTOMAKE_OPTIONS = foreign
AM_CPPFLAGS = -I$(top_srcdir)/foo
LDADD = $(top_builddir)/src/libfoo.la
EXTRA_DIST = data/test1.bin
TESTS = filereader
check_PROGRAMS = filereader
filereader_SOURCES = filereader.c
This works great as long as I do in-tree builds.
However, when running the test suite out-of-tree (e.g. make distcheck), the filereader test cannot find the input file anymore.
This is obviously because only the source tree contains the input file, not the build tree.
I wonder: what is the canonical way to fix this problem?
1. Compile the directory of the test file into the unit test (AM_CPPFLAGS += -DSRCDIR=$(srcdir))?
2. Pass the fully qualified input file as a command-line argument to the test (e.g. $(builddir)/filereader $(srcdir)/data/test1.bin)?
3. Copy the input file from the source tree to the build tree (cp $(srcdir)/data/test1.bin $(builddir)/data/test1.bin; what would a proper make rule look like)?
Canonically, the solution is to compile the path to your data file into the unit test, i.e. the first option you laid out. The second one is also possible, but it requires an in-between driver script.
I would suggest avoiding the third one, but if you do want to go down that route, use $(LN_S) rather than cp; that way you reduce the I/O load of the test.
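A minimal sketch of the first option, assuming the layout above; note the extra shell quoting, which makes SRCDIR expand to a C string literal:
AM_CPPFLAGS = -I$(top_srcdir)/foo -DSRCDIR='"$(srcdir)"'
In filereader.c the path can then be built at compile time, e.g. fopen(SRCDIR "/data/test1.bin", "rb"), which works for both in-tree and out-of-tree builds.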
There is a way to do this with autoconf. From the netcdf-c configure.ac:
##
# Some files need to exist in build directories
# that do not correspond to their source directory, or
# the test program makes an assumption about where files
# live. AC_CONFIG_LINKS provides a mechanism to link/copy files
# if an out-of-source build is happening.
##
AC_CONFIG_LINKS([nc_test4/ref_hdf5_compat1.nc:nc_test4/ref_hdf5_compat1.nc])
AC_CONFIG_LINKS([nc_test4/ref_hdf5_compat2.nc:nc_test4/ref_hdf5_compat2.nc])
AC_CONFIG_LINKS([nc_test4/ref_hdf5_compat3.nc:nc_test4/ref_hdf5_compat3.nc])
AC_CONFIG_LINKS([nc_test4/ref_chunked.hdf4:nc_test4/ref_chunked.hdf4])
AC_CONFIG_LINKS([nc_test4/ref_contiguous.hdf4:nc_test4/ref_contiguous.hdf4])
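Adapted to the layout in the original question (a sketch, assuming libfoo/ is the project root), a single line in configure.ac would do:
AC_CONFIG_LINKS([tests/data/test1.bin:tests/data/test1.bin])
config.status then links (or copies) the file from the source tree into the build tree, so the test can keep opening data/test1.bin relative to its build directory.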
In a meson build, is it possible to set the working directory before executing unit tests? It uses ninja by default to run the tests, so perhaps there's an option you can pass to ninja to set the directory?
The reason I'm asking is that unit tests sometimes need access to config/data files (I usually try to avoid this, but sometimes it's just not possible), and they need to know the relative path in order to load them.
It appears the appropriate way to do this is to pass a workdir keyword argument to the test() method.
exe = executable('unit_test', 'test.c')
test('basic', exe, workdir : meson.source_root())
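If changing the working directory is not desirable, a possible alternative (a sketch, assuming the data files live next to the test sources) is to hand the data location to the same exe as an argument instead:
test('basic', exe, args : [join_paths(meson.current_source_dir(), 'data')])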
I am maintaining an autoconf package and wanted to integrate automatic testing. I use the Boost Unit Test Framework for my unit tests and was able to successfully integrate it into the package.
That is, it can be compiled via make check, but it is not run (although I read that make check both compiles and runs the tests). As a result, I have to run it manually after building the tests, which is cumbersome.
Makefile.am in the test folder looks like this:
check_PROGRAMS = prog_test
prog_test_SOURCES = test_main.cpp ../src/class1.cpp class1_test.cpp class2.cpp ../src/class2_test.cpp ../src/class3.cpp ../src/class4.cpp
prog_test_LDADD = $(BOOST_FILESYSTEM_LIB) $(BOOST_SYSTEM_LIB) $(BOOST_UNIT_TEST_FRAMEWORK_LIB)
Makefile.am in the root folder:
SUBDIRS = src test
dist_doc_DATA = README
ACLOCAL_AMFLAGS = ${ACLOCAL_FLAGS} -I m4
Running test/prog_test manually yields the output:
Running 4 test cases...
*** No errors detected
(I don't think you need the contents of my test cases in order to answer my question, so I omitted them for now)
So how can I make automake run my tests every time I run make check?
At least one way of doing this involves setting the TESTS variable. Here's what the automake documentation says about it:
If the special variable TESTS is defined, its value is taken to be a list of programs or scripts to run in order to do the testing.
So adding the line
TESTS = $(check_PROGRAMS)
should instruct it to run the tests on make check.
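Applied to the test-folder Makefile.am from the question, that would look roughly like this (a sketch):
check_PROGRAMS = prog_test
prog_test_SOURCES = test_main.cpp ../src/class1.cpp class1_test.cpp class2.cpp ../src/class2_test.cpp ../src/class3.cpp ../src/class4.cpp
prog_test_LDADD = $(BOOST_FILESYSTEM_LIB) $(BOOST_SYSTEM_LIB) $(BOOST_UNIT_TEST_FRAMEWORK_LIB)
TESTS = $(check_PROGRAMS)
After re-running automake and ./configure, make check will build prog_test, run it, and report PASS or FAIL.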
I'm running into an issue with autotools.
I need both support for tests and a custom build directory (other than the main source directory). Autotools seems to complain:
src/lib/Libattr/test/attr_atomic/Makefile.am:18: error: using '$(top_srcdir)' in TESTS is currently broken: '$(top_srcdir)/src/test/coverage_run.sh'
Apparently the same is true for $(srcdir). The unit test needs manually set include and source paths, as it requires headers and files from different locations in the source tree.
How do I refer to the root of the source tree if I can't use $(srcdir) and $(top_srcdir)?
I suspect the problem is that your autotooled test harness plays by the rules of the older (and discouraged) serial test harness, and that the solution is to play by the rules of the newer and correspondingly encouraged parallel test harness.
In the latter, as you'll gather from the example code at the link, you don't mention $(top_srcdir) et al. in TESTS; you mention them in AM_TESTS_ENVIRONMENT.
Here's an illustrative Makefile.am fragment that works fine for me:
...
CORE_TESTS = coan_case_tester.py coan_bulk_tester.py coan_spin_tester.py \
             coan_symbol_rewind_tester.py coan_softlink_tester.py
if MAKE_CHECK_TIMING
TESTS = $(CORE_TESTS) coan_test_metrics.py
else
TESTS = $(CORE_TESTS)
endif
AM_TESTS_ENVIRONMENT = COAN_PKGDIR=$(top_srcdir); \
    COAN_BUILDDIR=$(top_builddir); TIMING_METRICS=$(TIMING_METRICS_ENABLED); \
    rm -f coan.test_timer.time.txt; \
    export COAN_PKGDIR; export COAN_BUILDDIR; export TIMING_METRICS;
LOG_COMPILER = python
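Adapted to the question above, a hedged sketch (the local wrapper name attr_atomic_test.sh is hypothetical) would keep $(top_srcdir) out of TESTS by listing a small wrapper script that lives in the test directory and exporting the tree roots through the environment:
TESTS = attr_atomic_test.sh
TEST_EXTENSIONS = .sh
SH_LOG_COMPILER = $(SHELL)
AM_TESTS_ENVIRONMENT = \
    top_srcdir=$(top_srcdir); export top_srcdir; \
    top_builddir=$(top_builddir); export top_builddir;
The wrapper itself then just runs exec "$top_srcdir"/src/test/coverage_run.sh "$@", so no source-tree path ever appears in TESTS.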
I've recently picked up SCons to implement a multi-platform build framework for a medium-sized C++ project. The build generates a bunch of unit tests which should be invoked at the end of it all. How does one achieve that sort of thing?
For example in my top level sconstruct, I have
subdirs = ['list', 'of', 'my', 'subprojects']
for subdir in subdirs:
    SConscript(dirs=subdir, exports='env', name='sconscript',
               variant_dir=subdir + os.sep + 'build' + os.sep + mode,
               duplicate=0)
Each of the subdirs has its own unit tests; however, since there are dependencies between the DLLs and executables built inside them, I want to hold off running the tests until all the subdirs have been built and installed (I mean, using env.Install).
Where should I write the loop that iterates through the built tests and executes them? I tried putting it just after this loop, but since SCons doesn't let you control the order of execution, it gets executed well before I want it to.
Please help a scons newbie. :)
thanks,
SCons, like Make, uses a declarative approach to solving the build problem. You don't want to tell SCons how to do its job; you want to document all the dependencies and then let SCons work out how to build everything.
If something is being executed before something else, you need to create and hook up the dependencies.
If you want to create dummy touch files, you can create a custom builder like:
import os
import time

# custom action: run the external build step and touch a uniquely-stamped dummy file
def action(target, source, env):
    os.system('echo here I am running other build')
    dmy_fh = open('dmy_file', 'w')
    dmy_fh.write('Dummy dependency file created at %4d.%02d.%02d %02dh%02dm%02ds\n' % time.localtime()[0:6])
    dmy_fh.close()

bldr = Builder(action=action)
env.Append(BUILDERS={'SubBuild': bldr})
env.SubBuild(tgts, srcs)
It is very important to put the timestamp into the dummy file, because SCons uses MD5 hashes. If you have an empty file, the MD5 will always be the same and SCons may decide not to do subsequent build steps. If you need to generate different tweaks on a basic command, you can use function factories to modify a template, e.g.:
def gen_a_echo_cmd_func(echo_str):
    def cmd_func(target, source, env):
        cmd = 'echo %s' % echo_str
        print(cmd)
        os.system(cmd)
    return cmd_func

bldr = Builder(action=gen_a_echo_cmd_func('hi'))
env.Append(BUILDERS={'Hi': bldr})
env.Hi(tgts, srcs)

bldr = Builder(action=gen_a_echo_cmd_func('bye'))
env.Append(BUILDERS={'Bye': bldr})
env.Bye(tgts, srcs)
If you have something that you want to automatically inject into the SCons build flow (e.g. something that compresses all your build log files after everything else has run), see my question here.
The solution should be as simple as this: make the result of the Test builders depend on the result of the Install builder.
In pseudocode:
test = Test(dlls)
result = Install(dlls)
Depends(test,result)
The best way would be if the Test builder actually worked out the dll dependencies for you, but there may be all kinds of reasons it doesn't do that.
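SCons has no built-in Test builder, so here is a hedged sketch of that idea with assumed names (install_dir, dlls, test_programs): run each built test program through a Command action that writes a stamp file, and make that action depend on the Install result so the tests only run after everything has been installed.
installed = env.Install(install_dir, dlls)
for prog in test_programs:
    # prog is the node list returned by env.Program(); running it writes a
    # .passed stamp file, so SCons only re-runs the test when it is out of date
    run_test = env.Command(str(prog[0]) + '.passed', prog,
                           '$SOURCE && echo passed > $TARGET')
    env.Depends(run_test, installed)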
In terms of dependencies, what you want is for all the test actions to depend on all the program-build actions. A way of doing this is to create a dummy target and export it to all the subdirectories' SConscript files; in each SConscript, make the dummy target depend on the main targets, and have the test targets depend on the dummy target.
I'm having a bit of trouble figuring out how to set up the dummy target, but this basically works:
(in top-level SConstruct)
dummy = env.Command('.all_built', 'SConstruct', 'echo Targets built. > $TARGET')
Export('dummy')
(in each sub-directory's SConscript)
Import('dummy')
for target in target_list:
    Depends(dummy, target)
for test in test_list:
    Depends(test, dummy)
I'm sure further refinements are possible, but maybe this'll get you started.
EDIT: also worth pointing out this page on the subject.
Just have each SConscript return a value on which you will build dependencies.
SConscript file:
test = debug_environment.Program('myTest', src_files)
Return('test')
SConstruct file:
dep1 = SConscript([...])
dep2 = SConscript([...])
Depends(dep1, dep2)
Now the dep1 build will complete after the dep2 build has completed.