Combination of tests and custom build directory - C++

I'm running into an issue with autotools.
I need support both for tests and for a custom build directory (other than the main source directory). Autotools seems to complain:
src/lib/Libattr/test/attr_atomic/Makefile.am:18: error: using '$(top_srcdir)' in TESTS is currently broken: '$(top_srcdir)/src/test/coverage_run.sh'
Apparently the same thing is true for $(srcdir). The unit test needs manually set include and source paths, as it requires headers and files from different locations in the source tree.
How do I refer to the root of the source tree if I can't use $(srcdir) and $(top_srcdir)?

I suspect the problem is that your autotooled test harness plays by the rules of the older (and discouraged) Serial Test Harness, and that the solution is to play by the rules of the newer and correspondingly encouraged Parallel Test Harness.
In the latter, as you'll gather from the example code in the automake manual, you don't mention $(top_srcdir) et al. in TESTS. You mention them in AM_TESTS_ENVIRONMENT:
Here's an illustrative Makefile.am fragment I have to hand that works fine for me:
...
CORE_TESTS = coan_case_tester.py coan_bulk_tester.py coan_spin_tester.py \
coan_symbol_rewind_tester.py coan_softlink_tester.py
if MAKE_CHECK_TIMING
TESTS = $(CORE_TESTS) coan_test_metrics.py
else
TESTS = $(CORE_TESTS)
endif
AM_TESTS_ENVIRONMENT = COAN_PKGDIR=$(top_srcdir); \
COAN_BUILDDIR=$(top_builddir); TIMING_METRICS=$(TIMING_METRICS_ENABLED); \
rm -f coan.test_timer.time.txt; \
export COAN_PKGDIR; export COAN_BUILDDIR; export TIMING_METRICS;
LOG_COMPILER = python
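Applied to the question, a minimal sketch of the same approach might be (the wrapper name run-coverage.sh and the variable COVERAGE_SRCDIR are my inventions, not part of the question's tree):

# Makefile.am sketch: keep $(top_srcdir) out of TESTS by listing a small
# wrapper script that lives in this directory, and export the tree root
# through AM_TESTS_ENVIRONMENT instead.
TESTS = run-coverage.sh
AM_TESTS_ENVIRONMENT = COVERAGE_SRCDIR=$(top_srcdir); export COVERAGE_SRCDIR;
LOG_COMPILER = $(SHELL)

where run-coverage.sh would be little more than:

#!/bin/sh
exec "$COVERAGE_SRCDIR/src/test/coverage_run.sh" "$@"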


Can I use bitbake to rebuild our system like CMake would, when changes have been made to a local repo's C++ files?

I have googled frenetically but have not been able to find an answer yet.
I'd like to build an image for our device containing our Linux system together with our end-user applications. The end-user applications reside in our git repository, ourapplications-repo. This works fine, i.e. invoking bitbake <ourapps> builds that image as desired. So far, so good.
However, it only works once. When I have edited our application files, which happen to be C++ files, I would of course like the bitbake <ourapps> command to recompile and link the affected C++ files, libraries and executables, just as running make would, and to incorporate the affected files into a fresh new image that can be downloaded to our devices. Unfortunately, that does not happen.
Is there an easy fix for this, maybe adding something to SRC_URI or setting another BB_VARIABLE to ensure that files in ourapplications-repo are checked for changes? Or is this totally the wrong way to handle my need? Maybe I really have to create an SDK, work with it using my vanilla build tools like autotools and CMake, and, once I am satisfied with my local changes, commit and push them to the git repo so that bitbake <ourapps> picks them up (if bitbake only picks up changes that have been pushed to the git repository)?
Or, what is the best practice? How should one work with a common(?) setup like this?
This is what our simple and standard project structure looks like:
- project
-- build
-- poky
-- meta-freescale
-- meta-openembedded
-- meta-ourapplications
-- ourapplications-repo
--- build/conf/bblayers.conf:
BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS ?= " \
${TOPDIR}/../poky/meta \
${TOPDIR}/../poky/meta-poky \
${TOPDIR}/../poky/meta-yocto-bsp \
${TOPDIR}/../meta-freescale \
${TOPDIR}/../meta-openembedded/meta-oe \
${TOPDIR}/../meta-openembedded/meta-python \
${TOPDIR}/../meta-ourapplications \
"
--- meta-ourapplications/recipes-ourapplications/images/ourapps.bb:
SUMMARY = "A small image capable of allowing our linux system to boot with our applications."
IMAGE_INSTALL = "packagegroup-core-boot ${CORE_IMAGE_EXTRA_INSTALL} ... foo"
inherit core-image
--- meta-ourapplications/recipes-ourapplications/foo/foo_1.0.bb:
DESCRIPTION = "Foo App"
LICENSE = "CLOSED"
SRCREV = "${AUTOREV}"
PVBASE := "${PV}"
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}-${PVBASE}:${THISDIR}/${PN}:"
PV = "${PVBASE}+${SRCPV}"
SRC_URI = "git://${TOPDIR}/../ourapplications-repo;protocol=file;subpath=${BPN}"
S = "${WORKDIR}/${BPN}"
inherit autotools
--- ourapplications-repo/foo/
---- foo.cpp
---- Makefile.am
---- and so on...

Can you glob source code with meson?

Is it possible to glob source code files in a meson build?
Globbing source files is discouraged and considered bad practice, and not only in Meson. It causes weird errors, makes it hard to keep development-only files in the tree that you don't want to build or ship, and can cause problems with incremental builds.
Explicit is better than implicit.
2021-03-02 EDIT:
Read also Why can't I specify target files with a wildcard? in the Meson FAQ.
Meson does not support this syntax and the reason for this is simple. This can not be made both reliable and fast.
If, after all the warnings, you still want to do it at your own risk, the FAQ tells you how in But I really want to use wildcards!: you just use an external script to do the globbing and return the list of files (that script is called grabber.sh in that example).
c = run_command('grabber.sh')
sources = c.stdout().strip().split('\n')
e = executable('prog', sources)
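The FAQ does not show grabber.sh itself; a minimal sketch of such a script, assuming the sources live under src/ and the script runs from the source root, could be:

#!/bin/sh
# grabber.sh (sketch): print one source path per line for the split('\n') above
ls src/*.cpp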
I found an example in the meson unit tests showing how to glob sources, but the comments say this is not recommended.
if build_machine.system() == 'windows'
c = run_command('grabber.bat')
grabber = find_program('grabber2.bat')
else
c = run_command('grabber.sh')
grabber = find_program('grabber.sh')
endif
# First test running command explicitly.
if c.returncode() != 0
error('Executing script failed.')
endif
newline = '''
'''
sources = c.stdout().strip().split(newline)
e = executable('prog', sources)
The reason this is not recommended: adding files by globbing a directory will NOT make them automatically appear in the build. You have to re-invoke meson manually for the files to be added to the build. Re-invoking ninja or another backend is not sufficient; you must re-invoke meson itself.
meson.build
glob = run_command('python', 'glob')
sources = glob.stdout().strip().split('\n')
glob:
import glob
# recursive=True is needed for the '**' pattern to actually descend into subdirectories
sources = glob.glob('./src/*.cpp') + glob.glob('./src/**/*.cpp', recursive=True)
for i in sources:
    print(i)
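In practice this means that after adding or removing a file you must force a reconfigure yourself, e.g. (assuming a build directory named builddir):

meson setup --reconfigure builddir   # re-runs the glob script
ninja -C builddir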
No, it's not possible. Every source file has to be explicitly stated to build a target.

automake: automatically run unit tests

I am maintaining an autoconf package and wanted to integrate automatic testing. I use the Boost Unit Test Framework for my unit tests and was able to successfully integrate it into the package.
That is, it can be compiled via make check, but it is not run (although I read that make check both compiles and runs the tests). As a result, I have to run it manually after building the tests, which is cumbersome.
Makefile.am in the test folder looks like this:
check_PROGRAMS = prog_test
prog_test_SOURCES = test_main.cpp ../src/class1.cpp class1_test.cpp class2.cpp ../src/class2_test.cpp ../src/class3.cpp ../src/class4.cpp
prog_test_LDADD = $(BOOST_FILESYSTEM_LIB) $(BOOST_SYSTEM_LIB) $(BOOST_UNIT_TEST_FRAMEWORK_LIB)
Makefile.am in the root folder:
SUBDIRS = src test
dist_doc_DATA = README
ACLOCAL_AMFLAGS = ${ACLOCAL_FLAGS} -I m4
Running test/prog_test yields the output:
Running 4 test cases...
*** No errors detected
(I don't think you need the contents of my test cases in order to answer my question, so I omitted them for now)
So how can I make automake run my tests every time I run make check?
At least one way of doing this involves setting TESTS variable. Here's what documentation on automake says about it:
If the special variable TESTS is defined, its value is taken to be a list of programs or scripts to run in order to do the testing.
So adding the line
TESTS = $(check_PROGRAMS)
should instruct it to run the tests on make check.
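Putting that together with the fragment from the question, the test folder's Makefile.am becomes (a sketch reusing the names from the question):

check_PROGRAMS = prog_test
prog_test_SOURCES = test_main.cpp ../src/class1.cpp class1_test.cpp \
    class2.cpp ../src/class2_test.cpp ../src/class3.cpp ../src/class4.cpp
prog_test_LDADD = $(BOOST_FILESYSTEM_LIB) $(BOOST_SYSTEM_LIB) $(BOOST_UNIT_TEST_FRAMEWORK_LIB)
# the one new line: make "make check" run what it builds
TESTS = $(check_PROGRAMS)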

make distcheck and tests that need input files

I recently converted my build system to automake/autoconf. In my project I have a few unit tests that need some input data files in the directory from where they are run. When I run make distcheck and it tries the VPATH build, these tests fail because they are apparently not run from the directory where the input files are. I was wondering if there is some quick fix for this. For example, can I somehow tell the system not to run these tests on make distcheck (but still run them on make check)? Or to cd to the directory where the files are before running the tests?
I had the same problem and used a solution similar to William's. My Makefile.am looks something like this:
EXTRA_DIST = testdata/test1.dat
AM_CPPFLAGS = -DDATADIR=\"$(srcdir)/\"
Then, in my unit test, I use the DATADIR define:
string path = DATADIR "/testdata/test1.dat";
This works with make check and make distcheck.
The typical solution is to write the tests so that they look in the source directory for the data files. For example, you can reference $srcdir in the test, or convert test to test.in and refer to @srcdir@.
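For instance, a test.in that configure turns into a runnable script might look like this (a sketch; the data-file path is illustrative):

#!/bin/sh
# test.in: configure substitutes @srcdir@, so the generated script in the
# build directory still finds its data in the source tree
input="@srcdir@/testdata/test1.dat"
test -r "$input" || exit 1
# ... run the real check against "$input" ...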
If your tests are all in the source directory, you can run all the tests in that directory by setting TESTS_ENVIRONMENT in Makefile.am:
TESTS_ENVIRONMENT = cd $(srcdir) &&
This will fail if some of your tests are created by configure and therefore live only in the build directory, in which case you can selectively cd with something like:
TESTS_ENVIRONMENT = { test $${tst} = mytest && cd $(srcdir); true; } &&
Trying to use TESTS_ENVIRONMENT like this is fragile at best; it is better to write the tests so that they look in the source directory for the data files.

How do you create tests for "make check" with GNU autotools

I'm using GNU autotools for the build system on a particular project. I want to start writing automated tests for verification. I would like to just type "make check" to have it automatically run these. My project is in C++, although I am still curious about writing automated tests for other languages as well.
Is this compatible with pretty much every unit testing framework out there (I was thinking of using cppunit)? How do I hook these unit testing frameworks into make check? Can I make sure that I don't require the unit test software to be installed to be able to configure and build the rest of the project?
To make tests run when you issue make check, you need to add them to the TESTS variable.
Assuming you've already built the executable that runs the unit tests, you just add the name of the executable to the TESTS variable like this:
TESTS=my-test-executable
It should then be automatically run when you make check, and if the executable returns a non-zero value, it will report that as a test failure. If you have multiple unit test executables, just list them all in the TESTS variable:
TESTS=my-first-test my-second-test my-third-test
and they will all get run.
I'm using Check 0.9.10
configure.ac
Makefile.am
src/Makefile.am
src/foo.c
tests/check_foo.c
tests/Makefile.am
./configure.ac
PKG_CHECK_MODULES([CHECK], [check >= 0.9.10])
./tests/Makefile.am for test codes
TESTS = check_foo
check_PROGRAMS = check_foo
check_foo_SOURCES = check_foo.c $(top_builddir)/src/foo.h
check_foo_CFLAGS = @CHECK_CFLAGS@
check_foo_LDADD = @CHECK_LIBS@ # needed to link against the Check library
and write test code, ./tests/check_foo.c
START_TEST (test_foo)
{
ck_assert( foo() == 0 );
ck_assert_int_eq( foo(), 0);
}
END_TEST
/// And there are some tcase_xxx codes to run this test
Using Check you can use timeouts and raise signals; it is very helpful.
You seem to be asking 2 questions in the first paragraph.
The first is about adding tests to the GNU autotools toolchain - but those tests, if I'm understanding you correctly, are both for validating that the environment necessary to build your application exists (dependent libraries and tools) and for adapting the build to the environment (platform-specific differences).
The second is about unit testing your C++ application and where to invoke those tests. You've proposed doing so from the autotools toolchain, presumably from the configure script. Doing that isn't conventional, though - putting a 'test' target in your Makefile is a more conventional way of executing your test suite. The typical steps for building and installing an application with autotools (at least from a user's perspective, not from yours, the developer's) are to run the configure script, then run make, then optionally run make test, and finally make install.
For the second issue, not wanting cppunit to be a dependency, why not just distribute it with your C++ application? Can you put it right into whatever archive format you're using (be it tar.gz, tar.bz2 or .zip) along with your source code? I've used cppunit in the past and was happy with it, having used JUnit and other xUnit-style frameworks.
Here is a method without dependencies:
#src/Makefile.am
check_PROGRAMS = test1 test2
test1_SOURCES = test/test1.c code_needed_to_test1.h code_needed_to_test1.c
test2_SOURCES = test/test2.c code_needed_to_test2.h code_needed_to_test2.c
TESTS = $(check_PROGRAMS)
make check will then work naturally and show formatted, summarized output:
$ make check
...
PASS: test1
PASS: test2
============================================================================
Testsuite summary for foo 1.0
============================================================================
# TOTAL: 2
# PASS: 2
# SKIP: 0
# XFAIL: 0
# FAIL: 0
# XPASS: 0
# ERROR: 0
============================================================================
When you do a make dist, nothing from src/test/* will be in the tarball: test code is not in the dist, only the source will be.
When you do a make distcheck, it will run make check and thus run your tests.
You can use Automake's TESTS to run programs generated with check_PROGRAMS, but this assumes that you are using a log driver and a log compiler for the output. It is probably easier to still use check_PROGRAMS but to invoke the test suite from a local rule in the Makefile:
check_PROGRAMS=testsuite
testsuite_SOURCES=...
testsuite_CFLAGS=...
testsuite_LDADD=...
check-local:
	./testsuite