How to add unit tests to a shared library project using autotools - C++

I'm developing a library that does a lot of calculation. It uses the GNU autotools build system. There is a test project that links to this library and runs various test procedures; each procedure compares results with pre-calculated values from MATLAB.
I've found this testing process tedious and time-consuming: every time, I need to run make and sudo make install in the library, then make in the test project, then run the program and see what's going on.
What is the standard way to add a check target to a library using autotools? It should meet these requirements:
The user should be able to run make check and see results without having to install the library itself. The test executable should link against the freshly compiled, not-yet-installed shared objects.
Running make check should also run the test program, not only compile it. The result of make check depends on the return value of the test program: make should report an error if a test fails.
If the user decides not to run make check, then no test executable should be compiled.

Since you're already using autotools, you've got most of the infrastructure already. I don't know the directory layout, but let's say you have SUBDIRS = soroush tests in a top-level Makefile.am; alternatively, you might have SUBDIRS = tests in the soroush directory. What matters is that a libtool-managed libsoroush.la exists before make descends into the tests directory.
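For concreteness, here is a minimal sketch of that layout; the directory names come from the question, but the file contents below are assumptions:
# configure.ac (excerpt)
AC_INIT([soroush], [1.0])
AM_INIT_AUTOMAKE([foreign])
LT_INIT                  # libtool, so tests can link the uninstalled .la
AC_PROG_CXX
AC_CONFIG_FILES([Makefile soroush/Makefile tests/Makefile])
AC_OUTPUT

# Makefile.am (top level): soroush is listed before tests,
# so libsoroush.la exists before make recurses into tests/
SUBDIRS = soroush tests

# soroush/Makefile.am: the libtool-managed library itself
lib_LTLIBRARIES = libsoroush.la
libsoroush_la_SOURCES = soroush.cc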
The check_ prefix indicates that those objects, in this case programs, should not be built until make check is run. So in tests/Makefile.am => check_PROGRAMS = t1 t2 t3
For each test program you can specify t1_SOURCES = t1.cc, etc. As a shortcut, if you only have one source file per test, you can use AM_DEFAULT_SOURCE_EXT = .cc, which makes automake derive t1_SOURCES = t1.cc (and so on) implicitly. So far:
AM_CPPFLAGS = -I$(srcdir)/.. $(OTHER_CPPFLAGS) # relative path to lib headers.
LDADD = ../soroush/libsoroush.la
check_PROGRAMS = t1 t2 t3
AM_DEFAULT_SOURCE_EXT = .cc
# or: t1_SOURCES = t1.cc, t1_LDADD = ../soroush/libsoroush.la, etc.
make check will build, but not execute, those programs. For that, you need to add:
TESTS = $(check_PROGRAMS)
What's really good about this approach is that if libsoroush is built as a shared library, libtool will take care of handling library search paths, etc., using the uninstalled library.
Often, the resulting t1 program will just be a shell script that sets up environment variables so that the real binary, .libs/t1, can be executed. I only mention this because the whole point of using libtool is that you don't need to worry about how it's done.
Test feedback is more complicated, depending on what you require. You can go all the way with a parallel test harness, or just simple PASS/FAIL feedback. Unless testing is a major bottleneck, or the project is huge, it's easier just to use simple (or scripted) testing.
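The parallel harness mentioned above is a per-Makefile option (and the default since automake 1.13); as a minimal sketch, reusing the tests/Makefile.am from above:
# tests/Makefile.am
AUTOMAKE_OPTIONS = parallel-tests   # a no-op on automake >= 1.13
TESTS = $(check_PROGRAMS)           # each test logs to t1.log, t2.log, ...
                                    # plus an overall test-suite.log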

Related

julia: rerun unittests upon changes to files

Are there Julia libraries that can run unit tests automatically when I make changes to the code?
In Python there is the pytest-xdist plugin, which can rerun unit tests when you make changes to the code. Does Julia have a similar library?
A simple solution can be built using the standard library module FileWatching, specifically FileWatching.watch_file. Despite the name, it can be used with directories as well. When something happens to the directory (e.g., you save a new version of a file in it), it returns an object with a field changed, which is true if the directory has changed. You could of course combine this with Glob to watch a set of source files instead.
You could have a separate Julia process running, with the project's environment active, and use something like:
julia> import Pkg; import FileWatching: watch_file

julia> while true
           event = watch_file("src")
           if event.changed
               try
                   Pkg.pkg"test"
               catch err
                   @warn("Error during testing:\n$err")
               end
           end
       end
More sophisticated implementations are possible; with the above you would need to interrupt the loop with Ctrl-C to break out. But this does work for me and happily reruns tests whenever I save a file.
If you use a GitHub repository, there are ways to set up Travis CI or AppVeyor to do this. This is the testing method used by many of the registered Julia packages. You will need to write the unit test suite (with using Test) and place it in a /test subdirectory of the repository. You can search for julia together with those web services for details.
Use a standard GNU Makefile and call it from various places depending on your use case:
Your .juliarc, if you want tests checked on startup.
Cron, if you want them checked regularly.
Your module's __init__ function, if you want them checked every time the module is loaded.
Since a Makefile rule only runs when its prerequisites have changed, repeated calls to make are silently ignored in the absence of changes.
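As a minimal sketch of such a Makefile, assuming the usual src/ and test/ layout of a Julia package (the stamp-file name is made up, and recipe lines must be indented with a literal tab):
# Rerun the Julia test suite only when a source or test file has changed
SRCS := $(wildcard src/*.jl) $(wildcard test/*.jl)

.PHONY: test
test: .test-stamp

.test-stamp: $(SRCS)
	julia --project=. -e 'using Pkg; Pkg.test()'
	touch $@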

Testing a DLL with Boost::Test?

I am developing a DLL in C++ and want to perform unit testing of that DLL using the Boost Test libraries. I read the Boost Test manual thoroughly, but since I am new, I have the following question:
Should I add test classes in the same VC project in which I am developing my DLL? Ideally I want to do this, but I am confused: a DLL has no main(), while Boost Test needs its own main() to execute. So where does the Boost Test output go in this scenario? (In fact, I actually implemented this and don't see any output :( I spent almost two days figuring out the problem, without success.)
Regards,
Jame.
You have 3 ways to do this:
You can definitely do what another reply suggests and build your library as static. I wouldn't recommend going that way, though.
You can have one or more separate unit test projects in your solution. These projects will link with your library and with either the static or the shared version of the Boost Test library. Each project will have a main either supplied by the Boost.Test library or implemented by you manually.
Finally, you have a third option: you can put your test cases directly into your library. You'll need to link with the shared version of the Boost Test library. Once your library is built, you can use it regularly as you do now, plus you'll have the ability to execute the test cases built into it. To execute them you'll need a test runner. Boost Test supplies one called the "console test runner". You'll need to build it once, and then you can use it for all your projects. Using this test runner you can execute your unit tests like this:
test_runner.exe --test "your_lib".dll
You should understand the pluses and minuses of this approach: your unit test code becomes part of your production library. It makes the library slightly bigger, but on the other hand you'll be able to run the tests in production if necessary.
You could build your DLL as a static library file first. You can then use it both to compile your final DLL directly and to create an executable that contains your Boost tests. Here's an example using boost.build:
lib lib_base
  : # sources
    $(MAIN_SOURCES).cpp      # sources for the library
  : # requirements
    <link>static
  : : ;

lib dll_final
  : # sources
    lib_base
    $(DLL_SOURCES).cpp       # sources for DllMain
  : # requirements
    <link>shared
  : : ;

unit-test test_exe
  : # sources
    lib_base
    $(TEST_SOURCES).cpp      # sources for the unit tests
  : # properties
    <library>/site-config//boost/test
  ;
You do have to be careful not to put any important logic in your DllMain, but that's usually a bad idea anyway.

scons - how to run something /after/ all targets have been built

I've recently picked up scons to implement a multi-platform build framework for a medium sized C++ project. The build generates a bunch of unit-tests which should be invoked at the end of it all. How does one achieve that sort of thing?
For example in my top level sconstruct, I have
subdirs = ['list', 'of', 'my', 'subprojects']
for subdir in subdirs:
    SConscript(dirs=subdir, exports='env', name='sconscript',
               variant_dir=subdir + os.sep + 'build' + os.sep + mode, duplicate=0)
Each of the subdirs has its unit tests; however, since there are dependencies between the DLLs and executables built inside them, I want to hold off running the tests until all the subdirs have been built and installed (I mean, using env.Install).
Where should I write the loop that iterates through the built tests and executes them? I tried putting it just after this loop, but since scons doesn't let you control the order of execution, it gets executed well before I want it to.
Please help a scons newbie. :)
thanks,
SCons, like Make, uses a declarative approach to solving the build problem. You don't tell SCons how to do its job; you document all the dependencies and then let SCons work out how to build everything.
If something is being executed before something else, you need to create and hook up the dependencies.
If you want to create dummy touch files, you can create a custom builder like:
import os
import time

def action(target, source, env):
    os.system('echo here I am running other build')
    # Write a timestamp so the file's contents (and its md5) change every run.
    dmy_fh = open('dmy_file', 'w')
    dmy_fh.write('Dummy dependency file created at %4d.%02d.%02d %02dh%02dm%02ds\n'
                 % time.localtime()[0:6])
    dmy_fh.close()

bldr = Builder(action=action)
env.Append(BUILDERS={'SubBuild': bldr})
env.SubBuild(srcs, tgts)
It is very important to put the timestamp into the dummy file, because SCons uses md5 hashes. If you have an empty file, the md5 will always be the same and SCons may decide not to run subsequent build steps. If you need to generate different tweaks on a basic command, you can use function factories to modify a template, e.g.:
def gen_a_echo_cmd_func(echo_str):
    def cmd_func(target, source, env):
        cmd = 'echo %s' % echo_str
        print(cmd)
        os.system(cmd)
    return cmd_func   # return the inner function itself

bldr = Builder(action=gen_a_echo_cmd_func('hi'))
env.Append(BUILDERS={'Hi': bldr})
env.Hi(srcs, tgts)

bldr = Builder(action=gen_a_echo_cmd_func('bye'))
env.Append(BUILDERS={'Bye': bldr})
env.Bye(srcs, tgts)
If you have something that you want to automatically inject into the scons build flow ( e.g. something that compresses all your build log files after everything else has run ), see my question here.
The solution should be as simple as this: make the result of the Test builders depend on the result of the Install builder. In pseudo-code:
test = Test(dlls)
result = Install(dlls)
Depends(test,result)
The best way would be if the Test builder actually worked out the dll dependencies for you, but there may be all kinds of reasons it doesn't do that.
In terms of dependencies, what you want is for all the test actions to depend on all the program-build actions. One way of doing this is to create and export a dummy target to all the subdirectories' sconscript files; in each sconscript, make the dummy target Depends on the main targets, and have the test targets Depends on the dummy target.
I'm having a bit of trouble figuring out how to set up the dummy target, but this basically works:
(in top-level SConstruct)
dummy = env.Command('.all_built', 'SConstruct', 'echo Targets built. > $TARGET')
Export('dummy')
(in each sub-directory's SConscript)
Import('dummy')
for target in target_list:
    Depends(dummy, target)
for test in test_list:
    Depends(test, dummy)
I'm sure further refinements are possible, but maybe this'll get you started.
EDIT: also worth pointing out this page on the subject.
Just have each SConscript return a value on which you will build dependencies.
SConscript file:
test = debug_environment.Program('myTest', src_files)
Return('test')
SConstruct file:
dep1 = SConscript([...])
dep2 = SConscript([...])
Depends(dep1, dep2)
Now dep1 build will complete after dep2 build has completed.

How does one add a custom build step to an automake-based project in KDevelop?

I recently started work on a personal coding project using C++ and KDevelop. Although I started out by just hacking around, I figure it'll be better in the long run if I set up a proper unit test suite before going too much further. I've created a separate test-runner executable as a subproject, and the tests I've added to it appear to function properly. So far, success.
However, I'd really like to get my unit tests running every time I build, not only when I explicitly run them. This will be especially true as I split up the mess I've made into convenience libraries, each of which will probably have its own test executable. Rather than run them all by hand, I'd like to get them to run as the final step in my build process. I've looked all through the options in the project menu and the automake manager, but I can't figure out how to set this up.
I imagine this could probably be done by editing the makefile by hand. Unfortunately, my makefile-fu is a bit weak, and I'm also afraid that KDevelop might overwrite any changes I make by hand the next time I change something through the IDE. Therefore, if there's an option on how to do this through KDevelop itself, I'd much prefer to go that way.
Does anybody know how I could get KDevelop to run my test executables as part of the build process? Thank you!
(I'm not 100% tied to KDevelop. If KDevelop can't do this, or else if there's an IDE that makes this much easier, I could be convinced to switch.)
Although you could manipulate the default `make` target to run your tests, it is generally not recommended, because every invocation of make would then run all the tests. You should use the "check" target instead, which is an accepted quasi-standard among software packages. That way, the tests are only started when you run make check, and you can easily configure KDevelop to run "make check" instead of just "make".
Since you are using automake (through KDevelop), you don't need to write the "check" target yourself. Instead, just edit your `Makefile.am` and set some variables:
TESTS = ...
Please have a look at the automake documentation, "Support for test suites", for further information.
I got it working this way:
$ cat src/base64.c
/* code to be tested */
int encode64(...) { ... }

#ifdef UNITTEST
#include <assert.h>
int main(int argc, char* argv[])
{
    assert( encode64(...) == 0 );
    return 0;
}
#endif /* UNITTEST */
/* end base64.c */
$ cat src/Makefile.am
...
check_PROGRAMS = base64-test
base64_test_SOURCES = base64.c
base64_test_CPPFLAGS = -I../include -DUNITTEST
TESTS = base64-test
A make check would build src/base64-test and run it:
$ make check
...
PASS: base64-test
==================
All 1 tests passed
==================
...
Now I'm trying to encapsulate it all as an m4 macro to be used like this:
MAKE_UNITTEST(base64.c)
which should produce something like the solution above.
Hope this helps.

How do you create tests for "make check" with GNU autotools

I'm using GNU autotools for the build system on a particular project. I want to start writing automated tests for verification. I would like to just type "make check" to have them run automatically. My project is in C++, although I am still curious about writing automated tests for other languages as well.
Is this compatible with pretty much every unit testing framework out there (I was thinking of using cppunit)? How do I hook these unit testing frameworks into make check? Can I make sure that I don't require the unit test software to be installed to be able to configure and build the rest of the project?
To make tests run when you issue make check, you need to add them to the TESTS variable.
Assuming you've already built the executable that runs the unit tests, you just add the name of the executable to the TESTS variable like this:
TESTS=my-test-executable
It should then be automatically run when you make check, and if the executable returns a non-zero value, it will report that as a test failure. If you have multiple unit test executables, just list them all in the TESTS variable:
TESTS=my-first-test my-second-test my-third-test
and they will all get run.
I'm using Check 0.9.10
configure.ac
Makefile.am
src/Makefile.am
src/foo.c
tests/check_foo.c
tests/Makefile.am
./configure.ac
PKG_CHECK_MODULES([CHECK], [check >= 0.9.10])
./tests/Makefile.am for the test code
TESTS = check_foo
check_PROGRAMS = check_foo
check_foo_SOURCES = check_foo.c $(top_builddir)/src/foo.h
check_foo_CFLAGS = @CHECK_CFLAGS@
check_foo_LDADD = @CHECK_LIBS@   # link with Check, as found by PKG_CHECK_MODULES
and write the test code, ./tests/check_foo.c:
START_TEST (test_foo)
{
    ck_assert( foo() == 0 );
    ck_assert_int_eq( foo(), 0 );
}
END_TEST
/* ...plus the usual tcase_* calls to register and run this test */
With Check you can also use timeouts and raise signals; this is very helpful.
You seem to be asking two questions in the first paragraph.
The first is about adding tests to the GNU autotools toolchain; those tests, if I'm understanding you correctly, are both for validating that the environment necessary to build your application exists (dependent libraries and tools) and for adapting the build to the environment (platform-specific differences).
The second is about unit testing your C++ application and where to invoke those tests. You've proposed doing so from the autotools toolchain, presumably from the configure script. Doing that isn't conventional, though; putting a 'test' target in your Makefile is a more conventional way of executing your test suite. The typical steps for building and installing an application with autotools (at least from a user's perspective, not from yours, the developer's) are to run the configure script, then run make, then optionally run make test (spelled make check in automake projects) and finally make install.
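As a sketch of that convention in a hand-written Makefile (the file names here are assumptions; with automake, as the other answers show, the check target is generated for you):
# Hand-rolled Makefile with a conventional "test" target
.PHONY: test
test: testrunner
	./testrunner        # a non-zero exit status fails the target

testrunner: tests/main.cc
	$(CXX) $(CXXFLAGS) -o $@ $<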
For the second issue, not wanting cppunit to be a dependency: why not just distribute it with your C++ application? Can you just put it right in whatever archive format you're using (be it tar.gz, tar.bz2 or .zip) along with your source code? I've used cppunit in the past and was happy with it, having used JUnit and other xUnit-style frameworks.
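An alternative to bundling is to make cppunit an optional dependency at configure time, so the package still configures and builds without it; a sketch, with all names assumed:
# configure.ac: probe for cppunit, but do not fail without it
PKG_CHECK_MODULES([CPPUNIT], [cppunit], [have_cppunit=yes], [have_cppunit=no])
AM_CONDITIONAL([HAVE_CPPUNIT], [test "x$have_cppunit" = xyes])

# tests/Makefile.am: only build and run the suite when cppunit was found
if HAVE_CPPUNIT
check_PROGRAMS = unit_tests
unit_tests_SOURCES = unit_tests.cc
unit_tests_CXXFLAGS = $(CPPUNIT_CFLAGS)
unit_tests_LDADD = $(CPPUNIT_LIBS)
TESTS = $(check_PROGRAMS)
endif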
Here is a method without dependencies:
#src/Makefile.am
check_PROGRAMS = test1 test2
test1_SOURCES = test/test1.c code_needed_to_test1.h code_needed_to_test1.c
test2_SOURCES = test/test2.c code_needed_to_test2.h code_needed_to_test2.c
TESTS = $(check_PROGRAMS)
make check will then work naturally and show formatted, summarized output:
$ make check
...
PASS: test1
PASS: test2
============================================================================
Testsuite summary for foo 1.0
============================================================================
# TOTAL: 2
# PASS: 2
# SKIP: 0
# XFAIL: 0
# FAIL: 0
# XPASS: 0
# ERROR: 0
============================================================================
When you do a make dist, the test sources are distributed like any other files listed in _SOURCES, but the built test binaries never end up in the tarball.
When you do a make distcheck, it will run make check and so run your tests.
You can use Automake's TESTS variable to run programs generated with check_PROGRAMS, but this routes their output through Automake's test harness (a log driver and a log compiler). It is probably easier to still use check_PROGRAMS but to invoke the test suite using a local rule in the Makefile:
check_PROGRAMS=testsuite
testsuite_SOURCES=...
testsuite_CFLAGS=...
testsuite_LDADD=...
check-local:
	./testsuite