I am developing a DLL in C++ and want to unit test it using the Boost Test libraries. I have read the Boost Test manual thoroughly, but since I am new to it, I have the following question:
Should I add test classes to the same VC project in which I am developing my DLL? Ideally that is what I want to do, but I am confused: a DLL has no main(), while Boost Test needs its own main() to execute. So where does the Boost Test output go in this scenario? (In fact, I actually implemented this and don't see any output :( and I spent almost two days figuring out the problem without success.)
Regards,
Jame.
You have three ways to do this:
You can certainly do what another reply suggests and build your library as a static library. I wouldn't recommend that approach, though.
You can have one or more separate unit test projects in your solution. These projects link against your library and against either the static or the shared version of the Boost Test library. Each project has a main() that is either supplied by Boost.Test or implemented manually by you.
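For example, a minimal test file for such a project could look like the sketch below. It assumes the header-only variant of Boost.Test, which generates main() for you once BOOST_TEST_MODULE is defined; my_dll_api.h and exported_add are hypothetical stand-ins for your DLL's interface:

#define BOOST_TEST_MODULE MyDllTests
#include <boost/test/included/unit_test.hpp>

#include "my_dll_api.h" // hypothetical header exported by the DLL under test

BOOST_AUTO_TEST_CASE(exported_add_returns_sum)
{
    BOOST_CHECK_EQUAL(exported_add(2, 2), 4); // exported_add is a placeholder
}

Running the resulting executable prints the test log to the console, which answers the question of where the output goes: wherever the test executable's stdout goes.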
Finally, there is another option: you can put your test cases directly into your library. You'll need to link against the shared version of Boost Test. Once your library is built, you can use it exactly as you do now, plus you'll have the ability to execute the test cases built into it. To execute them you'll need a test runner. Boost Test supplies one, called the "console test runner". You'll need to build it once, and then you can use it for all your projects. Using this test runner, you can execute your unit tests like this:
test_runner.exe --test "your_lib".dll
You should understand all the pluses and minuses of this approach: your unit test code becomes part of your production library. That makes the library slightly bigger, but on the other hand you'll be able to run the tests in production if necessary.
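As a sketch, a test case embedded in the DLL itself might look like this (assuming you define BOOST_TEST_DYN_LINK and link against the shared Boost.Test library; no main() is needed in the DLL because the external test runner supplies it):

#define BOOST_TEST_DYN_LINK
#include <boost/test/unit_test.hpp>

// Compiled into the production DLL alongside the real code.
BOOST_AUTO_TEST_CASE(dll_internal_smoke_test)
{
    BOOST_CHECK_EQUAL(2 + 2, 4);
}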
You could build the code for your DLL as a static library first. You can then use that static library both to build your final DLL and to create an executable that contains your Boost tests. Here's an example using Boost.Build:
lib lib_base
    : # sources
      $(MAIN_SOURCES).cpp # Sources for the library.
    : # requirements
      <link>static
    : : ;

lib dll_final
    : # sources
      lib_base
      $(DLL_SOURCES).cpp # Sources for DllMain.
    : # requirements
      <link>shared
    : : ;

unit-test test_exe
    : # sources
      lib_base
      $(TEST_SOURCES).cpp # Sources for the unit tests.
    : # properties
      <library>/site-config//boost/test
    ;
You do have to be careful not to put any important logic in your DllMain, but having logic there is usually a bad idea anyway.
I have a dapptools project, and when I run dapp test I get the following before my tests happen:
dapp-build: building with linked libraries
dapp: Predeploying test library lib/openzeppelin-contracts/contracts/utils/Address.sol:Address at 0x1F39490BdD8e57Ed3CA877783E563cC0B329431b
dapp: Predeploying test library lib/openzeppelin-contracts/contracts/utils/Strings.sol:Strings at 0x7496c5C5c86FB8cE60221e5BeFf3b7806CFd09a7
However, I have another project where this doesn't happen, and this predeploy step seems to take a lot of time.
What's going on?
Create a .dapprc file if you don't have one already, and add:
export DAPP_LINK_TEST_LIBRARIES=0
This will skip compiling all the files in the lib folder, which we don't need to do.
Note that I have already read the layout convention.
In my lib directory I usually have a few libraries I could extract into their own packages. Very often the code is not complete enough, and/or I want to hold off on creating a new package until I actually want to reuse the code in another project.
I would really like to place the unit-test code, examples and doc in the same directory.
Example:
Let's say I have a string-helper library in lib → lib/string-helper.
I would like to place my tests, examples and docs in lib/string-helper/tests, lib/string-helper/examples and lib/string-helper/doc.
However, the layout convention says that I should put them outside the lib directory.
This makes it unnecessarily hard to extract the library into its own package later. (pub serve even went into an endless loop when I ignored this and used a symbolic link to my own package.)
How do you handle this?
The only valid place for tests is the my_package/test directory or any subdirectory of it.
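For example, a layout following the convention would look like this (string_helper_test.dart is a hypothetical file name):

my_package/
  lib/
    string-helper/
      string_helper.dart
  test/
    string-helper/
      string_helper_test.dart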
I'm developing a library that does a lot of calculation. It uses the GNU Autotools build system. There is a test project that links against this library and runs various test procedures; each procedure compares its results with pre-calculated values from MATLAB.
I find this testing process tedious and time-consuming. Every time, I need to run make and sudo make install for the library, then make in the test project, then run the program and see what happens.
What is the standard way to add a check target to a library using Autotools? It should meet these requirements:
The user should be able to run make check and see the results without having to install the library itself. The test executable should link against the freshly compiled, not-yet-installed shared objects.
Running make check should also run the test program, not merely compile it. The result of make check depends on the return value of the test program: make should report an error if a unit test fails.
If the user decides not to run make check, no test executable should be compiled.
Since you're already using Autotools, you've got most of the infrastructure in place. I don't know your directory layout, but let's say you have SUBDIRS = soroush tests in a top-level Makefile.am; alternatively, you might have SUBDIRS = tests in the soroush directory. What matters is that a libtool-managed libsoroush.la exists before make descends into the tests directory.
The check_ prefix indicates that those objects (in this case, programs) should not be built until make check is run. So in tests/Makefile.am:
check_PROGRAMS = t1 t2 t3
For each test program you can specify t1_SOURCES = t1.cc, and so on. As a shortcut, if you have only one source file per test, you can use AM_DEFAULT_SOURCE_EXT = .cc, which will implicitly generate the source lists for you. So far:
AM_CPPFLAGS = -I$(srcdir)/.. $(OTHER_CPPFLAGS) # relative path to lib headers.
LDADD = ../soroush/libsoroush.la
check_PROGRAMS = t1 t2 t3
AM_DEFAULT_SOURCE_EXT = .cc
# or: t1_SOURCES = t1.cc, t1_LDADD = ../soroush/libsoroush.la, etc.
make check will build, but not execute, those programs. For that, you need to add:
TESTS = $(check_PROGRAMS)
What's really good about this approach is that if libsoroush is built as a shared library, libtool will take care of handling library search paths, etc., using the uninstalled library.
Often, the resulting t1 program will just be a shell script that sets up environment variables so that the real binary, .libs/t1, can be executed. I only mention this because the whole point of using libtool is that you don't need to worry about how it's done.
Test feedback is more complicated, depending on what you require. You can go all the way with a parallel test harness, or just simple PASS/FAIL feedback. Unless testing is a major bottleneck, or the project is huge, it's easier just to use simple (or scripted) testing.
I have not used unit testing so far, and I intend to adopt it. I was impressed by TDD and certainly want to give it a try - I'm almost sure it's the way to go.
Boost looks like a good choice, mainly because it is actively maintained. That said, how should I go about implementing a working and elegant file structure and project structure? I am using VS 2005 on Windows XP. I have been googling about this and ended up more confused than enlightened.
Our Boost-based testing structure looks like this:
ProjectRoot/
Library1/
lib1.vcproj
lib1.cpp
classX.cpp
...
Library2/
lib2.vcproj
lib2.cpp
toolB.cpp
classY.cpp
...
MainExecutable/
main.cpp
toolA.cpp
toolB.cpp
classZ.cpp
...
Tests/
unittests.sln
ut_lib1/
ut_lib1.vcproj (referencing the lib1 project)
ut_lib1.cpp (with BOOST_AUTO_TEST_CASE) - testing public interface of lib1
ut_classX.cpp - testing of a class or other entity might be split
into a separate test file for size reasons or if the entity
is not part of the public interface of the library
...
ut_lib2/
ut_lib2.vcproj (referencing the lib2 project)
ut_lib2.cpp (with BOOST_AUTO_TEST_CASE) - testing public interface of lib2
...
ut_toolA/
ut_toolA.vcproj (referencing the toolA.cpp file)
ut_toolA.cpp - testing functions of toolA
ut_toolB/
ut_toolB.vcproj (referencing the toolB.cpp file)
ut_toolB.cpp - testing functions of toolB
ut_main/
ut_main.vcproj (referencing all required cpp files from the main project)
ut_classZ.cpp - testing classZ
...
This structure was chosen for a legacy project, where we had to decide on a case-by-case basis which tests to add and how to group test projects for existing modules of source code.
Things to note:
Unit Testing code is always compiled separately from production code.
Production projects do not reference the unit testing code.
Unit Testing projects include source-files directly or only reference libraries, depending on what makes sense given the use of a certain code-file.
Running the unit tests is done via a post-build step in each ut_*.vcproj (see the sketch after this list).
All our production builds automatically also run the unit tests. (In our build scripts.)
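As a sketch, such a post-build event could be a single command line like the one below (the exact switches are an assumption; --log_level and --report_level are standard options of the Boost Test console runner, and a non-zero exit code from a failed test makes the post-build step, and therefore the build, fail):

"$(TargetPath)" --log_level=test_suite --report_level=short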
In our real (C++) world you have to make trade-offs between legacy issues, developer convenience, compile times, etc. I think our project structure is a good trade-off. :-)
I split my core code up into either .libs or .dlls and then have my Boost test projects depend on these lib/dll projects. So I might end up with:
ProjectRoot
Lib1Source
Lib1Tests
Lib2Source
Lib2Tests
The alternative is to store your source in a separate folder and add the files to both your main app's project and the unit test project, but I find this a little messy. YMMV.
I'm using GNU Autotools for the build system on a particular project. I want to start writing automated tests for verification, and I would like to be able to just type make check to run them. My project is in C++, although I am also curious about writing automated tests for other languages.
Is this compatible with pretty much every unit testing framework out there (I was thinking of using CppUnit)? How do I hook these unit testing frameworks into make check? Can I make sure the unit testing software does not need to be installed in order to configure and build the rest of the project?
To make tests run when you issue make check, you need to add them to the TESTS variable.
Assuming you've already built the executable that runs the unit tests, you just add the name of the executable to the TESTS variable like this:
TESTS=my-test-executable
It will then be run automatically when you run make check, and if the executable returns a non-zero value, it will be reported as a test failure. If you have multiple unit test executables, just list them all in the TESTS variable:
TESTS=my-first-test my-second-test my-third-test
and they will all get run.
I'm using Check 0.9.10, with the following layout:
configure.ac
Makefile.am
src/Makefile.am
src/foo.c
tests/check_foo.c
tests/Makefile.am
./configure.ac
PKG_CHECK_MODULES([CHECK], [check >= 0.9.10])
./tests/Makefile.am for the test code:
TESTS = check_foo
check_PROGRAMS = check_foo
check_foo_SOURCES = check_foo.c $(top_srcdir)/src/foo.h
check_foo_CFLAGS = @CHECK_CFLAGS@
check_foo_LDADD = @CHECK_LIBS@
and write the test code in ./tests/check_foo.c:
#include <check.h>
#include "../src/foo.h" /* declares foo() */

START_TEST (test_foo)
{
    ck_assert( foo() == 0 );
    ck_assert_int_eq( foo(), 0 );
}
END_TEST
To run this test you also need the usual suite/runner boilerplate (tcase_create, suite_create, and friends).
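A minimal sketch of that boilerplate, using the standard Check API (suite_create, tcase_create, tcase_add_test, srunner_run_all):

Suite * foo_suite(void)
{
    Suite *s = suite_create("foo");
    TCase *tc_core = tcase_create("core");
    tcase_add_test(tc_core, test_foo);
    suite_add_tcase(s, tc_core);
    return s;
}

int main(void)
{
    SRunner *sr = srunner_create(foo_suite());
    srunner_run_all(sr, CK_NORMAL);       /* prints a PASS/FAIL summary */
    int number_failed = srunner_ntests_failed(sr);
    srunner_free(sr);
    return (number_failed == 0) ? 0 : 1;  /* non-zero exit tells make check we failed */
}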
Using Check you can also set timeouts and check for raised signals, which is very helpful.
You seem to be asking two questions in the first paragraph.
The first is about adding tests to the GNU Autotools toolchain. Those tests, if I'm understanding you correctly, are both for validating that the environment necessary to build your application exists (dependent libraries and tools) and for adapting the build to the environment (platform-specific differences).
The second is about unit testing your C++ application and where to invoke those tests. You've proposed doing so from the Autotools toolchain, presumably from the configure script. Doing that isn't conventional, though - putting a check target in your Makefile is the more conventional way of executing your test suite. The typical steps for building and installing an application with Autotools (at least from a user's perspective, not from yours, the developer's) are to run the configure script, then make, then optionally make check, and finally make install.
For the second issue - not wanting CppUnit to be a dependency - why not just distribute it with your C++ application? Can you not put it right into whatever archive format you're using (be it tar.gz, tar.bz2 or .zip) along with your source code? I've used CppUnit in the past and was happy with it, having used JUnit and other xUnit-style frameworks.
Here is a method without dependencies:
# src/Makefile.am
check_PROGRAMS = test1 test2
test1_SOURCES = test/test1.c code_needed_to_test1.h code_needed_to_test1.c
test2_SOURCES = test/test2.c code_needed_to_test2.h code_needed_to_test2.c
TESTS = $(check_PROGRAMS)
Then make check will just work, showing formatted and summarized output:
$ make check
...
PASS: test1
PASS: test2
============================================================================
Testsuite summary for foo 1.0
============================================================================
# TOTAL: 2
# PASS: 2
# SKIP: 0
# XFAIL: 0
# FAIL: 0
# XPASS: 0
# ERROR: 0
============================================================================
When you do a make dist, the compiled test programs are not shipped; like everything else, only the sources end up in the tarball.
When you do a make distcheck, it will run make check and execute your tests.
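Each test program here is just an ordinary executable whose exit status determines the result: Automake's test driver treats 0 as PASS, 77 as SKIP, 99 as ERROR, and any other non-zero status as FAIL. A minimal sketch of what test/test1.c might look like (some_function_under_test is a placeholder):

/* test/test1.c */
#include "code_needed_to_test1.h" /* the header named in the Makefile.am above */

int main(void)
{
    if (some_function_under_test() != 42) /* placeholder check */
        return 1; /* FAIL */
    return 0;     /* PASS */
}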
You can use Automake's TESTS variable to run programs generated with check_PROGRAMS, but this assumes that you are using Automake's log driver and log compiler for the output. It is probably easier to still use check_PROGRAMS but to invoke the test suite through a local rule in the Makefile:
check_PROGRAMS=testsuite
testsuite_SOURCES=...
testsuite_CFLAGS=...
testsuite_LDADD=...
check-local:
	./testsuite