Unit testing OCaml modules with pa_ounit

I have a simple module to test with a few inline pa_ounit tests. I've set up the directory in the OASIS style and got it all to build.
For reference I've been using: https://github.com/janestreet/textutils
How would one execute the unit tests for the above repo? I'm assuming there's an executable .ml file to write, but what goes in it, how is it built, and does it extend the tests described at the module level in any way?
I've read the docs for pa_ounit and they just make me more confused.

As the pa_ounit README says, run the executable that contains the tests with the inline-test-runner argument.
Even without pa_ounit (when using plain OUnit), the file with the tests is compiled and then executed. You should probably try OUnit itself before you start using the syntax extension, so you can get a feel for the system.
OASIS, a popular build automation tool, allows you to build tests and run them easily with "make test". See https://ocaml.org/learn/tutorials/setting_up_with_oasis.html#Tests

Related

Testing our CMake Modules

We have created a number of additional functions for CMake. There are quite a lot of them by now, and we need to (unit) test them.
There are simple ones that are only variable-based, like:
function(join_list LIST GLUE)
These can be tested with a custom CMake Script, that checks the results.
For this we also wrote a set of assert-macros.
This becomes way harder when the functions are target based:
function(target_my_custom_property_set TARGET VALUE)
We need multiple CMakeLists.txt files that have to be configured. Configuration must succeed or fail with specified messages, and the result files must also be checked.
I wonder, is there an easier way? Is there an existing framework? How does Kitware test the shipped modules?
ctest is the framework for running all sorts of tests. There are many tests for cmake that get run as part of the CMake Testing Process. These tests are part of the source code in the Tests folder and are part of CMakeLists.txt.
The specific tests you want to look at are located in the RunCMake folder. These tests utilize RunCMake.cmake. A good example is the tests for message. What these tests do is use execute_process to capture the output from cmake and compare the output of the cmake configure step to the contents of a file with the expected output. The return value from cmake is also captured and can be tested.
You don't specify what the "result files" are. There are more complicated examples that perform a configure and build and then scan some files to verify their contents.
It may be easier if you separate out the cases: checking messages from a failed configure, from a failed build, and from a passing configure and build with a specific output message.

Using googletest to run unit-tests for multiple tested modules at once

I'm working on a large C++ project which contains more than 50 libraries and executables. I am starting to add googletest tests for each of these modules. I read that Google recommends putting tests in executables rather than in libraries, to make life easier. Creating a separate executable for each separate component, I would end up with more than 50 test executables, and in order to run them all at once I would need to create an external script which would also need to combine their output into a single one.
Is that the recommended thing to do?
Or should I create a library with the tests of each separate module and link all these libs into a single test executable? But then running the tests for a single module becomes less convenient: I would need to build all the tests and tell the main test executable, through the --gtest_filter flag, which tests should be executed this time.
It would really help me to hear how other people do this and what is the best practice here.
Thank you
[...] and in order to run them all at once I would need to create an external script which would also need to combine their output to a single one.
Maybe it's not actually necessary to combine the output into a single file. For example with Jenkins you can specify a wildcard pattern for the Google Test output files.
So if you actually just want to see the Google Test results in Jenkins (or Hudson or whatever CI tool you use), this might be a possible solution:
You would run all of your test executables from a simple script (or even from a Make rule), with the parameter --gtest_output=xml: followed by a directory name (i.e. ending with a slash). Every test executable will then write its own XML file into that directory, and you can then configure your CI tool to read all the files from that directory.
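As a concrete (hypothetical) sketch of one such per-module test executable — the widget_test.cpp file name, the Clamp function, and the test names below are made up for illustration — linking against gtest and gtest_main supplies a main() that already understands flags like --gtest_output:

// widget_test.cpp -- hypothetical test binary for a single module,
// linked against gtest and gtest_main so no main() needs to be written.
#include <gtest/gtest.h>

// Stand-in for a function from the module under test.
static int Clamp(int value, int lo, int hi) {
    return value < lo ? lo : (value > hi ? hi : value);
}

TEST(WidgetTest, ClampKeepsValuesInRange) {
    EXPECT_EQ(5, Clamp(5, 0, 10));   // already in range
    EXPECT_EQ(0, Clamp(-3, 0, 10));  // clamped to the lower bound
    EXPECT_EQ(10, Clamp(42, 0, 10)); // clamped to the upper bound
}

Running ./widget_test --gtest_output=xml:reports/ should then produce reports/widget_test.xml (with a trailing slash, Google Test names the file after the executable), so looping over all fifty-odd binaries with the same flag fills one directory for the CI tool to scan.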

Is there a standard way to test scripts/executables in Jenkins?

We have a project that contains a library of Python and Scala packages, as well as Bourne, Python and Perl executable scripts. Although the library has good test coverage, we don't have any tests on the scripts.
The current testing environment uses Jenkins, Python, virtualenv, nose, Scala, and sbt.
Is there a standard/common way to incorporate testing of scripts in Jenkins?
Edit: I'm hoping for something simple like Python's unittest for shell scripts, like this:
assertEquals expected.txt commandline
assertError commandline --bogus
assertStatus 11 commandline baddata.in
Have you looked at shunit2? https://github.com/kward/shunit2
It allows you to write testable shell scripts in Bourne, bash, or ksh.
Not sure how you can integrate it into what you're describing, but it generates output similar to other unit test suites.
I do not know how 'standard' this is, but if you truly practice TDD, your scripts should also be developed with TDD. How you then connect your TDD tests to Jenkins depends on the TDD framework you are using: you can generate JUnit reports, for example, which Jenkins can read, or your tests can simply return a failed exit status, etc.
If your script requires another project, then my inclination is to make a new Jenkins project, say 'system-qa'.
This would be a downstream project of the python project, and would have a dependency on the python project and the in-house project.
If you were using dependency resolution/publishing technology, say Apache Ivy (http://ant.apache.org/ivy/), and if these existing projects published a packaged version of their code (as simple as a .tar.gz, perhaps), then the system-qa project could declare dependencies (again, using Ivy) on both the python package and the in-house project package, download them using Ivy, extract/install them, run the tests, and exit.
So in summary, the system-qa project's build script is responsible for retrieving the dependencies, running tests against them, and then perhaps publishing a standardized test output format like JUnit XML (but at a minimum returning 0 or non-0 so Jenkins knows how the build went).
I think this is a technically correct solution, but also a lot of work. A judgement call is required as to whether it's worth it.

Running Nested Tests with CTest

I have a small but non-trivial project which, for architectural reasons, is built as three separate, inter-dependent projects. Unless I'm particularly focused, or improving test coverage having spotted a hole, it makes sense for me to work from the project root.
The layout is as such:
/CMakeLists.txt
/build/
/src/command-line-application/
/src/command-line-application/CMakeLists.txt
/src/command-line-application/build/
/src/command-line-application/src/
/src/command-line-application/tests/
/src/command-line-application/include/
/src/vlc-plugin/
/src/vlc-plugin/src/
/src/libmyproject/
/src/libmyproject/CMakeLists.txt
/src/libmyproject/build/
/src/libmyproject/src/
/src/libmyproject/tests/
/src/libmyproject/include/
/src/libmyotherproject/
/src/libmyotherproject/CMakeLists.txt
/src/libmyotherproject/build/
/src/libmyotherproject/src/
/src/libmyotherproject/tests/
/src/libmyotherproject/include/
A word on the architecture: libmyproject is the real meat of my application. It's built this way because a CLI is a horrible way to ship code to end-users; as a library, it is also used from C# and Objective-C applications (and all of that works as expected).
libmyotherproject is some platform-specific support code, not directly connected to libmyproject; it has a few unit tests.
The vlc-plugin isn't important here, except to show that not everything in /src/*/ has unit tests.
My workflow is typically to hack on the CLI app until something useful crops up, and then refactor it into the library, and make sure it's portable.
When I'm working in /src/*/build/, typically running cmake ../ && make && ctest --output-on-failure, everything works.
When I'm working in /build and run cmake, the individual components are built correctly from CMake (using add_subdirectory()), but CTest does not recursively find the tests.
The documentation for CTest is a little unhelpful in what you should do:
USAGE
ctest [options]
DESCRIPTION
The "ctest" executable is the CMake test driver program. CMake-generated build trees created for projects that use the ENABLE_TESTING and ADD_TEST commands have testing support. This program will run the tests and report results.
I would have expected that, since the ADD_TEST() calls live in /src/libmyotherproject/tests/CMakeLists.txt, they would be run? (They are at least compiled when I run cmake from /build/.)
I hope I have been able to provide enough information, thank you.
Put
include(CTest)
in your top level CMakeLists.txt file before you make any add_subdirectory calls.
That will call enable_testing for you, and also set things up if you ever want to run a ctest dashboard script on the project to send results to a CDash server.

Have you done C++ autotest, and how do you do it?

As far as autotest is concerned, how do you do autotest for C++ programs? Are there any autotest frameworks that can be used for unit tests and integration tests?
Are you talking about autotest à la Ruby autotest? If so, maybe Watchr would work for you. Yes, you would need to install the Ruby runtime on your development machine, but it looks like it can trigger pretty much anything that can be done on the command line when the file system changes. For example, if you wanted Watchr to build and run your C++ tests anytime a .c/.cpp/.h/.hpp file in your source tree changed, you could do something like this:
watch('src/(.*)\.(h|cpp|hpp|c)$') {system "build/buildAndRunTests.bat"}
This particular command obviously makes some assumptions about how your build process is set up (and obviously that you're on Windows), but that should be the gist of it. Our team configures our unit test projects with a post-build event that automatically runs the built unit test binary, so we can just trigger that part of our build process within the buildAndRunTests.bat script and have it print the results to the command-line. It might take some tweaking but it looks like Watchr may be a good choice. I'll update this response when I give it a shot (hopefully early next week).
UPDATE: I just tried this with one of my C# projects and got it working. So theoretically it should work with C++ projects as well.
autotest.watchr:
watch('./.*/.*\.cs$') {system "cd build && buildAndRunTests.bat && cd ..\\"}
Note the $ at the end of the regular expression. This is important because a lot of artifacts are generated in the source tree at build time, and if any of them match the string .cs it will trigger another run, effectively causing an infinite loop. Conceivably the same thing will happen if you generate/modify any source files at build time, so you may have to find a way to compensate.
buildAndRunTests.bat:
pushd ..\
rem Build test project
"C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE\devenv.com" Tests.Unit\Tests.Unit.csproj /rebuild Release
popd
rem Navigate to the directory containing the built files
pushd ..\Tests.Unit\bin\Release
rem Run the tests through nunit-console
..\..\..\Dependencies\NUnit-2.5.5-bin\net-2.0\nunit-console.exe Tests.Unit.dll /run=Tests.Unit
popd
Then, in a separate console window, just navigate to your project directory and run the following command (this assumes autotest.watchr is at the top of your project tree; see below):
watchr autotest.watchr
Now, when any .cs files change in the source tree, it will run the buildAndRunTests.bat script automatically. This is just an example from my local machine, so it likely won't work verbatim on yours, but you should be able to tweak it to your needs.
This is the directory structure for reference:
/Project
/build
buildAndRunTests.bat
/Tests.Unit
/Dependencies
/NUnit-2.5.5-bin
/net-2.0
nunit-console.exe
autotest.watchr
I hope this helps.
You can use NUnit to achieve this, but there may be better ways. With NUnit you write test classes in managed C++/CLI which call into your C++ code, which presumably runs as unmanaged. So with this option, some of your C++ code now runs as managed just for the sake of using NUnit. One may debate the "purity" of this approach. Another problem is attaching a debugger to NUnit (with both managed and native debugging enabled, of course) and trying to step through the managed C++/CLI bits in a sensible manner. Despite this, our office has been using NUnit for C++ unit and integration testing for a while now.
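As a rough sketch of what that looks like — the Add function and the fixture name below are invented, and the exact attribute syntax may vary with the NUnit version — a C++/CLI source file is compiled with /clr, references nunit.framework.dll, and declares an ordinary NUnit fixture that calls into the code under test:

// add_tests.cpp -- hypothetical managed C++/CLI fixture, compiled with /clr.
#using <nunit.framework.dll>
using namespace NUnit::Framework;

// Stand-in for the (presumably unmanaged) function under test.
static int Add(int a, int b) { return a + b; }

[TestFixture]
public ref class AddTests
{
public:
    [Test]
    void AddsTwoNumbers()
    {
        Assert::AreEqual(4, Add(2, 2));
    }
};

The resulting assembly is then loaded by the usual NUnit runners, just like a C# test DLL.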
Just saw @Patrick's answer about CppUnit; I will have to look at that.
The xUnit family can be used for unit tests. It exists for plain C++ code (CPPUNIT) and for .Net code (NUnit).
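A minimal CPPUNIT fixture plus a text runner might look like the sketch below (the MathTest class and the tested expression are invented for illustration):

// math_test.cpp -- minimal CppUnit sketch with a text runner.
#include <cppunit/extensions/HelperMacros.h>
#include <cppunit/extensions/TestFactoryRegistry.h>
#include <cppunit/ui/text/TestRunner.h>

class MathTest : public CppUnit::TestFixture
{
    CPPUNIT_TEST_SUITE(MathTest);
    CPPUNIT_TEST(testAddition);
    CPPUNIT_TEST_SUITE_END();
public:
    void testAddition()
    {
        CPPUNIT_ASSERT_EQUAL(4, 2 + 2);
    }
};
CPPUNIT_TEST_SUITE_REGISTRATION(MathTest);

int main()
{
    // Run every suite added via CPPUNIT_TEST_SUITE_REGISTRATION.
    CppUnit::TextUi::TestRunner runner;
    runner.addTest(CppUnit::TestFactoryRegistry::getRegistry().makeTest());
    return runner.run() ? 0 : 1;
}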
Boost has a test library you can have a look at, among many others around.
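Boost.Test in particular needs very little ceremony; assuming the header-only variant, a complete test program is a single file (the module and test names here are invented):

// boost_sketch.cpp -- minimal Boost.Test example, header-only variant.
#define BOOST_TEST_MODULE SketchTests
#include <boost/test/included/unit_test.hpp>

BOOST_AUTO_TEST_CASE(addition_works)
{
    // BOOST_CHECK_EQUAL reports a failure but lets the test continue.
    BOOST_CHECK_EQUAL(2 + 2, 4);
}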
The last time I did some work in Qt, I used Qt's QTestLib for unit tests. It worked well for my lo-fi needs. http://doc.qt.nokia.com/4.6/qtestlib-manual.html
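A QTestLib test is a QObject whose private slots are the test functions; a minimal sketch along the lines of the Qt tutorial (the class name is invented):

// stringtest.cpp -- minimal QTestLib sketch.
#include <QtTest/QtTest>

class StringTest : public QObject
{
    Q_OBJECT
private slots:
    void toUpper()
    {
        QString str = "hello";
        QCOMPARE(str.toUpper(), QString("HELLO"));
    }
};

QTEST_MAIN(StringTest)      // expands to a main() that runs the slots
#include "stringtest.moc"   // required because the class lives in a .cpp file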