Log level in Boost unit test framework - C++

I am using the Boost unit test framework. I use the BOOST_TEST_MESSAGE macro, and therefore I need to set the log level to at least message.
From reading the doc, I can do the following:
I can call boost::unit_test::unit_test_log.set_threshold_level( boost::unit_test::log_messages ); somewhere. However, the doc indicates this is generally considered bad practice.
I can set the environment variable BOOST_TEST_LOG_LEVEL appropriately. This is a bad solution for me, as I will distribute my code, and I have no good way to make users set this environment variable in their .bashrc.
Does anyone know a proper solution to this?

The best solution was simply to pass the command-line argument --log_level when running my binary.
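For illustration, here is a minimal sketch (the module name, test name, and message are made up):

// minimal_test.cpp -- illustrative only
#define BOOST_TEST_MODULE example
#include <boost/test/included/unit_test.hpp>

BOOST_AUTO_TEST_CASE(my_case)
{
    BOOST_TEST_MESSAGE("only visible at log level 'message' or lower");
    BOOST_CHECK_EQUAL(2 + 2, 4);
}

Running the binary as ./minimal_test --log_level=message then prints the message, with no code changes and no environment variable required.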

What could be the simplest way to incorporate Windows WPP Software Tracing into SCons builds?

I ask my question in such a specific way because I am afraid that a more generic form could lead to excessively theoretical discussions of how things should best be done (like a question about pre- and post-process actions in SCons).
Incorporating WPP actually requires executing an additional command (or commands) before a file is compiled, and only when the build process finds it necessary to compile the file anyway, without any regard to WPP.
I would remark that this is easily achieved with a few lines of definitions in a shared Visual Studio property sheet, making it work for multiple files in multiple projects, folders, etc. in a way that is absolutely transparent to developers.
Thus I am wondering whether this can be done in a similarly simple way with SCons? I do not have deep knowledge of either the SCons or MSBuild frameworks; I work with them for simple practical use, so I would truly appreciate practical and useful advice.
Here's what I'd suggest.
SCons builds command lines from Environment() variables.
For example, the compile command line for building a C++ shared object is stored in SHCXXCOM (and the string displayed to the user when the command is run defaults to SHCXXCOM, but can be changed by modifying SHCXXCOMSTR).
Back to the problem at hand.
Assuming you have a limited number of build steps you want to wrap, you can do something like this:
env['SHCXXCOM'] = [ 'WPP PRE COMMAND LINE', env['SHCXXCOM'], 'WPP POST COMMAND LINE']
You'll have to figure out which variables you need to do this with; take a look at the man page for that.
https://scons.org/doc/production/HTML/scons-man.html
p.s. I've not tried this, but in theory it should work. Let us know if not.
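Spelling the suggestion above out as an SConstruct might look like this (untested, per the p.s.; tracewpp.exe is the usual WPP preprocessor, but the exact command and its flags are placeholders here):

# SConstruct -- untested sketch; the WPP command line is a placeholder.
env = Environment()

# Run the WPP step before the normal shared-object compile command.
# $SOURCE expands to the file being compiled.
env['SHCXXCOM'] = [
    'tracewpp.exe $SOURCE',  # placeholder pre-compile WPP step
    env['SHCXXCOM'],         # the original compile command line
]

env.SharedLibrary('traced', ['traced.cpp'])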

How to unittest Python ConfigParser

I am using ConfigParser to read an INI file in a Python (2.7) project.
Now I want to write a unit test, and I am not sure what the cleanest way to test it is.
I already tried adapting the __init__ method of the class that reads the INI file, so I can pass in the file to use as a parameter,
like this:
def __init__(self, config_file='/src/config/my_conf.ini'):
    self.conf = ConfigParser.SafeConfigParser()
    self.conf.read(config_file)
It works fine in production. However, I cannot manage to run it in a unit test: the config file is never available. It doesn't matter if I use the default value or change it to '/test/config/my_conf.ini'.
I would prefer to use relative paths.
Could someone tell me what I am doing wrong with the path in the unit test?
Or does someone have a much better idea how to handle this?
Thanks a lot!
A common solution for this kind of problem is that during unit testing you do not really access a real file at all. Instead, you 'mock' the functionality that reads the file content. The mock is controlled by the test, such that for each test case your mocked functionality returns data as if it had been read from a file.
This does not necessarily have to be done at the file-access level. In your example you could also mock ConfigParser.SafeConfigParser itself. That is, during testing your code would not use the true config parser, but only a mock that behaves like one and is in fact controlled by the test code.
In your code, however, this would require a bit of trickery: the way the __init__ function is written makes it difficult to take control from the unit-test code. Thus, it is advisable to change the code so that the test code has more control over the file access or the config parser. Search for 'inversion of control' to learn more about how this can be done.
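To illustrate the direction with a minimal sketch (Python 2.7; in your class you would accept a file-like object or a ready-built parser instead of a path):

import unittest
import StringIO
import ConfigParser

class ConfigTest(unittest.TestCase):
    def test_reads_value(self):
        conf = ConfigParser.SafeConfigParser()
        # readfp() accepts any file-like object, so no file on disk is needed
        conf.readfp(StringIO.StringIO("[main]\nkey = value\n"))
        self.assertEqual(conf.get('main', 'key'), 'value')

if __name__ == '__main__':
    unittest.main()

The same trick carries over to the class under test once its __init__ accepts a file-like object (or a pre-built parser) instead of a hard-coded path.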

How to mock file copy in a functional test

I have a controller whose duty is to copy a file, passed along with the request (through a POST body), to a specific path under web/images. The path is specified by a property living in the specific controller.
What I would like to do is test it with a functional test, but I wouldn't like it to overwrite files in my project, so I would like to use a virtual filesystem (vfs) or change the path before my test case sends the request.
Is there a good, straightforward way to accomplish this?
A common approach is to load configuration that may change between environments from an environment variable. (I have not ever used Symfony before, so there may be tools to help with env vars.)
The upload path could then be:
$upload_path = getenv('WEB_IMAGE_UPLOAD_PATH')
    ? getenv('WEB_IMAGE_UPLOAD_PATH')
    : 'web/images';
This will allow you to specify a temp (/tmp ?) directory when starting up your server in integration mode.
Ah cool (disclaimer: I'm not a PHP person), it looks like PHP has I/O streams that may be able to help in functional testing and allow easy cleanup.
http://php.net/manual/en/wrappers.php.php#refsect2-wrappers.php-unknown-unknown-unknown-unknown-unknown-descriptios
I believe you may be able to set your 'WEB_IMAGE_UPLOAD_PATH' to be one of those streams.
I'll try to answer myself: I refactored my code to have a property that specifies the path where I would like to copy/overwrite my file.
Then, inside a PHPUnit test class, I replace the object property's value with a vfsStream path. By doing that I get the behavior I need without touching my real files/paths: everything lives inside the virtual filesystem and my object uses it.
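Roughly like this (a sketch only; UploadController, its uploadPath property, and handleUpload() are hypothetical stand-ins for the real controller):

<?php
use org\bovigo\vfs\vfsStream;

class UploadControllerTest extends PHPUnit_Framework_TestCase
{
    public function testCopiesFileIntoUploadPath()
    {
        $root = vfsStream::setup('web');               // virtual web/ directory
        $controller = new UploadController();          // hypothetical controller
        $controller->uploadPath = vfsStream::url('web/images');

        $controller->handleUpload('fixture-file');     // hypothetical action

        $this->assertTrue($root->hasChild('images'));  // nothing touched on disk
    }
}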
Parameters are important for clean and reusable code, but even more so when you want to unit-test: I think unit testing forces me to parameterize everything instead of relapsing into hardcoding when I don't have much time. To help me write unit tests I created a class that accesses methods and properties irrespective of their accessibility.
PHPUnitUtils
I'm quite sure there's already something more sophisticated, but this class fulfils my needs at this very moment. Hope it helps :-)

CppUnit setup for C++

In CppUnit we run unit tests as part of the build, in a post-build step. We will be running multiple tests as part of this. If any test case fails, the post-build step should not stop; it should go ahead, run all the test cases, and report a summary of how many test cases passed and failed. How can we achieve this?
Thanks!
The question is specific enough. You need a test runner. Encapsulate each test in its own behavior and class, and keep the test project separate from the tested code. Afterwards just configure your XmlOutputter. You can find an excellent example of how to do this on the YoLinux website: http://www.yolinux.com/TUTORIALS/CppUnit.html
We use this approach to compile the test projects for our main projects and observe whether everything is ok. After that, it all becomes the work of maintaining your test code.
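The core of such a runner might look like this sketch (standard CppUnit classes; the report file name is arbitrary). A failing test marks the run as failed but never aborts it:

#include <cppunit/extensions/TestFactoryRegistry.h>
#include <cppunit/ui/text/TestRunner.h>
#include <cppunit/XmlOutputter.h>
#include <fstream>

int main()
{
    CppUnit::TextUi::TestRunner runner;
    runner.addTest(CppUnit::TestFactoryRegistry::getRegistry().makeTest());

    // Write the pass/fail summary of every test case to an XML report.
    std::ofstream xml("test_results.xml");
    runner.setOutputter(new CppUnit::XmlOutputter(&runner.result(), xml));

    bool ok = runner.run();  // runs all registered tests to completion
    return ok ? 0 : 1;       // non-zero lets the build flag the failure
}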
Your question is too vague for a precise answer. Usually a unit test engine returns a code to tell that it has failed (like a non-zero return code in the shell on Linux) or generates some output file with the results, and the calling system handles this. If you have written the calling system yourself (some home-made scripts), you have to add the option to continue test execution even if an error occurred. If you are using a tool like a continuous integration server, then you have to go through its docs and find the option that allows the run to continue when tests fail.
A workaround is to write a script that returns an "OK" result even if the unit tests fail, but then you lose some automatic verification ...
Be more specific if you want more clues.
my2c
I would just write your tests this way: instead of using the CPPUNIT_ASSERT macros or whatever, write them in regular C++ with some way of logging errors.
You could use a macro for this too of course. Something like:
LOGASSERT( some_expression )
could be defined to execute some_expression and, if it fails, log the expression together with __FILE__ and __LINE__. You can of course also log exceptions, as well as expected exceptions that are not thrown, simply by writing those checks in your tests (with macros if you want to log the expression that caused them, again with __FILE__ and __LINE__).
If you are writing macros I would advise you to limit the content of your macro to calling an inline function with extra parameters.
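A minimal sketch of that advice (the names and output format are made up):

#include <iostream>

// The macro only captures the expression text and its location; the real
// work happens in an inline function, as advised above.
inline void log_check(bool ok, const char* expr, const char* file, int line)
{
    if (!ok)
        std::cerr << file << ':' << line << ": check failed: " << expr << '\n';
}

#define LOGASSERT(expr) log_check((expr), #expr, __FILE__, __LINE__)

int main()
{
    LOGASSERT(1 + 1 == 2);  // passes silently
    LOGASSERT(2 + 2 == 5);  // logs file, line, and the failing expression
}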

How to organize C++ test apps and related files?

I'm working on a C++ library that (among other things) has functions to read config files, and I want to add tests for this. So far, this has led me to create lots of valid and invalid config files, each with only a few lines that test one specific functionality. But it has now become very unwieldy, as there are so many files, and also lots of small C++ test apps. Somehow this seems wrong to me :-) so do you have hints on how to organise all these tests, the test apps, and the test data?
Note: the library's public API itself is not easily testable (it requires a config file as a parameter). The juicy, bug-prone methods for actually reading and interpreting config values are private, so I don't see a way to test them directly.
So: would you stick with testing against real files; and if so, how would you organise all these files and apps so that they are still maintainable?
Perhaps the library could accept some kind of stream input, so you could pass in a string-like object and avoid all the input files? Or, depending on the type of configuration, you could provide get/setAttribute() functions to directly, publicly, fiddle with the parameters. If that is not really a design goal, then never mind. Data-driven unit tests are frowned upon in some places, but it is definitely better than nothing! I would probably lay out the code like this:
project/
    src/
    tests/
        test1/
            input/
        test2/
            input/
In each testN directory you would have a cpp file associated with the config files in the input directory.
Then, assuming you are using an xUnit-style test library (cppunit, googletest, unittest++, or whatever) you can add various testXXX() functions to a single class to test out associated groups of functionality. That way you could cut out part of the lots-of-little-programs problem by grouping at least some tests together.
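With CppUnit, for instance, such a grouping might look like this sketch (class, method, and file names are illustrative):

#include <cppunit/extensions/HelperMacros.h>

class ConfigReaderTest : public CppUnit::TestFixture {
    CPPUNIT_TEST_SUITE(ConfigReaderTest);
    CPPUNIT_TEST(testValidFile);
    CPPUNIT_TEST(testMissingSection);
    CPPUNIT_TEST_SUITE_END();

public:
    void testValidFile()      { /* parse tests/test1/input/valid.cfg ... */ }
    void testMissingSection() { /* parse tests/test2/input/bad.cfg ... */ }
};

// Makes the whole group visible to a registry-driven test runner.
CPPUNIT_TEST_SUITE_REGISTRATION(ConfigReaderTest);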
The only problem with this is if the library expects the config file to be called something specific, or to be in a specific place. That shouldn't be the case, but if it is, it would have to be worked around by copying your test file to the expected location.
And don't worry about lots of tests cluttering your project up: if they are tucked away in a tests directory, they won't bother anyone.
Part 1.
As Richard suggested, I'd take a look at the CppUnit test framework. That will drive the layout of your tests to a certain extent.
Your tests could be in a parallel directory located at a high-level, as per Richard's example, or in test subdirectories or test directories parallel with the area you want to test.
Either way, please be consistent in the directory structure across the project! Especially in the case of tests being contained in a single high-level directory.
There's nothing worse than having to maintain a mental mapping of source code in a location such as:
/project/src/component_a/piece_2/this_bit
and having the test(s) located somewhere such as:
/project/test/the_first_components/connection_tests/test_a
And I've worked on projects where someone did that!
What a waste of wetware cycles! 8-O Talk about violating Alexander's concept of the Quality Without a Name.
Much better is having your tests consistently located with respect to the location of the source code under test:
/project/test/component_a/piece_2/this_bit/test_a
Part 2
As for the API config files, make local copies of a reference config in each local test area as part of the test environment setup that runs before executing a test. Don't sprinkle copies of configs (or data) all through your test tree.
HTH.
cheers,
Rob
BTW Really glad to see you asking this now when setting things up!
In some tests I have done, I have actually used the test code to write the configuration files, and then deleted them after the test had made use of them. It pads out the code somewhat and I have no idea if it is good practice, but it worked. If you happen to be using Boost, then its filesystem module is useful for creating directories, navigating directories, and removing the files.
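That pattern could look roughly like this (a sketch; the directory and file names are made up):

#include <boost/filesystem.hpp>
#include <fstream>

namespace fs = boost::filesystem;

int main()
{
    // Create a scratch directory and write a throwaway config into it.
    fs::path dir = fs::temp_directory_path() / "config_tests";
    fs::create_directories(dir);

    fs::path cfg = dir / "case1.ini";
    std::ofstream out(cfg.string().c_str());
    out << "[main]\nkey=value\n";
    out.close();

    // ... run the test against cfg.string() here ...

    fs::remove_all(dir);  // remove everything the test created
}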
I agree with what @Richard Quirk said, but you might also want to make your test suite class a friend of the class you're testing and test its private functions.
For things like this I always have a small utility class that loads a config into a memory buffer; from there it gets fed into the actual config class. This means the real source doesn't matter: it could be a file or a DB. For the unit test the config is hard-coded in a std::string that is then passed to the class under test. You can easily simulate corrupted data to test failure paths.
I use UnitTest++. I have the tests as part of the src tree. So:
solution/project1/src <-- source code
solution/project1/src/tests <-- unit test code
solution/project2/src <-- source code
solution/project2/src/tests <-- unit test code
Assuming that you have control over the design of the library, I would expect that you'd be able to refactor it so that the concern of actually reading the file is separated from interpreting it as a configuration file:
class FileReader reads the file and produces an input stream,
class ConfigFileInterpreter validates/interprets etc. the contents of the input stream.
Now to test FileReader you'd need a very small number of actual files (empty, binary, plain text, etc.), and for ConfigFileInterpreter you would use a stub of the FileReader class that returns an input stream to read from. Then you can prepare all your various config situations as strings and would not have to read so many files.
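A sketch of that split (only the class names follow the answer; the interfaces themselves are made up):

#include <istream>
#include <memory>
#include <sstream>
#include <string>
#include <cassert>

struct FileReader {
    virtual ~FileReader() {}
    virtual std::unique_ptr<std::istream> open() = 0;
};

// Test stub: serves config text from memory instead of a real file.
struct StubFileReader : FileReader {
    explicit StubFileReader(const std::string& s) : content(s) {}
    std::unique_ptr<std::istream> open() {
        return std::unique_ptr<std::istream>(new std::istringstream(content));
    }
    std::string content;
};

struct ConfigFileInterpreter {
    explicit ConfigFileInterpreter(FileReader& r) : reader(r) {}
    bool firstSectionIs(const std::string& name) {
        std::unique_ptr<std::istream> in = reader.open();
        std::string line;
        return std::getline(*in, line) && line == "[" + name + "]";
    }
    FileReader& reader;
};

int main()
{
    StubFileReader stub("[general]\nkey=value\n");  // config as a string
    ConfigFileInterpreter interp(stub);
    assert(interp.firstSectionIs("general"));
}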
You will not find a unit testing framework worse than CppUnit. Seriously, anybody who recommends CppUnit has not really taken a look at any of the competing frameworks.
So yes, go for a unit testing framework, but do not use CppUnit.