I am currently using Boost.Log in one of my software projects. There is one case where I report an error condition by using a log message. I would like to test whether this condition is detected correctly using the Google Test framework. Just to be clear, I want to test whether the message is generated. It may be removed by a filter, but this should not cause the test to fail. Is this possible at all? Any hints? Thanks!
For basic yes-or-no testing, simply use assert, something like this:
#include <assert.h> /* assert */

void print_number(int* myInt) {
    assert(myInt != NULL);
    // Boost.Log stuff...
    // print_number stuff...
}
This will give you a straightforward message (depending on compiler/OS) if the check fails.
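If you specifically want to verify that the message was emitted through Boost.Log, another option is to attach a string-backed sink during the test and inspect it afterwards. Below is a minimal sketch, assuming Google Test; detect_and_report_error() stands in for whatever code of yours emits the message, and the expected text is made up. Resetting the core filter for the duration of the test keeps a production filter from hiding the message:

#include <boost/log/core.hpp>
#include <boost/log/sinks/sync_frontend.hpp>
#include <boost/log/sinks/text_ostream_backend.hpp>
#include <boost/make_shared.hpp>
#include <gtest/gtest.h>
#include <sstream>

namespace logging = boost::log;
namespace sinks = boost::log::sinks;

void detect_and_report_error();  // hypothetical code under test that logs the error

TEST(ErrorReporting, EmitsLogMessage) {
    // Attach an in-memory sink so the test can inspect what was logged.
    auto stream  = boost::make_shared<std::ostringstream>();
    auto backend = boost::make_shared<sinks::text_ostream_backend>();
    backend->add_stream(stream);
    backend->auto_flush(true);

    auto sink = boost::make_shared<sinks::synchronous_sink<sinks::text_ostream_backend>>(backend);
    logging::core::get()->add_sink(sink);
    logging::core::get()->reset_filter();  // make sure a global filter does not swallow the message

    detect_and_report_error();

    EXPECT_NE(stream->str().find("expected error text"), std::string::npos);

    logging::core::get()->remove_sink(sink);
}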
Related
I am working on a hobby project, mainly to learn C++ unit testing and database programming. However, I am a little bit lost and confused about how I should write my code for proper testing. I tend to write a lot of void functions in my C++ projects, but now I cannot figure out how I should test those functions. I have succeeded in testing non-void functions because they return something which can easily be tested against a value.
Am I doing things in an unprofessional way? Should I avoid void functions as much as possible so that I can test them? Or am I missing something? For example, how would I be able to test this function?
database.cpp
#include "database.hpp"
#include <sqlite3.h>
#include <iostream>
#include "spdlog/sinks/basic_file_sink.h"
// Creating the logging object
auto logger = spdlog::basic_logger_mt("appnotex", "../data/appnotexlog");
void Database::createDb(const char *dbname) {
// Creating the database file
sqlite3 *datadb;
int status = sqlite3_open(dbname, &datadb);
// checking for errors
if (status == SQLITE_OK) {
logger->info("------------ New Session ----------");
logger->info("Connected to Database Successfully");
} else {
std::string errorMessage = sqlite3_errmsg(datadb);
logger->info("Error: " + errorMessage);
}
If needed:
I am using the Google Test framework.
My whole project code is hosted here.
Update
I have tried this; is this approach to testing the above method correct?
databaseTest.cpp
#include <fstream>
#include <gtest/gtest.h>
#include "database.hpp"

TEST(DatabaseTest, createDbTest) {
    const char *dbfilename = "../data/test/data.db";
    const char *tbname = "DataTest";

    Database *db = new Database();
    db->createDb(dbfilename);

    // The database file should exist and be openable after createDb.
    std::ifstream dbfile(dbfilename);
    bool ok = dbfile.is_open();
    EXPECT_TRUE(ok);

    delete db;
}
The problem is not so much in the function returning void. Think about how it signals errors and make sure all cases (success and failure) are tested; it's as simple as that.
However, I don't see any error signalling at all there, apart from logging it. As a rule of thumb, logging should only be used for post-mortem research and the like. So, if logging completely fails, your program can still run correctly. That means nothing internally depends on it, and it is not a suitable error handling/signalling mechanism.
Now, there are basically three ways to signal errors:
Return values. Typically used in C code and sometimes used in C++ as well. With void return, that's not an option, and that is probably the source of your question.
Exceptions. You could throw std::runtime_error("DB connect failed"); and delegate handling it to the calling code.
Side effects. You could store the connection state in your Database instance. For completeness, using a global errno is also possible, but not advisable.
In any case, all three ways can be exercised and verified in unit tests.
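For instance, if createDb signalled failure with an exception rather than only logging it, a test could exercise both the success and the failure path directly. A minimal sketch, assuming Database::createDb were changed to throw std::runtime_error when sqlite3_open fails (the paths are made up):

#include <stdexcept>
#include <gtest/gtest.h>
#include "database.hpp"

TEST(DatabaseTest, createDbThrowsWhenFileCannotBeOpened) {
    Database db;
    // A directory that does not exist, so sqlite3_open should fail.
    EXPECT_THROW(db.createDb("/no/such/dir/data.db"), std::runtime_error);
}

TEST(DatabaseTest, createDbSucceedsForWritablePath) {
    Database db;
    EXPECT_NO_THROW(db.createDb("data.db"));
}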
I am using Boost.Test within a home-grown GUI, and want to access test results (e.g. the failure message and location when a test fails).
The unit_test::test_observer class provides the virtual method:
void assertion_result(boost::unit_test::assertion_result)
However, unit_test::assertion_result is just an enum indicating success or failure. From there, I cannot see how to access further information about the test result.
The framework also provides the class test_tools::assertion_result, which encapsulates an error message, but this only appears to be used for evaluating pre-conditions. (I would have expected this type to be the argument to unit_test::test_observer::assertion_result).
The log output classes appear to provide more information on test results. These are implemented as streams, which makes it non-trivial to extract test result data.
Does anyone know how I can access the information on test results - success/failure, the test code, the location, etc?
Adding an observer will not give you the level of details you need.
From this class you can add your own formatter using the add_formatter function. This will contain the details of what is happening and where, depending on the formatter log level.
I have a few tests for an API, and I would like to be able to express certain tests that reflect "aspirational" or "extra credit" requirements - in other words, it's great if they pass, but fine if they don't. For instance:
[Test]
public void RequiredTest()
{
    // our client is using positive numbers in DoThing();
    int result = DoThing(1);
    Assert.That( /* result is correct */ );
}

[Test]
public void OptionalTest()
{
    // we do want to handle negative numbers, but our client is not yet using them
    int result = DoThing(-1);
    Assert.That( /* result is correct */ );
}
I know about the Ignore attribute, but I would like to be able to mark OptionalTest in such a way that it still runs on the CI server, but is fine if it does not pass - as soon as it does, I would like to take notice and perhaps make it a requirement. Is there any major unit test framework that supports this?
I would use Warnings to achieve this. That way your test will print a 'warning' output, but not be a failure, and will not fail your CI build.
See: https://github.com/nunit/docs/wiki/Warnings
as soon as it does, I would like to take notice and perhaps make it a requirement.
This part's a slightly separate requirement! Depends a lot on how you want to 'take notice'! Consider looking at Custom Attributes - it may be possible to write an IWrapSetUpTearDown attribute, which sends an email when the relevant test passes. See the docs, here: https://github.com/nunit/docs/wiki/ICommandWrapper-Interface
The latter is a more unusual requirement - I would expect to have to do something custom to fit your needs there!
I'm writing some tests in C++ and I'm using gcov (actually lcov, but I think that's beside the point) to get information about coverage.
Is there any way to disable the information recording at run-time?
E.g.:
bool myTest() {
    ObjectToTest obj;
    /* Enable gcov... */
    obj.FunctionToTest();
    /* ...Disable gcov */
    if (obj.GetStatus() != WHATEVER)
        return false;
    else
        return true;
}
In this case I would like gcov to display as "covered" just FunctionToTest but leave ObjectToTest constructor and GetStatus "uncovered".
Thanks in advance!
No, gcov doesn't have any such option.
I have seen such options in some coverage tools like Clover, which works by instrumenting the source code directly, though.
Besides, one solution to your problem would be to move that part of the code into a different source file and then call it from your desired source file by including it.
I am suggesting this because when you later generate the coverage report using LCOV or GCOVR, both provide options to exclude specified files from the report via certain switches.
LCOV:
-r tracefile pattern
--remove tracefile pattern
Remove data from tracefile.
GCOVR:
-e EXCLUDE, --exclude=EXCLUDE
Exclude data files that match this regular expression
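Note that, besides excluding whole files, lcov (and gcovr) also recognise exclusion markers embedded in the source as comments, which lets you exclude individual lines or regions from the report. A sketch reusing the myTest example from the question (ObjectToTest and WHATEVER are the question's placeholders):

bool myTest() {
    // LCOV_EXCL_START
    ObjectToTest obj;                     // construction excluded from the coverage report
    // LCOV_EXCL_STOP

    obj.FunctionToTest();                 // still counted as covered

    return obj.GetStatus() == WHATEVER;   // LCOV_EXCL_LINE
}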
Although I agree with what @VikasTawniya said, you can also mock the functions you don't want to track in your test code.
#ifdef NO_COV
#include "mock.h" // mock of obj.FunctionToTest(); does nothing
#else
#include "real.h" // real implementation of obj.FunctionToTest();
#endif
Now your coverage result is not spoiled by the call to obj.FunctionToTest().
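For illustration only, a hypothetical mock.h could provide a do-nothing stand-in for the call you don't want tracked (ObjectToTest and the member names are taken from the question; real.h would be your real header):

// mock.h (hypothetical): no-op replacement used only in coverage builds (-DNO_COV).
#pragma once

struct ObjectToTest {
    void FunctionToTest() { /* intentionally does nothing */ }
    int GetStatus() const { return 0; }   // pretend everything is fine
};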
A feasible solution now (2023):
// Declarations for the libgcov entry points (available when building with --coverage).
// Note: __gcov_flush was removed in GCC 11; __gcov_dump/__gcov_reset replace it there.
extern "C" void __gcov_reset(void);
extern "C" void __gcov_flush(void);

void some_function() {
    __gcov_reset();  // this will reset all profile counters
    do_something();
    __gcov_flush();  // this will flush all counters (note: this flush is incremental and will not overwrite existing profile data)
}

// To prevent other parts of the profile data from being flushed at program exit
void __attribute__((destructor)) clear_redundant_gcov() {
    __gcov_reset();
}
Additional explanation: when you compile your source code with coverage instrumentation, gcc inserts a lot of profiling calls (such as __gcov_inc_counter(xxx), just an example), and it is possible to invoke these gcov entry points from your own source code as well.
I've written my own access layer to a game engine. There is a GameLoop which gets called every frame which lets me process my own code. I'm able to do specific things and to check if these things happened. In a very basic way it could look like this:
void cycle()
{
    // set a specific value
    Engine::setText("Hello World");

    // read the value
    std::string text = Engine::getText();
}
I want to test whether my Engine layer is working by writing automated tests. I have some experience in using the Boost Unit Test Framework for simple comparison tests like this.
The problem is that some things I want the engine to do are only processed after the call to cycle(). So calling Engine::getText() directly after Engine::setText(...) would return an empty string. If I waited until the next call of cycle(), the right value would be returned.
I am now wondering how I should write my tests if it is not possible to process them in the same cycle. Are there any best practices? Is it possible to use the "traditional testing" approach given by the Boost Unit Test Framework in such an environment? Are there perhaps other frameworks aimed at such a specialised case?
I'm using C++ for everything here, but I could imagine that there are answers unrelated to the programming language.
UPDATE:
It is not possible to access the Engine outside of cycle()
In your example above, std::string text = Engine::getText(); is the code you want to remember from one cycle but execute in the next. You can save it for later execution. For example, using C++11 you could use a lambda to wrap the check into a simple function specified inline.
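A minimal sketch of that idea (the Engine namespace here is only a stand-in for the engine layer from the question, and the deferredChecks queue and endOfCycle helper are made up for illustration):

#include <cassert>
#include <functional>
#include <queue>
#include <string>

// Hypothetical stand-in for the engine layer described in the question:
// setText only takes effect after the engine finishes the current cycle.
namespace Engine {
    std::string pending, current;
    void setText(const std::string& t) { pending = t; }
    std::string getText() { return current; }
    void endOfCycle() { current = pending; }
}

// Checks recorded in one cycle and executed in a later one.
std::queue<std::function<void()>> deferredChecks;

void cycle()
{
    Engine::setText("Hello World");
    // Wrap the check in a lambda now, run it on the next cycle.
    deferredChecks.push([] { assert(Engine::getText() == "Hello World"); });
}

int main()
{
    cycle();
    Engine::endOfCycle();               // engine applies the pending change between cycles
    while (!deferredChecks.empty()) {   // next cycle: run the deferred checks
        deferredChecks.front()();
        deferredChecks.pop();
    }
}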
There are basically two options:
If the library can be used synchronously, or offers a C++11-futures-like facility that can indicate the readiness of the result, then in your test case you can do something like this:
void testcycle()
{
    // set a specific value
    Engine::setText("Hello World");

    // wait until the engine reports that the result is ready
    while (!Engine::isResultReady());

    // read the value
    assert(Engine::getText() == "WHATEVERVALUEYOUEXPECT");
}
If you don't have the above, the best you can do is poll with a timeout (this is not a good option, though, because you may get spurious failures):
#include <cassert>
#include <chrono>
#include <thread>

void testcycle()
{
    // set a specific value
    Engine::setText("Hello World");

    const auto start = std::chrono::steady_clock::now();
    while (Engine::getText() != "WHATEVERVALUEYOUEXPECT") {
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
        if (std::chrono::steady_clock::now() - start > std::chrono::seconds(1)) // you can put whatever max time
            assert(0); // give up: the engine never produced the expected value
    }
}