Let's start with a minimal working example:
main.cpp:
#include <iostream>
#include <string>
int main() {
    std::cout << "hello " + std::to_string(42);
    return 0;
}
I compile this code using the following flags:
[g++/clang++] -std=c++11 -g -Og --coverage -Wall -o main main.cpp
clang 4.0.1
gcc 4.8.5.
I get only 50% branch coverage, since the compiler generates exception-handling code that is never executed, as explained in another Stack Overflow question.
The problem is that disabling exceptions via -fno-exceptions is not an option for me: the code I am writing unit tests for uses exceptions itself, so disabling all of them is off the table.
In order to generate a report I'm using gcovr, and in the case of clang++ additionally llvm-cov gcov to convert the output. But I am not bound to these tools, so if you know other tools that do not show this behaviour, please suggest them!
Basically I need a way to compile/write unit tests for this code and get 100% branch / conditional coverage with exceptions enabled. Is there a way?
Well, I believe your intention is not actually to test this small piece of code, but to use the concept in a larger project...
The code you entered can throw an exception: bad_alloc is thrown when there is no memory left to store the string that std::to_string creates. To be 100% safe, the std::to_string call would have to be surrounded with a try-catch block where you handle the exception.
To build a unit test with 100% coverage, you need to force the exception to happen. In this specific case that is almost impossible to guarantee, since the parameter is a constant number. But in your project you probably allocate data whose size is variable; in that case, you can isolate the allocating methods in your code and test them separately. Then, in the test function, you pass these methods a huge amount to be allocated, to exercise what you have put in your catch block (and check that you are handling the exception properly).
For instance, this code should throw the exception; you could use it as inspiration when building your tests (source):
// bad_alloc.cpp
// compile with: /EHsc
#include <new>
#include <iostream>
using namespace std;

int main() {
    char* ptr;
    try {
        // Request an absurdly large block to provoke bad_alloc.
        ptr = new char[(~static_cast<unsigned int>(0) / 2) - 1];
        delete[] ptr;
    }
    catch (bad_alloc& ba) {
        cout << ba.what() << endl;
    }
}
However, if you are not planning to handle all bad_alloc exceptions (or absolutely all exceptions) in your code, there is no way to get 100% coverage, since the code simply won't be 100% covered... In most cases, though, true 100% coverage is unnecessary.
I have a problem with my current program. For some reason it always crashes after the final line of code on Windows: I get an "application is no longer responding" error or something like that.
So I tried the Intel Inspector, and luckily it reported some bad errors in my project where I accessed uninitialized memory.
Besides these obvious problems that I understand, I also get some:
Incorrect memcpy calls in: boost::algorithm::trim()
Uninitialized partial memory access in: myptree.get<boost::posix_time::ptime>("path.to.node") where myptree is of type boost::property_tree::ptree
Uninitialized memory access in: cout << myptime where myptime is of type boost::posix_time::ptime
...
Does this mean that I am using the Boost library functions improperly? Or are these false positives?
I'm just confused, because the functions work: they do what I want them to do and I get no error message.
I also get a "memory not deallocated" warning at the end (from an [Unknown] source).
Example for trim:
#include <iostream>
#include <boost/algorithm/string.hpp>
int main() {
    std::string test = " test ";
    boost::algorithm::trim(test);
    std::cout << test << std::endl;
    return 0;
}
gives me an incorrect memcpy call...
Boost will happily forward bad arguments; it often has no way to check them. If boost::algorithm::trim passes a bad argument to memcpy, it is because you passed a bad argument to trim.
So, yes, you should worry. There are almost certainly multiple bugs in your program. Check your calls to the functions reported.
I'm having a hard time finding an answer to a niche case: using cmocka, simulating malloc failure, and using gcov.
Update about cmocka + gcov: I noticed that I get empty .gcda files as soon as I mock a function in my cmocka tests. Why? Googling cmocka and gcov gives results where people talk about using the two together. It seems most people are using CMake, something I will look at later, but there should be no reason (that I can think of) that would require CMake. Why can't I just use cmocka with the --coverage/-lgcov flags?
Original question:
I've tried a myriad of combinations, mostly based on two main ideas:
I tried using -Wl,--wrap=malloc so calls to malloc are wrapped. From my cmocka tests I attempted to use will_return(__wrap_malloc, (void*)NULL) to simulate a malloc failure. In my wrap function I use mock() to determine whether I should return __real_malloc() or NULL. This has the ideal effect; however, I found that gcov fails to create .gcda files, which defeats part of the reason for wrapping malloc: testing malloc failure AND getting code coverage results. I suspect I've played dirty games with symbols and broken malloc() calls made from other compilation units (gcov? cmocka?).
Another way I tried was to use gcc -include with a #define so that malloc calls "my malloc", and to compile the code under test together with mymalloc.c (which defines it). A #define malloc _mymalloc lets me call the "special malloc" only from the targeted test code, leaving malloc alone anywhere else it is called (i.e., the other compilation units keep calling the real malloc). However, I don't know how to use will_return() and mock() correctly to distinguish failure cases from success cases. When I test malloc() failing, I get what I want: I return NULL from "malloc" based on mock() returning NULL, all inside a wrapper function that is only called from the targeted code. But if I want to return the result of the real malloc, cmocka fails because I didn't return the value queued by mock(). I wish cmocka would just dequeue the result of the mock() macro and not care that I didn't return it, since I need real results from malloc() so the code under test can function correctly.
I feel it should be possible to combine malloc failure testing with cmocka and still get gcov results.
Whatever the answer is, I'd like to pull off the following or something similar:
int business_code()
{
    void* d = malloc(somethingCalculated);
    void* e = malloc(somethingElse);
    if (!d) return someRecovery();
    if (!e) return someOtherRecovery();
    return 0;
}
then have cmocka tests like
void cmocka_d_fail()
{
    will_return(malloc, NULL);
    int ret = business_code();
    assert_int_equal(ret, ERROR_CODE_D);
}

void cmocka_e_fail()
{
    will_return(malloc, __LINE__); // some way to tell the wrapped malloc to hand out real memory, because the code under test needs it
    will_return(malloc, NULL);     // I want the "d" malloc to succeed but the "e" malloc to fail
    int ret = business_code();
    assert_int_equal(ret, ERROR_CODE_E);
}
I got close with some of the #define/wrap ideas I tried, but in the end I either messed up malloc and caused gcov not to spit out my coverage data, or had no way to make cmocka run malloc cases and return real memory, i.e., not return the value from mock(). I could call the real malloc from my test driver and pass the result to will_return, but my test code doesn't know the size of the memory needed; only the code under test knows that.
Given time constraints I don't want to move away from cmocka and my current test infrastructure, though I'd consider other ideas in the future if what I want isn't possible. What I'm looking for I know isn't new, but I'm trying to find a cmocka/gcov solution.
Thanks
This all comes down to which symbols I was messing with, whether via -Wl,--wrap or clever #defines. In either case I was either clobbering the symbol for other call sites and breaking code, or confusing cmocka by not dequeuing the queued-up returns.
Also, the reason my .gcda files were not being generated correctly is that my attempts to use -Wl,--wrap=fseek together with cmocka's mock() were interfering with each other.
A clever #define on fseek/malloc/etc., combined with mock() for a symbol that gets called in your wrapper implementation, lets you query the test suite to decide whether to return something bogus to make the test fail, or to return the real result. A bit hacky, but it does the trick.
This workaround works for me: wrap _test_malloc() instead of malloc().
A working example can be found at https://github.com/CESNET/Nemea-Framework/blob/2ef806a0297eddc920dc7ae71731dfb2c0e49a5b/libtrap. The file tests/test_trap_buffer.c contains an implementation of the wrap function __wrap__test_malloc() (note the 4x '_' in the name):
void *__real__test_malloc(const size_t size, const char* file, const int line);

void *__wrap__test_malloc(size_t size)
{
    int fail = (int) mock();
    if (fail) {
        return NULL;
    } else {
        return __real__test_malloc(size, __FILE__, __LINE__);
    }
}
and, e.g., test_create_destroy() to test the tb_init() function, which calls malloc() three times:
static void test_create_destroy(void **state)
{
    trap_buffer_t *b = NULL;
    (void) state; /* unused */

    b = tb_init(0, 0);
    assert_null(b);
    b = tb_init(0, 1);
    assert_null(b);
    b = tb_init(1, 0);
    assert_null(b);

    will_return(__wrap__test_malloc, 0);
    will_return(__wrap__test_malloc, 0);
    will_return(__wrap__test_malloc, 0);
    b = tb_init(10, 100000);
    assert_non_null(b);

    tb_destroy(&b);
    tb_destroy(&b);
    tb_destroy(NULL);
}
For completeness, tb_init() is in src/trap_buffer.c, line 146.
Compilation can be run like this (sample from a Makefile):
buffer:
	gcc --coverage -g -O0 -DUNIT_TESTING -c tests/test_trap_buffer.c
	gcc --coverage -g -O0 -DUNIT_TESTING -c src/trap_buffer.c
	gcc -g -O0 -Wl,--wrap=_test_malloc --coverage -DUNIT_TESTING -o test_buffer test_trap_buffer.o trap_buffer.o -lcmocka
Note the UNIT_TESTING preprocessor macro defined for cmocka; this is important, since it makes cmocka's test allocation functions replace the ones used in our code.
Finally, running the test generates *.gcda files for us, so we can visualize the code coverage. Output for the tested tb_init(): https://codecov.io/gh/CESNET/Nemea-Framework/src/775cfd34c9e74574741bc6a0a2b509ae6474dbdb/libtrap/src/trap_buffer.c#L146
I'm surprised that Google C++ Testing Framework does not explicitly support checking for memory leaks. There is, however, a workaround for Microsoft Visual C++, but what about Linux?
If memory management is crucial for me, is it better to use another C++ unit-testing framework?
If memory management is crucial for me, is it better to use another C++ unit-testing framework?
I don't know about C++ unit testing, but I have used Dr. Memory; it works on Linux, Windows, and Mac.
If you have the symbols, it even tells you on what line the memory leak happened! Really useful :D More info:
http://drmemory.org/
Even though this thread is very old, I was searching for this lately.
I came up with a simple solution (inspired by https://stackoverflow.com/a/19315100/8633816).
Just write the following header:
#include "gtest/gtest.h"
#include <crtdbg.h>

class MemoryLeakDetector {
public:
    MemoryLeakDetector() {
        _CrtMemCheckpoint(&memState_);
    }

    ~MemoryLeakDetector() {
        _CrtMemState stateNow, stateDiff;
        _CrtMemCheckpoint(&stateNow);
        int diffResult = _CrtMemDifference(&stateDiff, &memState_, &stateNow);
        if (diffResult)
            reportFailure(stateDiff.lSizes[1]);
    }

private:
    void reportFailure(unsigned int unfreedBytes) {
        FAIL() << "Memory leak of " << unfreedBytes << " byte(s) detected.";
    }

    _CrtMemState memState_;
};
Then just add a local MemoryLeakDetector to your test:
TEST(TestCase, Test) {
    // Do memory leak detection for this test
    MemoryLeakDetector leakDetector;
    // Your test code
}
Example:
A test like:
TEST(MEMORY, FORCE_LEAK) {
    MemoryLeakDetector leakDetector;
    int* dummy = new int;  // never deleted, so the detector reports a leak
}
produces a test failure whose message reports the leaked bytes, e.g. "Memory leak of 4 byte(s) detected."
I am sure there are better tools out there, but this is a very easy and simple solution.
"I'm surprised that Google C++ Testing Framework does not explicitly support checking for memory leaks."
It's not (and never was) purposed to do so.
You can do some checking yourself, e.g. using Google Mock and setting up expected calls (e.g. on destructors). But a tool specialized in this aspect will certainly do better than anything you could write yourself.
"is it better to use another C++ unit-testing framework?"
So why bother looking for a different unit-testing framework? None that I know of supports such a feature either.
There are tools like Valgrind that you can use, running your unit-tester executable under their control to detect memory leaks.
Note:
Running the unit-tester executable this way won't catch all possible memory leaks in the final executable produced from your code; it only helps find bugs/flaws in the code actually under test.
Not sure whether this worked in 2015, but since 2018 or so we have been using GoogleTest with Clang's sanitizers, including LeakSanitizer, AddressSanitizer, and UndefinedBehaviorSanitizer.
Just build tests with sanitizers enabled, example for the CMake-based project:
add_compile_options(-fsanitize=leak,address,undefined -fno-omit-frame-pointer -fno-common -O1)
link_libraries(-fsanitize=leak,address,undefined)
Memory leaks are a result of incorrect use of system interfaces. The unit test should check whether those interfaces are used correctly by your unit under test, not what the implementation-specific result of any of those interfaces is. It should check that the memory allocation and deallocation interfaces used directly by your unit are used as designed. Testing the system-specific results is part of component or integration testing. In the unit test, the memory-management interfaces are external to the unit under test and should therefore be stubbed out with a test implementation.
I have some C++ code, but I don't know what it contains. For the purposes of this example, let's say it is:
//main.cpp
#include <iostream>
using namespace std;

int T[100];

int main()
{
    for (int i = 0; i < 100; ++i)
        T[i] = i;
    int x;
    cin >> x;
    cout << T[x] << endl;
    return 0;
}
I compile it with cl /O2 /nologo /EHsc main.cpp and run it with main < inFile.in. Let's say that inFile.in contains the single number 500 and a newline. The output is some random number, because the program reads the memory at address T+500 and prints it.
I want to get a runtime error in such cases (or any way of checking whether something like this happened). Is this possible without access to main.cpp?
To be specific, I'm running all this programmatically via the Process class in C# in an ASP.NET MVC application. I want to check whether the program threw an exception, read unreserved memory, etc.
Is this a feature you want to use for development purposes only, or also in your production environment?
In the case of development purposes only, you may try running your application under a runtime-checking tool (like Valgrind or Dr. Memory), or change the way you compile it to include runtime debug checks (not guaranteed to work in the described case, but it helps in many others). Keep in mind that this will make your application much slower, so it should only be used for applications under development.
When it comes to the production environment, I am not aware of any way of doing what you want. In general you can only count on an OS segmentation fault when reading outside available memory (if you are lucky; if you aren't, it will appear to "work").
As for the exception question, I'm not 100% sure I understand what you mean: is it "why did the program terminate"? In that case you might get a core dump of the crashed application (in case of normal termination I assume you have return codes), and you can inspect it later to find the crash reason or possibly recover some data. For instructions on collecting dumps on Windows, you may check out:
http://msdn.microsoft.com/en-us/library/windows/desktop/bb787181%28v=vs.85%29.aspx
However, this is also a feature that is more useful in development environment than in production.
If you can't modify the above program's source, then run it in an 'outside' environment (a shell), get the return value, and test it against 0; any other value means incorrect behavior.
It is also good to validate input data you know such a program can't handle, so rather than waiting for the program to crash you prevent the crash from happening.
If you can modify the program, then a simple solution is to use std::vector or std::deque (which are similar); what is important is to use the at() method rather than operator[], because at() checks bounds:
#include <iostream>
#include <vector>
using namespace std;

std::vector<int> T(100);

int main()
{
    for (int i = 0; i < 100; ++i)
        T[i] = i;
    int x;
    cin >> x;
    cout << T.at(x) << endl;
    return 0;
}
If at() is called with an out-of-bounds parameter, an exception (std::out_of_range) is thrown, which you can catch like this:
try {
    cin >> x;
    cout << T.at(x) << endl;
}
catch (...)
{
    cout << "exception while accessing vector's data" << endl;
}
Suppose I have a custom static assert implementation (because I need to target a compiler that doesn't have static_assert built in). I want to craft a test that checks that
MY_STATIC_ASSERT(false);
indeed asserts. If I just write such code, it will not compile (and so not run). I'd rather have some piece of code that compiles fine when the code above fails to compile, and fails to compile when the code above does compile.
Is that possible? Can I have a compile-time (or at least a runtime) check that my static assert indeed asserts for "false"?
Sure, you can have a "compile-time" check - as long as you're compiling something else entirely:
// test_my_static_assert.cpp
#include "my_static_assert.h"
int main() {
MY_STATIC_ASSERT(false);
}
# compile.sh
if g++ test_my_static_assert.cpp; then
    echo "MY_STATIC_ASSERT failed! Compile succeeded!"
fi
Or something. But it'd have to be in a separate program entirely.