GoogleTest and Memory Leaks - C++

I'm surprised that Google C++ Testing Framework does not explicitly support checking for memory leaks. There is, however, a workaround for Microsoft Visual C++, but what about Linux?
If memory management is crucial for me, is it better to use another C++ unit-testing framework?

"If memory management is crucial for me, is it better to use another C++ unit-testing framework?"
I don't know about C++ unit-testing frameworks specifically, but I have used Dr. Memory; it works on Linux, Windows, and Mac.
If you have the debug symbols, it even tells you on which line the memory leak happened, which is really useful. More info:
http://drmemory.org/

Even though this thread is very old, I was searching for this recently and came up with a simple solution (inspired by https://stackoverflow.com/a/19315100/8633816).
Just write the following header:
#include "gtest/gtest.h"
#include <crtdbg.h>

// Takes a CRT debug-heap snapshot on construction and compares it against a
// second snapshot on destruction; any difference is reported as a test failure.
class MemoryLeakDetector {
public:
    MemoryLeakDetector() {
        _CrtMemCheckpoint(&memState_);
    }

    ~MemoryLeakDetector() {
        _CrtMemState stateNow, stateDiff;
        _CrtMemCheckpoint(&stateNow);
        int diffResult = _CrtMemDifference(&stateDiff, &memState_, &stateNow);
        if (diffResult)
            reportFailure(stateDiff.lSizes[1]);
    }

private:
    void reportFailure(unsigned int unfreedBytes) {
        FAIL() << "Memory leak of " << unfreedBytes << " byte(s) detected.";
    }

    _CrtMemState memState_;
};
Then just add a local MemoryLeakDetector to your test:
TEST(TestCase, Test) {
    // Do memory leak detection for this test
    MemoryLeakDetector leakDetector;
    // Your test code
}
Example:
A test like:
TEST(MEMORY, FORCE_LEAK) {
    MemoryLeakDetector leakDetector;
    int* dummy = new int;
}
produces a failed test with output along the lines of "Memory leak of 4 byte(s) detected."
I am sure there are better tools out there, but this is a very easy and simple solution.
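If you want the check applied to every test in a suite, a minimal sketch is to hold the detector in a test fixture. This assumes the header above is saved under a made-up name such as memory_leak_detector.h, and it still only works with the MSVC debug CRT:
#include "gtest/gtest.h"
#include "memory_leak_detector.h"  // hypothetical file name for the header above

class LeakCheckedTest : public ::testing::Test {
protected:
    // Checkpoint is taken when the fixture is constructed, the diff when it is destroyed.
    MemoryLeakDetector detector_;
};

TEST_F(LeakCheckedTest, BalancedAllocationPasses) {
    int* value = new int(42);
    delete value;  // freed again, so the detector reports nothing
}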

"I'm surprised that Google C++ Testing Framework does not explicitly support checking for memory leaks."
It is not (and never was) intended to do so.
You can actually do some checking yourself, e.g. by using google mock and setting up expected calls (e.g. for destructors). But a tool specialized in this area will certainly do better than anything you can write yourself.
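For illustration, here is a minimal sketch of that google mock pattern; the Connection/MockConnection names are invented, and the mock reports its own destruction through a Die() call that the test can expect:
#include "gmock/gmock.h"
#include "gtest/gtest.h"

class Connection {
public:
    virtual ~Connection() = default;
};

class MockConnection : public Connection {
public:
    MOCK_METHOD(void, Die, ());
    ~MockConnection() override { Die(); }  // destruction shows up as a mock call
};

TEST(Ownership, ObjectIsDestroyed) {
    auto* conn = new MockConnection;
    EXPECT_CALL(*conn, Die());
    delete conn;  // replace with whatever code is supposed to free the object
}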
"is it better to use another C++ unit-testing framework?"
So why bother looking for a different unit-testing framework? It won't support such a feature either; at least there is none I know of that does.
There are tools like valgrind you can use instead, running your unit-test executable under their control to detect memory leaks.
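For example, a typical invocation looks like this (the test binary name is hypothetical):
valgrind --leak-check=full --error-exitcode=1 ./unit_tests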
Note:
The above advice applies to the unit-test executable: it won't catch all of the possible memory leaks from the final executable produced with your code, but it helps to find bugs and flaws in the code actually under test.

Not sure whether this worked in 2015, but since 2018 or so we have used GoogleTest with Clang's sanitizers, including LeakSanitizer, AddressSanitizer and UndefinedBehaviorSanitizer.
Just build the tests with the sanitizers enabled; an example for a CMake-based project:
add_compile_options(-fsanitize=leak,address,undefined -fno-omit-frame-pointer -fno-common -O1)
link_libraries(-fsanitize=leak,address,undefined)
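With this build, LeakSanitizer runs automatically at process exit (on Linux it is enabled by default together with -fsanitize=address); it can also be toggled explicitly via an environment variable when launching the test binary (name hypothetical):
ASAN_OPTIONS=detect_leaks=1 ./unit_tests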

Memory leaks are a result of incorrect use of system interfaces. The unit test should check that those interfaces are being used correctly by your unit under test, not what the implementation-specific result of any of those interfaces is. It should check that the memory allocation and deallocation interfaces used directly by your unit are used as designed. Testing the system-specific results is part of component or integration testing. In the unit test, the memory management interfaces are external to the unit under test and should therefore be stubbed out with a test implementation.
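As a minimal sketch of that idea (all names here, Allocator, CountingAllocator and Widget, are invented for illustration): the unit under test takes the allocation interface as a dependency, and the test supplies a counting stub and asserts that the calls balance.
#include <cassert>
#include <cstddef>
#include <cstdlib>

// Hypothetical allocation interface injected into the unit under test.
struct Allocator {
    virtual void* allocate(std::size_t n) = 0;
    virtual void deallocate(void* p) = 0;
    virtual ~Allocator() = default;
};

// Test stub: counts live allocations so the test can verify they balance.
struct CountingAllocator : Allocator {
    int live = 0;
    void* allocate(std::size_t n) override { ++live; return std::malloc(n); }
    void deallocate(void* p) override { --live; std::free(p); }
};

// Unit under test: uses only the interface, never new/malloc directly.
class Widget {
public:
    explicit Widget(Allocator& a) : alloc_(a), buf_(alloc_.allocate(64)) {}
    ~Widget() { alloc_.deallocate(buf_); }
private:
    Allocator& alloc_;
    void* buf_;
};

int main() {
    CountingAllocator alloc;
    { Widget w(alloc); }      // exercise the unit
    assert(alloc.live == 0);  // the "unit test": the allocation interface was used as designed
    return 0;
}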


AccessViolationException reading memory allocated in C++ application from C++/CLI DLL

I have a C++ client to a C++/CLI DLL, which initializes a series of C# dlls.
This used to work. The code that is failing has not changed. The code that has changed is not called before the exception is thrown. My compile environment has changed, but recompiling on a machine with an environment similar to my old one still failed. (EDIT: as we see in the answer this is not entirely true, I was only recompiling the library in the old environment, not the library and client together. The client projects had been upgraded and couldn't easily go back.)
Someone besides me recompiled the library, and we started getting memory management issues ("The pointer passed in as a String must not be in the bottom 64K of the process's address space"). I recompiled it, and all worked well with no code changes. (Alarm #1) Recently it was recompiled, and memory management issues with strings re-appeared, and this time they're not going away. The new error is "Unhandled Exception: System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt."
I'm pretty sure the problem is not located where I see the exception; the code didn't change between the successful and failing builds, but we should review that to be complete. Ignore the names of things, I don't have much control over the design of what it's doing with these strings. And sorry for the confusion, but note that _bridge and bridge are different things. Lots of lines of code are missing because this question is already too long.
Defined in the library:
struct Config
{
    std::string aye;
    std::string bee;
    std::string sea;
};

extern "C" __declspec(dllexport) BridgeBase_I* __stdcall Bridge_GetConfiguredDefaultsImplementationPointer(
    const std::vector<Config> & newConfigs,  /**< new configurations to apply **/
    std::string configFolderPath,            /**< folder to write config files in **/
    std::string defaultConfigFolderPath,     /**< folder to find default config files in **/
    std::string & status                     /**< output status of config parse **/
);
In the client function:
GatewayWrapper::Config bridge;
std::string configPath("./config");
std::string defaultPath("./config/default");
GatewayWrapper::Config gwtransport;
bridge.aye = "bridged.dll";
bridge.bee = "1.0";
bridge.sea = "";
configs.push_back(bridge);
_bridge = GatewayWrapper::Bridge_GetConfiguredDefaultsImplementationPointer(configs, configPath, defaultPath, status);
Note that the call to the library that is crashing is in the same scope as the vector declaration, struct declaration, string assignments, and vector push_back.
There are no threading calls in this section of code, but there are other threads running doing other things. There is no pointer math here, and there are no heap allocations in the area except perhaps inside the standard library.
I can run the code up to the Bridge_GetConfiguredDefaultsImplementationPointer call in the debugger, and the contents of the configs vector look correct in the debugger.
Back in the library, in the first sub-function, where the debugger doesn't shine, I've broken the failing statement down into several console prints.
System::String^ temp;
List<CConfig^>^ configs = gcnew List<CConfig^>((INT32)newConfigs.size());
for (int i = 0; i < newConfigs.size(); i++)
{
    std::cout << newConfigs[i].aye << std::flush;          // prints
    std::cout << newConfigs[i].aye.c_str() << std::flush;  // prints
    temp = gcnew System::String(newConfigs[i].aye.c_str());
    System::Console::WriteLine(temp);                      // prints
    std::cout << "Testing string creation" << std::endl;   // prints
    std::cout << newConfigs[i].bee << std::flush;          // crashes here
}
I get the same exception on access of bee if I move the newConfigs[i].bee access above the assignment of temp, or if I comment out the list declaration/assignment.
Just for reference, these show that the std::string in a struct in a vector should have arrived at its destination OK:
Is std::vector copying the objects with a push_back?
std::string in struct - Copy/assignment issues?
http://www.cplusplus.com/reference/vector/vector/operator=/
Assign one struct to another in C
Why this exception is not caught by my try/catch
https://stackoverflow.com/a/918891/2091951
Generic AccessViolationException related questions
How to handle AccessViolationException
Programs randomly getting System.AccessViolationException
https://connect.microsoft.com/VisualStudio/feedback/details/819552/visual-studio-debugger-throws-accessviolationexception
finding the cause of System.AccessViolationException
https://msdn.microsoft.com/en-us/library/ms164911.aspx
Catching access violation exceptions?
AccessViolationException when using C++ DLL from C#
Suggestions from the above questions:
Change to .NET 3.5, change target platform - these solutions could have serious issues with a large multi-project solution.
HandleProcessCorruptedStateExceptions - does not work in C++; this decoration is for C#, and catching this error could be a very bad idea anyway
Change legacyCorruptedStateExceptionsPolicy - this is about catching the error, not preventing it
Install .NET 4.5.2 - can't, already have 4.6.1. Installing 4.6.2 did not help. Recompiling on a different machine that didn't have 4.5 or 4.6 installed did not help. (Despite this used to compile and run on my machine before installing Visual Studio 2013, which strongly suggests the .NET library is an issue?)
VSDebug_DisableManagedReturnValue - I only see this mentioned in relation to a specific crash in the debugger, and the help from Microsoft says that other AccessViolationException issues are likely unrelated. (http://connect.microsoft.com/VisualStudio/feedbackdetail/view/819552/visual-studio-debugger-throws-accessviolationexception)
Change Comodo Firewall settings - I don't use this software
Change all the code to managed memory - Not an option. The overall design of calling C# from C++ through C++/CLI is resistant to change. I was specifically asked to design it this way to leverage existing C# code from existing C++ code.
Make sure memory is allocated - memory should be allocated on the stack in the C++ client. I've attempted to make the vector not a reference parameter, to force a vector copy into explicitly library-controlled memory space; it did not help.
"Access violations in unmanaged code that bubble up to managed code are always wrapped in an AccessViolationException." - Fact, not a solution.
(As it turned out, it was the mismatch, not the specific version, that was the problem.)
Yes, that's black-letter law in VS. You unfortunately just missed the counter-measures that were built into VS2012 to turn this mistake into a diagnosable linker error. Previously (and in VS2010), the CRT would allocate its own heap with HeapAlloc(). Now (in VS2013), it uses the default process heap, the one returned by GetProcessHeap().
That is in itself enough to trigger an AVE when you run your app on Vista or higher: allocating memory from one heap and releasing it from another triggers an AVE at runtime, or a debugger break when you debug with the Debug Heap enabled.
This is not where it ends, another significant issue is that the std::string object layout is not the same between the versions. Something you can discover with a little test program:
#include <string>
#include <iostream>

int main()
{
    std::cout << sizeof(std::string) << std::endl;
    return 0;
}
VS2010 Debug : 32
VS2010 Release : 28
VS2013 Debug : 28
VS2013 Release : 24
I have a vague memory of Stephan T. Lavavej mentioning the std::string object size reduction, very much presented as a feature, but I can't find it again. The extra 4 bytes in the Debug build are caused by the iterator debugging feature; it can be disabled with _HAS_ITERATOR_DEBUGGING=0 in the Preprocessor Definitions. Not a feature you'd quickly want to throw away, but it makes mixing Debug and Release builds of the EXE and its DLLs quite lethal.
Needless to say, the different object sizes seriously byte when the Config object is created in a DLL built with one version of the standard C++ library and used in another. Many mishaps are possible; the most basic one is that the code will simply read the Config::bee member from the wrong offset. An AVE is (almost) guaranteed. There's lots more misery when code allocates the small flavor of the Config object but writes the large flavor of std::string; that randomly corrupts the heap or the stack frame.
Don't mix.
I believe VS2013 introduced a lot of changes in the internal data formats of STL containers, as part of a push to reduce memory usage and improve performance. I know vector became smaller, and string is basically a glorified vector<char>.
Microsoft acknowledges the incompatibility:
"To enable new optimizations and debugging checks, the Visual Studio implementation of the C++ Standard Library intentionally breaks binary compatibility from one version to the next. Therefore, when the C++ Standard Library is used, object files and static libraries that are compiled by using different versions can't be mixed in one binary (EXE or DLL), and C++ Standard Library objects can't be passed between binaries that are compiled by using different versions."
If you're going to pass std::* objects between executables and/or DLLs, you absolutely must ensure that they are using the same version of the compiler. It would be well-advised to have your client and its DLLs negotiate in some way at startup, comparing any available versions (e.g. compiler version + flags, boost version, directx version, etc.) so that you catch errors like this quickly. Think of it as a cross-module assert.
If you want to confirm that this is the issue, you could pick a few of the data structures you're passing back and forth and check their sizes in the client vs. the DLLs. I suspect your Config class above would register differently in one of the fail cases.
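A minimal sketch of such a startup check, assuming you can add an export to the DLL (both function names below are invented):
#include <cstddef>
#include <stdexcept>

// In the DLL: export the size of Config as this DLL was compiled.
extern "C" __declspec(dllexport) std::size_t __stdcall GetConfigSizeInDll()
{
    return sizeof(GatewayWrapper::Config);
}

// In the client (which sees the same declaration via the import library):
// compare layouts at startup and fail fast instead of corrupting memory later.
void CheckConfigLayout()
{
    if (GetConfigSizeInDll() != sizeof(GatewayWrapper::Config))
        throw std::runtime_error("Config layout mismatch between EXE and DLL");
}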
I'd also like to mention that it is probably a bad idea in the first place to use smart containers in DLL calls. Unless you can guarantee that the app and DLL won't try to free or reallocate the internal buffers of the other's containers, you could easily run into heap corruption issues, since the app and DLL each have their own internal C++ heap. I think that behavior is considered undefined at best. Even passing const& arguments could still result in reallocation in rare cases, since const doesn't prevent a compiler from diddling with mutable internals.
You seem to have memory corruption. Microsoft Application Verifier is invaluable in finding corruption. Use it to find your bug:
Install it to your dev machine.
Add your exe to it.
Only select Basics\Heaps.
Press Save. It doesn't matter if you keep Application Verifier open.
Run your program a few times.
If it crashes, debug it; this time the crash will point to your problem, not just some random location in your program.
PS: It's a great idea to have Application Verifier enabled at all times for your development project.

How to remediate Microsoft typeinfo.name() memory leaks?

Microsoft has a decades-old bug when using its leak-check gear in debug builds. The leak is reported from the allocation made by the runtime library when using C++ type information, like typeinfo.name(). Also see Memory leaks reported by debug CRT inside typeinfo.name() on Microsoft Connect.
We've been getting error reports and user-list discussions because of the leaks for about the same amount of time. The Microsoft bug could also mask real leaks from user programs. The latter point is especially worrisome to me because we may not be tending to real problems because of the masking.
I'd like to try to squash the leaks due to the use of typeid(T) and typeinfo.name(). My question is, how can we work around Microsoft's bug? Is there a workaround available?
Along the lines of my suggestion in the question's comments:
For if (valueType == typeid(int)) you can use std::type_index instead (available since C++11).
For type_info.name() leaking memory:
Since totally eliminating the leak doesn't seem possible, the next best thing is to reduce the number of leaks (to only one per interrogated type) and, secondarily, to tag them for reporting purposes. Because the allocation now happens inside a templated class, one can hope that the leak report will use the class name (or at least the source file where the allocation happened); you can subsequently use this information to filter these entries out of the 'all leaked memory' reports.
So instead of using typeid(<typename>).name(), you use something like:
"file typeid_name_workaround.hpp"
#include <typeinfo>

template <typename T>
struct get_type_name {
    // Static member functions cannot be const-qualified; the result is cached instead.
    static const char* name() {
        static const char* ret = typeid(T).name();
        return ret;
    }
};
In another .cpp/.hpp file:
#include "typeid_name_workaround.hpp"

struct dummy {
};

int main() {
    // instead of typeid(dummy).name() you use
    get_type_name<dummy>::name();
}

CppCheck does not detect memory leak

I have the following code, but Cppcheck (1.68) detects only a "style" error.
AbstractTelegram *TelegramFactory::CreateGetWigWagParameterTelegram(BYTE Address_i, BYTE SubAddress_i, BYTE Tag_i)
{
    SignDataWigWag *pWigWag = new SignDataWigWag();
    return new SendTelegram(SubAddress_i, Tag_i, Telegram::GET_WIG_WAG, NULL, 0);
}
Output:
Variable 'pWigWag' is assigned a value that is never used.
Any options to tune?
I am a Cppcheck developer.
Actually, we can't see that there is definitely a memory leak in that code.
There are classes that have automatic memory management.
Imagine, for example, that the SignDataWigWag constructor has code like this:
SignDataWigWag::SignDataWigWag() {
    instances.push_back(this);
}
Then the object can be deleted later by using, for instance:
void deleteAllInstances() {
    while (!instances.empty()) {
        delete instances.back();
        instances.pop_back();
    }
}
This is not unusual. Some popular class libraries have lots of classes with some kind of memory management, so a manual delete is not needed.
cppcheck is essentially only a style-checker (and like other tools which incorporate the developer's notion of "good style", its usefulness depends on various factors).
There are tools better suited to detecting memory leaks (such as valgrind); cppcheck is not one of them. Of course, you will find differing opinions on which are the best tools, and even on what a tool is suitable for, e.g., the blog entry "Valgrind is NOT a leak checker".

Protobuf with C++ plugins

We're working on a relatively large-scale C++ project where we chose at the very beginning to use protobuf as our Lingua Franca for stored and transmitted data.
We had our first problem because of end-of-program memory leaks due to the metadata of the protobuf-generated classes, which is stored as static pointers, allocated during the first call to the constructor and never deallocated. We found a nice function provided by Mr. Google to do this clean-up:
google::protobuf::ShutdownProtobufLibrary();
This works fine, except there is no symmetric call, so once it's done you can no longer use anything; you have to do it exactly once in your executable. We did what any lazy developer would have done:
struct LIBPROTOBUF_EXPORT Resource
{
    ~Resource()
    {
        google::protobuf::ShutdownProtobufLibrary();
    }
};

bool registerShutdownAtExit()
{
    static Resource cleaner;
    return true;
}
And in the protobuf-generated .cc files we added:
static bool protobufResource = mlv::protobuf::registerShutdownAtExit();
It worked fine for several months.
Then we added support for dynamically loadable plugins (DLLs) in our tool, some of which use protobuf. Unloading the plugins worked fine, but when more than one of them used protobuf, we got a nice little crash when unloading the last one.
The reason: the last plugin to unload would destroy the cleaner instance, which would call google::protobuf::ShutdownProtobufLibrary(), which in turn would try to destroy the metadata of already-unloaded types... CRASH.
Long story short: are we condemned to either having lots of "normal" memory leaks or a crash when closing our tool? Has anyone experienced the same problem and found a better solution? Is my diagnosis wrong?
Like johnathon suggested in his comment, use a reference counting scheme, or register your destruction routine with atexit. Such a routine is free-standing, but that could work fine for your case.
Relevant documentation:
MSDN
POSIX
Edit: You're right, it's basically the same thing. Didn't think this through.
Another suggestion: use a global resource singleton for all protobuf-using plugins. This one has a global destructor, which is only registered when a plugin first uses the protobuf library. Or just set a flag whenever it's used, then call ShutdownProtobufLibrary only if the flag is set.
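A rough sketch of the reference-counting variant (the class name is invented, and it must live in the host executable or some always-loaded module, never in a plugin): each plugin calls acquire() when it starts using protobuf and release() when it unloads, and only the last release() shuts the library down.
#include <mutex>
#include <google/protobuf/stubs/common.h>

class ProtobufRefCount
{
public:
    static void acquire()
    {
        std::lock_guard<std::mutex> lock(mutex_);
        ++count_;
    }

    static void release()
    {
        std::lock_guard<std::mutex> lock(mutex_);
        // Only the last user shuts protobuf down; after this, protobuf is unusable.
        if (--count_ == 0)
            google::protobuf::ShutdownProtobufLibrary();
    }

private:
    static std::mutex mutex_;
    static int count_;
};

std::mutex ProtobufRefCount::mutex_;
int ProtobufRefCount::count_ = 0;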

Recovering from exceptions using CPPUnit

I have been using CppUnit as a unit-testing framework and am now trying to use it in an automated build-and-package system. However, a problem holding me back is that if a crash occurs while running the unit tests, e.g. a null pointer dereference, it halts the rest of the automation.
Is there any way for CppUnit to recover from the exception, record the test failure, and then exit gracefully rather than terminating the unit-test process? Even an approach specific to null pointer dereferences would be useful, as they make up about 90% of the issues I have had.
To be technology-specific, I am using makefiles on a Windows system.
You're automating the execution of your CppUnit-based unit tests during your build process, right?
If you were trying to use CppUnit to execute the build process, I would be tempted to say don't do that!
Could you tell us what is stopping the build process when the unit tests crash? And what are your unit tests started by: a Makefile, a script of your own, or a continuous integration framework?
To try to answer your question: CppUnit cannot recover from access violations or segmentation faults. On Unix-like systems you should be able to catch SIGSEGV and continue, but in what state?
If the crashes occur in your unit tests and not in your product, then I'd recommend relying on assertion guards to prevent dereferencing NULL pointers:
class TestObject : public CPPUNIT_NS::TestCase
{
    CPPUNIT_TEST_SUITE(TestObject);
    CPPUNIT_TEST(testObjectIsReady);
    CPPUNIT_TEST_SUITE_END();

public:
    void setUp(void) {}
    void tearDown(void) {}

protected:
    void testObjectIsReady(void)
    {
        Object *theObject = GetObject();
        CPPUNIT_ASSERT_MESSAGE("check pointer is not null", theObject != NULL);
        //--- now you can play with your object without dereferencing a NULL pointer
        CPPUNIT_ASSERT_MESSAGE("check object is ready", theObject->isReady());
    }
};
Sorry to say this but the previous answers you received on this are ridiculous.
CppUnit really lacks in this regard. CppUnit should implement an EXIT_ON_FAIL macro which allows you to trap the access violation on Windows (using SetUnhandledExceptionFilter); then you can do any clean-up and allow CppUnit to report the failure via EXIT_ON_FAIL, and after reporting, exit the application.
In C/C++, the best way to recover from errors like that is to run each test in a separate process and then monitor the tests from a parent process. This is very easy on UNIX -- just fork() before the test begins. Check supports this, and you could likely patch CppUnit to have this behavior without much fuss.
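A rough sketch of that idea on a POSIX system (runTest here is just a stand-in for invoking one CppUnit test case):
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>

// Stand-in for invoking one CppUnit test case; index 1 deliberately crashes.
static void runTest(int index)
{
    if (index == 1) {
        volatile int* p = nullptr;
        *p = 42;  // simulated null pointer dereference inside a test
    }
}

// Run one test in a forked child so a crash cannot take down the test runner.
static bool runTestIsolated(int index)
{
    pid_t pid = fork();
    if (pid == 0) {
        runTest(index);
        _exit(0);  // the child exits cleanly if the test survived
    }
    int status = 0;
    waitpid(pid, &status, 0);
    if (WIFSIGNALED(status)) {
        std::fprintf(stderr, "test %d crashed with signal %d\n", index, WTERMSIG(status));
        return false;
    }
    return WIFEXITED(status) && WEXITSTATUS(status) == 0;
}

int main()
{
    for (int i = 0; i < 3; ++i)
        std::fprintf(stdout, "test %d: %s\n", i, runTestIsolated(i) ? "ok" : "FAILED");
    return 0;
}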
As an additional note to anyone perusing this question later, I've found UnitTest++ can catch exceptions in tests and just fail the test with appropriate information rather than resulting in a process exit.
I didn't try it, but on Windows I guess using SEH would help:
__try
{
    // run your test case
}
__except (EXCEPTION_EXECUTE_HANDLER)
{
    // an SEH exception (e.g. access violation) escaped the test
}
Integrate it into the CppUnit framework, and every time an unknown exception is received, mark the case as failed.