CppCheck does not detect memory leak - c++

I have the following code, but Cppcheck (1.68) detects only a "style" error.
AbstractTelegram *TelegramFactory::CreateGetWigWagParameterTelegram(BYTE Address_i, BYTE SubAddress_i, BYTE Tag_i)
{
    SignDataWigWag *pWigWag = new SignDataWigWag();
    return new SendTelegram(SubAddress_i, Tag_i, Telegram::GET_WIG_WAG, NULL, 0);
}
Output:
Variable 'pWigWag' is assigned a value that is never used.
Any options to tune?

I am a Cppcheck developer.
Actually, we can't determine that there is definitely a memory leak in that code.
There are classes that have automatic memory management.
Imagine, for example, that the SignDataWigWag constructor contains code like this:
// assuming some registry container, e.g.: static std::vector<SignDataWigWag*> instances;
SignDataWigWag::SignDataWigWag() {
    instances.push_back(this);
}
Then the instances can be deleted later, for instance with:
void deleteAllInstances() {
    while (!instances.empty()) {
        delete instances.back();
        instances.pop_back();
    }
}
This is not unusual. Some popular class libraries have many classes with some kind of built-in memory management, so a manual delete is not needed.

cppcheck is essentially only a style-checker (and like other tools which incorporate the developer's notion of "good style", its usefulness depends on various factors).
There are suitable tools for detecting memory leaks (such as valgrind); cppcheck is not one of those. Of course, you will find differing opinions on which are the best tools, and even on what a tool is suitable for, e.g., the blog entry "Valgrind is NOT a leak checker".
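For reference, a typical way to check for leaks at runtime is to run the program under valgrind with full leak checking enabled:

valgrind --leak-check=full ./myprogram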

Related

Overloading base types with a custom allocator, and its alternatives

So, this is a bit of an open question. But let's say that I have a large application which globally overrides the various new and delete operators so that they use home-brewed jemalloc-style arenas and custom alignments.
All fine and good, but I have been running into segfault issues because other C++-based DLLs and their dependencies (namely LLVM) also use the overloaded allocators when they shouldn't, bringing the little custom allocator to its knees (running out of memory, among other stresses).
Testing workarounds, I have wrapped (and moved) those global operators into a class, and I made all base classes inherit from it. And well, that works for classes, but not for base types. That's the problem.
Given that C++ doesn't allow useful things like having separate allocators per namespace, or limiting the new operator per executable module, what is the best way of emulating this in base data types, where I can't directly subclass an int?
The obvious way is wrapping them in a custom template, but the problem is performance. Do I have to emulate all the array and indexing operations under a second layer just so that I can malloc from a different place, without having to change the rest of the functional code? Is there a better way?
P.S.: I have also been thinking about using special global new/delete operators with extra parameters, while leaving the standard ones alone. Thus ensuring that I am (well, my executable module is) the only one calling those global functions. It should be a simple search-and-replace.
Well, quick update. What I did in the end to 'solve' this conundrum is to manually detect if the code that called the overridden global allocators comes from the main executable module and conditionally redirect all the external new / delete calls to their corresponding malloc / free while still using the custom arena allocator for our own internal code.
How? After doing some R&D I found that this could be done by using the _ReturnAddress() built-in on MSVC and __builtin_extract_return_addr(__builtin_return_address(0)) on GCC/Clang; and I can say that it seems to work fine so far in production software.
Now, when some C++ code from our address space wants some memory we can see where it comes from.
But how do we find out whether that address belongs to some other module in our process space or to our own? We need the base and end addresses of the main program, cached at startup as globals, so we can check that the return address is within bounds.
All for extremely little overhead. But our second problem is that retrieving the base address is different on every platform. After some research I found that things were more straightforward than expected:
In Windows/Win32 we can simply do this:
#include <windows.h>
#include <psapi.h>

// base_addr / base_end are globals cached at startup, e.g.: static uintptr_t base_addr, base_end;
inline void __initialize_base_address()
{
    MODULEINFO minfo;
    GetModuleInformation(GetCurrentProcess(), GetModuleHandle(NULL), &minfo, sizeof(minfo));
    base_addr = (uintptr_t) minfo.lpBaseOfDll;
    base_end  = (uintptr_t) minfo.lpBaseOfDll + minfo.SizeOfImage;
}
In Linux there are a thousand ways of doing this, including linker globals and some verbose, unreliable ways of walking the process module table. I was looking at the linker map output and noticed that the _init and _fini functions always seem to wrap the rest of the .text section symbols. Sometimes it's hard to get to the simplest solution that works everywhere:
#include <link.h>
#include <dlfcn.h>   // dlopen / dlsym / dlclose

inline void __initialize_base_address()
{
    void *handle = dlopen(0, RTLD_NOW);
    base_addr = (uintptr_t) dlsym(handle, "_init");
    base_end  = (uintptr_t) dlsym(handle, "_fini");
    dlclose(handle);
}
On macOS things are even less documented, and I had to cobble together my own solution using the Darwin kernel open-source code and some obscure low-level tools as reference. Keep in mind that _NSGetMachExecuteHeader() is just a wrapper for the internal _mh_execute_header linker global. If you need to parse the Mach-O format and its structures, getsect.h is the way to go:
#include <mach-o/getsect.h>
#include <mach-o/ldsyms.h>
#include <crt_externs.h>

inline void __initialize_base_address()
{
    size_t size;
    void *ptr = getsectiondata(&_mh_execute_header, SEG_TEXT, SECT_TEXT, &size);
    base_addr = (uintptr_t) _NSGetMachExecuteHeader();
    base_end  = (uintptr_t) ptr + size;
}
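Putting the pieces together, the allocator override itself looks roughly like the sketch below. This is a simplified illustration, not the exact code from above: arena_alloc / arena_free stand in for the custom arena allocator, and a real version also needs the array and sized/aligned operator overloads, plus a way to handle pointers allocated by one side and freed by the other:

#include <cstdint>
#include <cstdlib>
#include <new>

static uintptr_t base_addr, base_end;   // cached by __initialize_base_address()

void *arena_alloc(std::size_t size);    // hypothetical custom arena allocator
void  arena_free(void *ptr);            // hypothetical

#if defined(_MSC_VER)
#include <intrin.h>
#define CALLER_ADDRESS() ((uintptr_t) _ReturnAddress())
#else
#define CALLER_ADDRESS() ((uintptr_t) __builtin_extract_return_addr(__builtin_return_address(0)))
#endif

// true when the call site lies inside the main executable image
static inline bool is_internal_caller(uintptr_t ret)
{
    return ret >= base_addr && ret < base_end;
}

void *operator new(std::size_t size)
{
    if (is_internal_caller(CALLER_ADDRESS()))
        return arena_alloc(size);       // our own code uses the arena
    return std::malloc(size);           // external modules get plain malloc
}

void operator delete(void *ptr) noexcept
{
    if (is_internal_caller(CALLER_ADDRESS()))
        arena_free(ptr);
    else
        std::free(ptr);
}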
Another thing to keep in mind: this some-other-module-is-using-our-global-new-override issue seems to be a problem on Linux and maybe macOS. I didn't have it on Windows, probably because no conflicting DLLs were loaded in the process (most were C API-based), or maybe because each module uses its own C++ runtime on that platform.
The main issue I had was caused by Mesa3D, which uses LLVM (pure C++ in and out) for many of their GLSL shader compilers and liked to gobble up big chunks of my small custom-tailored memory arena uninvited.
Rewriting a legacy program that is structurally dependent on these allocators was out of the question due to its sheer size and complexity, so this turned out to be the best way of making things work as expected.
It's only a few lines of optional, sneaky, extra per-platform code.

How to remediate Microsoft typeinfo.name() memory leaks?

Microsoft has a decades-old bug when using its leak-check gear in debug builds. The leak is reported for the allocation made by the runtime library when using C++ type information, like typeinfo.name(). Also see Memory leaks reported by debug CRT inside typeinfo.name() on Microsoft Connect.
We've been getting error reports and user-list discussions because of the leaks for about the same amount of time. The Microsoft bug could also mask real leaks from user programs. The latter point is especially worrisome to me because we may not be tending to real problems because of the masking.
I'd like to try to squash the leaks due to use of typeid(T) and typeinfo.name(). My question is, how can we work around Microsoft's bug? Is there a work around available?
Along the lines of my suggestion in the question's comments:
For comparisons like if (valueType == typeid(int)) you can use std::type_index (available since C++11); a minimal illustration follows.
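For instance (the function name isInt is just for illustration):

#include <typeindex>
#include <typeinfo>

// std::type_index wraps a type_info reference and is comparable and hashable,
// so type comparisons need no call to type_info::name() at all
bool isInt(const std::type_info& valueType)
{
    return std::type_index(valueType) == std::type_index(typeid(int));
}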
For type_info.name() leaking memory:
Since totally eliminating the leak doesn't seem possible, the next best thing would be to reduce the number of leaks (to only one per interrogated type) and, secondarily, to tag them for reporting purposes. Being inside a templated class, one can hope that the leak report will use the class name (or at least the source file where the allocation happened); you can subsequently use this information to filter them out of the 'all leaked memory' reports.
So instead of using typeid(<typename>).name() directly, you use something like:
"file typeid_name_workaround.hpp"
template <typename T> struct get_type_name {
static const char* name() const {
static const char* ret=typeid(T).name();
return ret;
}
};
In another .cpp/.hpp file:
#include "typeid_name_workaround.hpp"
struct dummy {
};
int main() {
// instead of typeid(dummy).name() you use
get_type_name<dummy>::name();
}

GoogleTest and Memory Leaks

I'm surprised that Google C++ Testing Framework does not explicitly support checking for memory leaks. There is, however, a workaround for Microsoft Visual C++, but what about Linux?
If memory management is crucial for me, is it better to use another C++ unit-testing framework?
"If memory management is crucial for me, is it better to use another C++ unit-testing framework?"
I don't know about C++ unit-testing frameworks, but I have used Dr. Memory; it works on Linux, Windows, and Mac.
If you have the symbols, it even tells you on which line the memory leak happened! Really useful. More info:
http://drmemory.org/
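A typical invocation (drmemory is the launcher documented on that site; your test binary goes after the double dash):

drmemory -- ./my_unit_tests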
Even though this thread is very old, I was searching for this lately.
I came up with a simple solution (inspired by https://stackoverflow.com/a/19315100/8633816).
Just write the following header:
#include "gtest/gtest.h"
#include <crtdbg.h>
class MemoryLeakDetector {
public:
MemoryLeakDetector() {
_CrtMemCheckpoint(&memState_);
}
~MemoryLeakDetector() {
_CrtMemState stateNow, stateDiff;
_CrtMemCheckpoint(&stateNow);
int diffResult = _CrtMemDifference(&stateDiff, &memState_, &stateNow);
if (diffResult)
reportFailure(stateDiff.lSizes[1]);
}
private:
void reportFailure(unsigned int unfreedBytes) {
FAIL() << "Memory leak of " << unfreedBytes << " byte(s) detected.";
}
_CrtMemState memState_;
};
Then just add a local MemoryLeakDetector to your Test:
TEST(TestCase, Test) {
    // do memory leak detection for this test
    MemoryLeakDetector leakDetector;
    // your test code
}
Example:
A test like:
TEST(MEMORY, FORCE_LEAK) {
    MemoryLeakDetector leakDetector;
    int* dummy = new int;   // deliberately leaked
}
Produces a failing test with output like "Memory leak of 4 byte(s) detected."
I am sure there are better tools out there, but this is a very easy and simple solution.
"I'm surprised that Google C++ Testing Framework does not explicitly support checking for memory leaks."
It's not (and never was) intended to do so.
You can actually verify some of this yourself, e.g. using Google Mock and setting up expected calls (e.g. for destructors), as sketched below. But a tool specialized in this aspect will certainly do better than anything you can write yourself.
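A rough sketch of that idea, using the pattern from the Google Mock cookbook of routing the destructor through a mockable Die() method (MockFoo is a made-up example class):

#include <gmock/gmock.h>

class MockFoo {
public:
    MOCK_METHOD(void, Die, ());
    // route destruction through a mockable method so it can be expected
    virtual ~MockFoo() { Die(); }
};

TEST(Ownership, ObjectIsDestroyed) {
    MockFoo* foo = new MockFoo;
    EXPECT_CALL(*foo, Die());   // the test fails if foo is never deleted
    delete foo;                 // normally done by the code under test
}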
"is it better to use another C++ unit-testing framework?"
So why bothering looking for different unit testing frameworks (that won't support such feature either, at least there's none I know of).
There are tools like valgrind that you can use, running your UnitTester executable under their control to detect memory leaks.
Note:
Running the UnitTester executable this way won't catch all of the possible memory leaks from the final executable produced from your code; it just helps to find bugs/flaws in the actually tested code.
Not sure whether this worked in 2015, but since 2018 or so we have used GoogleTest with Clang's sanitizers: LeakSanitizer, AddressSanitizer, and UndefinedBehaviorSanitizer.
Just build the tests with sanitizers enabled; an example for a CMake-based project:
add_compile_options(-fsanitize=leak,address,undefined -fno-omit-frame-pointer -fno-common -O1)
link_libraries(-fsanitize=leak,address,undefined)
Memory leaks are a result of incorrect use of system interfaces. The unit test should check whether those interfaces are being used correctly in your unit under test, not what the implementation-specific result of any of those interfaces is. It should check that the memory allocation and deallocation interfaces used directly by your unit are being used as designed. Testing the system-specific results would be a part of component or integration testing. In the unit test, the memory management interfaces are external to the unit under test and thus should be stubbed out with a test implementation.

Protobuf with C++ plugins

We're working on a relatively large-scale C++ project where we chose at the very beginning to use protobuf as our Lingua Franca for stored and transmitted data.
We had our first problem with end-of-program memory leaks: the protobuf-generated classes' metadata is stored in static pointers, allocated during the first call to the constructor and never deallocated. We found a nice function provided by Mr. Google to do this clean-up:
google::protobuf::ShutdownProtobufLibrary();
Works fine, except there is no symmetric call, so once it's done you can no longer use anything. You have to do it exactly once in your executable. We did what any lazy developer would have done:
struct LIBPROTOBUF_EXPORT Resource
{
    ~Resource()
    {
        google::protobuf::ShutdownProtobufLibrary();
    }
};

bool registerShutdownAtExit()
{
    static Resource cleaner;
    return true;
}
And we added to the generated protobuf .cc files a:
static bool protobufResource = mlv::protobuf::registerShutdownAtExit();
It worked fine for several months.
Then we added support for dynamically loadable plugins (DLLs) in our tool. Some of them use protobuf. Unloading the plugins worked fine, but when more than one of them used protobuf, we got a nice little crash when unloading the last one.
The reason: the last one to unload would destroy its cleaner instance, which would call google::protobuf::ShutdownProtobufLibrary(), which in turn would try to destroy metadata of already-unloaded types... CRASH.
Long story short: are we condemned to either having lots of "normal" memory leaks or a crash when closing our tool? Has anyone experienced the same problem and found a better solution? Is my diagnosis wrong?
Like johnathon suggested in his comment, use a reference counting scheme, or register your destruction routine with atexit. Such a routine is free-standing, but that could work fine for your case.
Relevant documentation:
MSDN
POSIX
Edit: You're right, it's basically the same thing. Didn't think this through.
Another suggestion: use a global resource singleton for all protobuf-using plugins. This one has a global destructor, which is only registered when a plugin first uses the protobuf library. Or just set a flag whenever it's used, then call ShutdownProtobufLibrary only if the flag is set. A sketch of the reference-counting variant follows.
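A minimal sketch of the reference-counting idea (ProtobufRefCount is a made-up name, and a real version needs whatever thread-safety your plugin loader requires):

#include <google/protobuf/stubs/common.h>

// lives in the host application, not in any plugin, so it outlives them all;
// each plugin acquires a reference on load and releases it on unload
class ProtobufRefCount {
public:
    static void acquire() { ++count_; }
    static void release() {
        if (--count_ == 0)
            google::protobuf::ShutdownProtobufLibrary();
    }
private:
    static int count_;   // defined once in the host: int ProtobufRefCount::count_ = 0;
};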

Prevent malloc/free to be compiled for embedded projects

Background: We are using Keil to compile our NXP LPC2458 project. There are numerous tasks that are being run on Keil's RealView RTOS. There is stack space created, which is being allocated to each task. There is no HEAP created by default, and I want to avoid it since we can't afford the code-space overhead and the cost of "garbage collecting".
Objective: Use C++ in the embedded code without using the heap. Keil provides #pragma import(__use_no_heap), which prevents malloc() and free() calls from being linked.
Solution: I tried creating a singleton with a private static pointer. My hope was that operator new() would not be called, since I declared dlmData as static in getDLMData(). For some reason the linker still states that malloc() and free() are being called. I have thought about a private operator new() and a private operator delete(), declaring dlmData as static within the overloaded function, but it is not working for some reason. WHAT AM I DOING WRONG?
//class declaration
class DataLogMaintenanceData
{
public:
    static DataLogMaintenanceData* getDLMData();
    ~DataLogMaintenanceData()
    { instanceFlag = FALSE; }
protected:
    DataLogMaintenanceData(); //constructor declared protected to avoid poly
private:
    static Boolean instanceFlag;
    static DataLogMaintenanceData *DLMData;
};

//set these to NULL when the code is first started
Boolean DataLogMaintenanceData::instanceFlag = FALSE;
DataLogMaintenanceData *DataLogMaintenanceData::DLMData = NULL;
//class functions
DataLogMaintenanceData *DataLogMaintenanceData::getDLMData()
{
    if (FALSE == instanceFlag)
    {
        static DataLogMaintenanceData dlmData;
        DLMData = &dlmData;
        instanceFlag = TRUE;
        return DLMData;
    }
    else
    {
        return DLMData;
    }
}
void InitDataLog ( void )
{
    DataLogMaintenanceData *dlmData;
    dlmData = DataLogMaintenanceData::getDLMData();
    // to avoid dlmData warning
    dlmData = dlmData;
}
//ACTUAL TASK
__task DataLog()
{
    // ... code to initialize stuff
    InitDataLog();
    // ... more stuff
}
For some reason, the only way I can get this to compile is to create heap space and then allow the malloc() and free() calls to be compiled into the project. As expected, the "static"ally defined object, dlmData, resides in the RAM space allocated to the dataLog.o module (i.e. it doesn't live in the HEAP).
I can't figure out, and I have checked Google, what I am missing. Is it possible in C++ to bypass malloc() and free() when compiling pure objects? I know I can replace the RTOS's implementation of malloc() and free() with stubs that do nothing, but I want to avoid compiling in code that I won't use.
Probably some of the code we aren't seeing calls a function that calls malloc behind the scenes.
From http://www.keil.com/support/man/docs/armlib/armlib_CJAIJCJI.htm you can use --verbose --list=out.txt on the link line to get details about the malloc caller.
Included in the Keil installation is a set of PDFs... one of the documents (document ID DUI0475A) is titled "Using ARM C and C++ Libraries and Floating-Point Support". It discusses use of the heap (and preventing its use) in several places.
Specifically, check out section 2.64 "Avoiding the ARM-supplied heap and heap-using library functions", lots of good information there. The interesting text in that section:
You can reference the __use_no_heap or __use_no_heap_region symbols in your code to guarantee that no heap-using functions are linked in from the ARM library.
__use_no_heap guards against the use of malloc(), realloc(), free(), and any function that uses those functions. For example, calloc() and other stdio functions.
__use_no_heap_region has the same properties as __use_no_heap, but in addition, guards against other things that use the heap memory region. For example, if you declare main() as a function taking arguments, the heap region is used for collecting argc and argv.
Since your question is about how to prevent malloc() from being called or used, that might put you on the right track.
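For reference, with the ARM/Keil toolchain, referencing such a symbol is typically done through an import pragma; the exact form for your toolchain version is described in the document above, but it looks something like:

// in any one source file of the project; the link step then fails
// if anything pulls in malloc()/free() from the ARM library
#pragma import(__use_no_heap)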
From the code you've posted I cannot see anything that would allocate memory on the heap. Are there any implicit conversions taking place somewhere? What happens if you compile without this class at all?
What you could do:
1) Run under a debugger (assuming you can build a runnable image, maybe on an emulator), set a breakpoint in malloc, and examine the stack.
2) Provide your own malloc and free to make the linker happy, then repeat step 1.
You may find that you need to link against a different version of the C runtime startup. In the worst case, if the number of calls to malloc/free is limited, you can roll your own version which gives the callers some preallocated memory; a sketch of that idea follows, but hopefully this will not be necessary.
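A minimal sketch of such a stub (the pool size is arbitrary, and free() is deliberately a no-op, which is only viable if callers never rely on memory being reused):

#include <stddef.h>

// hand out chunks from a fixed static pool; memory is never reclaimed
static unsigned char pool[1024];
static size_t pool_used = 0;

void *malloc(size_t size)
{
    size = (size + 7u) & ~(size_t)7u;   // keep 8-byte alignment
    if (pool_used + size > sizeof(pool))
        return 0;                       // out of preallocated memory
    void *p = &pool[pool_used];
    pool_used += size;
    return p;
}

void free(void *ptr)
{
    (void) ptr;                         // intentionally a no-op
}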