ASAN AddressSanitizer complains about a memory leak - C++

I'm using the ASAN address sanitizer to detect memory issues. When the program stops, ASAN reports the following:
==102121==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 537 byte(s) in 1 object(s) allocated from:
#0 0x75cb48 in operator new(unsigned long) (/home/app+0x75cb48)
#1 0x7dca83 in __gnu_cxx::new_allocator<char>::allocate(unsigned long, void const*) /opt/rh/devtoolset-7/root/usr/include/c++/7/ext/new_allocator.h:111
#2 0x7ce766 in std::string::_Rep::_S_create(unsigned long, unsigned long, std::allocator<char> const&) /opt/rh/devtoolset-7/root/usr/include/c++/7/bits/basic_string.tcc:1057
#3 0x7cc54d in std::string::_Rep::_M_clone(std::allocator<char> const&, unsigned long) (/home/app+0x7cc54d)
#4 0x7c1f2a in std::string::reserve(unsigned long) /opt/rh/devtoolset-7/root/usr/include/c++/7/bits/basic_string.tcc:960
#5 0x7fa0a639c6f5 in std::basic_stringbuf<char, std::char_traits<char>, std::allocator<char> >::overflow(int) (/lib64/libstdc++.so.6+0x9b6f5)
Direct leak of 24 byte(s) in 1 object(s) allocated from:
#0 0x75cec8 in operator new(unsigned long, std::nothrow_t const&) (/home/app+0x75cec8)
#1 0x7fa0a635df1d in __cxa_thread_atexit (/lib64/libstdc++.so.6+0x5cf1d)
Indirect leak of 24 byte(s) in 1 object(s) allocated from:
#0 0x75cec8 in operator new(unsigned long, std::nothrow_t const&) (/home/app+0x75cec8)
#1 0x7fa0a635df1d in __cxa_thread_atexit (/lib64/libstdc++.so.6+0x5cf1d)
I've seen on the ASAN page that this can come from the standard library being statically linked. However, in my case it is dynamically linked.
The application is compiled with devtoolset-7 on RHEL.
Do you have any idea where the leak comes from?

You can get more info than
#0 0x75cb48 in operator new(unsigned long) (/home/app+0x75cb48)
by using llvm-symbolizer.
Download it, and set the environment variable
ASAN_SYMBOLIZER_PATH=/usr/where/ever/the/binary/is
If you are sure that the leak is a false alarm, you can use a suppression file:
create a suppression text file and add the line leak:__cxa_thread_atexit to it.
Set the environment variable
LSAN_OPTIONS=suppressions=path/to/suppr.txt
and then run your app.
http://clang.llvm.org/docs/AddressSanitizer.html#symbolizing-the-reports
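For reference, a minimal end-to-end setup might look like this (the symbolizer location and the binary name are placeholders, not values taken from the report):
contents of suppr.txt (LeakSanitizer suppression file):
leak:__cxa_thread_atexit
run the instrumented binary with symbolized reports and the suppression active:
ASAN_SYMBOLIZER_PATH=/usr/local/bin/llvm-symbolizer LSAN_OPTIONS=suppressions=path/to/suppr.txt ./app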

Related

TensorFlow C++ memory leak - Valgrind

I am executing simple TensorFlow code to create a graph def, as shown:
tensorflow::NewSession(options, &session);
ReadBinaryProto(tensorflow::Env::Default(), "/home/ashok/eclipseWorkspace/faceRecognition-x86_64/Data/models/optimized_facenet.pb", &graph_def);
session->Create(graph_def);
But when I run Valgrind as shown below
valgrind --leak-check=full --show-leak-kinds=all --vex-guest-max-insns=25 ./faceRecognition-x86_64 -r -i
I get the errors below:
==12366== 16,000 bytes in 1 blocks are still reachable in loss record 47,782 of 47,905
==12366== at 0x4C2E19F: operator new(unsigned long) (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==12366== by 0xBF875DC: std::vector<tensorflow::CostModel::MemUsage, std::allocator<tensorflow::CostModel::MemUsage> >::reserve(unsigned long) (in /usr/lib/libtensorflow_cc.so)
==12366== by 0xBF90128: tensorflow::CostModel::InitFromGraph(tensorflow::Graph const&) (in /usr/lib/libtensorflow_cc.so)
==12366== by 0xBEE48D3: tensorflow::SimpleGraphExecutionState::InitBaseGraph(tensorflow::BuildGraphOptions const&) (in /usr/lib/libtensorflow_cc.so)
==12366== by 0xBEE52CF: tensorflow::SimpleGraphExecutionState::MakeForBaseGraph(tensorflow::GraphDef*, tensorflow::SimpleGraphExecutionStateOptions const&, std::unique_ptr<tensorflow::SimpleGraphExecutionState, std::default_delete<tensorflow::SimpleGraphExecutionState> >*) (in /usr/lib/libtensorflow_cc.so)
==12366== by 0xBE68B9D: tensorflow::DirectSession::MaybeInitializeExecutionState(tensorflow::GraphDef const&, bool*) (in /usr/lib/libtensorflow_cc.so)
==12366== by 0xBE68CF9: tensorflow::DirectSession::ExtendLocked(tensorflow::GraphDef const&) (in /usr/lib/libtensorflow_cc.so)
==12366== by 0xBE68FC7: tensorflow::DirectSession::Create(tensorflow::GraphDef const&) (in /usr/lib/libtensorflow_cc.so)
==12366== by 0x26B899: TensorFlow::initializeRecognition() (in /home/ashok/eclipseWorkspace/faceRecognition-x86_64/Debug/faceRecognition-x86_64)
==12366== by 0x24197D: RecognitionWithImages::RecognitionWithImages() (in /home/ashok/eclipseWorkspace/faceRecognition-x86_64/Debug/faceRecognition-x86_64)
==12366== by 0x12F27C: main (in /home/ashok/eclipseWorkspace/faceRecognition-x86_64/Debug/faceRecognition-x86_64)
These types of errors are also generated when I call session->Run().
Because of these issues, the memory needed to run the program keeps increasing over time, and the application eventually crashes due to insufficient memory.
Did you close the session? You also need to delete the Session pointer:
session->Close();
delete session;
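A minimal sketch of the full session lifecycle, assuming the TensorFlow 1.x C++ API used in the question (the header paths, the TF_CHECK_OK macro, and the model path are assumptions/placeholders, not taken from the question):

#include "tensorflow/core/public/session.h"
#include "tensorflow/core/framework/graph.pb.h"
#include "tensorflow/core/platform/env.h"

int main() {
    tensorflow::Session* session = nullptr;
    tensorflow::GraphDef graph_def;
    tensorflow::SessionOptions options;

    TF_CHECK_OK(tensorflow::NewSession(options, &session));
    TF_CHECK_OK(tensorflow::ReadBinaryProto(tensorflow::Env::Default(),
                                            "/path/to/optimized_facenet.pb", &graph_def));
    TF_CHECK_OK(session->Create(graph_def));

    // ... session->Run(...) calls go here ...

    // Release everything the session owns before exit; without this,
    // Valgrind keeps reporting the graph and cost-model allocations.
    TF_CHECK_OK(session->Close());
    delete session;
    return 0;
}

Wrapping the raw pointer in a std::unique_ptr<tensorflow::Session> (and still calling Close() before it goes out of scope) makes the delete harder to forget.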

Does log4cxx::Level::getError() leak memory?

I use the log4cxx logging library in a project and Valgrind Memory Analyzer (in Qt Creator) to check for memory leaks.
It appears to me that log4cxx::Level::getError() and log4cxx::Level::getFatal() leak 18 bytes of memory.
Here is the relevant part of the Valgrind dump:
18 bytes in 1 blocks are possibly lost in loss record 157 of 409 in OLogger::getLogLevel(char const*) in XXX/Infrastructure/Logging/OLogger.cpp:51
1: operator new(unsigned int) in /usr/lib/valgrind/vgpreload_memcheck-x86-linux.so
2: std::string::_Rep::_S_create(unsigned int, unsigned int, std::allocator<char> const&) in /usr/lib/i386-linux-gnu/libstdc++.so.6.0.19
3: char* std::string::_S_construct<char const*>(char const*, char const*, std::allocator<char> const&, std::forward_iterator_tag) in /usr/lib/i386-linux-gnu/libstdc++.so.6.0.19
4: std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(char const*, std::allocator<char> const&) in /usr/lib/i386-linux-gnu/libstdc++.so.6.0.19
5: log4cxx::Level::getError() in /usr/lib/liblog4cxx.so.10.0.0
6: OLogger::getLogLevel(char const*) in XXX/Infrastructure/Logging/OLogger.cpp:51
Now the question is whether the library intentionally leaks memory in a few places. VLD, for example, had problems in the past with statically allocated memory. Maybe the logging system wants to stay alive as long as possible to report errors, leading to memory leaks. That's just my speculation...
Can anyone verify that leak? Is it there by design? What do I have to do to remove it, if possible?
Thank you :-)
According to a short discussion at apache.org, the leak is present in my current version (see the log in my question).
The leak needs further investigation, for example in the development branch of log4cxx.
I post this as an answer because I now know that the leak does not stem from my code.

Valgrind complaining about a possible memory leak in std::string's operator new

In my project I am using jsoncpp, Boost, and many other libraries. When I ran Valgrind on my program, it reported possible memory leaks in string creation in many places, including the jsoncpp and Boost libraries.
I have pasted the Valgrind error snippets below:
==5506== 427,198 bytes in 489 blocks are possibly lost in loss record 8,343 of 8,359
==5506== at 0x4C2B1C7: operator new(unsigned long) (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==5506== by 0x9360A88: std::string::_Rep::_S_create(unsigned long, unsigned long, std::allocator<char> const&) (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.16)
==5506== by 0x55EB0BD: char* std::string::_S_construct<char const*>(char const*, char const*, std::allocator<char> const&, std::forward_iterator_tag) (basic_string.tcc:140)
==5506== by 0x936261C: std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(char const*, unsigned long, std::allocator<char> const&) (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.16)
==5506== by 0x63FEB99: Json::Value::asString() const (json_value.cpp:611)
My question is: are these errors valid, or are they false positives?
Thanks in advance
To be completely sure, you can do a looped test and check for memory hogging.
We had similar messages and they turned out to be false positives, so we added them to the suppression list.
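A sketch of that workflow (the suppression block below is the kind of entry --gen-suppressions=all prints for this stack; the exact mangled frame names may differ between libstdc++ versions, and the binary name is a placeholder):
valgrind --leak-check=full --gen-suppressions=all ./myprogram
Paste the emitted block into a file, e.g. string.supp:
{
   jsoncpp_asString_possibly_lost
   Memcheck:Leak
   match-leak-kinds: possible
   fun:_Znwm
   fun:_ZNSs4_Rep9_S_createEmmRKSaIcE
   ...
}
and load it on later runs:
valgrind --leak-check=full --suppressions=string.supp ./myprogram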
Valgrind has some heuristics that reduce the number of 'false positive'
possibly-lost reports.
Among others, it has a heuristic to better detect std::string.
Use the following options to activate some heuristics:
--leak-check-heuristics=heur1,heur2,... which heuristics to use for
improving leak search false positive [none]
where heur is one of:
stdstring length64 newarray multipleinheritance all none
Note that in the upcoming 3.11 release, the default for this option
has been changed from 'none' to 'all'.
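For example, to enable just the std::string heuristic on an older release (the binary name is a placeholder):
valgrind --leak-check=full --leak-check-heuristics=stdstring ./myprogram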

How to detect "possibly lost" bytes in Valgrind when insufficient data is provided?

I have been digging through my 1500 lines of code for several days to find those 15 bytes (possibly lost), to no avail.
Valgrind does not provide sufficient data, even though I ran the following command:
valgrind --leak-check=full --show-reachable=yes --track-origins=yes --show-below-main=yes ./myapp
I get the following block of report:
==3283== 15 bytes in 1 blocks are possibly lost in loss record 1 of 4
==3283== at 0x402842F: operator new(unsigned int) (vg_replace_malloc.c:255)
==3283== by 0x40D2A83: std::string::_Rep::_S_create(unsigned int, unsigned int, std::allocator<char> const&) (in /usr/lib/i386-linux-gnu/libstdc++.so.6.0.16)
==3283== by 0x40D4CF7: char* std::string::_S_construct<char const*>(char const*, char const*, std::allocator<char> const&, std::forward_iterator_tag) (in /usr/lib/i386-linux-gnu/libstdc++.so.6.0.16)
==3283== by 0x40D4E65: std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(char const*, std::allocator<char> const&) (in /usr/lib/i386-linux-gnu/libstdc++.so.6.0.16)
==3283== by 0x804DB22: _GLOBAL__sub_I__ZN7processC2Ei7in_addr (main.cpp:1304)
==3283== by 0x8050131: __libc_csu_init (in /home/username/myapp-write/src/myapp)
==3283== by 0x41A60A9: __libc_start_main (libc-start.c:185)
==3283== by 0x80499C0: ??? (in /home/username/myapp-write/src/myapp)
==3283==
Would anyone please tell me how to detect the faulty line?
If your code invokes alternative termination strategies that bypass stack unwinding, the destructors of your automatic variables will not be called.
std::terminate, which you mentioned in a comment you were using, is unfortunately one such condition. The default terminate handler invokes std::abort, which does not run destructors for objects of automatic, thread-local, or static storage duration, so any such variables that own dynamically allocated memory will leak like a sieve.
Avoid termination in this fashion unless you have a very good reason for it, and in general there are very few good reasons for it.
Best of luck.
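A minimal, self-contained illustration of that mechanism (none of this is taken from the questioner's code):

#include <exception>
#include <string>

int main() {
    // This string owns a heap buffer allocated through operator new,
    // just like the std::string frames in the Valgrind report.
    std::string s(15, 'x');

    // std::terminate() ends the process via std::abort() without unwinding
    // the stack, so the destructor of s never runs and its buffer shows up
    // as (possibly) lost in the leak report.
    std::terminate();
}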

Memory leaks and errors in libraries detected by Valgrind

I'm very new to C++, but I got my hands on Valgrind and played with it during the day. I got my code cleaned up nicely, except for a part that uses an external library (XQilla). What I can see there is both a memory leak and an error. Does this mean I should look at a different library, or is it common that libraries have errors and small leaks that I shouldn't care about?
Valgrind output
==8779== Syscall param sendmsg(mmsg[0].msg_hdr) points to uninitialised byte(s)
==8779== at 0x6065829: sendmmsg (sendmmsg.c:32)
==8779== by 0x767C8FD: __libc_res_nsend (res_send.c:1140)
==8779== by 0x7679D48: __libc_res_nquery (res_query.c:226)
==8779== by 0x767A6F8: __libc_res_nsearch (res_query.c:582)
==8779== by 0x746CB57: _nss_dns_gethostbyname4_r (dns-host.c:314)
==8779== by 0x6035ADF: gaih_inet (getaddrinfo.c:849)
==8779== by 0x6039913: getaddrinfo (getaddrinfo.c:2473)
==8779== by 0x50B3F06: xercesc_3_1::UnixHTTPURLInputStream::UnixHTTPURLInputStream(xercesc_3_1::XMLURL const&, xercesc_3_1::XMLNetHTTPInfo const*) (in /usr/lib/x86_64-linux-gnu/libxerces-c-3.1.so)
==8779== by 0x50B3A44: xercesc_3_1::SocketNetAccessor::makeNew(xercesc_3_1::XMLURL const&, xercesc_3_1::XMLNetHTTPInfo const*) (in /usr/lib/x86_64-linux-gnu/libxerces-c-3.1.so)
==8779== by 0x4F8813A: xercesc_3_1::XMLURL::makeNewStream() const (in /usr/lib/x86_64-linux-gnu/libxerces-c-3.1.so)
==8779== by 0x4FEC737: xercesc_3_1::ReaderMgr::createReader(xercesc_3_1::InputSource const&, bool, xercesc_3_1::XMLReader::RefFrom, xercesc_3_1::XMLReader::Types, xercesc_3_1::XMLReader::Sources, bool, unsigned long) (in /usr/lib/x86_64-linux-gnu/libxerces-c-3.1.so)
==8779== by 0x4FE6758: xercesc_3_1::IGXMLScanner::scanReset(xercesc_3_1::InputSource const&) (in /usr/lib/x86_64-linux-gnu/libxerces-c-3.1.so)
==8779== Address 0x7feffdad0 is on thread 1's stack
==8779==
Hello
Good
Bye
==8779==
==8779== HEAP SUMMARY:
==8779== in use at exit: 11 bytes in 1 blocks
==8779== total heap usage: 8,144 allocs, 8,143 frees, 1,957,496 bytes allocated
==8779==
==8779== 11 bytes in 1 blocks are definitely lost in loss record 1 of 1
==8779== at 0x4C2A879: operator new(unsigned long) (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==8779== by 0x4FEC508: xercesc_3_1::MemoryManagerImpl::allocate(unsigned long) (in /usr/lib/x86_64-linux-gnu/libxerces-c-3.1.so)
==8779== by 0x50B87D5: xercesc_3_1::IconvGNULCPTranscoder::transcode(unsigned short const*, xercesc_3_1::MemoryManager*) (in /usr/lib/x86_64-linux-gnu/libxerces-c-3.1.so)
==8779== by 0x50B3C6D: xercesc_3_1::UnixHTTPURLInputStream::UnixHTTPURLInputStream(xercesc_3_1::XMLURL const&, xercesc_3_1::XMLNetHTTPInfo const*) (in /usr/lib/x86_64-linux-gnu/libxerces-c-3.1.so)
==8779== by 0x50B3A44: xercesc_3_1::SocketNetAccessor::makeNew(xercesc_3_1::XMLURL const&, xercesc_3_1::XMLNetHTTPInfo const*) (in /usr/lib/x86_64-linux-gnu/libxerces-c-3.1.so)
==8779== by 0x4F8813A: xercesc_3_1::XMLURL::makeNewStream() const (in /usr/lib/x86_64-linux-gnu/libxerces-c-3.1.so)
==8779== by 0x4FEC737: xercesc_3_1::ReaderMgr::createReader(xercesc_3_1::InputSource const&, bool, xercesc_3_1::XMLReader::RefFrom, xercesc_3_1::XMLReader::Types, xercesc_3_1::XMLReader::Sources, bool, unsigned long) (in /usr/lib/x86_64-linux-gnu/libxerces-c-3.1.so)
==8779== by 0x4FE6758: xercesc_3_1::IGXMLScanner::scanReset(xercesc_3_1::InputSource const&) (in /usr/lib/x86_64-linux-gnu/libxerces-c-3.1.so)
==8779== by 0x4FE0D03: xercesc_3_1::IGXMLScanner::scanDocument(xercesc_3_1::InputSource const&) (in /usr/lib/x86_64-linux-gnu/libxerces-c-3.1.so)
==8779== by 0x50053B8: xercesc_3_1::XMLScanner::scanDocument(unsigned short const*) (in /usr/lib/x86_64-linux-gnu/libxerces-c-3.1.so)
==8779== by 0x5008931: xercesc_3_1::XMLScanner::scanDocument(char const*) (in /usr/lib/x86_64-linux-gnu/libxerces-c-3.1.so)
==8779== by 0x5027474: xercesc_3_1::AbstractDOMParser::parse(char const*) (in /usr/lib/x86_64-linux-gnu/libxerces-c-3.1.so)
==8779==
==8779== LEAK SUMMARY:
==8779== definitely lost: 11 bytes in 1 blocks
==8779== indirectly lost: 0 bytes in 0 blocks
==8779== possibly lost: 0 bytes in 0 blocks
==8779== still reachable: 0 bytes in 0 blocks
==8779== suppressed: 0 bytes in 0 blocks
==8779==
==8779== For counts of detected and suppressed errors, rerun with: -v
==8779== Use --track-origins=yes to see where uninitialised values come from
==8779== ERROR SUMMARY: 5 errors from 2 contexts (suppressed: 2 from 2)
If the library is indeed leaking memory, it would be wise to resolve it as the leak could eventually impact the application. Typical symptoms of a memory leak include a hung process, or a process that terminates abruptly (such as when the Linux out-of-memory killer goes to work).
Looking for another library may be viable. If the library maintainers can be reached, it would be good to bring it to their attention. And, if it is open source, it would be even more awesome to track it down and submit a fix.
One thing to consider here though. Valgrind is going to flag any memory that is not eventually released as a leak. The library may be making one-time allocations, such as creating singletons. If that's the case, then it is really a non-problem.
So, things to try:
Track down what interactions with the library create the leaks
Confirm that those interactions actually leak - meaning that they make more than one allocation over time
Check the library for operations that may clean up the memory
Verify the use of the library (for example, make sure a cleanup method/function that should be called is actually being called; see the sketch after this list)
Contact the maintainers
Track it down and fix it (if it's opensource)
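As a concrete illustration of the "verify the use of the library" point above, Xerces-C (which XQilla is built on) has an explicit library-wide cleanup call. This is only a sketch of the expected usage pattern, not a claim that it removes the 11-byte leak shown above:

#include <xercesc/util/PlatformUtils.hpp>

int main() {
    // Library-wide initialisation required before any parsing.
    xercesc::XMLPlatformUtils::Initialize();

    // ... create parsers, parse documents, run XQilla queries here ...

    // Matching cleanup call; skipping it leaves library-global allocations
    // alive at exit, which Valgrind then reports.
    xercesc::XMLPlatformUtils::Terminate();
    return 0;
}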
The errors referring to pointers to uninitialised bytes do not necessarily mean anything is wrong, just that the library passed memory to a system call without initialising every byte of it. If the program runs into segmentation faults, those errors could help track them down. Otherwise, it could be totally normal. For example, the library may be pre-allocating buffers for later use.
Again, I would consider mentioning them to maintainers, but those errors by themselves are not too worrisome.