I am executing simple TensorFlow code to load a graph def into a session, as shown below:
tensorflow::Session* session;
tensorflow::SessionOptions options;
tensorflow::GraphDef graph_def;
tensorflow::NewSession(options, &session);
ReadBinaryProto(tensorflow::Env::Default(), "/home/ashok/eclipseWorkspace/faceRecognition-x86_64/Data/models/optimized_facenet.pb", &graph_def);
session->Create(graph_def);
But when I run Valgrind as shown below:
valgrind --leak-check=full --show-leak-kinds=all --vex-guest-max-insns=25 ./faceRecognition-x86_64 -r -i
I get errors like the following:
==12366== 16,000 bytes in 1 blocks are still reachable in loss record 47,782 of 47,905
==12366== at 0x4C2E19F: operator new(unsigned long) (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==12366== by 0xBF875DC: std::vector<tensorflow::CostModel::MemUsage, std::allocator<tensorflow::CostModel::MemUsage> >::reserve(unsigned long) (in /usr/lib/libtensorflow_cc.so)
==12366== by 0xBF90128: tensorflow::CostModel::InitFromGraph(tensorflow::Graph const&) (in /usr/lib/libtensorflow_cc.so)
==12366== by 0xBEE48D3: tensorflow::SimpleGraphExecutionState::InitBaseGraph(tensorflow::BuildGraphOptions const&) (in /usr/lib/libtensorflow_cc.so)
==12366== by 0xBEE52CF: tensorflow::SimpleGraphExecutionState::MakeForBaseGraph(tensorflow::GraphDef*, tensorflow::SimpleGraphExecutionStateOptions const&, std::unique_ptr<tensorflow::SimpleGraphExecutionState, std::default_delete<tensorflow::SimpleGraphExecutionState> >*) (in /usr/lib/libtensorflow_cc.so)
==12366== by 0xBE68B9D: tensorflow::DirectSession::MaybeInitializeExecutionState(tensorflow::GraphDef const&, bool*) (in /usr/lib/libtensorflow_cc.so)
==12366== by 0xBE68CF9: tensorflow::DirectSession::ExtendLocked(tensorflow::GraphDef const&) (in /usr/lib/libtensorflow_cc.so)
==12366== by 0xBE68FC7: tensorflow::DirectSession::Create(tensorflow::GraphDef const&) (in /usr/lib/libtensorflow_cc.so)
==12366== by 0x26B899: TensorFlow::initializeRecognition() (in /home/ashok/eclipseWorkspace/faceRecognition-x86_64/Debug/faceRecognition-x86_64)
==12366== by 0x24197D: RecognitionWithImages::RecognitionWithImages() (in /home/ashok/eclipseWorkspace/faceRecognition-x86_64/Debug/faceRecognition-x86_64)
==12366== by 0x12F27C: main (in /home/ashok/eclipseWorkspace/faceRecognition-x86_64/Debug/faceRecognition-x86_64)
These types of errors are also generated when I call session->Run().
Because of the above issues, the memory needed to run the program keeps increasing over time, and the application eventually crashes due to insufficient memory.
Did you close the session? You also need to delete the Session pointer:
session->Close();
delete session;
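A minimal sketch of how this fits together (assuming the standard TensorFlow C++ API; the model path is illustrative): tying the session to a std::unique_ptr ensures the delete always happens, and Close() is still called explicitly before destruction.
#include <memory>
#include "tensorflow/core/platform/env.h"
#include "tensorflow/core/public/session.h"
void runModel() {
    tensorflow::SessionOptions options;
    tensorflow::Session* raw = nullptr;
    TF_CHECK_OK(tensorflow::NewSession(options, &raw));
    std::unique_ptr<tensorflow::Session> session(raw);  // owns the session; delete happens automatically
    tensorflow::GraphDef graph_def;
    TF_CHECK_OK(tensorflow::ReadBinaryProto(tensorflow::Env::Default(),
                                            "/path/to/optimized_facenet.pb", &graph_def));
    TF_CHECK_OK(session->Create(graph_def));
    // ... session->Run(...) as usual ...
    TF_CHECK_OK(session->Close());  // release the session's resources before it is deleted
}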
Related
I'm using ASan (AddressSanitizer) to detect memory issues. When the program stops, ASan complains about the following:
==102121==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 537 byte(s) in 1 object(s) allocated from:
#0 0x75cb48 in operator new(unsigned long) (/home/app+0x75cb48)
#1 0x7dca83 in __gnu_cxx::new_allocator<char>::allocate(unsigned long, void const*) /opt/rh/devtoolset-7/root/usr/include/c++/7/ext/new_allocator.h:111
#2 0x7ce766 in std::string::_Rep::_S_create(unsigned long, unsigned long, std::allocator<char> const&) /opt/rh/devtoolset-7/root/usr/include/c++/7/bits/basic_string.tcc:1057
#3 0x7cc54d in std::string::_Rep::_M_clone(std::allocator<char> const&, unsigned long) (/home/app+0x7cc54d)
#4 0x7c1f2a in std::string::reserve(unsigned long) /opt/rh/devtoolset-7/root/usr/include/c++/7/bits/basic_string.tcc:960
#5 0x7fa0a639c6f5 in std::basic_stringbuf<char, std::char_traits<char>, std::allocator<char> >::overflow(int) (/lib64/libstdc++.so.6+0x9b6f5)
Direct leak of 24 byte(s) in 1 object(s) allocated from:
#0 0x75cec8 in operator new(unsigned long, std::nothrow_t const&) (/home/app+0x75cec8)
#1 0x7fa0a635df1d in __cxa_thread_atexit (/lib64/libstdc++.so.6+0x5cf1d)
Indirect leak of 24 byte(s) in 1 object(s) allocated from:
#0 0x75cec8 in operator new(unsigned long, std::nothrow_t const&) (/home/app+0x75cec8)
#1 0x7fa0a635df1d in __cxa_thread_atexit (/lib64/libstdc++.so.6+0x5cf1d)
I've seen on the ASan page that this can come from the standard library being statically linked. In my case, however, it is linked dynamically.
The application is compiled with devtoolset-7 on RHEL.
Do you have any idea where the leak comes from?
You can get more info than
#0 0x75cb48 in operator new(unsigned long) (/home/app+0x75cb48)
by using llvm-symbolizer.
Download it, and set the environment variable
ASAN_SYMBOLIZER_PATH=/usr/where/ever/the/binary/is
If you are sure that the leak is a false alarm, you can use a suppression file:
create a suppression text file and add to it: leak: __cxa_thread_atexit
Set environment variable
LSAN_OPTIONS=suppressions=path/to/suppr.txt
and then run your app.
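Putting the two steps together (the file name, paths, and binary name are illustrative): create a file lsan_suppressions.txt containing the single line
leak:__cxa_thread_atexit
and then run
export ASAN_SYMBOLIZER_PATH=/usr/bin/llvm-symbolizer
export LSAN_OPTIONS=suppressions=./lsan_suppressions.txt
./app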
http://clang.llvm.org/docs/AddressSanitizer.html#symbolizing-the-reports
Valgrind, a well-known memory analysis tool, reports an Invalid Read in OCIStmtPrepare, an Oracle C API function. The same can be observed in several other Oracle C API functions.
Please refer to the stack trace below.
According to my observations and understanding, the application creates a buffer of 317 bytes. When this buffer is passed to the Oracle library, the library copies it using the __intel_new_memcpy function. That function reads 8 bytes starting at offset 312, i.e. up to byte 320, even though the allocated block is only 317 bytes.
Could you please confirm whether this behaviour is correct? What is going wrong here?
==22195== Invalid read of size 8
==22195== at 0x68CD2D9: __intel_new_memcpy (in /x02/app/oracle/product/11.2.0/client_1/lib/libclntsh.so.11.1)
==22195== by 0x5D84158: kpurclientparse (in /x02/app/oracle/product/11.2.0/client_1/lib/libclntsh.so.11.1)
==22195== by 0x5D878DE: kpureq (in /x02/app/oracle/product/11.2.0/client_1/lib/libclntsh.so.11.1)
==22195== by 0x5D607FA: OCIStmtPrepare (in /x02/app/oracle/product/11.2.0/client_1/lib/libclntsh.so.11.1)
==22195== by 0x4099E0: DBCursor::Parse(char const*) (OCICPP.C:1020)
==22195== by 0x40CE29: DBCon::NewCursor(char const*, int) (OCICPP.C:753)
==22195== by 0x4047A6: main (main.cpp:59)
==22195== Address 0xa2e7e68 is 312 bytes inside a block of size 317 alloc'd
==22195== at 0x4C26E1C: operator new[](unsigned long) (vg_replace_malloc.c:305)
==22195== by 0x4EBD00F: String::Set(char const*, unsigned int) (String.cpp:544)
==22195== by 0x4EBD169: String::Set(char const*) (String.cpp:512)
==22195== by 0x4EBD188: String::operator=(char const*) (String.cpp:590)
==22195== by 0x404784: main (main.cpp:55)
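A hypothetical sketch of the pattern I think is being reported (this is not Oracle's code, just my illustration): a copy routine that processes the source in 8-byte words will, for a 317-byte block, issue a final 8-byte read at offset 312 and thereby touch 3 bytes it does not own.
#include <cstddef>
#include <cstdint>
#include <cstring>
// Illustrative word-wise copy: handles the source 8 bytes at a time,
// rounding the length up to a whole number of words.
void word_copy(char* dst, const char* src, std::size_t n) {
    for (std::size_t off = 0; off < n; off += 8) {
        std::uint64_t word;
        std::memcpy(&word, src + off, 8);   // at off == 312 with n == 317 this reads 3 bytes past the end
        std::memcpy(dst + off, &word, 8);
    }
}
int main() {
    char* sql = new char[317];              // block of size 317, as in the Valgrind report
    std::memset(sql, 'x', 317);
    char out[320];
    word_copy(out, sql, 317);               // Memcheck: "Invalid read ... 312 bytes inside a block of size 317"
    delete[] sql;
}
Memcheck flags the final read because its last three bytes lie outside the allocation, regardless of whether those bytes are ever used.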
I use the log4cxx logging library in a project and Valgrind Memory Analyzer (in Qt Creator) to check for memory leaks.
It appears to me that the log4cxx::Level::getError() and log4cxx::Level::getFatal() leak 18 bytes of memory.
Here is the relevant part of the Valgrind dump:
18 bytes in 1 blocks are possibly lost in loss record 157 of 409 in OLogger::getLogLevel(char const*) in XXX/Infrastructure/Logging/OLogger.cpp:51
1: operator new(unsigned int) in /usr/lib/valgrind/vgpreload_memcheck-x86-linux.so
2: std::string::_Rep::_S_create(unsigned int, unsigned int, std::allocator<char> const&) in /usr/lib/i386-linux-gnu/libstdc++.so.6.0.19
3: char* std::string::_S_construct<char const*>(char const*, char const*, std::allocator<char> const&, std::forward_iterator_tag) in /usr/lib/i386-linux-gnu/libstdc++.so.6.0.19
4: std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(char const*, std::allocator<char> const&) in /usr/lib/i386-linux-gnu/libstdc++.so.6.0.19
5: log4cxx::Level::getError() in /usr/lib/liblog4cxx.so.10.0.0
6: OLogger::getLogLevel(char const*) in XXX/Infrastructure/Logging/OLogger.cpp:51
Now the question is whether the library intentionally leaks memory in a few places. VLD, for example, had problems in the past with statically allocated memory. Maybe the logging system wants to stay alive as long as possible to report errors, which leads to memory leaks. That's just my speculation...
Can anyone verify that leak? Is it there by design? What do I have to do to remove it, if possible?
Thank you :-)
According to a short discussion at apache.org, the leak is present in my current version (see the log in my question).
The leak needs further investigation, for example in the development branch of log4cxx.
I'm posting this as an answer because I now know that the leak does not stem from my own code.
In my project I am using jsoncpp, Boost, and many other libraries. When I run Valgrind on my program, it shows possible memory leaks in string creation in many places, including the jsoncpp and Boost libraries.
I have pasted the Valgrind error snippets below:
==5506== 427,198 bytes in 489 blocks are possibly lost in loss record 8,343 of 8,359
==5506== at 0x4C2B1C7: operator new(unsigned long) (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==5506== by 0x9360A88: std::string::_Rep::_S_create(unsigned long, unsigned long, std::allocator<char> const&) (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.16)
==5506== by 0x55EB0BD: char* std::string::_S_construct<char const*>(char const*, char const*, std::allocator<char> const&, std::forward_iterator_tag) (basic_string.tcc:140)
==5506== by 0x936261C: std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(char const*, unsigned long, std::allocator<char> const&) (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.16)
==5506== by 0x63FEB99: Json::Value::asString() const (json_value.cpp:611)
My question is: are these errors valid, or are they false positives?
Thanks in advance.
To be completely sure, you can do a looped test and check for memory hogging.
We had similar messages and they turned out to be false positives, so we added them to the suppression list.
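One way to do such a looped test (a rough sketch; the JSON document is a placeholder, and depending on the installation the header may live under jsoncpp/json/json.h): run the suspect code in a tight loop and watch the process's resident memory. A genuine leak grows without bound; a Valgrind false positive does not.
#include <json/json.h>
#include <string>
int main() {
    const std::string doc = "{\"name\": \"value\"}";
    for (int i = 0; i < 1000000; ++i) {
        Json::Value root;
        Json::Reader reader;
        if (reader.parse(doc, root)) {
            // Exercise the code path from the Valgrind report (Json::Value::asString).
            std::string s = root["name"].asString();
            (void)s;
        }
    }
    // Watch RSS (e.g. with top or ps) while this runs: it should stay flat.
    return 0;
}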
Valgrind has some heuristics that reduce the number of 'false positive' possibly-lost reports.
Among others, it has a heuristic to better detect std::string.
Use the following option to activate some of the heuristics:
--leak-check-heuristics=heur1,heur2,... which heuristics to use for
improving leak search false positive [none]
where heur is one of:
stdstring length64 newarray multipleinheritance all none
Note that in the upcoming 3.11 release, the default for this option
has been changed from 'none' to 'all'.
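For example (on Valgrind releases older than 3.11, where the default is still 'none'):
valgrind --leak-check=full --leak-check-heuristics=all ./myapp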
I have been digging for several days through my 1500 lines of code to find those 15 bytes (possibly lost), to no avail.
Valgrind does not provide enough data, even though I ran the following command:
valgrind --leak-check=full --show-reachable=yes --track-origins=yes --show-below-main=yes ./myapp
I get the following report block:
==3283== 15 bytes in 1 blocks are possibly lost in loss record 1 of 4
==3283== at 0x402842F: operator new(unsigned int) (vg_replace_malloc.c:255)
==3283== by 0x40D2A83: std::string::_Rep::_S_create(unsigned int, unsigned int, std::allocator<char> const&) (in /usr/lib/i386-linux-gnu/libstdc++.so.6.0.16)
==3283== by 0x40D4CF7: char* std::string::_S_construct<char const*>(char const*, char const*, std::allocator<char> const&, std::forward_iterator_tag) (in /usr/lib/i386-linux-gnu/libstdc++.so.6.0.16)
==3283== by 0x40D4E65: std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(char const*, std::allocator<char> const&) (in /usr/lib/i386-linux-gnu/libstdc++.so.6.0.16)
==3283== by 0x804DB22: _GLOBAL__sub_I__ZN7processC2Ei7in_addr (main.cpp:1304)
==3283== by 0x8050131: __libc_csu_init (in /home/username/myapp-write/src/myapp)
==3283== by 0x41A60A9: __libc_start_main (libc-start.c:185)
==3283== by 0x80499C0: ??? (in /home/username/myapp-write/src/myapp)
==3283==
Could anyone please tell me how to detect the faulty line?
If your code invokes alternative termination strategies that bypass stack unwinding, the destructors of your automatic variables will not be called.
std::terminate, which you mentioned in a comment you are using, is unfortunately one such case. The default termination handler invokes std::abort, which does not run destructors for objects of automatic, thread-local, or static storage duration, so any such variables that own dynamically allocated memory will leak like a sieve.
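A minimal sketch of the effect (hypothetical code, not your program):
#include <exception>
#include <string>
int main() {
    std::string s(100, 'x');  // heap buffer owned by an automatic variable
    std::terminate();         // default handler calls std::abort(): no stack unwinding,
                              // so s is never destroyed and its buffer shows up in Valgrind's leak report
}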
Avoid termination in this fashion unless you have a very good reason for it, and in general there are very few good reasons for it.
Best of luck.