Visual Leak Detector: WARNING! There are memory leaks!
---------- 1 block (#3791210) consuming 24 bytes ----------
0x1000234A (File and line number not available): Visualleakdetector
0x102095D6 (File and line number not available): malloc_dbg
0x102094B9 (File and line number not available): malloc_dbg
0x10209409 (File and line number not available): malloc
The stack trace goes on, but no file names are displayed. What could be the reason?
I'd imagine that you don't have the debug symbols for the program you're analysing (pdb files).
Is this a program you have the source code for? If so, I'd suggest checking that your project is set to generate debug symbols, rebuilding locally, and rerunning the program.
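For illustration, here is a minimal, hypothetical leak (not the asker's code). When it is built with debug information (/Zi on the compiler, /DEBUG on the linker, so a matching .pdb is produced), VLD can report the file and line of the malloc call; without the .pdb, the frames degrade to "File and line number not available" exactly as in the output above.

#include <vld.h>     // Visual Leak Detector header; include it in one translation unit
#include <cstdlib>

int main() {
    // 24 bytes allocated and never freed, mirroring the 24-byte block reported above.
    void* leaked = std::malloc(24);
    (void)leaked;
    return 0;
}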
I'm using a Teensy 3.2 and cannot build my Teensy code: two warnings result in the build failing with exit status 1.
Warning 1 - .pio/build/teensy31/firmware.elf section `.text' will not fit in region `FLASH'
Warning 2 - region `FLASH' overflowed by 86948 bytes
Error - collect2: error: ld returned 1 exit status
From what I read, it basically means that the file is too large, but my src folder is only 40129 bytes and the Teensy 3.2 flash size is 262144 bytes, as written in the platforms/teensy/boards/teensy31.json file.
Even the build output begins with:
Verbose mode can be enabled via -v, --verbose option
CONFIGURATION: https://docs.platformio.org/page/boards/teensy/teensy31.html
PLATFORM: Teensy (4.16.0) > Teensy 3.1 / 3.2
HARDWARE: MK20DX256 72MHz, 64KB RAM, 256KB Flash
DEBUG: Current (jlink) External (jlink)
PACKAGES:
- framework-arduinoteensy # 1.156.0 (1.56)
- toolchain-gccarmnoneeabi # 1.50401.190816 (5.4.1)
The src folder contains one cpp file (with the setup and loop functions) plus 4 header files around it providing the functions used in the cpp file. Also, the 2 warnings in the .h files are unrelated to the issue.
Tree for more clarity
"From what I read it basically means that the file is too large but my src folder is 40129 bytes and Teensy 3.2 flash size is 262144 bytes"
The size of your src folder does not have much to do with the size of the generated program. If you are interested in where all that memory goes, you can use an ELF viewer.
For example, there is an online viewer here: http://www.sunshine2k.de/coding/javascript/onlineelfviewer/onlineelfviewer.html.
Upload your elf file and scroll down to the symbol table section to find out what is eating up that huge amount of memory.
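To illustrate why the two sizes are unrelated, here is a hypothetical sketch (not the asker's code): a few hundred bytes of source that nevertheless consumes 64 KB of flash, because the const table, along with every framework and library function the linker pulls in, ends up in the FLASH region.

#include <Arduino.h>

// 16384 entries * 4 bytes = 65536 bytes of flash for this one const table alone,
// no matter how small the source file itself is.
const uint32_t lookup_table[16384] = {1, 2, 3}; // remaining entries default to 0

void setup() {
    Serial.begin(115200);
}

void loop() {
    // Reference the table so the linker cannot discard it as unused.
    Serial.println(lookup_table[millis() % 16384]);
    delay(1000);
}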
The core dump files generated when my app crashes are systematically short by about 2692 bytes; when loading them in gdb I get:
BFD: Warning: [...]core is truncated: expected core file size >= 117628928, found: 117626236.
Sizes vary a bit but the missing part (difference between the expected size and what's found) is always between 2690 and 2693 bytes.
Before starting the app I force:
ulimit -c unlimited
There's plenty of space and time for the system to write the file.
Other details that may be relevant:
- The target is an APF27.
- It runs Linux kernel 2.6.38
- The core file is generated on a SD card.
- The on-disk size of the core file matches the size gdb reports as found.
Any hint would be appreciated.
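As a sanity check on the ulimit -c unlimited step above, here is a small sketch (my addition, not part of the original question) that reads the core-size limit back from inside the process; a finite limit, for example when the app is started by an init script instead of the interactive shell, would also lead to truncated core files.

#include <sys/resource.h>
#include <cstdio>

int main() {
    struct rlimit rl;
    if (getrlimit(RLIMIT_CORE, &rl) != 0) {
        std::perror("getrlimit");
        return 1;
    }
    if (rl.rlim_cur == RLIM_INFINITY)
        std::printf("core size limit: unlimited\n");
    else
        std::printf("core size limit: %llu bytes\n", (unsigned long long)rl.rlim_cur);
    return 0;
}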
I am working with OpenCV 2.4 and SVM classification, and I need to load a big dataset (about 400 MB of data) in C++. I've been able to save this dataset to an XML file, but I am unable to load it afterwards. Indeed, I receive the following message:
OpenCV Error: Insufficient memory (Failed to allocate 408909812 bytes) in OutOfMemoryError, file (my opencv2.4 directory)modules\core\src\alloc.cpp, line 52 - error: (-4)
How can I increase the available memory (I have plenty of free RAM)?
Thanks a lot!
EDIT:
Here is the place where the problem appears. The code works when I load a smaller file:
std::cout << "ok 0" << std::endl;
FileStorage XML_Data(Filename, FileStorage::READ);
XML_Data["Data"] >> m_Data_Matrix;
XML_Data.release();
std::cout << "ok 1" << std::endl;
EDIT 2:
Problem solved: the solution was to compile my application and OpenCV 2.4.5 as 64-bit. I installed a 64-bit version of MinGW, built OpenCV with this new version (using CMake to configure it), and then changed the compiler used by Code::Blocks.
You may find these links useful: http://forums.codeblocks.org/index.php?topic=13016.0 and http://www.drangon.org/mingw.
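As a quick way to confirm the rebuild really produced a 64-bit binary (a small sketch of mine, not from the original post), print the pointer size at startup; a 32-bit process only has a few gigabytes of address space, which is why a single ~400 MB contiguous allocation on top of OpenCV's own buffers can already fail there.

#include <iostream>
#include <climits>

int main() {
    // Expect "64-bit build" after rebuilding the application and OpenCV with 64-bit MinGW.
    std::cout << "Pointer size: " << sizeof(void*) * CHAR_BIT << "-bit build" << std::endl;
    return 0;
}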
I'm using the latest version of dbxtool (Solaris Studio) on RHEL 6.1.
I'm working through the tutorial example here using their example code, but when trying to run dbxtool on the core file generated, I get the following:
(dbx) cd /users/rory/Desktop/debug_tutorial
(dbx) debug /users/rory/Desktop/debug_tutorial/a.out core.a.out.10665
Reading a.out
dbx: warning: The corefile was truncated.
It should have been 1765376 bytes long (is only 483328)
Because of this, some functionality will be missing from dbx.
(See `help core')
core file header read successfully
Reading ld-linux-x86-64.so.2
Reading libstdc++.so.6
Reading libm.so.6
Reading libgcc_s.so.1
Reading libc.so.6
program terminated by signal SEGV (Segmentation fault)
dbx: core file read error: address 0x3faff579bc not available
dbx: attempt to fetch registers failed - stack corrupted
The first warning is about the core file being truncated (it should have been 1765376 bytes long but is only 483328), yet I am able to generate other, larger core files in the same directory, so I am not sure why this one is being truncated.
I've also gone through the tutorial here on removing core file size limits, but with no luck.
This is a known dbx problem on RHEL 6 (CR 7077948). The core file size is miscalculated when a data segment has a memory size (p_memsz) larger than its file size (p_filesz) in the ELF program headers. This problem has been identified and fixed in dbx 7.9.
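To see whether a given core file has segments with that property, the ELF program headers can be dumped with a small diagnostic like the one below (my own sketch, not part of the dbx fix); it assumes a 64-bit ELF core and prints p_filesz next to p_memsz for each segment.

#include <elf.h>
#include <cstdio>

int main(int argc, char** argv) {
    if (argc != 2) {
        std::fprintf(stderr, "usage: %s <corefile>\n", argv[0]);
        return 1;
    }
    std::FILE* f = std::fopen(argv[1], "rb");
    if (!f) {
        std::perror("fopen");
        return 1;
    }
    Elf64_Ehdr ehdr;
    if (std::fread(&ehdr, sizeof ehdr, 1, f) != 1) {
        std::perror("fread");
        return 1;
    }
    for (int i = 0; i < ehdr.e_phnum; ++i) {
        Elf64_Phdr phdr;
        std::fseek(f, (long)(ehdr.e_phoff + i * sizeof phdr), SEEK_SET);
        if (std::fread(&phdr, sizeof phdr, 1, f) != 1)
            break;
        std::printf("segment %2d  type=0x%x  p_filesz=%10llu  p_memsz=%10llu%s\n",
                    i, phdr.p_type,
                    (unsigned long long)phdr.p_filesz,
                    (unsigned long long)phdr.p_memsz,
                    phdr.p_memsz > phdr.p_filesz ? "  <-- memsz > filesz" : "");
    }
    std::fclose(f);
    return 0;
}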
I have a native release dll that is built with symbols. There is a post build step that modifies the dll. The post build step does some compression and probably appends some data. The pdb file is still valid however neither WinDbg nor Visual Studio 2008 will load the symbols for the dll after the post build step. What bits in either the pdb file or the dll do we need to modify to get either WinDbg or Visual Studio to load the symbols when it loads a dump in which our release dll is referenced?
Is it filesize that matters? A checksum or hash? A timestamp?
Modify the dump? Or modify the pdb? Modify the dll before it is shipped?
(We know the pdb is valid because we are able to use it to manually get symbol names for addresses in dump call stacks that reference the released dll. It's just a total pain in the *ss to do it by hand for every address in every call stack in all the threads.)
This post led me to chkmatch. On the processed dll, chkmatch shows this info:
Executable:
TimeDateStamp: 4a086937
Debug info: 2 ( CodeView )
TimeStamp: 4a086937 Characteristics: 0 MajorVer: 0 MinorVer: 0
Size: 123 RVA: 00380460 FileOffset: 00380460
CodeView signature: sUar
Debug information file:
Format: PDB 7.00
Result: unmatched (reason: incompatible debug information formats)
With the same pdb against the pre-processed dll, it reports this:
Executable:
TimeDateStamp: 4a086937
Debug info: 2 ( CodeView )
TimeStamp: 4a086937 Characteristics: 0 MajorVer: 0 MinorVer: 0
Size: 123 RVA: 00380460 FileOffset: 00380460
CodeView format: RSDS
Signature: (my guid) Age: 19
PdbFile: (my path)
Debug information file:
Format: PDB 7.00
Signature: (my matching guid) Age: 19
I opened up both versions of the dll and went to offset 00380460. In the original version, sure enough, I see the name of the pdb, but in the post-processed version there is no pdb info at that offset. I searched for the pdb path and found the exact same block, just at a different offset. Then I searched the original dll for the byte pattern "38 00 60 04". Looking at the same offset in the processed dll, I found the same bytes. So I adjusted the RVA and the file offset (located by matching the bytes). Bingo! Now chkmatch reports the exact same results for the processed dll as for the original (aside from the RVA and FileOffset that I changed).
Edit: Confirmed, now Visual Studio loads the symbols for dumps that reference the processed dll.
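For reference, the information chkmatch prints can also be read programmatically. The sketch below (my own illustration, not chkmatch's code, with error handling trimmed) maps a dll from disk, walks the debug data directory, and dumps each IMAGE_DEBUG_DIRECTORY entry plus the CodeView RSDS record (PDB path, GUID, age) that the debugger matches against the .pdb; the RVA and FileOffset fields it prints are exactly the ones adjusted above. Link with dbghelp.lib.

#include <windows.h>
#include <dbghelp.h>
#include <cstdio>

#pragma comment(lib, "dbghelp.lib")

// Layout of a CodeView RSDS record; this struct is an assumption based on the
// PDB 7.0 format, not something taken from a Windows header.
struct CvInfoPdb70 {
    DWORD signature;      // 'RSDS'
    GUID  guid;
    DWORD age;
    char  pdbFileName[1]; // NUL-terminated path follows
};

int main(int argc, char** argv) {
    if (argc != 2) { std::printf("usage: %s <dll>\n", argv[0]); return 1; }

    HANDLE file = CreateFileA(argv[1], GENERIC_READ, FILE_SHARE_READ, nullptr,
                              OPEN_EXISTING, 0, nullptr);
    HANDLE mapping = CreateFileMappingA(file, nullptr, PAGE_READONLY, 0, 0, nullptr);
    void* base = MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);

    PIMAGE_NT_HEADERS nt = ImageNtHeader(base);
    IMAGE_DATA_DIRECTORY dir =
        nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_DEBUG];

    // The debug directory is an array of IMAGE_DEBUG_DIRECTORY entries.
    auto* dbg = (PIMAGE_DEBUG_DIRECTORY)ImageRvaToVa(nt, base, dir.VirtualAddress, nullptr);
    DWORD count = dir.Size / sizeof(IMAGE_DEBUG_DIRECTORY);

    for (DWORD i = 0; i < count; ++i) {
        std::printf("Type=%lu RVA=%08lX FileOffset=%08lX Size=%lu\n",
                    dbg[i].Type, dbg[i].AddressOfRawData,
                    dbg[i].PointerToRawData, dbg[i].SizeOfData);
        if (dbg[i].Type == IMAGE_DEBUG_TYPE_CODEVIEW) {
            auto* cv = (CvInfoPdb70*)((BYTE*)base + dbg[i].PointerToRawData);
            if (cv->signature == 0x53445352) // 'RSDS' read as a little-endian DWORD
                std::printf("  RSDS age=%lu pdb=%s\n", cv->age, cv->pdbFileName);
        }
    }
    return 0;
}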
In WinDbg, try using .symopt +40; this will force the pdb to be loaded.