Tracking a memory leak - C++

The situation:
I see roughly a +120 kB increase each time I close the class, so when I close the class a few times the memory keeps growing, and I need to find out what is causing this.
I'm just looking for any good tip or trick for finding out what is not freed/released with Visual Studio 2010 - any ideas?
Here's more clearly what I do (very simplified):
#include <windows.h>

class cSomeClass
{
public:
    cSomeClass();
    ~cSomeClass();
    int  Initialize();
    void Deinitialize();
};

cSomeClass cCamera;

int main()
{
    Sleep(10000);
    // Do Init / Deinit to find out if we are freeing the memory
    while(1)
    {
        // Init camera
        if(cCamera.Initialize() == 0)
        {
            // Rest for a while
            Sleep(1500);
            cCamera.Deinitialize();
            // Rest for a while
            Sleep(1500);
        }
    }
}
I just made a small application to init/deinit the class object, to see in Task Manager whether the memory for this application returns to its starting value - but it does not; it keeps growing every time I initialize the cSomeClass, so I believe something is allocated in Initialize but not freed in Deinitialize.
Update:
I don't think it is simple memory growth: when the application is launched, it stabilizes after 10 seconds at, let's say, 1 MB of RAM; then, once the while(1) loop kicks in, every Initialize I call adds another +120 kB to the overall application memory (checked in Task Manager).
Update:
Thanks to Chad - I sniffed it out with _CrtDumpMemoryLeaks:
Detected memory leaks!
Dumping objects ->
{76} normal block at 0x003F4BC8, 32 bytes long.
Data: <Logitech QuickCa> 4C 6F 67 69 74 65 63 68 20 51 75 69 63 6B 43 61
{75} normal block at 0x003F4B80, 8 bytes long.
Data: < K? > 20 4B 3F 00 00 00 00 00
{74} normal block at 0x003F4B20, 32 bytes long.
Data: < K? K? > 80 4B 3F 00 C8 4B 3F 00 CD CD CD CD CD CD CD CD
{70} normal block at 0x003F4A30, 8 bytes long.
Data: < )i > 0C 29 69 00 00 00 00 00
Object dump complete.

The most straightforward method is to use the CRT debug-heap functions for memory tracking, such as _CrtDumpMemoryLeaks.
Using this in conjunction with _CrtMemCheckpoint can prove vital when tracking down stubborn leaks.
If you are using MFC, you can optionally define DEBUG_NEW, which adds extra tracking to the global new/delete operators and gives you the file and line number of each allocation that leaks. This can be extremely helpful as well, but it doesn't work with some forms of new (std::nothrow, for instance).
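As a rough sketch of how those two calls could bracket the Initialize/Deinitialize cycle from the question (the checkpoint placement here is an assumption, not the asker's actual code; cSomeClass is the asker's class, and this only does anything in debug builds):
#define _CRTDBG_MAP_ALLOC
#include <stdlib.h>
#include <crtdbg.h>

void CheckOneCycle(cSomeClass &cCamera)
{
    _CrtMemState before, after, diff;

    _CrtMemCheckpoint(&before);            // snapshot the debug heap

    if (cCamera.Initialize() == 0)
        cCamera.Deinitialize();

    _CrtMemCheckpoint(&after);             // snapshot again after one full cycle

    // Nonzero return means the snapshots differ, i.e. something allocated
    // during the cycle was never freed; dump what is still outstanding.
    if (_CrtMemDifference(&diff, &before, &after))
        _CrtMemDumpStatistics(&diff);
}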

I'm not sure exactly what you mean by "closing the class", but if you're able to run your code on Linux, Valgrind is always a great option for tracking memory leaks. Purify also works well on Windows, but it's $$.
Another approach is to try to stop the problem up front: Use smart pointers instead of raw pointers.
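For example, here is a minimal sketch of the camera class from the question with a (hypothetical) driver handle held by a smart pointer instead of a raw pointer, so Deinitialize is no longer the only thing standing between you and a leak:
#include <memory>

struct DriverHandle { /* hypothetical resource allocated by Initialize() */ };

class cSomeClass
{
public:
    int Initialize()
    {
        // reset() frees any previous handle before taking ownership of the new one.
        m_handle.reset(new DriverHandle());
        return 0;
    }

    void Deinitialize()
    {
        m_handle.reset();   // frees the handle now...
    }

private:
    // ...and even if Deinitialize() is never called, the unique_ptr's
    // destructor frees the handle when the object itself goes away.
    std::unique_ptr<DriverHandle> m_handle;
};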
Finally, make sure that you're actually seeing a leak and not just memory growth up to a certain plateau.

Related

How to retrieve underlying block device IO error

Consider a device in the system, something under /dev/hdd[sg][nvme]xx
Open the device, get the file descriptor, and start working with it (read(v)/write(v)/lseek, etc.). At some point you may get EIO. How do you retrieve the underlying error reported by the device driver?
EDIT001: in case it is impossible using the unistd functions, maybe there are other ways to work with block devices that can provide more low-level information, like sg_scsi_sense_hdr?
You can't get any more error detail out of the POSIX functions. You're on the right track with the SCSI generic stuff, though. But, boy, it's loaded with hair. Check out the example in sg3_utils of how to do a SCSI READ(16). This will let you look at the sense data when it comes back:
https://github.com/hreinecke/sg3_utils/blob/master/examples/sg_simple16.c
Of course, this technique doesn't work with NVMe drives. (At least, not to my knowledge).
One concept I've played with in the past is to use normal POSIX/libc block I/O functions like pread and pwrite until I get an EIO out. At that point, you can bring in the SCSI-generic versions to try to figure out what happened. In the ideal case, a pread or lseek/read fails with EIO. You then turn around and re-issue it using a SG READ (10) or (16). If it's not just a transient failure, this may return sense data that your application can use.
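A rough sketch of that fallback follows (assumptions: a SCSI disk exposed through the sg driver, a 512-byte block size, and a placeholder device path; this is a sketch, not production-ready error handling):
#include <cstdio>
#include <cstring>
#include <cerrno>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <scsi/sg.h>

// Re-issue a failed read of one block as a SCSI READ(10) through SG_IO so the
// sense data becomes visible. Returns 0 on success, -1 otherwise.
static int read_block_with_sense(int fd, unsigned int lba, unsigned char *buf)
{
    unsigned char cdb[10] = { 0x28, 0, 0, 0, 0, 0, 0, 0, 1, 0 };  // READ(10), 1 block
    cdb[2] = (lba >> 24) & 0xff;
    cdb[3] = (lba >> 16) & 0xff;
    cdb[4] = (lba >> 8) & 0xff;
    cdb[5] = lba & 0xff;

    unsigned char sense[32] = { 0 };
    sg_io_hdr_t io;
    std::memset(&io, 0, sizeof(io));
    io.interface_id    = 'S';
    io.cmdp            = cdb;
    io.cmd_len         = sizeof(cdb);
    io.dxfer_direction = SG_DXFER_FROM_DEV;
    io.dxferp          = buf;
    io.dxfer_len       = 512;
    io.sbp             = sense;
    io.mx_sb_len       = sizeof(sense);
    io.timeout         = 20000;                       // milliseconds

    if (ioctl(fd, SG_IO, &io) < 0)
        return -1;

    if (io.status != 0 && io.sb_len_wr > 13)
    {
        // Fixed-format sense: key in byte 2, ASC/ASCQ in bytes 12/13.
        std::fprintf(stderr, "sense key 0x%x, asc 0x%x, ascq 0x%x\n",
                     sense[2] & 0x0f, sense[12], sense[13]);
        return -1;
    }
    return io.status == 0 ? 0 : -1;
}

int main()
{
    alignas(512) unsigned char buf[512];              // O_DIRECT needs an aligned buffer
    int fd = open("/dev/sdb", O_RDONLY | O_DIRECT);   // placeholder device path
    if (fd < 0)
        return 1;

    // Normal path first; fall back to SG_IO only when the kernel reports EIO.
    if (pread(fd, buf, sizeof(buf), 0) < 0 && errno == EIO)
        read_block_with_sense(fd, 0, buf);

    close(fd);
    return 0;
}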
Here's an example, using the command-line sg_read program. I have an iSCSI attached disk that I'm reading and writing. On the target, I remove its LUN mapping. dd reports EIO:
# dd if=/dev/sdb of=/tmp/output bs=512 count=1 iflag=direct
dd: error reading ‘/dev/sdb’: Input/output error
but sg_read reports some more useful information:
[root@localhost src]# sg_read blk_sgio=1 bs=512 cdbsz=10 count=512 if=/dev/sdb odir=1 verbose=10
Opened /dev/sdb for SG_IO with flags=0x4002
read cdb: 28 00 00 00 00 00 00 00 80 00
duration=9 ms
reading: SCSI status: Check Condition
Fixed format, current; Sense key: Illegal Request
Additional sense: Logical unit not supported
Raw sense data (in hex):
70 00 05 00 00 00 00 0a 00 00 00 00 25 00 00 00
00 00
sg_read: SCSI READ failed
Some error occurred, remaining block count=512
0+0 records in
You can see the Logical unit not supported additional sense code in the above output, indicating that there's no such LU at the target.
Possible? Yes. But as you can see from the code in sg_simple16.c, it's not easy!

Why does the memory allocation number keep changing for the same memory leak in VS2015 MFC?

I'm trying to remove memory leaks in my app using VS2015 and MFC in VC++.
The answers to this similar question did not help: How to detect memory leak when memory allocation number isn't always same?
In Configuration Properties>C/C++>Code Generation,
I changed the option selected for Runtime Library from /MT to /MTd.
The app is not multi-threaded (AFAIK).
The memory allocation number changes between program runs, leading me to different places in the code.
The procedure I use worked well before:
I copy a memory allocation number from the previous memory leakage report, and start the app.
When it stops at the breakpoint, I go to the Watch Window, and paste it in the value column of _crtBreakAlloc.
(E.g. _crtBreakAlloc = 1171.)
Then run the program on until it breaks, and use the Call Stack to locate the unfreed object.
// Example of the memory report
...
{1171} client block at 0x088157A0, subtype c0, 224 bytes long.
f:\dd\vctools\vc7libs\ship\atlmfc\src\mfc\dumpcont.cpp(23) : atlTraceGeneral - a ProgressBar object at $088157A0, 224 bytes long
{223} normal block at 0x01E79600, 324 bytes long.
Data: < > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
...
// Example of the next report
...
{1112} client block at 0x08B30480, subtype c0, 224 bytes long.
f:\dd\vctools\vc7libs\ship\atlmfc\src\mfc\dumpcont.cpp(23) : atlTraceGeneral - a ProgressBar object at $08B30480, 224 bytes long
{223} normal block at 0x01F693D8, 324 bytes long.
Data: < > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
...
Note the memory allocation number "1171" changes to "1112", also affecting all the numbers above it.
This happens even after starting the PC with only VS2015 opened, and doing nothing between adjacent runs of the program. I keep each run of the program exactly the same each time, doing the same things, in the same order.
E.g. load the same file, press the same buttons/keys etc.
To remap operator new, the code has:
//stdafx.h
#define _CRTDBG_MAP_ALLOC
#include <stdlib.h>
#include <crtdbg.h>
#ifdef _DEBUG
#define DBG_NEW new ( _NORMAL_BLOCK , __FILE__ , __LINE__ )
#else
#define DBG_NEW new
#endif
// CImage.h : main header file for the CImage application
#define _CRTDBG_MAP_ALLOC // Supports memory leakage detection.
#include <stdlib.h>
#include <crtdbg.h>
#ifdef _DEBUG
#ifndef DBG_NEW
#define DBG_NEW new ( _NORMAL_BLOCK , __FILE__ , __LINE__ )
#endif
#endif
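One thing worth double-checking (an assumption about your project, since this part isn't shown above): in the usual CRT leak-detection recipe, DBG_NEW only affects allocations if new is actually redirected to it in each source file, after all the #include directives, for example:
// At the top of each .cpp file, after all #include directives
#ifdef _DEBUG
#define new DBG_NEW
#endif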
Any help will be very much appreciated. Thank you
There are many possible reasons for this. The code you've shown isn't specific enough to tell you more.
Do you spawn other threads in the init code? That can change the behaviour, because the sequence of execution isn't guaranteed.
When you terminate your program, your UI usually also saves its state (MFC-Next). This state is loaded again when you restart, and different UI settings may cause a shift.
Different data: even splitting a different command line, or any other different input string, into CString or std::string elements may cause a shift, because the allocations depend on the input.
Even when creating windows, some message processing might differ from one start of the program to the next, depending on when timer and paint messages arrive.
I am sure that there are other reasons that I missed... this list may grow...
In your case the allocation number points to a very early stage, and looking at the object name reported in your question, I am sure it has to do with the UI.
So it may help to clear all registry entries of your program and to make sure that the input data is really the same.
It should also help to break into your code at an earlier stage (i.e. at allocation 1100), step over and out, and look at what happens in your code while watching the allocation count in the watch window. There are only so many allocations in between, so I am sure you will find the code quickly and easily in a few steps.
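As an alternative to pasting the number into the watch window, the same break can be requested in code very early during startup; a minimal sketch (the function name is made up, and 1100 is just the earlier allocation number suggested above):
#include <crtdbg.h>

// Call this as early as possible, e.g. at the top of your app's InitInstance().
void BreakOnEarlyAllocation()
{
#ifdef _DEBUG
    // Break into the debugger when the CRT debug heap hands out allocation
    // number 1100; adjust to a number a little before the reported leak.
    _CrtSetBreakAlloc(1100);
#endif
}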

Memory leak when using anything involving wxFileName

I'm making use of wxWidgets in my program for directory management and for compressing/uncompressing collections of files. As I've been building my file system, I've noticed that I get memory leaks on every run. After a lot of testing, I realized that any time I use any function related to wxFileName, I get a memory leak. I'm using wxWidgets 3.0.1, and my standalone example is as follows:
#include <wx/filename.h>

int main()
{
    wxFileName::Mkdir("Test");
    return 0;
}
The result is the same if I make an instance of the wxFileName class.
How do I make wxWidgets not create a memory leak? I want to be able to package large collections of files into one file and read the data from them with various other libraries (by extracting the zip to a temporary folder and reading the data from there). I haven't been able to get any other library to zip/unzip entire folders, so I really need to be able to use wxWidgets without a memory leak.
I read in another thread that the Visual Studio debugger falsely identifies memory leaks, but I ran it through AQtime and it confirmed that there was indeed a memory leak.
The exact debug output involving the memory leak is as follows:
Detected memory leaks!
Dumping objects ->
{1087} normal block at 0x009B4BC0, 64 bytes long.
Data: <\+= d+= l+= t+= > 5C 2B 3D 00 64 2B 3D 00 6C 2B 3D 00 74 2B 3D 00
{1086} normal block at 0x009B4880, 772 bytes long.
Data: < > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
{1085} normal block at 0x009B4680, 28 bytes long.
Data: < H > 80 48 9B 00 C1 00 00 00 00 00 00 00 CD CD CD CD
Object dump complete.
After a bit of digging (it WOULD be the digging I did AFTER posting the question) I found that when you're using wxWidgets without creating a wxWidgets app object, you need to use the following two functions:
wxInitialize()
and
wxUninitialize()
So the fixed version of my code is as follows:
#include <wx/app.h>
#include <wx/filename.h>

int main()
{
    wxInitialize();
    wxFileName::Mkdir("Waka Waka");
    wxUninitialize();
    return 0;
}
If anyone is using wxWidgets purely for file management, I suggest either calling these functions in the constructor and destructor of whatever class handles your files, or at the beginning and end of your program's main loop.
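One way to follow that suggestion is a small RAII guard (a sketch; the class name is made up), so the init/uninit pair can never get out of balance:
#include <wx/app.h>
#include <wx/filename.h>

// Initializes the wxWidgets library for non-GUI use and tears it down
// again when the guard goes out of scope.
class WxLibraryGuard
{
public:
    WxLibraryGuard()  : m_ok(wxInitialize()) {}
    ~WxLibraryGuard() { if (m_ok) wxUninitialize(); }

    bool IsOk() const { return m_ok; }

private:
    bool m_ok;
};

int main()
{
    WxLibraryGuard wx;              // wxInitialize() happens here
    if (!wx.IsOk())
        return 1;

    wxFileName::Mkdir("Test");      // safe to use wxFileName now
    return 0;                       // wxUninitialize() runs in the destructor
}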

Setting endianness of VS debugger

I am using VS 2012 and programming in C++. I have a wide string
wchar_t *str = L"Hello world".
Technically I read the string from a file but I don't know if that makes a difference. When I look at str in the memory window it looks like this:
00 48 00 65 00 6c 00 6c 00 6f 00 2c 00 20 00 77 00 6f 00 72 00 6c 00 64 00 21 00
As you can see the string is stored in memory as big-endian.
When I hover my mouse over the string I get:
L"䠀攀氀氀漀Ⰰ 眀漀爀氀搀℀"
And after I reverse the endianness of str the memory looks like:
48 00 65 00 6c 00 6c 00 6f 00 2c 00 20 00 77 00 6f 00 72 00 6c 00 64 00 21 00 00
And the hover over looks like:
L"Hello, world!"
It seems that the debugger displays UTF-16 in little-endian by default. My program reads big-endian files so it is very tedious to keep reversing the endianness of all strings to debug them. Is there any way to change the endianness of the debugger's display?
Except for debug purposes I can do all my processing in big endian.
It's not only the debugger. The wchar_t functions of Visual Studio are little-endian, as the host is. When you want to process the data, you need to reverse the string's endianness to little-endian anyway.
It's worth making this change even if you output the strings to a file with a different endianness. Strings are defined as a byte sequence; applying your own endianness to a string looks strange anyhow.
Your best shot at getting this to work is to define your own type and create a debugger type visualizer for it (see Customizing the Visual Studio Debugger Display of Your Data).
Or maybe you can quick-hack it by shifting the address by 1 byte in the Watch window.
You're working with a non-native string format that just happens to "feel" similar to the native format. So you are tempted to think there should be almost a way to do it. But to the debugger, it's just a foreign binary format. The debugger is not designed to handle foreign endianness just as it does not handle visualizing an OGG stream packet.
If you want to use available tools for manipulating native-endian Unicode strings, you'll need to convert to native-endian Unicode format.
As has been pointed out, VS uses the native endianness, which is little-endian on Intel/AMD. The problem is that you're not reading the strings correctly; you should imbue the std::istream with a locale which reads UTF-16BE (since this is apparently the encoding form you're trying to read). std::istream (or rather the backing std::filebuf) will automatically do the code translation on the fly when reading and writing.
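A sketch of that approach, using the standard std::codecvt_utf16 facet available in VS2012 (the file name is a placeholder; with no std::little_endian flag the facet reads big-endian UTF-16):
#include <fstream>
#include <string>
#include <locale>
#include <codecvt>

int main()
{
    std::wifstream in("input_utf16be.txt", std::ios::binary);

    // codecvt_utf16 converts between UTF-16 in the file and wchar_t in memory;
    // the default mode treats the file data as big-endian.
    in.imbue(std::locale(in.getloc(), new std::codecvt_utf16<wchar_t>));

    std::wstring line;
    std::getline(in, line);   // 'line' now holds native (little-endian) wchar_t
    return 0;
}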
You can set the endianness of the Memory window using the context menu. Right-click in the Memory window and check "Big Endian".

Dump only a portion of memory in VS 2005

Does any one know if there is a way to dump only a chunk of memory to disk using VS? Basically, I want to give it an address and a length, and have it write the memory to disk. That way I can do a binary diff.
Thanks.
I'm kind of surprised VS won't let you do that from the Memory dump window...
You might be able to get what you want (or close to it) with the VS command window:
>Tools.LogCommandWindowOutput c:\temp\testdump.log /overwrite
>Debug.ListMemory /Count:16 0x00444B20
0x00444B20 00 00 00 00 00 00 00 00 13 00 12 00 86 07 19 00 ................
>Tools.LogCommandWindowOutput /off
If you're willing to use WinDBG, (or ntsd/cdb) you can use the .writemem debugger command to do exactly what you want.
I believe you can only save a complete binary minidump. However, you can use the Debug Memory window and copy/paste to a text file to do memory diffs.
OK, I have tried this in VS 2008, but I believe VS 2005 should allow the same:
If the memory is a string (i.e. it doesn't contain zero bytes), you can put the following into the Watch window: (unsigned char*)(ptr),1024 to see 1 kB in the text visualizer. However, this stops at zero bytes, so if you have binary data, this won't work.
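Another workaround (not a VS feature, just a helper you compile into the debuggee; the function name is made up) is a tiny dump routine you can invoke from the Immediate window while stopped at a breakpoint:
#include <cstdio>
#include <cstddef>

// Writes 'len' bytes starting at 'addr' to the file at 'path'.
// From the Immediate window, e.g.:
//   DumpMemoryToFile("c:\\temp\\chunk.bin", (void*)0x00444B20, 1024)
extern "C" void DumpMemoryToFile(const char *path, const void *addr, std::size_t len)
{
    FILE *f = std::fopen(path, "wb");
    if (f)
    {
        std::fwrite(addr, 1, len, f);
        std::fclose(f);
    }
}
Two such dumps can then be compared with any binary diff tool.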