I have a client/server app whose interaction is implemented via Boost.Asio.
I created a unit test to check a long-running transmission of data.
During the test a memory leak was detected.
Task Manager shows that memory usage grows constantly, up to 35 MB per 10 minutes. The report produced at the end of the test contains this:
Result StandardError: Detected memory leaks!
Dumping objects ->
{14522} normal block at 0x00E8ADC0, 16 bytes long.
Data: < _M} Y > B0 5F 4D 7D F9 59 F2 02 F4 E9 E6 00 CC CC CC CC
{14012} normal block at 0x00E8B280, 16 bytes long.
Data: < v > C0 76 A4 00 94 01 00 00 98 01 00 00 F0 D2 E3 00
{14011} normal block at 0x00E74B38, 12 bytes long.
Data: < > 00 00 00 00 9C 01 00 00 98 01 00 00
{14007} normal block at 0x00E745F8, 8 bytes long.
Data: < L > E0 4C E5 00 00 00 00 00
{14006} normal block at 0x00E54CB8, 60 bytes long.
Data: < v 4 > E4 76 A4 00 D0 D3 B0 00 00 00 00 00 34 80 E3 00
{13724} normal block at 0x00E710F8, 385 bytes long.
Data: < > 03 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00
{13722} normal block at 0x00E85C58, 28 bytes long.
Data: < F _ _ > F2 B6 46 00 B4 5F E3 00 A0 5F E3 00 BC 96 E7 00
{13720} normal block at 0x00E6F9B8, 80 bytes long.
Data: <wxF > 77 78 46 00 FC FF FF FF 00 00 00 00 CC CC CC CC
{13700} normal block at 0x00E6DFD0, 88 bytes long.
Data: < > C8 A4 A4 00 01 00 00 00 01 00 00 00 00 00 00 00
…
Data: <` X L > 60 8E E0 00 58 17 E2 00 CD 4C F7 EA
{153} normal block at 0x00DF0070, 12 bytes long.
Data: <` kf > 60 8D E0 00 98 00 E2 00 15 6B 66 0E
{151} normal block at 0x00DF0038, 12 bytes long.
Data: < .g> 20 86 E0 00 E0 FC E1 00 9D B7 2E 67
{149} normal block at 0x00DF0658, 12 bytes long.
Data: < G > A0 89 E0 00 00 00 00 00 47 01 D5 11
{147} normal block at 0x00DF0268, 12 bytes long.
Data: <` > 60 84 E0 00 A8 F5 E1 00 ED 8C AA BA
{145} normal block at 0x00DF0230, 12 bytes long.
Data: < ' " > 20 84 E0 00 00 11 E2 00 27 B0 22 00
{143} normal block at 0x00DF0690, 12 bytes long.
Data: <` P KnOQ> 60 88 E0 00 50 04 E2 00 4B 6E 4F 51
{141} normal block at 0x00DF0540, 12 bytes long.
Data: <` > 7> 60 82 E0 00 00 0A E2 00 3E 0D 9E 37
{139} normal block at 0x00DF0620, 12 bytes long.
Data: <Pq 1 > 50 71 DF 00 00 00 00 00 E5 DD 31 B5
{137} normal block at 0x00DF0700, 12 bytes long.
Data: < q # #> 10 71 DF 00 40 FA E1 00 14 8B 0D 23
{134} normal block at 0x00DF5CE0, 96 bytes long.
Data: <h BV BV > 68 19 E0 00 D0 42 56 00 E0 42 56 00 88 00 00 00
{133} normal block at 0x00DF0188, 8 bytes long.
Data: < \ > A0 5C DF 00 00 00 00 00
{132} normal block at 0x00DF5CA0, 16 bytes long.
Data: < > 88 01 DF 00 D8 AA DF 00 20 AC DF 00 20 AC DF 00
Object dump complete.
I tried to break on the reported allocations via Boost.Test's --detect_memory_leaks="allocation number" option, and by setting _crtBreakAlloc = 1000 in the Watch window in Debug mode. Neither works. Maybe that is because the leaks occur not in my code, but in Boost/OpenSSL code?
I can't figure out where the leaks occur. What can I do?
Windows 8, Visual Studio 2015, boost 1.60, OpenSSL 1.0.2g
Have a look at this post for suggested tips on dealing with memory leaks under Windows. Scroll down; don't just read the first answer. In particular, it may be worth considering the DEBUG_NEW macro-based solution discussed in the second answer. Given that Boost.Asio is largely header-only, this should help you even if the offending allocations come from the Boost library.
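For reference, the DEBUG_NEW approach that answer describes is usually set up like this (a sketch for MSVC debug builds; the guard names are the standard CRT ones, but treat the exact placement in your project as an assumption):

```cpp
// Route plain `new` through the CRT debug allocator so each leaked block
// in the dump is annotated with the file and line that allocated it.
// Active only in MSVC debug builds; a no-op elsewhere.
// Include this at the top of each .cpp, after the standard headers.
#if defined(_MSC_VER) && defined(_DEBUG)
#define _CRTDBG_MAP_ALLOC
#include <cstdlib>
#include <crtdbg.h>
#define DEBUG_NEW new (_NORMAL_BLOCK, __FILE__, __LINE__)
#define new DEBUG_NEW
#endif
```

Because Boost.Asio is header-only, allocations made inside Boost headers compiled into your translation units are then attributed to a concrete file and line in the leak dump.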
Part 1: Report from Visual Studio about memory leaks
I'm using Boost.Asio to communicate with the server over TLS, i.e. Boost.Asio uses OpenSSL.
It seems that OpenSSL initializes itself and does not clean up that memory before the end of the app (because the app is closing and the memory will be released anyway).
This is not a big chunk of memory (I don't know how to measure it exactly).
As a result, Visual Studio treats that memory as a leak, but it is not one.
(This is my assumption; maybe the real reason for the report is something else, but I don't see any other possible cause.)
Part 2: Huge memory consumption
In the question above I asked about a memory leak of tens of MB. That turned out to be my own bad code, which produced a huge memory buffer.
The huge memory consumption, together with the report from Visual Studio about memory leaks, made me believe something was very wrong.
The buffer was easily reduced to a much smaller size.
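If the goal is only to silence the CRT report for Part 1, OpenSSL 1.0.2 (the version used here) does provide explicit cleanup functions that can be called once at shutdown, before the leak dump runs. A sketch; the exact set of calls needed can vary with how much of the library you use:

```cpp
#include <openssl/conf.h>
#include <openssl/err.h>
#include <openssl/evp.h>

// Release OpenSSL 1.0.2 global state that would otherwise be reported
// as leaked by the CRT debug heap. In OpenSSL 1.1.0 and later this
// cleanup happens automatically and these calls become no-ops.
void shutdown_openssl()
{
    CONF_modules_unload(1);        // unload configuration modules
    EVP_cleanup();                 // cipher/digest tables
    CRYPTO_cleanup_all_ex_data();  // per-object ex_data state
    ERR_free_strings();            // error-string tables
}
```

Per-thread error queues additionally need ERR_remove_thread_state(NULL) on each thread that used OpenSSL before that thread exits.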
Related
I've really been looking through similar posts, but couldn't find anything that fits my issue.
I'm trying to make a basic program that runs queries against a MySQL database. It all works fine, but I have a lot of memory leaks.
#include <cppconn/driver.h>
#include <crtdbg.h>

int main() {
    {
        sql::Driver* driver = get_driver_instance();
    }
    _CrtDumpMemoryLeaks();
    return 0;
}
This is a small snippet of what I'm using. The rest of it isn't really relevant, since I've observed that even this small bit of code yields a lot of memory leaks, as given by the _CrtDumpMemoryLeaks call.
I got the 64-bit version and used the dynamically linked library. I found that I also needed the Boost library, so I downloaded it and added its "include" directory as well.
I'm using Visual Studio 2019 Community.
Any help would be greatly appreciated. Cheers!
This is the output after running the program.
Detected memory leaks!
Dumping objects ->
{193} normal block at 0x0000014FB1F74710, 16 bytes long.
Data: < F O > 90 46 FA B1 4F 01 00 00 00 00 00 00 00 00 00 00
{192} normal block at 0x0000014FB1FA4670, 88 bytes long.
Data: < ( O ( O > 00 28 F6 B1 4F 01 00 00 00 28 F6 B1 4F 01 00 00
{191} normal block at 0x0000014FB1F8CC30, 24 bytes long.
Data: < g > 18 03 67 C5 FE 7F 00 00 01 00 00 00 01 00 00 00
{190} normal block at 0x0000014FB1F8C7B0, 24 bytes long.
Data: < d > A8 96 64 C5 FE 7F 00 00 02 00 00 00 01 00 00 00
{189} normal block at 0x0000014FB1F5E280, 104 bytes long.
Data: < > FF FF FF FF FF FF FF FF FF FF FF FF 00 00 00 00
{188} normal block at 0x0000014FB1F57FE0, 168 bytes long.
Data: < > 00 00 00 00 D2 04 00 00 88 00 00 00 00 00 00 00
{187} normal block at 0x0000014FB1F5F5A0, 104 bytes long.
Data: < > FF FF FF FF FF FF FF FF FF FF FF FF 00 00 00 00
{186} normal block at 0x0000014FB1F61720, 56 bytes long.
Data: < > 00 00 00 00 D2 04 00 00 18 00 00 00 00 00 00 00
{185} normal block at 0x0000014FB1F71050, 48 bytes long.
Data: < > 00 00 00 00 D2 04 00 00 10 00 00 00 00 00 00 00
{184} normal block at 0x0000014FB1F70DB0, 40 bytes long.
Data: < p O > 00 00 00 00 CD CD CD CD 70 10 F7 B1 4F 01 00 00
{183} normal block at 0x0000014FB1F70D40, 48 bytes long.
Data: < > 00 00 00 00 D2 04 00 00 10 00 00 00 00 00 00 00
{182} normal block at 0x0000014FB1F710C0, 40 bytes long.
Data: < ` O > 00 00 00 00 CD CD CD CD 60 0D F7 B1 4F 01 00 00
{181} normal block at 0x0000014FB1F64C10, 80 bytes long.
Data: <h i dRi > 68 C6 69 C5 FE 7F 00 00 64 52 69 C5 FE 7F 00 00
{180} normal block at 0x0000014FB1F743F0, 16 bytes long.
Data: < L O > 01 00 00 00 00 00 00 00 10 4C F6 B1 4F 01 00 00
{179} normal block at 0x0000014FB1F5BF60, 104 bytes long.
Data: < > FF FF FF FF FF FF FF FF FF FF FF FF 00 00 00 00
{178} normal block at 0x0000014FB1F57280, 104 bytes long.
Data: < > FF FF FF FF FF FF FF FF FF FF FF FF 00 00 00 00
{177} normal block at 0x0000014FB1F55310, 104 bytes long.
Data: < > FF FF FF FF FF FF FF FF FF FF FF FF 00 00 00 00
{176} normal block at 0x0000014FB1F55560, 104 bytes long.
Data: < > FF FF FF FF FF FF FF FF FF FF FF FF 00 00 00 00
{175} normal block at 0x0000014FB1F5E560, 104 bytes long.
Data: < > FF FF FF FF FF FF FF FF FF FF FF FF 00 00 00 00
{174} normal block at 0x0000014FB1F55EE0, 104 bytes long.
Data: < > FF FF FF FF FF FF FF FF FF FF FF FF 00 00 00 00
{173} normal block at 0x0000014FB1F57530, 104 bytes long.
Data: < > FF FF FF FF FF FF FF FF FF FF FF FF 00 00 00 00
{172} normal block at 0x0000014FB1F57C50, 104 bytes long.
Data: < > FF FF FF FF FF FF FF FF FF FF FF FF 00 00 00 00
{171} normal block at 0x0000014FB1F57960, 104 bytes long.
Data: < > FF FF FF FF FF FF FF FF FF FF FF FF 00 00 00 00
{170} normal block at 0x0000014FB1F744E0, 8 bytes long.
Data: <8 d > 38 94 64 C5 FE 7F 00 00
{169} normal block at 0x0000014FB1F62560, 24 bytes long.
Data: <0 d D O > 30 8C 64 C5 FE 7F 00 00 E0 44 F7 B1 4F 01 00 00
{168} normal block at 0x0000014FB1F743A0, 16 bytes long.
Data: < f `% O > F0 FE 66 C5 FE 7F 00 00 60 25 F6 B1 4F 01 00 00
{167} normal block at 0x0000014FB1F62800, 88 bytes long.
Data: <pF O pF O > 70 46 FA B1 4F 01 00 00 70 46 FA B1 4F 01 00 00
{166} normal block at 0x0000014FB1F74850, 16 bytes long.
Data: < s > 98 73 E1 C5 FE 7F 00 00 00 00 00 00 00 00 00 00
{162} normal block at 0x0000014FB1F65510, 80 bytes long.
Data: < U O U O > 10 55 F6 B1 4F 01 00 00 10 55 F6 B1 4F 01 00 00
{161} normal block at 0x0000014FB1F74F30, 16 bytes long.
Data: < > D8 D4 E1 C5 FE 7F 00 00 00 00 00 00 00 00 00 00
{160} normal block at 0x0000014FB1F73080, 120 bytes long.
Data: < 0 O 0 O > 80 30 F7 B1 4F 01 00 00 80 30 F7 B1 4F 01 00 00
{159} normal block at 0x0000014FB1F74D00, 16 bytes long.
Data: < > F0 D4 E1 C5 FE 7F 00 00 00 00 00 00 00 00 00 00
{158} normal block at 0x0000014FB1F750C0, 16 bytes long.
Data: <hs > 68 73 E1 C5 FE 7F 00 00 00 00 00 00 00 00 00 00
{157} normal block at 0x0000014FB1F72FE0, 88 bytes long.
Data: < / O / O > E0 2F F7 B1 4F 01 00 00 E0 2F F7 B1 4F 01 00 00
{156} normal block at 0x0000014FB1F74350, 16 bytes long.
Data: < X > 00 58 E1 C5 FE 7F 00 00 00 00 00 00 00 00 00 00
Object dump complete.
So it looks like a lot is leaking just from calling that single method on the Driver class. The destructor is protected, so I can't call delete on it.
This isn't really a memory leak, because you are checking "too early".
Here you can see how get_driver_instance is implemented:
static std::map< sql::SQLString, boost::shared_ptr<MySQL_Driver> > driver;

CPPCONN_PUBLIC_FUNC sql::mysql::MySQL_Driver * get_driver_instance()
{
    return get_driver_instance_by_name("");
}

CPPCONN_PUBLIC_FUNC sql::mysql::MySQL_Driver * get_driver_instance_by_name(const char * const clientlib)
{
    ::sql::SQLString dummy(clientlib);

    std::map< sql::SQLString, boost::shared_ptr< MySQL_Driver > >::const_iterator cit;

    if ((cit = driver.find(dummy)) != driver.end()) {
        return cit->second.get();
    } else {
        boost::shared_ptr< MySQL_Driver > newDriver;
        newDriver.reset(new MySQL_Driver(dummy));
        driver[dummy] = newDriver;
        return newDriver.get();
    }
}
You can see that a global variable driver is created, which is a map of shared_ptrs to the individual MySQL_Driver objects. get_driver_instance simply calls get_driver_instance_by_name("") which returns the driver driver[""], creating it if it doesn't exist.
In the shared_ptr documentation you can see that a shared_ptr will delete the pointer that was assigned to it when the shared_ptr itself is destructed. It will be destructed when the map driver is destructed, which will happen when your process is torn down - after main returns.
So, within main, the driver still exists (destructors haven't run yet), and _CrtDumpMemoryLeaks(); will report it as a bogus leak.
This is basically the issue described here.
I'm not sure if there is a reliable way to run your code after the destructors though, because the order at which global destructors and atexit handlers run is not specified across different translation units.
Since you are on Windows, one idea would be to (ab)use thread-local storage callbacks, you could run code at the DLL_THREAD_DETACH step which should, as far as I know, run after the regular destructors and such. See this article. (I'm not sure about that though, so I'd be happy if someone could comment who knows this better than I do!)
I'm using the _CrtSetBreakAlloc() function to track down memory leaks in debug builds of my MFC project. (Here's the code from my previous question.)
That technique works as long as the allocation order number remains the same, but in many cases it does not. For instance, here are two reports that I'm getting now:
First run:
Detected memory leaks!
Dumping objects ->
{222861} normal block at 0x000002BDF58347C0, 240 bytes long.
Data: <C : \ P r o g r > 43 00 3A 00 5C 00 50 00 72 00 6F 00 67 00 72 00
{222860} normal block at 0x000002BDEFBA52A0, 16 bytes long.
Data: < > 10 AF B7 EF BD 02 00 00 00 00 00 00 00 00 00 00
{222859} normal block at 0x000002BDEFB7AF10, 40 bytes long.
Data: < R G > A0 52 BA EF BD 02 00 00 C0 47 83 F5 BD 02 00 00
Object dump complete.
Second run:
Detected memory leaks!
Dumping objects ->
{222422} normal block at 0x00000123DDB67540, 224 bytes long.
Data: <C : \ P r o g r > 43 00 3A 00 5C 00 50 00 72 00 6F 00 67 00 72 00
{222419} normal block at 0x00000123DDBA9C50, 16 bytes long.
Data: < # > 80 16 B7 DD 23 01 00 00 00 00 00 00 00 00 00 00
{222418} normal block at 0x00000123DDB71680, 40 bytes long.
Data: <P # #u # > 50 9C BA DD 23 01 00 00 40 75 B6 DD 23 01 00 00
Object dump complete.
So I'm wondering if there's a function, or a way to rewrite _CrtSetBreakAlloc, to make it trigger a breakpoint based on the memory contents: for instance, in my case, when the memory gets the UTF-16 string "C:\Progr" written into it.
There is already _CrtSetAllocHook, but how would this help? The data is written AFTER the allocation, so no hook will fire at the moment the data you want to trigger on is written into the allocated memory.
The only way I see is to use _CrtDoForAllClientObjects and search through all allocated blocks.
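For completeness, this is roughly what an allocation hook looks like. As noted above, it only sees metadata at allocation time (size, request number), never the contents that will be written later, so the best it can do is narrow things down, e.g. break on a suspicious size taken from the dump. A sketch for MSVC debug builds; the 240-byte size is the one from the first report above:

```cpp
#include <crtdbg.h>

// Called by the CRT debug heap on every heap operation. Block contents
// are NOT available yet -- only metadata such as size and request number.
static int __cdecl SizeAllocHook(int allocType, void* /*userData*/,
                                 size_t size, int /*blockType*/,
                                 long requestNumber,
                                 const unsigned char* /*filename*/,
                                 int /*lineNumber*/)
{
    if (allocType == _HOOK_ALLOC && size == 240) {  // size seen in the dump
        _RPT1(_CRT_WARN, "240-byte allocation, request #%ld\n", requestNumber);
        _CrtDbgBreak();  // stop in the debugger at the allocation site
    }
    return 1;  // nonzero: allow the allocation to proceed
}

// Install early in your program's startup:
//     _CrtSetAllocHook(SizeAllocHook);
```

From the break you can then walk up the call stack to the code that requested the block, and inspect it again later once the string has been written.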
My goal is to use the OSGEarth library to make an MFC project that can display the model "openstreetmap.earth". I finished this and can see the earth. But every time I close the project, the output window in VS2015 says there are memory leaks in the program.
Here is the window output:
Detected memory leaks!
Dumping objects ->
{306240} normal block at 0x00000000076902F0, 16 bytes long.
Data: <0,i > 30 2C 69 07 00 00 00 00 00 00 00 00 00 00 00 00
{306239} normal block at 0x0000000007692C30, 9 bytes long.
Data: <Pragma: > 50 72 61 67 6D 61 3A 20 00
{303648} normal block at 0x0000000007693040, 16 bytes long.
Data: < 5i > 90 35 69 07 00 00 00 00 00 00 00 00 00 00 00 00
{303647} normal block at 0x0000000007693590, 9 bytes long.
Data: <Pragma: > 50 72 61 67 6D 61 3A 20 00
{301180} normal block at 0x00000000076938B0, 16 bytes long.
Data: <`8i > 60 38 69 07 00 00 00 00 00 00 00 00 00 00 00 00
{301179} normal block at 0x0000000007693860, 9 bytes long.
Data: <Pragma: > 50 72 61 67 6D 61 3A 20 00
{297799} normal block at 0x0000000007691060, 16 bytes long.
Data: < i > 10 10 69 07 00 00 00 00 00 00 00 00 00 00 00 00
I examined the program and found that when I delete this line: m_Model = osgDB::readNodeFile(m_strModelName); there are no more memory leaks.
void COSGEarth::InitSceneGraph(void)
{
    // Init the main Root Node/Group
    m_Root = new osg::Group;

    // Load the Model from the model name
    // (delete the line below and there is no memory leak)
    m_Model = osgDB::readNodeFile(m_strModelName);
    if (!m_Model) return;

    // Optimize the model
    osgUtil::Optimizer optimizer;
    optimizer.optimize(m_Model.get());
    optimizer.reset();

    // Add the model to the scene
    m_Root->addChild(m_Model.get());
}
I defined m_Model as osg::ref_ptr<osg::Node> m_Model, which is a smart pointer.
Why are there memory leaks, and how can I solve this issue?
Here is source code :http://bbs.osgchina.org/forum.php?mod=attachment&aid=NzIwNnwzZWYxZDIyZjlhOGY1MWFjZjhiNGFiMWYwMTc5YmJlNXwxNTEyMzc5ODE2&request=yes&_f=.zip
I believe these reported "leaks" are false positives. Refer to this thread that explains why:
http://forum.openscenegraph.org/viewtopic.php?t=1475
I got a memory dump in the output window by using #define _CRTDBG_MAP_ALLOC.
Detected memory leaks!
Dumping objects ->
{1078301} normal block at 0x0AB2D840, 48 bytes long.
Data: <2 0 1 4 - 0 9 - > 32 00 30 00 31 00 34 00 2D 00 30 00 39 00 2D 00
{975444} normal block at 0x08D21138, 36 bytes long.
Data: < = = pa > A4 3D C0 08 B0 3D C0 08 01 00 00 00 70 61 BE 08
{975443} normal block at 0x0CE96610, 32 bytes long.
Data: <,X \ pa > 2C 58 C0 08 5C 90 BF 08 01 00 00 00 70 61 BE 08
{975438} normal block at 0x0CE6B1D8, 32 bytes long.
Data: 50 90 BF 08 5C 90 BF 08 01 00 00 00 08 E3 D1 08
{736753} normal block at 0x0CEAA878, 16384 bytes long.
Data: < / / > D8 2F D2 08 D8 2F D2 08 03 00 00 00 00 00 00 00
{736744} normal block at 0x0CEA8838, 8192 bytes long.
Data: <8 8 > 38 0B E2 0C 38 88 EA 0C 01 00 00 00 01 00 00 00
{736738} normal block at 0x0CEA47F8, 16384 bytes long.
Data: < G > 00 00 00 00 F8 47 EA 0C 03 00 00 00 00 00 00 00
{736729} normal block at 0x0CE105A8, 8192 bytes long.
Data: <( > 28 14 D1 08 A8 05 E1 0C 01 00 00 00 CD CD CD CD
{736723} normal block at 0x0CEA07B8, 16384 bytes long.
Data: < G 8 > F8 47 EA 0C 38 88 EA 0C 03 00 00 00 00 00 00 00
{736713} normal block at 0x0CE1E440, 8192 bytes long.
Data: < # > A8 05 E1 0C 40 E4 E1 0C 01 00 00 00 CD CD CD CD
{736707} normal block at 0x0CE1A400, 16384 bytes long.
Data: < > B8 07 EA 0C B8 07 EA 0C 03 00 00 00 00 00 00 00
{736698} normal block at 0x0CE36B18, 8192 bytes long.
Data: <# k > 40 E4 E1 0C 18 6B E3 0C 01 00 00 00 CD CD CD CD
{736692} normal block at 0x0CE163C0, 16384 bytes long.
Data: < > 00 A4 E1 0C 00 A4 E1 0C 03 00 00 00 00 00 00 00
{736682} normal block at 0x0CE44230, 8192 bytes long.
Data: < k 0B > 18 6B E3 0C 30 42 E4 0C 01 00 00 00 CD CD CD CD
{736676} normal block at 0x0CE3E7F8, 16384 bytes long.
Data: < c c > C0 63 E1 0C C0 63 E1 0C 03 00 00 00 00 00 00 00
{736666} normal block at 0x0CE4B6F0, 8192 bytes long.
Data: <0B > 30 42 E4 0C F0 B6 E4 0C 01 00 00 00 CD CD CD CD
{736660} normal block at 0x0CE3A7B8, 16384 bytes long.
Data: < > F8 E7 E3 0C F8 E7 E3 0C 03 00 00 00 00 00 00 00
{736650} normal block at 0x0CE47388, 8192 bytes long.
Data: < s > F0 B6 E4 0C 88 73 E4 0C 01 00 00 00 CD CD CD CD
{736644} normal block at 0x0CE0C568, 16384 bytes long.
Data: < > B8 A7 E3 0C B8 A7 E3 0C 03 00 00 00 00 00 00 00
{736634} normal block at 0x0CE20B38, 8192 bytes long.
Data: < s 8 > 88 73 E4 0C 38 0B E2 0C 01 00 00 00 CD CD CD CD
{736628} normal block at 0x0CE23B70, 16384 bytes long.
Data: 68 C5 E0 0C 68 C5 E0 0C 03 00 00 00 00 00 00 00
{663741} normal block at 0x0CDB6EF0, 60 bytes long.
Data: 50 F2 BF 08 24 6F C0 08 01 00 00 00 30 75 00 00
{1923} normal block at 0x08D20DE8, 8 bytes long.
Data: <#] d > 40 5D BE 08 64 C0 D1 08
{1922} normal block at 0x08D22E10, 56 bytes long.
Data: 70 5C BE 08 00 00 00 00 CD CD CD CD E8 0D D2 08
{1900} normal block at 0x08D27018, 16384 bytes long.
Data: < > F0 E2 D1 08 F0 E2 D1 08 03 00 00 00 00 00 00 00
{1894} normal block at 0x08D22FD8, 16384 bytes long.
Data: 70 3B E2 0C 70 3B E2 0C 03 00 00 00 00 00 00 00
{1883} normal block at 0x08D22900, 144 bytes long.
Data: 43 00 3A 00 5C 00 55 00 73 00 65 00 72 00 73 00
Object dump complete.
Now the debugger hits the breakpoint. In the Watch window, while debugging, I add {,,msvcr100d.dll}_crtBreakAlloc in the Name column and enter 736723, a memory block number from the dump above, in the Value column. This leak happens in a function that loops.
When I continue debugging, it breaks at the memory block number entered in the Watch window (see fig 1).
I press Break in the window (see fig 2).
_CrtDbgBreak holds 0x69595280; that's where the memory leak happens.
Now, how do I find out which pointer holds that particular address at the time of debugging?
The program already breaks there, so you should go to the Call Stack window in Visual Studio and navigate to the source from there; that will lead you to the code that calls the memory allocation function, which is what you're looking for.
Since you're using the allocation number to track memory leaks, note that the allocation number can change if the program doesn't run under the same conditions as before. Please refer to: Finding Memory Leaks Using the CRT Library.
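Rather than typing the value into the Watch window on every run, the break can also be set programmatically at startup; the allocation number (736723 here, taken from the dump above) still has to match the current run. A sketch for an MSVC debug build:

```cpp
#define _CRTDBG_MAP_ALLOC
#include <crtdbg.h>

int main()
{
    // Defer the leak dump to CRT shutdown instead of calling
    // _CrtDumpMemoryLeaks() manually.
    _CrtSetDbgFlag(_CrtSetDbgFlag(_CRTDBG_REPORT_FLAG) | _CRTDBG_LEAK_CHECK_DF);

    // Break in the debugger when allocation request #736723 happens.
    // The number is only meaningful while the run is deterministic up to
    // that point; any change in startup behaviour shifts it.
    _CrtSetBreakAlloc(736723);

    // ... rest of the program ...
    return 0;
}
```

Setting it this early makes the break independent of how fast you attach the debugger, which is the usual failure mode of the Watch-window approach.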
When I run the following code:
#include "gmock/gmock.h"
#include "gtest/gtest.h"

#define _CRTDBG_MAP_ALLOC
#include <crtdbg.h>

int main(int argc, char **argv)
{
    ::testing::InitGoogleTest(&argc, argv);
    _CrtDumpMemoryLeaks();
    return 0;
}
I get the following output:
Detected memory leaks!
Dumping objects ->
{652} normal block at 0x00074CE0, 4 bytes long.
Data: < L > 98 4C 07 00
{651} normal block at 0x00074C98, 12 bytes long.
Data: <, > 2C 03 1B 01 00 00 00 00 00 00 00 00
{650} normal block at 0x00074C50, 8 bytes long.
Data: <hI > 68 49 07 00 00 00 00 00
{649} normal block at 0x00074C10, 4 bytes long.
Data: <t > 74 03 1B 01
{648} normal block at 0x00074BC8, 8 bytes long.
Data: <xK > 78 4B 07 00 00 00 00 00
{647} normal block at 0x00074B70, 28 bytes long.
Data: < K L > BC 01 1B 01 01 CD CD CD C8 4B 07 00 E0 4C 07 00
{646} normal block at 0x00074B28, 8 bytes long.
Data: < I > 18 49 07 00 00 00 00 00
{645} normal block at 0x00074AE0, 8 bytes long.
Data: < I > 04 49 07 00 00 00 00 00
{644} normal block at 0x00074A98, 8 bytes long.
Data: < H > DC 48 07 00 00 00 00 00
{643} normal block at 0x00074A50, 8 bytes long.
Data: < H > C8 48 07 00 00 00 00 00
{642} normal block at 0x00074A08, 8 bytes long.
Data: < H > B4 48 07 00 00 00 00 00
{641} normal block at 0x000749C0, 8 bytes long.
Data: < H > A0 48 07 00 00 00 00 00
{640} normal block at 0x00074E90, 1 bytes long.
Data: < > 00
{639} normal block at 0x00074870, 272 bytes long.
Data: < t N > 20 03 1B 01 CD CD CD CD 74 FA 1B 01 90 4E 07 00
{638} normal block at 0x00074F68, 72 bytes long.
Data: <C:\Users\Baz> 43 3A 5C 55 73 65 72 73 5C 45 42 41 52 47 52 49
{637} normal block at 0x00074E48, 8 bytes long.
Data: <hO G > 68 4F 07 00 47 00 00 00
{616} normal block at 0x00074EE0, 72 bytes long.
Data: <C:\Users\Baz> 43 3A 5C 55 73 65 72 73 5C 45 42 41 52 47 52 49
{595} normal block at 0x00074828, 8 bytes long.
Data: < > F0 F9 1B 01 00 00 00 00
{594} normal block at 0x000747E8, 1 bytes long.
Data: < > 00
{561} normal block at 0x000747A0, 5 bytes long.
Data: <fast > 66 61 73 74 00
{496} normal block at 0x00074760, 1 bytes long.
Data: < > 00
{311} normal block at 0x00074720, 1 bytes long.
Data: < > 00
{282} normal block at 0x000746E0, 2 bytes long.
Data: <* > 2A 00
{253} normal block at 0x00074698, 5 bytes long.
Data: <auto > 61 75 74 6F 00
Object dump complete.
What am I doing wrong?
Adding to the accepted answer, the Google documentation states:
Since the statically initialized Google Test singleton requires allocations on the heap, the Visual C++ memory leak detector will report memory leaks at the end of the program run. The easiest way to avoid this is to use the _CrtMemCheckpoint and _CrtMemDumpAllObjectsSince calls to not report any statically initialized heap objects. See MSDN for more details and additional heap check/debug routines.
This involves calling _CrtMemCheckPoint just after ::testing::InitGoogleTest and then calling _CrtMemDumpAllObjectsSince after RUN_ALL_TESTS(). The main function looks a bit like this:
::testing::InitGoogleTest(&argc, argv);

// Get a checkpoint of the memory after Google Test has been initialized.
_CrtMemState memoryState = {0};
_CrtMemCheckpoint(&memoryState);

int retval = RUN_ALL_TESTS();

// Check for leaks after tests have run.
_CrtMemDumpAllObjectsSince(&memoryState);
return retval;
Unfortunately, if a test fails, Google Test itself causes a memory leak, which means this isn't a perfect solution.
You're not doing anything wrong. The "memory leaks" come from heap allocations of the statically initialized Google Test singleton class.
Here's the answer from the Google Test FAQ: How do I suppress the memory leak messages on Windows?