I am able to split the Visual Studio memory leak report into false positives and real memory leaks. In my main function, I initialize the memory debugging and deliberately create a real memory leak at the very beginning of my application (pcDynamicHeapStart is never deleted):
#include <crtdbg.h>
#include <cstring>

int main()
{
    // Enable automatic leak checking at process exit.
    _CrtSetDbgFlag( _CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF );
    // Marker allocation: everything reported below it was allocated earlier.
    char* pcDynamicHeapStart = new char[ 17u ];
    strcpy_s( pcDynamicHeapStart, 17u, "DynamicHeapStart" );
    ...
After my application has finished, the report in the Output window contains:
Detected memory leaks!
Dumping objects ->
{15554} normal block at 0x00000000009FCCD0, 80 bytes long.
Data: < > DD DD DD DD DD DD DD DD DD DD DD DD DD DD DD DD
{14006} normal block at 0x00000000009CB360, 17 bytes long.
Data: <DynamicHeapStart> 44 79 6E 61 6D 69 63 48 65 61 70 53 74 61 72 74
{13998} normal block at 0x00000000009BF4B0, 32 bytes long.
Data: < ^ > E0 5E 9B 00 00 00 00 00 F0 7F 9C 00 00 00 00 00
{13997} normal block at 0x00000000009CA4B0, 8 bytes long.
Data: < > 14 00 00 00 00 00 00 00
{13982} normal block at 0x00000000009A12C0, 16 bytes long.
Data: < # > D0 DD D6 40 01 00 00 00 90 08 9C 00 00 00 00 00
...
Object dump complete.
Now look at line "Data: <DynamicHeapStart> 44 79 6E 61 6D 69 63 48 65 61 70 53 74 61 72 74".
All reported leaks below that line are false positives; all above it are real leaks.
A false positive doesn't necessarily mean there is no leak (it could be a statically linked library that allocates heap memory at startup and never frees it), but it is a leak you cannot eliminate yourself, and that is no problem at all.
Since I came up with this approach, I have never had a leaking application again.
I wonder if I could hide the false positives (i.e. the leaks from statically allocated objects). My report contains hundreds of items, and I don't want to scroll up every time to check that everything is OK.
Maybe a filter for the Output window would help?
Maybe there is an option to ignore the first n leaks, or to start reporting only after a defined allocation ID? If so, that number or ID could be adjusted every now and then as a first approach.
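One way to get that "start reporting after a defined point" behaviour without tracking allocation numbers is the CRT memory-state API. The following is only a rough sketch of the idea: it replaces the automatic exit-time dump with a manual one, and where exactly you take the checkpoint (for example right after the marker allocation) is up to you.
#define _CRTDBG_MAP_ALLOC
#include <crtdbg.h>

int main()
{
    _CrtSetDbgFlag( _CRTDBG_ALLOC_MEM_DF );   // no _CRTDBG_LEAK_CHECK_DF: we dump manually

    // Snapshot the heap once the startup allocations you want to ignore have happened.
    _CrtMemState startupState;
    _CrtMemCheckpoint( &startupState );

    // ... run the application ...

    // Report only blocks allocated after the checkpoint.
    _CrtMemDumpAllObjectsSince( &startupState );
    return 0;
}
The Google Test answer near the end of this page uses the same pair of calls.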
My Visual Studio project, which uses the gRPC library, has memory leaks. After some R&D I made a small project to reproduce the problem and found that I don't even need to call any gRPC object in my code.
My steps:
1) Get helloworld.proto from the examples
2) Generate the C++ files
3) Create a C++ project with the following code:
#include "helloworld.grpc.pb.h"
void f(){
helloworld::HelloRequest request;
}
int main(){
_CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF);
return 0;
}
Part of the output (the full dump has 240 lines):
Detected memory leaks!
Dumping objects ->
{1450} normal block at 0x00FD77A0, 16 bytes long.
Data: <`{ t C | > 60 7B FD 00 20 74 FD 00 84 43 CA 00 88 7C CA 00
{1449} normal block at 0x00FECA30, 48 bytes long.
Data: <google.protobuf.> 67 6F 6F 67 6C 65 2E 70 72 6F 74 6F 62 75 66 2E
{1448} normal block at 0x00FEA048, 8 bytes long.
Data: < > 20 C6 FE 00 00 00 00 00
{1447} normal block at 0x00FEC610, 52 bytes long.
Data: < v p" v > B8 76 FC 00 70 22 FE 00 B8 76 FC 00 00 00 CD CD
{1441} normal block at 0x00FEA610, 32 bytes long.
Data: <google.protobuf.> 67 6F 6F 67 6C 65 2E 70 72 6F 74 6F 62 75 66 2E
{1440} normal block at 0x00FE9B78, 8 bytes long.
If I add a google::protobuf::ShutdownProtobufLibrary(); call before return 0;, I get much less output. Only this:
Detected memory leaks!
Dumping objects ->
{160} normal block at 0x00FCD858, 4 bytes long.
Data: < > 18 D6 B9 00
{159} normal block at 0x00FCD618, 4 bytes long.
Data: < > > C8 3E B9 00
{158} normal block at 0x00FCD678, 4 bytes long.
Data: < ? > D0 3F B9 00
Object dump complete.
But if I include additional generated sources with many large services and messages, the memory dump gets much bigger.
Since I really don't use any gRPC objects directly, the only thing I can imagine is that some static objects are still alive when the VS memory dumper starts to work.
Is there a way to fix this, or a suggestion for what I can do about it?
UPD:
I did some additional work on this problem and opened a new issue on the gRPC repository bug tracker: https://github.com/grpc/grpc/issues/22506
The problem description in that issue contains screenshots with the leaked allocations' call stacks and gRPC debug traces.
UPD2:
I found all of them (version 1.23.0). I left a detailed comment there: https://github.com/grpc/grpc/issues/22506#issuecomment-618406755
I'm using the _CrtSetBreakAlloc() function to track down memory leaks in debug builds of my MFC project. (Here's the code from my previous question.)
That technique works as long as the allocation order number remains the same, but in many cases it does not. For instance, here are two reports that I'm getting now:
First run:
Detected memory leaks!
Dumping objects ->
{222861} normal block at 0x000002BDF58347C0, 240 bytes long.
Data: <C : \ P r o g r > 43 00 3A 00 5C 00 50 00 72 00 6F 00 67 00 72 00
{222860} normal block at 0x000002BDEFBA52A0, 16 bytes long.
Data: < > 10 AF B7 EF BD 02 00 00 00 00 00 00 00 00 00 00
{222859} normal block at 0x000002BDEFB7AF10, 40 bytes long.
Data: < R G > A0 52 BA EF BD 02 00 00 C0 47 83 F5 BD 02 00 00
Object dump complete.
Second run:
Detected memory leaks!
Dumping objects ->
{222422} normal block at 0x00000123DDB67540, 224 bytes long.
Data: <C : \ P r o g r > 43 00 3A 00 5C 00 50 00 72 00 6F 00 67 00 72 00
{222419} normal block at 0x00000123DDBA9C50, 16 bytes long.
Data: < # > 80 16 B7 DD 23 01 00 00 00 00 00 00 00 00 00 00
{222418} normal block at 0x00000123DDB71680, 40 bytes long.
Data: <P # #u # > 50 9C BA DD 23 01 00 00 40 75 B6 DD 23 01 00 00
Object dump complete.
So I'm wondering if there's a function, or a way to rewrite _CrtSetBreakAlloc, that triggers a breakpoint based on the memory contents instead? For instance, in my case, when the UTF-16 string "C:\Progr" gets written into the allocated memory.
There is already _CrtSetAllocHook, but how would that help? The data is written AFTER the allocation, so no hook fires at the moment the data you want to trigger on is written into the allocated memory.
The only way I see is to use _CrtDoForAllClientObjects and search through all allocated blocks.
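To illustrate that suggestion, here is a minimal, untested sketch of such a scan. Note the limitation: _CrtDoForAllClientObjects only visits _CLIENT_BLOCK allocations (in MFC debug builds that covers objects allocated through DEBUG_NEW/CObject), not the plain "normal" blocks shown in the dumps above. FindSuspectBlock, ScanHeapForString and the search pattern are made up for the example.
#define _CRTDBG_MAP_ALLOC
#include <crtdbg.h>
#include <intrin.h>
#include <cstring>
#include <cwchar>

// Called once for every _CLIENT_BLOCK allocation currently alive on the debug heap.
static void FindSuspectBlock(void* userData, void* context)
{
    const wchar_t* pattern = static_cast<const wchar_t*>(context);
    const size_t patternBytes = wcslen(pattern) * sizeof(wchar_t);
    const size_t blockBytes = _msize_dbg(userData, _CLIENT_BLOCK);

    if (blockBytes >= patternBytes && memcmp(userData, pattern, patternBytes) == 0)
        __debugbreak();   // stop here and inspect the block and the call stack
}

// Call this at interesting points in the program, e.g. from a debug menu handler.
void ScanHeapForString()
{
    _CrtDoForAllClientObjects(FindSuspectBlock, const_cast<wchar_t*>(L"C:\\Progr"));
}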
I have a client/server app whose interaction is implemented via Boost.Asio.
I created a unit test to check a long-running transmission of data.
During the test a memory leak is detected.
Task Manager shows me that memory usage grows constantly, up to 35 MB per 10 minutes. The report produced at the end of the test contains this:
Result StandardError: Detected memory leaks!
Dumping objects ->
{14522} normal block at 0x00E8ADC0, 16 bytes long.
Data: < _M} Y > B0 5F 4D 7D F9 59 F2 02 F4 E9 E6 00 CC CC CC CC
{14012} normal block at 0x00E8B280, 16 bytes long.
Data: < v > C0 76 A4 00 94 01 00 00 98 01 00 00 F0 D2 E3 00
{14011} normal block at 0x00E74B38, 12 bytes long.
Data: < > 00 00 00 00 9C 01 00 00 98 01 00 00
{14007} normal block at 0x00E745F8, 8 bytes long.
Data: < L > E0 4C E5 00 00 00 00 00
{14006} normal block at 0x00E54CB8, 60 bytes long.
Data: < v 4 > E4 76 A4 00 D0 D3 B0 00 00 00 00 00 34 80 E3 00
{13724} normal block at 0x00E710F8, 385 bytes long.
Data: < > 03 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00
{13722} normal block at 0x00E85C58, 28 bytes long.
Data: < F _ _ > F2 B6 46 00 B4 5F E3 00 A0 5F E3 00 BC 96 E7 00
{13720} normal block at 0x00E6F9B8, 80 bytes long.
Data: <wxF > 77 78 46 00 FC FF FF FF 00 00 00 00 CC CC CC CC
{13700} normal block at 0x00E6DFD0, 88 bytes long.
Data: < > C8 A4 A4 00 01 00 00 00 01 00 00 00 00 00 00 00
…
Data: <` X L > 60 8E E0 00 58 17 E2 00 CD 4C F7 EA
{153} normal block at 0x00DF0070, 12 bytes long.
Data: <` kf > 60 8D E0 00 98 00 E2 00 15 6B 66 0E
{151} normal block at 0x00DF0038, 12 bytes long.
Data: < .g> 20 86 E0 00 E0 FC E1 00 9D B7 2E 67
{149} normal block at 0x00DF0658, 12 bytes long.
Data: < G > A0 89 E0 00 00 00 00 00 47 01 D5 11
{147} normal block at 0x00DF0268, 12 bytes long.
Data: <` > 60 84 E0 00 A8 F5 E1 00 ED 8C AA BA
{145} normal block at 0x00DF0230, 12 bytes long.
Data: < ' " > 20 84 E0 00 00 11 E2 00 27 B0 22 00
{143} normal block at 0x00DF0690, 12 bytes long.
Data: <` P KnOQ> 60 88 E0 00 50 04 E2 00 4B 6E 4F 51
{141} normal block at 0x00DF0540, 12 bytes long.
Data: <` > 7> 60 82 E0 00 00 0A E2 00 3E 0D 9E 37
{139} normal block at 0x00DF0620, 12 bytes long.
Data: <Pq 1 > 50 71 DF 00 00 00 00 00 E5 DD 31 B5
{137} normal block at 0x00DF0700, 12 bytes long.
Data: < q # #> 10 71 DF 00 40 FA E1 00 14 8B 0D 23
{134} normal block at 0x00DF5CE0, 96 bytes long.
Data: <h BV BV > 68 19 E0 00 D0 42 56 00 E0 42 56 00 88 00 00 00
{133} normal block at 0x00DF0188, 8 bytes long.
Data: < \ > A0 5C DF 00 00 00 00 00
{132} normal block at 0x00DF5CA0, 16 bytes long.
Data: < > 88 01 DF 00 D8 AA DF 00 20 AC DF 00 20 AC DF 00
Object dump complete.
I tried to put a breakpoint on the mentioned memory allocations via Boost's --detect_memory_leaks="allocation number" and by setting _crtBreakAlloc = 1000 in the Watch window in Debug mode. It does not work. Maybe because the leaks occur not in my code, but in Boost/OpenSSL code?
I can't figure out where the leaks occur. What can I do?
Windows 8, Visual Studio 2015, boost 1.60, OpenSSL 1.0.2g
Have a look at this post for some suggested tips on dealing with memory leaks under Windows. Do scroll down; don't just look at the first answer. In particular, it may be worth considering the DEBUG_NEW macro-based solution discussed in the second answer. Given that Boost.Asio is largely header-only, this should help you even if the offending allocations come from the Boost library.
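For reference, that macro-based approach typically looks something like the sketch below (the exact code in the linked answer may differ; DBG_NEW is just an illustrative name). It routes operator new through the debug heap so the leak dump shows the file and line of each allocation; define it before the code whose allocations you want attributed.
#ifdef _DEBUG
    #define _CRTDBG_MAP_ALLOC
    #include <stdlib.h>
    #include <crtdbg.h>
    // Route operator new through the debug heap so leak reports carry file/line info.
    #define DBG_NEW new ( _NORMAL_BLOCK , __FILE__ , __LINE__ )
    #define new DBG_NEW
#endif
Be aware that redefining new like this can clash with headers that use placement new, so include order matters.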
Part 1: Report from Visual Studio about memory leaks
I'm using Boost.Asio to communicate with the server over TLS, i.e. Boost.Asio uses OpenSSL.
It seems that OpenSSL initializes itself and does not clean up that memory before the end of the app (because the app closes and the memory will be released anyway).
It is not a big chunk of memory (I do not know how to measure it).
As a result, Visual Studio treats that memory as a leak, but it is not one.
(This is my assumption; maybe the real reason for such a report is something else, but I do not see any other possible reason.)
Part 2:
In the question above I asked about a memory leak of tens of MB. That was my own bad code, which led to a huge memory buffer.
The huge memory consumption, together with the Visual Studio report about memory leaks, made me believe that something was very wrong.
The buffer was easily reduced to a much smaller size.
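As an aside, since the answer mentions not knowing how to measure that startup memory: the CRT debug heap can give a rough number if you can bracket the initialization between two snapshots. A sketch, where initialize_tls_connection is a made-up placeholder for whatever first exercises Boost.Asio/OpenSSL in your code:
#define _CRTDBG_MAP_ALLOC
#include <crtdbg.h>

void initialize_tls_connection();   // hypothetical: triggers the OpenSSL one-time setup

void MeasureStartupFootprint()
{
    _CrtMemState before, after, diff;

    _CrtMemCheckpoint(&before);
    initialize_tls_connection();
    _CrtMemCheckpoint(&after);

    // Prints block and byte counts of whatever the initialization left allocated.
    if (_CrtMemDifference(&diff, &before, &after))
        _CrtMemDumpStatistics(&diff);
}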
I'm generating a big char array for later passing to a thread, using strcpy and strcat. It was all going OK until I needed to substitute every occurrence of a space with a comma in one of the strings. I searched for the solution to this here.
The problem is, now I have a memory leak and the program exits with this message:
Dumping objects ->
{473} normal block at 0x0091E0C0, 32 bytes long.
Data: <AMLUH UL619 BKD > 41 4D 4C 55 48 20 55 4C 36 31 39 20 42 4B 44 20
{472} normal block at 0x049CCD20, 8 bytes long.
Data: < > BC ED 18 00 F0 EC 18 00
{416} normal block at 0x082B5158, 1000 bytes long.
Data: <Number of Aircra> 4E 75 6D 62 65 72 20 6F 66 20 41 69 72 63 72 61
{415} normal block at 0x04A0E200, 20 bytes long.
Data: < > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
{185} normal block at 0x049DA998, 64 bytes long.
Data: < O X8 8 > DC 4F BB 58 38 C5 9A 06 38 D3 88 00 00 00 00 00
PythonPlugin.cpp(76) : {172} normal block at 0x0088D338, 72 bytes long.
Data: < a X F <) > DC 61 BB 58 18 BB 46 06 3C 29 8A 06 CD CD CD CD
Object dump complete.
Here's the code so you can tell me what I'm doing wrong:
Code of the problem:
char* loop_planes(ac) {
    char *char1 = new char[1000];
    for (...) {
        strcpy(char1, "Number of Aircrafts\nHour of simulation\n\n");
        string tmp2 = fp.GetRoute();
        tmp2.replace(tmp2.begin(), tmp2.end(), " ", ","); // PROBLEM IS IN THIS LINE
        const char *tmp3 = tmp2.c_str();
        strcat(char1, tmp3);
    }
    return char1;
}
The fp.GetRoute() is a string like this: AMLUH UL619 BKD UM748 RUTOL
Also, now that I'm talking about memory allocation, I don't want any future problems with memory leaks, so when should I delete char1, knowing that the thread is going to call this function?
When you call std::string::replace, the best match is a function template whose third and fourth parameters are input iterators. So the string literals you are passing are interpreted as the start and end of a range, which they are not. This leads to undefined behaviour.
You can fix this easily by using the algorithm std::replace instead:
std::replace(tmp2.begin(),tmp2.end(),' ',',');
Note that here the third and fourth parameters are single chars.
The answer from @juanchopanza correctly identifies and fixes the original problem, but since you've asked about memory leaks in general, I'd like to additionally suggest that you replace your function with something that doesn't use new, delete, strcpy or strcat.
std::string loop_planes() {
    std::string res("Number of Aircrafts\nHour of simulation\n\n");
    for (...) {
        std::string route = fp.GetRoute();
        std::replace(route.begin(), route.end(), ' ', ',');
        res += route;
    }
    return res;
}
This doesn't require any explicit memory allocation or deletion and does not leak memory. I also took the liberty of changing the return type from char * to std::string to eliminate messy conversions.
When I run the following code:
#include "gmock/gmock.h"
#include "gtest/gtest.h"
#define _CRTDBG_MAP_ALLOC
#include <crtdbg.h>
int main(int argc, char **argv)
{
    ::testing::InitGoogleTest(&argc, argv);
    _CrtDumpMemoryLeaks();
    return 0;
}
I get the following output:
Detected memory leaks!
Dumping objects ->
{652} normal block at 0x00074CE0, 4 bytes long.
Data: < L > 98 4C 07 00
{651} normal block at 0x00074C98, 12 bytes long.
Data: <, > 2C 03 1B 01 00 00 00 00 00 00 00 00
{650} normal block at 0x00074C50, 8 bytes long.
Data: <hI > 68 49 07 00 00 00 00 00
{649} normal block at 0x00074C10, 4 bytes long.
Data: <t > 74 03 1B 01
{648} normal block at 0x00074BC8, 8 bytes long.
Data: <xK > 78 4B 07 00 00 00 00 00
{647} normal block at 0x00074B70, 28 bytes long.
Data: < K L > BC 01 1B 01 01 CD CD CD C8 4B 07 00 E0 4C 07 00
{646} normal block at 0x00074B28, 8 bytes long.
Data: < I > 18 49 07 00 00 00 00 00
{645} normal block at 0x00074AE0, 8 bytes long.
Data: < I > 04 49 07 00 00 00 00 00
{644} normal block at 0x00074A98, 8 bytes long.
Data: < H > DC 48 07 00 00 00 00 00
{643} normal block at 0x00074A50, 8 bytes long.
Data: < H > C8 48 07 00 00 00 00 00
{642} normal block at 0x00074A08, 8 bytes long.
Data: < H > B4 48 07 00 00 00 00 00
{641} normal block at 0x000749C0, 8 bytes long.
Data: < H > A0 48 07 00 00 00 00 00
{640} normal block at 0x00074E90, 1 bytes long.
Data: < > 00
{639} normal block at 0x00074870, 272 bytes long.
Data: < t N > 20 03 1B 01 CD CD CD CD 74 FA 1B 01 90 4E 07 00
{638} normal block at 0x00074F68, 72 bytes long.
Data: <C:\Users\Baz> 43 3A 5C 55 73 65 72 73 5C 45 42 41 52 47 52 49
{637} normal block at 0x00074E48, 8 bytes long.
Data: <hO G > 68 4F 07 00 47 00 00 00
{616} normal block at 0x00074EE0, 72 bytes long.
Data: <C:\Users\Baz> 43 3A 5C 55 73 65 72 73 5C 45 42 41 52 47 52 49
{595} normal block at 0x00074828, 8 bytes long.
Data: < > F0 F9 1B 01 00 00 00 00
{594} normal block at 0x000747E8, 1 bytes long.
Data: < > 00
{561} normal block at 0x000747A0, 5 bytes long.
Data: <fast > 66 61 73 74 00
{496} normal block at 0x00074760, 1 bytes long.
Data: < > 00
{311} normal block at 0x00074720, 1 bytes long.
Data: < > 00
{282} normal block at 0x000746E0, 2 bytes long.
Data: <* > 2A 00
{253} normal block at 0x00074698, 5 bytes long.
Data: <auto > 61 75 74 6F 00
Object dump complete.
What am I doing wrong?
Adding to the accepted answer, the Google documentation states:
Since the statically initialized Google Test singleton requires allocations on the heap, the Visual C++ memory leak detector will report memory leaks at the end of the program run. The easiest way to avoid this is to use the _CrtMemCheckpoint and _CrtMemDumpAllObjectsSince calls to not report any statically initialized heap objects. See MSDN for more details and additional heap check/debug routines.
This involves calling _CrtMemCheckpoint just after ::testing::InitGoogleTest and then calling _CrtMemDumpAllObjectsSince after RUN_ALL_TESTS(). The main function then looks a bit like this:
::testing::InitGoogleTest(&argc, argv);
// Get a checkpoint of the memory after Google Test has been initialized.
_CrtMemState memoryState = {0};
_CrtMemCheckpoint( &memoryState );
int retval = RUN_ALL_TESTS();
// Check for leaks after tests have run
_CrtMemDumpAllObjectsSince( &memoryState );
return retval;
Unfortunately, if a test fails, Google Test itself causes a memory leak, which means this isn't a perfect solution.
You're not doing anything wrong. The 'memory leaks' come from heap allocations of the statically initialized Google Test singleton class.
Here's the answer from the Google Test FAQ: How do I suppress the memory leak messages on Windows?