Why will cout.imbue(locale("")) cause memory leaks? - c++

My compiler is Visual C++ 2013. The following minimal program causes a few memory leaks to be reported.
Why, and how can I fix it?
#define _CRTDBG_MAP_ALLOC
#include <stdlib.h>
#include <crtdbg.h>
#include <cstdlib>
#include <iostream>
#include <locale>
using namespace std;
int main()
{
    _CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF);
    cout.imbue(locale("")); // If this statement is commented out, nothing is reported.
}
The debug window outputs as follows:
Detected memory leaks!
Dumping objects ->
{387} normal block at 0x004FF8C8, 12 bytes long.
Data: <z h - C N > 7A 00 68 00 2D 00 43 00 4E 00 00 00
{379} normal block at 0x004FF678, 12 bytes long.
Data: <z h - C N > 7A 00 68 00 2D 00 43 00 4E 00 00 00
{352} normal block at 0x004FE6E8, 12 bytes long.
Data: <z h - C N > 7A 00 68 00 2D 00 43 00 4E 00 00 00
{344} normal block at 0x004FE498, 12 bytes long.
Data: <z h - C N > 7A 00 68 00 2D 00 43 00 4E 00 00 00
{318} normal block at 0x004FD5C8, 12 bytes long.
Data: <z h - C N > 7A 00 68 00 2D 00 43 00 4E 00 00 00
{308} normal block at 0x004F8860, 12 bytes long.
Data: <z h - C N > 7A 00 68 00 2D 00 43 00 4E 00 00 00
Object dump complete.
The program '[0x5B44] cpptest.exe' has exited with code 0 (0x0).

I was using std::codecvt and ran into a similar problem. I am not sure whether it has the same cause; I am just trying to suggest a possible way to find the root cause.
You can refer to the example at http://www.cplusplus.com/reference/locale/codecvt/in/
That example actually uses a member of mylocale, and use_facet does not appear to have an rvalue-reference overload, so writing const facet_type& myfacet = std::use_facet<facet_type>(std::locale()); directly on a temporary locale may cause the same problem.
So try
auto myloc = locale("");
cout.imbue(myloc);
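A minimal sketch of the suggested workaround as a complete program (my assumption: binding the locale to a named object rather than a temporary is what makes the difference; whether it silences the report may depend on the CRT version):
#define _CRTDBG_MAP_ALLOC
#include <crtdbg.h>
#include <iostream>
#include <locale>
using namespace std;
int main()
{
    _CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF);
    locale myloc("");   // named locale object, alive for the rest of main
    cout.imbue(myloc);  // imbue with the named object instead of a temporary
    cout << 1234.5 << endl;
}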

Related

_CrtSetBreakAlloc using memory contents and not the allocation number

I'm using the _CrtSetBreakAlloc() function to track down memory leaks in debug builds of my MFC project. (Here's the code from my previous question.)
That technique works as long as the allocation order number stays the same, but in many cases it does not. For instance, here are two reports that I'm getting now:
First run:
Detected memory leaks!
Dumping objects ->
{222861} normal block at 0x000002BDF58347C0, 240 bytes long.
Data: <C : \ P r o g r > 43 00 3A 00 5C 00 50 00 72 00 6F 00 67 00 72 00
{222860} normal block at 0x000002BDEFBA52A0, 16 bytes long.
Data: < > 10 AF B7 EF BD 02 00 00 00 00 00 00 00 00 00 00
{222859} normal block at 0x000002BDEFB7AF10, 40 bytes long.
Data: < R G > A0 52 BA EF BD 02 00 00 C0 47 83 F5 BD 02 00 00
Object dump complete.
Second run:
Detected memory leaks!
Dumping objects ->
{222422} normal block at 0x00000123DDB67540, 224 bytes long.
Data: <C : \ P r o g r > 43 00 3A 00 5C 00 50 00 72 00 6F 00 67 00 72 00
{222419} normal block at 0x00000123DDBA9C50, 16 bytes long.
Data: < # > 80 16 B7 DD 23 01 00 00 00 00 00 00 00 00 00 00
{222418} normal block at 0x00000123DDB71680, 40 bytes long.
Data: <P # #u # > 50 9C BA DD 23 01 00 00 40 75 B6 DD 23 01 00 00
Object dump complete.
So I'm wondering: is there a function, or a way to rewrite _CrtSetBreakAlloc, that triggers a breakpoint on the memory contents instead? For instance, in my case, when the UTF-16 string "C:\Progr" gets written into the block.
There is already _CrtSetAllocHook, but how would that help? The data is written AFTER the allocation, so no hook fires at the moment the bytes you want to trigger on are written into the allocated memory.
The only way I see is to use _CrtDoForAllClientObjects and search through all allocated blocks.
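A rough sketch of that idea (assumptions: the interesting blocks are allocated with the _CLIENT_BLOCK type, since _CrtDoForAllClientObjects only visits client blocks, and each visited block is at least as large as the pattern):
#define _CRTDBG_MAP_ALLOC
#include <crtdbg.h>
#include <intrin.h>
#include <cstring>
// UTF-16 pattern to look for at the start of a block.
static const wchar_t kPattern[] = L"C:\\Progr";
static void CheckBlock(void* userData, void* /*context*/)
{
    // Compare the leading bytes of the block with the pattern (terminator excluded).
    if (std::memcmp(userData, kPattern, sizeof(kPattern) - sizeof(wchar_t)) == 0)
        __debugbreak(); // stop in the debugger when a matching block is found
}
void ScanClientBlocksForPattern()
{
    _CrtDoForAllClientObjects(CheckBlock, nullptr);
}
Note that this breaks at the time of the scan, not at the time of the allocation, so it only confirms that such a block exists; you still need the allocation number from the same run to find where it was created.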

memory leak in osgearth

My goal is to use the OSGEarth library in an MFC project that displays the model "openstreetmap.earth". I got this working and can see the earth, but every time I close the project, the output window in VS2015 says there are memory leaks in the program.
Here is the window output:
Detected memory leaks!
Dumping objects ->
{306240} normal block at 0x00000000076902F0, 16 bytes long.
Data: <0,i > 30 2C 69 07 00 00 00 00 00 00 00 00 00 00 00 00
{306239} normal block at 0x0000000007692C30, 9 bytes long.
Data: <Pragma: > 50 72 61 67 6D 61 3A 20 00
{303648} normal block at 0x0000000007693040, 16 bytes long.
Data: < 5i > 90 35 69 07 00 00 00 00 00 00 00 00 00 00 00 00
{303647} normal block at 0x0000000007693590, 9 bytes long.
Data: <Pragma: > 50 72 61 67 6D 61 3A 20 00
{301180} normal block at 0x00000000076938B0, 16 bytes long.
Data: <`8i > 60 38 69 07 00 00 00 00 00 00 00 00 00 00 00 00
{301179} normal block at 0x0000000007693860, 9 bytes long.
Data: <Pragma: > 50 72 61 67 6D 61 3A 20 00
{297799} normal block at 0x0000000007691060, 16 bytes long.
Data: < i > 10 10 69 07 00 00 00 00 00 00 00 00 00 00 00 00
I examined the program and found that if I remove the line m_Model = osgDB::readNodeFile(m_strModelName); there are no more memory leaks.
void COSGEarth::InitSceneGraph(void)
{
    // Init the main Root Node/Group
    m_Root = new osg::Group;
    // Load the model from the model name
    // (if the next line is removed, there is no memory leak)
    m_Model = osgDB::readNodeFile(m_strModelName);
    if (!m_Model) return;
    // Optimize the model
    osgUtil::Optimizer optimizer;
    optimizer.optimize(m_Model.get());
    optimizer.reset();
    // Add the model to the scene
    m_Root->addChild(m_Model.get());
}
I declared m_Model as osg::ref_ptr<osg::Node> m_Model;, which is a reference-counted smart pointer.
Why are there memory leaks, and how can I solve this issue?
Here is source code :http://bbs.osgchina.org/forum.php?mod=attachment&aid=NzIwNnwzZWYxZDIyZjlhOGY1MWFjZjhiNGFiMWYwMTc5YmJlNXwxNTEyMzc5ODE2&request=yes&_f=.zip
I believe these reported "leaks" are false positives. Refer to this thread that explains why:
http://forum.openscenegraph.org/viewtopic.php?t=1475
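If you want the CRT dump to come out clean anyway, a hedged option (my assumption, in line with the linked thread, is that the reported blocks belong to osgDB's registry caches rather than to your own code) is to release your references and clear those caches before shutdown, roughly like this:
#include <osgDB/Registry>
void COSGEarth::CleanupSceneGraph(void) // hypothetical helper, called before the app exits
{
    m_Model = nullptr; // drop our ref_ptr references first
    m_Root = nullptr;
    // Ask osgDB to drop cached objects and loaded plugin libraries
    // (assumption: these caches are what the CRT reports as leaked).
    osgDB::Registry::instance()->clearObjectCache();
    osgDB::Registry::instance()->closeAllLibraries();
}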

Memory leak while using shared_ptr

Detected memory leaks!
Dumping objects ->
{9370} normal block at 0x000000C16B24C480, 24 bytes long.
Data: <`h= > 60 68 3D FB F6 7F 00 00 01 00 00 00 01 00 00 00
{8549} normal block at 0x000000C16B25CC30, 21627 bytes long.
Data: < 0 %k > FA FA FA FA FA FA FA FA 30 CC 25 6B C1 00 00 00
{5196} normal block at 0x000000C16B253320, 12839 bytes long.
Data: < > CD CD CD CD CD CD CD CD CD CD CD CD CD CD CD CD
{192} normal block at 0x000000C16B24CE40, 24 bytes long.
Data: < m= > 20 6D 3D FB F6 7F 00 00 02 00 00 00 01 00 00 00
{191} normal block at 0x000000C16B251780, 16 bytes long.
Data: < $k > 10 DB 24 6B C1 00 00 00 00 00 00 00 00 00 00 00
{190} normal block at 0x000000C16B251410, 16 bytes long.
Data: < $k > F0 DA 24 6B C1 00 00 00 00 00 00 00 00 00 00 00
{189} normal block at 0x000000C16B2514B0, 16 bytes long.
Data: < $k > D0 DA 24 6B C1 00 00 00 00 00 00 00 00 00 00 00
{188} normal block at 0x000000C16B2516E0, 16 bytes long.
Data: < $k > B0 DA 24 6B C1 00 00 00 00 00 00 00 00 00 00 00
{187} normal block at 0x000000C16B251690, 16 bytes long.
Data: < $k > 90 DA 24 6B C1 00 00 00 00 00 00 00 00 00 00 00
{186} normal block at 0x000000C16B251370, 16 bytes long.
Data: <p $k > 70 DA 24 6B C1 00 00 00 00 00 00 00 00 00 00 00
{185} normal block at 0x000000C16B251230, 16 bytes long.
Data: <P $k > 50 DA 24 6B C1 00 00 00 00 00 00 00 00 00 00 00
{184} normal block at 0x000000C16B24DA50, 224 bytes long.
Data: <0 %k #3%k > 30 12 25 6B C1 00 00 00 40 33 25 6B C1 00 00 00
{156} normal block at 0x000000C16B24C4E0, 24 bytes long.
Data: <P $k # $k > 50 DA 24 6B C1 00 00 00 40 CE 24 6B C1 00 00 00
{155} normal block at 0x000000C16B24C300, 32 bytes long.
Data: <../dataset/refer> 2E 2E 2F 64 61 74 61 73 65 74 2F 72 65 66 65 72
{154} normal block at 0x000000C16B250AB0, 16 bytes long.
Data: < k > A8 F4 09 6B C1 00 00 00 00 00 00 00 00 00 00 00
Object dump complete.
'3DMM_1st.exe' (Win32): Loaded 'C:\Windows\System32\kernel.appcore.dll'. Cannot find or open the PDB file.
The program '[36392] 3DMM_1st.exe' has exited with code 1 (0x1).
Can anyone help me? I have a problem with memory leaks and don't know how to solve it; any suggestions would be greatly appreciated.
Here is some info about my code. I created a struct named ObjectData and a class named ObjectLoader as follows:
struct ObjectData {
    std::vector<glm::vec3> vertices, normals, colors;
    std::vector<glm::vec2> texCoords;
    std::vector<unsigned int> vIndices, uIndices, nIndices;
};
class ObjectLoader {
private:
    std::tr1::shared_ptr<ObjectData> object;
    bool hasUV, hasNormals, hasColor, colorChecked, indexChecked;
    std::string parseString(std::string src, std::string code);
    std::vector<glm::vec3> parseVerColor(std::string src, std::string code);
    glm::vec2 parseVec2(std::string src, std::string code);
    glm::vec3 parseVec3(std::string src, std::string code);
    void addIndices(std::string str);
    void checkIndices(std::string str);
    void checkColors(std::string str);
    void loadObjects(std::string objPath);
public:
    ObjectLoader(std::string objName);
    ~ObjectLoader();
    std::tr1::shared_ptr<ObjectData> getModel();
};
Here is the getModel() and ObjectLoader() implementation code:
std::tr1::shared_ptr<ObjectData> ObjectLoader::getModel() {
    return object;
}
ObjectLoader::ObjectLoader(std::string objName) {
    indexChecked = false;
    colorChecked = false;
    std::string fileName = objName;
    object = std::tr1::shared_ptr<ObjectData>(new ObjectData());
}
When I test my code I get the memory-leak report.
Here is my test code:
std::tr1::shared_ptr<ObjectLoader> loader = std::tr1::shared_ptr<ObjectLoader>(new ObjectLoader(fileName));
std::tr1::shared_ptr<ObjectData> data = loader->getModel();
_CrtDumpMemoryLeaks();
You have a problem detecting the leaks because of the scope of the std::shared_ptr.
In the code:
std::tr1::shared_ptr<ObjectLoader> loader = std::tr1::shared_ptr<ObjectLoader>(new ObjectLoader(fileName));
std::tr1::shared_ptr<ObjectData> data = loader->getModel();
_CrtDumpMemoryLeaks();
The loader and data destructors, and hence the deletions, do not run until after the call to _CrtDumpMemoryLeaks() has already reported the leaks.
Adding an extra scope helps; otherwise the code needs to be restructured.
{
    std::tr1::shared_ptr<ObjectLoader> loader = std::tr1::shared_ptr<ObjectLoader>(new ObjectLoader(fileName));
    std::tr1::shared_ptr<ObjectData> data = loader->getModel();
} // destructors run here...
_CrtDumpMemoryLeaks();
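An alternative sketch (based on the _CRTDBG_LEAK_CHECK_DF flag used in the first question above): let the CRT dump leaks automatically at process exit, after all destructors have run, instead of calling _CrtDumpMemoryLeaks() by hand:
#define _CRTDBG_MAP_ALLOC
#include <crtdbg.h>
int main()
{
    // Ask the debug CRT to report leaks when the process shuts down.
    _CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF);
    // ... create and use the shared_ptrs here; no explicit dump call is needed ...
}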

Memory Leak with Openssl when allocating memory for X509_STORE

I am using openssl in my project. When I exit my application I get "Detected memory leaks!" in Visual Studio 2013.
Detected memory leaks!
Dumping objects ->
{70202} normal block at 0x056CB738, 12 bytes long.
Data: <8 j > 38 E8 6A 05 00 00 00 00 04 00 00 00
{70201} normal block at 0x056CB6E8, 16 bytes long.
Data: < > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
{70200} normal block at 0x056CB698, 20 bytes long.
Data: < l > 00 00 00 00 E8 B6 6C 05 00 00 00 00 04 00 00 00
{70199} normal block at 0x056AE838, 12 bytes long.
Data: < l > 04 00 00 00 98 B6 6C 05 00 00 00 00
{70198} normal block at 0x056CB618, 64 bytes long.
Data: < > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
{70197} normal block at 0x056CB578, 96 bytes long.
Data: < l 3 3 > 18 B6 6C 05 00 FE C0 33 C0 FD C0 33 08 00 00 00
Object dump complete.
When I add
_CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF);
_CrtSetBreakAlloc(70202);
to the main function, I always get a breakpoint at the allocation of the X509 store, no matter which of the 6 numbers (70202, ...) I set the breakpoint to.
I initialize and uninitialize the X509 store in a class's constructor and destructor (see below).
Is there anything else I need to look out for when using X509_STORE?
Foo::CSCACerts::CSCACerts(void)
{
    m_store = X509_STORE_new();
}
Foo::CSCACerts::~CSCACerts(void)
{
    X509_STORE_free(m_store);
}
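A hedged guess of mine (not verified): the reported blocks might belong to OpenSSL's global tables rather than to the store itself; in OpenSSL 1.0.x those tables are only released by the explicit library-wide cleanup calls, roughly as in this sketch:
#include <openssl/conf.h>
#include <openssl/crypto.h>
#include <openssl/engine.h>
#include <openssl/err.h>
#include <openssl/evp.h>
void ShutdownOpenSSL() // hypothetical helper, called once after all OpenSSL objects are freed
{
    CONF_modules_unload(1);        // unload configuration modules
    ENGINE_cleanup();              // release the engine table
    EVP_cleanup();                 // release cipher/digest tables
    CRYPTO_cleanup_all_ex_data();  // release ex_data structures
    ERR_remove_thread_state(NULL); // free the calling thread's error queue
    ERR_free_strings();            // release error string tables
}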

Memory leaks in boost asio

I have a client/server app whose interaction is implemented via Boost.Asio.
I created a unit test to check a long-running transmission of data.
During the test a memory leak is detected.
Task Manager shows that memory usage grows constantly, up to 35 MB per 10 minutes. The report produced at the end of the test contains this:
Result StandardError: Detected memory leaks!
Dumping objects ->
{14522} normal block at 0x00E8ADC0, 16 bytes long.
Data: < _M} Y > B0 5F 4D 7D F9 59 F2 02 F4 E9 E6 00 CC CC CC CC
{14012} normal block at 0x00E8B280, 16 bytes long.
Data: < v > C0 76 A4 00 94 01 00 00 98 01 00 00 F0 D2 E3 00
{14011} normal block at 0x00E74B38, 12 bytes long.
Data: < > 00 00 00 00 9C 01 00 00 98 01 00 00
{14007} normal block at 0x00E745F8, 8 bytes long.
Data: < L > E0 4C E5 00 00 00 00 00
{14006} normal block at 0x00E54CB8, 60 bytes long.
Data: < v 4 > E4 76 A4 00 D0 D3 B0 00 00 00 00 00 34 80 E3 00
{13724} normal block at 0x00E710F8, 385 bytes long.
Data: < > 03 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00
{13722} normal block at 0x00E85C58, 28 bytes long.
Data: < F _ _ > F2 B6 46 00 B4 5F E3 00 A0 5F E3 00 BC 96 E7 00
{13720} normal block at 0x00E6F9B8, 80 bytes long.
Data: <wxF > 77 78 46 00 FC FF FF FF 00 00 00 00 CC CC CC CC
{13700} normal block at 0x00E6DFD0, 88 bytes long.
Data: < > C8 A4 A4 00 01 00 00 00 01 00 00 00 00 00 00 00
…
Data: <` X L > 60 8E E0 00 58 17 E2 00 CD 4C F7 EA
{153} normal block at 0x00DF0070, 12 bytes long.
Data: <` kf > 60 8D E0 00 98 00 E2 00 15 6B 66 0E
{151} normal block at 0x00DF0038, 12 bytes long.
Data: < .g> 20 86 E0 00 E0 FC E1 00 9D B7 2E 67
{149} normal block at 0x00DF0658, 12 bytes long.
Data: < G > A0 89 E0 00 00 00 00 00 47 01 D5 11
{147} normal block at 0x00DF0268, 12 bytes long.
Data: <` > 60 84 E0 00 A8 F5 E1 00 ED 8C AA BA
{145} normal block at 0x00DF0230, 12 bytes long.
Data: < ' " > 20 84 E0 00 00 11 E2 00 27 B0 22 00
{143} normal block at 0x00DF0690, 12 bytes long.
Data: <` P KnOQ> 60 88 E0 00 50 04 E2 00 4B 6E 4F 51
{141} normal block at 0x00DF0540, 12 bytes long.
Data: <` > 7> 60 82 E0 00 00 0A E2 00 3E 0D 9E 37
{139} normal block at 0x00DF0620, 12 bytes long.
Data: <Pq 1 > 50 71 DF 00 00 00 00 00 E5 DD 31 B5
{137} normal block at 0x00DF0700, 12 bytes long.
Data: < q # #> 10 71 DF 00 40 FA E1 00 14 8B 0D 23
{134} normal block at 0x00DF5CE0, 96 bytes long.
Data: <h BV BV > 68 19 E0 00 D0 42 56 00 E0 42 56 00 88 00 00 00
{133} normal block at 0x00DF0188, 8 bytes long.
Data: < \ > A0 5C DF 00 00 00 00 00
{132} normal block at 0x00DF5CA0, 16 bytes long.
Data: < > 88 01 DF 00 D8 AA DF 00 20 AC DF 00 20 AC DF 00
Object dump complete.
I tried to break on the reported allocations via Boost.Test's --detect_memory_leaks="allocation number" and by setting _crtBreakAlloc = 1000 in the Watch window in Debug mode. It does not work, maybe because the leaks occur not in my code but in Boost/OpenSSL code?
I can't figure out where the leaks occur. What can I do?
Windows 8, Visual Studio 2015, Boost 1.60, OpenSSL 1.0.2g
Have a look at this post for some suggested tips on dealing with memory leaks under Windows. Scroll down; don't just look at the first answer. In particular, it may be worth considering the DEBUG_NEW macro-based solution discussed in the second answer. Given that Boost.Asio is largely header-only, this should help you even if the offending allocations come from the Boost library.
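For reference, that macro-based approach usually looks something like the sketch below (a hedged outline, not the exact code from the linked answer); including it in each translation unit after the standard headers makes the debug CRT record the file and line of every new, so the leak dump shows source locations instead of only allocation numbers:
#ifdef _DEBUG
#define _CRTDBG_MAP_ALLOC
#include <cstdlib>
#include <crtdbg.h>
// Route operator new through the debug CRT so allocations carry __FILE__/__LINE__.
#define DEBUG_NEW new (_NORMAL_BLOCK, __FILE__, __LINE__)
#define new DEBUG_NEW
#endif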
Part 1: Report from Visual Studio about memory leaks
I'm using Boost.Asio to communicate with the server over TLS, i.e. Boost.Asio uses OpenSSL.
It seems that OpenSSL initializes itself and does not clean up that memory before the end of the app (because the app is closing and the memory will be released anyway).
This is not a big chunk of memory (I do not know how to measure it exactly).
As a result Visual Studio treats that memory as leaked, but it is not a real leak.
(This is my assumption; maybe the real reason for the report is something else, but I do not see any other possible reason.)
Part 2:
In the question above I asked about a memory leak of tens of MB. That turned out to be my own bad code, which built up a huge memory buffer.
The huge memory consumption together with the Visual Studio leak report made me believe that something was very wrong.
The buffer was easily reduced to a much smaller size.
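For anyone hitting the same mix of symptoms, here is a sketch of how the two effects can be separated (one-time library allocations versus a genuinely growing buffer), assuming the CRT debug heap is in use: diff heap checkpoints taken around the long-running part of the test, so only growth during the test is reported.
#define _CRTDBG_MAP_ALLOC
#include <crtdbg.h>
void RunTransmissionTest() // hypothetical wrapper around the long-running test body
{
    _CrtMemState before, after, diff;
    _CrtMemCheckpoint(&before);        // snapshot taken after Boost/OpenSSL are already initialized
    // ... run the long transmission test here ...
    _CrtMemCheckpoint(&after);
    if (_CrtMemDifference(&diff, &before, &after))
        _CrtMemDumpStatistics(&diff);  // only allocations made during the test show up here
}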