I have been unable to find much, if any, information on this. I have a project which is built using VS2005 and thus uses msvcr80.dll. My project also loads a third-party library, which in turn loads the VC6 runtime (msvcrt.dll).
Now I have a strange bug in my program where it crashes with a memory read violation (in debug it is at 0xcdcdcdcd, which from my searches indicates an uninitialized memory location). The debugger shows that the violation occurs within an unknown function in the third-party library.
I have contacted the owners of this library and they do not know of any error like the one described. Also, I have other projects, compiled in VC6, which use this third-party library and do not show similar errors. So I am wondering: can there be issues with using multiple C runtime versions? I vaguely remember hearing about situations where one runtime (say, in the DLL) could allocate memory and then, if the other version tries to free that memory, it causes problems. However, I cannot remember where I read this, nor can I find much information on the subject.
Any input is much appreciated.
Freeing memory allocated by one version of the runtime in another version can certainly cause problems. There's no guarantee that the implementation details of the CRT heap remained the same between versions. If you can't find any other workaround, you could try compiling your application against the VC6 runtime as well.
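To make the failure mode concrete, here is a minimal sketch of the pattern that goes wrong when two CRT versions are involved (the module and function names are made up for illustration):

// thirdparty.dll -- built against an older CRT (e.g. the VC6 runtime)
#include <stdlib.h>

extern "C" __declspec(dllexport) char* make_buffer(int n)
{
    return (char*)malloc(n);   // block comes from the old CRT's heap
}

// app.exe -- built with VS2005, so free() below lives in msvcr80.dll
// char* p = make_buffer(128);
// ...
// free(p);   // undefined behavior: msvcr80's free() is handed a block
//            // that belongs to a different CRT's heap

The usual fixes are to export a matching release function from the DLL (so the same CRT that allocated the block also frees it), or to make sure every module links against the same CRT DLL.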
If you are seeing 0xcdcdcdcd, then you may be mixing the debug and release run-time libraries. They should work OK together, but you might try reproducing the issue using only the release runtime.
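For reference, 0xCD is one of the MSVC debug-heap fill patterns: freshly allocated memory is filled with 0xCD, freed memory with 0xDD, and guard bytes with 0xFD. A pointer that reads 0xCDCDCDCD therefore usually means something was allocated but never initialized, as in this small sketch (MSVC debug build only):

#include <cstdio>

struct Node {
    Node* next;   // never initialized below
};

int main()
{
    Node* n = new Node;                     // debug operator new fills the block with 0xCD
    std::printf("%p\n", (void*)n->next);    // prints CDCDCDCD in an MSVC debug build
    // dereferencing n->next would crash with an access violation at 0xcdcdcdcd
    delete n;
    return 0;
}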
I read about this some time ago but am unable to locate the change to the CRT documented on MSDN or anywhere else on the web.
I think the CRT was changed in the VC++ release that ships with VS2012 so that it no longer uses a private heap for allocations and uses the process heap instead.
As far as I know, this targeted the issue of memory being allocated in one module that statically links the CRT and then released in a different module from the one where it was allocated.
This is a major change in my view, and I wonder why I cannot find any documentation mentioning it.
Either that or I made it up (which I doubt, as I discussed the topic with colleagues some time ago).
That is correct. This change was made in VS2012 and appears to be the behavior going forward. You can see the relevant code in the CRT source that ships with Visual Studio, in the vc/crt/src/heapinit.c source file:
int __cdecl _heap_init (void)
{
    if ( (_crtheap = GetProcessHeap()) == NULL )
        return 0;
    return 1;
}
Previous versions used HeapCreate() here, along with some VS5 and VS6 legacy hacks. The change is not well publicized, so the exact reasoning behind it is not entirely clear. One important detail is that VS2012 initially shipped without XP support, which dropped the requirement to explicitly enable the low-fragmentation heap. The LFH is enabled automatically on Vista and later, so the default process heap is already good as-is.
A major advantage of using the default process heap is that it solves the very nasty problems with DLLs having their own copy of the CRT and thus using their own allocators. That traditionally ended very poorly when one DLL needed to release memory allocated by another. No such problems anymore: as long as the DLLs are rebuilt to target the newer CRT, they all automatically use the exact same heap, so passing pointers across module boundaries is safe.
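One quick way to observe this (a hypothetical test program, MSVC only) is to compare the CRT heap handle with the default process heap; with the VS2012+ CRT they should be the same handle, while older CRTs normally report different ones:

#include <windows.h>
#include <malloc.h>    // _get_heap_handle()
#include <cstdio>

int main()
{
    HANDLE crtHeap  = reinterpret_cast<HANDLE>(_get_heap_handle());
    HANDLE procHeap = GetProcessHeap();
    std::printf("CRT heap: %p, process heap: %p -> %s\n",
                crtHeap, procHeap,
                crtHeap == procHeap ? "same heap" : "different heaps");
    return 0;
}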
Some concerns are valid as well. It is not clear to me what happened in Update 1, the one that brought back XP support. Assuming the CRT source is accurate (it looks like it is), there are good odds that your program runs without the LFH enabled on XP.
And there's a possible security concern: the default process heap is also used by the winapi, so it is now technically possible for malware to exploit a buffer overflow bug in your code and reach winapi buffers. If you've previously subjected your code to a security analysis and/or fuzz testing, you should redo it. I assume that Microsoft is relying heavily on their secure CRT implementation to avoid the security issues, but that's a blind guess. Hopefully James McNellis will be available to give deeper insight.
I've just started trying to integrate the Poco C++ Library with our game engine; however, every time I link /usr/lib/libPocoFoundation.so my program suddenly has 51 memory leaks. Removing the link option gets rid of all the leaks (none of them are from my code). This happens even if I remove all Poco #includes from my C++ files.
I doubt there are really 51 memory leaks in Poco's Foundation (core) methods - searching their forums didn't reveal anything, and I'm sure other users would have noticed something this glaring. I think it is more likely a problem with how I'm linking to Poco.
I'm on Ubuntu 11, using Code::Blocks for an IDE, g++ 4.5.2 to build, and fetched Poco 1.3.6p1-1build1 from the ubuntu ppa (sudo apt-get install libpoco-dev libpoco-doc).
Any suggestions on what the problem could be are most welcome! The only thing I can think of is that I'm not linking Poco correctly.
Possibly related: The OP at c-poco-lib-linking-error-when-trying-static-linking-vs9-express mentions something about "Added Preprocessor flags Foundation_EXPORTS POCO_STATIC PCRE_STATIC" - what would that do? I can't see any information on the Poco reference pages about how to correctly link Poco.
You don't say what tool is telling you that you have memory leaks, but I'm going to presume that it is valgrind since you are on Ubuntu.
If the leaks occur even if you aren't calling any POCO methods, then it is likely that these are one-off allocations occurring during the static initialization of the POCO library that for whatever reason are not being torn down later.
While that isn't particularly great behavior on the part of a library, it is not fatal. The leaks that you should be worried about are ones that will occur repeatedly and will gradually consume memory.
I'd recommend using valgrind --gen-suppressions=all to generate suppressions for the one-time leaks for now (made easier by the fact that you aren't calling any POCO methods). Then have a look at the POCO library and see if you can figure out why these allocations aren't being unwound at .fini time. If you can, great, let the POCO people have your fixes, and then you can remove your suppression entries. If not, leave them in so these 'false positives' don't interfere with finding truly harmful memory leaks in your code.
Some objects must be deallocated by calling the release() method.
Below is an extract from the POCO documentation:
"
DOMObject defines the rules for memory management in this implementation of the DOM. Violation of these rules, which are outlined in the following, results in memory leaks or dangling pointers.
Every object created by new or by a factory method (for example, Document::create*) must be released with a call to release() or autoRelease() when it is no longer needed. "
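In practice the usual way to follow that rule is Poco's AutoPtr, which calls release() automatically. A minimal sketch, assuming Poco's DOM headers and the usual DOMParser API:

#include <Poco/DOM/DOMParser.h>
#include <Poco/DOM/Document.h>
#include <Poco/DOM/AutoPtr.h>

int main()
{
    Poco::XML::DOMParser parser;

    // parseString() returns a Document* with a reference count of 1;
    // AutoPtr takes ownership and calls release() when it goes out of scope.
    Poco::XML::AutoPtr<Poco::XML::Document> pDoc =
        parser.parseString("<root><child/></root>");

    // ... work with the DOM here ...

    return 0;   // pDoc releases the Document, so no leak is reported
}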
OK... I've done all the reading on related questions, a few MSDN articles, and about a day's worth of googling.
What's the current "state of the art" answer to this question:
I'm using VS 2008, C++ unmanaged code. I have a solution file with quite a few DLLs and quite a few EXEs. As long as I completely control the build environment, such that all pieces and parts are built with the same flags and use the same runtime libraries, and nothing statically links the CRT, am I OK to pass STL objects around?
It seems like this should be OK, but depending on which article you read, there's lots of Fear, Uncertainty, and Doubt.
I know there are all sorts of problems with templates that produce static data behind the scenes (every DLL would get its own copy, leading to heartache), but what about regular old STL?
As long as they ALL use the exact same version of the runtime DLLs, there should be no problem with STL. But once you have several different runtimes around, they will, for instance, use different heaps - leading to no end of trouble.
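As an illustration (the names below are made up), this is the kind of interface that works fine when every module links the same dynamic CRT, e.g. /MD against the VS2008 runtime, and blows up when one of them statically links its own copy:

// widgets.dll
#include <string>
#include <vector>

__declspec(dllexport)
void get_widget_names(std::vector<std::string>& out)
{
    // Allocations happen inside the DLL...
    out.push_back("alpha");
    out.push_back("beta");
}

// app.exe
// std::vector<std::string> names;
// get_widget_names(names);
// ...and the strings are freed by the EXE when `names` goes out of scope.
// This is safe only if both modules share the same CRT heap (same /MD runtime).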
We successfully pass STL objects around in our application, which is made up of dozens of DLLs. To ensure it keeps working, one of our automated tests, run at every build, verifies the settings of all projects. If you add a new project and misconfigure it, or break the configuration of an existing project, the build fails.
The settings we check are as follows. Note that not all of these will cause issues, but we check them for consistency.
#defines
_WIN32_WINNT
STRICT
_WIN32_IE
NDEBUG
_DEBUG
_SECURE_SCL
Compiler options
DebugInformationFormat
WholeProgramOptimization
RuntimeLibrary
We use STL collections in our application and pass them to and from methods in different DLLs (usually as references). This doesn't cause any trouble.
The only area where we have had trouble is where one DLL allocates memory and another DLL tries to delete it. This is reported as bad, but I am not sure exactly why. However, it only seems to be a problem in debug builds (where it is reported); it still works in release builds. Having said that, wherever I come across this I do fix it.
If I were writing a third-party library I would think twice about using STL parameters in the API. Previously (with VC6) we had to use OCI (Oracle's C API) as opposed to OCCI (Oracle's C++ API), because the latter only worked with the Microsoft STL implementation and we were using STLport. Of course, if you let your clients build the library with their own STL implementation, this is not an issue.
Yesterday, I got bitten by a rather annoying crash when using DLLs compiled with GCC under Cygwin. Basically, as soon as you run under a debugger, you may end up landing in a debugging trap caused by RtlFreeHeap() receiving the address of something it did not allocate.
This is a known bug with GCC 3.4 on Cygwin. The situation arises because the libstdc++ library includes a "clever" optimization for empty strings. I'll spare you the details (see the references throughout this post), but whenever you allocate memory in one DLL for an std::string object that "belongs" to another DLL, you end up giving one heap a chunk to free that came from another heap. Hence the SIGTRAP in RtlFreeHeap().
There are other problems reported when exceptions are thrown across DLL boundaries.
This makes GCC 3.4 on Windows an unacceptable solution as soon as your project is based on DLLs and the STL. I have a few options for working around this problem, many of which are very time-consuming and/or annoying:
I can patch my libstdc++ or rebuild it with the --enable-fully-dynamic-string configuration option
I can use static libraries instead, which increases my link time
I cannot (yet) switch to another compiler either, because of some other tools I'm using. The comments I find from some GCC people are that "it's almost never reported, so it's probably not a problem", which annoys me even more.
Does anyone have some news about this? I can't find any clear announcement that this has been fixed (the bug is still marked as "assigned"), except one comment on the GNU Radio bug tracker.
Thanks!
The general problem you're running into is that C++ was never really meant as a component language. It was really designed for creating complete, standalone applications. Things like shared libraries and other such mechanisms were created by vendors on their own. Think of this example: suppose you created a C++ component that returns a C++ object. How is the C++ component to know that it will be used by a C++ caller? And if the caller is a C++ application, why not just use the library directly?
Of course, the above information doesn't really help you.
Instead, I would create the shared libraries/DLLs such that you follow a couple of rules:
Any object created by a component is also destroyed by the same component.
A component can be safely unloaded when all of its created objects are destroyed.
You may have to add APIs to your component to enforce these rules, but by following them, you will ensure that problems like the one described won't happen.
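A common way to implement those two rules (the names below are illustrative, not any specific library's API) is to export matched create/destroy functions so that allocation and deallocation always happen inside the component:

// widget_api.h -- shared between the DLL and its clients
struct Widget {
    virtual void doWork() = 0;
    virtual ~Widget() {}
};

extern "C" __declspec(dllexport) Widget* CreateWidget();          // rule 1: the DLL allocates...
extern "C" __declspec(dllexport) void    DestroyWidget(Widget*);  // ...and the DLL frees

// inside the DLL:
//   Widget* CreateWidget()           { return new ConcreteWidget(); }
//   void    DestroyWidget(Widget* w) { delete w; }

// client code:
//   Widget* w = CreateWidget();
//   w->doWork();
//   DestroyWidget(w);   // never plain `delete w` in the client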
Back in the '90s when I first started out with MFC, I used to dynamically link my apps and ship the relevant MFC DLLs. This caused me a few issues (DLL hell!) and I switched to static linking instead - not just for MFC, but for the CRT and ATL as well. Other than larger EXE files, static linking has never caused me any problems at all - so are there any downsides that other people have come across? Is there a good reason for revisiting dynamic linking? My apps are mainly STL/Boost nowadays, FWIW.
Most of the answers I hear about this involve sharing your DLLs with other programs, or having those DLLs updated without the need to patch your software.
Frankly, I consider those to be downsides, not upsides. When a third-party DLL is updated, it can change enough to break your software. And these days, hard drive space isn't as precious as it once was; an extra 500 KB in your executable? Who cares?
Being 100% sure of the version of dll that your software is using is a good thing.
Being 100% sure that the client is not going to have a dependency headache is a good thing.
The upsides far outweigh the downsides, in my opinion.
There are some downsides:
Bigger EXE size (especially if you ship multiple EXEs)
Problems using other DLLs which rely on or assume dynamic linking (e.g. third-party DLLs which you cannot get as static libraries)
Different C runtimes between DLLs with independent static linkage (no cross-module allocate/deallocate)
No automatic servicing of shared components (no ability for a third-party module supplier to update their code to fix issues without your recompiling and updating your application)
We do static linking for our Windows apps, primarily because it allows xcopy deployment, which is just not possible when installing or relying on SxS DLLs in a way that actually works, since the process and mechanism are not well documented or easily remotable. If you use local DLLs in the install directory it will sort of work, but it's not well supported. The inability to easily do a remote installation without going through an MSI on the remote system is the primary reason we don't use dynamic linking, but (as you pointed out) there are many other benefits to static linking. There are pros and cons to each; hopefully this helps enumerate them.
As long as you keep your usage limited to certain libraries and do not use any DLLs, you should be good.
Unfortunately, there are some libraries that you cannot link statically. The best example I have is OpenMP. If you take advantage of Visual Studio's OpenMP support, you will have to make sure the runtime is installed (in this case vcomp.dll).
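For example, any code like the following, compiled with /openmp, picks up a run-time dependency on vcomp.dll (or vcompd.dll in debug builds), even if everything else in the program is statically linked:

#include <omp.h>

void scale(float* data, int n, float k)
{
    // The parallel loop is dispatched through the OpenMP runtime,
    // which Visual Studio provides as a DLL (vcomp*.dll).
    #pragma omp parallel for
    for (int i = 0; i < n; ++i)
        data[i] *= k;
}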
If you do use DLLs, then you can't pass some items back and forth without some serious gymnastics. std::strings come to mind. If your EXE and DLL are both dynamically linked against the CRT, the allocation takes place in the shared CRT. Otherwise your program may try to allocate the string on one side and deallocate it on the other. Bad things ensue...
That said, I still statically link my EXEs and DLLs. It does remove a lot of the variability in the install, and I consider that well worth the few limitations.
One good feature of using DLLs is that if multiple processes load the same DLL, its code can be shared between them. This can save memory and shorten load times for an application loading a DLL that's already in use by another program.
No, nothing new on that front. Keep it that way.
Most definitely.
Allocation is done on a 'static' heap. Since allocation and deallocation should be done on the same heap, this means that if you ship a library, you should take care that client code cannot call 'your' p = new LibClass() and then delete that object itself using delete p;.
My conclusion: either shield allocation and deallocation from client code, or dynamically link the CRT.
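One sketch of the first option (class and function names are illustrative): give the exported class its own operator new/delete, defined inside the library, so that even when client code writes new LibClass / delete p, the actual malloc and free run in the library's CRT:

// lib_class.h -- shipped to clients
#include <cstddef>

class __declspec(dllexport) LibClass {
public:
    // Defined in the library's .cpp, so the allocation and the
    // deallocation both happen inside the library's own CRT/heap.
    static void* operator new(std::size_t n);
    static void  operator delete(void* p);

    void doSomething();
};

// lib_class.cpp -- inside the library
// #include <cstdlib>
// #include <new>
// void* LibClass::operator new(std::size_t n)
// {
//     if (void* p = std::malloc(n)) return p;
//     throw std::bad_alloc();
// }
// void LibClass::operator delete(void* p) { std::free(p); }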
There are some software licenses such as LGPL that require you to either use a DLL or distribute your application as object files that the user can link together. If you are using such a library, you'll probably want to use it as a DLL.