Crash with new/delete, but not with malloc/free in C++ code

My development environment is [Windows 7; Visual Studio 2010; x86].
I have a DLL that was built for Server 2003 a long time back. When I use it in my project and follow the new/delete sequence to use a class, the application crashes during the delete call. I verified the same behavior even without any other call between new and delete. When I replace new/delete with malloc/free, there is no crash. If I simply declare an instance of the class without new, no crash happens when the scope is exited.
Any idea what may be going wrong? This is an internal library of our company, so I will not be able to name it or share other such details.
Additional Information:
To use this library in the first place, I had to turn off the VS option "Treat wchar_t as built-in type".
The code is simple:
{
    CLogger * myLog = new CLogger();
    delete myLog; // Crash happens here
}

{ // No crash here
    CLogger MyLog;
}

{
    CLogger * myLog = (CLogger *) malloc(sizeof(CLogger));
    free(myLog); // This does not crash.
}
This being a proprietary library, I cannot post the constructor and destructor.

delete does more than just free memory: it also calls the destructor first. That means there must be something bad happening in the destructor of that class.
If an uncaught exception occurs in a destructor, the whole process exits (*).
(*) As commented below (thanks for the good feedback), this is over-simplified; here is a good link for more details:
throwing exceptions out of a destructor
I would recommend putting a
try {} catch (std::exception& e) {} catch (...) {}
inside the destructor and logging what is going on, or better, running it through the debugger with the option to stop at the place where the exception is thrown.
Then it should be easy to identify what is different. Just a guess from me: it may be some registry access or file access rights, where some changes were introduced between Server 2003 and Windows 7.
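For illustration, here is a minimal sketch of the kind of guarded destructor suggested above; since the real CLogger internals are not known, the cleanup step is left as a placeholder:

// requires <exception> and <cstdio>
CLogger::~CLogger()
{
    try {
        // ... whatever cleanup the real destructor performs, e.g. flushing/closing the log ...
    }
    catch (const std::exception& e) {
        std::fprintf(stderr, "~CLogger threw: %s\n", e.what());  // log and swallow
    }
    catch (...) {
        std::fprintf(stderr, "~CLogger threw an unknown exception\n");
    }
}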

I apply my psychic debugging skills to suggest that you are using delete where you should be using delete[].
Reasoning: if you were able to trivially replace new with malloc, you're probably allocating an array of primitive types rather than an object, and naively using delete in place of free on the assumption that object allocation and array allocation are the same in C++. (They're not.)
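To make the distinction concrete, here is a minimal illustration of the mismatch being described (the buffer is a placeholder, not the asker's actual code):

char* buffer = new char[1024];   // array new
delete[] buffer;                 // must use delete[]; plain delete here is undefined behaviour

CLogger* log = new CLogger();    // single-object new
delete log;                      // plain delete is the correct match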


Exceptions on unique_ptr and make_unique [duplicate]

There is a method called foo that sometimes fails with the following error:
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Abort
Is there a way that I can use a try-catch block to stop this error from terminating my program (all I want to do is return -1)?
If so, what is the syntax for it?
How else can I deal with bad_alloc in C++?
In general you cannot, and should not try to, respond to this error. bad_alloc indicates that a resource cannot be allocated because not enough memory is available. In most scenarios your program cannot hope to cope with that, and terminating soon is the only meaningful behaviour.
Worse, modern operating systems often over-allocate: on such systems, malloc and new can return a valid pointer even if there is not enough free memory left – std::bad_alloc will never be thrown, or is at least not a reliable sign of memory exhaustion. Instead, attempts to access the allocated memory will then result in a segmentation fault, which is not catchable (you can handle the segmentation fault signal, but you cannot resume the program afterwards).
The only thing you could do when catching std::bad_alloc is to perhaps log the error, and try to ensure a safe program termination by freeing outstanding resources (but this is done automatically in the normal course of stack unwinding after the error gets thrown if the program uses RAII appropriately).
In certain cases, the program may attempt to free some memory and try again, or use secondary memory (= disk) instead of RAM, but these opportunities only exist in very specific scenarios with strict conditions:
The application must ensure that it runs on a system that does not overcommit memory, i.e. it signals failure upon allocation rather than later.
The application must be able to free memory immediately, without any further accidental allocations in the meantime.
It's exceedingly rare that applications have control over point 1; userspace applications never do, as it's a system-wide setting that requires root permissions to change.1
OK, so let's assume you've fixed point 1. What you can now do is, for instance, use an LRU cache for some of your data (probably some particularly large business objects that can be regenerated or reloaded on demand). Next, you need to put the actual logic that may fail into a function that supports retry; in other words, if it gets aborted, you can just relaunch it:
lru_cache<widget> widget_cache;

double perform_operation(int widget_id) {
    std::optional<widget> maybe_widget = widget_cache.find_by_id(widget_id);
    if (not maybe_widget) {
        maybe_widget = widget_cache.store(widget_id, load_widget_from_disk(widget_id));
    }
    return maybe_widget->frobnicate();
}

…

for (int num_attempts = 0; num_attempts < MAX_NUM_ATTEMPTS; ++num_attempts) {
    try {
        return perform_operation(widget_id);
    } catch (std::bad_alloc const&) {
        if (widget_cache.empty()) throw; // memory error elsewhere.
        widget_cache.remove_oldest();
    }
}

// Handle too many failed attempts here.
But even here, using std::set_new_handler instead of handling std::bad_alloc provides the same benefit and would be much simpler.
1 If you’re creating an application that does control point 1, and you’re reading this answer, please shoot me an email, I’m genuinely curious about your circumstances.
You can catch it like any other exception:
try {
    foo();
}
catch (const std::bad_alloc&) {
    return -1;
}
Quite what you can usefully do from this point is up to you, but it's definitely feasible technically.
What is the C++ Standard specified behavior of new?
The usual notion is that if the new operator cannot allocate dynamic memory of the requested size, it should throw an exception of type std::bad_alloc.
However, something more happens even before a bad_alloc exception is thrown:
C++03 Section 3.7.4.1.3 says:
An allocation function that fails to allocate storage can invoke the currently installed new_handler (18.4.2.2), if any. [Note: A program-supplied allocation function can obtain the address of the currently installed new_handler using the set_new_handler function (18.4.2.3).] If an allocation function declared with an empty exception-specification (15.4), throw(), fails to allocate storage, it shall return a null pointer. Any other allocation function that fails to allocate storage shall only indicate failure by throwing an exception of class std::bad_alloc (18.4.2.1) or a class derived from std::bad_alloc.
Consider the following code sample:
#include <iostream>
#include <cstdlib>
#include <new>      // std::set_new_handler

// function to call if operator new can't allocate enough memory
void outOfMemHandler()
{
    std::cerr << "Unable to satisfy request for memory\n";
    std::abort();
}

int main()
{
    // set the new_handler
    std::set_new_handler(outOfMemHandler);

    // Request a huge memory size, which will cause ::operator new to fail
    int *pBigDataArray = new int[100000000L];

    return 0;
}
In the above example, operator new (most likely) will be unable to allocate space for 100,000,000 integers, and the function outOfMemHandler() will be called, and the program will abort after issuing an error message.
As seen here, the default behavior of operator new when it is unable to fulfill a memory request is to call the new-handler function repeatedly until it can find enough memory or there are no more new-handlers. In the above example, unless we call std::abort(), outOfMemHandler() would be called repeatedly. Therefore, the handler should either ensure that the next allocation succeeds, register another handler, register no handler, or not return (i.e. terminate the program). If there is no new-handler and the allocation fails, operator new will throw an exception.
What is the new_handler and set_new_handler?
new_handler is a typedef for a pointer to a function that takes and returns nothing, and set_new_handler is a function that takes and returns a new_handler.
Something like:
typedef void (*new_handler)();
new_handler set_new_handler(new_handler p) throw();
set_new_handler's parameter is a pointer to the function operator new should call if it can't allocate the requested memory. Its return value is a pointer to the previously registered handler function, or null if there was no previous handler.
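As a small illustration of that return value, a handler can be installed temporarily and the previously registered one restored afterwards (a sketch; my_handler is just a placeholder):

#include <cstdlib>
#include <new>

void my_handler() { std::abort(); }   // placeholder new-handler

void install_temporarily()
{
    std::new_handler previous = std::set_new_handler(my_handler);
    // ... allocations that should trigger my_handler on failure ...
    std::set_new_handler(previous);   // put the previously registered handler back
}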
How to handle out of memory conditions in C++?
Given the behavior of new, a well-designed user program should handle out-of-memory conditions by providing a proper new_handler which does one of the following:
Make more memory available: This may allow the next memory allocation attempt inside operator new's loop to succeed. One way to implement this is to allocate a large block of memory at program start-up, then release it for use in the program the first time the new-handler is invoked (see the sketch after this list).
Install a different new-handler: If the current new-handler can't make any more memory available, and if there is another new-handler that can, then the current new-handler can install the other new-handler in its place (by calling set_new_handler). The next time operator new calls the new-handler function, it will get the one most recently installed.
(A variation on this theme is for a new-handler to modify its own behavior, so the next time it's invoked, it does something different. One way to achieve this is to have the new-handler modify static, namespace-specific, or global data that affects the new-handler's behavior.)
Uninstall the new-handler: This is done by passing a null pointer to set_new_handler. With no new-handler installed, operator new will throw an exception ((convertible to) std::bad_alloc) when memory allocation is unsuccessful.
Throw an exception convertible to std::bad_alloc. Such exceptions will not be caught by operator new, but will propagate to the site originating the request for memory.
Not return: By calling abort or exit.
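As a concrete sketch of the first strategy ("make more memory available"): the program could grab an emergency reserve at start-up and have the handler release it on the first failure. The reserve size and names below are illustrative, not prescribed by the standard.

#include <new>

namespace {
    char* emergency_reserve = nullptr;   // released on the first out-of-memory event
}

void reserve_releasing_handler()
{
    if (emergency_reserve) {
        delete[] emergency_reserve;       // free the reserve so the retried allocation can succeed
        emergency_reserve = nullptr;
    } else {
        std::set_new_handler(nullptr);    // nothing left to give back: let the next failure throw bad_alloc
    }
}

int main()
{
    emergency_reserve = new char[16 * 1024 * 1024];   // reserve 16 MB at start-up
    std::set_new_handler(reserve_releasing_handler);
    // ... rest of the program ...
}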
I would not suggest this, since bad_alloc means you are out of memory. It would be best to just give up instead of attempting to recover. However, here is the solution you are asking for:
try {
    foo();
} catch (const std::bad_alloc& e) {
    return -1;
}
May I suggest a simpler (and even faster) solution for this: the nothrow form of the new operator returns null if memory cannot be allocated.
int fv() {
    T* p = new (std::nothrow) T[1000000];
    if (!p) return -1;
    do_something(p);
    delete[] p;   // array new must be paired with delete[]
    return 0;
}
I hope this helps!
Let your foo program exit in a controlled way:
#include <stdlib.h> /* exit, EXIT_FAILURE */

try {
    foo();
} catch (const std::bad_alloc&) {
    exit(EXIT_FAILURE);
}
Then write a shell program that calls the actual program. Since the address spaces are separated, the state of your shell program is always well-defined.
Of course you can catch a bad_alloc, but I think the better question is how you can stop a bad_alloc from happening in the first place.
Generally, bad_alloc means that something went wrong in an allocation of memory - for example when you are out of memory. If your program is 32-bit, then this already happens when you try to allocate >4 GB. This happened to me once when I copied a C-string to a QString. The C-string wasn't '\0'-terminated, which caused the strlen function to return a value in the billions. So then it attempted to allocate several GB of RAM, which caused the bad_alloc.
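A sketch of the kind of bug described (the original used QString; std::string is used here only to keep the snippet self-contained):

#include <cstring>
#include <string>

std::string copy_c_string(const char* raw)
{
    // If raw is not '\0'-terminated, strlen can run far past the buffer and
    // return a huge length; the std::string constructor then tries to allocate
    // that much memory and throws std::bad_alloc.
    return std::string(raw, std::strlen(raw));
}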
I have also seen bad_alloc when I accidentally accessed an uninitialized variable in the initializer-list of a constructor. I had a class foo with a member T bar. In the constructor I wanted to initialize the member with a value from a parameter:
foo::foo(T baz) // <-- mistyped: baz instead of bar
: bar(bar)
{
}
Because I had mistyped the parameter, the constructor initialized bar with itself (so it read an uninitialized value!) instead of the parameter.
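For clarity, the intended constructor would have been:

foo::foo(T baz)
    : bar(baz)   // initialize the member from the parameter, not from itself
{
}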
valgrind can be very helpful with such errors!

How to catch or handle segfault from free() or delete()

In C++, I have server code running continuously 24x7, but I am sometimes getting a segfault while freeing a buffer.
I tried try/catch as well:
try {
    free(partialBuf);
} catch (...) {
    printf("Caught partial buf free error");
}
Thanks in advance!
Since you're apparently able to use try/catch, you're writing C++ code. It helps to know which language you're using.
The solution then is to use std::shared_ptr. You may have multiple places in which a pointer goes out of scope. With shared_ptr you no longer call free, and as a bonus shared_ptr will call delete only once (after the last pointer goes out of scope).
However, you should now allocate memory with new instead of malloc.
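A minimal sketch of that approach, assuming the buffer is some fixed-size structure (the type and function names here are made up for illustration):

#include <memory>

struct PartialBuf { char data[4096]; };   // hypothetical buffer type

void handle_request()
{
    std::shared_ptr<PartialBuf> partialBuf = std::make_shared<PartialBuf>();
    // hand partialBuf to every code path that needs it; copying the shared_ptr
    // only bumps a reference count
}   // the buffer is destroyed exactly once, after the last owner goes out of scope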
A segfault is not an exception in the sense of other C++ exceptions, hence you cannot catch it with try/catch. A segfault can have any number of reasons, but in 99.9% of cases it's a memory access bug :-) If the segfault happens during a call to delete or free(), chances are that you are having a double-free issue.
You could use GDB to debug, and find out whether you are trying to free a pointer which was not allocated previously.

Using STL containers with exception handling in low memory situation

I am using STL containers in my code (being developed in C++ using Visual Studio 2010).
I have never used exception handling with STL containers before. Since STL containers can throw bad_alloc exceptions, I plan to use them as in the sample code shown below. Let's assume function() gets called in a low-memory situation.
Now, I am not sure if this code is foolproof, or whether I need to do any additional cleanup activity.
class MyClass
{
    std::vector<int>* integer_vector;
public:
    MyClass()
    {
        std::string message;
        message += "my message";                // bad_alloc could be thrown from here
        integer_vector = new std::vector<int>;  // bad_alloc could be thrown from here
    }
};

void function()
{
    try
    {
        MyClass* myclass_ptr;
        myclass_ptr = new (std::nothrow) MyClass;
        if (myclass_ptr == NULL)
        {
            // ANY CLEANUP NEEDED HERE ?
            return;
        }
        std::map<int, char> myintcharmap;       // bad_alloc could be thrown from here
    }
    catch (...)
    {
        // ANY CLEANUP NEEDED HERE ?
        return;
    }
}
Please can someone have a look and help.
You have two main potential leaks in the code you show, both of which arguably stem from using raw pointers. You should prefer std::unique_ptr (if you have C++11) or other similar "smart" pointers to indicate ownership, and for exception safety in general. Modern guidance is to avoid almost all use of new and delete; when they cannot be avoided, they need to be paired. Note that your code has two calls to new but none to delete.
Inside function, the core problem is that you could fully allocate the data "owned" by myclass_ptr, cause an exception in a later allocation, and then not be able to clean it up because myclass_ptr is no longer in scope.
Let's say you fixed that so it cleaned up the MyClass instance if an exception occurred after its creation. Your code would still leak because inside MyClass there's currently a similar problem with integer_vector. Although you could follow the rule of three and write a destructor to handle this case, it's probably easier to use a smart pointer here as well.
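For illustration, a minimal sketch of the same code restructured around std::unique_ptr (assumes C++11; note that MyClass no longer needs a hand-written destructor for this):

#include <map>
#include <memory>
#include <string>
#include <vector>

class MyClass
{
    std::unique_ptr<std::vector<int>> integer_vector;
public:
    MyClass()
        : integer_vector(new std::vector<int>)   // freed automatically if the body below throws
    {
        std::string message;
        message += "my message";
    }
};

void function()
{
    std::unique_ptr<MyClass> myclass_ptr(new MyClass);   // freed even if the next line throws
    std::map<int, char> myintcharmap;
}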
Exception handling is much bigger, much more opinionated topic. I'll leave it with the summary that it's typically bad to catch exceptions and squash them (usually that's only legit in an outer loop of a program that needs specific kinds of stability). It's also typically bad to catch exceptions in a scope so narrow that you don't know how to handle them. (For example, how would function decide whether to try again, give up, or use another approach? Its caller, or the caller further on up the chain, may have more information and be in a better place to handle this.)
In most cases you should not deal with bad_alloc exceptions. Your try/catch should be removed, as well as if (myclass_ptr==NULL).
Ask yourself: if the process memory is exhausted, what could I possibly do? The best thing you can hope for is to log something, clean up / free system resources, and let the program terminate. That is the only right thing to do.
You can do this by setting the new_handler (with set_new_handler). It will be called by the new operator if memory allocation fails.
std::set_new_handler(my_handler); // do your cleanup in 'my_handler'
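A minimal sketch of such a handler (what exactly you log or release is application-specific; the body here is only a placeholder):

#include <cstdio>
#include <cstdlib>
#include <new>

void my_handler()
{
    std::fputs("out of memory, shutting down\n", stderr);   // log what you can
    // ... free/close any critical system resources here ...
    std::abort();                                            // then terminate
}

int main()
{
    std::set_new_handler(my_handler);   // do your cleanup in 'my_handler'
    // ...
}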

Crash on Exit within std::locale destructor

I'm working on a Win32 application in C++ with Visual Studio 2010. When run in debug mode the application runs fine and closes properly upon exit. In release however, the program runs fine but upon application close there's an unhandled exception: Unhandled exception at 0x009C19F5 in Application.exe: 0xC0000005: Access violation reading location 0x00297628.
The debugger then breaks inside xlocale, in std::locale's destructor:
~locale() _THROW0()
{   // destroy the object
    if (_Ptr != 0)
        _DELETE_CRT(_Ptr->_Decref());   // breaks here with unhandled exception
}
The above code is being run, I believe, after my main function returns and exit( 0 ) is called somewhere. Here's my callstack upon crash:
Application.exe!std::locale::~locale() Line 411 C++
Application.exe!doexit(int code, int quick, int retcaller) Line 567 C
Application.exe!exit(int code) Line 393 C
Application.exe!__tmainCRTStartup() Line 284 C
kernel32.dll!@BaseThreadInitThunk@12() Unknown
ntdll.dll!___RtlUserThreadStart@8() Unknown
ntdll.dll!__RtlUserThreadStart@8() Unknown
I'm using Microsoft's Application Verifier and I'm calling _CrtCheckMemory() often to check for heap corruption, and I don't see any detected in either debug or release mode. I'm also not messing with std::locale at all anywhere in my source.
I recently switched the solution's settings to use the Unicode character set by default instead of the multi-byte (one-byte-per-character) setting. However, switching back and forth now between the Unicode and multi-byte settings doesn't seem to affect the crash on exit in release.
Does anyone have any idea what's going on?
I've found the solution. There are some global variables that call new, and I have new overloaded. When new is called for the first time, a custom memory manager is constructed. This is fine, but on program close some objects destruct after the memory manager does. The order of destruction is what was causing the problems.
Another possible cause of a crash with identical symptoms is a behaviour described by this subtle note:
Overload 7 (i.e. locale(const locale& other, Facet* f)) is typically called with its second argument, f, obtained directly from a new-expression: the locale is responsible for calling the matching delete from its own destructor.
This seems to imply that the locale will delete the facet only if it came from new, but in practice it deletes it unconditionally. In other words, if a locale was constructed from a facet that was not allocated with new, or that was deleted earlier, the crash happens on the locale destructor's delete.
This can be fixed either by giving the locale proper absolute ownership of the facet, i.e. not deleting the facet manually and not letting any smart pointer do it automatically (as well as ensuring that it actually was constructed via new), or, if modifying the facet is possible, by overloading its operator delete.
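For illustration, a minimal sketch of that ownership rule with a standard facet (not the asker's code):

#include <locale>

int main()
{
    // The facet comes straight from new; the locale's destructor is responsible
    // for the matching delete.
    std::locale loc(std::locale(), new std::numpunct_byname<char>("C"));
    // Do NOT delete the facet yourself and do not also hand it to a smart pointer;
    // either would lead to a double delete when loc is destroyed.
}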

using class specific set_new_handler

For a class-specific new_handler implementation, I came across the following example in the book "Effective C++". This looks like a problem in a multithreaded environment. My question is: how do I achieve a class-specific new_handler in a multithreaded environment?
void* X::operator new(size_t size)
{
    new_handler globalHandler =                // install X's handler and
        std::set_new_handler(currentHandler);  // remember the global one

    void* memory;
    try {                                      // attempt allocation
        memory = ::operator new(size);
    }
    catch (std::bad_alloc&) {                  // on failure, restore the global
        std::set_new_handler(globalHandler);   // handler and
        throw;                                 // propagate the exception
    }

    std::set_new_handler(globalHandler);       // restore the global handler
    return memory;
}
You're right. This is probably not thread safe. You might want to consider an alternative approach like using the nothrow version of new instead:
void* X::operator new(std::size_t sz) {
    void* p;
    // note the parentheses: assign the allocation result to p first,
    // then compare it against NULL
    while ((p = ::operator new(sz, std::nothrow)) == NULL) {
        X::new_handler();
    }
    return p;
}
This will cause your class-specific handler to be called whenever memory allocation fails. I wouldn't do this until you really understand all of the headaches surrounding overloading operator new. In particular, read Herb Sutter's two part article To New, Perchance To Throw, Part 1 and Part 2. Interestingly enough, he says to avoid the nothrow version... hmmm.
C++ doesn't (yet) know what threads are. You'll have to turn to your compiler/C++ standard library/operating system/thread library manuals to determine a thread safe way to do this, or if it is even possible. I would suggest that the new handler should probably be the same across the application. It's not a very flexible mechanism, perhaps your needs would be better served with an allocator or perhaps a factory (function)? What are you looking to do inside the custom new handler?
Perhaps you're looking at it the wrong way. I don't think there is any way to limit the whole application from allocating memory (since much of the memory allocation may be outside of your code), so the best way to do it would be to control what you can - i.e. the implementation of the handler.
Set up the handler to call an instance of an "OutOfMemoryHandler" class (call it what you will) at the start of the program and have its default behaviour be to call the existing handler. When you want to add class-specific handling, add a behaviour to your OutOfMemoryHandler using your favourite C++ techniques for dynamic behaviour.
This solution should work well in a single-threaded environment, but will fail in a multi-threaded environment. To make it work in a multi-threaded environment, you need to have the caller notify the handler object that it is working in a particular thread; passing the thread-id with the class would be a good way to do this. When the handler is called, it checks the thread-id and determines the behaviour to execute based upon the associated class. When the new() call is finished, simply deregister the thread-id to ensure the correct default behaviour (much like you are already doing in resetting the default handler).
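A rough sketch of that thread-id registration idea (assumes C++11 for std::thread::id and std::mutex; all names are illustrative, and the default branch is simplified to deinstalling the handler rather than calling the previous one):

#include <functional>
#include <map>
#include <mutex>
#include <new>
#include <thread>

class OutOfMemoryHandler
{
    std::map<std::thread::id, std::function<void()>> per_thread_;
    std::mutex mutex_;
public:
    void register_current_thread(std::function<void()> behaviour)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        per_thread_[std::this_thread::get_id()] = behaviour;
    }
    void deregister_current_thread()
    {
        std::lock_guard<std::mutex> lock(mutex_);
        per_thread_.erase(std::this_thread::get_id());
    }
    void handle()   // invoked from the global new-handler function below
    {
        std::lock_guard<std::mutex> lock(mutex_);
        auto it = per_thread_.find(std::this_thread::get_id());
        if (it != per_thread_.end())
            it->second();                       // run the behaviour registered for this thread
        else
            std::set_new_handler(nullptr);      // simplified default: let the next failure throw bad_alloc
    }
};

OutOfMemoryHandler g_oom_handler;
void global_new_handler() { g_oom_handler.handle(); }   // install with std::set_new_handler(global_new_handler)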