weak_ptr becomes null, app crashes about once a week - C++

Unhandled exception at 0x764F135D (kernel32.dll) in RFNReader_NFCP.exe.4448.dmp: 0xC0000005: Access violation writing location 0x00000001.
void Notify( const char* buf, size_t len )
{
    for( auto it = m_observerList.begin(); it != m_observerList.end(); )
    {
        auto item = it->lock();
        if( item )
        {
            item->Update( buf, len );
            ++it;
        }
        else
        {
            it = m_observerList.erase( it );
        }
    }
}
variable item's value in the debug window:
item shared_ptr {m_interface="10.243.112.12" m_port="8889" m_clientSockets={ size=0 } ...} [3 strong refs, 2 weak refs] [default] std::tr1::shared_ptr
but inside item->Update(), item (the this pointer) becomes null!
Why?

The problem here is most likely not the weak_ptr, which is used correctly.
In fact, the code you posted is completely fine, so the error must be elsewhere. The raw pointer and length arguments indicate a possible memory corruption.
Be aware that the debugger might lie to you if you accidentally mess up stack frames due to memory corruption. Since you seem to be debugging this from a minidump it might also be that the dumping swallowed some info here.
Mind you, the corrupted this pointer that you are seeing here is just a value on the stack! The underlying object is most probably still alive, as you are maintaining several shared_ptrs to it (you can verify this in a debug build by checking if the original memory location of the object was overwritten by magic numbers). It's really just your stack values that are bogus. I would definitely recommend you double check the stack manually using VS's memory and register windows. If you do have a memory corruption, it should become visible there.
Also consider temporarily cranking up the amount of data saved to the minidump if it threw away too much.
Finally, be sure you double check your buffer handling. It's very likely that you messed up there somewhere and an out-of-bounds buffer write caused the corruption.
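A hedged sketch of the kind of bounds-checked buffer handling the last paragraph recommends. The helper `safe_copy` and its signature are illustrative, not from the asker's code: clamping the copy length to the destination size prevents the out-of-bounds write that can smash adjacent stack values (such as a spilled `this` pointer).

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstring>

// Never write more than dst_size bytes into dst, whatever len claims.
// The return value lets the caller detect truncation (n < len).
std::size_t safe_copy(char* dst, std::size_t dst_size,
                      const char* src, std::size_t len) {
    const std::size_t n = std::min(len, dst_size);  // clamp to destination
    std::memcpy(dst, src, n);
    return n;
}
```

The same idea applies to any raw (pointer, length) interface: validate the length against the real destination capacity before every write.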

Note that your this is invalid (0x00000001), i.e. the object got destroyed. The Notify member function was called on an already-destroyed object. This obviously crashes as soon as Notify tries to access an object member.

Related

Exceptions on unique_ptr and make_unique [duplicate]

There is a method called foo that sometimes returns the following error:
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Abort
Is there a way that I can use a try-catch block to stop this error from terminating my program (all I want to do is return -1)?
If so, what is the syntax for it?
How else can I deal with bad_alloc in C++?
In general you cannot, and should not try, to respond to this error. bad_alloc indicates that a resource cannot be allocated because not enough memory is available. In most scenarios your program cannot hope to cope with that, and terminating soon is the only meaningful behaviour.
Worse, modern operating systems often over-allocate: on such systems, malloc and new can return a valid pointer even if there is not enough free memory left – std::bad_alloc will never be thrown, or is at least not a reliable sign of memory exhaustion. Instead, attempts to access the allocated memory will then result in a segmentation fault, which is not catchable (you can handle the segmentation fault signal, but you cannot resume the program afterwards).
The only thing you could do when catching std::bad_alloc is to perhaps log the error, and try to ensure a safe program termination by freeing outstanding resources (but this is done automatically in the normal course of stack unwinding after the error gets thrown if the program uses RAII appropriately).
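The "log the error, then terminate cleanly" approach can be sketched as follows; the function name and the logging target (stderr) are illustrative, not prescribed by the answer.

```cpp
#include <cstdio>
#include <cstdlib>
#include <new>
#include <vector>

// Wrap the allocation that may fail, log the failure, and return a
// failure code; RAII unwinds any outstanding resources during the throw.
int run_guarded(std::size_t bytes) {
    try {
        std::vector<char> big(bytes);  // may throw std::bad_alloc
        return EXIT_SUCCESS;
    } catch (const std::bad_alloc& e) {
        std::fprintf(stderr, "allocation failed: %s\n", e.what());
        return EXIT_FAILURE;
    }
}
```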
In certain cases, the program may attempt to free some memory and try again, or use secondary memory (= disk) instead of RAM but these opportunities only exist in very specific scenarios with strict conditions:
The application must ensure that it runs on a system that does not overcommit memory, i.e. it signals failure upon allocation rather than later.
The application must be able to free memory immediately, without any further accidental allocations in the meantime.
It’s exceedingly rare that applications have control over point 1 — userspace applications never do, it’s a system-wide setting that requires root permissions to change.1
OK, so let’s assume you’ve fixed point 1. What you can now do is for instance use a LRU cache for some of your data (probably some particularly large business objects that can be regenerated or reloaded on demand). Next, you need to put the actual logic that may fail into a function that supports retry — in other words, if it gets aborted, you can just relaunch it:
lru_cache<widget> widget_cache;

double perform_operation(int widget_id) {
    std::optional<widget> maybe_widget = widget_cache.find_by_id(widget_id);
    if (not maybe_widget) {
        maybe_widget = widget_cache.store(widget_id, load_widget_from_disk(widget_id));
    }
    return maybe_widget->frobnicate();
}

…

for (int num_attempts = 0; num_attempts < MAX_NUM_ATTEMPTS; ++num_attempts) {
    try {
        return perform_operation(widget_id);
    } catch (std::bad_alloc const&) {
        if (widget_cache.empty()) throw; // memory error elsewhere.
        widget_cache.remove_oldest();
    }
}
// Handle too many failed attempts here.
But even here, using std::set_new_handler instead of handling std::bad_alloc provides the same benefit and would be much simpler.
1 If you’re creating an application that does control point 1, and you’re reading this answer, please shoot me an email, I’m genuinely curious about your circumstances.
You can catch it like any other exception:
try {
    foo();
}
catch (const std::bad_alloc&) {
    return -1;
}
Quite what you can usefully do from this point is up to you, but it's definitely feasible technically.
What is the C++ Standard specified behavior of new in c++?
The usual notion is that if the new operator cannot allocate dynamic memory of the requested size, it should throw an exception of type std::bad_alloc.
However, something more happens even before a bad_alloc exception is thrown:
C++03 Section 3.7.4.1.3: says
An allocation function that fails to allocate storage can invoke the currently installed new_handler (18.4.2.2), if any. [Note: A program-supplied allocation function can obtain the address of the currently installed new_handler using the set_new_handler function (18.4.2.3).] If an allocation function declared with an empty exception-specification (15.4), throw(), fails to allocate storage, it shall return a null pointer. Any other allocation function that fails to allocate storage shall only indicate failure by throwing an exception of class std::bad_alloc (18.4.2.1) or a class derived from std::bad_alloc.
Consider the following code sample:
#include <iostream>
#include <cstdlib>
#include <new>      // std::set_new_handler

// function to call if operator new can't allocate enough memory
void outOfMemHandler()
{
    std::cerr << "Unable to satisfy request for memory\n";
    std::abort();
}

int main()
{
    // set the new_handler
    std::set_new_handler(outOfMemHandler);

    // Request a huge memory size, which will cause ::operator new to fail
    int *pBigDataArray = new int[100000000L];
    return 0;
}
In the above example, operator new (most likely) will be unable to allocate space for 100,000,000 integers, and the function outOfMemHandler() will be called, and the program will abort after issuing an error message.
As seen here, the default behavior of operator new when it cannot fulfill a memory request is to call the new-handler function repeatedly until it can find enough memory or there are no more new handlers. In the above example, unless we call std::abort(), outOfMemHandler() would be called repeatedly. Therefore, the handler should either ensure that the next allocation succeeds, register another handler, register no handler, or not return (i.e. terminate the program). If there is no new handler and the allocation fails, operator new throws an exception.
What is the new_handler and set_new_handler?
new_handler is a typedef for a pointer to a function that takes and returns nothing, and set_new_handler is a function that takes and returns a new_handler.
Something like:
typedef void (*new_handler)();
new_handler set_new_handler(new_handler p) throw();
set_new_handler's parameter is a pointer to the function operator new should call if it can't allocate the requested memory. Its return value is a pointer to the previously registered handler function, or null if there was no previous handler.
How to handle out of memory conditions in C++?
Given the behavior of new, a well-designed user program should handle out-of-memory conditions by providing a proper new_handler which does one of the following:
Make more memory available: This may allow the next memory allocation attempt inside operator new's loop to succeed. One way to implement this is to allocate a large block of memory at program start-up, then release it for use in the program the first time the new-handler is invoked.
Install a different new-handler: If the current new-handler can't make any more memory available, and if there is another new-handler that can, then the current new-handler can install the other new-handler in its place (by calling set_new_handler). The next time operator new calls the new-handler function, it will get the one most recently installed.
(A variation on this theme is for a new-handler to modify its own behavior, so the next time it's invoked, it does something different. One way to achieve this is to have the new-handler modify static, namespace-specific, or global data that affects the new-handler's behavior.)
Uninstall the new-handler: This is done by passing a null pointer to set_new_handler. With no new-handler installed, operator new will throw an exception ((convertible to) std::bad_alloc) when memory allocation is unsuccessful.
Throw an exception convertible to std::bad_alloc. Such exceptions will not be caught by operator new, but will propagate to the site originating the request for memory.
Not return: By calling abort or exit.
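The first strategy in the list above (make more memory available) is often implemented as a "reserve block" released by the handler. A hedged sketch; the function names and reserve size are illustrative:

```cpp
#include <cassert>
#include <cstddef>
#include <new>

static char* g_reserve = nullptr;

// Reserve a block at start-up; the new-handler frees it so the retried
// allocation inside operator new's loop can succeed. When the reserve is
// already spent, the handler uninstalls itself so the next failure throws
// std::bad_alloc instead of looping forever.
void install_reserve(std::size_t bytes) {
    g_reserve = new char[bytes];              // emergency reserve
    std::set_new_handler([] {
        if (g_reserve) {                      // first failure: spend the reserve
            delete[] g_reserve;
            g_reserve = nullptr;              // operator new retries automatically
        } else {
            std::set_new_handler(nullptr);    // reserve gone: next failure throws
        }
    });
}
```

Calling `install_reserve(16 * 1024 * 1024)` early in main buys the program exactly one recovery attempt; anything more elaborate needs the retry-loop structure shown earlier.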
I would not suggest this, since bad_alloc means you are out of memory. It would be best to just give up instead of attempting to recover. However, here is the solution you are asking for:
try {
    foo();
} catch ( const std::bad_alloc& e ) {
    return -1;
}
I may suggest a simpler (and even faster) solution for this: the nothrow form of new returns null if memory cannot be allocated.
int fv() {
    T* p = new (std::nothrow) T[1000000];
    if (!p) return -1;
    do_something(p);
    delete[] p;   // array form, to match new[]
    return 0;
}
I hope this could help!
Let your foo program exit in a controlled way:
#include <stdlib.h> /* exit, EXIT_FAILURE */
try {
    foo();
} catch (const std::bad_alloc&) {
    exit(EXIT_FAILURE);
}
Then write a shell program that calls the actual program. Since the address spaces are separated, the state of your shell program is always well-defined.
Of course you can catch a bad_alloc, but I think the better question is how you can stop a bad_alloc from happening in the first place.
Generally, bad_alloc means that something went wrong in an allocation of memory - for example when you are out of memory. If your program is 32-bit, then this already happens when you try to allocate >4 GB. This happened to me once when I copied a C-string to a QString. The C-string wasn't '\0'-terminated which caused the strlen function to return a value in the billions. So then it attempted to allocate several GB of RAM, which caused the bad_alloc.
I have also seen bad_alloc when I accidentally accessed an uninitialized variable in the initializer-list of a constructor. I had a class foo with a member T bar. In the constructor I wanted to initialize the member with a value from a parameter:
foo::foo(T baz) // <-- mistyped: baz instead of bar
    : bar(bar)
{
}
Because I had mistyped the parameter, the constructor initialized bar with itself (so it read an uninitialized value!) instead of the parameter.
valgrind can be very helpful with such errors!

Function in consumer thread unable to access memory location

I have some code that processes images. Performance is critical, so I'm trying to implement multi-threading using a BoundedBuffer. The image data is stored as unsigned char* (dictated by the SDK I'm using to process the image data).
The problem occurs in the processData function called in the consumer thread. Inside the processData, there is another function (from the image processing SDK) that uses cudaMemcpy2D function. The cuda function always throws an exception saying Access violation reading location.
However, the cuda function works fine if I call processData directly within the producer thread or deposit. When I call processData from the consumer thread (as desired), I get the exception from the cuda function. I even tried calling processData from fetch and I got the same exception.
My guess is that after the data is deposited into the rawImageBuffer by the producer thread, somehow the memory pointed to by unsigned char* changes, thus the consumer thread (or fetch) actually sends bad image data to processData (and the cuda function).
This is what my code looks like:
void processData(vector<unsigned char*> unProcessedData)
{
    // Process the data
}

struct BoundedBuffer {
    queue<vector<unsigned char*>> buffer;
    int capacity;
    std::mutex lock;
    std::condition_variable not_full;
    std::condition_variable not_empty;

    BoundedBuffer(int capacity) : capacity(capacity) {}

    void deposit(vector<unsigned char*> vData)
    {
        std::unique_lock<std::mutex> l(lock);
        bool bWait = not_full.wait_for(l, 3000ms, [this] {return buffer.size() != capacity; }); // Wait if full
        if (bWait)
        {
            buffer.push(vData); // only push data when the timeout doesn't expire
            not_empty.notify_one();
        }
    }

    vector<unsigned char*> fetch()
    {
        std::unique_lock<std::mutex> l(lock);
        not_empty.wait(l, [this]() {return buffer.size() != 0; }); // Wait if empty
        vector<unsigned char*> result{};
        result = buffer.front();
        buffer.pop();
        not_full.notify_one();
        return result;
    }
};

void producerTask(BoundedBuffer &rawImageBuffer)
{
    for(;;)
    {
        // Produce Data
        vector<unsigned char*> producedDataVec{dataElement0, dataElement1};
        rawImageBuffer.deposit(producedDataVec);
    } // loop breaks upon user interception
}

void consumerTask(BoundedBuffer &rawImageBuffer)
{
    for(;;)
    {
        vector<unsigned char*> fetchedDataVec{};
        fetchedDataVec = rawImageBuffer.fetch();
        processData(fetchedDataVec);
    } // loop breaks upon user interception
}

int main()
{
    BoundedBuffer rawImageBuffer(6);
    thread consumer(consumerTask, ref(rawImageBuffer));
    thread producer(producerTask, ref(rawImageBuffer));
    consumer.join();
    producer.join();
    return 0;
}
Am I correct in my guess about why the exception is being thrown? How do I resolve this? For reference, each vector element contains data for a 2448px X 2048px image in RGBa 8bit format.
UPDATES:
After someone pointed out in the comments that the unsigned char* pointers could be invalid, I found that the address pointed to by the pointers is in fact a real memory location. In the exception Access violation reading location X, X is larger than the location pointed to by the pointer.
After some more debugging, I've found that the memory pointed to by the unsigned char* in the unProcessedData vector in processData doesn't remain intact: the pointer address is correct, but some blocks of memory are unreadable. I found this by printing each char in the unsigned char* in processData. When processData is called by the producer thread (when cuda doesn't throw an exception), all chars get printed nicely (I'm printing 2048*2448*4 chars, dictated by the aforementioned image resolution and format). But when processData is called by the consumer thread, printing the chars throws the same exception, around the 40th char (around the 40th, not always exactly the 40th).
Okay, so now I'm pretty sure not only that my pointers point to real memory locations, but also that the first memory block pointed to by the pointer holds the expected value, for as many times as I've tested this. To test this, in producerTask I deliberately write a test value (such as int 42, or char *) to the 0th memory block pointed to by the unsigned char*. In the processData function, I check if the memory block still contains the test value, and it does. So now I know some of the memory blocks pointed to by the pointer become unreadable for some unknown reason. Also, my test doesn't prove that the first memory block is immune to becoming inaccessible, just that it didn't become inaccessible in the few tests I did. TLDR for Updates 1 to 3: The unProcessedData pointers are valid; they point to a real memory address, and that address holds the expected value.
Another debugging attempt. Now I'm using Visual Studio's memory window to visually inspect the data. The debugger tells me that unProcessedData[0] points to 0x00000279d7c76070. This is what memory around 0x00000279d7c76070 looks like:
Memory seems sensible, the RGBa format can be clearly seen, the image is all black so it makes sense that the RGB channels are close to 0 whereas alpha is ff. I scrolled down for a long time to see what the memory looks like, all the way till 0x00000279D8F9606F the data looks good (RGBa values as expected). The 0x00000279D8F9606F number also makes sense because 0x00000279D8F9606F - 0x00000279d7c76070 = 0d20054015, which means there are 20054016 valid chars which is expected (2048 height*2448 width*4 channels = 20054016). Okay, so far so good. Note that all this is right before running the cuda function. After stepping through the cuda function I get the same exception: Access violation reading location 0x00000279D80B8000. Note that 0x00000279D80B8000 is between 0x00000279d7c76070 and 0x00000279D8F9606F, the parts of memory which I visually checked to be correct. Now, after running the cuda function here is what the memory between 0x00000279d7c76070 and 0x00000279D8F9606F looks like:
When I cout anything in processData before calling the cuda function, the memory pointed to by the pointer changes: all the chars become 0xdd. This page on MSDN says that the freed blocks kept unused in the debug heap's linked list when the _CRTDBG_DELAY_FREE_MEM_DF flag is set are currently filled with 0xDD.
But when I call processData from the producer thread, the pointed memory doesn't change after I cout anything.
Right now the most upvoted comment on this question is telling me to learn more about pointers. I am doing this (hopefully as my updates suggest), but what topics do I need to learn about? I do know how pointers work. I know the pointers are pointing to a valid memory location (see Update 2). I know some memory blocks pointed to by the pointer become unreadable (see Update 3). But I don't know why the memory blocks become inaccessible. Especially, I don't know why they only become inaccessible when processData is called from the consumer thread (note that no exception is thrown when processData is called from the producer thread). Is there anything else I can do to help narrow down this problem?
The problem was fairly simple, n.m.'s comments guided me towards the right direction and I'm thankful for that.
In my updates I mentioned that printing anything using cout caused the data to become corrupt. It seemed like that was happening, but after putting some breakpoints in fetch and deposit, I got a complete picture of what was really happening.
The way I produced the image data was by using another SDK supplied with the camera. The SDK provided the image data as a wrapped-pointer type. I converted the image format, then unwrapped the converted image to get the pointer to the raw image. That raw-image pointer was stored into producedDataVec and deposited into rawImageBuffer. The problem was that as soon as the converted image went out of scope, my data became corrupted. So the cout statements weren't really responsible for corrupting my data. With breakpoints placed everywhere, I could see the data becoming corrupt just after the converted image went out of scope. To resolve this, my producer now deposits the wrapped pointer directly into the buffer. The consumer fetches the wrapped pointer, obtains the converted image by converting the format in the consumer, and then obtains the raw image pointer. Now the converted image only goes out of scope after processData has returned, so the exception is never thrown.
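The general fix described in this answer is to pass owning objects through the queue instead of raw pointers, so the pixel data stays alive until the consumer is done with it. A hedged single-frame sketch, using std::vector<unsigned char> as a stand-in for the SDK's wrapped-pointer type:

```cpp
#include <cassert>
#include <condition_variable>
#include <cstddef>
#include <cstdint>
#include <mutex>
#include <queue>
#include <vector>

// Bounded buffer that owns its frames: deposit() moves ownership in,
// fetch() moves it out, so no pointer in the queue can dangle when the
// producer's temporaries go out of scope.
struct OwningBoundedBuffer {
    std::queue<std::vector<std::uint8_t>> buffer;
    std::size_t capacity;
    std::mutex lock;
    std::condition_variable not_full, not_empty;

    explicit OwningBoundedBuffer(std::size_t cap) : capacity(cap) {}

    void deposit(std::vector<std::uint8_t> frame) {   // takes ownership
        std::unique_lock<std::mutex> l(lock);
        not_full.wait(l, [this] { return buffer.size() < capacity; });
        buffer.push(std::move(frame));                // no copy, no dangling
        not_empty.notify_one();
    }

    std::vector<std::uint8_t> fetch() {               // hands ownership out
        std::unique_lock<std::mutex> l(lock);
        not_empty.wait(l, [this] { return !buffer.empty(); });
        std::vector<std::uint8_t> frame = std::move(buffer.front());
        buffer.pop();
        not_full.notify_one();
        return frame;
    }
};
```

In the asker's setup the queued element would be the SDK's wrapped image object itself, which is exactly the resolution they describe.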

Returning from Function changes context to NULL

I have three classes relevant to this issue. I'm implementing a hardware service for an application. PAPI (Platform API) is a hardware service class that keeps track of various hardware interfaces. I have implemented an abstract HardwareInterface class, and a class that derives it called HardwareWinUSB.
Below are examples similar to what I've done. I've left out members that don't appear to be relevant to this issue, like functions to open the USB connection:
class PAPI {
    HardwareInterface *m_pHardware;

    PAPI() {
        m_pHardware = new HardwareWinUSB();
    }

    ~PAPI() {
        delete m_pHardware;
    }

    ERROR_CODE WritePacket(void* WriteBuf)
    {
        return m_pHardware->write( WriteBuf );
    }
};

class HardwareInterface {
    virtual ERROR_CODE write( void* WriteBuf) = 0;
};

class HardwareWinUSB : public HardwareInterface
{
    ERROR_CODE write( void* Params)
    {
        // Some USB writing code.
        // This had worked just fine before attempting to refactor
        // into this more sustainable hardware management scheme
    }
};
I've been wrestling with this for several hours now. It's a strange, reproducible issue, but is sometimes intermittent. If I step through the debugger at a higher context, things execute well. If I don't dig deep enough, I'm met with an error that reads
Exception thrown at 0x00000000 in <ProjectName.exe>: 0xC0000005: Access violation executing location 0x00000000
If I dig down into the PAPI code, I see bizarre behavior.
When I set a breakpoint in the body of WritePacket, everything appears normal. Then I do a "step over" in the debugger. After the return from the function call, my reference to 'this' is set to 0x00000000.
What is going on? It looks like a null value was pushed on the return stack? Has anyone seen something like this happen before? Am I using virtual methods incorrectly?
edit
After further dissection, I found that I was reading before calling write, and the buffer that I was reading into was declared in local scope. When new reads came in, they were being pushed into the stack, corrupting it. The next function called, write, would return to a destroyed stack.
A buffer overrun can trash the return address on the stack. You seem to be reading and writing packets with void pointers and without passing around explicit sizes, so a simple overrun bug seems quite likely. The Visual Studio compiler has options to add stack integrity checks to detect these kinds of bugs, but they're not 100% perfect. Nonetheless, make sure you have them switched on.
Also note that the Visual Studio debugger can occasionally (but rarely) show the wrong value for this, especially if you're trying to debug optimized code. If you're at the } at the end of a method, I wouldn't necessarily worry about the debugger showing a bizarre value for this.
After further dissection, I found that I was reading before calling write, and the buffer that I was reading into was declared in local scope (in the read function).
When new reads came in, they were being pushed into the stack, corrupting it. The next function I called, write, would return to a destroyed stack.
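The fix for this class of bug is to give the receive buffer a lifetime at least as long as the outstanding reads. A hedged sketch (the Reader type and buffer size are illustrative; the real async API is not shown): making the buffer a member instead of a local array keeps later asynchronous writes off the dead stack frame.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// The buffer lives as long as the Reader object, not as long as one call.
// An async read API would be handed buffer_for_read(); completions that
// arrive after the starting function returns still write valid storage.
struct Reader {
    std::vector<std::uint8_t> rx_buffer;

    Reader() : rx_buffer(4096) {}

    std::uint8_t* buffer_for_read()       { return rx_buffer.data(); }
    std::size_t   buffer_size()     const { return rx_buffer.size(); }
};
```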

General way of solving Error: Stack around the variable 'x' was corrupted

I have a program which prompts me the error in VS2010, in debug :
Error: Stack around the variable 'x' was corrupted
This gives me the function where a stack overflow likely occurs, but I can't visually see where the problem is.
Is there a general way to debug this error with VS2010? Would it be possible to indentify which write operation is overwritting the incorrect stack memory?
thanks
Is there a general way to debug this error with VS2010?
No, there isn't. What you have done is to somehow invoke undefined behavior. The reason these behaviors are undefined is that the general case is very hard to detect/diagnose. Sometimes it is provably impossible to do so.
There are, however, a somewhat smallish number of things that typically cause your problem:
- Improper handling of memory:
  - deleting something twice,
  - using the wrong type of deletion (free for something allocated with new, etc.),
  - accessing something after its memory has been deleted.
- Returning a pointer or reference to a local.
- Reading or writing past the end of an array.
This can be caused by several issues that are generally hard to see:
- double deletes
- delete of a variable allocated with new[], or delete[] of a variable allocated with new
- delete of something allocated with malloc
- delete of an automatic storage variable
- returning a local by reference
If it's not immediately clear, I'd get my hands on a memory debugger (Rational Purify for Windows comes to mind).
This message can also be due to an array bounds violation. Make sure that your function (and every function it calls, especially member functions for stack-based objects) is obeying the bounds of any arrays that may be used.
Actually, what you see is quite informative; you should check for any activity near the location of the variable x that might cause this error.
Below is how you can reproduce such exception:
#include <cstring>  // memset

int main() {
    char buffer1[10];
    char buffer2[20];
    memset(buffer1, 0, sizeof(buffer1) + 1);
    return 0;
}
will generate (VS2010):
Run-Time Check Failure #2 - Stack around the variable 'buffer1' was corrupted.
obviously memset has written 1 char more than it should. VS with the /GS option (which you have enabled) can detect such buffer overflows; for more on that read here: http://msdn.microsoft.com/en-us/library/Aa290051.
You can, for example, use the debugger and step through your code, each time watching the contents of your variables and how they change. You can also try your luck with data breakpoints: you set a breakpoint on a memory location, and the debugger stops the moment that location changes, possibly showing you the call stack where the problem is located. But this actually might not work with the /GS flag.
For detecting heap overflows you can use gflags tool.
I was puzzled by this error for hours, I know the possible causes, and they are already mentioned in the previous answers, but I don't allocate memory, don't access array elements, don't return pointers to local variables...
Then finally found the source of the problem:
*x++;
The intent was to increment the pointed-to value. But due to precedence, ++ binds first, moving the x pointer forward, and the * then does nothing; a subsequent write through x corrupts the stack canary if the parameter comes from the stack, making VS complain.
Changing it to (*x)++ solves the problem.
Hope this helps.
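The precedence trap described above can be demonstrated in a few lines; `demo` and the array contents are illustrative:

```cpp
#include <cassert>

// x++ binds tighter than *, so *x++ advances the pointer and merely reads
// the old position, while (*y)++ increments the pointed-to value.
int demo() {
    int data[2] = {10, 20};
    int* x = data;
    *x++;        // pointer moves to data[1]; no element is modified
    int* y = data;
    (*y)++;      // data[0] becomes 11
    return data[0];
}
```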
Here is what I do in this situation:
Set a breakpoint at a location where you can see the (correct) value of the variable in question, but before the error happens. You will need the memory address of the variable whose stack is being corrupted. Sometimes I have to add a line of code in order for the debugger to give me the address easily (int *x = &y).
At this point you can set a memory breakpoint (Debug -> New Breakpoint -> New Data Breakpoint).
Hit Play, and the debugger should stop when the memory is written to. Look up the stack (mine usually breaks in some assembly code) to see what's being called.
I usually follow the variable declared before the complaining variable, which usually helps me find the problem. But this can sometimes be very complex with no clue, as you have seen. You could enable Debug menu >> Exceptions and tick "Win32 exceptions" to catch all exceptions. This will still not catch this exception, but it could catch something else that indirectly points to the problem.
In my case it was caused by a library I was using. It turned out the header file I was including in my project didn't quite match the actual header file in that library (it was off by one line).
There is a different error which is also related:
0xC015000F: The activation context being deactivated is not the most
recently activated one.
When I got tired of getting the mysterious stack corrupted message on my computer with no debugging information, I tried my project on another computer and it was giving me the above message instead. With the new exception I was able to work my way out.
I encountered this when I made a pointer array of 13 items and then tried to set the 14th item. Changing the array to 14 items solved the problem. Hope this helps some people ^_^
One relatively common source of the "Stack around the variable 'x' was corrupted" problem is a wrong cast. It is sometimes hard to spot. Here is an example of a function where such a problem occurs, and the fix. In the function assignValue I want to assign some value to a variable. The variable is located at the memory address passed as an argument to the function:
#include <algorithm>  // std::copy
#include <cassert>
#include <cstdint>
#include <memory>     // std::addressof

using namespace std;

template<typename T>
void assignValue(uint64_t address, T value)
{
    int8_t* begin_object = reinterpret_cast<int8_t*>(std::addressof(value));

    // wrongly cast to (int*), produces the error (sizeof(int) == 4)
    //std::copy(begin_object, begin_object + sizeof(T), (int*)address);

    // correct cast to (int8_t*), assignment byte by byte (sizeof(int8_t) == 1)
    std::copy(begin_object, begin_object + sizeof(T), (int8_t*)address);
}

int main()
{
    int x = 1;
    int x2 = 22;
    assignValue<int>((uint64_t)&x, x2);
    assert(x == x2);
}

Questions about C++ memory allocation and delete

I'm getting a bad error. When I call delete on an object at the top of an object hierarchy (hoping to cause the deletion of its child objects), my program quits and I get this:
*** glibc detected *** /home/mossen/workspace/abbot/Debug/abbot: double free or corruption (out): 0xb7ec2158 ***
followed by what looks like a memory dump of some kind. I've searched for this error, and from what I gather it seems to occur when you attempt to delete memory that has already been deleted. Impossible, as there's only one place in my code that attempts this delete. Here's the wacky part: it does not occur in debug mode. The code in question:
Terrain::~Terrain()
{
    if (heightmap != NULL) // 'heightmap' is a Heightmap*
    {
        cout << "heightmap& == " << heightmap << endl;
        delete heightmap;
    }
}
I have commented out everything in the heightmap destructor, and I still get this error. When the error occurs,
heightmap& == 0xb7ec2158
is printed. In debug mode I can step through the code slowly and
heightmap& == 0x00000000
is printed, and there is no error. If I comment out the 'delete heightmap;' line, error never occurs. The destructor above is called from another destructor (separate classes, no virtual destructors or anything like that). The heightmap pointer is new'd in a method like this:
Heightmap* HeightmapLoader::load() // a static method
{
    // ....
    Heightmap* heightmap = new Heightmap();
    // ....other code
    return heightmap;
}
Could it be something to do with returning a pointer that was initialized in the stack space of a static method? Am I doing the delete correctly? Any other tips on what I could check for or do better?
What happens if load() is never called? Does your class constructor initialise heightmap, or is it uninitialised when it gets to the destructor?
Also, you say:
... delete memory that has already been deleted. Impossible as there's only one place in my code that attempts this delete.
However, you haven't taken into consideration that your destructor might be called more than once during the execution of your program.
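One common way the destructor runs more than once without any visible double delete is an accidental copy of an object that owns a raw pointer: the compiler-generated copy operations copy the pointer, so two objects later delete the same Heightmap. A hedged sketch of the rule-of-three fix (class names mirror the question, bodies are illustrative); deleting the copy operations turns the silent bug into a compile error:

```cpp
#include <cassert>
#include <type_traits>

struct Heightmap { int id = 7; };

struct Terrain {
    Heightmap* heightmap = nullptr;

    Terrain() = default;
    Terrain(const Terrain&) = delete;             // forbid pointer-copying
    Terrain& operator=(const Terrain&) = delete;  // copies that double-delete

    ~Terrain() { delete heightmap; }              // now runs once per owner
};
```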
In debug mode pointers are often set to NULL and memory blocks zeroed out. That is the reason why you are experiencing different behavior in debug/release mode.
I would suggest you use a smart pointer instead of a traditional pointer
auto_ptr<Heightmap> HeightmapLoader::load() // a static method
{
    // ....
    auto_ptr<Heightmap> heightmap( new Heightmap() );
    // ....other code
    return heightmap;
}
that way you don't need to delete it later as it will be done for you automatically
see also boost::shared_ptr
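For reference, std::auto_ptr was deprecated in C++11 and removed in C++17; the same transfer-of-ownership factory is written today with std::unique_ptr. A sketch (the Heightmap body and the rows value are illustrative):

```cpp
#include <cassert>
#include <memory>

struct Heightmap { int rows = 0; };

// Modern equivalent of the auto_ptr factory above: unique_ptr moves
// ownership out of load() to the caller, and the Heightmap is destroyed
// automatically when the owner goes out of scope.
std::unique_ptr<Heightmap> load() {          // stands in for HeightmapLoader::load
    auto heightmap = std::make_unique<Heightmap>();
    heightmap->rows = 128;                   // "....other code"
    return heightmap;                        // implicit move to the caller
}
```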
It's quite possible that you're calling that dtor twice; in debug mode the pointer happens to be zeroed on delete, in optimized mode it's left alone. While not a clean resolution, the first workaround that comes to mind is setting heightmap = NULL; right after the delete -- it shouldn't be necessary but surely can't hurt while you're looking for the explanation of why you're destroying some Terrain instance twice!-) [[there's absolutely nothing in the tiny amount of code you're showing that can help us explain the reason for the double-destruction.]]
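The null-after-delete workaround suggested above looks like this in a hedged sketch (names mirror the question, bodies are illustrative). Note it only masks a double destruction while you hunt for the real cause; deleting a null pointer is well-defined in every build configuration:

```cpp
#include <cassert>

struct Heightmap { int dummy = 0; };

struct Terrain {
    Heightmap* heightmap = nullptr;   // never left uninitialized

    ~Terrain() {
        delete heightmap;             // deleting a null pointer is a no-op
        heightmap = nullptr;          // a second pass now deletes nullptr
    }
};
```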
It looks like the classic case of an uninitialized pointer. As @Greg said, what if load() is not called from Terrain? I think you are not initializing the Heightmap* pointer inside the Terrain constructor. In debug mode, this pointer may be set to NULL, and C++ guarantees that deleting a NULL pointer is a valid operation, so the code doesn't crash. However, in release mode, due to optimizations, the pointer is uninitialized, you try to free some random block of memory, and the above crash occurs.