Obj *op = new Obj;
Obj *op2 = op;
delete op;
delete op2; // What happens here?
What's the worst that can happen when you accidentally double delete? Does it matter? Is the compiler going to throw an error?
It causes undefined behaviour. Anything can happen. In practice, a runtime crash is probably what I'd expect.
Undefined behavior. There are no guarantees whatsoever made by the standard. Probably your operating system makes some guarantees, like "you won't corrupt another process", but that doesn't help your program very much.
Your program could crash. Your data could be corrupted. The direct deposit of your next paycheck could instead take 5 million dollars out of your account.
It's undefined behavior, so the actual result will vary depending on the compiler & runtime environment.
In most cases, the compiler won't notice. In many, if not most, cases, the runtime memory management library will crash.
Under the hood, any memory manager has to maintain some metadata about each block of data it allocates, in a way that allows it to look up the metadata from the pointer that malloc/new returned. Typically this takes the form of a structure at fixed offset before the allocated block. This structure can contain a "magic number" -- a constant that is unlikely to occur by pure chance. If the memory manager sees the magic number in the expected place, it knows that the pointer provided to free/delete is most likely valid. If it doesn't see the magic number, or if it sees a different number that means "this pointer was recently freed", it can either silently ignore the free request, or it can print a helpful message and abort. Either is legal under the spec, and there are pro/con arguments to either approach.
If the memory manager doesn't keep a magic number in the metadata block, or doesn't otherwise check the sanity of the metadata, then anything can happen. Depending on how the memory manager is implemented, the result is most likely a crash without a helpful message, either immediately in the memory manager logic, somewhat later the next time the memory manager tries to allocate or free memory, or much later and far away when two different parts of the program each think they have ownership of the same chunk of memory.
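To make that concrete, here is a minimal sketch of such a magic-number scheme. Everything in it (the names checked_malloc, checked_free, BlockHeader and the constants) is invented for illustration; real allocators lay out their metadata differently, and this sketch "quarantines" freed blocks instead of recycling them so the check stays well defined:
// A minimal sketch of the magic-number scheme described above -- NOT how any
// particular malloc implementation actually works.
#include <cstdio>
#include <cstdlib>

const unsigned int MAGIC_ALLOCATED = 0xA110CA7E;
const unsigned int MAGIC_FREED     = 0xFEEDF4EE;

struct BlockHeader {
    unsigned int magic; // current state of the block
    size_t       size;  // size the caller asked for
};

void* checked_malloc(size_t size) {
    BlockHeader* h = (BlockHeader*)std::malloc(sizeof(BlockHeader) + size);
    if (!h) return NULL;
    h->magic = MAGIC_ALLOCATED;
    h->size  = size;
    return h + 1; // hand the caller the bytes just after the header
}

void checked_free(void* p) {
    if (!p) return;                       // freeing a null pointer is a no-op
    BlockHeader* h = (BlockHeader*)p - 1; // look at the fixed offset before the block
    if (h->magic == MAGIC_ALLOCATED) {
        h->magic = MAGIC_FREED;           // remember that this block was freed
        // a real allocator would recycle the memory here, which is exactly why
        // detection of a later double free is not guaranteed
    } else if (h->magic == MAGIC_FREED) {
        std::fprintf(stderr, "double free of %p detected\n", p);
        std::abort();                     // or silently ignore the request
    } else {
        std::fprintf(stderr, "%p does not look like a block from this allocator\n", p);
        std::abort();
    }
}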
Let's try it. Turn your code into a complete program in so.cpp:
class Obj
{
public:
    int x;
};

int main( int argc, char* argv[] )
{
    Obj *op = new Obj;
    Obj *op2 = op;
    delete op;
    delete op2;
    return 0;
}
Compile it (I'm using gcc 4.2.1 on OSX 10.6.8, but YMMV):
russell#Silverback ~: g++ so.cpp
Run it:
russell#Silverback ~: ./a.out
a.out(1965) malloc: *** error for object 0x100100080: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug
Abort trap
Lookie there, the gcc runtime actually detects that it was a double delete and is fairly helpful before it crashes.
While this is undefined:
int* a = new int;
delete a;
delete a; // same as your code
this is well defined:
int* a = new int;
delete a;
a = nullptr; // or just NULL or 0 if your compiler doesn't support c++11
delete a; // nothing happens!
Thought I should post it since no one else was mentioning it.
The compiler may give a warning or something, especially in obvious cases (like in your example), but it cannot always detect this. (You can use something like valgrind, which can detect it at runtime, though.) As for the behaviour, it can be anything. Some safe library might check and handle it fine -- but other runtimes (for speed) will assume your call is correct (which it isn't) and then crash or worse. The runtime is allowed to assume you're not double deleting, even if double deleting would do something bad, e.g. crash your computer.
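For example, running such a program under valgrind (assuming valgrind is installed) will typically flag the second delete. Roughly, and heavily abridged (exact wording and addresses vary by version), the session looks like:
valgrind ./a.out
...
Invalid free() / delete / delete[] / realloc()
...
followed by a stack trace pointing at the offending delete.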
Everyone has already told you that you shouldn't do this and that it causes undefined behavior. That is widely known, so let's elaborate on it at a lower level and see what actually happens.
The standard universal answer is that anything can happen; that's not entirely true. For example, the computer will not attempt to kill you for doing this (unless you are programming the AI for a robot) :)
The reason there can't be any universal answer is that since this is undefined, it may differ from compiler to compiler and even across different versions of the same compiler.
But this is what "roughly" happens in most cases:
delete consists of two primary operations (a rough code sketch of this follows the list):
it calls the destructor if it's defined
it somehow frees the memory allocated to the object
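For a plain (non-array) delete of an object of complete type, the statement is roughly equivalent to the two explicit steps below. This is only an illustration; real compilers also handle null pointers, arrays and class-specific operator delete:
Obj* p = new Obj;

// What "delete p;" does, spelled out by hand (illustration only):
p->~Obj();           // 1. run the destructor
operator delete(p);  // 2. give the memory back to the allocator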
So, if your destructor contains any code that accesses data of the class that was already deleted, it may segfault, or (more likely) you will read some nonsense data. If the deleted data included pointers, then it will most likely segfault, because you will attempt to access memory that now contains something else, or that no longer belongs to you.
If your destructor doesn't touch any data or isn't user-defined (let's not consider virtual destructors here for simplicity), that alone may not be a reason for a crash in most compiler implementations. However, calling the destructor is not the only operation that happens here.
The memory also needs to be freed. How that is done depends on the compiler's implementation, but it will typically call some free-like function, giving it the pointer to (and possibly the size of) your object. Calling free on memory that was already freed may crash, because the memory may not belong to you anymore. If it does still belong to you, it may not crash immediately, but the allocator may later hand out memory that is already in use by some different object of your program.
That means one or more of your memory structures just got corrupted, and your program will likely crash sooner or later, or behave incredibly weirdly. The reason will not be obvious in your debugger and you may spend weeks figuring out what the hell just happened.
So, as others have said, it's generally a bad idea, but I suppose you already know that. Don't worry though, innocent kitten will most likely not die if you delete an object twice.
Here is example code that is wrong but may appear to work just fine (it runs without complaint with GCC on Linux):
class a {};

int main()
{
    a *test = new a();
    delete test;
    a *test2 = new a();
    delete test;
    return 0;
}
If I don't create an intermediate instance of that class between the deletes, the two calls to free on the same memory are detected as expected:
*** Error in `./a.out': double free or corruption (fasttop): 0x000000000111a010 ***
To answer your questions directly:
What is the worst that can happen:
In theory, your program causes something fatal to happen. It may even randomly attempt to wipe your hard drive in some extreme cases. The chances depend on what your program actually is (a kernel driver? a user-space program?).
In practice, it would most likely just crash with segfault. But something worse might happen.
Is the compiler going to throw an error?
It shouldn't.
No, it isn't safe to delete the same pointer twice. It is undefined behaviour according to C++ standard.
From the C++ FAQ:
Is it safe to delete the same pointer twice?
No! (Assuming you didn’t get that pointer back from new in between.)
For example, the following is a disaster:
class Foo { /*...*/ };

void yourCode()
{
    Foo* p = new Foo();
    delete p;
    delete p; // DISASTER!
    // ...
}
That second delete p line might do some really bad things to you. It might, depending on the phase of the moon, corrupt your heap, crash your program, make arbitrary and bizarre changes to objects that are already out there on the heap, etc. Unfortunately these symptoms can appear and disappear randomly. According to Murphy’s law, you’ll be hit the hardest at the worst possible moment (when the customer is looking, when a high-value transaction is trying to post, etc.).
Note: some runtime systems will protect you from certain very simple cases of double delete. Depending on the details, you might be okay if you happen to be running on one of those systems and if no one ever deploys your code on another system that handles things differently and if you are deleting something that doesn’t have a destructor and if you don’t do anything significant between the two deletes and if no one ever changes your code to do something significant between the two deletes and if your thread scheduler (over which you likely have no control!) doesn’t happen to swap threads between the two deletes and if, and if, and if. So back to Murphy: since it can go wrong, it will, and it will go wrong at the worst possible moment.
A non-crash doesn’t prove the absence of a bug; it merely fails to prove the presence of a bug.
Trust me: double-delete is bad, bad, bad. Just say no.
On GCC 7.2, Clang 7.0, and Zapcc 5.0, it aborts on the 9th delete.
#include <iostream>

int main() {
    int *x = new int;
    for(size_t n = 1;;n++) {
        std::cout << "delete x " << n << " time(s)\n";
        delete x;
    }
    return 0;
}
Related
I am learning C++ using tutorials from http://www.learncpp.com. In the lesson on dynamic memory allocation with new and delete (http://www.learncpp.com/cpp-tutorial/69-dynamic-memory-allocation-with-new-and-delete/), it states:
Similarly, when a dynamically allocated variable is deleted, the pointer pointing to it is not zero’d. Consider the following snippet:
int *pnValue = new int;
delete pnValue; // pnValue not set to 0
if (pnValue)
    *pnValue = 5; // will cause a crash
However, when I try it (compiler: GNU GCC, Ubuntu), my program doesn't crash.
int *pnValue = new int;
delete pnValue; // pnValue not set to 0
if (pnValue)
    *pnValue = 5; // will cause a crash -> doesn't
std::cout << "Did not crash" << std::endl;
What is happening here? Is there any form of runtime checking in C++?
Your program doesn't crash because you are lucky (or rather unlucky, thanks #user1320881, since you may not detect such things in more complicated code, and later crashes or nasty side effects may happen). In fact, what you have here is undefined behaviour, and anything can happen (not necessarily a crash). Technically, your program probably doesn't crash because the operating system hasn't yet reclaimed that part of the memory and you aren't overwriting memory that belongs to some other process, but you should never write such code.
Welcome to undefined behavior in C++. Though your question has already been directly answered, since you are new to the language I'd recommend John Regehr's blog post and the C++ Super-FAQ entry about the subject. I believe they will clarify a lot of later questions that you may have.
EDIT: after a couple of comments - the short answer - and perhaps the best one - is that the behaviour is undefined.
C++ does not set the pointer to NULL after delete, so the pointer most likely still holds the reference to the deallocated memory.
Because that reference still exists, there is no null-pointer error; it may not crash simply because whatever you do with the referenced memory goes unnoticed.
The operating system doesn't notice the invalid reference as long as you are only touching memory allocated to your process's heap. It would crash if the delete operation deallocated memory that the operating system had reserved for the heap itself, but that's an unlikely event.
The C++ library's heap manager may not notice the access to deallocated memory - this is implementation dependent.
Your own program may not be affected either, because you haven't yet done allocations that might reuse the memory you just deallocated.
The C++ compiler doesn't notice this either - it neither enforces run-time reference counting for allocated objects nor performs any run-time checking of the validity of references. The compiler is just happy that you assigned a variable of the correct type to the memory referenced by the pointer.
With C++, memory allocations are not only hard to manage but can also cause very serious problems. This is why reference counting is a good practice; it can be used with C++ as well, via smart pointers or libraries dedicated to it.
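For example, with std::shared_ptr (from <memory>, C++11) the reference count decides when the single delete happens, so the accidental double delete from the original question cannot occur:
#include <memory>

int main()
{
    std::shared_ptr<int> p = std::make_shared<int>(42);
    std::shared_ptr<int> p2 = p; // both share ownership, use count is now 2
    p.reset();                   // count drops to 1, nothing is freed yet
    p2.reset();                  // count drops to 0, the int is deleted exactly once
    return 0;
}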
The second delete in the following code will cause a program crash because the memory has already been deleted:
int* i = new int;
delete i;
delete i;
Trying to catch it with an exception handler doesn't help either:
int* i = new int;
delete i;
try {
    delete i;
} catch(exception& e) { // program just crashes, doesn't go into this exception block
    cout << "delete failed" << endl;
}
How to perform a safe delete (check first if region pointed by pointer has been deleted before)?
Or, if that's not possible, how do I get the line number where the crash occurs (without a debugging tool)?
delete does not try to detect whether the pointer is valid or not; it just deallocates the memory the pointer refers to. You can set i to nullptr after each deletion and check if (i == nullptr) before deleting again (although that check isn't strictly needed, since deleting a null pointer is a no-op; it effectively does nothing).
If you are just playing around, this kind of code can help you learn the language. But in production code you should be careful about these kinds of bugs and eliminate them. A double delete is also a good indicator that your code may have other resource management bugs.
The modern C++ solution is to never use new or delete. Just make C++ handle everything automatically.
std::unique_ptr<int> i = std::make_unique<int>();
or
std::shared_ptr<int> i = std::make_shared<int>();
There is no need to delete it. If your library does not have std::make_unique (it was added in C++14; both live in <memory>), you can construct the unique_ptr from new directly, as below, or write your own make_unique (a sketch of that follows the snippet).
{
    std::unique_ptr<int> i(new int);
}
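A commonly used pre-C++14 stand-in for std::make_unique looks roughly like this (single-object form only; the array overloads are omitted for brevity):
#include <memory>
#include <utility>

template <typename T, typename... Args>
std::unique_ptr<T> make_unique(Args&&... args)
{
    return std::unique_ptr<T>(new T(std::forward<Args>(args)...));
}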
Let's start by saying that a double delete is an undefined behaviour... and that means a behaviour which is not defined :)
Allow me to also remind you that deleting a null pointer is a no-op, so there is no need to check whether the pointer is null before deleting.
Without more context, the answer is that there is no portable solution.
Depending on architecture, OS, even the memory allocation library you are using, it will produce different effects and provide you with different options to detect it.
There's no guarantee that the program will crash on the second delete. It might simply corrupt the memory allocator structures in a way that will only crash after some time.
If your objective is detecting the problem, your best chance is to set up your program for capturing crashing signals (e.g. SIGABRT, SIGSEGV, SIGBUS...) and print a stack trace on the signal handler, before allowing the program to terminate/write the core file. As I said above, this might be or might not be the place of the memory corruption... but it will be the place where the memory allocator/program cannot go on any more.
That's the least invasive option.
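A minimal sketch of that approach on Linux/glibc follows; backtrace() and backtrace_symbols_fd() come from <execinfo.h> and are not portable, and strictly speaking only async-signal-safe functions should be called from a handler, so treat this as a starting point rather than production code:
#include <csignal>
#include <execinfo.h>
#include <unistd.h>

static void crash_handler(int sig)
{
    void* frames[64];
    int count = backtrace(frames, 64);                   // capture the call stack
    backtrace_symbols_fd(frames, count, STDERR_FILENO);  // write it to stderr without calling malloc
    std::signal(sig, SIG_DFL);                           // restore the default action
    std::raise(sig);                                     // re-raise so a core file is still produced
}

int main()
{
    std::signal(SIGABRT, crash_handler);
    std::signal(SIGSEGV, crash_handler);
    std::signal(SIGBUS,  crash_handler);

    // ... rest of the program ...
    return 0;
}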
Using customized memory allocators or memory allocators with debugging options (e.g. libumem in Solaris) can help you in detecting earlier or more accurately where the problem was. The catch is that usually there's a bigger or smaller performance penalty.
If your objective is to prevent the problem... then you have to resort to best practices... For example.
Resort to using RAII or smart pointers in general, or at least use them when you cannot safely establish the memory ownership throughout your program.
At the very least, always remember to set a pointer to null after you have deleted it. That doesn't guarantee anything, because you can always have concurrent deletes... but it helps reduce the scenarios in which you could get a crash.
You could set the pointer you delete to 0 or NULL (prior to C++11) or to nullptr (for C++11 and later compilers). However, this merely works around the problem rather than solving it (see the example below):
void deletePointer(int* & iptr) {
    delete iptr;
    iptr = nullptr;
}
There is no portable and standard test to check whether a pointer is "valid" for deletion. The best you can do, if your compiler supports C++11, is to go with smart pointers. Thus, you wouldn't have to worry about invalid deletions.
Why is this not giving an error when I compile?
#include <iostream>
using namespace std;

int main()
{
    int *a = new int[2];
    // int a[2]; // even this is not giving error
    a[0] = 0;
    a[1] = 1;
    a[2] = 2;
    a[3] = 3;
    a[100] = 4;
    int b;
    return 0;
}
Can someone explain why this is happening?
Thanks in advance.
Because undefined behavior == anything can happen. You're unlucky that it doesn't crash; this sort of behavior can potentially hide bugs.
Declaring two variables called a certainly is an error; if your compiler accepts that, then it's broken. I assume you mean that you still don't get an error if you replace one declaration with the other.
Array access is not range-checked. At compile time, the size of an array is often not known, and the language does not require a check even when it is. At run time, a check would degrade performance, which would go against the C++ philosophy of not paying for something you don't need. So access beyond the end of an array gives undefined behaviour, and it's up to the programmer to make sure it doesn't happen.
Sometimes, an invalid access will cause a segmentation fault, but this is not guaranteed. Typically, memory protection is only applied to whole pages of memory, with a typical page size of a few kilobytes. Any access within a page of valid memory will not be caught. There's a good chance that the memory you access contains some other program variable, or part of the call stack, so writing there could affect the program's behaviour in just about any way you can imagine.
If you want to be safe, you could use std::vector, and only access its elements using its at() function. This will check the index, and throw an exception if it's out of range. It will also manage memory allocation for you, fixing the memory leak in your example.
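For example, a hypothetical reworking of the snippet above using std::vector and at() would throw instead of silently scribbling past the end:
#include <iostream>
#include <stdexcept>
#include <vector>

int main()
{
    std::vector<int> a(2); // memory is managed for you, no new[]/delete[] needed
    try {
        a.at(0) = 0;
        a.at(1) = 1;
        a.at(2) = 2;       // out of range: throws instead of corrupting memory
    } catch (const std::out_of_range& e) {
        std::cout << "caught: " << e.what() << '\n';
    }
    return 0;
}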
I'm guessing you're coming from Java or a Java-like language where once you step out of the boundary of an array, you get the "array index out of bounds" exception.
Well, C expects more from you; it saves up the space you ask for, but it doesn't check to see if you're going outside the boundary of that saved up space. Once you do that as mentioned above, the program has that dreaded undefined behavior.
And remember for the future that if you have a bug in your program and you can't seem to find it, and when you go over the code/debug it, everything seems OK, there is a good chance you're "out of bounds" and accessing an unallocated place.
Compilers with good code analysis would certainly warn about that code referencing beyond your array allocation. Setting aside the multiple declarations of a: if you ran it, it may or may not fault (undefined behavior, as others have said). If, for example, you got a 4KB page of heap (in processor address space) and you don't write outside of that page, you won't get a fault from the processor. Upon deleting the array (if you had done so), and depending on the heap implementation, the heap might detect that it is corrupted.
Inspired by this question about whether the compiler can optimize away a call to a function without side effects. Suppose I have the following code:
delete[] new char[10];
It does nothing useful. But does it have a side effect? Is heap allocation immediately followed by a deallocation considered a side effect?
It's up to the implementation. Allocating and freeing memory isn't "observable behavior" unless the implementation decides that it's observable behavior.
In practice, your implementation probably links against a C++ runtime library of some sort, and when your TU is compiled, the compiler is forced to recognize that calls into that library may have observable effects. As far as I know, that's not mandated by the standard, it's just how things normally work. If an optimizer can somehow work out that certain calls or combinations of calls in fact don't affect observable behavior then it can remove them, so I believe that a special case to spot your example code and remove it would conform.
Also, I can't remember exactly how user-defined global new[] and delete[] work [I've since been reminded]. Since the code might call definitions of those operators in another user-defined TU that's later linked to this TU, the calls can't be optimized away at compile time. They could be removed at link time if it turns out that the operators aren't user-defined (although then the point about the runtime library applies), or are user-defined but don't have side effects (once the pair of them is inlined - this seems pretty implausible in a reasonable implementation, actually[*]).
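For concreteness, the global array forms are replaceable, and a replacement like the following (a simplified sketch, not exception- or alignment-complete) could live in another TU; its observable output is exactly the kind of side effect the compiler cannot rule out when compiling your code:
// hypothetical replacement living in some other .cpp file
#include <cstdio>
#include <cstdlib>
#include <new>

void* operator new[](std::size_t n)
{
    std::printf("new[] of %zu bytes\n", n); // observable side effect
    if (void* p = std::malloc(n))
        return p;
    throw std::bad_alloc();
}

void operator delete[](void* p) noexcept
{
    std::printf("delete[] of %p\n", p);     // observable side effect
    std::free(p);
}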
I'm pretty sure that you aren't allowed to rely on the exception from new[] to "prove" whether or not you've run out of memory. In other words, just because new char[10] doesn't throw this time, doesn't mean it won't throw after you free the memory and try again. And just because it threw last time and you haven't freed anything since, doesn't mean it'll throw this time. So I don't see any reason on those grounds why the two calls can't be eliminated - there's no situation where the standard guarantees that new char[10] will throw, so there's no need for the implementation to find out whether it would or not. For all you know, some other process on the system freed 10 bytes just before the call to new[], and allocated it just after the call to delete[].
[*]
Or maybe not. If new doesn't check for space, perhaps relying on guard pages, but just increments a pointer, and delete normally does nothing (relying on process exit to free memory), but in the special case where the block freed is the last block allocated it decrements the pointer, your code could be equivalent to:
// (global_next_allocation / global_last_allocation are the allocator's
// bump-pointer state, both of type char*)

// new[]
global_last_allocation = global_next_allocation;
global_next_allocation += 10 + sizeof(size_t);
char *tmp = global_last_allocation;
*((size_t *)tmp) = 10; // code to handle alignment requirements is omitted
tmp += sizeof(size_t);

// delete[]
tmp -= sizeof(size_t);
if (tmp == global_last_allocation) {
    // rewind by the stored block size plus the size header
    global_next_allocation -= sizeof(size_t) + *((size_t*)tmp);
}
Which could almost all be removed assuming nothing is volatile, just leaving global_last_allocation = global_next_allocation;. You could get rid of that too by storing the prior value of last in the block header along with the size, and restoring that prior value when the last allocation is freed. That's a pretty extreme memory allocator implementation, though: you'd need a single-threaded program, with a speed-demon programmer who's confident the program doesn't churn through more memory than was made available to begin with.
No. It should neither be removed by the compiler nor considered a side effect in itself. Consider the following:
struct A {
    static int counter;
    A () { counter ++; }
};

int A::counter = 0; // definition needed for the static member

int main ()
{
    A obj[2]; // counter = 2
    delete [] new A[3]; // counter = 2 + 3 = 5
}
Now, if the compiler removed this assuming it has no side effects, the logic would go wrong. So, even if you are not doing anything yourself, the compiler has to assume that something useful may be happening (in the constructor). That's the reason why
A(); // simple object construction and destruction
is not optimized away.
new[] and delete[] could ultimately result in system calls. Additionally, new[] might throw. With this in mind, I don't see how the new-delete sequence can be legitimately considered free from side effects and optimized away.
(Here, I assume no overloading of new[] and delete[] is involved.)
The compiler cannot see the implementation of delete[] and new[] and must assume that they have side effects.
If you had implemented delete[] and new[] yourself above that line, the compiler may be able to inline / optimize away those calls entirely.
new and delete will in the usual cases result in calls to the operating system's heap manager, and this can very well have side effects. If your program has only a single thread, the code you show should not have side effects, but my observations on Windows (mostly on 32-bit platforms) show that at least large allocations and the following deallocations often lead to 'heap contention' even if all of the memory has been released. See also this related post on MSDN.
More complex problems may occur if multiple threads are running. Although your code releases the memory in the meantime a different thread may have allocated (or freed) memory, and your allocation might lead to further heap fragmentation. This all is rather theoretical but it may sometimes arise.
If your call to new fails, then depending on the compiler version you use, an exception std::bad_alloc will probably be thrown, and that of course has side effects.
It has been my observation that if free( ptr ) is called where ptr is not a valid pointer to system-allocated memory, an access violation occurs. Let's say that I call free like this:
LPVOID ptr = (LPVOID)0x12345678;
free( ptr );
This will most definitely cause an access violation. Is there a way to test that the memory location pointed to by ptr is valid system-allocated memory?
It seems to me that the memory management part of the Windows OS kernel must know what memory has been allocated and what memory remains available for allocation. Otherwise, how could it know if enough memory remains to satisfy a given request? (rhetorical) That said, it seems reasonable to conclude that there must be a function (or set of functions) that would allow a user to determine if a pointer is valid system-allocated memory. Perhaps Microsoft has not made these functions public. If Microsoft has not provided such an API, I can only presume that it was for an intentional and specific reason. Would providing such a hook into the system pose a significant threat to system security?
Situation Report
Although knowing whether a memory pointer is valid could be useful in many scenarios, this is my particular situation:
I am writing a driver for a new piece of hardware that is to replace an existing piece of hardware that connects to the PC via USB. My mandate is to write the new driver such that calls to the existing API for the current driver will continue to work in the PC applications in which it is used. Thus the only required changes to existing applications is to load the appropriate driver DLL(s) at startup. The problem here is that the existing driver uses a callback to send received serial messages to the application; a pointer to allocated memory containing the message is passed from the driver to the application via the callback. It is then the responsibility of the application to call another driver API to free the memory by passing back the same pointer from the application to the driver. In this scenario the second API has no way to determine if the application has actually passed back a pointer to valid memory.
There are actually some functions called IsBadReadPtr(), IsBadWritePtr(), IsBadStringPtr(), and IsBadCodePtr() that might do the job, but do not ever use them. I mention them only so that you are aware that these options are not to be pursued.
You're much better off making sure you set all your pointers to NULL or 0 when it points to nothing and check against that.
For example:
// Set ptr to zero right after deleting the pointee.
delete ptr; // It's okay to call delete on zero pointers, but it
            // certainly doesn't hurt to check.
            // Note: this might be a performance issue on some compilers
            // (see the section "Code Size" on this page), so it might
            // actually be worth it to do a self-test against zero first.
ptr = 0;

// Set ptr to zero right after freeing the pointee.
if(ptr != 0)
{
    free(ptr); // According to Matteo Italia (see comments)
               // it's also okay to pass a zero pointer, but
               // again it doesn't hurt.
    ptr = 0;
}

// Initialize to zero right away if this won't take on a value for now.
void* ptr = 0;
Even better is to use some variant of RAII and never have to deal with pointers directly:
class Resource
{
public:
    // You can also use a factory pattern and make this constructor
    // private.
    Resource() : ptr(0)
    {
        ptr = malloc(42); // Or new[] or AcquireArray() or something
        // Fill ptr buffer with some valid values
    }

    // Allow users to work directly with the resource, if applicable
    void* GetPtr() const { return ptr; }

    ~Resource()
    {
        if(ptr != 0)
        {
            free(ptr); // Or delete[] or ReleaseArray() or something

            // Assignment not actually necessary in this case since
            // the destructor is always the last thing that is called
            // on an object before it dies.
            ptr = 0;
        }
    }

private:
    void* ptr;
};
Or use the standard containers if applicable (which is really an application of RAII):
std::vector<char> arrayOfChars;
Short answer: No.
There are functions in Windows that supposedly tell you whether a pointer points to real memory (IsBadReadPtr() and its ilk), but they don't work and you should never use them!
The true solution to your problem is to always initialize pointers to NULL, and reset them to NULL once you've deleted them.
EDIT based on your edits:
You're really hinting at a much larger question: How can you be sure your code continues to function properly in the face of client code that screws up?
This really should be a question on its own. There are no simple answers. But it depends on what you mean by "continue to function properly."
There are two theories. One says that even if client code sends you complete crap, you should be able to trudge along, discarding the garbage and processing the good data. A key to accomplishing this is exception handling. If you catch an exception when processing the client's data, roll your state back and try to return as if they had never called you at all.
The other theory is to not even try to continue, and to just fail. Failing can be graceful, and should include some comprehensive logging so that the problem can be identified and hopefully fixed in the lab. Kick up error messages. Tell the user some things to try next time. Generate minidumps, and send them automatically back to the shop. But then, shut down.
I tend to subscribe to the second theory. When client code starts sending crap, the stability of the system is often at risk. They might have corrupted heaps. Needed resources might not be available. Who knows what the problem might be. You might get some good data interspersed with bad, but you don't even know if the good data really is good. So shut down as quickly as you can, to mitigate the risk.
To address your specific concern, I don't think you have to worry about checking the pointer. If the application passes your DLL an invalid address, it represents a memory management problem in the application. No matter how you code your driver, you can't fix the real bug.
To help the application developers debug their problem, you could add a magic number to the object you return to the application. When your library is called to free an object, check for the number, and if it isn't there, print a debug warning and don't free it! I.e.:
#define DATA_MAGIC 0x12345678

struct data {
    int foo;   /* The actual object data. */
    int magic; /* Magic number for memory debugging. */
};

struct data *api_recv_data() {
    struct data *d = malloc(sizeof(*d));
    d->foo = whatever;
    d->magic = DATA_MAGIC;
    return d;
}

void api_free_data(struct data *d) {
    if (d->magic == DATA_MAGIC) {
        d->magic = 0;
        free(d);
    } else {
        fprintf(stderr, "api_free_data() asked to free invalid data %p\n", d);
    }
}
This is only a debugging technique. It will work correctly if the application has no memory errors. If the application does have problems, this will probably alert the developer to the mistake. It only works because your actual problem is much more constrained than your initial question indicates.
No, you are supposed to know if your pointers point to correctly allocated memory.
No. You are only supposed to have a pointer to memory that you know is valid, usually because you allocated it in the same program. Track your memory allocations properly and then you won't even need this!
Also, you are invoking Undefined Behaviour by attempting to free an invalid pointer, so it may crash or do anything at all.
Also, free is a function of the C++ Standard Library inherited from C, not a WinAPI function.
First of all, in the standard there's nothing that guarantees such a thing (freeing a non-malloced pointer is undefined behavior).
Anyhow, passing the pointer to free is just a twisted route to trying to access that memory; if you wanted to check whether the memory pointed to by a pointer is readable/writable on Windows, you really should just try it and be ready to deal with the SEH exception; this is actually what the IsBadXxxPtr functions do, by translating such an exception into their return code.
However, this is an approach that hides subtle bugs, as explained in this post by Raymond Chen; so, long story short, no, there's no safe way to determine whether a pointer points to something valid, and I think that if you need such a test somewhere, there's a design flaw in that code.
I'm not going to echo what every one has already said, just to add to those answers though, this is why smart pointers exist - use them!
Any time you find yourself having to work around crashes due to memory errors - take a step back, a large breath, and fix the underlying problem - it's dangerous to attempt to work around them!
EDIT based on your update:
There are two sane ways that I can think of to do this.
The client application provides a buffer where you put the message, meaning your API does not have worry about managing that memory - this requires changes to your interface and client code.
You change the semantics of the interface, and force the clients of the interface to not worry about memory management (i.e. you call with a pointer to something that is only valid in the context of the callback - if client requires, they make their own copy of the data). This does not change your interface - you can still callback with a pointer, however your clients will need to check that they don't use the buffer outside of that context - potentially if they do, it's probably not what you want and so it could be a good thing that they fix it(?)
Personally I would go for the latter (a rough sketch of what it could look like follows below), as long as you can be sure that the buffer is not used outside of the callback. If it is, then you'll have to use hackery (such as the magic number that has been suggested), though this is not always guaranteed to work; for example, suppose there was some form of buffer overrun from the previous block and you somehow overwrote the magic number with crap - what happens then?
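A rough sketch of that second option; every name here is invented for illustration and is not the actual driver API:
// The pointer handed to the callback is only valid until the callback returns;
// the application copies the bytes if it needs them longer, so no driver-owned
// memory ever has to be freed by the application.
#include <vector>

typedef void (*MessageCallback)(const unsigned char* data, unsigned int length);

void driver_register_callback(MessageCallback cb); // provided by the driver (hypothetical)

void on_message(const unsigned char* data, unsigned int length)
{
    // Copy now if the data is needed after this function returns; the driver
    // is free to reuse or release the buffer as soon as we return.
    std::vector<unsigned char> copy(data, data + length);
    // ... queue 'copy' for the rest of the application ...
}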
Application memory management is up to the application developer to maintain, not the operating system (even in managed languages, the operating system doesn't do that job, a garbage collector does). If you allocate an object on the heap, it is your responsibility to free it properly. If you fail to do so, your application will leak memory. The operating system (in the case of Windows at least) does know how much memory it has let your application have, and will reclaim it when your application closes (or crashes), but there is no documented way (that works) to query a memory address to see if it is an allocated block.
The best suggestion I can give you: learn to manage your memory properly.
Not without access to the internals of the malloc implementation.
You could perhaps identify some invalid pointers (e.g., ones that don't point anywhere within your process's virtual memory space), but if you take a valid pointer and add 1 to it, it will be invalid for calling free() but will still point within system-allocated memory. (Not to mention the usual problem of calling free on the same pointer more than once).
Aside from the obvious point made by others about this being very bad practice, I see another problem.
Just because a particular address doesn't cause free() to generate an access violation, does not mean it's safe to free that memory. The address could actually be an address within the heap so that no access violation occurs, and freeing it would result in heap corruption. Or it might even be a valid address to free, in which case you've freed some block of memory that might still be in use!
You've really offered no explanation of why such a poor approach should even be considered.
You apparently have determined that you're done with an object that you currently have a pointer to and if that object was malloced you want to free it. This doesn't sound like an unreasonable idea, but the fact that you have a pointer to an object doesn't tell you anything about how that object was allocated (with malloc, with new, with new[], on the stack, as shared memory, as a memory-mapped file, as an APR memory pool, using the Boehm-Demers-Weiser garbage collector, etc.) so there is no way to determine the correct way to deallocate the object (or if deallocation is needed at all; you may have a pointer to an object on the stack). That's the answer to your actual question.
But sometimes it's better to answer the question that should have been asked. And that question is "how can I manage memory in C++ if I can't always tell things like 'how was this object allocated, and how should it be deallocated'?" That's a tricky question, and, while it's not easy, it is possible to manage memory if you follow a few policies. Whenever you hear people complain about properly pairing each malloc with free, each new with delete and each new[] with delete[], etc., you know that they are making their lives harder than necessary by not following a disciplined memory management regime.
I'm going to make a guess that you're passing pointers to a function and when the function is done you want it to clean up the pointers. This policy is generally impossible to get right. Instead I would recommend following a policy that (1) if a function gets a pointer from somebody else, then that "somebody else" is expected to clean up (after all, that "somebody else" knows how the memory was allocated) and (2) if a function allocates an object, then that function's documentation will say what method should be used to deallocate the object. Second, I would highly recommend smart pointers and similar classes.
Stroustrup's advice is:
If I create 10,000 objects and have pointers to them, I need to delete those 10,000 objects, not 9,999, and not 10,001. I don't know how to do that. If I have to handle the 10,000 objects directly, I'm going to screw up. ... So, quite a long time ago I thought, "Well, but I can handle a low number of objects correctly." If I have a hundred objects to deal with, I can be pretty sure I have correctly handled 100 and not 99. If I can get the number down to 10 objects, I start getting happy. I know how to make sure that I have correctly handled 10 and not just 9."
For instance, you want code like this:
#include <cstdlib>
#include <iostream>
#include "boost/shared_ptr.hpp"

namespace {
    // as a side note, there is no reason for this sample function to take int*s
    // instead of ints; except that I need a simple function that uses pointers
    int foo(int* bar, int* baz)
    {
        // note that since bar and baz come from outside the function, somebody
        // else is responsible for cleaning them up
        return *bar + *baz;
    }
}

int main()
{
    boost::shared_ptr<int> quux(new int(2));

    // note, I would not recommend using malloc with shared_ptr in general
    // because the syntax sucks and you have to initialize things yourself
    boost::shared_ptr<int> quuz(reinterpret_cast<int*>(std::malloc(sizeof(int))), std::free);
    *quuz = 3;

    std::cout << foo(quux.get(), quuz.get()) << '\n';
}
Why would 0x12345678 necessarily be invalid? If your program uses a lot of memory, something could be allocated there. Really, there's only one pointer value you should absolutely rely on being an invalid allocation: NULL.
C++ does not use 'malloc' but 'new', which usually has a different implementation; therefore 'delete' and 'free' can't be mixed – neither can 'delete' and 'delete[]' (its array version).
DLLs may have their own heap (their own copy of the memory manager), so memory allocated in a DLL can't safely be handed to the memory management of the non-DLL area, and vice versa.
Every API and language has its own memory management for its own type of memory-objects.
I.e.: You do not 'free()' or 'delete' open files, you 'close()' them. The same goes for every other API, even if the type is a pointer to memory instead of a handle.
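In short, every acquisition has exactly one matching release, and they must not be mixed. A small example of the usual pairings:
#include <cstdio>
#include <cstdlib>

int main()
{
    int*  a = new int;                          // new      -> delete
    int*  b = new int[10];                      // new[]    -> delete[]
    int*  c = (int*)std::malloc(sizeof(int));   // malloc() -> free()
    std::FILE* f = std::fopen("log.txt", "w");  // fopen()  -> fclose()

    if (f) std::fclose(f);
    std::free(c);
    delete[] b;
    delete a;
    return 0;
}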