Using C:
char ptr[n];
free(ptr);
In my opinion: when "char ptr[n];" is used, memory is allocated and ptr refers to it, so free(ptr) should work.
Yet the program failed. Why? (with n == 5, for example)
Any deep analysis?
Because you called free on a variable not allocated with malloc.
This causes undefined behavior. Luckily for you it crashes, so you can detect it; otherwise it could crash at the most awkward of times.
You call free to deallocate memory of heap-allocated variables. What you have is an array in local storage (assuming it is inside a function), and it is deallocated automatically when the scope ({ ... }) in which it was created ends.
Because what you're doing is undefined behavior. (It means it can literally do anything, including crashing, running seemingly fine, making daemons fly out of your nose, etc.) You can only free() a pointer that you acquired from malloc().
Automatic arrays do not have to be free()'d. They are deallocated when their scope ends.
Only free an object that has been allocated by malloc. Freeing an object that has not been allocated by malloc is undefined behavior.
Because char ptr[n]; is a way to declare an array in STACK memory, and it has the scope of the enclosing block, which means it is destroyed when the block finishes.
But when you use malloc(size), the returned pointer points to a piece of memory in HEAP memory, and it lives for whatever scope the programmer gives it: when you want to destroy it you must call free(ptr), or the OS will reclaim it only after the program finishes.
So, when you call free on a pointer that points into STACK memory, it causes undefined behavior and the program crashes, because free operates only on HEAP memory.
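To make the contrast concrete, here is a minimal sketch of the pattern free() is actually designed for; nothing here is specific to the question beyond the buffer size of 5:

#include <cstdlib>

void demo() {
    char* heap_ptr = static_cast<char*>(std::malloc(5));  // HEAP: free() required
    if (heap_ptr) {
        // ... use heap_ptr ...
        std::free(heap_ptr);  // valid: the pointer came from malloc
    }

    char stack_arr[5];        // STACK: released automatically at end of scope
    (void)stack_arr;
    // std::free(stack_arr);  // undefined behavior: not a malloc'd pointer
}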
This looks similar to: Can a call to free() in C ever fail? (SO)
The behavior is undefined as per the standard. In some cases your code will not crash so soon; it may corrupt the heap and crash very late during execution, which makes debugging rather difficult.
In a way it depends on the design of the malloc/free implementation.
One way which I know of is:
with each malloc, an extra block of memory is attached to the block which is returned by malloc(). This block contains some housekeeping data which is needed by a later call to free(). In your case this data is missing, since the memory was not allocated by malloc(), so free() is trying to use the data preceding your array without knowing that it's junk.
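A minimal sketch of such a header-based scheme (the Header layout and the my_malloc/my_free names are hypothetical, purely to illustrate the idea; real allocators are far more elaborate):

#include <cstddef>
#include <cstdlib>

// Hypothetical housekeeping header stored just in front of each block.
struct Header {
    std::size_t size;  // size of the user's block, read back later by my_free
};

void* my_malloc(std::size_t n) {
    Header* h = static_cast<Header*>(std::malloc(sizeof(Header) + n));
    if (h == nullptr) return nullptr;
    h->size = n;
    return h + 1;  // the caller gets the memory *after* the header
}

void my_free(void* p) {
    if (p == nullptr) return;
    Header* h = static_cast<Header*>(p) - 1;  // step back to the header
    // If p did not come from my_malloc (e.g. a stack array), h now points
    // at unrelated bytes and everything from here on is garbage.
    std::free(h);
}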
Related
int* integer = new int{10};   // allocated with new
cout << *integer << endl;
free(integer);                // deallocated with free(), not delete
cout << *integer << endl;     // reads the pointer after deallocation
Output:
10
0
According to the code I've tested on my machine, it looks as though the allocated memory is being successfully deallocated by using free().
But there are articles on the internet that discourage mixing allocation with the new operator and deallocation with free().
Are there any possibilities of memory leakage? If yes, can anyone paste example code where this type of leakage happens?
Will it cause a memory leak to use free() to deallocate a variable that was heap-allocated with the new operator?
It will cause undefined behaviour. Technically, memory leak is included in the set of all behaviours and thus it can be part of any outcome, but the memory leak is really the least of your worries in this case.
Let me be clear: Don't ever call free on anything that wasn't returned by malloc (or the other related C functions).
Accessing the object after its lifetime and storage duration has ended also has undefined behaviour. Don't ever do that either.
Can anyone paste an example code where this type of leakage happens
Technically, you didn't delete what you new'd, so the program that you show is an example of a program with a memory leak. Interestingly, having undefined behaviour technically means that the memory leak doesn't necessarily happen. Regardless, you didn't demonstrate the lack of a memory leak in your program, so there is no reason to assume that there isn't one.
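For a case where the missed destructor visibly loses a resource, consider a type that owns a buffer (a minimal sketch; the Holder name is made up for illustration). free() at best recycles the object's own bytes, but it never runs ~Holder, so the owned buffer is leaked:

#include <cstdlib>

struct Holder {
    char* buffer;
    Holder() : buffer(new char[1024]) {}
    ~Holder() { delete[] buffer; }  // only a proper delete would run this
};

int main() {
    Holder* h = new Holder;
    std::free(h);  // undefined behaviour; in particular ~Holder never runs,
                   // so the 1024-byte buffer is lost even if h's bytes are reclaimed
}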
The value pointed to by the pointer is set to 0. Why?
Because the behaviour of the program is undefined.
And my code compiled fine
That's not proof of the absence of a memory leak; compiling says nothing about runtime behaviour.
While debugging a segmentation fault in a real project, where the crash happens after a long run under random testing and is not easy to reproduce, the crash point shows a crash in a function which is written something like:
void deallocateObject(objectType* ptr)
{
    ASSERT(ptr);
    if (!ptr)
        return;

    if (ptr->customDeallocator)
        ptr->customDeallocator->deallocate();
    else
        free(ptr);
}
There are various kinds of allocators and deallocators being used in the project.
Just to verify that the segmentation fault is not caused by the memory not being zeroed after deallocation, I added a call to memset after the last statement in this function:
memset(ptr, 0, sizeof(objectType));
But after this change I started getting a crash every time, with a message saying the heap is corrupted.
So my question is how and in what scenario a call to memset() can cause heap corruption.
So my question is how and in what scenario a call to memset() can cause heap corruption.
Any time you use it to modify memory that might be being used to track the internal structure of the heap. For example, memory that you just told the heap allocator that you were finished with and that it was now free to use for any purpose such as, for example, tracking the internal structure of the heap.
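If the intent is to scrub the object, the write has to happen before ownership is handed back to the heap, not after. A minimal sketch of the reordered function, reusing ASSERT, objectType and customDeallocator from the snippet above:

void deallocateObject(objectType* ptr)
{
    ASSERT(ptr);
    if (!ptr)
        return;

    if (ptr->customDeallocator) {
        ptr->customDeallocator->deallocate();
    } else {
        memset(ptr, 0, sizeof(objectType));  // scrub while we still own the memory
        free(ptr);                           // only now hand it back to the heap
    }
}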
To just verify that the segmentation fault is not because of allocated memory not being set to NULL after deallocation
Well, that's not how you debug a problem related to dynamically allocated memory. Setting a pointer to NULL after free() does not make further use of it safe: for the purpose of dereferencing, a pointer containing NULL is just as invalid as a pointer that has already been passed to free().
So, whether an already-free()'d pointer is (manually) set to NULL or not, any further usage (dereference) of that memory causes undefined behavior. You may or may not get a segmentation fault; a crash is just one of many possible side effects of having UB.
You need to use a memory debugger, like Valgrind, to catch and resolve the issue.
FWIW, any attempt at using invalid memory (including NULL, yes) invokes UB; avoid that.
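For example (a deliberately broken toy program; the compiler and Valgrind invocation in the comments are just one common way to run it):

#include <cstdlib>

int main() {
    int* p = static_cast<int*>(std::malloc(sizeof(int)));
    *p = 42;
    std::free(p);
    return *p;  // use after free: Valgrind reports "Invalid read of size 4"
}

// Build and run, e.g.:
//   g++ -g bug.cpp -o bug
//   valgrind --leak-check=full ./bug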
Functions like wcsdup implicitly call malloc to allocate memory for the destination buffer. I was wondering: as the memory allocation is not very explicit, does it seem logical to explicitly free that storage?
This is more of a design dilemma, and the reasons for and against are as follows.
Should be freed because:
Not freeing it would cause a memory leak.
It is well documented that wcsdup/_wcsdup calls malloc to allocate memory, even when called from a C++ program.
Should not be freed because:
Memory accumulated by wcsdup would eventually be freed when the program exits. We always live with some memory leaks throughout the program lifetime (unless we are heavily calling wcsdup for large buffer sizes).
It can be confusing, as the free is not preceded by an explicit malloc.
As wcsdup is not part of the C standard but is POSIX, a Microsoft implementation might not use malloc for allocating the destination buffer.
What should be the approach?
From MSDN:
it is good practice always to release this memory by calling the free routine on the pointer returned
From the page you linked:
The returned pointer can be passed to free()
It seems fairly explicit: if you care about memory leaks, then you should free the memory by using free.
To be honest, I'm concerned about the cavalier attitude hinted at with this:
We always live with some memory leaks throughout the program lifetime
There are very rarely good reasons to leak memory. Even if the code you write today is a one-off, and it's not a long-lived process, can you be sure that someone's not going to copy-and-paste it into some other program?
Yes, you should always free heap-allocated memory when you're done using it and know that it is safe to do so. The documentation you link to even states:
For functions that allocate memory as if by malloc(), the application should release such memory when it is no longer required by a call to free(). For wcsdup(), this is the return value.
If you are concerned about the free being potentially confusing, leave a comment explaining it. To be honest, though, that seems superfluous; it's pretty obvious when a pointer is explicitly freed that it's "owned" by the code freeing it, and anyone who does become confused can easily look up the wcsdup documentation.
Also, you should really never have memory leaks in your program. In practice some programs do have memory leaks, but that doesn't mean it's okay for them to exist. Note also that a block of memory allocated for the entire lifespan of the program is not leaked memory if you are still using it for that entire duration.
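A minimal usage sketch of that pairing (wcsdup is the POSIX spelling, declared in <wchar.h>; on Windows the equivalent is _wcsdup, and both are documented to allocate as if by malloc):

#include <stdlib.h>
#include <wchar.h>

int main(void) {
    wchar_t* copy = wcsdup(L"hello");  // duplicate allocated as if by malloc
    if (copy == NULL)
        return 1;
    wprintf(L"%ls\n", copy);
    free(copy);  // wcsdup allocated this buffer for us, so we release it
    return 0;
}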
From your own link:
For functions that allocate memory as if by malloc(), the application should release such memory when it is no longer required by a call to free().
From MSDN:
The _strdup function calls malloc to allocate storage space for a copy of strSource and then copies strSource to the allocated space.
and strdup is deprecated as of MSVC 2005; calling it calls _strdup, so it is using malloc.
I am stuck on a question in my C++ book, which reads:
"Why does the use of new require you to also call delete?"
Maybe you have an answer for that?
Because that is the way C++ is designed, and that is the intended behavior.
The intention was to provide a memory allocation which you demand and own until you relinquish it explicitly.
new gives you a dynamic memory allocation (on the heap) which will continue to exist, and which you own, until you explicitly deallocate it by calling delete.
Failing to call delete on a new'd buffer will lead to problems, usually in the form of memory leaks.[1]
[1] This was discussed here.
When you do a new, the allocator obtains memory from the OS, and the pointer you assign refers to it. After your usage is completed you may not require it anymore, but the memory is still marked as "being used".
Now, when the pointer is declared in the scope of a function or any other block (of { }), the pointer itself is destroyed when execution of the block is over. In such cases the memory that was allocated using new remains marked "being used" and is never handed to any other new call or variable. This leaves an orphaned block of memory in RAM that will never be used, because its pointer was destroyed, yet it still occupies a memory block.
This is called a memory leak. Enough such blocks may make your application unstable as well.
You use delete to release such memory blocks, so that the memory can be reused for other requests.
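A minimal sketch of exactly that scenario (the function names are made up for illustration):

void leaky() {
    int* p = new int(42);  // heap allocation, owned by this function
    // ... use *p ...
}                          // p itself is destroyed here, but the int is never
                           // deleted: an orphaned block, i.e. a memory leak

void correct() {
    int* p = new int(42);
    // ... use *p ...
    delete p;              // ownership relinquished; no leak
}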
There is no garbage collector in C++, and therefore you are responsible for deallocating the allocated memory. Anyway, the operating system "knows" what memory your program allocated, so when your program exits, the operating system reclaims it. But if you have a long-running C++ program and never call delete, no one will help you get rid of your garbage.
Calling new allocates memory for the object and also arranges for the constructor of that object to be executed.
It might look as though you could release the memory by calling free(), but that is undefined behaviour; you must use delete to free memory allocated by new, since this also causes the object's destructor to be executed.
First of all, using delete for anything allocated with new[] is undefined behaviour according to C++ standard.
In Visual C++ 7 such a pairing can lead to one of two consequences.
If the type being new[]'ed has a trivial constructor and destructor, VC++ simply uses new instead of new[], and using delete for that block works fine: new just calls "allocate memory", delete just calls "free memory".
If the type being new[]'ed has a non-trivial constructor or destructor, the above trick can't be done: VC++7 has to invoke exactly the right number of destructors. So it prepends the array with a size_t storing the number of elements. Now the address returned by new[] points to the first element, not to the beginning of the block. So if delete is used, it calls the destructor only for the first element and then calls "free memory" with an address different from the one returned by "allocate memory", and this leads to some error indication inside HeapFree(), which I suspect refers to heap corruption.
Yet here and there one can read false statements that using delete after new[] leads to a memory leak. I suspect that the risk of heap corruption matters far more than the fact that the destructor is called only for the first element, and that the destructors not called may have failed to free heap-allocated sub-objects.
How could using delete after new[] possibly lead only to a memory leak on some C++ implementation?
Suppose I'm a C++ compiler, and I implement my memory management like this: I prepend every block of reserved memory with the size of the memory, in bytes. Something like this;
| size | data ... |
       ^
       pointer returned by new and new[]
Note that, in terms of memory allocation, there is no difference between new and new[]: both just allocate a block of memory of a certain size.
Now how will delete[] know the size of the array, in order to call the right number of destructors? Simply divide the size of the memory block by sizeof(T), where T is the type of elements of the array.
Now suppose I implement delete as simply one call to the destructor, followed by the freeing of size bytes. Then the destructors of the subsequent elements will never be called, which leaks the resources allocated by those subsequent elements. Yet, because I do free size bytes (not sizeof(T) bytes), no heap corruption occurs.
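Spelled out in code, a delete[] under this scheme might look like the following sketch; block_size and release are made-up placeholders for the allocator internals described above:

#include <cstddef>

// Hypothetical allocator internals, declared here only for illustration:
std::size_t block_size(void* p);  // reads the size stored in front of the block
void release(void* p);            // hands the whole block back to the heap

template <typename T>
void illustrative_delete_array(T* p) {
    std::size_t count = block_size(p) / sizeof(T);  // recover the element count
    while (count > 0)
        p[--count].~T();  // run every destructor, last element first
    release(p);           // free the block in one go
}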
The fairy tale about mixing new[] and delete allegedly causing a memory leak is just that: a fairy tale. It has absolutely no footing in reality. I don't know where it came from, but by now it acquired a life of its own and survives like a virus, propagating by the word of mouth from one beginner to another.
The most likely rationale behind this "memory leak" nonsense is that from the innocently naive point of view the difference between delete and delete[] is that delete is used to destroy just one object, while delete[] destroys an array of objects ("many" objects). A naive conclusion that is usually derived from this is that the first element of the array will be destroyed by delete, while the rest will persist, thus creating the alleged "memory leak". Of course, any programmer with at least basic understanding of typical heap implementations would immediately understand that the most likely consequence of that is heap corruption, not a "memory leak".
Another popular explanation for the naive "memory leak" theory is that since the wrong number of destructors gets called, the secondary memory owned by the objects in the array does not get deallocated. This might be true, but it is obviously a very forced explanation, which bears little relevance in the face of much more serious problem with heap corruption.
In short, mixing different allocation functions is one of those errors that lead to solid, unpredictable and very practical undefined behavior. Any attempts to impose concrete limits on the manifestations of this undefined behavior are just a waste of time and a sure sign of a lack of basic understanding.
Needless to add, new/delete and new[]/delete[] are in fact two independent memory management mechanisms, which are independently customizable. Once they get customized (by replacing raw memory management functions) there's absolutely no way to even begin to predict what might happen if they get mixed.
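To illustrate that independence, here is a minimal sketch of a class that customizes only its array forms (the Tracked name and the logging are made up to make the pairing visible; a real replacement would manage a pool or similar):

#include <cstdio>
#include <cstdlib>
#include <new>

struct Tracked {
    static void* operator new[](std::size_t n) {
        std::printf("Tracked::operator new[](%zu)\n", n);
        void* p = std::malloc(n);
        if (!p) throw std::bad_alloc{};
        return p;
    }
    static void operator delete[](void* p) {
        std::printf("Tracked::operator delete[]\n");
        std::free(p);
    }
};

int main() {
    Tracked* a = new Tracked[4];  // goes through Tracked::operator new[]
    delete[] a;                   // matched: goes through Tracked::operator delete[]
    // 'delete a;' would bypass these functions entirely and use the
    // single-object machinery instead: undefined behavior.
}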
It seems that your question is really "why heap corruption doesn't happen?". The answer to that one is "because the heap manager keeps track of allocated block sizes". Let's go back to C for a minute: if you want to allocate a single int in C you would do int* p = malloc(sizeof(int)), if you want to allocate array of size n you can either write int* p = malloc(n*sizeof(int)) or int* p = calloc(n, sizeof(int)). But in any case you'll free it by free(p), no matter how you allocated it. You never pass size to free(), free() just "knows" how much to free, because the size of a malloc()-ed block is saved somewhere "in front" of the block. Back to C++, new/delete and new[]/delete[] are usually implemented in terms of malloc (although they don't have to be, you shouldn't rely on that). This is why new[]/delete combination doesn't corrupt the heap - delete will free the right amount of memory, but, as explained by everyone before me, you can get leaks by not calling the right number of destructors.
That said, reasoning about undefined behavior in C++ is always a pointless exercise. Why does it matter whether the new[]/delete combination happens to work, "only" leaks, or causes heap corruption? You shouldn't code like that, period! And, in practice, I would avoid manual memory management whenever possible: the STL and Boost are there for a reason.
If the non-trivial destructors that go uncalled for all but the first element in the array were supposed to free some memory, you get a memory leak, as those objects are not cleaned up properly.
It will lead to a leak in ALL implementations of C++ in any case where the destructors free memory, because those destructors never get called.
In some cases it can cause much worse errors.
A memory leak might happen if operator new is overridden but operator new[] is not. The same goes for operator delete / operator delete[].
Apart from resulting in undefined behavior, the most straightforward cause of leaks lies in the implementation not calling the destructors for any but the first object in the array. This will obviously result in leaks if the objects have allocated resources.
This is the simplest possible class I could think of resulting in this behaviour:
struct A {
    char* ch;
    A() : ch(new char) {}
    ~A() { delete ch; }
};

A* as = new A[10];  // ten times the A::ch pointer is allocated
delete as;          // only one of the A::ch pointers is freed
PS: note that destructors fail to get called in lots of other programming mistakes, too: non-virtual base class destructors, false reliance on smart pointers, ...
Late for an answer, but...
If your delete mechanism is simply to call the destructor and put the freed pointer, together with the size implied by sizeof, onto a free stack, then calling delete on a chunk of memory allocated with new[] will result in memory being lost -- but not corruption.
More sophisticated malloc structures could corrupt on, or detect, this behaviour.
Why can't the answer be that it causes both?
Obviously memory is leaked whether heap corruption occurs or not.
Or rather, since I can re-implement new and delete... can't it cause anything at all? Technically I can make new and delete perform new[] and delete[].
HENCE: Undefined Behavior.
I was answering a question which was marked as a duplicate, so I'll just copy it here in case it matters. The way memory allocation works was explained well before me; I'll just explain the cause and effects.
Just a little thing right off Google: http://en.cppreference.com/w/cpp/memory/new/operator_delete
Anyhow, delete is meant for a single object: it frees the instance behind the pointer and leaves.
delete[] is used to deallocate arrays. That means it doesn't just free one element; it releases the whole memory block of that array.
That's all cool in theory, but you tell me your application works. You are probably wondering... why?
The answer is that C++ does not catch memory leaks for you. If you use delete without the brackets, it deletes just the array as if it were a single object, a process which can cause a memory leak (and is undefined behaviour).
Cool story, a memory leak. Why should I care?
A memory leak happens when allocated memory doesn't get deleted. That memory then takes up space unnecessarily, which makes you lose useful memory for pretty much no reason. That's bad programming, and you should probably fix it in your systems.