My code:
#include<iostream>
#include<cstdlib>
using namespace std;
int main()
{
int * p;
p = new int[5];
p[0] = 34;
p[1] = 35;
p[2] = 36;
p[3] = 44;
p[4] = 32;
cout << "Before deletion: " << endl;
for (int i = 0; i < 5; i++)
{
cout << p[i] << endl;
}
delete[] p;
cout << "After deletion: " << endl;
for (int i = 0; i < 5; i++)
{
cout << p[i] << endl;
}
}
If delete[] is supposed to delete all the memory blocks created by new, then why do only the first two elements get deleted here?
This is the output:
Before deletion:
34
35
36
44
32
After deletion:
0
0
36
44
32
One possible reason I could think of is that it removes the first two, which somehow makes the memory management think that the whole block is free to be reallocated again when needed. Though this is a complete guess on my part and I don't know the real reason.
There's no need to clear (as in change the value of) a deleted element; it only means that the allocator is free to use the space it occupied for whatever it wants, and that accessing the values results in undefined behaviour.
You have experienced a really nasty result of this, which is that some values remain unchanged, while successive runs of the same program might (and you should assume they will) produce different results.
It is not just the first two items that are deleted, but all of them. That memory area can now be reused by a further new operation, but there is no guarantee about what is left in it.
It is entirely possible that you would see different results on different machines, on subsequent runs on the same machine, with different compiler versions and so on. Just do not rely on it.
However, you are accessing a dangling pointer afterwards, which is undefined behavior (UB for short).
Just in case: if you call delete without the square brackets, the memory may be de-allocated but destructors may not be called; that is also UB, even though you have no custom types here, just ints.
In your case, you happen to observe exactly the same behavior with delete as with delete[]; that is why you see the same results in both cases for your current runs, but then again, even that is not guaranteed.
Ideally, you would also set the pointer to null after deleting, but really, you should consider using a smart pointer, like std::unique_ptr.
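For illustration, here is a minimal sketch of the question's program rewritten with a smart pointer (just one option; a std::vector<int> would work equally well). It assumes C++11 for the braced array initialiser:
#include <iostream>
#include <memory>
int main()
{
    // The array is owned by the unique_ptr and delete[]'d automatically at scope exit,
    // so there is no window in which the freed memory can be read.
    std::unique_ptr<int[]> p(new int[5]{34, 35, 36, 44, 32});
    for (int i = 0; i < 5; i++)
        std::cout << p[i] << std::endl;
}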
Q: If delete[] is supposed to delete all the memory blocks created by the new then why does only the first two elements get deleted here?
A: The conclusion is incorrect. Try the following:
#include <iostream>
struct A
{
A() : a(0) {}
~A() { std::cout << "Came to ~A(). a = " << a << std::endl;}
int a;
};
int main()
{
A* ap = new A[5];
for (int i = 0; i < 5; ++i )
{
ap[i].a = i;
}
delete [] ap;
}
I get the following output:
Came to ~A(). a = 4
Came to ~A(). a = 3
Came to ~A(). a = 2
Came to ~A(). a = 1
Came to ~A(). a = 0
A pointer is not magic. It is effectively an unsigned integer of machine word size which refers to a memory address. Changing the value of a memory location from whatever it is to 0 is not deleting anything. When you call delete or delete [], all you've done is tell the memory allocator that you are done with that memory and it is free to do (or not do) whatever it wants with it. After you call delete, what might be in that memory is undefined; you have no right to expect it to contain anything in particular, and you have no right to access it. Accessing memory that has been freed is like entering an apartment that you moved out of but still have a key to; you might get away with it or you might get an access violation.
For types with destructors, the memory allocator will call the destructor for each object in the block of memory before reclaiming the memory block. ints have no destructor, so there's nothing extra for it to do.
delete[] p releases the memory held by p, but doesn't (necessarily) overwrite it. The old data may remain there until something else reuses that memory.
My question:
int* x = new int;
cout << x<<"\n";
int* p;
cout << p <<"\n";
p = x;
delete p;
cout << p <<"\n";
I wrote this purely by myself to understand pointers and to understand (and also get lost in) dynamic new and delete.
Xcode compiles the program and prints the following results:
0x100104250
0x0
0x100104250
I know I can only call delete on dynamically allocated memory.
However, I called delete on p in the above program and it compiles.
Could anyone explain this to me?
Why could I delete p?
Moreover, I found that if the program is changed to the following:
int* x = new int;
int* p;
cout << p <<"\n";
delete p;
cout << p <<"\n";
then Xcode again compiles and gives me:
0x0
0x0
Program ended with exit code: 0
and now I am completely lost :(.
Could anyone please explain this to me?
Why could I delete p, since it has nothing to do with x?
Since Xcode compiles both successfully, I assumed the above two programs are correct as far as the computer is concerned.
However, this again seems to clash with the statement "only call delete on dynamically allocated memory".
Or probably I didn't fully understand what a pointer is and what dynamically allocated memory is.
I found this post when I searched online.
But I don't think it is like my case.
Please help me out.
I would like to ask one more question. The code here is about a binary search tree.
From line 28 to 32, it deals with deleting a node with one child.
I put this part of code here, in case the weblink does not work.
else if(root->left == NULL) {
struct Node *temp = root;
root = root->right;
delete temp;
}
It is this code that led me to ask the above question regarding pointers.
Following the answer given by this post.
Is it correct to understand the code in the following way?
I cannot first link the parent node of root to the right child of root
and then delete the root node, as the subtree beneath the root node will also be deleted.
So I must create a temp pointer, pointing to the memory slot that is pointed to by root.
Then I link the parent node of root to the right child of root,
and now I can safely delete the memory slot pointed to by root (i.e. temp, as they both point to the same memory).
In this way, I release the memory and also keep the link between parent and children.
In addition, temp is still there and still points to "that" memory slot.
Should I set it to NULL after deletion?
Thank you all again in advance.
Yaofeng
Yes, you can only call delete on memory which was allocated via new. Notice that it's the address of the memory (the value of the pointer) which matters, and not the variable storing the pointer. So, your first code:
int* x = new int; //(A)
cout << x<<"\n";
int* p;
cout << p <<"\n";
p = x; //(B)
delete p; //(C)
cout << p <<"\n"; //(D)
Line (A) allocates memory dynamically at some address (0x100104250 in your example output) and stores this address in variable x. The memory is allocated via new, which means that delete must eventually be called on address 0x100104250.
Line (B) assigns the address 0x100104250 (the value of pointer x) to the pointer p. Line (C) then calls delete p, which means "deallocate the memory pointed to by p." This means it calls delete on address 0x100104250, and all is well: the memory at address 0x100104250 was allocated via new, and so is correctly deallocated via delete. The fact that you used a different variable for storing the value plays no role.
Line (D) just prints out the value of the pointer. delete acts on the memory to which a pointer points, not on the pointer itself. The value of the pointer stays the same - it still points to the same memory, that memory is just no longer allocated.
The second example is different - you're calling delete p when p was not initialised to anything. It points to a random piece of memory, so calling delete on it is of course illegal (technically, it has "Undefined Behaviour", and will most likely crash).
It seems that in your particular example, you're running a debug build and your compiler is "helpfully" initialising local variables to 0 if you don't initialise them yourself. So it actually causes your second example to not crash, since calling delete on a null pointer is valid (and does nothing). But the program actually has a bug, since local variables are normally not initialised implicitly.
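A small sketch of the difference (the variable names are just for illustration):
int* p = nullptr;   // explicitly null: "delete p;" is well defined and does nothing
delete p;           // fine
int* q;             // uninitialised: holds an indeterminate value
// delete q;        // undefined behaviour -- q was never set to null or to memory from new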
p = x;
This makes p contain the same value as x (p points to the same object as x). So basically when you say
delete p;
it does delete on the address referred to by p, which is the same as that of x. Hence this is perfectly valid, as the object at that address was allocated using new.
Second case:
Coincidentally, your pointer p is set to a NULL pointer by the compiler (you should not depend on this), so deleting that pointer is safe. Had it not been a NULL pointer, you might well have seen a crash.
Ok, let's take a look at the documentation for the "delete" operator. According to http://en.cppreference.com/w/cpp/memory/new/operator_delete:
Called by delete-expressions to deallocate storage previously allocated for a single object. The behavior of the standard library implementation of this function is undefined unless ptr is a null pointer or is a pointer previously obtained from the standard library implementation of operator new(size_t) or operator new(size_t, std::nothrow_t).
So what happens is the following: you are calling the new operator, which allocates sizeof(int) bytes for an int variable in memory. That part in memory is referenced by your pointer x. Then you create another pointer, p, which points to the same memory address. When you call delete, the memory is released. Both p and x still point to the same memory address, except that the value at that location is now garbage. To make this easier to understand, I modified your code as follows:
#include <iostream>
using namespace std;
int main() {
int * x = new int; // Allocate sizeof(int) bytes, which x references to
*x = 8; // Store an 8 at the newly created storage
cout << x << " " << *x << "\n"; // Shows the memory address x points to and "8". (e.g. 0x21f6010 8)
int * p; // Create a new pointer
p = x; // p points to the same memory location as x
cout << p << " " << *p << "\n"; // Shows the memory address p points to (the same as x) and "8".
delete p; // Release the allocated memory
cout << x << " " << p << " "
<< *x << " " << *p << "\n"; // p now points to the same memory address as before, except that it now contains garbage (e.g. 0x21f6010 0)
return 0;
}
After running, I got the following results:
0x215c010 8
0x215c010 8
0x215c010 0x215c010 0 0
So remember, by using delete you release the memory, but the pointer still points to the same address. That's why it's usually a safe practice to also set the pointer to NULL afterwards. Hope this makes a bit more sense now :-)
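In code, that practice is just the following (a sketch using the names from the example above):
delete p;        // release the memory that both p and x point to
p = nullptr;     // a later "delete p;" or "if (p)" check is now harmless
x = nullptr;     // x pointed to the same block, so it is dangling too and is best cleared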
Before the answers, you need to understand the following points.
1. Once you delete a pointer, the memory it pointed to need not be set to 0. It can contain anything.
2. You can delete a pointer whose value is 0 or NULL without any harm.
3. Undefined behavior means anything could happen: the program might crash, it might work properly as if nothing happened, it might produce some random results, etc.
However, I called delete on p in the above program and it compiles.
Could anyone explain this to me? Why could I delete p?
That is because you assign to p the address of the memory allocated by new through x (p = x;). x (or p) therefore refers to a valid memory location and can be deleted.
Now x is called a dangling pointer, because it points to memory that is no longer valid. Dereferencing x after the delete is undefined behavior.
Could anyone please explain this to me? Why could I delete p, since it has nothing to do with x?
This is because your p happens to be 0, and deleting a null pointer is harmless, so you are getting away with it. However, it is not guaranteed that an uninitialized pointer will have a value of 0 or NULL. It seems to work fine at this point, but you are wading through undefined behavior here.
delete (or free in C) tells the allocator to release the memory at the stored address; it does not necessarily clear the value stored there. Once you are done with a pointer, reassign it to NULL after deleting, or simply let it go out of scope.
Be careful with pointers, though: if a pointer goes out of scope without the allocation being deleted, you create a memory leak, because that block of memory is no longer usable by the program and its address has been lost once you are in a different scope.
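A tiny sketch of that kind of leak (the function name leaky is made up):
void leaky()
{
    int* p = new int(42);   // heap allocation
    // ... use *p ...
}   // p goes out of scope here; the int is never deleted and its address is lost -> memory leak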
bool example1()
{
long a;
a = 0;
cout << a;
a = 1;
cout << a;
a = 2;
cout << a;
//and again...again until
a = 1000000;
cout << a+1;
return true;
}
bool example2()
{
long* a = new long; // sorry for the mistake
*a = 0;
cout << *a;
*a = 1;
cout << *a;
*a = 2;
cout << *a;
//and again...again until
*a = 1000000;
cout << *a + 1;
return true;
}
Note that I do not delete a in example2(). Just a newbie's questions:
1. While the two functions are executing, which one uses more memory?
2. After the functions return, which one makes the whole program use more memory?
Thanks for your help!
UPDATE: just replace long* a; with long* a = new long;
UPDATE 2: to avoid the case where we are not doing anything with a, I cout the value each time.
Original answer
It depends and there will be no difference, at the same time.
The first program is going to consume sizeof(long) bytes on the stack, and the second is going to consume sizeof(long*). Typically long* will be at least as big as a long, so you could say that the second program might use more memory (depends on the compiler and architecture).
On the other hand, stack memory is allocated with OS memory page granularity (4KB would be a good estimate), so both programs are almost guaranteed to use the same number of memory pages for the stack. In this sense, from the viewpoint of someone observing the system, memory usage is going to be identical.
But it gets better: the compiler is free to decide (depending on settings) that you are not really doing anything with these local variables, so it might decide to simply not allocate any memory at all in both cases.
And finally you have to answer the "what does the pointer point to" question (as others have said, the way the program is currently written it will almost surely crash due to accessing invalid memory when it runs).
Assuming that it does not (let's say the pointer is initialized to a valid memory address), would you count that memory as being "used"?
Update (long* a = new long edit):
Now we know that the pointer will be valid, and heap memory will be allocated for a long (but not released!). Stack allocation is the same as before, but now example2 will also use at least sizeof(long) bytes on the heap (in all likelihood it will use even more, but you can't tell how much because that depends on the heap allocator in use, which in turn depends on compiler settings, etc.).
Now from the viewpoint of someone observing the system, it is still unlikely that the two programs will exhibit different memory footprints (because the heap allocator will most likely satisfy the request for the new long in example2 from memory in a page that it has already received from the OS), but there will certainly be less free memory available in the address space of the process. So in this sense, example2 would use more memory. How much more? Depends on the overhead of the allocation which is unknown as discussed previously.
Finally, since example2 does not release the heap memory before it exits (i.e. there is a memory leak), it will continue using heap memory even after it returns while example1 will not.
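If example2 were required not to leak, a minimal sketch of a fix (the name example2_fixed is made up) would be either an explicit delete before returning or a smart pointer so the release happens automatically:
#include <iostream>
#include <memory>
bool example2_fixed()
{
    std::unique_ptr<long> a(new long(0));   // heap allocation owned by the smart pointer
    *a = 1000000;
    std::cout << *a + 1;
    return true;
}   // no leak: unique_ptr's destructor calls delete here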
There is only one way to know, which is by measuring. Since you never actually use any of the values you assign, a compiler could, under the "as-if rule", simply optimize both functions down to:
bool example1()
{
return true;
}
bool example2()
{
return true;
}
That would be a perfectly valid interpretation of your code under the rules of C++. It's up to you to compile and measure it to see what actually happens.
Sigh, an edit to the question made a difference to the above. The main point still stands: you can't know unless you measure it. Now both of the functions can be optimized to:
bool example1()
{
cout << 0;
cout << 1;
cout << 2;
//and again...again until
cout << 1000001;
return true;
}
bool example2()
{
cout << 0;
cout << 1;
cout << 2;
//and again...again until
cout << 1000001;
return true;
}
example2() never allocates memory for the value referenced by pointer a. If it did, it would take slightly more memory because it would require the space required for a long as well as space for the pointer to it.
Also, no matter how many times you assign a value to a, no more memory is used.
Example 2 has the problem of not allocating memory for the pointer to point to. Pointer a initially has an unknown value, which makes it point somewhere arbitrary in memory; assigning values through this pointer corrupts the contents of that location.
Both examples use the same amount of memory (4 bytes, assuming a 4-byte long).
I have the following program:
//simple array memory test.
#include <iostream>
using namespace std;
void someFunc(float*, int, int);
int main() {
int convert = 2;
float *arr = new float[17];
for(int i = 0; i < 17; i++) {
arr[i] = 1.0;
}
someFunc(arr, 17, convert);
for(int i = 0; i < 17; i++) {
cout << arr[i] << endl;
}
return 0;
}
void someFunc(float *arr, int num, int flag) {
if(flag) {
delete []arr;
}
}
When I put the following into gdb and insert a break point at float *arr ..., I step through the program and observe the following:
Printing the array arr after it has been initialized gives me 1 17 times.
Inside someFunc too, I print arr before delete to get the same print as above.
Upon going back into main, when I print arr, I get the first element as 0 followed by sixteen 1s.
My questions:
1. Once the array has been deleted in someFunc, how am I still able to access arr without a segfault in someFunc or main?
2. The code snippet above is a test version of another piece of code that runs in a bigger program. I observe the same behaviour in both places (the first element is 0 but all others are the same). If this is some unexplained memory error, how am I observing the same thing in different areas?
3. Some explanations to fill the gaps in my understanding are most welcome.
A segfault occurs when you access a memory address that isn't mapped into the process. Calling delete [] releases memory back to the memory allocator, but usually not to the OS.
The contents of the memory after calling delete [] are an implementation detail that varies across compilers, libraries, OSes and especially debug-vs-release builds. Debug memory allocators, for instance, will often fill the memory with some tell-tale signature like 0xdeadbeef.
Dereferencing a pointer after it has been deleted is undefined behavior, which means that anything can happen.
Once the array has been deleted, any access to it is undefined behavior. There's no guarantee that you'll get a segment violation; in fact, typically you won't. But there's no guarantee of what you will get; in larger programs, modifying the contents of the array could easily result in memory corruption elsewhere.
delete gives the memory back to the memory manager, but does not necessarily clear the contents of that memory (it should not, as that would cause overhead for nothing). So the old values are often retained, and in your case you are accessing the same memory, so it prints whatever happens to be left there. Strictly speaking, though, that access is still undefined behaviour; what you actually observe depends on the memory manager.
Re 1: You can't. If you want to access arr later, don't delete it.
C++ doesn't check array boundaries. You will only get a segfault if you access memory that your process is not allowed to touch.
Hi,
I know that a vector allocates a contiguous block of memory when it needs to push_back some items, and if a new item does not fit, it will allocate new memory, copy the old values into it and delete the old memory.
If that is the case, how does the following code work?
#include<iostream>
#include<vector>
using namespace std;
class Test
{
public :
int val;
Test()
{
val=0;
}
};
int main()
{
vector<Test>vec;
Test t[10];
for (int i=0;i<5;i++)
vec.push_back(t[i]);
cout<<vec.capacity()<<endl; //printing 8
Test* obj=&vec[2];
obj->val=2;
cout<<obj<<endl;
for (int i=0;i<5;i++) //Adding 5 elements more
vec.push_back(t[i]);
Test* obk=&vec[2];
cout<<obk->val<<endl;
cout<<obj->val<<endl;
cout<<obj<<endl;
cout<<obk<<endl;
}
Here, obj takes the address of vec[2], which on my machine comes out as 0x8bcb048. Then I insert 5 more items, so the vector will allocate new memory. Now if I take obk from vec[2], it comes out as a different address, 0x8bcb070. But if I try to access the element through obj, it doesn't give any problem. Any reason?
But if i try to access it with the help of obj its not giving any problem. Any reason?
This is undefined behavior - the fact that it appears to work is simply due to luck (whether good or bad luck is matter of opinion).
Essentially you're dereferencing a 'dangling' pointer. Just as when you dereference a pointer to a block of memory that's been freed, you may simply read out old, stale data that happens to look like it has reasonable values, or you might read out something that appears to be garbage, or you might cause a segfault.
One thing is for certain - it's a bug, whether it seems to act like one or not.
Vectors are dynamic arrays. You are right: when they overflow their capacity, they are reallocated someplace else, and all previously obtained pointers, iterators (which for vectors are generally also implemented as pointers, although the standard doesn't require it) and references become invalid, and the memory held earlier is finally freed.
Here is how events occur, in simplistic terms:
currently available space is insufficient;
new memory space is acquired (k times as large as the original capacity, generally with k = 2);
the current array is copied to the new memory location;
old = current; current = new; (this ensures atomicity for single insertions);
the old memory is freed.
Notice that the data at the old memory location is not destroyed or reset, so when those resources are needed again they might or might not be reused. But the original address that you are holding is a stale location and must always be avoided.
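One common way to sidestep the invalidation (a fragment sketch, assuming the Test class from the question and that you know an upper bound on the element count) is to reserve capacity up front, or to remember an index instead of a raw pointer:
std::vector<Test> vec;
vec.reserve(10);                 // capacity fixed for the next 10 push_backs, so no reallocation
// ... push_back up to 10 elements; a pointer such as &vec[2] stays valid ...
std::size_t idx = 2;             // alternatively, store the index, not the address
vec[idx].val = 2;                // always goes through the vector's current buffer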
When a vector reallocates, all iterators, pointers and references that you obtained prior to the reallocation are invalidated. Using them leads to undefined behavior, which includes "apparently not giving any problems".
To visualize the problem, you can insert debug outputs in the constructors and the destructor. You could also log which objects exist at any point in time:
class Test;
std::set<Test*> valid_Test_objects;
class Test
{
int val;
public:
Test() : val(0)
{
valid_Test_objects.insert(this);
std::cout << "constructed Test object # " << this << std::endl;
}
Test(const Test& that) : val(that.val)
{
std::cout << "constructed Test object # " << this << std::endl;
valid_Test_objects.insert(this);
}
~Test()
{
std::cout << "destructing Test object # " << this << std::endl;
valid_Test_objects.erase(this);
}
int value()
{
std::cout << " accessing Test object # " << this << std::endl;
if (valid_Test_objects.find(this) == valid_Test_objects.end())
{
abort();
}
return val;
}
};
int main()
{
std::vector<Test> vec;
vec.reserve(3);
Test x;
vec.push_back(x);
Test* p = &vec[0];
std::cout << p->value() << std::endl;
std::vector<Test>::size_type cap = vec.capacity();
while (cap == vec.capacity()) vec.push_back(x);
std::cout << p->value() << std::endl;
}
Unfortunately, we should not even try to call p->value() when p is invalid. Enjoy your stay in undefined-behavior land ;-)
If you're asking why there's no crash when you try to access obj, then I would say you should try running it in Release mode and see. In any case, obj should not be accessed, as the pointer that was returned by &vec[2] was invalidated when the vector increased its capacity. It could still be pointing to some random junk that happens to be allocated to your process.
I have this function:
char* ReadBlock(fstream& stream, int size)
{
char* memblock;
memblock = new char[size];
stream.read(memblock, size);
return(memblock);
}
The function is called every time I have to read bytes from a file. I think it allocates new memory every time I use it, but how can I free the memory once I have processed the data inside the array? Can I do it from outside the function? Does processing data by allocating big blocks give better performance than allocating and deleting small blocks of data?
Thank you very much for your help!
Dynamic arrays are freed using delete[]:
char* block = ReadBlock(...);
// ... do stuff
delete[] block;
Ideally however you don't use manual memory management here:
std::vector<char> ReadBlock(std::fstream& stream, int size) {
std::vector<char> memblock(size);
stream.read(&memblock[0], size);
return memblock;
}
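Usage would then look roughly like this (the file name is made up); the vector frees its buffer automatically when it goes out of scope:
std::fstream stream("data.bin", std::ios::in | std::ios::binary);  // hypothetical file
std::vector<char> block = ReadBlock(stream, 128);
// ... process block.data(), block.size() bytes ...
// no delete[] needed; the vector's destructor releases the memory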
Just delete[] the return value from this function when you've finished with it. It doesn't matter that you're deleting it from outside. Just don't delete it before you finish using it.
You can call:
char * block = ReadBlock(stream, size);
delete [] block;
But... that's a lot of heap allocation for no gain. Consider taking this approach
char *block = new char[size];
while (...) {
stream.read(block, size);
}
delete [] block;
Note: if size can be a compile-time constant, you can just stack-allocate block.
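For example, a sketch of the stack-allocated variant (kBlockSize is a made-up compile-time constant; stream is the same fstream as above):
constexpr int kBlockSize = 512;       // hypothetical fixed block size
char block[kBlockSize];               // lives on the stack; nothing to delete
while (stream.read(block, kBlockSize)) {
    // ... process kBlockSize bytes in block ...
}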
I had a similar question, and produced a simple program to demonstrate why calling delete [] outside a function will still deallocate the memory that was allocated within the function:
#include <iostream>
#include <vector>
using namespace std;
int *allocatememory()
{
int *temppointer = new int[4]{0, 1, 2, 3};
cout << "The location of the pointer temppointer is " << &temppointer << ". Locations pointed to by temppointer:\n";
for (int x = 0; x < 4; x++)
cout << &temppointer[x] << " holds the value " << temppointer[x] << ".\n";
return temppointer;
}
int main()
{
int *mainpointer = allocatememory();
cout << "The location of the pointer mainpointer is " << &mainpointer << ". Locations pointed to by mainpointer:\n";
for (int x = 0; x < 4; x++)
cout << &mainpointer[x] << " holds the value " << mainpointer[x] << ".\n";
delete[] mainpointer;
}
Here was the resulting readout from this program on my terminal:
The location of the pointer temppointer is 0x61fdd0. Locations pointed to by temppointer:
0xfb1f20 holds the value 0.
0xfb1f24 holds the value 1.
0xfb1f28 holds the value 2.
0xfb1f2c holds the value 3.
The location of the pointer mainpointer is 0x61fe10. Locations pointed to by mainpointer:
0xfb1f20 holds the value 0.
0xfb1f24 holds the value 1.
0xfb1f28 holds the value 2.
0xfb1f2c holds the value 3.
This readout demonstrates that although temppointer (created within the allocatememory function) and mainpointer have different values, they point to memory at the same location. This demonstrates why calling delete[] for mainpointer will also deallocate the memory that temppointer had pointed to, as that memory is in the same location.
Yes, you may call delete from outside of the function. In this case, though, may I suggest using a std::string so you don't have to worry about the memory management yourself?
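A sketch of what that could look like (same parameters as the original ReadBlock, minimal error handling):
#include <fstream>
#include <string>
std::string ReadBlock(std::fstream& stream, int size)
{
    std::string memblock(size, '\0');     // the string owns its buffer; no manual delete
    stream.read(&memblock[0], size);
    memblock.resize(stream.gcount());     // keep only the bytes actually read
    return memblock;
}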
First thing to note: memory allocated with new and delete is completely global. Things are not automatically deleted when pointers go out of scope or a function exits. As long as you have a pointer to the allocation (such as the pointer being returned there), you can delete it whenever and wherever you want. The trick is just making sure other code doesn't delete it without you knowing.
That is a benefit of the sort of function structure the fstream read function has. It is fairly clear that all that function is going to do is read 'size' bytes into the buffer you provide; it doesn't matter whether that buffer was allocated using new, whether it's a static or global buffer, a local buffer, or even just a pointer into a local struct. And it is also fairly clear that the function will do nothing more with the buffer you pass it once it has read the data into it.
On the other hand, take the structure of your ReadBlock function: if you didn't have its code, it would be tricky to figure out exactly what it returns. Is it returning a pointer to new'd memory? If so, is it expecting you to delete it? Will it delete it itself? If so, when? Is it even a pointer from new? Is it just returning the address of some shared static buffer? If so, when will the buffer become invalid (for example, overwritten by something else)?
Looking at the code of ReadBlock, it is clear that it is returning a pointer to new'd memory and is expecting you to delete it whenever you are done with it. That buffer will never be overwritten or become invalid until you delete it.
Speed-wise, that's the other advantage of fstream.read's 'you sort out the buffer' approach: you get to choose when memory is allocated. If you are going to 'read data, process, delete buffer, read data, process, delete buffer, etc.', it is going to be a lot more efficient to just allocate one buffer (to the maximum size you will need, which will be the size of your biggest single read) and use that for everything, as suggested by Stephen.
How about using a static char* memblock? It would be initialised just once, and it wouldn't allocate new space for memblock every time.
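A rough sketch of that idea, with the caveats that the buffer size must be a fixed upper bound (4096 here is a made-up number), every call overwrites the previous contents, and a function-local static is not safe to fill from multiple threads at once:
char* ReadBlock(std::fstream& stream, int size)
{
    static char* memblock = new char[4096];  // allocated once, on the first call
    stream.read(memblock, size);
    return memblock;                         // the caller must NOT delete[] this pointer
}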
Since C++11, one can use std::unique_ptr for this purpose.
From https://en.cppreference.com/book/intro/smart_pointers :
void my_func()
{
int* valuePtr = new int(15);
int x = 45;
// ...
if (x == 45)
return; // here we have a memory leak, valuePtr is not deleted
// ...
delete valuePtr;
}
But,
#include <memory>
void my_func()
{
std::unique_ptr<int> valuePtr(new int(15));
int x = 45;
// ...
if (x == 45)
return; // no memory leak anymore!
// ...
}
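Applied to the ReadBlock function from the earlier question, a sketch using std::unique_ptr<char[]> might look like this (std::make_unique needs C++14):
#include <fstream>
#include <memory>
std::unique_ptr<char[]> ReadBlock(std::fstream& stream, int size)
{
    auto memblock = std::make_unique<char[]>(size);  // value-initialised char[size]
    stream.read(memblock.get(), size);
    return memblock;                                 // ownership transfers to the caller
}
// Usage sketch: the array is released automatically when block goes out of scope.
// std::unique_ptr<char[]> block = ReadBlock(stream, 128);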