I have the following program:
// Simple array memory test.
#include <iostream>
using namespace std;

void someFunc(float*, int, int);

int main() {
    int convert = 2;
    float *arr = new float[17];
    for (int i = 0; i < 17; i++) {
        arr[i] = 1.0;
    }
    someFunc(arr, 17, convert);
    for (int i = 0; i < 17; i++) {
        cout << arr[i] << endl;
    }
    return 0;
}

void someFunc(float *arr, int num, int flag) {
    if (flag) {
        delete[] arr;
    }
}
When I load this program into gdb, set a breakpoint at the float *arr ... line, and step through, I observe the following:
Printing the array arr after it has been initialized gives me 1, seventeen times.
Inside someFunc, printing arr just before the delete gives the same output as above.
Back in main, printing arr gives 0 as the first element, followed by sixteen 1s.
My questions:
1. Once the array has been deleted in someFunc, how am I still able to access arr without a segfault, either in someFunc or in main?
2. The snippet above is a test version of code that runs inside a bigger program, and I observe the same behaviour in both places (the first element is 0 but all the others are unchanged). If this is some unexplained memory error, why am I observing the same thing in two different programs?
3. Some explanations to fill the gaps in my understanding are most welcome.
A segfault occurs when you access a memory address that isn't mapped into the process. Calling delete [] releases memory back to the memory allocator, but usually not to the OS.
The contents of the memory after calling delete [] are an implementation detail that varies across compilers, libraries, OSes and especially debug-vs-release builds. Debug memory allocators, for instance, will often fill the memory with some tell-tale signature like 0xdeadbeef.
Dereferencing a pointer after it has been deleted is undefined behavior, which means that anything can happen.
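As an illustration of that debug-fill idea, here is a minimal sketch (debug_free and the 0xDD byte are made up for the example; this is not how any real debug CRT is implemented):

#include <cstdlib>
#include <cstring>

// Hypothetical helper mimicking a debug allocator: stamp the block with
// a signature byte before releasing it, so stale reads are easy to spot.
void debug_free(void* p, std::size_t size) {
    std::memset(p, 0xDD, size);  // tell-tale fill pattern
    std::free(p);
}

int main() {
    // Error handling omitted for brevity.
    float* arr = static_cast<float*>(std::malloc(17 * sizeof(float)));
    for (int i = 0; i < 17; i++) arr[i] = 1.0f;
    debug_free(arr, 17 * sizeof(float));
    // Reading arr now is still undefined behavior; the fill only makes
    // the mistake visible when it happens to "work".
    return 0;
}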
Once the array has been deleted, any access to it is undefined behavior.
There's no guarantee that you'll get a segment violation; in fact,
typically you won't. But there's no guarantee of what you will get; in
larger programs, modifying the contents of the array could easily result
in memory corruption elsewhere.
delete hands the memory back to the allocator (not necessarily the OS), and it does not necessarily clear the contents of that memory (it usually shouldn't, as that would be overhead for nothing). So the old values are often retained, and in your case you are reading the same memory, so you see whatever is still there. Note, though, that any access to the memory after delete is still undefined behaviour; what you happen to observe just depends on the memory manager.
Re 1: You can't. If you want to access arr later, don't delete it.
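A minimal sketch of that fix (assuming the someFunc indirection isn't actually needed): delete only after the last use, and null the pointer so accidental reuse is caught quickly:

#include <iostream>

int main() {
    float* arr = new float[17];
    for (int i = 0; i < 17; i++) arr[i] = 1.0f;
    for (int i = 0; i < 17; i++) std::cout << arr[i] << '\n';
    delete[] arr;   // the last use is behind us now
    arr = nullptr;  // a later dereference now fails fast instead of
                    // silently reading freed memory (it is still a bug)
    return 0;
}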
C++ doesn't check array boundaries for you. You only get a segfault if you access memory that isn't mapped into your process.
A related question:
int main() {
    int* i = new int(1);
    i++;
    *i = 1;
    delete i;
}
Here is my logic:
I increment i by 1 and then assign a value through it. Then I delete i, freeing that memory location while leaking the original allocation. Where is my mistake?
I also tried different versions. Every time, as long as I do the pointer arithmetic and then delete the pointer, my program crashes.
What your program shows is several cases of undefined behaviour:
You write to memory that hasn't been allocated (*i = 1)
You free something that you didn't allocate, effectively delete i + 1.
You MUST call delete on exactly the same pointer value that you got back from new, and nothing else. Assuming the rest of your code was valid, it would be fine to do int *j = i; after int *i = new int(1);, and then delete j;. (For example, int *i = new int[2]; would make your i++; *i = 1; a valid write, though you would then still have to delete[] the original pointer value.)
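A small sketch of the correct pattern: keep the original pointer and do the arithmetic on a copy:

int main() {
    int* base = new int[2];  // two ints, so base + 1 is in bounds
    int* p = base;           // do the arithmetic on a copy...
    p++;
    *p = 1;                  // ...so this write is within the allocation
    delete[] base;           // delete exactly what new[] returned
}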
Whoever allocates is responsible for deallocating, so you must not delete something that you did not get from new yourself. Furthermore, i++; *i = 1; is UB, since you may be writing to a restricted or read-only memory area...
The code makes no sense as posted. I think you have an XY problem; if you could post your original problem, there would be a better chance of helping you.
In this case you need a short understanding of how heap memory management works, in a particular implementation of it. When you allocate an object, you receive a pointer to the start of the memory that is available for you to work with. However, the really allocated block starts a bit earlier and is a bit larger than what you requested: its start is the address you received minus some offset, where the allocator keeps its internal bookkeeping. So when you pass the incremented pointer to delete, it looks for that internal information just to the left of the pointer; because your address was incremented, the lookup finds garbage instead of a valid header, which results in a crash. That's the short version.
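A minimal sketch of that layout (BlockHeader, my_alloc and my_free are invented for illustration; real allocators differ):

#include <cstdlib>

// Hypothetical allocator: the bookkeeping lives just before the pointer
// that the caller receives.
struct BlockHeader {
    std::size_t size;  // invented bookkeeping field
};

void* my_alloc(std::size_t n) {
    // Error handling omitted for brevity.
    BlockHeader* h = static_cast<BlockHeader*>(std::malloc(sizeof(BlockHeader) + n));
    h->size = n;
    return h + 1;  // the user's pointer starts right after the header
}

void my_free(void* p) {
    // Step back over the header. If the caller incremented p, this
    // reads garbage instead of a header -- hence the crash.
    BlockHeader* h = static_cast<BlockHeader*>(p) - 1;
    std::free(h);
}

int main() {
    int* q = static_cast<int*>(my_alloc(2 * sizeof(int)));
    q[0] = 1;
    my_free(q);  // fine: the same pointer my_alloc returned
    return 0;
}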
The problem lies here:
i++;
This line doesn't increment the value i points to, but the pointer itself, by the size of an int (typically 4 bytes).
You meant to do this:
(*i)++;
Let's take it step by step:
int* i = new int(1); // 1. Allocate memory, initialized to 1.
i++;                 // 2. Increment the pointer. It now points to
                     //    another location.
*i = 1;              // 3. Dereference a pointer that points to unknown
                     //    memory. This could cause a segmentation fault.
delete i;            // 4. Delete the unknown memory, which is undefined
                     //    behavior.
In short: if you don't own a piece of memory, you can neither do arithmetic on it nor delete it!
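For completeness, a corrected version of the snippet:

#include <iostream>

int main() {
    int* i = new int(1);      // allocate one int, initialized to 1
    (*i)++;                   // increment the value, not the pointer
    std::cout << *i << '\n';  // prints 2
    delete i;                 // delete the exact pointer new returned
}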
I am running the following code, where I declare a dynamic 2D array and then assign values at column indexes higher than the number of columns actually allocated for the array. However, the code runs perfectly and I don't get an error, although I believe I should.
void main() {
    unsigned char **bitarray = NULL;
    bitarray = new unsigned char*[96];
    for (int j = 0; j < 96; j++)
    {
        bitarray[j] = new unsigned char[56];
        if (bitarray[j] == NULL)
        {
            cout << "Memory could not be allocated for 2D Array.";
            return; // return if memory not allocated
        }
    }
    bitarray[0][64] = '1';
    bitarray[10][64] = '1';
    cout << bitarray[0][64] << " " << bitarray[10][64];
    getch();
    return;
}
The link to the output I get is here. (The values are actually assigned accurately; I don't know why, though.)
In C++, accessing a buffer out of its bounds invokes undefined behavior (not a trapped error, as you expected).
The C++ specification defines the term undefined behavior as:
behavior for which this International Standard imposes no requirements.
In your code, both
bitarray[0][64] = '1';
bitarray[10][64] = '1';
are accessing memory out of bounds, i.e., those memory locations are "invalid". Accessing invalid memory invokes undefined behaviour.
The access violation error or segmentation fault is one of the many possible outcomes of UB. Nothing is guaranteed.
From the wiki page for segmentation fault,
On systems using hardware memory segmentation to provide virtual memory, a segmentation fault occurs when the hardware detects an attempt to refer to a non-existent segment, or to refer to a location outside the bounds of a segment, .....
So maybe, just maybe, the memory location for bitarray[0][64] happens to lie inside a page (segment) that is accessible (but still invalid) to the program in this very particular case. That does not mean it always will be.
That said, void main() is not a correct signature for main(). The signatures permitted by the standard (C++11, §3.6.1) are int main() and int main(int argc, char* argv[]).
C++11 introduced std::array, whose at() method provides out-of-bounds checking.
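A quick sketch of at() turning the same mistake into a catchable error instead of undefined behaviour:

#include <array>
#include <iostream>
#include <stdexcept>

int main() {
    std::array<unsigned char, 56> row{};  // same width as one bitarray row
    try {
        row.at(64) = '1';  // out of bounds: at() throws instead of UB
    } catch (const std::out_of_range& e) {
        std::cout << "caught: " << e.what() << '\n';
    }
    return 0;
}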
I am sorry if I may not have phrased the question correctly, but in the following code:
int main() {
    char* a = new char[5];
    a = "2222";
    a[7] = 'f'; // Error thrown here
    cout << a;
}
If we try to access a[7] in this program, we get an error, presumably because a[7] was never allocated to us.
But if I do the same thing in a class:
class str
{
public:
    char* a;
    str(char *s) {
        a = new char[5];
        strcpy(a, s);
    }
};

int main()
{
    str s("ssss");
    s.a[4] = 'f'; s.a[5] = 'f'; s.a[6] = 'f'; s.a[7] = 'f';
    cout << s.a << endl;
    return 0;
}
The code works, printing "ssssffff" (the original string plus the extra characters).
How are we able to access a[7] and so on here, when we only allocated char[5] for a, while we were not able to do so in the first program?
In your first case, you have an error:
int main()
{
    char* a = new char[5]; // allocate a dynamic char array of size 5
    a = "2222";            // reassign the pointer to the string literal "2222" - MEMORY LEAK HERE
    a[7] = 'f';            // access the array out of bounds!
    // ...
}
You are creating a memory leak and then asking why undefined behavior is undefined.
Your second example is asking, again, why undefined behavior is undefined.
As others have said, it's undefined behavior. When you write to memory out of bounds of the allocated memory for the pointer, several things can happen:
You overwrite an allocated, but unused and so far unimportant location
You overwrite a memory location that stores something important for your program, which will lead to errors because you've corrupted your own memory at that point
You overwrite a memory location that you aren't allowed to access (something out of your program's memory space) and the OS freaks out, causing an error like "AccessViolation" or something
For your specific examples, where the memory is allocated depends on how the variable is defined and what other memory your program needs in order to run. This may affect the probability of getting one error or another, or of getting no error at all. But whether or not you see an error, you shouldn't access memory locations outside your allocated memory space, because, as others have said, the behavior is undefined and you will get non-deterministic behavior mixed with errors.
int main() {
    char* a = new char[5];
    a = "2222";
    a[7] = 'f'; // Error thrown here
    cout << a;
}
If we try to access a[7] in the program, we get an error because we
haven't been assigned a[7].
No, you get a memory error from writing to memory that is write-protected: a is pointing to the read-only memory of the string literal "2222", and by chance the bytes just past the end of that literal are ALSO write-protected. If you used the same strcpy as you use in the class str, the write would instead land on some "random" data after the allocated memory, which quite possibly would NOT fail in the same way.
It is indeed invalid (undefined behaviour) to access memory outside of the memory you have allocated. The compiler, the C and C++ runtime library, and the OS that your code is built with and runs on are not guaranteed to detect all such accesses (because it can be quite time-consuming to check every single operation that touches memory). But it is guaranteed to be "wrong" to access memory outside of what has been allocated; it just isn't always detected.
As mentioned in other answers, accessing memory past the end of an array is undefined behavior, i.e. you don't know what will happen. If you are lucky, the program crashes; if not, the program continues as if nothing was wrong.
C and C++ do not perform bounds checks on (simple) arrays for performance reasons.
The syntax a[7] simply means: go to memory position X + 7 * sizeof(a[0]), where X is the address where a's storage starts, and read/write there. If you read/write within the memory that you have reserved, everything is fine; if outside, nobody knows what happens (see the answer from @reblace).
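A tiny sketch of that address arithmetic:

#include <iostream>

int main() {
    int a[10] = {};
    // a[7] is defined as *(a + 7): the start address plus 7 elements,
    // i.e. 7 * sizeof(a[0]) bytes further along.
    std::cout << (&a[7] == a + 7) << '\n';  // prints 1
    std::cout << (reinterpret_cast<char*>(&a[7]) -
                  reinterpret_cast<char*>(a)) << '\n';  // prints 7 * sizeof(int), e.g. 28
    return 0;
}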
I wanted to access a deleted array to see how the memory had changed. It works until I delete a really big array; then I get an access violation exception. Please don't mind the couts; I know they are slow, and I will get rid of them.
When I do it for a 1000-element array it is OK; when I do it for 1000000 elements I get an exception. I know this is a weird task, but my teacher is stubborn and I can't figure out how to deal with it.
EDIT: I know that I should never access that memory, but I also know that there is probably a trick he will show, and then he will tell me that I am not right.
#include <iostream>
#include <memory>
using namespace std;

int main() {
    long max = 1000000; // for 10000 I do not get any exception
    int* t = new int[max];
    cout << max << endl;
    uninitialized_fill_n(t, max, 1);
    delete[] t;
    cout << "deleted t" << endl;
    int x;
    cin >> x; // wait a little bit
    long counter = 0;
    for (long i = 0; i < max; i++) {
        cout << i << endl;
        if (t[i] != 1) { // undefined behaviour: t was deleted above
            cout << t[i] << endl;
            counter++;
        }
    }
    return 0;
}
The state of "deleted" memory is undefined, and accessing memory after delete is UNDEFINED BEHAVIOUR (meaning the C++ specification allows "anything" to happen when you access such memory, including the appearance of it "working" sometimes and "not working" other times).
You should NEVER access memory that has been deleted, and as your larger array shows, it may simply not work: very large blocks are often handed straight back to the OS when deleted (unmapped from the process), while small blocks tend to stay in the process's heap, which is why the small case appears to "work".
You are not allowed to access a released buffer.
Accessing memory that is no longer in use results in undefined behaviour. You will not get any consistent pattern: if the original memory happens not to have been overwritten after it became invalid, the values will be exactly what they used to be, but there is no guarantee of that.
I find the answer to this similar question to be very clear in explaining the concept with a simple analogy.
A simple way to mimic this behaviour is to create a function which returns a pointer to a local variable, for example:
int *foo() {
    int a = 1;
    return &a; // a dies when foo returns, so the returned pointer dangles
}
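For contrast, a sketch of a safe variant that returns the value instead (foo_by_value is invented for the example):

int foo_by_value() {
    int a = 1;
    return a; // the value is copied out to the caller; nothing dangles
}

int main() {
    int x = foo_by_value(); // safe: x owns its own copy
    return 0;
}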
Following a discussion in a software meeting, I set out to find out whether deleting a dynamically allocated array of primitives with plain delete will cause a memory leak.
I have written this tiny program and compiled it with Visual Studio 2008, running on Windows XP:
#include "stdafx.h"
#include "Windows.h"

const unsigned long BLOCK_SIZE = 1024*100000;

int _tmain()
{
    for (unsigned int i = 0; i < 1024*1000; i++)
    {
        int* p = new int[BLOCK_SIZE];
        for (unsigned long j = 0; j < BLOCK_SIZE; j++) p[j] = j % 2;
        Sleep(1000);
        delete p; // note: plain delete, not delete[]
    }
}
I then monitored the memory consumption of my application using Task Manager. Surprisingly, the memory was allocated and freed correctly; allocated memory did not steadily increase as I expected.
I then modified my test program to allocate an array of a non-primitive type:
#include "stdafx.h"
#include "Windows.h"

struct aStruct
{
    aStruct() : i(1), j(0) {}
    int i;
    char j;
} NonePrimitive;

const unsigned long BLOCK_SIZE = 1024*100000;

int _tmain()
{
    for (unsigned int i = 0; i < 1024*100000; i++)
    {
        aStruct* p = new aStruct[BLOCK_SIZE];
        Sleep(1000);
        delete p; // again plain delete on an array
    }
}
After running for 10 minutes there was no meaningful increase in memory.
I compiled the project at warning level 4 and got no warnings.
Is it possible that the Visual Studio runtime keeps track of the allocated objects' types, so that there is no difference between delete and delete[] in that environment?
delete p, where p points to an array allocated with new[], is undefined behaviour.
Specifically, when you allocate an array of a raw data type (ints), the compiler doesn't have a lot of extra work to do, so the allocation turns into a simple malloc(); that's why delete p will probably appear to work.
delete p is going to fail, typically, when:
p was a complex data type - delete p; won't know to call individual destructors.
a "user" overloads operator new[] and delete[] to use a different heap to the regular heap.
the debug runtime overloads operator new[] and delete[] to add extra tracking information for the array.
the compiler decides it needs to store extra RTTI information along with the object, which delete p; won't understand, but delete []p; will.
No, it's undefined behavior. Don't do it - use delete[].
In VC++ 7 to 9 it happens to work when the type in question has a trivial destructor, but it might stop working in newer versions - the usual story with undefined behavior. Don't do it anyway.
It's called undefined behaviour; it might work, but you don't know why, so you shouldn't stick with it.
I don't think Visual Studio keeps track of whether you allocated the objects as arrays or as plain objects and magically adds [] to your delete. It probably compiles delete p; to the same code as if you had allocated with p = new int, and, as I said, for some reason that happens to work. But you don't know why.
One answer is that yes, it can cause memory leaks, because it doesn't call the destructor for every item in the array. That means that any additional memory owned by items in the array will leak.
The more standards-compliant answer is that it's undefined behaviour. The compiler, for example, has every right to use different memory pools for arrays than for non-array items. Doing the new one way but the delete the other could cause heap corruption.
Your compiler may make guarantees that the standard doesn't, but the first issue remains. For POD items that don't own additional memory (or resources like file handles) you might be OK.
Even if it's safe for your compiler and data items, don't do it anyway - it's also misleading to anyone trying to read your code.
No, you should use delete[] when dealing with arrays.
Just using delete won't call the destructors of the objects in the array. While it may appear to work as intended, it is undefined, as there are differences in exactly how the two forms work; so you shouldn't use it, even for built-in types.
The reason it seems not to leak memory is that delete is typically based on free, which already knows how much memory it needs to release. However, the C++ side is unlikely to be cleaned up correctly; I suspect that at most the destructor of the first object is called.
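A small sketch that makes the destructor-count point observable (the Tracer type is invented for the example):

#include <iostream>

// Count destructor calls to see what delete[] actually does.
struct Tracer {
    static int destroyed;
    ~Tracer() { ++destroyed; }
};
int Tracer::destroyed = 0;

int main() {
    Tracer* p = new Tracer[5];
    delete[] p;  // delete[] runs all five destructors
    std::cout << Tracer::destroyed << '\n';  // prints 5
    return 0;
}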
Using delete with [] tells the compiler to call the destructor on every item of the array.
Not using delete[] can cause memory leaks when it is used on an array of objects that themselves allocate dynamic memory, as follows:
class AClass
{
public:
    AClass()
    {
        aString = new char[100];
    }
    ~AClass()
    {
        delete [] aString;
    }
private:
    const char *aString;
};

int main()
{
    AClass *p = new AClass[1000];
    delete p; // wrong: should be delete [] p; the destructors of the
              // elements never run, so their aString buffers leak
    return 0;
}