Consider the following program:
int main() {
    while (...) {
        int* foobar = new int;
    }
    return 0;
}
When does foobar go out of scope?
I know that when using new, objects are allocated on the heap and need to be deleted manually with delete; in the code above, this causes a memory leak. However, what about scope?
I thought it would go out of scope as soon as the while loop terminates, because you have no direct access to it anymore. For example, you cannot delete it after the loop has terminated.
Be careful here: foobar is local to the while loop, but the allocation on the heap has no scope and will only be freed if you call delete on it.
The variable and the allocation are not linked in any way as far as the compiler is concerned. Indeed, the allocation happens at run time, so the compiler never even sees it.
foobar is a local variable that goes out of scope at the end of the block.
*foobar is a dynamically allocated object with manual lifetime. Since it doesn't have scoped lifetime, the question makes no sense -- it doesn't have a scope out of which it could go. Its lifetime is managed manually, and the object lives until you delete it.
Your question is dangerously burdened with prejudice and preconceptions. It is best to approach C++ with a clean mind and an open attitude. Only that way will you be able to appreciate the language's wonders to the fullest.
Here's the clean and open approach: Do think about 1) storage classes (automatic, static, dynamic), 2) object lifetime (scoped, permanent, manual), 3) object semantics (value (copies) vs reference (aliases)), 4) RAII and single-responsibility classes. Purge your mind of a) stack/heap, b) pointers, c) new/delete, d) destructors/copy constructors/assignment operators.
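To illustrate that mindset, here is a minimal sketch (the Scores class and its members are invented for illustration): a single-responsibility class with value semantics and scoped lifetime, where the dynamic storage is handled by std::vector rather than by hand. No new, no delete, no raw pointers.
#include <cstddef>
#include <vector>

class Scores {
    std::vector<int> values; // dynamic storage, but no new/delete in sight
public:
    void add(int v) { values.push_back(v); }
    std::size_t count() const { return values.size(); }
};

int main() {
    Scores s;        // scoped lifetime: lives until the end of main
    s.add(42);
    Scores copy = s; // value semantics: copy has its own data
    copy.add(7);     // modifying the copy does not touch s
    return 0;
}                    // both objects are destroyed here; nothing leaks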
That's a pretty awesome memory leak. You have a variable on the stack pointing to memory allocated on the heap. You need to delete the heap memory before you lose the last reference to it at the end of each loop iteration. Alternatively, if you don't want to fuss with memory management, always use smart pointers to own the raw heap memory and let it clean itself up:
#include <memory>
int main() {
    while (...) {
        std::unique_ptr<int> foobar(new int);
    } // smart pointer foobar deletes the allocated int each iteration
    return 0;
}
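If C++14 is available, the same idea is often written with std::make_unique instead of a naked new. A minimal sketch (the loop condition is a placeholder, since the original ... does not compile):
#include <memory>

int main() {
    while (/* some condition */ false) {
        auto foobar = std::make_unique<int>(); // C++14
    } // foobar releases its int here, every iteration
    return 0;
}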
The pointer (foobar) will go out of scope right as the program gets to the closing brace of the while loop. So if the expression in the ... remains true, memory will be leaked every time the loop executes as you have lost a handle to the allocated object as of that closing brace.
Here foobar is an int pointer occupying memory on the stack. The int instance you are creating dynamically with new lives on the heap. When foobar goes out of scope, you lose the only reference to that heap memory, so you can no longer delete it.
The best solution would be:
while (...)
{
    int foobar;
} // foobar goes out of scope here and is removed from the stack automatically
If you still want to use dynamic allocation, then do this:
while (...)
{
    int* foobar = new int;
    // do your work here
    delete foobar;    // this releases the heap memory
    foobar = nullptr; // avoid a dangling pointer
}
foobar goes out of scope after each iteration of the loop.
The memory you allocate and assign to foobar is being leaked, in that it is still allocated on the heap but no references to it are available in the program.
Since foobar is declared in the body of the loop, it goes out of scope at the end of every iteration of the loop. It is then redeclared, and new memory is allocated again and again until the loop ends. The actual object that foobar points to never goes out of scope. Scope doesn't apply to dynamically allocated (aka heap) objects, only to automatic (stack) objects.
foobar, the pointer, is created on the stack, but the new int is created on the heap. Each time the code loops, foobar falls out of scope, while the newly created int persists on the heap. On each iteration a new int is created and a fresh pointer is initialized, which means the program no longer has any way to reach the previous int(s) on the heap.
What seems to be lacking in every one of the previous answers, and even in this one, is when the heap itself falls out of scope. Maybe I am getting the terminology wrong, but I know that at some point the heap is reclaimed too. It may happen when the program stops running, or when the computer is turned off, but I know it happens.
Let us look at this question from a different perspective. I have written any number of programs that leak memory. Over all the years I have owned my computer, I am sure I have leaked over 2 gigabytes of memory. My computer only has 1 gig of memory. Therefore, if the heap NEVER falls out of scope, my computer must have some magical memory. Would one of you care to explain when exactly the heap falls out of scope?
From what I'd learned, a dynamically allocated object needs to be deleted using the delete operator and is not deleted automatically at the end of a scope the way stack-allocated variables are.
Therefore, in the following example, when the loop runs for the 2nd and 3rd time, shouldn't int *p = new int; be considered a multiple initialization, since the dynamically allocated memory pointed to by p hasn't been deleted?
#include <iostream>
using namespace std;

int main()
{
    int i = 2;
    while (i > -1)
    {
        int *p = new int;
        *p = 5;
        cout << *p;
        --i;
    }
}
Using Visual Studio 2015, the above program gives no error. According to my understanding this does not make sense.
I'm assuming there is something wrong with my understanding of dynamically allocated variables. Can anyone please clarify?
Your code shows what is called a "memory leak". The memory allocated in each loop iteration is lost when p goes out of scope without the memory being deleted first. This (usually) does not lead to compiler warnings or runtime errors, as it can be quite complicated for the compiler to find this kind of error. Some static code analysers might be able to detect it, though.
What you might notice in the case of a memory leak is that your program uses more and more memory the longer it runs, meaning that memory leaks are especially problematic on systems with little RAM and for long-running programs, e.g. system services that are supposed to run for several days.
There are special tools to find memory leaks, e.g. valgrind for Linux or built-in tools in the debug runtime for Visual Studio.
You can't dynamically allocate variables, only objects.
p is not a dynamically allocated object, but *p - the object created by new - is.
delete p would not delete p but the object it points to.
Scope is a syntactic compile-time property that applies to names.
The variable p, having a name, has a scope.
The object it points to is unnamed, so the concept of scope doesn't even apply to it.
At runtime, both p and the object it points to have a lifetime.
The lifetime of p coincides with its scope, since it's an automatic variable. Every iteration of the loop has its own variable, all with the same name - there is no multiple initialization because the variables are distinct.
The lifetime of the object that p points to extends until its address is passed to delete.
Since you never do that, every object you created with new "leaks".
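A minimal sketch of the same loop without the leak, assuming C++14 for std::make_unique (you could equally just call delete p; at the end of each iteration):
#include <iostream>
#include <memory>

int main() {
    int i = 2;
    while (i > -1) {
        // each iteration gets its own p; the unique_ptr owns the int
        // and deletes it automatically when the iteration ends
        auto p = std::make_unique<int>(5);
        std::cout << *p;
        --i;
    }
    return 0;
}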
p is just an automatic variable containing the address of the dynamic data; the pointer itself is lost at the end of the scope, while the dynamic data persists without any pointer left to access it (a memory leak).
The data pointed to by p is dynamic memory (on the heap) and won't get freed after each iteration. The content of the variable p itself, however, is just an address stored on the stack (if you look at p in memory you find a plain number, the address of the dynamic data in the heap, not the data itself), and that address is lost at the end of the scope (one iteration).
Suppose that you have the following function:
void doSomething(){
    int *data = new int[100];
}
Why will this produce a memory leak? Since I can not access this variable outside the function, why doesn't the compiler call delete by itself every time a call to this function ends?
Why will this produce a memory leak?
Because you're responsible for deleting anything you create with new.
Why doesn't the compiler call delete by itself every time a call to this function ends?
Often, the compiler can't tell whether or not you still have a pointer to the allocated object. For example:
void doSomething(){
    int *data = new int[100];
    doSomethingElse(data);
}
Does doSomethingElse just use the pointer during the function call (in which case we still want to delete the array here)? Does it store a copy of the pointer for use later (in which case we don't want to delete it yet)? The compiler has no way of knowing; it's up to you to tell it. Rather than making a complicated, error-prone rule (like "you must delete it unless you can figure out that the compiler must know that there are no other references to it"), the rule is kept simple: you must delete it.
Luckily, we can do better than juggling a raw pointer and trying to delete it at the right time. The principle of RAII allows objects to take ownership of allocated resources, and automatically release them when their destructor is called as they go out of scope. Containers allow dynamic objects to be maintained within a single scope, and copied if needed; smart pointers allow ownership to be moved or shared between scopes. In this case, a simple container will give us a dynamic array:
#include <vector>

void doSomething(){
    std::vector<int> data(100);
} // automatically deallocated
Of course, for a small fixed-size array like this, you might as well just make it automatic:
void doSomething(){
    int data[100];
} // all automatic variables are deallocated
The whole point of dynamic allocation like this is that you manually control the lifetime of the allocated object. It is needed and appropriate a lot less often than many people seem to think. In fact, I cannot think of a valid use of new[].
It is indeed a good idea to let the compiler handle the lifetime of objects. This is called RAII. In this particular case, you should use std::vector<int> data(100). Now the memory gets freed automatically as soon as data goes out of scope in any way.
Actually you can access this memory outside of your doSomething() function; you just need to know the address that the pointer holds.
Think of memory as a row of cells, each labelled with its address and each holding a value. If you know that the allocated array starts at address 0x200, then outside of your function you can do the following:
int *data = (int *)0x200;
std::cout << data[0];
So the compiler can't delete this memory for you, but please pay attention to the compiler warning:
main.cpp: In function ‘void doSomething()’:
main.cpp:3:10: warning: unused variable ‘data’ [-Wunused-variable]
int *data = new int[100];
When you call new, the OS allocates memory in RAM for you, so you need to let the OS know when you no longer need this memory by calling delete. Only you know when to execute delete, not the compiler.
This is memory allocated on the heap, and is therefore available outside the function to anything which has its address. The compiler can't be expected to know what you intend to do with heap-allocated memory, and it will not deduce at compile time the need to free the memory by (for example) tracking whether anything has an active reference to the memory. There is also no automatic run-time garbage collection as in Java. In C++ the responsibility is on the programmer to free all memory allocated on the heap.
See also: Why doesn't C++ have a garbage collector?
Here you have an int array dynamically allocated on the heap, with a pointer data pointing to the first element of your array.
Since you explicitly allocated your array on the heap with new, it will not be deleted from memory when you go out of scope. When you go out of scope, only the pointer is destroyed, and then you no longer have any access to the array: a memory leak.
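If you really want a heap array rather than a std::vector, a smart pointer can own it so that leaving the scope cleans it up. A minimal sketch:
#include <memory>

void doSomething() {
    std::unique_ptr<int[]> data(new int[100]); // the unique_ptr owns the array
    data[0] = 1;
} // ~unique_ptr calls delete[] here, so nothing leaks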
Since I can not access this variable outside the function, why doesn't the compiler call delete by itself every time a call to this function ends?
Because you should not allocate memory on the heap that you wouldn't use anyway in the first place.
That's how C++ works.
EDIT: also, if the compiler deleted the allocation when the function returned, there would be no way to return a pointer from a function.
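For example, a hypothetical factory function (the name is invented for illustration) that deliberately hands its allocation to the caller:
int* makeCounter() {
    return new int(0); // if the compiler freed this when the function returned,
                       // the caller would receive a dangling pointer
}

int main() {
    int* c = makeCounter();
    ++*c;
    delete c;          // ownership was transferred, so the caller must delete
    return 0;
}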
The languages in question are C/C++.
My prof said to free memory on the heap when you're done using it, because otherwise you can end up with memory that can't be accessed. The problem with this is that you might end up with all your memory used up while being unable to access any of it.
Why doesn't the same concept apply to the stack? I understand that you can always access the memory you used on the stack, but if you keep creating new variables, you will eventually run out of space right? So why can't you free variables on the stack to make room for new variables like you can on the heap?
I get that the compiler frees variables on the stack, but that's at the end of the scope of the variable, right? Doesn't it also free a variable on the heap at the end of its scope? If not, why not?
Dynamically allocated objects ("heap objects" in colloquial language) are never variables. Thus, they can never go out of scope. They don't live inside any scope. The only way you can handle them is via a pointer that you get at the time of allocation.
(The pointer is usually assigned to a variable, but that doesn't help.)
To repeat: variables have scope; objects as such don't. Many objects are also variables, and those are destroyed when their scope ends, but the objects you create with new never are.
And to answer the question: You can only free objects, not variables.
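A small sketch of that distinction:
void demo() {
    int* p = new int(7); // p is a variable: it has a name and a scope.
                         // The int it points to is an unnamed object:
                         // it has a lifetime, but no scope.
} // p goes out of scope here; the unnamed int still exists,
  // but nothing can reach it any more: it has leaked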
The closing "}" brace is where the stack "frees" its memory. So if I have:
{
    int a = 1;
    int b = 2;
    {
        int c = 3; // c gets "freed" at this "}" - the stack shrinks
                   // and c is no longer on the stack
    }
} // a and b are "freed" from the stack at this last "}"
You can think of c as being "higher up" on the stack than "a" and "b", so c is getting popped off before them. Thus, every time you write a "}" symbol, you are effectively shrinking the stack and "freeing" data.
There are already nice answers, but I think you might need some more clarification, so I'll try to make this a more detailed answer and also try to keep it simple (if I manage to). If something isn't clear (which, with me not being a native English speaker and sometimes having trouble formulating answers, might well happen) just ask in the comments. I'm also going to use the Variables vs Objects idea that Kerrek SB uses in his answer.
To make that clearer, I consider Variables to be named Objects, with an Object being something that stores data within your program.
Variables on the stack have automatic storage duration: they automatically get destroyed and their memory is reclaimed once their scope ends.
{
    std::string first_words = "Hello World!";
    // do some stuff here...
} // first_words goes out of scope and the memory gets reclaimed.
In this case first_words is a Variable (since it has its own name), which means it is also an Object.
Now what about the heap? Let's describe what you might consider to be "something on the heap" as a Variable pointing to some memory location on the heap where an Object is located. These objects have what's called dynamic storage duration.
{
    std::string* dirty = nullptr;
    {
        std::string* ohh = new std::string{"I don't like this"}; // ohh is a std::string* and a Variable.
                                                                 // The actual std::string is only an unnamed
                                                                 // Object on the heap.
        // do something here
        dirty = ohh; // dirty points to the same memory location as ohh does now.
    } // ohh goes out of scope and gets destroyed since it is a Variable.
      // The actual std::string Object on the heap doesn't get destroyed.

    std::cout << *dirty << std::endl; // Will work since the std::string on the heap that dirty points to
                                      // is still there.
    delete dirty;    // now the object being pointed to gets destroyed and the memory reclaimed
    dirty = nullptr; // we can still access dirty since it's still in its scope
} // dirty goes out of scope and gets destroyed.
As you can see, objects don't adhere to scopes and you have to manage their memory manually. That's also a reason why most people prefer to use wrappers around them. See for example std::string, which is a wrapper around a dynamically allocated character array.
Now to clarify some of your questions:
Why can't we destroy objects on the stack?
Easy answer: Why would you want to?
Detailed answer: It would be destroyed by you and then destroyed again once it leaves the scope, which isn't allowed. Also, you should generally only have variables in your scope that you actually need for your computation, and how would you destroy a variable that you still need to finish that computation? If you really only need a variable for a small part of a computation, you can just open a new, smaller scope with { } so the variable gets destroyed automatically once it isn't needed anymore.
Note: If you got a lot of variables that you only need for a small part of your computation it might be a hint that that part of the computation should be in its own function/scope.
From your comments: Yeah I get that, but that's at the end of the scope of the variable, right? Doesn't it also free a variable on the heap at the end of its scope?
They don't. Objects on the heap have no scope; you can pass their address out of a function and they still persist. The pointer pointing to such an object can go out of scope and be destroyed, but the Object on the heap still exists and you can no longer access it (a memory leak). That's also why it's called manual memory management, and why most people prefer wrappers that destroy the object automatically when it isn't needed anymore. See std::string and std::vector as examples.
From your comments: Also how can you run out of memory on a computer? An int takes up like 4 bytes, most computers have billions of bytes of memory... (excluding embedded systems)?
Well, computer programs don't always hold just a few ints. Let me answer with a little "fake" quote:
640K [of computer memory] ought to be enough for anybody.
But that isn't enough, as we all know by now. How much memory is enough? I don't know, but certainly not what we have today. There are many algorithms, problems and other things that need tons of memory. Just think about something like computer games: could we make "bigger" games if we had more memory? You can always make something bigger with more resources, so I don't think there's any point where we can say it's enough.
So why can't you free variables on the stack to make room for new variables like you can on the heap?
All the information the "stack allocator" has is ESP, the register that points to the top of the stack:
N: used
N-1: used
N-2: used
N-3: used <- **ESP**
N-4: free
N-5: free
N-6: free
...
That makes "stack allocation" very efficient - just decrease ESP by the size of allocation, plus it is locality/cache-friendly.
If you allowed arbitrary deallocations of different sizes, that would turn your "stack" into a "heap", with all the associated additional overhead: ESP would no longer be enough, because you would have to remember which space is deallocated and which is not:
N: used
N-1: free
N-2: free
N-3: used
N-4: free
N-5: used
N-6: free
...
Clearly, ESP alone is no longer enough. And you also have to deal with fragmentation problems.
I get that the compiler frees variables on the stack, but that's at the end of the scope of the variable, right? Doesn't it also free a variable on the heap at the end of its scope? If not, why not?
One of the reasons is that you don't always want that: sometimes you want to return allocated data to the caller of your function, and that data should outlive the scope where it was created.
That said, if you really need scope-based lifetime management for "heap" allocated data (and most of time it is scope-based, indeed) - it is common practice in C++ to use wrappers around such data. One of examples is std::vector:
{
    std::vector<int> x(1024); // internally allocates an array of 1024 ints on the heap
    // use x
    // ...
} // at the end of the scope the destructor of x is called automatically,
  // which does the deallocation
Read about function calls: each call pushes arguments and a return address onto the stack; the function pops them off and eventually leaves its result. In general, the stack size is limited and managed by the OS, and yes, it can be depleted. Just try doing something like this:
int main(int argc, char **argv)
{
    int table[1000000000];
    return 0;
}
That should end quickly enough.
Local variables on the stack don't actually get freed. The registers pointing at the current stack are just moved back up and the stack "forgets" about them. And yes, you can occupy so much stack space that it overflows and the program crashes.
Variables on the heap do get freed automatically - by the operating system, when the program exits. If you do
int x;
for (x = 0; x <= 99999999; x++) {
    int* a = malloc(sizeof(int));
}
the pointer a keeps getting overwritten and the address of the heap allocation it pointed to is lost. That memory is NOT freed, because the program doesn't exit. This is called a "memory leak". Eventually you will use up all the memory on the heap, and the program will crash.
The heap is managed by code: Deleting a heap allocation is done by calling the heap manager. The stack is managed by hardware. There is no manager to call.
This is a very newbie question, but something completely new to me. In my code, and everywhere I have seen it before, new objects are created as such...
MyClass x = new MyClass(factory);
However, I just saw some example code that looks like this...
MyClass x(factory);
Does that do the same thing?
Not at all.
The first example uses dynamic memory allocation, i.e., you are allocating an instance of MyClass on the heap (as opposed to the stack). You would need to call delete on that pointer manually or you end up with a memory leak. Also, operator new returns a pointer, not the object itself, so your code would not compile. It needs to change to:
MyClass* x = new MyClass(factory);
The second example allocates an instance of MyClass on the stack. This is very useful for short-lived objects as they will automatically be cleaned up when they leave the current scope (and it is fast; cleaning up the stack involves nothing more than incrementing or decrementing a pointer).
This is also how you would implement the Resource Acquisition is Initialization pattern, more commonly referred to as RAII. The destructor for your type would clean up any dynamically allocated memory, so when the stack allocated variable goes out of scope any dynamically allocated memory is cleaned up for you without the need for any outside calls to delete.
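A minimal hand-rolled sketch of that pattern (the class and file name are invented for illustration): the constructor acquires the resource and the destructor releases it, so a stack-allocated instance cleans up after itself.
#include <cstdio>

class FileHandle {
    std::FILE* f;
public:
    explicit FileHandle(const char* name) : f(std::fopen(name, "r")) {}
    ~FileHandle() { if (f) std::fclose(f); }

    FileHandle(const FileHandle&) = delete;            // non-copyable, to keep
    FileHandle& operator=(const FileHandle&) = delete; // ownership unique
};

void readConfig() {
    FileHandle h("config.txt"); // hypothetical file name
    // ... use the file ...
} // ~FileHandle runs here, whether we return normally or an exception is thrown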
No. When you use new, you create objects off the heap that you must then delete later. In addition, you really need MyClass*. The other form creates an object on the stack that will be automatically destroyed at end of scope.
Suppose I have a class named A:
class A
{
    // ...
};
And what's the difference between the following two approaches to instantiating an object:
int main()
{
    A a;             // 1
    A *pa = new A(); // 2
}
As my current understanding (not sure about this yet):
Approach 1 allocates the object a on the stack frame of the main() function, so this object cannot be deleted manually; that deletion wouldn't make sense (I don't know why yet, could someone explain that?).
Approach 2 allocates the object on the heap of the process, and also an A* variable pa on the stack frame of the main() function, so the object can be deleted and pa can be assigned null after the deletion.
Am I right? If my understanding is correct, could someone tell me why I cannot delete the a object from the stack in approach 1?
Many thanks...
Object a has automatic storage duration, so it will be destroyed automatically at the end of the scope in which it is defined. It doesn't make sense to attempt to delete it manually. Manual deletion is only required for objects with dynamic storage duration, such as *pa, which has been allocated using new.
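Putting both approaches side by side, a minimal sketch:
class A { /* ... */ };

int main() {
    A a;             // automatic storage duration: destroyed automatically
                     // at the end of main; delete is neither needed nor allowed
    A* pa = new A(); // dynamic storage duration: lives until deleted
    delete pa;       // manual deletion required
    pa = nullptr;
    return 0;
}                    // a is destroyed here by the compiler-generated cleanup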
Approach 1: the object's lifetime is limited to the scope the variable is defined in; once you leave the scope, the object is cleaned up. In C++ a scope is delimited by any block between a { and the corresponding }.
Approach 2: here only the pointer is on the stack, not the object, so when you leave the scope only the pointer is cleaned up; the object will still be around somewhere.
As for deleting an object: delete not only calls the destructor of your object but also releases its memory. This cannot work for the stack, because the stack's memory management is automated by the compiler; the heap, in contrast, is not automated and requires calls to new and delete to manage an object's lifetime.
Any object created by a call to new has to be deleted exactly once; forgetting to do so results in a memory leak, as the object's memory will never be released.
Approach 1 declared a variable and created an object. In approach 2, you created an instance and a pointer to it.
EDIT: In approach 1, the object will go out of scope and will be destroyed automatically. In approach 2, the pointer will be destroyed automatically, but not what it is pointing to. That will be your job.
Imagine the stack as void* stack = malloc(1000000);
Now, this memory block is managed internally by the compiler and the CPU. It is a shared piece of memory: every function can use it to store temporary objects. That's called automatic storage. You cannot delete parts of that memory, because its purpose is to be used again and again. If you explicitly deleted memory, it would return to the system, and you don't want that to happen to a shared block.
In a way, automatic objects do get deleted too. When an object goes out of scope, the compiler places an invisible call to the object's destructor and the memory becomes available again.
You cannot delete objects on the stack because it's implemented in memory exactly that way -- as a stack. As you create objects on the stack, they are added on top of each other. As the objects leave scope, they are destroyed from the top, in the opposite order they were created in (add to the top of the stack, and remove from the top of the stack). Trying to call delete on something in the stack would break that order. The analogy would be like trying to pull some paper out of the middle of a stack of papers.
The compiler controls how objects are created and removed on the stack. It can do this because it knows exactly how large each object on the stack is. Since the size of each stack frame is known at compile time, allocating memory for things on the stack is extremely fast, much faster than allocating memory from the heap, which is managed at run time.
Allocation does two things:
1) Allocates memory for the object
2) Calls the constructor on the allocated memory
Deletion does two things:
1) Calls the destructor on the object
2) Deallocates the memory used by the destructed object
When you allocate on the stack (A a;), you're telling the compiler "please make an object for me, by allocating memory, then call the constructor on that memory. And while you're at it, could you handle calling the destructor and freeing the memory, when it goes out of scope? Thanks!". Once the function (main) ends, the object goes out of scope, the destructor is called, and the memory is freed.
When you allocate on the heap (A* pa = new A();), you're telling the compiler "please make an object for me. I know what I'm doing, so don't bother calling the destructor or freeing the memory. I'll tell you when to do it, some other time". Once the function (main) ends, the object you allocated stays in scope, and is not destructed or freed. Hopefully you have a pointer to it stored somewhere else in your program (as in, you copied pa to some other variable with a bigger scope). You're gonna have to tell the compiler to destruct the object and free the memory at some point in the future. Otherwise, you get a memory leak.
Simply put, the "delete" command is only for objects allocated on the heap, because that's the manual memory management interface in C++ - new/delete. It is a command for the heap allocator, and the heap allocator doesn't know anything about stack allocated objects. If you try to call delete on a stack allocated object, you might as well have called it on a random memory address - they're the same thing as far as the heap allocator is concerned. Very much like trying to access an object outside array bounds:
int a[10];
std::cout << a[37] << "\n"; // a[37] points at... ? no one knows!
It just isn't meant to do that :)
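To make that concrete, a small sketch (the second delete is shown commented out because it would be undefined behaviour):
int main() {
    int  stackInt = 5;
    int* heapInt  = new int(5);

    delete heapInt;      // fine: this pointer came from new
    // delete &stackInt; // undefined behaviour: &stackInt never came from new,
                         // so the heap allocator knows nothing about it
    return 0;
}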
Edit:
P.S. Memory leaks are more important when you are allocating memory in a function other than main. When the program ends, leaked memory gets deallocated, so a memory leak in main might not be a big deal, depending on your scenario. However, destructors never get called on leaked objects. If the destructor does something important, like closing a database or a file, then you might have a more serious bug on your hands.
Stack memory is not managed the same way as heap memory. There is no point in deleting objects from the stack: they will be deleted automatically at the end of the enclosing scope or function.