Stack-allocated objects still taking memory after going out of scope? - c++

People always talk about how objects created without the new keyword are destroyed when they go out of scope, but when I think about this, it seems like that's wrong. Perhaps the destructor is called when the variable goes out of scope, but how do we know that it is no longer taking up space in the stack? For example, consider the following:
void DoSomething()
{
    {
        My_Object obj;
        obj.DoSomethingElse();
    }
    AnotherFuncCall();
}
Is it guaranteed that obj will not be saved on the stack when AnotherFuncCall is executed? Because people are always saying it, there must be some truth to what they say, so I assume that the destructor must be called when obj goes out of scope, before AnotherFuncCall. Is that a fair assumption?

You are confusing two different concepts.
Yes, your object's destructor will be called when it leaves its enclosing scope. This is guaranteed by the standard.
No, there is no guarantee that an implementation of the language uses a stack to implement automatic storage (i.e., what you refer to as "stack allocated objects".)
Since most implementations use a fixed-size stack, I'm not even sure what your question is. The stack is typically a fixed-size memory region where moving a pointer is all it takes to "clean up", since that memory will be reused soon enough.
So, since the memory region used to implement the stack is fixed in size, there is no need to zero out the memory your object occupied. It can sit there untouched until it is needed again, no harm done.
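To see this concretely, here is a minimal sketch (my illustration, not from the answer; whether the two addresses actually coincide is entirely implementation-specific, and printing a stale pointer is itself something the standard makes no promises about):

#include <cstdio>

struct MyObject {
    int data = 0;
};

int main()
{
    void* first;
    {
        MyObject obj;   // destructor runs at the closing brace
        first = &obj;   // note: this pointer value is stale afterwards
    }
    MyObject another;   // on many stack-based implementations this lands
                        // in the very same memory region obj occupied
    std::printf("%p\n%p\n", first, static_cast<void*>(&another));
}

On typical compilers the two lines often print the same address: nothing was zeroed; the slot was simply reused.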

I believe it depends on where in the stack the object was created. If it was at the bottom (assuming the stack grows down), then I think the second function may overwrite the destroyed object's space. If the object was in the middle of the stack, then that space is probably wasted, since all the later objects would have to be shifted.

Your stack is not dynamically allocated and deallocated; it's just there. Your objects' constructors and destructors will get called, but you don't get the memory back.

Because people are always saying it, there must be some truth to what they say, so I assume that the destructor must be called when obj goes out of scope, before AnotherFuncCall. Is that a fair assumption?
This is correct. Note that this final question says nothing about a "stack". Whether an implementation uses a stack, or something else, is up to the implementation.

Objects created "on the stack" in local scope have what is called automatic storage duration. The Standard says:
C++03 3.7.2 Automatic storage duration
1/ Local objects explicitly declared auto or register or not
explicitly declared static or extern have automatic storage duration.
The storage for these objects lasts until the block in which they are
created exits.
2/ [Note: these objects are initialized and destroyed as described in
6.7. ]
On the destruction of these objects:
6.7 Declaration statement
2/ Variables with automatic storage duration (3.7.2) are initialized
each time their declaration-statement is executed. Variables with
automatic storage duration declared in the block are destroyed on exit
from the block (6.6).
Hence, according to the Standard, when objects with local scope fall out of scope, the destructor is called and the storage is released.
Whether or not that storage is on a stack, the Standard doesn't say. It just says the storage is released, wherever it might be.
Some architectures don't have stacks in the same sense a PC has. C++ is meant to work on any kind of programmable device. That's why it never mentions anything about stacks, heaps, etc.
On a typical PC-type platform running Windows and user-mode code, these automatic variables are stored on a stack. These stacks are fixed-size, and are created when the thread starts. As they become instantiated, they take up more of the space on the stack, and the stack pointer moves. If you allocate enough of these variables, you will overflow the stack and your program will die an ugly death.
Try running this on a Windows PC and see what happens for an example:
int main()
{
    int boom[10000000];
    for( int* it = &boom[0]; it != &boom[sizeof(boom)/sizeof(boom[0])]; ++it )
        *it = 42;
}

What people say is indeed true: the object's bytes still remain at the same memory location, but the way the stack works means the object no longer takes up any space on it.
What usually happens when memory is allocated on the stack is that the stack pointer is decremented by sizeof(type); when the variable goes out of scope and the object is freed, the stack pointer is incremented, thus shrinking the effective size of the data allocated on the stack. Indeed, the data still resides at its original address; it is not wiped or deleted at all.
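A hedged illustration of that mechanism (the assembly in the comments is only the typical x86-64 shape a compiler might emit; the real output depends on compiler, flags, and ABI):

int square(int x)
{
    int result = x * x;   // 'result' occupies a slot at some offset
    return result;        // from the stack pointer within this frame
}

// A compiler might emit something shaped like:
//
//   square:
//       sub  rsp, 16    ; "allocate" locals: move the stack pointer down
//       ...             ; compute x*x, store at [rsp + offset]
//       add  rsp, 16    ; "free" locals: move it back up
//       ret
//
// Nothing overwrites the slot on exit; the bytes simply become reusable.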
And just to clarify, the C++ standard says absolutely nothing about this! The C++ standard is completely unaware of anything called stack or heap in sense of memory allocation because they are platform specific implementation details.

Your local variables on the stack do not take extra memory. The system reserves memory for each thread's stack, and the variables on the stack just use part of it. After they run out of scope, the compiler can reuse the same part of the stack for other variables (used later in the same function).

how do we know that it is no longer taking up space in the stack?
We don't. There are ways to see whether they do or don't, but those are architecture- and ABI-specific. Generally, functions do pop whatever they pushed onto the stack when they return control to the caller. What C++ guarantees is that it will call the destructors of objects when they leave scope (though some older compilers, like MSVC 6, had terrible bugs and at times did not).
Is it guaranteed that obj will not be saved on the stack when AnotherFuncCall is executed?
No. It is up to the compiler to decide when and how to push and pop stack frames as long as that way complies with ABI requirements.

The question "Is something taking up space in the stack" is a bit of a loaded question, because in reality, there is no such thing as free space (at a hardware level.) A lot of people (myself included, at one point) thought that space on a computer is freed by actually clearing it, i.e. changing the data to zeroes. However, this is actually not the case, as doing so would be a lot of extra work. It takes less time to do nothing to memory than it does to clear it. So if you don't need to clear it, don't! This is true for the stack as well as files you delete from your computer. (Ever noticed that "emptying the recycle bin" takes less time than copying those same files to another folder? That's why - they're not actually deleted, the computer just forgets where they're stored.)
Generally speaking, most hardware stacks are implemented with a stack pointer, which tells the CPU where the next empty slot in the stack is. (Or the most recent item pushed on the stack, again, this depends on the CPU architecture.)
When you enter a function, the assembly code subtracts from the stack pointer to create enough room for your local variables, etc. Once the function ends, and you exit scope, the stack pointer is increased by the same amount it was originally decreased, before returning. This increasing of the stack pointer is what is meant by "the local variables on the stack have been freed." It's less that they've been freed and more like "the CPU is now willing to overwrite them with whatever it wants to without a second thought."
Now you may be asking: if our local variables from a previous scope still exist, why can't we use them? The reason is that there's no guarantee they'll still be there between the time you exit scope and the time you try to read them again.
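A minimal sketch of why (this deliberately exhibits undefined behavior for demonstration; all names are illustrative):

#include <cstdio>

int* g_stale = nullptr;   // will hold a pointer into dead stack space

void escape()
{
    int local = 42;
    g_stale = &local;     // the address goes stale when escape() returns
}

void clobber()
{
    int noise[16] = {7};  // likely reuses the stack space 'local' occupied
    (void)noise;
}

int main()
{
    escape();
    clobber();
    // Undefined behavior: might print 42, garbage, or crash, depending
    // entirely on what the implementation did with that stack space.
    std::printf("%d\n", *g_stale);
}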

Related

Is the compiler allowed to not retract the stack pointer when an object on the stack goes out of scope?

I'm using a Raspberry Pi Pico, which has two cores, both with a 4KB stack, with core0's on top of core1's so that core0 gets to have 8KB of stack in single-threaded apps.
The gist of the issue sparking this question is as follows:
// Do stuff
{
    uint8_t buffer[4096];
    // Use buffer (for flash IO)
}
MyObject myObject = buildMyObject();
multicore_launch_core1(core1_entry); // Will allocate on its stack
// Use myObject
Here we allocate 4KB on the stack "while we have 8KB of stack". Then we make it go out of scope. Then we allocate another object on the stack. We then launch core1.
At this point, the bottom 4KB of the stack still belong to core0, the top 4KB now belong to core1. Core1 starts using them. We then use the previously allocated object.
I expect myObject to be in the first 4KB, because I expect buffer going out of its explicit scope to increase the stack pointer by 4KB immediately with regards to control flow.
This isn't what happens on GCC 10.3.1 arm-none-eabi. The 4KB of stack taken by buffer stay there, never to be given back until the enclosing scope (same as myObject's) ends. Which of course, results in myObject being allocated in core1's stack-to-be. Chaos ensues.
This sounds counterintuitive to me and, in the context of embedded programming where we might not even have a heap, harmful.
Is this a compiler bug? Or does the standard allow this to happen?
Is the compiler allowed to not retract the stack pointer when an object on the stack goes out of scope?
Since this is tagged language-lawyer: Neither the C nor the C++ standard makes any guarantees about the layout and location of memory. They don't have any real concept of a stack either. (C++ does have a concept of "stack unwinding", but it doesn't actually require a stack in the memory sense, and C++23 adds support for stacktraces, which likewise has no concept of memory addresses.)
There is also no standard-approved way of actually depending on the memory location chosen for variables. It is fundamentally impossible to get from a pointer to one of them to a pointer to another (without taking the address of the latter with & first and storing the result somewhere in an object reachable from the former). The compiler can assume that individual variables are completely independent in terms of their memory location and that they cannot be messed with from anything external. It can (and does) for example reorder the location of variables on the stack in whatever way deemed suitable for optimization. It may also add padding, etc. It may decide arbitrarily to reuse storage of variables whose storage duration has ended, but it doesn't have to either.
Everything you are doing that allows you to do context switches or the like is completely outside the standard's specification and dependent on the C++ implementation, i.e. compiler, architecture, etc.
For your use case it seems that you likely want to write inline assembly (also a compiler-specific extension) so that you have control over where your data is located in memory. Alternatively there may be other compiler-specific extensions such as attributes to help with that.
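One workaround sketch (my suggestion, not from the answer): move the large buffer into its own non-inlined function, since frame teardown on function return is how essentially every ABI reclaims automatic storage. __attribute__((noinline)) is a GCC/Clang extension, doFlashIO and run are illustrative names, and the commented calls come from the question:

#include <cstdint>

__attribute__((noinline)) static void doFlashIO()
{
    std::uint8_t buffer[4096];
    // ... use buffer (for flash IO) ...
    buffer[0] = 0;   // token use so the array isn't optimized away
}

void run()
{
    doFlashIO();   // the 4KB frame exists only for the duration of this call
    // MyObject myObject = buildMyObject();
    // multicore_launch_core1(core1_entry);
    // ... use myObject ...
}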

Memory allocation and constructor

I am sorry if this has been asked before or is explicitly stated in the standard, but I have failed to find whether the memory for objects with automatic storage is allocated at the beginning of the enclosing block or immediately before the constructor executes.
I am asking because https://en.cppreference.com/w/cpp/language/storage_duration says the following.
Storage duration
All objects in a program have one of the following storage durations:
automatic storage duration. The storage for the object is allocated at the beginning of the enclosing code block and deallocated at the end. All local objects have this storage duration, except those declared static, extern or thread_local.
Now, does this mean that the storage space is allocated even when the constructor is not invoked for some reason?
For example, I have something like that.
{
    if(somecondition1) throw something;
    MyHugeObject o{};
    // do something
}
So there is a chance that MyHugeObject never needs to be constructed, yet according to the source I've cited, the memory for it is still allocated. Is that the case, or is it implementation-dependent?
First of all, from a language-standard perspective, you cannot access the object's storage outside of the lifetime of the object. Before the object is created, you do not know where the object is located, and after it has been destructed, accessing the storage yields undefined behavior. In short: a conforming C++ program cannot observe when the storage is allocated.
Automatic storage typically means "on the call stack". I.e., allocation happens by decrementing the stack pointer, and deallocation happens by re-incrementing it. A compiler could emit code that does the stack pointer adjustments exactly where the lifetime of the object starts/ends, but this is inefficient: it would clutter the generated code with two extra instructions for each object that is used. This is especially a problem for objects that are created in a loop: the stack pointer would jump back and forth between two or more positions constantly.
To improve efficiency, compilers huddle all possible object allocations together into a single stack frame allocation: The compiler assigns an offset to each variable within the function, determines the max. size that is required to store all the variables that are present within the function, and allocates all the memory with a single stack pointer decrement instruction at the start of the function execution. Cleanup is then the respective stack pointer increment. This removes any allocation/deallocation overhead from loops as the variables in the next iteration will simply reuse the same spot within the stack frame as the previous iteration used. This is an important optimization, for many loops declare at least one variable.
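A small sketch of the effect (the printed address is implementation-specific, but with a single up-front frame allocation it typically repeats across iterations):

#include <cstdio>

int main()
{
    for (int i = 0; i < 3; ++i) {
        int x = i * i;   // conceptually a fresh variable each iteration
        // With one frame allocated at function entry, &x is usually
        // identical every time: the same slot is simply reused.
        std::printf("iteration %d: x = %d at %p\n",
                    i, x, static_cast<void*>(&x));
    }
}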
The C++ standard does not care. Since use of the storage outside of an object's lifetime is UB, the compiler is free to do with the storage whatever it pleases to do. Programmers should not care as well, but they do tend to care about their programs execution times. And that's what most compilers optimize for by using stack frame allocation.
The moment at which the memory is reclaimed from the system is implementation-dependent. The only thing mandated by the standard is the moment when the constructor is called and when the object can safely be used.
Common implementations use a stack for automatic storage duration objects, and most of the time allocate a whole frame at the beginning of a block and pop it at the end of the block. Even if stack operations are fast, it is simpler to limit their number, and simpler means more robust.
But anyway, even using a stack for automatic storage duration is not mandated by the standard, let alone the moment when frames are allocated on and popped from that stack.
The C++ standard has the following to say about it in [basic.stc]:
2 Static, thread, and automatic storage durations are associated with objects introduced by declarations (6.1) and implicitly created by the implementation (6.6.7).
This 6.6.7 reference refers to [class.temporary], which is about temporaries. Temporaries aren't quite the same concept, but that section has this to say:
2 The materialization of a temporary object is generally delayed as long as possible in order to avoid creating unnecessary temporary objects.
I haven't found anything else that would address your question, so the standard appears to give the implementation some leeway as to when storage is allocated for the object.
Note this does not apply to when the object is initialized - that happens when the declaration statement is executed, as per [stmt.dcl] :
2 Variables with automatic storage duration (6.6.5.3) are initialized each time their declaration-statement is executed. Variables with automatic storage duration declared in the block are destroyed on exit from the block (8.6).
The cppreference link you mentioned likely discusses a typical implementation, where objects with automatic storage duration are allocated on the stack. In such implementations, it makes sense to allocate storage at the start of an enclosing block (it's just a simple (in/de)crement of the stack pointer after all, and grouping them is beneficial).
If you want to avoid allocating storage for a huge object when it is not needed, restructuring the code is an option. On some implementations, introducing an additional block scope will achieve that:
{
    if(somecondition1) throw something;
    {
        MyHugeObject o{};
        // do something
    }
}
On other implementations, other approaches might be needed. @DanielLangr's comment below describes implementations where the allocation happens at the start of the enclosing function, rather than at the start of the block.

Why can't you free variables on the stack?

The languages in question are C/C++.
My prof said to free memory on the heap when you're done using it, because otherwise you can end up with memory that can't be accessed. The problem with this is you might end up with all your memory used up and be unable to access any of it.
Why doesn't the same concept apply to the stack? I understand that you can always access the memory you used on the stack, but if you keep creating new variables, you will eventually run out of space right? So why can't you free variables on the stack to make room for new variables like you can on the heap?
I get that the compiler frees variables on the stack, but that's at the end of the scope of the variable, right? Doesn't it also free a variable on the heap at the end of its scope? If not, why not?
Dynamically allocated objects ("heap objects" in colloquial language) are never variables. Thus, they can never go out of scope. They don't live inside any scope. The only way you can handle them is via a pointer that you get at the time of allocation.
(The pointer is usually assigned to a variable, but that doesn't help.)
To repeat: Variables have scope; objects don't. But many objects are variables.
And to answer the question: You can only free objects, not variables.
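A short sketch of the distinction:

#include <string>

int main()
{
    std::string s = "hi";               // 's' is a variable; it names an object
    std::string* p = new std::string;   // 'p' is a (pointer) variable; the
                                        // string it points to is an unnamed
                                        // heap object with no scope at all
    delete p;                           // we free the object, never the variable
}   // scope ends: 's' and the object it names go away automatically; the
    // variable 'p' ends too, but deleting its pointee was our job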
The closing "}" brace is where the stack "frees" its memory. So if I have:
{
    int a = 1;
    int b = 2;
    {
        int c = 3; // c gets "freed" at this "}" - the stack shrinks
                   // and c is no longer on the stack.
    }
} // a and b are "freed" from the stack at this last "}".
You can think of c as being "higher up" on the stack than "a" and "b", so c is getting popped off before them. Thus, every time you write a "}" symbol, you are effectively shrinking the stack and "freeing" data.
There are already nice answers, but I think you might need some more clarification, so I'll try to make this a more detailed answer and also try to keep it simple (if I manage to). If something isn't clear (I'm not a native English speaker and sometimes have trouble formulating answers), just ask in the comments. I'm also going to use the Variables vs. Objects idea that Kerrek SB uses in his answer.
To make that clearer, I consider Variables to be named Objects, with an Object being something that stores data within your program.
Variables on the stack have automatic storage duration: they automatically get destroyed and their storage reclaimed once their scope ends.
{
    std::string first_words = "Hello World!";
    // do some stuff here...
} // first_words goes out of scope and the memory gets reclaimed.
In this case first_words is a Variable (since it has its own name), which means it is also an Object.
Now what about the heap? Let's describe what you might consider "something on the heap" as a Variable pointing to a memory location on the heap where an Object is located. These things have what's called dynamic storage duration.
{
    std::string * dirty = nullptr;
    {
        std::string * ohh = new std::string{"I don't like this"}; // ohh is a std::string* and a Variable.
                                                                  // The actual std::string is only an unnamed
                                                                  // Object on the heap.
        // do something here
        dirty = ohh; // dirty points to the same memory location as ohh does now.
    } // ohh goes out of scope and gets destroyed since it is a Variable.
      // The actual std::string Object on the heap doesn't get destroyed.
    std::cout << *dirty << std::endl; // Will work since the std::string on the heap that dirty points to
                                      // is still there.
    delete dirty;    // now the object being pointed to gets destroyed and the memory reclaimed
    dirty = nullptr; // we can still access dirty since it's still in its scope.
} // dirty goes out of scope and gets destroyed.
As you can see, Objects don't adhere to scopes, and you have to manage their memory manually. That's also a reason why most people prefer to use wrappers around them. See for example std::string, which is a wrapper around a dynamic string.
Now to clarify some of your questions:
Why can't we destroy objects on the stack?
Easy answer: Why would you want to?
Detailed answer: It would be destroyed by you and then destroyed again once it leaves its scope, which isn't allowed. Also, you should generally only have variables in your scope that you actually need for your computation, and how would you destroy one if you actually need it to finish the computation? If you really only need a variable for a short time within a computation, you can just open a smaller scope with { } so that it gets destroyed automatically once it isn't needed anymore.
Note: If you got a lot of variables that you only need for a small part of your computation it might be a hint that that part of the computation should be in its own function/scope.
From your comments: Yeah I get that, but thats at the end of the scope of the variable right. Doesn't it also free a variable on the heap at the end of its scope?
They don't. Objects on the heap have no scope; you can pass their address out of a function and they still persist. The pointer pointing to one can go out of scope and be destroyed, but the Object on the heap still exists and you can't access it anymore (a memory leak). That's also why it's called manual memory management, and why most people prefer wrappers that destroy the object automatically when it isn't needed anymore. See std::string and std::vector as examples.
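A minimal sketch of such a wrapper in action, using std::unique_ptr (assumes C++14 for std::make_unique):

#include <iostream>
#include <memory>
#include <string>

int main()
{
    {
        // The string lives on the heap, but its lifetime is tied to the
        // scope of the owning unique_ptr variable.
        auto s = std::make_unique<std::string>("scoped heap string");
        std::cout << *s << '\n';
    }   // unique_ptr's destructor runs here and deletes the heap object
}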
From your comments: Also how can you run out of memory on a computer? An int takes up like 4 bytes, most computers have billions of bytes of memory... (excluding embedded systems)?
Well, computer programs don't always just hold a few ints. Let me just answer with a little "fake" quote:
640K [of computer memory] ought to be enough for anybody.
But that isn't enough, as we all know. And how much memory is enough? I don't know, but certainly not what we have now. There are many algorithms, problems and other things that need tons of memory. Just think about computer games: could we make "bigger" games if we had more memory? You can always make something bigger with more resources, so I don't think there's any limit where we can say it's enough.
So why can't you free variables on the stack to make room for new variables like you can on the heap?
All the information the "stack allocator" knows is ESP, which is a pointer to the bottom of the stack:
N: used
N-1: used
N-2: used
N-3: used <- **ESP**
N-4: free
N-5: free
N-6: free
...
That makes "stack allocation" very efficient: just decrease ESP by the size of the allocation. Plus, it is locality/cache-friendly.
If you allowed arbitrary deallocations of different sizes, that would turn your "stack" into a "heap", with all the associated additional overhead. ESP would not be enough, because you would have to remember which space is deallocated and which is not:
N: used
N-1: free
N-2: free
N-3: used
N-4: free
N-5: used
N-6: free
...
Clearly, ESP is no longer enough. And you would also have to deal with fragmentation problems.
I get that the compiler frees variables on the stack, but thats at the end of the scope of the variable right. Doesn't it also free a variable on the heap at the end of its scope? If not, why not?
One of the reasons is that you don't always want that: sometimes you want to return allocated data to the caller of your function, and that data should outlive the scope where it was created.
That said, if you really need scope-based lifetime management for "heap"-allocated data (and most of the time it is scope-based, indeed), it is common practice in C++ to use wrappers around such data. One example is std::vector:
{
    std::vector<int> x(1024); // internally allocates array of 1024 ints on heap
    // use x
    // ...
} // at the end of the scope the destructor of x is called automatically,
  // which does the deallocation
Read about function calls: each call pushes arguments and the return address onto the stack; the function pops data from the stack and eventually pushes its result.
In general, the stack is managed by the OS, and yes, it can be depleted. Just try something like this:
int main(int argc, char **argv)
{
    int table[1000000000];
    return 0;
}
That should end quickly enough.
Local variables on the stack don't actually get freed. The registers pointing at the current stack are just moved back up and the stack "forgets" about them. And yes, you can occupy so much stack space that it overflows and the program crashes.
Variables on the heap do get freed automatically - by the operating system, when the program exits. If you do
int x;
for(x=0; x<=99999999; x++) {
    int* a = malloc(sizeof(int));
}
the value of a keeps getting overwritten, and the place in the heap it pointed to is lost. This memory is NOT freed, because the program doesn't exit. This is called a "memory leak". Eventually, you will use up all the memory on the heap, and the program will crash.
The heap is managed by code: Deleting a heap allocation is done by calling the heap manager. The stack is managed by hardware. There is no manager to call.

main function stack size

Max function stack size is limited and can be quickly exhausted if we use big stack variables or get careless with recursive functions.
But main's stack isn't really a stack: main is always called exactly once and never recursively. To all intents and purposes, main's stack frame is static storage, allocated at the very beginning and alive until the very end. Does that mean I can allocate big arrays in main's stack frame?
int main()
{
    double a[5000000];
}
main is just a normal function, and stack size is system-dependent.
Also remember that your process has only one stack, shared by all function calls. Items are pushed onto and popped from the stack as functions are called from main.
It's implementation-defined (the language standard doesn't talk about stacks, AFAIK). But typically, main lives on the stack just like any other function.
It's 100% compiler and system dependent, like most of this kind of funny business. Heck, even the existence of the stack isn't mandated by the standard.
In practice, yes, it's on the stack, and no, you can't allocate things like that on the stack without running into trouble.
When you allocate an array in that manner, it is allocated on the stack. There is a platform-dependent maximum size the stack can grow to. And yes, you've exceeded it.
On second thought, I just remembered: it can be called recursively. Check out this obfuscated code:
http://en.wikipedia.org/wiki/Obfuscated_code
It calls main many times and works wonders :) It's a fun link anyway. So, it's definitely stack-allocated, sorry about that!
The stack is something that is used by all functions - the way you've worded your question suggests that each function is given a stack which is not the case.
Stack usage grows with each function call - main() being the first. The allocation that you used in your example is just as bad as making a stack allocation in another function.
For most modern systems, there is no real reason the stack size needs to be limited. You can probably adjust an operating system parameter and that program will work fine. (As will any that allocates an equal amount of data on the stack, main or not.)
However, if you really want an object with a lifetime equal to the duration of the program, create a global variable instead of a local inside main. Most platforms do not artificially limit the size of global objects — they can usually be as large as the memory map allows.
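For example, a sketch of the question's array moved to static storage:

// 40 MB in static storage rather than on main's stack; most platforms
// accept this where the automatic version would overflow the stack.
double a[5000000];

int main()
{
    a[0] = 1.0;   // touch the array so it is actually used
    return 0;
}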
By the way, main is not active for the duration of a C++ program. It may be preceded by construction of global objects and followed by destruction of same and atexit handlers.

Stack, Static, and Heap in C++

I've searched, but I haven't understood these three concepts very well. When do I have to use dynamic allocation (in the heap), and what is its real advantage? What are the problems of static and stack? Could I write an entire application without allocating variables in the heap?
I heard that others languages incorporate a "garbage collector" so you don't have to worry about memory. What does the garbage collector do?
What could you do manipulating the memory by yourself that you couldn't do using this garbage collector?
Once someone said to me that with this declaration:
int * asafe=new int;
I have a "pointer to a pointer". What does it mean? It is different of:
asafe=new int;
?
A similar question was asked, but it didn't ask about statics.
Summary of what static, heap, and stack memory are:
A static variable is basically a global variable, even if you cannot access it globally. Usually there is an address for it that is in the executable itself. There is only one copy for the entire program. No matter how many times you go into a function call (or class) (and in how many threads!) the variable is referring to the same memory location.
The heap is a bunch of memory that can be used dynamically. If you want 4 KB for an object, then the dynamic allocator will look through its list of free space in the heap, pick out a 4 KB chunk, and give it to you. Generally, the dynamic memory allocator (malloc, new, etc.) starts at the end of memory and works backwards.
Explaining how a stack grows and shrinks is a bit outside the scope of this answer, but suffice to say you always add and remove from the end only. Stacks usually start high and grow down to lower addresses. You run out of memory when the stack meets the dynamic allocator somewhere in the middle (but refer to physical versus virtual memory and fragmentation). Multiple threads will require multiple stacks (the process generally reserves a minimum size for the stack).
When you would want to use each one:
Statics/globals are useful for memory that you know you will always need and you know that you don't ever want to deallocate. (By the way, embedded environments may be thought of as having only static memory... the stack and heap are part of a known address space shared by a third memory type: the program code. Programs will often do dynamic allocation out of their static memory when they need things like linked lists. But regardless, the static memory itself (the buffer) is not itself "allocated", but rather other objects are allocated out of the memory held by the buffer for this purpose. You can do this in non-embedded as well, and console games will frequently eschew the built in dynamic memory mechanisms in favor of tightly controlling the allocation process by using buffers of preset sizes for all allocations.)
Stack variables are useful when you know that as long as the function is in scope (on the stack somewhere), you will want the variables to remain. Stacks are nice for variables that you need for the code where they are located but that aren't needed outside that code. They are also really nice when you are accessing a resource, like a file, and want the resource to go away automatically when you leave that code.
Heap allocation (dynamically allocated memory) is useful when you want to be more flexible than the above. Frequently, a function gets called to respond to an event (the user clicks the "create box" button). The proper response may require allocating a new object (a new Box object) that should stick around long after the function has exited, so it can't be on the stack. But you don't know how many boxes you will want at the start of the program, so it can't be static.
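A hedged sketch of that scenario (Box, boxes, and on_create_box_clicked are illustrative names, not any real API):

#include <memory>
#include <vector>

struct Box { int width = 0, height = 0; };

// Owns heap-allocated Boxes; how many there will be is unknown up front.
std::vector<std::unique_ptr<Box>> boxes;

void on_create_box_clicked()
{
    // The new Box must outlive this handler, so it cannot be automatic;
    // the count is unknown at compile time, so it cannot be static.
    boxes.push_back(std::make_unique<Box>());
}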
Garbage Collection
I've heard a lot lately about how great Garbage Collectors are, so maybe a bit of a dissenting voice would be helpful.
Garbage Collection is a wonderful mechanism for when performance is not a huge issue. I hear GCs are getting better and more sophisticated, but the fact is, you may be forced to accept a performance penalty (depending upon use case). And if you're lazy, it still may not work properly. At the best of times, a Garbage Collector reclaims your memory when it realizes there are no more references to it (see reference counting). But if you have an object that refers to itself (possibly by referring to another object which refers back), then reference counting alone will not indicate that the memory can be deleted. In this case, the GC needs to look at the entire reference soup and figure out if there are any islands that are only referred to by themselves. Offhand, I'd guess that to be an O(n^2) operation, but whatever it is, it can get bad if you are at all concerned with performance. (Edit: Martin B points out that it is O(n) for reasonably efficient algorithms. That is still O(n) too much if you are concerned with performance and can deallocate in constant time without garbage collection.)
Personally, when I hear people say that C++ doesn't have garbage collection, my mind tags that as a feature of C++, but I'm probably in the minority. Probably the hardest thing for people to learn about programming in C and C++ are pointers and how to correctly handle their dynamic memory allocations. Some other languages, like Python, would be horrible without GC, so I think it comes down to what you want out of a language. If you want dependable performance, then C++ without garbage collection is the only thing this side of Fortran that I can think of. If you want ease of use and training wheels (to save you from crashing without requiring that you learn "proper" memory management), pick something with a GC. Even if you know how to manage memory well, it will save you time which you can spend optimizing other code. There really isn't much of a performance penalty anymore, but if you really need dependable performance (and the ability to know exactly what is going on, when, under the covers) then I'd stick with C++. There is a reason that every major game engine that I've ever heard of is in C++ (if not C or assembly). Python, et al are fine for scripting, but not the main game engine.
The following is of course all not quite precise. Take it with a grain of salt when you read it :)
Well, the three things you refer to are automatic, static and dynamic storage duration, which has something to do with how long objects live and when they begin life.
Automatic storage duration
You use automatic storage duration for short lived and small data, that is needed only locally within some block:
if(some condition) {
    int a[3]; // array a has automatic storage duration
    fill_it(a);
    print_it(a);
}
The lifetime ends as soon as we exit the block, and it starts as soon as the object is defined. This is the simplest kind of storage duration, and it is far faster than dynamic storage duration in particular.
Static storage duration
You use static storage duration for free variables that might be accessed by any code at any time, if their scope allows such usage (namespace scope); for local variables that need to extend their lifetime across exits from their scope (local scope); and for member variables that need to be shared by all objects of their class (class scope). Their lifetime depends on the scope they are in, which can be namespace scope, local scope, or class scope. What is true of all of them is that once their life begins, it ends at the end of the program. Here are two examples:
// static storage duration. in global namespace scope
string globalA;
void foo(); // forward declaration so the calls below compile

int main() {
    foo();
    foo();
}

void foo() {
    // static storage duration. in local scope
    static string localA;
    localA += "ab";
    cout << localA;
}
The program prints ababab, because localA is not destroyed upon exit of its block. You can say that objects that have local scope begin lifetime when control reaches their definition. For localA, it happens when the function's body is entered. For objects in namespace scope, lifetime begins at program startup. The same is true for static objects of class scope:
class A {
    static string classScopeA;
};
string A::classScopeA;

A a, b; // &a.classScopeA == &b.classScopeA == &A::classScopeA
As you see, classScopeA is not bound to particular objects of its class, but to the class itself. The addresses of all three names above are the same, and all denote the same object. There are special rules about when and how static objects are initialized, but let's not concern ourselves with that now; that's what's meant by the term static initialization order fiasco.
Dynamic storage duration
The last storage duration is dynamic. You use it if you want to have objects live on another isle, and you want to put pointers around that reference them. You also use them if your objects are big, and if you want to create arrays of size only known at runtime. Because of this flexibility, objects having dynamic storage duration are complicated and slow to manage. Objects having that dynamic duration begin lifetime when an appropriate new operator invocation happens:
void foo(string *s); // forward declaration

int main() {
    // the object that s points to has dynamic storage
    // duration
    string *s = new string;
    // pass a pointer pointing to the object around.
    // the object itself isn't touched
    foo(s);
    delete s;
}

void foo(string *s) {
    cout << s->size();
}
Their lifetime ends only when you call delete on them. If you forget that, those objects never end their lifetime, and class objects that define a user-declared constructor won't have their destructors called. Objects with dynamic storage duration require manual handling of their lifetime and the associated memory resource. Libraries exist to ease their use. Explicit garbage collection for particular objects can be established by using a smart pointer:
void foo(shared_ptr<string> s); // forward declaration

int main() {
    shared_ptr<string> s(new string);
    foo(s);
}

void foo(shared_ptr<string> s) {
    cout << s->size();
}
You don't have to care about calling delete: The shared ptr does it for you, if the last pointer that references the object goes out of scope. The shared ptr itself has automatic storage duration. So its lifetime is automatically managed, allowing it to check whether it should delete the pointed to dynamic object in its destructor. For shared_ptr reference, see boost documents: http://www.boost.org/doc/libs/1_37_0/libs/smart_ptr/shared_ptr.htm
It's all been said elaborately above, so here is just "the short answer":
static variable (class)
lifetime = program runtime (1)
visibility = determined by access modifiers (private/protected/public)
static variable (global scope)
lifetime = program runtime (1)
visibility = the compilation unit it is instantiated in (2)
heap variable
lifetime = defined by you (new to delete)
visibility = defined by you (whatever you assign the pointer to)
stack variable
visibility = from declaration until scope is exited
lifetime = from declaration until declaring scope is exited
(1) more exactly: from initialization until deinitialization of the compilation unit (i.e. C / C++ file). Order of initialization of compilation units is not defined by the standard.
(2) Beware: if you instantiate a static variable in a header, each compilation unit gets its own copy.
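A sketch of pitfall (2) (file names illustrative):

// header.h
static int counter = 0;   // internal linkage: every .cpp that includes
                          // this header gets its OWN distinct 'counter'

// a.cpp: #include "header.h"   -> increments only its private copy
// b.cpp: #include "header.h"   -> sees an unrelated copy, still 0
//
// Since C++17, 'inline int counter = 0;' in the header gives one shared
// variable instead.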
The main difference is speed and size.
Stack
Dramatically faster to allocate. It is done in O(1), since it is allocated when setting up the stack frame, so it is essentially free. The drawback is that if you run out of stack space you are in deep trouble. You can adjust the stack size, but, IIRC, you have ~2MB to play with. Also, as soon as you exit the function everything on the stack is cleared. So, it can be problematic to refer to it later. (Pointers to stack allocated objects lead to bugs.)
Heap
Dramatically slower to allocate. But, you have GB to play with, and point to.
Garbage Collector
The garbage collector is some code that runs in the background and frees memory. When you allocate memory on the heap it is very easy to forget to free it, which is known as a memory leak. Over time, the memory your application consumes grows and grows until it crashes. Having a garbage collector periodically free the memory you no longer need helps eliminate this class of bugs. Of course, this comes at a price, as the garbage collector slows things down.
What are the problems of static and stack?
The problem with "static" allocation is that the allocation is made at compile-time: you can't use it to allocate some variable number of data, the number of which isn't known until run-time.
The problem with allocating on the "stack" is that the allocation is destroyed as soon as the subroutine which does the allocation returns.
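A small sketch of that limitation and the dynamic alternative:

#include <cstddef>
#include <vector>

std::vector<int> make_buffer(std::size_t n)
{
    // static int fixed[n];       // ill-formed: a static (or automatic)
    //                            // array needs a compile-time constant size
    return std::vector<int>(n);   // heap-backed, sized at run time, and
                                  // its storage survives this function's return
}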
Could I write an entire application without allocating variables in the heap?
Perhaps, but not a non-trivial, normal, big application (though so-called "embedded" programs might be written without the heap, using a subset of C++).
What does the garbage collector do?
It keeps watching your data ("mark and sweep") to detect when your application is no longer referencing it. This is convenient for the application, because the application doesn't need to deallocate the data ... but the garbage collector might be computationally expensive.
Garbage collectors aren't a usual feature of C++ programming.
What could you do manipulating the memory by yourself that you couldn't do using this garbage collector?
Learn the C++ mechanisms for deterministic memory deallocation:
'static': never deallocated
'stack': as soon as the variable "goes out of scope"
'heap': when the pointer is deleted (explicitly deleted by the application, or implicitly deleted within some-or-other subroutine)
Stack memory allocation (function variables, local variables) can be problematic when your stack is too "deep" and you overflow the memory available to stack allocations. The heap is for objects that need to be accessed from multiple threads or throughout the program lifecycle. You can write an entire program without using the heap.
You can leak memory quite easily without a garbage collector, but you can also dictate when objects and memory are freed. I have run into issues with Java when it runs the GC while I have a real-time process, because the GC is an exclusive thread (nothing else can run). So if performance is critical and you can guarantee there are no leaked objects, not using a GC is very helpful. Otherwise it just makes you hate life when your application consumes memory and you have to track down the source of a leak.
What if your program does not know upfront how much memory to allocate (so you cannot use stack variables)? Take linked lists: a list can grow without its size being known upfront. Allocating on the heap makes sense for a linked list when you don't know how many elements will be inserted into it.
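A minimal sketch of that pattern (names illustrative):

#include <memory>

struct Node {
    int value = 0;
    std::unique_ptr<Node> next;   // each node owns the next, on the heap
};

// Grows by one heap node per call; the total count never needs to be known.
void push_front(std::unique_ptr<Node>& head, int v)
{
    auto n = std::make_unique<Node>();
    n->value = v;
    n->next = std::move(head);
    head = std::move(n);
}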
An advantage of GC in some situations is an annoyance in others; reliance on GC encourages not thinking much about it. In theory, it waits until an 'idle' period, or until it absolutely must run, at which point it steals bandwidth and causes response latency in your app.
But you don't have to 'not think about it.' Just as with everything else in multithreaded apps, when you can yield, you can yield. So for example, in .Net, it is possible to request a GC; by doing this, instead of less frequent longer running GC, you can have more frequent shorter running GC, and spread out the latency associated with this overhead.
But this defeats the primary attraction of GC which appears to be "encouraged to not have to think much about it because it is auto-mat-ic."
If you were first exposed to programming before GC became prevalent and were comfortable with malloc/free and new/delete, then it might even be the case that you find GC a little annoying and/or are distrustful of it (as one might be distrustful of 'optimization', which has had a checkered history). Many apps tolerate random latency. But for apps that don't, where random latency is less acceptable, a common reaction is to eschew GC environments and move in the direction of purely unmanaged code (or, god forbid, a long-dying art, assembly language).
I had a summer student here a while back, an intern, smart kid, who was weaned on GC; he was so adamant about the superiority of GC that even when programming in unmanaged C/C++ he refused to follow the malloc/free and new/delete model because, quote, "you shouldn't have to do this in a modern programming language." And you know what? For tiny, short-running apps you can indeed get away with that, but not for long-running performant apps.
The stack is memory set aside by the compiler: whenever we compile a program, the compiler by default reserves some memory from the OS (we can change the amount in the compiler settings of the IDE), and the OS is the one that actually gives you the memory; it depends on the available memory on the system, among other things. Stack memory is used when we declare variables: copies of them (formal parameters are passed as copies) are pushed onto the stack, following a calling convention, by default CDECL in Visual Studio.
For example, with the infix expression:
c = a + b;
the pushing is done right to left: b is pushed onto the stack, then the operator, then a, and the result c is pushed onto the stack.
In prefix notation:
=+cab
here all the variables are pushed onto the stack first (right to left), and then the operations are performed.
The memory set aside by the compiler is fixed. So let's assume 1MB of memory is allocated to our application; say the variables use 700KB of it (all local variables are pushed onto the stack unless they are dynamically allocated), then the remaining 324KB is left for the heap.
The stack also has a shorter lifetime: when the scope of a function ends, its stack frame gets cleared.