I've just begun learning C++ and I wanted to get my head around the different ways to create variables and what the different keywords mean. I couldn't find any description that really went through it, so I wrote this to try to understand what's going on. Have I missed anything? Am I wrong about anything?
Global Variables
Global variables are stored neither on the heap nor stack.
static global variables are non-exported (standard global variables can be accessed with extern, static globals cannot)
Dynamic Variables
Any variable that is accessed with a pointer is stored on the heap.
Heap variables are allocated with the new keyword, which returns a pointer to the memory address on the heap.
The pointer itself is a standard stack variable.
Variables inside {} that aren't created with new
Stored in the stack, which is limited in size so it should be used only for primitives and small data structures.
static keyword means the variable is essentially global and stored in the same memory space as global variables, but scope is restricted to this function/class.
const keyword means you can't change the variable.
thread_local is like static but each thread gets its own variable.
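For example, a minimal sketch of the static vs thread_local difference (next_id is just an illustrative name, not from any real code):

#include <iostream>
#include <thread>

int next_id() {
    static int shared_counter = 0;     // one counter shared by the whole program
    thread_local int my_counter = 0;   // one counter per thread
    ++shared_counter;
    ++my_counter;
    return my_counter;
}

int main() {
    std::thread t([] { next_id(); });  // that thread's my_counter becomes 1
    t.join();
    std::cout << next_id() << '\n';    // prints 1: main's my_counter is a separate variable
}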
Register
A variable can be declared as register to hint to the compiler that it should be stored in a register.
The compiler will probably ignore this and apply it to whatever it thinks would be the best improvement.
Typical usage would be for an index or pointer being used as an iterator in a loop.
Good practice
Use const by default when applicable; it's faster.
Be wary of static and globals in multithreaded applications; instead use thread_local or a mutex.
Use register on iterators
Notes
Any variable created inside a function (non-global) that is not static or thread_local and is not created with new will be on the stack. Stack variables should not exceed a few KB in memory; otherwise use new to put them on the heap.
The full available system memory can be used for variables with static keyword, thread_local keyword, created with new, or global.
Variables created with new need to be freed with delete. All others are automatically freed when they're out of scope, except static, thread_local and globals, which are freed when the program ends.
Despite all the parroting about how globals should not be used, don't be bullied: they are great for some use cases, and more efficient than variables allocated on the heap. Mutexes will be needed to avoid race conditions in multi-threaded applications.
Mostly right.
Any variable that is accessed with a pointer is stored on the heap.
This isn't true. You can have pointers to stack-based or global variables.
Also it's worth pointing out that global variables are generally unified by the linker (i.e. if two modules have "int i" at global scope, you'll only have one global variable called "i"). Dynamic libraries complicate that slightly; on Windows, DLLs don't have that behaviour (i.e. an "int i" in a Windows DLL will not be the same "int i" as in another DLL in the same process, or as the main executable), while on most other platforms dynamic libraries do. There are some additional complications on Darwin (iOS/macOS), which has a hierarchical namespace for symbols; as long as you're linking with the flat_namespace option, what I just said will hold.
Additionally, it's worth talking about initialisation behaviour; global variables are initialised automatically by the runtime (typically either using special linker features or by means of a call that is inserted into the code for your main function). The order of initialisation of globals isn't guaranteed. However, static variables declared at function scope are initialised when that function is first executed, and not at program start-up as you might suppose, and that feature is commonly used by C++ programmers to do lazy initialisation.
(Similar concerns apply to destructors for global objects; those are best avoided entirely IMO, not least because on some platforms there are fast termination features that simply won't call them.)
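For illustration, here's a minimal sketch of that lazy-initialisation trick, the common "construct on first use" idiom (the names are invented, not from the question):

#include <string>

// Returning a reference to a function-local static sidesteps the global
// initialisation-order problem: the object is constructed the first time
// the function is called, not at program start-up.
std::string& config_path() {
    static std::string path = "/etc/myapp.conf";  // initialised on first call
    return path;
}

int main() {
    config_path() += ".local";   // safe: the static is guaranteed to exist by now
}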
const keyword means you can't change the variable.
Almost. const affects the type, and there is a difference depending on where you write it exactly. For example
const char *foo;
should be read as foo is a pointer to a const char, i.e. foo itself is not const, but the thing it points at is. Contrast with
char * const foo;
which says that foo is a const pointer to char.
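A small sketch showing how the two forms behave (buffer and the names are purely illustrative):

char buffer[] = "hi";

const char *p = buffer;   // pointer to const char
char *const q = buffer;   // const pointer to char

void demo() {
    p = nullptr;    // OK: p itself isn't const
    // *p = 'x';    // error: the char it points at is const
    *q = 'x';       // OK: the pointed-to char is mutable
    // q = nullptr; // error: q itself is const
}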
Finally, you've missed out volatile, the point of which is to tell the compiler not to make assumptions about the thing to which it applies (e.g. it can't assume that it's safe to cache a volatile value in a register, or to optimise away accesses, or in general to optimise across any operation that affects a volatile value). Hopefully you'll never need to use volatile; it's most often useful if you're doing really low-level things that frankly a lot of people have no need to go anywhere near.
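If you ever do need it, here is a hedged sketch of the typical low-level situation (the register address is invented for illustration):

#include <cstdint>

// Pretend 0x40000000 is a memory-mapped hardware status register.
volatile std::uint32_t* const status_reg =
    reinterpret_cast<volatile std::uint32_t*>(0x40000000);

void wait_until_ready() {
    // volatile forces the compiler to re-read the register on every
    // iteration instead of caching the value and spinning forever.
    while ((*status_reg & 0x1u) == 0) {
        // busy-wait
    }
}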
The other answer is correct, but doesn't mention the use of register.
The compiler will probably ignore this and apply it to whatever it thinks would be the best improvement.
This is correct. Compilers are so good at choosing which variables to put in registers (and the typical programmer is bad at it) that the C++ committee decided the keyword is completely useless.
This keyword was deprecated in C++11 and removed in C++17 (but it's still reserved for possible future use).
Do not use it.
You need to differentiate between specification and implementation. The specification does not say anything about a stack or a heap, because those are implementation details; it purposely talks about storage duration instead.
How this storage duration is achieved depends on the target environment and on whether the compiler needs to perform those allocations at all, or whether the values can be determined at compile time and then simply become part of the machine code (which, of course, also lives somewhere in memory).
So most of your descriptions should really read "for target platform XY it will generally allocate on the stack/heap if I do XY".
C++ can also be used as an interpreted language (e.g. cling), which could handle memory in completely different ways.
It could be cross-compiled to some kind of bytecode interpreter in which every type is dynamically allocated.
And when it comes to embedded systems the way how memory is managed/handled might be even more different.
Heap variables are allocated with the new keyword, which returns a pointer to the memory address on the heap.
If the default operator new and operator new[] are mapped to something like malloc (or any other equivalent in the given OS), this is likely the case (if the object really needs to be allocated at all).
But for embedded systems it might be the case that operator new and operator new[] aren't implemented at all. The "OS" might just provide your application a chunk of memory that is handled like stack memory, from which you manually reserve a certain amount, and you implement an operator new and operator new[] that work with this preallocated memory; in such a case you effectively only have stack memory.
Besides that, you can create a custom operator new for certain classes that allocates the memory on some hardware that is different to the "regular" memory provided by the OS.
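A minimal sketch of that idea, assuming a fixed preallocated buffer (Packet and its pool are invented names; a real embedded allocator would also handle per-object alignment and reuse):

#include <cstddef>
#include <new>

class Packet {
public:
    static void* operator new(std::size_t size) {
        if (offset + size > sizeof(pool))
            throw std::bad_alloc{};
        void* p = pool + offset;
        offset += size;                  // simple bump allocation, never reused
        return p;
    }
    static void operator delete(void*, std::size_t) noexcept {
        // deliberately a no-op in this toy scheme
    }
private:
    alignas(std::max_align_t) static unsigned char pool[4096];
    static std::size_t offset;
    int payload[16] = {};
};

alignas(std::max_align_t) unsigned char Packet::pool[4096];
std::size_t Packet::offset = 0;

int main() {
    Packet* p = new Packet;   // uses Packet::operator new, not the global heap
    delete p;                 // runs ~Packet, then the no-op operator delete
}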
Is std::vector allocating its memory in the same memory space that new allocates from, i.e. the heap, or is it not? This is important because it changes how I use it.
A std::vector is defined as template<class T, class Allocator = std::allocator<T>> class vector; so there is a default behaviour (given by the implementation) for where the vector allocates memory; on a common desktop OS it ends up using something like malloc (or an equivalent OS call) to dynamically allocate memory. But you could also provide a custom allocator that uses memory at any other addressable location (e.g. the stack).
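To illustrate the Allocator parameter, here is a hedged sketch of a minimal allocator that just forwards to malloc/free and logs; LoggingAllocator is an invented name, not a standard component:

#include <cstdlib>
#include <iostream>
#include <vector>

template <class T>
struct LoggingAllocator {
    using value_type = T;

    LoggingAllocator() = default;
    template <class U> LoggingAllocator(const LoggingAllocator<U>&) {}

    T* allocate(std::size_t n) {
        std::cout << "allocating " << n * sizeof(T) << " bytes\n";
        return static_cast<T*>(std::malloc(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t) { std::free(p); }
};

template <class T, class U>
bool operator==(const LoggingAllocator<T>&, const LoggingAllocator<U>&) { return true; }
template <class T, class U>
bool operator!=(const LoggingAllocator<T>&, const LoggingAllocator<U>&) { return false; }

int main() {
    std::vector<int, LoggingAllocator<int>> v;
    v.push_back(1);   // every growth goes through LoggingAllocator::allocate
    v.push_back(2);
}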
Ok, so I'm very new at C++ programming, and I've been looking around for a couple days for a decisive answer for this. WHEN should I declare member variables on the heap vs. the stack? Most of the answers that I've found have dealt with other issues, but I want to know when it is best to use the heap for member variables and why it is better to heap the members instead of stacking them.
There are two important concepts to grasp first:
One should avoid thinking in terms of "heap" and "stack". Those are implementation details of your compiler/platform, not of the language.1 Instead, think in terms of object lifetimes: should the object's lifetime correspond to that of its "parent", or should it outlive it? If you need the latter, then you'll need to use new (directly or indirectly) to dynamically allocate an object.
Member variables always have the same lifetime as their parent. The member variable may be a pointer, and the object it points to may well have an independent lifetime. But the pointed-to object is not a member variable.
However, there is no general answer to your question. Crudely speaking, don't dynamically allocate unless there is a good reason to. As I hinted above, these reasons usually correspond to situations where the lifetime needs to differ from its "parent".
1. Indeed, the C++ standard doesn't really talk about "heap" and "stack". They're important to consider when optimising or generally thinking about performance, but they're mostly irrelevant from a program-functionality point of view.
Member variables are members of the class itself. They are neither on the heap nor on the stack, or rather, they are wherever the class itself is.
There are very few reasons to add a level of indirection and allocate a member separately on the heap: polymorphism (if the type of the member is not always the same) is by far the most common.
To get some terminology straight: what you call heap and stack describe the lifetime of objects. The first means that the lifetime is dynamic, the second automatic, and the third (which you don't mention) is static.
Usually you will need dynamic lifetime for an object when it should outlive the scope it was created in. Another common case is when you want it to be shared across different parent objects. Dynamic lifetime is also necessary when you work with a design that is heavily object-oriented (uses a lot of polymorphism, doesn't use values), e.g. Qt.
An idiom that requires dynamic lifetimes is the pimpl-idiom.
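A minimal sketch of the pimpl idiom (Widget and Impl are illustrative names; normally the two halves live in a header and a .cpp file):

#include <memory>

// --- widget.h ---
class Widget {
public:
    Widget();
    ~Widget();                       // defined where Impl is complete
    void draw();
private:
    struct Impl;                     // incomplete type in the header
    std::unique_ptr<Impl> pimpl;     // the member is a pointer; Impl is heap-allocated
};

// --- widget.cpp ---
struct Widget::Impl {
    int cached_state = 0;            // private details hidden from clients
};

Widget::Widget() : pimpl(std::make_unique<Impl>()) {}
Widget::~Widget() = default;
void Widget::draw() { ++pimpl->cached_state; }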
Most generic-programming libraries are more focused on values and value semantics, so you won't use dynamic binding that much and automatic lifetimes become a lot more common.
There are also some examples where dynamic allocation is required for more implementation specific reasons:
dynamically sized objects (containers)
handling incomplete types (see pimpl-idiom)
easy nullability of a type
All of those are just general guidelines and it has to be decided on a case by case basis. In general, prefer automatic objects over dynamic ones.
The stack refers to the call stack. Function calls, return addresses, parameters, and local variables are kept on the call stack. You use stack memory whenever you pass a parameter or create a local variable. The stack has only temporary storage. Once the current function goes out of scope, you no longer have access to any of its variables or parameters.
The heap is a large pool of memory used for dynamic allocation. When you use the new operator to allocate memory, this memory is assigned from the heap. You want to allocate heap memory when you are creating objects that you don't want to lose after the current function terminates (loses scope). Objects are stored in the heap until the space is deallocated with delete or free().
Consider this example:
You implement a linked list which has a field member head of class node.
Each node has a field member next. If this member were of type Node rather than Node*, the size of every Node would depend on the number of nodes after it in the chain.
For example, if you have 100 nodes in your list, your head member would be huge: because it holds the next node inside itself, it needs to be big enough to hold it, and that next holds the next, and so on. So head would have to have enough space to hold 99 nodes, the next one 98, and so on...
You want to avoid that, so in this case it's better to have a pointer to the next node in each Node rather than the next node itself.
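In code, the distinction looks like this (a small sketch; Node is the usual illustrative name):

struct Node {
    int value;
    Node *next;     // fine: a pointer has a fixed size
    // Node next;   // would not compile: Node would have to contain itself
};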
I'm aware that questions about the stack vs. the heap have been asked several times, but I'm confused about one small aspect of choosing how to declare objects in C++.
I understand that the heap--accessed with the "new" operator--is used for dynamic memory allocation. According to an answer to another question on Stack Overflow, "the heap is for storage of data where the lifetime of the storage cannot be determined ahead of time". The stack is faster than the heap, and seems to be used for variables of local scope, i.e., the variables are automatically deleted when the relevant section of code is completed. The stack also has a relatively limited amount of available space.
In my case, I know prior to runtime that I will need an array of pointers to exactly 500 objects of a particular class, and I know I will need to store the pointers and the objects throughout the duration of runtime. The heap doesn't make sense because I know beforehand how long I will need the memory and I know exactly how many objects I will need. The stack also doesn't make sense if it is limited in scope; plus, I don't know if it can actually hold all of my objects/pointers.
What would be the best way to approach this situation and why? Thanks!
Objects allocated on the stack in main() have a lifetime of the entire run of the program, so that's an option. An array of 500 pointers is either 2000 or 4000 bytes depending on whether your pointers are 32 or 64 bits wide -- if you were programming in an environment whose stack limit was that small, you would know it (such environments do exist: for instance, kernel mode stacks are often 8192 bytes or smaller in total) so I wouldn't hesitate to put the array there.
Depending on how big your objects are, it might also be reasonable to put them on the stack -- the typical stack limit in user space nowadays is on the order of 8 megabytes, which is not so large that you can totally ignore it, but is not peanuts, either.
If they are too big for the stack, I would seriously consider making a global variable that was an array of the objects themselves. The major downside of this is you can't control precisely when they are initialized. If the objects have nontrivial constructors this is very likely to be a problem. An alternative is to allocate storage for the objects as a global variable, initialize them at the appropriate point within main using placement new, and explicitly call their destructors on the way out. This requires care in the presence of exceptions; I'd write a one-off RAII class that encapsulated the job.
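A hedged sketch of that last option, with invented names (Widget, WidgetPool); alignas takes care of alignment for the raw buffer:

#include <cstddef>
#include <new>

struct Widget {
    explicit Widget(int id) : id_(id) {}
    int id_;
};

constexpr std::size_t kCount = 500;

// Raw storage with static storage duration; no constructors run at startup.
alignas(Widget) static unsigned char storage[kCount * sizeof(Widget)];

// One-off RAII helper: placement-constructs the objects and destroys them
// in its destructor, so cleanup happens even if an exception unwinds main.
struct WidgetPool {
    Widget* objects = reinterpret_cast<Widget*>(storage);
    std::size_t constructed = 0;

    WidgetPool() {
        for (; constructed < kCount; ++constructed)
            new (objects + constructed) Widget(static_cast<int>(constructed));
    }
    ~WidgetPool() {
        while (constructed > 0)
            objects[--constructed].~Widget();
    }
};

int main() {
    WidgetPool pool;            // construct all 500 objects at a point you choose
    // ... use pool.objects[0] .. pool.objects[499] ...
}                               // destructors run here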
It is not a matter of stack or heap (which, to be accurate, do not mean what you think in C++: they are just data structures like vector, set or queue). It is a matter of storage duration.
You most likely need here static duration objects, which can be either global, or members of a class. Automatic variables declared inside the main function could also do the job, if you design a way to access them from your other code.
There is some information about the different storage durations of C++ (automatic, static, dynamic) there. The accepted answer however uses the confusing stack/heap terminology, but the explanation is correct.
the heap is for storage of data where the lifetime of the storage cannot be determined ahead of time
While that is correct, it's also incomplete.
The stack unwinds when you exit its scope, so using it for globally scoped variables is unfeasible, like you said. This, however, is where you stop being on the right track. While you do know the lifetime (or more accurately the scope, since that's the most important factor here), you also know it outlives any single stack frame, so given only two choices, you put it on the heap.
There is a third option, an actual static variable declared at the top scope, but this will only work if your objects have default constructors.
TL;DR: use global (static) storage for either a pointer to the array (dynamic allocation) or just the actual array (static allocation).
Note: your assumption that the stack is somehow "faster" than the heap is wrong; they're both backed by the same RAM, you just access it relative to different registers. Also, I'd like to mention yet again how much I dislike the use of the terms stack and heap.
The stack, as you mention, often has size limits. That leaves you with two choices - dynamically allocate your objects, or make them global. The time cost to allocate all of your objects once at application startup is almost certainly not of significant concern. So just pick whichever method you prefer and do it.
I think you are confusing the use of the stack (storage for local variables and parameters) and unscoped data (static class variables and data allocated via new or malloc). One appropriate solution based on your description would be a static class that has your array of pointers declared as a static class member. This would be allocated in a heap-like structure (maybe even the heap itself, depending on your C++ implementation). A "quick and dirty" solution would be to declare the array as a static variable (basically a global variable), but that isn't the best approach from a maintainability perspective.
The best concept to go by is "Use the stack when you can and the heap when you must." I don't see why the stack wouldn't be able to hold all of your objects unless they're large or you're working with a limited resources system. Try the stack and if it can't handle it, the time it would take to allocate the memory on the heap can all be done early in the program's execution and wouldn't be a significant problem.
Unless speed is an issue or you can't afford the overhead, you should stick the objects in a std::vector. If copy semantics aren't defined for the objects, you should use a std::vector of std::shared_ptrs.
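A short sketch of both suggestions (Thing stands in for the asker's class):

#include <memory>
#include <vector>

struct Thing { int id = 0; };

int main() {
    // If Thing is copyable/movable: the vector owns 500 contiguous objects.
    std::vector<Thing> things(500);

    // If copy semantics aren't defined: hold shared_ptrs instead.
    std::vector<std::shared_ptr<Thing>> ptrs;
    ptrs.reserve(500);
    for (int i = 0; i < 500; ++i)
        ptrs.push_back(std::make_shared<Thing>());
}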
I know that there are sections like Stack, Heap, Code and Data. Stack/Heap do they use the same section of memory as they can grow independently?
What is this code section? When I have a function is it a part of the stack or the code section?
Also what is this initialized/uninitialized data segment?
Are there read-only memory sections available? When I have a const variable, what actually happens: does the compiler mark a memory section as read-only, or does it put the variable into a read-only memory section?
Where are static data kept?
Where are global data kept?
Any good references/articles for the same?
I thought the memory sections and layout were OS-independent and had more to do with the compiler. Don't the Stack, Heap, Code and Data [initialized, uninitialized] segments occur in all OSes? When there is static data, what happens: the compiler has understood it is static, so what will it do next? It is the compiler which is managing the program and it should know what to do, right? Shouldn't all compilers follow common standards?
There's very little that's actually definitive about C++ memory layouts. However, most modern OS's use a somewhat similar system, and the segments are separated based on permissions.
Code has execute permission. The other segments don't. In a Windows application, you can't just put some native code on the stack and execute it. Linux offers the same functionality; it's provided by the x86 architecture.
Data is data that's part of the result (.exe, etc) but can't be written to. This section is basically where literals go. Only read permission in this section.
Those two segments are part of the resulting file. Stack and Heap are runtime allocated, instead of mapped off the hard drive.
Stack is essentially one large (1MB or so; many compilers offer a setting for it) heap allocation. The compiler manages it for you.
Heap memory is memory that the OS returns to you through some process. Normally, heap is a heap (the data structure) of pointers to free memory blocks and their sizes. When you request one, it's given to you. Both read and write permissions here, but no execute.
There is read-only memory (ROM). However, this is just the Data section. You can't alter it at runtime. When you make a const variable, nothing special happens to it in memory. All that happens is that the compiler will only create certain instructions on it. That's it. x86 has no knowledge or notion of const; it's all in the compiler.
AFAIK:
Stack/Heap: do they use the same section of memory as they can grow independently?
They can grow independently.
What is this code section?
A read-only segment where code and const data are stored.
When I have a function is it a part of the stack or the code section?
The definition (code) of the function will be in the CS. The arguments of each call are passed on the stack.
Also what is this initialized/uninitialized data segment?
The data segment is where globals/static variables are stored.
Are there read-only memory sections available?
The code segment. I suppose some OS's might offer primitives for creating custom read-only segments.
When I have a const variable, what is actually happening: does the compiler mark a memory section as read-only, or does it put the variable into a read-only memory section?
It goes into the CS.
Where are static data kept? Where are global data kept?
The data segment.
I was in the same dilemma when I was reading about the memory layout of C/C++. Here is the link which I followed to get my questions cleared:
http://www.geeksforgeeks.org/memory-layout-of-c-program/
(The link's main illustration is a diagram of a C program's memory layout.)
I hope this helps anyone looking for answers to similar questions.
(Note: The following applies to Linux)
The stack and heap of a process both exist in the "same" part of a process's memory. The stack and heap grow towards each other (initially, when the process is started, the stack occupies the entire area that can be occupied by the combination of the stack and the heap; each memory allocation (malloc/free/new/delete) can push the boundary between the stack and the heap either up or down). The BSS section, also located on the same OS-allocated process space, is in its own section and contains global variables. Read-only data resides in the rodata section and contains such things as string literals. For example, if your code has the line:
char tmpStr[] = "hello";
Then, the portion of the source code containing "hello" will reside in the rodata section.
A good, thorough book on this is Randall E. Bryant's Computer Systems.
As an addendum to the answers, here is a quote from GotW that classifies some major memory areas (note the difference between free-store, which is what I would usually refer to as the heap, and the actual heap, which is the part managed through malloc/free). The article is a bit old so I don't know if it applies to modern C++; so far I haven't found a direct contradiction.
Const Data: The const data area stores string literals and other data whose values are known at compile time. No objects of class type can exist in this area. All data in this area is available during the entire lifetime of the program. Further, all of this data is read-only, and the results of trying to modify it are undefined. This is in part because even the underlying storage format is subject to arbitrary optimization by the implementation. For example, a particular compiler may store string literals in overlapping objects if it wants to.
Stack: The stack stores automatic variables. Typically allocation is much faster than for dynamic storage (heap or free store) because a memory allocation involves only pointer increment rather than more complex management. Objects are constructed immediately after memory is allocated and destroyed immediately before memory is deallocated, so there is no opportunity for programmers to directly manipulate allocated but uninitialized stack space (barring willful tampering using explicit dtors and placement new).
Free Store: The free store is one of the two dynamic memory areas, allocated/freed by new/delete. Object lifetime can be less than the time the storage is allocated; that is, free store objects can have memory allocated without being immediately initialized, and can be destroyed without the memory being immediately deallocated. During the period when the storage is allocated but outside the object's lifetime, the storage may be accessed and manipulated through a void* but none of the proto-object's nonstatic members or member functions may be accessed, have their addresses taken, or be otherwise manipulated.
Heap: The heap is the other dynamic memory area, allocated/freed by malloc/free and their variants. Note that while the default global new and delete might be implemented in terms of malloc and free by a particular compiler, the heap is not the same as free store and memory allocated in one area cannot be safely deallocated in the other. Memory allocated from the heap can be used for objects of class type by placement-new construction and explicit destruction. If so used, the notes about free store object lifetime apply similarly here.
Global/Static: Global or static variables and objects have their storage allocated at program startup, but may not be initialized until after the program has begun executing. For instance, a static variable in a function is initialized only the first time program execution passes through its definition. The order of initialization of global variables across translation units is not defined, and special care is needed to manage dependencies between global objects (including class statics). As always, uninitialized proto-objects' storage may be accessed and manipulated through a void* but no nonstatic members or member functions may be used or referenced outside the object's actual lifetime.
I've searched, but I've not understood very well these three concepts. When do I have to use dynamic allocation (in the heap) and what's its real advantage? What are the problems of static and stack? Could I write an entire application without allocating variables in the heap?
I heard that others languages incorporate a "garbage collector" so you don't have to worry about memory. What does the garbage collector do?
What could you do manipulating the memory by yourself that you couldn't do using this garbage collector?
Once someone said to me that with this declaration:
int * asafe=new int;
I have a "pointer to a pointer". What does it mean? It is different of:
asafe=new int;
?
A similar question was asked, but it didn't ask about statics.
Summary of what static, heap, and stack memory are:
A static variable is basically a global variable, even if you cannot access it globally. Usually there is an address for it that is in the executable itself. There is only one copy for the entire program. No matter how many times you go into a function call (or class) (and in how many threads!) the variable is referring to the same memory location.
The heap is a bunch of memory that can be used dynamically. If you want 4kb for an object then the dynamic allocator will look through its list of free space in the heap, pick out a 4kb chunk, and give it to you. Generally, the dynamic memory allocator (malloc, new, et c.) starts at the end of memory and works backwards.
Explaining how a stack grows and shrinks is a bit outside the scope of this answer, but suffice to say you always add and remove from the end only. Stacks usually start high and grow down to lower addresses. You run out of memory when the stack meets the dynamic allocator somewhere in the middle (but refer to physical versus virtual memory and fragmentation). Multiple threads will require multiple stacks (the process generally reserves a minimum size for the stack).
When you would want to use each one:
Statics/globals are useful for memory that you know you will always need and you know that you don't ever want to deallocate. (By the way, embedded environments may be thought of as having only static memory... the stack and heap are part of a known address space shared by a third memory type: the program code. Programs will often do dynamic allocation out of their static memory when they need things like linked lists. But regardless, the static memory itself (the buffer) is not itself "allocated", but rather other objects are allocated out of the memory held by the buffer for this purpose. You can do this in non-embedded as well, and console games will frequently eschew the built in dynamic memory mechanisms in favor of tightly controlling the allocation process by using buffers of preset sizes for all allocations.)
Stack variables are useful for when you know that as long as the function is in scope (on the stack somewhere), you will want the variables to remain. Stacks are nice for variables that you need for the code where they are located, but which isn't needed outside that code. They are also really nice for when you are accessing a resource, like a file, and want the resource to automatically go away when you leave that code.
Heap allocations (dynamically allocated memory) is useful when you want to be more flexible than the above. Frequently, a function gets called to respond to an event (the user clicks the "create box" button). The proper response may require allocating a new object (a new Box object) that should stick around long after the function is exited, so it can't be on the stack. But you don't know how many boxes you would want at the start of the program, so it can't be a static.
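A hedged sketch of that "create box" scenario (Box, boxes and the handler name are invented for illustration):

#include <memory>
#include <vector>

struct Box { int width = 0, height = 0; };

// static storage: the container lives for the whole program
std::vector<std::unique_ptr<Box>> boxes;

void on_create_box_clicked() {
    // the Box must outlive this handler, so it can't be a stack variable,
    // and the total count isn't known at compile time, so it can't be static
    boxes.push_back(std::make_unique<Box>());
}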
Garbage Collection
I've heard a lot lately about how great Garbage Collectors are, so maybe a bit of a dissenting voice would be helpful.
Garbage Collection is a wonderful mechanism for when performance is not a huge issue. I hear GCs are getting better and more sophisticated, but the fact is, you may be forced to accept a performance penalty (depending upon use case). And if you're lazy, it still may not work properly. At the best of times, a garbage collector realizes that your memory can go away when it sees that there are no more references to it (see reference counting). But, if you have an object that refers to itself (possibly by referring to another object which refers back), then reference counting alone will not indicate that the memory can be deleted. In this case, the GC needs to look at the entire reference soup and figure out if there are any islands that are only referred to by themselves. Offhand, I'd guess that to be an O(n^2) operation, but whatever it is, it can get bad if you are at all concerned with performance. (Edit: Martin B points out that it is O(n) for reasonably efficient algorithms. That is still O(n) too much if you are concerned with performance and can deallocate in constant time without garbage collection.)
Personally, when I hear people say that C++ doesn't have garbage collection, my mind tags that as a feature of C++, but I'm probably in the minority. Probably the hardest thing for people to learn about programming in C and C++ are pointers and how to correctly handle their dynamic memory allocations. Some other languages, like Python, would be horrible without GC, so I think it comes down to what you want out of a language. If you want dependable performance, then C++ without garbage collection is the only thing this side of Fortran that I can think of. If you want ease of use and training wheels (to save you from crashing without requiring that you learn "proper" memory management), pick something with a GC. Even if you know how to manage memory well, it will save you time which you can spend optimizing other code. There really isn't much of a performance penalty anymore, but if you really need dependable performance (and the ability to know exactly what is going on, when, under the covers) then I'd stick with C++. There is a reason that every major game engine that I've ever heard of is in C++ (if not C or assembly). Python, et al are fine for scripting, but not the main game engine.
The following is of course all not quite precise. Take it with a grain of salt when you read it :)
Well, the three things you refer to are automatic, static and dynamic storage duration, which has something to do with how long objects live and when they begin life.
Automatic storage duration
You use automatic storage duration for short-lived and small data that is needed only locally within some block:
if (some_condition) {
    int a[3]; // array a has automatic storage duration
    fill_it(a);
    print_it(a);
}
The lifetime ends as soon as we exit the block, and it starts as soon as the object is defined. It is the simplest kind of storage duration, and is way faster than dynamic storage duration in particular.
Static storage duration
You use static storage duration for free variables, which might be accessed by any code at any time if their scope allows such usage (namespace scope), for local variables that need to extend their lifetime across exits of their scope (local scope), and for member variables that need to be shared by all objects of their class (class scope). Their lifetime depends on the scope they are in: they can have namespace scope, local scope or class scope. What is true about all of them is that once their life begins, it ends at the end of the program. Here are two examples:
// static storage duration. in global namespace scope
string globalA;

void foo();

int main() {
    foo();
    foo();
}

void foo() {
    // static storage duration. in local scope
    static string localA;
    localA += "ab";
    cout << localA;
}
The program prints ababab, because localA is not destroyed upon exit of its block. You can say that objects that have local scope begin lifetime when control reaches their definition. For localA, it happens when the function's body is entered. For objects in namespace scope, lifetime begins at program startup. The same is true for static objects of class scope:
class A {
public:
    static string classScopeA;
};
string A::classScopeA;

A a, b;
// &a.classScopeA, &b.classScopeA and &A::classScopeA all refer to the same object
As you see, classScopeA is not bound to particular objects of its class, but to the class itself. The addresses of all three names above are the same, and all denote the same object. There are special rules about when and how static objects are initialized, but let's not concern ourselves with that now. That's what is meant by the term static initialization order fiasco.
Dynamic storage duration
The last storage duration is dynamic. You use it if you want to have objects live on another isle, and you want to put pointers around that reference them. You also use them if your objects are big, and if you want to create arrays of size only known at runtime. Because of this flexibility, objects having dynamic storage duration are complicated and slow to manage. Objects having that dynamic duration begin lifetime when an appropriate new operator invocation happens:
void foo(string *s);

int main() {
    // the object that s points to has dynamic storage
    // duration
    string *s = new string;
    // pass a pointer pointing to the object around.
    // the object itself isn't touched
    foo(s);
    delete s;
}

void foo(string *s) {
    cout << s->size();
}
Its lifetime ends only when you call delete on it. If you forget that, the object's lifetime never ends, and an object of class type will never have its destructor called. Objects having dynamic storage duration require manual handling of their lifetime and associated memory resource. Libraries exist to ease use of them. Explicit garbage collection for particular objects can be established by using a smart pointer:
void foo(shared_ptr<string> s);

int main() {
    shared_ptr<string> s(new string);
    foo(s);
}

void foo(shared_ptr<string> s) {
    cout << s->size();
}
You don't have to care about calling delete: The shared ptr does it for you, if the last pointer that references the object goes out of scope. The shared ptr itself has automatic storage duration. So its lifetime is automatically managed, allowing it to check whether it should delete the pointed to dynamic object in its destructor. For shared_ptr reference, see boost documents: http://www.boost.org/doc/libs/1_37_0/libs/smart_ptr/shared_ptr.htm
It's all been said elaborately already, so here is just "the short answer":
static variable (class)
lifetime = program runtime (1)
visibility = determined by access modifiers (private/protected/public)
static variable (global scope)
lifetime = program runtime (1)
visibility = the compilation unit it is instantiated in (2)
heap variable
lifetime = defined by you (new to delete)
visibility = defined by you (whatever you assign the pointer to)
stack variable
visibility = from declaration until scope is exited
lifetime = from declaration until declaring scope is exited
(1) more exactly: from initialization until deinitialization of the compilation unit (i.e. C / C++ file). Order of initialization of compilation units is not defined by the standard.
(2) Beware: if you instantiate a static variable in a header, each compilation unit gets its own copy.
The main difference is speed and size.
Stack
Dramatically faster to allocate. It is done in O(1), since it is allocated when setting up the stack frame, so it is essentially free. The drawback is that if you run out of stack space you are in deep trouble. You can adjust the stack size, but, IIRC, you have ~2MB to play with. Also, as soon as you exit the function everything on the stack is cleared. So, it can be problematic to refer to it later. (Pointers to stack allocated objects lead to bugs.)
Heap
Dramatically slower to allocate. But, you have GB to play with, and point to.
Garbage Collector
The garbage collector is some code that runs in the background and frees memory. When you allocate memory on the heap it is very easy to forget to free it, which is known as a memory leak. Over time, the memory your application consumes grows and grows until it crashes. Having a garbage collector periodically free the memory you no longer need helps eliminate this class of bugs. Of course, this comes at a price, as the garbage collector slows things down.
What are the problems of static and stack?
The problem with "static" allocation is that the allocation is made at compile-time: you can't use it to allocate some variable number of data, the number of which isn't known until run-time.
The problem with allocating on the "stack" is that the allocation is destroyed as soon as the subroutine which does the allocation returns.
Could I write an entire application without allocating variables in the heap?
Perhaps but not a non-trivial, normal, big application (but so-called "embedded" programs might be written without the heap, using a subset of C++).
What does the garbage collector do?
It keeps watching your data ("mark and sweep") to detect when your application is no longer referencing it. This is convenient for the application, because the application doesn't need to deallocate the data ... but the garbage collector might be computationally expensive.
Garbage collectors aren't a usual feature of C++ programming.
What could you do manipulating the memory by yourself that you couldn't do using this garbage collector?
Learn the C++ mechanisms for deterministic memory deallocation (a minimal sketch follows this list):
'static': never deallocated
'stack': as soon as the variable "goes out of scope"
'heap': when the pointer is deleted (explicitly deleted by the application, or implicitly deleted within some-or-other subroutine)
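A minimal sketch of those three cases (the names are illustrative):

#include <string>

std::string g_name;                  // 'static': never deallocated before program exit

void f() {
    std::string local = "stack";     // 'stack': destroyed when it goes out of scope
    std::string *p = new std::string("heap");
    delete p;                        // 'heap': freed when the pointer is deleted
}                                    // local destroyed here; g_name outlives f()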
Stack memory allocation (function variables, local variables) can be problematic when your stack is too "deep" and you overflow the memory available to stack allocations. The heap is for objects that need to be accessed from multiple threads or throughout the program lifecycle. You can write an entire program without using the heap.
You can leak memory quite easily without a garbage collector, but you can also dictate when objects and memory are freed. I have run into issues with Java when it runs the GC and I have a real-time process, because the GC is an exclusive thread (nothing else can run). So if performance is critical and you can guarantee there are no leaked objects, not using a GC is very helpful. Otherwise it just makes you hate life when your application consumes memory and you have to track down the source of a leak.
What if your program does not know upfront how much memory to allocate (hence you cannot use stack variables)? Take linked lists: a list can grow without you knowing its size upfront. So allocating on the heap makes sense for a linked list when you are not aware of how many elements will be inserted into it.
An advantage of GC in some situations is an annoyance in others; reliance on GC encourages not thinking much about it. In theory, it waits until an 'idle' period, or until it absolutely must, at which point it will steal bandwidth and cause response latency in your app.
But you don't have to 'not think about it.' Just as with everything else in multithreaded apps, when you can yield, you can yield. So for example, in .Net, it is possible to request a GC; by doing this, instead of less frequent longer running GC, you can have more frequent shorter running GC, and spread out the latency associated with this overhead.
But this defeats the primary attraction of GC which appears to be "encouraged to not have to think much about it because it is auto-mat-ic."
If you were first exposed to programming before GC became prevalent and were comfortable with malloc/free and new/delete, then it might even be the case that you find GC a little annoying and/or are distrustful of it (as one might be distrustful of 'optimization', which has had a checkered history). Many apps tolerate random latency. But for apps that don't, where random latency is less acceptable, a common reaction is to eschew GC environments and move in the direction of purely unmanaged code (or, god forbid, a long-dying art, assembly language).
I had a summer student here a while back, an intern, smart kid, who was weaned on GC; he was so adamant about the superiority of GC that even when programming in unmanaged C/C++ he refused to follow the malloc/free new/delete model because, quote, "you shouldn't have to do this in a modern programming language." And you know? For tiny, short running apps, you can indeed get away with that, but not for long running performant apps.
The stack is memory reserved when the program is compiled: by default the compiler reserves a certain amount (you can change this in your IDE's compiler settings), and the OS is what actually provides the memory, which depends on how much memory is available on the system, among other things. Stack memory is used when we declare variables: those variables (and copies of arguments passed by value) are pushed onto the stack, following a calling convention, which by default is CDECL in Visual Studio.
For example, in infix notation:
c = a + b;
the pushing is done right to left: b onto the stack, then a, and then the result, i.e. c, onto the stack.
In prefix notation:
= + c a b
all the variables are pushed onto the stack first (right to left) and then the operations are performed.
The memory reserved by the compiler is fixed. So let's assume 1 MB of memory is reserved for our application and the local variables use 700 KB of it (all local variables are pushed onto the stack unless they are dynamically allocated); the remaining ~324 KB is left to the heap.
The stack also has a shorter lifetime: when the scope of the function ends, that part of the stack gets cleared.