In C the standard memory handling functions are malloc(), realloc() and free(). However, C++ stdlib allocators only parallel two of them: there is no reallocation function. Of course, it would not be possible to do exactly the same as realloc(), because simply copying memory is not appropriate for non-aggregate types. But would there be a problem with, say, this function:
bool reallocate (pointer ptr, size_type num_now, size_type num_requested);
where
ptr is previously allocated with the same allocator for num_now objects;
num_requested >= num_now;
and semantics as follows:
if allocator can expand given memory block at ptr from size for num_now objects to num_requested objects, it does so (leaving additional memory uninitialized) and returns true;
else it does nothing and returns false.
Granted, this is not very simple, but allocators, as I understand, are mostly meant for containers and containers' code is usually complicated already.
Given such a function, std::vector, say, could grow as follows (pseudocode):
if (allocator.reallocate (buffer, capacity, new_capacity))
    capacity = new_capacity; // That's all we need to do
else
    ...                      // Do the standard reallocation by using a different
                             // buffer, copying data and freeing the current one
Allocators that are incapable of changing memory size altogether could simply implement such a function as an unconditional return false;.
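For concreteness, here is a minimal sketch of the interface I have in mind (ExpandingAllocator is just an invented name; a real implementation would ask its heap whether the block can be extended in place):

#include <cstddef>
#include <new>

template <typename T>
struct ExpandingAllocator {
    using pointer   = T*;
    using size_type = std::size_t;

    pointer allocate(size_type n) {
        return static_cast<pointer>(::operator new(n * sizeof(T)));
    }
    void deallocate(pointer p, size_type) { ::operator delete(p); }

    // The proposed extension: try to grow the block in place, never moving it.
    // An allocator whose heap cannot expand blocks simply declines.
    bool reallocate(pointer, size_type, size_type) { return false; }
};

int main() {
    ExpandingAllocator<int> a;
    int* p = a.allocate(8);
    if (!a.reallocate(p, 8, 16)) {
        // fall back: allocate a bigger block, move the data, free the old one
    }
    a.deallocate(p, 8);
}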
Are there so few reallocation-capable allocator implementations that it wouldn't be worth the bother? Or are there some problems I have overlooked?
From:
http://www.sgi.com/tech/stl/alloc.html
This is probably the most questionable design decision. It would have probably been a bit more useful to provide a version of reallocate that either changed the size of the existing object without copying or returned NULL. This would have made it directly useful for objects with copy constructors. It would also have avoided unnecessary copying in cases in which the original object had not been completely filled in.
Unfortunately, this would have prohibited use of realloc from the C library. This in turn would have added complexity to many allocator implementations, and would have made interaction with memory-debugging tools more difficult. Thus we decided against this alternative.
This is actually a design flaw that Alexandrescu points out with the standard allocators (not operator new[]/delete[], but the original STL allocators used to implement containers like std::vector).
A realloc can occur significantly faster than a malloc, memcpy, and free. However, while the actual memory block can be resized, it can also move memory to a new location. In the latter case, if the memory block consists of non-PODs, all objects will need to be destroyed and copy-constructed after the realloc.
The main thing the standard library needs in order to accommodate this as a possibility is a reallocate function as part of the standard allocator's public interface. A class like std::vector could certainly use it, even if the default implementation is to malloc the newly sized block and free the old one. It would need to be a function that is capable of destroying and copy-constructing the objects in memory, though; it could not treat the memory in an opaque fashion if it did this. There's a little complexity involved there, and it would require some more template work, which may be why it was omitted from the standard library.
std::vector<...>::reserve is not sufficient: it addresses a different case where the size of the container can be anticipated. For truly variable-sized lists, a realloc solution could make contiguous containers like std::vector a lot faster, especially if it can deal with realloc cases where the memory block was successfully resized without being moved, in which case it can omit calling copy constructors and destructors for the objects in memory.
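To make that concrete, here is a hedged sketch (C++17; the name grow_buffer is invented here, and it assumes the buffer originally came from malloc()). It lets realloc() handle trivially copyable types, which may resize in place, and falls back to allocate/move-construct/destroy for everything else:

#include <cstdlib>
#include <new>
#include <type_traits>
#include <utility>

// Assumes old_data came from malloc() and new_count >= count.
// Error handling is kept minimal for brevity.
template <typename T>
T* grow_buffer(T* old_data, std::size_t count, std::size_t new_count) {
    if constexpr (std::is_trivially_copyable_v<T>) {
        // realloc() may extend in place; even if it moves the block,
        // a raw byte copy is valid for trivially copyable types.
        void* p = std::realloc(old_data, new_count * sizeof(T));
        if (!p) throw std::bad_alloc{};
        return static_cast<T*>(p);
    } else {
        // Non-trivial types: get a new block, move- (or copy-) construct
        // each element into it, destroy the originals, release the old block.
        T* fresh = static_cast<T*>(std::malloc(new_count * sizeof(T)));
        if (!fresh) throw std::bad_alloc{};
        for (std::size_t i = 0; i < count; ++i) {
            ::new (static_cast<void*>(fresh + i)) T(std::move(old_data[i]));
            old_data[i].~T();
        }
        std::free(old_data);
        return fresh;
    }
}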
What you're asking for is essentially what vector::reserve does. Without move semantics for objects, there's no way to reallocate memory and move the objects around without doing a copy and destroy.
I guess this is one of the things where god went wrong, but I was just too lazy to write to the standards committee.
There should have been a realloc for array allocations:
p = renew(p) [128];
or something like that.
Because of the object-oriented nature of C++, and the inclusion of the various standard container types, I think it's simply that less focus was placed on direct memory management than in C. I agree that there are cases where a realloc() would be useful, but the pressure to remedy this is minimal, as almost all of the resulting functionality can be gained by using containers instead.
Related
I am currently implementing my own vector container and I encountered a pretty interesting issue (at least for me). It may be a stupid question, but I don't know.
My vector uses a heap-allocated array of pointers to heap-allocated objects of unknown type (T**).
I did this because I wanted the pointers and references to individual elements to stay the same, even after resizing.
This comes at a performance cost when constructing and copying, because I need to create the array on the heap and each object of the array on the heap too. (Heap allocation is slower than on the stack, right?)
T** arr = new T*[size]{nullptr};
and then for each element
arr[i] = new T{data};
Now I wonder if it would be safe, beneficial (faster) and possible if, instead of allocating each object individually, I could create a second array on the heap and save the pointer of each object in the first one. Then use (and delete) these objects later as if they were allocated separately.
=> Is allocating arrays on the heap faster than allocating each object individually?
=> Is it safe to allocate objects in an array and forget about the array later? (Sounds pretty dumb, I think.)
Link to my github repo: https://github.com/LinuxGameGeek/personal/tree/main/c%2B%2B/vector
Thanks for your help :)
First, a remark: you should not think of the heap/stack comparison in terms of efficiency, but of object lifetime:
automatic arrays (what you call "on the stack") end their life at the end of the block where they are defined;
dynamic arrays (what you call "on the heap") exist until they are explicitly deleted.
Now, it is always more efficient to allocate a bunch of objects in one array than to allocate them separately: you save a number of internal calls and the various data structures used to maintain the heap. The catch is that you can then only deallocate the array as a whole, not the individual objects.
Finally, except for trivially copyable objects, only the compiler, and not the programmer, knows the exact allocation details. For example (and for common implementations), an automatic string (so on the stack) contains a pointer to a dynamic char array (so on the heap)...
Said differently, unless you plan to use your container only for POD or trivially copyable objects, do not expect to handle all the allocation and deallocation yourself: non-trivial objects have internal allocations.
Heap allocation is slower than on the stack, right?
Yes. Dynamic allocation has a cost.
Is allocating arrays on the heap faster than allocating each object individually?
Yes. Multiple allocations have that cost multiplied.
I wonder if it would be ... possible, if instead of allocating each object individually, I could create a second array on the heap and save the pointer of each object in the first one
It would be possible, but not trivial. Think hard about how you would implement element erasure, and then about how you would correctly implement other features, such as random access, when the backing arrays contain slots whose elements have been erased.
... safe
It can be implemented safely.
... beneficial(faster)
Of course, reducing allocations from N to 1 would be beneficial by itself. But it comes at the cost of some scheme to implement the erasure. Whether this cost is greater than the benefit of reduced allocations depends on many things such as how the container is used.
Is it safe to allocate objects in an array and forgetting about the array later?
"Forgetting" about an allocation seems like a way to say "memory leak".
You could achieve similar advantages with a custom "pool" allocator. Implementing support for custom allocators to your container might be more generally useful.
P.S. Boost already has a "ptr_vector" container that supports custom allocators. No need to reinvent the wheel.
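For illustration, a minimal sketch of the pool idea using the standard polymorphic memory resources from C++17, assuming the goal is to cut down on individual heap calls:

#include <memory_resource>
#include <vector>

int main() {
    // One pool serves all element allocations made through it; freed nodes
    // are handed back to the pool rather than to the global heap.
    std::pmr::unsynchronized_pool_resource pool;
    std::pmr::vector<int> v(&pool);
    for (int i = 0; i < 1000; ++i)
        v.push_back(i);
}   // the pool releases everything at once when it is destroyed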
I did this because I wanted the pointers and references to individual elements to stay the same, even after resizing.
You should just use std::vector::reserve to prevent reallocation of vector data when it is resized.
std::vector is quite primitive, but it is highly optimized. It will be extremely hard for you to beat it with your own code. Just inspect its API and try all of its functionality. To create something better, advanced knowledge of template programming is required (which apparently you do not have yet).
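A minimal illustration of that advice: with the capacity reserved up front, push_back never reallocates, so pointers to elements stay valid.

#include <cassert>
#include <vector>

int main() {
    std::vector<int> v;
    v.reserve(100);                    // one allocation up front
    v.push_back(1);
    const int* first = &v[0];
    for (int i = 2; i <= 100; ++i)
        v.push_back(i);                // never exceeds the reserved capacity
    assert(first == &v[0]);            // no reallocation: pointers stayed valid
}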
What you are trying to come up with is a use of placement-new allocation for a deque-like container. It's a viable optimization, but usually it's done to reduce allocation calls and memory fragmentation, e.g. on some RT or embedded systems. The array may even be a static array in that case. But if you also require that instances of T occupy adjacent space, that's a contradictory requirement; re-sorting them would kill any performance gains.
... beneficial(faster)
Depends on T. E.g. there is no point in doing that for something like strings or shared pointers, or anything that actually allocates resources elsewhere, unless T allows that behaviour to be changed too.
I wonder if it would be ... possible, if instead of allocating each object individually, I could create a second array on the heap and save the pointer of each object in the first one
Yes it is possible, even with standard ISO containers, thanks to allocators.
There is a thread-safety concern if this "array" is a resource shared between multiple writer and reader threads. You might want to use thread-local storage instead of a shared one, and semaphores for the crossover cases.
The usual application for that is to allocate not on the heap but in a predetermined, statically allocated array, or in an array that was allocated once at the start of the program.
Note that if you use placement new, you should not use delete on the created objects; you have to call the destructor directly. The placement new overload is not a true new as far as delete is concerned. You may or may not get an error, but you will certainly cause a crash if you used a static array, and heap corruption if you delete an element that happens to sit at the start of a dynamically allocated array.
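A minimal sketch of that warning in action, using std::destroy_at from C++17 for the explicit destructor call:

#include <memory>   // std::destroy_at (C++17)
#include <new>      // placement new
#include <string>

int main() {
    // Storage that was NOT obtained from operator new:
    alignas(std::string) unsigned char buf[sizeof(std::string)];

    std::string* s = ::new (buf) std::string("hello");  // placement new

    // delete s;        // WRONG: would hand 'buf' to the heap (crash/corruption)
    std::destroy_at(s); // correct: run the destructor only; 'buf' itself is
                        // reclaimed automatically when it goes out of scope
}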
This comes at performance cost when constructing and copying, because I need to create the array on the heap and each object of the array on the heap too.
Copying a POD is extremely cheap. If you research perfect forwarding you can achieve the zero cost abstraction for constructors and the emplace_back() function. When copying, use std::copy() as it is very fast.
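As a hedged sketch of what that looks like (MiniVec is an invented toy; growth and copy control are omitted for brevity): the arguments are forwarded straight into placement new, so no temporary T is ever created or copied.

#include <cstddef>
#include <new>
#include <string>
#include <utility>

template <typename T>
struct MiniVec {
    T* data = nullptr;
    std::size_t size = 0, cap = 0;

    explicit MiniVec(std::size_t c)
        : data(static_cast<T*>(::operator new(c * sizeof(T)))), cap(c) {}

    template <typename... Args>
    void emplace_back(Args&&... args) {
        // Assumes size < cap for brevity; a real version would grow first.
        ::new (static_cast<void*>(data + size)) T(std::forward<Args>(args)...);
        ++size;
    }

    ~MiniVec() {
        while (size) data[--size].~T();   // destroy in reverse order
        ::operator delete(data);
    }
};

int main() {
    MiniVec<std::string> v(4);
    v.emplace_back(3, 'x');               // builds std::string(3, 'x') in place
    v.emplace_back("hello");              // no temporary string created first
}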
Is allocating arrays on the heap faster than allocating each object individually?
Each allocation requires you to ask the operating system for memory. Unless you are asking for a particularly large amount of memory you can assume each request will be a constant amount of time. Instead of asking for a parking space 10 times, ask for 10 parking spaces.
Is it safe to allocate objects in an array and forgetting about the array later? (sounds pretty dumb i think)
Depends what you mean by safe. If you can't answer this question on your own, then you must clean up the memory and not leak under any circumstance.
An example of a time you might skip cleaning up memory is when you know the program is about to end, and cleaning up just to exit is kind of pointless. Still, you should clean it up. Read Serge Ballesta's answer for more information about lifetime.
I understand the benefits of using new over malloc in C++. But for specific cases such as primitive data types (non-array), like int, float, etc., is it faster to use malloc than new?
Granted, it is always advisable to use new, even for primitives, if we are allocating an array, so that we can use delete[].
But for non-array allocation, I think there wouldn't be any constructor call for int? Since the new operator allocates memory, checks that it was allocated, and then calls the constructor. Just for primitive non-array heap allocation, is it better to use malloc than new?
Please advise.
Never use malloc in C++. Never use new unless you are implementing a low-level memory management primitive.
The recommendation is:
Ask yourself: "do I need dynamic memory allocation?". A lot of times you might not need it - prefer values to pointers and try to use the stack.
If you do need dynamic memory allocation, ask yourself "who will own the allocated memory/object?".
If you only need a single owner (which is very likely), you should use std::unique_ptr. It is a zero-cost abstraction over new/delete. (A different deallocator can be specified.)
If you need shared ownership, you should use std::shared_ptr. This is not a zero cost abstraction, as it uses atomic operations and an extra "control block" to keep track of all the owners.
If you are dealing with arrays in particular, the Standard Library provides two powerful and safe abstractions that do not require any manual memory management:
std::array<T, N>: a fixed array of N elements of type T.
std::vector<T>: a resizable array of elements of type T.
std::array and std::vector should cover 99% of your "array needs".
One more important thing: the Standard Library provides the std::make_unique and std::make_shared which should always be used to create smart pointer instances. There are a few good reasons:
Shorter - no need to repeat the T (e.g. std::unique_ptr<T>{new T}), no need to use new.
More exception safe. They prevent a potential memory leak caused by the lack of a well-defined order of evaluation in function calls. E.g.
f(std::shared_ptr<int>(new int(42)), g())
Could be evaluated in this order:
new int(42)
g()
...
If g() throws, the int is leaked.
More efficient (in terms of run-time speed). This only applies to std::make_shared - using it instead of std::shared_ptr directly allows the implementation to perform a single allocation both for the object and for the control block.
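A small experiment you can run to see the difference; the counting operator new replacement is just demo scaffolding, not part of the recommendation:

#include <cstdio>
#include <cstdlib>
#include <memory>
#include <new>

static int g_allocs = 0;   // demo scaffolding: count every global allocation

void* operator new(std::size_t n) {
    ++g_allocs;
    if (void* p = std::malloc(n ? n : 1)) return p;
    throw std::bad_alloc{};
}
void operator delete(void* p) noexcept { std::free(p); }
void operator delete(void* p, std::size_t) noexcept { std::free(p); }

int main() {
    g_allocs = 0;
    std::shared_ptr<int> a(new int(42));   // int + control block: 2 allocations
    std::printf("shared_ptr(new): %d allocations\n", g_allocs);

    g_allocs = 0;
    auto b = std::make_shared<int>(42);    // fused: 1 allocation
    std::printf("make_shared:     %d allocations\n", g_allocs);
}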
You can find more information in this question.
It can still be necessary to use malloc and free in C++ when you are interacting with APIs specified using plain C, because it is not guaranteed to be safe to use free to deallocate memory allocated with operator new (which is ultimately what all of the managed memory classes use), nor to use operator delete to deallocate memory allocated with malloc.
A typical example is POSIX getline (not to be confused with std::getline): it takes a pointer to a char * variable; that variable must point to a block of memory allocated with malloc (or it can be NULL, in which case getline will call malloc for you); when you are done calling getline you are expected to call free on that variable.
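A minimal sketch of that contract (POSIX-only; getline is not standard C++):

#include <stdio.h>
#include <stdlib.h>

int main() {
    char* line = NULL;   // getline allocates with malloc (or grows with realloc)
    size_t cap = 0;
    while (getline(&line, &cap, stdin) != -1) {
        fputs(line, stdout);
    }
    free(line);          // must be free(), never delete
    return 0;
}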
Similarly, if you are writing a library, it can make sense to use C++ internally but define an extern "C" API for your external callers, because that gives you better binary interface stability and cross-language interoperability. And if you return heap-allocated POD objects to your callers, you might want to let them deallocate those objects with free; they can't necessarily use delete, and making them call YourLibraryFree when there are no destructor-type operations needed is unergonomic.
It can also still be necessary to use malloc when implementing resizable container objects, because there is no equivalent of realloc for operator new.
But as the other answers say, when you don't have this kind of interface constraint tying your hands, use one of the managed memory classes instead.
It's always better to use new. If you use malloc, you still have to manually check whether the allocation succeeded.
In modern C++ you can use smart pointers. With make_unique and make_shared you never call new explicitly. std::unique_ptr is no bigger than the underlying raw pointer, and the overhead of using it is minimal.
The answer to "should I use new or malloc" is the single responsibility principle.
Resource management should be done by a type that has that as its sole purpose.
Those classes already exist: unique_ptr, vector, etc.
Directly using either malloc or new is a cardinal sin.
zwol's answer already gives the correct answer on correctness: use malloc()/free() only when interacting with C interfaces.
I'm not going to repeat those details, I'm going to answer the performance question.
The truth is that the performance of malloc() and new can, and does, differ. When you perform an allocation with new, the memory will generally be allocated via a call to the global operator new() function, which is distinct from malloc(). It is trivial to implement operator new() by calling through to malloc(), but this is not necessarily what is done.
As a matter of fact, I've seen a system where an operator new() that calls through to malloc() would outperform the standard implementation of operator new() by roughly 100 CPU cycles per call. That's definitely a measurable difference, and a clear indication that the standard implementation does something very different from malloc().
So, if you are worried about performance, there are three things to do:
Measure your performance.
Write replacement implementations for the global operator new() function and its friends.
Measure your performance and compare.
The gains/losses may or may not be significant.
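A hedged sketch of step 2: a global pair that simply forwards to malloc()/free() (array forms and alignment overloads omitted for brevity). Link it into your benchmark build and compare:

#include <cstdlib>
#include <new>

void* operator new(std::size_t size) {
    if (void* p = std::malloc(size ? size : 1)) return p;
    throw std::bad_alloc{};
}
void operator delete(void* p) noexcept { std::free(p); }
void operator delete(void* p, std::size_t) noexcept { std::free(p); }

int main() {
    int* p = new int(7);   // now serviced by malloc() under the hood
    delete p;
}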
I'm starting with the assumption that, generally, it is a good idea to allocate small objects in the stack, and big objects in dynamic memory. Another assumption is that I'm possibly confused while trying to learn about memory, STL containers and smart pointers.
Consider the following example, where I have an object that is necessarily allocated in the free store through a smart pointer, and I can rely on clients getting said object from a factory, for instance. This object contains some data that is specifically allocated using an STL container, which happens to be a std::vector. In one case, this data vector itself is dynamically allocated using some smart pointer, and in the other situation I just don't use a smart pointer.
Is there any practical difference between design A and design B, described below?
Situation A:
class SomeClass{
public:
    SomeClass(){ /* initialize some potentially big STL container */ }
private:
    std::vector<double> dataVector_;
};
Situation B:
class SomeOtherClass{
public:
    SomeOtherClass() { /* initialize some potentially big STL container,
                          but is it allocated in any different way? */ }
private:
    std::unique_ptr<std::vector<double>> pDataVector_;
};
Some factory functions.
std::unique_ptr<SomeClass> someClassFactory(){
    return std::make_unique<SomeClass>();
}

std::unique_ptr<SomeOtherClass> someOtherClassFactory(){
    return std::make_unique<SomeOtherClass>();
}
Use case:
int main(){
    // in my case I can reliably assume that objects themselves
    // are going to always be allocated in dynamic memory
    auto pSomeClassObject(someClassFactory());
    auto pSomeOtherClassObject(someOtherClassFactory());
    return 0;
}
I would expect that both design choices have the same outcome, but do they?
Is there any advantage or disadvantage for choosing A or B? Specifically, should I generally choose design A because it's simpler or are there more considerations? Is B morally wrong because it can dangle for a std::vector?
tl;dr : Is it wrong to have a smart pointer pointing to a STL container?
edit:
The related answers pointed to useful additional information for someone as confused as myself.
Usage of objects or pointers to objects as class members and memory allocation
and Class members that are objects - Pointers or not? C++
And changing some Google keywords led me to When vectors are allocated, do they use memory on the heap or the stack?
std::unique_ptr<std::vector<double>> is slower, takes more memory, and the only advantage is that it contains an additional possible state: "vector doesn't exist". However, if you care about that state, use boost::optional<std::vector> instead. You should almost never have a heap-allocated container, and definitely never use a unique_ptr. It actually works fine, no "dangling", it's just pointlessly slow.
Using std::unique_ptr here is just wasteful unless your goal is a compiler firewall (basically hiding the compile-time dependency on vector, but then you'd need a forward declaration of the standard container).
You're adding an indirection but, more importantly, the full contents of SomeClass turn into 3 separate memory blocks to load when accessing them (SomeClass, merged with/containing the unique_ptr's block, pointing to std::vector's block, pointing to its element array). In addition, you're paying one extra, superfluous level of heap overhead.
Now you might start imagining scenarios where an indirection is helpful to the vector, like maybe you can shallow move/swap the unique_ptrs between two SomeClass instances. Yes, but vector already provides that without a unique_ptr wrapper on top. And it already has states like empty that you can reuse for some concept of validity/nilness.
Remember that variable-sized containers are themselves small objects, not big ones; they point to potentially big blocks. vector isn't big; its dynamic contents can be. The idea of adding indirections for big objects isn't a bad rule of thumb, but vector is not a big object. With move semantics in place, it's worth thinking of it as a little memory block pointing to a big one, one that can be shallow-copied and swapped cheaply. Before move semantics, there were more reasons to think of something like std::vector as one indivisibly large object (though its contents were always swappable), but now it's best seen as a little handle to big, dynamic contents.
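A small demonstration of how cheap that shallow move is; the element addresses survive the move unchanged (the moved-from vector is left in a valid but unspecified state):

#include <cassert>
#include <utility>
#include <vector>

int main() {
    std::vector<double> big(1000000, 3.14);
    const double* elems = big.data();

    // Moving steals the heap block: no per-element copies, and pointers to
    // the elements remain valid (they now belong to 'stolen').
    std::vector<double> stolen = std::move(big);
    assert(stolen.data() == elems);
}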
Some common reasons to introduce an indirection through something like unique_ptr are:
Abstraction & hiding. If you're trying to abstract or hide the concrete definition of some type/subtype, Foo, then this is where you need the indirection so that its handle can be captured (or potentially even used with abstraction) by those who don't know exactly what Foo is.
To allow a big, contiguous 1-block-type object to be passed around from owner to owner without invoking a copy or invalidating references/pointers (iterators included) to it or its contents.
A hasty kind of reason that's wasteful but sometimes useful in a deadline rush is to simply introduce a validity/null state to something that doesn't inherently have it.
Occasionally it's useful as an optimization to hoist out certain less frequently-accessed, larger members of an object so that its commonly-accessed elements fit more snugly (and perhaps with adjacent objects) in a cache line. There unique_ptr can let you split apart that object's memory layout while still conforming to RAII.
Now wrapping a shared_ptr on top of a standard container might have more legitimate applications if you have a container that can actually be owned (sensibly) by more than one owner. With unique_ptr, only one owner can possess the object at a time, and standard containers already let you swap and move each other's internal guts (the big, dynamic parts). So there's very little reason I can think of to wrap a standard container directly with a unique_ptr, as it's already somewhat like a smart pointer to a dynamic array (but with more functionality to work with that dynamic data, including deep copying it if desired).
And if we talk about non-standard containers, like say you're working with a third-party library that provides some data structures whose contents can get very large but which fails to provide those cheap, non-invalidating move/swap semantics, then you might superficially wrap one in a unique_ptr, exchanging some creation/access/destruction overhead to get those cheap move/swap semantics back as a workaround. For the standard containers, no such workaround is needed.
I agree with @MooingDuck; I don't think using std::unique_ptr has any compelling advantages here. However, I could see a use case for std::shared_ptr if the member data is very large and the class is going to support COW (copy-on-write) semantics (or any other use case where the data is shared across multiple instances).
In std::vector's push_back implementation, when size() == capacity(), the vector allocates a block twice the size and copies the old elements over. I want to ask: what is the most effective way of doing that copying?
This is not really specified in the standard, but left to the implementation. Thus, different C++ library versions will do this differently. However, the allocator of the vector must be used for all allocations and deallocations (hence no usage of realloc()).
What the standard does specify, though, is that if objects can be moved (instead of copied), then they should be moved (since C++11). Typically, a new block of raw memory will be allocated, usually twice the size of the previous one, and then the elements moved (or copied and destroyed) from the old block to the new one. For the generic case, this moving/copying must be done one element at a time (since otherwise the integrity of the moves/copies cannot be guaranteed without knowledge of the internals of the respective move/copy constructors). Of course, for POD (plain old data) types, optimisations via std::memcpy() are possible (and most likely implemented).
You can try to look at your C++ library implementation, but be warned that the code can be quite opaque to those unfamiliar with template metaprogramming: it is complicated by the various optimisations (for PODs, movable types, etc.).
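As a hedged sketch of that grow step (the names are mine, not from any particular library): elements are moved when the move constructor cannot throw, otherwise copied, so the old buffer stays intact if a copy constructor throws mid-way.

#include <cstddef>
#include <memory>
#include <utility>

template <typename T, typename Alloc = std::allocator<T>>
T* grow(T* old_data, std::size_t size, std::size_t old_cap,
        std::size_t new_cap, Alloc alloc = {}) {
    using Traits = std::allocator_traits<Alloc>;
    T* fresh = Traits::allocate(alloc, new_cap);
    std::size_t i = 0;
    try {
        for (; i < size; ++i)
            Traits::construct(alloc, fresh + i,
                              std::move_if_noexcept(old_data[i]));
    } catch (...) {
        // roll back: destroy what we built, release the new block, rethrow
        while (i) Traits::destroy(alloc, fresh + --i);
        Traits::deallocate(alloc, fresh, new_cap);
        throw;
    }
    for (std::size_t j = 0; j < size; ++j) Traits::destroy(alloc, old_data + j);
    Traits::deallocate(alloc, old_data, old_cap);
    return fresh;
}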
Sounds a bit as if you wanted to implement std::vector<> yourself.
The most efficient way is to avoid reallocations in the first place, which often but not always is possible.
If you have prior knowledge about how many items you will push_back() next, you can use std::vector<>::reserve() and avoid reallocations altogether.
If you really were to implement it yourself, you might want to use realloc(), in spite of the fact that it is a horrible function which, depending on its parameter values, can do anything from malloc() to free() to a true reallocation.
The reason this function can be helpful is that there is a good chance there is still room behind the data block on the heap. realloc() is the only function that can avoid moving the data every single time, by expanding the heap block in place and not touching the data at all, if there is enough room to do so.
The drawback of using realloc() is that your "special performance" implementation would forgo the concept of allocators, as well as fail to handle element types that require copy constructor calls, etc. So you had best not name your vector "vector" but something like "SimpleDataVector", and possibly build in some mechanism to detect misuse.
I'm new to C++ and I'm wondering why I should even bother using new and delete? It can cause problems (memory leaks) and I don't get why I shouldn't just initialize a variable without the new operator. Can someone explain it to me? It's hard to google that specific question.
For historical and efficiency reasons, C++ (and C) memory management is explicit and manual.
Sometimes, you might allocate on the call stack (e.g. by using VLAs or alloca(3)). However, that is not always possible, because
stack size is limited (depending on the platform, to a few kilobytes or a few megabytes).
memory need is not always FIFO or LIFO. It does happen that you need to allocate memory that will be freed (or become useless) much later during execution, in particular because it might be the result of some function (and the caller, or its caller, would release that memory).
You definitely should read about garbage collection and dynamic memory allocation. In some languages (Java, Ocaml, Haskell, Lisp, ....) or systems, a GC is provided, and is in charge of releasing memory of useless (more precisely unreachable) data. Read also about weak references. Notice that most GCs need to scan the call stack for local pointers.
Notice that it is possible, but difficult, to have quite efficient garbage collectors (but usually not in C++). For some programs, OCaml, with a generational copying GC, is faster than the equivalent C++ code with explicit memory management.
Managing memory explicitly has the advantage (important in C++) that you don't pay for something you don't need. It has the inconvenience of putting more burden on the programmer.
In C or C++ you might sometimes consider using Boehm's conservative garbage collector. With C++ you might sometimes need to use your own allocator instead of the default std::allocator. Read also about smart pointers, reference counting, std::shared_ptr, std::unique_ptr, std::weak_ptr, the RAII idiom, and the rule of three (which in modern C++ becomes the rule of five). The recent wisdom is to avoid explicit new and delete (e.g. by using standard containers and smart pointers).
Be aware that the most difficult situations in managing memory are arbitrary, possibly circular, graphs of references.
On Linux and some other systems, valgrind is a useful tool to hunt memory leaks.
The alternative, allocating on the stack, will cause you trouble as stack sizes are often limited to Mb magnitudes and you'll get lots of value copies. You'll also have problems sharing stack-allocated data between function calls.
There are alternatives: using std::shared_ptr (C++11 onwards) will do the delete for you once the shared pointer is no longer being used. A technique referred to by the hideous acronym RAII is exploited by the shared pointer implementation. I mention it explicitly since most resource cleanup idioms are RAII-based. You can also make use of the comprehensive data structures available in the C++ Standard Template Library which eliminate the need to get your hands too dirty with explicit memory management.
But formally, every new must be balanced with a delete. Similarly for new[] and delete[].
Indeed, in many cases new and delete are not needed; you can just use standard containers instead and leave the allocation/deallocation management to them.
One of the reasons you may need explicit allocation is objects whose identity is important (i.e. they are not just values that can be copied around).
For example, if you have a GUI "window" object, then making copies probably doesn't make sense, and that more or less rules out all standard containers (they're designed for objects that can be copied and assigned). In this case, if the object needs to survive the function that creates it, the simplest solution is probably to allocate it explicitly on the heap, possibly using a smart pointer to avoid leaks or use-after-delete.
In other cases it may be important to avoid copies not because they're illegal, but because they're not very efficient (big objects), and explicitly handling the instance's lifetime may be a better (faster) solution.
Another case where explicit allocation/deallocation may be the best option are complex data structures that cannot be represented by the standard library (for example a tree in which each node is also part of a doubly-linked list).
Modern C++ styles often frown on explicit calls to new and delete outside of specialized resource management code.
This is not because the stack/automatic storage is sufficient, but rather because RAII smart resource owners (be they containers, shared pointers, or something else) make almost all direct memory wrangling unnecessary. And as the problem of memory management is often error-prone, this makes your code more robust, easier to read, and sometimes faster (as the fancy resource owners can use techniques you might not bother with everywhere).
This is exemplified by the rule of zero: write no destructor, copy/move assign, copy/move constructor. Store state in smart storage, and have it handle it for you.
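A minimal rule-of-zero sketch, assuming a class whose members already manage themselves:

#include <memory>
#include <string>
#include <utility>
#include <vector>

// Every member already owns its resource, so Widget needs none of the five
// special member functions: the implicit ones do the right thing.
class Widget {
    std::vector<double> samples_;         // owns and frees its buffer
    std::unique_ptr<std::string> note_;   // sole owner; makes Widget move-only
public:
    explicit Widget(std::size_t n) : samples_(n) {}
};

int main() {
    Widget w(1024);
    Widget moved = std::move(w);   // implicit move; copying is implicitly deleted
}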
None of the above applies when you yourself are writing smart memory owning classes. This is a rare thing to need to do, however. It also requires C++14 (for make_unique) to get rid of the penultimate excuse to call new.
Now, the free store is still used, just not directly, under the above style. The free store (aka heap) is needed because automatic storage (aka the stack) only supports really simple object lifetime rules (scope based, compile time deterministic size and count, FILO order). As runtime sized and counted data is common, and object lifetime is often not that simple, the free store is used by most programs. Sometimes copying an object around on the stack is enough to make the simple lifetime less of a problem, but at other times identity is important.
The final reason is stack overflow. On some C++ implementations the stack/automatic storage is seriously constrained in size. What is more, there is rarely, if ever, a reliable failure mode when you put too much stuff in it. By storing large data on the free store, we can reduce the chance that the stack will overflow.
First, if you don't need dynamic allocation, don't use it.
The most frequent reason for needing dynamic allocation is that the object will have a lifetime which is determined by the program logic rather than lexical scope. The new and delete operators are designed to support explicitly managed lifetimes.
Another common reason is that the size or structure of the "object" is determined at runtime. For simple cases (arrays, etc.) there are standard classes (std::vector) which will handle this for you, but for more complicated structures (e.g. graphs and trees), you'll have to do this yourself. (The usual technique here is to create a class representing the graph or tree, and have it manage the memory.)
And there is the case where the object must be polymorphic, and the actual type won't be known until runtime. (There are some tricky ways of handling this without dynamic allocation in the simplest cases, but in general, you'll need dynamic allocation.) In this case, std::unique_ptr might be appropriate to handle the delete, or if the object must be shared, std::shared_ptr (although usually, objects which must be shared fall into the first category, above, and so smart pointers aren't appropriate).
There are probably other reasons as well, but these are the three that I've encountered the most often.
Only in simple programs can you know beforehand how much memory you will use; in general, you cannot foresee it.
However, with modern C++ (C++11 and later) you generally rely on standard library containers like std::vector and std::map for memory allocation, and the use of smart pointers helps you avoid memory leaks, so you rarely need to use new and delete explicitly by hand.
When you use new, your object is stored on the heap and remains there until you manually delete it. Without new, your object goes on the stack and is destroyed automatically when it goes out of scope.
The stack has a fixed size, so if there is no block left to hold a new object, a stack overflow occurs. This often happens when a lot of nested functions are called, or when there is an infinite recursion. On the other hand, if the current size of the heap is too small to accommodate new memory, more memory can be added to the heap by the operating system.
Another reason may be if you are explicitly calling an external library or API with a C-style interface. Setting up a callback in such cases often means context data must be supplied and returned in the callback, and such an interface usually provides only a "simple" void* or int*. Allocating an object or struct with new is appropriate for such actions (you can delete it later in the callback, should you need to).
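A hedged sketch of that pattern; register_callback and its signature are hypothetical stand-ins for a real C API, with a fake implementation so the example runs:

#include <cstdio>

// Hypothetical stand-in for a library's C-style callback registration;
// a real API would be declared in the library's own header.
using event_cb = void (*)(void* user_data);
static event_cb g_cb   = nullptr;
static void*    g_data = nullptr;
void register_callback(event_cb cb, void* user_data) { g_cb = cb; g_data = user_data; }

struct Context { int id; };

void on_event(void* user_data) {
    Context* ctx = static_cast<Context*>(user_data);
    std::printf("event for context %d\n", ctx->id);
    delete ctx;                         // balances the new below
}

void setup() {
    // The context must outlive setup(), so it is heap-allocated with new.
    register_callback(on_event, new Context{42});
}

int main() {
    setup();
    g_cb(g_data);                       // simulate the library firing the event later
}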