C++ Dynamic vs Stack Objects and How to Use Them [closed] - c++

I've always done allocations dynamically on the heap; I've done a lot of Objective-C programming as well as plain C and since I'm usually dealing with large chunks of memory, heap objects are necessary to prevent a stack overflow.
I've recently been told that using dynamically allocated objects is discouraged in C++ and that stack objects should be used whenever possible. Why is this?
I guess the best way to illustrate this is by example:
Class *_obj1;
Class *_obj2;

void doThis(Class *obj) {}

void create() {
    Class *obj1 = new Class();
    Class obj2;
    doThis(obj1);
    doThis(&obj2);
    _obj1 = obj1;
    _obj2 = &obj2;
}

int main (int argc, const char * argv[]) {
    create();
    _obj1->doSomething();
    _obj2->doSomething();
    return 0;
}
This creates 2 objects, stores them in the global pointers, then main() calls a method on each. The Class constructor allocates a char* buffer and stores the C string "Hello!" in it; the ~Class() destructor frees that memory. The doSomething() method prints out "buff: %s" using printf(). Simple enough. Now let's run it:
Dealloc
Buff: Hello!
Buff: ¯ø_ˇ
Whoa, what happened? C++ destroyed obj2 when create() returned, even though we stored a pointer to it in _obj2; that's because it lives on the stack and not the heap, and C++ has no retain count mechanism like Objective-C (I tried implementing one at one point; it worked perfectly, but I didn't feel like adding it to everything as a superclass). So we have to jump through hoops to keep it around after the function returns.
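For reference, here is a minimal sketch of what the Class described above might look like; this is a reconstruction from the description, not the original definition:

#include <cstdio>
#include <cstring>

class Class {
    char *buff;
public:
    Class() : buff(new char[7]) { std::strcpy(buff, "Hello!"); }   // heap buffer for the C string
    ~Class() { std::printf("Dealloc\n"); delete[] buff; }          // frees the buffer
    void doSomething() { std::printf("Buff: %s\n", buff); }        // prints the buffer
};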

Instead of objects, think of "simpler" types. Would you do this:
void create() {
    int *obj1 = new int();
    int obj2;
    _obj1 = obj1;
    _obj2 = &obj2;
}
Would you think this would work? Clearly not.
It's very simple: you can't pass out a pointer to an object allocated on the stack (and, as a rule of thumb, you shouldn't pass out a pointer to an object you have just allocated; whoever allocates an object is responsible for freeing it).

Heap objects per se are not wrong, failure to manage their lifetime is.
Stack objects have the property that their destructor will be called regardless of how the code leaves the function (exception, return value). Smart pointers exploit this to manage the lifetime of heap allocated objects (a happy medium?)
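For instance, a minimal sketch using std::unique_ptr (C++11) with the Class type from the question: the wrapper itself is a stack object, so its destructor runs on every exit path and deletes the heap-allocated Class.

#include <memory>

void process() {
    std::unique_ptr<Class> obj(new Class());   // heap object, stack-managed handle
    obj->doSomething();
}   // whether we return normally or an exception is thrown, ~unique_ptr deletes the Class here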

A basic design principle of C++ is that you don't pay for what you don't use, so that C++ can be used to write highly optimized code. Stack allocation is more efficient, whatever your language.

Related

When to use new-operator? [closed]

When should I use the new-operator?
In my example I get the same result using two different methods:
#include <iostream>

int main() {
    int *p1;
    int n1 = 5;
    p1 = &n1;

    int *p2;
    p2 = new int;
    *p2 = 5;

    std::cout << *p1 << std::endl;
    std::cout << *p2 << std::endl;
    return 0;
}
The purpose of using dynamically allocated memory is one (or many) of the following:
Run-time control over the object's lifetime. E.g. the object is created manually by new and destroyed manually by delete at the user's desire.
Run-time control over the object's type. E.g. you can decide the actual type of a polymorphic object at run-time.
Run-time control over the objects' quantity. E.g. you can decide the array size or the number of elements in the list at run-time (a sketch follows below).
When the object is simply too big to be reasonably placed into any other kind of memory. E.g. a large input-output buffer that is too big to be allocated on the stack.
In your specific example none of these reasons apply, which means that there's simply no point in using dynamic memory there.
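For contrast, a minimal sketch of the third reason above (run-time control over quantity), where the allocation size is only known at run time; in modern C++ a std::vector<int> would normally be preferred, since it performs the delete[] for you:

#include <cstddef>
#include <iostream>

int main() {
    std::size_t n;
    std::cin >> n;              // size known only at run time
    int *values = new int[n];   // cannot be a fixed-size automatic array
    for (std::size_t i = 0; i < n; ++i)
        values[i] = static_cast<int>(i);
    delete[] values;            // pair new[] with delete[]
    return 0;
}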
Considering the recent C++11 and upcoming C++14 standards: you should mostly use the new operator when programming in languages with garbage collection, such as Java or C#, where it is quite natural. But in modern C++ you can (and almost always should) avoid allocating memory directly. We have a nice set of smart pointers instead now.
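A sketch of what "avoid allocating memory directly" can look like with those smart pointers (std::make_shared is C++11; std::make_unique only arrives in C++14):

#include <memory>

int main() {
    auto sp = std::make_shared<int>(5);    // shared ownership, no explicit new or delete
    std::unique_ptr<int> up(new int(5));   // sole ownership, freed when up goes out of scope
    return *sp + *up;                      // both allocations are released automatically
}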
Use new when you want to allocate from the heap, not the stack. Or, moving up a level of abstraction: use new when you need the allocated memory to remain allocated after the function (more properly, the scope) in which it is allocated may (in the case of threading) have exited.
You should use new when you wish an object to remain in existence until you delete it. If you do not use new then the object will be destroyed when it goes out of scope.
Some people will say that the use of new decides whether your object is on the heap or the stack, but that is only true of variables declared within functions.
Allocating (and freeing) objects with new is far more expensive than allocating them in place, so its use should be restricted to where necessary.
int * p2 = new int;
The new int part tells the program you want some new storage suitable for holding an int. The new operator uses the type to figure out how many bytes are needed. Then it finds the memory and returns the address. Next, you assign the address to p2, which is declared to be of type pointer-to-int. Now p2 is the address and *p2 is the value stored there. Compare this with assigning the address of a variable to a pointer:
int n1;
int * p1 = &n1;
In both cases (p2 and p1), you assign the address of an int to a pointer. In the second case, you can also access the int by name: n1. In the first case, your only access is via the pointer.
Remember that you should use delete to free the memory allocated by new:
delete p2;
You need to read some good books ...
I think that "C++ Primer Plus" is a good one for you.
In this piece of your code you do deal with memory, but with automatic memory: the compiler sorts out for you where to store each variable. You have p1 pointing at n1, but most of the work was done automatically.
int *p1;
int n1 = 5;
p1 = &n1;
However, in the next piece of code you request to dynamically allocate an int:
int *p2;
p2 = new int;
*p2 = 5;
Here you have created a new integer that is stored dynamically, so you should also delete it; otherwise you have created your first memory leak. If you allocate dynamically, you have to take care to delete it after use:
delete p2;
This is the largest difference: once you start to allocate memory using new, you must also delete it; otherwise the destructor of the object will never run and the memory you have allocated will not be reclaimed.

How can I overload the new operator to allocate on the stack? [closed]

How can I overload the new operator for a class type so that it allocates the memory on the stack instead of the heap (basically so that the user doesn't have to call delete afterwards)?
What about something like this:
class A{
private:
    A(int i):
        this->i(i);
    {}
    A a;
    int i;
public:
    void* operator new(size_t sz){
        a(12);
    }
};
Would the above solution work?
Don't!
Use automatic storage...
The new operator is designed to implement dynamic allocation (what you are calling "on the heap") and, although you can provide your own allocator, you cannot twist it into obeying the scoping rules of objects of automatic storage duration (what you are calling "on the stack").
Instead, write:
MyType myobject; // automatic storage duration
...or smart pointers...
Or, if you don't mind dynamic storage duration but only want to avoid later manual destruction, use smart pointers:
std::unique_ptr<MyType> myptr(new MyType()); // unique, dynamic storage duration
std::shared_ptr<MyType> myptr(new MyType()); // shared, dynamic storage duration
Both of these are found in C++11 (std::) and Boost (boost::).
... or placement new?
Another approach might be placement new but this is a dark and dangerous path to travel that I would certainly not recommend at this stage. Or, frankly, any stage... and you'd usually still need to do manual destruction. All you gain is using the keyword new, which seems pointless.
I think the good answer here is:
Don't overload operator new.
If you still want to go down that road, you can look at this question.
If not, you can always use smart pointers or shared pointers to avoid users having to delete allocated memory.
It seems that you don't know what you're asking. By definition, the new operator allocates memory on the heap. To create an object on the stack, simply declare it as a local variable.
Looking at what you actually want to do, you said that the reason you thought this would be awesome would be:
basically so that the user doesn't have to call delete afterwards
And that functionality is implemented using smart pointers. I highly suggest that you invest your time learning those instead.
Why not just use an automatic variable? It is "on the stack" and you do not need to call the destructor manually:
int foo() {
    A a;
    int i;
    ...
    // don't need to call delete
}
To answer your question literally, there is placement new, which takes memory from the user, so you can use an automatic buffer as that memory:
alignas(int) char buffer[sizeof(int)];
int* p = new (buffer) int;
// ^^^^^^^^
For a non-POD object you do not need to call delete, but you must call the destructor by hand:
class A { public: ~A(){} };
alignas(A) char buffer[sizeof(A)];
A* p = new (buffer) A;
// ^^^^^^^^
p->~A();
alignas is new in C++11; in C++03 you must take care of proper alignment in some other way. Properly aligned memory must be supplied to placement new, otherwise the behavior is undefined.

delete does not assign 0 to the pointer [closed]

In C or C++ when we delete some pointer, it only frees the memory but does not set the pointer to 0. Since we can not check the validity of a pointer, it would have been easier for the programmer to check the nullness of the pointer if the pointer is set to 0 after freeing the memory.
I was just wondering why 'delete' is implemented only to free the memory.
“Since we can not check the validity of a pointer, it would have been easier for the programmer to check the nullness of the pointer if the pointer is set to 0 after freeing the memory.”
Checking for a null value is likely to hide bugs while increasing code size and complexity, plus giving a false sense of security: there is no guarantee that the pointer value (before nulling) hasn't been copied elsewhere. Also, this prevents delete by value, which is a major annoyance with libraries (such as OpenCV) that, misguidedly, offer nulling delete operations.
Instead of such counter-productive practice, use techniques that ensure proper cleanup and prevent invalid pointers, such as using appropriate smart pointers.
Because, in C++, it is (almost) all about performance. You don't want the compiler to add code you don't need; you don't want your program to do things that you haven't written in your code.
If you're sure that no one will check or reuse this pointer, there's no need for nulling it. One instruction less.
Also, if deleting a pointer set it to NULL, what would happen with the other pointers that point to the already deleted memory? They would not be nulled. This could lead to bad things.
Just assign NULL, if you need it.
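That is, a minimal sketch:

delete p;
p = NULL;   // or nullptr in C++11; worthwhile only when later code actually checks p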
There's no need: since you only delete in the destructor of your SBRM (scope-bound resource management) wrapper class, there's nothing else that could possibly access the pointer afterwards:
#include <utility> // for std::forward

template <typename T> struct MyPtr
{
    template <typename ...Args> MyPtr(Args &&... args)
    : p(new T(std::forward<Args>(args)...))
    { }
    ~MyPtr()
    {
        delete p; // done! who cares what `p` is now.
    }
    MyPtr(MyPtr const &) = delete;
    MyPtr & operator=(MyPtr const &) = delete;
    T * operator->() { return p; }
private:
    T * p;
};
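For illustration, a usage sketch of the wrapper above (Widget and useWidget are made-up names, not part of the original answer):

struct Widget { void run() {} };

void useWidget() {
    MyPtr<Widget> w;   // the wrapper allocates a Widget with new
    w->run();
}                      // ~MyPtr runs here and deletes it; the raw pointer is never exposed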
It would not make anything safer even if delete assigned NULL to the pointer, since multiple pointers can point to the same memory address, and assigning NULL to one of them will not make the others NULL as well.
You can delete a const pointer variable, but setting it to NULL afterwards is not possible; nulling only the mutable pointer variables after delete would be somewhat inconsistent.
For one thing, you might want to do pointer arithmetic after delete for efficiency reasons:
const int SIZE = 5;
MyObject* foo[SIZE];
// ... initialize foo with new MyObjects ...
for (MyObject** i = &foo[0], **end = &foo[0] + SIZE; i < end; ++i)
{
    delete *i;   // the array slot keeps its (now dangling) value; i itself keeps advancing
}

C++ about dynamic memory [closed]

What is the difference between:
int myArray[5];
and
int* myArray = new int[5];
int myArray[5] is a bunch of five integers with automatic (or static) storage duration (local memory, often described as "on the stack"). Local memory is released in C++ when the enclosing scope is exited.
int* myArray = new int[5] is a bunch of five integers with dynamic storage duration (dynamic memory, often described as "on the heap"). Dynamic memory is not released when the enclosing scope is exited (myArray has to be an int pointer to store the location of your dynamically created memory).
View the following example:
void foo(){
    int myArray[5];
}

void bar(){
    int* myArray_dynamic = new int[5];
}

int main(){
    foo();
    bar();
}
foo will use stack memory, so when foo returns the memory is freed automatically. However, the dynamically allocated memory whose location is stored in myArray_dynamic in bar won't get freed: only the pointer variable myArray_dynamic itself is destroyed at the end of bar, not the memory it points to.
This creates a memory leak, so for every use of new or new[] there has to be a matching call of delete or delete[] (unless you are working with smart pointers, but that's for another question).
The correct version of bar is
void bar(){
    int* myArray_dynamic = new int[5];
    delete[] myArray_dynamic;
}
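If the manual delete[] is a concern, a sketch of the same function using std::vector, which releases its storage automatically:

#include <vector>

void bar() {
    std::vector<int> myArray_dynamic(5); // heap-backed storage, freed when bar exits
}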
The primary reason to pick one or the other is that dynamic allocation is slower, but can be any size (automatic arrays must have a fixed compile-time size), and also that space on the stack is limited, and if you run out, very bad things happen.
The first is an array: a block of memory allocated either statically or as an automatic variable on the stack during the execution of a function ... it really depends on the context in which it's declared/defined.
The second won't compile :-)
To be serious, you really want:
int* myArray = new int[5];
which means we've declared a pointer-type variable that points to an array of integers, and the array of integers is allocated dynamically by the C++ runtime on the heap by the call to new, which is a segment of memory allocated by the OS for your process to dynamically allocate variables in.
The difference is the lifetime.
int myArray[5];
This reserves storage for an array 5 of int. If myArray is declared at block scope, the array is discarded at the end of the block where it is declared.
int* myArray = new int[5];
This dynamically allocates an array 5 of int, the array exists until it is freed with delete [].
What is the difference
One is valid.
The other is not.
In the second you must write
int* myArray = new int[5];
new returns a pointer to the area dynamically allocated on the heap.

In what kind of situation, c++ destructor will not be called? [closed]

In C++, we love to do something in the destructor. But in what kind of situation will the destructor not be called?
Examples in the following cases:
exit() call in the thread
unhandled exceptions and exit
TerminateProcess() (in Windows)
warm/cold reboot computer
sudden loss of power to the computer...
This is one case every C++ programmer should know:
#include <stdio.h>

class EmbeddedObject {
private:
    char *pBytes;
public:
    EmbeddedObject() {
        pBytes = new char[1000];
    }
    ~EmbeddedObject() {
        printf("EmbeddedObject::~EmbeddedObject()\n");
        delete [] pBytes;
    }
};

class Base {
public:
    ~Base(){
        printf("Base::~Base()\n");
    }
};

class Derived : public Base {
private:
    EmbeddedObject emb;
public:
    ~Derived() {
        printf("Derived::~Derived()\n");
    }
};

int main (int argc, const char * argv[])
{
    Derived *pd = new Derived();
    // later for some good reason, point to it using Base pointer
    Base* pb = pd;
    delete pb;
}
~Base() will be called but ~Derived() will not, so the code in ~Derived() never executes, even though it might have something important to do. Also, EmbeddedObject's destructor should have been called automatically but is not, so EmbeddedObject never gets a chance to free its dynamically allocated data. This causes a memory leak. (Strictly speaking, deleting a derived object through a base-class pointer whose destructor is not virtual is undefined behavior.)
Solution, make destructor in class Base virtual:
class Base {
public:
    virtual ~Base() {
    }
};
Making this one change to the above program means all destructors will be called, in this order: Derived::~Derived(), EmbeddedObject::~EmbeddedObject(), Base::~Base().
Read up on destructors in general. These kinds of problems are more likely to be something of concern to you than the other scenarios you mention. For example in the case of a power down, all bets for safe cleanup are usually off!
In C++ we have good control over enforcing that destructors are called in the order we want, which is good news. However, in the programs you write there is potential for your objects to be leaked and not destroyed at all if you are not careful enough.
Destructors will not be called for objects in a scope that never exits, for example because of an infinite loop.
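For example, a minimal sketch (the names are illustrative): the object below is never destroyed because its enclosing scope never exits:

#include <cstdio>

struct Logger {
    ~Logger() { std::puts("cleanup"); }   // never printed
};

int main() {
    Logger log;
    for (;;) std::getchar();   // the scope containing log never exits
}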
If you create an object with a placement new, the destructor for this object won't be called automatically.
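A minimal sketch of that case, where the destructor has to be invoked by hand (the names are illustrative):

#include <new>

struct A { ~A() { } };

void demo() {
    alignas(A) char buf[sizeof(A)];
    A *p = new (buf) A;   // constructed in the buffer; nothing will destroy it for us
    p->~A();              // so the destructor must be called explicitly
}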
Apart from the obvious things mentioned (exit(), a kill signal, power failure, etc.), there are some very common programming errors that will prevent the destructor being called:
1) A dynamic array of objects is created with
object* x = new object[n], but freed with delete x instead of delete[] x;
2) Instead of using delete on an object, you call free(). While the memory is usually freed, the destructor will not be called (see the sketch after this list).
3) Suppose you have a class hierarchy that should have declared virtual destructors but for some reason didn't. If one of the derived-class instances is deleted through a pointer to another type in the hierarchy, it may not call all the destructors.
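For the first two points, a short sketch of the correct pairings (Widget and demo are illustrative names, not from the original answer):

struct Widget { ~Widget() { } };

void demo(int n) {
    Widget *arr = new Widget[n];
    delete[] arr;             // new[] must be paired with delete[]; plain delete here is undefined behavior

    Widget *one = new Widget;
    delete one;               // delete runs ~Widget(); free(one) would have skipped it
}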
Throw an exception from a destructor that is itself running because of another thrown exception: during stack unwinding this calls std::terminate, so the remaining destructors never run.
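A minimal sketch of that situation; this program calls std::terminate instead of reaching the catch block or running the remaining destructors:

#include <stdexcept>

struct Thrower {
    ~Thrower() noexcept(false) { throw std::runtime_error("from destructor"); }
};

int main() {
    try {
        Thrower t;
        throw std::runtime_error("original");   // unwinding begins, then ~Thrower() throws too
    } catch (...) { }                            // never reached: std::terminate is called
}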