Possible scenarios to justify heap-allocated variables scoped locally? - c++

I encountered the following code snippet.
int main() {
auto a = new A(/* arguments */);
// Do something
delete a;
}
Here A is a very nontrivial class I cannot easily reason about (highly parallelized and networking involved). Because a is instantiated in main (or possibly any other function) and then deleted at the end of the scope, I thought there was no reason to heap-allocate this variable. Instead, I would simply instantiate it as a stack variable with A a(/* arguments */).
However, I wonder if there are any valid reasons to allocate A dynamically. One thing that comes to mind is saving some stack space if A is a massive object, though I doubt that really matters on modern machines.

First of all, there is no reason not to use a smart pointer here, i.e.
auto a = std::make_unique<A>(/* arguments */);
but as to why you would want to heap-allocate rather than create the object on the stack, reasons include:
Size. Class A might be huge. Stack space is not inexhaustible; heap space is much, much larger. Seriously, you can overflow the stack surprisingly easily, even on modern machines. You don't want to stack-allocate an array of 100,000 items, for example.
You might need a pointer for runtime polymorphism. Say that instead of calling A's constructor directly you call some factory function that returns a unique_ptr to a base class from which A inherits, and the rest of your code depends on polymorphic calls through a; you'd need dynamic allocation in that case.
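For instance, a minimal sketch of that second case (Worker, TcpWorker, LocalWorker and make_worker are hypothetical names, not from the question):
#include <iostream>
#include <memory>
#include <string>
// Hypothetical interface the rest of the program depends on.
struct Worker {
    virtual ~Worker() = default;
    virtual void run() = 0;
};
struct TcpWorker : Worker {
    void run() override { std::cout << "running over TCP\n"; }
};
struct LocalWorker : Worker {
    void run() override { std::cout << "running locally\n"; }
};
// The concrete type is chosen at runtime, so the object has to live behind a
// (smart) pointer to preserve its dynamic type.
std::unique_ptr<Worker> make_worker(const std::string& mode) {
    if (mode == "tcp")
        return std::make_unique<TcpWorker>();
    return std::make_unique<LocalWorker>();
}
int main() {
    auto a = make_worker("tcp"); // dynamic allocation is unavoidable here
    a->run();                    // dispatched according to the dynamic type
}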

Related

Difference between Object object = new Object() and Object object [duplicate]

I'm coming from a Java background and have started working with objects in C++. But one thing that occurred to me is that people often use pointers to objects rather than the objects themselves, for example this declaration:
Object *myObject = new Object;
rather than:
Object myObject;
Or instead of using a function, let's say testFunc(), like this:
myObject.testFunc();
we have to write:
myObject->testFunc();
But I can't figure out why we should do it this way. I would assume it has to do with efficiency and speed, since we get direct access to the memory address. Am I right?
It's very unfortunate that you see dynamic allocation so often. That just shows how many bad C++ programmers there are.
In a sense, you have two questions bundled up into one. The first is when should we use dynamic allocation (using new)? The second is when should we use pointers?
The important take-home message is that you should always use the appropriate tool for the job. In almost all situations, there is something more appropriate and safer than performing manual dynamic allocation and/or using raw pointers.
Dynamic allocation
In your question, you've demonstrated two ways of creating an object. The main difference is the storage duration of the object. When doing Object myObject; within a block, the object is created with automatic storage duration, which means it will be destroyed automatically when it goes out of scope. When you do new Object(), the object has dynamic storage duration, which means it stays alive until you explicitly delete it. You should only use dynamic storage duration when you need it.
That is, you should always prefer creating objects with automatic storage duration when you can.
The main two situations in which you might require dynamic allocation:
You need the object to outlive the current scope - that specific object at that specific memory location, not a copy of it. If you're okay with copying/moving the object (most of the time you should be), you should prefer an automatic object.
You need to allocate a lot of memory, which may easily fill up the stack. It would be nice if we didn't have to concern ourselves with this (most of the time you shouldn't have to), as it's really outside the purview of C++, but unfortunately, we have to deal with the reality of the systems we're developing for.
When you do absolutely require dynamic allocation, you should encapsulate it in a smart pointer or some other type that performs RAII (like the standard containers). Smart pointers provide ownership semantics of dynamically allocated objects. Take a look at std::unique_ptr and std::shared_ptr, for example. If you use them appropriately, you can almost entirely avoid performing your own memory management (see the Rule of Zero).
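As a minimal sketch of that advice (Widget is a hypothetical placeholder class):
#include <memory>
struct Widget {
    void frob() {}
};
int main() {
    // unique_ptr owns the Widget; no delete is needed anywhere.
    auto w = std::make_unique<Widget>();
    w->frob();
    // shared_ptr for shared ownership; freed when the last owner goes away.
    auto shared = std::make_shared<Widget>();
    auto another_owner = shared;
}   // both objects are destroyed automatically here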
Pointers
However, there are other more general uses for raw pointers beyond dynamic allocation, but most have alternatives that you should prefer. As before, always prefer the alternatives unless you really need pointers.
You need reference semantics. Sometimes you want to pass an object using a pointer (regardless of how it was allocated) because you want the function to which you're passing it to have access to that specific object (not a copy of it). However, in most situations, you should prefer reference types to pointers, because this is specifically what they're designed for. Note this is not necessarily about extending the lifetime of the object beyond the current scope, as in situation 1 above. As before, if you're okay with passing a copy of the object, you don't need reference semantics.
You need polymorphism. You can only call functions polymorphically (that is, according to the dynamic type of an object) through a pointer or reference to the object. If that's the behavior you need, then you need to use pointers or references. Again, references should be preferred.
You want to represent that an object is optional by allowing a nullptr to be passed when the object is being omitted. If it's an argument, you should prefer to use default arguments or function overloads. Otherwise, you should preferably use a type that encapsulates this behavior, such as std::optional (introduced in C++17 - with earlier C++ standards, use boost::optional).
You want to decouple compilation units to improve compilation time. The useful property of a pointer is that you only require a forward declaration of the pointed-to type (to actually use the object, you'll need a definition). This allows you to decouple parts of your compilation process, which may significantly improve compilation time. See the Pimpl idiom.
You need to interface with a C library or a C-style library. At this point, you're forced to use raw pointers. The best thing you can do is make sure you only let your raw pointers loose at the last possible moment. You can get a raw pointer from a smart pointer, for example, by using its get member function. If a library performs some allocation for you which it expects you to deallocate via a handle, you can often wrap the handle up in a smart pointer with a custom deleter that will deallocate the object appropriately.
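For example, a FILE* handle from the C standard library can be wrapped so that fclose runs automatically; a rough sketch, with error handling reduced to a bare check:
#include <cstdio>
#include <memory>
// Deleter that closes the file when the unique_ptr is destroyed.
struct FileCloser {
    void operator()(std::FILE* f) const { if (f) std::fclose(f); }
};
int main() {
    std::unique_ptr<std::FILE, FileCloser> file(std::fopen("data.txt", "r"));
    if (file)                   // fopen may have failed
        std::fgetc(file.get()); // get() hands the raw FILE* to the C API
}                               // fclose runs automatically here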
There are many use cases for pointers.
Polymorphic behavior. For polymorphic types, pointers (or references) are used to avoid slicing:
class Base { ... };
class Derived : public Base { ... };
void fun(Base b) { ... }
void gun(Base* b) { ... }
void hun(Base& b) { ... }
Derived d;
fun(d); // oops, all Derived parts silently "sliced" off
gun(&d); // OK, a Derived object IS-A Base object
hun(d); // also OK, reference also doesn't slice
Reference semantics and avoiding copying. For non-polymorphic types, a pointer (or a reference) will avoid copying a potentially expensive object:
Base b;
fun(b); // copies b, potentially expensive
gun(&b); // takes a pointer to b, no copying
hun(b); // regular syntax, behaves as a pointer
Note that C++11 has move semantics that can avoid many copies of expensive objects into function arguments and as return values. But using a pointer will definitely avoid those copies and will allow multiple pointers to the same object (whereas an object can only be moved from once).
Resource acquisition. Creating a pointer to a resource using the new operator is an anti-pattern in modern C++. Use a special resource class (one of the Standard containers) or a smart pointer (std::unique_ptr<> or std::shared_ptr<>). Consider:
{
auto b = new Base;
... // oops, if an exception is thrown, destructor not called!
delete b;
}
vs.
{
auto b = std::make_unique<Base>();
... // OK, now exception safe
}
A raw pointer should only be used as a "view" and not in any way involved in ownership, be it through direct creation or implicitly through return values. See also this Q&A from the C++ FAQ.
More fine-grained lifetime control. Every time a shared pointer is copied (e.g. as a function argument), the resource it points to is kept alive. Regular objects (not created by new, either directly by you or inside a resource class) are destroyed when they go out of scope.
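A small sketch of that difference (Session is a hypothetical class, used only to make the destructor visible):
#include <iostream>
#include <memory>
#include <vector>
struct Session {
    ~Session() { std::cout << "session closed\n"; }
};
int main() {
    std::vector<std::shared_ptr<Session>> active;
    {
        auto s = std::make_shared<Session>();
        active.push_back(s);  // the copy in the vector shares ownership
    }                         // s goes out of scope, but the Session lives on
    std::cout << "still alive: " << active.size() << "\n";
}   // the last owner is destroyed here; only now is "session closed" printed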
There are many excellent answers to this question, including the important use cases of forward declarations, polymorphism etc. but I feel a part of the "soul" of your question is not answered - namely what the different syntaxes mean across Java and C++.
Let's examine the situation comparing the two languages:
Java:
Object object1 = new Object(); //A new object is allocated by Java
Object object2 = new Object(); //Another new object is allocated by Java
object1 = object2;
//object1 now points to the object originally allocated for object2
//The object originally allocated for object1 is now "dead" - nothing points to it, so it
//will be reclaimed by the Garbage Collector.
//If either object1 or object2 is changed, the change will be reflected to the other
The closest equivalent to this, is:
C++:
Object * object1 = new Object(); //A new object is allocated on the heap
Object * object2 = new Object(); //Another new object is allocated on the heap
delete object1;
//Since C++ does not have a garbage collector, if we don't do that, the next line would
//cause a "memory leak", i.e. a piece of claimed memory that the app cannot use
//and that we have no way to reclaim...
object1 = object2; //Same as Java, object1 points to object2.
Let's see the alternative C++ way:
Object object1; //A new object is allocated on the STACK
Object object2; //Another new object is allocated on the STACK
object1 = object2;//!!!! This is different! The CONTENTS of object2 are COPIED onto object1,
//using the "copy assignment operator", the definition of operator =.
//But, the two objects are still different. Change one, the other remains unchanged.
//Also, the objects get automatically destroyed once the function returns...
The best way to think of it is that -- more or less -- Java (implicitly) handles pointers to objects, while C++ may handle either pointers to objects, or the objects themselves.
There are exceptions to this -- for example, if you declare Java "primitive" types, they are actual values that are copied, and not pointers.
So,
Java:
int object1; //An integer is allocated on the stack.
int object2; //Another integer is allocated on the stack.
object1 = object2; //The value of object2 is copied to object1.
That said, using pointers is NOT necessarily either the correct or the wrong way to handle things; however other answers have covered that satisfactorily. The general idea though is that in C++ you have much more control on the lifetime of the objects, and on where they will live.
Take home point -- the Object * object = new Object() construct is actually what is closest to typical Java (or C# for that matter) semantics.
Preface
Java is nothing like C++, contrary to the hype. The Java hype machine would like you to believe that because Java has C++-like syntax, the languages are similar. Nothing could be further from the truth. This misinformation is part of the reason why Java programmers move to C++ and use Java-like syntax without understanding the implications of their code.
Onwards we go
But I can't figure out why we should do it this way. I would assume it has to do with efficiency and speed, since we get direct access to the memory address. Am I right?
To the contrary, actually. The heap is much slower than the stack, because the stack is very simple compared to the heap. Automatic storage variables (aka stack variables) have their destructors called once they go out of scope. For example:
{
std::string s;
}
// s is destroyed here
On the other hand, for a dynamically allocated object, the destructor must be invoked explicitly; delete calls that destructor for you.
{
std::string* s = new std::string;
delete s; // destructor called
}
This has nothing to do with the new syntax prevalent in C# and Java. They are used for completely different purposes.
Benefits of dynamic allocation
1. You don't have to know the size of the array in advance
One of the first problems many C++ programmers run into is that, when accepting arbitrary input from users, you can only allocate a fixed size for a stack variable, and you cannot change the size of an array afterwards. For example:
char buffer[100];
std::cin >> buffer;
// bad input = buffer overflow
Of course, if you used a std::string instead, it resizes itself internally, so that isn't a problem. But essentially the solution to this problem is dynamic allocation. You can allocate memory based on the user's input, for example:
int n = 0;
std::cout << "How many items do you need?";
std::cin >> n;
int* pointer = new int[n];
Side note: one mistake many beginners make is using variable-length arrays such as int arr[n]. They are a C99 feature that GCC (and Clang, which mirrors many of GCC's extensions) accepts in C++ only as a nonstandard extension, so they should not be relied on.
Because the heap is much bigger than the stack, one can allocate/reallocate as much memory as needed, whereas the stack has a hard limit.
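(In modern C++ the same runtime-sized allocation is usually written with std::vector, which allocates on the heap internally but cleans up after itself; a minimal sketch:)
#include <cstddef>
#include <iostream>
#include <vector>
int main() {
    std::size_t n = 0;
    std::cout << "How many items do you need? ";
    std::cin >> n;
    std::vector<int> items(n);  // heap allocation sized at runtime
    // ... use items ...
}                               // memory released automatically, no delete[]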
2. Arrays are not pointers
How is this a benefit, you ask? The answer will become clear once you understand the confusion/myth behind arrays and pointers. It is commonly assumed that they are the same, but they are not. This myth comes from the fact that pointers can be subscripted just like arrays, and because arrays decay to pointers at the top level in a function declaration. However, once an array decays to a pointer, the pointer loses its sizeof information. So sizeof(pointer) will give the size of the pointer in bytes, which is usually 8 bytes on a 64-bit system.
You cannot assign to arrays, only initialize them. For example:
int arr[5] = {1, 2, 3, 4, 5}; // initialization
int arr2[] = {1, 2, 3, 4, 5}; // the standard dictates that the size of the array
                              // is given by the number of elements in the initializer
arr = { 1, 2, 3, 4, 5 };      // ERROR: arrays cannot be assigned to
On the other hand, you can do whatever you want with pointers. Unfortunately, because the distinction between pointers and arrays is hand-waved away in Java and C#, beginners don't understand the difference.
3. Polymorphism
Java and C# have facilities that allow you to treat one object as another, for example using the as keyword. So if somebody wanted to treat an Entity object as a Player object, one could write Player player = entity as Player;. This is very useful if you intend to call functions on a container of mixed (polymorphic) objects that should only apply to a specific type. The functionality can be achieved in a similar fashion below:
std::vector<Base*> vector;
vector.push_back(&square);   // square and triangle are assumed to be existing
vector.push_back(&triangle); // objects of classes derived from Base
for (auto& e : vector)
{
    auto test = dynamic_cast<Triangle*>(e); // I only care about triangles
    if (!test)   // not a triangle
        e->GenericFunction();
    else
        test->TriangleOnlyMagic();
}
So, say only Triangle has a Rotate function; it would be a compiler error if you tried to call it on every object through a Base pointer. Using dynamic_cast, you can simulate the as keyword. To be clear, if a cast fails, it returns a null pointer, so !test is essentially shorthand for checking whether test is null, which means the cast failed.
Benefits of automatic variables
After seeing all the great things dynamic allocation can do, you're probably wondering why anyone would NOT use dynamic allocation all the time. I already told you one reason: the heap is slow. And if you don't need all that memory, you shouldn't abuse it. So here are some disadvantages, in no particular order:
It is error-prone. Manual memory allocation is dangerous and you are prone to leaks. If you are not proficient at using the debugger or valgrind (a memory leak tool), you may pull your hair out of your head. Luckily RAII idioms and smart pointers alleviate this a bit, but you must be familiar with practices such as The Rule Of Three and The Rule Of Five. It is a lot of information to take in, and beginners who either don't know or don't care will fall into this trap.
It is not necessary. Unlike Java and C# where it is idiomatic to use the new keyword everywhere, in C++, you should only use it if you need to. The common phrase goes, everything looks like a nail if you have a hammer. Whereas beginners who start with C++ are scared of pointers and learn to use stack variables by habit, Java and C# programmers start by using pointers without understanding it! That is literally stepping off on the wrong foot. You must abandon everything you know because the syntax is one thing, learning the language is another.
1. (N)RVO - Aka, (Named) Return Value Optimization
One optimization many compilers perform is copy elision and return value optimization. These can obviate unnecessary copies, which is useful for objects that are very large, such as a vector containing many elements. Normally the common practice was to use pointers to transfer ownership rather than copying large objects around; this led to the inception of move semantics and smart pointers.
If you return pointers instead of values, (N)RVO cannot help you. It is more beneficial and less error-prone to take advantage of (N)RVO rather than returning or passing pointers if you are worried about optimization. Leaks can happen if the caller of a function is responsible for deleting a dynamically allocated object, and it can be difficult to track the ownership of an object when pointers are being passed around like a hot potato. Just use stack variables, because it is simpler and better.
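A small sketch of the return-by-value style this paragraph argues for (make_names is a hypothetical function):
#include <string>
#include <vector>
// Returning by value lets the compiler construct the result directly in the
// caller's storage (NRVO); no new/delete and no ownership questions.
std::vector<std::string> make_names() {
    std::vector<std::string> names;
    names.reserve(3);
    names.push_back("alpha");
    names.push_back("beta");
    names.push_back("gamma");
    return names;  // eligible for NRVO; at worst the vector is moved, not copied
}
int main() {
    auto names = make_names(); // cheap: elided or moved, never hand-copied
}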
Another good reason to use pointers would be for forward declarations. In a large enough project they can really speed up compile time.
In C++, objects allocated on the stack (using an Object object; statement within a block) only live within the scope they are declared in. When the block of code finishes executing, the objects declared in it are destroyed.
Whereas if you allocate memory on the heap, using Object* obj = new Object(), the object continues to live on the heap until you call delete obj.
I would create an object on the heap when I want to use it not only in the block of code that declared/allocated it.
C++ gives you three ways to pass an object: by pointer, by reference, and by value. Java limits you to the last one (the only exception is primitive types like int, boolean, etc.). If you want to use C++ as more than a weird toy, you'd better get to know the difference between these three ways.
Java pretends that there is no such problem as 'who should destroy this, and when?'. The answer is: the Garbage Collector, Great and Awful. Nevertheless, it can't provide 100% protection against memory leaks (yes, Java can leak memory). Actually, GC gives you a false sense of safety: the bigger your SUV, the longer your way to the tow truck.
C++ leaves you face-to-face with object lifetime management. Well, there are means to deal with that (the smart pointer family, QObject in Qt and so on), but none of them can be used in a 'fire and forget' manner like GC: you should always keep memory handling in mind. Not only must you care about destroying an object, you also have to avoid destroying the same object more than once.
Not scared yet? Ok: cyclic references - handle them yourself, human. And remember: kill each object precisely once, we C++ runtimes don't like those who mess with corpses, leave dead ones alone.
So, back to your question.
When you pass your object around by value, not by pointer or by reference, you copy the object (the whole object, whether it's a couple of bytes or a huge database dump - you're smart enough to avoid the latter, aren't you?) every time you do '='. And to access the object's members, you use '.' (dot).
When you pass your object by pointer, you copy just a few bytes (4 on 32-bit systems, 8 on 64-bit ones), namely - the address of this object. And to show this to everyone, you use this fancy '->' operator when you access the members. Or you can use the combination of '*' and '.'.
When you use references, you get a pointer that pretends to be a value: under the hood it's a pointer, but you access the members through '.'.
And, to blow your mind one more time: when you declare several variables separated by commas, then (watch the hands):
Type is given to everyone
Value/pointer/reference modifier is individual
Example:
struct MyStruct
{
int* someIntPointer, someInt; //here comes the surprise
MyStruct *somePointer;
MyStruct &someReference;
};
MyStruct s1; //we allocated an object on stack, not in heap
s1.someInt = 1; //someInt is of type 'int', not 'int*' - value/pointer modifier is individual
s1.someIntPointer = &s1.someInt;
*s1.someIntPointer = 2; //now s1.someInt has value '2'
s1.somePointer = &s1;
s1.someReference = s1; //note there is no '&' operator: reference tries to look like value
s1.somePointer->someInt = 3; //now s1.someInt has value '3'
(*s1.somePointer).someInt = 3; //same as the line above; the parentheses are needed because '.' binds tighter than '*'
*s1.somePointer->someIntPointer = 4; //now s1.someInt has value '4'
s1.someReference.someInt = 5; //now s1.someInt has value '5'
//although someReference is not a value, its members are accessed through '.'
MyStruct s2 = s1; //'NO WAY' the compiler will say. Go define your '=' operator and come back.
//OK, assume we have '=' defined in MyStruct
s2.someInt = 0; //s2.someInt == 0, but s1.someInt is still 5 - they are two completely different objects, not references to the same one
But I can't figure out why we should use it like this?
I will compare how it works inside the function body if you use:
Object myObject;
Inside the function, your myObject will be destroyed once the function returns. So this is useful if you don't need your object outside your function. This object will be put on the current thread's stack.
If you write inside function body:
Object *myObject = new Object;
then the Object instance pointed to by myObject will not be destroyed when the function ends; the allocation is on the heap.
Now if you are a Java programmer, the second example is closer to how object allocation works in Java. The line Object *myObject = new Object; is equivalent to the Java Object myObject = new Object();. The difference is that in Java myObject will be garbage collected, while in C++ it will not be freed; you must explicitly call delete myObject; somewhere, otherwise you will introduce a memory leak.
Since C++11 you can use safer forms of dynamic allocation, by storing the result of new in a shared_ptr/unique_ptr:
std::shared_ptr<std::string> safe_str = std::make_shared<std::string>("make_shared");
// since C++14
std::unique_ptr<std::string> safe_str2 = std::make_unique<std::string>("make_unique");
Also, objects are very often stored in containers, like maps or vectors, which will automatically manage the lifetime of your objects.
Technically it is a memory allocation issue; however, here are two more practical aspects of it.
It has to do with two things:
1) Scope. When you define an object without a pointer, you can no longer access it after the code block it is defined in, whereas if you define a pointer with "new" you can access it from anywhere you have a pointer to that memory, until you call "delete" on the same pointer.
2) If you want to pass arguments to a function, you want to pass a pointer or a reference in order to be more efficient. When you pass an object by value, the object is copied; if it is an object that uses a lot of memory, this can be CPU-consuming (e.g. you copy a vector full of data). When you pass a pointer, all you pass is one pointer-sized value (typically 4 or 8 bytes, depending on the platform).
Other than that, you need to understand that "new" allocates memory on the heap that needs to be freed at some point. When you don't have to use "new", I suggest you use a regular object definition, "on the stack".
Well, the main question is: why should I use a pointer rather than the object itself? My answer: you should (almost) never use a pointer instead of an object, because C++ has references, which are safer than pointers and guarantee the same performance as pointers.
Another thing you mentioned in your question:
Object *myObject = new Object;
How does it work? It creates a pointer of type Object*, allocates memory to fit one object and calls the default constructor. Sounds good, right? But actually it isn't so good: if you dynamically allocated memory (used the keyword new), you also have to free it manually, which means the code must contain:
delete myObject;
This calls the destructor and frees the memory. Looks easy; however, in big projects it may be difficult to detect whether some thread already freed the memory. For that purpose you can try shared pointers; they slightly decrease performance, but they are much easier to work with.
Now the introduction is over; let's go back to the question.
You can use pointers instead of objects to get better performance while transferring data between functions.
Say you have a std::string (it is also an object) and it contains a lot of data, for example a big XML document. Now you need to parse it, but for that you have a function void foo(...) which can be declared in different ways:
void foo(std::string xml);
In this case you copy all the data from your variable onto the function's stack; this takes some time, so your performance will be lower.
void foo(std::string* xml);
In this case you pass a pointer to the object, which is as fast as passing a size_t variable. However, this declaration is error-prone, because you can pass a NULL pointer or an invalid pointer. Pointers are usually used in C, because C doesn't have references.
void foo(std::string& xml);
Here you pass a reference. Basically it is the same as passing a pointer, but the compiler does some work for you and you cannot pass an invalid reference (actually it is possible to create a situation with an invalid reference, but it requires tricking the compiler).
void foo(const std::string* xml);
This is the same as the second case, except that the pointed-to string cannot be modified through the pointer.
void foo(const std::string& xml);
This is the same as the third case, but the string cannot be modified through the reference.
One more thing worth mentioning: you can use these five ways of passing data no matter which allocation method you have chosen (new or regular).
Another thing to mention: when you create an object in the regular way, you allocate it on the stack, but when you create it with new you allocate it on the heap. Allocating on the stack is much faster, but the stack is rather small for really big arrays of data, so if you need a big object you should use the heap, otherwise you may get a stack overflow. Usually this issue is solved by using STL containers - and remember that std::string is also a container; some people forget that :)
Let's say you have class A that contains an object of class B. When you want to call some member function of B outside of A, you simply obtain a pointer to that object and you can do whatever you want with it; it will also change the state of the B object inside your A object.
But be careful with dynamically allocated objects.
There are many benefits of using pointers to objects:
Efficiency (as you already pointed out). Passing objects to functions by value means creating new copies of the object.
Working with objects from third-party libraries. If your object belongs to third-party code and the authors intend their objects to be used through pointers only (no copy constructors etc.), the only way you can pass this object around is via pointers. Passing by value may cause issues (deep copy / shallow copy issues).
If the object owns a resource and you want that ownership not to be shared with other objects.
This has been discussed at length, but in Java everything is a pointer. It makes no distinction between stack and heap allocations (all objects are allocated on the heap), so you don't realize you're using pointers. In C++ you can mix the two, depending on your memory requirements. Performance and memory usage are more deterministic in C++ (duh).
Object *myObject = new Object;
Doing this will create a pointer to an Object (on the heap) which has to be deleted explicitly to avoid a memory leak.
Object myObject;
Doing this will create an object (myObject) with automatic storage duration (on the stack) that will be automatically destroyed when it goes out of scope.
A pointer directly references the memory location of an object. Java has nothing like this; Java has references, which are managed by the runtime, and you cannot do anything like pointer arithmetic with them.
To answer your question, it's just your preference. I prefer using the Java-like syntax.
The key strength of object pointers in C++ is that they allow polymorphic containers - arrays and maps of pointers to the same superclass. They let you, for example, put parakeets, chickens, robins, ostriches, etc. into an array of Bird.
Additionally, dynamically allocated objects are more flexible and use heap memory, whereas a locally allocated object uses stack memory unless it is static. Having large objects on the stack, especially when using recursion, can easily lead to a stack overflow.
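A minimal sketch of such a polymorphic container (Bird, Parakeet and Ostrich are hypothetical classes), using smart pointers so no manual delete is needed:
#include <iostream>
#include <memory>
#include <vector>
struct Bird {
    virtual ~Bird() = default;
    virtual void call() const = 0;
};
struct Parakeet : Bird { void call() const override { std::cout << "chirp\n"; } };
struct Ostrich  : Bird { void call() const override { std::cout << "boom\n"; } };
int main() {
    // The container holds pointers, so each element keeps its dynamic type.
    std::vector<std::unique_ptr<Bird>> birds;
    birds.push_back(std::make_unique<Parakeet>());
    birds.push_back(std::make_unique<Ostrich>());
    for (const auto& b : birds)
        b->call();   // virtual dispatch: chirp, then boom
}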
One reason for using pointers is to interface with C functions. Another reason is to save memory: for example, instead of passing an object which contains a lot of data and has a processor-intensive copy constructor to a function, just pass a pointer to the object, saving memory and time, especially in a loop. However, a reference would be better in that case, unless you're using a C-style array.
In areas where memory utilization is at a premium, pointers come in handy. Consider, for example, a minimax algorithm, where thousands of nodes are generated by a recursive routine and later used to evaluate the next best move in the game; the ability to deallocate or reset them (as with smart pointers) significantly reduces memory consumption, whereas a non-pointer variable continues to occupy space until its recursive call returns a value.
I will include one important use case of pointers: when you store an object through its base class, but it could be polymorphic.
class Base1 {
};
class Derived1 : public Base1 {
};
class Base2 {
protected:
    Base1* bObj = nullptr;
    virtual void createMemberObjects() = 0;
};
class Derived2 : public Base2 {
    void createMemberObjects() override {
        bObj = new Derived1();
    }
};
So in this case you can't declare bObj as a direct object; you have to use a pointer.
tl;dr: Don't "use a pointer rather than the object itself" (usually)
You asked why you should prefer a pointer rather than the object itself. Well, you shouldn't, as a general rule.
Now, there are indeed multiple exceptions to this rule, and other answers have spelled them out. The thing is, these days, many of these exceptions are no longer valid! Let us consider the exceptions listed in the accepted answer:
You need reference semantics.
If you need reference semantics, use references, not pointers; see #ST3's answer. In fact, one could argue that, in Java, what you pass around are usually references anyway.
You need polymorphism.
If you know the set of classes you'll be working with, very often you can just use an std::variant<ClassA, ClassB, ClassC> (see description here) and operate on them using a visitor pattern. Now, granted, C++'s variant implementation is not the prettiest sight; but I'd usually prefer it over getting down-and-dirty with pointers.
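A minimal sketch of that approach, assuming a hypothetical closed set of Circle and Square alternatives:
#include <iostream>
#include <type_traits>
#include <variant>
#include <vector>
struct Circle { double r; };
struct Square { double side; };
int main() {
    std::vector<std::variant<Circle, Square>> shapes{Circle{1.0}, Square{2.0}};
    double total = 0.0;
    for (const auto& s : shapes) {
        // std::visit dispatches on the alternative currently held -
        // no pointers, no heap allocation, no slicing.
        total += std::visit([](const auto& shape) -> double {
            using T = std::decay_t<decltype(shape)>;
            if constexpr (std::is_same_v<T, Circle>)
                return 3.14159 * shape.r * shape.r;
            else
                return shape.side * shape.side;
        }, s);
    }
    std::cout << "total area: " << total << "\n";
}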
You want to represent that an object is optional
Absolutely don't use pointers for that. You have std::optional, and unlike std::variant, it's quite convenient. Use that instead. nullopt is an empty (or "null") optional. And - it's not a pointer.
You want to decouple compilation units to improve compilation time.
You can use references rather than pointers to achieve this as well. To use Object& in a piece of code, it's sufficient to say class Object;, i.e. to use a forward-declaration.
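A tiny single-file sketch of that idea (Engine is a hypothetical class; in practice the forward declaration would sit in one header and the full definition in another):
#include <iostream>
// A forward declaration is enough to declare references, pointers and
// function signatures; the full class definition is only needed where
// members are actually used (normally in a separate .cpp file).
class Engine;                        // forward declaration only
void print_report(const Engine& e);  // OK: no definition required yet
class Engine {                       // full definition (would live in engine.h)
public:
    int rpm() const { return 3000; }
};
void print_report(const Engine& e) { std::cout << e.rpm() << " rpm\n"; }
int main() {
    Engine e;
    print_report(e);
}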
You need to interface with a C library or a C-style library.
Yeah, well, if you work with code that already uses pointers, then - you have to use pointers yourself, can't get around that :-( and C doesn't have references.
Also, some people may tell you to use pointers to avoid making copies of objects. Well this is not really a problem for return values, due to the return-value and named-return-value optimizations (RVO and NRVO). And in other cases - references avoid copying just fine.
The bottom-line rule is still the same as the accepted answer, though: Only use a pointer when you have a good reason to need one.
PS - If you do need a pointer, you should still avoid using new and delete directly. You will probably be better served by a smart pointer - which is automagically freed (not like in Java, but still).
With pointers, you can:
talk to memory directly;
control exactly when and where memory is allocated and freed, which helps you track down and avoid many of a program's memory leaks.
"Necessity is the mother of invention."
The most important difference that I would like to point out comes from my own coding experience.
Sometimes you need to pass objects to functions. In that case, if your object is an instance of a very big class, passing it by value will copy its whole state (which you might not want, and which can be a big overhead), resulting in the cost of copying the object, while a pointer has a fixed size of 4 bytes (assuming 32-bit). Other reasons are already mentioned above.
There are many excellent answers already, but let me give you one example:
I have a simple Item class:
class Item
{
public:
std::string name;
int weight;
int price;
};
I make a vector to hold a bunch of them.
std::vector<Item> inventory;
I create one million Item objects, and push them back onto the vector. I sort the vector by name, and then do a simple iterative binary search for a particular item name. I test the program, and it takes over 8 minutes to finish executing. Then I change my inventory vector like so:
std::vector<Item *> inventory;
...and create my million Item objects via new. The ONLY changes I make to my code are to use pointers to Items, apart from a loop I add for memory cleanup at the end. That program runs in under 40 seconds - better than a 10x speed increase.
EDIT: The code is at http://pastebin.com/DK24SPeW
With compiler optimizations it shows only a 3.4x increase on the machine I just tested it on, which is still considerable.

object is using pointers. Passing by value may cause issues. (Deep
copy / shallow copy issues).
if the object owns a resource and you want that the ownership should not be sahred with other objects.
This is has been discussed at length, but in Java everything is a pointer. It makes no distinction between stack and heap allocations (all objects are allocated on the heap), so you don't realize you're using pointers. In C++, you can mix the two, depending on your memory requirements. Performance and memory usage is more deterministic in C++ (duh).
Object *myObject = new Object;
Doing this will create a reference to an Object (on the heap) which has to be deleted explicitly to avoid memory leak.
Object myObject;
Doing this will create an object(myObject) of the automatic type (on the stack) that will be automatically deleted when the object(myObject) goes out of scope.
A pointer directly references the memory location of an object. Java has nothing like this. Java has references that reference the location of object through hash tables. You cannot do anything like pointer arithmetic in Java with these references.
To answer your question, it's just your preference. I prefer using the Java-like syntax.
The key strength of object pointers in C++ is allowing for polymorphic arrays and maps of pointers of the same superclass. It allows, for example, to put parakeets, chickens, robins, ostriches, etc. in an array of Bird.
Additionally, dynamically allocated objects are more flexible, and can use HEAP memory whereas a locally allocated object will use the STACK memory unless it is static. Having large objects on the stack, especially when using recursion, will undoubtedly lead to stack overflow.
One reason for using pointers is to interface with C functions. Another reason is to save memory; for example: instead of passing an object which contains a lot of data and has a processor-intensive copy-constructor to a function, just pass a pointer to the object, saving memory and speed especially if you're in a loop, however a reference would be better in that case, unless you're using an C-style array.
In areas where memory utilization is at its premium , pointers comes handy. For example consider a minimax algorithm, where thousands of nodes will be generated using recursive routine, and later use them to evaluate the next best move in game, ability to deallocate or reset (as in smart pointers) significantly reduces memory consumption. Whereas the non-pointer variable continues to occupy space till it's recursive call returns a value.
I will include one important use case of pointer. When you are storing some object in the base class, but it could be polymorphic.
Class Base1 {
};
Class Derived1 : public Base1 {
};
Class Base2 {
Base *bObj;
virtual void createMemerObects() = 0;
};
Class Derived2 {
virtual void createMemerObects() {
bObj = new Derived1();
}
};
So in this case you can't declare bObj as an direct object, you have to have pointer.
tl;dr: Don't "use a pointer rather than the object itself" (usually)
You asked why you should prefer a pointer rather than the object itself. Well, you shouldn't, as a general rule.
Now, there are indeed multiple exceptions to this rule, and other answers have spelled them out. The thing is, these days, many of these exceptions are no longer valid! Let us consider the exceptions listed in the accepted answer:
You need reference semantics.
If you need reference semantics, use references, not pointers; see #ST3's answer's answer. In fact, one could argue that, in Java, what you pass around are usually references.
You need polymorphism.
If you know the set of classes you'll be working with, very often you can just use an std::variant<ClassA, ClassB, ClassC> (see description here) and operate on them using a visitor pattern. Now, granted, C++'s variant implementation is not the prettiest sight; but I'd usually prefer it over getting down-and-dirty with pointers.
You want to represent that an object is optional
Absolutely don't use pointers for that. You have std::optional, and unlike std::variant, it's quite convenient. Use that instead. nullopt is an empty (or "null") optional. And - it's not a pointer.
You want to decouple compilation units to improve compilation time.
You can use references rather than pointers to achieve this as well. To use Object& in a piece of code, it's sufficient to say class Object;, i.e. to use a forward-declaration.
You need to interface with a C library or a C-style library.
Yeah, well, if you work with code that already uses pointers, then - you have to use pointers yourself, can't get around that :-( and C doesn't have references.
Also, some people may tell you to use pointers to avoid making copies of objects. Well this is not really a problem for return values, due to the return-value and named-return-value optimizations (RVO and NRVO). And in other cases - references avoid copying just fine.
The bottom-line rule is still the same as the accepted answer, though: Only use a pointer when you have a good reason to need one.
PS - If you do need a pointer, you should still avoid using new and delete directly. You will probably be better served by a smart pointer - which is automagically freed (not like in Java, but still).
With pointers ,
can directly talk to the memory.
can prevent lot of memory leaks of a program by manipulating pointers.
"Necessity is the mother of invention."
The most of important difference that I would like to point out is the outcome of my own experience of coding.
Sometimes you need to pass objects to functions. In that case, if your object is of a very big class then passing it as an object will copy its state (which you might not want ..AND CAN BE BIG OVERHEAD) thus resulting in an overhead of copying object .while pointer is fixed 4-byte size (assuming 32 bit). Other reasons are already mentioned above...
There are many excellent answers already, but let me give you one example:
I have an simple Item class:
class Item
{
public:
std::string name;
int weight;
int price;
};
I make a vector to hold a bunch of them.
std::vector<Item> inventory;
I create one million Item objects, and push them back onto the vector. I sort the vector by name, and then do a simple iterative binary search for a particular item name. I test the program, and it takes over 8 minutes to finish executing. Then I change my inventory vector like so:
std::vector<Item *> inventory;
...and create my million Item objects via new. The ONLY changes I make to my code are to use the pointers to Items, excepting a loop I add for memory cleanup at the end. That program runs in under 40 seconds, or better than a 10x speed increase.
EDIT: The code is at http://pastebin.com/DK24SPeW
With compiler optimizations it shows only a 3.4x increase on the machine I just tested it on, which is still considerable.

Why should I use a pointer rather than the object itself?

I'm coming from a Java background and have started working with objects in C++. But one thing that occurred to me is that people often use pointers to objects rather than the objects themselves, for example this declaration:
Object *myObject = new Object;
rather than:
Object myObject;
Or instead of using a function, let's say testFunc(), like this:
myObject.testFunc();
we have to write:
myObject->testFunc();
But I can't figure out why should we do it this way. I would assume it has to do with efficiency and speed since we get direct access to the memory address. Am I right?
It's very unfortunate that you see dynamic allocation so often. That just shows how many bad C++ programmers there are.
In a sense, you have two questions bundled up into one. The first is when should we use dynamic allocation (using new)? The second is when should we use pointers?
The important take-home message is that you should always use the appropriate tool for the job. In almost all situations, there is something more appropriate and safer than performing manual dynamic allocation and/or using raw pointers.
Dynamic allocation
In your question, you've demonstrated two ways of creating an object. The main difference is the storage duration of the object. When doing Object myObject; within a block, the object is created with automatic storage duration, which means it will be destroyed automatically when it goes out of scope. When you do new Object(), the object has dynamic storage duration, which means it stays alive until you explicitly delete it. You should only use dynamic storage duration when you need it.
That is, you should always prefer creating objects with automatic storage duration when you can.
The main two situations in which you might require dynamic allocation:
You need the object to outlive the current scope - that specific object at that specific memory location, not a copy of it. If you're okay with copying/moving the object (most of the time you should be), you should prefer an automatic object.
You need to allocate a lot of memory, which may easily fill up the stack. It would be nice if we didn't have to concern ourselves with this (most of the time you shouldn't have to), as it's really outside the purview of C++, but unfortunately, we have to deal with the reality of the systems we're developing for.
When you do absolutely require dynamic allocation, you should encapsulate it in a smart pointer or some other type that performs RAII (like the standard containers). Smart pointers provide ownership semantics of dynamically allocated objects. Take a look at std::unique_ptr and std::shared_ptr, for example. If you use them appropriately, you can almost entirely avoid performing your own memory management (see the Rule of Zero).
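For instance, here is a minimal sketch of what that encapsulation looks like in practice (Widget is just a made-up class; std::make_unique assumes C++14):
#include <memory>

struct Widget {
    int value = 0;
};

int main() {
    // Sole ownership: no explicit delete, the Widget dies with the unique_ptr.
    auto w = std::make_unique<Widget>();
    w->value = 42;

    // Shared ownership: the Widget is destroyed when the last shared_ptr goes away.
    auto s1 = std::make_shared<Widget>();
    auto s2 = s1;   // both now point to the same Widget
}   // everything is cleaned up automatically here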
Pointers
However, there are other more general uses for raw pointers beyond dynamic allocation, but most have alternatives that you should prefer. As before, always prefer the alternatives unless you really need pointers.
You need reference semantics. Sometimes you want to pass an object using a pointer (regardless of how it was allocated) because you want the function to which you're passing it to have access to that specific object (not a copy of it). However, in most situations, you should prefer reference types to pointers, because this is specifically what they're designed for. Note this is not necessarily about extending the lifetime of the object beyond the current scope, as in situation 1 above. As before, if you're okay with passing a copy of the object, you don't need reference semantics.
You need polymorphism. You can only call functions polymorphically (that is, according to the dynamic type of an object) through a pointer or reference to the object. If that's the behavior you need, then you need to use pointers or references. Again, references should be preferred.
You want to represent that an object is optional by allowing a nullptr to be passed when the object is being omitted. If it's an argument, you should prefer to use default arguments or function overloads. Otherwise, you should preferably use a type that encapsulates this behavior, such as std::optional (introduced in C++17 - with earlier C++ standards, use boost::optional).
You want to decouple compilation units to improve compilation time. The useful property of a pointer is that you only require a forward declaration of the pointed-to type (to actually use the object, you'll need a definition). This allows you to decouple parts of your compilation process, which may significantly improve compilation time. See the Pimpl idiom.
You need to interface with a C library or a C-style library. At this point, you're forced to use raw pointers. The best thing you can do is make sure you only let your raw pointers loose at the last possible moment. You can get a raw pointer from a smart pointer, for example, by using its get member function. If a library performs some allocation for you which it expects you to deallocate via a handle, you can often wrap the handle up in a smart pointer with a custom deleter that will deallocate the object appropriately.
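As a sketch of that last point, wrapping a C-style handle in a smart pointer with a custom deleter might look like this (using the C standard library's FILE API; "data.txt" is a placeholder path):
#include <cstdio>
#include <memory>

// A unique_ptr that owns a FILE* and calls fclose automatically (custom deleter).
using unique_file = std::unique_ptr<std::FILE, int (*)(std::FILE*)>;

unique_file open_file(const char* path, const char* mode) {
    return unique_file(std::fopen(path, mode), &std::fclose);
}

int main() {
    unique_file f = open_file("data.txt", "r");
    if (!f)
        return 1;                     // fopen failed, nothing to clean up
    int first = std::fgetc(f.get());  // hand the raw pointer to the C API only here
    (void)first;
}                                     // fclose runs automatically when f is destroyed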
There are many use cases for pointers.
Polymorphic behavior. For polymorphic types, pointers (or references) are used to avoid slicing:
class Base { ... };
class Derived : public Base { ... };
void fun(Base b) { ... }
void gun(Base* b) { ... }
void hun(Base& b) { ... }
Derived d;
fun(d); // oops, all Derived parts silently "sliced" off
gun(&d); // OK, a Derived object IS-A Base object
hun(d); // also OK, reference also doesn't slice
Reference semantics and avoiding copying. For non-polymorphic types, a pointer (or a reference) will avoid copying a potentially expensive object
Base b;
fun(b); // copies b, potentially expensive
gun(&b); // takes a pointer to b, no copying
hun(b); // regular syntax, behaves as a pointer
Note that C++11 has move semantics that can avoid many copies of expensive objects into function argument and as return values. But using a pointer will definitely avoid those and will allow multiple pointers on the same object (whereas an object can only be moved from once).
Resource acquisition. Creating a pointer to a resource using the new operator is an anti-pattern in modern C++. Use a special resource class (one of the Standard containers) or a smart pointer (std::unique_ptr<> or std::shared_ptr<>). Consider:
{
auto b = new Base;
... // oops, if an exception is thrown, destructor not called!
delete b;
}
vs.
{
auto b = std::make_unique<Base>();
... // OK, now exception safe
}
A raw pointer should only be used as a "view" and not in any way involved in ownership, be it through direct creation or implicitly through return values. See also this Q&A from the C++ FAQ.
More fine-grained life-time control Every time a shared pointer is being copied (e.g. as a function argument) the resource it points to is being kept alive. Regular objects (not created by new, either directly by you or inside a resource class) are destroyed when going out of scope.
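A small sketch of that last point (Resource is a made-up class): every copy of the shared_ptr keeps the object alive, and it is destroyed only when the last owner disappears.
#include <iostream>
#include <memory>

struct Resource {
    ~Resource() { std::cout << "Resource destroyed\n"; }
};

void observer(std::shared_ptr<Resource> copy) {   // copying bumps the reference count
    std::cout << "owners inside the call: " << copy.use_count() << '\n';  // prints 2
}   // this copy dies here, but the Resource survives - someone else still owns it

int main() {
    auto r = std::make_shared<Resource>();
    observer(r);
    std::cout << "owners after the call: " << r.use_count() << '\n';      // back to 1
}   // last owner gone: "Resource destroyed" is printed here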
There are many excellent answers to this question, including the important use cases of forward declarations, polymorphism etc. but I feel a part of the "soul" of your question is not answered - namely what the different syntaxes mean across Java and C++.
Let's examine the situation comparing the two languages:
Java:
Object object1 = new Object(); //A new object is allocated by Java
Object object2 = new Object(); //Another new object is allocated by Java
object1 = object2;
//object1 now points to the object originally allocated for object2
//The object originally allocated for object1 is now "dead" - nothing points to it, so it
//will be reclaimed by the Garbage Collector.
//If either object1 or object2 is changed, the change will be reflected to the other
The closest equivalent to this is:
C++:
Object * object1 = new Object(); //A new object is allocated on the heap
Object * object2 = new Object(); //Another new object is allocated on the heap
delete object1;
//Since C++ does not have a garbage collector, if we don't do that, the next line would
//cause a "memory leak", i.e. a piece of claimed memory that the app cannot use
//and that we have no way to reclaim...
object1 = object2; //Same as Java, object1 points to object2.
Let's see the alternative C++ way:
Object object1; //A new object is allocated on the STACK
Object object2; //Another new object is allocated on the STACK
object1 = object2;//!!!! This is different! The CONTENTS of object2 are COPIED onto object1,
//using the "copy assignment operator", the definition of operator =.
//But, the two objects are still different. Change one, the other remains unchanged.
//Also, the objects get automatically destroyed once the function returns...
The best way to think of it is that -- more or less -- Java (implicitly) handles pointers to objects, while C++ may handle either pointers to objects, or the objects themselves.
There are exceptions to this -- for example, if you declare Java "primitive" types, they are actual values that are copied, and not pointers.
So,
Java:
int object1; //An integer is allocated on the stack.
int object2; //Another integer is allocated on the stack.
object1 = object2; //The value of object2 is copied to object1.
That said, using pointers is NOT necessarily either the correct or the wrong way to handle things; however other answers have covered that satisfactorily. The general idea though is that in C++ you have much more control on the lifetime of the objects, and on where they will live.
Take home point -- the Object * object = new Object() construct is actually what is closest to typical Java (or C# for that matter) semantics.
Preface
Java is nothing like C++, contrary to hype. The Java hype machine would like you to believe that because Java has C++ like syntax, that the languages are similar. Nothing can be further from the truth. This misinformation is part of the reason why Java programmers go to C++ and use Java-like syntax without understanding the implications of their code.
Onwards we go
But I can't figure out why should we do it this way. I would assume it
has to do with efficiency and speed since we get direct access to the
memory address. Am I right?
To the contrary, actually. Heap allocation is much slower than stack allocation, because the stack is very simple compared to the heap. Automatic storage variables (aka stack variables) have their destructors called automatically once they go out of scope. For example:
{
std::string s;
}
// s is destroyed here
On the other hand, if you use a pointer dynamically allocated, its destructor must be called manually. delete calls this destructor for you.
{
std::string* s = new std::string;
delete s; // destructor called
}
This has nothing to do with the new syntax prevalent in C# and Java. They are used for completely different purposes.
Benefits of dynamic allocation
1. You don't have to know the size of the array in advance
One of the first problems many C++ programmers run into is that when they are accepting arbitrary input from users, you can only allocate a fixed size for a stack variable. You cannot change the size of arrays either. For example:
char buffer[100];
std::cin >> buffer;
// bad input = buffer overflow
Of course, if you used an std::string instead, std::string internally resizes itself so that shouldn't be a problem. But essentially the solution to this problem is dynamic allocation. You can allocate dynamic memory based on the input of the user, for example:
int * pointer;
std::size_t n = 0; // number of items, read from the user below
std::cout << "How many items do you need?";
std::cin >> n;
pointer = new int[n];
Side note: One mistake many beginners make is the usage of variable length arrays. This is a GNU extension, and also one in Clang because it mirrors many of GCC's extensions, so the following int arr[n] should not be relied on in standard C++.
Because the heap is much bigger than the stack, one can arbitrarily allocate/reallocate as much memory as he/she needs, whereas the stack has a limitation.
2. Arrays are not pointers
How is this a benefit you ask? The answer will become clear once you understand the confusion/myth behind arrays and pointers. It is commonly assumed that they are the same, but they are not. This myth comes from the fact that pointers can be subscripted just like arrays and because arrays decay to pointers at the top level in a function declaration. However, once an array decays to a pointer, the pointer loses its sizeof information. So sizeof(pointer) will give the size of the pointer in bytes, which is usually 8 bytes on a 64-bit system.
You cannot assign to arrays, only initialize them. For example:
int arr[5] = {1, 2, 3, 4, 5}; // initialization
int arr[] = {1, 2, 3, 4, 5}; // The standard dictates that the size of the array
// be given by the amount of members in the initializer
arr = { 1, 2, 3, 4, 5 }; // ERROR
On the other hand, you can do whatever you want with pointers. Unfortunately, because the distinction between pointers and arrays is hand-waved away in Java and C#, beginners don't understand the difference.
3. Polymorphism
Java and C# have facilities that allow you to treat one object as another, for example using the as keyword. So if somebody wanted to treat an Entity object as a Player object, one could do Player player = entity as Player; This is very useful if you intend to call functions on a homogeneous container that should only apply to a specific type. The functionality can be achieved in a similar fashion below:
std::vector<Base*> vector;
vector.push_back(&square);
vector.push_back(&triangle);
for (auto& e : vector)
{
auto test = dynamic_cast<Triangle*>(e); // I only care about triangles
if (!test) // not a triangle
e->GenericFunction();
else
test->TriangleOnlyMagic();
}
So say if only Triangles had a Rotate function, it would be a compiler error if you tried to call it on all objects of the class. Using dynamic_cast, you can simulate the as keyword. To be clear, if a pointer cast fails, it returns a null pointer. So !test is essentially shorthand for checking whether test is null, which means the cast failed.
Benefits of automatic variables
After seeing all the great things dynamic allocation can do, you're probably wondering why anyone would NOT use dynamic allocation all the time. I already told you one reason: the heap is slow. And if you don't need all that memory, you shouldn't abuse it. So here are some disadvantages, in no particular order:
It is error-prone. Manual memory allocation is dangerous and you are prone to leaks. If you are not proficient at using the debugger or valgrind (a memory leak tool), you may pull your hair out of your head. Luckily RAII idioms and smart pointers alleviate this a bit, but you must be familiar with practices such as The Rule Of Three and The Rule Of Five. It is a lot of information to take in, and beginners who either don't know or don't care will fall into this trap.
It is not necessary. Unlike Java and C# where it is idiomatic to use the new keyword everywhere, in C++, you should only use it if you need to. The common phrase goes, everything looks like a nail if you have a hammer. Whereas beginners who start with C++ are scared of pointers and learn to use stack variables by habit, Java and C# programmers start by using pointers without understanding it! That is literally stepping off on the wrong foot. You must abandon everything you know because the syntax is one thing, learning the language is another.
(N)RVO - aka (Named) Return Value Optimization
One optimization many compilers perform is copy elision and return value optimization. These can obviate unnecessary copies, which is useful for objects that are very large, such as a vector containing many elements. Normally the common practice is to use pointers to transfer ownership rather than copying the large objects to move them around. This has led to the inception of move semantics and smart pointers.
If you are returning pointers instead of values, (N)RVO does NOT occur. It is more beneficial and less error-prone to take advantage of (N)RVO rather than returning or passing pointers if you are worried about optimization. Memory leaks can happen if the caller of a function is responsible for deleting a dynamically allocated object, and so on. It can be difficult to track the ownership of an object if pointers are being passed around like a hot potato. Just use stack variables because it is simpler and better.
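To illustrate (build_big and build_big_manually are made-up names), returning by value lets the compiler construct the result directly in the caller, while returning a raw pointer buys nothing and leaves the caller responsible for delete:
#include <string>
#include <vector>

// Returning by value: (N)RVO and C++11 move semantics make this cheap.
std::vector<std::string> build_big() {
    std::vector<std::string> result;
    for (int i = 0; i < 1000; ++i)
        result.push_back("item " + std::to_string(i));
    return result;                    // no deep copy in practice
}

// Returning a raw pointer: the caller now has to remember to delete it.
std::vector<std::string>* build_big_manually() {
    return new std::vector<std::string>(1000, "item");
}

int main() {
    auto v = build_big();             // simple and exception safe
    auto* p = build_big_manually();   // ownership is implicit and easy to leak
    delete p;
}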
Another good reason to use pointers would be for forward declarations. In a large enough project they can really speed up compile time.
In C++, objects allocated on the stack (using an Object object; statement within a block) will only live within the scope they are declared in. When the block of code finishes execution, the objects declared there are destroyed.
Whereas if you allocate memory on the heap, using Object* obj = new Object(), the object continues to live on the heap until you call delete obj.
I would create an object on the heap when I want to use the object beyond the block of code which declared/allocated it.
C++ gives you three ways to pass an object: by pointer, by reference, and by value. Java limits you to the last one (the only exception is primitive types like int, boolean, etc.). If you want to use C++ not just like a weird toy, then you'd better get to know the difference between these three ways.
Java pretends that there is no such problem as 'who and when should destroy this?'. The answer is: The Garbage Collector, Great and Awful. Nevertheless, it can't provide 100% protection against memory leaks (yes, java can leak memory). Actually, GC gives you a false sense of safety. The bigger your SUV, the longer your way to the evacuator.
C++ leaves you face-to-face with object's lifecycle management. Well, there are means to deal with that (smart pointers family, QObject in Qt and so on), but none of them can be used in 'fire and forget' manner like GC: you should always keep in mind memory handling. Not only should you care about destroying an object, you also have to avoid destroying the same object more than once.
Not scared yet? Ok: cyclic references - handle them yourself, human. And remember: kill each object precisely once, we C++ runtimes don't like those who mess with corpses, leave dead ones alone.
So, back to your question.
When you pass your object around by value, not by pointer or by reference, you copy the object (the whole object, whether it's a couple of bytes or a huge database dump - you're smart enough to avoid the latter, aren't you?) every time you do '='. And to access the object's members, you use '.' (dot).
When you pass your object by pointer, you copy just a few bytes (4 on 32-bit systems, 8 on 64-bit ones), namely - the address of this object. And to show this to everyone, you use this fancy '->' operator when you access the members. Or you can use the combination of '*' and '.'.
When you use references, then you get the pointer that pretends to be a value. It's a pointer, but you access the members through '.'.
And, to blow your mind one more time: when you declare several variables separated by commas, then (watch the hands):
Type is given to everyone
Value/pointer/reference modifier is individual
Example:
struct MyStruct
{
int* someIntPointer, someInt; //here comes the surprise
MyStruct *somePointer;
MyStruct &someReference; //a reference member must be bound by a constructor - assume MyStruct defines one (omitted here for brevity)
};
MyStruct s1; //we allocated an object on the stack, not on the heap
s1.someInt = 1; //someInt is of type 'int', not 'int*' - value/pointer modifier is individual
s1.someIntPointer = &s1.someInt;
*s1.someIntPointer = 2; //now s1.someInt has value '2'
s1.somePointer = &s1;
s1.someReference = s1; //note there is no '&' operator: reference tries to look like value
s1.somePointer->someInt = 3; //now s1.someInt has value '3'
(*s1.somePointer).someInt = 3; //same as the line above - note the parentheses: '.' binds tighter than '*'
*s1.somePointer->someIntPointer = 4; //now s1.someInt has value '4'
s1.someReference.someInt = 5; //now s1.someInt has value '5'
//although someReference is not a value, its members are accessed through '.'
MyStruct s2 = s1; //copy-construction works, but plain assignment (s2 = s1) won't - the reference member deletes 'operator=', so go define your own '=' and come back.
//OK, assume we have '=' defined in MyStruct
s2.someInt = 0; //s2.someInt == 0, but s1.someInt is still 5 - they are two completely different objects, not references to the same one
But I can't figure out why should we use it like this?
I will compare how it works inside the function body if you use:
Object myObject;
Inside the function, your myObject will get destroyed once this function returns. So this is useful if you don't need your object outside your function. This object will be put on the current thread's stack.
If you write inside function body:
Object *myObject = new Object;
then the Object instance pointed to by myObject will not get destroyed once the function ends; the allocation is on the heap.
Now if you are a Java programmer, the second example is closer to how object allocation works under Java. This line: Object *myObject = new Object; is equivalent to the Java Object myObject = new Object();. The difference is that under Java myObject will get garbage collected, while under C++ it will not be freed; you must explicitly call delete myObject; somewhere, otherwise you will introduce a memory leak.
Since C++11 you can make dynamic allocation safer by storing the result of new Object in a shared_ptr/unique_ptr.
// needs <memory> and <string>
std::shared_ptr<std::string> safe_str = std::make_shared<std::string>("make_shared");
// since C++14:
std::unique_ptr<std::string> safe_str2 = std::make_unique<std::string>("make_unique");
Also, objects are very often stored in containers, like std::map or std::vector; these will automatically manage the lifetime of your objects.
Technically it is a memory allocation issue, however here are two more practical aspects of this.
It has to do with two things:
1) Scope: when you define an object without a pointer, you can no longer access it after the code block it is defined in, whereas if you define a pointer with "new" you can access it from anywhere you hold a pointer to that memory, until you call "delete" on the same pointer.
2) If you want to pass arguments to a function, it is more efficient to pass a pointer or a reference. When you pass an object by value, the object is copied; if this is an object that uses a lot of memory, that can be costly (e.g. you copy a vector full of data). When you pass a pointer, all you pass is a single address (typically one machine word, depending on the implementation).
Other than that you need to understand that "new" allocates memory on the heap that needs to be freed at some point. When you don't have to use "new" I suggest you use a regular object definition "on the stack".
Well, the main question is why should I use a pointer rather than the object itself? And my answer: you should (almost) never use a pointer instead of an object, because C++ has references, which are safer than pointers and guarantee the same performance as pointers.
Another thing you mentioned in your question:
Object *myObject = new Object;
How does it work? It creates a pointer of Object type, allocates memory to fit one object and calls the default constructor. Sounds good, right? But actually it isn't so good: if you dynamically allocate memory (use the keyword new), you also have to free that memory manually, which means your code should somewhere have:
delete myObject;
This calls the destructor and frees the memory. Looks easy, but in big projects it may be difficult to detect whether the memory has already been freed (for example by another thread); for that purpose you can try shared pointers, which slightly decrease performance but are much easier to work with.
And now that the introduction is over, let's go back to the question.
You can use pointers instead of objects to get better performance while transferring data between functions.
Take a look: you have a std::string (it is also an object) and it contains a lot of data, for example a big XML document. Now you need to parse it, and for that you have a function void foo(...) which can be declared in different ways:
void foo(std::string xml);
In this case you copy all the data from your variable onto the function's stack; that takes time, so your performance will be lower.
void foo(std::string* xml);
In this case you pass a pointer to the object - the same speed as passing a size_t variable - but this declaration is error prone, because you can pass a NULL or otherwise invalid pointer. Pointers are usually used in C, because C doesn't have references.
void foo(std::string& xml);
Here you pass a reference. Basically it is the same as passing a pointer, but the compiler ensures you cannot pass an invalid reference (it is actually possible to end up with an invalid reference, but only by tricking the compiler).
void foo(const std::string* xml);
This is the same as the second, except that the pointed-to object cannot be modified through the pointer.
void foo(const std::string& xml);
This is the same as the third, but the object cannot be modified through the reference.
One more thing worth mentioning: you can use these five ways to pass data no matter which allocation method you have chosen (with new or as a regular automatic object).
Another thing to mention: when you create an object the regular way, you allocate memory on the stack, but when you create it with new you allocate on the heap. Stack allocation is much faster, but the stack is rather small for really big arrays of data, so if you need a big object you should use the heap, because otherwise you may get a stack overflow. Usually this issue is solved by using the STL containers - and remember that std::string is also a container; some people forget that :)
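As a minimal sketch of the point above (parse_xml is a made-up name), the const-reference version avoids the copy and works the same whether the string lives on the stack or on the heap:
#include <iostream>
#include <memory>
#include <string>

void parse_xml(const std::string& xml) {   // no copy, and the callee cannot modify it
    std::cout << "parsing " << xml.size() << " bytes\n";
}

int main() {
    std::string on_stack = "<root>stack-allocated</root>";
    auto on_heap = std::make_unique<std::string>("<root>heap-allocated</root>");

    parse_xml(on_stack);    // works with an automatic object
    parse_xml(*on_heap);    // and with a dynamically allocated one
}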
Let's say you have a class A that contains a class B. When you want to call some member function of B from outside A, you simply obtain a pointer to that B member and you can do whatever you want with it; the changes will also be reflected in the B instance inside your A.
But be careful with dynamically allocated objects!
There are many benefits of using pointers to objects:
Efficiency (as you already pointed out). Passing objects to functions by value means creating new copies of the object.
Working with objects from third-party libraries. If your object belongs to third-party code and the authors intend their objects to be used through pointers only (no copy constructors etc.), the only way you can pass this object around is via pointers. Passing by value may cause issues (deep copy / shallow copy issues).
If the object owns a resource and you want that ownership not to be shared with other objects.
This has been discussed at length, but in Java everything is a pointer. It makes no distinction between stack and heap allocations (all objects are allocated on the heap), so you don't realize you're using pointers. In C++, you can mix the two, depending on your memory requirements. Performance and memory usage are more deterministic in C++ (duh).
Object *myObject = new Object;
Doing this will create a pointer to an Object (on the heap) which has to be deleted explicitly to avoid a memory leak.
Object myObject;
Doing this will create an object (myObject) with automatic storage duration (on the stack) that will be automatically destroyed when it goes out of scope.
A pointer directly references the memory location of an object. Java has nothing like this. Java has references, which refer to objects indirectly (the JVM manages the actual memory locations), and you cannot do anything like pointer arithmetic with these references.
To answer your question, it's just your preference. I prefer using the Java-like syntax.
The key strength of object pointers in C++ is allowing for polymorphic arrays and maps of pointers of the same superclass. It allows, for example, to put parakeets, chickens, robins, ostriches, etc. in an array of Bird.
Additionally, dynamically allocated objects are more flexible and can use heap memory, whereas a locally allocated object will use stack memory unless it is static. Having large objects on the stack, especially when using recursion, can easily lead to a stack overflow.
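A small sketch of that polymorphic-container idea (Bird, Parakeet and Ostrich are made-up classes):
#include <iostream>
#include <memory>
#include <vector>

struct Bird {
    virtual ~Bird() = default;
    virtual void sing() const { std::cout << "tweet\n"; }
};
struct Parakeet : Bird { void sing() const override { std::cout << "chirp\n"; } };
struct Ostrich  : Bird { void sing() const override { std::cout << "boom\n"; } };

int main() {
    std::vector<std::unique_ptr<Bird>> aviary;   // pointers allow mixed dynamic types
    aviary.push_back(std::make_unique<Parakeet>());
    aviary.push_back(std::make_unique<Ostrich>());
    for (const auto& bird : aviary)
        bird->sing();                            // dispatches to the derived override
}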
One reason for using pointers is to interface with C functions. Another reason is to save memory; for example, instead of passing an object which contains a lot of data and has a processor-intensive copy constructor to a function, just pass a pointer to the object, saving memory and time, especially if you're in a loop. However, a reference would be better in that case, unless you're using a C-style array.
In areas where memory utilization is at a premium, pointers come in handy. For example, consider a minimax algorithm, where thousands of nodes are generated by a recursive routine and later used to evaluate the next best move in the game; the ability to deallocate or reset them (as with smart pointers) significantly reduces memory consumption, whereas a non-pointer variable continues to occupy space until its recursive call returns a value.
I will include one important use case for pointers: when you are storing some object in a base-class member, but its actual type could be polymorphic.
class Base1 {
public:
    virtual ~Base1() = default;
};
class Derived1 : public Base1 {
};
class Base2 {
protected:
    Base1 *bObj = nullptr;
    virtual void createMemberObjects() = 0;
};
class Derived2 : public Base2 {
    void createMemberObjects() override {
        bObj = new Derived1();
    }
};
So in this case you can't declare bObj as a direct object; you have to have a pointer.
tl;dr: Don't "use a pointer rather than the object itself" (usually)
You asked why you should prefer a pointer rather than the object itself. Well, you shouldn't, as a general rule.
Now, there are indeed multiple exceptions to this rule, and other answers have spelled them out. The thing is, these days, many of these exceptions are no longer valid! Let us consider the exceptions listed in the accepted answer:
You need reference semantics.
If you need reference semantics, use references, not pointers; see #ST3's answer above. In fact, one could argue that, in Java, what you pass around are usually references.
You need polymorphism.
If you know the set of classes you'll be working with, very often you can just use an std::variant<ClassA, ClassB, ClassC> (see description here) and operate on them using a visitor pattern. Now, granted, C++'s variant implementation is not the prettiest sight; but I'd usually prefer it over getting down-and-dirty with pointers.
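For example, a minimal sketch of that approach (Circle and Square are toy stand-ins for ClassA/ClassB; this assumes C++17):
#include <iostream>
#include <type_traits>
#include <variant>

struct Circle { double r; };
struct Square { double side; };

using Shape = std::variant<Circle, Square>;

double area(const Shape& s) {
    // std::visit dispatches on whichever alternative is currently held - no pointers.
    return std::visit([](const auto& shape) -> double {
        if constexpr (std::is_same_v<std::decay_t<decltype(shape)>, Circle>)
            return 3.14159 * shape.r * shape.r;
        else
            return shape.side * shape.side;
    }, s);
}

int main() {
    Shape s = Circle{1.0};
    std::cout << area(s) << '\n';
    s = Square{2.0};
    std::cout << area(s) << '\n';
}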
You want to represent that an object is optional
Absolutely don't use pointers for that. You have std::optional, and unlike std::variant, it's quite convenient. Use that instead. nullopt is an empty (or "null") optional. And - it's not a pointer.
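A quick sketch (find_nickname is a made-up function; assumes C++17):
#include <iostream>
#include <optional>
#include <string>

// Returns a value or "no value" - no pointers and no nullptr checks anywhere.
std::optional<std::string> find_nickname(const std::string& name) {
    if (name == "Robert")
        return "Bob";
    return std::nullopt;
}

int main() {
    if (auto nick = find_nickname("Robert"))
        std::cout << "nickname: " << *nick << '\n';
    else
        std::cout << "no nickname\n";
}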
You want to decouple compilation units to improve compilation time.
You can use references rather than pointers to achieve this as well. To use Object& in a piece of code, it's sufficient to say class Object;, i.e. to use a forward-declaration.
You need to interface with a C library or a C-style library.
Yeah, well, if you work with code that already uses pointers, then - you have to use pointers yourself, can't get around that :-( and C doesn't have references.
Also, some people may tell you to use pointers to avoid making copies of objects. Well this is not really a problem for return values, due to the return-value and named-return-value optimizations (RVO and NRVO). And in other cases - references avoid copying just fine.
The bottom-line rule is still the same as the accepted answer, though: Only use a pointer when you have a good reason to need one.
PS - If you do need a pointer, you should still avoid using new and delete directly. You will probably be better served by a smart pointer - which is automagically freed (not like in Java, but still).
With pointers, you
can directly talk to memory, and
can manage a program's memory explicitly by manipulating pointers (which also makes leaks possible if you are careless).
"Necessity is the mother of invention."
The most important difference that I would like to point out comes from my own coding experience.
Sometimes you need to pass objects to functions. In that case, if your object is an instance of a very big class, passing it by value will copy its whole state (which you might not want, and which can be a big overhead), resulting in the cost of copying the object, while a pointer has a fixed 4-byte size (assuming a 32-bit system). Other reasons are already mentioned above...
There are many excellent answers already, but let me give you one example:
I have a simple Item class:
class Item
{
public:
std::string name;
int weight;
int price;
};
I make a vector to hold a bunch of them.
std::vector<Item> inventory;
I create one million Item objects, and push them back onto the vector. I sort the vector by name, and then do a simple iterative binary search for a particular item name. I test the program, and it takes over 8 minutes to finish executing. Then I change my inventory vector like so:
std::vector<Item *> inventory;
...and create my million Item objects via new. The ONLY changes I make to my code are to use the pointers to Items, except for a loop I add for memory cleanup at the end. That program runs in under 40 seconds, or better than a 10x speed increase.
EDIT: The code is at http://pastebin.com/DK24SPeW
With compiler optimizations it shows only a 3.4x increase on the machine I just tested it on, which is still considerable.

Store pointers or objects in classes?

Just a design/optimization question. When do you store pointers or objects and why? For example, I believe both of these work (barring compile errors):
class A{
std::unique_ptr<Object> object_ptr;
};
A::A():object_ptr(new Object()){}
class B{
Object object;
};
B::B():object(Object()){}
I believe one difference comes when instantiating on stack or heap?
For example:
int main(){
std::unique_ptr<A> a_ptr;
std::unique_ptr<B> b_ptr;
a_ptr.reset(new A()); //(*object_ptr) on heap, (*a_ptr) on heap?
b_ptr.reset(new B()); //(*object_ptr) on heap, object on heap?
A a; //a on stack, (*object_ptr) on heap?
B b; //b on stack, object on stack?
}
Also, sizeof(A) should be < sizeof(B)?
Are there any other issues that I am missing?
(Daniel reminded me about the inheritance issue in his related post in the comments)
So, since stack allocation is faster than heap allocation in general, but the size is smaller for A than for B, is this one of those tradeoffs that cannot be answered without testing performance in the case in question, even with move semantics? Or are there some rules of thumb for when it is more advantageous to use one over the other?
(San Jacinto corrected me: it is stack/heap allocation that is faster, not the stack/heap memory itself.)
I would guess that more copy construction would lead to the same performance issue (3 copies would mean roughly 3x the performance hit of creating the first instance). But move construction may make it more advantageous to use the stack as much as possible???
Here is a related question, but not exactly the same.
C++ STL: should I store entire objects, or pointers to objects?
Thanks!
If you have a big object inside your A class, then I'd store a pointer to it, but for small objects, or primitive types, you should not really need to store pointers, in most cases.
Also, whether something is stored on the stack or on the heap (free store) is really implementation dependent, and A a is not always guaranteed to be on the stack.
It's better to call this an automatic object, because its storage duration is determined by the scope of the function it is declared in. When the function returns, a will be destroyed.
Pointers require the use of new, and that does carry some overhead, but on today's machines I'd say it is trivial in most cases - unless, of course, you start newing up millions of objects; then you will start seeing performance issues.
Each situation is different, and when you should and shouldn't use a pointer, instead of an automatic object, is largely dependent on your situation.
This depends on a lot of specific factors, and either approach can have its merits. I'd say if you will exclusively use the outer object through dynamic allocation, then you might as well make all the members direct members and avoid the additional member allocation. On the other hand, if the outer object is allocated automatically, large members should probably be handled through a unique_ptr.
There's an additional benefit to handling members only through pointers: You remove compile-time dependencies, and the header file for the outer class may be able to get away with a forward-declaration of the inner class, rather than requiring full inclusion of the inner class's header ("PIMPL"). In large projects this sort of decoupling may turn out to be economically sensible.
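As a sketch of that decoupling (Outer/Inner are made-up names), the header only needs a forward declaration because the member is held through a pointer; the full definition is needed in just one .cpp file:
#include <memory>

// ---- outer.h (sketch) ----
class Inner;                           // forward declaration - no #include "inner.h"

class Outer {
public:
    Outer();
    ~Outer();                          // defined where Inner is a complete type
private:
    std::unique_ptr<Inner> inner_;     // pointer member: sizeof(Inner) not needed here
};

// ---- outer.cpp (sketch) ----
class Inner { public: int x = 0; };    // stand-in for what inner.h would define
Outer::Outer() : inner_(std::make_unique<Inner>()) {}
Outer::~Outer() = default;             // Inner is complete here, so unique_ptr can delete it

int main() { Outer o; }                // just to keep the sketch self-contained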
The heap is not "slower" than the stack. Heap allocation can be slower than stack allocation, and poor cache locality may cause a lot of cache misses if you design your objects and data structures in such a way that there is not a lot of contiguous memory access. So from this standpoint, it depends on what your design and code use goals are.
Even setting this aside, you have to question your copy semantics too. If you want deep copies of your objects (and your objects' objects are also deeply copied), then why even store pointers? If it's okay to have shared memory due to copy semantics, then store pointers but make sure you don't free the memory twice in the dtor.
I tend to use pointers under two conditions: class member initialization order matters deeply, and I'm injecting dependencies into an object. In most other cases, I use non-pointer types.
edit: There are two additional cases when I use pointers: 1) to avoid circular include dependencies (although I may use a reference in some cases), 2) With the intention of using polymorphic function calls.
There are a few cases where you have almost no choice but to store a pointer. One obvious one is when you're creating something like a binary tree:
template <class T>
struct tree_node {
struct tree_node *left, *right;
T data;
};
In this case, the definition is basically recursive, and you don't know up-front how many descendants a tree node might have. You're pretty much stuck with (at least some variation of) storing pointers, and allocating descendant nodes as needed.
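For instance, a minimal sketch of growing such a structure on demand (using unique_ptr for the child links instead of the raw pointers above, just so cleanup is automatic):
#include <memory>

template <class T>
struct tree_node {
    std::unique_ptr<tree_node> left, right;  // children are allocated only when needed
    T data;
    explicit tree_node(const T& value) : data(value) {}
};

template <class T>
void insert(std::unique_ptr<tree_node<T>>& root, const T& value) {
    if (!root)
        root = std::make_unique<tree_node<T>>(value);   // allocate a node on demand
    else if (value < root->data)
        insert(root->left, value);
    else
        insert(root->right, value);
}

int main() {
    std::unique_ptr<tree_node<int>> root;
    int values[] = {5, 2, 8, 1};
    for (int v : values)
        insert(root, v);
}   // the whole tree is released recursively here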
There are also cases like dynamic strings where you have only a single object (or array of objects) in the parent object, but its size can vary over a wide enough range that you just about need to (at least provide for the possibility to) use dynamic allocation. With strings, small sizes are common enough that there's a fairly widely-used "short string optimization", where the string object directly includes enough space for strings up to some limit, as well as a pointer to allow dynamic allocation if the string exceeds that size:
template <class T>
class some_string {
static const size_t limit = 20;
size_t allocated;
size_t in_use;
union {
T short_data[limit];
T *long_data;
};
// ...
};
A less obvious reason to use a pointer instead of directly storing a sub-object is for the sake of exception safety. Just for one obvious example, if you store only pointers in a parent object, that can (usually does) make it trivial to provide a swap for those objects that gives the nothrow guarantee:
template <class T>
class parent {
T *data;
friend void swap(parent &a, parent &b) throw() {
T *temp = a.data;
a.data = b.data;
b.data = temp;
}
};
With only a couple of (usually valid) assumptions:
the pointers are valid to start with, and
assigning valid pointers will never throw an exception
...it's trivial for this swap to give the nothrow guarantee unconditionally (i.e., we can just say: "swap will not throw"). If parent stored objects directly instead of pointers, we could only guarantee that conditionally (e.g., "swap will throw if and only if the copy constructor or assignment operator for T throws").
For C++11, using a pointer like this often (usually?) makes it easy to provide an extremely efficient move constructor (that also gives the nothrow guarantee). Of course, using a pointer to (most of) the data isn't the only possible route to fast move construction -- but it is an easy one.
Finally, there are the cases I suspect you had in mind when you asked the question -- ones where the logic involved doesn't necessarily indicate whether you should use automatic or dynamic allocation. In this case, it's (obviously) a judgement call. From a purely theoretical viewpoint, it probably makes no difference at all which you use in these cases. From a practical viewpoint, however, it can make quite a bit of difference. Even though neither the C nor C++ standard guarantees (or even hints at) anything of the sort, the reality is that on most typical systems, objects using automatic allocation will end up on the stack. On most typical systems (e.g., Windows, Linux) the stack is limited to only a fairly small fraction of the available memory (typically on the order of single-digit to low double-digit megabytes).
This means that if all the objects of these types that might exist at any given time might exceed a few megabytes (or so) you need to ensure that (at least most of) the data is allocated dynamically, not automatically. There are two ways to do that: you can either leave it to the user to allocate the parent objects dynamically when/if they might exceed the available stack space, or else you can have the user work with relatively small "shell" objects that allocate space dynamically on the user's behalf.
If that's at all likely to be an issue, it's almost always preferable for the class to handle the dynamic allocation instead of forcing the user to do so. This has two obvious good points:
The user gets to use stack-based resource management (SBRM, aka RAII), and
The effects of limited stack space are limited instead of "percolating" through the whole design.
Bottom line: especially for a template where the type being stored isn't known up-front, I'd tend to favor a pointer and dynamic allocation. I'd reserve direct storage of sub-objects primarily to situations where I know the stored type will (almost?) always be quite small, or where profiling has indicated that dynamic allocation is causing a real speed problem. In the latter case, however, I'd give at least some thought to alternatives like overloading operator new for that class.

When should I use the new keyword in C++?

I've been using C++ for a short while, and I've been wondering about the new keyword. Simply, should I be using it, or not?
With the new keyword...
MyClass* myClass = new MyClass();
myClass->MyField = "Hello world!";
Without the new keyword...
MyClass myClass;
myClass.MyField = "Hello world!";
From an implementation perspective, they don't seem that different (but I'm sure they are)... However, my primary language is C#, and of course the 1st method is what I'm used to.
The difficulty seems to be that method 1 is harder to use with the std C++ classes.
Which method should I use?
Update 1:
I recently used the new keyword for heap memory (or free store) for a large array which was going out of scope (i.e. being returned from a function). Before, I was using the stack, which caused half of the elements to be corrupt outside of that scope; switching to heap usage ensured that the elements were intact. Yay!
Update 2:
A friend of mine recently told me there's a simple rule for using the new keyword; every time you type new, type delete.
Foobar *foobar = new Foobar();
delete foobar; // TODO: Move this to the right place.
This helps to prevent memory leaks, as you always have to put the delete somewhere (i.e. when you cut and paste it to either a destructor or otherwise).
Method 1 (using new)
Allocates memory for the object on the free store (This is frequently the same thing as the heap)
Requires you to explicitly delete your object later. (If you don't delete it, you could create a memory leak)
Memory stays allocated until you delete it. (i.e. you could return an object that you created using new)
The example in the question will leak memory unless the pointer is deleted; and it should always be deleted, regardless of which control path is taken, or if exceptions are thrown.
Method 2 (not using new)
Allocates memory for the object on the stack (where all local variables go). There is generally less memory available for the stack; if you allocate too many objects, you risk stack overflow.
You won't need to delete it later.
Memory is no longer allocated when it goes out of scope. (i.e. you shouldn't return a pointer to an object on the stack)
As far as which one to use; you choose the method that works best for you, given the above constraints.
Some easy cases:
If you don't want to worry about calling delete, (and the potential to cause memory leaks) you shouldn't use new.
If you'd like to return a pointer to your object from a function, you must use new
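A quick sketch of why (the class and function names here are made up):
#include <memory>

struct Thing { int value = 0; };

Thing* dangling() {
    Thing local;
    return &local;            // WRONG: local is destroyed when the function returns
}

Thing* with_new() {
    return new Thing();       // valid, but the caller must remember to delete it
}

std::unique_ptr<Thing> with_smart_pointer() {
    return std::make_unique<Thing>();  // ownership is explicit, cleanup is automatic
}

int main() {
    Thing* a = with_new();
    a->value = 1;
    delete a;                 // easy to forget

    auto b = with_smart_pointer();
    b->value = 2;
}                             // b cleans itself up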
There is an important difference between the two.
Everything not allocated with new behaves much like value types in C# (and people often say that those objects are allocated on the stack, which is probably the most common/obvious case, but not always true). More precisely, objects allocated without using new have automatic storage duration
Everything allocated with new is allocated on the heap, and a pointer to it is returned, exactly like reference types in C#.
Anything allocated on the stack has to have a constant size, determined at compile-time (the compiler has to set the stack pointer correctly, or if the object is a member of another class, it has to adjust the size of that other class). That's why arrays in C# are reference types. They have to be, because with reference types, we can decide at runtime how much memory to ask for. And the same applies here. Only arrays with constant size (a size that can be determined at compile-time) can be allocated with automatic storage duration (on the stack). Dynamically sized arrays have to be allocated on the heap, by calling new.
(And that's where any similarity to C# stops)
Now, anything allocated on the stack has "automatic" storage duration (in pre-C++11 code you could actually declare a variable as auto, but that was already the default when no other storage class was specified, so the keyword was rarely used in practice; since C++11 auto means something else entirely, but this is where the term comes from).
Automatic storage duration means exactly what it sounds like, the duration of the variable is handled automatically. By contrast, anything allocated on the heap has to be manually deleted by you.
Here's an example:
void foo() {
bar b;
bar* b2 = new bar();
}
This function creates three values worth considering:
On line 1, it declares a variable b of type bar on the stack (automatic duration).
On line 2, it declares a bar pointer b2 on the stack (automatic duration), and calls new, allocating a bar object on the heap. (dynamic duration)
When the function returns, the following will happen:
First, b2 goes out of scope (order of destruction is always opposite of order of construction). But b2 is just a pointer, so nothing special happens; the storage the pointer itself occupies is simply reclaimed. And importantly, the memory it points to (the bar instance on the heap) is NOT touched. Only the pointer is freed, because only the pointer had automatic duration.
Second, b goes out of scope, so since it has automatic duration, its destructor is called, and the memory is freed.
And the bar instance on the heap? It's probably still there. No one bothered to delete it, so we've leaked memory.
From this example, we can see that anything with automatic duration is guaranteed to have its destructor called when it goes out of scope. That's useful. But anything allocated on the heap lasts as long as we need it to, and can be dynamically sized, as in the case of arrays. That is also useful. We can use that to manage our memory allocations. What if a class allocated some memory on the heap in its constructor and deleted that memory in its destructor? Then we could get the best of both worlds: safe memory allocations that are guaranteed to be freed again, but without the limitations of forcing everything onto the stack.
And that is pretty much exactly how most C++ code works.
Look at the standard library's std::vector, for example. The vector object itself typically lives on the stack, but it can be dynamically sized and resized, and it does this by internally allocating memory on the heap as necessary. The user of the class never sees this, so there's no chance of leaking memory or of forgetting to clean up what you allocated.
This principle is called RAII (Resource Acquisition Is Initialization), and it can be extended to any resource that must be acquired and released (network sockets, files, database connections, synchronization locks). All of them can be acquired in the constructor and released in the destructor, so you're guaranteed that every resource you acquire will be freed again.
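A bare-bones sketch of the idea (IntBuffer is an invented name; real code would just use std::vector or a smart pointer instead of hand-writing this):

#include <cstddef>

class IntBuffer {
public:
    explicit IntBuffer(std::size_t n) : data_(new int[n]), size_(n) {} // acquire in the constructor
    ~IntBuffer() { delete[] data_; }                                   // release in the destructor

    // Copying is disabled to keep the sketch short; a real class would
    // either deep-copy the buffer or be move-only.
    IntBuffer(const IntBuffer&) = delete;
    IntBuffer& operator=(const IntBuffer&) = delete;

    int& operator[](std::size_t i) { return data_[i]; }
    std::size_t size() const { return size_; }

private:
    int* data_;
    std::size_t size_;
};

void use_buffer() {
    IntBuffer buf(1000); // heap memory acquired here
    buf[0] = 7;
}                        // destructor runs here, so the memory is freed even if an exception was thrown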
As a general rule, never use new/delete directly from your high-level code. Always wrap it in a class that can manage the memory for you, and which will ensure it gets freed again. (Yes, there may be exceptions to this rule. In particular, a smart pointer's constructor may still require you to call new directly and hand it the resulting pointer, which it then takes over and ensures delete is called correctly; std::make_unique and std::make_shared avoid even that. But this is still a very important rule of thumb.)
The short answer is: if you're a beginner in C++, you should never be using new or delete yourself.
Instead, you should use smart pointers such as std::unique_ptr and std::make_unique (or less often, std::shared_ptr and std::make_shared). That way, you don't have to worry nearly as much about memory leaks. And even if you're more advanced, best practice would usually be to encapsulate the custom way you're using new and delete into a small class (such as a custom smart pointer) that is dedicated just to object lifecycle issues.
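For instance (a sketch only; Connection and its open() member are made-up names standing in for whatever class you actually need):

#include <memory>

struct Connection {
    void open() { /* ... */ }
};

void run() {
    auto conn = std::make_unique<Connection>(); // heap-allocated, but no raw new anywhere
    conn->open();
} // unique_ptr's destructor deletes the Connection automatically, even if an exception propagates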
Of course, behind the scenes, these smart pointers are still performing dynamic allocation and deallocation, so code using them would still have the associated runtime overhead. Other answers here have covered these issues, and how to make design decisions on when to use smart pointers versus just creating objects on the stack or incorporating them as direct members of an object, well enough that I won't repeat them. But my executive summary would be: don't use smart pointers or dynamic allocation until something forces you to.
Which method should I use?
This is almost never determined by your typing preferences, but by the context. If you need to keep the object alive across several stack frames, or if it's too heavy for the stack, you allocate it on the free store. Also, since you are allocating an object, you are also responsible for releasing the memory; look up the delete operator.
To ease the burden of free-store management, people have invented tools like std::auto_ptr (deprecated in C++11 and since removed) and std::unique_ptr. I strongly recommend you take a look at these. They might even be of help with your typing issues ;-)
If you are writing C++, you are probably writing for performance. Using new and the free store is much slower than using the stack (especially in multithreaded code), so only use it when you need it.
As others have said, you need new when your object needs to live outside the function or object scope, the object is really large or when you don't know the size of an array at compile time.
Also, try to avoid ever using delete. Wrap your new into a smart pointer instead. Let the smart pointer call delete for you.
There are some cases where a smart pointer isn't smart. Never store std::auto_ptr<> inside an STL container; it will delete the pointed-to object too soon because of the copy operations performed inside the container. Another case is when you have a really large STL container of pointers to objects: boost::shared_ptr<> will carry significant speed overhead as it bumps the reference counts up and down. The better way to go in that case is to put the STL container into another object and give that object a destructor that will call delete on every pointer in the container (a rough sketch of that wrapper follows).
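Something along these lines, assuming an Item type invented for illustration (note that in C++11 and later a std::vector<std::unique_ptr<Item>> would usually be the simpler choice):

#include <vector>

struct Item { /* ... */ };

class ItemStore {
public:
    ItemStore() = default;
    ~ItemStore() {
        for (Item* p : items_) delete p; // the wrapper owns the pointers and frees each one exactly once
    }
    // Copying is not handled in this sketch; copying the store as-is would double-delete.
    ItemStore(const ItemStore&) = delete;
    ItemStore& operator=(const ItemStore&) = delete;

    void add(Item* p) { items_.push_back(p); }

private:
    std::vector<Item*> items_;
};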
Without the new keyword, you're storing the object on the call stack, and storing excessively large variables on the stack can lead to a stack overflow.
If your variable is used only within the context of a single function, you're better off using a stack variable, i.e., Option 2. As others have said, you do not have to manage the lifetime of stack variables - they are constructed and destructed automatically. Also, allocating/deallocating a variable on the heap is slow by comparison. If your function is called often enough, you'll see a tremendous performance improvement if you use stack variables instead of heap variables.
That said, there are a couple of obvious instances where stack variables are insufficient.
If the stack variable has a large memory footprint, then you run the risk of overflowing the stack. By default, the stack size of each thread is 1 MB on Windows. It is unlikely that you'll create a stack variable that is 1 MB in size, but you have to keep in mind that stack utilization is cumulative. If your function calls a function which calls another function which calls another function which..., the stack variables in all of these functions take up space on the same stack. Recursive functions can run into this problem quickly, depending on how deep the recursion is. If this is a problem, you can increase the size of the stack (not recommended) or allocate the variable on the heap using the new operator (recommended).
The other, more likely condition is that your variable needs to "live" beyond the scope of your function. In this case, you'd allocate the variable on the heap so that it can be reached outside the scope of any given function.
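To illustrate the first case, a minimal sketch (the 8 MB figure is arbitrary, chosen only to be larger than a typical default stack):

#include <vector>

void process() {
    // char big[8 * 1024 * 1024];           // ~8 MB of automatic storage: likely to overflow the stack
    std::vector<char> big(8 * 1024 * 1024); // same amount of storage, but allocated on the heap
    big[0] = 'x';                           // ... fill and use the buffer ...
}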
The simple answer is yes: new creates an object on the heap (with the unfortunate side effect that you have to manage its lifetime by explicitly calling delete on it), whereas the second form creates an object on the stack in the current scope, and that object will be destroyed when it goes out of scope.
Are you passing myClass out of a function, or expecting it to exist outside that function? As some others have said, it is all about scope when you aren't allocating on the heap: when you leave the function, the object goes away (eventually). One of the classic beginner mistakes is to create a local object of some class in a function and return a pointer to it without allocating the object on the heap. I can remember debugging this kind of thing back in my earlier days doing C++.
C++ Core Guidelines R.11: Avoid using new and delete explicitly.
Things have changed significantly since most answers to this question were written. Specifically, C++ has evolved as a language, and the standard library is now richer. Why does this matter? Because of a combination of two factors:
Using new and delete is potentially dangerous: memory might leak if you don't keep a very strong discipline of deleting everything you've allocated once it's no longer used, and of never deleting what's not currently allocated.
The standard library now offers smart pointers which encapsulate the new and delete calls, so that you don't have to take care of managing allocations on the free store/heap yourself. So do other containers, in the standard library and elsewhere.
This has evolved into one of the C++ community's "core guidelines" for writing better C++ code, as the linked document shows. Of course, there are exceptions to this rule: somebody needs to write those encapsulating classes which do use new and delete, but that somebody is rarely yourself.
Adding to #DanielSchepler's valid answer:
The second method creates the instance on the stack, alongside such things as a local int and the list of parameters that are passed into the function.
The first method makes room for a pointer on the stack, which you've set to the location in memory where a new MyClass has been allocated on the heap - or free store.
The first method also requires that you delete what you create with new, whereas in the second method the object is automatically destructed and freed when it goes out of scope (at the next closing brace, usually).
The short answer is yes: the "new" keyword is incredibly important, because when you use it the object's data is stored on the heap rather than on the stack, and that distinction matters a great deal!