Suppose I have a class named A:
class A
{
...
};
And what's the difference between the following 2 approaches to instantiate an object:
int main()
{
A a; // 1
A *pa = new A(); // 2
}
My current understanding (I'm not sure about this yet) is:
Approach 1 allocates the object a on the stack frame of main(), and this object cannot be deleted because such a deletion doesn't make sense (I don't know why yet; could someone explain that?).
Approach 2 allocates the object on the heap of the process and also an A* variable pa on the stack frame of main(), so the object can be deleted and pa can be assigned null after the deletion.
Am I right? If my understanding is correct, could someone tell me why I cannot delete the object a from the stack in approach 1?
Many thanks...
Object a has automatic storage duration, so it will be deleted automatically at the end of the scope in which it is defined. It doesn't make sense to attempt to delete it manually. Manual deletion is only required for objects with dynamic storage duration, such as *pa, which has been allocated using new.
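To make that concrete, here's a minimal sketch; A is given a tracing destructor purely for illustration.
#include <iostream>

class A {
public:
    ~A() { std::cout << "A destroyed\n"; }
};

int main() {
    {
        A a;             // automatic storage duration
    }                    // a's destructor runs here automatically; no delete allowed or needed

    A* pa = new A();     // dynamic storage duration
    delete pa;           // destructor runs here and the memory is released
    pa = nullptr;        // avoid leaving a dangling pointer
}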
The object's lifetime is limited to the scope the variable is defined in; once you leave the scope, the object will be cleaned up. In C++ a scope is delimited by any block between { and the corresponding }.
Here only the pointer is on the stack and not the object, so when you leave the scope only the pointer will be cleaned up; the object will still be around somewhere.
As for deleting an object: delete not only calls the destructor of your object but also releases its memory. This would not work on the stack, because the memory management of the stack is automated by the compiler; in contrast, the heap is not automated and requires calls to new and delete to manage the lifetime of an object.
Any object created by a call to new has to be deleted exactly once; forgetting to do this results in a memory leak, as the object's memory will never be released.
Approach 1 declares a variable and creates an object. In approach 2, you create an instance and a pointer to it.
EDIT: In approach 1, the object will go out of scope and will be destroyed automatically. In approach 2, the pointer will be destroyed automatically, but not what it is pointing to. That will be your job.
Imagine the stack as void* stack = malloc(1000000);
Now, this memory block is managed internally by the compiler and the CPU. It is a shared piece of memory: every function can use it to store temporary objects there. That's called automatic storage. You cannot delete parts of that memory because its purpose is to be used again and again. If you explicitly delete memory, that memory returns back to the system, and you don't want that to happen in a shared piece of memory.
In a way, automatic objects also get deleted. When an object goes out of scope, the compiler places an invisible call to the object's destructor and the memory is available again.
You cannot delete objects on the stack because it's implemented in memory exactly that way -- as a stack. As you create objects on the stack, they are added on top of each other. As the objects leave scope, they are destroyed from the top, in the opposite order they were created in (add to the top of the stack, and remove from the top of the stack). Trying to call delete on something in the stack would break that order. The analogy would be like trying to pull some paper out of the middle of a stack of papers.
The compiler controls how objects are created and removed on the stack. It can do this because it knows exactly how large each object on the stack is. Since those sizes are known at compile time, allocating memory for things on the stack is extremely fast, much faster than allocating memory from the heap, which is managed by the runtime allocator and the operating system.
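A small sketch of that last-in, first-out behaviour; the Tracer class and names are made up for illustration, and the output order is the point.
#include <iostream>
#include <string>

struct Tracer {
    std::string name;
    explicit Tracer(const std::string& n) : name(n) { std::cout << "construct " << name << "\n"; }
    ~Tracer() { std::cout << "destruct  " << name << "\n"; }
};

int main() {
    Tracer first("first");
    Tracer second("second");
    // On scope exit the output is:
    //   destruct  second
    //   destruct  first
    // i.e. destruction happens in the reverse order of construction.
}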
Allocation does two things:
1) Allocates memory for the object
2) Calls the constructor on the allocated memory
Deletion does two things:
1) Calls the destructor on the object
2) Deallocates the memory used by the destructed object
When you allocate on the stack (A a;), you're telling the compiler "please make an object for me, by allocating memory, then call the constructor on that memory. And while you're at it, could you handle calling the destructor and freeing the memory, when it goes out of scope? Thanks!". Once the function (main) ends, the object goes out of scope, the destructor is called, and the memory is freed.
When you allocate on the heap (A* pa = new A();), you're telling the compiler "please make an object for me. I know what I'm doing, so don't bother calling the destructor or freeing the memory. I'll tell you when to do it, some other time". Once the function (main) ends, the object you allocated stays in scope, and is not destructed or freed. Hopefully you have a pointer to it stored somewhere else in your program (as in, you copied pa to some other variable with a bigger scope). You're gonna have to tell the compiler to destruct the object and free the memory at some point in the future. Otherwise, you get a memory leak.
Simply put, the "delete" command is only for objects allocated on the heap, because that's the manual memory management interface in C++ - new/delete. It is a command for the heap allocator, and the heap allocator doesn't know anything about stack allocated objects. If you try to call delete on a stack allocated object, you might as well have called it on a random memory address - they're the same thing as far as the heap allocator is concerned. Very much like trying to access an object outside array bounds:
int a[10];
std::cout << a[37] << "\n"; // a[37] points at... ? no one knows!
It just isn't meant to do that :)
Edit:
P.S. Memory leaks are more important when you are allocating memory in a function other than main. When the program ends, leaked memory gets deallocated, so a memory leak in main might not be a big deal, depending on your scenario. However, destructors never get called on leaked objects. If the destructor does something important, like closing a database or a file, then you might have a more serious bug on your hands.
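A minimal sketch of that last point, assuming a hypothetical LogFile wrapper and file name: if the object is leaked, its destructor never runs, so the file is never flushed or closed.
#include <cstdio>

struct LogFile {
    std::FILE* handle;
    explicit LogFile(const char* path) : handle(std::fopen(path, "w")) {}
    ~LogFile() { if (handle) std::fclose(handle); }  // flushes and closes the file
};

void good() {
    LogFile log("app.log");       // automatic: destructor closes the file at scope exit
}

void leaky() {
    LogFile* log = new LogFile("app.log");
    // no delete: besides the leaked memory, the destructor never runs,
    // so the file handle is never flushed or closed
}

int main() {
    good();
    leaky();
}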
Stack memory is not managed in the same way as heap memory. There is no point in deleting objects from the stack: they will be deleted automatically at the end of the scope/function.
I don't quite get the point of dynamically allocated memory and I am hoping you guys can make things clearer for me.
First of all, every time we allocate memory we simply get a pointer to that memory.
int * dynInt = new int;
So what is the difference between doing what I did above and:
int someInt;
int* dynInt = &someInt;
As I understand, in both cases memory is allocated for an int, and we get a pointer to that memory.
So what's the difference between the two? When is one method preferred over the other?
Further more why do I need to free up memory with
delete dynInt;
in the first case, but not in the second case.
My guesses are:
When dynamically allocating memory for an object, the object doesn't get initialized, while if you do something like the second case, the object gets initialized. If this is the only difference, is there any motivation behind this apart from the fact that dynamically allocating memory is faster?
The reason we don't need to use delete in the second case is that the fact that the object was initialized creates some kind of automatic destruction routine.
Those are just guesses would love it if someone corrected me and clarified things for me.
The difference is in storage duration.
Objects with automatic storage duration are your "normal" objects that automatically go out of scope at the end of the block in which they're defined.
Create them like int someInt;
You may have heard of them as "stack objects", though I object to this terminology.
Objects with dynamic storage duration have something of a "manual" lifetime; you have to destroy them yourself with delete, and create them with the keyword new.
You may have heard of them as "heap objects", though I object to this, too.
The use of pointers is actually not strictly relevant to either of them. You can have a pointer to an object of automatic storage duration (your second example), and you can have a pointer to an object of dynamic storage duration (your first example).
But it's rare that you'll want a pointer to an automatic object, because:
you don't have one "by default";
the object isn't going to last very long, so there's not a lot you can do with such a pointer.
By contrast, dynamic objects are often accessed through pointers, simply because the syntax comes close to enforcing it. new returns a pointer for you to use, you have to pass a pointer to delete, and (aside from using references) there's actually no other way to access the object. It lives "out there" in a cloud of dynamicness that's not sitting in the local scope.
Because of this, the usage of pointers is sometimes confused with the usage of dynamic storage, but in fact the former is not causally related to the latter.
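A short sketch of that point: the pointer is used the same way in both cases; only the cleanup differs.
int main() {
    int automatic = 42;
    int* p1 = &automatic;    // pointer to an object with automatic storage duration
    *p1 = 43;                // fine; no delete allowed or needed

    int* p2 = new int(42);   // pointer to an object with dynamic storage duration
    *p2 = 43;                // used in exactly the same way...
    delete p2;               // ...but this one must be deleted
}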
An object created like this:
int foo;
has automatic storage duration - the object lives until the variable foo goes out of scope. This means that in your second example, dynInt will be an invalid pointer once someInt goes out of scope (for example, at the end of a function).
An object created like this:
int* foo = new int;
has dynamic storage duration - the object lives until you explicitly call delete on it.
Initialization of the objects is an orthogonal concept; it is not directly related to which type of storage-duration you use. See here for more information on initialization.
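A minimal sketch of the two storage durations described above; the function names are made up for illustration.
int* dangling() {
    int someInt = 5;
    return &someInt;      // WRONG: someInt dies when the function returns,
}                         // so the caller receives a dangling pointer

int* make_dynamic() {
    return new int(5);    // OK: the int outlives the function,
}                         // but the caller must delete it

int main() {
    int* p = make_dynamic();
    delete p;             // caller's responsibility
}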
Your program gets an initial chunk of memory at startup. This memory is called the stack. The amount is usually around 2MB these days.
Your program can ask the OS for additional memory. This is called dynamic memory allocation. This allocates memory on the free store (C++ terminology) or the heap (C terminology). You can ask for as much memory as the system is willing to give (multiple gigabytes).
The syntax for allocating a variable on the stack looks like this:
{
int a; // allocate on the stack
} // automatic cleanup on scope exit
The syntax for allocating a variable using memory from the free store looks like this:
int * a = new int; // ask the OS for memory to store an int
delete a; // user is responsible for deleting the object
To answer your questions:
When is one method preferred to the other.
Generally stack allocation is preferred.
Dynamic allocation is required when you need to store a polymorphic object using its base type.
Always use a smart pointer to automate deletion:
C++03: boost::scoped_ptr, boost::shared_ptr or std::auto_ptr.
C++11: std::unique_ptr or std::shared_ptr.
For example:
// stack allocation (safe)
Circle c;
// heap allocation (unsafe)
Shape * shape = new Circle;
delete shape;
// heap allocation with smart pointers (safe)
std::unique_ptr<Shape> shape(new Circle);
Further more why do I need to free up memory in the first case, but not in the second case.
As I mentioned above, stack-allocated variables are automatically deallocated on scope exit.
Note that you are not allowed to delete stack memory: doing so is undefined behavior and will most likely crash your application.
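For example, a sketch of what not to do:
int main() {
    int onStack = 0;
    int* p = &onStack;
    // delete p;        // WRONG: p does not point to memory obtained from new;
                        // deleting it is undefined behaviour (often a crash)

    int* onHeap = new int(0);
    delete onHeap;      // fine: new and delete are paired
}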
For a single integer, dynamic allocation only makes sense if you need to keep the value after, for example, returning from a function. Had you declared someInt as you said, it would have been invalidated as soon as it went out of scope.
However, in general there is a greater use for dynamic allocation. There are many things that your program doesn't know before allocation and depends on input. For example, your program needs to read an image file. How big is that image file? We could say we store it in an array like this:
unsigned char data[1000000];
But that would only work if the image size was less than or equal to 1000000 bytes, and would also be wasteful for smaller images. Instead, we can dynamically allocate the memory:
unsigned char* data = new unsigned char[file_size];
Here, file_size is determined at runtime. You couldn't possibly tell this value at the time of compilation.
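A minimal sketch, assuming file_size has already been obtained at run time; note that memory from new[] must be released with delete[]:
#include <cstddef>

int main() {
    std::size_t file_size = 4096;                      // imagine this value came from the file system
    unsigned char* data = new unsigned char[file_size];
    // ... read the file contents into data ...
    delete[] data;                                     // new[] must be paired with delete[]
}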
Read more about dynamic memory allocation and also garbage collection
You really need to read a good C or C++ programming book.
Explaining in detail would take a lot of time.
The heap is the memory inside which dynamic allocation (with new in C++ or malloc in C) happens. There are system calls involved with growing and shrinking the heap. On Linux, they are mmap & munmap (used to implement malloc and new etc...).
You can call the allocation primitive many times, so you could put int *p = new int; inside a loop and get a fresh location every time you loop!
Don't forget to release memory (with delete in C++ or free in C). Otherwise, you'll get a memory leak (a nasty kind of bug). On Linux, valgrind helps to catch them.
Whenever you use new in C++, memory is typically allocated through malloc, which itself uses a system call such as sbrk (or similar). Therefore no one except the allocator (and ultimately the OS) has knowledge of the requested size. So you'll have to use delete (which calls free, which may go to sbrk again) to give memory back to the system. Otherwise you'll get a memory leak.
Now, when it comes to your second case, the compiler has knowledge about the size of the allocated memory. That is, in your case, the size of one int. Setting a pointer to the address of this int does not change anything in the knowledge of the needed memory. Or in other words: the compiler is able to take care of freeing the memory. In the first case, with new, this is not possible.
In addition to that: new and malloc respectively do not need to allocate exactly the requested size, which makes things a bit more complicated.
Edit
Two more common phrases: the second case is also known as static memory allocation (handled by the compiler), while the first case is dynamic memory allocation (handled by the runtime system).
What happens if your program is supposed to let the user store any number of integers? Then you'll need to decide during run-time, based on the user's input, how many ints to allocate, so this must be done dynamically.
In a nutshell, a dynamically allocated object's lifetime is controlled by you and not by the language. This allows you to let it live as long as it is required (as opposed to until the end of the scope), possibly determined by a condition that can only be calculated at run-time.
Also, dynamic memory is typically much more "scalable" - i.e. you can allocate more and/or larger objects compared to stack-based allocation.
The allocation essentially "marks" a piece of memory so no other object can be allocated in the same space. De-allocation "unmarks" that piece of memory so it can be reused for later allocations. If you fail to deallocate memory after it is no longer needed, you get a condition known as "memory leak" - your program is occupying a memory it no longer needs, leading to possible failure to allocate new memory (due to the lack of free memory), and just generally putting an unnecessary strain on the system.
Consider the following program:
int main() {
while(...) {
int* foobar = new int;
}
return 0;
}
When does foobar go out of scope?
I know that when using new, objects are allocated on the heap and need to be deleted manually with delete; in the code above, this is causing a memory leak. However, what about scope?
I thought it would go out of scope as soon as the while loop terminates, because you have no direct access to it anymore. For example, you cannot delete it after the loop has terminated.
Be careful here, foobar is local to the while loop, but the allocation on the heap has no scope and will only be destructed if you call delete on it.
The variable and the allocation are not linked in any way as far as the compiler is concerned. Indeed, the allocation happens at run time, so the compiler never even sees it.
foobar is a local variable that goes out of scope at the end of the block.
*foobar is a dynamically allocated object with manual lifetime. Since it doesn't have scoped lifetime, the question makes no sense -- it doesn't have a scope out of which it could go. Its lifetime is managed manually, and the object lives until you delete it.
Your question is dangerously burdened with prejudice and preconceptions. It is best to approach C++ with a clean mind and an open attitude. Only that way will you be able to appreciate the language's wonders to the fullest.
Here's the clean and open approach: Do think about 1) storage classes (automatic, static, dynamic), 2) object lifetime (scoped, permanent, manual), 3) object semantics (value (copies) vs reference (aliases)), 4) RAII and single-responsibility classes. Purge your mind of a) stack/heap, b) pointers, c) new/delete, d) destructors/copy constructors/assignment operators.
That's a pretty awesome memory leak. You have a variable on the stack pointing to the memory allocated on the heap. You need to delete the memory on the heap before you lose the reference to it when the while loop scope ends. Alternatively, if you don't want to fuss with memory management, always use smart pointers to own the raw memory on the heap and let them clean up after themselves.
#include <memory>
int main() {
while(...) {
std::unique_ptr<int> foobar(new int);
} // smart pointer foobar deletes the allocated int each iteration
return 0;
}
The pointer (foobar) will go out of scope right as the program gets to the closing brace of the while loop. So if the expression in the ... remains true, memory will be leaked every time the loop executes as you have lost a handle to the allocated object as of that closing brace.
Here foobar is an int pointer occupying memory on the stack. The int instance you are creating dynamically with new goes to the heap. When foobar goes out of scope, you lose the reference to the int, so you can no longer delete the memory allocated on the heap.
The best solution would be:
while(...)
{
int foobar;
}//it goes out of scope here. deleted from stack automatically!!
If you still want to use the dynamic allocation then do this:
while(...)
{
int* foobar=new int;
//do your work here!
delete foobar; //This deletes the heap memory allocated!
foobar=NULL; //avoid dangling pointer! :)
}
foobar goes out of scope after each iteration of the loop.
The memory you allocate and assign to foobar is being leaked, in that it is still allocated on the heap but no references to it are available in the program.
Since foobar is declared in the body of the loop, it goes out of scope at the end of every iteration of the loop. It is then redeclared, and new memory is allocated again and again until the loop ends. The actual object that foobar points to never goes out of scope. Scope doesn't apply to dynamically allocated (aka heap) objects, only to automatic (stack) objects.
foobar, the pointer, is created on the stack, but the new int is created on the heap. In the case of the while loop, each time the code loops, foobar falls out of scope. The newly created int persists on the heap. On each iteration a new int is created and the pointer is reset, which means the pointer can no longer access any of the previous int(s) on the heap.
What seems to be lacking in every one of the previous answers, and even in this one, is the heap falling out of scope. Maybe I am getting the terminology wrong, but I know that at some point the heap is reset too. It may occur once the program no longer runs, or when the computer is turned off, but I know it occurs.
Let us look at this question from a different perspective. I have written any number of programs which leak memory. Over all the years I have owned my computer, I am positive I have leaked over 2 gigabytes of memory. My computer only has 1 gig of memory. Therefore, if the heap NEVER falls out of scope, then my computer has some magical memory. Would one of you care to explain when exactly the heap falls out of scope?
In code I'm working on right now I have a method belonging to a class that itself creates an instance of another object to use within the method. Does the memory belonging to that object automatically get released after the method returns and the object loses scope? Or will I be taking up more and more memory each time the method is called?
The code has this structure:
int Class::method(int input) {
Other_Class local_instance;
int i;
i = local_instance.do_something();
i *= input;
return i;
}
So will the memory belonging to local_instance be released upon returning from the method? Or will I have many instances of Other_Class clogging up memory?
Thanks very much for your time and help!
The local_instance object will be allocated on the stack, and it will be destroyed (its destructor will be called) when the method returns.
Well, to begin with, local_instance is not an object of class Other_Class but a function without arguments returning Other_Class. This is also known as C++'s most vexing parse.
EDIT: The code in the question has been corrected; the original version had
Other_Class local_instance();
But let's assume that this line actually read
Other_Class local_instance; // no parentheses
Then yes, this object will automatically get released at the end of the function. Note however that the same is not true for objects you allocate with new; for those objects it is your responsibility to delete them when they are no longer needed.
Assuming Other_Class::~Other_Class() politely cleans up any memory Other_Class allocates on the heap.
In C++, unless the object is created on the heap, all memory is automatically deallocated. For example, Other_Class is created on the stack rather than the heap, and therefore Other_Class will automatically be deallocated when the function returns.
However, objects on the heap will NOT be automatically deallocated. Instead, it's the developer's responsibility to clean up any memory on the heap.
For example, although your code is fine, this code creates a memory leak:
int main ()
{
Other_Class *memOnHeap = new Other_Class;
return 0;
}
In the code above, gcc will happily compile it; however, you will create a memory leak the size of an Other_Class object, because you have allocated memory for it on the heap but never deallocated it with a call to delete. The problem can easily be fixed by inserting delete memOnHeap; right before the return statement in main.
Unless the object is created on the heap with the new operator, the object will be destroyed and memory reclaimed when it goes out of scope.
In addition to all the above: if the constructor of the object local_instance itself instantiates objects using the new operator and keeps them only through raw pointers, then local_instance, upon destruction, will not automatically release the memory allocated by those operations. In other words, you have to think about the children as well! (Think auto pointers, deletion in destructors, etc.)
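A minimal sketch of that situation; the internals of Other_Class shown here are purely illustrative.
class Other_Class {
    int* buffer;                          // memory the object owns on the heap
public:
    Other_Class() : buffer(new int[100]) {}
    ~Other_Class() { delete[] buffer; }   // without this, every instance would leak its buffer

    int do_something() { return buffer[0] = 42; }

    // Note: a real class owning raw memory also needs to handle copying
    // (copy constructor / assignment operator), omitted here for brevity.
};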
This is a very newbie question, but something completely new to me. In my code, and everywhere I have seen it before, new objects are created as such...
MyClass x = new MyClass(factory);
However, I just saw some example code that looks like this...
MyClass x(factory);
Does that do the same thing?
Not at all.
The first example uses dynamic memory allocation, i.e., you are allocating an instance of MyClass on the heap (as opposed to the stack). You would need to call delete on that pointer manually or you end up with a memory leak. Also, operator new returns a pointer, not the object itself, so your code would not compile. It needs to change to:
MyClass* x = new MyClass(factory);
The second example allocates an instance of MyClass on the stack. This is very useful for short-lived objects, as they will automatically be cleaned up when they leave the current scope (and it is fast; cleaning up the stack involves nothing more than incrementing or decrementing a pointer).
This is also how you would implement the Resource Acquisition is Initialization pattern, more commonly referred to as RAII. The destructor for your type would clean up any dynamically allocated memory, so when the stack allocated variable goes out of scope any dynamically allocated memory is cleaned up for you without the need for any outside calls to delete.
No. When you use new, you create objects off the heap that you must then delete later. In addition, you really need MyClass*. The other form creates an object on the stack that will be automatically destroyed at end of scope.
I've been using C++ for a short while, and I've been wondering about the new keyword. Simply, should I be using it, or not?
With the new keyword...
MyClass* myClass = new MyClass();
myClass->MyField = "Hello world!";
Without the new keyword...
MyClass myClass;
myClass.MyField = "Hello world!";
From an implementation perspective, they don't seem that different (but I'm sure they are)... However, my primary language is C#, and of course the 1st method is what I'm used to.
The difficulty seems to be that method 1 is harder to use with the std C++ classes.
Which method should I use?
Update 1:
I recently used the new keyword for heap memory (or free store) for a large array which was going out of scope (i.e. being returned from a function). Whereas before I was using the stack, which caused half of the elements to be corrupt outside of the scope, switching to heap usage ensured that the elements were intact. Yay!
Update 2:
A friend of mine recently told me there's a simple rule for using the new keyword: every time you type new, type delete.
Foobar *foobar = new Foobar();
delete foobar; // TODO: Move this to the right place.
This helps to prevent memory leaks, as you always have to put the delete somewhere (i.e. when you cut and paste it to either a destructor or otherwise).
Method 1 (using new)
Allocates memory for the object on the free store (This is frequently the same thing as the heap)
Requires you to explicitly delete your object later. (If you don't delete it, you could create a memory leak)
Memory stays allocated until you delete it. (i.e. you could return an object that you created using new)
The example in the question will leak memory unless the pointer is deleted; and it should always be deleted, regardless of which control path is taken, or if exceptions are thrown.
Method 2 (not using new)
Allocates memory for the object on the stack (where all local variables go). There is generally less memory available on the stack; if you allocate too many objects, you risk a stack overflow.
You won't need to delete it later.
Memory is no longer allocated when it goes out of scope. (i.e. you shouldn't return a pointer to an object on the stack)
As far as which one to use; you choose the method that works best for you, given the above constraints.
Some easy cases:
If you don't want to worry about calling delete (and the potential to cause memory leaks), you shouldn't use new.
If you'd like to return a pointer to your object from a function, you must use new.
There is an important difference between the two.
Everything not allocated with new behaves much like value types in C# (and people often say that those objects are allocated on the stack, which is probably the most common/obvious case, but not always true). More precisely, objects allocated without using new have automatic storage duration.
Everything allocated with new is allocated on the heap, and a pointer to it is returned, exactly like reference types in C#.
Anything allocated on the stack has to have a constant size, determined at compile-time (the compiler has to set the stack pointer correctly, or if the object is a member of another class, it has to adjust the size of that other class). That's why arrays in C# are reference types. They have to be, because with reference types, we can decide at runtime how much memory to ask for. And the same applies here. Only arrays with constant size (a size that can be determined at compile-time) can be allocated with automatic storage duration (on the stack). Dynamically sized arrays have to be allocated on the heap, by calling new.
(And that's where any similarity to C# stops)
Now, anything allocated on the stack has "automatic" storage duration (you can actually declare a variable as auto, but this is the default if no other storage type is specified so the keyword isn't really used in practice, but this is where it comes from)
Automatic storage duration means exactly what it sounds like, the duration of the variable is handled automatically. By contrast, anything allocated on the heap has to be manually deleted by you.
Here's an example:
void foo() {
bar b;
bar* b2 = new bar();
}
This function creates three values worth considering:
On line 1, it declares a variable b of type bar on the stack (automatic duration).
On line 2, it declares a bar pointer b2 on the stack (automatic duration), and calls new, allocating a bar object on the heap. (dynamic duration)
When the function returns, the following will happen:
First, b2 goes out of scope (order of destruction is always opposite of order of construction). But b2 is just a pointer, so nothing happens, the memory it occupies is simply freed. And importantly, the memory it points to (the bar instance on the heap) is NOT touched. Only the pointer is freed, because only the pointer had automatic duration.
Second, b goes out of scope, so since it has automatic duration, its destructor is called, and the memory is freed.
And the bar instance on the heap? It's probably still there. No one bothered to delete it, so we've leaked memory.
From this example, we can see that anything with automatic duration is guaranteed to have its destructor called when it goes out of scope. That's useful. But anything allocated on the heap lasts as long as we need it to, and can be dynamically sized, as in the case of arrays. That is also useful. We can use that to manage our memory allocations. What if a class allocated some memory on the heap in its constructor, and deleted that memory in its destructor? Then we could get the best of both worlds: safe memory allocations that are guaranteed to be freed again, but without the limitations of forcing everything to be on the stack.
And that is pretty much exactly how most C++ code works.
Look at the standard library's std::vector for example. That is typically allocated on the stack, but can be dynamically sized and resized. And it does this by internally allocating memory on the heap as necessary. The user of the class never sees this, so there's no chance of leaking memory, or forgetting to clean up what you allocated.
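As a small sketch of the user-facing side (the element count here is arbitrary):
#include <iostream>
#include <vector>

int main() {
    std::vector<int> values;          // the vector object itself has automatic duration
    for (int i = 0; i < 1000; ++i)
        values.push_back(i);          // the elements live on the heap, managed internally
    std::cout << values.size() << "\n";
}                                     // the vector's destructor frees the heap memory here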
This principle is called RAII (Resource Acquisition is Initialization), and it can be extended to any resource that must be acquired and released. (network sockets, files, database connections, synchronization locks). All of them can be acquired in the constructor, and released in the destructor, so you're guaranteed that all resources you acquire will get freed again.
As a general rule, never use new/delete directly from your high-level code. Always wrap it in a class that can manage the memory for you, and which will ensure it gets freed again. (Yes, there may be exceptions to this rule. In particular, smart pointers require you to call new directly and pass the pointer to their constructor, which then takes over and ensures delete is called correctly. But this is still a very important rule of thumb.)
The short answer is: if you're a beginner in C++, you should never be using new or delete yourself.
Instead, you should use smart pointers such as std::unique_ptr and std::make_unique (or less often, std::shared_ptr and std::make_shared). That way, you don't have to worry nearly as much about memory leaks. And even if you're more advanced, best practice would usually be to encapsulate the custom way you're using new and delete into a small class (such as a custom smart pointer) that is dedicated just to object lifecycle issues.
Of course, behind the scenes, these smart pointers are still performing dynamic allocation and deallocation, so code using them would still have the associated runtime overhead. Other answers here have covered these issues, and how to make design decisions on when to use smart pointers versus just creating objects on the stack or incorporating them as direct members of an object, well enough that I won't repeat them. But my executive summary would be: don't use smart pointers or dynamic allocation until something forces you to.
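A minimal sketch of what that looks like in practice, assuming C++14 or later for std::make_unique; Widget is just a placeholder type.
#include <iostream>
#include <memory>

struct Widget {
    ~Widget() { std::cout << "Widget destroyed\n"; }
};

int main() {
    auto w = std::make_unique<Widget>();   // heap allocation without a raw new
    // use w like a pointer: w->member, *w
}                                          // the unique_ptr deletes the Widget automatically here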
Which method should I use?
This is almost never determined by your typing preferences but by the context. If you need to keep the object across a few stack frames, or if it's too heavy for the stack, you allocate it on the free store. Also, since you are allocating an object, you are also responsible for releasing the memory. Look up the delete operator.
To ease the burden of using free-store management people have invented stuff like auto_ptr and unique_ptr. I strongly recommend you take a look at these. They might even be of help to your typing issues ;-)
If you are writing in C++ you are probably writing for performance. Using new and the free store is much slower than using the stack (especially when using threads) so only use it when you need it.
As others have said, you need new when your object needs to live outside the function or object scope, the object is really large or when you don't know the size of an array at compile time.
Also, try to avoid ever using delete. Wrap your new into a smart pointer instead. Let the smart pointer call delete for you.
There are some cases where a smart pointer isn't smart. Never store std::auto_ptr<> inside an STL container; it will delete the pointer too soon because of copy operations inside the container. Another case is when you have a really large STL container of pointers to objects. boost::shared_ptr<> will have a ton of speed overhead as it bumps the reference counts up and down. The better way to go in that case is to put the STL container into another object and give that object a destructor that will call delete on every pointer in the container.
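A sketch of that wrapper idea; Item and ItemStore are placeholder names.
#include <cstddef>
#include <vector>

struct Item { /* ... */ };

class ItemStore {
    std::vector<Item*> items;
public:
    void add(Item* p) { items.push_back(p); }
    ~ItemStore() {
        for (std::size_t i = 0; i < items.size(); ++i)
            delete items[i];    // the wrapper's destructor deletes every pointer it owns
    }
    // Copying is not handled here; a real version would forbid or implement it.
};

int main() {
    ItemStore store;
    store.add(new Item);
    store.add(new Item);
}                               // all Items are deleted when store goes out of scope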
Without the new keyword you're storing the object on the call stack. Storing excessively large variables on the stack will lead to a stack overflow.
If your variable is used only within the context of a single function, you're better off using a stack variable, i.e., option 2. As others have said, you do not have to manage the lifetime of stack variables - they are constructed and destructed automatically. Also, allocating/deallocating a variable on the heap is slow by comparison. If your function is called often enough, you'll see a tremendous performance improvement if you use stack variables instead of heap variables.
That said, there are a couple of obvious instances where stack variables are insufficient.
If the stack variable has a large memory footprint, then you run the risk of overflowing the stack. By default, the stack size of each thread is 1 MB on Windows. It is unlikely that you'll create a stack variable that is 1 MB in size, but you have to keep in mind that stack utilization is cumulative. If your function calls a function which calls another function which calls another function which..., the stack variables in all of these functions take up space on the same stack. Recursive functions can run into this problem quickly, depending on how deep the recursion is. If this is a problem, you can increase the size of the stack (not recommended) or allocate the variable on the heap using the new operator (recommended).
The other, more likely condition is that your variable needs to "live" beyond the scope of your function. In this case, you'd allocate the variable on the heap so that it can be reached outside the scope of any given function.
The simple answer is yes: new creates an object on the heap (with the unfortunate side effect that you have to manage its lifetime by explicitly calling delete on it), whereas the second form creates an object on the stack in the current scope, and that object will be destroyed when it goes out of scope.
Are you passing myClass out of a function, or expecting it to exist outside that function? As some others said, it is all about scope when you aren't allocating on the heap. When you leave the function, it goes away (eventually). One of the classic mistakes made by beginners is attempting to create a local object of some class in a function and return it without allocating it on the heap. I can remember debugging this kind of thing back in my earlier days doing C++.
C++ Core Guidelines R.11: Avoid using new and delete explicitly.
Things have changed significantly since most answers to this question were written. Specifically, C++ has evolved as a language, and the standard library is now richer. Why does this matter? Because of a combination of two factors:
Using new and delete is potentially dangerous: memory might leak if you don't keep a very strong discipline of deleting everything you've allocated once it's no longer used, and never deleting what's not currently allocated.
The standard library now offers smart pointers which encapsulate the new and delete calls, so that you don't have to take care of managing allocations on the free store/heap yourself. So do other containers, in the standard library and elsewhere.
This has evolved into one of the C++ community's "core guidelines" for writing better C++ code, as the linked document shows. Of course, there are exceptions to this rule: somebody needs to write those encapsulating classes which do use new and delete, but that someone is rarely yourself.
Adding to #DanielSchepler's valid answer:
The second method creates the instance on the stack, along with such things as something declared int and the list of parameters that are passed into the function.
The first method makes room for a pointer on the stack, which you've set to the location in memory where a new MyClass has been allocated on the heap - or free store.
The first method also requires that you delete what you create with new, whereas in the second method, the class is automatically destructed and freed when it falls out of scope (the next closing brace, usually).
The short answer is yes, the new keyword is incredibly important: when you use it, the object's data is stored on the heap as opposed to the stack, which is most important!