What's a good policy for when to use "new" to make an instance of a class? I've been hobby programming C++ for a while but I'm still not sure when it's best to do this:
MyClass thing(param1, param2);
over this:
MyClass* thing;
thing = new MyClass(param1, param2);
Any advice?
Design-wise, use automatic (stack) allocation as much as possible. Whenever you need to extend the lifetime of an object beyond a certain scope, then dynamically allocate it.
And even so, never dynamically allocate things raw. Always keep them wrapped in some sort of wrapper that implements Scope-Bound Resource Management (SBRM, first known under the dumb/awkward name Resource Acquisition Is Initialization, or RAII). That is, dynamic allocations should be kept in automatic objects that will clean up automatically!
A good example of this is std::vector: you cannot leak the memory internal to a vector, because its destructor is run in every scenario where the memory should be freed, and it will free it for you. auto_ptr is the first and only smart pointer available in the standard library, but it's pretty bad. Better is to use shared_ptr, or any of the other popular smart pointers available in Boost and/or TR1 and/or C++0x.
Performance-wise, objects can be allocated on the stack very quickly (the stack size is increased per function call, so all the required memory has been allocated up front by a simple move of a pointer). Contrarily, dynamic allocation generally requires much more time. It's quite possible to get speedy dynamic allocations with custom allocation schemes, but even the best will still be slower than stack allocation.
Occasionally, you might find you spend too much time copying objects around. In this case, it may be worth it to dynamically allocate it and merely move pointers around. However, please note I said "find". This kind of change is something you find by profiling and measuring, never guessing.
So: Automatic allocation when possible, dynamic allocation when needed.
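For instance, a minimal sketch of the advice above (Widget is a hypothetical class; std::shared_ptr is the standardized version of the Boost/TR1 pointers mentioned):
#include <memory>

struct Widget {
    Widget(int a, int b) : a(a), b(b) {}
    int a, b;
};

int main() {
    // Automatic (stack) allocation: destroyed automatically at the end of the scope.
    Widget w(1, 2);

    // Dynamic allocation, but wrapped in a smart pointer, so the delete
    // happens automatically when the last owner goes out of scope.
    std::shared_ptr<Widget> p(new Widget(3, 4));

    // A raw, unowned 'new Widget(5, 6)' is what the answer warns against:
    // nothing would ever delete it.
    return 0;
}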
The first approach creates a local instance on the stack that goes away when the calling function exits. The second creates an instance that stays on the heap until (and if) you explicitly release it again. The choice depends on what kind of control and lifetime you want for your object.
The rule of thumb is: if it works without new, don't use new.
In general: you don't need to use new if you plan to delete the object in the same scope. If the object is quite large, you may want to use new.
You may want to look into the difference between heap and stack memory if you want to know the details.
First, ask yourself the question: does it make sense for the object to be copied when another function wants it?
If it makes sense to copy the object, your best bet is to create everything on the stack or as member variables and then just pass copies around when needed.
If it does not make sense to copy the object, you'll need to use the new form so that you can safely pass a pointer to the object. You have to use a pointer (or reference) because, as noted, it does not make sense to copy the object.
There are two exceptions I'm aware of:
If you know the object isn't going to be used after the current function is finished, you can create the object on the stack so that it is destroyed automatically. Just make very sure nobody holds on to a pointer to it afterwards! (I rarely find this is the case, but it happens.)
If the object is used internally by another class which itself shouldn't be copied around, you can just put it in as a member variable. Since the object it is in won't be copied, and since it's only for internal use, that will be safe.
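To illustrate the copy question above, here is a small sketch with two hypothetical classes: a cheap, copyable Point that is passed around by value, and a non-copyable Logger that is passed by reference instead:
#include <iostream>

struct Point {        // cheap and sensible to copy: pass it by value
    int x, y;
};

class Logger {        // does not make sense to copy: pass by reference (or pointer)
public:
    Logger() {}
    void log(const char* msg) { std::cout << msg << '\n'; }
private:
    Logger(const Logger&);             // copying disabled (pre-C++11 style)
    Logger& operator=(const Logger&);
};

void draw(Point p) { /* works on its own copy of p */ }
void doWork(Logger& log) { log.log("working"); }

int main() {
    Point pt = { 1, 2 };
    Logger logger;       // lives on the stack (or as a member variable), never copied
    draw(pt);
    doWork(logger);
    return 0;
}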
MyClass thing(param1, param2); // memory for thing is allocated on the stack (automatic allocation)
MyClass* thing;
thing = new MyClass(param1, param2); // memory is allocated dynamically on the heap (free store) for thing
The difference lies here:
int main()
{
    {
        MyClass thing(param1, param2); // thing is local to the scope
    } // destructor called for thing
    // cannot access thing (thing doesn't exist)
}

int main()
{
    {
        MyClass* thing;
        thing = new MyClass(param1, param2);
    }
    // the object pointed to by thing still exists
    // memory leak
}
For large objects you must allocate memory dynamically (use new), because the process stack has a limited size.
Sometimes I see, in various C++ programs, objects declared and used like so:
object *obj = new object;
obj->action();
obj->moreAction();
//etc...
Is there any benefit of doing that, instead of simply doing:
object obj;
obj.action();
obj.moreAction();
//etc
Yes - you can store the pointer in a container or return it from the function and the object will not get destroyed when the pointer goes out of scope. Pointers are used:
to avoid unnecessary copying of objects,
to facilitate optional object creation,
for custom object lifetime management,
for creating complex graph-like structures,
for the combinations of the above.
This doesn't come for free - you need to destroy the object manually (delete) when you no longer need it and deciding when this moment comes is not always easy, plus you might just forget to code it.
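A small sketch of the "return it from the function" case listed above, with a hypothetical Shape class; note that the caller has to remember the delete:
class Shape {
public:
    void draw() {}
};

Shape* makeShape() {
    return new Shape();   // the object outlives this function's scope
}

int main() {
    Shape* s = makeShape();
    s->draw();
    delete s;             // forgetting this line is exactly the leak the answer warns about
    return 0;
}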
The first form, allocating objects on the heap, gives you full control of (and full responsibility for) the object's lifetime: you have to delete obj explicitly.
In the second form, the object is automatically and irrevocably destroyed when obj goes out of scope (when leaving the current code block).
One other reason no-one has mentioned.
The stack is typically 1 MB, so creating large objects must be done on the heap (with new).
Basically, you should only use "new" if you want an object to live beyond the lifetime of the scope you create it in. For eg:
X* g(int i) { /* ... */ return new X(i); } // the X outlives the call of g()
If you want an object to live in a scope only, don't use "new" but simply define a variable:
{
    ClassName x;
    // use x
}
It comes down to having more control over the lifecycle of the object in question (when using new).
Yes, there is a good reason: you have a much better chance of writing a correct program when using the latter form.
The problem is that the former form (the raw pointer) is a C-ism; in C++ you should use a smart pointer to ensure proper destruction of the object at the end of its life.
Now, if you use std::auto_ptr<Object> obj(new Object()); you have 3 benefits:
you now manage the life cycle of your object explicitly (and cannot forget to clean it up)
you can store your object in a container and still get polymorphism
you do not clog the stack, and thus have less risk of running into a stack overflow
One can ask the opposite question: when should you use the strange first option? Basically, when you need to allocate a big object, because if you don't have to, putting it on the stack is the much faster option. This is one of the main advantages of using C++ over Java, which puts all objects on the heap, and the benefit is especially noticeable when dealing with many, many allocations of little objects: put them on the stack to increase speed. There is also a cost overhead to dereferencing a pointer. You can find info here about the Boost Pool library, which provides tools to manage such allocations.
I am writing a template class that takes as an input a pointer and stores it. The pointer is meant to point to an object allocated by another class and handed to this containing class.
Now I want to create a destructor for this container. How should I free the memory pointed to by this pointer? I have no way of knowing a priori whether it is an array or a single element.
I'm sort of new to C++, so bear with me. I've always used C, and Java is my OO language of choice, but between wanting to learn C++ and the speed requirements of my project, I've gone with C++.
Would it be a better idea to change the container from a template to a container for an abstract class that can implement its own destructor?
If you don't know whether it was allocated with new or new[], then it is not safe to delete it.
Your code may appear to work. For example, on one platform I work on, the difference only matters when you have an array of objects that have destructors. So, you do this:
// by luck, this works on my preferred platform
// don't do this - just an example of why your code seems to work
int *ints = new int[20];
delete ints;
but then you do this:
// crashes on my platform
std::string *strings = new std::string[10];
delete strings;
You must document how this class expects to be used, and always allocate as expected. You can also pass a flag to the object specifying how it should destroy the pointer. Also look at Boost's smart pointers, which can handle this distinction for you.
Short answer:
If you use [] with new you want to use [] with delete.
//allocate some memory
myObject* m = new myObject[100];
//later on...destructor...
delete m; //wrong
delete[] m; //correct
That was the bare bones; the other thing you could look at is Boost. It's also quite difficult to answer, considering you are not sure if it's an array or a single object. You could handle this, though, via a flag telling your app whether to use delete or delete[].
As a general development rule, you should stick to a design where the class that calls new also calls delete.
You shouldn't delete it at all. If your class takes an already initialized pointer, it is not safe to delete it. It might not even point to an object on the heap; calling either delete or delete[] could be disastrous.
The allocation and deallocation of memory should happen in the same scope. Whichever code owns and initializes the instance of your class is also presumably responsible for initializing and passing in the pointer, and that is where your delete should be.
Use delete if you allocated with new.
Use delete[] if you allocated with new[].
After these statements, if you still have a problem (maybe you want to delete an object that was created by someone else), then you are breaking the third rule:
Always delete what you created. Corollary, never delete what you did not create.
(Moving my comment into an answer, by request.)
JonH's answer is right (about using array destruction only when you used array construction), so perhaps you should offer templates: one for arrays, one not.
The other answer is to avoid arrays and instead expect a single instance that may or may not be a proper collection that cleans up after itself, such as vector<>.
Edit:
Stealing blatantly from Roger Pate, I'll add that you could require the use of a smart pointer, which amounts to a single-item collection.
If you have a class that takes a pointer it's going to assume ownership of, then the contract for the use of the class needs to include one of a couple of things. Either:
the interface needs to indicate how the object the pointer is pointing to was allocated so the new owner can know how to safely deallocate the object. This option has the advantage of keeping things simple (on one level anyway), but it's not flexible - the class can't handle taking ownership of static objects as well as dynamically allocated objects.
or
the interface needs to include a mechanism where a deallocation policy can be specified by whatever is giving the pointer to the class. This can be as simple as providing a mechanism to pass in a functor (or even a plain old function pointer) that will be called to deallocate the object (preferably in the same function/constructor that passes in the pointer itself). This makes the class arguably more complicated to use (but having a default policy of calling delete on the pointer, for example, might make it as easy to use as option 1 for the majority of uses). Now if someone wants to give the class a pointer to a statically allocated object, they can pass in a no-op functor so nothing happens when the class wants to deallocate it, or a functor that performs a delete[] operation if the object was allocated by new[], etc.
Since a pointer in C++ does not tell us how it was allocated, yes, there's no way to decide which deallocation method to use. The solution is to give the choice to the user, who hopefully knows how the memory was allocated. Take a look at the Boost smart pointer library, especially at the shared_ptr constructor that takes a second parameter (a custom deleter), for a great example.
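For example, a sketch of that custom-deleter idea (shown here with std::shared_ptr, which takes the same kind of second constructor argument as the Boost version; Resource is a hypothetical type):
#include <cstdlib>
#include <memory>

struct Resource { int value; };

// The deleter is chosen by whoever knows how the memory was allocated.
void destroySingle(Resource* p) { delete p; }
void destroyArray(Resource* p)  { delete[] p; }
void destroyMalloc(Resource* p) { std::free(p); }

int main() {
    std::shared_ptr<Resource> a(new Resource(), destroySingle);
    std::shared_ptr<Resource> b(new Resource[5], destroyArray);
    std::shared_ptr<Resource> c(static_cast<Resource*>(std::malloc(sizeof(Resource))),
                                destroyMalloc);
    // Each pointer is destroyed with the policy it was given; the code holding
    // the pointer never needs to know how the object was allocated.
    return 0;
}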
A smart pointer like boost::shared_ptr already has this covered; could you use it?
Put simply, given only a pointer to dynamically allocated memory there is no way of determining how to de-allocate it safely. The pointer could have been allocated in any of the following ways:
using new
using new []
using malloc
using a user defined function
etc.
In all cases before you can deallocate the memory you have to know how it was allocated.
While learning different languages, I've often seen objects allocated on the fly, most often in Java and C#, like this:
functionCall(new className(initializers));
I understand that this is perfectly legal in memory-managed languages, but can this technique be used in C++ without causing a memory leak?
Your code is valid (assuming functionCall() actually guarantees that the pointer gets deleted), but it's fragile and will make alarm bells go off in the heads of most C++ programmers.
There are multiple problems with your code:
First and foremost, who owns the pointer? Who is responsible for freeing it? The calling code can't do it, because you don't store the pointer. That means the called function must do it, but that's not clear to someone looking at that function. Similarly, if I call the code from somewhere else, I certainly don't expect the function to call delete on the pointer I passed to it!
If we make your example slightly more complex, it can leak memory, even if the called function calls delete. Say it looks like this: functionCall(new className(initializers), new className(initializers)); Imagine that the first one is allocated successfully, but the second one throws an exception (maybe it's out of memory, or maybe the class constructor threw an exception). functionCall never gets called then, and can't free the memory.
The simple (but still messy) solution is to allocate memory first, and store the pointer, and then free it in the same scope as it was declared (so the calling function owns the memory):
className* p = new className(initializers);
functionCall(p);
delete p;
But this is still a mess. What if functionCall throws an exception? Then p won't be deleted. Unless we add a try/catch around the whole thing, but sheesh, that's messy.
What if the function gets a bit more complex, and may return after functionCall but before delete? Whoops, memory leak. Impossible to maintain. Bad code.
So one of the nice solutions is to use a smart pointer:
boost::shared_ptr<className> p = boost::shared_ptr<className>(new className(initializers));
functionCall(p);
Now ownership of the memory is dealt with. The shared_ptr owns the memory, and guarantees that it'll get freed. We could use std::auto_ptr instead, of course, but shared_ptr implements the semantics you'd usually expect.
Note that I still allocated the memory on a separate line, because the problem with making multiple allocations on the same line as you make the function call still exists. One of them may still throw, and then you've leaked memory.
Smart pointers are generally the absolute minimum you need to handle memory management.
But often, the nice solution is to write your own RAII class.
className should be allocated on the stack and, in its constructor, make whatever allocations with new are necessary. And in its destructor, it should free that memory. This way, you're guaranteed that no memory leaks will occur, and you can make the function call as simple as this:
functionCall(className(initializers));
The C++ standard library works like this. std::vector is one example. You'd never allocate a vector with new. You allocate it on the stack, and let it deal with its memory allocations internally.
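For example (a minimal sketch of the std::vector point above):
#include <vector>

int main() {
    std::vector<int> numbers;     // the vector object itself lives on the stack
    for (int i = 0; i < 1000; ++i)
        numbers.push_back(i);     // the elements live on the heap, managed by the vector
    return 0;
}   // no delete needed: the vector's destructor frees its heap memory here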
Yes, as long as you deallocate the memory inside the function. But by no means is this a best practice for C++.
It depends.
This passes "ownership" of the memory to functionCAll(). It will either need to free the object or save the pointer so that it can be freed later. Passing the ownership of raw pointers like this is one of the easiest ways to build memory issues into your code -- either leaks or double deletes.
In C++ we would not create the memory dynamically like that.
Instead you would create a temporary stack object.
You only need to create a heap object via new if you want the lifetime of the object to be greater than the call to the function. In this case you can use new in conjunction with a smart pointer (see other answers for an example).
// No need for new or memory management just do this
functionCall(className(initializers));
// This assumes you can change the functionCall to something like this.
functionCall(className const& param)
{
<< Do Stuff >>
}
If you want to pass a non const reference then do it like this:
className tmp(initializers);
functionCall(tmp);
functionCall(className& param)
{
<< Do Stuff >>
}
It is safe if the function that you are calling has acceptance-of-ownership semantics. I don't recall a time where I needed this, so I would consider it unusual.
If the function works this way, it should take its argument as a smart pointer object so that the intent is clear; i.e.
void functionCall(std::auto_ptr<className> ptr);
rather than
void functionCall(className* ptr);
This makes the transfer of ownership explicit, and the calling function will dispose of the memory pointed to by ptr when execution of the function falls out of scope.
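For example, a sketch of the call site for that ownership-taking signature (className and initializers are the asker's hypothetical names; note that std::auto_ptr was later deprecated in favour of std::unique_ptr):
#include <memory>

class className {
public:
    explicit className(int /*initializers*/) {}
};

void functionCall(std::auto_ptr<className> ptr) {
    // ptr owns the object; it is deleted automatically when ptr goes out of scope
}

int main() {
    std::auto_ptr<className> p(new className(42));
    functionCall(p);   // ownership transfers into the function; p is now null
    return 0;
}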
This will work for objects created on the stack, but not a regular pointer in C++.
An auto pointer may be able to handle it, but I haven't messed with them enough to know.
In general, no, unless you want to leak memory. In fact, in most cases, this won't work, since the result of
new T();
in C++ is a T*, not a T (in C#, new T() returns a T).
Have a look at Smart Pointers or A garbage collector for C and C++.
In C++, when is it best to use the stack? When is it best to use the heap?
Use the stack when your variable will not be used after the current function returns. Use the heap when the data in the variable is needed beyond the lifetime of the current function.
As a rule of thumb, avoid creating huge objects on the stack.
Creating an object on the stack frees you from the burden of remembering to clean up (read: delete) the object. But creating too many objects on the stack will increase the chances of stack overflow.
If you use the heap for the object, you get as much memory as the OS can provide, much larger than the stack, but then again you must make sure to free the memory when you are done. Also, creating too many objects too frequently on the heap will tend to fragment the memory, which in turn will affect the performance of your application.
Use the stack when the memory being used is strictly limited to the scope in which you are creating it. This is useful to avoid memory leaks because you know exactly where you want to use the memory, and you know when you no longer need it, so the memory will be cleaned up for you.
int main()
{
    if (...)
    {
        int i = 0;
    }
    // I know that i is no longer needed here, so declaring i in the above block
    // limits the scope appropriately
}
The heap, however, is useful when your memory may be accessed outside of the scope of its creation and you do not wish to copy a stack variable. This can give you explicit control over how memory is allocated and deallocated.
Object* CreateObject();

int main()
{
    Object* obj = CreateObject();
    // I can continue to manipulate object and I decide when I'm done with it
    // ..
    // I'm done
    delete obj;
    // .. keep going if you wish
    return 0;
}

Object* CreateObject()
{
    Object* returnValue = new Object();
    // ... do a bunch of stuff to returnValue
    return returnValue;
    // Note the object created via new here doesn't go away; it's passed back using
    // a pointer
}
Obviously a common problem here is that you may forget to delete your object. This is called a memory leak. These problems become more prevalent as your program grows less trivial and "ownership" (who exactly is responsible for deleting things) becomes more difficult to define.
Common solutions in more managed languages (C#, Java) are to implement garbage collection so you don't have to think about deleting things. However, this means there's something in the background that runs aperiodically to check on your heap data. In a non-trivial program, this can become rather inefficient as a "garbage collection" thread pops up and chugs away, looking for data that should be deleted, while the rest of your program is blocked from executing.
In C++, the most common, and best (in my opinion), solution to dealing with memory leaks is to use a smart pointer. The most common of these is boost::shared_ptr, which is reference counted.
So to recreate the example above
boost::shared_ptr<Object> CreateObject();

int main()
{
    boost::shared_ptr<Object> obj = CreateObject();
    // I can continue to manipulate object and I decide when I'm done with it
    // ..
    // I'm done, manually release my reference
    obj.reset();
    // .. keep going if you wish
    // here, even if you forget to reset obj, the shared_ptr's destructor will note
    // that no other shared_ptrs point to this memory
    // and it will automatically get deleted.
    return 0;
}

boost::shared_ptr<Object> CreateObject()
{
    boost::shared_ptr<Object> returnValue(new Object());
    // ... do a bunch of stuff to returnValue
    return returnValue;
    // Note the object created via new here doesn't go away; it's passed back to
    // the receiving shared_ptr, which knows that another reference exists
    // to this memory, so it shouldn't delete the memory
}
An exception to the rule mentioned above that you should generally use the stack for local variables that are not needed outside the scope of the function:
Recursive functions can exhaust the stack space if they allocate large local variables or if they are recursively invoked many times. If you have a recursive function that utilizes memory, it might be a good idea to use heap-based memory instead of stack-based memory.
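A sketch of that caveat, contrasting a large stack buffer with a heap-based one (the names and sizes here are made up for illustration):
#include <vector>

void riskyRecurse(int depth) {
    char buffer[64 * 1024];              // 64 KB on the stack per call: deep recursion can overflow
    buffer[0] = 0;
    if (depth > 0) riskyRecurse(depth - 1);
}

void saferRecurse(int depth) {
    std::vector<char> buffer(64 * 1024); // the 64 KB lives on the heap; only a small object is on the stack
    buffer[0] = 0;
    if (depth > 0) saferRecurse(depth - 1);
}

int main() {
    saferRecurse(100);   // fine even for deep recursion
    riskyRecurse(3);     // fine at shallow depth, dangerous when depth grows
    return 0;
}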
As a rule of thumb, use the stack whenever you can, i.e. when the variable is never needed outside of that scope.
It's faster, causes less fragmentation, and avoids the other overheads associated with calling malloc or new. Allocating off the stack is a couple of assembler operations; malloc or new is several hundred lines of code in an efficient implementation.
It's never best to use the heap... just unavoidable. :)
Use the heap for only allocating space for objects at runtime. If you know the size at compile time, use the stack. Instead of returning heap-allocated objects from a function, pass a buffer into the function for it to write to. That way the buffer can be allocated where the function is called as an array or other stack-based structure.
The fewer malloc() statements you have, the fewer chances for memory leaks.
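A small sketch of the "pass a buffer in" suggestion above (the function and buffer names are made up):
#include <cstddef>

// Instead of returning a heap-allocated array, fill a buffer the caller owns.
void fillSquares(int* buffer, std::size_t count) {
    for (std::size_t i = 0; i < count; ++i)
        buffer[i] = static_cast<int>(i * i);
}

int main() {
    int squares[16];           // allocated on the stack at the call site
    fillSquares(squares, 16);
    return 0;
}   // nothing to free: the array goes away when main returns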
There are situations where you need the stack, others where you need the heap, others where you need the static storage, others where you need the const memory data, others where you need the free store.
The stack is fast, because allocation is just an "increment" of the SP, and all the "allocation" is performed at invocation time of the function you are in. Heap (or free store) allocation/deallocation is more expensive in time and more error-prone.
I've been using C++ for a short while, and I've been wondering about the new keyword. Simply, should I be using it, or not?
With the new keyword...
MyClass* myClass = new MyClass();
myClass->MyField = "Hello world!";
Without the new keyword...
MyClass myClass;
myClass.MyField = "Hello world!";
From an implementation perspective, they don't seem that different (but I'm sure they are)... However, my primary language is C#, and of course the 1st method is what I'm used to.
The difficulty seems to be that method 1 is harder to use with the std C++ classes.
Which method should I use?
Update 1:
I recently used the new keyword for heap memory (or free store) for a large array which was going out of scope (i.e. being returned from a function). Before, I was using the stack, which caused half of the elements to be corrupt outside of scope; switching to heap usage ensured that the elements were intact. Yay!
Update 2:
A friend of mine recently told me there's a simple rule for using the new keyword; every time you type new, type delete.
Foobar *foobar = new Foobar();
delete foobar; // TODO: Move this to the right place.
This helps to prevent memory leaks, as you always have to put the delete somewhere (i.e. you cut and paste it to a destructor or wherever else it belongs).
Method 1 (using new)
Allocates memory for the object on the free store (This is frequently the same thing as the heap)
Requires you to explicitly delete your object later. (If you don't delete it, you could create a memory leak)
Memory stays allocated until you delete it. (i.e. you could return an object that you created using new)
The example in the question will leak memory unless the pointer is deleted; and it should always be deleted, regardless of which control path is taken, or if exceptions are thrown.
Method 2 (not using new)
Allocates memory for the object on the stack (where all local variables go). There is generally less memory available for the stack; if you allocate too many objects, you risk stack overflow.
You won't need to delete it later.
Memory is no longer allocated when it goes out of scope. (i.e. you shouldn't return a pointer to an object on the stack)
As far as which one to use; you choose the method that works best for you, given the above constraints.
Some easy cases:
If you don't want to worry about calling delete (and the potential to cause memory leaks), you shouldn't use new.
If you'd like to return a pointer to your object from a function, you must use new.
There is an important difference between the two.
Everything not allocated with new behaves much like value types in C# (and people often say that those objects are allocated on the stack, which is probably the most common/obvious case, but not always true). More precisely, objects allocated without using new have automatic storage duration.
Everything allocated with new is allocated on the heap, and a pointer to it is returned, exactly like reference types in C#.
Anything allocated on the stack has to have a constant size, determined at compile-time (the compiler has to set the stack pointer correctly, or if the object is a member of another class, it has to adjust the size of that other class). That's why arrays in C# are reference types. They have to be, because with reference types, we can decide at runtime how much memory to ask for. And the same applies here. Only arrays with constant size (a size that can be determined at compile-time) can be allocated with automatic storage duration (on the stack). Dynamically sized arrays have to be allocated on the heap, by calling new.
(And that's where any similarity to C# stops)
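A quick sketch of the array point above (n here is just an arbitrary runtime value):
#include <vector>

int main(int argc, char**) {
    int fixed[10];               // size known at compile time: automatic storage is fine
    fixed[0] = argc;

    int n = argc * 100;          // size only known at run time
    int* dynamic = new int[n];   // has to come from the heap...
    dynamic[0] = argc;
    delete[] dynamic;            // ...and has to be freed manually

    std::vector<int> better(n);  // or let a container do the heap work for you
    better[0] = argc;
    return 0;
}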
Now, anything allocated on the stack has "automatic" storage duration (you can actually declare a variable as auto, but this is the default if no other storage type is specified, so the keyword isn't really used in practice, but this is where it comes from).
Automatic storage duration means exactly what it sounds like, the duration of the variable is handled automatically. By contrast, anything allocated on the heap has to be manually deleted by you.
Here's an example:
void foo() {
    bar b;
    bar* b2 = new bar();
}
This function creates three values worth considering:
On line 1, it declares a variable b of type bar on the stack (automatic duration).
On line 2, it declares a bar pointer b2 on the stack (automatic duration), and calls new, allocating a bar object on the heap. (dynamic duration)
When the function returns, the following will happen:
First, b2 goes out of scope (order of destruction is always the opposite of the order of construction). But b2 is just a pointer, so nothing special happens; the memory it occupies is simply freed. And importantly, the memory it points to (the bar instance on the heap) is NOT touched. Only the pointer is freed, because only the pointer had automatic duration.
Second, b goes out of scope, so since it has automatic duration, its destructor is called, and the memory is freed.
And the bar instance on the heap? It's probably still there. No one bothered to delete it, so we've leaked memory.
From this example, we can see that anything with automatic duration is guaranteed to have its destructor called when it goes out of scope. That's useful. But anything allocated on the heap lasts as long as we need it to, and can be dynamically sized, as in the case of arrays. That is also useful. We can use that to manage our memory allocations. What if a class allocated some memory on the heap in its constructor, and deleted that memory in its destructor? Then we could get the best of both worlds: safe memory allocations that are guaranteed to be freed again, but without the limitations of forcing everything to be on the stack.
And that is pretty much exactly how most C++ code works.
Look at the standard library's std::vector for example. That is typically allocated on the stack, but can be dynamically sized and resized. And it does this by internally allocating memory on the heap as necessary. The user of the class never sees this, so there's no chance of leaking memory, or forgetting to clean up what you allocated.
This principle is called RAII (Resource Acquisition is Initialization), and it can be extended to any resource that must be acquired and released. (network sockets, files, database connections, synchronization locks). All of them can be acquired in the constructor, and released in the destructor, so you're guaranteed that all resources you acquire will get freed again.
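A minimal sketch of such an RAII wrapper, here around a C FILE handle (one possible resource of the kind listed above):
#include <cstdio>

class File {
public:
    File(const char* path, const char* mode) : handle_(std::fopen(path, mode)) {}
    ~File() { if (handle_) std::fclose(handle_); }   // released in the destructor, always

    bool isOpen() const { return handle_ != 0; }
    std::FILE* get() const { return handle_; }

private:
    File(const File&);               // copying disabled so two objects never close the same handle
    File& operator=(const File&);
    std::FILE* handle_;
};

int main() {
    File f("example.txt", "r");      // acquired in the constructor
    if (f.isOpen()) {
        // ... read from f.get() ...
    }
    return 0;
}                                    // closed automatically here, even if an exception were thrown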
As a general rule, never use new/delete directly from your high level code. Always wrap it in a class that can manage the memory for you, and which will ensure it gets freed again. (Yes, there may be exceptions to this rule. In particular, smart pointers require you to call new directly, and pass the pointer to its constructor, which then takes over and ensures delete is called correctly. But this is still a very important rule of thumb)
The short answer is: if you're a beginner in C++, you should never be using new or delete yourself.
Instead, you should use smart pointers such as std::unique_ptr and std::make_unique (or less often, std::shared_ptr and std::make_shared). That way, you don't have to worry nearly as much about memory leaks. And even if you're more advanced, best practice would usually be to encapsulate the custom way you're using new and delete into a small class (such as a custom smart pointer) that is dedicated just to object lifecycle issues.
Of course, behind the scenes, these smart pointers are still performing dynamic allocation and deallocation, so code using them would still have the associated runtime overhead. Other answers here have covered these issues, and how to make design decisions on when to use smart pointers versus just creating objects on the stack or incorporating them as direct members of an object, well enough that I won't repeat them. But my executive summary would be: don't use smart pointers or dynamic allocation until something forces you to.
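A short sketch of that advice, assuming a hypothetical Engine class (std::make_unique requires C++14):
#include <memory>

struct Engine {
    explicit Engine(int power) : power(power) {}
    int power;
};

int main() {
    // No raw new/delete: the unique_ptr deletes the Engine when it goes out of scope.
    std::unique_ptr<Engine> e = std::make_unique<Engine>(300);

    // Shared ownership, for the rarer case where several owners genuinely need the same object.
    std::shared_ptr<Engine> s = std::make_shared<Engine>(150);
    return 0;
}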
Which method should I use?
This is almost never determined by your typing preferences but by the context. If you need to keep the object across a few stack frames, or if it's too heavy for the stack, you allocate it on the free store. Also, since you are allocating an object, you are also responsible for releasing the memory. Look up the delete operator.
To ease the burden of using free-store management people have invented stuff like auto_ptr and unique_ptr. I strongly recommend you take a look at these. They might even be of help to your typing issues ;-)
If you are writing in C++ you are probably writing for performance. Using new and the free store is much slower than using the stack (especially when using threads) so only use it when you need it.
As others have said, you need new when your object needs to live outside the function or object scope, the object is really large or when you don't know the size of an array at compile time.
Also, try to avoid ever using delete. Wrap your new into a smart pointer instead. Let the smart pointer call delete for you.
There are some cases where a smart pointer isn't smart. Never store std::auto_ptr<> inside an STL container. It will delete the pointer too soon because of copy operations inside the container. Another case is when you have a really large STL container of pointers to objects. boost::shared_ptr<> will have a ton of speed overhead as it bumps the reference counts up and down. The better way to go in that case is to put the STL container into another object and give that object a destructor that will call delete on every pointer in the container.
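A sketch of that last suggestion, with a hypothetical Item class and owning container:
#include <cstddef>
#include <vector>

class Item { /* presumably large or expensive to copy */ };

class ItemStore {
public:
    ItemStore() {}
    void add(Item* item) { items_.push_back(item); }   // takes ownership of the pointer
    ~ItemStore() {
        for (std::size_t i = 0; i < items_.size(); ++i)
            delete items_[i];                           // one delete per stored pointer
    }
private:
    ItemStore(const ItemStore&);             // copying disabled: only one owner of the pointers
    ItemStore& operator=(const ItemStore&);
    std::vector<Item*> items_;
};

int main() {
    ItemStore store;
    store.add(new Item());
    store.add(new Item());
    return 0;
}   // every Item is deleted here by ItemStore's destructor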
Without the new keyword you're storing that on the call stack. Storing excessively large variables on the stack will lead to stack overflow.
If your variable is used only within the context of a single function, you're better off using a stack variable, i.e., Option 2. As others have said, you do not have to manage the lifetime of stack variables - they are constructed and destructed automatically. Also, allocating/deallocating a variable on the heap is slow by comparison. If your function is called often enough, you'll see a tremendous performance improvement if you use stack variables versus heap variables.
That said, there are a couple of obvious instances where stack variables are insufficient.
If the stack variable has a large memory footprint, then you run the risk of overflowing the stack. By default, the stack size of each thread is 1 MB on Windows. It is unlikely that you'll create a stack variable that is 1 MB in size, but you have to keep in mind that stack utilization is cumulative. If your function calls a function which calls another function which calls another function which..., the stack variables in all of these functions take up space on the same stack. Recursive functions can run into this problem quickly, depending on how deep the recursion is. If this is a problem, you can increase the size of the stack (not recommended) or allocate the variable on the heap using the new operator (recommended).
The other, more likely condition is that your variable needs to "live" beyond the scope of your function. In this case, you'd allocate the variable on the heap so that it can be reached outside the scope of any given function.
The simple answer is yes - new creates an object on the heap (with the unfortunate side effect that you have to manage its lifetime by explicitly calling delete on it), whereas the second form creates an object on the stack in the current scope, and that object will be destroyed when it goes out of scope.
Are you passing myClass out of a function, or expecting it to exist outside that function? As some others said, it is all about scope when you aren't allocating on the heap. When you leave the function, it goes away (eventually). One of the classic mistakes made by beginners is the attempt to create a local object of some class in a function and return a pointer to it without allocating it on the heap. I can remember debugging this kind of thing back in my earlier days doing C++.
C++ Core Guidelines R.11: Avoid using new and delete explicitly.
Things have changed significantly since most answers to this question were written. Specifically, C++ has evolved as a language, and the standard library is now richer. Why does this matter? Because of a combination of two factors:
Using new and delete is potentially dangerous: memory might leak if you don't keep a very strong discipline of deleting everything you've allocated when it's no longer used, and never deleting what's not currently allocated.
The standard library now offers smart pointers which encapsulate the new and delete calls, so that you don't have to take care of managing allocations on the free store/heap yourself. So do other containers, in the standard library and elsewhere.
This has evolved into one of the C++ community's "core guidelines" for writing better C++ code, as the linked document shows. Of course, there are exceptions to this rule: somebody needs to write those encapsulating classes which do use new and delete; but that someone is rarely yourself.
Adding to #DanielSchepler's valid answer:
The second method creates the instance on the stack, along with such things as something declared int and the list of parameters that are passed into the function.
The first method makes room for a pointer on the stack, which you've set to the location in memory where a new MyClass has been allocated on the heap - or free store.
The first method also requires that you delete what you create with new, whereas in the second method, the class is automatically destructed and freed when it falls out of scope (the next closing brace, usually).
The short answer is yes - the "new" keyword is incredibly important, because when you use it the object data is stored on the heap as opposed to the stack, which is what matters most!