Can I exhaust stack? - c++

I know that by using operator new() I can exhaust memory, and I know how to protect myself against such a case, but can I exhaust memory by creating objects on the stack? And if yes, how can I check whether object creation was successful?
Thank you.

You can exhaust the stack. In that case, your program will typically crash immediately with a stack overflow.
The stack has a size too, so you can look at it as simply a block of memory. Variables inside functions, for example, are allocated there. Also, when you call a function, the call itself is stored on the stack (very simplified, I know). When a function returns, the information about the call is "deleted". So if you write an infinite recursion (as mentioned in another answer), the stack gets filled but never emptied, and at some point you will fill the whole space allocated for your program's stack and your app will crash.
Note that there are ways to determine and change the size of the stack.

Just look at the title of this site and you will see the answer.
Write some infinite recursion if you want to see "live" what happens.
i.e.
void fun() { fun(); }

Yes, you can exhaust the stack. On common systems, the hardware/OS traps that and aborts your program. However, it is hard to do so. You would have to either create huge objects on the stack (automatic arrays) or do deep recursion.
Note that, if you use common abstractions such as std::string, std::vector etc., you can hardly ever exhaust the stack, because while they live on the stack, they have their data on the heap. (This is true for all STL containers coming with the std lib except for std::tr1::array.)

Yes, see the site's name. You can't really check that the object creation is successful -- the program simply crashes on stack overflow.

Memory is not infinite, so wherever you allocate objects you will eventually run out of it.

Ok, but you'll need sharp reactions to spot when the 'object creation' succeeds.
class MyObject {
private:
    int x;
public:
    MyObject() { x = 0; }
};

void IWantToExhaustTheStack() {
    MyObject o;
    IWantToExhaustTheStack();
}

int main(int argc, char **argv) {
    IWantToExhaustTheStack();
    return 0;
}
Now compile and run this. Your object creation will work, for a very short while. You will know that object creation has failed when your program fails.
Joking aside, and in response to your updated question: there is no standard way to determine the stack size. See this Stack Overflow question in relation to Win32. The stack is used to make function calls and to hold local, temporary, and return variables. If you are allocating large objects on the stack, you really should be thinking of putting them on the heap.

Yes, you can exhaust the stack, and you cannot test whether object creation failed, because after the failure it is already too late.
Generally, the only way to protect against stack overflow is to design the application so that it cannot exceed a given limit. E.g. if a recursion processes an image, put a limit on the image size, or use a different algorithm for huge images.
Watch recursion (not too deep), watch alloca (not too much), and watch for peaks when examining stack usage.
In OpenSolaris there are a few functions that let you control the stack.

Related

Why can't you free variables on the stack?

The languages in question are C/C++.
My prof said to free memory on the heap when you're done using it, because otherwise you can end up with memory that can't be accessed. The problem with that is you might end up with all your memory used up while being unable to access any of it.
Why doesn't the same concept apply to the stack? I understand that you can always access the memory you used on the stack, but if you keep creating new variables, you will eventually run out of space right? So why can't you free variables on the stack to make room for new variables like you can on the heap?
I get that the compiler frees variables on the stack, but that's at the end of the variable's scope, right? Doesn't it also free a variable on the heap at the end of its scope? If not, why not?
Dynamically allocated objects ("heap objects" in colloquial language) are never variables. Thus, they can never go out of scope. They don't live inside any scope. The only way you can handle them is via a pointer that you get at the time of allocation.
(The pointer is usually assigned to a variable, but that doesn't help.)
To repeat: Variables have scope; objects don't. But many objects are variables.
And to answer the question: You can only free objects, not variables.
The closing "}" brace is where the stack "frees" its memory. So if I have:
{
    int a = 1;
    int b = 2;
    {
        int c = 3; // c gets "freed" at this "}" - the stack shrinks
                   // and c is no longer on the stack.
    }
} // a and b are "freed" from the stack at this last "}".
You can think of c as being "higher up" on the stack than "a" and "b", so c is getting popped off before them. Thus, every time you write a "}" symbol, you are effectively shrinking the stack and "freeing" data.
There are already nice answers, but I think you might need some more clarification, so I'll try to make this a more detailed answer while still keeping it simple (if I manage to). If something isn't clear (with me not being a native English speaker and sometimes having trouble formulating answers, that might be likely), just ask in the comments. I'm also going to use the Variables vs. Objects idea that Kerrek SB uses in his answer.
To make that clearer, I consider Variables to be named Objects, with an Object being something that stores data within your program.
Variables on the stack have automatic storage duration: they automatically get destroyed, and their memory reclaimed, once their scope ends.
{
    std::string first_words = "Hello World!";
    // do some stuff here...
} // first_words goes out of scope and the memory gets reclaimed.
In this case first_words is a Variable (since it got its own name) which means it is also an Object.
Now what about the heap? Let's describe what you might consider "something on the heap" as a Variable pointing to some memory location on the heap where an Object is located. These objects have what's called dynamic storage duration.
{
    std::string * dirty = nullptr;
    {
        std::string * ohh = new std::string{"I don't like this"}; // ohh is a std::string* and a Variable.
                                                                  // The actual std::string is only an unnamed
                                                                  // Object on the heap.
        // do something here
        dirty = ohh; // dirty points to the same memory location as ohh does now.
    } // ohh goes out of scope and gets destroyed since it is a Variable.
      // The actual std::string Object on the heap doesn't get destroyed.
    std::cout << *dirty << std::endl; // Will work, since the std::string on the heap
                                      // that dirty points to is still there.
    delete dirty;    // Now the object being pointed to gets destroyed and the memory reclaimed.
    dirty = nullptr; // We can still access dirty since it's still in its scope.
} // dirty goes out of scope and gets destroyed.
As you can see, objects don't adhere to scopes, and you have to manage their memory manually. That's also a reason why "most" people prefer to use "wrappers" around them. See for example std::string, which is a wrapper around a dynamic character array.
Now to clarify some of your questions:
Why can't we destroy objects on the stack?
Easy answer: Why would you want to?
Detailed answer: It would be destroyed by you and then destroyed again once it leaves the scope, which isn't allowed. Also, you should generally only have variables in your scope that you actually need for your computation, and how would you destroy a variable you still need to finish that computation? But if you really only need a variable for a small part of a computation, you can just open a new, smaller scope with { }, so the variable gets destroyed automatically once it isn't needed anymore.
Note: If you got a lot of variables that you only need for a small part of your computation it might be a hint that that part of the computation should be in its own function/scope.
From your comments: Yeah I get that, but thats at the end of the scope of the variable right. Doesn't it also free a variable on the heap at the end of its scope?
They don't. Objects on the heap have no scope: you can pass their address out of a function and they still persist. The pointer pointing to one can go out of scope and be destroyed, but the Object on the heap still exists, and now you can't access it anymore (a memory leak). That's also why it's called manual memory management, and why most people prefer wrappers that destroy the object automatically when it isn't needed anymore. See std::string and std::vector as examples.
From your comments: Also how can you run out of memory on a computer? An int takes up like 4 bytes, most computers have billions of bytes of memory... (excluding embedded systems)?
Well, computer programs don't always just hold a few ints. Let me just answer with a little "fake" quote:
640K [of computer memory] ought to be enough for anybody.
But that wasn't enough, as we all know. So how much memory is enough? I don't know, but certainly not what we have now. There are many algorithms, problems and other things that need tons of memory. Just think about something like computer games: could we make "bigger" games if we had more memory? You can always make something bigger with more resources, so I don't think there's any limit where we can say it's enough.
So why can't you free variables on the stack to make room for new variables like you can on the heap?
All the information the "stack allocator" has is ESP, a pointer to the current top of the stack:
N: used
N-1: used
N-2: used
N-3: used <- **ESP**
N-4: free
N-5: free
N-6: free
...
That makes "stack allocation" very efficient: just decrease ESP by the size of the allocation. Plus, it is locality/cache-friendly.
If you allowed arbitrary deallocations of different sizes, that would turn your "stack" into a "heap", with all the associated extra overhead. ESP alone would not be enough, because you would have to remember which space is deallocated and which is not:
N: used
N-1: free
N-2: free
N-3: used
N-4: free
N-5: used
N-6: free
...
Clearly, ESP is no longer enough. And you would also have to deal with fragmentation problems.
I get that the compiler frees variables on the stack, but thats at the end of the scope of the variable right. Doesn't it also free a variable on the heap at the end of its scope? If not, why not?
One of the reasons is that you don't always want that: sometimes you want to return allocated data to the caller of your function, and that data has to outlive the scope where it was created.
That said, if you really need scope-based lifetime management for "heap" allocated data (and most of time it is scope-based, indeed) - it is common practice in C++ to use wrappers around such data. One of examples is std::vector:
{
    std::vector<int> x(1024); // internally allocates an array of 1024 ints on the heap
    // use x
    // ...
} // at the end of the scope, x's destructor is called automatically,
  // which deallocates the array
Read up on function calls: each call pushes its arguments and a return address onto the stack, and the function pops them and eventually pushes its result.
In general, the stack is managed by the OS, and yes, it can be depleted. Just try doing something like this:
int main(int argc, char **argv)
{
    int table[1000000000]; // ~4 GB of automatic storage, far beyond any stack limit
    table[0] = 1;          // touch it so the compiler can't optimize the array away
    return table[0];
}
That should end quickly enough.
Local variables on the stack don't actually get freed. The registers pointing at the current stack frame are just moved back up, and the stack "forgets" about them. And yes, you can occupy so much stack space that it overflows and the program crashes.
Variables on the heap do get freed automatically, but only by the operating system when the program exits. If you do

int x;
for (x = 0; x <= 99999999; x++) {
    int* a = malloc(sizeof(int));
}

the value of a keeps getting overwritten, and the addresses of the heap blocks a pointed to are lost. That memory is NOT freed, because the program hasn't exited. This is called a "memory leak". Eventually you will use up all the memory on the heap, and allocations will start to fail.
The heap is managed by code: Deleting a heap allocation is done by calling the heap manager. The stack is managed by hardware. There is no manager to call.

Can stack memory be allocated within a function automatically?

I'm sorry if this has been asked before, but I didn't find anything...
For a "normal" x86 architecture:
When I call a large function in C++, is the memory then allocated immediately for all stack variables?
Or are there compilers which can (and do) modify the stack size even if the function is not finished.
For example if a new scope starts:
int largeFunction() {
    int a = 1;
    int b = 2;
    // .... long code ....
    { // new scope
        int c = 5;
        // .... code again ....
    }
    // .....
    return a + b;
}
Could the call stack "grow" also for the variable c at the beginning of the separate scope and "shrink" at its end?
Or will current compilers always produce code that adjusts the stack pointer only at the entry and exit of the function?
Thanks for your answer in advance.
1) How long a function is has nothing to do with the allocation of memory, whether on the stack or the heap.
2) When stack space is "allocated" depends only on the compiler's way of generating the most efficient code. "Efficient" covers a wide range of goals: all compilers have options to tune the optimizer for speed or size, and most can also optimize for lower stack consumption, among other parameters.
3) Automatic variables can go on the stack, but they don't have to. Many variables are better "allocated" to CPU registers, which speeds up the code a lot and saves stack space. How much of this happens depends very much on the CPU platform.
4) When a compiler sets up a new stack frame is also a question of code optimization. Compilers can reorder operations if that saves resources or fits the architecture better, so the question of exactly when a stack frame comes into use cannot be answered in general. A new scope (an opening brace) can be the point where a new stack frame is allocated, but that is never guaranteed; sometimes it is not efficient to recalculate all the stack-relative addresses used by functions called from the current scope.
5) Some compilers can even use heap memory for automatic variables. This is sometimes seen on embedded cores, where access via special instructions is faster than stack-relative addressing.
But normally it isn't very important exactly when the compiler does what. The one thing to remember is that you have to guarantee that your stack is large enough. System calls that create new threads often take a parameter for the stack size, so you need to know how much stack your implementation needs. In all other cases: don't worry about it. That job is done well by your compiler's developers.
I don't know the answer (and I hope you only want to know because you're curious, as no valid program should be able to tell the difference), but you could test the behavior of your compiler by calling a function like this before the new scope and again after it:
std::intptr_t stackaddr()
{
    int i;
    return reinterpret_cast<std::intptr_t>(&i); // needs #include <cstdint>
}
If you get the same result then it means the stack was already adjusted in advance of creating c.
There was a change in G++ 4.7 that allows the compiler to reuse the stack space of c after its scope ends; previously, any new variables after that point would have increased the stack usage: "G++ now properly re-uses stack space allocated for temporary objects when their lifetime ends, which can significantly lower stack consumption for some C++ functions." But I think that only affects how much stack is reserved on entry to the function, not when or where it's reserved.
This is entirely dependent on the runtime conventions of the system you are using; however, the CPU architecture usually plays a big part in the decision, because the architecture defines what stack management can safely be used. On the old PowerPCs under Mac OS X, for instance, stack frames were always of fixed size; a single atomic store of the stack pointer at the low end of a new stack frame would allocate it, and dereferencing the stack pointer was equivalent to popping an entire stack frame.
Current systems like Linux and (correct me if I'm wrong) Windows on x86 have a more dynamic approach, with atomic push and pop instructions (there is no atomic pop on PowerPC), where the parameters to a function call are pushed onto the stack before each call, effectively resizing the allocated stack frame each time.
So, yes, on many current systems the compiler can resize the stack frame, but on other systems such an operation is at least hard to accomplish (though never impossible).

main function stack size

The maximum function stack size is limited and can be quickly exhausted if we use big stack variables or get careless with recursive functions.
But main's stack isn't really a stack, is it? main is always called exactly once and never recursively. In effect, main's frame is like static storage, allocated at the very beginning and alive until the very end. Does that mean I can allocate big arrays in main's stack frame?
int main()
{
    double a[5000000];
}
main is just a normal function. Stack size is system-dependent.
Also remember that your process has only one stack (per thread), shared by all function calls. Items are pushed onto and popped from the stack as functions are called, starting with main.
It's implementation-defined (the language standard doesn't talk about stacks, AFAIK). But typically, main lives on the stack just like any other function.
It's 100% compiler and system dependent, like most of this kind of funny business. Heck, even the existence of the stack isn't mandated by the standard.
In practice, yes, it's on the stack, and no, you can't allocate things like that on the stack without running into trouble.
When you allocate an array in that manner, it is allocated on the stack. There is a platform-dependent maximum size the stack can grow to. And yes, you've exceeded it.
On second thought, I just remembered: main can be called recursively. Check out this obfuscated code:
http://en.wikipedia.org/wiki/Obfuscated_code
It calls main many times and works wonders :) It's a fun link anyway. So yes, it's definitely stack allocated; sorry about that!
The stack is something that is used by all functions; the way you've worded your question suggests that each function is given its own stack, which is not the case.
Stack usage grows with each function call, main() being the first. The allocation in your example is just as bad as making the same stack allocation in any other function.
For most modern systems, there is no real reason the stack size needs to be limited. You can probably adjust an operating system parameter and that program will work fine. (As will any that allocates an equal amount of data on the stack, main or not.)
However, if you really want an object with a lifetime equal to the duration of the program, create a global variable instead of a local inside main. Most platforms do not artificially limit the size of global objects; they can usually be as large as the memory map allows.
By the way, main is not active for the duration of a C++ program. It may be preceded by construction of global objects and followed by destruction of same and atexit handlers.

Proper stack and heap usage in C++?

I've been programming for a while, but it's been mostly Java and C#. I've never actually had to manage memory on my own. I recently began programming in C++, and I'm a little confused as to when I should store things on the stack and when to store them on the heap.
My understanding is that variables which are accessed very frequently should be stored on the stack and objects, rarely used variables, and large data structures should all be stored on the heap. Is this correct or am I incorrect?
No, the difference between stack and heap isn't performance. It's lifetime: any local variable inside a function (anything you do not malloc() or new) lives on the stack. It goes away when you return from the function. If you want something to live longer than the function that declared it, you must allocate it on the heap.
struct Thingy { /* ... */ }; // must be a complete type so we can put a Thingy on the stack

Thingy* foo( )
{
    int a;                             // this int lives on the stack
    Thingy B;                          // this Thingy lives on the stack and is destroyed when we return from foo
    Thingy *pointerToB = &B;           // this points to an address on the stack
    Thingy *pointerToC = new Thingy(); // this makes a Thingy on the heap;
                                       // pointerToC contains its address

    // this is safe: C lives on the heap and outlives foo().
    // Whoever you pass this to must remember to delete it!
    return pointerToC;

    // this is NOT SAFE: B lives on the stack and is destroyed when foo() returns.
    // Whoever used this returned pointer would probably cause a crash!
    // return pointerToB;
}
For a clearer understanding of what the stack is, come at it from the other end -- rather than try to understand what the stack does in terms of a high level language, look up "call stack" and "calling convention" and see what the machine really does when you call a function. Computer memory is just a series of addresses; "heap" and "stack" are inventions of the compiler.
I would say:
Store it on the stack, if you CAN.
Store it on the heap, if you NEED TO.
Therefore, prefer the stack to the heap. Some possible reasons that you can't store something on the stack are:
It's too big: in multithreaded programs on a 32-bit OS, the stack has a small size that is fixed (at thread-creation time, at least), typically just a few megs. This is so that you can create lots of threads without exhausting address space. For 64-bit programs, or single-threaded (Linux, anyway) programs, this is not a major issue. Under 32-bit Linux, single-threaded programs usually use dynamic stacks which can keep growing until they reach the top of the heap.
You need to access it outside the scope of the original stack frame - this is really the main reason.
It is possible, with sensible compilers, to allocate non-fixed-size objects on the stack (usually arrays whose size is not known at compile time).
It's more subtle than the other answers suggest. There is no absolute divide between data on the stack and data on the heap based on how you declare it. For example:
std::vector<int> v(10);
In the body of a function, that declares a vector (dynamic array) of ten integers on the stack. But the storage managed by the vector is not on the stack.
Ah, but (the other answers suggest) the lifetime of that storage is bounded by the lifetime of the vector itself, which here is stack-based, so it makes no difference how it's implemented - we can only treat it as a stack-based object with value semantics.
Not so. Suppose the function was:
void GetSomeNumbers(std::vector<int> &result)
{
std::vector<int> v(10);
// fill v with numbers
result.swap(v);
}
So anything with a swap function (and any complex value type should have one) can serve as a kind of rebindable reference to some heap data, under a system which guarantees a single owner of that data.
Therefore the modern C++ approach is to never store the address of heap data in naked local pointer variables. All heap allocations must be hidden inside classes.
If you do that, you can think of all variables in your program as if they were simple value types, and forget about the heap altogether (except when writing a new value-like wrapper class for some heap data, which ought to be unusual).
You merely have to retain one special bit of knowledge to help you optimise: where possible, instead of assigning one variable to another like this:
a = b;
swap them like this:
a.swap(b);
because it's much faster and it doesn't throw exceptions. The only requirement is that you don't need b to continue to hold the same value (it's going to get a's value instead, which would be trashed in a = b).
The downside is that this approach forces you to return values from functions via output parameters instead of the actual return value. But they're fixing that in C++0x with rvalue references.
In the most complicated situations of all, you would take this idea to the general extreme and use a smart pointer class such as shared_ptr which is already in tr1. (Although I'd argue that if you seem to need it, you've possibly moved outside Standard C++'s sweet spot of applicability.)
You also would store an item on the heap if it needs to be used outside the scope of the function in which it is created. One idiom used with stack objects is called RAII - this involves using the stack based object as a wrapper for a resource, when the object is destroyed, the resource would be cleaned up. Stack based objects are easier to keep track of when you might be throwing exceptions - you don't need to concern yourself with deleting a heap based object in an exception handler. This is why raw pointers are not normally used in modern C++, you would use a smart pointer which can be a stack based wrapper for a raw pointer to a heap based object.
To add to the other answers, it can also be about performance, at least a little bit. Not that you should worry about this unless it's relevant for you, but:
Allocating on the heap requires finding and tracking a block of memory, which is not a constant-time operation (and takes some cycles and overhead). This can get slower as memory becomes fragmented and/or as you get close to using 100% of your address space. On the other hand, stack allocations are constant-time, basically "free" operations.
Another thing to consider (again, really only important if it becomes an issue) is that the stack size is typically fixed, and can be much lower than the heap size. So if you're allocating large objects or many small objects, you probably want to use the heap; if you run out of stack space, the runtime will throw this site's titular exception. Not usually a big deal, but another thing to consider.
The stack is more efficient, and it makes managing scoped data easier.
But the heap should be used for anything larger than a few KB (that's easy in C++: just create a boost::scoped_ptr on the stack to hold a pointer to the allocated memory).
Consider a recursive algorithm that keeps calling into itself. It's very hard to limit or even guess the total stack usage! Whereas on the heap, the allocator (malloc() or new) can indicate out-of-memory by returning NULL or throwing.
Source: Linux Kernel whose stack is no larger than 8KB!
For completeness, you may read Miro Samek's article about the problems of using the heap in the context of embedded software.
A Heap of Problems
The choice of whether to allocate on the heap or on the stack is made for you, depending on how your variable is allocated. If you allocate something dynamically, using a "new" call, you are allocating from the heap. If you declare something as a local variable or a function parameter, it is allocated on the stack. (Global variables are neither: they live in static storage.)
In my opinion there are two deciding factors:
1) Scope of the variable
2) Performance
I would prefer to use the stack in most cases, but if you need access to a variable outside its scope, you can use the heap.
To enhance performance while using the heap, you can also create your own heap block (a memory pool) and allocate from it, which can help performance compared to allocating each variable at a separate memory location.
This has probably been answered quite well already. For a deeper understanding of the low-level details, I would like to point you to the series of articles below, in which Alex Darby walks you through things with a debugger. Here is Part 3, about the stack:
http://www.altdevblogaday.com/2011/12/14/c-c-low-level-curriculum-part-3-the-stack/

Why should/shouldn't I use the "new" operator to instantiate a class, and why?

I understand that this may be construed as one of those "what's your preference" questions, but I really want to know why you would choose one of the following methods over the other.
Suppose you had a super complex class, such as:
class CDoSomthing {
public:
    CDoSomthing(char *sUserName, char *sPassword)
    {
        //Do somthing...
    }

    ~CDoSomthing()
    {
        //Do somthing...
    }
};
How should I declare a local instance within a global function?
int main(void)
{
    CDoSomthing *pDoSomthing = new CDoSomthing("UserName", "Password");
    //Do somthing...
    delete pDoSomthing;
    return 0;
}
-- or --
int main(void)
{
    CDoSomthing DoSomthing("UserName", "Password");
    //Do somthing...
    return 0;
}
Prefer local variables, unless you need the object's lifetime to extend beyond the current block. (Local variables are the second option). It's just easier than worrying about memory management.
P.S. If you need a pointer, because you need it to pass to another function, just use the address-of operator:
SomeFunction(&DoSomthing);
There are two main considerations when you declare a variable on the stack vs. in the heap - lifetime control and resource management.
Allocating on the stack works really well when you have tight control over the lifetime of the object. That means you are not going to pass a pointer or a reference to that object to code outside the scope of the local function: no out parameters, no COM calls, no new threads. Quite a lot of limitations, but you get the object cleaned up properly for you on normal or exceptional exit from the current scope (though you might want to read up on stack-unwinding rules with virtual destructors). The biggest drawback of stack allocation: the stack is limited in size (by default on the order of 1 MB on Windows and 8 MB on Linux), so you should be careful what you put on it.
Allocating on the heap, on the other hand, requires you to clean up the instance manually. It also gives you a lot of freedom in how you control the lifetime of the instance. You need the heap in two scenarios: a) you are going to pass the object out of scope; or b) the object is too big, and allocating it on the stack could cause a stack overflow.
BTW, a nice compromise between these two is allocating the object on the heap and allocating a smart pointer to it on the stack. This ensures that you are not wasting precious stack memory, while still getting the automatic cleanup on scope exit.
The second form is the so called RAII (Resource Acquisition Is Initialization) pattern. It has many advantages over the first.
When you use new, you have to use delete yourself, and you must guarantee the object will always be deleted, even if an exception is thrown. You must guarantee all of that yourself.
If you use the second form, when the variable goes out of scope, it is always cleaned up automatically. And if an exception is thrown, the stack unwinds and it is also cleaned up.
So, you should prefer RAII (the second option).
In addition to what has been said so far, but there are additional performance considerations to be taken into account, particularly in memory-allocation-intensive applications:
Using new will allocate memory from the heap. In the case of intense (extremely frequent) allocation and deallocation, you will be paying a high price in:
locking: the heap is a resource shared by all threads in your process. Operations on the heap may require locking in the heap manager (done for you in the runtime library), which may slow things down significantly.
fragmentation: the heap fragments. You may see the time it takes malloc/new and free/delete to return increase 10-fold. This compounds with the locking problem above, as it takes more time to manage a fragmented heap and more threads queue up waiting for the heap lock. (On Windows there is a special flag you can set so that the heap manager heuristically attempts to reduce fragmentation.)
Using the RAII pattern, memory is simply taken off the stack. The stack is a per-thread resource: it does not fragment, there is no locking involved, and it may turn out to play to your advantage in terms of memory locality (i.e. memory caching at the CPU level).
So, when you need objects for a brief (or scoped) period of time, definitely use the second approach (a local variable, on the stack). If you need to share data between threads, use new/malloc (on the one hand you have to; on the other hand, such objects are typically long-lived enough that you pay essentially zero cost vis-a-vis the heap manager).
The second version will unwind the stack if an exception is thrown. The first will not. I don't see much difference otherwise.
The biggest difference between the two is that new gives you a pointer to the object.
If you create the object without new, it is stored on the stack. If it is created with new, you get back a pointer to the new object that has been created on the heap; it is really a memory address that points to the new object. When this happens, you need to manage the memory for the variable: when you are done using it, you need to call delete on it to avoid a memory leak. Without the new operator, the memory is freed automatically when the variable goes out of scope.
So if you need to pass the variable outside of the current scope, you have to use new. If, however, you only need a temporary, something that will only be used briefly, keeping the object on the stack is better, since you don't have to worry about memory management.
Mark Ransom is right; also, you'll need to instantiate with new if you are going to pass the variable as a parameter to a CreateThread-esque function.