Sometimes I see people mixing up two concepts: the duration of the storage and where that storage actually lives. I say that because I've often seen statements like the following:
int i; // This is in the stack!
int* j = new int; // This is in the heap!
But is this really true 100% of the time? Does C++ ensure where the storage takes place? Or, is it decided by the compiler?
Is the location of the storage independent from the duration?
For example, taking those two snippets:
void something()
{
int i;
std::cout << "i is " << i << std::endl;
}
vs:
void something()
{
int* i = new int;
std::cout << "i is " << i << std::endl;
delete i;
}
Both are more or less equivalent regarding the lifetime of i, which is created at the beginning and destroyed at the end of the block. Here the compiler could just use the stack (I don't know!), and the opposite could happen too:
void something()
{
int n[100000000]; // Man this is big
}
vs:
void something()
{
int* n = new int[100000000];
delete[] n;
}
Those two cases should be on the heap to avoid a stack overflow (or at least that is what I've been told so far...). Does the compiler also take that into account, besides the storage duration?
Is the location of the storage independent from the duration?
Duration specifies the expected/required behavior. The standard does not specify how that is implemented; it does not even require that there be a heap or a stack!
void something()
{
int i;
std::cout << "i is " << i << std::endl;
}
void something()
{
int* i = new int;
std::cout << "i is " << i << std::endl;
delete i;
}
In the first example you have "automatic" storage duration and in the second "dynamic" storage duration. The difference is that an "automatic" object will always be destroyed at the end of its scope, while the dynamic one will only be destroyed if the delete is executed.
Where the objects are created is not specified by the standard and is completely left to the implementation.
On implementations that use an underlying stack, that would be an easy implementation choice for the first example, but it is not a requirement. The implementation could just as easily ask the OS for dynamic memory for the space required by the integer and still behave as the standard defines, as long as code to release that memory is also planted and executed when the object goes out of scope.
Conversely, the easy way to implement dynamic storage duration (the second example) is to allocate memory from the runtime and then release it (assuming your implementation has this ability) when you hit the delete. But this is not a requirement. If the compiler can prove that there are no exceptions and you will always hit the delete, then it could just as easily put the object on the stack and destroy it normally. NOTE: if the compiler determines that the object is always leaked, it could still put it on the stack and simply not destroy it when it goes out of scope (that is a perfectly valid implementation).
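In fact, mainstream compilers exercise exactly this freedom: since C++14 an implementation may elide a new/delete pair entirely. A minimal sketch of my own (not from the question; it assumes optimizations are enabled):

int something()
{
    int* i = new int(42);  // the compiler is allowed to elide this allocation
    int v = *i;
    delete i;
    return v;              // GCC/Clang at -O2 typically reduce this to "return 42"
}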
The second set of examples adds some complications:
Code:
int n[100000000]; // Man this is big
This is indeed very large. Some implementations may not be able to support this on a stack (the stack frame size may be limited by the OS or hardware or compiler).
A perfectly valid implementation is to dynamically allocate the memory for this and ensure that the memory is released when the object goes out of scope.
Another implementation is to simply pre-allocate the memory, not on the stack but in the BSS (going from memory here; that is the segment of an executable that holds zero-initialized data). That is fine as long as it implements the expected behavior of calling any destructors at the end of the scope (I know int does not have a destructor, so that makes it easy).
Does C++ ensure where the storage takes place? Or, is it decided by the compiler?
When you declare a variable like:
int i;
It has automatic storage. It could indeed be on the stack, but it's also common to just allocate a register for it, if enough registers are available. Theoretically it is also valid for the compiler to allocate heap memory for this variable, but in practice this does not happen.
When you use new, it is actually up to the standard library to allocate the memory for you. By default, it will use the heap. However, it could in theory also allocate the memory on the stack, but of course this would normally be the wrong thing to do, as any stack storage disappears when you return from the function where you called new.
In fact, new is just an operator, like +, and you can overload it. Typically you would overload it inside a class, but you can also overload the global new operator (and similarly the delete operator) and have it allocate storage from wherever you want.
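As a rough illustration of that last point, here is a minimal sketch of my own (not part of the question) that replaces the global operator new and operator delete; the printf logging is just for demonstration, and new[]/delete[] have their own replaceable functions that are not shown:

#include <cstdio>
#include <cstdlib>
#include <new>

// Replaceable global allocation functions: every ordinary non-placement `new`
// in the program is routed through these.
void* operator new(std::size_t size)
{
    std::printf("allocating %zu bytes\n", size);
    if (void* p = std::malloc(size))
        return p;
    throw std::bad_alloc{};
}

void operator delete(void* p) noexcept
{
    std::printf("releasing %p\n", p);
    std::free(p);
}

int main()
{
    int* i = new int(7); // goes through the replaced operator new
    delete i;            // goes through the replaced operator delete
}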
Is the location of the storage independent from the duration?
In principle yes, but in practice automatic variables that only have the lifetime of the duration of a function are placed on the stack, whereas data you allocate with new is usually intended to outlive the function that called it, and that goes on the heap.
Those two cases should be on the heap to avoid a stack overflow (or at least that is what I've been told so far...). Does the compiler also take that into account, besides the storage duration?
GCC and Clang never use heap allocation for variables with automatic storage as far as I can tell, regardless of their size. So you have to either use new and delete yourself, or use a container that manages the storage for you. Some containers, like std::string, will avoid heap allocations if you only store a small number of elements in them.
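One way to see that small-object trick in action is to check whether a std::string's character buffer lives inside the string object itself. This is a sketch of my own and entirely implementation-dependent; the cut-off size is whatever your standard library chooses:

#include <iostream>
#include <string>

// True if the string's buffer is stored inside the object itself (no heap
// allocation); false if it points elsewhere (usually the heap).
bool stored_inline(const std::string& s)
{
    const char* obj = reinterpret_cast<const char*>(&s);
    return s.data() >= obj && s.data() < obj + sizeof(s);
}

int main()
{
    std::string small = "hi";
    std::string big(1000, 'x');
    std::cout << std::boolalpha
              << "small inline: " << stored_inline(small) << '\n'  // typically true
              << "big inline: " << stored_inline(big) << '\n';     // typically false
}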
Related
I am aware of the differences between free and delete in C++. But one thing I never understood is why in C malloc/free can allocate and de-allocate both single 'objects' and arrays, while in C++ we need to use the correct new/delete vs new[]/delete[] pair.
Searching on Stack Overflow, it seems that in C++, new[] allocates extra memory to hold the size of the allocated array, while new only allocates the memory for the object itself. And because of that, you should be aware of this extra overhead.
If the previous paragraph is indeed the case, then how do malloc/free handle this overhead? Or do they just accept it? And if it is tolerable in C, why not in C++?
On the other hand, in case it's not because of memory overhead, but because of calling constructors and destructors, couldn't the compiler be smart enough to generate the appropriate code under the hood and let the programmer just write new/delete for both single objects and arrays of objects?
I am writing a compiler for a toy language whose semantics are similar to C++, and it seems that it is possible to let the compiler decide how to allocate and de-allocate using only new/delete. But since C++ uses new/delete and new[]/delete[], maybe there's a catch that I am not seeing right now. Maybe something related to polymorphism and virtual tables?
If you're curious, my naive idea is to simply allocate an integer together with the object/array, where this integer is the size of the array, or simply 1 in the case of a single object. Then, when calling delete, it checks the value of the integer: if it is 1, it calls the destructor once; if it is greater than 1, it iterates over the array calling the destructor on each object. As I said, it seems to work and would let the programmer just write new/delete instead of new/delete vs new[]/delete[]. But then again, maybe there's a catch that I am not seeing.
Edited part:
After some answers, I decided to try to provide some pseudo-code and a better background.
In C, memory allocations are usually made with malloc() and de-allocations with free(). Whether you are allocating a single POD, a single struct, or an array, malloc() fits all these cases; there is no need for one version of malloc() for a single struct and a separate malloc_array() version for an array, at least at the public API level. In other words, it seems it doesn't matter whether you are allocating a few bytes or many bytes: there is no visible overhead for bookkeeping the allocation size information.
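For illustration (the Point struct here is made up, and I'm using the <cstdlib> spellings), the same malloc/free pair covers a single struct and an array of them:

#include <cstdlib>

struct Point { double x, y; };

int main()
{
    // one struct and an array of ten: same allocation function, same
    // deallocation function, no array-specific variant needed
    Point* one  = static_cast<Point*>(std::malloc(sizeof(Point)));
    Point* many = static_cast<Point*>(std::malloc(10 * sizeof(Point)));
    std::free(many);
    std::free(one);
}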
As many of you are aware, including myself, new and delete do more than just allocate and de-allocate memory: new allocates memory and calls the constructor, and delete calls the destructor and then de-allocates the memory. But in C++ you need to be aware of whether you are allocating just a single object or an array of objects; if you are allocating an array, you need to use the new[]/delete[] pair.
In C, if you implement a binary tree, nodes will be allocated with malloc and de-allocated with free, and in C++ with new and delete. But if you are implementing something like the vector class, in C you would still use malloc/free, while in C++ you would need to use new[]/delete[] (considering a sane implementation without too much black magic).
Consider the following pseudo-code for what the compiler could generate. In this pseudo-code, the delete function somehow gets access to the malloc internals and knows how many bytes there are, which in turn can easily be used to calculate how many objects there are. Because this delete implementation uses the malloc internals to know how much memory is allocated, in theory there should be no bookkeeping overhead.
// ClassType is a meta-type known only to the compiler;
// it stores class info such as name, size, constructors and so on
void *new(ClassType c) {
// allocate memory with malloc(); malloc() does the storage bookkeeping
// note that here malloc is allocating just a single object
c *ptr = malloc(sizeof(c));
// now, call the constructor of the requested class
c.constructor(ptr);
// return the new object
return ptr;
}
void *new(ClassType c, size_t n) {
c *ptr = malloc(sizeof(c) * n);
// iterate over the array and construct each object
for (i = 0; i < n; ++i) {
c.constructor(&ptr[i]);
}
return ptr;
}
// this delete version seems to be able to de-allocate both single
// objects and arrays of objects with no overhead of bookkeeping because
// the bookkeeping is made by malloc/free. So I would need
// just a new/delete pair instead of new/delete vs new[]/delete[]
// Why C++ doesn't use something like my proposed implementation?
// What low-level details prohibits this implementation from working?
void delete(ClassType c, void *ptr) {
// get raw information of how many bytes are used by ptr;
size_t n = malloc_internals_get_size(ptr);
// convert the number of bytes to number of objects in the array
n = c.bytesToClassSize(n);
c* castedPointer = (c*) ptr;
// calls the destructor
for (i = 0; i < n; ++i) {
c.destructor(&castedPointer[i]);
}
// free memory chunk
free(ptr);
}
why in C malloc/free can allocate and de-allocate both single 'objects'
Malloc doesn't create any objects. It allocates "raw memory" which doesn't contain any objects. Correspondingly, free doesn't destroy any objects. new expressions do create objects, and delete destroys an object, while delete[] destroys an array of objects.
In order for the language implementation to know how many objects need to be destroyed by delete[], that number has to be stored somewhere. In order for the language implementation to know how many objects need to be destroyed by delete, that number does not need to be stored anywhere because it is always one.
Storing a number is not free, and storing an unused number is an unnecessary overhead. The different forms of deletion exist so that the language implementation can destroy the correct number of objects without having to store the number of objects created by a non-array new.
then how malloc/free handles this overhead?
malloc/free doesn't have this overhead since it doesn't create or destroy objects. As such, there is nothing that needs to be handled.
There is an analogous issue of storing the number of allocated bytes, which malloc does need to deal with. There is no analogous separate function for allocating or freeing a single byte, probably because such a use case is rare. Malloc has cleverer ways of dealing with this bookkeeping, because allocating more memory than is needed is not observable, whereas the same trick is not possible with the number of objects, since the creation and destruction of objects is observable (at least in the case of non-trivial types).
new typically deals with the issue of storing the number of allocated bytes by using malloc internally.
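On implementations that expose it, glibc's malloc_usable_size gives a peek at that byte-level bookkeeping. This is a non-standard, glibc-specific sketch and the exact numbers vary, so treat it purely as an illustration:

#include <cstdio>
#include <cstdlib>
#include <malloc.h> // glibc-specific: declares malloc_usable_size

int main()
{
    void* p = std::malloc(1); // ask for a single byte
    // glibc typically reports something like 24 usable bytes: the request is
    // rounded up, and that rounding is not observable to the program, which is
    // why malloc can afford it while a destructor count could not be fudged.
    std::printf("requested 1 byte, usable: %zu bytes\n", malloc_usable_size(p));
    std::free(p);
}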
couldn't the compiler be smart enough to generate the appropriate code under the hood
Not without some kind of overhead, no. With overhead yes, it could.
But then again, maybe there's a catch that I am not seeing.
I'm not sure whether it is a catch you haven't seen, but the catch with your idea is the overhead of the integer that has to be allocated even when only a single object is allocated.
Some clever implementations of malloc don't actually keep track of the size per allocation (by clever use of rounding up), and thus have extremely low space overhead. They'll allocate a large block of memory, and any time a caller allocates <64 bytes, they'll just hand out the next 64 bytes of this block and mark a single bit elsewhere to record that this 64-byte chunk is now in use. Even if the user only wants to allocate 1 byte, a whole chunk is handed out. This means each allocation has only a single bit of overhead, so every 8 allocations share a byte of overhead. Practically nothing. (There are far smarter strategies than this; this is just a simplified example.)
new and delete can share this super-low-overhead implementation, because delete knows to always destroy one object, regardless of the amount of space it actually has. This is again, super fast, and has low space overhead.
delete[] can't do that because it has to know exactly how many destructors to call. So it has to keep track of how many items are in the array, as a std::size_t, which means roughly 4-8 bytes have to be added to every new[]. If the data requires an alignment greater than that, then each allocation also gets wasted padding bytes between the count and the first item in the array. And delete[] therefore has to know how to look past the padding and find the count, so it knows exactly how many objects to destroy. This costs both time and space, for every allocation.
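You can often observe that hidden count directly. In this sketch (mine, not from the question) the class-specific operator new[] exists only so we can print the size actually requested; the exact numbers depend on the ABI:

#include <cstddef>
#include <iostream>
#include <new>

struct NonTrivial
{
    int x;
    ~NonTrivial() {} // non-trivial destructor: delete[] must know the element count

    static void* operator new[](std::size_t sz)
    {
        std::cout << "new[] asked for " << sz << " bytes\n";
        return ::operator new(sz);
    }
    static void operator delete[](void* p) noexcept { ::operator delete(p); }
};

int main()
{
    // sizeof(NonTrivial) * 4 is 16 on common platforms, but the size passed to
    // operator new[] is usually larger (e.g. 24), because the implementation
    // stores the array length in front of the elements.
    NonTrivial* p = new NonTrivial[4];
    delete[] p;
}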
C++ gives you the choice between "always works, but slower and bigger", and "only works for one item, but faster and smaller", so the program can run as fast as possible.
Not all implementations have a difference between new/delete and new[]/delete[], but the two forms exist because a new expression acts differently in the case of an array. The thing is that a new expression is not just a call to an allocation function; the underlying operators might be identical:
#include <iostream>
// class-specific allocation functions
struct X {
static void* operator new(std::size_t sz)
{
std::cout << "custom new for size " << sz << '\n';
return ::operator new(sz);
}
static void* operator new[](std::size_t sz)
{
std::cout << "custom new[] for size " << sz << '\n';
return ::operator new(sz);
}
};
int main() {
X* p1 = new X;
delete p1;
X* p2 = new X[10];
delete[] p2;
}
But they may be overloaded to do different things as well. This can be an important difference for a native compiler; if you're writing an interpreter, or you don't expect users to overload new/delete, you can handle it differently. In C++, a new expression calls one constructor after allocating memory with the provided function, while a new[] expression calls several constructors and stores the count. Of course, if the compiler has an object-oriented memory model, like Java does, then an array is an object with a size property and a single object is just an array of one instance, and, as you said, we wouldn't need a separate array form of delete.
I've always declared my arrays using this method:
bool array[256];
However, I've recently been told to declare my arrays using:
bool* array = new bool[256];
What is the difference and which is better? Honestly, I don't fully understand the second way, so an explanation on that would be helpful too.
bool array[256];
This allocates a bool array with automatic storage duration.
It will be automatically cleaned up when it goes out of scope.
In most implementations this would be allocated on the stack if it's not declared static or global.
Allocations/deallocations on the stack are computationally really cheap compared to the alternative. It also might have some advantages for data-locality but that's not something you usually have to worry about. But you might need to be careful of allocating many large arrays to avoid a stack overflow.
bool* array = new bool[256];
This allocates an array with dynamic storage duration.
You need to clean it up yourself with a call to delete[] later on. If you do not then you will leak memory.
Alternatively (as mentioned by #Fibbles) you can use smart-pointers to express the desired ownership/lifetime requirements. This will leave the responsibility of cleaning up to the smart-pointer class. Which helps a lot with guaranteeing deletion, even in cases of exceptions.
It has the advantage of being able to pass it to outer scopes and other objects without copying (RVO will avoid copying for the first case too in certain cases, but storing it as a data-member and other uses can't be optimized in the first case).
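If you do want the dynamic variant without a manual delete[], here is a quick sketch of the usual alternatives (std::vector is generally the first choice):

#include <memory>
#include <vector>

int main()
{
    std::vector<bool> v(256);               // heap-backed, cleans itself up
    auto a = std::make_unique<bool[]>(256); // unique_ptr<bool[]>, calls delete[] for you
    v[0] = true;
    a[0] = true;
} // both released automatically here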
The first is allocation of memory on the stack:
// inside main (or function, or non-static member of class) -> stack
int main() {
bool array[256];
}
or possibly as static memory:
// outside main (and any function, or static member of class) -> static
bool array[256];
int main() {
}
The last is allocation of dynamic memory (on the heap):
int main() {
bool* array = new bool[256];
delete[] array; // you should not forget to release memory allocated in heap
}
The advantage of dynamic memory is that it can be created with a variable number of elements (not a fixed 256, but a count taken from user input, for example). But you have to release it yourself every time.
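For example, a small sketch where the size comes from user input rather than a compile-time constant:

#include <cstddef>
#include <iostream>

int main()
{
    std::size_t n = 0;
    std::cin >> n;               // size known only at run time
    bool* array = new bool[n]{}; // value-initialized dynamic array of n elements
    // ... use array ...
    delete[] array;              // must be released manually
}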
More about stack, static and heap memory and when you should use each is here: Stack, Static, and Heap in C++
The difference is static vs dynamic allocation, as previous answers have indicated. There are reasons for using one over the other. This video by Herb Sutter explains when you should use what. https://www.youtube.com/watch?v=JfmTagWcqoE It is just over 1 1/2 hours.
My preference is to use
bool array[256];
unless there's a reason to do otherwise.
I'm sure this is answered somewhere, but I'm lacking the vocabulary to formulate a search.
#include <iostream>
class Thing
{
public:
int value;
Thing(int newval);
virtual ~Thing() { std::cout << "Destroyed a thing with value " << value << std::endl; }
};
Thing::Thing(int newval)
{
value = newval;
}
int main()
{
Thing *myThing1 = new Thing(5);
std::cout << "Value 1: " << myThing1->value << std::endl;
Thing myThing2 = Thing(6);
std::cout << "Value 2: " << myThing2.value << std::endl;
return 0;
}
The output indicates that myThing2 was destroyed, but myThing1 was not.
So... do I need to deconstruct it manually somehow? Is this a memory leak? Should I avoid using the * in this situation, and if so, when would it be appropriate?
The golden rule is: wherever you use a new, you must use a delete. You are creating dynamic memory for myThing1, but you never release it, hence the destructor for myThing1 is never called.
The difference between this and myThing2 is that myThing2 is a scoped object. The operation:
Thing myThing2 = Thing(6);
is not similar at all to:
Thing *myThing1 = new Thing(5);
Read more about dynamic allocation here. But as some final advice, you should be using the new keyword sparingly; read more about that here:
Why should C++ programmers minimize use of 'new'?
myThing1 is a Thing*, not a Thing. When a pointer goes out of scope nothing happens, except that you leak the memory it points to, as there is no way to get it back. In order for the destructor to be called you need to delete myThing1; before it goes out of scope. delete calls the destructor for class types and frees the memory that was allocated.
The rule of thumb is for every new/new[] there should be a corresponding delete/delete[]
You need to explicitly delete myThing1 or use shared_ptr / unique_ptr.
delete myThing1;
The problem is not related to using a pointer Thing *. A pointer can just as well point to an object with automatic storage duration.
The problem is that in this statement
Thing *myThing1 = new Thing(5);
an object is created by the new expression new Thing(5). This object can only be destroyed by using the delete operator:
delete myThing1;
Otherwise the memory stays occupied until the program finishes.
Thing myThing2 = Thing(6);
This line creates a Thing in main's stack with automatic storage duration. When main() ends it will get cleaned up.
Thing *myThing1 = new Thing(5);
This, on the other hand, creates a pointer to a Thing. The pointer resides on the stack, but the actual object is in the heap. When the pointer goes out of scope nothing happens to the pointed-to thing, the only thing reclaimed is the couple of bytes used by the pointer itself.
In order to fix this you have two options, one good, one less good.
Less good:
Put a delete myThing1; towards the end of your function. This will free the allocated object. As noted in other answers, every allocation of memory must have a matching deallocation, else you will leak memory.
However, in modern C++, unless you have good reason not to, you should really be using shared_ptr / unique_ptr to manage your memory. If you had instead declared myThing1 thusly:
shared_ptr<Thing> myThing1(new Thing(5));
Then the code you have now would work the way you expect. Smart pointers are powerful and useful in that they greatly reduce the amount of work you have to do to manage memory (although they do have some gotchas, circular references take extra work, for example).
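For completeness, here is a minimal sketch of the unique_ptr version, which fits single ownership better than shared_ptr (the Thing class is the one from the question, with the constructor declaration fixed to take an int):

#include <iostream>
#include <memory>

class Thing
{
public:
    int value;
    Thing(int newval) : value(newval) {}
    virtual ~Thing() { std::cout << "Destroyed a thing with value " << value << std::endl; }
};

int main()
{
    auto myThing1 = std::make_unique<Thing>(5); // heap-allocated, owned by the smart pointer
    std::cout << "Value 1: " << myThing1->value << std::endl;
    return 0;
} // ~Thing() runs here automatically; no delete needed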
I am trying to understand the difference between the stack and heap memory, and this question on SO as well as this explanation did a pretty good job explaining the basics.
In the second explanation, however, I came across an example about which I have a specific question. The example creates an object m using new.
It is explained that the object m is allocated on the heap, but I am wondering whether that is the full story. According to my understanding, the object itself is indeed allocated on the heap, as the new keyword was used for its instantiation.
However, isn't it the case that the pointer to object m is at the same time allocated on the stack? Otherwise, how would the object itself, which is of course sitting on the heap, be accessed? I feel that, for the sake of completeness, this should have been mentioned in the tutorial; leaving it out causes a bit of confusion for me, so I hope someone can clear this up and tell me whether I am right that this example should basically come with two statements:
1. a pointer to object m has been allocated on the stack
2. the object m itself (so the data that it carries, as well as access to its methods) has been allocated on the heap
Your understanding may be correct, but the statements are wrong:
A pointer to object m has been allocated on the stack.
m is the pointer. It is on the stack. Perhaps you meant pointer to a Member object.
The object m itself (the data that it carries, as well as access to its methods) has been allocated on the heap.
Correct would be to say that the object pointed to by m is created on the heap.
In general, any function/method local object and function parameters are created on the stack. Since m is a function local object, it is on the stack, but the object pointed to by m is on the heap.
"stack" and "heap" are general programming jargon. In particular , no storage is required to be managed internally via a stack or a heap data structure.
C++ has the following storage durations:
static
automatic
dynamic
thread
Roughly, dynamic corresponds to "heap", and automatic corresponds to "stack".
Moving onto your question: a pointer can be created in any of these four storage classes; and objects being pointed to can also be in any of these storage classes. Some examples:
void func()
{
    int *p = new int;          // automatic pointer to dynamic object
    int q;                     // automatic object
    int *r = &q;               // automatic pointer to automatic object
    static int *s = p;         // static pointer to dynamic object
    static int *u = r;         // static pointer to automatic object (bad idea)
    thread_local int **t = &s; // thread-local pointer to static object
}
Named variables declared with no specifier are automatic if within a function, or static otherwise.
When you declare a variable in a function, it normally goes on the stack. So your variable Member* m is created on the stack. Note that by itself, m is just a pointer; it could point to an object on either the stack or the heap, or to nothing at all.
Declaring a variable in a class or struct is different -- those go wherever the class or struct is instantiated.
To create something on the heap, you use new or std::malloc (or their variants). In your example, you create an object on the heap using new and assign its address to m. Objects on the heap need to be released to avoid memory leaks. If allocated using new, you need to use delete; if allocated using std::malloc, you need to use std::free. The better approach is usually to use a "smart pointer", which is an object that holds a pointer and has a destructor that releases it.
Yes, the pointer is allocated on the stack but the object that pointer points to is allocated on the heap. You're correct.
However, isn't it the case that the pointer to object m is at the same time allocated on the stack?
I suppose you meant the Member object. The pointer itself is allocated on the stack and lives there for the entire duration of the function (or its enclosing scope). After that, code using it might still appear to work:
#include <iostream>
using namespace std;
struct Object {
int somedata;
};
Object** globalPtrToPtr; // This lives in yet another area, usually called the
                         // "data segment" (static storage): neither stack nor heap
void function() {
Object* pointerOnTheStack = new Object;
globalPtrToPtr = &pointerOnTheStack;
cout << "*globalPtrToPtr = " << *globalPtrToPtr << endl;
} // pointerOnTheStack is NO LONGER valid after the function exits
int main() {
    function(); // after this returns, pointerOnTheStack no longer exists
    // This can give an access violation,
    // a different value after the pointer destruction
    // or even the same value as before, randomly - Undefined Behavior
    cout << "*globalPtrToPtr = " << *globalPtrToPtr << endl;
    return 0;
}
http://ideone.com/BwUVgm
The above code stores the address of a pointer residing on the stack (and leaks memory too because it doesn't free Object's allocated memory with delete).
Since after exiting the function the pointer is "destroyed" (i.e. its memory can be used for whatever pleases the program), you can no longer safely access it.
The above program can either run properly, crash, or give you a different result, more or less at random. Accessing freed or deallocated memory is undefined behavior.
I am still new to C++. I have found that you can create an instance in C++ in two different ways:
// First way
Foo foo;
foo.do_something();
// Second way
Baz *baz = new Baz();
baz->do_something();
With both I don't see a big difference, and I can access the attributes either way. Which is the preferred way in C++? Or, if that question is not relevant, when do we use which, and what is the difference between the two?
Thank you for your help.
The question is not relevant: there's no preferred way, those just do different things.
C++ has both value and reference semantics. When a function asks for a value, you pass it a copy of your whole object. When it asks for a reference (or a pointer), you only pass it the memory address of that object. The two are convertible: if you have a value, you can take a reference or a pointer to it and use that, and if you have a reference you can read the value it refers to. Take this example:
void foo(int bar) { bar = 4; }
void foo(int* bar) { *bar = 4; }
void test()
{
int someNumber = 3;
foo(someNumber); // calls foo(int)
std::cout << someNumber << std::endl;
// printed 3: someNumber was not modified because of value semantics,
// as we passed a copy of someNumber to foo, the change was not propagated
// back to our local variable
foo(&someNumber); // calls foo(int*)
std::cout << someNumber << std::endl;
// printed 4: someNumber was modified, because passing a pointer lets people
// change the pointed value
}
It is a very, very common thing to create a reference to a value (i.e. get the pointer of a value), because references are very useful, especially for complex types, where passing a reference notably avoids a possibly costly copy operation.
Now, the instantiation way you'll use depends on what you want to achieve. The first way you've shown uses automatic storage; the second uses the heap.
The main difference is that objects on automatic storage are destroyed with the scope in which they existed (a scope being roughly defined as a pair of matching curly braces). This means that you must not ever return a reference to an object allocated on automatic storage from a regular function, because by the time your function returns, the object will have been destroyed and its memory space may be reused for anything at any later point by your program. (There are also performance benefits for objects allocated on automatic storage because your OS doesn't have to look up a place where it might put your new object.)
Objects on the heap, on the other hand, continue to exist until they are explicitly deleted by a delete statement. There is an OS- and platform-dependent performance overhead to this, since your OS needs to search your program's memory for a large enough unoccupied place to create your object. Since C++ is not garbage-collected, you must instruct your program when it is time to delete an object on the heap. Failure to do so leads to leaks: objects on the heap that are no longer referenced by any variable, but were never explicitly deleted and therefore will exist until your program exits.
So it's a matter of tradeoff. Either you accept that your values can't outlive your functions, or you accept that you must explicitly delete it yourself at some point. Other than that, both ways of allocating objects are valid and work as expected.
For further reference, automatic storage means that the object is allocated wherever its parent scope was. For instance, if you have a class Foo that contains a std::string, the std::string will exist wherever you allocate your Foo object.
class Foo
{
public:
// in this context, automatic storage refers to wherever Foo will be allocated
std::string a;
};
void foo()
{
    // in this context, automatic storage refers to your program's stack
    Foo bar;            // 'bar' is on the stack, so its 'a' is on the stack
    Foo* baz = new Foo; // 'baz' points to the heap, so its 'a' is on the heap too
    delete baz;         // but still, in both cases 'a' is destroyed once the
                        // holding object is destroyed ('bar' at end of scope,
                        // '*baz' at the delete)
}
As stated above, you cannot directly leak objects that reside on automatic storage, but you cannot use them once the scope in which they were created is destroyed. For instance:
int* foo()
{
int a; // cannot be leaked: automatically managed by the function scope
return &a; // BAD: a doesn't exist anymore
}
int* foo()
{
int* a = new int; // can be leaked
return a; // NOT AS BAD: now the pointer points to somewhere valid,
// but you eventually need to call `delete a` to release the memory
}
The first way -- "allocating on the stack" -- is generally faster and preferred much of the time. The constructed object is destroyed when the function returns. This is both a blessing -- no memory leaks! -- and a curse, because you can't create an object that lives for a longer time.
The second way -- "allocating on the heap" is slower, and you have to manually delete the objects at some point. But it has the advantage that the objects can live on until you delete them.
The first way allocates the object on the stack (though the class itself may have heap-allocated members). The second way allocates the object on the heap, and must be explicitly delete'd later.
It's not like in languages like Java or C# where objects are always heap-allocated.
They do very different things. The first one allocates an object on the stack, the 2nd on the heap. The stack allocation only lasts for the lifetime of the declaring method; the heap allocation lasts until you delete the object.
The second way is the only way to dynamically allocate objects, but comes with the added complexity that you must remember to return that memory to the operating system (via delete/delete[]) when you are done with it.
The first way will create the object on the stack, and the object will go away when you return from the function it was created in.
The second way will create the object on the heap, and the object will stick around until you delete it.
If the object is just a temporary variable, the first way is better. If it's more permanent data, the second way is better - just remember to call delete when you're finally done with it so you don't build up cruft on your heap.
Hope this helps!