Here's the line of code:
A a = static_cast<A>(*(new A)); // ?
It compiles fine on 64bit clang at least.
But where is the memory actually allocated and what happens to variable a?
Besides the fact that no static_cast is needed there, the memory allocated with new A simply leaks: you have lost access to that pointer and can never delete it properly anymore.
But where is the memory actually allocated and what happens to variable a?
Variable a is destroyed as soon as it leaves scope, as usual.
A a = static_cast<A>(*(new A)); // ?
This does the following.
(new A) // allocate a new A on the heap
*(new A) // the type of the expression is now A instead of A*
static_cast<A>(*(new A)) // converts it to type A from type A in a potentially unsafe way (here it is a no-op, as they are the same type)
A a = static_cast<A>(*(new A)); // copies the (default) value of A to a
; // leaks the allocated A as the last reference to it disappears.
I'm going to answer this question on the assumption that this line of code appears inside a function. If it appears elsewhere, the bit about the "stack" is inaccurate but everything else is still accurate.
This line of code compiles to four operations, which we can write as their own lines of C++ to make things clearer. It makes two allocations, in two different places, and one of them is "leaked".
A a;
{
A *temp = new A;
a = *temp;
}
The first operation allocates space for an object of type A on the "stack", and default-initializes it. This object is accessible through the variable a. It will be automatically destructed and deallocated no later than when the function returns; depending on surrounding context, this might happen earlier, but in no case while the variable a is in scope.
The second operation allocates space for another object of type A, but on the "heap" instead of the "stack". This object is also default-initialized. The new operator returns a pointer to this object, which the compiler stores in a temporary variable. (I gave that variable the name temp because I had to give it some name; in your original code the temporary is not accessible by any means.) This object will only ever be deallocated if, at some point in the future, the pointer returned by new is used in a delete operation.
The third operation, finally, copies the contents of the object on the heap, pointed to by temp, into the object on the stack, accessible via the variable a. (Note: the static_cast<A>(...) that you had written here has no effect whatsoever, because *temp already has the type A. Therefore, I took it out.)
Finally, the temporary variable holding the pointer to the object on the heap is discarded. The object on the heap is not deallocated when this happens; in fact, it becomes impossible for anything ever to deallocate it. That object is said to have leaked.
You probably wanted to write either
A a;
which allocates an object on the stack and does nothing else, or
// note: 'auto' and std::make_shared are C++11 only; the nearest C++03 equivalent is std::tr1::shared_ptr<A> a(new A());
auto a = std::make_shared<A>();
which allocates an object on the heap and arranges to reference-count it, so that it probably won't leak. (There are a few other things you might have meant, but those are the most likely.)
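If you really did mean to copy a heap-allocated A into a local object and then throw the heap object away, the pointer returned by new has to be kept long enough to delete it. A minimal sketch, assuming A is a copyable class as in your question:
A *temp = new A;   // heap object, pointer kept this time
A a = *temp;       // copy into the local (stack) object
delete temp;       // heap object released: no leak
Of course, if all you wanted was the local copy, the heap allocation serves no purpose and plain A a; is the right choice.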
For a simple definition of A, it is equivalent to:
A a(*new A);
An A is dynamically allocated on the heap, a is copy constructed on the stack, and the dynamic allocation is leaked.
For a trivial definition of A the overall effect might as well be:
new A;
A a;
this achieves the same leak without the wasteful copy operation or the messy, redundant cast :)
I am little bit curious about that why memory is not allocated to a class or structure when we create pointer type object ?
For example :-
#include <iostream>
using namespace std;

class A
{
public:
void show()
{
cout<<" show function "<<endl;
}
};
int main()
{
A *a;
a->show();
return 0;
}
Because pointers and memory allocation are a priori completely unrelated. In fact, in modern C++ it’s downright bad to use pointers to point to manually allocated memory directly.
In most cases, pointers point to existing objects – that’s their purpose: to provide indirection. To reiterate: this is completely unrelated to memory allocation.
If you want to directly have an object you don’t need a pointer: just declare the object as-is (= by value):
A a;
a.show();
This code:
A *a;
a->show();
just declares a pointer of type A*. A pointer alone is nothing but a variable that holds the address of some memory, i.e. it just points somewhere, nothing else. A pointer of type A* means that it points to memory where an instance of type A is expected to be found.
a->show(); then just relies on this "expectation", but in fact it just uses uninitialized pointer, which results in undefined behavior.
This could be either solved by dynamically creating an instance of A:
A *a = new A();
a->show();
(which, however, gives you the unpleasant responsibility of cleaning up this memory by calling delete a;) or even better: using an object with automatic storage duration instead:
A a;
a.show();
In the second case, an instance of type A is created automatically and its lifetime is tied to the scope, in which it has been created. Once the execution leaves this scope, a is destructed and memory is freed. All of that is taken care of, without you worrying about it at all.
Allocating a pointer does not equate to allocating an object. You need to use new and instantiate an object on the heap, or create the object on the stack:
A* a = new A();
// Or
A a;
A* aPntr = &a;
A pointer is not an object itself; it’s just a link that points somewhere. The reason to use pointers is that you can dynamically change what they’re pointing to.
A a;
A b;
A *pA;
{
bool condition;
// set condition here according to user input, a file or anything else...
if(condition)
pA = &a;
else
pA = &b;
}
Now I don’t have to care about condition; it doesn’t even have to exist anymore, and I can still profit from the choice made above.
pA->show();
Or I can use pointer to iterate over an array:
A array[10];
for(A* pA = array; pA < array+10; pA++)
{
pA->show();
}
(Note that I used the original declaration of class A in both examples, although it would be more meaningful if each object of class A contained its own specific information.)
There may not be one single reason for A *a; not to allocate an instance of A. It would be at odds with how C++ is designed. I would be somewhat surprised if Stroustrup considered it for long enough to identify a definitive reason not to do it.
A few different ways to look at it:
You didn't ask for an object of type A, so you don't get one. That's how C and C++ work.
A pointer object is an object that holds an address. You may as well ask why stationery manufacturers don't build a house when they manufacture an envelope, as ask why C++ doesn't allocate an object to be pointed at when you define a pointer.
There are many ways to allocate memory. Supposing that memory was going to be allocated, which one would you like? You could argue that in C++ new would be a sensible default for class types, but then it would probably be quite confusing either if char *c; called new char (because the behavior would be different from C) or if char *c; didn't allocate memory at all (because the behavior would be different from A *a;).
How and when would the memory be freed? If it's allocated with new then someone is going to have to call delete. It's much easier to keep things straight if each delete corresponds to a new, rather than each delete corresponding either to new or to defining a pointer with implicit memory allocation.
A pointer can be the location of an object, but it isn't always (sometimes it's null, sometimes it's off-the-end of an array). The object can be dynamically allocated but doesn't have to be. It would be very unhelpful of the language to make a pointer point to an object in cases where you don't need it. Therefore the language gives you the option not to allocate memory when defining a pointer. If you don't want that, then you should initialize the pointer with the result of a call to the memory-allocation mechanism of your choice.
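To make that concrete, here is a small sketch (using a stripped-down version of the A from the question) of the different things a single pointer may legitimately refer to over its lifetime:
struct A { void show() {} };

void example()
{
    A automatic;          // an object somebody else owns
    A *p = &automatic;    // points at an existing object
    p->show();

    p = nullptr;          // points at nothing, and says so

    p = new A;            // points at a heap object you now own...
    p->show();
    delete p;             // ...and must eventually delete yourself
}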
You just created a pointer a, but did not allocate memory for an object for it to point to.
You should use A *a = new A(); instead.
Basic Question: when does a program call a class' destructor method in C++? I have been told that it is called whenever an object goes out of scope or is subjected to a delete
More specific questions:
1) If the object is created via a pointer and that pointer is later deleted or given a new address to point to, does the object that it was pointing to call its destructor (assuming nothing else is pointing to it)?
2) Following up on question 1, what defines when an object goes out of scope (not regarding to when an object leaves a given {block}). So, in other words, when is a destructor called on an object in a linked list?
3) Would you ever want to call a destructor manually?
1) If the object is created via a pointer and that pointer is later deleted or given a new address to point to, does the object that it was pointing to call its destructor (assuming nothing else is pointing to it)?
It depends on the type of pointers. For example, smart pointers often delete their objects when they are deleted. Ordinary pointers do not. The same is true when a pointer is made to point to a different object. Some smart pointers will destroy the old object, or will destroy it if it has no more references. Ordinary pointers have no such smarts. They just hold an address and allow you to perform operations on the objects they point to by specifically doing so.
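As a rough illustration of that difference (a sketch with a made-up type Foo, not taken from the question), compare reassigning a raw pointer with resetting a std::unique_ptr:
#include <memory>

struct Foo {};

void example()
{
    Foo *raw = new Foo;
    raw = new Foo;              // the first Foo is leaked: raw pointers have no smarts

    std::unique_ptr<Foo> smart(new Foo);
    smart.reset(new Foo);       // the old Foo is deleted before the new one is adopted
}                               // smart deletes its Foo here; both raw-pointed Foos are leaked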
2) Following up on question 1, what defines when an object goes out of scope (not regarding to when an object leaves a given {block}). So, in other words, when is a destructor called on an object in a linked list?
That's up to the implementation of the linked list. Typical collections destroy all their contained objects when they are destroyed.
So, a linked list of pointers would typically destroy the pointers but not the objects they point to. (Which may be correct. They may be referenced by other pointers.) A linked list specifically designed to contain pointers, however, might delete the objects on its own destruction.
A linked list of smart pointers could automatically delete the objects when the pointers are deleted, or do so if they had no more references. It's all up to you to pick the pieces that do what you want.
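For example (a sketch, assuming a hypothetical element type Node), a std::list of raw pointers versus a std::list of std::unique_ptrs:
#include <list>
#include <memory>

struct Node {};

void example()
{
    std::list<Node*> raw_list;
    raw_list.push_back(new Node);
    // destroying raw_list destroys only the pointers; the Node above leaks
    // unless you delete each element yourself before the list goes away

    std::list<std::unique_ptr<Node>> owning_list;
    owning_list.push_back(std::unique_ptr<Node>(new Node));
}   // owning_list destroys its unique_ptrs, and they delete their Nodes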
3) Would you ever want to call a destructor manually?
Sure. One example would be if you want to replace an object with another object of the same type but don't want to free memory just to allocate it again. You can destroy the old object in place and construct a new one in place. (However, generally this is a bad idea.)
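A minimal sketch of that destroy-and-reconstruct-in-place idea, with a made-up type Widget (and, again, this is rarely what you want outside of container internals):
#include <new>

struct Widget {
    int value;
    explicit Widget(int v) : value(v) {}
};

void example()
{
    Widget w(1);
    w.~Widget();          // destroy the old object in place...
    new (&w) Widget(2);   // ...and construct the replacement in the same storage
}                         // the replacement is destroyed normally when w goes out of scope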
// pointer is destroyed because it goes out of scope,
// but not the object it pointed to. memory leak
if (1) {
Foo *myfoo = new Foo("foo");
}
// pointer is destroyed because it goes out of scope,
// object it points to is deleted. no memory leak
if(1) {
Foo *myfoo = new Foo("foo");
delete myfoo;
}
// no memory leak, object goes out of scope
if(1) {
Foo myfoo("foo");
}
Others have already addressed the other issues, so I'll just look at one point: do you ever want to manually delete an object.
The answer is yes. @DavidSchwartz gave one example, but it's a fairly unusual one. I'll give an example that's under the hood of what a lot of C++ programmers use all the time: std::vector (and std::deque, though it's not used quite as much).
As most people know, std::vector will allocate a larger block of memory when/if you add more items than its current allocation can hold. When it does this, however, it has a block of memory that's capable of holding more objects than are currently in the vector.
To manage that, what vector does under the covers is allocate raw memory via the Allocator object (which, unless you specify otherwise, means it uses ::operator new). Then, when you use (for example) push_back to add an item to the vector, internally the vector uses a placement new to create an item in the (previously) unused part of its memory space.
Now, what happens when/if you erase an item from the vector? It can't just use delete -- that would release its entire block of memory; it needs to destroy one object in that memory without destroying any others, or releasing any of the block of memory it controls (for example, if you erase 5 items from a vector, then immediately push_back 5 more items, it's guaranteed that the vector will not reallocate memory when you do so).
To do that, the vector directly destroys the objects in the memory by explicitly calling the destructor, not by using delete.
If, perchance, somebody else were to write a container using contiguous storage roughly like a vector does (or some variant of that, like std::deque really does), you'd almost certainly want to use the same technique.
Just for example, let's consider how you might write code for a circular ring-buffer.
#ifndef CBUFFER_H_INC
#define CBUFFER_H_INC
template <class T>
class circular_buffer {
T *data;
unsigned read_pos;
unsigned write_pos;
unsigned in_use;
const unsigned capacity;
public:
circular_buffer(unsigned size) :
data((T *)operator new(size * sizeof(T))),
read_pos(0),
write_pos(0),
in_use(0),
capacity(size)
{}
void push(T const &t) {
// ensure there's room in buffer:
if (in_use == capacity)
pop();
// construct copy of object in-place into buffer
new(&data[write_pos++]) T(t);
// keep pointer in bounds.
write_pos %= capacity;
++in_use;
}
// return oldest object in queue:
T front() {
return data[read_pos];
}
// remove oldest object from queue:
void pop() {
// destroy the object:
data[read_pos++].~T();
// keep pointer in bounds.
read_pos %= capacity;
--in_use;
}
~circular_buffer() {
// first destroy any content
while (in_use != 0)
pop();
// then release the buffer.
operator delete(data);
}
};
#endif
Unlike the standard containers, this uses operator new and operator delete directly. For real use, you probably do want to use an allocator class, but for the moment it would do more to distract than contribute (IMO, anyway).
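In case it helps, here is a short usage sketch for the circular_buffer above (assuming its definition is in scope); it shows push constructing objects with placement new and pop destroying them explicitly:
#include <iostream>
#include <string>

int main() {
    circular_buffer<std::string> buf(3);   // raw storage for 3 strings, none constructed yet

    buf.push("one");
    buf.push("two");
    buf.push("three");
    buf.push("four");                      // buffer is full, so "one" is popped (destroyed) first

    std::cout << buf.front() << '\n';      // prints "two"
    buf.pop();
    std::cout << buf.front() << '\n';      // prints "three"
}                                          // ~circular_buffer destroys "three" and "four", then frees the buffer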
When you create an object with new, you are responsible for calling delete. When you create an object with make_shared, the resulting shared_ptr is responsible for keeping count and calling delete when the use count goes to zero.
Going out of scope does mean leaving a block. This is when the destructor is called, assuming that the object was not allocated with new (i.e. it is a stack object).
About the only time when you need to call a destructor explicitly is when you allocate the object with a placement new.
1) Objects are not created 'via pointers'. Rather, a pointer is assigned the address of any object you 'new'. Assuming this is what you mean: if you call 'delete' on the pointer, it will actually delete (and call the destructor on) the object the pointer dereferences. If you instead reassign the pointer to another object, there will be a memory leak; nothing in C++ will collect your garbage for you.
2) These are two separate questions. A variable goes out of scope when the stack frame it's declared in is popped off the stack. Usually this is when you leave a block. Objects on the heap never go out of scope, though the pointers to them on the stack may. Nothing in particular guarantees that the destructor of an object in a linked list will be called.
3) Not really. There may be Deep Magic that would suggest otherwise, but typically you want to match up your 'new' keywords with your 'delete' keywords, and put everything in your destructor necessary to make sure it properly cleans itself up. If you don't do this, be sure to comment the destructor with specific instructions to anyone using the class on how they should clean up that object's resources manually.
Pointers -- Regular pointers don't support RAII. Without an explicit delete, there will be garbage. Fortunately C++ has smart pointers (such as std::unique_ptr) that handle this for you!
Scope -- Think of when a variable becomes invisible to your program. Usually this is at the end of {block}, as you point out.
Manual destruction -- Never attempt this. Just let scope and RAII do the magic for you.
To give a detailed answer to question 3: yes, there are (rare) occasions when you might call the destructor explicitly, in particular as the counterpart to a placement new, as dasblinkenlight observes.
To give a concrete example of this:
#include <iostream>
#include <new>
struct Foo
{
Foo(int i_) : i(i_) {}
int i;
};
int main()
{
// Allocate a chunk of memory large enough to hold 5 Foo objects.
int n = 5;
char *chunk = static_cast<char*>(::operator new(sizeof(Foo) * n));
// Use placement new to construct Foo instances at the right places in the chunk.
for(int i=0; i<n; ++i)
{
new (chunk + i*sizeof(Foo)) Foo(i);
}
// Output the contents of each Foo instance and use an explicit destructor call to destroy it.
for(int i=0; i<n; ++i)
{
Foo *foo = reinterpret_cast<Foo*>(chunk + i*sizeof(Foo));
std::cout << foo->i << '\n';
foo->~Foo();
}
// Deallocate the original chunk of memory.
::operator delete(chunk);
return 0;
}
The purpose of this kind of thing is to decouple memory allocation from object construction.
Remember that the constructor of an object is called immediately after its memory is allocated, whereas the destructor is called just before its memory is deallocated.
Whenever you use "new", that is, attach an address to a pointer and claim space on the heap, you need to "delete" it later.
1. Yes, when you delete something, the destructor is called.
2. When the destructor of the linked list is called, its objects' destructors are called. But if they are pointers, you need to delete them manually.
3. When the space was claimed by "new".
Yes, a destructor (a.k.a. dtor) is called when an object goes out of scope if it is on the stack or when you call delete on a pointer to an object.
If the pointer is deleted via delete then the dtor will be called. If you reassign the pointer without calling delete first, you will get a memory leak because the object still exists in memory somewhere. In the latter instance, the dtor is not called.
A good linked list implementation will call the dtor of all objects in the list when the list is being destroyed (because you either called some method to destroy it or it went out of scope itself). This is implementation dependent.
I doubt it, but I wouldn't be surprised if there is some odd circumstance out there.
If the object is not created via a pointer (for example, A a1 = A();), the destructor is called when the object is destroyed, which is always when the function where the object lives finishes. For example:
void func()
{
...
A a1 = A();
...
}//finish
the destructor is called when execution reaches the line marked "finish".
If the object is created via a pointer (for example, A *a2 = new A();), the destructor is called when the pointer is deleted (delete a2;). If the pointer is not deleted explicitly by the user, or is given a new address before being deleted, a memory leak occurs. That is a bug.
In a linked list, if we use std::list<>, we needn't care about the destructor or memory leaks, because std::list<> takes care of all of this for us. In a linked list written by ourselves, we should write the destructor and delete the pointers explicitly. Otherwise, it will cause a memory leak.
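A rough sketch of what that looks like in a hand-written list (the names here are mine, not from any library): the list owns its nodes, so its destructor must walk the chain and delete every node.
struct Node {
    int value;
    Node *next;
};

struct List {
    Node *head;

    List() : head(0) {}

    void push_front(int v) {
        Node *n = new Node;
        n->value = v;
        n->next = head;
        head = n;
    }

    ~List() {
        while (head) {
            Node *next = head->next;
            delete head;        // without this loop, every node would leak
            head = next;
        }
    }
};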
We rarely call a destructor manually. It is a function provided for the system to call.
Sorry for my poor English!
Assume you have an object of class Fool.
class Fool
{
int a,b,c;
double* array ;
//...
~Fool()
{
// destroys the array..
delete[] array ;
}
};
Fool *fool = new Fool() ;
Now, I know you shouldn't, but some fool calls the destructor on fool anyway. fool->~Fool();.
Does that mean fool's memory is freed, (ie a,b,c are invalid) or does that mean only whatever deallocations in ~Fool() function occur (ie the array is deleted only?)
So I guess my question is, is a destructor just another function that gets called when delete is called on an object, or does it do more?
If you write
fool->~Fool();
You end the object's lifetime, which invokes the destructor and reclaims the inner array array. The memory holding the object is not freed, however, which means that if you want to bring the object back to life using placement new:
new (fool) Fool;
you can do so.
According to the spec, reading or writing the values of the fields of fool after you explicitly invoke the destructor results in undefined behavior because the object's lifetime has ended, but the memory holding the object should still be allocated and you will need to free it by invoking operator delete:
fool->~Fool();
operator delete(fool);
The reason to use operator delete instead of just writing
delete fool;
is that the latter has undefined behavior, because fool's lifetime has already ended. Using the raw deallocation routine operator delete ensures that the memory is reclaimed without trying to do anything to end the object's lifetime.
Of course, if the memory for the object didn't come from new (perhaps it's stack-allocated, or perhaps you're using a custom allocator), then you shouldn't use operator delete to free it. If you did, you'd end up with undefined behavior (again!). This seems to be a recurring theme in this question. :-)
Hope this helps!
The destructor call does just that, it calls the destructor. Nothing more and nothing less. Allocation is separate from construction, and deallocation from destruction.
The typical sequence is this:
1. Allocate memory
2. Construct object
3. Destroy object (assuming no exception during construction)
4. Deallocate memory
In fact, if you run this manually, you will have to call the destructor yourself:
void * addr = ::operator new(sizeof(Fool));
Fool * fp = new (addr) Fool;
fp->~Fool();
::operator delete(addr);
The automatic way of writing this is of course Fool * fp = new Fool; delete fp;. The new expression invokes allocation and construction for you, and the delete expression calls the destructor and deallocates the memory.
Does that mean fool's memory is freed, (ie a,b,c are invalid) or does
that mean only whatever deallocations in ~Fool() function occur (ie
the array is deleted only?)
Fool::~Fool() has zero knowledge about whether the Fool instance is stored in dynamic storage (via new) or whether it is stored in automatic storage (i.e. stack objects). Since the object ceases to exist after the destructor is run, you can't assume that a, b, c and array will stay valid after the destructor exits.
However, because Fool::~Fool() knows nothing about how the Fool was allocated, calling the destructor directly on a new-allocated Fool will not free the underlying memory that backs the object.
You should not access a, b, and c after the destructor is called, even if it's an explicit destructor call. You never know what your compiler puts in your destructor that might make those values invalid.
However, the memory is not actually freed in the case of an explicit destructor call. This is by design; it allows an object constructed using placement new to be cleaned up.
Example:
char buf[sizeof (Fool)];
Fool* fool = new (buf) Fool; // fool points to buf
// ...
fool->~Fool();
The easiest place to see that the destructor is distinct from deallocation via delete is when the allocation is automatic in the first place:
{
Fool fool;
// ~Fool called on exit from block; nary a sign of new or delete
}
Note also the STL containers make full use of the explicit destructor call. For example, a std::vector<> treats storage and contained objects lifetime quite separately.
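A small sketch of that separation (nothing exotic, just ordinary std::vector behavior):
#include <string>
#include <vector>

void example()
{
    std::vector<std::string> v;
    v.reserve(8);            // raw storage for 8 strings is allocated, but no string is constructed
    v.push_back("hello");    // one string is constructed (internally, via placement new)
    v.pop_back();            // that string's destructor runs, but the storage is kept
    // v.capacity() is still at least 8: storage and object lifetime are handled separately
}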
I am still new to C++. I have found that you can instantiate an instance in C++ in two different ways:
// First way
Foo foo;
foo.do_something();
// Second way
Baz *baz = new Baz();
baz->do_something();
With both, I don't see a big difference and can access the attributes either way. Which is the preferred way in C++? Or, if the question is not relevant, when do we use which, and what is the difference between the two?
Thank you for your help.
The question is not relevant: there's no preferred way, those just do different things.
C++ both has value and reference semantics. When a function asks for a value, it means you'll pass it a copy of your whole object. When it asks for a reference (or a pointer), you'll only pass it the memory address of that object. Both semantics are convertible, that is, if you get a value, you can get a reference or a pointer to it and then use it, and when you get a reference you can get its value and use it. Take this example:
void foo(int bar) { bar = 4; }
void foo(int* bar) { *bar = 4; }
void test()
{
int someNumber = 3;
foo(someNumber); // calls foo(int)
std::cout << someNumber << std::endl;
// printed 3: someNumber was not modified because of value semantics,
// as we passed a copy of someNumber to foo, so changes were not propagated
// back to our local copy
foo(&someNumber); // calls foo(int*)
std::cout << someNumber << std::endl;
// printed 4: someNumber was modified, because passing a pointer lets people
// change the pointed value
}
It is a very, very common thing to create a reference to a value (i.e. get the pointer of a value), because references are very useful, especially for complex types, where passing a reference notably avoids a possibly costly copy operation.
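For instance (a sketch with std::string standing in for a "complex type"), passing by value copies the whole object, while passing by const reference does not:
#include <cstddef>
#include <string>

std::size_t lengthByValue(std::string s)      { return s.size(); }  // copies the entire string
std::size_t lengthByRef(const std::string &s) { return s.size(); }  // no copy, read-only access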
Now, the instantiation way you'll use depends on what you want to achieve. The first way you've shown uses automatic storage; the second uses the heap.
The main difference is that objects on automatic storage are destroyed with the scope in which they existed (a scope being roughly defined as a pair of matching curly braces). This means that you must not ever return a reference to an object allocated on automatic storage from a regular function, because by the time your function returns, the object will have been destroyed and its memory space may be reused for anything at any later point by your program. (There are also performance benefits for objects allocated on automatic storage because your OS doesn't have to look up a place where it might put your new object.)
Objects on the heap, on the other hand, continue to exist until they are explicitly deleted by a delete statement. There is an OS- and platform-dependent performance overhead to this, since your OS needs to look up your program's memory to find a large enough unoccupied place to create your object at. Since C++ is not garbage-collected, you must instruct your program when it is time to delete an object on the heap. Failure to do so leads to leaks: objects on the heap that are no longer referenced by any variable, but were not explicitly deleted and therefore will exist until your program exits.
So it's a matter of tradeoff. Either you accept that your values can't outlive your functions, or you accept that you must explicitly delete it yourself at some point. Other than that, both ways of allocating objects are valid and work as expected.
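A short sketch of that tradeoff with a hypothetical type Foo: returning by value keeps lifetime automatic, while returning a pointer to a heap object shifts the delete onto the caller.
struct Foo {};

Foo makeByValue()
{
    Foo f;
    return f;          // a copy (or move) is handed back; the local f itself dies here
}

Foo *makeOnHeap()
{
    return new Foo;    // outlives the function, but someone must delete it
}

void example()
{
    Foo a = makeByValue();   // nothing to clean up
    Foo *b = makeOnHeap();
    delete b;                // forgetting this line is a leak
}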
For further reference, automatic storage means that the object is allocated wherever its parent scope was. For instance, if you have a class Foo that contains a std::string, the std::string will exist wherever you allocate your Foo object.
class Foo
{
public:
// in this context, automatic storage refers to wherever Foo will be allocated
std::string a;
};
void foo()
{
// in this context, automatic storage refers to your program's stack
Foo bar; // 'bar' is on the stack, so 'a' is on the stack
Foo* baz = new Foo; // 'baz' is on the heap, so 'a' is on the heap too
// but still, in both cases 'a' will be deleted once the holding object
// is destroyed
}
As stated above, you cannot directly leak objects that reside on automatic storage, but you cannot use them once the scope in which they were created is destroyed. For instance:
int* foo()
{
int a; // cannot be leaked: automatically managed by the function scope
return &a; // BAD: a doesn't exist anymore
}
int* foo()
{
int* a = new int; // can be leaked
return a; // NOT AS BAD: now the pointer points to somewhere valid,
// but you eventually need to call `delete a` to release the memory
}
The first way -- "allocating on the stack" -- is generally faster and preferred much of the time. The constructed object is destroyed when the function returns. This is both a blessing -- no memory leaks! -- and a curse, because you can't create an object that lives for a longer time.
The second way -- "allocating on the heap" is slower, and you have to manually delete the objects at some point. But it has the advantage that the objects can live on until you delete them.
The first way allocates the object on the stack (though the class itself may have heap-allocated members). The second way allocates the object on the heap, and must be explicitly delete'd later.
It's not like in languages like Java or C# where objects are always heap-allocated.
They do very different things. The first one allocates an object on the stack, the 2nd on the heap. The stack allocation only lasts for the lifetime of the declaring method; the heap allocation lasts until you delete the object.
The second way is the only way to dynamically allocate objects, but comes with the added complexity that you must remember to return that memory to the operating system (via delete/delete[]) when you are done with it.
The first way will create the object on the stack, and the object will go away when you return from the function it was created in.
The second way will create the object on the heap, and the object will stick around until you call delete foo;.
If the object is just a temporary variable, the first way is better. If it's more permanent data, the second way is better - just remember to call delete when you're finally done with it so you don't build up cruft on your heap.
Hope this helps!
Using C++:
I currently have a method in which if an event occurs an object is created, and a pointer to that object is stored in a vector of pointers to objects of that class. However, since objects are destroyed once the local scope ends, does this mean that the pointer I stored to the object in the vector is now null or undefined? If so, are there any general ways to get around this - I'm assuming the best way would be to allocate on the heap.
I ask this because when I try to access the vector and do operations on the contents I am getting odd behavior, and I'm not sure if this could be the cause or if it's something totally unrelated.
It depends on how you allocate the object. If you allocate the object as an auto variable (i.e. on the stack), then any pointer to that object will become invalid once the object goes out of scope, and so dereferencing the pointer will lead to undefined behavior.
For example:
Object* pointer;
{
Object myobject;
pointer = &myobject;
}
pointer->doSomething(); // <--- INVALID! myobject is now out of scope
If, however, you allocate the object on the Heap, using the new operator, then the object will remain valid even after you exit the local scope. However, remember that there is no automatic garbage collection in C++, and so you must remember to delete the object or you will have a memory leak.
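For contrast, a minimal sketch of the heap-allocated version (with the same hypothetical Object type as above): the pointer stays valid after the inner scope ends, but now you own the cleanup.
struct Object {                 // hypothetical type, as in the snippet above
    void doSomething() {}
};

void example()
{
    Object *pointer;
    {
        pointer = new Object;   // the heap object survives the inner scope
    }
    pointer->doSomething();     // valid: the object still exists
    delete pointer;             // but you must release it yourself, or it leaks
}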
So if I understand correctly you have described the following scenario:
class MyClass
{
public:
int a;
SomeOtherClass b;
};
void Test()
{
std::vector<MyClass*> v;
for (int i=0; i < 10; ++i)
{
MyClass b;
v.push_back(&b);
}
// now v holds 10 pointers to strange and scary places.
}
This is definitely bad.
There are two primary alternatives:
allocate the objects on the heap using new.
make the vector hold instances of MyClass (i.e. std::vector<MyClass>)
I generally prefer the second option when possible. This is because I don't have to worry about manually deallocating memory, the vector does it for me. It is also often more efficient. The only problem, is that I would have to be sure to create a copy constructor for MyClass. That means a constructor of the form MyClass(const MyClass& other) { ... }.
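A sketch of that second option, reusing the MyClass from the snippet above (and assuming SomeOtherClass is itself copyable):
#include <vector>

void Test()
{
    std::vector<MyClass> v;     // the vector owns its elements
    for (int i = 0; i < 10; ++i)
    {
        MyClass b;
        v.push_back(b);         // a copy is stored; no pointers to dead locals
    }
}   // all 10 elements are destroyed along with the vector; nothing to delete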
If you store a pointer to an object, and that object is destroyed (e.g. goes out of scope), that pointer will not be null, but if you try to use it you will get undefined behavior. So if one of the pointers in your vector points to a stack-allocated object, and that object goes out of scope, that pointer will become impossible to use safely. In particular, there's no way to tell whether a pointer points to a valid object or not; you just have to write your program in such a way that pointers never ever ever point to destroyed objects.
To get around this, you can use new to allocate space for your object on the heap. Then it won't be destroyed until you delete it. However, this takes a little care to get right as you have to make sure that your object isn't destroyed too early (leaving another 'dangling pointer' problem like the one you have now) or too late (creating a memory leak).
To get around that, the common approach in C++ is to use what's called (with varying degrees of accuracy) a smart pointer. If you're new to C++ you probably shouldn't worry about these yet, but if you're feeling ambitious (or frustrated with memory corruption bugs), check out shared_ptr from the Boost library.
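If you do go the smart-pointer route, the standard library now has the same facility (std::shared_ptr, essentially Boost's shared_ptr adopted into C++11). A sketch, again reusing the MyClass from above and assuming it is default-constructible:
#include <memory>
#include <vector>

void Test()
{
    std::vector<std::shared_ptr<MyClass>> v;
    for (int i = 0; i < 10; ++i)
    {
        v.push_back(std::make_shared<MyClass>());
    }
}   // the shared_ptrs are destroyed with the vector and delete their objects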
If you have a local variable, such as an int counter, then it will go out of scope when you exit the function. But the objects you created with new do not go out of scope; as long as your vector holds pointers to them, you can still reach them, and standard C++ has no garbage collector that would free them behind your back.
I haven't seen a situation where I have done new and my memory was freed without me doing anything.
To check (in no particular order):
Did you hit an exception during construction of member objects whose pointers you store?
Do you have a null-pointer in the container that you dereference?
Are you using the vector object after it goes out of scope? (Looks unlikely, but I still have to ask.)
Are you cleaning up properly?
Here's a sample to help you along:
void SomeClass::Erase(std::vector<YourType*> &a)
{
for( size_t i = 0; i < a.size(); i++ ) delete a[i];
a.clear();
}