Dynamic memory allocation in vector - C++

I have a doubt regarding memory allocation in vector (STL - C++). As far as I know, its capacity gets doubled every time the vector's size reaches its capacity. If that is the case, how can the allocation stay contiguous? And how does it still allow O(1) access through the [] operator, just like arrays? Can anyone explain this behavior?
(A list also has dynamic memory allocation, but we cannot access its elements using the [] operator; how is this still possible with vector?)
#include <iostream>
#include <vector>
using namespace std;

int main() {
    vector<int> v;
    for (int i = 0; i < 10; i++) {
        v.push_back(i);
        cout << v.size() << " " << v.capacity() << "\n";
    }
    return 0;
}
Output:
1 1
2 2
3 4
4 4
5 8
6 8
7 8
8 8
9 16
10 16

As far as I know, its capacity gets doubled dynamically every time the size of vector gets equal to its capacity.
It does not need to double as it does in your case; the growth factor is implementation-defined, so it may differ if you use another compiler (or another standard library).
If this is the case, how come the allocation be continuous?
If the vector cannot extend its current block of contiguous memory, it has to move its data to a new contiguous block that meets its size requirements. The old block is then marked as free, so that others can use it.
How does it still allow to use the [] access operator for O(1) access just like arrays?
Because of this, element access through the [] operator is just a pointer plus an offset, so access to the data is O(1).
List also has dynamic memory allocation but we cannot access its elements using [] access operator, how is it still possible with vector?
A list (std::list for example) is totally different from a std::vector. In the case of a C++ std::list it saves nodes with data, a pointer to the next node and a pointer the previous node (double-linked list). So you have to walk through the list to get one specific node you want.
Vectors work like said above.
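To make the pointer-plus-offset point concrete, here is a minimal sketch; it assumes nothing beyond the contiguity guarantee (data() returns a pointer to the first element of the block):

#include <cassert>
#include <vector>

int main() {
    std::vector<int> v{10, 20, 30};

    // operator[] is plain base-pointer-plus-offset arithmetic:
    // v[i] is the element at data() + i, a single O(1) address computation.
    int* base = v.data();
    assert(&v[2] == base + 2);   // same address...
    assert(v[2] == *(base + 2)); // ...same element
}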

The vector has to store its objects in one contiguous memory area. Thus, when it needs to increase its capacity, it has to allocate a new (larger) memory area (or expand the one it already has, if that's possible) and either copy or move the objects from the old, smaller area to the newly allocated one.
This can be made apparent by using a class with a copy/move constructor with some side effect (ideone link):
#include <iostream>
#include <vector>

using std::cout;
using std::endl;
using std::vector;

#define V(p) static_cast<void const*>(p)

struct Thing {
    Thing() {}
    Thing(Thing const & t) {
        cout << "Copy " << V(&t) << " to " << V(this) << endl;
    }
    Thing(Thing && t) /* noexcept */ {
        cout << "Move " << V(&t) << " to " << V(this) << endl;
    }
};

int main() {
    vector<Thing> things;
    for (int i = 0; i < 10; ++i) {
        cout << "Have " << things.size() << " (capacity " << things.capacity()
             << "), adding another:\n";
        things.emplace_back();
    }
}
This will lead to output similar to
[..]
Have 2 (capacity 2), adding another:
Move 0x2b652d9ccc50 to 0x2b652d9ccc30
Move 0x2b652d9ccc51 to 0x2b652d9ccc31
Have 3 (capacity 4), adding another:
Have 4 (capacity 4), adding another:
Move 0x2b652d9ccc30 to 0x2b652d9ccc50
Move 0x2b652d9ccc31 to 0x2b652d9ccc51
Move 0x2b652d9ccc32 to 0x2b652d9ccc52
Move 0x2b652d9ccc33 to 0x2b652d9ccc53
[..]
This shows that, when adding a third object to the vector, the two objects it already contains are moved from one contiguous area to another contiguous area (note the addresses within each area increasing by 1, i.e. by sizeof(Thing)). Finally, when adding the fifth object, you can see that the third object was indeed placed directly after the second.
When does it move and when does it copy? The move constructor is only used when it is marked noexcept (or when the compiler can deduce that it cannot throw). Otherwise, if the move were allowed to throw, the vector could end up in a state where some of its objects are in the new memory area while the rest are still in the old one.
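The standard library exposes exactly this decision as std::move_if_noexcept, which vector implementations typically use when relocating elements. A minimal sketch of its behavior (Safe and Risky are illustrative names, not from the answer above):

#include <iostream>
#include <utility>

struct Safe {
    Safe() = default;
    Safe(const Safe&) { std::cout << "copy\n"; }
    Safe(Safe&&) noexcept { std::cout << "move\n"; }
};

struct Risky {
    Risky() = default;
    Risky(const Risky&) { std::cout << "copy\n"; }
    Risky(Risky&&) { std::cout << "move\n"; } // not noexcept: may throw
};

int main() {
    Safe s;
    Safe s2(std::move_if_noexcept(s));   // prints "move"
    Risky r;
    Risky r2(std::move_if_noexcept(r));  // prints "copy": the move could throw
}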

The question should be considered at two different levels.
From the standard's point of view, a vector is required to provide contiguous storage, so that the programmer can use the address of its first element as the address of the first element of an array. It is also required to let its capacity grow as you add new elements, reallocating while keeping the previous elements - though their addresses may change.
From an implementation's point of view, it can try to extend the allocated memory in place and, if it cannot, allocate a brand new block of memory and move- or copy-construct the existing elements into the newly allocated zone. The size increase is not specified by the standard and is left to the implementation, but you are right that doubling the allocated size each time is common.
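A short sketch of what the first guarantee buys you: the vector's storage can be handed to any function written against a plain array (sum_array here is a hypothetical stand-in for such a C-style API):

#include <cstddef>
#include <iostream>
#include <vector>

// A C-style function that knows nothing about std::vector.
int sum_array(const int* p, std::size_t n) {
    int s = 0;
    for (std::size_t i = 0; i < n; ++i) s += p[i];
    return s;
}

int main() {
    std::vector<int> v{1, 2, 3, 4};
    // data() (or &v[0] for a non-empty vector) points at the contiguous block.
    std::cout << sum_array(v.data(), v.size()) << "\n"; // prints 10
}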

Related

Vector of pointers undefined behaviour

I'm trying to make a vector of pointers whose elements point to the elements of a vector of int. (I'm solving a competitive-programming-style problem; that's why it sounds kind of nonsensical.)
but here's the code:
#include <bits/stdc++.h>
using namespace std;

int ct = 0;
vector<int> vec;
vector<int*> rf;

void addRef(int n) {
    vec.push_back(n);
    rf.push_back(&vec[ct]);
    ct++;
}

int main() {
    addRef(1);
    addRef(2);
    addRef(5);
    for (int i = 0; i < ct; i++) {
        cout << *rf[i] << ' ';
    }
    cout << endl;
    for (int i = 0; i < ct; i++) {
        cout << vec[i] << ' ';
    }
}
When I execute the code, it shows weird behaviour that I don't understand. The first element of rf (the vector<int*>) does not seem to point to vec's (the vector<int>'s) element, while the remaining elements do.
here's the output when I run it on Dev-C++:
1579600 2 5
1 2 5
When I tried to run the code here, the output is even weirder:
1197743856 0 5
1 2 5
The code is intended to produce the same output on the first line as on the second.
Can you guys explain why it happens? Is there any mistake in my implementation?
thanks
Adding elements to a std::vector with push_back or similar may invalidate all iterators and references to its elements. See https://en.cppreference.com/w/cpp/container/vector/push_back.
The idea is that in order to grow the vector, it may not have enough free memory to expand into, and thus may have to move the whole array to some other location in memory, freeing the old block. That means in particular that your pointers now point to memory that has been freed, or reused for something else.
If you want to keep this approach, you will need to resize() or reserve() a sufficient number of elements in vec before starting. Which of course defeats the whole purpose of a std::vector, and you might as well use an array instead.
The vector is changing size, and the addresses you are saving might not be the ones you want. You can preallocate memory using reserve() so that the vector will not reallocate.
vec.reserve(3);
addRef(1);
addRef(2);
addRef(5);
The problem occurs when you call vec.push_back(n) and vec’s internal array is already full. When that happens, the std::vector::push_back() method allocates a larger array, copies the contents of the full array over to the new array, then frees the old/full array and keeps the new one.
Usually that's all you need, but your program keeps pointers to elements of the old array inside rf, and these pointers all become dangling/invalid when the reallocation occurs, hence the funny (undefined) behavior.
An easy fix would be to call vec.reserve(100) (or similar) at the top of your program (so that no further reallocations are necessary). Alternatively, you could postpone adding pointers to rf until after you've finished adding all the values to vec.
Just don't take a pointer into a vector that may change soon: a vector copies its elements to a new block of memory when it enlarges its capacity.
Use an array to store the ints instead.
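If you want to keep both containers growable, another option (my own sketch, reusing the names from the question) is to store indices instead of pointers, since an index into vec stays valid across reallocations:

#include <cstddef>
#include <iostream>
#include <vector>

std::vector<int> vec;
std::vector<std::size_t> rf; // indices into vec instead of pointers

void addRef(int n) {
    vec.push_back(n);
    rf.push_back(vec.size() - 1); // an index survives reallocation
}

int main() {
    addRef(1);
    addRef(2);
    addRef(5);
    for (std::size_t i : rf) std::cout << vec[i] << ' '; // prints 1 2 5
    std::cout << '\n';
}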

What does std::vector look like in memory?

I read that std::vector should be contiguous. My understanding is that its elements should be stored together, not spread out across memory. I have simply accepted this fact and used this knowledge when, for example, using its data() method to get the underlying contiguous piece of memory.
However, I came across a situation, where the vector's memory behaves in a strange way:
std::vector<int> numbers;
std::vector<int*> ptr_numbers;

for (int i = 0; i < 8; i++) {
    numbers.push_back(i);
    ptr_numbers.push_back(&numbers.back());
}
I expected this to give me a vector of some numbers and a vector of pointers to these numbers. However, when listing the contents pointed to by ptr_numbers, I get different, seemingly random numbers, as though I were accessing the wrong parts of memory.
I have tried to check the contents every step:
for (int i = 0; i < 8; i++) {
    numbers.push_back(i);
    ptr_numbers.push_back(&numbers.back());
    for (auto ptr_number : ptr_numbers)
        std::cout << *ptr_number << std::endl;
    std::cout << std::endl;
}
The result looks roughly like this:
1
some random number
2
some random number
some random number
3
So it seems as though when I push_back() to the numbers vector, its older elements change their location.
So what does it exactly mean, that std::vector is a contiguous container and why do its elements move? Does it maybe store them together, but moves them all together, when more space is needed?
Edit: Is std::vector contiguous only since C++17? (Just to keep the comments on my previous claim relevant to future readers.)
It roughly looks like this (excuse my MS Paint masterpiece):
The std::vector instance you have on the stack is a small object containing a pointer to a heap-allocated buffer, plus some extra variables to keep track of the size and capacity of the vector.
So it seems as though when I push_back() to the numbers vector, its older elements change their location.
The heap-allocated buffer has a fixed capacity. When you reach the end of the buffer, a new buffer will be allocated somewhere else on the heap and all the previous elements will be moved into the new one. Their addresses will therefore change.
Does it maybe store them together, but moves them all together, when more space is needed?
Roughly, yes. Iterator and address stability of elements is guaranteed with std::vector only if no reallocation takes place.
I am aware, that std::vector is a contiguous container only since C++17
The memory layout of std::vector hasn't changed since its first appearance in the Standard. ContiguousContainer is just a "concept" that was added to differentiate contiguous containers from others at compile-time.
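A small sketch that makes the relocation visible by printing data() after every push_back; with reserve() the buffer (and therefore every element's address) stays put. The concrete addresses will of course differ from run to run:

#include <iostream>
#include <vector>

int main() {
    std::vector<int> v;
    for (int i = 0; i < 8; ++i) {
        v.push_back(i);
        // data() jumps to a new address whenever a reallocation occurs.
        std::cout << "size " << v.size() << ", buffer at "
                  << static_cast<const void*>(v.data()) << "\n";
    }

    std::vector<int> w;
    w.reserve(8); // pre-allocate: no reallocation in the loop below
    const void* before = w.data();
    for (int i = 0; i < 8; ++i) w.push_back(i);
    std::cout << "stable after reserve: "
              << (before == w.data() ? "yes" : "no") << "\n";
}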
The Answer
It's a single contiguous storage (a 1d array).
Each time it runs out of capacity it gets reallocated and stored objects are moved to the new larger place — this is why you observe addresses of the stored objects changing.
It has always been this way, not since C++17.
TL; DR
The storage grows geometrically to meet the amortized O(1) requirement of push_back(). The growth factor is 2 (Cap(n+1) = Cap(n) + Cap(n)) in most implementations of the C++ Standard Library (GCC, Clang, STLPort) and 1.5 (Cap(n+1) = Cap(n) + Cap(n) / 2) in the MSVC variant.
If you pre-allocate it with vector::reserve(N) and sufficiently large N, then addresses of the stored objects won't be changing when you add new ones.
In most practical applications it is usually worth pre-allocating at least 32 elements, to skip the first few reallocations that otherwise follow shortly one after another (0 → 1 → 2 → 4 → 8 → 16).
It is also sometimes practical to slow the growth down, switch to an arithmetic growth policy (Cap(n+1) = Cap(n) + Const), or stop growing entirely after some reasonably large size, to ensure the application does not waste memory or run out of it.
Lastly, in some practical applications, like column-based object storages, it may be worth giving up the idea of contiguous storage completely in favor of a segmented one (same as what std::deque does but with much larger chunks). This way the data may be stored reasonably well localized for both per-column and per-row queries (though this may need some help from the memory allocator as well).
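As a point of comparison for the segmented approach: std::deque's chunked storage is what lets it guarantee that push_back never invalidates references to existing elements (iterators are still invalidated). A minimal sketch of that guarantee:

#include <deque>
#include <iostream>

int main() {
    std::deque<int> d;
    d.push_back(1);
    int* first = &d.front();
    for (int i = 2; i <= 1000; ++i)
        d.push_back(i); // grows by adding chunks; existing elements never move
    std::cout << (*first == 1 ? "reference still valid\n" : "moved\n");
}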
std::vector being a contiguous container means exactly what you think it means.
However, many operations on a vector can re-locate that entire piece of memory.
One common case: when you add an element and the vector must grow, it may reallocate and copy all elements to another contiguous piece of memory.
So what does it exactly mean, that std::vector is a contiguous container and why do its elements move? Does it maybe store them together, but moves them all together, when more space is needed?
That's exactly how it works and why appending elements does indeed invalidate all iterators as well as memory locations when a reallocation takes place¹. This is not only valid since C++17, it has been the case ever since.
There are a couple of benefits from this approach:
It is very cache-friendly and hence efficient.
The data() method can be used to pass the underlying raw memory to APIs that work with raw pointers.
The cost of allocating new memory upon push_back, reserve or resize boils down to amortized constant time, because the geometric growth amortizes over the insertions (on reallocation the capacity is doubled in libc++ and libstdc++, and grows by a factor of approximately 1.5 in MSVC).
It allows for the least restricted iterator category, i.e., random access iterators, because classical pointer arithmetic works out well when the data is stored contiguously.
Move construction of a vector instance from another one is very cheap.
These implications can be considered the downsides of such a memory layout:
All iterators and pointers to elements are invalidated upon modifications of the vector that imply a reallocation. This can lead to subtle bugs, e.g. when erasing elements while iterating over a vector (the safe idiom is sketched below).
Operations like push_front (which std::list or std::deque provide) aren't available (insert(vec.begin(), element) works, but is possibly expensive¹), nor is efficient merging/splicing of multiple vector instances.
¹ Thanks to #FrançoisAndrieux for pointing that out.
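To illustrate the first downside, here is a small sketch of the usual safe erase-while-iterating idiom, which always continues from the iterator that erase() returns instead of reusing an invalidated one (removing the even numbers):

#include <iostream>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3, 4, 5, 6};
    for (auto it = v.begin(); it != v.end(); /* no ++it here */) {
        if (*it % 2 == 0)
            it = v.erase(it); // erase invalidates it; use the returned iterator
        else
            ++it;
    }
    for (int x : v) std::cout << x << ' '; // prints 1 3 5
}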
In terms of the actual structure, an std::vector looks something like this in memory:
struct vector {       // simple C struct as an example (T is the type supplied by the template)
    T *begin;         // vector::begin() probably returns this value
    T *end;           // vector::end() probably returns this value
    T *end_capacity;  // first non-valid address
    // Allocator state might be stored here (most allocators are stateless)
};
Relevant code snippet from the libc++ implementation as used by LLVM
Printing the raw memory contents of an std::vector:
(Don't do this if you don't know what you're doing!)
#include <iostream>
#include <vector>

struct vector {
    int *begin;
    int *end;
    int *end_capacity;
};

int main() {
    union vecunion {
        std::vector<int> stdvec;
        vector myvec;
        ~vecunion() { /* do nothing */ }
    } vec = { std::vector<int>() };

    union veciterator {
        std::vector<int>::iterator stditer;
        int *myiter;
        ~veciterator() { /* do nothing */ }
    };

    vec.stdvec.push_back(1); // add something so we don't have an empty vector

    std::cout
        << "vec.begin          = " << vec.myvec.begin << "\n"
        << "vec.end            = " << vec.myvec.end << "\n"
        << "vec.end_capacity   = " << vec.myvec.end_capacity << "\n"
        << "vec's size         = " << vec.myvec.end - vec.myvec.begin << "\n"
        << "vec's capacity     = " << vec.myvec.end_capacity - vec.myvec.begin << "\n"
        << "vector::begin()    = " << (veciterator { vec.stdvec.begin() }).myiter << "\n"
        << "vector::end()      = " << (veciterator { vec.stdvec.end() }).myiter << "\n"
        << "vector::size()     = " << vec.stdvec.size() << "\n"
        << "vector::capacity() = " << vec.stdvec.capacity() << "\n";
}

C++ Growth of containers containing containers?

If I have a std::vector<std::set<int>>, the vector will reallocate if you insert past its capacity. In the case where you have another resizable type inside the vector, does the vector only hold a pointer to said type?
In particular, I want to know how memory is allocated when a vector holds an arbitrary type.
std::vector<int> a(10);               // Size will be sizeof(int) * 10
std::vector<std::set<int>> b(10);
b[0] = {0, 0, 0, 0, 0, 0, 0, .... };  // Is b's size affected by the sets inside?
C++ objects can only have one size, but may include pointers to arbitrarily sized heap memory. So, yes, container objects themselves generally include a pointer to heap memory and probably don't include any actual items. (The only typical exception is string types, which sometimes have a "small string optimization" that allows string objects to contain small strings directly in the object without allocating heap memory.)
The memory that any vector allocates "by itself" for its elements will always be sizeof(element_type) times the number of elements it has room for (its capacity).
The vector can only account for the per-element footprint that is visible at compile time; it doesn't care about any allocations done by the element class itself.
Think of a vector as an array on steroids. Like an array, a vector consists of a contiguous block of memory where all elements have the same size. To fulfill this requirement, it must know at compile time how big each element will be.
Imagine a std::set to have these member variables:
struct SomeSet
{
    size_t size;
    SomeMagicInternalType* data;
};
So no matter how data will be allocated at runtime, the vector only allocates memory per element for what it knows at compile time:
sizeof(SomeSet::size) + sizeof(SomeSet::data)
Which would be 4 + 4 on a 32-bit machine.
Consider this example:
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v;
    std::cout << sizeof(v) << "\n";
    std::cout << v.size() << "\n";
    v.push_back(3);
    std::cout << sizeof(v) << "\n";
    std::cout << v.size() << "\n";
}
The exact number may differ, but I get as output:
24
0
24
1
The size of the vector object itself (what sizeof reports) does not change when you add an element. The same is true for a set; thus a vector<set> does not need to reallocate when one of its elements adds or removes an element.
A set does not store its elements as direct members; otherwise sets with different numbers of elements would be different types. The elements are stored on the heap and as such do not contribute to the size of the set object itself.
A std::vector<T> holds objects of type T. When it gets resized it copies or moves those objects as needed. A std::vector<std::set<int>> is no different; it holds objects of type std::set<int>.
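A short sketch of that point: a std::set object inside the vector keeps a fixed footprint no matter how many elements it holds, because the set's nodes live on the heap:

#include <iostream>
#include <set>
#include <vector>

int main() {
    std::vector<std::set<int>> b(10);
    std::cout << "sizeof(std::set<int>) = " << sizeof(std::set<int>) << "\n";
    for (int i = 0; i < 1000; ++i)
        b[0].insert(i); // the set's nodes are heap-allocated
    // The vector's own buffer is still 10 * sizeof(std::set<int>) bytes:
    std::cout << "sizeof(b[0]) = " << sizeof(b[0]) << "\n"; // unchanged
}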

Maximum number of stl::list objects

The problem is to find periodic graph patterns in a dataset. So I have 1000 timesteps with a graph(encoded as integers) in each timestep. So, there are 999 possible periods in which the graph can occur. Also I define a phase offset defined as (timestep mod period). For a graph which was first seen in the 5th timestep with period 2, the phase offset is 1.
I am trying to create a bidimensional array of lists in C++. Each cell contains a list containing graphs having a specified period and phase offset. I keep inserting graphs in the corresponding lists.
list<ListNode> A[timesteps][phase offsets]
ListNode is a class with 4 integer variables.
This gives me a segmentation fault. Using 500 for the size runs fine. Is this due to lack of memory, or some other issue?
Thanks.
Probably due to limited stack size.
You're creating an array of 1000 x 1000 = 1,000,000 objects that are almost certainly at least 4 bytes apiece, so roughly 4 megabytes at a minimum. Assuming that's inside a function, it will have automatic storage duration, which normally translates to being allocated on the stack. Typical stack sizes are around 1 to 4 megabytes.
Try something like: std::vector<ListNode> A(1000*1000); (and, if necessary, create a wrapper to make it look 2-dimensional).
Edit: The wrapper would overload an operator to give you 2D addressing:
#include <cstddef>
#include <vector>

template <class T>
class array_2D {
    std::vector<T> data;
    std::size_t cols;
public:
    // Initializers listed in member declaration order (data, then cols):
    array_2D(std::size_t x, std::size_t y) : data(x * y), cols(x) {}
    T &operator()(std::size_t x, std::size_t y) { return data[y * cols + x]; }
};
You may want to embellish that (e.g., with bounds checking) but that's the general idea. Addressing it would use (), as in:
array_2D<int> x(1000, 1000);
x(100, 3) = 2;
int y = x(20, 20);
Sounds like you're running out of stack space. Try allocating it on the heap, e.g. through std::vector, and wrap it in try ... catch to see out-of-memory errors instead of a crash.
(Edit: Don't use std::array, since it also stores its elements in place, i.e. on the stack when declared as a local.)
try {
    std::vector<std::list<ListNode> > a(1000000); // create 1000*1000 lists
    // index a by e.g. [index1 * 1000 + index2]
    a[42 * 1000 + 18].size(); // size of that list

    // or if you really want double subscripting without a wrapper function:
    std::vector<std::vector<std::list<ListNode> > > a2(1000);
    for (size_t i = 0; i < 1000; ++i) { // do 1000 times:
        a2[i].resize(1000); // create 1000 default-constructed lists in each
    }
    a2[42][18].size(); // size of that list
} catch (std::exception const& e) {
    std::cerr << "Caught " << typeid(e).name() << ": " << e.what() << std::endl;
}
In libstdc++ on a 32-bit system a std::list object weighs 8 bytes (only the object itself, not counting the allocations it may make), and even in other implementations I don't think it will be much different; so you are allocating about 8 MB of data, which isn't much per se on a regular computer, but if you put that declaration in a function it will be a local variable, thus allocated on the stack, which is quite limited in size (a few MB at most).
You should allocate that thing on the heap, e.g. using new or, even better, using a std::vector.
By the way, it doesn't seem right that you need a 1000x1000 array of std::list, could you specify exactly what you are trying to achieve? Probably there are data structures that better fit your needs.
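For instance, if most (period, phase) cells stay empty, a sparse structure avoids the 1000x1000 allocation entirely. A hedged sketch of one such alternative (my own suggestion; ListNode's members are invented for illustration):

#include <list>
#include <map>
#include <utility>

struct ListNode { int a, b, c, d; }; // stand-in for the question's 4-int class

int main() {
    // Only the (period, phase offset) cells that actually receive a graph
    // consume memory; empty cells cost nothing.
    std::map<std::pair<int, int>, std::list<ListNode>> A;
    A[{2, 1}].push_back(ListNode{5, 0, 0, 0}); // period 2, phase offset 1
    return A[{2, 1}].size() == 1 ? 0 : 1;
}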
You're declaring a two-dimensional array [1000]x[1000] of list<ListNode>. I don't think that's what you intended.
The segmentation fault is probably from trying to use elements of the list that aren't valid.

std::vector on VisualStudio2008 appears to be suboptimally implemented - too many copy constructor calls

I've been comparing an STL implementation of a popular XmlRpc library with an implementation that mostly avoids the STL. The STL implementation is much slower - I got 47 s down to 4.5 s. I've diagnosed some of the reasons: it's partly due to std::string being misused (e.g. the author should have used const std::string& wherever possible - don't just use std::strings as if they were Java strings), but it's also because copy constructors were constantly being called each time the vector outgrew its bounds, which was exceedingly often. The copy constructors were very slow because they did deep copies of trees (of XmlRpc values).
I was told by someone else on StackOverflow that std::vector implementations typically double the size of the buffer each time they outgrow. This does not seem to be the case on VisualStudio 2008: to add 50 items to a std::vector took 177 calls of the copy constructor. Doubling each time should call the copy constructor 64 times. If you were very concerned about keeping memory usage low, then increasing by 50% each time should call the copy constructor 121 times. So where does the 177 come from?
My question is: (a) why is the copy constructor called so often? (b) is there any way to avoid using the copy constructor if you're just moving an object from one location to another? (In this case and indeed most cases a memcpy() would have sufficed - and this makes a BIG difference).
(NB: I know about vector::reserve(), I'm just a bit disappointed that application programmers would need to implement the doubling trick when something like this is already part of any good STL implementation.)
My test program:
#include <string>
#include <iostream>
#include <vector>
using namespace std;

int constructorCalls;
int assignmentCalls;
int copyCalls;

class C {
    int n;
public:
    C(int _n) { n = _n; constructorCalls++; }
    C(const C& orig) { copyCalls++; n = orig.n; }
    void operator=(const C &orig) { assignmentCalls++; n = orig.n; }
};

int main(int argc, char* argv[])
{
    std::vector<C> A;
    //A.reserve(50);
    for (int i = 0; i < 50; i++)
        A.push_back(i);
    cout << "constructor calls = " << constructorCalls << "\n";
    cout << "assignment calls = " << assignmentCalls << "\n";
    cout << "copy calls = " << copyCalls << "\n";
    return 0;
}
Don't forget to count the copy constructor calls needed to push_back a temporary C object into the vector. Each iteration will call C's copy constructor at least once.
If you add more printing code, it's a bit clearer what is going on:
std::vector<C> A;
std::vector<C>::size_type prevCapacity = A.capacity();
for (int i = 0; i < 50; i++) {
    A.push_back(i);
    if (prevCapacity != A.capacity()) {
        cout << "capacity " << prevCapacity << " -> " << A.capacity() << "\n";
    }
    prevCapacity = A.capacity();
}
This has the following output:
capacity 0 -> 1
capacity 1 -> 2
capacity 2 -> 3
capacity 3 -> 4
capacity 4 -> 6
capacity 6 -> 9
capacity 9 -> 13
capacity 13 -> 19
capacity 19 -> 28
capacity 28 -> 42
capacity 42 -> 63
So yes, the capacity increases by 50% each time, and this accounts for 127 of the copies:
1 + 2 + 3 + 4 + 6 + 9 + 13 + 19 + 28 + 42 = 127
Add the 50 additional copies from 50 calls to push_back and you have 177:
127 + 50 = 177
My question is:
(a) why is the copy constructor called so often?
Because when the vector is re-sized you need to copy all the elements from the old buffer into the new buffer. This is because the vector guarantees that the objects are stored in consecutive memory locations.
(b) is there any way to avoid using the copy constructor if you're just moving
an object from one location to another?
No, there is no way to avoid the use of the copy constructor.
This is because the object may have members that need to be initialized correctly.
If you used memcpy, how would you know the object had been initialized correctly?
For example, if the object contained a smart pointer, you couldn't just memcpy it: a smart pointer needs to do extra work to track ownership. Otherwise, when the original goes out of scope, the memory is deleted and the new object holds a dangling pointer. The same principle applies to any object with a constructor (copy constructor): the constructor actually does required work.
The way to stop the copying is to reserve the space.
This makes the vector allocate enough space for all the objects it will store, so it does not need to keep reallocating its main buffer; it just copies the objects into the vector.
Doubling each time should call the copy constructor 64 times.
If you were very concerned about keeping memory usage low, then increasing by
50% each time should call the copy constructor 121 times.
So where does the 177 come from?
Vector allocated size = 1:
Add element 1: (no reallocation) But copies element 1 into vector.
Add element 2: Reallocate buffer (size 2): Copy element 1 across. Copy element 2 into vector.
Add element 3: Reallocate buffer (size 4): Copy element 1-2 across. Copy element 3 into vector.
Add element 4: Copy element 4 into vector
Add element 5: Reallocate buffer (size 8): Copy element 1-4 across. Copy element 5 into vector.
Add element 6: Copy element 6 into vector
Add element 7: Copy element 7 into vector
Add element 8: Copy element 8 into vector
Add element 9: Reallocate buffer (size 16): Copy element 1-8 across. Copy element 9 into vector.
Add element 10: Copy element 10 into vector
etc.
First 10 elements took 25 copy constructions.
If you had used reserve first it would have only taken 10 copy constructions.
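This is easy to check by combining reserve() with a copy counter, as in the question's test program; a minimal sketch (simplified from that program):

#include <iostream>
#include <vector>

int copyCalls; // same counting idea as the question's test program

struct C {
    int n;
    C(int n_) : n(n_) {}
    C(const C& o) : n(o.n) { ++copyCalls; }
};

int main() {
    std::vector<C> A;
    A.reserve(50);                  // one up-front allocation, no regrowth
    for (int i = 0; i < 50; ++i)
        A.push_back(i);             // one copy of the temporary per call
    std::cout << copyCalls << "\n"; // 50, instead of 177 without the reserve
}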
The STL does tend to cause this sort of thing. The spec doesn't allow memcpy'ing because that doesn't work in all cases. There's a document describing EASTL, a bunch of alterations made by EA to make it more suitable for their purposes, which does have a method of declaring that a type is safe to memcpy. Unfortunately it's not open source AFAIK so we can't play with it.
IIRC Dinkumware STL (the one in VS) grows vectors by 50% each time.
However, doing a series of push_back's on a vector is a common inefficiency. You can either use reserve to alleviate it (at the cost of possibly wasting memory if you overestimate significantly) or use a different container - deque performs better for a series of insertions like that but is a little slower in random access, which may/may not be a good tradeoff for you.
Or you could look at storing pointers instead of values, which makes resizing much cheaper if your elements are large: you never have to copy the objects themselves, so you save at least the one copy per item otherwise made on insertion.
If I recall correctly, C++0x will have move semantics (in addition to copy semantics). That said, you can implement a more efficient copy constructor if you really want to.
Unless the copy constructor is complex, it is normally very efficient - after all, you are supposed to be doing little more than merely copying the object, and copying memory is very fast these days.
It looks like additions to C++0x will help here; see Rvalue and STL upgrades.
To circumvent this issue, why not use a vector of pointers instead of a vector of objects? Then delete each element when destructing the vector.
In other words, std::vector<C*> instead of std::vector<C>. Memcpy'ing pointers is very fast.
Just a note: be careful of adding pointers to the vector as a way of minimizing copying costs, since
The bad data locality of the pointers in the vector means the non-pointer version, with its consecutive objects, runs circles around the pointer version when the vector is actually used.
Heap allocation is slower than stack allocation.
Do you more often use the vector or add stuff to it?