c++ placement new in a home made vector container - c++

There are some questions quite similar around here, but they couldn't help me get my mind around it.
Also, I'm giving a full example code, so it might be easier for others to understand.
I have made a vector container (couldn't use the STL for memory reasons) that used to use only operator= for push_back*, and once I came across placement new, I decided to introduce an additional "emplace_back" to it**.
*(T::operator= is expected to deal with memory management)
**(the name is taken from a similar function in std::vector that I've encountered later, the original name I gave it was a mess).
I read some things about the danger of using placement new over operator new[], but couldn't figure out whether the following is OK or not, and if not, what's wrong with it and what I should replace it with, so I'd appreciate your help.
This is of course simplified code, with no iterators and no extended functionality, but it makes the point:
template <class T>
class myVector {
public:
    myVector(int capacity_) {
        _capacity = capacity_;
        _data = new T[_capacity];
        _size = 0;
    }
    ~myVector() {
        delete[] _data;
    }
    bool push_back(T const & t) {
        if (_size >= _capacity) { return false; }
        _data[_size++] = t;
        return true;
    }
    template <class... Args>
    bool emplace_back(Args const & ... args) {
        if (_size >= _capacity) { return false; }
        _data[_size].~T();
        new (&_data[_size++]) T(args...);
        return true;
    }
    T * erase(T * p) {
        //assert(/*p is not aligned*/);
        if (p < begin() || p >= end()) { return end(); }
        if (p == &back()) { --_size; return end(); }
        *p = back();
        --_size;
        return p;
    }
    // The usual stuff (and more)
    int capacity() { return _capacity; }
    int size() { return _size; }
    T * begin() { return _data; }
    T * end() { return _data + _size; }
    T const * begin() const { return _data; }
    T const * end() const { return _data + _size; }
    T & front() { return *begin(); }
    T & back() { return *(end() - 1); }
    T const & front() const { return *begin(); }
    T const & back() const { return *(end() - 1); }
    T & operator[] (int i) { return _data[i]; }
    T const & operator[] (int i) const { return _data[i]; }
private:
    T * _data;
    int _capacity;
    int _size;
};
Thanks

I read some stuff about the danger of using placement new over
operator new[] but couldn't figure out if the following is ok or not,
and if not, what's wrong with it [...]
For operator new[] vs. placement new, it's only really bad (as in typically-crashy type of undefined behavior) if you mix the two strategies together.
The main choice you typically have to make is to use one or the other. If you use operator new[], then you construct all the elements for the entire capacity of the container in advance and overwrite them in methods like push_back. You don't destroy them on removal in methods like erase; you just keep them there, adjust the size, overwrite elements, and so forth. You both construct and allocate multiple elements in one go with operator new[], and destroy and deallocate them all in one go with operator delete[].
Why Placement New is Used For Standard Containers
The first thing to understand, if you want to roll your own vectors or other standard-compliant sequences (that aren't simply linked structures with one element per node) in a way that actually destroys elements when they are removed and constructs elements (not merely overwrites them) when they are added, is to separate the idea of allocating the memory for the container from constructing the elements in place. So, quite to the contrary, in this case placement new isn't bad. It's a fundamental necessity for achieving the general qualities of the standard containers. But we can't mix it with operator new[] and operator delete[] in this context.
For example, you might allocate the memory to hold 100 instances of T in reserve, but you don't want to default construct them as well. You want to construct them in methods like push_back, insert, resize, the fill ctor, range ctor, copy ctor, etc. -- methods that actually add elements and not merely the capacity to hold them. That's why we need placement new.
Otherwise we lose the generality of std::vector which avoids constructing elements that aren't there, can copy construct in push_backs rather than simply overwriting existing ones with operator=, etc.
So let's start with the constructor:
_data = new T[_capacity];
... this will invoke the default constructors for all the elements. We don't want that (neither the default-ctor requirement nor the expense), as the whole point of using placement new is to construct elements in place in allocated memory, and this would have already constructed all the elements. Any subsequent use of placement new would then try to construct an already-constructed element a second time, which is UB.
Instead you want something like this:
_data = static_cast<T*>(malloc(_capacity * sizeof(T)));
This just gives us a raw chunk of bytes.
Second, for push_back, you're doing:
_data[_size++] = t;
That's trying to use the assignment operator, and, after our previous modification, on an uninitialized/invalid element which hasn't been constructed yet. So we want:
new(_data + _size) T(t);
++_size;
... that makes it use the copy constructor. It makes it match up with what push_back is actually supposed to do: creating new elements in the sequence instead of simply overwriting existing ones.
Your erase method needs some work even at the basic logic level if you want to handle removals from the middle of the container. But just from the resource management standpoint, if you use placement new, you want to manually invoke destructors for removed elements. For example:
if (p == &back()) { --_size; return end(); }
... should be more like:
if (p == &back())
{
    --_size;
    (_data + _size)->~T();
    return end();
}
Your emplace_back manually invokes a destructor, but it shouldn't do this. emplace_back should only add, never remove (and destroy) existing elements. It should be quite similar to push_back, except that it constructs the new element in place from the forwarded arguments instead of copying one in.
Your destructor does this:
~myVector() {
delete[] _data;
}
But again, that's UB when we take this approach. We want something more like:
~myVector() {
    for (int j = 0; j < _size; ++j)
        (_data + j)->~T();
    free(_data);
}
There's still a whole lot more to cover like exception-safety which is a whole different can of worms.
But this should get you started with respect to proper usage of placement new in a data structure against some memory allocator (malloc/free in this exemplary case).
Last but not least:
(couldn't use stl for memory reasons)
... this might be an unusual reason. Your implementation doesn't necessarily use any less memory than a vector with reserve called in advance to give it the appropriate capacity. You might shave off a few bytes per container (not per element) by choosing 32-bit integrals and not having to store an allocator, but it's going to be a very small memory saving in exchange for a boatload of work.
This kind of thing can be a useful learning exercise though to help you build some data structures outside the standard in a more standard-compliant way (ex: unrolled lists which I find quite useful).
I ended up having to reinvent some vectors and vector-like containers for ABI reasons (we wanted a container we could pass through our API that was guaranteed to have the same ABI regardless of what compiler was used to build a plugin). Even then, I would have much preferred simply using std::vector.
Note that if you just want to take control of how vector allocates memory, you can do that by specifying your own allocator with a compliant interface. This might be useful, for example, if you want a vector which allocates 128-bit aligned memory for use with aligned move instructions using SIMD.

Related

What does std::vector::swap actually do?

What triggered this question is some code along the line of:
std::vector<int> x(500);
std::vector<int> y;
std::swap(x,y);
And I was wondering if swapping the two requires twice the amount of memory that x needs.
On cppreference I found for std::vector::swap (which is the method that the last line effectively calls):
Exchanges the contents of the container with those of other. Does not invoke any move, copy, or swap operations on individual elements.
All iterators and references remain valid. The past-the-end iterator is invalidated.
And now I am more confused than before. What does std::vector::swap actually do when it does not move, copy or swap elements?
How is it possible that iterators stay valid?
Does that mean that something like this is valid code?
std::vector<int> x(500);
std::vector<int> y;
auto it = x.begin();
std::swap(x,y);
std::sort(it , y.end()); // iterators from different containers !!
vector internally stores (at least) a pointer to the actual storage for the elements, a size and a capacity.† std::swap just swaps the pointers, size and capacity (and ancillary data if any) around; no doubling of memory or copies of the elements are made because the pointer in x becomes the pointer in y and vice-versa, without any new memory being allocated.
The iterators for vector are generally lightweight wrappers around pointers to the underlying allocated memory (that's why capacity changes generally invalidate iterators), so iterators produced for x before the swap seamlessly continue to refer to y after the swap; your example use of sort is legal, and sorts y.
If you wanted to swap the elements themselves without swapping storage (a much more expensive operation, but one that leaves preexisting iterators for x referring to x), you could use std::swap_ranges, but that's a relatively uncommon use case. The memory usage for that would depend on the implementation of swap for the underlying object; by default, it would often involve a temporary, but only for one of the objects being swapped at a time, not the whole of one vector.
† Per the comments, it could equivalently use pointers to the end of the used space and the end of the capacity; either approach is logically equivalent and just micro-optimizes in favor of slightly different expected use cases: storing all pointers optimizes for use of iterators (a reasonable choice), while storing size_type optimizes for .size()/.capacity() calls.
I will write a toy vector.
struct toy_vector {
    int * buffer = 0;
    std::size_t valid_count = 0;
    std::size_t buffer_size = 0;

    int* begin() { return buffer; }
    int* end() { return buffer + valid_count; }
    std::size_t capacity() const { return buffer_size; }
    std::size_t size() const { return valid_count; }

    void swap( toy_vector& other ) {
        std::swap( buffer, other.buffer );
        std::swap( valid_count, other.valid_count );
        std::swap( buffer_size, other.buffer_size );
    }
That is basically it. I'll implement a few methods so you see we have enough tools to work with:
    int& operator[](std::size_t i) { return buffer[i]; }
    void reserve(std::size_t capacity) {
        if (capacity <= buffer_size)
            return;
        toy_vector tmp;
        tmp.buffer = new int[capacity];
        for (std::size_t i = 0; i < valid_count; ++i)
            tmp.buffer[i] = std::move(buffer[i]);
        tmp.valid_count = valid_count;
        tmp.buffer_size = capacity;
        swap( tmp );
    }
    void push_back(int x) {
        if (valid_count + 1 > buffer_size)
            reserve( (std::max)(buffer_size * 3 / 2, buffer_size + 1) );
        buffer[valid_count] = std::move(x);
        ++valid_count;
    }
    // real dtor is more complex.
    ~toy_vector() { delete[] buffer; }
};
Actual vectors have exception-safety issues, more concerns about object lifetime, and allocators (I'm using ints, so I don't care whether I properly construct/destroy them), and might store 3 pointers instead of a pointer and 2 sizes. But swapping those 3 pointers is just as easy as swapping the pointer and 2 sizes.
Iterators in real vectors tend not to be raw pointers (but they can be). As you can see above, the raw pointer iterators to the vector a become raw pointer iterators into vector b when you do an a.swap(b); non-raw pointer vector iterators are basically fancy wrapped pointers, and have to follow the same semantics.
The C++ standard does not explicitly mandate an implementation that looks like this, but it was based off an implementation that looks like this, and it implicitly requires one that is almost identical to this (I'm sure someone could come up with a clever standard compliant vector that doesn't look like this; but every vector in every standard library I have seen has looked like this.)

C++ vector with fixed capacity after initialization

I need a C++ container with the following requirements:
The container can store non-copyable AND non-movable objects in continuous memory. For std::vector the object has to be either copyable or movable.
The capacity of the container is known during the construction at run-time, and fixed until destruction. All the needed memory space is allocated during the construction. For boost::static_vector the capacity is known at compile time.
The size of the container can increase over time when emplace_back more element in the container, but should never exceeds capacity.
Since the object is not copyable or movable, reallocation is not allowed.
It appears that neither the STL nor Boost has the container type I need. I have also searched this site extensively but did not find an answer. So I have implemented one.
#include <memory>
#include <new>     // std::bad_alloc, placement new
#include <utility> // std::forward

template<class T>
class FixedCapacityVector {
private:
    using StorageType = std::aligned_storage_t<sizeof(T), alignof(T)>;
    static_assert(sizeof(StorageType) == sizeof(T));
public:
    FixedCapacityVector(FixedCapacityVector const&) = delete;
    FixedCapacityVector& operator=(FixedCapacityVector const&) = delete;
    FixedCapacityVector(size_t capacity = 0):
        capacity_{ capacity },
        data_{ std::make_unique<StorageType[]>(capacity) }
    { }
    ~FixedCapacityVector()
    {
        for (size_t i = 0; i < size_; i++)
            reinterpret_cast<T&>(data_[i]).~T();
    }
    template<class... Args>
    T& emplace_back(Args&&... args)
    {
        if (size_ == capacity_)
            throw std::bad_alloc{};
        new (&data_[size_]) T{ std::forward<Args>(args)... };
        return reinterpret_cast<T&>(data_[size_++]);
    }
    T& operator[](size_t i)
    { return reinterpret_cast<T&>(data_[i]); }
    T const& operator[](size_t i) const
    { return reinterpret_cast<T const&>(data_[i]); }
    size_t size() const { return size_; }
    size_t capacity() const { return capacity_; }
    T* data() { return reinterpret_cast<T*>(data_.get()); }
    T const* data() const { return reinterpret_cast<T const*>(data_.get()); }
private:
    size_t const capacity_;
    std::unique_ptr<StorageType[]> const data_;
    size_t size_{ 0 };
};
My questions are:
Why would I do this by hand? I could not find a standard container. Or maybe I did not look at the right place? Or because what I am trying to do is not conventional?
Is the hand-written container correct implemented? How about the exception safety, memory safety, etc.?
It probably does not answer the question completely but it seems like fixed_capacity_vector might be added into future C++ standard(s) according to the following paper:
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p0843r1.html
Introduction
This paper proposes a modernized version of boost::container::static_vector [1]. That is, a dynamically-resizable vector with compile-time fixed capacity and contiguous embedded storage in which the elements are stored within the vector object itself.
Its API closely resembles that of std::vector. It is a contiguous container with O(1) insertion and removal of elements at the end (non-amortized) and worst case O(size()) insertion and removal otherwise. Like std::vector, the elements are initialized on insertion and destroyed on removal. For trivial value_types, the vector is fully usable inside constexpr functions.
Why would I do this by hand? I could not find a standard container. Or maybe I did not look at the right place? Or because what I am trying to do is not conventional?
This is not conventional. By convention, something that is MoveConstructible is also MoveAssignable.
Is the hand-written container correct implemented? How about the exception safety, memory safety, etc.?
data is problematic. Callers are likely to assume that they can increment that pointer to get at other elements, but that is strictly undefined. You don't actually have an array of T. The standard requires implementation-defined magic in std::vector.
I know it's a bit of an old post and doesn't match the exact requirements, but we have open-sourced the implementation of the FixedCapacityVector used in my company's production code for years, available here. Its capacity has to be a compile-time constant.
It requires a C++11 compiler, but its API is in accordance with C++17's std::vector.

Is it necessary to destroy a string before constructing it again?

As an exercise, I'm trying to write a class like a std::vector without using a template. The only type it holds is std::string.
Below is the strvec.h file:
class StrVec
{
public:
    //! Big 3
    StrVec():
        element(nullptr), first_free(nullptr), cap(nullptr)
    {}
    StrVec(const StrVec& s);
    StrVec& operator=(const StrVec& rhs);
    ~StrVec();

    //! public members
    void push_back(const std::string &s);
    std::size_t size() const { return first_free - element; }
    std::size_t capacity() const { return cap - element; }
    std::string* begin() const { return element; }
    std::string* end() const { return first_free; }
    void reserve(std::size_t n);
    void resize(std::size_t n);
    //^^^^^^^^^^^^^^^^^^^^^^^^^^^

private:
    //! data members
    std::string* element;    // pointer to the first element
    std::string* first_free; // pointer to the first free element
    std::string* cap;        // pointer to one past the end
    std::allocator<std::string> alloc;

    //! utilities
    void reallocate();
    void chk_n_alloc() { if (size() == capacity()) reallocate(); }
    void free();
    void wy_alloc_n_move(std::size_t n);
    std::pair<std::string*, std::string*>
    alloc_n_copy(std::string* b, std::string* e);
};
The three string*, element, first_free, and cap can be thought of as:
[0][1][2][3][unconstructed elements]
 ^           ^                     ^
 element     first_free            cap
When implementing the member resize(size_t n), I have a problem. Say, v.resize(3) is called. As a result the pointer first_free must be moved forward one place and point to [3]. Something like:
[0][1][2][3][unconstructed elements]
 ^        ^                        ^
 element  first_free               cap
My question is how should I deal with [3]? Leave it there untouched? Or destroy it like:
if (n < size())
{
    for (auto p = element + n; p != first_free; /* empty */)
        alloc.destroy(p++);
    first_free = element + n;
}
Is the call alloc.destroy(somePointer) necessary here?
Yes, definitely, you should call destroy on elements that are removed from the vector when resize() is called with an argument smaller than the current size of the vector. That's what std::vector does, too.
Note that destroy only calls the destructor on those elements; it does not deallocate any space (which would be wrong).
Since you are dealing with std::string, you probably think you could do without destruction if you are sure that you re-initialize the same std::string object later with a new value. But firstly, you can't be sure that a new string will be stored in the same place later, and secondly, for the new string a new object would be created (using placement-new, not copy-assignment), leaking the memory of the previous string (whose destructor would never have been called).
What you should do depends on how you've initialised element, as you need your code to be consistent.
if you use new std::string[n] to create the array of strings, then they will all be pre-initialised, and when you necessarily use delete[] to deallocate them later their destructors will all be run. For that reason, you must not call the destructors manually in the intervening time unless you are certain you'll placement-new a valid object there again.
if you use something like static_cast<std::string*>(new char[sizeof(std::string) * n]) to create a buffer of un-initialised memory, then you must take full responsibility for calling the constructor and destructor of every element at appropriate times
With the first option, you wouldn't need to do anything for resize(3), but could call .clear() on the string to potentially free up some memory if you wanted.
With the second option, you must trigger the destructor for [3] (unless you're keeping some other record of which element eventually need destruction, which seems a clumsy model).
The issues are identical to just having memory for a single string that is "in use" at different times during the program. Do you spend the time to construct it before first use then assign to it, or do you leave it uninitialised then copy-construct it with placement new? Do you clear it when unused or destruct it? Either model can work with careful implementation. The first approach tends to be easier to implement correctly, the second model slightly more efficient when the array capacity is much greater than the number of element that end up being used.

C++ move semantics- what exactly is it to achieve? [duplicate]

This question already has answers here:
What is move semantics?
(11 answers)
Closed 9 years ago.
What exactly is the purpose of this "move" semantic? I understand that if you don't pass by reference, a copy is made of non-primitive types, but how does "move" change anything? Why would we want to "move" the data? Why can't it just be kept at the same address and not copied? If it is sent to another address, isn't this just a "copy and delete"?
In short, I don't really get what move semantics is achieving exactly.
Move semantics combines the advantages of passing by value and passing by reference. You can create objects with automatic storage duration, so you don't have to take responsibility for their lifetime, and you can pass them as parameters and return them from functions easily. On the other hand, in cases where objects would ordinarily have been copied, they are moved (only their internals are copied). This operation can be implemented much more cheaply than copying (because you know the source object won't be used anymore).
MyObj * f()
{
    // Ok, but caller has to take care of
    // freeing the result
    return new MyObj();
}

MyObj f()
{
    // Assuming that MyObj does not have a move-ctor,
    // this may be time-costly
    MyObj result;
    return result;
}

MyObj f()
{
    // If MyObj implements a move-ctor, this is both fast and safe:
    // the result is moved out, or the move is elided entirely (NRVO).
    MyObj result;
    return result;
    // Note: writing "return std::move(result);" here is unnecessary
    // and would actually inhibit copy elision.
}
Why can't it just be kept at the same address and not copied
This is actually what move semantics generally does. It often keeps the resource (often memory, but could be file handles, etc.) in the exact same state, but it updates the references in the objects.
Imagine two vectors, src and dest. The src vector contains a large block of data which is allocated on the heap, and dest is empty. When src is moved to dest all that happens is that dest is updated to point to the block of memory on the heap, whilst src is updated to point to whatever dest was pointing to, in this case, nothing.
Why is this useful? Because it means that vector can be written with the confidence that only one vector will ever point to the block of memory it allocates. This means that the destructor can ensure that it cleans up the memory that has been allocated.
This can be extended to objects which manage other resources, such as file handles. It is now possible to write objects that own a file handle. These objects can be movable, but not copyable. Because STL containers support movable objects, these can be put into containers far more easily than they could be in C++03. The file handle, or other resource, is guaranteed to have only one reference to it, and the destructor can close it appropriately.
I'd answer with a simple example for vector algebra:
#include <algorithm> // std::copy_n
#include <cstddef>
#include <stdexcept>

class Vector {
    std::size_t dim_;
    double *data_;
public:
    Vector(const Vector &arg)
        : dim_(arg.dim_)
        , data_(new double[dim_])
    {
        std::copy_n(arg.data_, dim_, data_);
    }
    Vector(Vector &&arg)
        : dim_(arg.dim_)
        , data_(arg.data_)
    {
        arg.data_ = nullptr;
    }
    ~Vector()
    {
        delete[] data_;
    }
    Vector& operator+= (const Vector &arg)
    {
        if (arg.dim_ != dim_) throw std::invalid_argument("dimension mismatch");
        for (std::size_t idx = 0; idx < dim_; ++idx) data_[idx] += arg.data_[idx];
        return *this;
    }
};

Vector operator+ (Vector a, const Vector &b)
{
    a += b;
    return a;
}

extern Vector v1, v2;

int main()
{
    Vector v(v1 + v2);
}
The addition returns a new vector by value. As it's an r-value, it will be moved into v, which means that no extra copies of the potentially huge array data_ will happen.

Is there a C++ container with reasonable random access that never calls the element type's copy constructor?

I need a container that implements the following API (and need not implement anything else):
class C<T> {
    C();
    T& operator[](int); // must have reasonably sane time constant
    // expand the container by default constructing elements in place.
    void resize(int); // only way anything is added.
    void clear();
    C<T>::iterator begin();
    C<T>::iterator end();
}
and can be used on:
class I {
public:
    I();
private: // copy and assignment explicitly disallowed
    I(I&);
    I& operator=(I&);
}
Does such a beast exist?
vector<T> doesn't do it (resize moves) and I'm not sure how fast deque<T> is.
I don't care about allocation
Several people have assumed that the reason I can't do copies is memory allocation issues. The reason for the constraints is that the element type explicitly disallows copying and I can't change that.
Looks like I've got my answer: the STL doesn't have one. But now I'm wondering: why not?
I'm pretty sure that the answer here is a rather emphatic "No". By your definition, resize() should allocate new storage and initialize with the default constructor if I am reading this correctly. Then you would manipulate the objects by indexing into the collection and manipulating the reference instead of "inserting" into the collection. Otherwise, you need the copy constructor and assignment operator. All of the containers in the Standard Library have this requirement.
You might want to look into using something like boost::ptr_vector<T>. Since you are inserting pointers, you don't have to worry about copying. This would require that you dynamically allocate all of your objects though.
You could use a container of pointers, like std::vector<T*>, if the elements cannot be copied and their memory is managed manually elsewhere.
If the vector should own the elements, something like std::vector< std::shared_ptr<T> > could be more appropriate.
And there is also the Boost Pointer Container library, which provides containers for exception safe handling of pointers.
Use deque: performance is fine.
The standard says, "deque is the data structure of choice when most insertions and deletions take place at the beginning or at the end of the sequence" (23.1.1). In your case, all insertions and deletions take place at the end, satisfying the criterion for using deque.
http://www.gotw.ca/gotw/054.htm has some hints on how you might measure performance, although presumably you have a particular use-case in mind, so that's what you should be measuring.
Edit: OK, if your objection to deque is in fact not, "I'm not sure how fast deque is", but "the element type cannot be an element in a standard container", then we can rule out any standard container. No, such a beast does not exist. deque "never copies elements", but it does copy-construct them from other objects.
Next best thing is probably to create arrays of elements, default-constructed, and maintain a container of pointers to those elements. Something along these lines, although this can probably be tweaked considerably.
template <typename T>
struct C {
    vector<boost::shared_array<T> > blocks;
    vector<T*> elements; // lazy, to avoid needing deque-style iterators through the blocks.
    T &operator[](size_t idx) { return *elements[idx]; }
    void resize(size_t n) {
        if (n <= elements.size()) { /* exercise for the reader */ }
        else {
            boost::shared_array<T> newblock(new T[n - elements.size()]);
            blocks.push_back(newblock);
            size_t old = elements.size();
            // currently we "leak" newblock on an exception: see below
            elements.resize(n);
            for (size_t i = old; i < n; ++i) {
                elements[i] = &newblock[i - old];
            }
        }
    }
    void clear() {
        blocks.clear();
        elements.clear();
    }
};
As you add more functions and operators, it will approach deque, but avoiding anything that requires copying of the type T.
Edit: come to think of it, my "exercise for the reader" can't be done quite correctly in cases where someone does resize(10); resize(20); resize(15);. You can't half-delete an array. So if you want to correctly reproduce container resize() semantics, destructing the excess elements immediately, then you will have to allocate the elements individually (or get acquainted with placement new):
template <typename T>
struct C {
    deque<shared_ptr<T> > elements; // or boost::ptr_deque, or a vector.
    T &operator[](size_t idx) { return *elements[idx]; }
    void resize(size_t n) {
        size_t oldsize = elements.size();
        elements.resize(n);
        if (n > oldsize) {
            try {
                for (size_t i = oldsize; i < n; ++i) {
                    elements[i] = shared_ptr<T>(new T());
                }
            } catch(...) {
                // closest we can get to strong exception guarantee, since
                // by definition we can't do anything copy-and-swap-like
                elements.resize(oldsize);
                throw;
            }
        }
    }
    void clear() {
        elements.clear();
    }
};
Nicer code, not so keen on the memory access patterns (but then, I'm not clear whether performance is a concern or not since you were worried about the speed of deque.)
As you've discovered, all of the standard containers are incompatible with your requirements. If we can make a couple of additional assumptions, it wouldn't be too hard to write your own container.
The container will always grow - resize will always be called with a greater number than previously, never lesser.
It is OK for resize to make the container larger than what was asked for; constructing some number of unused objects at the end of the container is acceptable.
Here's a start. I leave many of the details to you.
class C<T> {
    C();
    ~C() { clear(); }
    T& operator[](int i) // must have reasonably sane time constant
    {
        return blocks[i / block_size][i % block_size];
    }
    // expand the container by default constructing elements in place.
    void resize(int n) // only way anything is added.
    {
        for (int i = (current_size / block_size) + 1; i <= n / block_size; ++i)
        {
            blocks.push_back(new T[block_size]);
        }
        current_size = n;
    }
    void clear()
    {
        for (vector<T*>::iterator i = blocks.begin(); i != blocks.end(); ++i)
            delete[] *i;
        blocks.clear();
        current_size = 0;
    }
    C<T>::iterator begin();
    C<T>::iterator end();
private:
    vector<T*> blocks;
    int current_size;
    static const int block_size = 1024; // choose a size appropriate to T
};
P.S. If anybody asks you why you want to do this, tell them you need an array of std::auto_ptr. That should be good for a laugh.
All the standard containers require copyable elements, at the very least because push_back and insert copy the element passed to them. I don't think you can get away with std::deque either, because even its resize method takes a parameter to be copied when filling in the new elements.
To use a completely non-copyable class in the standard containers, you would have to store pointers to those objects. That can sometimes be a burden but usage of shared_ptr or the various boost pointer containers can make it easier.
If you don't like any of those solutions then take a browse through the rest of boost. Maybe there's something else suitable in there. Perhaps intrusive containers?
Otherwise, if you don't think any of that suits your needs then you could always try to roll your own container that does what you want. (Or else do more searching to see if anyone else has ever made such a thing.)
You shouldn't pick a container based on how it handles memory. deque for example is a double-ended queue, so you should only use it when you need a double-ended queue.
Pretty much every container will allocate memory if you resize it! Of course, you could change the capacity up front by calling vector::reserve. The capacity is the number of physical elements in memory, the size is how many you are actively using.
Obviously, there will still be an allocation if you grow past your capacity.
Look at ::boost::array. It doesn't allow the container to be resized after creating it, but it doesn't copy anything ever.
Getting both resize and no copying is going to be a trick. I wouldn't trust a ::std::deque because I think maybe it can copy in some cases. If you really need resizing, I would code your own deque-like container. Because the only way you're going to get resizing and no copying is to have a page system like ::std::deque uses.
Also, having a page system necessarily means that element access isn't going to be quite as fast as it would be for ::std::vector and ::boost::array with their contiguous memory layout, even though it can still be fairly fast.