Can deque memory allocation be sparse? - c++

Is it allowed by the standard for a deque to allocate its memory in a sparse way?
My understanding is that most implementations of deque allocate memory internally in blocks of some size. I believe, although I don't know this for a fact, that implementations allocate at least enough blocks to store all the items for their current size. So if a block is 100 items and you do
std::deque<int> foo;
foo.resize( 1010 );
You will get at least 11 blocks allocated. However, given that in the above all 1010 ints are default constructed, do you really need to allocate the blocks at the time of the resize call? Could you instead allocate a given block only when someone actually inserts an item in some way? For instance, the implementation could mark a block as "all default constructed" and not allocate it until someone uses it.
I ask as I have a situation where I potentially want a deque with a very large size that might be quite sparse in terms of which elements I end up using. Obviously I could use other data structures like a map, but I'm interested in what the rules are for deque.
A related question: given the signature of resize, void resize( size_type sz, T c = T() );, does the standard require that the default constructor is called exactly sz times? If the answer is yes then I guess you can't do a sparse allocation, at least for types that have a non-trivial default constructor, although presumably it may still be possible for built-in types like int or double.

All elements in the deque must be correctly constructed. If you need a sparse implementation, I would suggest a deque (or a vector) of pointers, or a Maybe class (you really should have one in your toolbox anyway) which doesn't construct the type until it is valid. This is not the role of deque.

23.3.3.3 states that deque::resize will append sz - size() default-inserted elements (sz is the first argument of deque::resize).
Default-insertion (see 23.2.1/13) means that an element is initialized by the expression allocator_traits<Allocator>::construct(m, p) (where m is an allocator and p a pointer of the type that is to be constructed). So memory has to be available and the element will be default constructed (initialized?).
All-in-all: deque::resize cannot be lazy about constructing objects if it wants to be conforming. You can add lazy construction to a type easily by wrapping it in a boost::optional or any other Maybe container.
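For example, here is a minimal sketch of that wrapping idea using std::optional (C++17); std::string is just a stand-in for whatever non-trivial element type you actually store:
#include <deque>
#include <optional>
#include <string>

int main()
{
    // The deque still allocates storage for all 1010 optionals, but no
    // std::string is constructed until an element is actually used.
    std::deque<std::optional<std::string>> d;
    d.resize(1010);
    d[42].emplace("lazy value"); // construct only the element we touch
}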

Answering your second question. Interestingly enough, there is a difference between C++03 and C++11.
Using C++03, your signature
void resize ( size_type sz, T c = T() );
will default construct the parameter (unless you supply another value), and then copy construct the new members from this value.
In C++11, this function has been replaced by two overloads
void resize(size_type sz);
void resize(size_type sz, const T& c);
where the first one default constructs (or value initializes) sz elements.
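You can observe the difference with a small probe type that counts its constructor calls (Probe is a hypothetical name; the counts in the comments assume a conforming implementation):
#include <deque>
#include <iostream>

struct Probe
{
    static int defaults;
    static int copies;
    Probe() { ++defaults; }
    Probe(const Probe&) { ++copies; }
};
int Probe::defaults = 0;
int Probe::copies = 0;

int main()
{
    std::deque<Probe> d;
    d.resize(5); // C++11: 5 default constructions, 0 copies
                 // C++03: 1 default construction (the prototype) plus 5 copy constructions (typically)
    std::cout << Probe::defaults << " defaults, " << Probe::copies << " copies\n";
}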

Related

In C++ can I treat an array of single-member unions as an array of the element?

Suppose I am writing a fixed-size array class of runtime size, somewhat equivalent to Rust's Box<[T]> in order to save the space of tracking capacity when I know the array isn't going to change size after initialization.
In order to support types which do not have a default constructor, I want to be able to allow the user to supply a generator function that takes the index of the element and produces a T. In order to do this and decouple allocation and initialization, I follow the advice in CJ Johnson's CppCon 2019 talk "How to Hold a T" and initially create the array in terms of a single-member union:
template<typename T>
union MaybeUninit
{
    MaybeUninit() {}
    ~MaybeUninit() {}
    T val;
};
// ...
m_array = new MaybeUninit<T>[size];
// initialize all elements by setting val for each item
T* items = reinterpret_cast<T*>(m_array); // is this OK and dereferenceable?
My question is, once the generator is done and all the elements of m_array are initialized, am I allowed (according to the standard, regardless of whether a given compiler implementation permits it) to use reinterpret_cast<T*>(m_array) to treat the result as an array of the actual objects (the line marked "is this OK")? If not, is there any way to get from MaybeUninit<T>* to T* without copying?
In terms of which standard, I'm mainly interested in C++17 or later.
am I allowed (according to the standard, regardless of whether a given compiler implementation permits it) to use reinterpret_cast<T*>(m_array) to treat the result as an array of the actual objects (the line marked "is this OK")?
No. Any pointer arithmetic on the resulting pointer will result in UB (for indices > 1) or produce a one-past-the-end pointer that can't be dereferenced (for index 1). Only accessing the element at index 0 this way is allowed (and that element still needs to have been constructed).
The only way you are allowed to perform pointer arithmetic is on pointers to the elements of an array. Your pointer is not pointing to an object that is an element of an array of T (a standalone object is, for the purposes of pointer arithmetic, considered to belong to an array of length 1).
If not, is there any way to get from MaybeUninit<T>* to T* without copying?
The pointer conversion is not an issue, but you can't index into the resulting pointer. The only way to avoid this is to have an actual array of T objects.
Note however that you don't need to construct every element in an array of T objects. For example:
std::allocator<T> alloc;
T* ptr = std::allocator_traits<decltype(alloc)>::allocate(alloc, size);
Now ptr is a pointer to an array of size objects of type T, but no actual objects of type T in it have their lifetime started. You can construct individual elements into it with placement-new (or std::construct_at or std::allocator_traits<decltype(alloc)>::construct) and destroy them with a destructor call (or std::destroy_at or std::allocator_traits<decltype(alloc)>::destroy). You need to do this with your union approach as well anyhow. This approach also allows you to easily exchange the allocator with a different one.
There will be no overhead for size or capacity management. All of that is now responsibility of the user. Whether this is a good idea is a different question.
Instead of std::allocator or an alternative Allocator implementation you could also use other functions that allocate memory and are specified to implicitly create objects, e.g. operator new or std::malloc, etc.
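Putting those pieces together, here is a minimal sketch of the decoupled allocate/construct approach; the helper names make_filled_array and destroy_array are hypothetical, and std::construct_at/std::destroy require C++20/C++17 respectively (placement new and explicit destructor calls work in earlier standards):
#include <cstddef>
#include <memory>

template <typename T, typename Gen>
T* make_filled_array(std::size_t size, Gen gen)
{
    std::allocator<T> alloc;
    T* ptr = std::allocator_traits<std::allocator<T>>::allocate(alloc, size);
    for (std::size_t i = 0; i < size; ++i)
        std::construct_at(ptr + i, gen(i)); // start the lifetime of each element (error handling omitted)
    return ptr;
}

template <typename T>
void destroy_array(T* ptr, std::size_t size)
{
    std::allocator<T> alloc;
    std::destroy(ptr, ptr + size); // end the element lifetimes before releasing the memory
    std::allocator_traits<std::allocator<T>>::deallocate(alloc, ptr, size);
}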

reserve() - data() trick on empty vector - is it correct?

As we know, std::vector, when initialized like std::vector<int> vect(n) or empty_vect.resize(n), not only allocates the required amount of memory but also initializes it with default values (i.e. calls the default constructor). This leads to unnecessary initialization, especially if I have an array of integers and I'd like to fill it with some specific values that cannot be provided via any vector constructor.
reserve(), on the other hand, allocates the memory in a call like empty_vect.reserve(n), but in this case the vector is still empty. So size() returns 0, empty() returns true, and element access is not allowed (at() throws; operator[] would be out of range, i.e. undefined behaviour).
Now, please look into the code:
{ // My scope starts here...
    std::vector<int> vect;
    vect.reserve(n);
    int *data = vect.data();
    // Here I know the size 'n' and I also have data pointer so I can use it as a regular array.
    // ...
} // Here ends my scope, so vector is destroyed, memory is released.
The question is if "so I can use it as array" is a safe assumption?
The arguments for it don't really matter, I am just curious about the above question. Anyway, as for the arguments:
It allocates memory and automatically frees it on any return from the function
The code does not perform unnecessary data initialization (which may affect performance in some cases)
No, you cannot use it.
The standard (current draft, equivalent wording in C++11) says in [vector.data]:
constexpr T* data() noexcept;
constexpr const T* data() const noexcept;
Returns: A pointer such that [data(), data() + size()) is a valid range.
For a non-empty vector, data() == addressof(front()).
You don't have any guarantee that you can access through the pointer beyond the vector's size. In particular, for an empty vector, the last sentence doesn't apply and so you cannot even be sure that you are getting a valid pointer to the underlying array.
There is currently no way to use std::vector with default-initialized elements.
As mentioned in the comments, you can use std::unique_ptr instead (requires #include <memory>):
auto data = std::unique_ptr<int[]>{new int[n]};
which will give you a std::unique_ptr<int[]> smart pointer to a dynamically sized array of ints, which will be destroyed automatically when the lifetime of data ends and which can transfer its ownership via move operations.
It can be indexed directly with the usual array syntax (operator[]), but it does not allow direct pointer arithmetic. A raw pointer can be obtained from it via data.get().
It does not offer you the std::vector interface, though. In particular it does not provide access to its allocation size and cannot be copied.
Note: I made a mistake in a previous version of this answer. I used std::make_unique<int[]> without realizing that it actually also performs value-initialization (initialization to zero for ints). C++20 adds std::make_unique_for_overwrite<int[]> (originally proposed under the name make_unique_default_init), which default-initializes and therefore leaves the ints with indeterminate values.
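A short sketch of that C++20 variant (assumes a C++20 standard library; the fill with std::iota is just an example):
#include <cstddef>
#include <memory>
#include <numeric>

int main()
{
    const std::size_t n = 1000;
    auto data = std::make_unique_for_overwrite<int[]>(n); // allocates without value-initialization
    std::iota(data.get(), data.get() + n, 0);             // fill with our own values via the raw pointer
}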

Is std::vector X(0) guaranteed not to allocate?

In theory, given:
std::vector<T> X(0);
Then X will allocate memory from the stack for itself, but is it guaranteed not to allocate heap memory?
In other words, since implementations will generally use a pointer for the vector, is this pointer always initially 0?
Note: this is not the same as Initial capacity of vector in C++ since that asks about capacity when no argument is passed to the constructor, not about guarantees on heap allocations when capacity is 0; the fact that capacity can be non-zero in this case illustrates the difference.
That constructor calls explicit vector( size_type count ) which does:
Constructs the container with count default-inserted instances of T.
No copies are made.
The only guarantee you get is that the vector will be empty and its size() will be 0. Implementations are allowed to allocate whatever they want for bookkeeping purposes on initialization.
So if your question is whether you can count on X taking up 0 bytes of free store space, then the answer is no.
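A small illustration; only size() == 0 is guaranteed, while the capacity printed (and any bookkeeping allocation behind it) is implementation-defined:
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> X(0);
    std::cout << X.size() << ' ' << X.capacity() << '\n'; // size is 0; capacity may or may not be 0
}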

Are std::vector elements guaranteed to be contiguous?

My question is simple: are std::vector elements guaranteed to be contiguous? In other words, can I use the pointer to the first element of a std::vector as a C-array?
If my memory serves me well, the C++ standard did not make such a guarantee. However, the std::vector requirements were such that it was virtually impossible to meet them if the elements were not contiguous.
Can somebody clarify this?
Example:
std::vector<int> values;
// ... fill up values
if( !values.empty() )
{
    int *array = &values[0];
    for( int i = 0; i < values.size(); ++i )
    {
        int v = array[i];
        // do something with 'v'
    }
}
This was missing from the C++98 standard proper but was later added as part of a TR. The forthcoming C++0x standard will of course contain this as a requirement.
From n2798 (draft of C++0x):
23.2.6 Class template vector [vector]
1 A vector is a sequence container that supports random access iterators. In addition, it supports (amortized) constant time insert and erase operations at the end; insert and erase in the middle take linear time. Storage management is handled automatically, though hints can be given to improve efficiency. The elements of a vector are stored contiguously, meaning that if v is a vector<T, Allocator> where T is some type other than bool, then it obeys the identity &v[n] == &v[0] + n for all 0 <= n < v.size().
As other answers have pointed out, the contents of a vector are guaranteed to be contiguous (excepting bool's weirdness).
The comment that I wanted to add is that if you do an insertion or a deletion on the vector which could cause the vector to reallocate its memory, then you will cause all of your saved pointers and iterators to be invalidated.
The standard does in fact guarantee that a vector is contiguous in memory and that &a[0] can be passed to a C function that expects an array.
The exception to this rule is vector<bool>, which uses only one bit per bool; thus, although it does have contiguous memory, it can't be used as a bool* (this is widely considered to be a false optimization and a mistake).
BTW, why don't you use iterators? That's what they're for.
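To illustrate the "&a[0] can be passed to a C function" point, here is a minimal sketch; the process function and its signature are hypothetical:
#include <cstddef>
#include <vector>

extern "C" void process(const int* data, std::size_t len); // hypothetical C API expecting a raw array

void call_it(const std::vector<int>& values)
{
    if (!values.empty())
        process(values.data(), values.size()); // or &values[0] before C++11
}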
As others have already said, vector internally uses a contiguous array of objects. Pointers into that array should be treated as invalid whenever any non-const member function is called, IIRC.
However, there is an exception!!
vector<bool> has a specialised implementation designed to save space, so that each bool only uses one bit. The underlying array is not a contiguous array of bool and array arithmetic on vector<bool> doesn't work like vector<T> would.
(I suppose it's also possible that this may be true of any specialisation of vector, since we can always implement a new one. However, std::vector<bool> is the only, err, standard specialisation upon which simple pointer arithmetic won't work.)
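A small sketch of why the vector<bool> specialisation breaks the usual pointer idiom; element access yields a proxy object rather than a bool&:
#include <vector>

void demo()
{
    std::vector<bool> flags(10);
    auto ref = flags[0];    // std::vector<bool>::reference, a proxy object, not bool&
    ref = true;             // writes through the proxy into the packed bits
    // bool* p = &flags[0]; // ill-formed: there is no underlying contiguous bool array to point into
}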
I found this thread because I have a use case where the fact that vectors use contiguous memory is an advantage.
I am learning how to use vertex buffer objects in OpenGL. I created a wrapper class to contain the buffer logic, so all I need to do is pass an array of floats and a few config values to create the buffer.
I want to be able to generate a buffer from a function based on user input, so the length is not known at compile time. Doing something like this would be the easiest solution:
void generate(std::vector<float>& v) // take the vector by reference so the caller sees the new element
{
    float f = generate_next_float();
    v.push_back(f);
}
Now I can pass the vector's floats as an array to OpenGL's buffer-related functions. This also removes the need for sizeof to determine the length of the array.
This is far better than allocating a huge array to store the floats and hoping I made it big enough, or making my own dynamic array with contiguous storage.
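For that buffer use case, the upload then boils down to something like the following sketch; it assumes an OpenGL header/loader is included and that a buffer is already bound to GL_ARRAY_BUFFER:
#include <vector>
// plus your OpenGL loader header (e.g. glad or GLEW)

void upload(const std::vector<float>& v)
{
    glBufferData(GL_ARRAY_BUFFER,
                 static_cast<GLsizeiptr>(v.size() * sizeof(float)), // byte count, no sizeof(array) tricks
                 v.data(),                                          // contiguous storage guaranteed
                 GL_STATIC_DRAW);
}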
cplusplus.com:
Vector containers are implemented as dynamic arrays; Just as regular arrays, vector containers have their elements stored in contiguous storage locations, which means that their elements can be accessed not only using iterators but also using offsets on regular pointers to elements.
Yes, the elements of a std::vector are guaranteed to be contiguous.