Create C++ array of unknown type - c++

Is there some way to create an array in C++ where we don't know the type, but we do know its size and alignment requirements?
Let's say we have a template:
template<typename T>
T* create_array(size_t numElements) { return new T[numElements]; }
This works because each element T has a size and alignment known at compile time. But I'm looking for a way to defer the actual creation by extracting the size and alignment and passing them on. This is the interface I'm after:
// my_header.hpp
#include <cstddef>  // size_t

// "internal" helper function, implementation in source file!
void* _create_array(size_t s, size_t a, size_t n);

template<typename T>
T* create_array(size_t numElements) {
    return (T*)_create_array(sizeof(T), alignof(T), numElements);
}
Can we implement this in a source file?:
#include "my_header.hpp"
void* _create_array(size_t s, size_t a, size_t n) {
// ... ?
}
Requirements:
Each array element must have the correct alignment.
The total array size must be equal to s*n, and be aligned to a.
Type safety is assumed to be managed by the templated interface.
Indexing into the array should use correct size and align offsets.
I'm using C++20, so newer features may also be considered.
In advance, thank you!

While you can also implement this yourself, you can simply use std::allocator:
#include <memory>  // std::allocator, std::allocator_traits

template<typename T>
constexpr T* create_array(size_t numElements) {
    std::allocator<T> a;
    return std::allocator_traits<decltype(a)>::allocate(a, numElements);
}
and then
template<typename T>
constexpr void destroy_array(T* ptr, size_t numElements) noexcept {
    std::allocator<T> a;
    std::allocator_traits<decltype(a)>::deallocate(a, ptr, numElements);
}
(Note that std::allocator_traits::deallocate needs the element count as well, so destroy_array has to take it as a parameter.)
The benefit over doing it yourself via a call to operator new is that this will also be usable in constant expression evaluation.
You then need to create objects in the returned storage via placement-new, std::allocator_traits<std::allocator<T>>::construct or std::construct_at.
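For illustration, constructing and destroying objects in that storage might look like this (a minimal sketch using the create_array/destroy_array templates above):
#include <cstddef>
#include <memory>
#include <string>

int main() {
    std::string* arr = create_array<std::string>(3);
    for (std::size_t i = 0; i < 3; ++i)
        std::construct_at(arr + i, "element");  // begin each object's lifetime
    // ... use arr[0..2] ...
    for (std::size_t i = 0; i < 3; ++i)
        std::destroy_at(arr + i);               // end lifetimes before freeing
    destroy_array(arr, 3);
    return 0;
}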
Anyway, first make sure that you really need to do all of this memory management manually. Standard library containers already offer similar functionality, e.g. std::vector has a .reserve member function to reserve memory in which objects can be placed later via push_back, emplace_back, resize, etc.
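For comparison, the standard-container route needs no manual allocation at all; a minimal sketch:
#include <string>
#include <vector>

int main() {
    std::vector<std::string> v;
    v.reserve(100);           // raw capacity for 100 elements, no objects yet
    v.emplace_back("hello");  // constructs elements in that storage on demand
    v.emplace_back("world");
    return 0;
}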
If you want to implement the above yourself, you basically need
#include <cstddef>  // size_t
#include <new>      // ::operator new, std::align_val_t
//...
void* create_array(size_t s, size_t a, size_t n) {
    // CAREFUL: check here that `s*n` does not overflow! Potential for vulnerabilities!
    return ::operator new(s * n, std::align_val_t{a});
}

void destroy_array(void* ptr, size_t a) noexcept {
    ::operator delete(ptr, std::align_val_t{a});
}
(Note that identifiers starting with an underscore are reserved in the global namespace scope and may not be used there as function names, so I changed the name.)
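The overflow check mentioned in the comment above could look like this (a minimal sketch; how you report the error is up to you):
#include <cstddef>
#include <limits>
#include <new>

void* create_array(size_t s, size_t a, size_t n) {
    // Refuse requests where s * n would overflow size_t.
    if (s != 0 && n > std::numeric_limits<size_t>::max() / s)
        throw std::bad_array_new_length();
    return ::operator new(s * n, std::align_val_t{a});
}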

Related

How to custom deallocate an object from a base class pointer?

I have a class hierarchy that I'm storing in a std::vector<std::unique_ptr<Base>>. There is frequent adding and removing from this vector, so I wanted to experiment with custom memory allocation to avoid all the calls to new and delete. I'd like to use STL tools only, so I'm trying std::pmr::unsynchronized_pool_resource for the allocation, and then adding a custom deleter to the unique_ptr.
Here's what I've come up with so far:
#include <memory_resource>
#include <vector>
#include <memory>
// dummy classes
struct Base
{
virtual ~Base() {}
};
struct D1 : public Base
{
D1(int i_) : i(i_) {}
int i;
};
struct D2 : public Base
{
D2(double d_) : d(d_) {}
double d;
};
// custom deleter: this is what I'm concerned about
struct Deleter
{
Deleter(std::pmr::memory_resource& m, std::size_t s, std::size_t a) :
mr(m), size(s), align(a) {}
void operator()(Base* a)
{
a->~Base();
mr.get().deallocate(a, size, align);
}
std::reference_wrapper<std::pmr::memory_resource> mr;
std::size_t size, align;
};
template <typename T>
using Ptr = std::unique_ptr<T, Deleter>;
// replacement function for make_unique
template <typename T, typename... Args>
Ptr<T> newT(std::pmr::memory_resource& m, Args... args)
{
auto aPtr = m.allocate(sizeof(T), alignof(T));
return Ptr<T>(new (aPtr) T(args...), Deleter(m, sizeof(T), alignof(T)));
}
// simple construction of vector
int main()
{
auto pool = std::pmr::unsynchronized_pool_resource();
auto vec = std::vector<Ptr<Base>>();
vec.push_back(newT<Base>(pool));
vec.push_back(newT<D1>(pool, 2));
vec.push_back(newT<D2>(pool, 4.0));
return 0;
}
This compiles, and I'm pretty sure that it doesn't leak (please tell me if I'm wrong!) But I'm not too happy with the Deleter class, which has to take extra arguments for the size and alignment.
I first tried making it a template, so that I could work out the size and alignment automatically:
template <typename T>
struct Deleter
{
Deleter(std::pmr::memory_resource& m) :
mr(m) {}
void operator()(Base* a)
{
a->~Base();
mr.get().deallocate(a, sizeof(T), alignof(T));
}
std::reference_wrapper<std::pmr::memory_resource> mr;
};
But then the unique_ptrs for each type are incompatible, and the vector won't hold them.
Then I tried deallocating through the base class:
mr.get().deallocate(a, sizeof(Base), alignof(Base));
But this is clearly a bad idea, as the memory that's deallocated has a different size and alignment from what was allocated.
So, how do I deallocate through the base pointer without storing the size and alignment at runtime? delete seems to manage, so it seems like it should be possible here as well.
After writing my answer, I would recommend you stick with your code.
Letting unique_ptr handle that storage is not bad at all: the deleter is allocated on the stack if the unique_ptr itself is, it is safe, and there is no additional overhead at deallocation time. The latter is not true for std::shared_ptr, which uses type erasure for its deleters.
I think it is the cleanest and simplest way how to achieve the goal. And there's nothing wrong with your code as far as I can tell.
Most allocators to my knowledge allocate extra space for storing any data they need for deallocation directly next to the pointer they return to you. We can do the same to the aPtr blob:
// Extra information needed for deallocation
struct Header {
    std::size_t s;
    std::size_t a;
    std::pmr::memory_resource* res;
};

// Deleter is now just a free function
void deleter(Base* a) {
    // First destroy the object itself.
    a->~Base();
    // Obtain the header, which sits directly before the object.
    auto* ptr = reinterpret_cast<unsigned char*>(a);
    Header* header = reinterpret_cast<Header*>(ptr - sizeof(Header));
    // Deallocate the allocated blob, which starts at the header, not at `a`.
    header->res->deallocate(ptr - sizeof(Header), header->s, header->a);
}

// Use the new custom function.
template <typename T>
using Ptr = std::unique_ptr<T, decltype(&deleter)>;

template <typename T, typename... Args>
Ptr<T> newT(std::pmr::memory_resource& m, Args... args) {
    // Let the compiler calculate the correct way how to store `T` and the
    // header together.
    struct Storage {
        Header header;
        T type;
    };
    static_assert(sizeof(Storage) == (sizeof(Header) + sizeof(T)),
                  "No padding bytes allowed in Storage.");
    Header h = {sizeof(Storage), alignof(Storage), &m};
    auto aPtr = m.allocate(h.s, h.a);
    // Construct the storage with the real header and the object.
    Storage* storage = new (aPtr) Storage{h, T(args...)};
    return Ptr<T>(&storage->type, deleter);
}
We store all the information necessary for deallocation in the Header structure.
Allocating both T and the header in a single blob is not as straightforward as it might seem - see below. We need at least sizeof(T)+sizeof(Header) bytes, but we must also respect alignof(T). So we let the compiler figure it out via Storage.
This way we can allocate T properly and return &storage->type to the user. The issue now is that there might be some amount of padding, unknown to the deleter, in Storage between header and type, so the deleter function would not be able to recover &storage->header from the &storage->type pointer alone.
I have two proposals for this:
Just assert that the padding amount is 0.
Manually write the header at a known place, although I cannot guarantee that this is 100% safe.
Restricting to known padding
Extra padding in Storage is unlikely, because Header is aligned to 8 bytes on typical 64-bit systems, which should generally be enough for all Ts, but there is no such alignment guarantee in C++. The vtable pointer makes this even less certain IMHO, and the fact that alignas(N) gives the user some control over alignment (increasing it, e.g. for vector instructions) doesn't help either. So to be safe, we can just use a static_assert: if any "weird" type comes along, the code will not compile and remains safe.
If that happens, one can manually add extra padding to Storage and modify the subtraction amount, as sketched below. The cost would be extra memory for that padding on every allocation.
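A sketch of that padding idea (my own illustration, not part of the original answer): force the header into a block whose size is a fixed upper bound on any alignment we expect, so type always starts at the same known offset:
// Assumption: no T used with newT has alignof(T) > 64.
struct alignas(64) PaddedHeader {
    Header h;
};
static_assert(sizeof(PaddedHeader) == 64);

template <typename T>
struct PaddedStorage {
    static_assert(alignof(T) <= 64, "increase the fixed padding");
    PaddedHeader header;  // always exactly 64 bytes, at offset 0
    T type;               // therefore always at offset 64
};
// The deleter can then always find the Header (and the blob start) at
// reinterpret_cast<unsigned char*>(a) - sizeof(PaddedHeader), at the cost of
// up to 64 - sizeof(Header) wasted bytes per allocation.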
Writing the header manually
Another option is to ignore the storage->header member and write the header ourselves directly before type, potentially into the padding area. This requires std::memcpy, because we cannot simply placement-new a Header there due to a possible alignof(Header) mismatch. The same applies in the deleter itself: since there is no Header object at ptr - sizeof(Header), a plain reinterpret_cast<Header*>(ptr - sizeof(Header)) would break the strict aliasing rule.
#include <cstring>  // std::memcpy

// Extra information needed for deallocation
struct Header {
    std::size_t s;
    std::size_t a;
    std::pmr::memory_resource* res;
    // Pointer originally returned by allocate(); the deleter needs it because
    // it cannot know how much padding precedes `type`.
    void* blob;
};

// Deleter is now just a free function
void deleter(Base* a) {
    // First destroy the object itself.
    a->~Base();
    // Read the header that was written directly before the object.
    auto* ptr = reinterpret_cast<unsigned char*>(a);
    Header header;
    std::memcpy(&header, ptr - sizeof(Header), sizeof(Header));
    // Deallocate the originally allocated blob.
    header.res->deallocate(header.blob, header.s, header.a);
}

// Use the new custom function.
template <typename T>
using Ptr = std::unique_ptr<T, decltype(&deleter)>;

template <typename T, typename... Args>
Ptr<T> newT(std::pmr::memory_resource& m, Args... args) {
    // Let the compiler calculate the correct way how to store `T` and the
    // header together.
    struct Storage {
        Header header;
        // (possibly padding here)
        T type;
    };
    Header h = {sizeof(Storage), alignof(Storage), &m, nullptr};
    auto aPtr = m.allocate(h.s, h.a);
    h.blob = aPtr;  // remember the exact pointer that deallocate() must receive
    // Construct with a dummy header; the real one is written below.
    Storage* storage = new (aPtr) Storage{{}, T(args...)};
    // Write our own header at the known -sizeof(Header) offset before `type`,
    // potentially into the padding area.
    auto* ptr = reinterpret_cast<unsigned char*>(&storage->type);
    std::memcpy(ptr - sizeof(Header), &h, sizeof(Header));
    return Ptr<T>(&storage->type, deleter);
}
I know this solution is safe w.r.t strict aliasing, object lifetime and allocating T. What I am not 100% certain about is whether the compiler is allowed to store anything relevant to T inside the potential padding bytes, which would thus be overwritten by the manually-written header.

Allocation: Smart pointers and dynamically sized structure

I have a structure that can be dynamically sized.
I want to use a smart pointer (unique_ptr here) to allocate this structure.
The problem is that this struct is dynamically sized.
Here is the structure (of Windows library):
typedef struct _STORAGE_DEPENDENCY_INFO
{
STORAGE_DEPENDENCY_INFO_VERSION Version;
ULONG NumberEntries;
union
{
STORAGE_DEPENDENCY_INFO_TYPE_1 Version1Entries[];
STORAGE_DEPENDENCY_INFO_TYPE_2 Version2Entries[];
};
} STORAGE_DEPENDENCY_INFO, *PSTORAGE_DEPENDENCY_INFO;
I CAN get the total size of the structure.
So, I know I can do this with malloc:
STORAGE_DEPENDENCY_INFO *info = std::malloc(struct_size);
But I don't know how to allocate it with make_unique.
You can use smart pointers with any allocator-deallocator pair by using a custom deallocation function. Here is an example using std::malloc and std::free:
#include <cstdlib>      // std::malloc, std::free
#include <memory>       // std::unique_ptr
#include <type_traits>  // std::is_trivial_v

struct freer
{
    void operator()(void* p) const noexcept {
        std::free(p);
    }
};

template<class T>
using unique_c_ptr = std::unique_ptr<T, freer>;

template<class T>
[[nodiscard]] unique_c_ptr<T>
make_unique_malloc(std::size_t size) noexcept
{
    static_assert(std::is_trivial_v<T>);
    return unique_c_ptr<T>{static_cast<T*>(std::malloc(size))};
}
auto unique = make_unique_malloc<STORAGE_DEPENDENCY_INFO>(struct_size);
STORAGE_DEPENDENCY_INFO_TYPE_1 Version1Entries[];
An array of unspecified bound cannot be a non-static data member in C++, so this class is ill-formed. There is no such thing as a "dynamically sized structure" in standard C++.
Given that it comes from a system library, it probably relies on some language extension.
The C language does have "flexible array members", but those are allowed only in structs, not in unions.
STORAGE_DEPENDENCY_INFO *info = std::malloc(struct_size);
std::malloc returns a void* so this implicit conversion is also ill-formed in C++.

std::unique_ptr<T[]> and custom allocator deleter

I am trying to use std::unique_ptr<T[]> with custom memory allocators. Basically, I have custom allocators that are subclasses of IAllocator, which provides the following methods:
void* Alloc( size_t size )
template<typename T> T* AllocArray( size_t count )
void Free( void* mem )
template<typename T> void FreeArray( T* arr, size_t count )
Since the underlying memory might come from a pre-allocated block, I need the special ...Array() methods to allocate and free arrays; they allocate/free memory and call T() / ~T() on every element in the range.
Now, as far as I know, custom deleters for std::unique_ptr use the signature:
void operator()(T* ptr) const
In the case of unique_ptr<T[]>, normally you would call delete[] and be done with it, but I have to call FreeArray<T>, for which I need the number of elements in the range. Given only the raw pointer, I think there is no way of obtaining the size of the range, hence the only thing I could come up with is this:
std::unique_ptr<T[], MyArrDeleter> somePtr( allocator.AllocArray<T>( 20 ), MyArrDeleter( allocator, 20 ) );
Where essentially the size of the array has to be passed into the deleter object manually. Is there a better way to do this? This seems quite error-prone to me...
Yes, there most certainly is a better way:
Use a maker-function.
template<class T, class A>
std::unique_ptr<T[], MyArrDeleter> my_maker(size_t count, A&& allocator) {
    return std::unique_ptr<T[], MyArrDeleter>(
        allocator.template AllocArray<T>(count),
        MyArrDeleter(allocator, count));
}
auto p = my_maker<T>(42, allocator);
auto p = my_maker<T>(42, allocator);
A T* doesn't carry such information, nor does unique_ptr know the size of the array (since it directly uses delete[], as you stated). You could make the element type itself a unique_ptr<T> to manage destruction automatically, but this may not be possible if the whole contiguous T* block is managed by a memory allocator (rather than each T object individually). E.g.:
unique_ptr<unique_ptr<Foo>[]> data;
data.reset(new unique_ptr<Foo>[50]);
data[0].reset(new Foo());

C++ STL with jemalloc

How is it possible to use C++ STL containers with jemalloc (or any other malloc implementation)?
Is it as simple as including jemalloc/jemalloc.h? Or should I write an allocator for them?
Edit: The application I'm working on allocates and frees relatively small objects over its lifetime. I want to replace the default allocator, because benchmarks showed that the application doesn't scale beyond 2 cores. Profiling showed that it was waiting on memory allocation, which is what caused the scaling issues. As I understand it, jemalloc will help with that.
I'd like to see a solution, that's platform-neutral as the application has to work on both Linux and Windows. (Linking against a different implementation is easy under Linux, but it's very hard on Windows as far as I know.)
C++ allows you to replace operator new. If this replacement operator new calls je_malloc, then std::allocator will indirectly call je_malloc, and in turn all standard containers will.
This is by far the simplest approach. Writing a custom allocator requires writing an entire class. Replacing malloc may not be sufficient (there's no guarantee that the non-replaced operator new calls malloc), and it has the risks noted by Adrian McCarthy below.
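For illustration, a minimal sketch of such a replacement (assuming jemalloc is built with the je_ symbol prefix; an unprefixed build would simply use the malloc/free coming from the linked-in jemalloc):
#include <cstddef>
#include <new>
#include <jemalloc/jemalloc.h>  // je_malloc, je_free

// Replace the global allocation functions; std::allocator and therefore all
// standard containers end up calling these.
void* operator new(std::size_t size) {
    if (void* p = je_malloc(size ? size : 1))
        return p;
    throw std::bad_alloc();
}

void* operator new[](std::size_t size) {
    return ::operator new(size);
}

void operator delete(void* p) noexcept { je_free(p); }
void operator delete[](void* p) noexcept { je_free(p); }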
If you want to replace malloc everywhere in your program (which I wanted to and also seems the only logical solution), then all you have to do is link against it.
So, if you use gcc then all you have to do is:
g++ yourprogram.cpp -ljemalloc
But if that's not possible, then you have to use jemalloc via its prefixed functions, e.g. je_malloc and je_free, and then overload the new and delete operators yourself.
There's no need to include any header if you don't use implementation-specific features (statistics, mostly).
Writing an allocator is going to be the easiest solution, since the STL was designed to have interchangeable allocators.
Some projects play games trying to get the alternate malloc implementation to replace the malloc and new provided by the compiler's companion library. That's prone to all sorts of issues because you end up relying on specific implementation details of your compiler and the library it normally uses. This path is fraught with danger.
Some dangers of trying to replace malloc globally:
Static initializer order has limited guarantees in C++. There's no way to guarantee the allocator replacement is initialized before the first caller tries to use it, unless you ban static objects that might allocate memory. The runtime doesn't have this problem, since the compiler and the runtime work together to make sure the runtime is fully initialized before initializing any statics.
If you dynamically link to the runtime library, then there's no way to ensure some of the runtime library's code isn't already bound to its own implementation. Trying to modify the compiler's runtime library might lead to licensing issues when redistributing your application.
All other methods of allocation might not always ultimately rely on malloc. For example, an implementation of new might bypass malloc for large allocations and directly call the OS to allocate memory. That requires tracking to make sure such allocations aren't accidentally sent to the replacement free.
I believe Chromium and Firefox have both replaced the allocator, but they play some dirty tricks and probably have to update their approach as the compiler, linker, and runtime evolve.
Make your own allocator. Like this:
#include <vector>
#include <limits>               // std::numeric_limits
#include <cstddef>              // std::size_t, std::ptrdiff_t
#include <jemalloc/jemalloc.h>  // je_malloc, je_free
template<typename T>
struct RemoveConst
{
typedef T value_type;
};
template<typename T>
struct RemoveConst<const T>
{
typedef T value_type;
};
template <class T>
class YourAlloc {
public:
// type definitions
typedef RemoveConst<T> Base;
typedef typename Base::value_type value_type;
typedef value_type* pointer;
typedef const value_type* const_pointer;
typedef value_type& reference;
typedef const value_type& const_reference;
typedef std::size_t size_type;
typedef std::ptrdiff_t difference_type;
// rebind allocator to type U
template <class U>
struct rebind {
typedef YourAlloc<U> other;
};
// return address of values
pointer address(reference value) const {
return &value;
}
const_pointer address(const_reference value) const {
return &value;
}
/* constructors and destructor
* - nothing to do because the allocator has no state
*/
YourAlloc() throw() {
}
YourAlloc(const YourAlloc&) throw() {
}
template <class U>
YourAlloc(const YourAlloc<U>&) throw() {
}
~YourAlloc() throw() {
}
// return maximum number of elements that can be allocated
size_type max_size() const throw() {
return std::numeric_limits<std::size_t>::max() / sizeof(T);
}
// allocate but don't initialize num elements of type T
pointer allocate(size_type num, const void* = 0) {
return (pointer)je_malloc(num * sizeof(T));
}
// initialize elements of allocated storage p with value value
void construct(pointer p, const T& value) {
// initialize memory with placement new
new((void*)p)T(value);
}
// destroy elements of initialized storage p
void destroy(pointer p) {
// destroy objects by calling their destructor
p->~T();
}
// deallocate storage p of deleted elements
void deallocate(pointer p, size_type num) {
je_free(p);
}
};
// return that all specializations of this allocator are interchangeable
template <class T1, class T2>
bool operator== (const YourAlloc<T1>&,
const YourAlloc<T2>&) throw() {
return true;
}
template <class T1, class T2>
bool operator!= (const YourAlloc<T1>&,
const YourAlloc<T2>&) throw() {
return false;
}
int main()
{
std::vector<int, YourAlloc<int>> vector;
return 0;
}
The code is copied from here
There may be problems if constructors aren't called. You can use the different forms of operator new (it has more options than plain new), which can either allocate raw memory without running a constructor, or construct an object in already allocated memory (placement new). http://www.cplusplus.com/reference/std/new/operator%20new%5B%5D/
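For example, a minimal sketch of splitting allocation and construction (Foo is a placeholder type):
#include <new>

struct Foo {
    int x;
    explicit Foo(int v) : x(v) {}
};

int main() {
    void* raw = ::operator new(sizeof(Foo));  // allocate only, no constructor runs
    Foo*  f   = new (raw) Foo(42);            // placement new: construct in place
    f->~Foo();                                // destroy explicitly
    ::operator delete(raw);                   // deallocate only
    return 0;
}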

Stack-buffer based STL allocator?

I was wondering if it is practicable to have a C++ standard library compliant allocator that uses a (fixed-size) buffer that lives on the stack.
Somehow, it seems this question has not been asked this way yet on SO, although it may have been implicitly answered elsewhere.
As far as my searches go, it should be possible to create an allocator that uses a fixed-size buffer. At first glance, this should also mean it is possible to have an allocator whose fixed-size buffer "lives" on the stack, but there appears to be no widespread implementation of this.
Let me give an example of what I mean:
{ ...
char buf[512];
typedef ...hmm?... local_allocator; // should use buf
typedef std::basic_string<char, std::char_traits<char>, local_allocator> lstring;
lstring str; // string object of max. 512 char
}
How would this be implementable?
The answer to this other question (thanks to R. Martinho Fernandes) links to a stack based allocator from the chromium sources: http://src.chromium.org/viewvc/chrome/trunk/src/base/stack_container.h
However, this class seems extremely peculiar, especially since this StackAllocator does not have a default ctor -- and there I was thinking that every allocator class needs a default ctor.
It's definitely possible to create a fully C++11/C++14 conforming stack allocator*. But you need to consider some of the ramifications about the implementation and the semantics of stack allocation and how they interact with standard containers.
Here's a fully C++11/C++14 conforming stack allocator (also hosted on my github):
#include <functional>
#include <memory>
template <class T, std::size_t N, class Allocator = std::allocator<T>>
class stack_allocator
{
public:
typedef typename std::allocator_traits<Allocator>::value_type value_type;
typedef typename std::allocator_traits<Allocator>::pointer pointer;
typedef typename std::allocator_traits<Allocator>::const_pointer const_pointer;
typedef typename Allocator::reference reference;
typedef typename Allocator::const_reference const_reference;
typedef typename std::allocator_traits<Allocator>::size_type size_type;
typedef typename std::allocator_traits<Allocator>::difference_type difference_type;
typedef typename std::allocator_traits<Allocator>::const_void_pointer const_void_pointer;
typedef Allocator allocator_type;
public:
explicit stack_allocator(const allocator_type& alloc = allocator_type())
: m_allocator(alloc), m_begin(nullptr), m_end(nullptr), m_stack_pointer(nullptr)
{ }
explicit stack_allocator(pointer buffer, const allocator_type& alloc = allocator_type())
: m_allocator(alloc), m_begin(buffer), m_end(buffer + N),
m_stack_pointer(buffer)
{ }
template <class U>
stack_allocator(const stack_allocator<U, N, Allocator>& other)
: m_allocator(other.m_allocator), m_begin(other.m_begin), m_end(other.m_end),
m_stack_pointer(other.m_stack_pointer)
{ }
constexpr static size_type capacity()
{
return N;
}
pointer allocate(size_type n, const_void_pointer hint = const_void_pointer())
{
if (n <= size_type(std::distance(m_stack_pointer, m_end)))
{
pointer result = m_stack_pointer;
m_stack_pointer += n;
return result;
}
return m_allocator.allocate(n, hint);
}
void deallocate(pointer p, size_type n)
{
if (pointer_to_internal_buffer(p))
{
m_stack_pointer -= n;
}
else m_allocator.deallocate(p, n);
}
size_type max_size() const noexcept
{
return m_allocator.max_size();
}
template <class U, class... Args>
void construct(U* p, Args&&... args)
{
m_allocator.construct(p, std::forward<Args>(args)...);
}
template <class U>
void destroy(U* p)
{
m_allocator.destroy(p);
}
pointer address(reference x) const noexcept
{
if (pointer_to_internal_buffer(std::addressof(x)))
{
return std::addressof(x);
}
return m_allocator.address(x);
}
const_pointer address(const_reference x) const noexcept
{
if (pointer_to_internal_buffer(std::addressof(x)))
{
return std::addressof(x);
}
return m_allocator.address(x);
}
template <class U>
struct rebind { typedef stack_allocator<U, N, allocator_type> other; };
pointer buffer() const noexcept
{
return m_begin;
}
private:
bool pointer_to_internal_buffer(const_pointer p) const
{
return (!(std::less<const_pointer>()(p, m_begin)) && (std::less<const_pointer>()(p, m_end)));
}
allocator_type m_allocator;
pointer m_begin;
pointer m_end;
pointer m_stack_pointer;
};
template <class T1, std::size_t N, class Allocator, class T2>
bool operator == (const stack_allocator<T1, N, Allocator>& lhs,
const stack_allocator<T2, N, Allocator>& rhs) noexcept
{
return lhs.buffer() == rhs.buffer();
}
template <class T1, std::size_t N, class Allocator, class T2>
bool operator != (const stack_allocator<T1, N, Allocator>& lhs,
const stack_allocator<T2, N, Allocator>& rhs) noexcept
{
return !(lhs == rhs);
}
This allocator uses a user-provided fixed-size buffer as an initial source of memory, and then falls back on a secondary allocator (std::allocator<T> by default) when it runs out of space.
Things to consider:
Before you just go ahead and use a stack allocator, you need to consider your allocation patterns. Firstly, when using a memory buffer on the stack, you need to consider what exactly it means to allocate and deallocate memory.
The simplest method (and the method employed above) is to simply increment a stack pointer for allocations, and decrement it for deallocations. Note that this severely limits how you can use the allocator in practice. It will work fine for, say, an std::vector (which will allocate a single contiguous memory block) if used correctly, but will not work for say, an std::map, which will allocate and deallocate node objects in varying order.
If your stack allocator simply increments and decrements a stack pointer, then you'll get undefined behavior if your allocations and deallocations are not in LIFO order. Even an std::vector will cause undefined behavior if it first allocates a single contiguous block from the stack, then allocates a second stack block, then deallocates the first block, which will happen every time the vector increases its capacity to a value that is still smaller than stack_size. This is why you'll need to reserve the stack size in advance. (But see the note below regarding Howard Hinnant's implementation.)
Which brings us to the question ...
What do you really want from a stack allocator?
Do you actually want a general purpose allocator that will allow you to allocate and deallocate memory chunks of various sizes in varying order (like malloc), except that it draws from a pre-allocated stack buffer instead of calling sbrk? If so, you're basically talking about implementing a general purpose allocator that maintains a free list of memory blocks somehow, except that the user provides it with a pre-existing stack buffer. This is a much more complex project. (And what should it do if it runs out of space? Throw std::bad_alloc? Fall back on the heap?)
The above implementation assumes you want an allocator that will simply use LIFO allocation patterns and fall back on another allocator if it runs out of space. This works fine for std::vector, which will always use a single contiguous buffer that can be reserved in advance. When std::vector needs a larger buffer, it will allocate a larger buffer, copy (or move) the elements in the smaller buffer, and then deallocate the smaller buffer. When the vector requests a larger buffer, the above stack_allocator implementation will simply fall back to a secondary allocator (which is std::allocator by default.)
So, for example:
const static std::size_t stack_size = 4;
int buffer[stack_size];
typedef stack_allocator<int, stack_size> allocator_type;
std::vector<int, allocator_type> vec((allocator_type(buffer))); // double parenthesis here for "most vexing parse" nonsense
vec.reserve(stack_size); // attempt to reserve space for 4 elements
std::cout << vec.capacity() << std::endl;
vec.push_back(10);
vec.push_back(20);
vec.push_back(30);
vec.push_back(40);
// Assert that the vector is actually using our stack
//
assert(
std::equal(
vec.begin(),
vec.end(),
buffer,
[](const int& v1, const int& v2) {
return &v1 == &v2;
}
)
);
// Output some values in the stack, we see it is the same values we
// inserted in our vector.
//
std::cout << buffer[0] << std::endl;
std::cout << buffer[1] << std::endl;
std::cout << buffer[2] << std::endl;
std::cout << buffer[3] << std::endl;
// Attempt to push back some more values. Since our stack allocator only has
// room for 4 elements, we cannot satisfy the request for an 8 element buffer.
// So, the allocator quietly falls back on using std::allocator.
//
// Alternatively, you could modify the stack_allocator implementation
// to throw std::bad_alloc
//
vec.push_back(50);
vec.push_back(60);
vec.push_back(70);
vec.push_back(80);
// Assert that we are no longer using the stack buffer
//
assert(
!std::equal(
vec.begin(),
vec.end(),
buffer,
[](const int& v1, const int& v2) {
return &v1 == &v2;
}
)
);
// Print out all the values in our vector just to make sure
// everything is sane.
//
for (auto v : vec) std::cout << v << ", ";
std::cout << std::endl;
See: http://ideone.com/YhMZxt
Again, this works fine for vector - but you need to ask yourself what exactly you intend to do with the stack allocator. If you want a general purpose memory allocator that just happens to draw from a stack buffer, you're talking about a much more complex project. A simple stack allocator, however, which merely increments and decrements a stack pointer, will work for a limited set of use cases. Note that for non-POD types, you'll need to use std::aligned_storage<sizeof(T), alignof(T)> (or an alignas byte array, as sketched below) to create the actual stack buffer.
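For example, a raw, suitably aligned stack buffer for N objects of a non-POD type might be declared like this (a minimal sketch; the alignas byte array is the modern equivalent of std::aligned_storage):
#include <cstddef>
#include <string>

int main() {
    constexpr std::size_t N = 16;
    using T = std::string;  // any non-POD element type
    // Raw bytes with the right size and alignment; no T objects exist yet.
    alignas(T) unsigned char raw[N * sizeof(T)];
    // This pointer could be handed to the stack_allocator(pointer buffer)
    // constructor shown above as its internal buffer.
    T* first = reinterpret_cast<T*>(raw);
    (void)first;
    return 0;
}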
I'd also note that unlike Howard Hinnant's implementation, the above implementation doesn't explicitly make a check that when you call deallocate(), the pointer passed in is the last block allocated. Hinnant's implementation will simply do nothing if the pointer passed in isn't a LIFO-ordered deallocation. This will enable you to use an std::vector without reserving in advance because the allocator will basically ignore the vector's attempt to deallocate the initial buffer. But this also blurs the semantics of the allocator a bit, and relies on behavior that is pretty specifically bound to the way std::vector is known to work. My feeling is that we may as well simply say that passing any pointer to deallocate() which wasn't returned via the last call to allocate() will result in undefined behavior and leave it at that.
*Finally - the following caveat: it seems to be debatable whether or not the function that checks whether a pointer is within the boundaries of the stack buffer is even defined behavior by the standard. Order-comparing two pointers from different new/malloc'd buffers is arguably implementation-defined behavior (even with std::less), which perhaps makes it impossible to write a standards-conforming stack allocator implementation that falls back on heap allocation. (But in practice this won't matter unless you're running an 80286 on MS-DOS.)
** Finally (really now), it's also worth noting that the word "stack" in stack allocator is sort of overloaded to refer both to the source of memory (a fixed-size stack array) and the method of allocation (a LIFO increment/decrement stack pointer). When most programmers say they want a stack allocator, they're thinking about the former meaning without necessarily considering the semantics of the latter, and how these semantics restrict the use of such an allocator with standard containers.
Apparently, there is a conforming Stack Allocator from one Howard Hinnant.
It works by using a fixed size buffer (via a referenced arena object) and falling back to the heap if too much space is requested.
This allocator doesn't have a default ctor, and since Howard says:
I've updated this article with a new allocator that is fully C++11 conforming.
I'd say that it is not a requirement for an allocator to have a default ctor.
Starting in C++17 it's actually quite simple to do.
Full credit goes to the author of the dumbest allocator, as that's what this is based on.
The dumbest allocator is a monotonic bump allocator which takes a char[] resource as its underlying storage. In the original version, that char[] is placed on the heap via mmap, but it's trivial to change it to point at a char[] on the stack.
template<std::size_t Size=256>
class bumping_memory_resource {
public:
char buffer[Size];
char* _ptr;
explicit bumping_memory_resource()
: _ptr(&buffer[0]) {}
void* allocate(std::size_t size) noexcept {
auto ret = _ptr;
_ptr += size;
return ret;
}
void deallocate(void*) noexcept {}
};
This allocates Size bytes on the stack on creation, default 256.
template <typename T, typename Resource=bumping_memory_resource<256>>
class bumping_allocator {
Resource* _res;
public:
using value_type = T;
explicit bumping_allocator(Resource& res)
: _res(&res) {}
bumping_allocator(const bumping_allocator&) = default;
template <typename U>
bumping_allocator(const bumping_allocator<U,Resource>& other)
: bumping_allocator(other.resource()) {}
Resource& resource() const { return *_res; }
T* allocate(std::size_t n) { return static_cast<T*>(_res->allocate(sizeof(T) * n)); }
void deallocate(T* ptr, std::size_t) { _res->deallocate(ptr); }
friend bool operator==(const bumping_allocator& lhs, const bumping_allocator& rhs) {
return lhs._res == rhs._res;
}
friend bool operator!=(const bumping_allocator& lhs, const bumping_allocator& rhs) {
return lhs._res != rhs._res;
}
};
And this is the actual allocator. Note that it would be trivial to add a reset to the resource manager, letting you create a new allocator starting at the beginning of the region again. You could also implement a ring buffer, with all the usual risks thereof.
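A minimal usage sketch (my own example, assuming the two classes above): a vector whose element storage comes out of a 256-byte buffer that lives on the stack inside the resource object:
#include <vector>

int main() {
    bumping_memory_resource<256> res;             // arena lives on the stack
    bumping_allocator<int> alloc(res);
    std::vector<int, bumping_allocator<int>> v(alloc);
    v.reserve(32);                                // 128 bytes, fits in the arena
    for (int i = 0; i < 32; ++i) v.push_back(i);  // no heap allocation involved
    // Note: this simple resource never checks for exhaustion, so allocating
    // past the 256-byte buffer would be undefined behavior.
    return 0;
}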
As for when you might want something like this: I use it in embedded systems. Embedded systems usually don't react well to heap fragmentation, so having the ability to use dynamic allocation that doesn't go on the heap is sometimes handy.
It really depends on your requirements. Sure, if you like you can create an allocator that operates only on the stack, but it would be very limited, since a stack object is not accessible from everywhere in the program the way a heap object would be.
I think this article explains allocators very well:
http://www.codeguru.com/cpp/cpp/cpp_mfc/stl/article.php/c4079
A stack-based STL allocator is of such limited utility that I doubt you will find much prior art. Even the simple example you cite quickly blows up if you later decide you want to copy or lengthen the initial lstring.
For other STL containers, such as the associative ones (tree-based internally), and even vector and deque, which use one or more contiguous blocks of RAM, the memory-usage semantics quickly become unmanageable on the stack in almost any real-world usage.
This is actually an extremely useful practice and is used quite a bit in performance-oriented development, such as games. Embedding memory inline on the stack, or within the allocation of a class structure, can be critical for speed and/or for managing the container.
To answer your question, it comes down to the implementation of the STL container. If the container not only instantiates your allocator but also keeps a reference to it as a member, then you are good to go to create a fixed heap; I've found this is not always the case, as it is not required by the spec. Otherwise it becomes problematic. One solution is to wrap the container (vector, list, etc.) with another class that contains the storage, and then use an allocator that draws from that. This could require a lot of template magickery (tm).