Compile-time method to determine whether an object has automatic storage duration - C++

I'd like to be able to enforce at compile time that a particular type can be used only to create objects with automatic storage duration.
template<typename T, typename Alloc>
struct Array
{
    T* data;                 // owned resource
    Array(std::size_t size); // allocates via Alloc
    ~Array();                // deallocates via Alloc
};
typedef Array<int, AutoAllocator<int>> AutoArray;
void foo(AutoArray a)              // ok
{
    AutoArray l = AutoArray();     // ok
    static AutoArray s;            // error
    new AutoArray();               // error
    std::vector<AutoArray> v(1);   // error
}
The application for this would be to enable choosing an optimal allocation strategy for resources owned by an instance of AutoArray. The idea is that the resource allocation pattern required for objects with automatic storage duration is compatible with a LIFO resource allocator.
What method could I use to achieve this in C++?
EDIT: The secondary goal is to allow the allocation strategy for Array to be transparently switched by dropping in either AutoAllocator or the default std::allocator.
typedef Array<int, std::allocator<int>> DynamicArray;
Assume that there is a large base of code that already uses DynamicArray.

This cannot be done. Consider that you created a type that held this as a member. When the compiler generates the code for that type's constructor, it does not know where the object is being created: is the complete object on the stack, or on the heap?
You need to solve your problem with a different mindset. For example, you can pass the allocator to the constructor of the object (the way BSL does) and possibly default to a safe allocator (based on new-delete); then, for those use cases where a LIFO allocator is a better option, the user can explicitly request it.
This won't be the same as a compiler error, but it will be obvious enough to detect on a code review.
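To make the constructor-injection idea concrete, here is a minimal sketch only; AllocatorBase, NewDeleteAllocator, and defaultAllocator are names invented for this sketch rather than taken from any library, and element construction is omitted to keep it short:
#include <cstddef>
#include <new>

// Hypothetical allocator protocol for the sketch (not a real library API).
struct AllocatorBase {
    virtual void* allocate(std::size_t bytes) = 0;
    virtual void  deallocate(void* p) = 0;
    virtual ~AllocatorBase() = default;
};

struct NewDeleteAllocator : AllocatorBase {
    void* allocate(std::size_t bytes) override { return ::operator new(bytes); }
    void  deallocate(void* p) override { ::operator delete(p); }
};

template<typename T>
struct Array {
    // Safe default; callers that know the Array has automatic storage duration
    // can pass a LIFO allocator explicitly instead.
    explicit Array(std::size_t size, AllocatorBase* alloc = defaultAllocator())
        : alloc_(alloc),
          data_(static_cast<T*>(alloc_->allocate(size * sizeof(T)))),
          size_(size) {}                        // element construction omitted

    ~Array() { alloc_->deallocate(data_); }     // element destruction omitted

    Array(const Array&) = delete;
    Array& operator=(const Array&) = delete;

private:
    static AllocatorBase* defaultAllocator() {
        static NewDeleteAllocator a;            // new/delete-based fallback
        return &a;
    }

    AllocatorBase* alloc_;
    T* data_;
    std::size_t size_;
};
With this shape, static or heap instances remain legal; the difference is that the LIFO strategy becomes an explicit, reviewable constructor argument rather than a compile-time property of the type.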
If you are really interested in advanced uses of allocators, you might want to take a look at the BSL replacement for the standard library, as it allows for polymorphic allocators that are propagated to the members of containers. In the BSL world, your examples would become:
// Assume a bslma::Allocator implementing LIFO; Type uses that protocol
LifoAllocator alloc; // implements the bslma::Allocator protocol
Type l(&alloc); // by convention bslma::Allocator by pointer
static Type s; // defaults to new-delete if not passed
new (&alloc) Type(&alloc); // both 'Type' and its contents share the allocator
// if the lifetime makes sense, if not:
new Type; // not all objects need to use the same allocator
bsl::vector<Type> v(&alloc);
v.resize(1); // nested object uses the allocator in the container
Using allocators in general is not simple, and you will have to be careful of the relative lifetimes of the objects with respect to each other and to the allocators.

Related

Can I use an int (as opposed to a char) array as a memory arena where objects are created with placement new?

The question concerns a home-grown container template (a kind of std::array/vector hybrid) which holds an untyped array for storage. New elements are added by a push_back() member function which copy-constructs an element via placement new. This way the container does not require the contained type to have a default constructor, and we avoid default-constructing potentially never needed elements.
Typically, such storage would be an array of a byte-like type such as unsigned char or std::byte.
We are using a bouquet of compilers. One of them predates the C++11 alignment facilities like alignas or aligned_storage. Absent those, they all need different pragmas or attributes to guarantee alignment. In order to simplify the build and avoid manual alignment computation noise, we had the idea to use an array of 32-bit integers, which have an alignment guarantee. Here is the core of the implementation:
template <class T> struct vec
{
    uint32_t storage[NUM];
    T *freeMem;

    vec() : freeMem((T *)storage) {}
    T *push_back(const T &t) { return new (freeMem++) T(t); }
};
Notably, we use a typed pointer (of a different type than the storage array) to pass the storage location to placement new; we think that's not an aliasing violation because we don't read or write through it before an object of type T is created.
Also notable is that the newly created objects may incompletely straddle two or more of the original int objects in the storage area, if sizeof(T) is not a multiple of sizeof(uint32_t).
I would think we have neither aliasing nor object lifetime issues. Is that so?
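Where the C++11 facilities mentioned above are available, the usual formulation uses alignas on a byte array rather than the uint32_t trick; a minimal sketch for comparison (NUM is made a template parameter here only so the sketch is self-contained, and element destruction is still omitted, as in the question):
#include <cstddef>
#include <new>

// Same push_back-only container, but with storage whose alignment is stated
// explicitly instead of being borrowed from uint32_t.
template <class T, std::size_t NUM>
struct vec11
{
    alignas(T) unsigned char storage[NUM * sizeof(T)]; // raw bytes, aligned for T
    T *freeMem;

    vec11() : freeMem(reinterpret_cast<T *>(storage)) {}
    T *push_back(const T &t) { return new (freeMem++) T(t); }
};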

Can I use allocator specified for some type to allocate objects of another type in C++?

Some container A has a template parameter Alloc (that is a template too) representing an allocator type. A specifies Alloc for the type A::Node.
template <template <typename> class Alloc>
class A {
    struct Node {
    };
    Alloc<Node> allocator_; // the allocator object
};
Please excuse me for possibly wrong C++ code above.
So, allocator_.allocate(1) will allocate sizeof(A::Node) bytes. But during operation, container A needs memory for an object of a type other than A::Node, say a temporary string (of chars).
From a technical point of view, I could use the existing allocator in such a dirty way:
size_t string_len = 500;
// how many objects spanned in memory are enough to fit our string?
size_t equal_size = (string_len / sizeof(Node)) + 1;
auto mem = allocator_.allocate(equal_size);
char *p = (char*)mem; // reinterpret cast
// ... use p to store the string ...
memcpy(p, str_src, string_len);
// Now p is not needed, so return memory to allocator:
allocator_.deallocate(mem, equal_size);
Is there a less dirty approach, considering I need no more than one allocator and I wish to put all the memory management into it?
All this comes from these needs:
- to have a single allocator that could be killed to free all (possibly leaked) memory that A has allocated for any of its purposes during operation
- to have no more than one allocator (counting the default ::new/::delete)
std::allocator has a member template rebind for exactly that purpose:
std::allocator<Node> alloc;
std::allocator<Node>::rebind<char>::other char_alloc;
char * mem = char_alloc.allocate(string_len);
You can use an allocator's rebind for this. From this documentation:
A structure that enables an allocator for objects of one type to allocate storage for objects of another type.
It is built exactly for your case: taking an allocator type oriented to one type and building the corresponding one oriented to some other type.
In your case, it might look like
typename Alloc<T>::template rebind<Node>::other allocator_;
You should probably use the Alloc::rebind member template to get an allocator for that other object.
However, that does mean that you do have 2 allocators. The advantage of rebind is to allow the user of your template to specify the allocator type only for a single allocated type.
Also note that rebind is optional, so if you must support such allocators, you'll need to pass the other allocator as an argument, but you can still use the rebound allocator as a default value.
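To make the spelling concrete, here is a hedged sketch of how this might look inside the container; the container shape and member names are invented, and std::allocator_traits is the C++11-and-later way to obtain the rebound type even when the allocator provides no rebind of its own:
#include <cstddef>
#include <memory>

// Minimal sketch only: a container parameterized on an element allocator that
// internally needs Node and char allocations.
template <typename T, typename Alloc = std::allocator<T>>
class A {
    struct Node { T value; Node* next; };

    // Classic spelling (note the 'template' keyword, needed because Alloc is a
    // dependent name):
    //     typename Alloc::template rebind<Node>::other
    // std::allocator_traits supplies the rebind even for allocators that do not
    // define one, and keeps working after C++20 removed std::allocator's own
    // rebind member:
    using NodeAlloc = typename std::allocator_traits<Alloc>::template rebind_alloc<Node>;
    using CharAlloc = typename std::allocator_traits<Alloc>::template rebind_alloc<char>;

    NodeAlloc node_alloc_;
    CharAlloc char_alloc_;

public:
    char* acquire_temp(std::size_t n)          { return char_alloc_.allocate(n); }
    void  release_temp(char* p, std::size_t n) { char_alloc_.deallocate(p, n); }
};
Both rebound allocators are still driven by the single Alloc the user supplied, which is usually what "only one allocator" means in practice.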

Why doesn't the custom deleter increase the size of the unique_ptr object? [duplicate]

I am reading "Effective Modern C++". In the item related to std::unique_ptr it's stated that if the custom deleter is a stateless object, then no size penalty occurs, but if it's a function pointer or a std::function, a size penalty does occur. Could you explain why?
Let's say that we have the following code:
auto deleter_ = [](int *p) { doSth(p); delete p; };
std::unique_ptr<int, decltype(deleter_)> up(new int, deleter_);
To my understanding, the unique_ptr should have an internal object of type decltype(deleter_) and assign deleter_ to it. But obviously that's not what's happening. Could you explain the mechanism behind this using the smallest possible code example?
A unique_ptr must always store its deleter. Now, if the deleter is a class type with no state, then the unique_ptr can make use of empty base optimization so that the deleter does not use any additional space.
How exactly this is done differs between implementations. For instance, both libc++ and MSVC store the managed pointer and the deleter in a compressed pair, which automatically gets you empty base optimization if one of the types involved is an empty class.
From the libc++ implementation:
template <class _Tp, class _Dp = default_delete<_Tp> >
class _LIBCPP_TYPE_VIS_ONLY unique_ptr
{
public:
    typedef _Tp element_type;
    typedef _Dp deleter_type;
    typedef typename __pointer_type<_Tp, deleter_type>::type pointer;
private:
    __compressed_pair<pointer, deleter_type> __ptr_;
libstdc++ stores the two in a std::tuple, and some Google searching suggests their tuple implementation employs the empty base optimization, but I can't find any documentation stating so explicitly.
In any case, it is straightforward to demonstrate that both libc++ and libstdc++ use EBO to reduce the size of a unique_ptr with an empty deleter.
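For instance, a small check along these lines will show it; the exact numbers are implementation- and platform-dependent, and the values in the comment are only what one typically observes:
#include <cstdio>
#include <memory>

void del(int* p) { delete p; }

int main()
{
    auto lam = [](int* p) { delete p; };                  // stateless closure type
    std::unique_ptr<int> a(new int);                      // default_delete: empty
    std::unique_ptr<int, decltype(lam)> b(new int, lam);  // empty deleter
    std::unique_ptr<int, void (*)(int*)> c(new int, del); // function pointer: stored

    std::printf("%zu %zu %zu %zu\n",
                sizeof(void*), sizeof(a), sizeof(b), sizeof(c));
    // Typically prints "8 8 8 16" on a 64-bit target: the empty deleters add
    // nothing, while the function-pointer deleter doubles the size.
}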
If the deleter is stateless there's no space required to store it. If the deleter is not stateless then the state needs to be stored in the unique_ptr itself.
std::function and function pointers carry information that is only available at runtime, so that information must be stored in the unique_ptr alongside the managed pointer itself. This in turn requires space (in the unique_ptr itself) to store that extra state.
Perhaps understanding the Empty Base Optimization will help you understand how this could be implemented in practice.
The std::is_empty type trait is another tool that could be used in such an implementation.
How exactly library writers implement this is obviously up to them and what the standard allows.
From a unique_ptr implementation:
template<class _ElementT, class _DeleterT = std::default_delete<_ElementT>>
class unique_ptr
{
public:
    // public interface...
private:
    // using empty base class optimization to save space
    // making unique_ptr with default_delete the same size as pointer
    class _UniquePtrImpl : private deleter_type
    {
    public:
        constexpr _UniquePtrImpl() noexcept = default;
        // some other constructors...

        deleter_type& _Deleter() noexcept
        { return *this; }

        const deleter_type& _Deleter() const noexcept
        { return *this; }

        pointer& _Ptr() noexcept
        { return _MyPtr; }

        const pointer _Ptr() const noexcept
        { return _MyPtr; }

    private:
        pointer _MyPtr;
    };

    _UniquePtrImpl _MyImpl;
};
The _UniquePtrImpl class contains the pointer and derives from the deleter_type.
If the deleter happens to be stateless, the base class can be optimized so that it takes no bytes for itself. Then the whole unique_ptr can be the same size as the contained pointer - that is: the same size as an ordinary pointer.
In fact there will be a size penalty for lambdas that are not stateless, i.e., lambdas that capture one or more values.
But for non-capturing lambdas, there are two key facts to notice:
The type of the lambda is unique and known only to the compiler.
Non-capturing lambdas are stateless.
Therefore, the compiler is able to invoke the lambda purely based on its type, which is recorded as part of the type of the unique_ptr; no extra runtime information is required.
That works precisely because non-capturing lambdas are stateless. In terms of the size penalty question, there is of course nothing special about non-capturing lambdas compared to any other stateless deletion functor type.
Note that std::function is not stateless, which is why the same reasoning does not apply to it.
Finally, note that although stateless objects are typically required to have nonzero size in order to ensure that they have unique addresses, stateless base classes are not required to add to the total size of the derived type; this is called the empty base optimization. Thus unique_ptr can be implemented (as in Bo Persson's answer) as a type that derives from the deleter type, which, if it's stateless, will not contribute a size penalty. (This may in fact be the only way to correctly implement unique_ptr without a size penalty for stateless deleters, but I'm not sure.)

Is there a better way to do this than writing a wrapper allocator that stores a reference to a stateful allocator object?

For example:
struct Foo {
    MyPoolAlloc<char> pool;
    std::vector<int, MyPoolAlloc<char>> vec_int;                      // The wrapper allocator would replace MyPoolAlloc<> here.
    std::vector<std::function<void()>, MyPoolAlloc<char>> vec_funcs;  // The wrapper allocator would replace MyPoolAlloc<> here.

    Foo() : vec_int(pool), vec_funcs(pool) {}

    // I want to store lambdas with captured variables using the custom allocator as well:
    template<typename Func>
    void emplace_back(const Func& func) {
        vec_funcs.emplace_back(std::allocator_arg, pool, func);
    }
};
In the above code, I want ALL allocations (besides the pool itself) to pull from the same pool object. Is the best way to do this to write a wrapper allocator that stores a reference to an actual stateful allocator object, and then pass the following into the constructors (for example):
: vec_int ((MyWrapperAlloc<char>(pool)));
Is there a cleaner way to do this than writing a whole extra wrapper class for MyPoolAlloc<>?
The standard "Allocator" concept would have been better named "AllocatorReference." Each object either refers to a global instance (stateless) or to an external object (stateful).
Either way, the allocator instance within an allocator-aware container does not own a memory pool by itself. It's only a proxy. Note that allocator objects are often copied as they are rebound and returned by value. You don't want vector::get_allocator to copy a whole memory pool.
So, yes, you need two classes.
The "wrapper," "proxy," or "reference" which satisfies the standard Allocator requirements and takes a template parameter for the allocated type.
The memory pool which has nothing to do with the Allocator interface but does know how to perform allocations.
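A hedged sketch of what those two classes might look like under the C++11 minimal allocator requirements; Pool and PoolRef are invented names, and this Pool just forwards to operator new/delete so the example is self-contained, whereas a real pool would carve allocations out of its own buffer:
#include <cstddef>
#include <new>
#include <vector>

// The memory pool: knows how to allocate, knows nothing about Allocator.
struct Pool {
    void* allocate(std::size_t bytes)      { return ::operator new(bytes); }
    void  deallocate(void* p, std::size_t) { ::operator delete(p); }
};

// The "reference"/"proxy" that satisfies the Allocator requirements; it holds
// a non-owning pointer, so copying it is cheap and the pool must simply
// outlive every container that uses it.
template <typename T>
struct PoolRef {
    using value_type = T;

    explicit PoolRef(Pool& p) : pool(&p) {}
    template <typename U>
    PoolRef(const PoolRef<U>& other) : pool(other.pool) {}  // rebind-copy

    T* allocate(std::size_t n)           { return static_cast<T*>(pool->allocate(n * sizeof(T))); }
    void deallocate(T* p, std::size_t n) { pool->deallocate(p, n * sizeof(T)); }

    Pool* pool;
};

template <typename T, typename U>
bool operator==(const PoolRef<T>& a, const PoolRef<U>& b) { return a.pool == b.pool; }
template <typename T, typename U>
bool operator!=(const PoolRef<T>& a, const PoolRef<U>& b) { return !(a == b); }

int main()
{
    Pool pool;
    PoolRef<int> ref(pool);
    std::vector<int, PoolRef<int>> v(ref);  // every allocation goes through 'pool'
    v.push_back(1);
}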

Possibility to construct std::tuple's elements later with an allocator?

As far as I understood it, one reason to use C++'s allocators for my own container would be that I can separate allocation and construction.
Now, I wonder if this is possible for std::tuples in the following way: each time I construct an std::tuple, the space is reserved, but the objects are not constructed (yet). Instead, I can use the allocator in order to construct the i-th element just when I want to.
Pseudo-Code:
struct my_struct {
    const bool b; // note that we can use const
    my_struct(int x) : b(x==42) {}
};

int main()
{
    std::tuple<int, my_struct> t;
    // the tuple knows an allocator named my_allocator here;
    // this allocator will force the stack to reserve space for t,
    // but the contained objects are not constructed yet.
    my_allocator.construct(std::get<0>(t), 42);
    // this line just constructed the first object, which was an int
    my_allocator.construct(std::get<1>(t), std::get<0>(t));
    // this line just constructed the 2nd object
    // (with help of the 1st one)
    return 0;
}
One possible problem is that allocators are usually bound to a type, so I'd need one allocator per type. Another question is whether the memory for the std::tuple must be allocated on the heap, or if the stack might work. Both are OK for me.
Still, is it possible somehow? Or if not, could this be done with an allocator I write myself?
Allocators won't help you with initializing objects: the role of an allocator is to provide raw, i.e., uninitialized memory. The allocator could be used with a std::tuple<...> to customize how, e.g., memory for a std::string or a std::vector<...> is allocated.
If you want to delay construction of objects you'll need to use something like an "optional" object which indicates with a flag whether it has been constructed yet. The implementation strategy for such a class would be a wrapper around a suitable union.
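A hedged sketch of such a wrapper, applied to the pseudo-code above; the name later and its interface are invented here, and it is essentially a stripped-down std::optional, which is what you would use directly in C++17:
#include <new>
#include <utility>

// Storage is reserved when 'later<T>' is created, but T is constructed only
// when construct() is called; a flag records whether the value exists.
template <typename T>
class later {
    union { T value_; };   // reserves properly aligned space, constructs nothing
    bool constructed_ = false;

public:
    later() {}             // deliberately leaves value_ unconstructed
    ~later() { if (constructed_) value_.~T(); }

    later(const later&) = delete;
    later& operator=(const later&) = delete;

    template <typename... Args>
    void construct(Args&&... args) {
        ::new (&value_) T(std::forward<Args>(args)...);
        constructed_ = true;
    }

    T& get() { return value_; }  // only valid after construct()
};

struct my_struct {
    const bool b;                // note that we can still use const
    my_struct(int x) : b(x == 42) {}
};

int main() {
    later<int>       first;
    later<my_struct> second;
    first.construct(42);             // construct the first object when wanted
    second.construct(first.get());   // construct the second, using the first
}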