I am reworking an old circular buffer class I wrote to be more robust. The class is templated and allocates a buffer of type T on the heap. However, I am having trouble freeing resources in the case where T is itself a pointer to dynamically allocated memory.
Here, in short, is a constructor with a default-value parameter:
template <typename T, unsigned int SIZE>
CircularBuffer(const T default_val) {
_buffer = new T[SIZE];
// assign each block default value, etc
}
// dtor
~CircularBuffer()
{
delete [] _buffer;
}
However, say for example someone decides to do this:
CircularBuffer<int*, 4> cb(new int); // buffer of 4, holding int*, with default value of new int
// later ~CircularBuffer call is made
// user allocated memory is not freed
How would I be able to (or let the user) free this memory?
I have tried manually from the user perspective:
delete cb.at(0); // .at returns T& (so it would effectively return the pointer)
// above is access violation
I tried to figure out how to do this in the destructor, but I wasn't able to write any kind of delete _buffer[i], since the compiler complains that the template parameter T is not a pointer (even though it could be one).
Can I safely handle this situation, or is there something the user can do about this so that the responsibility is not mine (since the class is not internally allocating this, the user is)?
Edit:
I just realized that allocating with new, when a pointer type is passed as the template parameter, does not give the buffer size I expected.
// call default ctor
CircularBuffer<double*, 2> cb; // _buffer = new T[SIZE];
// sizeof(_buffer) == 4;
// sizeof(double*) == 4;
// sizeof(double*[2]) == 8; // _buffer should be this size, as it holds 2 4byte pointers, what happened?
I'm not sure whether I should make a new question for this or leave it here with the original question, but I think it explains some access violations I was getting before (after trying to dereference _buffer[1] of the above instance). Unfortunately, I'm not sure what is causing it.
template<typename T>
struct is_pointer{
static const bool value = false;
};
template<typename T>
struct is_pointer<T*>{
static const bool value = true;
};
template <bool, class T = void>
struct enable_if
{};
template <class T>
struct enable_if<true, T>
{
typedef T type;
};
template <bool, class T = void>
struct disable_if
{};
template <class T>
struct disable_if<false, T>
{
typedef T type;
};
template <typename T, unsigned int SIZE>
class CircularBuffer
{
public:
    CircularBuffer(){
        // Value-initialize so that pointer elements start out null and are safe to delete.
        _buffer = new T[SIZE]();
    }
    ~CircularBuffer()
    {
        free_helper<T>();
        delete [] _buffer;
    }
private:
    // Selected via SFINAE when T is a pointer type: delete each element.
    template<class U>
    void free_helper( typename enable_if< is_pointer<U>::value >::type* dummy = 0 ){
        // pointer container
        for(unsigned int i = 0; i < SIZE; ++i){
            delete _buffer[i];
        }
    }
    // Selected via SFINAE when T is not a pointer type: nothing to free.
    template<class U>
    void free_helper( typename disable_if< is_pointer<U>::value >::type* dummy = 0 ){
        // non-pointer container
    }
    T* _buffer;
};

int main(){
    CircularBuffer<int,10> cb;
    CircularBuffer<int*,10> cb2;
}
One idea is to partially specialize the template by writing
template<typename T, unsigned int SIZE>
class CircularBuffer<T*, SIZE> { ... };
so that you operate on pointers you can delete. Alternatively, you can use SFINAE tricks like enable_if so that instantiating the template for pointer element types fails to compile at all. Either way, first delete the memory that the pointers in the array point to, then delete the array itself. This sounds like a lot of potential trouble.
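A rough sketch of such a partial specialization, assuming the same primary template as above (the member names are illustrative, and the slots are value-initialized so unused entries hold null pointers):
// Specialization for pointer element types: the destructor deletes each
// pointed-to object before releasing the array of pointers itself.
template <typename T, unsigned int SIZE>
class CircularBuffer<T*, SIZE>
{
public:
    CircularBuffer() : _buffer(new T*[SIZE]()) {}   // () zero-initializes the slots
    ~CircularBuffer()
    {
        for (unsigned int i = 0; i < SIZE; ++i)
            delete _buffer[i];   // delete what each slot points to (null is fine)
        delete[] _buffer;        // then delete the array itself
    }
private:
    T** _buffer;
};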
Another idea is to let your users control that part of the memory management; you can borrow ideas from other containers, e.g. by defining custom allocators and deallocators:
template<typename T>
class Buffer {
public:
    typedef std::function<T(unsigned)> Allocator;
    typedef std::function<void(T)> Deallocator;
    Buffer(unsigned size, Allocator allocator, Deallocator deallocator)
        : _size(size), _allocator(allocator), _deallocator(deallocator),
          _buffer(new T[size])   // allocate 'size' elements, not a fixed 1024
    {
        for(unsigned i = 0; i < _size; ++i)
            _buffer[i] = _allocator(i);
    }
    ~Buffer(){
        for(unsigned i = 0; i < _size; ++i)
            _deallocator(_buffer[i]);
        delete[] _buffer;
    }
private:
    unsigned _size;
    Allocator _allocator;
    Deallocator _deallocator;
    T* _buffer;
};
...
Buffer<double*> b2(
128,
[](unsigned idx) { return new double(idx + 0.123); },
[](double* d) { cout << "deleting " << *d << endl; delete d; }
);
This is a very simple and quick draft, but you should get the idea.
By the way, using a template parameter for buffer's size (unsigned int SIZE) is probably not the best idea, since the compiler will generate independent code for the templates differing only in size, which will grow your executable size... try making SIZE a construction argument, just like some containers do.
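A minimal sketch of that variant, with the size as a constructor argument (the names are illustrative, not the original class):
template <typename T>
class CircularBuffer
{
public:
    // One instantiation per element type, regardless of the buffer size.
    explicit CircularBuffer(unsigned int size, const T& default_val = T())
        : _size(size), _buffer(new T[size])
    {
        for (unsigned int i = 0; i < _size; ++i)
            _buffer[i] = default_val;
    }
    ~CircularBuffer() { delete[] _buffer; }
private:
    unsigned int _size;
    T* _buffer;
};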
Related
For a tensor class I would like to have a template creating functions like
double* mat(int size1);
double** mat(int size1, int size2);
double*** mat(int size1, int size2, int size3);
i.e. the pointer level of the return type depends on the number of inputs.
The base case would just be
double* mat(int size1){
return new double[size1];
}
I thought a variadic template could somehow do the trick
template<typename T, typename... size_args>
T* mat(int size1, size_args... sizes) {
T* m = new T[size1];
for(int j = 0; j < size1; j++){
m[j] = mat(sizes...);
}
return m;
}
Deducing the template parameter in m[j] = mat(sizes...); doesn't seem to work though when more than one recursive call would be necessary, as in
auto p = mat<double**>(2,3,4);
but how could I provide the parameter in the recursive call? The correct parameter would be one pointer level below T, that is mat<double**> for T=double***.
I feel like I miss something important about templates here.
You cannot declare m and the return type as T*, since the result has more than one level of indirection when there are multiple dimensions. With return type deduction (C++14) the compiler can work out the nested pointer type for you:
template<typename T, typename size_type>
auto mat(size_type size){return new T[size];}
template<typename T, typename size_type, typename... size_types>
auto mat(size_type size, size_types... sizes){
using inner_type = decltype(mat<T>(sizes...));
inner_type* m = new inner_type[size];
for(int j = 0; j < size; j++){
m[j] = mat<T>(sizes...);
}
return m;
}
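With this approach you pass the element type rather than the full pointer type, so the call from the question would look roughly like this (assuming C++14 for the return type deduction):
auto p = mat<double>(2, 3, 4);  // p is deduced as double***
p[1][2][3] = 1.5;               // indexes within the 2x3x4 allocation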
For lack of a better name, I'm going to call the template meta-function that creates a pointer type with the desired "depth" and type, a "recursive pointer". Such a thing can be implemented like so:
template <size_t N, class T>
struct RecursivePtr
{
using type = typename RecursivePtr<N-1, T>::type*;
};
template <class T>
struct RecursivePtr<0, T>
{
using type = T;
};
template <size_t N, class T>
using recursive_ptr_t = typename RecursivePtr<N, T>::type;
recursive_ptr_t<4, int> for example creates int****. So in your case you can go ahead and implement the mat function as:
template <class... Args>
auto mat(Args... args)
{
recursive_ptr_t<sizeof...(Args), double> m;
// Runtime allocate your Npointer here.
return m;
}
Demo
Some ideas to give added type safety to the mat function are:
Add static asserts that all provided types are sizes
Statically assert that at least one size has been provided
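For instance, both checks could be sketched with C++17 fold expressions (this reuses the recursive_ptr_t alias from above and assumes <type_traits> and <cstddef> are included; the exact wording of the assertions is illustrative):
template <class... Args>
auto mat(Args... args)
{
    static_assert(sizeof...(Args) > 0,
                  "mat() needs at least one dimension");
    static_assert((std::is_convertible_v<Args, std::size_t> && ...),
                  "every argument to mat() must be usable as a size");
    recursive_ptr_t<sizeof...(Args), double> m{};
    // Runtime allocation of the N-level pointer would go here (see below).
    return m;
}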
Minor note, when I say runtime allocate your Npointer above, I mean something like:
template <class T, class... Sz>
void alloc_ar(T* &ar, size_t s1, Sz... ss)
{
if constexpr (sizeof...(Sz))
{
ar = new T[s1];
for (size_t i(0); i < s1; i++)
alloc_ar(ar[i], ss...);
}
else
{
ar = new T[s1];
}
}
Made a Demo where I show the allocation, but not the deallocation.
A reasonable alternative to this is to allocate one contiguous chunk of memory (sized == multiplicate of dimensions) and use the multidimensional pointer to the beginning of that chunk for syntactic sugar when accessing it. This also provides an easier way to deallocate memory.
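A hedged sketch of that contiguous-chunk alternative for the three-dimensional case (using plain index arithmetic rather than a multidimensional pointer, to keep it short; assumes <cstddef> for std::size_t):
// One allocation holds every element; (i, j, k) maps onto a flat, row-major index.
struct Mat3 {
    std::size_t d1, d2, d3;
    double* data;
    Mat3(std::size_t n1, std::size_t n2, std::size_t n3)
        : d1(n1), d2(n2), d3(n3), data(new double[n1 * n2 * n3]) {}
    ~Mat3() { delete[] data; }   // a single deallocation frees everything
    double& at(std::size_t i, std::size_t j, std::size_t k) {
        return data[(i * d2 + j) * d3 + k];
    }
};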
A second alternative is to use nested vector of vectors (of vectors of vectors...) with the same generation mechanics as the Npointer. This eliminates the need for manual memory management and would probably force you to wrap the whole thing in a more expressive class:
template <class... Dims>
class mat
{
template <size_t N, class T>
struct RecursivePtr
{
using type = std::vector<typename RecursivePtr<N-1, T>::type>;
};
template <class T>
struct RecursivePtr<0, T>
{
using type = T;
};
// This is the replacement to double***
// translates to vector<vector<vector<double>>>
typename RecursivePtr<sizeof...(Dims), double>::type _data;
public:
// construction, argument deduction etc ...
};
I've recently been trying to expand a non-value (type) parameter pack in C++. Is this possible? And if it's not, why?
As you can see in the line marked with the // comment, given a parameter pack for the TypeMap class, how can I call addType<T>() for each type in the pack? Thanks in advance!
template <typename... T>
class TypeMap
{
using vari_cmps = std::variant<T*...>;
private:
template<typename Type>
void addType()
{
typemap[typeid(Type).name()] = std::make_unique<Type>(0).get();
}
public:
std::map<const char*, vari_cmps> typemap{};
TypeMap()
{
(addType<T,...>()); // Idk how to use this in order to make it work
}
~TypeMap()
{
typemap.clear();
}
};
As #HolyBlackCat has already answered in the comments, you can expand it like this:
TypeMap() {
(addType<T>(), ...);
}
If T is std::string, int, float this would expand to:
TypeMap() {
(addType<std::string>(), addType<int>(), addType<float>());
}
There are however a few more issues in this code-snippet:
1. addType()
addType() will not work as you'd expect, because the unique_ptr deletes your object right after you put it into the map.
.get() only retrieves the pointer that the unique_ptr manages but does not transfer ownership, so the unique_ptr will still delete the pointed-to object once it gets out of scope, leaving a dangling pointer in your map.
So your addType() is roughly equivalent to:
template<typename Type>
void addType() {
Type* tptr = new Type(0); // unique pointer constructs object
typemap[typeid(Type).name()] = tptr; // insert pointer value of unique pointer
delete tptr; // unique pointer destructs
}
You could fix this by releasing the unique_ptr after inserting its value into the map & then cleaning it up in the destructor:
template<typename Type>
void addType() {
auto ptr = std::make_unique<Type>(0);
typemap[typeid(Type).name()] = ptr.get();
ptr.release(); // now unique_ptr won't delete the object
}
~TypeMap() {
// cleanup all pointers
for(auto& [name, ptrVariant] : typemap)
std::visit([](auto ptr) { delete ptr; }, ptrVariant);
}
2. Consider using std::type_index instead of const char* as map key
std::type_info::name() returns an implementation-defined name for the given type, so you have no guarantee that you will get a unique name for each type.
Returns an implementation defined null-terminated character string containing the name of the type. No guarantees are given; in particular, the returned string can be identical for several types and change between invocations of the same program.
std::type_index, on the other hand, is built specifically for this purpose - using types as keys - and comes with all comparison operators & a std::hash specialization, so you can use it with std::map & std::unordered_map out of the box.
e.g.:
template <class... T>
class TypeMap
{
using vari_cmps = std::variant<T*...>;
private:
template<typename Type>
void addType()
{
typemap[std::type_index(typeid(Type))] = /* something */;
}
public:
std::map<std::type_index, vari_cmps> typemap{};
TypeMap() { /* ... */ }
~TypeMap() { /* ... */ }
template<class U>
U* get() {
auto it = typemap.find(std::type_index(typeid(U)));
return std::get<U*>(it->second);
}
};
3. Consider using std::tuple
std::tuple is basically built for this task, storing a list of arbitrary types:
e.g.:
template <class... T>
class TypeMap
{
private:
std::tuple<std::unique_ptr<T>...> values;
public:
TypeMap() : values(std::make_unique<T>(0)...) {
}
template<class U> requires (std::is_same_v<U, T> || ...)
U& get() { return *std::get<std::unique_ptr<U>>(values); }
template<class U> requires (std::is_same_v<U, T> || ...)
U const& get() const { return *std::get<std::unique_ptr<U>>(values); }
};
usage:
TypeMap<int, double, float> tm;
tm.get<int>() = 12;
If you want you can also store T's directly in the tuple, avoiding the additional allocations.
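Storing the objects by value could look roughly like this (a sketch under the same construct-from-0 assumption as above, C++20 for the requires clauses):
template <class... T>
class TypeMap
{
    std::tuple<T...> values;   // objects stored directly, no extra heap allocations
public:
    TypeMap() : values(T(0)...) {}

    template <class U> requires (std::is_same_v<U, T> || ...)
    U& get() { return std::get<U>(values); }

    template <class U> requires (std::is_same_v<U, T> || ...)
    U const& get() const { return std::get<U>(values); }
};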
I've searched StackOverflow, but I couldn't find a question that directly addresses this issue.
First some context: I'm trying to implement an Either type in C++ that can handle polymorphic data, much like you can throw a std::runtime_error without the new-keyword. Everything works fine with primitive types, PODs and references, but given that we cannot know the size of a polymorphic data structure upfront, things get more difficult. I then thought about copying the structure to a raw buffer on the heap so that I can pass it around as if it was on the stack.
Example of an Either<L, R>-type:
Either<std::runtime_error, int> doSomeStuff() {
if (err) {
return left(std::runtime_error("Not right!"));
}
return right(42);
}
I experimented with things like std::memcpy(buf, reinterpret_cast<char*>(static_cast<T*>(&value)), sizeof(T)), but I keep getting SIGSEGV errors. Is this because, as I suspect, polymorphic structures hold extra bookkeeping that becomes corrupt when copying? Is there a way to hold an arbitrary polymorphic structure T on the heap so I can pass it as if it were a normal stack-allocated object? Or is such a thing "undefined" in today's C++ standards?
Update: here's the code I have so far. It's not pretty, but it's the best I've got.
struct ConstBoxRefTag { };
struct BoxMoveTag { };
struct PlainValueTag { };
// struct BoxValueTag { };
template<typename T>
struct GetTag { using type = PlainValueTag; };
template<typename T>
struct GetTag<const Box<T>&> { using type = ConstBoxRefTag; };
template<typename T>
struct GetTag<Box<T>&&> { using type = BoxMoveTag; };
template<typename T>
struct GetTag<Box<T>> { using type = ConstBoxRefTag; };
template<typename T>
class Box<T, typename std::enable_if<std::is_polymorphic<T>::value>::type> {
void* buf;
size_t sz;
template<typename R, typename Enabler>
friend class Box;
public:
using Type = T;
template<typename R>
Box(R val): Box(typename box::GetTag<R>::type {}, val) {}
template<typename R>
Box(ConstBoxRefTag, R oth): buf(std::malloc(oth.sz)), sz(oth.sz) {
std::memcpy(buf, oth.buf, oth.sz);
}
template<typename R>
Box(BoxMoveTag, R oth): buf(std::move(oth.buf)), sz(std::move(oth.sz)) {
oth.buf = nullptr;
};
template<typename R>
Box(PlainValueTag, R val): buf(std::malloc(sizeof(R))), sz(sizeof(R)) {
std::memcpy(buf, reinterpret_cast<void*>(static_cast<T*>(&val)), sizeof(R));
}
template<typename R>
R as() const {
static_assert(std::is_base_of<T, R>::value, "Class is not a subtype of base class");
return *static_cast<const R*>(reinterpret_cast<const T*>(&buf));
}
T& reference() {
return *reinterpret_cast<T*>(&buf);
}
const T& reference() const {
return *static_cast<T*>(&buf);
}
~Box() {
if (buf != nullptr) {
reference().~T();
std::free(buf);
}
}
};
Indeed, the standard recently added a concept "trivially copyable", such that using memcpy on an object which isn't trivially copyable doesn't result in a valid object. Before "trivially copyable" was introduced, this was controlled by POD-ness.
To make a copy of a C++ object, you need to call its copy constructor. There's no standard polymorphic way of doing that, but some class hierarchies choose to include a virtual clone() function (or similar) which would meet your need.
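A minimal sketch of the clone() idiom (the class names here are illustrative, not taken from the question):
#include <memory>
#include <string>

struct Error {
    virtual ~Error() = default;
    // Each class in the hierarchy returns a heap-allocated copy of its full dynamic type.
    virtual std::unique_ptr<Error> clone() const {
        return std::make_unique<Error>(*this);
    }
};

struct FileError : Error {
    std::string path;
    explicit FileError(std::string p) : path(std::move(p)) {}
    std::unique_ptr<Error> clone() const override {
        return std::make_unique<FileError>(*this);   // copies the derived part too
    }
};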
Your other option is to find the way to avoid the copy entirely.
I have this simplified class (many details omitted):
template<class T, size_t nChunkSize = 1000>
class Holder
{
size_t m_nSize = 0;
size_t m_nChunkSize = nChunkSize;
public:
Holder(size_t nSize)
: m_nSize(nSize)
{
}
size_t GetChunkSize()
{
return m_nChunkSize;
}
T* GetChunk(size_t nChunkIndex)
{
// returns the address of the chunk nChunkIndex
return ###;
}
T& operator[](size_t nIndex)
{
// returns the element with index nIndex
return T();
}
};
The idea is to have a simple memory manager that allocates really large number of objects but if there is not enough memory to hold all objects in one place it splits them in chunks and encapsulates everything. I know I should use STL but I have specific reasons to do it this way.
I want to provide the users the ability to specify the chunk size and be able to get a pointer to a specific chunk but only if they have specified the template parameter otherwise I want that functionality to be disabled at compile time.
I know the compiler knows whether nChunkSize was defaulted or user-specified, but is there a way I can get that information and use it to delete the GetChunk function or make its usage fail to compile?
For example:
Holder<int, 200> intHolder(5000); // allocates 5000 integeres each chunk holds 200 of them
intHolder[312] = 2;
int* pChunk = intHolder.GetChunk(3); // OK, Compiles
Holder<int> intAnotherHolder(500); // allocates 500 but chunk size is hidden/implementation defined
pChunk = intAnotherHolder.GetChunk(20); // COMPILE ERROR
You could use a common base class with two derived classes: one that specializes for the scenario where a size_t is provided, and another where one is not provided:
Base (Basically your current class):
template<typename T, size_t nChunkSize=1000>
class Base
{
size_t m_nSize = 0;
size_t m_nChunkSize = nChunkSize;
public:
Base(size_t nSize)
: m_nSize(nSize)
{
}
size_t GetChunkSize()
{
return m_nChunkSize;
}
T& operator[](size_t nIndex)
{
// returns the element with index nIndex
return T();
}
};
Defaulted (no way to call GetChunk):
// empty argument list
template<typename T, size_t... ARGS>
class Holder : public Base<T>
{
static_assert(sizeof...(ARGS) == 0, "Cannot instantiate a Holder type with more than one size_t");
using Base<T>::Base;
};
Nondefaulted (has GetChunk method):
template<typename T, size_t nChunkSize>
class Holder<T, nChunkSize> : public Base<T, nChunkSize>
{
using Base<T, nChunkSize>::Base;
public:
T* GetChunk(size_t nChunkIndex)
{
// returns the address of the chunk nChunkIndex
return nullptr;
}
};
Demo
If nChunkSize was a type template parameter you could use a default tag and work based on that. Since it's a non-type parameter, you could use a flag value for the default, then correct it in the class definition:
template<class T, size_t nChunkSize = std::numeric_limits<size_t>::max()>
// flag value ^--------------------------------^
class Holder
{
size_t m_nSize = 0;
size_t m_nChunkSize =
nChunkSize == std::numeric_limits<size_t>::max() ? 1000 : nChunkSize;
//^If the flag value was used, correct it
T* GetChunk(size_t nChunkIndex)
{
//Check if the flag value was used
static_assert(nChunkSize != std::numeric_limits<size_t>::max(),
"Can't call GetChunk without providing a chunk size");
// return the address of the chunk nChunkIndex
}
This will make GetChunk fail to compile if no default argument was passed. Of course, if you pass the max size_t to Holder then it'll silently get fixed up to 1000, but presumably you aren't planning on passing values that high.
Live Demo
I suggest using two different classes: if they're expected to have different implementations, why stick with a single definition?
template<class T, size_t nChunkSize>
class ChunkHolder
{
size_t m_nSize = 0;
size_t m_nChunkSize = nChunkSize;
public:
ChunkHolder(size_t nSize) : m_nSize(nSize) {}
size_t GetChunkSize() { return m_nChunkSize; }
// returns the address of the chunk nChunkIndex
T* GetChunk(size_t nChunkIndex) { return nullptr; }
// returns the element with index nIndex
T& operator[](size_t nIndex) { return T(); }
};
template<class T>
class UnchunkHolder
{
size_t m_nSize = 0;
public:
UnchunkHolder(size_t nSize) : m_nSize(nSize) {}
// returns the element with index nIndex
T& operator[](size_t nIndex) { return T(); }
};
Then, we define helper functions to create one class or the other:
template <typename T, size_t SIZE> ChunkHolder<T, SIZE>
Holder(size_t nSize) { return {nSize}; }
template <typename T> UnchunkHolder<T>
Holder(size_t nSize) { return {nSize}; }
Finally, we can use it this way:
auto x = Holder<int, 200u>(5000u);
auto y = Holder<int>(500u);
x is a ChunkHolder with the chunk feature, while y lacks that feature and fails to compile the GetChunk call, simply because the underlying type lacks that function.
See the live demo here.
Well, it isn't a Holder, it is a ChunkHolder; you can create a base class with the common implementation (operator[], ...) or use different classes; it depends on your implementation needs.
There is essentially no standard way of knowing whether the compiler supplied the default value or the user physically typed it in. By the time your code could start to differentiate, the value is already there.
A specific compiler could offer such a hook (e.g. something like __is_defaulted(nChunkSize)), careful examination of your compiler documentation may help, but the common compilers don't seem to offer such a facility.
I'm not sure of the exact nature of the use case, but the "usual" option is to use partial template specialisation to differentiate between the implementations and not to care where the value of nChunkSize came from, but rather what the value is.
#include <iostream>
using namespace std;
template <typename T, size_t nChunkSize = 1000>
struct A {
A() { cout << "A()" << endl; }
};
template <typename T>
struct A<T, 1000> {
A() { cout << "A() special" << endl; }
};
int main() {
A<int, 100> a1; // prints A()
A<int> a2; // prints A() special
return 0;
}
Demo sample. Further common details can be moved to a traits class or a base class as desired.
The above doesn't quite achieve what you want, however. Alternatives that get you nearer the goalpost include using a "special" value to distinguish a user-provided value from the default. Ideally it would be a value the user is unlikely to choose; 0 comes to mind in this case. It still doesn't guarantee that the user won't use 0, but a 0 chunk size is unlikely in "reasonable" client code.
template<class T, size_t nChunkSize = 0>
class Holder
{
size_t m_nSize = 0;
size_t m_nChunkSize = nChunkSize == 0 ? 1000 : nChunkSize;
// ...
static_assert can then be used to either allow the compilation of GetChunk or not, based on the value of nChunkSize - this works since nChunkSize is known at compile time.
T* GetChunk(size_t nChunkIndex)
{
static_assert(nChunkSize != 0, "Method call invalid without client chunk size");
// ...
The disadvantage is that GetChunk is still "visible" during development, but the compile will fail if it is invoked.
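A quick usage sketch of how that plays out, assuming the rest of the original Holder interface (the size_t constructor) stays as in the question:
Holder<int, 200> intHolder(5000);
int* pChunk = intHolder.GetChunk(3);       // OK: nChunkSize is 200, the assert passes

Holder<int> intAnotherHolder(500);         // nChunkSize falls back to the flag value 0
// pChunk = intAnotherHolder.GetChunk(20); // would trigger the static_assert at compile time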
The closest you could get has been mentioned already, but repeated here for comparison; is to defer the implementation for the class to some BaseHolder and then combine that with partial template specialisation to determine if the client code used a chunk size (a value for nChunkSize) or not.
template<typename T, size_t nChunkSize /*=1000*/>
// default not provided here, it is not needed
class BaseHolder
{
size_t m_nSize = 0;
size_t m_nChunkSize = nChunkSize;
// ...
};
template<typename T, size_t... ARGS>
class Holder : public BaseHolder<T, 1000>
{
// "default" value for nChunkSize required
// When sizeof...(ARGS) = 1, the specialisation is used
// when 0, the client has not provided the default
// when 2 or more, it is invalid usage
static_assert(sizeof...(ARGS) == 0, "Only 1 size allowed in the client code");
// ...
};
template<typename T, size_t nChunkSize>
class Holder<T, nChunkSize> : public BaseHolder<T, nChunkSize>
{
// non-default chunk size used (could still be 1000)
// includes the implementation of GetChunk
public:
T* GetChunk(size_t nChunkIndex)
{
// ...
}
};
The disadvantage of this approach is that multiple size_t arguments could be provided; this can be controlled at compile time with a static_assert, and the documentation for the code should make it clear as well.
Is there something wrong with the code below?
#include <iostream>
#include <type_traits>
template <typename T>
void assign_lambda(T&& f)
{
typedef typename std::remove_reference<T>::type functor_type;
typedef typename std::aligned_storage<sizeof(functor_type),
std::alignment_of<functor_type>::value>::type buffer_type;
static char store[sizeof(buffer_type)];
auto const p(new (store) functor_type(std::forward<T>(f)));
(*p)();
}
int main()
{
for (int i(0); i != 5; ++i)
{
assign_lambda([i](){ std::cout << i << std::endl; });
}
return 0;
}
I worry though that this might be non-standard and/or dangerous to do.
EDIT:
Why initialize into a char array, you ask? One might allocate a block of size sizeof(buffer_type) from the heap and reuse it for repeated assignments (i.e. avoid repeated memory allocations), provided the block proves large enough.
void*operator new(std::size_t size);
Effects: The allocation function (3.7.4.1) called by a new-expression (5.3.4) to allocate size bytes of storage suitably aligned to represent any object of that size.
I suppose if I allocate from the heap the alignment issues will go away.
You'll have to make sure that store has the proper alignment for functor_type. Apart from that, I don't see any problems regarding standard conformance. However, you can easily address the multithreading issue by making the array non-static, because sizeof gives a compile-time constant.
The alignment is demanded by §5.3.4,14:
[ Note: when the allocation function returns a value other than null, it must be a pointer to a block of storage in which space for the object has been reserved. The block of storage is assumed to be appropriately aligned and of the requested size. [...] -end note ]
There is another paragraph, §3.7.4.1 about alignment, but that one does explicitly not apply to placement new (§18.6.1.3,1).
To get the alignment right, you can do the following:
template <typename T>
void assign_lambda(T&& f)
{
typedef typename std::remove_reference<T>::type functor_type;
//alignas(functor_type) char store[sizeof(functor_type)];
typename std::aligned_storage<sizeof(functor_type),
    std::alignment_of<functor_type>::value>::type store;
auto const p(new (&store) functor_type(std::forward<T>(f)));
(*p)();
//"placement delete"
p->~functor_type();
}
Update:
The approach shown above is not different from using just a normal variable:
template <typename T>
void assign_lambda(T&& f)
{
typedef typename std::remove_reference<T>::type functor_type;
functor_type func{std::forward<T>(f)};
func();
}
If it has to be a static variable inside the function, you will need an RAII wrapper for functors that are not assignable. Just placement-newing is not sufficient, since the functors will not get destroyed properly and resources they hold (e.g. via captured smart pointers) will not get released.
template <typename F>
struct RAIIFunctor {
typedef typename std::remove_reference<F>::type functor_type;
typename std::aligned_storage<sizeof(functor_type),
    std::alignment_of<functor_type>::value>::type store;
functor_type* f;
RAIIFunctor() : f{nullptr} {}
~RAIIFunctor() { destroy(); }
template <class T>
void assign(T&& t) {
destroy();
f = new(&store) functor_type {std::forward<T>(t)};
}
void destroy() {
if (f)
f->~functor_type();
f = nullptr;
}
void operator()() {
(*f)();
}
};
template <typename T>
void assign_lambda(T&& f)
{
static RAIIFunctor<T> func;
func.assign(std::forward<T>(f));
func();
}
You can see the code in action here
I don't get it. Why would one use aligned_storage merely to get some size to create uninitialised storage, instead of... using the aligned storage it provides? It's almost like travelling from Berlin to Lisbon by taking a Berlin -> Lisbon flight followed by a Lisbon -> Moscow flight.
typedef typename std::remove_reference<T>::type functor_type;
typedef typename std::aligned_storage<sizeof(functor_type),
std::alignment_of<functor_type>::value>::type buffer_type;
static buffer_type store;
auto const p(new (&store) functor_type(std::forward<T>(f)));
In addition to the alignment issue already mentioned, you are creating a copy of the lambda through placement new but you are not destroying the copy.
The following code illustrates the problem:
// This class plays the role of the OP's lambdas
struct Probe {
Probe() { std::cout << "Ctr" << '\n'; }
Probe(const Probe&) { std::cout << "Cpy-ctr" << '\n'; }
~Probe() { std::cout << "Dtr" << '\n'; }
};
// This plays the role of the OP's assign_lambda
void f(const Probe& p) {
typedef typename std::aligned_storage<sizeof(Probe),
std::alignment_of<Probe>::value>::type buffer_type;
static buffer_type store;
new (&store) Probe(p);
}
int main() {
Probe p;
// This plays the role of the loop
f(p);
f(p);
f(p);
}
The output is:
Ctr
Cpy-ctr
Cpy-ctr
Cpy-ctr
Dtr
Therefore, 4 objects are constructed and only one is destroyed.
In addition, in the OP's code the store is static, which means that one lambda is repeatedly constructed on top of the other as if the previous one were just raw memory.
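One way to make the Probe example well behaved is to destroy the previously constructed object before reusing the storage (the last copy still needs a matching destructor call somewhere); a rough, single-threaded sketch, assuming <new> and <type_traits> are included:
void f(const Probe& p) {
    typedef std::aligned_storage<sizeof(Probe),
        std::alignment_of<Probe>::value>::type buffer_type;
    static buffer_type store;
    static Probe* live = nullptr;
    if (live)
        live->~Probe();             // destroy the previous copy first
    live = new (&store) Probe(p);   // then construct the new one in place
}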