WrapperPointer class and deallocation of stack-allocated objects in C++

I am designing a wrapper class (a bit similar to std::auto_ptr, but with a different purpose) for scalar values:
template <typename T>
class ScalarPtr
{
private:
    T* m_data;
    ...
public:
    ScalarPtr(T *data) : m_data(data)
    { ... }

    T& operator*();
    T* operator->();

    ~ScalarPtr()
    {
        if (m_data)
            delete m_data;
        ...
    }
};
Now the problem is that I also want to use this class for stack-allocated objects, like this:
float temp = ...
ScalarPtr<float> fltPtr(&temp);
The naive way is to pass a boolean to the constructor to specify whether to deallocate or not, but is there a better way?

I am not sure there is a better approach than the boolean flag.
As you are aware (and hence ask the question), this makes the interface rather non-intuitive to the end user.
The purpose of a wrapper/resource-managing class is to implement RAII, where the managing object takes care of releasing its resource (in this case dynamic memory) implicitly. Given that stack variables are automatically destroyed when they go out of scope, it seems rather odd to use a resource-managing wrapper for them. I would rather not do so.
But given that you want uniform access to your class through this wrapper, the simplest, though not particularly elegant, way seems to be the boolean flag.
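For illustration, a minimal sketch of what the boolean-flag variant could look like (the flag name m_owns is my own choice, not from the question):
#include <cstddef>

template <typename T>
class ScalarPtr
{
private:
    T*   m_data;
    bool m_owns;   // true if this wrapper should delete m_data

public:
    // owns defaults to true for heap-allocated data;
    // pass false when wrapping a stack-allocated object
    explicit ScalarPtr(T* data, bool owns = true)
        : m_data(data), m_owns(owns) {}

    T& operator*()  { return *m_data; }
    T* operator->() { return m_data; }

    ~ScalarPtr()
    {
        if (m_owns)
            delete m_data;
    }
};

int main()
{
    float temp = 1.0f;
    ScalarPtr<float> stackPtr(&temp, false);    // not deleted
    ScalarPtr<float> heapPtr(new float(2.0f));  // deleted in the destructor
    return 0;
}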

Related

Why is LLVM's Optional<T> implemented this way?

I stumbled upon an implementation of Optional<T> which is based on LLVM's Optional.h class and couldn't quite figure out why it is implemented the way it is.
To keep it short, I'm only pasting the parts I don't understand:
template <typename T>
class Optional
{
private:
    inline void* getstg() const { return const_cast<void*>(reinterpret_cast<const void*>(&_stg)); }

    typedef typename std::aligned_storage<sizeof(T), std::alignment_of<T>::value>::type storage_type;
    storage_type _stg;
    bool _hasValue;

public:
    Optional(const T &y) : _hasValue(true)
    {
        new (getstg()) T(y);
    }

    T* Get() { return reinterpret_cast<T*>(getstg()); }
};
And the most naive implementation I could think of:
template <typename T>
class NaiveOptional
{
private:
    T* _value;
    bool _hasValue;

public:
    NaiveOptional(const T &y) : _hasValue(true), _value(new T(y))
    {
    }

    T* Get() { return _value; }
};
Questions:
How do I interpret storage_type? What was the author's intention?
What are the semantics of this line: new (getstg()) T(y); ?
Why doesn't the naive implementation work (or, what pros does the Optional<T> class have over NaiveOptional<T>)?
The short answer is "performance".
Longer answer:
storage_type provides an in-memory region that is (a) big enough to fit the type T and (b) properly aligned for type T. Unaligned memory access is slower. See also the doc.
new (getstg()) T(y) is a placement new. It does not allocate memory; instead it constructs an object in the memory region passed to it. See the doc on all forms of new (search for "placement new").
The naive implementation does work, but it has worse performance. It uses dynamic memory allocation, which can often be a bottleneck. The Optional<T> implementation does not use dynamic memory allocation (see the point above).
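To illustrate what the placement-new line does outside of Optional, here is a minimal self-contained sketch (the Widget type is just an example): an object is constructed into pre-allocated aligned storage and later destroyed by calling its destructor explicitly.
#include <new>          // placement new
#include <type_traits>  // std::aligned_storage

struct Widget {
    int x;
    Widget(int x) : x(x) {}
};

int main()
{
    // Raw, properly aligned storage; no Widget has been constructed yet.
    std::aligned_storage<sizeof(Widget), alignof(Widget)>::type stg;

    // Placement new: construct a Widget inside stg, no heap allocation.
    Widget* w = new (&stg) Widget(42);

    // The destructor must be called explicitly; nothing is freed,
    // because nothing was heap-allocated.
    w->~Widget();
    return 0;
}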
std::optional is supposed to be returned from functions. With the naive implementation, that means you would have to store the content referenced by the pointer somewhere, which defeats the purpose of the class.
Further, you can't just use a plain T member, because then you would have to construct it in some way. Optional allows the content to stay uninitialized, and some types can't be default-constructed.
To make the class more flexible in terms of supported types, a suitably sized and correctly aligned storage is used, and only when the optional is engaged is the actual type constructed in it.
What you had in mind is probably something like std::variant?

Template class optimizations during compiling

I was curious to know the internals of template class compilation in specific circumstances. I ask this because I'd like to extend some existing classes.
For example, let's assume a starting
template<typename T>
class LargeClass {
private:
    std::unique_ptr<T> data;
    // many other fields
public:
    const std::unique_ptr<T>& getData() { return data; }
    void setData(T* value) { data.reset(value); }
    // many other methods that don't depend on T
};
This example makes me think that, since sizeof(std::unique_ptr<T>) == sizeof(T*) (without a custom deleter), then sizeof(LargeClass<T1>) == sizeof(LargeClass<T2>) for any T1 and T2, which implies that offsetof(LargeClass<T>, field) is the same for any field and T.
So why would the compiler create a copy of each LargeClass method if they don't depend on T?
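For what it's worth, the size claim can be checked mechanically with a static_assert; a quick sketch (using int and std::string as example instantiations, with the class trimmed to its data member):
#include <memory>
#include <string>

template<typename T>
class LargeClass {
private:
    std::unique_ptr<T> data;
public:
    const std::unique_ptr<T>& getData() { return data; }
    void setData(T* value) { data.reset(value); }
};

// Both instantiations hold a single default-deleted unique_ptr, so their
// sizes match on typical implementations (this fails to compile otherwise).
static_assert(sizeof(LargeClass<int>) == sizeof(LargeClass<std::string>),
              "instantiations have the same size here");

int main() { return 0; }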
A different approach, like
template<typename T>
class LargeClass {
private:
    T data;
    ...
public:
    ...
};
instead should force the compiler to create multiple definitions of the methods for different types, since sizeof(T) can change; but using std::aligned_storage with a static_assert (to avoid storing data that is too large) could make this fall back to the first case. Would a compiler be smart enough to realise it?
In general, I was wondering whether a compiler is smart enough to avoid generating multiple definitions of a template class method when the type parameter is not used and the class layout doesn't change (or at least doesn't change for the fields accessed in the method). Does the standard say anything about it?

Vector of pointers to instances of a templated class

I am implementing a task runtime system that maintains buffers for user-provided objects of various types. In addition, all objects are wrapped before they are stored into the buffers. Since the runtime doesn't know the types of objects that the user will provide, the Wrapper and the Buffer classes are templated:
template <typename T>
class Wrapper {
private:
    T mdata;
public:
    Wrapper() = default;
    Wrapper(T& user_data) : mdata(user_data) {}
    T& GetData() { return mdata; }
    ...
};

template <typename T>
class Buffer {
private:
    std::deque<Wrapper<T>> items;
public:
    void Write(Wrapper<T> wd) {
        items.push_back(wd);
    }
    Wrapper<T> Read() {
        Wrapper<T> tmp = items.front();
        items.pop_front();
        return tmp;
    }
    ...
};
Now, the runtime system handles the tasks, each of which operates on a subset of the aforementioned buffers. Thus, each buffer is operated on by one or more tasks. This means that a task must keep references to the buffers, since the tasks may share buffers.
This is where my problem is:
1) each task needs to keep references to a number of buffers (this number is unknown at compile time)
2) the buffers are of different types (based on the templated Buffer class).
3) the task needs to use these references to access buffers.
There is no point in having a base class for the Buffer class and then using base-class pointers, since the methods Write and Read of the Buffer class are templated and thus cannot be virtual.
So I was thinking to keep references as void pointers, where the Task class would look something like:
class Task {
private:
    vector<void *> buffers;
public:
    template<typename T>
    void AddBuffer(Buffer<T>* bptr) {
        buffers.push_back((void *) bptr);
    }

    template<typename T>
    Buffer<T>* GetBufferPtr(int index) {
        return some_way_of_cast(buffers[index]);
    }
    ...
};
The problem with this is that I don't know how to get a valid pointer back from the void pointer in order to access the Buffer. Namely, I don't know how to retain the type of the object pointed to by buffers[index].
Can you help me with this, or suggest some other solution?
EDIT: The buffers are only the implementation detail of the runtime system and the user is not aware of their existence.
In my experience, when the user types are kept in user code, run-time systems handling buffers do not need to worry about the actual type of these buffers. Users can invoke operations on typed buffers.
class Task {
private:
    vector<void *> buffers;
public:
    void AddBuffer(char* bptr) {
        buffers.push_back((void *) bptr);
    }
    char *GetBufferPtr(int index) {
        return some_way_of_cast(buffers[index]);
    }
    ...
};

class RTTask : public Task {
    /* ... */
    void do_stuff() {
        Buffer<UserType1> b1; b1Id = b1.id();
        Buffer<UserType2> b2; b2Id = b2.id();
        AddBuffer(cast(&b1));
        AddBuffer(cast(&b2));
    }
    void do_stuff2() {
        Buffer<UserType1> *b1 = cast(GetBufferPtr(b1Id));
        b1->push(new UserType1());
    }
};
In these cases casts are in the user code. But perhaps you have a different problem. Also the Wrapper class may not be necessary if you can switch to pointers.
What you need is something called type erasure. It's a way to hide the type(s) in a template.
The basic technique is the following:
- Have an abstract base class that declares the behavior you want in a type-independent manner.
- Derive your template class from it and implement its virtual methods.
Good news: you probably don't need to write your own, there is boost::any already. Since all you need is to store a pointer and get the object back, that should be enough.
Now, working with void* is a bad idea. As perreal mentioned, the code dealing with the buffers should not care about the type, though. A better option is to work with char*; that is the type commonly used for buffers (e.g. socket APIs). It is also safer: there is a special rule in the standard that allows conversion to char* (see the aliasing rules).
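As a rough illustration of the type-erasure idea in this setting (a simplified sketch; the Buffer here drops the Wrapper for brevity), the Task can store pointers to a small non-template base class and cast back to the concrete Buffer<T> when the caller knows T:
#include <deque>
#include <vector>

// Non-template base: exists only so different Buffer<T> instantiations
// can be stored in one container and destroyed polymorphically.
struct BufferBase {
    virtual ~BufferBase() = default;
};

template <typename T>
class Buffer : public BufferBase {
    std::deque<T> items;
public:
    void Write(const T& v) { items.push_back(v); }
    T Read() { T tmp = items.front(); items.pop_front(); return tmp; }
};

class Task {
    std::vector<BufferBase*> buffers;  // non-owning, for illustration only
public:
    void AddBuffer(BufferBase* b) { buffers.push_back(b); }

    // The caller must know (or record elsewhere) which T lives at 'index'.
    template <typename T>
    Buffer<T>* GetBufferPtr(int index) {
        return static_cast<Buffer<T>*>(buffers[index]);
    }
};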
This isn't exactly an answer to your question, but I just wanted to point out that the way you wrote
Wrapper<T> Read() {
makes it a mutating member function which returns by value, and as such is not good practice, as it forces the user to write exception-unsafe code.
For the same reason, the STL stack::pop() member function returns void, not the object that was popped off the stack.
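A common way to address this, sketched here with a minimal stand-in for the question's Wrapper<T>, is to split the operation the way std::stack splits top() and pop():
#include <deque>

template <typename T>
struct Wrapper {                 // minimal stand-in for the question's Wrapper<T>
    T mdata;
    T& GetData() { return mdata; }
};

template <typename T>
class Buffer {
    std::deque<Wrapper<T>> items;
public:
    void Write(Wrapper<T> wd)  { items.push_back(wd); }
    Wrapper<T>& Front()        { return items.front(); }  // inspect, no copy
    void Pop()                 { items.pop_front(); }     // remove, return nothing
};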

Overloading new as a friend function?

For one of my classes, I'm writing a program that's going to be using a templated memory pool structure to handle the allocation of new instances of a class while keeping them together. It is currently declared as follows:
template<typename T, unsigned int N>
class MemoryPool
{
    // Stuff
};
Where T is the class to create this pool for and N is the maximum number of elements that can be placed in the pool. I want to overload new for the created type to make interactions with the pool a bit easier if it's a reasonable thing to do--but I'm not sure if it is.
My current thought is that if it's possible to overload new as a friend function for T within MemoryPool, it should be doable from there, but I'm not sure. I'm also not sure of the best way to start setting that up. I've tried a few different ways to just declare the overloaded new, and I'm getting errors before even implementing it.
Is this a reasonable way to ensure that new is overridden for any class that uses MemoryPool?
Is doing so even possible?
Is doing so even a good idea?
How would I set up the function declaration to accomplish this?
In case it matters, I'm using Visual Studio 2010.
Note, the specific use of templates and overloading new are not part of the homework assignment. It's just how I want to implement it if possible to make the rest of the assignment easier to read for the future. So, if there's no reasonable way to do it, I just use member functions within MemoryPool to accomplish the same goal.
Thanks!
Example implementation:
MemoryPool<Object, MAX_OBJECTS> objectPool; // Pool to store objects
Object* allObjects[MAX_OBJECTS];            // Locations of objects

// Make a new object (this is how I'd like to do it)
allObjects[0] = new Object(/*args*/);

// (If I can't do the above, this would be the alternative)
allObjects[0] = objectPool.AllocateNewSlot();
allObjects[0]->Initialize(/*args*/);
In this example, the MemoryPool takes care of the actual implementation of new, ensuring the Object is created in its pool instead of just anywhere on the heap (so that all the Objects are in a centralized, more controllable location).
It is possible to overload the new operator; however, I would advise against it.
I think you are going in the wrong direction. You don't want to hide things and make users unsure of what is happening. In this case you should be explicit that you are allocating through a pool.
Here is what you could do.
template<typename T, unsigned int N>
class MemoryPool
{
public:
    T* malloc()
    {
        return ... // your pool impl
    }

    void free(T* ptr)
    {
        ... // your pool impl
    }

    void destroy(T* ptr)
    {
        ptr->T::~T(); // call destructor
        free(ptr);
    }
};

int main()
{
    MemoryPool<my_class, 16> pool; // the size N is required by the template; 16 is arbitrary
    my_class* instance = new (pool.malloc()) my_class(/*args*/); // in-place (placement) new
    return 0;
}
You should also take a look at how boost pool is implemented.
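For completeness, if you do decide to overload new anyway, the usual approach is a class-scoped operator new/operator delete on the pooled type that forwards to the pool. A rough sketch (the pool interface and the pool size here are assumptions, not a fixed API; the pool bodies are placeholders):
#include <cstddef>
#include <new>

template<typename T, unsigned int N>
class MemoryPool
{
public:
    // Placeholders: a real pool would hand out slots from preallocated storage.
    void* allocate()          { return ::operator new(sizeof(T)); }
    void  deallocate(void* p) { ::operator delete(p); }
};

class Object
{
public:
    // Class-scoped overloads: `new Object(...)` now goes through the pool.
    static void* operator new(std::size_t) { return pool().allocate(); }
    static void  operator delete(void* p)  { pool().deallocate(p); }

private:
    static MemoryPool<Object, 64>& pool()   // 64 is an arbitrary example size
    {
        static MemoryPool<Object, 64> objectPool;
        return objectPool;
    }
};

int main()
{
    Object* o = new Object();  // allocated through the pool
    delete o;                  // returned to the pool
    return 0;
}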

Any RAII template in boost or C++0x

Is there any template available in Boost for RAII? There are classes like scoped_ptr and shared_ptr which basically work on pointers. Can those classes be used for resources other than pointers? Is there any template which works with general resources?
Take, for example, some resource which is acquired at the beginning of a scope and has to be released somehow at the end of the scope. Both acquire and release take some steps. We could write a template which takes two functors (or maybe one object) which do this task. I haven't thought through how this can be achieved; I was just wondering whether there are any existing ways to do it.
Edit: How about one in C++0x with support for lambda functions?
shared_ptr provides the possibility to specify a custom deleter. When the pointer needs to be destroyed, the deleter will be invoked and can do whatever cleanup actions are necessary. This way, resources more complicated than plain pointers can be managed with this smart pointer class.
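For example, a FILE* can be managed this way (a small sketch; the filename is just an example):
#include <cstdio>
#include <boost/shared_ptr.hpp>

// Deleter that tolerates a null FILE* (fclose(NULL) is undefined).
static void close_file(FILE* f) { if (f) std::fclose(f); }

int main()
{
    // close_file runs automatically when the last shared_ptr goes away.
    boost::shared_ptr<FILE> file(std::fopen("data.txt", "r"), close_file);
    if (file)
        std::fgetc(file.get());
    return 0;
}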
The most generic approach is the ScopeGuard one (the basic idea is in this DDJ article; it is implemented, e.g., with convenience macros in Boost.ScopeExit), and it lets you execute functions or clean up resources at scope exit.
But to be honest, I don't see why you'd want that. While I understand that it's a bit annoying to write a class every time for a one-step-acquire and one-step-release pattern, you are talking about multi-step acquire and release.
If it takes multiple steps, it, in my opinion, belongs in an appropriately named utility class so that the details are hidden and the code is in one place (thus reducing the probability of errors).
If you weigh it against the gains, those few additional lines are not really something to worry about.
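For reference, a Boost.ScopeExit use looks roughly like this (a minimal sketch; the file handling is just an example):
#include <boost/scope_exit.hpp>
#include <cstdio>

int main()
{
    FILE* f = std::fopen("data.txt", "r");
    BOOST_SCOPE_EXIT( (&f) ) {
        if (f) std::fclose(f);   // runs when the enclosing scope ends
    } BOOST_SCOPE_EXIT_END
    // ... use f ...
    return 0;
}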
A more generic and more efficient (no call through function pointer) version is as follows:
#include <boost/type_traits.hpp>

template<typename FuncType, FuncType * Func>
class RAIIFunc
{
public:
    typedef typename boost::function_traits<FuncType>::arg1_type arg_type;

    RAIIFunc(arg_type p) : p_(p) {}
    ~RAIIFunc() { Func(p_); }

    arg_type & getValue() { return p_; }
    arg_type const & getValue() const { return p_; }

private:
    arg_type p_;
};
Example use:
RAIIFunc<int (int), ::close> f = ::open("...");
I have to admit I don't really see the point. Writing a RAII wrapper from scratch is ridiculously simple already. There's just not much work to be saved by using some kind of predefined wrapper:
struct scoped_foo : private boost::noncopyable {
    scoped_foo() : f(...) {}
    ~scoped_foo() { ... }
    foo& get_foo() { return f; }
private:
    foo f;
};
Now, the ...'s are essentially the bits that'd have to be filled out manually if you used some kind of general RAII template: creation and destruction of our foo resource. And without them there's really not much left. A few lines of boilerplate code, but it's so little it just doesn't seem worth it to extract it into a reusable template, at least not at the moment. With the addition of lambdas in C++0x, we could write the functors for creation and destruction so concisely that it might be worth it to write those and plug them into a reusable template. But until then, it seems like it'd be more trouble than it's worth. If you were to define two functors to plug into a RAII template, you'd have already written most of this boilerplate code twice.
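To give an idea of what that lambda-based version could look like once C++11 lambdas are available (a minimal sketch; the scope_guard name and the file handling are made-up examples):
#include <cstdio>
#include <functional>
#include <utility>

// Runs an arbitrary cleanup action when it goes out of scope.
class scope_guard {
public:
    explicit scope_guard(std::function<void()> cleanup)
        : cleanup_(std::move(cleanup)) {}
    ~scope_guard() { if (cleanup_) cleanup_(); }

    scope_guard(const scope_guard&) = delete;
    scope_guard& operator=(const scope_guard&) = delete;

private:
    std::function<void()> cleanup_;
};

int main()
{
    FILE* f = std::fopen("data.txt", "r");
    scope_guard close_f([&] { if (f) std::fclose(f); });  // cleanup captured as a lambda
    // ... use f ...
    return 0;
}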
I was thinking about something similar:
template <typename T>
class RAII {
private:
    T (*constructor)();
    void (*destructor)(T);
public:
    T value;

    RAII(T (*constructor)(), void (*destructor)(T)) :
        constructor(constructor),
        destructor(destructor) {
        value = constructor();
    }

    ~RAII() {
        destructor(value);
    }
};
and to be used like this (using OpenGL's GLUquadric as an example):
RAII<GLUquadric*> quad = RAII<GLUquadric*>(gluNewQuadric, gluDeleteQuadric);
gluSphere(quad.value, 3, 20, 20);
Here's yet another C++11 RAII helper: https://github.com/ArtemGr/libglim/blob/master/raii.hpp
It runs a C++ functor at destruction:
auto unmap = raiiFun ([&]() {munmap (fd, size);});
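The general idea is the same as a lambda-based scope guard: store the functor and run it from the destructor. A rough sketch of how such a helper could be written (my own approximation, not the library's actual implementation):
#include <utility>

template <typename Fun>
class RAIIFun {
    Fun fun_;
    bool active_;
public:
    explicit RAIIFun(Fun f) : fun_(std::move(f)), active_(true) {}
    RAIIFun(RAIIFun&& other) : fun_(std::move(other.fun_)), active_(other.active_) {
        other.active_ = false;            // a moved-from guard must not fire
    }
    ~RAIIFun() { if (active_) fun_(); }   // run the cleanup functor at scope exit

    RAIIFun(const RAIIFun&) = delete;
    RAIIFun& operator=(const RAIIFun&) = delete;
};

// Helper so the lambda's type can be deduced at the call site.
template <typename Fun>
RAIIFun<Fun> raiiFun(Fun f) { return RAIIFun<Fun>(std::move(f)); }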