Any RAII template in boost or C++0x

Is there any template available in Boost for RAII? There are classes like scoped_ptr and shared_ptr which basically work on pointers. Can those classes be used for resources other than pointers? Is there any template which works with general resources?
Take for example some resource which is acquired at the beginning of a scope and has to be released somehow at the end of the scope. Both acquire and release take some steps. We could write a template which takes two functors (or maybe one object) that do this task. I haven't thought through how this can be achieved; I was just wondering whether there are any existing ways to do it.
Edit: How about one in C++0x with support for lambda functions?

shared_ptr provides the possibility to specify a custom deleter. When the pointer needs to be destroyed, the deleter will be invoked and can do whatever cleanup actions are necessary. This way more complicated resources than simple pointers can be managed with this smart pointer class.
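For example, a FILE handle can be managed this way (a minimal sketch; error handling omitted):
#include <cstdio>
#include <boost/shared_ptr.hpp>

boost::shared_ptr<FILE> openFile(const char *name)
{
    // std::fclose is invoked automatically when the last copy of the shared_ptr goes away
    return boost::shared_ptr<FILE>(std::fopen(name, "r"), std::fclose);
}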

The most generic approach is the ScopeGuard one (basic idea in this ddj article, implemented e.g. with convenience macros in Boost.ScopeExit), and lets you execute functions or clean up resources at scope exit.
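For illustration, a minimal sketch using Boost.ScopeExit (assuming a POSIX file descriptor):
#include <boost/scope_exit.hpp>
#include <fcntl.h>
#include <unistd.h>

void work()
{
    int fd = ::open("data.bin", O_RDONLY);
    BOOST_SCOPE_EXIT(fd) {
        if (fd != -1) ::close(fd);   // runs at scope exit, even if an exception is thrown
    } BOOST_SCOPE_EXIT_END
    // ... use fd ...
}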
But to be honest, I don't see why you'd want that. While I understand that it's a bit annoying to write a class every time for a one-step-acquire and one-step-release pattern, you are talking about multi-step acquire and release.
If it takes multiple steps, it belongs, in my opinion, in an appropriately named utility class, so that the details are hidden and the code kept in one place (thus reducing the probability of errors).
If you weigh it against the gains, those few additional lines are not really something to worry about.

A more generic and more efficient (no call through function pointer) version is as follows:
#include <boost/type_traits.hpp>

template<typename FuncType, FuncType * Func>
class RAIIFunc
{
public:
    typedef typename boost::function_traits<FuncType>::arg1_type arg_type;

    RAIIFunc(arg_type p) : p_(p) {}
    ~RAIIFunc() { Func(p_); }

    arg_type & getValue() { return p_; }
    arg_type const & getValue() const { return p_; }

private:
    arg_type p_;
};
Example use:
RAIIFunc<int (int), ::close> f = ::open("...", O_RDONLY);

I have to admit I don't really see the point. Writing a RAII wrapper from scratch is ridiculously simple already. There's just not much work to be saved by using some kind of predefined wrapper:
struct scoped_foo : private boost::noncopyable {
    scoped_foo() : f(...) {}
    ~scoped_foo() {...}
    foo& get_foo() { return f; }
private:
    foo f;
};
Now, the ...'s are essentially the bits that'd have to be filled out manually if you used some kind of general RAII template: creation and destruction of our foo resource. And without them there's really not much left. A few lines of boilerplate code, but it's so little it just doesn't seem worth it to extract it into a reusable template, at least not at the moment. With the addition of lambdas in C++0x, we could write the functors for creation and destruction so concisely that it might be worth it to write those and plug them into a reusable template. But until then, it seems like it'd be more trouble than it's worth. If you were to define two functors to plug into a RAII template, you'd have already written most of this boilerplate code twice.
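To sketch what such a reusable C++0x template might look like (purely illustrative; acquire_foo and release_foo are made-up names, not an existing library):
template <typename F>
class scope_guard {
public:
    explicit scope_guard(F cleanup) : cleanup_(cleanup) {}
    ~scope_guard() { cleanup_(); }   // run the supplied cleanup at scope exit
private:
    F cleanup_;
    // a real implementation would also have to handle copying/moving
    // so that the cleanup cannot run twice
};

// usage with a lambda:
// foo *f = acquire_foo();
// auto cleanup = [f] { release_foo(f); };
// scope_guard<decltype(cleanup)> guard(cleanup);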

I was thinking about something similar:
template <typename T>
class RAII {
private:
    T (*constructor)();
    void (*destructor)(T);

public:
    T value;

    RAII(T (*constructor)(), void (*destructor)(T)) :
        constructor(constructor),
        destructor(destructor) {
        value = constructor();
    }

    ~RAII() {
        destructor(value);
    }
};
and to be used like this (using OpenGL's GLUquadric as an example):
RAII<GLUquadric*> quad = RAII<GLUquadric*>(gluNewQuadric, gluDeleteQuadric);
gluSphere(quad.value, 3, 20, 20);

Here's yet another C++11 RAII helper: https://github.com/ArtemGr/libglim/blob/master/raii.hpp
It runs a C++ functor at destruction:
auto unmap = raiiFun ([&]() {munmap (fd, size);});


What is the use of a custom unique_ptr deleter that calls delete?

In the C++ samples provided by NVidia's TensorRT library, there is a file named common.h that contains definitions of structures used throughout the examples.
Among other things, the file contains the following definitions:
struct InferDeleter
{
    template <typename T>
    void operator()(T* obj) const
    {
        delete obj;
    }
};
template <typename T>
using SampleUniquePtr = std::unique_ptr<T, InferDeleter>;
The SampleUniquePtr alias is used throughout the code samples to wrap various pointers to interface classes returned by some functions, e.g. SampleUniquePtr<INetworkDefinition>(builder->createNetworkV2(0));
My question is, in what practical aspects are std::unique_ptr and SampleUniquePtr different? The behavior of SampleUniquePtr is pretty much what I would expect from std::unique_ptr, at least for now. Could it be for compatibility with old versions of C++?
From the history I see that for a while it was:
template <typename T>
void InferDeleter::operator()(T* obj) const
{
    if (obj)
    {
        obj->destroy();
    }
}
Then they declared destroy() methods deprecated:
Destructors for classes with destroy() methods were previously protected. They are now public, enabling use of smart pointers for these classes. The destroy() methods are deprecated.
Although destroy() is deprecated, it has not been removed yet, and the old InferDeleter ABI should be kept for backward compatibility with applications linked against the old TensorRT. Thus, since recently, they are using
template <typename T>
void InferDeleter::operator()(T* obj) const
{
    delete obj;
}
while not removing struct InferDeleter and keeping using SampleUniquePtr = std::unique_ptr<T, InferDeleter>. But it may be removed in the future. When they remove struct InferDeleter and change the alias to using SampleUniquePtr = std::unique_ptr<T>, it will make the TensorRT library incompatible with old software.
I don't have the time to scan through the entirety of the linked library, so there is a chance I'm wrong on this count.
But, what you're seeing is probably a default implementation that's used as a fallback for any type that doesn't need special handling.
The Library Implementor always has the option of specializing this function in the situation where they're dealing with a type that actually does need special handling, which is probably pretty common for a library made by Nvidia (and therefore probably meant to interact with the GPU).
class GPUResource {
    //...
public:
    ~GPUResource() noexcept {} // does NOT perform the special handling, for whatever library-specific reason
};

template<>
void InferDeleter::operator()<GPUResource>(GPUResource * obj) const {
    performSpecialCleanupOnGPUResource(obj->handle);
    delete obj;
}
In practice, this kind of stuff always smells like an anti-pattern (are you sure, Library Implementor, that you couldn't do the cleanup in the destructor of this object??) but if they had good reasons for separating out the logic like this, the way they've defined InferDeleter permits them this kind of flexibility.
The example looks pointless as that is exactly what I expect the default deleter to do.
But maybe this is better (untested):
#include <memory>
#include <vector>

struct VectorDeleter
{
    template <typename T>
    void operator()(std::vector<T*> *v) const
    {
        for (auto p : *v) delete p;
        delete v;
    }
};

template <typename V>
using UniquePtrVector = std::unique_ptr<V, VectorDeleter>;
When you have a vector of pointers (to e.g. polymorphic classes), you have to delete the objects the vector points to when the vector itself is deleted. The VectorDeleter does that before deleting the vector itself.
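A hypothetical usage, assuming a polymorphic Base class (with a virtual destructor) and a Derived class:
UniquePtrVector<std::vector<Base*>> v(new std::vector<Base*>());
v->push_back(new Derived());
v->push_back(new Derived());
// when v goes out of scope, VectorDeleter deletes each Base* first, then the vector itself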

C++11 make_shared instancing

Apologies for the long question, but some context is necessary. I have a bit of code that seems to be a useful pattern for the project I'm working on:
class Foo
{
public:
    Foo( int bar = 1 );
    ~Foo();

    typedef std::shared_ptr< Foo > pointer_type;

    static pointer_type make( int bar = 1 )
    {
        return std::make_shared< Foo >( bar );
    }
    ...
};
As you can see, it provides a straightforward way of constructing any class as a PointerType which encapsulates a shared_ptr to that type:
auto oneFoo = Foo::make( 2 );
And therefore you get the advantages of shared_ptr without putting references to make_shared and shared_ptr all over the code base.
Encapsulating the smart pointer type within the class provides several advantages:
It lets you control the copyability and moveability of the pointer types.
It hides the shared_ptr details from callers, so that non-trivial object constructions, such as those that throw exceptions, can be placed within the Instance() call.
You can change the underlying smart pointer type when you're working with projects that use multiple smart pointer implementations. You could switch to a unique_ptr or even to raw pointers for a particular class, and calling code would remain the same.
It concentrates the details about (smart) pointer construction and aliasing within the class that knows most about how to do it.
It lets you decide which classes can use smart pointers and which classes must be constructed on the stack. The existence of the PointerType field provides a hint to callers about what types of pointers can be created for the class. If there is no PointerType defined for a class, this would indicate that no pointers to that class may be created; therefore that particular class must be created on the stack, RAII style.
However, I see no obvious way of applying this bit of code to all the classes in my project without typing the requisite typedef and static PointerType Instance() functions directly. I suspect there should be some consistent, C++11 standard, cross-platform way of doing this with policy-based templates, but a bit of experimentation has not turned up an obvious way of applying this trivially to a bunch of classes in a way that compiles cleanly on all modern C++ compilers.
Can you think of an elegant way to add these concepts to a bunch of classes, without a great deal of cutting and pasting? An ideal solution would conceptually limit what types of pointers can be created for which types of classes (one class uses shared_ptr and another uses raw pointers), and it would also handle instancing of any supported type by its own preferred method. Such a solution might even handle and/or limit coercion, by failing appropriately at compile time, between non-standard and standard smart and dumb pointer types.
One way is to use the curiously recurring template pattern.
template<typename T>
struct shared_factory
{
    using pointer_type = std::shared_ptr<T>;

    template<typename... Args>
    static pointer_type make(Args&&... args)
    {
        return std::make_shared<T>(std::forward<Args>(args)...);
    }
};

struct foo : public shared_factory<foo>
{
    foo(char const*, int) {}
};
I believe this gives you what you want.
foo::pointer_type f = foo::make("hello, world", 42);
However...
I wouldn't recommend using this approach. Attempting to dictate how users of a type instantiate the type is unnecessarily restrictive. If they need a std::shared_ptr, they can create one. If they need a std::unique_ptr, they can create one. If they want to create an object on the stack, they can. I see nothing to be gained by mandating how your users' objects are created and managed.
To address your points:
It lets you control the copyability and moveability of the pointer types.
Of what benefit is this?
It hides the shared_ptr details from callers, so that non-trivial object constructions, such as those that throw exceptions, can be placed within the Instance() call.
I'm not sure what you mean here. Hopefully not that you can catch the exception and return a nullptr. That would be Java-grade bad.
You can change the underlying smart pointer type when you're working with projects that use multiple smart pointer implementations. You could switch to a unique_ptr or even to raw pointers for a particular class, and calling code would remain the same.
If you are working with multiple kinds of smart pointer, perhaps it would be better to let the user choose the appropriate kind for a given situation. Besides, I'd argue that having the same calling code but returning different kinds of handle is potentially confusing.
It concentrates the details about (smart) pointer construction and aliasing within the class that knows most about how to do it.
In what sense does a class know "most" about how to do pointer construction and aliasing?
It lets you decide which classes can use smart pointers and which classes must be constructed on the stack. The existence of the PointerType field provides a hint to callers about what types of pointers can be created for the class. If there is no PointerType defined for a class, this would indicate that no pointers to that class may be created; therefore that particular class must be created on the stack, RAII style.
Again, I disagree fundamentally with the idea that objects of a certain type must be created and managed in a certain way. This is one of the reasons why the singleton pattern is so insidious.
I wouldn't advise adding those static functions. Among other drawbacks, they really get pretty burdensome to create and maintain when there are multiple constructors. This is a case where auto can help, as well as a typedef outside the class. Plus, you can use the std namespace (but please not in the header):
class Foo
{
public:
    Foo( int bar = 1 );
    ~Foo();
    ...
};
typedef std::shared_ptr<Foo> FooPtr;
In the C++ file:
using namespace std;
auto oneFoo = make_shared<Foo>( 2 );
FooPtr anotherFoo = make_shared<Foo>( 2 );
I think you'll find this to not be too burdensome on typing. Of course, this is all a matter of style.
This is a refinement of Joseph's answer for the sake of making the kind of pointer more configurable:
#include <memory>

template <typename T, template <typename...> class PtrT = std::shared_ptr>
struct ptr_factory {
    using pointer_type = PtrT<T>;

    template <typename... Args>
    static pointer_type make(Args&&... args) {
        return pointer_type{new T{std::forward<Args>(args)...}};
    }
};

template <typename T>
struct ptr_factory<T, std::shared_ptr> {
    using pointer_type = std::shared_ptr<T>;

    template <typename... Args>
    static pointer_type make(Args&&... args) {
        return std::make_shared<T>(std::forward<Args>(args)...);
    }
};
struct foo : public ptr_factory<foo> {
    foo(char const*, int) {}
};

struct bar : public ptr_factory<bar, std::unique_ptr> {
    bar(char const*, int) {}
};
ptr_factory defaults to using std::shared_ptr, but can be configured to use different smart pointer templates, thanks to template template parameters, as illustrated by struct bar.
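For illustration, using the two example structs above:
foo::pointer_type f = foo::make("hello", 42);   // yields a std::shared_ptr<foo>
bar::pointer_type b = bar::make("world", 7);    // yields a std::unique_ptr<bar>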

Dependency inversion principle: trying to understand

I'm learning design patterns and things around them (like SOLID and the Dependency Inversion Principle in particular), and it looks like I'm missing something:
Following the DIP rule, I should be able to make classes less fragile by not creating an object inside the class (composition) but instead passing the object reference/pointer to the class constructor (aggregation). But this means that I have to create the instance somewhere else: so the more flexible one class becomes through aggregation, the more fragile the other becomes.
Please explain where I am wrong.
You just need to follow the idea through to its logical conclusion. Yes, you have to create the instance somewhere else, but this might not be just one level above your class; the creation keeps getting pushed outward until objects are only created at the very outer layer of your application.
Ideally you create all of your objects in a single place; this is called the composition root (the exception being objects which are created from factories, but the factories themselves are created in the composition root). Exactly where this is depends on the type of app you are building.
In a desktop app, that would be in the Main method (or very close to it)
In an ASP.NET (including MVC) application, that would be in Global.asax
In WCF, that would be in a ServiceHostFactory
etc.
This place may end up being 'fragile', but you only have a single place to change things in order to reconfigure your application, and then all other classes are testable and configurable.
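Although the examples above are .NET-specific, in C++ terms a composition root might look roughly like this (all class names are illustrative):
int main()
{
    // concrete implementations are created only here...
    ConsoleLogger logger;
    FileOrderRepository repository("orders.db");
    // ...and injected into the classes that depend on their interfaces
    OrderService service(logger, repository);
    Application app(service);
    return app.run();
}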
See this excellent answer (which is quoted above)
Yes, you would need to instantiate the class somewhere. If you follow the DIP correctly, you will end up creating all the instances in one place. I call this place the class composition. Read my blog here for a more in-depth understanding of this topic.
One important possibility that you've missed is the injection of factories, rather than the class itself. One advantage of this is that it allows your ownership of instances to be cleaner. Note the code is a little uglier since I'm explicitly giving the container ownership of its components. Things can be a little neater if you use shared_ptr, rather than unique_ptr, but then ownership is ambiguous.
So starting with code that looks like this:
struct MyContainer {
    std::unique_ptr<IFoo> foo;
    MyContainer() : foo(new Foo()) {}
    void doit() { foo->doit(); }
};

void useContainer() {
    MyContainer container;
    container.doit();
}
The non-factory version would look like this
struct MyContainer {
    std::unique_ptr<IFoo> foo;

    template<typename T>
    explicit MyContainer(std::unique_ptr<T> && f) :
        foo(std::move(f))
    {}

    void doit() { foo->doit(); }
};

void useContainer() {
    std::unique_ptr<Foo> foo(new Foo());
    MyContainer container(std::move(foo));
    container.doit();
}
The factory version would look like
struct FooFactory : public IFooFactory {
    std::unique_ptr<IFoo> createFoo() override {
        return std::make_unique<Foo>();
    }
};

struct MyContainer {
    std::unique_ptr<IFoo> foo;
    MyContainer(IFooFactory & factory) : foo(factory.createFoo()) {}
    void doit() { foo->doit(); }
};

void useContainer() {
    FooFactory fooFactory;
    MyContainer container(fooFactory);
    container.doit();
}
IMO being explicit about ownership & lifetime in C++ is important - it's a crucial part of C++'s identity - it lies at the heart of RAII and lots of other C++ patterns.

Forcing class inheritance in c++11 for wrapped classes -- is it a bad thing?

I've been working on a wrapping library for scripted languages (partially to learn C++11 features, and partially for a specific need). One issue that has come up is that of exporting inherited objects to the scripted language.
The problem involves using proxy objects of wrapped classes for the invocation of functions. Specifically, if a function takes a Foo *, then the object proxy from whatever scripted language is being used must be cast appropriately.
There are two ways (that I can think of) to model the object proxy appropriately:
template <class T>
struct ObjectProxy {
    T *ptr;
};
or:
struct WrappedClass {
    virtual ~WrappedClass() {}
};

struct ObjectProxy {
    boost::shared_ptr<WrappedClass> ptr;

    template <typename T>
    boost::shared_ptr<T> castAs() {
        return boost::dynamic_pointer_cast<T>(ptr);
    }
};
The problem with the first version is that you need to know ahead of time what type ObjectProxy is pointing to. Unfortunately, there are no easy solutions to this (see many of my previous questions). After some investigation, it looks like most of the popular libraries that do this (e.g. boost::python, LuaBind, etc.) keep a graph of all the class relationships in order to allow for the proper casting.
The second method avoids having to do all that, but does add the constraint that every class you wrap must inherit from WrappedClass.
Here's my question: can anyone think of any major problems, besides being slightly annoying to the user, with the second approach? Even if you didn't write a specific class yourself, you should always be able to subclass it. For example, if you had some library that provides class Foo, then you could do:
class FooWrapped: public Foo, public WrappedClass {};
This does make things a little less seamless for the user (though I've been looking into ways of automating this), but it does mean you can rely on the built-in dynamic_cast rather than having to write your own variant.
Edit: Added castAs() to make the use-case clearer.
Your problem sounds like what boost::any was designed to solve. The solution basically combines your two ideas: (code untested)
struct ObjectProxyBase {
    virtual ~ObjectProxyBase() {}
};

template <class T>
struct ObjectProxy : public ObjectProxyBase {
    T *ptr;
};

template <class T>
T *proxy_cast(ObjectProxyBase *obj) {
    auto ptr = dynamic_cast<ObjectProxy<T> *>(obj);
    if (!ptr)
        return nullptr;
    return ptr->ptr;
}
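Hypothetical usage, assuming wrapped classes Foo and Bar:
Foo someFoo;
ObjectProxy<Foo> proxy;
proxy.ptr = &someFoo;

ObjectProxyBase *erased = &proxy;
Foo *recovered = proxy_cast<Foo>(erased);   // non-null: the proxy really wraps a Foo
Bar *wrong = proxy_cast<Bar>(erased);       // nullptr: it does not wrap a Bar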

Adding member functions to a Boost.Variant

In my C++ library I have a type boost::variant<A,B> and lots of algorithms that take this type as input. Instead of member functions, I have global functions on this type, like void f( boost::variant<A,B>& var ). I know that this can also be achieved with templates, but that is not suitable for my design.
I am very fine with this style of programming:
boost::variant<A, B> v;
f( v );
but some of the users of this library are not used to it, and since the Boost.Variant concept is hidden behind a type definition, they would rather call v.f().
To achieve this, I can think of two possibilities: 1) deriving from boost::variant and 2) re-implementing boost::variant and adding my own member functions. I am not sure whether these ideas are good or not. Can you give me some help with this, please? Are there other possibilities?
Another possibility: Use aggregation. Then you do not directly expose the boost.variant to the users of the library, which gives you much more freedom for future improvements and may simplify some debugging tasks significantly.
General Advice:
Aggregation is less tightly coupled than inheritance, and therefore better by default, unless you have a use case where you explicitly want to pass your object instance to existing functions that only take variants. And even then the base class should have been designed with inheritance in mind.
Example for Aggregation for Your Problem:
As far as I understand it, the free functions already exist and take a variant. Just define a class whose sole data member is the variant, and provide public member functions which do nothing but invoke the already existing free functions with that member, like:
class variant_wrapper {
    boost::variant<A,B> m_variant;
public:
    variant_wrapper(...) : m_variant(...) {} // whatever c_tor you need.
    void f() { ::f(m_variant); } // forward to the existing free function
};
Using this approach you abstract away the fact that you are using boost.variant for your implementation (which you already do through a typedef for the library's users), giving you the freedom to change that later (for optimization, feature extensions, or whatever); you can also decide to make the values immutable, take a simpler approach to debugging accesses to your algorithms, etc.
The disadvantage of aggregation is that you cannot just pass the wrapper to a static_visitor, but since your users are not supposed to know that there is a variant, and you yourself know to simply pass the member variable, I do not see a big issue here.
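If visitation is ever needed, the wrapper can still expose it explicitly without revealing the variant, for example with a forwarding member along these lines (a sketch):
template <typename Visitor>
typename Visitor::result_type apply(Visitor const &vis)
{
    return boost::apply_visitor(vis, m_variant);
}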
Final rant:
C++ is not Java. You need to fix the users of the library...
What you would like to have are C# extension methods; such things do not exist in C++. However, I would not reimplement/implementation-copy boost.variant (maintenance burden), and I would not inherit from it. Use aggregation where possible.
I'd derive from boost::variant. That should be fine so long as you don't add data members to the class and don't add virtual functions. (You may be able to do some of those, but things are a little more iffy, I think.) Anyway, this seems to work OK for me:
#include "boost/variant.hpp"
#include <iostream>
template<typename T1, typename T2>
struct my_print : public boost::static_visitor<>
{
void operator()( T1 t1 ) const
{
std::cout<<"FIRST TYPE "<<t1<<std::endl;
}
void operator()( T2 t2 ) const
{
std::cout<<"SECOND TYPE "<<t2<<std::endl;
}
};
template<typename T1, typename T2>
class MyVariant : public boost::variant<T1,T2>
{
public:
void print()
{
boost::apply_visitor(my_print<T1,T2>(), *this );
}
template<typename T>
MyVariant<T1,T2>& operator=(const T& t)
{
boost::variant<T1,T2>::operator=(t);
return *this;
}
MyVariant(const T1& t) : boost::variant<T1,T2>(t)
{
}
MyVariant(const T2& t) : boost::variant<T1,T2>(t)
{
}
template<typename T>
explicit MyVariant(const T& t) : boost::variant<T1,T2>(t)
{
}
};
int main()
{
MyVariant<int,std::string> s=1;
s.print();
s=std::string("hello");
s.print();
MyVariant<int,std::string> v2 = s;
v2.print();
s=boost::variant<int,std::string>(3);
s.print();
}