I'm learning design patterns and the ideas around them (like SOLID, and the Dependency Inversion Principle in particular), and it looks like I'm missing something:
Following DIP, I should be able to make classes less fragile by not creating objects inside the class (composition) but instead passing an object reference/pointer to the class constructor (aggregation). But this means I have to create the instance somewhere else: so the more flexible one class becomes through aggregation, the more fragile another class gets.
Please explain to me where I am wrong.
You just need to follow the idea through to its logical conclusion. Yes, you have to make the instance somewhere else, but this is not necessarily the class one level above yours: creation keeps getting pushed out and out until objects are only created at the very outer layer of your application.
Ideally you create all of your objects in a single place; this is called the composition root (the exception being objects which are created by factories, but the factories themselves are created in the composition root). Exactly where this is depends on the type of app you are building.
In a desktop app, that would be in the Main method (or very close to it)
In an ASP.NET (including MVC) application, that would be in Global.asax
In WCF, that would be in a ServiceHostFactory
etc.
This place may end up being 'fragile', but it gives you a single place to change things in order to reconfigure your application, and all other classes remain testable and configurable.
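As an illustration, here is a minimal sketch of a composition root in a tiny C++ program (all names here are hypothetical): only main knows the concrete types, and everything below it depends on abstractions.
struct ILogger { virtual ~ILogger() = default; virtual void log(const char*) = 0; };
struct IEngine { virtual ~IEngine() = default; virtual void run() = 0; };

struct ConsoleLogger : ILogger { void log(const char*) override { /* write to console */ } };

struct Engine : IEngine {
    explicit Engine(ILogger& logger) : logger_(logger) {}   // dependency injected (aggregation)
    void run() override { logger_.log("running"); }
    ILogger& logger_;
};

int main() {
    // Composition root: the only place where concrete types are created and wired together.
    ConsoleLogger logger;
    Engine engine(logger);
    engine.run();
}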
See this excellent answer (which is quoted above)
Yes, you would need to instantiate the class somewhere. If you follow the DIP correctly, you will end up creating all the instances in one place. I call this place the class composition. Read my blog for a more in-depth understanding of this topic here.
One important possibility that you've missed is the injection of factories, rather than of the class instances themselves. One advantage of this is that it keeps the ownership of instances cleaner. Note the code is a little uglier since I'm explicitly giving the container ownership of its components. Things can be a little neater if you use shared_ptr rather than unique_ptr, but then ownership is ambiguous.
So starting with code that looks like this:
struct MyContainer {
std::unique_ptr<IFoo> foo;
MyContainer() : foo(new Foo() ) {};
void doit() { foo->doit(); }
};
void useContainer() {
MyContainer container;
container.doit();
}
The non-factory version would look like this:
struct MyContainer {
std::unique_ptr<IFoo> foo;
template<typename T>
explicit MyContainer(std::unique_ptr<T> && f) :
foo(std::move(f))
{}
void doit() { foo->doit(); }
};
void useContainer() {
std::unique_ptr<Foo> foo( new Foo());
MyContainer container(std::move(foo));
container.doit();
}
The factory version would look like this:
struct FooFactory : public IFooFactory {
std::unique_ptr<IFoo> createFoo() override {
return std::make_unique<Foo>();
}
};
struct MyContainer {
std::unique_ptr<IFoo> foo;
MyContainer(IFooFactory & factory) : foo(factory.createFoo()) {};
void doit() { foo->doit(); }
};
void useContainer() {
FooFactory fooFactory;
MyContainer container(fooFactory);
container.doit();
}
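For reference, the snippets above assume interface declarations roughly like the following; they aren't shown in the original, so treat this as a guess at their shape.
#include <memory>

struct IFoo {
    virtual ~IFoo() = default;
    virtual void doit() = 0;
};

struct Foo : IFoo {
    void doit() override { /* ... */ }
};

struct IFooFactory {
    virtual ~IFooFactory() = default;
    virtual std::unique_ptr<IFoo> createFoo() = 0;
};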
IMO being explicit about ownership & lifetime in C++ is important - it's a crucial part of C++'s identity - it lies at the heart of RAII and lots of other C++ patterns.
One of the nice things in Java is implementing an interface anonymously. For example, consider the following snippet:
interface SimpleInterface
{
    void doThis();
}
...
SimpleInterface simple = new SimpleInterface()
{
    @Override public void doThis() { /* Do something here */ }
};
The only way I can see this being done in C++ is through lambdas, or by passing an instance of function<> to a class. But I am actually checking whether this is possible somehow. I have classes which implement a particular interface, and these interfaces just contain 1-2 methods. I can't write a new file for each of them, or add a method to a class which accepts a function<> or lambda so that it can determine what to do. Is this strictly a C++ limitation? Will it ever be supported?
Somehow, I wanted to write something like this:
thisClass.setAction(i, new SimpleInterface()
{
protected:
virtual void doThis(){}
});
One thing though is that I haven't checked the latest spec for C++14, and I wanted to know if this is possible somehow.
Thank you!
Will it ever be supported?
You mean, will the language designers ever add a dirty hack where the only reason it ever existed in one language was because those designers were too stupid to add the feature they actually needed?
Not in this specific instance.
You can create a class that derives from the interface and wraps a lambda, and then use that at your various call sites. But you'd still need to create one such adapter for each interface.
struct FunctionalInterfaceImpl : SimpleInterface {
FunctionalInterfaceImpl(std::function<void()> f)
: func(f) {}
std::function<void()> func;
void doThis() override { func(); }
};
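A call site can then pass a lambda wherever the interface is expected, e.g. (assuming setAction takes ownership of the raw pointer, as in the question):
thisClass.setAction(i, new FunctionalInterfaceImpl([] { /* do something here */ }));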
You seem to think each class needs a separate .h and .cpp file. C++ allows you to define a class at any scope, including local to a function:
void foo() {
struct SimpleInterfaceImpl : SimpleInterface
{
protected:
void doThis() override {}
};
thisClass.setAction(i, new SimpleInterfaceImpl());
}
Of course, you have a naked new in there which is probably a bad idea. In real code, you'd want to allocate the instance locally, or use a smart pointer.
This is indeed a "limitation" of C++ (and of C#, as I found while doing some research a while ago). Anonymous Java classes are one of Java's unique features.
The closest you can come to emulating this is with function objects and/or local types. C++11 and later offer lambdas, which are syntactic sugar for those two things and save us a lot of writing. Thank goodness for that; before C++11 one had to define a type for every little thing.
Please note that for interfaces made up of a single method, function objects/lambdas/delegates (C#) are actually a cleaner approach. Java uses interfaces for this case because of a "limitation" of its own; it would be considered a Java-ism to use single-method interfaces as callbacks in C++.
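As a hypothetical sketch (setAction here is a made-up free function, not the questioner's API): if you control the callee, the single-method interface can simply become a callable parameter.
#include <functional>
#include <iostream>

// Accepts any callable instead of a one-method interface.
void setAction(int i, std::function<void()> action) {
    std::cout << "running action " << i << ": ";
    action();
}

int main() {
    setAction(42, [] { std::cout << "doing this\n"; });
}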
Local types are actually a pretty good approximation, the only drawback being that you are forced to name the types (see edit) (a tiresome obligation, which one takes on when using static languages of the C family).
You don't need to allocate an object with new to use it polymorphically. It can be a stack object, which you pass by reference (or pointer, for extra anachronism). For instance:
struct This {};
struct That {};
class Handler {
public:
virtual ~Handler () = default;
virtual void handle (This) = 0;
virtual void handle (That) = 0;
};
class Dispatcher {
Handler& handler;
public:
Dispatcher (Handler& handler): handler(handler) { }
template <typename T>
void dispatch (T&& obj) { handler.handle(std::forward<T>(obj)); }
};
void f ()
{
struct: public Handler {
void handle (This) override { }
void handle (That) override { }
} handler;
Dispatcher dispatcher { handler };
dispatcher.dispatch(This {});
dispatcher.dispatch(That {});
}
Also note the override specifier offered by C++11, which has more or less the same purpose as the @Override annotation (generate a compile error in case this member function (method) does not actually override anything).
I have never heard about this feature being supported or even discussed, and I personally don't see it even being considered as a feature in C++ community.
EDIT: Right after finishing this post, I realised that there is no need to name local types (naturally), so the example becomes even more Java-friendly. The only difference is that you cannot define a new type within an expression. I have updated the example accordingly.
In C++, interfaces are classes which have pure virtual functions in them, e.g.
class Foo {
public:
    virtual void Function() = 0;
};
Every single class that inherits this class must implement this function.
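For instance, a minimal sketch (Bar and useFoo are made-up names):
class Bar : public Foo {
public:
    void Function() override { /* concrete behaviour */ }
};

void useFoo(Foo& foo) {
    foo.Function();   // dispatches to Bar::Function when passed a Bar
}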
I've been working on a wrapping library for scripted languages (partially to learn C++11 features, and partially for a specific need). One issue that has come up is that of exporting inherited objects to the scripted language.
The problem involves using proxy objects of wrapped classes for the invocation of functions. Specifically, if a function takes a Foo *, then the object proxy from whatever scripted language is being used must be cast appropriately.
There are two ways (that I can think of) to model the object proxy appropriately:
template <class T>
struct ObjectProxy {
T *ptr;
};
or:
struct WrappedClass {
virtual ~WrappedClass() {}
};
struct ObjectProxy {
boost::shared_ptr<WrappedClass> ptr;   // shared_ptr so dynamic_pointer_cast works below
template <typename T>
boost::shared_ptr<T> castAs() {
return boost::dynamic_pointer_cast<T>(ptr);
}
};
The problem with the first version is that you need to know ahead of time what type ObjectProxy is pointing to. Unfortunately, there are no easy solutions to this (see many of my previous questions). After some investigation, it looks like most of the popular libraries that do this (e.g. boost::python, LuaBind, etc.) keep a graph of all the class relationships in order to allow for the proper casting.
The second method avoids having to do all that, but adds the constraint that every class you wrap must inherit from WrappedClass.
Here's my question: can anyone think of any major problems, besides being slightly annoying to the user, with the second approach? Even if you didn't write a specific class yourself, you should always be able to subclass it. For example, if you had some library that provides class Foo, then you could do:
class FooWrapped: public Foo, public WrappedClass {};
While this does make things a little less seamless for the user (though I've been looking into ways of automating it), it does mean you can rely on the built-in dynamic_cast rather than having to write your own variant.
Edit: added castAs() to make the use case clearer.
Your problem sounds like what boost::any was designed to solve. The solution basically combines your two ideas: (code untested)
struct ObjectProxyBase {
virtual ~ObjectProxyBase() {}
};
template <class T>
struct ObjectProxy : public ObjectProxyBase {
T *ptr;
};
template <class T>
T *proxy_cast(ObjectProxyBase *obj) {
auto ptr = dynamic_cast<ObjectProxy<T> *>(obj);
if (!ptr)
return nullptr;
return ptr->ptr;
}
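Usage might then look roughly like this (Foo here is a stand-in for one of the wrapped classes):
struct Foo {};          // stand-in for a wrapped class

int main() {
    Foo foo;
    ObjectProxy<Foo> proxy;
    proxy.ptr = &foo;

    ObjectProxyBase *base = &proxy;
    if (Foo *p = proxy_cast<Foo>(base)) {
        // p == &foo; asking for an unrelated type would return nullptr
    }
}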
I've been reading and searching the web for a while now but haven't found a nice solution. Here's what I want to do:
I am writing a library that defines an abstract base class - let's call it IFoo.
class IFoo {
public:
virtual void doSomething() = 0;
virtual ~IFoo() {}
};
The library also defines a couple of implementations for that - let's call them FooLibOne and FooLibTwo.
In order to encapsulate the creation process and decide which concrete implementation is used depending on some runtime parameter, I use a factory FooFactory that maps std::string to factory methods (in my case boost::function, but that should not be the point here). It also allows new factory methods to be registered. It looks something like this:
class FooFactory {
public:
typedef boost::function<IFoo* ()> CreatorFunction;
IFoo* create(std::string name);
void registerCreator(std::string name, CreatorFunction f);
private:
std::map<std::string, CreatorFunction> mapping_;
};
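For reference, the member functions might be implemented along these lines (this is my guess, not code from the question):
IFoo* FooFactory::create(std::string name) {
    std::map<std::string, CreatorFunction>::iterator it = mapping_.find(name);
    if (it == mapping_.end())
        return 0;   // or throw, depending on the contract you want
    return it->second();
}

void FooFactory::registerCreator(std::string name, CreatorFunction f) {
    mapping_[name] = f;
}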
For now, I added the implementations provided by the library (FooLibOne, FooLibTwo) directly in the constructor of FooFactory - thus they are always available. Some of the library code uses the FooFactory to initialize certain objects etc. I have refrained from using the Singleton pattern for the factories so far, since the pattern is debated often enough and I wasn't sure how different implementations of the Singleton pattern would work in combination with possibly multiple shared libraries etc.
However, passing around the factories can be a little cumbersome, and I still think this is one of the occasions where the Singleton pattern could be of good use. Especially if I consider that the users of the library should be able to add more implementations of IFoo which should also be accessible to the (already existing) library code. Of course, Dependency Injection - meaning I pass an instance of a factory through the constructor - could do the trick (and does it for now). But this approach kind of fails if I want to be even more flexible and introduce a second layer of dynamic object creation. Meaning: I want to dynamically create objects (see above) within dynamically created objects (say implementations of an abstract base class IBar - BarOne and BarTwo - again via a factory BarFactory).
Let's say BarOne requires an IFoo object but BarTwo doesn't. I still have to provide the FooFactory to the BarFactory in any case, since one of the IBar implementations might need it. Having globally accessible factories would mitigate this problem, and I wouldn't be forced to foresee which factories may be needed by implementations of a specific interface. In addition, I could register the creation methods directly in the source file of the implementations.
FooFactory::Instance().registerCreator("new_creator", boost::bind(...));
Since I think it is a good idea, what would be the right way to implement it? I was going for a templated approach like the SingletonHolder from Modern C++ Design (see also the Loki library) to wrap the factories. However, I'd rather implement it as a Meyers singleton instead. But I still think there will be issues with shared libraries. The solution should work with GCC (and preferably MSVC). I'm also open to other ideas from a design point of view, but please avoid the common "Singletons are evil" rants. ;-)
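A minimal sketch of the Meyers-singleton variant I have in mind would be something like this (whether it behaves well across shared libraries is exactly the open question):
class FooFactory {
public:
    static FooFactory& Instance() {
        static FooFactory instance;   // constructed on first use, destroyed at program exit
        return instance;
    }
    // ... create() / registerCreator() as above ...
private:
    FooFactory() {}                    // register the built-in creators here
    FooFactory(const FooFactory&);     // non-copyable
    FooFactory& operator=(const FooFactory&);
};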
Thanks in advance.
Hopefully, 76 lines of code speak more than a few words (using the C++11 versions of these features instead of the Boost ones, but they're pretty much the same anyway).
I would put the definition of the factory(ies) and the definition of the creators in the same (or nearby) scope, so that each of the creators can "see" any of their dependent factories - avoiding the need to pass factories around too much, and avoiding singletons
Cars & Sirens:
class ISiren {};
class Siren : public ISiren
{
public:
Siren() { std::cout << "Siren Created" << std::endl; }
};
class ICar{};
class EstateCar : public ICar
{
public:
EstateCar() { std::cout << "EstateCar created" << std::endl;}
};
class PoliceCar : public ICar
{
std::shared_ptr<ISiren> siren;
public:
PoliceCar( std::shared_ptr<ISiren> siren)
: siren( siren )
{
std::cout << "PoliceCar created" << std::endl;
}
};
Factories:
typedef std::function< std::shared_ptr<ICar> () > CreatorType;
class CarFactory
{
std::map<std::string, CreatorType> creators;
public:
void AddFactory( std::string type, CreatorType func )
{
creators.insert( std::make_pair(type, func) );
}
std::shared_ptr<ICar> CreateCar( std::string type )
{
CreatorType& create( creators[type] );
return create();
}
};
class SirenFactory
{
public: // Simple factory creating 1 siren type just for brevity
std::shared_ptr<ISiren> CreateSiren() { return std::make_shared<Siren>(); }
};
"Factory Root" (main function, wherever factories are defined) :
int main()
{
CarFactory car_factory; // Car factory unaware of Siren factory
SirenFactory siren_factory;
auto EstateCarLambda = []() {
return std::make_shared<EstateCar>();
}; // Estate car lambda knows nothing of the Siren Factory
auto PoliceCarLambda = [&siren_factory]() {
return std::make_shared<PoliceCar>( siren_factory.CreateSiren() );
}; // Police car creation lambda using the Siren Factory
car_factory.AddFactory( "EstateCar", EstateCarLambda );
car_factory.AddFactory( "PoliceCar", PoliceCarLambda );
std::shared_ptr<ICar> car1 = car_factory.CreateCar( "EstateCar" );
std::shared_ptr<ICar> car2 = car_factory.CreateCar( "PoliceCar" );
}
I am struggling to understand why the initialization of pprocessor, below, is written like this:
class X
{
...
private:
boost::scoped_ptr<f_process> pprocessor_;
};
X::X()
: pprocessor_( f_process_factory<t_process>().make() ) //why the factory with template
{...}
instead of just writing
X::X()
: pprocessor_( new t_process() )
{...}
Other relevant code is:
class f_process {
...
};
class t_process : public f_process {
...
};
//
class f_process_factory_base {
public:
f_process_factory_base() { }
virtual ~f_process_factory_base() { }
virtual f_process* make() = 0;
};
template < typename processClass >
class f_process_factory : public f_process_factory_base {
public:
f_process_factory() { }
virtual ~f_process_factory() { }
virtual processClass* make() { return new processClass(); }
};
The guy who wrote the code is very clever so perhaps there is a good reason for it.
(I can't contact him to ask)
As it is, it seems kind of pointless, but I can think of a few possible uses that aren't shown here and may be useful in the future:
Memory management: It's possible that at some point in the future the original author anticipated needing a different allocation scheme for t_process. For example, he may want to reuse old objects or allocate new ones from an arena.
Tracking creation: There may be stats collected by the f_process_factory objects when they're created. Maybe the factory can keep some static state.
Binding constructor parameters: Perhaps a specialization of the f_process_factory for t_process at some point in the future needs to pass constructor parameters to the t_process creator, but X doesn't want to know about them.
Preparing for dependency injection: It might be possible to specialize these factories to return mocks, instead of real t_process. That could be achieved in a few ways, but not exactly as written.
Specialized object creation: (This is really just the general case for the previous two), there may be specializations of t_process that get created in different circumstances - for example, it might create different t_process types based on environmental variables or operating system. This would require specializations of the factory.
If it were me, and none of these sound plausible, I'd probably rip it out, as it seems like gratuitous design pattern usage.
This looks like he is using the factory design pattern to create new instances of t_process. This allows you to delegate the responsibility of creating different types of t_process away from class X.
Well, in this case it doesn't make much sense, unless the author expects the default factory's definition will be updated sometime in the future. It would make sense, though, if the factory object were passed in as a parameter; a factory gives you more flexibility in constructing an object, but if you instantiate the factory at the same place that you use it, then it really doesn't provide an advantage. So, you're right.
Is there any template available in Boost for RAII? There are classes like scoped_ptr and shared_ptr which basically work on pointers. Can those classes be used for resources other than pointers? Is there any template which works with general resources?
Take for example some resource which is acquired at the beginning of a scope and has to be released somehow at the end of the scope. Both acquire and release take some steps. We could write a template which takes two functors (or maybe one object) which do this task. I haven't thought through how this can be achieved; I was just wondering whether there are any existing ways to do it.
Edit: How about one in C++0x with support for lambda functions?
shared_ptr provides the possibility to specify a custom deleter. When the pointer needs to be destroyed, the deleter will be invoked and can do whatever cleanup actions are necessary. This way more complicated resources than simple pointers can be managed with this smart pointer class.
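For example, a minimal sketch using a FILE* as the managed resource (std::shared_ptr shown here; boost::shared_ptr offers the same deleter constructor):
#include <cstdio>
#include <memory>

int main() {
    // The second constructor argument is the custom deleter; it runs when
    // the last shared_ptr referencing the FILE goes out of scope.
    std::shared_ptr<std::FILE> file(std::fopen("data.txt", "r"),
                                    [](std::FILE* f) { if (f) std::fclose(f); });
    // ... use file.get() ...
}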
The most generic approach is the ScopeGuard one (the basic idea is in this DDJ article; it is implemented e.g. with convenience macros in Boost.ScopeExit), and it lets you execute functions or clean up resources at scope exit.
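A rough sketch of what that looks like with Boost.ScopeExit (the file handling here is just an assumed example resource):
#include <boost/scope_exit.hpp>
#include <cstdio>

void process() {
    std::FILE* f = std::fopen("data.txt", "r");

    BOOST_SCOPE_EXIT(&f) {
        // Runs when the scope is left, normally or via an exception.
        if (f) std::fclose(f);
    } BOOST_SCOPE_EXIT_END

    // ... use f, possibly in several steps that might throw ...
}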
But to be honest, I don't see why you'd want that. While I understand that it's a bit annoying to write a class every time for a one-step-acquire and one-step-release pattern, you are talking about multi-step acquire and release.
If it takes multiple steps, it belongs, in my opinion, in an appropriately named utility class so that the details are hidden and the code is in one place (thus reducing the probability of errors).
If you weigh it against the gains, those few additional lines are not really something to worry about.
A more generic and more efficient (no call through function pointer) version is as follows:
#include <boost/type_traits.hpp>
template<typename FuncType, FuncType * Func>
class RAIIFunc
{
public:
typedef typename boost::function_traits<FuncType>::arg1_type arg_type;
RAIIFunc(arg_type p) : p_(p) {}
~RAIIFunc() { Func(p_); }
arg_type & getValue() { return p_; }
arg_type const & getValue() const { return p_; }
private:
arg_type p_;
};
Example use:
RAIIFunc<int (int), ::close> f = ::open("...");
I have to admit I don't really see the point. Writing a RAII wrapper from scratch is ridiculously simple already. There's just not much work to be saved by using some kind of predefined wrapper:
struct scoped_foo : private boost::noncopyable {
scoped_foo() : f(...) {}
~scoped_foo() {...}
foo& get_foo() { return f; }
private:
foo f;
};
Now, the ...'s are essentially the bits that'd have to be filled out manually if you used some kind of general RAII template: creation and destruction of our foo resource. And without them there's really not much left. A few lines of boilerplate code, but it's so little it just doesn't seem worth it to extract it into a reusable template, at least not at the moment. With the addition of lambdas in C++0x, we could write the functors for creation and destruction so concisely that it might be worth it to write those and plug them into a reusable template. But until then, it seems like it'd be more trouble than it's worth. If you were to define two functors to plug into a RAII template, you'd have already written most of this boilerplate code twice.
I was thinking about something similar:
template <typename T>
class RAII {
private:
T (*constructor)();
void (*destructor)(T);
public:
T value;
RAII(T (*constructor)(), void (*destructor)(T)) :
constructor(constructor),
destructor(destructor) {
value = constructor();
}
~RAII() {
destructor(value);
}
};
and to be used like this (using OpenGL's GLUquadric as an example):
RAII<GLUquadric*> quad = RAII<GLUquadric*>(gluNewQuadric, gluDeleteQuadric);
gluSphere(quad.value, 3, 20, 20);
Here's yet another C++11 RAII helper: https://github.com/ArtemGr/libglim/blob/master/raii.hpp
It runs a C++ functor at destruction:
auto unmap = raiiFun ([&]() {munmap (fd, size);});