Abstract factory design pattern with default implementation - c++

I have to create a family of objects based on the customer type.
I have one abstract base class, ApplicationRulesFactory, which defines the virtual interface. A lot of concrete customer classes inherit from this class.
The problem is that for some customers, say CustomerB, we do not use the objects Rule2 and Rule3: the application features that use Rule2 and Rule3 are disabled in the user interface for that customer, so we do not really need to instantiate these objects at all.
The simplified code is below; in reality ApplicationRulesFactory has many more virtual methods, and more concrete customer classes inherit from it:
class ApplicationRulesFactory
{
    virtual Rule1* GetRule1() = 0;
    virtual Rule2* GetRule2() = 0;
    virtual Rule3* GetRule3() = 0;
    .....
};
class ACustomerRulesFactory : public ApplicationRulesFactory
{
    Rule1* GetRule1()
    {
        return new ACustomerRule1();
    }
    Rule2* GetRule2()
    {
        return new ACustomerRule2();
    }
    Rule3* GetRule3()
    {
        return new ACustomerRule3();
    }
};
class BCustomerRulesFactory : public ApplicationRulesFactory
{
    Rule1* GetRule1()
    {
        return new BCustomerRule1();
    }
    Rule2* GetRule2() // not needed
    {
        // what to return here ?
    }
    Rule3* GetRule3() // not needed
    {
        // what to return here ?
    }
};
So how should I go about implementing this?
1) Return some default implementation from the base class ApplicationRulesFactory:
class ApplicationRulesFactory
{
    virtual Rule1* GetRule1() = 0;
    virtual Rule2* GetRule2() { return new Rule2DefaultImpl(); }
    virtual Rule3* GetRule3() { return new Rule3DefaultImpl(); }
};
But this seems wrong: deriving new classes (Rule2DefaultImpl, Rule3DefaultImpl) from Rule2 and Rule3, and probably giving them empty implementations, just for the purpose of returning them as default implementations from ApplicationRulesFactory.
2) Or return the default implementation from the concrete class, and leave these methods pure virtual in the base class:
class BCustomerRulesFactory : public ApplicationRulesFactory
{
    Rule1* GetRule1()
    {
        return new BCustomerRule1();
    }
    Rule2* GetRule2()
    {
        return new Rule2DefaultImpl();
    }
    Rule3* GetRule3()
    {
        return new Rule3DefaultImpl();
    }
};
This solution also seems very ugly: redefining the methods in every concrete customer class even though they are not needed.
3) Also, I have a feeling that maybe I should not use inheritance like this, because it violates the IS-A rule for inheritance: a significant number of the methods are not applicable to all of the concrete customer classes. But I don't know how to implement this without inheritance.
Any ideas?

If ApplicationRulesFactory doesn't make sense for certain kinds of Customers, then it isn't the right abstraction for you.
Your domain knows what makes sense, so why would it be asking for Rule2 and Rule3?
Make the object which knows that it only needs Rule1 use a factory which gives it Rule1 only. Give it a context so that it can get the factory it needs.
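For illustration, here is a minimal sketch of that idea; the names Rule1Factory, OptionalRulesFactory and CustomerContext are made up for this example, and a customer whose UI never enables the Rule2/Rule3 features simply never asks for that factory:
class Rule1;
class Rule2;
class Rule3;

// A small factory for what every customer needs.
class Rule1Factory {
public:
    virtual ~Rule1Factory() {}
    virtual Rule1* GetRule1() = 0;
};

// A separate factory for the rules that only some customers use.
class OptionalRulesFactory {
public:
    virtual ~OptionalRulesFactory() {}
    virtual Rule2* GetRule2() = 0;
    virtual Rule3* GetRule3() = 0;
};

// The context gives each part of the application only the factory it needs.
class CustomerContext {
public:
    virtual ~CustomerContext() {}
    virtual Rule1Factory& rule1Factory() = 0;
    // Null for customers whose Rule2/Rule3 features are disabled in the UI.
    virtual OptionalRulesFactory* optionalRulesFactory() { return nullptr; }
};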

You seem to be mixing the interface and the factory into one. Surely the interface should be a class of its own, with the various rules given a default behaviour in the base class and overridden behaviour in the derived classes; the factory then returns a pointer to the requested class that implements the right rules for that case.
But maybe I've misunderstood what you are trying to achieve...

If the rules can never be used, I would suggest just returning a null pointer from a base class implementation (mostly like your option one except not even bothering with a default implementation since it can never be called).
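A minimal sketch of that option (assuming callers of the optional getters check for null before using the rule):
class ApplicationRulesFactory
{
public:
    virtual ~ApplicationRulesFactory() {}
    virtual Rule1* GetRule1() = 0;                 // every customer implements this
    virtual Rule2* GetRule2() { return nullptr; }  // never called for customers like CustomerB
    virtual Rule3* GetRule3() { return nullptr; }  // never called for customers like CustomerB
};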

Related

Is it bad practice to use an empty base class to force inheritance for an abstract factory?

I have a class called A and, say, a few classes derived from A.
I'm not including them here to save some space, but assume the derived classes of A are varied enough to require a factory. The same goes for class B.
struct A
{
    ...
};
struct B
{
    ...
};
// start of factory code
//
struct empty_base_factory
{
};
struct factoryA : public empty_base_factory
{
    std::shared_ptr<A> make_object() { return ... }
};
struct factoryB : public empty_base_factory
{
    std::shared_ptr<B> make_object() { return ... }
};
class abstract_factory
{
    std::map<uint8_t, std::shared_ptr<empty_base_factory>> iv_factories;
public:
    abstract_factory()
    {
        iv_factories[0] = std::make_shared<factoryA>();
        iv_factories[1] = std::make_shared<factoryB>();
        // .. I might have several more similar to this
    }
    std::shared_ptr<empty_base_factory> make_factory(const uint8_t i_key)
    {
        return iv_factories[i_key];
    }
};
It feels like I'm forcing an unnatural inheritance with empty_base_factory in order to get this implementation, which I found in a book, to work nicely. It would make sense for make_object to be an interface method, making empty_base_factory a real interface, but the return type of make_object differs between factories and I'm not sure how to handle that.
Is this a poor way of trying to implement an abstract factory by forcing the use of an empty base class? Thoughts?
You don't need "empty_base_factory" if it's just empty.
In the Abstract Factory design pattern:
The abstract factory pattern provides a way to encapsulate a group of
individual factories that have a common theme without specifying their
concrete classes
In your case: abstract_factory depends on factoryA and factoryB (which are your concrete factories) to create a concrete object. The user doesn't know anything about these concrete factories' existence.
Going back to your question: it is not really about the Abstract Factory design pattern.
There's nothing stopping you from defining an abstract base class, BaseFactory, which defines the basic functionality of a factory (e.g. make_object()). The virtual functions in BaseFactory are what your concrete factories should implement.
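As a rough sketch of that suggestion (assuming, unlike in your snippet, that A and B can share a common product base class; otherwise the return types cannot be unified):
#include <memory>

struct product { virtual ~product() = default; };
struct A : product { /* ... */ };
struct B : product { /* ... */ };

struct base_factory
{
    virtual ~base_factory() = default;
    virtual std::shared_ptr<product> make_object() = 0;
};

struct factoryA : base_factory
{
    std::shared_ptr<product> make_object() override { return std::make_shared<A>(); }
};

struct factoryB : base_factory
{
    std::shared_ptr<product> make_object() override { return std::make_shared<B>(); }
};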

Associating children via parents in parallel inheritance hierarchies

I need to develop a C++ solution to represent an object with features, where the objects and the features are represented by different classes, but the actual association is implemented in derived classes which exist to encapsulate an external implementation. I know that this kind of thing is typical of inheritance-related problems, so I want opinions on the correct solution. The implementation part should be seen as a sort of API boundary -- the user code should not see it, or should see it only once, in order to select the implementation.
Here's an example:
#include <cstdio>
// External implementation 1
class SomeShape {};
class SomeBody { public: SomeShape *shape; };
// External implementation 2
class OtherShape {};
class OtherBody { public: OtherShape *shape; };
//////////////
class Shape
{
public:
    virtual const char *name() { return "Shape"; }
};
class Body
{
public:
    virtual void setShape(Shape *s) = 0;
};
class Factory
{
public:
    virtual Shape *makeShape() = 0;
    virtual Body *makeBody() = 0;
};
//////////////
class AShape : public Shape
{
public:
    SomeShape *someShape;
    virtual const char *name() { return "AShape"; }
};
class ABody : public Body
{
protected:
    SomeBody *someBody;
    AShape *shape;
public:
    ABody() { someBody = new SomeBody; }
    virtual void setShape(Shape *s)
    {
        shape = static_cast<AShape*>(s);
        printf("Setting shape: %s\n", s->name());
        someBody->shape = shape->someShape;
    }
};
class AFactory : public Factory
{
public:
    virtual Shape *makeShape()
    { return new AShape(); }
    virtual Body *makeBody()
    { return new ABody(); }
};
//////////////
class BShape : public Shape
{
public:
    OtherShape *otherShape;
    virtual const char *name() { return "BShape"; }
};
class BBody : public Body
{
protected:
    OtherBody *otherBody;
    BShape *shape;
public:
    BBody() { otherBody = new OtherBody; }
    virtual void setShape(Shape *s)
    {
        shape = static_cast<BShape*>(s);
        printf("Setting shape: %s\n", s->name());
        otherBody->shape = shape->otherShape;
    }
};
class BFactory : public Factory
{
public:
    virtual Shape *makeShape()
    { return new BShape(); }
    virtual Body *makeBody()
    { return new BBody(); }
};
Thus, the role of the above is to allow the user to instantiate Body and Shape objects, which exist to manage associating underlying implementations SomeShape/SomeBody or OtherShape/OtherBody.
Then, a main function exercising both implementations could be,
int main()
{
    // Of course in a real program we would return
    // a particular Factory from some selection function;
    // this should ideally be the only place the user is
    // exposed to the implementation selection.
    AFactory f1;
    BFactory f2;
    // Associate a shape and body in implementation 1
    Shape *s1 = f1.makeShape();
    Body *b1 = f1.makeBody();
    b1->setShape(s1);
    // Associate a shape and body in implementation 2
    Shape *s2 = f2.makeShape();
    Body *b2 = f2.makeBody();
    b2->setShape(s2);
    // This should not be possible, compiler error ideally
    b2->setShape(s1);
    return 0;
}
So, the parts that I am not happy about here are the static_cast<> calls in setShape(), because they build in an assumption that the correct object type has been passed in, without any compile-time type checking. Meanwhile, setShape() can accept any Shape, when in reality only a derived class should be accepted here.
However, I don't see how compile-time type checking could be possible if I want the user code to operate on the Body/Shape level and not the ABody/AShape or BBody/BShape level. On the other hand, switching the code so that ABody::setShape() accepts only an AShape* would make the whole factory pattern useless, for one thing, and would force the user code to be aware of which implementation is in use.
In addition it seems like the A/B classes are an extra level of abstraction over Some/Other, which exist only to support them at compile time, yet these are not intended to be exposed to the API, so what's the point... they serve only as a kind of impedance-matching layer, forcing both SomeShape and OtherShape into the Shape mold.
But what are my alternative choices? Some run-time type checking could be used, such as dynamic_cast<> or an enum, but I'm looking for something a little more elegant, if possible.
How would you do this in another language?
Analysis of your design issue
Your solution implements the abstract factory design pattern, with:
AFactory and BFactory are concrete factories of the abstract Factory.
ABody and AShape on one hand, and BBody and BShape on the other, are concrete products of the abstract products Body and Shape.
The Axxx classes form a family of related classes. So do the Bxxx classes.
The issue you worry about is that the method Body::setShape() takes an abstract Shape argument, whereas the concrete implementation in reality expects a concrete shape.
As you've rightly pointed out, the downcast to the concrete Shape suggests a potential design flaw. And it will not be possible to catch the errors at compile-time, because the whole pattern is designed to be dynamic and flexible at run time, and the virtual function can't be templatized.
Alternative 1: make your current design a little bit safer
Use dynamic_cast<> to check at runtime whether the downcast is valid (a sketch follows the consequences below). Consequences:
the ugly casting is very well isolated in a single function.
the runtime check is only done when necessary, i.e. the only time you set the shape.
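For example, ABody::setShape() from the question could be hardened like this; throwing std::invalid_argument is just one possible way to report the failure (it requires <stdexcept>):
#include <stdexcept>

class ABody : public Body
{
protected:
    SomeBody *someBody;
    AShape *shape;
public:
    ABody() { someBody = new SomeBody; }
    virtual void setShape(Shape *s)
    {
        AShape *as = dynamic_cast<AShape*>(s);  // null if s is not actually an AShape
        if (!as)
            throw std::invalid_argument("ABody::setShape: expected an AShape");
        shape = as;
        printf("Setting shape: %s\n", s->name());
        someBody->shape = shape->someShape;
    }
};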
Alternative 2: adopt a design with strong isolation
A better design would be to isolate the different products, so that one product class only uses the abstract interface of the other classes of the same family and ignores their concrete specifics (a sketch follows the consequences below).
Consequences:
very robust design enforcing superior separation of concerns
you could factorize the Shape* member at the level of the abstract class, and perhaps even de-virtualize setShape().
but this comes at the cost of rigidity: you couldn't make use of family-specific interfaces. This could be very limiting if, for example, the family represents a native UI, where products are highly interdependent and need to use the native API (that's the typical example in the Gang of Four book).
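For illustration, a minimal sketch of that direction, building on the question's Shape class and assuming bodies only ever need the abstract Shape interface:
class Body
{
protected:
    Shape *shape = nullptr;   // the abstract-only association lives in the base class
public:
    virtual ~Body() {}
    void setShape(Shape *s)   // no longer virtual: no downcast needed
    {
        shape = s;
        printf("Setting shape: %s\n", s->name());
    }
};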
Alternative 3: templatize dependent types
Opt for a template-based implementation of your abstract factory. The general idea is that you define the internal dependencies between products using templates.
So in your example Shape, AShape and BShape are unchanged, as they have no dependency on other products. But Body depends on a Shape, and you want ABody to depend on AShape, whereas BBody should depend on BShape.
The trick is then to use a template instead of an abstract class:
template<class Shape>
class Body
{
    Shape *shape;
public:
    void setShape(Shape *s) {
        shape = s;
        printf("Setting shape: %s\n", s->name());
    }
};
Then you would define ABody by deriving it from Body<AShape>:
class ABody : public Body<AShape>
{
protected:
    SomeBody *someBody;
public:
    ABody() { someBody = new SomeBody; }
};
This is all very nice, but how does this work with the abstract factory? Same principle: templatize instead of virtualize.
template <class Shape, class Body>
class Factory
{
public:
    Shape *makeShape()
    { return new Shape(); }
    Body *makeBody()
    { return new Body(); }
};
// and now the concrete factories
using BFactory = Factory<BShape, BBody>;
using AFactory = Factory<AShape, ABody>;
The consequence is that you have to know at compile time which concrete factory and concrete products you intend to use. This can be done using C++11 auto:
AFactory f1; // as before
auto *s1 = f1.makeShape(); // type is deduced from the concrete factory
auto *b1 = f1.makeBody();
b1->setShape(s1);
With this approach you will no longer be able to mix up products of different families. The following statement will cause an error:
b2->setShape(s1); // error: no way to convert an AShape* to a BShape*

Correct behavior using virtual methods

Suppose I have a pure virtual method in the base interface that returns to me a list of something:
class base
{
public:
    virtual std::list<something> get() = 0;
};
Suppose I have two classes that inherit the base class:
class A : public base
{
public:
    std::list<something> get();
};
class B : public base
{
public:
    std::list<something> get();
};
I want only the A class to be able to return a list<something>, but I also need to be able to get the list through a base pointer, for example:
base* base_ptr = new A();
base_ptr->get();
What do I have to do?
Do I have to return a pointer to this list? A reference?
Do I have to return a null pointer from the method of class B? Or should I throw an exception when I try to get the list using a B object? Or should I change the base class method get, making it not pure, and do this work in the base class?
Or do I have to do something else?
You have nothing else to do. The code you provide does exactly that.
When you get a pointer to the base class, since the method was declared in the base class, and is virtual, the actual implementation will be looked up in the class virtual function table and called appropriately.
So
base* base_ptr = new A();
base_ptr->get();
will call A::get(). You should not return null from the implementation (well, you can't, since null is not convertible to std::list<something> anyway). You have to provide an implementation in both A and B, since the base class method is declared pure virtual.
EDIT:
you cannot have only A return a std::list<something> and not B, since B also inherits from the base class, and the base class has a pure virtual method that must be overridden in the derived classes. Inheriting from a base class is an "is-a" relationship. The only other way around it I could see would be to inherit privately from the class, but that would prevent derived-to-base conversion.
If you really don't want B to have the get method, don't inherit from base.
Some alternatives are:
Throwing an exception in B::get():
You could throw an exception in B::get() but make sure you explain your rationale well as it is counter-intuitive. IMHO this is pretty bad design, and you risk confusing people using your base class. It is a leaky abstraction and is best avoided.
Separate interface:
You could break base into separate interface for that matter:
class IGetSomething
{
public:
    virtual ~IGetSomething() {}
    virtual std::list<something> Get() = 0;
};
class base
{
public:
    // ...
};
class A : public base, public IGetSomething
{
public:
    virtual std::list<something> Get()
    {
        // Implementation
        return std::list<something>();
    }
};
class B : public base
{
};
The multiple inheritance in that case is OK because IGetSomething is a pure interface (it does not have member variables or non-pure methods).
EDIT2:
Based on the comments, it seems you want a common interface between the two classes, yet also want to perform some operation that one implementation provides but the other doesn't. It is quite a convoluted scenario, but we can take inspiration from COM (don't shoot me yet):
class base
{
public:
    virtual ~base() {}
    // ... common interface
    // TODO: give me a better name
    virtual IGetSomething *GetSomething() = 0;
};
class A : public base
{
public:
    virtual IGetSomething *GetSomething()
    {
        return NULL;
    }
};
class B : public base, public IGetSomething
{
public:
    virtual IGetSomething *GetSomething()
    {
        // Derived-to-base conversion OK
        return this;
    }
};
Now what you can do is this:
base* base_ptr = new A();
IGetSomething *getSmthing = base_ptr->GetSomething();
if (getSmthing != NULL)
{
    std::list<something> listOfSmthing = getSmthing->Get();
}
It is convoluted, but there are several advantages of this method:
You return public interfaces, not concrete implementation classes.
You use inheritance for what it's designed for.
It is hard to use mistakenly: base does not provide std::list get() because it is not a common operation between the concrete implementations.
You are explicit about the semantics of GetSomething(): it allows you to return an interface that can be used to retrieve a list of something.
What about just returning an empty std::list ?
That would be possible but bad design, it's like having a vending machine that can give Coke and Pepsi, except it never serves Pepsi; it's misleading and best avoided.
What about just returning a boost::optional< std::list< something > > ? (as suggested by Andrew)
I think that's a better solution, better than returning an interface that sometimes could be NULL and sometimes not, because then you explicitly know that it's optional, and there would be no mistake about it.
The downside is that it puts boost inside your interface, which I prefer to avoid (it's up to me to use boost, but clients of the interface shouldn't have to be forced to use boost).
Return boost::optional in case you need the ability to not return a value (in class B):
class base
{
public:
    virtual boost::optional<std::list<something> > get() = 0;
};
What you are doing is wrong. If it is not common to both the derived classes, you should probably not have it in the base class.
That aside, there is no way to achieve what you want. You have to implement the method in B also - which is precisely the meaning of a pure virtual function. However, you can add a special fail case - such as returning an empty list, or a list with one element containing a predetermined invalid value.
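A minimal sketch of the empty-list fail case, reusing the question's base class and the placeholder type something:
class B : public base
{
public:
    std::list<something> get()
    {
        // B has nothing meaningful to return, so hand back an empty list.
        return std::list<something>();
    }
};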

c++ factory and casting issue

I have a project where I have a lot of related Info classes and I was considering putting up a hierarchy by having an AbstractInfo class and then a bunch of derived classes, overriding the implementations of AbstractInfo as necessary. However, it turns out that in C++ using the AbstractInfo class to then create one of the derived objects is not that simple. (see this question, comment on last answer)
I was going to create something like a factory class which creates an Info object and always returns an AbstractInfo object. I know from C# that you can do that with interfaces, but in C++ things seem a little different.
Downcasting becomes a complicated affair and seems prone to error.
Does anyone have a better suggestion for my problem?
You don't require downcasting. See this example:
#include <iostream>
#include <cstddef>

using std::cout;

class AbstractInfo
{
public:
    virtual ~AbstractInfo() {}
    virtual void f() = 0;
};
class ConcreteInfo1 : public AbstractInfo
{
public:
    void f()
    {
        cout << "Info1::f()\n";
    }
};
class ConcreteInfo2 : public AbstractInfo
{
public:
    void f()
    {
        cout << "Info2::f()\n";
    }
};
AbstractInfo* createInfo(int id)
{
    AbstractInfo* pInfo = NULL;
    switch (id)
    {
    case 1:
        pInfo = new ConcreteInfo1;
        break;
    case 2:
    default:
        pInfo = new ConcreteInfo2;
    }
    return pInfo;
}
int main()
{
    AbstractInfo* pInfo = createInfo(1);
    pInfo->f();
    return 0;
}
Don't downcast - use virtual methods. Just return the pointer to a base class from the factory and only work through that pointer.
class AbstractInfo
{
public:
    virtual ~AbstractInfo();
    virtual X f();
    ...
};
class Info_1 : public AbstractInfo
{
    ...
};
class Info_2 : public AbstractInfo
{
    ...
};
AbstractInfo* factory(inputs...)
{
    if (conditions where you would want an Info_1)
        return new Info_1(...);
    else if (conditions for an Info_2)
        return new Info_2(...);
    else
        moan_loudly();
}
If you don't want the factory method to become a single point of maintenance as downstream client code adds Info types, you can instead provide some mechanism for client code to register methods for creation of those derived objects. Check out the Gang of Four's Design Patterns book for creational patterns, or google them.
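A minimal, self-contained sketch of such a registration mechanism (the name InfoRegistry and the string keys are made up for this example, a minimal AbstractInfo is re-declared just to keep the snippet compilable, and std::make_unique assumes C++14):
#include <functional>
#include <map>
#include <memory>
#include <string>

class AbstractInfo { public: virtual ~AbstractInfo() {} };

// Hypothetical registry: client code registers a creator under a key,
// and later asks the registry to build an object by that key.
class InfoRegistry
{
    std::map<std::string, std::function<std::unique_ptr<AbstractInfo>()>> creators;
public:
    void register_creator(const std::string& key,
                          std::function<std::unique_ptr<AbstractInfo>()> creator)
    {
        creators[key] = std::move(creator);
    }
    std::unique_ptr<AbstractInfo> create(const std::string& key) const
    {
        auto it = creators.find(key);
        if (it == creators.end())
            return nullptr;   // unknown key: caller decides how to handle it
        return it->second();
    }
};

// Usage sketch: a client-defined Info type registers itself.
class ClientInfo : public AbstractInfo {};

void register_client_types(InfoRegistry& r)
{
    r.register_creator("client", [] { return std::make_unique<ClientInfo>(); });
}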
While generally you can't overload on return types in C++, there is an exception for covariant return types
Example taken from wikipedia:
// Classes used as return types:
class A {
};
class B : public A {
};
// Classes demonstrating method overriding:
class C {
public:
    virtual A* getFoo() {
        return new A();
    }
};
class D : public C {
public:
    virtual B* getFoo() {   // covariant return type: B* narrows A*
        return new B();
    }
};
Thus eliminating the need for casting.
C++ provides polymorphism just as C# does. The language has no special interface type, but you can emulate one by using a class that only has pure virtual methods. In C#, interface methods are always dispatched dynamically, whereas in C++ you have to request dynamic dispatch explicitly with the virtual keyword. Also, C# handles objects of class types through references, whereas in C++ you have to choose between values, pointers, or references. In your case, you most likely want your factory to return a pointer to the interface, or even better a smart pointer, so you don't have to worry about memory management.
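A minimal sketch of a factory returning a smart pointer (std::unique_ptr here, assuming C++11 or later; the Info types are placeholders for this example):
#include <memory>

class AbstractInfo
{
public:
    virtual ~AbstractInfo() {}
    virtual void f() = 0;
};

class ConcreteInfo1 : public AbstractInfo
{
public:
    void f() {}
};

// Ownership is handed to the caller; the object is deleted automatically
// when the unique_ptr goes out of scope.
std::unique_ptr<AbstractInfo> createInfo()
{
    return std::unique_ptr<AbstractInfo>(new ConcreteInfo1());
}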
To elaborate / pontificate a little, the "good" time to use an abstract interface (eg: base class with virtual functions) is when substantially all the functionality which will be used on the objects can be contained in virtual functions. If that's the case, you can do what you're proposing easily, and just call the virtual functions on the base class pointer, which will automatically call the most-derived version provided.
If you find yourself needing to downcast often to get at child-class specific functions/data, this approach is probably not optimal for your situation. In that case you may find yourself writing some of the functionality outside the classes, providing multiple implementations for each type, and using some sort of RTTI to help downcast as necessary. This is more messy, but tends to be more common outside of the "academic" or well-isolated usages.
Looks like you've got a lot of good info/advice here in the other answers, though.

Polymorphism and checking if an object has a certain member method

I'm developing a GUI library with a friend and we faced the problem of how to determine whether a certain element should be clickable or not (or movable, etc.).
We decided to just check whether a function exists for a specific object; all GUI elements are stored in a vector of pointers to the base class.
So for example if I have
class Base {};
class Derived : public Base
{
public:
    void example() {}
};
vector<Base*> objects;
How would I check whether an element of objects has a function named example?
If this isn't possible, then what would be a different way to implement optional behaviour like clicking?
You could just have a virtual IsClickable() method in your base class:
class Widget {
public:
    virtual bool IsClickable(void) { return false; }
};
class ClickableWidget : public Widget
{
public:
    virtual bool IsClickable(void) { return true; }
};
class SometimesClickableWidget : public Widget
{
public:
    virtual bool IsClickable(void);
    // More complex logic punted to .cc file.
};
vector<Widget*> objects;
This way, objects default to not being clickable. A clickable object either overrides IsClickable() or subclasses ClickableWidget instead of Widget. No fancy metaprogramming needed.
EDIT: To determine if something is clickable:
if (object->IsClickable()) {
    // Hey, it's clickable!
}
The best way to do this is to use mixin multiple inheritance, a.k.a. interfaces.
class HasExample // note no superclass here!
{
public:
    virtual void example() = 0;
};
class Derived : public Base, public HasExample
{
public:
    void example()
    {
        printf("example!\n");
    }
};
// NB: for the dynamic_cast below to compile, Base must be polymorphic,
// e.g. give it a virtual destructor.
vector<Base*> objects;
objects.push_back(new Derived());
Base* p = objects[0];
HasExample* he = dynamic_cast<HasExample*>(p);
if (he)
    he->example();
dynamic_cast<>() does a test at runtime whether a given object implements HasExample, and returns either a HasExample* or NULL. However, if you find yourself using HasExample* it's usually a sign you need to rethink your design.
Beware! When using multiple inheritance like this, then (HasExample*)ptr != ptr. Casting a pointer to one of its parents might cause the value of the pointer to change. This is perfectly normal, and inside the method this will be what you expect, but it can cause problems if you're not aware of it.
Edit: Added example of dynamic_cast<>(), because the syntax is weird.
If you're willing to use RTTI...
Instead of checking class names, you should create Clickable, Movable, etc. classes. Then you can use a dynamic_cast to see if the various elements implement the interface that you are interested in.
IBM has a brief example program illustrating dynamic_cast here.
I would create an interface, make the method(s) part of the interface, and then implement that Interface on any class that should have the functionality.
That would make the most sense when trying to determine if an Object implements some set of functionality (rather than checking for the method name):
class IMoveable
{
public:
    virtual ~IMoveable() {}
    virtual void Move() = 0;
};
class Base {};
class Derived : public Base, public IMoveable
{
public:
    virtual void Move()
    {
        // Implementation
    }
};
Now you're no longer checking for method names, but casting to the IMoveable type and calling Move().
I'm not sure it is easy or good to do this by reflection. I think a better way would be to have an interface (something like GUIElement) that has an isClickable function. Make your elements implement the interface, and then the ones that are clickable will return true in their implementation of the function. All others will of course return false. When you want to know if something's clickable, just call its isClickable function. This way you can at runtime change elements from being clickable to non-clickable - if that makes sense in your context.
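A minimal sketch of that idea (the names GUIElement and Button are made up for this example); the clickable flag can be flipped at runtime:
class GUIElement
{
public:
    virtual ~GUIElement() {}
    virtual bool isClickable() const = 0;
};

class Button : public GUIElement
{
    bool clickable = true;   // can be toggled at runtime
public:
    bool isClickable() const { return clickable; }
    void setClickable(bool value) { clickable = value; }
};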