I have a project with a lot of related Info classes, and I was considering setting up a hierarchy with an AbstractInfo class and a bunch of derived classes, overriding the implementations of AbstractInfo as necessary. However, it turns out that in C++, using the AbstractInfo class to then create one of the derived objects is not that simple (see this question, comment on the last answer).
I was going to create a factory class which creates an Info object and always returns an AbstractInfo object. I know from C# that you can do that with interfaces, but in C++ things seem to be a little different.
Downcasting becomes a complicated affair and seems prone to error.
Does anyone have a better suggestion for my problem?
You don't require downcasting. See this example:
#include <iostream>
using namespace std;

class AbstractInfo
{
public:
    virtual ~AbstractInfo() {}
    virtual void f() = 0;
};

class ConcreteInfo1 : public AbstractInfo
{
public:
    void f()
    {
        cout << "Info1::f()\n";
    }
};

class ConcreteInfo2 : public AbstractInfo
{
public:
    void f()
    {
        cout << "Info2::f()\n";
    }
};

// The factory decides which concrete type to build and hands it back
// through the base class pointer.
AbstractInfo* createInfo(int id)
{
    AbstractInfo* pInfo = NULL;
    switch (id)
    {
    case 1:
        pInfo = new ConcreteInfo1;
        break;
    case 2:
    default:
        pInfo = new ConcreteInfo2;
    }
    return pInfo;
}

int main()
{
    AbstractInfo* pInfo = createInfo(1);
    pInfo->f();   // virtual dispatch: prints Info1::f()
    delete pInfo; // safe, because the base destructor is virtual
    return 0;
}
Don't downcast - use virtual methods. Just return the pointer to a base class from the factory and only work through that pointer.
class AbstractInfo
{
public:
    virtual ~AbstractInfo();
    virtual X f();
    ...
};

class Info_1 : public AbstractInfo
{
    ...
};

class Info_2 : public AbstractInfo
{
    ...
};

AbstractInfo* factory(inputs...)
{
    if (conditions where you would want an Info_1)
        return new Info_1(...);
    else if (conditions for an Info_2)
        return new Info_2(...);
    else
        moan_loudly();
}
If you don't want the factory method to become a single point of maintenance as downstream client code adds Info types, you can instead provide some mechanism for client code to register methods for creation of those derived objects. Check out the Gang of Four's Design Patterns book for creational patterns, or google them.
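A minimal sketch of such a registration mechanism (assuming C++14; InfoFactory and the string keys are hypothetical names, reusing the AbstractInfo and Info_1 from the example above):

#include <functional>
#include <map>
#include <memory>
#include <string>

// Hypothetical registry: downstream code registers a creator per Info type,
// so the factory itself never needs editing when new types are added.
class InfoFactory {
public:
    using Creator = std::function<std::unique_ptr<AbstractInfo>()>;

    static InfoFactory& instance() {
        static InfoFactory f;
        return f;
    }

    void registerCreator(const std::string& key, Creator c) {
        creators[key] = std::move(c);
    }

    std::unique_ptr<AbstractInfo> create(const std::string& key) const {
        auto it = creators.find(key);
        return it != creators.end() ? it->second() : nullptr;
    }

private:
    std::map<std::string, Creator> creators;
};

// Client code, e.g. in Info_1's own translation unit:
//   InfoFactory::instance().registerCreator("info_1",
//       [] { return std::make_unique<Info_1>(); });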
While you generally can't overload on return type alone in C++, there is an exception when overriding: covariant return types let an override return a more derived type.
Example taken from wikipedia:
// Classes used as return types:
class A {
};

class B : public A {
};

// Classes demonstrating method overriding:
class C {
public:
    virtual A* getFoo() {
        return new A();
    }
};

class D : public C {
public:
    B* getFoo() {   // overrides C::getFoo() with a covariant return type
        return new B();
    }
};
Thus eliminating the need for casting.
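A minimal usage sketch with the classes above:

int main() {
    D d;
    B* b = d.getFoo();   // through D, the covariant B* comes back directly
    C& c = d;
    A* a = c.getFoo();   // through the base, D's override still runs
    delete b;
    delete a;
    return 0;
}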
C++ provides polymorphism just as C# does. The language has no special interface type, but you can emulate one by using a class that has only pure virtual methods. Note that in both C# and C++ a method is only bound at runtime if you declare it virtual explicitly. Also, C# handles objects of reference types through references, whereas in C++ you have to choose between values, pointers, and references. In your case, you most likely want your factory to return a pointer to the interface, or better yet a smart pointer, so you don't have to worry about memory management.
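For instance, a sketch of the first answer's createInfo reworked to return a smart pointer (assuming C++14 for std::make_unique):

#include <memory>

std::unique_ptr<AbstractInfo> createInfo(int id) {
    if (id == 1)
        return std::make_unique<ConcreteInfo1>();
    return std::make_unique<ConcreteInfo2>();
}

int main() {
    auto pInfo = createInfo(1);
    pInfo->f();   // virtual dispatch as before
    return 0;
}   // no delete needed: unique_ptr frees the object here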
To elaborate / pontificate a little, the "good" time to use an abstract interface (e.g. a base class with virtual functions) is when substantially all the functionality which will be used on the objects can be contained in virtual functions. If that's the case, you can do what you're proposing easily: just call the virtual functions through the base class pointer, and the most-derived override will be invoked automatically.
If you find yourself needing to downcast often to get at child-class-specific functions or data, this approach is probably not optimal for your situation. In that case you may find yourself writing some of the functionality outside the classes, providing multiple implementations for each type, and using some sort of RTTI to help downcast as necessary. This is messier, but tends to be more common outside of "academic" or well-isolated usages.
Looks like you've got a lot of good info/advice here in the other answers, though.
Related
In the code below I have an abstract class TestAlgModule which I will be exposing to library users, and there are several functionalities they can use, such as VOLUME, MIXER and so on. However, suppose users need a new function which is added only in mixerManager; then I need to add it to the TestAlgModule abstract class, and suddenly all the derived classes need to implement it without any benefit.
How do I avoid this?
#include <iostream>
using namespace std;

enum {VOLUME, MIXER, UNKNOWN};

class TestAlgModule {
public:
    virtual void open(int type) = 0;
    virtual void close(int type) = 0;
};

class volumeManager : public TestAlgModule
{
public:
    void open(int type) {}
    void close(int type) {}
};

class mixerManager : public TestAlgModule
{
public:
    void open(int type) {}
    void close(int type) {}
    void differentFunction() {}
};

/* users call this to get an algModule and then call functions to get the job done */
TestAlgModule *getTestAlgModule(int type) {
    switch (type) {
    case VOLUME:
        return new volumeManager();
    case MIXER:
        return new mixerManager();
    default:
        break;
    }
    return nullptr;
}

int main() {
    TestAlgModule *test = getTestAlgModule(MIXER);
    test->open(MIXER); // open() takes the type argument declared above
    //test->differentFunction(); this can't be called as it is not part of the abstract class, and users are exposed only to the abstract class
    return 0;
}
If something is not clear, please let me know and I will do my best to answer. I am looking for a better way to do this, i.e. a change in volumeManager should be independent of mixerManager.
If you want to use an abstract factory, as you did in the code above, then you need to return a pointer to the base class. That is correct. And then you need to invoke all functions through the base pointer.
By the way, please do not use owning raw pointers. Please use std::unique_ptr instead.
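For instance, the factory above could be reworked like this (a sketch, assuming C++14 for std::make_unique):

#include <memory>

std::unique_ptr<TestAlgModule> getTestAlgModule(int type) {
    switch (type) {
    case VOLUME:
        return std::make_unique<volumeManager>();
    case MIXER:
        return std::make_unique<mixerManager>();
    default:
        return nullptr;   // caller must check for an unknown type
    }
}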
There are 2 possible solutions.
Add the interface function as a non-pure but still virtual function to your base class, with a default behaviour:
virtual void differentFunction() {}
Because of the other pure functions, the base class is still abstract. This may lead to a fat interface. But in many cases it is an acceptable solution.
The second possibility is to downcast the base class pointer to the derived pointer you need, using dynamic_cast and checking the result of the cast:
if (mixerManager* mm = dynamic_cast<mixerManager*>(test)) {
    mm->differentFunction();
}
All this depends of course on the overall design and what you want to achieve. But the above 2 are the standard patterns.
There are also other design patterns that may fit your needs, like Builder or Prototype. Please check them out.
When trying to access derived class behaviour, the most common approach I read about is using dynamic_casts, i.e. dynamic_cast<DerivedA*>(BasePtr)->DerivedAOnlyMethod(). This isn't really pretty, but everybody understands what's going on.
Now I'm working on code where this conversion is handled by virtual functions exported to the base class, one for each derived class, i.e.:
class Base
{
public:
    virtual DerivedA* AsDerivedA() { throw Exception("Not an A"); }
    virtual DerivedB* AsDerivedB() { throw Exception("Not a B"); }
    // etc.
};

class DerivedA : public Base
{
public:
    DerivedA* AsDerivedA() { return this; }
};
// etc.
Use is then BasePtr->AsDerivedA()->DerivedAOnlyMethod(). IMHO, this clutters up the base class and exposes knowledge about the derived classes that it shouldn't need.
I'm too inexperienced to say with certainty which is better, so I'm looking for arguments for and against either construct. Which is more idiomatic? How do they compare regarding performance and safety?
Well, putting the AsDerived#-methods into the base-class certainly leads to potentially faster casting.
If you cap the inheritance-hierarchy using final that advantage might be reduced or removed though.
Also, you are right about it being uncommon because it introduces clutter, and it introduces knowledge of all relevant derived classes into the base-class.
In summary, it might sometimes be useful in a bottleneck, but you will pay for that abomination.
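To illustrate the point about final, a minimal sketch (hypothetical names): once DerivedA is final, a dynamic_cast to it only has to test for one exact type, which compilers can reduce to a cheap vtable-pointer comparison instead of a class-hierarchy walk.

class Base {
public:
    virtual ~Base() = default;
};

class DerivedA final : public Base {   // no further derivation possible
public:
    void derivedAOnlyMethod() {}
};

void use(Base* p) {
    // Since DerivedA is final, this cast can be optimized to an
    // exact-type check.
    if (DerivedA* a = dynamic_cast<DerivedA*>(p))
        a->derivedAOnlyMethod();
}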
Without seeing more code it is difficult to offer too much advice. However, needing to know the type of the object you're calling into argues more for a variant than a polymorphic type.
Polymorphism is about information hiding. The caller should not need to know what type it is holding.
Something like this, perhaps?
#include <iostream>
#include <memory>
#include <stdexcept>

struct base
{
    virtual bool can_do_x() const { return false; }
    virtual void do_x() { throw std::runtime_error("can't"); }
    virtual ~base() = default;
};

struct derived_a : base
{
    bool can_do_x() const override { return true; }
    void do_x() override { std::cout << "no problem!"; }
};

int main()
{
    std::unique_ptr<base> p = std::make_unique<derived_a>();
    if (p->can_do_x()) {
        p->do_x();
    }
}
Now we're talking to the object in terms of capabilities, not types.
Your intuition is right: the AsDerivedX methods are clutter. Checking at runtime whether these virtual functions were overridden costs about as much as a type check anyway. So, in my opinion, the C++ way of doing this is:
void doSomething(Base *unsureWhichAorB) {
    DerivedA *dA = dynamic_cast<DerivedA*>(unsureWhichAorB);
    if (dA) // if the dynamic cast failed, dA would be 0
        dA->DerivedAOnlyMethod();
}
Note that the check that dA is non-zero is critical here.
You are totally correct that such a solution not only clutters the base class but also puts unnecessary dependencies on it. In a clean design, the base class does not need to, and actually should not, know anything about its derived classes. Anything else becomes a maintenance nightmare pretty soon.
However, I'd like to point out that I am on the "try to avoid dynamic_cast" team. Meaning that I often see a dynamic_cast that could have been avoided with a proper design. So the question to ask in the first place would be: why do I need to know the derived type? Usually there is either a way to solve the problem by using polymorphism correctly, or "losing" the type information was already the wrong path.
Prefer to use polymorphism instead of dynamic_cast:
class Base
{
public:
    virtual void doSomething() = 0;
};

class DerivedA : public Base
{
public:
    void doSomething() override { /* do something the DerivedA-way */ }
};

class DerivedB : public Base
{
public:
    void doSomething() override { /* do something the DerivedB-way */ }
};
// etc.
I am a relatively new C++ programmer.
In writing some code I've created something similar in concept to the code below. When a friend pointed out that this is in fact a factory pattern, I read about the pattern and saw that it is indeed similar.
In all of the examples I've found, the factory pattern is always implemented using a separate class such as class BaseFactory {...}; and not, as I've implemented it, using a static create() member function.
My questions are:
(1) Is this in fact a factory pattern?
(2) The code seems to work. Is there something incorrect in the way I've implemented it?
(3) If my implementation is correct, what are the pros/cons of implementing the static create() function as opposed to the separate BaseFactory class.
Thanks!
class Base {
public:
    ...
    virtual ~Base() {}
    static Base* create(bool type);
};

class Derived0 : public Base {
    ...
};

class Derived1 : public Base {
    ...
};

Base* Base::create(bool type) {
    if (type == 0) {
        return new Derived0();
    }
    else {
        return new Derived1();
    }
}

void foo(bool type) {
    Base* pBase = Base::create(type);
    pBase->doSomething();
}
This is not a typical way to implement the factory pattern, the main reason being that the factory class isn't typically a base of the classes it creates. A common guideline for when to use inheritance is "Make sure public inheritance models "is-a"". In your case this means that objects of type Derived0 or Derived1 should also be of type Base, and the derived classes should represent a more specialised concept than the Base.
However, the factory pattern pretty much always involves inheritance, as the factory will return a pointer to a base type (yours does this too). This means the client code doesn't need to know what type of object the factory created, only that it matches the base class's interface.
With regard to having a static create functions, it depends on the situation. One advantage, as your example shows, is that you won't need to create an instance of the factory in order to use it.
Your factory is ok, except for the fact that you merged the factory and the interface, breaking the single responsibility principle (SRP).
Instead of making create a static method in the base class, put it in a separate (factory) class, as sketched below.
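For example, a sketch of the same create logic moved into a separate factory class (assuming Base declares doSomething(), as your foo() implies):

class BaseFactory {
public:
    static Base* create(bool type) {
        if (!type)
            return new Derived0();
        return new Derived1();
    }
};

void foo(bool type) {
    Base* pBase = BaseFactory::create(type);
    pBase->doSomething();
    delete pBase;   // Base's virtual destructor makes this safe
}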
Suppose I have a pure virtual method in the base interface that returns to me a list of something:
class base
{
public:
    virtual std::list<something> get() = 0;
};
Suppose I have two classes that inherit the base class:
class A : public base
{
public:
    std::list<something> get();
};

class B : public base
{
public:
    std::list<something> get();
};
I want only the A class to return a list<something>, but I also need the possibility of getting the list through a base pointer, like for example:
base* base_ptr = new A();
base_ptr->get();
What do I have to do?
Do I have to return a pointer to this list? A reference?
Do I have to return a null pointer from the method of class B? Or do I have to throw an exception when I try to get the list using a B object? Or do I have to change the base class method get, making it not pure, and do this work in the base class?
Do I have to do something else?
You have nothing else to do. The code you provide does exactly that.
When you get a pointer to the base class, since the method was declared in the base class, and is virtual, the actual implementation will be looked up in the class virtual function table and called appropriately.
So
base* base_ptr = new A();
base_ptr->get();
will call A::get(). You should not return null from the implementation (well, you can't, since null is not convertible to std::list<something> anyway). You have to provide an implementation in both A and B, since the base class method is declared pure virtual.
EDIT:
you cannot have only A return a std::list<something> and not B, since B also inherits from the base class, and the base class has a pure virtual method that must be overridden in derived classes. Inheriting from a base class is an "is-a" relationship. The only other way around it that I could see would be to inherit privately from the class, but that would prevent derived-to-base conversion.
If you really don't want B to have the get method, don't inherit from base.
Some alternatives are:
Throwing an exception in B::get():
You could throw an exception in B::get() but make sure you explain your rationale well as it is counter-intuitive. IMHO this is pretty bad design, and you risk confusing people using your base class. It is a leaky abstraction and is best avoided.
Separate interface:
You could break base into separate interface for that matter:
class IGetSomething
{
public:
    virtual ~IGetSomething() {}
    virtual std::list<something> Get() = 0;
};

class base
{
public:
    // ...
};

class A : public base, public IGetSomething
{
public:
    virtual std::list<something> Get()
    {
        // Implementation
        return std::list<something>();
    }
};

class B : public base
{
};
The multiple inheritance in that case is OK because IGetSomething is a pure interface (it does not have member variables or non-pure methods).
EDIT2:
Based on the comments, it seems you want a common interface between the two classes, yet also to perform some operation that one implementation provides and the other doesn't. It is quite a convoluted scenario, but we can take inspiration from COM (don't shoot me yet):
class base
{
public:
    virtual ~base() {}
    // ... common interface
    // TODO: give me a better name
    virtual IGetSomething *GetSomething() = 0;
};

class A : public base
{
public:
    virtual IGetSomething *GetSomething()
    {
        return NULL;
    }
};

class B : public base, public IGetSomething
{
public:
    virtual IGetSomething *GetSomething()
    {
        // Derived-to-base conversion OK
        return this;
    }

    virtual std::list<something> Get()
    {
        // B's actual list-producing implementation
        return std::list<something>();
    }
};
Now what you can do is this:
base* base_ptr = new A();
IGetSomething *getSmthing = base_ptr->GetSomething();
if (getSmthing != NULL)
{
    std::list<something> listOfSmthing = getSmthing->Get();
}
It is convoluted, but there are several advantages of this method:
You return public interfaces, not concrete implementation classes.
You use inheritance for what it's designed for.
It is hard to use mistakenly: base does not provide std::list get() because it is not a common operation between the concrete implementations.
You are explicit about the semantics of GetSomething(): it allows you to return an interface that can be use to retrieve a list of something.
What about just returning an empty std::list ?
That would be possible, but it's bad design; it's like having a vending machine that can give Coke and Pepsi, except it never serves Pepsi. It's misleading and best avoided.
What about just returning a boost::optional< std::list< something > > ? (as suggested by Andrew)
I think that's a better solution, better than returning an interface that sometimes could be NULL and sometimes not, because then you explicitly know that it's optional, and there would be no mistake about it.
The downside is that it puts boost inside your interface, which I prefer to avoid (it's up to me to use boost, but clients of the interface shouldn't have to be forced to use boost).
Return boost::optional in case you need the ability to not return a value (in the B class):
class base
{
public:
    virtual boost::optional<std::list<something> > get() = 0;
};
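The derived classes could then look like this (a sketch; something is whatever element type you use):

class A : public base
{
public:
    boost::optional<std::list<something> > get()
    {
        return std::list<something>();   // A has a real list to hand back
    }
};

class B : public base
{
public:
    boost::optional<std::list<something> > get()
    {
        return boost::none;   // B explicitly returns "no value"
    }
};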
What you are doing is wrong. If it is not common to both the derived classes, you should probably not have it in the base class.
That aside, there is no way to achieve what you want. You have to implement the method in B also - which is precisely the meaning of a pure virtual function. However, you can add a special fail case - such as returning an empty list, or a list with one element containing a predetermined invalid value.
Consider two classes
class A {
public:
    A() {
    }
    ~A() {
    }
};

class AImpl : public A {
public:
    AImpl() {
        a = new AInternal();
    }
    AImpl(AInternal *a) {
        this->a = a;
    }
    ~AImpl() {
        if (a) {
            delete a;
            a = nullptr;
        }
    }
private:
    AInternal *a;
};
I am trying to hide AInternal's implementation and expose only A's interface. Two things I see here:
class A is totally empty.
Hiding is achieved basically through inheritance. I have to actually use downcasting and upcasting from A to AImpl and vice versa.
Is this a good design? Being very inexperienced in designing, I cannot see the pitfalls of it and why it is bad.
You're overcomplicating things by using 3 classes. I think what you're looking for is the pimpl idiom.
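A minimal pimpl sketch for comparison (names hypothetical): the public class A holds a pointer to a private Impl and forwards calls to it; Impl is defined only in the .cpp file, so its layout can change without recompiling clients.

// A.h
#include <memory>

class A {
public:
    A();
    ~A();                      // defined in A.cpp, where Impl is complete
    void doSomething();
private:
    struct Impl;               // forward declaration only
    std::unique_ptr<Impl> impl;
};

// A.cpp
struct A::Impl {
    void doSomething() { /* real work, hidden from clients */ }
};

A::A() : impl(new Impl) {}
A::~A() = default;
void A::doSomething() { impl->doSomething(); }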
I am trying to hide the AInternal's implementation and expose only A's interface.
I think you are trying to do something like factory.
Here is an example:
#include <iostream>
using namespace std;

class IA {
public:
    IA() {}
    virtual ~IA() {}
    virtual void dosth() = 0;
};

class Factory {
private:
    class A : public IA {
    public:
        A() {}
        virtual ~A() {}
        void dosth() { cout << "Hello World"; }
    };
public:
    Factory() {}
    virtual ~Factory() {}
    IA* newA() { return new A; }
};
And the usage of Factory class:
Factory f;
IA* a = f.newA();
a->dosth();
return 0;
IMO AInternal makes no sense. Whatever you do there should be done in AImpl. Otherwise, it's OK to do that in C++.
The code is rather obtuse, so I would be concerned with maintaining it six months down the road.
If you're going to do it this way, then the destructor ~A needs to be virtual.
You seem to be combining two common design features:
1) AInternal is a "pimpl". It provides for better encapsulation, for example if you need to add a new field to AInternal, then the size of AImpl doesn't change. That's fine.
2) A is a base class used to indicate an interface. Since you talk about upcasting and downcasting, I assume you want dynamic polymorphism, meaning that you'll have functions which pass around pointers or references to A, and at runtime the referents will actually be of type AImpl. That's also fine, except that A's destructor should either be virtual and public, or non-virtual and protected.
I see no other design problems with this code. Of course you'll need to actually define the interface A, by adding some pure virtual member functions to it that you implemented in AImpl. Assuming you plan to do that, there's nothing wrong with using an empty base class for the purpose which in Java is served by interfaces (if you know Java). Generally you'd have some kind of factory which creates AImpl objects, and returns them by pointer or reference to A (hence, upcasts them). If the client code is going to create AImpl objects directly then that might also be fine, and in fact you might not need dynamic polymorphism at all. You could instead get into templates.
What I don't see is why you would ever have to downcast (that is, cast an A* to AImpl*). That's usually bad news. So there may be some problems in your design which can only be revealed by showing us more of the definitions of the classes, and the client code which actually uses A and AImpl.