Consider two classes:
class A {
public:
    A() {
    }
    ~A() {
    }
};
class AImpl : public A {
public:
    AImpl() {
        a = new AInternal();
    }
    AImpl(AInternal *a) {
        this->a = a;
    }
    ~AImpl() {
        if (a) {
            delete a;
            a = NULL;
        }
    }
private:
    AInternal *a;
};
I am trying to hide AInternal's implementation and expose only A's interface. Two things I see here:
1) Class A is totally empty.
2) Hiding is achieved basically through inheritance; I have to downcast and upcast between A and AImpl.
Is this a good design? Being very inexperienced in design, I cannot see its pitfalls or why it would be bad.
You're overcomplicating things by using 3 classes. I think what you're looking for is the pimpl idiom.
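For reference, a minimal pimpl sketch might look like this (the names are illustrative, not taken from your code):
// a.h - the public header; the implementation type is only forward-declared
class A {
public:
    A();
    ~A();
    void doSomething();
private:
    class AImpl;  // defined in a.cpp, invisible to clients
    AImpl* pimpl;
};

// a.cpp - the hidden implementation
class A::AImpl {
public:
    void doSomething() { /* real work here */ }
};

A::A() : pimpl(new AImpl) {}
A::~A() { delete pimpl; }
void A::doSomething() { pimpl->doSomething(); }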
I am trying to hide the AInternal's implementation and expose only A's interface.
I think you are trying to do something like a factory.
Here is an example:
#include <iostream>
using namespace std;

class IA {
public:
    IA() {}
    virtual ~IA() {}
    virtual void dosth() = 0;
};

class Factory {
private:
    class A : public IA {
    public:
        A() {}
        virtual ~A() {}
        void dosth() { cout << "Hello World"; }
    };
public:
    Factory() {}
    virtual ~Factory() {}
    IA* newA() { return new A; }
};
And the usage of the Factory class:
int main() {
    Factory f;
    IA* a = f.newA();
    a->dosth();
    delete a;
    return 0;
}
IMO AInternal makes no sense. Whatever you do there should be done in AImpl. Otherwise, it's OK to do that in C++.
The code is rather obtuse, so I would be concerned with maintaining it six months down the road.
If you're going to do it this way, then the destructor ~A needs to be virtual.
You seem to be combining two common design features:
1) AInternal is a "pimpl". It provides for better encapsulation, for example if you need to add a new field to AInternal, then the size of AImpl doesn't change. That's fine.
2) A is a base class used to indicate an interface. Since you talk about upcasting and downcasting, I assume you want dynamic polymorphism, meaning that you'll have functions which pass around pointers or references to A, and at runtime the referands will actually be of type AImpl. That's also fine, except that A's destructor should either be virtual and public, or non-virtual and protected (a sketch of both options follows this answer).
I see no other design problems with this code. Of course you'll need to actually define the interface A, by adding some pure virtual member functions to it that you implement in AImpl. Assuming you plan to do that, there's nothing wrong with using an empty base class for the purpose which in Java is served by interfaces (if you know Java). Generally you'd have some kind of factory which creates AImpl objects, and returns them by pointer or reference to A (hence, upcasts them). If the client code is going to create AImpl objects directly then that might also be fine, and in fact you might not need dynamic polymorphism at all. You could instead get into templates.
What I don't see is why you would ever have to downcast (that is, cast an A* to AImpl*). That's usually bad news. So there may be some problems in your design which can only be revealed by showing us more of the definitions of the classes, and the client code which actually uses A and AImpl.
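To illustrate the destructor rule from point 2), here is a minimal sketch of the two acceptable options (the interface functions are elided):
// Option 1: deletion through an A* is allowed
class A {
public:
    virtual ~A() {}
    // pure virtual interface functions...
};

// Option 2: deletion through the base pointer is forbidden at compile time
class A2 {
protected:
    ~A2() {} // non-virtual, but clients can't delete through an A2*
public:
    // pure virtual interface functions...
};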
Related
This is very likely a silly question, but I can't seem to figure out if this is at all possible and if it should actually be done.
Say some code relies a lot on a certain virtual base class acting as an interface, with several subclasses deriving from it and implementing the virtual methods of the base class. Is it possible to use functionality not existing in the base class (interface) without completely disregarding the SOLID principles, when the rest of the program must not see anything but the base class "interface"? Like in the example below, is it possible to use bar() from the B class inside a function that only knows of A? And should I just add bar() to the "interface" instead?
// Base class - "interface"
class A
{
public:
    virtual int foo();
};

// Derived class - implementing the "interface" + more
class B : public A
{
public:
    int foo();
    int bar();
};

int main()
{
    B b;
    function(b); // some magic function that would utilize the bar() method
    return 0;
}
The short answer is yes, use the dynamic_cast operator (but see below). The magic function would look something like this:
void function(A& a)
{
    B* b = dynamic_cast<B*>(&a);
    if (b)
    {
        // Object is a B...
        b->bar();
    }
    else
    {
        // fallback logic using only methods on A
    }
}
But be aware that many programmers consider this a code smell. In particular, if there's no way to implement the "fallback" branch, then it suggests the function should really accept a B and something in the design may be amiss. (It's hard to say when talking in such generalities, however.) Also be aware that dynamic_cast can be expensive, particularly with complex class hierarchies.
If at all reasonable, it's preferable to move the B-specific logic into the B class somehow. You might also consider making the bar method a member of the A class (A would provide some sensible default implementation). Another approach might be to create a new interface to hold the bar method and have your function accept an object of that type. (The B class would implement both A and the new interface.)
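A sketch of that last approach (HasBar is a made-up name for the new interface):
class A {
public:
    virtual ~A() {}
    virtual int foo() = 0;
};

// hypothetical second interface holding just bar()
class HasBar {
public:
    virtual ~HasBar() {}
    virtual int bar() = 0;
};

class B : public A, public HasBar {
public:
    int foo() { return 0; }
    int bar() { return 42; }
};

// functions that need bar() accept the narrow interface instead of A
void function(HasBar& h) { h.bar(); }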
If you have a feature-rich class, possibly one you do not own/control, it is often the case that you want to add some functionality, so deriving makes sense.
Occasionally you want to subtract as well, that is, to disallow some part of the base interface. The common idiom I have seen is to derive, make some member functions private, and then not implement them. As follows:
class Base
{
public:
    virtual void foo() {}
    void goo() { this->foo(); }
};

class Derived : public Base
{
private:
    void foo();
};
someplace else:
Base* b = new Derived;
and yet another place:
b->foo(); // Any way to prevent this at compile time?
b->goo(); // or this?
It seems that if the compiler doesn't know that the object is a Derived, the best you can do is not implement the function and have it fail at runtime.
The issue arises when you have a library that you can't change, that takes a pointer to base, and you can implement some of the methods, but not all. So part of the library is useful, but you run the risk of core dumping if you don't know at compile time which functions will call what.
To make it more difficult, others may inherit from your class and want to use the library, and they may add some of the functions you didn't.
Is there another way? in C++11? in C++14?
Let's analyze this, focusing on two major points:
class Base
{
public:
    virtual void foo() {} // This 1)
    // ...

class Derived : public Base // and this 2)
In 1) you tell the world that every object of Base offers the method foo() publicly. This implies that when I have a Base* b I can call b->foo() - and b->goo().
In 2) you tell the world that your class Derived publicly behaves like a Base. Thus the following is possible:
void call(Base *b) { b->foo(); }

int main() {
    Derived *b = new Derived();
    call(b);
    delete b;
}
Hopefully you see that there is no way call(Base*) can know whether b is a Derived, and thus it can't possibly decide at compile time whether calling foo would be illegal.
There are two ways to handle this:
You could change the visibility of foo(). This is probably not what you want, because other classes can derive from Base and someone will want to call foo after all. Keep in mind that virtual methods can be private, so you should probably declare Base as:
class Base
{
    virtual void foo() {}
public:
    void goo() { this->foo(); }
};
You can change Derived so that it inherits either protected or private from Base. This implies that nobody (or, with protected inheritance, only further-derived classes) can "see" that Derived is a Base, and a call to foo()/goo() through the base is not allowed:
class Derived : private Base
{
private:
    void foo() override;
    // Friends of this class can see the Base aspect
    // .... OR
    // public:     // this way
    // void foo(); // would allow access to foo()
};
// Derived d; d.goo() // <-- illegal: the Base part is inaccessible
// d.foo()            // <-- illegal: foo() is private in Derived
You should generally go with the latter because it doesn't involve changing the interface of the Base class - the "real" utility.
TL;DR: Deriving a class is a contract to provide at least that interface. Subtraction is not possible.
This seems to be what you want to do:
struct Library {
    int balance();
    virtual int giveth(); // overrideable
    int taketh();         // part of the library
};

/* compiled into the library's object code: */
int Library::balance() { return giveth() - taketh(); }

/* Back in header files */
// PSEUDO CODE
struct IHaveABadFeelingAboutThis : public Library {
    int giveth() override; // my implementation of this
    int taketh() = delete; // NO TAKE!
};
The intent is that you couldn't call taketh() on an IHaveABadFeelingAboutThis even when it is cast to the base class - but note that this really is pseudo code: through a Library*, the call resolves to Library::taketh() and still compiles.
int main() {
    IHaveABadFeelingAboutThis x;
    Library* lib = &x;
    lib->taketh();  // desired: compile error (NO TAKE CANDLE!);
                    // in real C++ this still compiles
    // and how should this be handled? It calls taketh() internally.
    lib->balance();
}
If you want to present a different interface than the underlying library, you need a facade to present your interface instead of that of the library.
class Facade {
    struct LibraryImpl : public Library {
        int giveth() override;
    };
    LibraryImpl m_impl;
public:
    int balance() { return m_impl.balance(); }
    virtual int giveth() { return m_impl.giveth(); }
    // don't declare taketh
};
int main() {
    Facade f;
    int g = f.giveth();
    int t = f.taketh(); // compile error: Facade has no member taketh
}
Although I don't think your overall situation is good design, and I share many of the sentiments in the comments, I can also appreciate that a lot of code you don't control is involved. I don't believe there is any compile-time solution to your problem that has well-defined behavior. What is far preferable to making methods private and not implementing them is to implement the entire interface and simply make any methods you can't cope with throw an exception. This way at least the behavior is defined, and you can even use try/catch if you think you can recover from a library function needing an interface you can't provide. Making the best of a bad situation, I think.
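As a rough sketch of that suggestion (the library interface below is invented for illustration):
#include <stdexcept>

// pretend this is the library's base class, which you cannot change
class LibBase {
public:
    virtual ~LibBase() {}
    virtual void supported() = 0;
    virtual void unsupported() = 0;
};

class Partial : public LibBase {
public:
    void supported() { /* the part you can actually implement */ }
    void unsupported() {
        // defined behavior instead of a missing definition and a crash
        throw std::logic_error("Partial does not support unsupported()");
    }
};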
If you have class A : public B, then you should follow the Liskov substitution principle (https://en.wikipedia.org/wiki/Liskov_substitution_principle).
The Liskov substitution principle is that a pointer-to-A can be used as a pointer-to-B in all circumstances. Any requirements that B has, A should satisfy.
This is tricky to pull off, and is one of the reasons why many consider OO-style inheritance far less useful than it looks.
Your base exposes a virtual void foo(). The usual contract means that such a foo can be called, and if its preconditions are met, it will return.
If you derive from base, you cannot strengthen the preconditions, nor relax the postconditions.
On the other hand, if base::foo() was documented (and consumers of base supported) the possibility of it throwing an error (say, method_does_not_exist), then you could derive, and have your implementation throw that error. Note that even if the contract says it could do this, in practice consumers may not cope with it if that path was never tested.
Violating the Liskov substitution principle is a great way to have lots of bugs and unmaintainable code. Only do it if you really, really need to.
I have a project where I have a lot of related Info classes and I was considering putting up a hierarchy by having an AbstractInfo class and then a bunch of derived classes, overriding the implementations of AbstractInfo as necessary. However, it turns out that in C++ using the AbstractInfo class to then create one of the derived objects is not that simple. (see this question, comment on last answer)
I was going to create something like a factory class which creates an Info object and always returns an AbstractInfo object. I know from C# you can do that with interfaces, but in C++ things are a little different, it seems.
Downcasting becomes a complicated affair, and it seems error-prone.
Does anyone have a better suggestion for my problem?
You don't require downcasting. See this example:
#include <iostream>
using namespace std;

class AbstractInfo
{
public:
    virtual ~AbstractInfo() {}
    virtual void f() = 0;
};

class ConcreteInfo1 : public AbstractInfo
{
public:
    void f()
    {
        cout << "Info1::f()\n";
    }
};

class ConcreteInfo2 : public AbstractInfo
{
public:
    void f()
    {
        cout << "Info2::f()\n";
    }
};

AbstractInfo* createInfo(int id)
{
    AbstractInfo* pInfo = NULL;
    switch (id)
    {
    case 1:
        pInfo = new ConcreteInfo1;
        break;
    case 2:
    default:
        pInfo = new ConcreteInfo2;
    }
    return pInfo;
}

int main()
{
    AbstractInfo* pInfo = createInfo(1);
    pInfo->f();
    delete pInfo;
    return 0;
}
Don't downcast - use virtual methods. Just return the pointer to a base class from the factory and only work through that pointer.
class AbstractInfo
{
public:
    virtual ~AbstractInfo();
    virtual X f();
    ...
};

class Info_1 : public AbstractInfo
{
    ...
};

class Info_2 : public AbstractInfo
{
    ...
};
AbstractInfo* factory(inputs...)
{
    if (conditions where you would want an Info_1)
        return new Info_1(...);
    else if (conditions for an Info_2)
        return new Info_2(...);
    else
        moan_loudly();
}
If you don't want the factory method to become a single point of maintenance as downstream client code adds Info types, you can instead provide some mechanism for client code to register methods for creation of those derived objects. Check out the Gang of Four's Design Patterns book for creational patterns, or google them.
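One common shape for such a registration mechanism, sketched here with invented names in the same pre-C++11 style, is a map from a key to a creator function:
#include <map>
#include <string>

typedef AbstractInfo* (*InfoCreator)();

class InfoFactory
{
    std::map<std::string, InfoCreator> creators;
public:
    // downstream code registers new Info types without editing the factory
    void registerCreator(const std::string& name, InfoCreator create)
    {
        creators[name] = create;
    }
    AbstractInfo* create(const std::string& name)
    {
        std::map<std::string, InfoCreator>::iterator it = creators.find(name);
        return it == creators.end() ? NULL : it->second();
    }
};

// client code, somewhere downstream:
// AbstractInfo* makeInfo3() { return new Info_3; }
// factory.registerCreator("info3", &makeInfo3);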
While in C++ you generally can't vary a function's return type (you can't overload on return type, and an override must normally match its base exactly), there is an exception for covariant return types.
Example taken from wikipedia:
// Classes used as return types:
class A {
};
class B : public A {
};

// Classes demonstrating method overriding:
class C {
public:
    virtual A* getFoo() {
        return new A();
    }
};
class D : public C {
public:
    B* getFoo() { // overrides C::getFoo with a covariant return type
        return new B();
    }
};
Thus eliminating the need for casting.
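For instance, a quick usage sketch against the example above:
int main() {
    D d;
    B* b = d.getFoo(); // the covariant override returns B* directly, no cast
    delete b;
    return 0;
}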
C++ provides polymorphism just as C# does. The language has no special interface type, but you can emulate that by using a class that only has pure virtual methods. In C#, as in C++, a method is only dispatched dynamically if it is explicitly declared virtual (in Java, by contrast, methods are virtual by default). Also, C# handles all objects of class type through references, whereas in C++ you have to choose between values, pointers, or references. In your case, you most likely want your factory to return a pointer to the interface, or even better a smart pointer, so you don't have to worry about memory management.
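For example, here's a sketch of a factory returning a smart pointer (std::unique_ptr, available since C++11; the interface is illustrative):
#include <memory>

class IInfo {
public:
    virtual ~IInfo() {}
    virtual void f() = 0;
};

class Info1 : public IInfo {
public:
    void f() {}
};

std::unique_ptr<IInfo> makeInfo() {
    return std::unique_ptr<IInfo>(new Info1);
}

int main() {
    std::unique_ptr<IInfo> p = makeInfo();
    p->f();
    return 0;
} // p deletes the Info1 automatically here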
To elaborate / pontificate a little, the "good" time to use an abstract interface (e.g. a base class with virtual functions) is when substantially all the functionality which will be used on the objects can be contained in virtual functions. If that's the case, you can do what you're proposing easily, and just call the virtual functions on the base class pointer, which will automatically call the most-derived version provided.
If you find yourself needing to downcast often to get at child-class specific functions/data, this approach is probably not optimal for your situation. In that case you may find yourself writing some of the functionality outside the classes, providing multiple implementations for each type, and using some sort of RTTI to help downcast as necessary. This is more messy, but tends to be more common outside of the "academic" or well-isolated usages.
Looks like you've got a lot of good info/advice here in the other answers, though.
Yes, I know the phrase "virtual constructor" makes no sense, but I still see articles like this one: http://www.informit.com/guides/content.aspx?g=cplusplus&seqNum=184
and I've heard it mentioned in a C++ interview.
What is the general consensus?
Is a "virtual constructor" good practice or something to be avoided completely?
To be more precise, can someone provide me with a real-world scenario where they have had to use it? From where I stand, this concept of virtual constructors is a somewhat useless invention, but I could be wrong.
All the author has done is implement prototyping and cloning. Both of which are powerful tools in the arsenal of patterns.
You can actually do something a lot closer to "virtual constructors" through the use of the handle/body idiom:
struct object
{
    void f();
    // other NVI functions...
    object(...?);
    object(object const&);
    object& operator=(object const&);
    ~object();
private:
    struct impl;
    impl* pimpl;
};
struct object::impl
{
    virtual ~impl() {} // virtual: deleted through impl* in ~object()
    virtual void f() = 0;
    virtual impl* clone() = 0;
    // etc...
};

struct impA : object::impl { ... };
struct impB : object::impl { ... };

object::object(...?) : pimpl(select_impl(...?)) {}
object::object(object const& other) : pimpl(other.pimpl->clone()) {}
// etc...
Don't know if anyone has declared this an idiom, but I've found it useful and I'm sure others have come across the same idea themselves.
Edit:
You use factory methods or classes when you need to request an implementation for an interface and do not want to couple your call sites to the inheritance tree behind your abstraction.
You use prototyping (clone()) to provide generic copying of an abstraction so that you do not have to determine type in order to make that copy.
You would use something like I just showed for a few different reasons:
1) You wish to totally encapsulate the inheritance relation behind an abstraction. This is one method of doing so.
2) You want to treat it as a value type at the abstract level (you'd be forced to use pointers or references otherwise); see the sketch after this list.
3) You initially had one implementation and want to add new specifications without having to change client code, which is all using the original name either in an auto declaration or heap allocation by name.
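For instance, the value-type point in 2) means client code can look like this (a sketch; it assumes object has a suitable default constructor, which the outline above deliberately leaves open):
void client()
{
    object a;     // some constructor selects an impl behind the scenes
    object b = a; // copying clones the right derived impl via clone()
    b.f();        // f() still dispatches virtually inside the body
}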
The best way to write "virtual constructors" is to use a Prototype pattern with a Clone() virtual method which calls the copy constructor of the real type of the object and returns a pointer to a base class.
class Base
{
public:
    virtual ~Base() {} // virtual: clones are deleted through Base*
    virtual Base* Clone() { return new Base(*this); }
};

class Derived : public Base
{
public:
    virtual Base* Clone() { return new Derived(*this); }
};
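Usage then looks like this (sketch):
Base* original = new Derived;
Base* copy = original->Clone(); // calls Derived::Clone, so the real type is copied
delete original;
delete copy;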
It's not considered good practice and should be used only if you really need it (to implement a copy/paste function, for example).
I consider these so-called "virtual constructors" a bad design when used instead of constructors. These are nothing but virtual functions that are supposed to be called at the beginning of use of an instance of a class.
From the link you've posted:
class Browser
{
public:
    // virtual default constructor
    virtual Browser* construct() { return new Browser; }
};
Let's add a member field:
class Browser
{
    int member;
public:
    // virtual default constructor
    virtual Browser* construct() { return new Browser; }
};
We need to initialize the member field. How do we do it?
class Browser
{
    int member;
public:
    // virtual default constructor
    virtual Browser* construct()
    {
        Browser* b = new Browser;
        b->member = 0;
        return b;
    }
};
Consider a situation when someone forgets to go through construct() and instead creates the instance directly, like this:
Browser b;
printf("member=%d", b.member); // (supposing member were accessible) prints garbage
This way an uninitialized field is used, and there is no way to prevent it.
Now, in this case
class Browser
{
    int member;
public:
    Browser() : member(0) { }
    virtual Browser* construct() { /* some init stuff */ return new Browser; }
};
the default constructor is always used and the member field is always initialized.
However, calling construct() a "virtual constructor" is, in my opinion, an abuse of the term.
The pattern I showed above is quite common, e.g. in MFC: CWnd and similar classes use constructors to initialize instances and Create(...) functions to fully initialize and create the controls. I would never call the Create(...) function a "virtual constructor", anyway.
It's something you can use, but only when you really, really need it. Even the guy in the link said that it's only something that you use if desperate.
None of the answers above answers the question directly; they give workarounds. The direct answer comes straight from the language's author himself: http://www.stroustrup.com/bs_faq2.html#virtual-ctor. In short, it says you need complete information to construct an object, hence a virtual constructor cannot exist in C++.
I know that it's OK for a pure virtual function to have an implementation. However, why is this allowed? Is there a conflict between the two concepts? What's the use? Can anyone offer an example?
In Effective C++, Scott Meyers gives the example that it is useful when you are reusing code through inheritance. He starts with this:
struct Airplane {
    virtual void fly() {
        // fly the plane
    }
    ...
};

struct ModelA : Airplane { ... };
struct ModelB : Airplane { ... };
Now, ModelA and ModelB are flown the same way, and that's believed to be a common way to fly a plane, so the code is in the base class. However, not all planes are flown that way, and we intend planes to be polymorphic, so it's virtual.
Now we add ModelC, which must be flown differently, but we make a mistake:
struct ModelC : Airplane { ... (no fly function) };
Oops. ModelC is going to crash. Meyers would prefer the compiler to warn us of our mistake.
So he makes fly pure virtual in Airplane, while keeping an implementation, and then in ModelA and ModelB puts:
void fly() { Airplane::fly(); }
Now unless we explicitly state in our derived class that we want the default flying behaviour, we don't get it. So instead of just the documentation telling us all the things we need to check about our new model of plane, the compiler tells us too.
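Putting Meyers' scheme together in one sketch:
struct Airplane {
    virtual ~Airplane() {}
    virtual void fly() = 0; // pure virtual, yet defined below
};

// the default flying behaviour still exists, but only as an opt-in
void Airplane::fly() {
    // fly the plane the common way
}

struct ModelA : Airplane {
    void fly() { Airplane::fly(); } // explicitly requests the default
};

struct ModelC : Airplane {
    void fly() { /* flown differently */ }
};

// A new model that forgets to define fly() is abstract, so instantiating
// it is a compile-time error rather than a crash in the air.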
This does the job, but I think it's a bit weak. Ideally we instead have a BoringlyFlyable mixin containing the default implementation of fly, and reuse code that way, rather than putting code in a base class that assumes certain things about airplanes which are not requirements of airplanes. But that requires CRTP if the fly function actually does anything significant:
#include <iostream>

struct Wings {
    void flap() { std::cout << "flapping\n"; }
};

struct Airplane {
    Wings wings;
    virtual void fly() = 0;
};

template <typename T>
struct BoringlyFlyable {
    void fly() {
        // planes fly by flapping their wings, right? Same as birds?
        // (This code may need tweaking after consulting the domain expert)
        static_cast<T*>(this)->wings.flap();
    }
};

struct PlaneA : Airplane, BoringlyFlyable<PlaneA> {
    void fly() { BoringlyFlyable<PlaneA>::fly(); }
};

int main() {
    PlaneA p;
    p.fly();
}
When PlaneA declares inheritance from BoringlyFlyable, it is asserting via interface that it is valid to fly it in the default way. Note that BoringlyFlyable could define pure virtual functions of its own: perhaps getWings would be a good abstraction. But since it's a template it doesn't have to.
I have a feeling that this pattern can replace all cases where you would have provided a pure virtual function with an implementation - the implementation can instead go in a mixin, which classes can inherit if they want it. But I can't immediately prove that (for instance, if Airplane::fly uses private members then it requires considerable redesign to do it this way), and arguably CRTP is a bit high-powered for the beginner anyway. Also it's slightly more code that doesn't actually add functionality or type safety; it just makes explicit what is already implicit in Meyers' design, that some things can fly just by flapping their wings whereas others need to do other stuff instead. So my version is by no means a total shoo-in.
Was addressed in GotW #31. Summary:
"There are three main reasons you might do this. #1 is commonplace, #2 is pretty rare, and #3 is a workaround used occasionally by advanced programmers working with weaker compilers. Most programmers should only ever use #1."
... Which is for pure virtual destructors.
There is no conflict with the two concepts, although they are rarely used together (as OO purists can't reconcile it, but that's beyond the scope of this question/answer).
The idea is that the pure virtual function is given an implementation while at the same time forcing subclasses to override that implementation. The subclasses may invoke the base class function to provide some default behavior. The base cannot be instantiated (it is "abstract") because the virtual function(s) is pure even though it may have an implementation.
Wikipedia sums this up pretty well:
"Although pure virtual methods typically have no implementation in the class that declares them, pure virtual methods in C++ are permitted to contain an implementation in their declaring class, providing fallback or default behaviour that a derived class can delegate to if appropriate."
Typically you don't need to provide base class implementations for pure virtuals. But there is one exception: pure virtual destructors. In fact if your base class has a pure virtual destructor, it must have an implementation. Why would you need a pure virtual destructor instead of just a virtual one? Typically, in order to make a base class abstract without requiring the implementation of any other method. For example, in a class where you might reasonably use the default implementation for any method, but you still don't want people to instantiate the base class, you can mark only the destructor as pure virtual.
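A minimal sketch of that exception:
class AbstractBase {
public:
    virtual ~AbstractBase() = 0; // pure virtual destructor makes the class abstract
};

// a definition is still required, because every derived destructor calls it
AbstractBase::~AbstractBase() {}

class Concrete : public AbstractBase {};

int main() {
    // AbstractBase ab; // error: cannot instantiate an abstract class
    Concrete c;         // fine; ~Concrete runs, then AbstractBase::~AbstractBase
    return 0;
}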
EDIT:
Here's some code that illustrates a few ways to call the base implementation:
#include <iostream>
using namespace std;

class Base
{
public:
    virtual void DoIt() = 0;
};

class Der : public Base
{
public:
    void DoIt();
};

// the pure virtual function still gets a definition, outside the class
void Base::DoIt()
{
    cout << "Base" << endl;
}

void Der::DoIt()
{
    cout << "Der" << endl;
    Base::DoIt();
}

int main()
{
    Der d;
    Base* b = &d;
    d.DoIt();
    b->DoIt(); // note that Der::DoIt is still called
    b->Base::DoIt();
    return 0;
}
That way you can provide a working implementation but still require the child class implementer to explicitly call that implementation.
Well, we have some great answers already; I'm too slow at writing.
My thought would be, for instance, an init function that contains a try/catch block, meaning it shouldn't be placed in a constructor:
class A {
public:
    // A pure virtual function cannot have its body at the declaration,
    // so the definition goes below the class.
    virtual bool init() = 0;
};

bool A::init() {
    ... // initiate stuff that couldn't be done in the constructor
}

class B : public A {
public:
    bool init() {
        ...
        return A::init();
    }
};