I have a class hierarchy with lots of shared member functions in a base class, and also a large number of unique member functions in two derived classes:
class Scene {
public:
void Display();
void CreateDefaultShapes();
void AddShape(Shape myShape);
// lots more shared member functions
};
class AnimatedScene : public Scene {
public:
void SetupAnimation();
void ChangeAnimationSpeed(float x, float y);
// lots of member functions unique to animation
};
class ControllableScene : public Scene {
public:
void ConfigureControls();
void MoveCamera(float x, float y, float z);
void Rotate(float x, float y, float z);
// lots of member functions unique to a controllable scene
};
Not surprisingly, this doesn't work:
Scene myScene;
myScene.SetupAnimation();
What is the correct solution to this problem? Should I make all of the derived member functions virtual and add them to the base? Should I use a cast when calling SetupAnimation()? Is there a more clever design that solves this problem?
Note: I receive myScene from elsewhere and can't simply declare it as AnimatedScene when I instantiate it.
Edit: I've added a couple more member functions to illustrate the point. A small handful of initialization functions definitely lend themselves to simply using virtual.
You can cast it, preferably using static_cast. This is the least preferable option: if you are casting things, it usually means your design needs more thought.
If you have a particular function or class that needs one or the other, declare its input as the specific type you need; this more accurately communicates the requirements of that function or class.
If the function needs to be generic, and those methods don't require any input, then you could define a virtual method in the parent class, say Init(), which each derived class overrides to call the correct methods to set itself up. A sketch of the last two options follows below.
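For the last two options, a minimal sketch (the Init hook and the free functions here are illustrative, not part of the question's code):

class Scene {
public:
    virtual ~Scene() {}
    virtual void Init() {}                      // option 3: generic hook; default does nothing
};

class AnimatedScene : public Scene {
public:
    void SetupAnimation() {}
    void ChangeAnimationSpeed(float, float) {}
    void Init() override { SetupAnimation(); }  // the derived class knows how to set itself up
};

// Option 2: the signature states the requirement explicitly.
void RunAnimation(AnimatedScene& scene) {
    scene.SetupAnimation();
    scene.ChangeAnimationSpeed(1.0f, 2.0f);
}

// Option 3 in use: works through a base reference, no casting needed.
void InitAny(Scene& scene) {
    scene.Init();
}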
I have a similar problem in my compiler project, where the AST (Abstract Syntax Tree) is constructed from the statements, so while(x != 0) { if (a == b) x = 0; } would construct a whileAST with a binaryExpr inside it, then a blockAST with the ifAST, and so on. Each of these has some common properties, and a lot of things that only apply when you actually do something specific to that part. Most of the time, that is solved by calling a generic (virtual) member function.
However, SOMETIMES you need to do very specific things. There are two ways to do that:
use dynamic_cast (or typeid + reinterpret_cast or static_cast).
Set up dozens of virtual member functions, most of which are completely useless (they do nothing, or return a "can't do that" indication of some sort).
In my case, I chose the first one. It shouldn't be the common case, but sometimes it is indeed the right thing to do.
So in this case, you'd do something like:
// Note: dynamic_cast requires Scene to be a polymorphic type (at least one virtual function).
AnimatedScene *animScene = dynamic_cast<AnimatedScene*>(&scene);
if (!animScene)
{
    // ... do something else, since it's not an AnimatedScene ...
}
else
{
    animScene->SetupAnimation();
}
I am not yet able to comment, which is what I really wanted to do, but I am also interested in figuring this out.
A few months ago I had a similar problem. What I can tell you is that you can use the typeid operator to figure out what type the object is, like so:
#include <iostream>
#include <typeinfo>

int main()
{
    // Note: typeid(*ptr) only reports the dynamic type if Scene is polymorphic
    // (i.e. has at least one virtual function).
    Scene* ptr = new AnimatedScene();
    if (typeid(*ptr) == typeid(AnimatedScene))
    {
        std::cout << "ptr is indeed an AnimatedScene" << std::endl;
        AnimatedScene* ptr2 = static_cast<AnimatedScene*>(ptr);
        ptr2->SetupAnimation();
    }
    else
    {
        std::cout << "not an AnimatedScene!!!" << std::endl;
    }
}
This works; you'll then be able to use ptr2 to access AnimatedScene's unique members.
Notice the use of pointers: you can't use the objects directly by value, due to something called "object slicing" when playing with polymorphism: https://en.wikipedia.org/wiki/Object_slicing
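For illustration (a sketch with the question's classes, not code from this answer), slicing happens when a derived object is copied into a base object by value:

AnimatedScene anim;
Scene sliced = anim;   // object slicing: only the Scene part is copied
Scene& ref = anim;     // fine: a reference still refers to the whole AnimatedScene
Scene* ptr = &anim;    // fine: so does a pointer
// Note: typeid(*ptr) and dynamic_cast only see the dynamic type if Scene is
// polymorphic, i.e. has at least one virtual function.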
Like you, I have heard that using typeid (and thus casting) is a bad idea, but as to why, I cannot tell you. I am hoping a more experienced programmer will explain it.
What I can tell you is that this works without problems in this simple example, and you've avoided the problem of declaring meaningless virtual functions in the base type.
Edit: It's amazing how often I forget to use Google: Why is usage of the typeid keyword bad design?
If I understand Mr. Bolas correctly, typeid incentivizes bad coding practices. However, in your example you want to access a subtype's non-virtual functions. As far as I know, there is no way of doing that without checking the type at runtime, i.e. with typeid.
If such a problem arises with your hierarchy, that proves the hierarchy was too generalized. You might want to apply an interface pattern: if a class has certain functionality, it inherits an interface that defines that functionality.
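A rough sketch of that idea applied to the question's classes (the interface names IAnimatable and IControllable are invented for illustration):

class IAnimatable {
public:
    virtual ~IAnimatable() {}
    virtual void SetupAnimation() = 0;
};

class IControllable {
public:
    virtual ~IControllable() {}
    virtual void MoveCamera(float x, float y, float z) = 0;
};

class AnimatedScene : public Scene, public IAnimatable {
public:
    void SetupAnimation() override { /* ... */ }
};

// Code that needs a capability asks for the interface, not for a concrete scene type.
void StartAnimation(IAnimatable& a) { a.SetupAnimation(); }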
Given that dogs are animals: do all animals except dogs fail to bark, or do only dogs bark?
The first approach leads to a class animal that exposes (and by default fails at) every call of the entire zoo, implemented one by one in each animal; in particular, class dog will override just bark().
In this approach, animal becomes a sort of "god object" (it knows everything), needing to be updated every time something new is introduced, and requiring its entire "universe" to be re-created (everything recompiled) afterwards.
The second approach requires you first to check that the animal is a dog (via dynamic_cast) and then ask it to bark (or to check for a cat before asking for a meow).
The trade-off probably consists in understanding how often you need to ask for a bark out of context (not knowing which animal you are dealing with), how to report a failure, and what to do in case of such a failure.
In other words, the key driver is not the bark, but the context around it inside your program.
//Are you trying to do something like this?
#include <iostream>

class Scene{
public:
virtual void Display()=0; //Pure virtual func
};
class AnimatedScene : public Scene{
public:
void Display(){
std::cout<<"function Display() inside class AnimatedScene" <<std::endl;
}
};
class ControllableScene : public Scene{
public:
void Display(){
std::cout<<"function Display() inside class ControllableScene" <<std::endl;
}
};
int main(){
AnimatedScene as;
ControllableScene cs;
Scene *ex1 = &as;
Scene *ex2 = &cs;
ex1->Display();
ex2->Display();
return 0;
}
So this is probably a weird question, but I have a reasonably good reason for asking.
The gist of my question is, given an example with two levels of derivation on a class hierarchy:
Main Base Class:
class Animal
{
public:
virtual void Draw() = 0;
};
Derived Class:
class Dog : public Animal
{
public:
virtual void Draw()
{
// draw a generic dog icon or something...
}
};
Further Derivation:
class Corgi : public Dog
{
public:
virtual void Draw()
{
// draw a corgi icon...
}
};
Now, I'd love to be able to, from within the Corgi class, permanently cast the 'this' pointer to a Dog pointer and then pass it off somewhere else as an Animal. This other place will then be able to call the Draw function and get the Dog method, not the virtual Corgi method. I know this is strange, but again, I have a vaguely legitimate reason for wanting to do it.
I've tried all the different casting operators and haven't had any luck, but maybe there is a consistent way of pulling this off? In the past I've caused myself trouble by not properly using dynamic_cast which resulted in a similar state for a pointer. Perhaps this time I can use that to my advantage?
Edit:
Perhaps the above example doesn't clearly illustrate what I'm trying to achieve, so I'll elaborate with my real goal.
I'm trying to achieve a shorthand for registering base class implementations that link into a scripting system I've been using for a while. The scripting system relies on a base class IScriptContext to facilitate access to real-code functions and member variable access. Internally base classes register their member function addresses and member variable addresses which are later dispatched/accessed through lookup tables. I'm in the process of adding proper support for class derivation hierarchies to the scripting system, and I figured being able to isolate the base class versions of these interfaces would help save time and make the whole process cleaner for me when it comes time to register available base classes with the script interpreter. There are other ways to achieve this, such as registering class specific function pointers for each required method for each available base class (e.g. this->Dog::CallFunction, this->Dog::SetMember, this->Dog::GetMember.) However, I figured using an interface would allow me to modify things a bit easier down the road if I ever needed to.
I hope all of that makes some kind of sense.
Thanks!
You have a Corgi object. You can:
Treat it as a Dog object everywhere by using the Dog:: qualifier on all calls (e.g. ptr->Dog::Draw();). This loses you virtual dispatch, and is almost certainly not what you want from how your question reads.
Actually construct a new Dog object from your Corgi. Just do this with a normal static_cast as you'd convert any other type or let implicit conversion take over (e.g. Corgi c; Dog d(c);).
These are the options available to you. To want to retain a Corgi but automatically pretend everywhere that it's a Dog is neither reasonable nor legitimate, so the language does not provide for it.
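To make those two options concrete, here is a small sketch using the question's classes (only what is described above, nothing more):

void Example()
{
    Corgi corgi;

    // Option 1: qualified call, bypassing virtual dispatch.
    corgi.Dog::Draw();       // always Dog's version, even on a Corgi

    // Option 2: make an actual Dog by a slicing copy; it is, and stays, a Dog.
    Dog justADog(corgi);     // copies only the Dog part of the Corgi
    Animal& a = justADog;
    a.Draw();                // virtual dispatch now lands in Dog::Draw
}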
Let me start off by saying your design looks faulty.
You can however explicitly say which function in a hierarchy you want to call:
Corgi* corgi = new Corgi;
corgi->Dog::Draw();
This will call the Draw method from Dog, not from Corgi. (I hope I understood correctly what you're asking.)
Tomalak has already outlined two of the main choices available to you:
use qualified calls, or
construct a Dog as a Dog copy of your Corgi.
In addition to these, you can
use a simple wrapper
e.g.
class LooksLikeDog
: public Dog
{
private:
Dog* realObject_;
LooksLikeDog( LooksLikeDog const& ); // No such.
LooksLikeDog& operator=( LooksLikeDog const& ); // No such.
public:
LooksLikeDog( Dog& other )
: realObject_( &other )
{}
// Just for exposition: not implementing this does the same.
virtual void Draw() override { Dog::Draw(); }
// Example of other method that may need to be implemented:
virtual void bark() override { realObject_->bark(); }
};
But the best solution is most probably to fix your design. ;-)
Implement Corgi's Draw function and call the parent's implementation:
void Corgi::Draw()
{
    Dog::Draw();
}
Effective C++ by Scott Meyers advises in Chapter 5, Item 28 to avoid returning "handles" (pointers, references or iterators) to object internals, and it definitely makes a good point.
I.e. don't do this:
class Family
{
public:
Mother& GetMother() const;
};
because it destroys encapsulation and allows to alter private object members.
Don't even do this:
class Family
{
public:
const Mother& GetMother() const;
};
because it can lead to "dangling handles", meaning that you keep a reference to a member of an object that is already destroyed.
Now, my question is, are there any good alternatives? Imagine Mother is heavy! If I now return a copy of Mother instead of a reference, GetMother becomes a rather costly operation.
How do you handle such cases?
First, let me re-iterate: the biggest issue is not one of lifetime, but one of encapsulation.
Encapsulation does not only mean that nobody can modify an internal without you being aware of it, encapsulation means that nobody knows how things are implemented within your class, so that you can change the class internals at will as long as you keep the interface identical.
Now, whether the reference you return is const or not does not matter: you accidentally expose the fact that you have a Mother object inside of your Family class, and now you just cannot get rid of it (even if you have a better representation) because all your clients might depend on it and would have to change their code...
The simplest solution is to return by value:
class Family {
public:
Mother mother() { return _mother; }
void mother(Mother m) { _mother = m; }
private:
Mother _mother;
};
Because in the next iteration I can remove _mother without breaking the interface:
class Family {
public:
Mother mother() { return Mother(_motherName, _motherBirthDate); }
void mother(Mother m) {
_motherName = m.name();
_motherBirthDate = m.birthDate();
}
private:
Name _motherName;
BirthDate _motherBirthDate;
};
See how I managed to completely remodel the internals without changing the interface one iota? Easy peasy.
Note: obviously this transformation is for effect only...
Obviously, this encapsulation comes at the cost of some performance; there is a tension here, and it's your judgement call whether encapsulation or performance should be preferred each time you write a getter.
Possible solutions depend on the actual design of your classes and on what you consider "object internals":
Mother is just an implementation detail of Family and can be completely hidden from the Family user.
Family is considered a composition of other public objects.
In the first case you completely encapsulate the subobject and provide access to it only via Family member functions (possibly duplicating Mother's public interface):
class Family
{
public:
std::string GetMotherName() const { return mommy.GetName(); }
unsigned GetMotherAge() const { return mommy.GetAge(); }
...
private:
Mother mommy;
...
};
Well, it can be tedious if Mother's interface is quite large, but possibly that is a design problem in itself (good interfaces tend to have only a handful of members), and it will make you revisit and redesign it in some better way.
In the second case you still need to return the entire object. There are two problems:
Encapsulation breakdown (end-user code will depend on Mother definition)
Ownership problem (dangling pointers/references)
To address problem 1, use an interface instead of the specific class; to address problem 2, use shared or weak ownership:
#include <memory>
#include <string>

class IMother
{
public:
    virtual ~IMother() {}
    virtual std::string GetName() const = 0;
    ...
};

class Mother: public IMother
{
    // Implementation of IMother and other stuff
    ...
};

class Family
{
public:
    std::shared_ptr<IMother> GetMother() const { return mommy; }
    std::weak_ptr<IMother> GetMotherWeakPtr() const { return mommy; }
    ...
private:
    std::shared_ptr<Mother> mommy;
    ...
};
If a read-only view is what you're after, and for some reason you need to avoid dangling handles, then you can consider returning a shared_ptr<const Mother>.
That way, the Mother object can outlive the Family object, which must of course also store it by shared_ptr.
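A minimal sketch of that arrangement (assuming a Mother class along the lines of the question):

#include <memory>
#include <utility>

class Mother { /* ... */ };

class Family {
public:
    explicit Family(std::shared_ptr<Mother> m) : mommy_(std::move(m)) {}

    // Read-only view; the returned Mother can outlive the Family that handed it out.
    std::shared_ptr<const Mother> GetMother() const { return mommy_; }

private:
    std::shared_ptr<Mother> mommy_;
};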
Part of the consideration is whether you're going to create reference loops by using too many shared_ptrs. If you are, then you can consider weak_ptr and you can also consider just accepting the possibility of dangling handles but writing the client code to avoid it. For example, nobody worries too much about the fact that std::vector::at returns a reference that becomes stale when the vector is destroyed. But then, containers are the extreme example of a class that intentionally exposes the objects it "owns".
This goes back to a fundamental OO principle:
Tell objects what to do rather than doing it for them.
You need Mother to do something useful? Ask the Family object to do it for you. Hand it any external dependencies wrapped up in a nice interface (a class in C++) through the parameters of the method on the Family object.
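A rough sketch of that style (the Printer dependency and the method names here are invented for illustration):

class Printer { /* writes cards somehow */ };

class Mother {
public:
    void WriteCard(Printer&) { /* ... */ }
};

class Family {
public:
    // Tell the Family what to do and hand it the external dependency it needs.
    void SendMotherBirthdayCard(Printer& printer) { mommy_.WriteCard(printer); }

private:
    Mother mommy_;   // never exposed to callers
};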
because it can lead to "dangling handles", meaning that you keep a reference to a member of an object that is already destroyed.
Your user could also dereference null or do something equally stupid, but they're not going to, and nor are they going to hold on to a dangling handle, as long as the lifetime is clear and well-defined. There's nothing wrong with this.
It's just a matter of semantics. In your case, Mother is not part of Family's internals, not its implementation details. A Mother instance can be referenced by a Family, as well as by many other entities. Moreover, a Mother instance's lifetime may not even correlate with the Family's lifetime.
So a better design would be to store a shared_ptr<Mother> in Family, and expose it in Family's interface without worries.
In an existing project, I am supposed to inherit from a Controller class (MVC) declared as a Singleton, so as to define my own behaviour. How do I appropriately derive from this Singleton class?
First, let me expand on the context and the need for this inheritance.
The application I am adding to the existing software wants to use an MVC module that performs almost the same task as the one I need to perform. It uses the same methods, up to signatures and slight modifications. Rewriting my own MVC module would definitely be duplication of code. The existing module is intrinsically oriented towards another part of the software, so I cannot simply reuse it as-is. It is written as a Model-View-Controller pattern in which the Controller is a Singleton. I have already derived the View.
Second, I doubt that I can derive from a Singleton class in the classical way.
Calling the constructor from the inherited class would simply call getInstance() on the parent class and fail to return an object of the derived class (?).
Third, here is how I see one way of dealing with this. Please comment and help me improve it!
I copy the whole Singleton class into a class I could call AbstractController. I derive this class twice: the first child is a singleton and adopts the entire behaviour of the parent class; the second child is the Controller for my part of the application, with its own redefined behaviour.
Thanks!
Truth is, singletons and inheritance do not play well together.
Yeah, yeah, the Singleton lovers and GoF cult will be all over me for this, saying "well, if you make your constructor protected..." and "you don't have to have a getInstance method on the class, you can put it...", but they're just proving my point. Singletons have to jump through a number of hoops in order to be both a singleton and a base class.
But just to answer the question, say we have a singleton base class. It can even to some degree enforce its singleness through inheritance. (The constructor does one of the few things that can work when it can no longer be private: it throws an exception if another Base already exists.) Say we also have a class Derived that inherits from Base. Since we're allowing inheritance, let's also say there can be any number of other subclasses of Base, that may or may not inherit from Derived.
But there's a problem -- the very one you're either already running into, or will soon. If we call Base::getInstance without having constructed an object already, we'll get a null pointer. We'd like to get back whatever singleton object exists (it may be a Base, and/or a Derived, and/or an Other). But it's hard to do so and still follow all the rules, because there are only a couple of ways to do so -- and all of them have some drawbacks.
We could just create a Base and return it. Screw Derived and Other. End result: Base::getInstance() always returns exactly a Base. The child classes never get to play. Kinda defeats the purpose, IMO.
We could put a getInstance of our own in our derived class, and have the caller say Derived::getInstance() if they specifically want a Derived. This significantly increases coupling (because a caller now has to know to specifically request a Derived, and ends up tying itself to that implementation).
We could do a variant of that last one -- but instead of getting the instance, the function just creates one. (While we're at it, let's rename the function to initInstance, since we don't particularly care what it gets -- we're just calling it so that it creates a new Derived and sets that as the One True Instance.)
So (barring any oddness unaccounted for yet), it works out kinda like this...
#include <stdexcept>

class Base {
static Base * theOneTrueInstance;
public:
static Base & getInstance() {
if (!theOneTrueInstance) initInstance();
return *theOneTrueInstance;
}
static void initInstance() { new Base; }
protected:
Base() {
if (theOneTrueInstance) throw std::logic_error("Instance already exists");
theOneTrueInstance = this;
}
virtual ~Base() { } // so random strangers can't delete me
};
Base* Base::theOneTrueInstance = 0;
class Derived : public Base {
public:
static void initInstance() {
new Derived; // Derived() calls Base(), which sets this as "the instance"
}
protected:
Derived() { } // so we can't be instantiated by outsiders
~Derived() { } // so random strangers can't delete me
};
And in your init code, you say Base::initInstance(); or Derived::initInstance();, depending on which type you want the singleton to be. You'll have to cast the return value from Base::getInstance() in order to use any Derived-specific functions, of course, but without casting you can use any functions defined by Base, including virtual functions overridden by Derived.
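Putting it together, usage could look roughly like this (a sketch built on the code above; the dynamic_cast is only needed for Derived-specific members):

int main() {
    Derived::initInstance();                 // decide the concrete type once, up front

    Base& b = Base::getInstance();           // anything Base declares works through this reference

    // Derived-specific functionality still requires a cast:
    Derived& d = dynamic_cast<Derived&>(b);  // throws std::bad_cast if the instance isn't a Derived
    (void)d;
}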
Note that this way of doing it also has a number of drawbacks of its own, though:
It puts most of the burden of enforcing singleness on the base class. If the base doesn't have this or similar functionality, and you can't change it, you're kinda screwed.
The base class can't take all of the responsibility, though -- each class needs to declare a protected destructor, or someone could come along and delete the one instance after casting it (in)appropriately, and the whole thing goes to hell. What's worse, this can't be enforced by the compiler.
Because we're using protected destructors to prevent some random schmuck from deleting our instance, unless the compiler's smarter than I fear it is, even the runtime won't be able to properly delete your instance when the program ends. Bye bye, RAII... hello "memory leak detected" warnings. (Of course the memory will eventually be reclaimed by any decent OS. But if the destructor doesn't run, you can't depend on it to do cleanup for you. You'll need to call a cleanup function of some sort before you exit, and that won't give you anywhere near the same assurances that RAII can give you.)
It exposes an initInstance method that, IMO, doesn't really belong in an API everyone can see. If you wanted, you could make initInstance private and let your init function be a friend, but then your class is making assumptions about code outside itself, and the coupling thing comes back into play.
Also note that the code above is not at all thread safe. If you need that, you're on your own.
Seriously, the less painful route is to forget trying to enforce singleness. The least complicated way to ensure that there's only one instance is to only create one. If you need to use it multiple places, consider dependency injection. (The non-framework version of that amounts to "pass the object to stuff that needs it". :P ) I went and designed the above stuff just to try and prove myself wrong about singletons and inheritance, and just reaffirmed to myself how evil the combination is. I wouldn't recommend ever actually doing it in real code.
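For the Controller in the question, the dependency-injection version amounts to something like this (a sketch; the names are invented):

class Controller { /* ... */ };

class View {
public:
    explicit View(Controller& c) : controller_(c) {}   // the one instance is handed in
private:
    Controller& controller_;
};

int main() {
    Controller theOnlyController;   // "singleness" by convention: just create one
    View view(theOnlyController);   // pass it to whatever needs it
    (void)view;
}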
I'm not sure I understand the situation you're dealing with fully, and whether or not it's possible or appropriate to derive from the singleton depends very much on how the singleton is implemented.
But since you mentioned "good practice" there's some general points that come to mind when reading the question:
Inheritance isn't usually the best tool to achieve code re-use. See: Prefer composition over inheritance?
Using singleton and "good practice" generally do not go together! See: What is so bad about singletons?
Hope that helps.
I recently had a similar need in my app... in any case, here is my implementation:
The .h:
class icProjectManagerHandler;
class icProjectManager : public bs::icBaseManager {
friend class icProjectManagerHandler;
protected:
icProjectManager();
public:
~icProjectManager();
template<typename t>
static t *PM() {
return dynamic_cast<t *>(icProjectManagerHandler::PMH()->mCurrentManager);
};
template<typename t>
static t *PMS() {
static icProjectManagerHandler pm;
return static_cast<t *>(icProjectManagerHandler::PMH()->mCurrentManager);
};
};
class icProjectManagerHandler {
friend class icProjectManager;
icProjectManager *mCurrentManager;
icProjectManagerHandler();
public:
~icProjectManagerHandler();
static icProjectManagerHandler *PMH();
inline void setProjectManager(icProjectManager *pm) {
if (mCurrentManager) { delete mCurrentManager; }
mCurrentManager = pm;
}
};
The .cpp:
icProjectManagerHandler::icProjectManagerHandler() {
mCurrentManager = new icProjectManager();
}
icProjectManagerHandler::~icProjectManagerHandler() {
}
icProjectManagerHandler *icProjectManagerHandler::PMH() {
static icProjectManagerHandler pmh;
return &pmh;
}
icProjectManager::icProjectManager() {
}
icProjectManager::~icProjectManager() {
}
And an example:
class icProjectX : public ic::project::icProjectManager {
public:
icProjectX() {};
~icProjectX() {};
};
int main(int argc, char *argv[]) {
auto pro = new icProjectX();
pro->setIcName("Hello");
ic::project::icProjectManagerHandler::PMH()->setProjectManager(pro);
qDebug() << "\n" << pro << "\n" << ic::project::icProjectManager::PMS<icProjectX>();
return 10;
}
The issue with this implementation is that you have to initialize your "singleton" class first, or else you will get the default base class. But other than that... it should work?
I recently switched back from Java and Ruby to C++, and much to my surprise I have to recompile files that use the public interface when I change the signature of a private method, because the private parts are also in the .h file.
I quickly came up with a solution that is, I guess, typical for a Java programmer: interfaces (= pure virtual base classes). For example:
BananaTree.h:
class Banana;
class BananaTree
{
public:
virtual Banana* getBanana(std::string const& name) = 0;
static BananaTree* create(std::string const& name);
};
BananaTree.cpp:
class BananaTreeImpl : public BananaTree
{
private:
string name;
Banana* findBanana(string const& name)
{
return //obtain banana, somehow;
}
public:
BananaTreeImpl(string name)
: name(name)
{}
virtual Banana* getBanana(string const& name)
{
return findBanana(name);
}
};
BananaTree* BananaTree::create(string const& name)
{
return new BananaTreeImpl(name);
}
The only hassle here is that I can't use new, and must instead call BananaTree::create(). I do not think that that is really a problem, especially since I expect to be using factories a lot anyway.
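For reference, usage of that factory might look like this (wrapping the result in a smart pointer is my own illustration, not part of the code above):

#include <memory>
#include <string>

int main() {
    // Deleting through the base pointer requires BananaTree to have a virtual destructor.
    std::unique_ptr<BananaTree> tree(BananaTree::create("my tree"));
    Banana* banana = tree->getBanana("some banana");
    (void)banana;
}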
Now, the wise men of C++ fame, however, came up with another solution, the pImpl idiom. With that, if I understand it correctly, my code would look like:
BananaTree.h:
#include <memory>
#include <string>

class Banana;

class BananaTree
{
public:
    BananaTree(std::string const& name);
    Banana* getBanana(std::string const& name);
private:
    struct Impl;
    std::shared_ptr<Impl> pimpl_;
};
BananaTree.cpp:
struct BananaTree::Impl
{
string name;
Banana* findBanana(string const& name)
{
return //obtain banana, somehow;
}
Banana* getBanana(string const& name)
{
return findBanana(name);
}
Impl(string const& name) : name(name) {}
};
BananaTree::BananaTree(string const& name)
: pimpl_(shared_ptr<Impl>(new Impl(name)))
{}
Banana* BananaTree::getBanana(string const& name)
{
return pimpl_->getBanana(name);
}
This would mean I have to implement a decorator-style forwarding method for every public method of BananaTree, in this case getBanana. This sounds like an added level of complexity and maintenance effort that I prefer not to require.
So, now for the question: What is wrong with the pure virtual class approach? Why is the pImpl approach so much better documented? Did I miss anything?
I can think of a few differences:
With the virtual base class you break some of the semantics people expect from well-behaved C++ classes:
I would expect (or require, even) the class to be instantiated on the stack, like this:
BananaTree myTree("somename");
otherwise, I lose RAII, and I have to manually start tracking allocations, which leads to a lot of headaches and memory leaks.
I also expect that to copy the class, I can simply do this
BananaTree tree2 = mytree;
unless of course, copying is disallowed by marking the copy constructor private, in which case that line won't even compile.
In the above cases, we obviously have the problem that your interface class doesn't really have meaningful constructors. But if I tried to use code such as the above examples, I'd also run afoul of a lot of slicing issues.
With polymorphic objects, you're generally required to hold pointers or references to the objects, to prevent slicing. As in my first point, this is generally not desirable, and makes memory management much harder.
Will a reader of your code understand that a BananaTree basically doesn't work, that he has to use BananaTree* or BananaTree& instead?
Basically, your interface just doesn't play that well with modern C++, where we prefer to
avoid pointers as much as possible, and
stack-allocate all objects to benefit from automatic lifetime management.
By the way, your virtual base class forgot the virtual destructor. That's a clear bug.
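Concretely, the fix is a one-line addition to the header (a corrected sketch of the question's interface):

#include <string>

class Banana;

class BananaTree
{
public:
    virtual ~BananaTree() {}   // without this, deleting through a BananaTree* is undefined behaviour
    virtual Banana* getBanana(std::string const& name) = 0;
    static BananaTree* create(std::string const& name);
};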
Finally, a simpler variant of pimpl that I sometimes use to cut down on the amount of boilerplate code is to give the "outer" object access to the data members of the inner object, so you avoid duplicating the interface. Either a function on the outer object just accesses the data it needs from the inner object directly, or it calls a helper function on the inner object, which has no equivalent on the outer object.
In your example, you could remove Impl::getBanana, and instead implement BananaTree::getBanana like this:
Banana* BananaTree::getBanana(string const& name)
{
return pimpl_->findBanana(name);
}
then you only have to implement one getBanana function (in the BananaTree class), and one findBanana function (in the Impl class).
Actually, this is just a design decision to make. And even if you make the "wrong" decision, it's not that hard to switch.
pimpl is also used to provide lightweight objects on the stack, or to present "copies" that refer to the same implementation object.
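For example, because the pimpl version in the question stores a shared_ptr<Impl>, copies of the outer object are cheap and all observe the same implementation (a sketch using the question's BananaTree):

void Example()
{
    BananaTree a("tree");
    BananaTree b = a;   // copies only the shared_ptr; a and b share the same Impl
}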
The delegation functions can be a hassle, but they are a minor issue (simple, so no real added complexity), especially with small classes.
Interfaces in C++ are more typically used in strategy-like ways, where you expect to be able to choose between implementations, although that is not required.