using virtual function vs dynamic_cast - c++

Instead of using a virtual function, is it fine to use something like:
void BaseClass::functionName () { // BaseClass already has virtual functions
    // some LONG code true for all derived classes of BaseClass
    // ...
    if (typeid (*this) == typeid (DerivedClass1))
        // use functions of DerivedClass1 on dynamic_cast<DerivedClass1*>(this)
    else if (typeid (*this) == typeid (DerivedClass2))
        // use functions of DerivedClass2 on dynamic_cast<DerivedClass2*>(this)
    // some LONG code true for all derived classes of BaseClass
    // ...
}
I feel it's not a good idea to use virtual functions for something like the above when only a small section is specialized for the derived classes. With virtual functions, the long code shared by all derived classes would have to be repeated in every override (or pulled out into helper functions just for that). I've tested my method and it works (and I suppose with no loss in performance), but I wonder if this is questionable practice.
What if the if-else-if part is used more than once in the function?
And if the common code for all derived classes is relatively SHORT, then it is better to use virtual functions, right?

Why not do this:
void BaseClass::functionName () {
    // some LONG code true for all derived classes of BaseClass
    // ...
    this->some_protected_virtual_member_function();
    // some LONG code true for all derived classes of BaseClass
    // ...
}
So the common part is not duplicated, and the behavior can still easily be extended in your child classes without having to add another if to your parent class.
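For completeness, a rough sketch of what a child class could look like under that scheme (the names simply mirror the snippet above; nothing here is from the original post, and it assumes BaseClass declares the hook as a protected virtual function):

class DerivedClass1 : public BaseClass {
protected:
    // Only the small specialized section lives here; the LONG common code
    // stays in BaseClass::functionName and is never duplicated.
    void some_protected_virtual_member_function() override {
        // DerivedClass1-specific work
    }
};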

Your code will not work at all unless the classes have virtual functions. C++ provides only limited reflection: if the classes had no virtual functions, typeid (*this) would always be typeid (BaseClass), never the derived type. The above code may also be slower than simply calling a virtual function: you get a new branch for each type rather than a constant-time lookup through the vtable.
However, the biggest issue with the above code is that it loses polymorphism and encapsulation. The calling code must be aware of what DerivedClass1 and DerivedClass2 need to do, and of the structures inside DerivedClass1 and DerivedClass2. Also, all the code is piled into one place, making this function possibly hundreds of lines long.
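To illustrate the typeid point, here is a small stand-alone example (the class names are mine, not from the question):

#include <iostream>
#include <typeinfo>

struct PlainBase {};                                  // no virtual functions
struct PlainDerived : PlainBase {};

struct PolyBase { virtual ~PolyBase() = default; };   // polymorphic base
struct PolyDerived : PolyBase {};

int main() {
    PlainDerived pd;
    PlainBase* pb = &pd;
    // Without virtual functions, typeid uses the static type:
    std::cout << (typeid(*pb) == typeid(PlainDerived)) << '\n';  // prints 0

    PolyDerived qd;
    PolyBase* qb = &qd;
    // With a polymorphic base, typeid inspects the dynamic type:
    std::cout << (typeid(*qb) == typeid(PolyDerived)) << '\n';   // prints 1
}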

I think you're looking for the template method pattern here: just use your existing non-virtual function and have it call a virtual function only for the small section of code that differs between concrete classes. It has the advantage of looking prettier too.
void BaseClass::functionName () {
    // some LONG code true for all derived classes of BaseClass
    // ...
    functionName_impl(); // Will be virtual (private or protected) and overridden in each child class to do the right work.
    // some LONG code true for all derived classes of BaseClass
    // ...
}

This is a degenerate case of the Template Method Pattern:
class Base {
public:
    void templated() {
        // do some stuff
        this->hook1();
        // other stuff
        if (/*cond*/) { this->hook2(); }
        size_t acc = 0;
        for (Stuff const& s: /*...*/) { acc += this->hook3(s); }
        // other stuff
    }
private:
    virtual void hook1() {}
    virtual void hook2() {}
    virtual size_t hook3(Stuff const&) { return 0; }
}; // class Base
And then a Derived class can customize the behavior of the hooks.
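For example, a possible Derived might look like this (a sketch only; the Stuff::size() call is a placeholder of mine):

class Derived: public Base {
private:
    // Override only the hooks whose default behavior is not wanted;
    // templated() itself stays defined once, in Base.
    void hook1() override { /* Derived-specific preparation */ }
    size_t hook3(Stuff const& s) override { return s.size(); }
}; // class Derived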
A word of warning: this is extremely rigid by nature, since the templated method is not virtual. That is both a virtue and a problem of this pattern: it is good because if you need to change the templated method it is defined in a single place, and it is annoying when the hooks provided are not sufficient to customize the behavior.

Calling a member function from a base class pointer with varying parameters depending on the derived class

I'm pretty experienced in C++, but I find myself struggling with this particular design problem.
I want to have a base class, that I can stuff in a std::map, with a virtual function that can be called generically by a method that is querying the map. But I want to be able to call that function from a base class pointer with different parameters, depending on what the derived type is. Something functionally similar to the following wildly illegal example:
class Base
{
    virtual void doThing() = 0;
};

class Derived1 : public Base
{
    void doThing(int i, const std::string& s) {} // can't do that
};

class Derived2 : public Base
{
    void doThing(double d, std::vector<int>& v) {} // can't do that either
};

enum class ID {
    DERIVED1,
    DERIVED2
};

std::map<ID, std::unique_ptr<Base>> thingmap = { ... };

std::unique_ptr<Base>& getThing(ID i) { return thingmap[i]; }

int main(int argc, const char* argv[]) {
    auto& baseptr = getThing(ID::DERIVED1);
    baseptr->doThing(42, "hello world");
}
I don't want the caller to have to know what the derived type is, only that a Derived1 takes an int and a string. Downcasting isn't an option because the whole point of this is that I don't want the caller to have to specify the derived type explicitly. And C-style variable argument lists are yucky. :-)
Edited to clarify: I know exactly why the above can't possibly work, thank you. :-) This is library code and I'm trying to conceal the internals of the library from the caller to the greatest extent possible. If there's a solution it probably involves a variadic template function.
You can't do that.
Your map is filled with pointers to Base, so the static type does NOT have the prototypes you implemented in Derived1 or Derived2... And defining those overloads does not implement the pure virtual method doThing, so Derived1 and Derived2 are still abstract classes and therefore cannot be instantiated.
Worse, your getThing function only deals with Base, so the compiler would NEVER allow you to use the overloaded signatures, since they don't exist AT ALL in Base. There is no way to know the real class behind the pointer, since you don't use templates and implicit template argument deduction.
Your pattern cannot be done this way, period. Since you don't want to use either downcasting or explicitly specified child classes, you're stuck.
Even if you add all the possible prototypes to Base as pure virtual methods, both derived classes will still be abstract. And if you make them non-pure instead, you'll never be able to know which one is a NOP and which one is actually implemented, since that would require downcasting!
I think you made a common mistake, one that even expert developers make sometimes: you went straight into the design BEFORE determining your real ROOT needs.
What you are asking for looks like the core of a factory, and this is really not a good way to implement that design pattern and/or to design the specialized derived classes.

what is the difference between polymorphism and inheritance

I am confused about the concepts of inheritance and polymorphism. I mean, what is the difference between code re-usability and function overriding? Is it impossible to reuse a parent class function using inheritance, or is it impossible to override parent class members using polymorphism? There seems to be little difference to me.
class A
{
public:
    int a;
    virtual void get()
    {
        cout<<"welcome";
    }
};

class B : public A
{
    a = a + 1;   //why it is called code reuse
    void get()   //why it is called overriding
    {
        cout<<"hi";
    }
};
My doubt is about the difference between the code reuse and function overriding.
Let's start with your example.
class A
{
public:
    int a;
    virtual void get()
    {
        cout<<"welcome";
    }
};

class B : public A
{
    a = a + 1;   //why it is called code reuse
    void get()   //why it is called overriding
    {
        cout<<"hi";
    }
};
Inheritance: Here you are deriving class B from class A, which means that you can access all of its public variables and methods.
a = a + 1
Here you are using the variable a of class A: you are reusing a in class B, thereby achieving code reusability.
Polymorphism deals with how a program invokes a method depending on the thing it has to perform: in your example you are overriding the method get() of class A with the method get() of class B. So when you create an instance of class B and call get(), you'll get 'hi' in the console, not 'welcome'.
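To make the distinction concrete, here is a minimal compilable version of that idea (I moved a = a + 1 into a member function, since an assignment cannot appear directly at class scope):

#include <iostream>
using std::cout;

class A {
public:
    int a = 0;
    virtual void get() { cout << "welcome"; }
};

class B : public A {
public:
    void bump() { a = a + 1; }              // reuses A's member 'a' (inheritance)
    void get() override { cout << "hi"; }   // overrides A::get (polymorphism)
};

int main() {
    B b;
    b.bump();
    A* p = &b;
    p->get();   // prints "hi", not "welcome": the call is dispatched dynamically
}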
Function inheritance allows for abstraction of behaviour from a "more concrete" derived class(es) to a "more abstract" base class. (This is analogous to factoring in basic math and algebra.) In this context, more abstract simply means that less details are specified. It is expected that derived classes will extend (or add to) what is specified in the base class. For example:
class CommonBase
{
public:
    int getCommonProperty(void) const { return m_commonProperty; }
    void setCommonProperty(int value) { m_commonProperty = value; }
private:
    int m_commonProperty;
};

class Subtype1 : public CommonBase
{
    // Add more specific stuff in addition to inherited stuff here...
public:
    char getProperty(void) const { return m_specificProperty1; }
private:
    char m_specificProperty1;
};

class Subtype2 : public CommonBase
{
    // Add more specific stuff in addition to inherited stuff here...
public:
    float getProperty(void) const { return m_specificProperty2; }
private:
    float m_specificProperty2;
};
Note that in the above example, getCommonProperty() and setCommonProperty(int) are inherited from the CommonBase class, and can be used in instances of objects of type Subtype1 and Subtype2. So we have inheritance here, but we don't really have polymorphism yet (as will be explained below).
You may or may not want to instantiate objects of the base class, but you can still use it to collect/specify behaviour (methods) and properties (fields) that all derived classes will inherit. So with respect to code reuse, if you have more than one type of derived class that shares some common behaviour, you can specify that behaviour only once in the base class and then "reuse" that in all derived classes without having to copy it. For example, in the above code, the specifications of getCommonProperty() and setCommonProperty(int) can be said to be reused by each Subtype# class because the methods do not need to be rewritten for each.
Polymorphism is related, but it implies more. It basically means that you can treat objects that happen to be from different classes the same way because they all happen to be derived from (extend) a common base class. For this to be really useful, the language needs to support virtual functions (dynamic dispatch). That means that the function signatures can be the same across multiple derived classes (i.e., the signature is part of the common, abstract base class), but the call will do different things depending on the specific type of object.
So modifying the above example to add to CommonBase (but keeping Subtype1 and Subtype2 the same as before):
class CommonBase
{
public:
    int getCommonProperty(void) const { return m_commonProperty; }
    void setCommonProperty(int value) { m_commonProperty = value; }
    virtual void doSomething(void) = 0;
    virtual ~CommonBase() { }
private:
    int m_commonProperty;
};
Note that doSomething() is declared here as a pure virtual function in CommonBase (which means that you can never instantiate a CommonBase object directly -- it didn't have to be this way, I just did that to keep things simple). But now, if you have a pointer to a CommonBase object, which can be either a Subtype1 or a Subtype2, you can call doSomething() on it. This will do something different depending on the type of the object. This is polymorphism.
void foo(void)
{
    CommonBase * pCB = new Subtype1;
    pCB->doSomething();
    delete pCB;                  // safe: CommonBase has a virtual destructor
    pCB = new Subtype2;
    pCB->doSomething();          // Does something different...
    delete pCB;
}
In terms of the code sample you provided in the question, the reason get() is called "overriding" is because the behaviour specified in the B::get() version of the method takes precedence over ("overrides") the behaviour specified in the A::get() version of the method if you call get() on an instance of a B object (even if you do it via an A*, because the method was declared virtual in class A).
Finally, your other comment/question about "code reuse" there doesn't quite work as you specified it (since it's not in a method), but I hope it will be clear if you refer to what I wrote above. When you are inheriting behaviour from a common base class and you only have to write the code for that behaviour once (in the base class) and then all derived classes can use it, then that can be considered a type of "code reuse".
You can have parametric polymorphism without inheritance. In C++, this is implemented using templates. Wiki article:
http://en.wikipedia.org/wiki/Polymorphism_%28computer_science%29#Parametric_polymorphism
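As a quick illustration of parametric polymorphism via templates (my own example, not from the article):

#include <iostream>
#include <string>

// One definition works for any T that supports operator+;
// no base class and no virtual functions are involved.
template <typename T>
T add(T lhs, T rhs) { return lhs + rhs; }

int main() {
    std::cout << add(1, 2) << '\n';                                 // int
    std::cout << add(std::string("a"), std::string("b")) << '\n';   // std::string
}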

C++, abstract class and inheritance

I'm trying to process class instances two by two.
I have an abstract base class (IBase here) that contains a doStuff method.
This method will be overridden in extended classes in order to process all the other defined classes.
This is part of a library I'm building. I want the Base objects to be written by the library user. Each Base class needs to interact with other Base objects through the doStuff method. The container is needed to handle multiple Base objects.
It is not the first time I have run into this problem, but I can't remember how I solved it the last times. This kind of class can be used for a lot of things. Here, it is a collision detection system: IBase represents an abstract HitBox and Container represents the Scene where collisions occur. In this case, doStuff checks for transitions between hit boxes and Container::process is used to implement the optimizing algorithm (quadtree, ...).
I built those class in this way:
class IBase {
public:
    virtual void doStuff(IBase* base) = 0;
};

class Base : public IBase {
public:
    virtual void doStuff(Base* base) {
        foobar();
    }
    virtual void doStuff(IBase* base) {
        // irrelevant
    }
};

class Container {
public:
    void process() {
        for (std::list<IBase*>::iterator it = base_list.begin(); it != base_list.end(); it++) {
            for (std::list<IBase*>::iterator jt = std::next(it); jt != base_list.end(); jt++) {
                (*it)->doStuff(*jt);
            }
        }
    }
private:
    std::list<IBase*> base_list;
};
But in the loop, I can't reach void Base::doStuff(Base*) when working with two Base objects.
I can only call Base::doStuff(IBase*) which is not something I want.
Any help on this one? I understand the problem, but I can't see a solution to it. Is this a good way to handle it, or do I need to rethink my architecture? How would you do this? I think a design pattern must exist for such a problem, but I didn't find any that fits.
Thanks
C++ does not support contravariance for arguments. See also Why is there no parameter contra-variance for overriding?.
You might be better off explicitly invoking doStuff(Base* base) from within the doStuff(IBase* base) body.
Your objects, when dereferenced from *it and *jt, are referenced as IBase objects, not Base objects. This means that only methods from IBase can be called.
Your virtual method:
virtual void doStuff(Base* base) { ... }
is not overriding anything. It is creating a new virtual method that is accessible from Base downward only. When you call doStuff from a IBase pointer, it's going to call:
virtual void doStuff(IBase* base) { ... }
which matches the signature defined in IBase.
If you want to execute your foobar function, you should do some kind of check on base when it's passed into the overriding doStuff, cast it to Base* once you're sure it's safe, then work with it as needed.
virtual void doStuff(IBase* base) {
    // not irrelevant
    if (base->isBase())
    {
        foobar();
    }
}
And finally, as previously suggested, make doStuff public.
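One possible way to realize the "check, then cast" idea is sketched below (my own minimal example, using dynamic_cast instead of a hand-written isBase, and with doStuff public as suggested):

#include <iostream>

struct IBase {
    virtual void doStuff(IBase* other) = 0;
    virtual ~IBase() = default;
};

struct Base : IBase {
    void doStuff(IBase* other) override {
        // Check-and-cast: if the other object really is a Base,
        // dispatch to the Base-specific overload below.
        if (Base* b = dynamic_cast<Base*>(other)) {
            doStuff(b);
        }
        // otherwise: handle a plain IBase here
    }
    virtual void doStuff(Base* /*other*/) {   // Base-to-Base interaction
        std::cout << "foobar\n";
    }
};

int main() {
    Base a, b;
    IBase* p = &a;
    p->doStuff(&b);   // prints "foobar"
}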

Force all classes to implement / override a 'pure virtual' method in multi-level inheritance hierarchy

In C++, why does a pure virtual method mandate its compulsory overriding only in its immediate children (for object creation), but not in the grandchildren and so on?
struct B {
    virtual void foo () = 0;
};

struct D : B {
    virtual void foo () { ... };
};

struct DD : D {
    // ok! ... if 'B::foo' is not overridden; it will use 'D::foo' implicitly
};
I don't see any big deal in leaving this feature out.
For example, from a language design point of view, it could have been possible that struct DD is allowed to use D::foo only if it has some explicit statement like using D::foo;. Otherwise it has to override foo compulsorily.
Is there any practical way of having this effect in C++?
I found one mechanism, where at least we are prompted to announce the overridden method explicitly. It's not the perfect way though.
Suppose we have a few pure virtual methods in the base class B:
class B {
    virtual void foo () = 0;
    virtual void bar (int) = 0;
};
Among them, suppose we want only foo() to be overridden throughout the whole hierarchy. For simplicity, we have to have a virtual base class which contains that particular method. It has a template constructor, which just accepts a pointer to a member function with the same signature as that method.
class Register_foo {
protected:
    virtual void foo () = 0; // declare here
    template<typename T>     // this matches the signature of 'foo'
    Register_foo (void (T::*)()) {}
};

class B : public virtual Register_foo { // <---- virtual inheritance
protected:
    virtual void bar (int) = 0;
    B () : Register_foo(&B::foo) {} // <--- explicitly pass the function name
};
Every subsequent child class in the hierarchy would have to register a foo inside its every constructor explicitly. e.g.:
struct D : B {
    D () : Register_foo(&D::foo) {}
    virtual void foo () {};
};
This registration mechanism has nothing to do with the business logic. The child class can still choose to register its own foo, its parent's foo, or even some other method with a matching signature, but at least that choice is announced explicitly.
In your example, you have not declared D::foo pure; that is why it does not need to be overridden. If you want to require that it be overridden again, then declare it pure.
If you want to be able to instantiate D, but force any further derived classes to override foo, then you can't. However, you could derive yet another class from D that redeclares it pure, and then classes derived from that must override it again.
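That re-abstraction trick looks like this (a quick sketch, names mine):

struct B { virtual void foo() = 0; virtual ~B() = default; };

struct D : B {
    void foo() override {}        // D is concrete
};

struct DAbstract : D {
    void foo() override = 0;      // redeclared pure: DAbstract is abstract again
};

struct DD : DAbstract {
    void foo() override {}        // DD must override foo again to be instantiable
};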
What you're basically asking for is to require that the most derived class implement the function. And my question is: why? About the only time I can imagine this to be relevant is a function like clone() or another(), which returns a new instance of the same type. And that's what you really want to enforce: that the new instance has the same type; even there, where the function is actually implemented is irrelevant. And you can enforce that:
class Base
{
    virtual Base* doClone() const = 0;
public:
    Base* clone() const
    {
        Base* results = doClone();
        assert( typeid(*results) == typeid(*this) );
        return results;
    }
};
(In practice, I've never found people forgetting to override clone to be a real problem, so I've never bothered with something like the above. It's a generally useful technique, however, anytime you want to enforce post-conditions.)
A pure virtual means that to be instantiated, the pure virtual must be overridden in some descendant of the class that declares the pure virtual function. That can be in the class being instantiated or any intermediate class between the base that declares the pure virtual, and the one being instantiated.
It's still possible, however, to have intermediate classes that derive from one with a pure virtual without overriding that pure virtual. Like the class that declares the pure virtual, those classes can only be used as base classes; you can't create instances of those classes, only of classes that derive from them, in which every pure virtual has been implemented.
As far as requiring that a descendant override a virtual even if an intermediate class has already done so, the answer is no, C++ doesn't provide anything intended to do that. It almost seems like you might be able to hack something together using multiple (probably virtual) inheritance so that the implementation in the intermediate class would be present but attempting to use it would be ambiguous, but I haven't thought that through enough to be sure how (or if) it would work -- and even if it did, it would only do its trick when trying to call the function in question, not when merely instantiating an object.
Is there any practical way of having this effect in C++?
No, and for good reason. Imagine maintenance in a large project if this were part of the standard. Some base class or intermediate base class needs to add some public interface, an abstract interface. Now every single child and grandchild thereof would need to be changed and recompiled (even if it were as simple as adding using D::foo; as you suggested). You probably see where this is heading: hell's kitchen.
If you really want to enforce implementation, you can force implementation of some other pure virtual in the child class(es). This can also be done using the CRTP pattern; see the sketch below.
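A sketch of what such CRTP enforcement could look like (my own construction, assuming C++11; the check only runs when an object of the class is actually constructed):

#include <type_traits>

struct B {
    virtual void foo() = 0;
    virtual ~B() = default;
};

// CRTP helper: each concrete class derives through RequireFoo<Itself, ItsParent>.
// The static_assert passes only if Derived declares its own foo(); an inherited
// foo() would have type void (SomeBase::*)() instead of void (Derived::*)().
template <typename Derived, typename Parent = B>
struct RequireFoo : Parent {
    RequireFoo() {
        static_assert(std::is_same<decltype(&Derived::foo),
                                   void (Derived::*)()>::value,
                      "this class must declare its own foo()");
    }
};

struct D : RequireFoo<D> {
    void foo() override {}
};

struct DD : RequireFoo<DD, D> {
    void foo() override {}   // removing this trips the static_assert when a DD is built
};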

Getting OOP right

Ok, this is my problem. I have the following classes:
class Job {
    bool isComplete() {}
    void setComplete() {}
    //other functions
};

class SongJob : public Job {
    vector<Job> v;
    string getArtist() {}
    void setArtist() {}
    void addTrack() {}
    string getTrack() {}
    // other functions
};
// These were already implemented
Now I want to implement a VideoJob and derive it from Job. But here is my problem. I also have the following function, which was written to work only with SongJob:
void process(SongJob s)
{
    // not the real functions
    s.setArtist();
    ..............
    s.getArtist();
    .............
    s.getArtist();
    ...............
    s.setArtist();
}
Here I just want to show that the function uses only derived-object methods. So if I have another object derived from Job, I will need to change the parameter to Job, but then the compiler would not know about those functions, and I don't want to test what kind of object each one is and then cast it so I can call the correct functions.
I could put all the functions in the base class, because then I would have no problem, but I don't know if this is correct OOP; if one class deals with songs and the other with videos, I think good OOP means having two classes.
If I didn't make myself clear, please say so and I will try explaining better.
In short, I want to use polymorphism.
It is totally fine to put all the things that the classes SongJob and VideoJob have in common into a common base-class. However, this will cause problems once you want to add a subclass of Job that has nothing to do with artists.
There are some things to note about the code you have posted. First, your class Job is apparently not an abstract base class. This means that you can have jobs that are just jobs. Not SongJob and not VideoJob. If you want to make it clear that there can not be a simple Job, make the base-class abstract:
class Job {
    virtual bool isComplete() = 0;
    virtual void setComplete() = 0;
    //other functions
};
Now, you cannot create instances of Job:
Job job; // compiler-error
std::vector<Job> jobs; // compiler-error
Note that the functions are now virtual, which means that subclasses can override them. The = 0 at the end means that subclasses have to provide an implementation of these functions (they are pure virtual member functions).
Secondly, your class SongJob has a member std::vector<Job>. This is almost certainly not what you want. If you add a SongJob to this vector, it will become a normal Job. This effect is called slicing. To prevent it, you'd have to make it a std::vector<Job*>.
There is much more to say here, but that would go too far. I suggest you get a good book.
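A tiny demonstration of the slicing issue (the types are mine, and std::make_unique needs C++14):

#include <memory>
#include <string>
#include <vector>

struct Job { virtual ~Job() = default; };
struct SongJob : Job { std::string artist = "someone"; };

int main() {
    std::vector<Job> byValue;
    byValue.push_back(SongJob{});   // compiles, but copies only the Job subobject:
                                    // the artist (and the dynamic type) is sliced away

    std::vector<std::unique_ptr<Job>> byPointer;
    byPointer.push_back(std::make_unique<SongJob>());   // dynamic type preserved
}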
In your base class Job you could add those methods as virtual methods, so that a class deriving from Job may or may not override them.
In your SongJob class you override the methods, and you don't override them in VideoJob.
In void process(), pass a pointer to the base class Job:
void process(Job *s)
It will then call the appropriate methods depending on the object s is pointing to, which here will be a SongJob object.
In C++, you have to do two things to get polymorphism to work:
Access polymorphic functions by a reference (&) or pointer (*) to a base type
Define the polymorphic functions as virtual in the base type
So, change these from:
class Job {
    bool isComplete() {}
    void setComplete() {}
};

void process(SongJob s)
{
    // ...
}
To:
class Job {
public: // You forgot this...
    virtual bool isComplete() { }
    virtual void setComplete() { }
};

void process(Job& s)
{
    // ...
}
If you can't define all the functionality you need inside process on your base class (if all the member functions you'd want don't apply to all the derived types), then you need to turn process into a member function on Job, and make it virtual:
class Job {
public:
    virtual ~Job() { }   // virtual destructor, so deleting through a Job* is safe
    virtual bool isComplete() { }
    virtual void setComplete() { }
    virtual void process() = 0;
};
// ...
int main(int argc, char* argv[])
{
    SongJob sj;
    Job& jobByRef = sj;
    Job* jobByPointer = new SongJob();

    // These call the derived implementation of process, on SongJob
    jobByRef.process();
    jobByPointer->process();

    delete jobByPointer;
    jobByPointer = new VideoJob();

    // This calls the derived implementation of process, on VideoJob
    jobByPointer->process();

    delete jobByPointer;   // don't leak the second job either
    return 0;
}
And of course, you'll have two different implementations of process. One for each class type.
People will tell you all sorts of "is-a" vs "has-a" stuff, and all sorts of complicated things about this silly "polymorphism" thing; and they're correct.
But this is basically the point of polymorphism, in a utilitarian sense: it is so you don't have to go around checking what type each object is before calling functions on it. You can just call functions on a base type, and the right derived implementation will get called in the end.
BTW, in C++, virtual ... someFunc(...) = 0; means that the type the function is declared in cannot be instantiated, and the function must be implemented in a derived class. It is called a "pure virtual" function, and the class it is defined on becomes "abstract".
Your problem comes from the fact that process is a free function that takes the object as a parameter. You should have a process method on the Job class and override this method in your derived classes.
use pure virtual functions:
class Job
{
    virtual string getArtist() = 0;
};