I am a bit weak at design, and I wonder whether it's good design to have plain virtual methods (not only pure virtual ones) in an interface. I have a class that is some kind of interface:
class IModel {
public:
    virtual void initialize(...);
    virtual void render(...);
    virtual int getVertexCount() const;
    virtual int getAnotherField() const;
};
The initialize and render methods definitely need to be reimplemented, so they are good candidates for pure virtual methods. However, the last two methods are very simple and practically always have the same implementation (just returning some field). Can I leave them as virtual methods with a default implementation, or is it better to make them pure virtual so they must be reimplemented, since this is an interface?
We have to point out some differences:
There is no such thing as "some kind of Interface": is this class supposed to be an Interface or an Abstract Class?
If it's supposed to be an Interface then the answer is: all its methods must be pure virtual (no implementation) and it must not contain fields, not even one. The most you can (must, actually) do is, like jaunchopanza said, give an empty body to the virtual destructor, thus allowing derived classes to be destroyed correctly.
If, instead, it's supposed to be an Abstract Class, then you're free to add the fields m_vertexCount and m_anotherField (I suppose) and implement getVertexCount() and getAnotherField() as you please. However, you should not name it IModel, because the I prefix should be used only for Interfaces.
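For illustration, a minimal sketch of the Interface flavour described above, including the empty-bodied virtual destructor (the parameter lists are placeholders):

class IModel {
public:
    virtual ~IModel() {}                      // empty body, so derived objects destruct correctly
    virtual void initialize() = 0;            // pure virtual: no implementation here
    virtual void render() = 0;
    virtual int getVertexCount() const = 0;
    virtual int getAnotherField() const = 0;  // no fields in the interface itself
};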
Edit: I think I'm one of those "Believers" of which Bo Persson is talking about :)
You are facing a trade-off between code repetition and readability. The reader of your code gets good help from every pure interface and from every non-overridden method. However, if you leave the getters pure virtual, the same trivial implementation will be duplicated in every subclass. Whether or not you should provide a default implementation depends on the likelihood that the default implementation will change and will then need to be changed all over the place.
Without knowing these details, a hard yes-or-no answer cannot be given.
One thing you could do is make IModel an interface and provide a base class, e.g. ModelBase, that implements the common/repeating functionality.
class IModel
{
public:
    virtual ~IModel() = default;
    virtual void initialize(...) = 0;
    virtual void render(...) = 0;
    virtual int getVertexCount() const = 0;
    virtual int getAnotherField() const = 0;
};
class ModelBase : public IModel
{
public:
    // common functions
    int getVertexCount() const override { return vertexCount_; }
    int getAnotherField() const override { return anotherField_; }
protected:
    int vertexCount_ = 0, anotherField_ = 0;
};
class MyModel : public ModelBase
{
public:
    void initialize(...) override { ... }
    void render(...) override { ... }
};
The one downside of this approach is that there will be some (probably negligible) performance penalty due to extra virtual functions and loss of optimizations by the compiler.
I'm using a class to declare an interface. I just want to define a method signature; this method must be implemented in any non-abstract subclass. I don't need the method to be virtual. This is the default behaviour in C#, BTW (I came from the C#/Java world).
However, it seems this is not possible in C++. I can either declare the method in the regular way:
void Foo::Method()
and then it is not mandatory to implement it, or declare the method as "pure virtual":
virtual void Foo::Method() = 0;
and then the method becomes virtual, but I want to avoid that to save a little performance.
It seems I want to have something like this:
void Foo::Method() = 0;
but that is a compilation error.
If you're planning on using the derived class from template code, i.e. compile-time polymorphism, then you only need to document the expected signature: the code using a derived class simply won't compile and link if the used function isn't implemented.
Otherwise, for runtime polymorphism the method needs to be virtual, or else it won't be called.
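A minimal sketch of that compile-time approach (renderAll, SomeModel, and draw() are made-up names for illustration):

template <typename Model>
void renderAll(Model& m) {
    m.draw();   // the documented requirement: Model must provide draw()
}

struct SomeModel {
    void draw() {}
};

int main() {
    SomeModel m;
    renderAll(m);   // compiles; remove SomeModel::draw() and this line errors
}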
I believe that you might be confused with regard to how the C# version works:
class A {
    public void NonVirt() { Console.Out.WriteLine("A:NonVirt"); }
    public virtual void Virt() { Console.Out.WriteLine("A:Virt"); }
}
class B : A {
    public new void NonVirt() { Console.Out.WriteLine("B:NonVirt"); }
    public override void Virt() { Console.Out.WriteLine("B:Virt"); }
}
class Program {
    static void Main(string[] args) {
        A x = new B();
        x.NonVirt(); // static type A decides: calls A.NonVirt
        x.Virt();    // dynamic dispatch: calls B.Virt
    }
}
This will output
A:NonVirt
B:Virt
So even in C#, you need to make the method virtual if you want to call the derived implementation.
If the method must be implemented in all non-abstract subclasses, that means you need to call it through a base class pointer. This in turn means you need to make it virtual, the same as in C# (and likely in Java, but I am not sure).
Btw, the price of a virtual call is a few nanoseconds on modern CPUs, so I am not sure avoiding it is worth it, but let's say that it is.
If you want to avoid the cost of virtual calls, you should use compile-time polymorphism via templates.
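For instance, here is a hedged sketch of the CRTP flavour of compile-time polymorphism (all names are illustrative); the dispatch is resolved at compile time, with no virtual call:

#include <iostream>

template <typename Derived>
struct Shape {
    void draw() {
        // Static dispatch: the derived implementation is chosen at compile time.
        static_cast<Derived*>(this)->drawImpl();
    }
};

struct Circle : Shape<Circle> {
    void drawImpl() { std::cout << "circle\n"; }
};

int main() {
    Circle c;
    c.draw();   // calls Circle::drawImpl directly, no vtable involved
}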
There is no notion of an interface in C++. The only way to achieve your goal is to create a base class that declares as pure virtual (virtual and = 0) all the methods which must actually be defined in subclasses.
class IBase {
public:
    // ...
    virtual void f1() = 0;
    // ...
};
That class will be abstract if all its methods are declared like f1, which is the closest to an interface you can get.
The concept of interface in Java is a bit like a contract with regard to classes implementing it. The compiler enforces the constraints of the contract by checking the content of the implementors. This notion of contract or explicit structural subtyping does not exist formally in C++.
However, you can manually verify that such constraints are respected by defining a template which expects as a parameter a class with the required methods or attributes, and instantiating that template with the classes to be verified. This could be considered a form of unit testing, I suppose.
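As a sketch of that manual verification (the required method names here are assumptions for illustration): a function template that only compiles when the checked class provides the expected members, with an explicit instantiation acting as the compile-time check:

struct MyModel {
    void render() {}
    int getVertexCount() const { return 0; }
};

template <typename T>
void check_model_contract() {
    // Taking member-function addresses forces the signatures to exist and match.
    void (T::*render)()      = &T::render;
    int  (T::*count)() const = &T::getVertexCount;
    (void)render; (void)count;  // silence unused-variable warnings
}

// The explicit instantiation is the "unit test": it fails to compile
// if MyModel does not satisfy the contract.
template void check_model_contract<MyModel>();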
I'm working on a serialization system, and all my serializable classes implement
virtual void serialize(Buffer buffer);
When a pointer is serialized, I need to call the serialize() function of the object's actual class, not that of any of its parents, even if the pointer has a parent type. I've been running into a lot of bugs because I don't notice that a child class doesn't have serialize() at all, so the parent's serialize() is silently called. For example:
class A
{
public:
    virtual void serialize();
};
class B : public A
{
public:
    virtual void serialize();
};
class C : public B
{
public:
    virtual void serialize();
};
void doSerialization(A *a)
{
    a->serialize();
}
C *c = new C();
doSerialization(c);
Right now, if C didn't have a serialize() function, B::serialize() would be silently called. I'd prefer an error message, or anything else that would at least point it out to me. Is there any keyword in C++ (even C++11) that would do this?
There's no easy way of doing so in C++.
There is a hack though, explained in this answer, using virtual inheritance and forcing your classes to register which serialize method they are using.
Use a pure virtual function in the parent:
virtual void serialize(Buffer buffer) = 0;
At compile time, you can only do that by making the function pure virtual in every class that is not final:
class A
{
public:
    virtual void serialize() = 0;
};
class B : public A
{
public:
    virtual void serialize() = 0;
};
class C final : public B
{
public:
    virtual void serialize();
};
Of course that means that all concrete classes in your design need to be final. If you must inherit from concrete classes, you can't enforce this at compile time.
right now, if C didn't have a serialize function B::serialize() would be silently called.
No, you'll get a linker error (assuming C declares serialize() without defining it). As I see it, that's what you want.
One way to solve this is to not inherit through several layers, so instead of class C : public B, you use class C : public A. Of course, that's not necessarily a suggestion for all scenarios.
At some point, sooner or later, you do have to leave things in the hands of the programmer.
There may be some ways to check this at run time as well: maybe check whether typeid(*this) matches the class whose serialize() is running, or some such?
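A sketch of that typeid idea (a run-time check, not a compile-time one; the assertion messages are mine). Each implementation asserts that it is not being reached from a more-derived object that forgot its own override:

#include <cassert>
#include <typeinfo>

class A {
public:
    virtual ~A() {}
    virtual void serialize() {
        assert(typeid(*this) == typeid(A) && "derived class forgot serialize()");
        // ... serialize A's members ...
    }
};

class B : public A {
public:
    void serialize() {
        // Fires if *this is really a C (or deeper) that didn't override serialize().
        assert(typeid(*this) == typeid(B) && "derived class forgot serialize()");
        // ... serialize B's members ...
    }
};

Note that this forbids chaining to the base implementation (the base's own check would fire), so it only fits designs where each class serializes everything itself.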
I have read so many blogs, and I understand how to use virtual functions in C++. But I still don't understand why we use virtual functions. Can you give me a real-world example so that I can more easily visualize the actual meaning of virtual functions?
An important thing to mention is that inheritance (for which the keyword virtual is fundamental) should not be used for the sole purpose of code reuse; use delegation for that.
Delegation would be when we have a class, say BroadbandConnection, with a method called connection(). Then your manager says we want to add encryption, so you create a class BroadbandConnectionWithEncryption. Your natural instinct may be to use inheritance and derive the new class BroadbandConnectionWithEncryption from BroadbandConnection.
The drawback is that the creator of the initial class did not design it for inheritance, so you would need to change its definition to make the method connection() virtual before you could override its behavior in the derived class. That is not always possible or ideal. A better idea is to use delegation here for the purpose of code reuse.
class BroadBandConnection
{
public:
void Connection (string password)
{
//connection code.
}
};
class BroadBandConnectionWithEncryption
{
public:
void Connection (string password)
{
mbroadbandconnection.Connection(password);
//now do some stuff to zero the memory or
//do some encryption stuff
}
private:
BroadBandConnection mbroadbandconnection;
};
The keyword virtual is used for the purpose of polymorphism. As the name suggests, polymorphism is the ability for an object to have more than one form. This sort of decision is made at the time of designing an interface or class.
class IShape
{
public:
    virtual ~IShape() {}
    virtual void Draw() = 0;
};
class Square : public IShape
{
public:
    void Draw()
    {
        //draw square on screen
    }
};
class Circle : public IShape
{
public:
    void Draw()
    {
        //draw circle on screen
    }
};
I made Draw() pure virtual with the = 0. I could have left this out and added some default implementation. Pure virtual makes sense for Interfaces where there is no reasonable default implementation.
What this lets me do is pass an IShape pointer around to various methods, and they do not need to be concerned with what I have just given them. All they know is that whatever I provide supports the ability of a shape to draw itself.
IShape* circle = new Circle ();
IShape* square = new Square ();
void SomeMethod (IShape* someShape)
{
someShape->Draw(); //This will call the correct functionality of draw
}
In the future, as people think of new shapes, they can derive from IShape, and as long as they implement some functionality for Draw(), they can pass their object to SomeMethod.
Now, a real-life example. I have a program with a GUI with three tabs. Each tab is an object of a class that derives from a common base, TabBase, which has a virtual function OnActivate(). When a tab is activated, the dispatcher calls it on the current tab. There is some common action, and there are actions specific to each tab; this is implemented via virtual functions.
The benefit is that the controller does not need to know what kind of tab it is. It stores an array of TabBase pointers, and just calls OnActivate() on them. The magic of virtual functions makes sure the right override is called.
class TabBase
{
public:
    virtual void OnActivate()
    {
        //Do something...
    }
};
class SearchTab : public TabBase
{
public:
    void OnActivate() //An override
    {
        TabBase::OnActivate(); //Still need the basic setup
        //And then set up the things that are specific to the search tab
    }
};
We have a base class (animal) that has a method, say(), that can be implemented differently by its children. When we declare this method virtual, we can call it through the base class and the child's implementation will be used. You don't have to use virtual when you call a child's own methods directly, but you do when you call them through the parent.
For example, if you have a vector of animals, each one of which is different: declare the method say() as virtual, call it through the animal class, and the corresponding child's version will be called.
Correct me if I'm wrong, that's how I understood it.
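A minimal sketch of that vector-of-animals idea (the class and method names are illustrative):

#include <iostream>
#include <memory>
#include <vector>

struct Animal {
    virtual ~Animal() {}
    virtual void say() { std::cout << "...\n"; }
};

struct Dog : Animal {
    void say() { std::cout << "woof\n"; }
};

struct Cat : Animal {
    void say() { std::cout << "meow\n"; }
};

int main() {
    std::vector<std::unique_ptr<Animal>> animals;
    animals.push_back(std::unique_ptr<Animal>(new Dog));
    animals.push_back(std::unique_ptr<Animal>(new Cat));
    for (auto& a : animals)
        a->say();   // virtual dispatch: prints "woof" then "meow"
}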
They actually give an example on Wikipedia:
http://en.wikipedia.org/wiki/Virtual_function
using animals. Animal is the superclass; all animals eat (the superclass's virtual function). Each animal may eat differently from all the other animals (overriding the virtual function). If I have a list of arbitrary animals and I call the eat function, each will display its own eating habits.
If you are familiar with Java, this should be easy. In Java, ALL class methods are effectively virtual: if you override a method in a derived class and call it via a base class reference, the override is called, not the base version.
That's not the default behavior in C++. If you want a function to behave that way, you have to declare it as virtual in the base class. Easy enough.
Java is chock-full of virtual functions. It just does not have an explicit keyword for them.
The purpose of virtual functions is to achieve dynamic dispatch.
You say you are familiar with Java, so for a real-world use of virtual functions, think of any place in Java where you would have used an interface or used @Override on a public/protected method.
The decision to use virtual functions is a simple matter. You just need to know when you'd want to override a base method. Take the following code as an example:
class animal
{
public:
void sound()
{
cout << "nothing";
}
};
class bird : public animal
{
public:
void sound()
{
cout << "tweet";
}
};
In this case, I'd want bird to override sound(). But what if I hadn't declared it virtual? This is what would happen:
animal * a = new bird;
a->sound();
Output:
nothing
The screen would say nothing, because for all intents and purposes C++ only sees an animal. However, if you declare the method virtual, it knows to search for the lowest override in the class hierarchy. Try it again:
class animal
{
public:
    virtual void sound()
    {
        cout << "nothing";
    }
};
class bird : public animal
{
public:
void sound()
{
cout << "tweet";
}
};
animal * a = new bird;
a->sound();
Output:
tweet
Hope this helps.
Ok, this is my problem. I have the following classes:
class Job {
bool isComplete() {}
void setComplete() {}
//other functions
};
class SongJob: public Job {
vector<Job> v;
string getArtist() {}
void setArtist() {}
void addTrack() {}
string getTrack() {}
// other functions
};
// These were already implemented
Now I want to implement a VideoJob and derive it from Job. But here is my problem: I also have the following function, which was written to work only with SongJob:
void process(SongJob s)
{
    // not the real functions
    s.setArtist();
    // ...
    s.getArtist();
    // ...
    s.getArtist();
    // ...
    s.setArtist();
}
Here I just want to show that the function uses only the derived object's methods. So if I have another object derived from Job, I would need to change the parameter type to Job, but then the compiler would not know about those functions, and I don't want to test what kind of object each one is and then cast it so I can call the correct function.
So it would be fine to put all the functions in the base class, because then I would have no problem, but I don't know if that is correct OOP; if one class deals with songs and the other with videos, I think good OOP means having two classes.
If I didn't make myself clear, please say so and I will try explaining better.
And in short: I want to use polymorphism.
It is totally fine to put all the things that the classes SongJob and VideoJob have in common into a common base-class. However, this will cause problems once you want to add a subclass of Job that has nothing to do with artists.
There are some things to note about the code you have posted. First, your class Job is apparently not an abstract base class. This means that you can have jobs that are just jobs, not a SongJob and not a VideoJob. If you want to make it clear that there cannot be a plain Job, make the base class abstract:
class Job {
public:
    virtual bool isComplete() = 0;
    virtual void setComplete() = 0;
    //other functions
};
Now, you cannot create instances of Job:
Job job; // compiler-error
std::vector<Job> jobs; // compiler-error
Note that the functions are now virtual, which means that subclasses can override them. The = 0 at the end means that subclasses have to provide an implementation of these functions (they are pure virtual member functions).
Secondly, your class SongJob has a member std::vector<Job>. This is almost certainly not what you want. If you add a SongJob to this vector, it will be reduced to a plain Job; this effect is called slicing. To prevent it, you'd have to make it a std::vector<Job*>.
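A short sketch of slicing and the pointer fix, using the original non-abstract Job and a made-up describe() method:

#include <iostream>
#include <vector>

struct Job {
    virtual ~Job() {}
    virtual void describe() { std::cout << "plain job\n"; }
};

struct SongJob : Job {
    void describe() { std::cout << "song job\n"; }
};

int main() {
    std::vector<Job>  byValue;
    std::vector<Job*> byPointer;

    SongJob song;
    byValue.push_back(song);      // sliced: only the Job part is copied
    byPointer.push_back(&song);   // no slicing

    byValue[0].describe();        // prints "plain job"
    byPointer[0]->describe();     // prints "song job"
}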
There is much more to say here, but that would go too far. I suggest you get a good book.
In your base class Job, you could declare those methods as virtual, so that a class deriving from Job may or may not override them.
In your SongJob class you override the methods, and you don't override them in VideoJob.
In process(), pass a pointer to the base class Job:
void process(Job *s)
It will then call the appropriate methods depending on the actual type of the object s points to, which in this case will be a SongJob.
In C++, you have to do two things to get polymorphism to work:
Access polymorphic functions by a reference (&) or pointer (*) to a base type
Define the polymorphic functions as virtual in the base type
So, change these from:
class Job {
bool isComplete() {}
void setComplete() {}
};
void process(SongJob s)
{
// ...
}
To:
class Job {
public: // You forgot this...
    virtual bool isComplete() { return false; } // placeholder body
    virtual void setComplete() { }
};
void process(Job& s)
{
// ...
}
If you can't define all the functionality you need inside process on your base class (if all the member functions you'd want don't apply to all the derived types), then you need to turn process into a member function on Job, and make it virtual:
class Job {
public:
    virtual bool isComplete() { return false; } // placeholder body
    virtual void setComplete() { }
    virtual void process() = 0;
};
// ...
int main(int argc, char* argv[])
{
SongJob sj;
Job& jobByRef = sj;
Job* jobByPointer = new SongJob();
// These call the derived implementation of process, on SongJob
jobByRef.process();
jobByPointer->process();
delete jobByPointer;
jobByPointer = new VideoJob();
// This calls the derived implementation of process, on VideoJob
jobByPointer->process();
return 0;
}
And of course, you'll have two different implementations of process, one for each class type.
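For instance (a sketch continuing the Job class above; the bodies are placeholders), the two overrides might look like:

class SongJob : public Job {
public:
    void process() {
        // song-specific work: artists, tracks, ...
    }
};

class VideoJob : public Job {
public:
    void process() {
        // video-specific work
    }
};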
People will tell you all sorts of "is-a" vs "has-a" stuff, and all sorts of complicated things about this silly "polymorphism" thing; and they're correct.
But this is basically the point of polymorphism, in a utilitarian sense: you don't have to go around checking what type each object is before calling functions on it. You can just call functions on a base type, and the right derived implementation gets called in the end.
BTW, in C++, virtual ... someFunc(...) = 0; means that the type the function is declared in cannot be instantiated, and the function must be implemented in a derived class. It is called a "pure virtual" function, and the class it is declared in becomes "abstract".
Your problem comes from the fact that you're passing the object to a free process function. You should have a process method on the Job class and override it in your derived classes.
Use pure virtual functions:
class Job
{
public:
    virtual string getArtist() = 0;
};
I know that it's OK for a pure virtual function to have an implementation. However, why is that allowed? Is there a conflict between the two concepts? What's the usage? Can anyone offer an example?
In Effective C++, Scott Meyers gives the example that it is useful when you are reusing code through inheritance. He starts with this:
struct Airplane {
virtual void fly() {
// fly the plane
}
...
};
struct ModelA : Airplane { ... };
struct ModelB : Airplane { ... };
Now, ModelA and ModelB are flown the same way, and that's believed to be a common way to fly a plane, so the code is in the base class. However, not all planes are flown that way, and we intend planes to be polymorphic, so it's virtual.
Now we add ModelC, which must be flown differently, but we make a mistake:
struct ModelC : Airplane { ... (no fly function) };
Oops. ModelC is going to crash. Meyers would prefer the compiler to warn us of our mistake.
So, he makes fly pure virtual in Airplane while keeping an implementation, and then in ModelA and ModelB puts:
void fly() { Airplane::fly(); }
Now unless we explicitly state in our derived class that we want the default flying behaviour, we don't get it. So instead of just the documentation telling us all the things we need to check about our new model of plane, the compiler tells us too.
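Put together, a minimal sketch of that pattern (my reconstruction, not Meyers' exact code):

#include <iostream>

struct Airplane {
    virtual ~Airplane() {}
    virtual void fly() = 0;   // pure virtual, yet defined below
};

// A pure virtual function may still be given an out-of-line definition:
void Airplane::fly() { std::cout << "default flying\n"; }

struct ModelA : Airplane {
    void fly() { Airplane::fly(); }   // explicit opt-in to the default behaviour
};

struct ModelC : Airplane {};   // forgot fly(): the class is abstract

int main() {
    ModelA a;
    a.fly();       // prints "default flying"
    // ModelC c;   // compile error: exactly the warning we wanted
}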
This does the job, but I think it's a bit weak. Ideally we instead have a BoringlyFlyable mixin containing the default implementation of fly, and reuse code that way, rather than putting code in a base class that assumes certain things about airplanes which are not requirements of airplanes. But that requires CRTP if the fly function actually does anything significant:
#include <iostream>
struct Wings {
void flap() { std::cout << "flapping\n"; }
};
struct Airplane {
Wings wings;
virtual void fly() = 0;
};
template <typename T>
struct BoringlyFlyable {
void fly() {
// planes fly by flapping their wings, right? Same as birds?
// (This code may need tweaking after consulting the domain expert)
static_cast<T*>(this)->wings.flap();
}
};
struct PlaneA : Airplane, BoringlyFlyable<PlaneA> {
void fly() { BoringlyFlyable<PlaneA>::fly(); }
};
int main() {
PlaneA p;
p.fly();
}
When PlaneA declares inheritance from BoringlyFlyable, it is asserting via interface that it is valid to fly it in the default way. Note that BoringlyFlyable could define pure virtual functions of its own: perhaps getWings would be a good abstraction. But since it's a template it doesn't have to.
I've a feeling that this pattern can replace all cases where you would have provided a pure virtual function with an implementation: the implementation can instead go in a mixin, which classes can inherit if they want it. But I can't immediately prove that (for instance, if Airplane::fly uses private members then it requires considerable redesign to do it this way), and arguably CRTP is a bit high-powered for the beginner anyway. Also, it's slightly more code that doesn't actually add functionality or type safety; it just makes explicit what is already implicit in Meyers' design, that some things can fly just by flapping their wings whereas others need to do other stuff instead. So my version is by no means a total shoo-in.
Was addressed in GotW #31. Summary:
There are three main reasons you might do this. #1 is commonplace, #2 is pretty rare, and #3 is a workaround used occasionally by advanced programmers working with weaker compilers.
Most programmers should only ever use #1.
... Which is for pure virtual destructors.
There is no conflict with the two concepts, although they are rarely used together (as OO purists can't reconcile it, but that's beyond the scope of this question/answer).
The idea is that the pure virtual function is given an implementation while subclasses are still forced to override it. The subclasses may invoke the base class function to provide default behavior. The base cannot be instantiated (it is "abstract") because the virtual function(s) are pure, even though they may have an implementation.
Wikipedia sums this up pretty well:
Although pure virtual methods typically have no implementation in the class that declares them, pure virtual methods in C++ are permitted to contain an implementation in their declaring class, providing fallback or default behaviour that a derived class can delegate to if appropriate.
Typically you don't need to provide base class implementations for pure virtuals. But there is one exception: pure virtual destructors. In fact if your base class has a pure virtual destructor, it must have an implementation. Why would you need a pure virtual destructor instead of just a virtual one? Typically, in order to make a base class abstract without requiring the implementation of any other method. For example, in a class where you might reasonably use the default implementation for any method, but you still don't want people to instantiate the base class, you can mark only the destructor as pure virtual.
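A minimal sketch of that pure virtual destructor case:

class Base {
public:
    virtual ~Base() = 0;   // pure virtual destructor: Base is now abstract
};

// It still needs a definition, because every derived destructor calls it:
Base::~Base() {}

class Derived : public Base {};   // nothing else needs to be overridden

int main() {
    // Base b;    // error: Base is abstract
    Derived d;    // fine: ~Derived() implicitly calls Base::~Base()
}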
EDIT:
Here's some code that illustrates a few ways to call the base implementation:
#include <iostream>
using namespace std;
class Base
{
public:
virtual void DoIt() = 0;
};
class Der : public Base
{
public:
void DoIt();
};
void Base::DoIt()
{
cout << "Base" << endl;
}
void Der::DoIt()
{
cout << "Der" << endl;
Base::DoIt();
}
int main()
{
Der d;
Base* b = &d;
d.DoIt();
b->DoIt(); // note that Der::DoIt is still called
b->Base::DoIt();
return 0;
}
That way you can provide a working implementation but still require the child class implementer to explicitly call that implementation.
Well, we have some great answers already; I'm too slow at writing.
My thought would be, for instance, an init function that uses try/catch and therefore shouldn't be placed in a constructor:
class A {
public:
    virtual bool init() = 0; // pure virtual, but still defined below
};

// A pure virtual function cannot have an inline body; define it out of line:
bool A::init() {
    // initiate stuff that couldn't be done in the constructor
    return true;
}

class B : public A {
public:
    bool init() {
        // ...
        return A::init();
    }
};