Overriding a virtual function with a pure virtual one: is it OK? (C++)

Example:
class IGui {
protected:
    virtual bool OnClicked() { return false; }
    virtual bool OnHover() { return false; }
    virtual bool OnScrollBarChange() { return false; }
    virtual bool OnTextChange() { return false; }
    // ...
};
class IGuiButton : public IGui {
protected:
    virtual bool OnClicked() = 0;
    virtual bool OnHover() {
        // do stuff
        return true;
    }
    // ...
};
The point is to have a common interface for all GUI types (where not all virtuals need to be overridden), and then provide a light specialization for a button; but for the button, there must be an override for OnClicked.
Also, I think I should make the ones a button shouldn't override private (so use private inheritance, and use that fancy "using Base::Method;" to make the specific ones protected)?
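For illustration, a sketch of the private-inheritance idea I mean (whether it is a good idea is part of the question):

class IGuiButton : private IGui {
protected:
    using IGui::OnClicked;  // re-expose only the handlers a button deals with
    using IGui::OnHover;
    // OnScrollBarChange and OnTextChange stay private here
};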

There are multiple sides to the question. The first one is actually quite an interesting question:
Can a derived class have a pure virtual method that is not pure in the base?
The answer is yes, it can, with the expected (if you expected this to work) semantics: a type derived from the intermediate type must implement the virtual function, or it will be an abstract type. This leads to a curious circumstance where the base is not abstract but the derived type is, which can be surprising. For that reason alone, I would avoid this in a design.
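A minimal sketch of that circumstance (illustrative names, not the GUI classes from the question):

struct Widget {
    virtual ~Widget() = default;
    virtual bool OnClicked() { return false; }  // not pure: Widget is concrete
};

struct Button : Widget {
    bool OnClicked() override = 0;              // redeclared pure: Button is abstract
};

struct OkButton : Button {
    bool OnClicked() override { return true; }  // needed to make OkButton concrete
};

int main() {
    Widget w;      // fine: the base is not abstract
    // Button b;   // error: Button is abstract, although its base is not
    OkButton ok;   // fine again
}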
Should you mark as private the members that deriving types should not override?
No, there is no reason or advantage to do that. Whether the member function is public, protected or private, derived classes can override it. Any code that can call the functions through the base type will still be able to call them by casting to the base. This points at another strange thing in your design: the base class is filled with protected virtual functions, which means they are accessible only to derived types. This does not define an interface and cannot be used as one. If a function/class takes a reference to an IGui or an IGuiButton, it will not be able to do much with it, as there is no public interface. That basically means that no one will be able to call any of the events, unless you are also abusing friendship to provide access to the event handlers, which you should avoid.
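To see why access control gives no protection against overriding, consider this sketch (hypothetical names): access specifiers restrict who may call a function, not who may override it.

class Base {
public:
    void run() { hook(); }   // public non-virtual entry point
private:
    virtual void hook() {}   // private virtual
};

class Derived : public Base {
private:
    void hook() override {}  // overriding a private virtual is perfectly legal
};

int main() {
    Derived d;
    d.run();  // dispatches to Derived::hook through Base::run
}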
So what is a proper design?
There are different alternatives. I'd recommend that before creating your own square wheel you look at the wheels that were invented in the past: look at different graphical frameworks and libraries and try to understand why they were designed the way they are. Look at the differences and try to determine what advantages/disadvantages they bring and which option matches your problem. UI is a domain with a lot of prior art, and chances are you will not design from scratch anything better than what people in the field have done before. You might, but it is much easier to fall into the same pitfalls everyone else fell into before you.

I'd have to say I think what you are trying to do is poor design.
Your top level (IGui) "has everything" and then you are effectively taking stuff out as you move down the class hierarchy. The top level would normally have the common stuff and you add the differences as you move down.
You are losing the protections that a good design can give you.


Why does C++ not let base classes implement a derived class's inherited interface?

Here is what I am talking about:
#include <vector>

// some guy wrote this, used as a Policy with templates
struct MyWriter {
    void write(std::vector<char> const& data) {
        // ...
    }
};
In some existing code, people did not use templates but interfaces + type erasure:
class IWriter {
public:
    virtual ~IWriter() {}
    virtual void write(std::vector<char> const& data) = 0;
};
Someone else wanted a class that is usable with both approaches and writes:
class MyOwnClass : private MyWriter, public IWriter {
    // other stuff
};
MyOwnClass is implemented-in-terms-of MyWriter. Why don't MyOwnClass's inherited member functions implement the interface of IWriter automatically? Instead, the user has to write forwarding functions that do nothing but call the base class versions, as in:
class MyOwnClass : private MyWriter, public IWriter {
public:
    void write(std::vector<char> const& data) {
        MyWriter::write(data);
    }
};
I know that in Java when you have a class that implements an interface and derives from a class that happens to have suitable methods, that base class automatically implements the interface for the derived class.
Why doesn't C++ do that? It seems like a natural thing to have.
This is multiple inheritance, and there are two inherited functions with the same signature, both of which have an implementation. That's where C++ is different from Java.
Calling write on an expression whose static type is MyOwnClass would therefore be ambiguous as to which of the inherited functions was desired.
If write were only called through base class pointers and IWriter::write were not pure, then defining write in the derived class would not be necessary, contrary to the claim in the question. Now that the question has changed to include a pure specifier, implementing that function in the derived class is necessary to make the class concrete and instantiable.
MyWriter::write cannot be used for the virtual call mechanism of MyOwnClass, because the virtual call mechanism requires a function that accepts an implicit IWriter* const this, while MyWriter::write accepts an implicit MyWriter* const this. A new function is required, which must take into account the address difference between the IWriter subobject and the MyWriter subobject.
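A sketch that makes the address difference visible (layout is implementation-defined; the int member only ensures the MyWriter subobject has nonzero size):

#include <cstdio>
#include <vector>

struct MyWriter {
    void write(std::vector<char> const&) {}
    int state = 0;
};

class IWriter {
public:
    virtual ~IWriter() {}
    virtual void write(std::vector<char> const& data) = 0;
};

class MyOwnClass : private MyWriter, public IWriter {
public:
    void write(std::vector<char> const& data) override { MyWriter::write(data); }
    void show_layout() {
        // The two base subobjects typically start at different offsets, which is
        // why a call through IWriter* must adjust this before reaching MyWriter.
        std::printf("MyWriter subobject: %p\nIWriter subobject: %p\n",
                    static_cast<void*>(static_cast<MyWriter*>(this)),
                    static_cast<void*>(static_cast<IWriter*>(this)));
    }
};

int main() {
    MyOwnClass c;
    c.show_layout();
}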
It would be theoretically possible for the compiler to create this new function automatically, but it would be fragile, since a change in a base class could suddenly cause a new function to be chosen for forwarding. It's less fragile in Java, where only single inheritance is possible (there's only one choice for what function to forward to), but in C++, which supports full multiple inheritance, the choice is ambiguous, and we haven't even started on diamond inheritance or virtual inheritance yet.
Actually, this problem (difference between subobject addresses) is solved for virtual inheritance. But it requires additional overhead that's not necessary most of the time, and a C++ guiding principle is "you don't pay for what you don't use".
Why doesn't C++ do that? It seems like a natural thing to have.
Actually, no, it is an extremely unnatural thing to have.
Please note that my reasoning is based on my own understanding of "common sense" and can be fundamentally flawed as a result.
You see, you have two different methods: the first one in MyWriter, which is non-virtual, and the second one in IWriter, which is virtual. They are completely different despite "looking" similar.
I suggest checking this question. The good thing about non-virtual methods is that, no matter what you do, as long as they don't call virtual methods, their behavior will never change; i.e. somebody deriving from your class will not break existing methods by masking them. Virtual methods are designed to be overridden. The price of that is that it is possible to break the underlying logic by improperly overriding a virtual method. And this is the root of your problem.
Let's say what you propose is allowed (automatic promotion to virtual with multiple inheritance). There are two possible solutions:
Solution #1
MyWriter::write becomes virtual. Consequences: all existing C++ code in the world becomes easy to break via a typo or a name clash. The MyWriter method was not supposed to be overridden initially, so suddenly turning it into a virtual will (Murphy's law) break the underlying logic of the MyWriter class when somebody derives from MyOwnClass. Which means that suddenly making MyWriter::write virtual is a bad idea.
Solution #2
MyWriter::write remains non-virtual, BUT it is temporarily included as a virtual method in IWriter until it is overridden. At first glance there's nothing to worry about, but let's think about it. IWriter implements some kind of concept you had in mind, and it is supposed to do something. MyWriter implements another concept. To assign MyWriter::write as the IWriter::write method you need two guarantees:
The compiler must ensure that MyWriter::write does what IWriter::write() is supposed to do.
The compiler must ensure that calling MyWriter::write from IWriter will not break existing functionality in MyWriter code the programmer expects to use elsewhere.
So, the thing is that the compiler cannot guarantee that. The functions have a similar name and argument list, but by Murphy's law that means they're probably doing completely different things (sinf and cosf have the same argument list, for example), and it is unlikely that the compiler will be able to predict the future and make sure that at no point in development will MyWriter be changed in such a way that it becomes incompatible with IWriter. So, since the machine can't make a reasonable decision (no AI for that) by itself, it has to ask YOU, the programmer: "What is it you wish to do?". And you say: "redirect the virtual method into MyWriter::write(). It totally won't break anything. I think.".
And that's why you must specify manually which method you want to use.
Doing it automatically would be unintuitive and surprising. C++ does not assume that multiple base classes are related to each other, and protects the user against name collisions between their members by defining nested name specifiers for nonstatic members. Adding implicit declarations to MyOwnClass where signatures from IWriter and MyWriter collide would be antithetical to protecting names.
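A sketch of that protection in ordinary code (no virtual functions needed):

struct A { void f() {} };
struct B { void f() {} };

struct C : A, B {};

int main() {
    C c;
    // c.f();   // error: ambiguous, A::f and B::f both match
    c.A::f();   // OK: the nested name specifier picks one
    c.B::f();   // OK
}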
However, C++11 extensions do bring us closer. Consider this hypothetical syntax:
class MyOwnClass : private MyWriter, public IWriter {
public:
    void write(std::vector<char> const& data) final = MyWriter::write; // not actual C++
};
This mechanism would be safe because it expresses that MyWriter doesn't expect any further overrides, and convenient because it names the function signature that will be "joined" but nothing more. Also, final would be ill-formed if the function weren't implicitly virtual, so it checks that the signature matches the virtual interface.
On one hand, most interfaces don't just happen to match up this way. Defining this feature to work only with identical signatures would be safe but rarely useful. Defining it as a shortcut for a delegating function body would be useful but fragile. So it might not really be a good feature.
On the other hand, this is a good design pattern to provide functionality which isn't virtual when you don't need it to be. So given this idiom, we might use it to write good code, even if it doesn't match up well with current practices.
Why doesn't C++ do that?
I'm not sure what you're asking here. Could C++ be rewritten to allow this? Yes, but to what end?
Because MyWriter and IWriter are completely different classes, it is illegal in C++ to call a member of MyWriter through an instance of IWriter. The member pointers have completely different types. And just as a MyWriter* is not convertible to an IWriter*, neither is a void (MyWriter::*)(const std::vector<char>&) convertible to a void (IWriter::*)(const std::vector<char>&).
The rules of C++ don't change just because there could be a third class that combines the two. Neither class is a parent or child of the other. Therefore, they are treated as entirely distinct classes.
Remember: member functions always take an additional parameter: a this pointer to the object they are called on. You cannot call a void (MyWriter::*)(const std::vector<char>&) on an IWriter*. The third class can have a method that casts itself into the proper base class, but it must actually have this method. So either you or the C++ compiler must create it. The rules of C++ require this.
Consider what would have to happen to make this work without a derived-class method.
A function gets an IWriter*. The user calls the write member of it, using nothing more than the IWriter* pointer. So... exactly how can the compiler generate the code to call MyWriter::write? Remember: MyWriter::write needs a MyWriter instance. And there is no relationship between IWriter and MyWriter.
So how exactly could the compiler do the type coercion locally? The compiler would have to check the virtual function to see if the actual function to be called takes IWriter or some other type. If it takes another type, it would have to convert the pointer to its true type, then do another conversion to the type needed by the virtual function. After doing all of that, it would then be able to make the call.
All of this overhead would affect every virtual call. All of them would have to at least check the type taken by the actual function to be called. Every call would also have to generate the code to do the type conversions, just in case.
Every virtual function call would have a "get type" and conditional branch in it. Even if it is never possible to trigger that branch. So you would be paying for something regardless of whether you use it or not. That's not the C++ way.
Even worse, a straight v-table implementation of virtual calls is no longer possible. The fastest method of doing virtual dispatch would not be a conforming implementation. The C++ committee is not going to make any change that would make such implementations impossible.
Again, to what end? Just so that you don't have to write a simple forwarding function?
Just make MyWriter derive from IWriter, eliminate the IWriter derivation in MyOwnClass, and move on with life. This should resolve the problem and should not interfere with the template code.
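A sketch of that suggestion (assuming MyWriter may be modified to know about IWriter):

#include <vector>

class IWriter {
public:
    virtual ~IWriter() {}
    virtual void write(std::vector<char> const& data) = 0;
};

// MyWriter itself implements the interface...
struct MyWriter : IWriter {
    void write(std::vector<char> const& data) override {
        // ...
    }
};

// ...so MyOwnClass gets both the policy member function and the IWriter
// implementation from a single base, with no forwarding function needed.
class MyOwnClass : public MyWriter {
    // other stuff
};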

Should methods that implement pure virtual methods of an interface class be declared virtual as well?

I read different opinions about this question. Let's say I have an interface class with a bunch of pure virtual methods. I implement those methods in a class that implements the interface and I do not expect to derive from the implementation.
Is there a need for declaring the methods in the implementation as virtual as well? If yes, why?
No: every member function declared virtual in the base class will be virtual in all derived classes.
But good coding practice tells you to declare those methods virtual anyway.
virtual is optional in the derived class override declaration, but for clarity I personally include it.
Real need: no. Once a method is declared as virtual in the base class, it stays virtual for all derived classes. But it's good to know which method is virtual and which is not, without having to check the base class. Also, in most cases, you cannot be sure whether your code will be derived from or not (for example, if you're developing some software for some firm). As I said, it's not a problem, as once declared virtual, it stays virtual, but just in case .. (:
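A sketch of "stays virtual" (illustrative names):

#include <iostream>

struct B { virtual void f() { std::cout << "B\n"; } };
struct D : B { void f() { std::cout << "D\n"; } };  // virtual even without the keyword
struct E : D { void f() { std::cout << "E\n"; } };  // still virtual

int main() {
    E e;
    B& b = e;
    b.f();  // prints "E": dispatch goes all the way down
}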
There is no requirement to mark them virtual.
I'd start by arguing that virtual advertises to readers that you expect derived classes to override the virtual to do something useful. If you are implementing the virtual just to do something internally, the method might have nothing to do with the kind of thing your class is, in which case marking it virtual is silly. Consider:
class CommsObject {
    virtual void OnConnect();
    virtual void OnRawBytesIn();
};

class XMLStream : public CommsObject {
    virtual void OnConnect();
    void OnRawBytesIn();
    virtual void OnXMLData();
};
In that example, OnConnect is documented as virtual in both classes because it makes sense that a descendant would always want to know about connections. OnRawBytesIn doesn't make sense to "export" from XMLStream, as it uses that to handle raw bytes and generate parsed data, which it notifies via OnXMLData().
Having done all that, I'd then argue that the maintainer of a third class, looking at XMLStream, might think it would be "safe" to create their own OnRawBytesIn function and expect it to work like a normal non-virtual function - i.e. the base class would still call the correct internal one, and the outer one would merely hide the internal OnRawBytesIn.
So omitting the virtual has hidden important detail from consumers of the class and made the code behave in unexpected ways.
So I've gone full circle: don't try to use virtual as a hint about the intended purpose of a function; DO use it as a hint about the behaviour of the function. Mark functions virtual consistently, so downstream programmers have to read fewer files to know how a function is going to behave when overridden.
No, it is not needed and it doesn't prevent any coding errors although many coders prefer to put it.
Once C++0x becomes mainstream you'll be able to use the override specifier instead.
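A sketch of that specifier, which was standardized in C++11:

class Interface {
public:
    virtual ~Interface() {}
    virtual void run() = 0;
};

class Impl : public Interface {
public:
    void run() override {}       // OK: overrides Interface::run
    // void run(int) override;   // error: does not override anything in the base
};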
Once 'virtual', it's virtual all the way down to the last child. AFAIK, that's a feature of C++.
If you never derive from a class then there is no point making its methods virtual.

Use-cases of pure virtual functions with body?

I recently came to know that in C++ pure virtual functions can optionally have a body.
What are the real-world use cases for such functions?
The classic is a pure virtual destructor:
class abstract {
public:
    virtual ~abstract() = 0;
};

abstract::~abstract() {}
You make it pure because there's nothing else to make pure, and you want the class to be abstract; but you have to provide an implementation nevertheless, because the derived classes' destructors call yours implicitly. Yeah, I know, a pretty silly textbook example, but as such it's a classic. It must have been in the first edition of The C++ Programming Language.
Anyway, I can't remember ever really needing the ability to implement a pure virtual function. To me it seems the only reason this feature is there is that it would have had to be explicitly disallowed, and Stroustrup didn't see a reason for that.
If you ever feel you need this feature, you're probably on the wrong track with your design.
Pure virtual functions with or without a body simply mean that the derived types must provide their own implementation.
Pure virtual function bodies in the base class are useful if your derived classes want to call your base class implementation.
One reason that an abstract base class (with a pure virtual function) might provide an implementation for a pure virtual function it declares is to let derived classes have an easy 'default' they can choose to use. There isn't a whole lot of advantage to this over a normal virtual function that can be optionally overridden - in fact, the only real difference is that you're forcing the derived class to be explicit about using the 'default' base class implementation:
#include <cstdio>

class foo {
public:
    virtual int interface();
};

int foo::interface()
{
    printf("default foo::interface() called\n");
    return 0;
}

class pure_foo {
public:
    virtual int interface() = 0;
};

int pure_foo::interface()
{
    printf("default pure_foo::interface() called\n");
    return 42;
}

//------------------------------------
class foobar : public foo {
    // no need to override to get the default behavior
};

class foobar2 : public pure_foo {
public:
    // need to be explicit about the override, even to get the default behavior
    virtual int interface();
};

int foobar2::interface()
{
    // foobar2 is lazy; it just uses pure_foo's default
    return pure_foo::interface();
}
I'm not sure there's a whole lot of benefit - maybe in cases where a design started out with an abstract class, and over time it was found that a lot of the derived concrete classes were implementing the same behavior, so it was decided to move that behavior into a base class implementation for the pure virtual function.
I suppose it might also be reasonable to put common behavior into the pure virtual function's base class implementation that the derived classes might be expected to modify/enhance/augment.
One use case is calling the pure virtual function from the constructor or the destructor of the class. Such a call has to be an explicitly qualified, non-virtual call: an unqualified virtual call to a pure virtual function from a constructor or destructor is undefined behavior.
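A sketch of that use case (illustrative names; note the qualified call):

struct Base {
    virtual void init() = 0;
    Base() { Base::init(); }  // qualified, non-virtual call to the body below: OK
    virtual ~Base() {}
};

void Base::init() { /* shared default setup */ }

struct Derived : Base {
    void init() override {}
};

int main() { Derived d; }  // Base's constructor runs the shared default setup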
The almighty Herb Sutter, former chair of the C++ standards committee, gave three scenarios where you might consider providing implementations for pure virtual methods.
Gotta say that personally, I find none of them convincing, and generally consider this to be one of C++'s semantic warts. It seems C++ goes out of its way to build and tear apart abstract-parent vtables, briefly exposes them only during child construction/destruction, and then the community experts unanimously recommend never to use them.
The only difference between a virtual function with a body and a pure virtual function with a body is that the existence of the second prevents instantiation: there is no other way to mark a class abstract in C++.
This question can really be confusing when learning OOD and C++. Personally, one thought constantly coming to my head was something like:
If I needed a pure virtual function to also have an implementation, why make it "pure" in the first place? Why not just leave it "virtual" and have derivatives both benefit from and override the base implementation?
The confusion comes from the fact that many developers consider the absence of a body/implementation to be the primary goal/benefit of defining a pure virtual function. This is not true!
The absence of a body is in most cases a logical consequence of having a pure virtual function. The main benefit of a pure virtual function is defining a contract: by declaring a pure virtual function, you force every derivative to always provide its own implementation of the function. This "contract aspect" is very important, especially if you are developing something like a public API. Making the function merely virtual is not sufficient, because derivatives are no longer forced to provide their own implementation, and therefore you may lose the contract aspect (which can be limiting in the case of a public API).
As is commonly said:
"Virtual functions can be overridden; pure virtual functions must be overridden."
And in most cases, contracts are abstract concepts so it doesn't make sense for the corresponding pure virtual functions to have any implementation.
But sometimes, and because life is weird, you may want to establish a strong contract among derivatives and also want them to somehow benefit from some default implementation while specifying their own behavior for the contract. Even though most book authors recommend avoiding getting yourself into these situations, the language needed to provide a safety net to prevent the worst! A simple virtual function wouldn't be enough, since there would be a risk of escaping the contract. So the solution C++ provided was to allow pure virtual functions to also provide a default implementation.
The Sutter article cited above gives interesting use cases of pure virtual functions with a body.

questions about an article introducing C++ interface

I have been reading an article about C++ interfaces (http://accu.org/index.php/journals/233) and I am completely lost at the part where it says all the virtual member functions should be made private (the section titled "Strengthening the Separation"). It just does not make sense to me at all.
According to the author, the code is like this:
class shape {
public:
    virtual ~shape();
    virtual void move_x(distance x) = 0;
    virtual void move_y(distance y) = 0;
    virtual void rotate(angle rotation) = 0;
    //...
};
class line : public shape {
public:
    line(point end_point_1, point end_point_2);
    //...
private:
    virtual ~line();
    virtual void move_x(distance x);
    virtual void move_y(distance y);
    virtual void rotate(angle rotation);
    //...
};
So we have a pure virtual function which is public, and its implementation (in the line class) which is private.
Could anybody explain how the move_x function can be called? Its access specifier is private; it leads to an error if I try to do this:
line my_line(point(0,0), point(1,2));
my_line.move_x(-1); // does not compile
Similarly, is it correct to say that the drawing interface (see earlier in the article) cannot access these functions either?
Thank you.
The idea is that you'd use those methods via a reference or pointer to shape.
shape &s = my_line;
s.move_x(-1);
This could be justified on the grounds of "reveal only what you need to", or as a form of self-documentation. It proves that the methods are only called in the intended way.
If you have an instance of the line object, you might be tempted to call its methods. But if the only way you can get at them is by asking for its shape interface, then the object looks less like an object and more like a collection of interfaces.
This makes more sense if you imagine line implementing more than one interface.
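A sketch of that view, with a hypothetical second interface added (distance is assumed to be a numeric type, as in the article):

using distance = double;

class shape {
public:
    virtual ~shape() {}
    virtual void move_x(distance x) = 0;
};

class drawable {
public:
    virtual ~drawable() {}
    virtual void draw() = 0;
};

// line is not interesting by itself; it is a bundle of two facets:
class line : public shape, public drawable {
private:
    void move_x(distance) override {}
    void draw() override {}
};

// Users program against whichever facet they need:
void nudge(shape& s) { s.move_x(1); }
void render(drawable& d) { d.draw(); }

int main() {
    line l;
    nudge(l);   // via the shape facet
    render(l);  // via the drawable facet
}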
This advice is applicable to homogeneous hierarchies only -- that is, hierarchies in which derived classes introduce no new functions (except maybe constructors) and just override base class functions. In this case you obviously don't need to work with a line instance directly -- only via a pointer/reference to shape.
When you have a less homogeneous hierarchy, this advice doesn't make much sense: how would anyone apply it to cases where a derived class introduces new functions, or inherits them from another base class? In such cases you sometimes want to work with objects of the derived class directly, and this advice would lead only to inconveniences.
A further development of this idea -- less radical and usable in more contexts -- is the Non-Virtual Interface (NVI) idiom by Herb Sutter.
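A minimal NVI sketch (reusing the shape/distance names from above):

using distance = double;

class shape {
public:
    void move_x(distance x) {  // public non-virtual entry point:
        // preconditions, logging, etc. could go here
        do_move_x(x);
    }
    virtual ~shape() {}
private:
    virtual void do_move_x(distance x) = 0;  // private customization point
};

class line : public shape {
private:
    void do_move_x(distance) override { /* ... */ }
};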
I think that the article highlights the rationale well in this quote:
Now, the only thing users can do with line is create instances of it. All usage must be via its interface - i.e. shape, thus enforcing a stronger interface/implementation separation. Before leaving this topic, it is important to get something straight: the point of enforcing the interface/implementation separation is not to tell users what to do. Rather, the objective is to underpin the logical separation - the code now explains that the key abstraction is shape, and that line serves to provide an implementation of shape.
That is, line is not by itself interesting. It's just an implementation of the shape interface, and there may be other implementations. You're particularly interested in the shape interface. Hence, you should only access the implementation through this interface, and not as a standalone class.