The following code prints "I'm B!". It's a bit strange, because B::foo() is private. For A* ptr we can say that its static type is A (where foo is public) and its dynamic type is B (where foo is private). So I can invoke foo via a pointer to A, and this way I get access to a private function of B. Can this be considered an encapsulation violation?
Since the access qualifier is not part of a member function's signature, it can lead to strange cases like this. Why is the access qualifier not considered in C++ when a virtual function is overridden? Can I prohibit such cases? What design principle is behind this decision?
Live example.
#include <iostream>

class A
{
public:
    virtual void foo()
    {
        std::cout << "I'm A!\n";
    }
};

class B : public A
{
private:
    void foo() override
    {
        std::cout << "I'm B!\n";
    }
};

int main()
{
    A* ptr;
    B b;
    ptr = &b;
    ptr->foo();
}
You have multiple questions, so I'll try to answer them one-by-one.
Why is the access qualifier not considered in C++ when a virtual function is overridden?
Because access qualifiers are taken into account by the compiler only after overload resolution.
Such behavior is prescribed by the Standard.
For example, see cppreference:
Member access does not affect visibility: names of private and privately-inherited members are visible and considered by overload resolution, implicit conversions to inaccessible base classes are still considered, etc. Member access check is the last step after any given language construct is interpreted. The intent of this rule is that replacing any private with public never alters the behavior of the program.
The next paragraph describes the behavior demonstrated by your example:
Access rules for the names of virtual functions are checked at the call point using the type of the expression used to denote the object for which the member function is called. The access of the final overrider is ignored.
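Concretely, with the A and B from the question:

B b;
// b.foo();  // error: foo is private within B
A* p = &b;
p->foo();    // OK: access is checked against the static type A, yet B::foo runs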
Also see the sequence of actions listed in this answer.
Can I prohibit such cases?
No.
And I don't think you will ever be able to do so, because there's nothing illegal in this behavior.
What design principle is behind this decision?
Just to clarify: by "decision" here I mean the prescription for the compiler to check access qualifiers after overload resolution.
The short answer: to prevent surprises when you're changing your code.
For more details, let's assume you're developing some CoolClass which looks like this:
class CoolClass {
public:
    void doCoolStuff(int coolId);       // your class interface
private:
    void doCoolStuff(double coolValue); // auxiliary method used by the public one
};
Assume that the compiler could do overload resolution based on public/private specifiers. Then the following code would compile successfully:
CoolClass cc;
cc.doCoolStuff(3.14); // invokes CoolClass::doCoolStuff(int)
// yes, this would raise the warning, but it can be ignored or suppressed
Then at some point you discover that your private member function is actually useful to the class's clients and move it to the public area. This silently changes the behavior of preexisting client code, since it now invokes CoolClass::doCoolStuff(double).
So the rules for applying access qualifiers are written in a manner that rules such cases out: under the real rules, overload resolution selects the private double overload straight away, the access check then fails, and you get a compiler error at the very beginning. And virtual functions are no special case, for the same reason (see this answer).
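A minimal sketch of what that error looks like under the real rules:

class CoolClass {
public:
    void doCoolStuff(int coolId);
private:
    void doCoolStuff(double coolValue);
};

int main() {
    CoolClass cc;
    cc.doCoolStuff(3.14); // error: 'doCoolStuff(double)' is private;
                          // overload resolution picked it before the access check
}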
Can it be considered an encapsulation violation?
Not really.
By converting a pointer to your class into a pointer to its base class you're actually saying: "Herewith I would like to use this object B as if it were an object A" - which is perfectly legal, because inheritance implies an "is-a" relation.
So the question is rather: can your example be considered as violating the contract prescribed by the base class? It seems that yes, it can.
See the answer to this question for an alternative explanation.
P.S.
Don't get me wrong: none of this means that you shouldn't use private virtual functions. On the contrary, it's often considered good practice, see this thread. But they should be private starting from the very base class. So again, the bottom line is: you should not use private virtual functions to break public contracts.
P.P.S. ...unless you deliberately want to force the client to use your class via a pointer to the interface / base class. But there are better ways to achieve that, and I believe the discussion of those lies beyond the scope of this question.
Access qualifiers like public, private, etc. are a compile-time feature, while dynamic polymorphism is a runtime feature.
What do you think should happen at runtime when a private override of a virtual function is called? An exception?
Can it be considered an encapsulation violation?
No, it isn't, since the interface is already published through the inheritance.
It's perfectly fine (and might be intended) to override a public virtual function from the base class with a private function in the derived class.
Related
I happened to be browsing the source for mongoDB, and found this interesting construct:
class NonspecificAssertionException final : public AssertionException {
public:
    using AssertionException::AssertionException;

private:
    void defineOnlyInFinalSubclassToPreventSlicing() final {}
};
How does the private method prevent slicing? I can't seem to think of the problem case.
Cheers, George
The only member functions to which the final specifier may be applied are virtual member functions. It is likely that in AssertionException, or in one of its own base classes, this member is declared as
virtual void defineOnlyInFinalSubclassToPreventSlicing() = 0;
Thus, all classes in the hierarchy save the most derived ones are abstract base classes. One may not create values of abstract classes (they can only serve as bases). And so, one may not accidentally write
try {
    foo();
}
catch (AssertionException const e) { // oops: catching by value
}
If AssertionException were not abstract, the above could be written. But when it's abstract, the compiler will complain at that exception handler, forcing us to catch by reference. And catching by reference is recommended practice.
Marking the member (and the class) as final ensures no further derivation is possible, so the problem cannot reappear accidentally when the inheritance hierarchy is changed. A programmer who adds another class and again defines defineOnlyInFinalSubclassToPreventSlicing as final will elicit an error from the compiler, on account of this member already being declared final in the base. They will therefore have to remove the implementation from the base class, thus making it abstract again.
It's a bookkeeping system.
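A minimal sketch of how the hierarchy might look; the pure-virtual declaration in AssertionException is an assumption based on the reasoning above:

class AssertionException {
public:
    virtual ~AssertionException() = default;
private:
    // Assumed: declared pure virtual somewhere up the hierarchy, so every
    // class except the most derived ones stays abstract.
    virtual void defineOnlyInFinalSubclassToPreventSlicing() = 0;
};

class NonspecificAssertionException final : public AssertionException {
private:
    void defineOnlyInFinalSubclassToPreventSlicing() final {}
};

void foo() { throw NonspecificAssertionException{}; }

int main() {
    try {
        foo();
    }
    // catch (AssertionException e) {}     // error: AssertionException is abstract
    catch (AssertionException const& e) {} // OK: forced to catch by reference
}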
Normally calling virtual functions from constructors is considered bad practice, because overridden functions in sub-objects will not be called as the objects have not been constructed yet.
But consider the following classes:
// Minimal stubs so the snippet is self-contained (not part of the original question):
struct fsm_action_interface {};
struct SomeFSMType { void start() {} };

class base
{
public:
    base() {}
    ~base() {}

private:
    virtual void startFSM() = 0;
};

class derived final : public base
                    , public fsm_action_interface
{
public:
    derived() : base{}
              , theFSM_{}
    { startFSM(); }

    /// FSM interface actions
private:
    virtual void startFSM()
    { theFSM_.start(); }

private:
    SomeFSMType theFSM_;
};
In this case class derived is marked final, so no further derived classes can exist. Ergo the virtual call will resolve correctly (to the most derived type).
Is it still considered bad practice?
This would still be considered bad practice, as this sort of thing almost always indicates bad design. You'd have to comment the heck out of the code to explain why this works in this one case.
T.C.'s comment above reinforces one of the reasons why this is considered bad practice.
What happens if, a year down the line, you decide that derived shouldn't be final after all?
That said, in the example above, the pattern will work without issue. This is because the constructor of the most derived type is the one calling the virtual function. This problem manifests itself when a base class's constructor calls a virtual function that resolves to a subtype's implementation. In C++, such a function won't get called, because during base class construction, such calls will never go to a more derived class than that of the currently executing constructor or destructor. In essence, you end up with behavior you didn't expect.
Edit:
All (correct/non-buggy) C++ implementations have to call the version of the function defined at the level of the hierarchy in the current constructor and no further.
The C++ FAQ Lite covers this in section 23.7 in pretty good detail.
Scott Meyers also weighs in on the general issue of calling virtual functions from constructors and destructors in Effective C++ Item 9
Regarding
” Normally calling virtual functions from constructors is considered bad practice, because overridden functions in sub-objects will not be called as the objects have not been constructed yet.
That is not the case. Among competent C++ programmers it’s normally not regarded as bad practice to call virtual functions (except pure virtual ones) from constructors, because C++ is designed to handle that well. In contrast to languages like Java and C#, where it might result in a call to a method on an as yet uninitialized derived class sub-object.
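A minimal sketch of the guaranteed behavior (hypothetical names):

#include <iostream>

struct Base {
    Base() { whoAmI(); } // during Base's constructor the dynamic type is Base
    virtual ~Base() = default;
    virtual void whoAmI() const { std::cout << "Base\n"; }
};

struct Derived : Base {
    void whoAmI() const override { std::cout << "Derived\n"; }
};

int main() {
    Derived d;  // prints "Base": the call never reaches the unconstructed Derived part
    d.whoAmI(); // prints "Derived": the object is fully constructed now
}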
Note that the dynamic adjustment of dynamic type has a runtime cost.
In a language oriented towards ultimate efficiency, with "you don't pay for what you don't use" as a main guiding principle, that means it's an important and very much intentional feature, not an arbitrary choice. It's there for one purpose only, namely to support those calls.
Regarding
” In this case class derived is marked as final so no o further sub-objects can exist. Ergo the virtual call will resolve correctly (to the most derived type).
The C++ standard guarantees that at the time of construction execution for a class T, the dynamic type is T.
Thus there was no problem of resolving to an incorrect type in the first place.
Regarding
” Is it still considered bad practice?
It is indeed bad practice to declare a member function virtual in a final class, because that’s meaningless. The “still” is not very meaningful either.
Sorry, I didn't see that the virtual member function was inherited as such.
Best practice for marking a member function as an override or implementation of a pure virtual is to use the keyword override, not to mark it virtual.
Thus:
void startFSM() override
{ theFSM_.start(); }
This ensures a compilation error if it is not an override/implementation.
It can work, but why does startFSM() need to be virtual? In no case do you actually want to call anything but derived::startFSM(), so why have any dynamic binding at all? If you want it to do the same thing as a dynamically bound method, make another non-virtual function called startFSM_impl() and have both the constructor and startFSM() call it instead.
Always prefer non-virtual to virtual if you can help it.
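A sketch of that refactoring, reusing the names from the question (startFSM_impl() is the hypothetical helper suggested above):

class derived final : public base
                    , public fsm_action_interface
{
public:
    derived() : base{}
              , theFSM_{}
    { startFSM_impl(); } // plain member call: no dynamic binding involved

private:
    void startFSM() override { startFSM_impl(); } // virtual interface forwards

    void startFSM_impl() { theFSM_.start(); } // shared non-virtual implementation

    SomeFSMType theFSM_;
};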
Consider the below code snippet.
The method Sayhi() has public access in class Base.
Sayhi() has been overridden as a private method by the class Derived.
In this way, we can intrude into someone's privacy, and C++ has no way to detect it because it all happens at run-time.
I understand it is "purely" a compile-time check. But when working with a thick inheritance hierarchy, programmers may incorrectly change access specifiers. Shouldn't the standard have some say, at least some kind of warning message?
Why doesn't the compiler issue at least a warning message whenever the access specifiers of an overridden virtual function differ?
Q1. Does the C++ standard have any say about such run-time anomalies?
Q2. I want to understand, from the C++ standard's perspective, why the standard wouldn't require compiler implementors to provide warning diagnostics.
#include <iostream>

class Base {
public:
    virtual void Sayhi() { std::cout << "hi from Base" << std::endl; }
};

class Derived : public Base {
private:
    virtual void Sayhi() { std::cout << "hi from Derived" << std::endl; }
};

int main() {
    Base *pb = new Derived;
    // private method Derived::Sayhi() invoked.
    // May affect the object state!
    pb->Sayhi();
    return 0;
}
Does the C++ standard have any say about such run-time anomalies?
No. Access control is purely compile-time, and affects which names may be used, not which functions may be called.
So in your example, you can access the name Base::Sayhi, but not Derived::Sayhi; and access to Base::Sayhi allows you to virtually call any function that overrides it.
Why wouldn't the standard require compiler implementors to provide warning diagnostics?
The standard has nothing to say about warnings at all; it just defines the behaviour of well-formed code. It's up to compiler writers to decide what warnings might be useful; and warning about all private overrides just in case you didn't mean them to be overrides sounds like it would generate a lot of false positives.
Access specification cannot be loosened; it can only be tightened.
Sayhi() is public in the Base class, so all classes deriving from it and overriding the method should expect it to be public; there is no intrusion. The access specification for overriding functions is well specified, since the method was declared public to begin with.
Even though your question has been answered by now, I would like to add a note.
While you consider this an "anomaly" and would like to have diagnostics, it is actually useful: you can ensure that your implementation can only be used polymorphically. The derived class should have only a public constructor and no other public functions; all the re-implemented member functions should be private.
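A minimal sketch of that technique (the names are hypothetical):

#include <iostream>

class Interface {
public:
    virtual ~Interface() = default;
    virtual void run() = 0;
};

class Impl : public Interface {
public:
    Impl() = default; // only the constructor is public
private:
    void run() override { std::cout << "running\n"; } // re-implemented member is private
};

int main() {
    Impl obj;
    // obj.run();     // error: run is private within Impl
    Interface& i = obj;
    i.run();          // OK: clients must work through the interface
}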
Why do classes in C++ have to declare their private functions? Does this have actual technical reasons (what is its role at compile time), or is it simply for consistency's sake?
I asked why private functions had to be declared at all, as they don't add anything (neither object size nor vtable entry) for other translation units to know about.
If you think about it, this is similar to declaring some functions static in a file. It's not visible from the outside, but it is important for the compiler itself. The compiler wants to know the signature of the function before it can use it. That's why you declare functions in the first place. Remember that C++ compilers are one pass, which means everything has to be declared before it is used.1
From the programmer's point of view, declaring private functions is still not completely useless. Imagine two classes, one of which is a friend of the other. The friendzoned class2 would need to know what the privates of the other class look like (this discussion is getting weird), otherwise it couldn't use them.
As to why exactly C++ was designed this way, I would first say there is a historical reason: the fact that you can't split a struct in C was adopted by C++, so you can't split a class (and other languages branched from C++ adopted it, too). I'd also guess that it's about simplicity: imagine how difficult it would be to devise a method of compilation in which you could split the class among different header files, let your source files know about it, and yet prevent others from adding stuff to your class.
A final note: private functions can affect the vtable size if they are virtual.
1 Actually, not entirely. If you have inline functions in the class, they can refer to functions defined later in the same class. But probably the idea started from a single pass, and this exception was added later.
2 Its inline member functions in particular.
You have to declare all members in the definition of the class itself so that the compiler knows which functions are allowed to be members. Otherwise, a second programmer could (accidentally?) come along and add members, make mistakes, and violate your object's guarantees, causing undefined behavior and/or random crashes.
There's a combination of concerns, but:
C++ doesn't let you re-open a class to declare new members in it after its initial definition.
C++ doesn't let you have different definitions of a class in different translation units that combine to form a program.
Therefore:
Any private member functions that the .cpp file wants declared in the class need to be declared in the .h file, which every user of the class sees too.
From the POV of practical binary compatibility: as David says in a comment, private virtual functions affect the size and layout of the vtable of this class and any classes that use it as a base. So the compiler needs to know about them even when compiling code that can't call them.
Could C++ have been invented differently, to allow the .cpp file to reopen the class and add certain kinds of additional member functions, with the implementation required to arrange that this doesn't break binary compatibility? Could the one definition rule be relaxed, to allow definitions that differ in certain ways? For example, static member functions and non-virtual non-static member functions.
Probably yes to both. I don't think there's any technical obstacle, although the current ODR is very strict about what makes a definition "different" (and hence is very generous to implementations in allowing binary incompatibilities between very similar-looking definitions). I think the text to introduce this kind of exception to the rule would be complex.
Ultimately it might come down to, "the designers wanted it that way", or it might be that someone tried it and encountered an obstacle that I haven't thought of.
The access level does not affect visibility. Private functions are visible to external code and may be selected by overload resolution (which would then result in a compile-time access error):
class A {
    void F(int i) {}
public:
    void F(unsigned i) {}
};

int main() {
    A a;
    a.F(1); // error: void A::F(int) is private
}
Imagine the confusion when this works:
class A {
public:
    void F(unsigned i) {}
};

int main() {
    A a;
    a.F(1); // fine today: converts 1 to unsigned and calls A::F(unsigned)
}

// add private F overload to A
// (hypothetical syntax: real C++ does not allow reopening a class like this)
void A::F(int i) {}
But changing it to the first code causes overload resolution to select a different function. And what about the following example?
class A {
public:
    void F(unsigned i) {}
};

// add private F overload to A (again, hypothetical)
void A::F(int i) {}

int main() {
    A a;
    a.F(1);
}
Or here's another example of this going wrong:
// A.h
class A {
public:
    void g() { f(1); }
    void f(unsigned);
};

// A_private_interface.h
class A;
void A::f(int);

// A.cpp
#include "A_private_interface.h"
#include "A.h"
void A::f(int) {}
void A::f(unsigned) {}

// main.cpp
#include "A.h"
int main() {
    A().g();
}
One reason is that in C++ friends can access your privates. For friends to access them, friends have to know about them.
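For instance (a minimal sketch with hypothetical names):

class Safe {
    friend class Auditor; // Auditor may use Safe's private members
    void unlock() {}      // private, but declared, so friends know it exists
};

class Auditor {
public:
    void inspect(Safe& s) { s.unlock(); } // OK: Auditor is a friend of Safe
};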
Private members of a class are still members of the class, so they must be declared, as the implementation of other public members might depend on that private method. Declaring them will allow the compiler to understand a call to that function as a member function call.
If you have a method that is only used in the .cpp file and does not depend on direct access to other private members of the class, consider moving it into an anonymous namespace. Then it does not need to be declared in the header file.
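A sketch of that approach (Widget and its members are hypothetical):

// widget.cpp
#include "widget.h" // assumed to declare class Widget with save() and lastChecksum_
#include <string>

namespace {
    // Free helper: never declared in widget.h and invisible to other
    // translation units, much like a function declared static in C.
    std::size_t checksum(const std::string& payload) {
        return payload.size();
    }
}

void Widget::save(const std::string& payload) {
    lastChecksum_ = checksum(payload);
}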
There are a couple of reasons why private functions must be declared.
First: compile-time error checks
The point of access modifiers is to catch certain classes (no pun intended) of programming errors at compile time. Private functions are functions that it would be a bug to call from outside the class, and you want to know about that as early as possible.
Second: casting and inheritance
Taken from the C++ standard:
3 [ Note: A member of a private base class might be inaccessible as an inherited member name, but accessible directly. Because of the rules on pointer conversions (4.10) and explicit casts (5.4), a conversion from a pointer to a derived class to a pointer to an inaccessible base class might be ill-formed if an implicit conversion is used, but well-formed if an explicit cast is used.
Third: friends
Friends show each other their privates. A private method can be called by another class that is a friend.
Fourth: general sanity and good design
Ever worked on a project with another 100 developers? Having a standard and a general set of rules helps keep the codebase maintainable. Declaring something private has a specific meaning to everyone else in the group.
This also flows into good OO design principles: what to expose and what not to.
I have a value class according to the description in "C++ Coding Standards", Item 32. In short, that means it provides value semantics and does not have any virtual methods.
I don't want any class to derive from this class. Among other reasons, it has a public nonvirtual destructor, but a base class should have a destructor that is either public and virtual or protected and nonvirtual.
I don't know of a way to write the value class such that it is impossible to derive from it. I want to forbid derivation at compile time. Is there perhaps a known idiom for that? If not, are there some new possibilities in the upcoming C++0x? Or are there good reasons why no such possibility exists?
Bjarne Stroustrup has written about this here.
The relevant bit from the link:
Can I stop people deriving from my class?
Yes, but why do you want to? There are two common answers:
for efficiency: to avoid my function calls being virtual.
for safety: to ensure that my class is not used as a base class (for example, to be sure that I can copy objects without fear of slicing).
In my experience, the efficiency reason is usually misplaced fear. In C++, virtual function calls are so fast that their real-world use for a class designed with virtual functions does not produce measurable run-time overheads compared to alternative solutions using ordinary function calls. Note that the virtual function call mechanism is typically used only when calling through a pointer or a reference. When calling a function directly for a named object, the virtual function call overhead is easily optimized away.
If there is a genuine need for "capping" a class hierarchy to avoid virtual function calls, one might ask why those functions are virtual in the first place. I have seen examples where performance-critical functions had been made virtual for no good reason, just because "that's the way we usually do it".
The other variant of this problem, how to prevent derivation for logical reasons, has a solution. Unfortunately, that solution is not pretty. It relies on the fact that the most derived class in a hierarchy must construct a virtual base. For example:
class Usable;

class Usable_lock {
    friend class Usable;
private:
    Usable_lock() {}
    Usable_lock(const Usable_lock&) {}
};

class Usable : public virtual Usable_lock {
    // ...
public:
    Usable();
    Usable(char*);
    // ...
};

Usable a;

class DD : public Usable { };

DD dd; // error: DD::DD() cannot access
       // Usable_lock::Usable_lock(): private member
(from D&E sec 11.4.3).
If you are willing to allow the class to be created only through a factory method, you can give it a private constructor.
class underivable {
    underivable() { }
    underivable(const underivable&);            // not implemented
    underivable& operator=(const underivable&); // not implemented
public:
    static underivable create() { return underivable(); }
};
Even though the question is not tagged C++11, for people who get here it should be mentioned that C++11 supports the new contextual keyword final. See the wiki page.
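A minimal sketch:

class Value final { // final: no class may derive from Value
public:
    int get() const { return v_; }
private:
    int v_ = 0;
};

// class DerivedValue : public Value {}; // error: cannot derive from a final class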
Take a good look here.
It's really cool but it's a hack.
Wonder for yourself why the standard library doesn't do this with its own containers.
Well, I had a similar problem. This is posted here on SO. The problem was the other way around, i.e. only allowing those classes to derive that you permit. Check whether it solves your problem.
This is done at compile-time.
I would generally achieve this as follows:
// This class is *not* suitable for use as a base class
The comment goes in the header and/or in the documentation. If clients of your class don't follow the instructions on the packet, then in C++ they can expect undefined behavior. Deriving without permission is just a special case of this. They should use composition instead.
Btw, this is slightly misleading: "a base class should have a destructor that is public and virtual or protected and nonvirtual".
That's true for classes which are to be used as bases for runtime polymorphism. But it's not necessary if derived classes are never going to be referenced via pointers to the base class type. It might be reasonable to have a value type which is used only for static polymorphism, for instance with simulated dynamic binding. The confusion is that inheritance can be used for different purposes in C++, requiring different support from the base class. It means that although you don't want dynamic polymorphism with your class, it might nevertheless be fine to create derived classes provided they're used correctly.
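For example, a minimal CRTP sketch of the kind of static polymorphism meant here (hypothetical names):

template <typename Derived>
class Shape { // base used for static polymorphism only; never deleted via Shape*
public:
    void draw() { static_cast<Derived&>(*this).drawImpl(); }
};

class Circle : public Shape<Circle> {
public:
    void drawImpl() { /* draw a circle */ }
};

int main() {
    Circle c;
    c.draw(); // statically dispatches to Circle::drawImpl, no virtual call
}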
This solution doesn't work, but I leave it as an example of what not to do.
I haven't used C++ for a while now, but as far as I remember, you get what you want by making destructor private.
UPDATE:
On Visual Studio 2005 you'll get either a warning or an error. Check up the following code:
class A
{
public:
    A() {}
private:
    ~A() {}
};

class B : A
{
};
Now,
B b;
will produce an error "error C2248: 'A::~A' : cannot access private member declared in class 'A'"
while
B *b = new B();
will produce warning "warning C4624: 'B' : destructor could not be generated because a base class destructor is inaccessible".
It looks like a half-solution, BUT as orsogufo pointed out, doing so makes class A unusable. Leaving the answer up for reference.