Can we make virtual function inline [duplicate] - c++

Pure virtual functions are those member functions that are virtual and have the pure-specifier (= 0)
Clause 10.4 paragraph 2 of C++03 tells us what an abstract class is and, as a side note, the following:
[Note: a function declaration cannot provide both a pure-specifier and a definition
—end note] [Example:
struct C {
virtual void f() = 0 { }; // ill-formed
};
—end example]
For those who are not very familiar with the issue, please note that pure virtual functions can have definitions, but the above-mentioned clause forbids such definitions from appearing inline (lexically in-class). (For uses of defining pure virtual functions see, for example, this GotW.)
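To make the distinction concrete, here is a minimal sketch (my own illustration, not the standard's):

struct C {
    virtual void f() = 0;       // OK: pure-specifier, no in-class body
    // virtual void g() = 0 { } // ill-formed: pure-specifier plus in-class definition
};

inline void C::f() { }          // OK: a pure virtual function may be defined out of class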
Now, for all other kinds of functions an in-class definition is allowed, so this restriction seems at first glance absolutely artificial and inexplicable. Come to think of it, it seems so on second and subsequent glances :) But I believe the restriction wouldn't be there if there weren't a specific reason for it.
My question is: does anybody know those specific reasons? Good guesses are also welcome.
Notes:
MSVC does allow PVFs to have inline definitions. So don't be surprised :)
the word inline in this question does not refer to the inline keyword. It is supposed to mean lexically in-class

In the SO thread "Why is a pure virtual function initialized by 0?" Jerry Coffin provided this quote from Bjarne Stroustrup’s The Design & Evolution of C++, section §13.2.3, where I've added some emphasis to the part I think is relevant:
The curious =0 syntax was chosen over the obvious alternative of introducing a new keyword pure or abstract because at the time I saw no chance of getting a new keyword accepted. Had I suggested pure, Release 2.0 would have shipped without abstract classes. Given a choice between a nicer syntax and abstract classes, I chose abstract classes. Rather than risking delay and incurring the certain fights over pure, I used the traditional C and C++ convention of using 0 to represent "not there." The =0 syntax fits with my view that a function body is the initializer for a function and also with the (simplistic, but usually adequate) view of the set of virtual functions being implemented as a vector of function pointers. [ … ]
So, when choosing the syntax Bjarne was thinking of a function body as a kind of initializer part of the declarator, and =0 as an alternate form of initializer, one that indicated “no body” (or in his words, “not there”).
It stands to reason that one cannot both indicate “not there” and have a body – in that conceptual picture.
Or, still in that conceptual picture, one cannot have two initializers.
Now, that's as far as my telepathic powers, google-fu and soft reasoning go. I surmise that nobody's been Interested Enough™ to formulate a proposal to the committee about having this purely syntactical restriction lifted, and to follow up with all the work that that entails. Thus it's still that way.

You shouldn't have so much faith in the standardization committee. Not everything has a deep reason to explain it. Some things are the way they are just because at first nobody thought otherwise, and afterwards nobody thought that changing them was important enough (I think that is the case here); for things old enough it could even be an artifact of the first implementation. Some are the result of evolution: there was a deep reason at one time, but the reason was removed and the initial decision was never reconsidered (it could also be the case here, where the initial decision was made because any definition of a pure function was forbidden). Some are the result of negotiation between different points of view, and the result lacks coherence, but this lack was deemed necessary to reach consensus.

Good guesses... well, considering the situation:
it is legal to declare the function inline and provide an explicitly inline body (outside the class), so there's clearly no objection to the only practical implication of its being defined inside the class.
I see no potential ambiguities or conflicts introduced in the grammar, so no logical reason for the exclusion of function definitions in situ.
My guess: the use for bodies for pure virtual functions was realised after the = 0 | { ... } grammar was formulated, and the grammar simply wasn't revised. It's worth considering that there are a lot of proposals for language changes / enhancements - including those to make things like this more logical and consistent - but the number that are picked up by someone and written up as formal proposals is much smaller, and the number of those the Committee has time to consider, and believes the compiler-vendors will be prepared to implement, is much smaller again. Things like this need a champion, and perhaps you're the first person to see an issue in it. To get a feel for this process, check out http://www2.research.att.com/~bs/evol-issues.html.

Good guesses are welcome you say?
I think the = 0 in the declaration comes from having the implementation in mind. Most likely it means that you get a NULL entry in the class's vtbl -- the table where the addresses of a class's virtual member functions are stored at runtime.
But actually, when you put a definition of the function in your *.cpp file, you introduce a name into the object file for the linker: an address in the *.o file where a specific function can be found.
The linker then doesn't need to know about C++ anymore. It can just link things together, even though you declared the function as = 0.
I think I read that what you described is possible, although I've forgotten the exact behaviour :-)...

Leaving destructors aside, implementations of pure virtual functions are a strange thing, because they never get called in the natural way; i.e., if you have a pointer or reference to your Base class, the underlying object will always be some Derived that overrides the function, and that override will always get called.
The only way to actually get the implementation called is to use the Base::func() syntax from one of the derived class's overrides.
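A minimal sketch of that calling pattern (the names are illustrative):

#include <iostream>

struct Base {
    virtual void f() = 0; // pure, yet it may still have a definition
};

void Base::f() { std::cout << "Base::f() default behaviour\n"; } // defined out of class

struct Derived : Base {
    void f() override {
        Base::f(); // the only way to reach the pure virtual's body
        std::cout << "Derived::f()\n";
    }
};

int main() {
    Derived d;
    d.f(); // prints both lines
}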
This actually, in some ways, makes it a better target for inlining, as at the point where the compiler wants to invoke it, it is always clear which function is being called.
Also, if implementations for pure virtual functions were forbidden, there would be an obvious workaround of some other (probably protected) non-virtual function in the Base class that you could just call in the regular way from your derived function. Of course the scope would be less limited in that you could call it from any function.
(By the way, I am under the assumption that Base::f() can only be called with this syntax from Derived::f() and not from Derived::anyOtherFunc(). Am I right with this assumption?).
Pure virtual destructors are a different story, in a sense. They are used as a technique simply to make a class abstract when there are no pure virtual functions elsewhere in it; note that, unlike other pure virtual functions, a pure virtual destructor must be given a definition, because every derived destructor calls it.
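A short sketch of that technique:

struct Abstract {
    virtual ~Abstract() = 0; // pure: makes the class abstract
};

Abstract::~Abstract() { }    // definition is mandatory: derived destructors call it

struct Concrete : Abstract { };

int main() {
    // Abstract a; // error: cannot instantiate an abstract class
    Concrete c;    // fine: ~Concrete() implicitly invokes Abstract::~Abstract()
}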
The answer to the actual question of "why" it is not permitted is really just because the standards committee said so, but my answer sheds some light on what we are trying to achieve anyway.


Does C++20 offer any new solutions to the problem of public member invisibility and source code bloat with inherited class templates?

Does C++20 offer any new solutions to the problem of public member invisibility and source code bloat/repetition with inherited class templates described in this question over 2 years ago?
The "Problem"
The alleged "problem" is that, in a template, an unqualified name used in a way which isn't dependent on the specialization is truly independent of the specialization and refers to the entity with that name found at that point. The alleged source code "bloat" is using this-> to explicitly make the name dependent or qualifying the name. This is still the situation in C++20.
Just to be clear, the set of entities is not known at the point we refer to them. In the linked question, the base class depends on the template parameter, and we have only seen the primary base class template. The base class template may be specialized later and may have completely different member functions than the ones we've seen. So, any "solution" requires that a name without any obvious contextual dependence on the specialization find entities not yet declared, which may be surprising.
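A minimal sketch of the situation (Base, Derived, and helper are illustrative names):

template <typename T>
struct Base { void helper() { } };

template <typename T>
struct Derived : Base<T> {
    void run() {
        // helper();      // error: non-dependent lookup at definition time finds nothing
        this->helper();    // OK: dependent, looked up at instantiation
        Base<T>::helper(); // OK: explicit qualification, also dependent
    }
};

int main() {
    Derived<int> d;
    d.run();
}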
Why It's Impossible
Any naive changes in this direction are either pretty big or have severe downsides, or both.
You could postpone all name lookup until instantiation time, discarding two-phase lookup. That invites ODR violations, which are silent UB: a huge downside.
You could restrict specialization so that you cannot specialize the base class later such that a different entity is found. That is difficult to diagnose, so it would likely be a new rule introducing silent UB.
You could opt in with a using declaration: using X::* as you propose or using class X as someone else suggested in a different context. This has the benefit of explicitness. It moves the problem a level up: if X is not dependent, presumably it should be found now, but if it is dependent, what happens? We can't instantiate it prior to instantiating the template we're in now. Thus, we can't interpret any names we see until instantiation. It has similar downsides to discarding two phase lookup.
Any of these changes would add complexity to an already complex area and would also pose significant backwards compatibility hurdles. None of them is a clear win.
Note: in C++20 they did make the rules more uniform by allowing ADL to find function templates when explicit template arguments are given: f<int>(1).
Why It's Not a Problem
I doubt there would be consensus that this really is a problem. The linked question makes a poor argument. The derived class adds member function behavior to a certain base class, but a free function works better. These behaviors did not need member access, they aren't required by the language to be members, they can be found as non-members with ADL, and by using a free function they apply even when the static type you have is the base type. So, using inheritance for this is unnecessary coupling, and is a worse option.
Searching for 100s of places to add this-> and adding these 6 characters 100s of times strikes me as Code Bloat and Repetition when I have to templatize a base class
"Searching": The compiler will tell you when a name can't be found, which is better than silent bad behavior, such as making ODR violations easier to hit.
"templatize a base": Templatizing the base class doesn't trigger this. Templatizing the derived class and making the base dependent does. Yes, when templatizing the derived class, specifying that a bare name used in a way that is independent of the template parameter is in fact dependent may seem like boilerplate to some, but others might argue being explicit is clearer.
"100s of times": Seems hyperbolic.
These code patterns are used all the time in real world. Just look at CRTP. (comment on linked question)
Again, this only applies if the derived class is templated. I would dispute the commonality, but these idioms do exist and have a place.
Most importantly, though, is that CRTP is not a goal. CRTP is a hack. It's a C++ idiom because C++ lacks better facilities. CRTP allows a class to opt into certain behaviors that would otherwise be bothersome to write. Relevant C++ proposals do exist, but by and large, they have focused on making extension easier or removing boilerplate, and not on making CRTP, the hack, easier.
These are some that come to mind:
C++20: Comparisons
A very common use of CRTP is for things that require a lot of extra boilerplate. C++ required you to define operator== and operator!=, for instance. By opting into a CRTP base class, one could define only the primitive operation and have the other one generated.
C++20 fixed the underlying problem with comparisons. The typical member-wise comparison can be defaulted, and comparisons can be re-written so that != can invoke ==.
The problem is solved at the root, removing CRTP, not enhancing it.
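For instance, a minimal sketch of the C++20 way (Point is an illustrative type):

#include <compare>

struct Point {
    int x, y;
    auto operator<=>(const Point&) const = default; // == is implicitly defaulted too
};

int main() {
    Point a{1, 2}, b{1, 3};
    bool eq = (a == b); // generated memberwise, no CRTP base needed
    bool lt = (a < b);  // rewritten in terms of <=>
    (void)eq; (void)lt;
}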
C++20: Iterators to Ranges
Another common use of CRTP in the same vein as above is iterators. Writing custom iterators requires only a few fundamental operations: advance and dereference for forward iterators. But then there's a lot of extra seemingly unnecessary ceremony: pre-increment, post-increment, const iterators, typedefs, etc.
C++20 took a large step forward by introducing range concepts and the range library. The result is that it should be much less necessary to write a custom iterator. Ranges become a capable concept on their own, and there's a good suite of range combinators.
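A small sketch of the ranges style (the values are illustrative):

#include <iostream>
#include <ranges>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3, 4, 5};
    // No hand-written iterator boilerplate: compose views instead
    for (int n : v | std::views::filter([](int i) { return i % 2 == 1; })
                   | std::views::transform([](int i) { return i * i; }))
        std::cout << n << ' '; // prints: 1 9 25
}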
C++20: Concepts
C++ essentially has two systems for specifying an interface: virtual functions and concepts. They have trade-offs. Virtual functions are intrusive. But prior to C++20, concepts were implicit and emulated. One reason to use CRTP or inheritance generally would be to inject virtual functions. But in C++20, concepts are a language feature removing one big negative.
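A minimal sketch of the non-intrusive style (Drawable, Circle, and render are my own illustrative names):

#include <concepts>

// The interface is stated without any base class or virtual functions
template <typename T>
concept Drawable = requires(const T& t) {
    { t.draw() } -> std::same_as<void>;
};

struct Circle {
    void draw() const { } // satisfies Drawable without inheriting anything
};

void render(const Drawable auto& shape) { shape.draw(); }

int main() {
    render(Circle{});
}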
Future C++: Metaclasses
One value of CRTP, in addition to the boilerplate reduction, is satisfying an entire collection of type requirements at once. A comparable class defines all the comparison operators. A clonable base defines a virtual destructor and clone.
This is the topic of metaclasses, which is not yet in C++.
See Also
See also the work on customization points, which seems very interesting. And see the debates on unified function call syntax, which it seems we'll never get.
Summary
There's a very good question hiding in here about how C++20 makes it easier to reduce boilerplate, remove hacks like CRTP, and write better and clearer code. C++20 takes several steps in this regard, but they make the expression of intent easier rather than making any particular idiom easier.

Why is function with useless isolated `static` considered impure?

In Wikipedia article on Pure function, there is an example of impure function like this:
void f() {
    static int x = 0;
    ++x;
}
With the remark of "because of mutation of a local static variable".
I wonder why it is impure? It's from unit type to unit type, so it always returns the same result for the same input. And it has no side effects, because even though it has a static int variable, that variable is unobservable by any function other than f() itself, so there is no observable mutation of global state that other functions might use.
If one argues that any global mutation is disallowed, regardless of whether it is observable or not, then no real-life function can ever be considered pure, because any function allocates memory on the stack, and allocation is impure: it involves talking to the MMU via the OS, the allocated page might reside in a different physical page, and so on, and so on.
So, why does this useless isolated static int make the function impure?
The result of a pure function is fully defined by its input arguments. Here, the result means not only the returned value, but also the effect in terms of the virtual machine defined by the C/C++ standard. In other words, if the function occasionally exhibits undefined behavior with the same input arguments, it cannot be considered pure (because the behavior is different from one call to another with the same input).
In the particular case of the static local variable, that variable may become the source of a data race if f is called concurrently from multiple threads. A data race means undefined behavior. Another possible source of UB is signed integer overflow, which may eventually happen.
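A minimal sketch of how that race arises (calling f concurrently is all it takes):

#include <thread>

void f() {
    static int x = 0;
    ++x; // unsynchronized read-modify-write on a shared object
}

int main() {
    std::thread t1(f), t2(f); // both threads mutate x with no synchronization: UB
    t1.join();
    t2.join();
}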
The concept of pure functions seems to only matter in... functional languages? Correct me if I'm wrong. The Wikipedia link you give provides two references near the top, one of which is Professor Frisby's Mostly Adequate Guide to Functional Programming, where there are several different qualifications for a pure function, including:
does not have any observable side effect
This matters because one of the things we can do to a pure function (as opposed to an impure function) is memoization (from the link above), or input/output caching. Pure functions are also testable, reasonable, and self documenting.
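As a sketch, a hand-rolled memoization wrapper (square and memo_square are illustrative names) is only sound because the wrapped function is pure:

#include <map>

long long square(int n) { return static_cast<long long>(n) * n; } // pure: depends only on n

long long memo_square(int n) {
    static std::map<int, long long> cache; // caching is safe only because square is pure
    auto it = cache.find(n);
    if (it == cache.end())
        it = cache.emplace(n, square(n)).first;
    return it->second;
}

int main() {
    return memo_square(3) == 9 ? 0 : 1;
}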
I guess memoization matters for the compiler, so asking if a function is "pure" can be considered equivalent to asking if the compiler can memoize the function. It seems like the concept of a static local variable that no other piece of code touches is just bad code, and the compiler should issue a warning about it. But should the compiler optimize it away? And should the compiler try to figure out if any given static local variable actually has no side effects?
It seems like it's just easier to design the compiler to always flag a function is impure if it has a static local, instead of writing logic to hem and haw over whether the function is memoizable or not. See a local static? Boom: no longer pure.
So from a compiler's point of view, it's impure.
What about the other properties of a pure function?
testable, reasonable, and self documenting
Tests are written by a person, usually, so I'd argue this function is testable. Although some automated test-writing software might again see that it's not memoizable, and just choose to ignore writing tests for it entirely. This hypothetical software might just skip anything with local statics. Again, hypothetically.
Is the code reasonable? Certainly not. Although I'm not sure how much this matters. It doesn't do anything. It makes it hard to understand. ("Why did Bob write the function this way? Is this a Magic/More Magic situation?").
Is the code self-documenting? Again, I'd say not. But again, this is degenerate example code.
I think the biggest argument against this being considered a pure function is that a functional language compiler would be perfectly reasonable if it just assumed it wasn't pure.
I think the biggest argument for this being considered a pure function is that we can look at it with our own eyeballs and see that there's obviously no outside behavior. Ignore the fact that signed overflow is undefined. Replace this with a datatype that has defined overflow, and is atomic. Well now there's no undefined behavior, but it still looks weird.
"In conclusion, I don't care whether it's pure or not."
Let me rephrase from my previous (above) conclusion.
I'm inclined to just scan a function for any mutation of static variables and call it a day. Boom, no longer pure.
Can the function be considered pure if we really think about it? Sure. But what's the point? If the definition of a pure function needs to be changed, argue for it to be changed. Seems like you think this is a pure function. That's fine, I see the merits in that. I also see the merits in considering it an impure function.
As much as this is a non-answer, it really depends on what you're using the definition of pure for. If it's writing a compiler? Probably want to use the more conservative definition of pure that allows false positives and excludes this function. If it's to impress a bunch of sophomore CS students while listening to Zep? Go for the definition that recognizes this has no side effects and call it a day.

Access to protected member through member-pointer: is it a hack?

We all know that members specified protected in a base class can only be accessed from a derived class's own instance. This is a feature of the Standard, and it has been discussed on Stack Overflow multiple times:
Cannot access protected member of another instance from derived type's scope
Why can't my object access protected members of another object defined in common base class?
And others.
But it seems possible to walk around this restriction with member pointers, as user chtz has shown me:
struct Base { protected: int value; };

struct Derived : Base
{
    void f(Base const& other)
    {
        //int n = other.value; // error: 'int Base::value' is protected within this context
        int n = other.*(&Derived::value); // ok??? why?
        (void) n;
    }
};
Live demo on coliru
Why is this possible, is it a wanted feature or a glitch somewhere in the implementation or the wording of the Standard?
From comments emerged another question: if Derived::f is called with an actual Base, is it undefined behaviour?
The fact that a member is not accessible using a class member access expression [expr.ref] (aclass.amember) due to access control [class.access] does not make this member inaccessible using other expressions.
The expression &Derived::value (whose type is int Base::*) is perfectly standard compliant, and it designates the member value of Base. Then the expression a_base.*p where p is a pointer to a member of Base and a_base an instance of Base is also standard compliant.
So any standard-compliant compiler shall treat the expression other.*(&Derived::value); as defined behavior: it accesses the member value of other.
is it a hack?
In similar vein to using reinterpret_cast, this can be dangerous and may potentially be a source of hard to find bugs. But it's well formed and there's no doubt whether it should work.
To clarify the analogy: The behaviour of reinterpret_cast is also specified exactly in the standard and can be used without any UB. But reinterpret_cast circumvents the type system, and the type system is there for a reason. Similarly, this pointer to member trick is well formed according to the standard, but it circumvents the encapsulation of members, and that encapsulation (typically) exists for a reason (I say typically, since I suppose a programmer can use encapsulation frivolously).
[Is it] a glitch somewhere in the implementation or the wording of the Standard?
No, the implementation is correct. This is how the language has been specified to work.
A member function of Derived can obviously form &Derived::value, since value is a protected member of a base class.
The result of that operation is a pointer to a member of Base, which can be applied to a reference to Base. Member access control does not apply to pointers to members: it applies only to the names of the members.
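A compressed sketch of that distinction (B, D, and get are illustrative names):

struct B { protected: int m; };

struct D : B {
    static int B::* get() { return &D::m; } // naming the member is allowed here
};

int main() {
    B b{};
    // b.m = 1;      // error: 'm' is protected
    b.*D::get() = 1; // OK: the pointer-to-member value carries no access control
}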
From comments emerged another question: if Derived::f is called with an actual Base, is it undefined behaviour?
Not UB. Base has the member.
Just to add to the answers and zoom in a bit on the horror I can read between your lines: if you see access specifiers as 'the law', policing you to keep you from doing 'bad things', I think you are missing the point. public, protected, private, const ... are all part of a system that is a huge plus for C++. Languages without it may have many merits, but when you build large systems such things are a real asset.
Having said that: I think it's a good thing that it is possible to get around almost all the safety nets provided to you. As long as you remember that 'possible' does not mean 'good'. This is why it should never be 'easy'. But for the rest - it's up to you. You are the architect.
Years ago I could simply do this (and it may still work in certain environments):
#define private public
Very helpful for 'hostile' external header files. Good practice? What do you think? But sometimes your options are limited.
So yes, what you show is kind of a breach in the system. But hey, what keeps you from deriving and handing out public references to the member? If horrible maintenance problems turn you on - by all means, why not?
Basically what you're doing is tricking the compiler, and this is supposed to work. I always see this kind of question, and people sometimes get bad results and sometimes it works, depending on how the code translates to assembly.
I remember seeing a case with a const keyword on an integer, but then with some trickery the guy was able to change the value and successfully circumvent the compiler's awareness. The result was a wrong value for a simple mathematical operation. The reason is simple: x86 assembly does make a distinction between constants and variables, because some instructions contain constants in their opcodes. So, since the compiler believes it's a constant, it treats it as a constant and deals with it in an optimized way with the wrong CPU instruction, and bam, you have an error in the resulting number.
In other words: the compiler will try to enforce all the rules it can enforce, but you can probably eventually trick it, and you may or may not get wrong results based on what you're trying to do, so you'd better do such things only if you know what you're doing.
In your case, the pointer &Derived::value is essentially an offset in bytes from the beginning of the object. This is basically how the compiler accesses the member, so the compiler:
Doesn't see any problem with permissions, because you're accessing value through Derived at compile time.
Can do it, because you're taking a byte offset into an object that has the same layout as Derived (well, obviously, the base).
So, you're not violating any rules. You successfully circumvented the compilation rules. You shouldn't do it, exactly because of the reasons described in the links you attached, as it breaks OOP encapsulation, but, well, if you know what you're doing...

pure virtual final functions : legal in C++11

class Foo
{
public:
    virtual int foo() final = 0;
};
Compiles fine.
Isn't Foo just a waste of space, and an accident in the making? Or am I missing something?
It is almost a complete waste of space, as you've said. There is at least one admittedly contrived usage for it. The fact that it compiles, by the way, is not surprising. As long as code is legitimate, it need not "make sense" to compile.
Say you want to use Foo as a policy. That means it will be used as a template parameter, but it need not be instantiated. In fact, you really don't want anyone to ever instantiate the class (although admittedly I wouldn't know why; what could it hurt?).
This is exactly what you have here. A class with a type that you can lay your hands on, but you can't instantiate it (though making the constructor private would probably be a lot more straightforward).
As an added bonus, you could add enums or static functions inside the class scope. Those could be used without actually instantiating, and they'd be within that class' namespace. So, you have a class that's primarily usable only as type, but you still have "some functionality" bundled with it in the form of static functions.
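A sketch of that policy usage (Engine and version are my own illustrative names):

class Foo
{
public:
    virtual int foo() final = 0;       // abstract and un-overridable: no instances, ever
    static int version() { return 1; } // usable without an instance
    enum Mode { Fast, Safe };          // constants bundled in the class's scope
};

template <typename Policy>
struct Engine
{
    int run() { return Policy::version(); } // uses the policy purely as a type
};

int main()
{
    Engine<Foo> e;
    return e.run(); // Foo served as a policy but was never instantiated
}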
Most of the time, one would probably just wrap that stuff into a namespace, but who knows, in some situation, this might be the desired way.
Isn't Foo just a waste of space
Indeed it is; you can't instantiate it since it's abstract, and you can't override the function to make a non-abstract derived class.
It could be used as a way of preventing a class from being instantiated, if you want to do that for some reason; but even then it would probably make more sense to delete the default constructor.
and an accident in the making?
Not really. Since you can't do anything with the class, you can't do anything wrong with it.
If I'm reading the grammar in 9.2 correctly, this is actually legal, although I may have missed something in the notes prohibiting it.
member-declarator:
declarator virt-specifier-seq(opt) pure-specifier(opt)
The grammar then shows that virt-specifier-seq can be final and that pure-specifier is = 0.
I can't see any way this would be useful although there may be some corner case that makes use of it.

Reason for C++ member function hiding [duplicate]

Possible Duplicate:
name hiding and fragile base problem
I'm familiar with the rules involving member function hiding. Basically, a derived class with a function that has the same name as a base class function doesn't actually overload the base class function - it completely hides it.
#include <string>

struct Base
{
    void foo(int x) const
    {
    }
};

struct Derived : public Base
{
    void foo(const std::string& s) { }
};

int main()
{
    Derived d;
    d.foo("abc");
    d.foo(123); // Will not compile! Base::foo is hidden!
}
So, you can get around this with a using declaration. But my question is, what is the reason for base class function hiding? Is this a "feature" or just a "mistake" by the standards committee? Is there some technical reason why the compiler can't look in the Base class for matching overloads when it doesn't find a match for d.foo(123)?
Name lookup works by looking in the current scope for matching names; if nothing is found, it looks in the enclosing scope; if nothing is found there, it looks in the next enclosing scope, and so on, until reaching the global namespace.
This isn't specific to classes, you get exactly the same name hiding here:
#include <iostream>

namespace outer
{
    void foo(char c) { std::cout << "outer\n"; }

    namespace inner
    {
        void foo(int i) { std::cout << "inner\n"; }
        void bar() { foo('c'); }
    }
}

int main()
{
    outer::inner::bar();
}
Although outer::foo(char) is a better match for the call foo('c'), name lookup stops after finding outer::inner::foo(int) (i.e. outer::foo(char) is hidden), and so the program prints inner.
If member function names weren't hidden, that would mean name lookup in class scope behaved differently from non-class scope, which would be inconsistent and confusing, and would make C++ even harder to learn.
So there's no technical reason the name lookup rules couldn't be changed, but they'd have to be changed both for member functions and for other kinds of name lookup, and it would make compilers slower, because they'd have to continue searching for names even after finding matches in the current scope. Sensibly, if there's a name in the current scope, it's probably the one you wanted. A call in a scope probably wants to find names in that scope; e.g. if two functions are in the same namespace, they're probably related (part of the same module or library), so if one uses the name of the other it probably means to call the one in the same scope. If that's not what you want, use explicit qualification or a using declaration to tell the compiler the other name should be visible in that scope.
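For reference, here is a sketch of the using-declaration fix applied to the question's example:

#include <string>

struct Base
{
    void foo(int x) const { }
};

struct Derived : Base
{
    using Base::foo;                   // re-expose the hidden base overload
    void foo(const std::string& s) { }
};

int main()
{
    Derived d;
    d.foo("abc"); // calls Derived::foo
    d.foo(123);   // now finds Base::foo through the using-declaration
}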
Is this a "feature" or just a "mistake" by the standards committee?
It's definitely not a mistake, since it's clearly stipulated in the standard. It's a feature.
Is there some technical reason why the compiler can't look in the Base class for matching overloads when it doesn't find a match for d.foo(123)?
Technically, a compiler could look in the base class. Technically. But if it did, it would break the rules set by the standard.
But my question is, what is the reason for base class function hiding?
Unless someone from the committee comes up with an answer, I think we can only speculate. Basically, there were two options:
if I declare a function with the same name in a derived class, keep the base class's functions with the same name directly accessible through a derived class
don't
It could have been determined by flipping a coin (...ok, maybe not).
In general, what are the reasons for wanting a function with the same name as one in the base class? If it's for different functionality, you'd more likely use polymorphism instead. If it's for handling different cases (different parameters), and these cases aren't present in the base class, a strategy pattern might be more appropriate for the job. So most likely function hiding comes into effect when you actually do want to hide the function: you're not happy with the base class implementation, so you provide your own, with the option of using using, but only when you want to.
I think it's just a mechanism to make you think twice before having a function with the same name & different signature.
I believe @Lol4t0 is pretty much correct, but I'd state things much more strongly. If you allowed this, you'd end up with two possibilities: either you make a lot of other changes throughout almost the entirety of the language, or else you end up with something almost completely broken.
The other changes you'd make to allow this to work would be to completely revamp how overloading is done -- you'd have to change at least the order of the steps that were taken, and probably the details of the steps themselves. Right now, the compiler looks up the name, then forms an overload set, resolves the overload, then checks access to the chosen overload.
To make this work even sort of well, you'd pretty much have to change that to check access first, and only add accessible functions to the overload set. With that, at least the example in @Lol4t0's answer could continue to compile, because Base::foo would never be added to the overload set.
That still means, however, that adding to the interface of the base class could cause serious problems. If Base didn't originally contain foo, and a public foo were added, then the call in main to d.foo() would suddenly do something entirely different, and (again) it would be entirely outside the control of whoever wrote Derived.
To cure that, you'd just about have to make a fairly fundamental change in the rules: prohibit implicit conversions of function arguments. Along with that, you'd change overload resolution so in case of a tie, the most derived/most local version of a function was favored over a less derived/outer scope. With those rules, the call to d.foo(5.0) could never resolve to Derived::foo(int) in the first place.
That, however, would only leave two possibilities: either calls to free functions would have different rules than calls to member functions (implicit conversions allowed only for free functions) or else all compatibility with C would be discarded entirely (i.e., also prohibit implicit conversions in all function arguments, which would break huge amounts of existing code).
To summarize: to change this without breaking the language entirely, you'd have to make quite a few other changes as well. It would almost certainly be possible to create a language that worked that way, but by the time you were done it wouldn't be C++ with one minor change -- it would be an entirely different language that wasn't much like C++ or C, or much of anything else.
I can only propose that this decision was made to make things simpler.
Imagine that the derived function overloaded the base one. Then should the following code generate a compilation error, or use Derived's function?
struct Base
{
private:
    void foo(float);
};

struct Derived : public Base
{
public:
    void foo(int);
};

int main()
{
    Derived d;
    d.foo(5.0f);
}
According to the existing behavior of overload resolution, this should generate an error.
Now imagine that in the first version Base had no foo(float), and in a second version it appears. Now a change to the implementation of the base class breaks the interface of the derived class.
If you are the developer of Derived and cannot influence the developers of Base, and a lot of clients use your interface, you are in a bad situation now.