Why aren't all functions virtual in C++? [duplicate] - c++

Is there any real reason not to make a member function virtual in C++? Of course, there's always the performance argument, but that doesn't seem to stick in most situations since the overhead of virtual functions is fairly low.
On the other hand, I've been bitten a couple of times by forgetting to make a function virtual when it should have been. And that seems to be a bigger argument than the performance one. So is there any reason not to make member functions virtual by default?

One way to read your question is "Why doesn't C++ make every function virtual by default, unless the programmer overrides that default?" Without consulting my copy of "The Design and Evolution of C++": this would add a vtable pointer to objects of every class, unless every member function were explicitly made non-virtual. It seems to me this would have required more effort in the compiler implementation, and it would have slowed the adoption of C++ by providing fodder to the performance-obsessed (I count myself in that group).
Another way to read your question is "Why don't C++ programmers make every function virtual unless they have very good reasons not to?" The performance excuse is probably the reason. Depending on your application and domain, this might or might not be a good reason. For example, part of my team works on market data ticker plants. At 100,000+ messages/second on a single stream, the virtual function overhead would be unacceptable. Other parts of my team work on complex trading infrastructure. Making most functions virtual is probably a good idea in that context, as the extra flexibility beats the micro-optimization.

Stroustrup, the designer of the language, says:
Because many classes are not designed to be used as base classes. For example, see class complex.
Also, objects of a class with a virtual function require space needed by the virtual function call mechanism - typically one word per object. This overhead can be significant, and can get in the way of layout compatibility with data from other languages (e.g. C and Fortran).
See The Design and Evolution of C++ for more design rationale.

There are several reasons.
First, performance: yes, the overhead of a virtual function is relatively low when seen in isolation. But it also prevents the compiler from inlining, and that is a huge source of optimization in C++. The C++ standard library performs as well as it does because it can inline the dozens and dozens of small one-liners it consists of. Additionally, a class with virtual methods is not a POD datatype, so a lot of restrictions apply to it. It can't be copied just by memcpy'ing, it becomes more expensive to construct, and it takes up more space. There are a lot of things that suddenly become illegal or less efficient once a non-POD type is involved.
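To make the POD point concrete, here is a minimal C++11 sketch (the type names are invented for the example): adding a single virtual function is enough to change what you are allowed to do with the type.

#include <cstring>
#include <type_traits>

struct PlainPoint { int x, y; };                            // no virtuals: trivially copyable
struct VirtualPoint { int x, y; virtual void moved() {} };  // adding one virtual changes that

static_assert(std::is_trivially_copyable<PlainPoint>::value,
              "memcpy is well-defined for this type");
// The same assertion on VirtualPoint would fail to compile.

void clone(PlainPoint& dst, const PlainPoint& src)
{
    std::memcpy(&dst, &src, sizeof src);                    // fine for trivially copyable types
}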
And second, good OOP practice. The point of a class is that it provides some kind of abstraction, hides its internal details, and provides a guarantee that "this class will behave so and so, and will always maintain these invariants. It will never end up in an invalid state".
That is pretty hard to live up to if you allow others to override any member function. The member functions you define in the class are there to ensure that the invariant is maintained. If we didn't care about that, we could just make the internal data members public and let people manipulate them at will. But we want our class to be consistent, and that means we have to specify the behavior of its public interface. That may involve specific customization points, made by marking individual functions virtual, but it almost always also involves making most methods non-virtual, so that they can do the job of ensuring that the invariant is maintained. The non-virtual interface idiom is a good example of this:
http://www.gotw.ca/publications/mill18.htm
Third, inheritance isn't often needed, especially not in C++. Templates and generic programming (static polymorphism) can in many cases do a better job than inheritance (runtime polymorphism). Yes, you sometimes still need virtual methods and inheritance, but it is certainly not the default. If it is, you're Doing It Wrong. Work with the language, rather than trying to pretend it is something else. It's not Java, and unlike in Java, in C++ inheritance is the exception, not the rule.
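As a hedged illustration of that trade-off (the Shape and area names are invented for the example), the same customization can be expressed either with a virtual interface or with a template; only the latter keeps the per-element calls inlinable.

// Runtime polymorphism: every call goes through the vtable and cannot be inlined
// across the Shape* boundary.
struct Shape { virtual ~Shape() {} virtual double area() const = 0; };

double total_area_dynamic(const Shape* const shapes[], int n)
{
    double sum = 0;
    for (int i = 0; i < n; ++i) sum += shapes[i]->area();
    return sum;
}

// Static polymorphism: any type with an area() member works, calls can be inlined,
// and no vptr is added to the element type.
template <typename ShapeT>
double total_area_static(const ShapeT shapes[], int n)
{
    double sum = 0;
    for (int i = 0; i < n; ++i) sum += shapes[i].area();
    return sum;
}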

I'll ignore performance and memory cost, because I have no way to measure them for the "in general" case...
Classes with virtual member functions are non-POD. So if you want to use your class in low-level code which relies on it being POD, then (among other restrictions) any member functions must be non-virtual.
Examples of things you can portably do with an instance of a POD class:
copy it with memcpy (provided the target address has sufficient alignment).
access fields with offsetof()
in general, treat it as a sequence of char
... um
that's about it. I'm sure I've forgotten something.
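A small sketch of those guarantees (the Packet struct is invented for the example); none of this remains portable once a virtual function is added to the type.

#include <cstddef>   // offsetof
#include <cstring>   // memcpy

struct Packet { int id; char payload[16]; };               // POD: no virtuals, no constructors

void roundtrip(const Packet& in, Packet& out)
{
    std::memcpy(&out, &in, sizeof in);                     // bitwise copy is well-defined
    std::size_t off = offsetof(Packet, payload);           // member layout can be queried
    const char* bytes = reinterpret_cast<const char*>(&in);// legal to inspect as raw chars
    (void)off; (void)bytes;                                // silence unused-variable warnings
}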
Other things people have mentioned that I agree with:
Many classes are not designed for inheritance. Making their methods virtual would be misleading, since it implies child classes might want to override the method, and there shouldn't be any child classes.
Many methods are not designed to be overridden: same thing.
Also, even when things are intended to be subclassed / overridden, they aren't necessarily intended for run-time polymorphism. Very occasionally, despite what OO best practice says, what you want inheritance for is code reuse. For example if you're using CRTP for simulated dynamic binding. So again you don't want to imply your class will play nicely with runtime polymorphism by making its methods virtual, when they should never be called that way.
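For reference, a minimal sketch of the kind of CRTP-based "simulated dynamic binding" meant here (the class names are illustrative):

// The base class is parameterized on the derived type and casts 'this' down to it,
// so the "override" is resolved at compile time with no virtual dispatch.
template <typename Derived>
struct Counter
{
    void tick() { static_cast<Derived*>(this)->on_tick(); }
};

struct LoudCounter : Counter<LoudCounter>
{
    void on_tick() { /* react to the tick */ }   // replaces behavior without 'virtual'
};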
In summary, things which are intended to be overridden for runtime polymorphism should be marked virtual, and things which aren't, shouldn't. If you find that almost all your member functions are intended to be virtual, then mark them virtual unless there's a reason not to. If you find that most of your member functions are not intended to be virtual, then don't mark them virtual unless there's a reason to do so.
It's a tricky issue when designing a public API, because flipping a method from one to the other is a breaking change, so you have to get it right first time. But you don't necessarily know before you have any users, whether your users are going to want to "polymorph" your classes. Ho hum. The STL container approach, of defining abstract interfaces and banning inheritance entirely, is safe but sometimes requires users to do more typing.

The following post is mostly opinion, but here goes:
Object oriented design is three things, and encapsulation (information hiding) is the first of these things. If a class design is not solid on this, then the rest doesn't really matter very much.
It has been said before that "inheritance breaks encapsulation" (Alan Snyder, '86). A good discussion of this is present in the Gang of Four design patterns book. A class should be designed to support inheritance in a very specific manner. Otherwise, you open the possibility of misuse by inheritors.
I would make the analogy that making all of your methods virtual is akin to making all your members public. A bit of a stretch, I know, but that's why I used the word 'analogy'.

As you are designing your class hierarchy, it may make sense to write a function that should not be overridden. One example is if you are doing the "template method" pattern, where you have a public method that calls several private virtual methods. You would not want derived classes to override that; everyone should use the base definition.
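A compact sketch of that pattern (the names are invented for the example): the public run() is deliberately non-virtual, so derived classes can customize the steps but not the overall sequence.

class Report
{
public:
    void run()                     // non-virtual: the overall sequence is fixed here
    {
        gather();
        format();
    }
private:
    virtual void gather() = 0;     // the individual steps are the customization points
    virtual void format() = 0;
};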
There is no "final" keyword, so the best way to communicate to other developers that a method should not be overridden is to make it non-virtual. (other than easily ignored comments)
At the class level, making the destructor non-virtual communicates that the class should not be used as a base class, such as the STL containers.
Making a method non-virtual communicates how it should be used.

The Non-Virtual Interface idiom makes use of non-virtual methods. For more information, please refer to Herb Sutter's "Virtuality" article.
http://www.gotw.ca/publications/mill18.htm
And comments on the NVI idiom:
http://www.parashift.com/c++-faq-lite/strange-inheritance.html#faq-23.3
http://accu.org/index.php/journals/269 [See sub-section]

Related

Should classes be final by default?

Quoting the Effective C++ (Scott Meyers), third edition, item 7:
Declare destructors virtual in polymorphic base classes.
This implies that classes intended to be inherited from should, at a minimum, have the destructor virtual.
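A short sketch of what Item 7 guards against (the types are invented for the example); without the virtual destructor, the delete performed by the smart pointer below would be undefined behavior.

#include <memory>

struct Job
{
    virtual ~Job() {}              // virtual: deleting through a Job* is well-defined
    virtual void execute() = 0;
};

struct PrintJob : Job
{
    void execute() override {}     // override is C++11; omit it for older compilers
};

int main()
{
    std::unique_ptr<Job> j(new PrintJob);  // destroyed through the base pointer
    j->execute();
}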
When writing some applications/libraries, some classes (quite a few, I would say) are not designed with the intention of being inherited from. We usually rely on conventions, one of which is that you do not inherit from classes that are not your own, or at least not without checking whether it is safe to do so.
Now, a coding standard may require writing and designing classes such that inheritance is always safe. I feel like this might be too much. C++11 has added the final keyword, which makes sure a class cannot be inherited from. Would you recommend marking all classes that are not designed for inheritance as final by default?
This would make the compiler enforce what we have been doing by convention for ages. But that convention might be considered well enough understood and easy enough to follow (especially since inheritance is usually avoided, with composition being preferred) that it might just be another thing to take care of while writing the code.
Just because a class does not have any virtual functions does not mean that inheriting from it does not make sense. It just means it was probably not intended and that you have to be careful not to delete any instances of the derived objects through pointers to the base class. private inheritance would often be used in such a case, as an alternative to composition, to signal that inheriting from the base is purely an implementation detail and the derived class is not to be considered as having a "is-a" relationship with the base.
Marking a class final is a great way to express that a class is not intended to be derived from, at all, by making it impossible. So if that's what you want, slap a final on the class.
In some cases, marking a (derived) class final to prevent any further derivatives can also have the added benefit of making it easier for the compiler to devirtualize function calls, thus leading to better performance in some cases.
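For completeness, a minimal sketch of both uses of final (the names are made up for illustration):

class Token final                  // no class may derive from Token at all
{
public:
    int value() const { return v; }
private:
    int v = 0;
};

struct Base { virtual void step() {} };

struct Leaf : Base
{
    // Overrides Base::step but forbids any further overriding; calls made through
    // a Leaf& or Leaf* can then be devirtualized by the compiler.
    void step() final {}
};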
In my opinion, final should (usually) be the default and you can then remove it later if you find that you need/want to derive from the class after all.
But that's just opinion.

Non-friend, non-member functions increase encapsulation?

In the article How Non-Member Functions Improve Encapsulation, Scott Meyers argues that there is no way to prevent non-member functions from "happening".
Syntax Issues
If you're like many people with whom I've discussed this issue, you're
likely to have reservations about the syntactic implications of my
advice that non-friend non-member functions should be preferred to
member functions, even if you buy my argument about encapsulation. For
example, suppose a class Wombat supports the functionality of both
eating and sleeping. Further suppose that the eating functionality
must be implemented as a member function, but the sleeping
functionality could be implemented as a member or as a non-friend
non-member function. If you follow my advice from above, you'd declare
things like this:
class Wombat {
public:
    void eat(double tonsToEat);
    void sleep(double hoursToSnooze);
};

w.eat(.564);
w.sleep(2.57);
Ah, the uniformity of it all! But this uniformity is misleading,
because there are more functions in the world than are dreamt of by
your philosophy.
To put it bluntly, non-member functions happen. Let us continue with
the Wombat example. Suppose you write software to model these fetching
creatures, and imagine that one of the things you frequently need your
Wombats to do is sleep for precisely half an hour. Clearly, you could
litter your code with calls to w.sleep(.5), but that would be a lot
of .5s to type, and at any rate, what if that magic value were to
change? There are a number of ways to deal with this issue, but
perhaps the simplest is to define a function that encapsulates the
details of what you want to do. Assuming you're not the author of
Wombat, the function will necessarily have to be a non-member, and
you'll have to call it as such:
void nap(Wombat& w) { w.sleep(.5); }
Wombat w;
nap(w);
And there you have it, your dreaded syntactic inconsistency. When you
want to feed your wombats, you make member function calls, but when
you want them to nap, you make non-member calls.
If you reflect a bit and are honest with yourself, you'll admit that
you have this alleged inconsistency with all the nontrivial classes
you use, because no class has every function desired by every client.
Every client adds at least a few convenience functions of their own,
and these functions are always non-members. C++ programmers are used to
this, and they think nothing of it. Some calls use member syntax, and
some use non-member syntax. People just look up which syntax is
appropriate for the functions they want to call, then they call them.
Life goes on. It goes on especially in the STL portion of the Standard
C++ library, where some algorithms are member functions (e.g., size),
some are non-member functions (e.g., unique), and some are both (e.g.,
find). Nobody blinks. Not even you.
I can't really wrap my head around what he says in the bold/italic sentence. Why will it necessarily have to be implemented as a non-member? Why not just inherit your own MyWombat class from the Wombat class, and make the nap() function a member of MyWombat?
I'm just starting out with C++, but that's how I would probably do it in Java. Is this not the way to go in C++? If not, why so?
In theory, you sort of could do this, but you really don't want to. Let's consider why you don't want to do this (for the moment, in the original context--C++98/03, and ignoring the additions in C++11 and newer).
First of all, it would mean that essentially all classes have to be written to act as base classes--but for some classes, that's just a lousy idea, and may even run directly contrary to the basic intent (e.g., something intended to implement the Flyweight pattern).
Second, it would render most inheritance meaningless. For an obvious example, many classes in C++ support I/O. As it stands now, the idiomatic way to do that is to overload operator<< and operator>> as free functions. Right now, the intent of an iostream is to represent something that's at least vaguely file-like--something into which we can write data, and/or out of which we can read data. If we supported I/O via inheritance, it would also mean that anything that can be read from or written to is itself something vaguely file-like.
This simply makes no sense at all. An iostream represents something at least vaguely file-like, not all the kinds of objects you might want to read from or write to a file.
Worse, it would render nearly all the compiler's type checking nearly meaningless. Just for example, writing a distance object into a person object makes no sense--but if they both support I/O by being derived from iostream, then the compiler wouldn't have a way to sort that out from one that really did make sense.
Unfortunately, that's just the tip of the iceberg. When you inherit from a base class, you inherit the limitations of that base class. For example, if you're using a base class that doesn't support copy assignment or copy construction, objects of the derived class won't/can't either.
Continuing the previous example, that would mean if you want to do I/O on an object, you can't support copy construction or copy assignment for that type of object.
That, in turn, means that objects that support I/O would be disjoint from objects that support being put in collections (i.e., collections require capabilities that are prohibited by iostreams).
Bottom line: we almost immediately end up with a thoroughly unmanageable mess, where none of our inheritance would any longer make any real sense at all and the compiler's type checking would be rendered almost completely useless.
Because you are then creating a very strong dependency between your new class and the original Wombat. Inheritance is not necessarily good; it is the second strongest relationship between any two entities in C++. Only friend declarations are stronger.
I think most of us did a double-take when Meyers first published that article, but it is generally acknowledged to be true by now. In the world of modern C++ your first instinct should not be to derive from a class. Deriving is the last resort, unless you are adding a new class that really is a specialization of an existing class.
Matters are different in Java. There you inherit. You really have no other choice.
Your idea doesn't work across the board, as Jerry Coffin describes, however it is viable for simple classes that are not part of a hierarchy, such as Wombat here.
There are a couple of dangers to watch out for, though:
Slicing - if there is a function that accepts a Wombat by value, then you have to cut off myWombat's extra appendages and they don't grow back. This doesn't occur in Java in which all objects are passed by reference.
Base class pointer - If Wombat is non-polymorphic (i.e. no v-table), it means you cannot easily mix Wombat and myWombat in a container. Deleting a pointer will not properly delete myWombat varieties. (However you could use shared_ptr which tracks a custom deleter).
Type mismatch: If you write any functions that accept a myWombat then they cannot be called with a Wombat. On the other hand, if you write your function to accept a Wombat then you can't use the syntactic sugar of myWombat. Casting doesn't fix this; your code won't interact properly with other parts of the interface.
A way of avoiding all these dangers would be to use containment instead of inheritance: myWombat will have a Wombat private member, and you write forwarding functions for any Wombat properties you want to expose. This is more work in terms of design and maintenance of the myWombat class; but it eliminates the possibility for anyone to use your class erroneously, and it enables you to work around problems such as the contained class being non-copyable.
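A rough sketch of that containment approach, reusing the Wombat class from the quoted article (the forwarded members shown are just the obvious ones):

class MyWombat
{
public:
    // Forward only the Wombat operations we want to expose.
    void eat(double tonsToEat)       { w.eat(tonsToEat); }
    void sleep(double hoursToSnooze) { w.sleep(hoursToSnooze); }

    // Plus the convenience operation that motivated all of this.
    void nap()                       { w.sleep(.5); }

private:
    Wombat w;   // contained, not inherited: no slicing, no base-pointer deletion issues
};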
For polymorphic objects in a hierarchy, you don't have the slicing and base-class-pointer problems, although the type mismatch problem is still there. In fact it's worse. Suppose the hierarchy is:
Animal <-- Marsupial <-- Wombat <-- NorthernHairyNosedWombat
You come along and derive myWombat from Wombat. However, this means that NorthernHairyNosedWombat is a sibling of myWombat, whereas it was a child of Wombat.
So any nice sugar functions you add to myWombat are not usable by NorthernHairyNosedWombat anyway.
Summary: IMHO the benefits are not worth the mess it leaves behind.

Why should I mark all methods virtual in C++? Is there a trade-off?

I know why and how virtual methods work, and most of the time people tell me I should always mark a method virtual, but I don't understand why if I'm not going to override it. And I also know there's a tiny memory issue.
Please explain to me why I should mark all methods virtual and what the trade-off is.
Code example:
class Point
{
    int x, y;
public:
    virtual void setX(int i);
    virtual void setY(int i);
};
(That question is not the same as Should I mark all methods virtual? because I want to know the trade-off, and because the programming language in this case is C++, not C#)
OBS: I'm sorry if there's any grammar error, English is not my native language.
No, you should not "mark all methods as virtual".
If you want the method to be virtual, then mark it as such. If not, leave the keyword out.
There is an overhead for virtual methods compared to regular ones. If you want to read more about this, check out the Wikipedia page about vtables.
The real reason to make member functions non-virtual is to enforce class invariants.
Advice to make all member functions virtual generally means that either:
The people giving the advice don't understand the class, or
the people giving the advice don't understand OO design.
Yes, there are a few cases (e.g., some abstract base classes, where the only class invariant is the existence of specific functions) in which all the functions should be virtual. Those are the exception though. In most classes, virtual functions should be restricted to those that you really intend to allow derived classes to provide new/different behavior.
As for the discussion of things like vtables and the overhead of virtual function calls, I'd say they're correct as far as they go, but they miss the really big point. Whether a particular function should or shouldn't be virtual is primarily a question of class design and only secondarily a question of function call overhead. The two don't do the same thing, so trying to compare overhead rarely makes sense.
That is not the case, i.e., if you don't need a virtual function then don't use it. Also, as per Bjarne Stroustrup's principle, you pay only for what you use.
In C++:
Virtual functions have a slight performance penalty. Normally it is too small to make any difference, but in a tight loop it might be significant.
A virtual function increases the size of each object by one pointer. Again this is typically insignificant, but if you create millions of small objects it could be a factor.
Classes with virtual functions are generally meant to be inherited from. The derived classes may replace some, all or none of the virtual functions. This can create additional complexity, and complexity is the programmer's mortal enemy. For example, a derived class may poorly implement a virtual function. This may break a part of the base class that relies on the virtual function.
One of C++'s basic principles is that you don't pay for what you don't use. Virtual functions cost more than normal member functions in both time and space, so you shouldn't use them indiscriminately, without regard to whether you'll ever actually need them.
Making methods virtual has slight costs (more code, more complexity, larger binaries, slower method calls), and if the class is not inherited from it brings no benefit. Classes need to be designed for inheritance, otherwise inheriting from them is just begging to shoot yourself in the foot. So no, you should not always make every method virtual. The people who tell you this are probably just too inheritance-happy.
It is not true that all functions should be marked as virtual.
Indeed, there's a pattern for enforcing pre/postconditions which explicitly requires that public members are not virtual. It works as follows:
class Whatever
{
public:
    int frobnicate(int);
protected:
    virtual int do_frobnicate(int);
};

int Whatever::frobnicate(int x)
{
    check_preconditions(x);
    int result = do_frobnicate(x);
    check_postconditions(x, result);
    return result;
}
Since derived classes cannot override the public function, they cannot remove the pre/postcondition checks. They can, however, override the protected do_frobnicate which does the actual work.
(Edit - I got to this question by way of a duplicate C# question, but the answer is still useful, I think! Edited to reflect that:)
Actually, C# has a good "official" reason: https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and-structs/versioning-with-the-override-and-new-keywords
The first sentence there is:
The C# language is designed so that versioning between base and derived classes in different libraries can evolve and maintain backward compatibility.
So if you're writing a class and it's possible end-users will make derived classes, and it's possible different versions will get out of sync...all of a sudden it starts to be important. If you have carefully protected core aspects of your code, then if you update things, people can still use the old derived class (hopefully).
On the other hand, if you are okay with no one being able to use derived classes until their authors have updated to your newest version, everything can be virtual. I have seen this scenario in several "early access" games that allow modding - when the game version increases, all of a sudden a lot of mods are broken because they relied on things working one way... and they changed. (Okay, not all changes are related to this, but some are.)
It really depends on your usage scenario. If people can keep using your old version, maybe they don't care if you've updated it and are happy to keep using it with their derived classes. In a large business scenario, making everything virtual may very well be a recipe for breaking many things at once when someone updates something.
Does this apply to C++ as well? I don't see why not - C++ is also used for massive projects and would also face the dangers of multiple simultaneous versions.

If a class might be inherited, should every function be virtual?

In C++, a coder doesn't know whether other coders will inherit his class. Should he make every function in that class virtual? Are there any drawbacks? Or is it just not acceptable at all?
In C++, you should only design a class to be inherited from if you intend for it to be used polymorphically. The way that you treat polymorphic objects in C++ is very different from how you treat other objects. You don't tend to put polymorphic classes on the stack, or pass them to and return them from functions by value, since this can lead to slicing. Polymorphic objects tend to be heap-allocated and to be passed around and returned by pointer or by reference, etc.
If you design a class to not be inherited from and then inherit from it, you cause all sorts of problems. If the destructor isn't marked virtual, you can't delete the object through a base class pointer without causing undefined behavior. Without the member functions marked virtual, they can't be overridden in a derived class.
As a general rule in C++, when you design a class, determine whether you want it to be inherited from. If you do, mark the appropriate functions virtual and give it a virtual destructor. You might also disable the copy assignment operator to avoid slicing. Similarly, if you want the class not to be inheritable, don't give it any of these functions. In most cases it's a logic error to inherit from a class that wasn't designed to be inherited from, and in most of the cases where you'd want to do this, you can use composition instead of inheritance to achieve the same effect.
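A hedged sketch of that checklist (the class name is invented): the destructor is virtual, and copying is disabled in the C++11 style so the type can only be handled polymorphically.

class Widget
{
public:
    virtual ~Widget() {}                       // safe to delete through a Widget*
    virtual void refresh() = 0;                // extension point for derived classes

    Widget(const Widget&) = delete;            // C++11: no copies, so no accidental slicing
    Widget& operator=(const Widget&) = delete;

protected:
    Widget() {}                                // only derived classes construct the base part
};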
No, not usually.
A non-virtual function enforces class-invariant behavior. A virtual function doesn't. As such, the person writing the base class should think about whether the behavior of a particular function is/should be class invariant or not.
While it's possible for a design to allow all behaviors to vary in derived classes, it's fairly unusual. It's usually a pretty good clue that the person who wrote the class either didn't think much about its design or lacked the resolve to make a decision.
In C++ you design your class to be used either as a value type or a polymorphic type. See, for example, C++ FAQ.
If you are making a class to be used by other people, you should put a lot of thought into your interface and try to work out how your class will be used. Then make the decisions like which functions should be virtual.
Or better yet, write a test case for your class, using it how you expect it to be used, and then make the interface work for that. You might be surprised what you find out doing it. Things you thought were absolutely necessary might turn out to be rarely needed, and things that you thought were not going to be used might turn out to be the most useful methods. Doing it this way around will save you time by avoiding unnecessary work in the long run, and you will end up with a solid design.
Jerry Coffin and Dominic McDonnell have already covered the most important points.
I'll just add an observation: in the time of MFC (the mid 1990s) I was very annoyed with the lack of ways to hook into things. For example, the documentation suggested copying MFC's source code for printing and modifying it, instead of overriding behavior, because nothing was virtual there.
There are of course a zillion+1 ways to provide "hooks", but virtual methods are one easy way. They're needed in badly designed classes, so that the client code can fix things, but in those badly designed classes the methods are not virtual. For classes with better design there is not so much need to override behavior, so for those classes making methods virtual by default (and non-virtual only as an active choice) can be counter-productive; as Jerry remarked, virtuals provide opportunities for derived classes to screw up.
There are design patterns that can be employed to minimize the possibilities of screw-ups.
For example, wrapping internal virtuals in exposed non-virtual methods with sanity checks, and, for example, using decoupled event handling (where appropriate) instead of virtuals.
Cheers & hth.,
When you create a class, and you want that class to be used polymorphically you have to consider that the class has two different interfaces. The user interface is defined by the set of public functions that are available in your base class, and that should pretty much cover all operations that users want to perform on objects of your class. This interface is defined by the access qualifiers, and in particular the public qualifier.
There is a second interface, that defines how your class is to be extended. At that level you have to think on what behavior you want to be overridden by extending classes, and what elements of your object you want to provide to extending classes. You offer access to derived classes by means of the protected qualifier, and you offer extension points by means of virtual functions.
You should try to follow the Non-Virtual Interface idiom whenever possible. That idiom (google for it) basically tries to fully separate the two interfaces by not having public virtual functions. Users call non-virtual functions, and those in turn call on configurable functionalities by means of protected/private virtual functions. This clearly separates extension points from the class interface.
There is a single case, where virtual has to be part of the user interface: the destructor. If you want to offer your users the ability to destroy derived objects through pointers to the base, then you have to provide a virtual destructor. Else you just provide a protected non-virtual one.
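A small sketch of that last rule (the class names are invented): either the destructor is public and virtual, or it is protected and non-virtual, depending on whether deletion through the base pointer is part of the user interface.

// Deletion through a Handler* is part of the user interface, so the destructor
// is public and virtual; the rest follows the Non-Virtual Interface idiom.
class Handler
{
public:
    virtual ~Handler() {}
    void handle() { do_handle(); }     // non-virtual user interface
private:
    virtual void do_handle() = 0;      // extension point for derived classes
};

// Users never delete through a Mixin*, so a protected non-virtual destructor suffices.
class Mixin
{
public:
    void helper() {}
protected:
    ~Mixin() {}                        // derived objects are still destroyed normally
};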
He should write the functions as they are; he shouldn't make them virtual at all in the circumstances you describe.
The reasons being:
1. The class author obviously has specific uses in mind for the functions he writes.
2. The derived class may or may not make use of these functions, as its requirements dictate.
3. Any function can still be redefined (hidden) in a derived class without any errors.

Class design vs. IDE: Are nonmember nonfriend functions really worth it?

In the (otherwise) excellent book C++ Coding Standards, Item 44, titled "Prefer writing nonmember nonfriend functions", Sutter and Alexandrescu recommend that only functions that really need access to the members of a class be themselves members of that class. All other operations which can be written by using only member functions should not be part of the class. They should be nonmembers and nonfriends. The arguments are that:
It promotes encapsulation, because there is less code that needs access to the internals of a class.
It makes writing function templates easier, because you don't have to guess each time whether some function is a member or not.
It keeps the class small, which in turn makes it easier to test and maintain.
Although I see the value in these arguments, I see a huge drawback: my IDE can't help me find these functions! Whenever I have an object of some kind and I want to see what operations are available on it, I can't just type "pMysteriousObject->" and get a list of member functions anymore.
Keeping a clean design is in the end about making your programming life easier. But this would actually make mine much harder.
So I'm wondering if it's really worth the trouble. How do you deal with that?
Scott Meyers has a similar opinion to Sutter, see here.
He also clearly states the following:
"Based on his work with various string-like classes, Jack Reeves has observed that some functions just don't "feel" right when made non-members, even if they could be non-friend non-members. The "best" interface for a class can be found only by balancing many competing concerns, of which the degree of encapsulation is but one."
If a function would be something that "just makes sense" to be a member function, make it one. Likewise, if it isn't really part of the main interface, and "just makes sense" to be a non-member, do that.
One note is that with overloaded operators such as operator==(), the syntax stays the same. So in this case you have no reason not to make it a non-member, non-friend free function declared in the same place as the class, unless it really needs access to private members (in my experience it rarely will). And even then you can define operator!=() as a non-member, in terms of operator==().
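A brief sketch of that idea (the Money class is invented for the example): operator== needs only the public interface, and operator!= is then defined in terms of it.

class Money
{
public:
    explicit Money(long c) : cents_(c) {}
    long cents() const { return cents_; }
private:
    long cents_;
};

// Non-member, non-friend: both operators use only the public interface, and the
// call syntax (a == b, a != b) is exactly what a member version would give you.
inline bool operator==(const Money& a, const Money& b) { return a.cents() == b.cents(); }
inline bool operator!=(const Money& a, const Money& b) { return !(a == b); }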
I don't think it would be wrong to say that between them, Sutter, Alexandrescu, and Meyers have done more for the quality of C++ than anybody else.
One simple question they ask is:
If a utility function has two independent classes as parameters, which class should "own" the member function?
Another issue, is you can only add member functions where the class in question is under your control. Any helper functions that you write for std::string will have to be non-members since you cannot re-open the class definition.
For both of these examples, your IDE will provide incomplete information, and you will have to use the "old fashion way".
Given that the most influential C++ experts in the world consider that non-member functions with a class parameter are part of the class's interface, this is more of an issue with your IDE than with the coding style.
Your IDE will likely change in a release or two, and you may even be able to get its developers to add this feature. If you change your coding style to suit today's IDE, you may well find that you have bigger problems in the future with unextendable/unmaintainable code.
I'm going to have to disagree with Sutter and Alexandrescu on this one. I think if the behavior of function foo() falls within the realm of class Bar's responsibilities, then foo() should be part of Bar.
The fact that foo() doesn't need direct access to Bar's member data doesn't mean it isn't conceptually part of Bar. It can also mean that the code is well factored. It's not uncommon to have member functions which perform all their behavior via other member functions, and I don't see why it should be.
I fully agree that peripherally-related functions should not be part of the class, but if something is core to the class responsibilities, there's no reason it shouldn't be a member, regardless of whether it is directly mucking around with the member data.
As for these specific points:
It promotes encapsulation, because there is less code that needs access to the internals of a class.
Indeed, the fewer functions that directly access the internals, the better. That means that having member functions do as much as possible via other member functions is a good thing. Splitting well-factored functions out of the class just leaves you with a half-class, that requires a bunch of external functions to be useful. Pulling well-factored functions away from their classes also seems to discourage the writing of well-factored functions.
It makes writing function templates easier, because you don't have to guess each time whether some function is a member or not.
I don't understand this at all. If you pull a bunch of functions out of classes, you've thrust more responsibility onto function templates. They are forced to assume that even less functionality is provided by their class template arguments, unless we are going to assume that most functions pulled from their classes are going to be converted into templates (ugh).
It keeps the class small, which in turn makes it easier to test and maintain.
Um, sure. It also creates a lot of additional, external functions to test and maintain. I fail to see the value in this.
It's true that external functions should not be part of the interface. In theory, your class should only contain the data and expose the interface for what it is intended to do, not utilitarian functions. Adding utility functions to the interface just grows the class code base and makes it less maintainable. I currently maintain a class with around 50 public methods; that's just insane.
Now, in reality, I agree that this is not easy to enforce. It's often easier to just add another method to your class, even more if you are using an IDE that can really simply add a new method to an existing class.
In order to keep my classes simple and still be able to centralize external functions, I often use utility classes that work with my class, or even namespaces.
I start by creating the class that will wrap my data and expose the simplest possible interface. I then create a new class for every task I have to do with the class.
Example: create a class Point, then add a class PointDrawer to draw it to a bitmap, PointSerializer to save it, etc.
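A rough sketch of that split (all names are illustrative, and the drawing/serialization targets are left out):

// The data class exposes only the minimal interface.
class Point
{
public:
    Point(int x, int y) : x_(x), y_(y) {}
    int x() const { return x_; }
    int y() const { return y_; }
private:
    int x_, y_;
};

// Each task gets its own helper instead of growing Point's interface.
class PointDrawer
{
public:
    void draw(const Point& p);         // hypothetical drawing operation
};

class PointSerializer
{
public:
    void save(const Point& p);         // hypothetical persistence operation
};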
If you give them a common prefix, then maybe your IDE will help if you type
::prefix
or
namespace::prefix
In many OOP languages non-friend non-class methods are third-class citizens that reside in an orphanage unconnected to anything. When I write a method, I like to pick good parents - a fitting class - where they have the best chances to feel welcome and help.
I would have thought the IDE was actually helping you out.
The IDE is hiding the protected functions from the list because they are not available to the public just as the designer of the class intended.
If you had been within the scope of the class and typed this-> then the protected functions would be displayed in the list.