When do you prefer virtual functions over templates in C++?

I was interviewed by a financial company and was asked this question:
"List the case(s) when you prefer virtual functions over templates?"
It sounds weird to me, because usually we aim for the opposite, right?
All the books, articles, and talks encourage us to use static polymorphism instead of dynamic.
Are there any well-known cases I might not be aware of where you should use virtual functions and avoid templates?

GUI / visualization widget toolkits are an obvious case. Re-implementing a draw method, for example, is certainly less cumbersome with virtual methods and dynamic dispatch. And since modern C++ tends to discourage raw pointer management, std::unique_ptr can manage the resource for you.
I'm sure there are plenty of other hierarchical examples you can come up with... a base enemy class for a game, with virtual methods handling the behaviour of various baddies:)
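To make that concrete, here is a minimal sketch of such a hierarchy (the Widget/Button/Slider names are purely illustrative, not from any real toolkit):

#include <memory>
#include <vector>

class Widget
{
public:
    virtual ~Widget() = default;
    virtual void draw() const = 0; // each widget re-implements this
};

class Button : public Widget
{
public:
    void draw() const override { /* render a button */ }
};

class Slider : public Widget
{
public:
    void draw() const override { /* render a slider */ }
};

void drawAll(const std::vector<std::unique_ptr<Widget>>& widgets)
{
    for (const auto& w : widgets)
        w->draw(); // dispatched at runtime; unique_ptr owns the widgets
}

A template can't easily replace this: the vector holds widgets of different concrete types, chosen while the program runs.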
The whole 'overhead' argument for dynamic dispatch is completely without merit today. I'd argue that the vtable indirection implementation hasn't been a significant overhead for serious workloads in decades. There's the more interesting question that if C++ was designed today, would polymorphism be part of the language? But that's neither here nor there now.
I don't like the chances of this question remaining open, as it's not a direct programming problem and is probably too subject to opinions. It might be a better question for software engineering.

When the type of the object is not known at compile time, use virtual methods.
e.g.
void Accept(Fruit* pFruit) // supplied from external factors at runtime
{
    pFruit->eat(); // `Fruit` can be anyone among `Apple`/`Blackberry`/`Chickoo`/...
}
Based on what the user enters, a fruit will be supplied to the function. Hence, there is no way to know at compile time which eat() is going to be called. So it's a candidate for runtime polymorphism:
class Fruit
{
public:
    virtual void eat() = 0;
};
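To see the whole thing end to end, here is a self-contained sketch (makeFruit and the fruit names are hypothetical; a virtual destructor is added so deletion through the base pointer is well-defined):

#include <iostream>
#include <memory>
#include <string>

class Fruit
{
public:
    virtual ~Fruit() = default; // needed when deleting through a Fruit*
    virtual void eat() = 0;
};

class Apple : public Fruit
{
public:
    void eat() override { std::cout << "crunch\n"; }
};

class Blackberry : public Fruit
{
public:
    void eat() override { std::cout << "squish\n"; }
};

// The concrete type depends on runtime input, so no template
// parameter could name it at compile time.
std::unique_ptr<Fruit> makeFruit(const std::string& name)
{
    if (name == "apple")
        return std::make_unique<Apple>();
    return std::make_unique<Blackberry>();
}

int main()
{
    std::string name;
    std::cin >> name; // the user picks the fruit at runtime
    makeFruit(name)->eat();
}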
In all other cases, always prefer static polymorphism (which includes templates). It's more deterministic and easier to maintain.

The interview question seems to be asking for your preference (and your reasoning).
I would prefer interfaces (virtual methods) for ease of creating mocks for unit testing. We can use templates for this, but it is cumbersome (the consumer has to be templated). If profiling shows no significant slowdown from vtable lookups, prefer interfaces over trying to mock/test non-virtual methods.
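As a sketch of that testing point (PriceFeed/FakeFeed are hypothetical names, not from any real library):

#include <string>

class PriceFeed
{
public:
    virtual ~PriceFeed() = default;
    virtual double lastPrice(const std::string& symbol) const = 0;
};

// The consumer stays a plain function; with templates it would
// itself have to be templated on the feed type.
double spread(const PriceFeed& feed, const std::string& a, const std::string& b)
{
    return feed.lastPrice(a) - feed.lastPrice(b);
}

// A hand-rolled mock for unit tests:
class FakeFeed : public PriceFeed
{
public:
    double lastPrice(const std::string&) const override { return 100.0; }
};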
Also, type erasure. I don't think its core dispatch can be implemented with templates alone. Type erasure can be roughly thought of as a void pointer plus function pointers, which can be implemented easily using interfaces + virtual methods.
Templates need their implementation to be visible wherever they are instantiated (typically in headers). Virtual methods of an interface, by contrast, can be invoked through a base pointer with just the implementation's binary (object) file.
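A rough sketch of what that separate-compilation point means in practice (Shape/totalArea are illustrative):

// shape.h - all a consumer ever needs to see
class Shape
{
public:
    virtual ~Shape() = default;
    virtual double area() const = 0;
};

// consumer.cpp - compiles against the declaration alone; the concrete
// implementations can live in separately compiled object files or a
// library. A template version of totalArea would instead need the
// callee's full definition visible here.
double totalArea(const Shape& a, const Shape& b)
{
    return a.area() + b.area();
}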
Not sure if code bloat with templates is still an issue with modern compilers, but if the executable size increases significantly, that could be a problem on embedded systems with limited memory, which would therefore prefer to avoid templates.

Related

Why aren't all functions virtual in C++? [duplicate]

Is there any real reason not to make a member function virtual in C++? Of course, there's always the performance argument, but that doesn't seem to stick in most situations since the overhead of virtual functions is fairly low.
On the other hand, I've been bitten a couple of times with forgetting to make a function virtual that should be virtual. And that seems to be a bigger argument than the performance one. So is there any reason not to make member functions virtual by default?
One way to read your question is "Why doesn't C++ make every function virtual by default, unless the programmer overrides that default?" Without consulting my copy of "Design and Evolution of C++": this would add extra storage to every object unless every member function is made non-virtual. Seems to me this would have required more effort in the compiler implementation, and slowed down the adoption of C++ by providing fodder to the performance obsessed (I count myself in that group.)
Another way to read your question is "Why don't C++ programmers make every function virtual unless they have very good reasons not to?" The performance excuse is probably the reason. Depending on your application and domain, this might be a good reason or not. For example, part of my team works in market data ticker plants. At 100,000+ messages/second on a single stream, the virtual function overhead would be unacceptable. Other parts of my team work in complex trading infrastructure. Making most functions virtual is probably a good idea in that context, as the extra flexibility beats the micro-optimization.
Stroustrup, the designer of the language, says:
Because many classes are not designed to be used as base classes. For example, see class complex.
Also, objects of a class with a virtual function require space needed by the virtual function call mechanism - typically one word per object. This overhead can be significant, and can get in the way of layout compatibility with data from other languages (e.g. C and Fortran).
See The Design and Evolution of C++ for more design rationale.
There are several reasons.
First, performance: Yes, the overhead of a virtual function is relatively low seen in isolation. But it also prevents the compiler from inlining, and that is a huge source of optimization in C++. The C++ standard library performs as well as it does because it can inline the dozens and dozens of small one-liners it consists of. Additionally, a class with virtual methods is not a POD datatype, and so a lot of restrictions apply to it. It can't be copied just by memcpy'ing, it becomes more expensive to construct, and takes up more space. There are a lot of things that suddenly become illegal or less efficient once a non-POD type is involved.
And second, good OOP practice. The point in a class is that it makes some kind of abstraction, hides its internal details, and provides a guarantee that "this class will behave so and so, and will always maintain these invariants. It will never end up in an invalid state".
That is pretty hard to live up to if you allow others to override any member function. The member functions you defined in the class are there to ensure that the invariant is maintained. If we didn't care about that, we could just make the internal data members public, and let people manipulate them at will. But we want our class to be consistent. And that means we have to specify the behavior of its public interface. That may involve specific customizability points, by making individual functions virtual, but it almost always also involves making most methods non-virtual, so that they can do the job of ensuring that the invariant is maintained. The non-virtual interface idiom is a good example of this:
http://www.gotw.ca/publications/mill18.htm
Third, inheritance isn't often needed, especially not in C++. Templates and generic programming (static polymorphism) can in many cases do a better job than inheritance (runtime polymorphism). Yes, you sometimes still need virtual methods and inheritance, but it is certainly not the default. If it is, you're Doing It Wrong. Work with the language, rather than try to pretend it was something else. It's not Java, and unlike Java, in C++ inheritance is the exception, not the rule.
I'll ignore performance and memory cost, because I have no way to measure them for the "in general" case...
Classes with virtual member functions are non-POD. So if you want to use your class in low-level code which relies on it being POD, then (among other restrictions) any member functions must be non-virtual.
Examples of things you can portably do with an instance of a POD class:
copy it with memcpy (provided the target address has sufficient alignment).
access fields with offsetof()
in general, treat it as a sequence of char
... um
that's about it. I'm sure I've forgotten something.
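Still, a small sketch of the memcpy and offsetof points, using modern type traits so the distinction is checkable at compile time (Pod/NotPod are illustrative):

#include <cstddef>   // offsetof
#include <cstring>   // memcpy
#include <type_traits>

struct Pod { int x; int y; };                   // trivially copyable
struct NotPod { virtual ~NotPod() {} int x; };  // the vptr makes it non-trivial

static_assert(std::is_trivially_copyable<Pod>::value, "memcpy is fine here");
static_assert(!std::is_trivially_copyable<NotPod>::value,
              "memcpy would clobber the vtable pointer");

void clone(const Pod& src, Pod& dst)
{
    std::memcpy(&dst, &src, sizeof(Pod)); // legal for trivially copyable types
}

const std::size_t y_offset = offsetof(Pod, y); // well-defined for standard-layout types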
Other things people have mentioned that I agree with:
Many classes are not designed for inheritance. Making their methods virtual would be misleading, since it implies child classes might want to override the method, and there shouldn't be any child classes.
Many methods are not designed to be overridden: same thing.
Also, even when things are intended to be subclassed / overridden, they aren't necessarily intended for run-time polymorphism. Very occasionally, despite what OO best practice says, what you want inheritance for is code reuse. For example if you're using CRTP for simulated dynamic binding. So again you don't want to imply your class will play nicely with runtime polymorphism by making its methods virtual, when they should never be called that way.
In summary, things which are intended to be overridden for runtime polymorphism should be marked virtual, and things which aren't, shouldn't be. If you find that almost all your member functions are intended to be virtual, then mark them virtual unless there's a reason not to. If you find that most of your member functions are not intended to be virtual, then don't mark them virtual unless there's a reason to do so.
It's a tricky issue when designing a public API, because flipping a method from one to the other is a breaking change, so you have to get it right the first time. But you don't necessarily know, before you have any users, whether your users are going to want to "polymorph" your classes. Ho hum. The STL container approach, of keeping everything non-virtual and not supporting inheritance at all, is safe but sometimes requires users to do more typing.
The following post is mostly opinion, but here goes:
Object oriented design is three things, and encapsulation (information hiding) is the first of these things. If a class design is not solid on this, then the rest doesn't really matter very much.
It has been said before that "inheritance breaks encapsulation" (Alan Snyder, '86). A good discussion of this is present in the Gang of Four design patterns book. A class should be designed to support inheritance in a very specific manner. Otherwise, you open the possibility of misuse by inheritors.
I would make the analogy that making all of your methods virtual is akin to making all your members public. A bit of a stretch, I know, but that's why I used the word 'analogy'
As you are designing your class hierarchy, it may make sense to write a function that should not be overridden. One example is if you are doing the "template method" pattern, where you have a public method that calls several private virtual methods. You would not want derived classes to override that; everyone should use the base definition.
Before C++11 there was no final keyword, so the best way to communicate to other developers that a method should not be overridden was to make it non-virtual (other than easily ignored comments); today you can also mark a virtual method final.
At the class level, making the destructor non-virtual communicates that the class should not be used as a base class, such as the STL containers.
Making a method non-virtual communicates how it should be used.
The Non-Virtual Interface idiom makes use of non-virtual methods. For more information, please refer to Herb Sutter's "Virtuality" article:
http://www.gotw.ca/publications/mill18.htm
And comments on the NVI idiom:
http://www.parashift.com/c++-faq-lite/strange-inheritance.html#faq-23.3
http://accu.org/index.php/journals/269 [See sub-section]

Why should I mark all methods virtual in C++? Is there a trade-off?

I know why and how virtual methods work, and most of the time people tell me I should always mark a method virtual, but I don't understand why if I'm not going to override it. I also know there's a small memory cost.
Please explain why I should mark all methods virtual and what the trade-off is.
Code example:
class Point
{
    int x, y;
public:
    virtual void setX(int i);
    virtual void setY(int i);
};
(That question is not equal to Should I mark all methods virtual? because I want to know the trade-off and because the programming language in the case is C++, not C#)
OBS: I'm sorry if there's any grammar error, English is not my native language.
No, you should not "mark all methods as virtual".
If you want the method to be virtual, then mark it as such. If not, leave the keyword out.
There is an overhead for virtual methods compared to regular ones. If you want to read more about this, check out the Wikipedia page about vtables.
The real reason to make member functions non-virtual is to enforce class invariants.
Advice to make all member functions virtual generally means that either:
The people giving the advice don't understand the class, or
the people giving the advice don't understand OO design.
Yes, there are a few cases (e.g., some abstract base classes, where the only class invariant is the existence of specific functions) in which all the functions should be virtual. Those are the exception though. In most classes, virtual functions should be restricted to those that you really intend to allow derived classes to provide new/different behavior.
As for the discussion of things like vtables and the overhead of virtual function calls, I'd say they're correct as far as they go, but they miss the really big point. Whether a particular function should or shouldn't be virtual is primarily a question of class design and only secondarily a question of function call overhead. The two don't do the same thing, so trying to compare overhead rarely makes sense.
That is not the case; i.e., if you don't need a virtual function, then don't use it. Also, as per Bjarne Stroustrup, you only pay for what you use.
In C++:
Virtual functions have a slight performance penalty. Normally it is too small to make any difference, but in a tight loop it might be significant.
A virtual function increases the size of each object by one pointer. Again, this is typically insignificant, but if you create millions of small objects it could be a factor.
Classes with virtual functions are generally meant to be inherited from. The derived classes may replace some, all, or none of the virtual functions. This can create additional complexity, and complexity is the programmer's mortal enemy. For example, a derived class may poorly implement a virtual function. This may break a part of the base class that relies on the virtual function.
One of C++'s basic principles is that you don't pay for what you don't need. Virtual functions cost more than normal member functions in both time and space. Therefore you shouldn't use them regardless of whether you'll ever actually need them.
Making methods virtual has slight costs (more code, more complexity, larger binaries, slower method calls), and if the class is not inherited from it brings no benefit. Classes need to be designed for inheritance, otherwise inheriting from them is just begging to shoot yourself in the foot. So no, you should not always make every method virtual. The people who tell you this are probably just too inheritance-happy.
It is not true that all functions should be marked as virtual.
Indeed, there's a pattern for enforcing pre/postconditions which explicitly requires that public members are not virtual. It works as follows:
class Whatever
{
public:
    int frobnicate(int);
protected:
    virtual int do_frobnicate(int);
};

int Whatever::frobnicate(int x)
{
    check_preconditions(x);
    int result = do_frobnicate(x);
    check_postconditions(x, result);
    return result;
}
Since derived classes cannot override the public function, they cannot remove the pre/postcondition checks. They can, however, override the protected do_frobnicate which does the actual work.
(Edit - I got to this question by way of a duplicate C# question, but the answer is still useful, I think! Edited to reflect that:)
Actually, C# has a good "official" reason: https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and-structs/versioning-with-the-override-and-new-keywords
The first sentence there is:
The C# language is designed so that versioning between base and derived classes in different libraries can evolve and maintain backward compatibility.
So if you're writing a class and it's possible end-users will make derived classes, and it's possible different versions will get out of sync...all of a sudden it starts to be important. If you have carefully protected core aspects of your code, then if you update things, people can still use the old derived class (hopefully).
On the other hand, if you are okay with no one being able to use derived classes until their authors have updated to your newest version, everything can be virtual. I have seen this scenario in several "early access" games that allow modding - when the game version increases, all of a sudden a lot of mods are broken because they relied on things working one way... and they changed. (Okay, not all changes are related to this, but some are.)
It really depends on your usage scenario. If people can keep using your old version, maybe they don't care if you've updated it and are happy to keep using it with their derived classes. In a large business scenario, making everything virtual may very well be a recipe for breaking many things at once when someone updates something.
Does this apply to C++ as well? I don't see why not - C++ is also used for massive projects and would also face the dangers of multiple simultaneous versions.

Dynamic vs Static Polymorphism in C++: which is preferable?

I understand that dynamic/static polymorphism depends on the application design and requirements. However, is it advisable to ALWAYS choose static polymorphism over dynamic if possible? In particular, I can see the following 2 design choice in my application, both of which seem to be advised against:
Implement static polymorphism using CRTP: no vtable lookup overhead while still providing an interface in the form of a template base class. But it uses a lot of switches and static_casts to access the correct class/method, which is hazardous.
Dynamic polymorphism: implement interfaces (pure virtual classes), incurring a lookup cost even for trivial functions like accessors/mutators.
My application is very time-critical, so I am in favor of static polymorphism. But I need to know whether using too many static_casts indicates poor design, and how to avoid that without incurring latency.
EDIT: Thanks for the insight. Taking a specific case, which of these is a better approach?
class IMessage_Type_1
{
public:
    virtual long getQuantity() = 0;
    ...
};

class Message_Type_1_Impl : public IMessage_Type_1
{
public:
    long getQuantity() { return _qty; }
    ...
};
OR
template <class T>
class TMessage_Type_1
{
public:
    long getQuantity() { return static_cast<T*>(this)->getQuantity(); }
    ...
};

class Message_Type_1_Impl : public TMessage_Type_1<Message_Type_1_Impl>
{
public:
    long getQuantity() { return _qty; }
    ...
};
Note that there are several mutators/accessors in each class, and I do need to specify an interface in my application. In static polymorphism, I switch just once, to get the message type. However, in dynamic polymorphism, I am using virtual functions for EACH method call. Doesn't that make it a case for static polymorphism? I believe the static_cast in CRTP is quite safe and has no performance penalty (it is bound at compile time)?
Static and dynamic polymorphism are designed to solve different problems, so there are rarely cases where both would be appropriate. In such cases, dynamic polymorphism will result in a more flexible and easier to manage design. But most of the time, the choice will be obvious, for other reasons.
One rough categorisation of the two: virtual functions allow different implementations for a common interface; templates allow different interfaces for a common implementation.
A switch is nothing more than a sequence of jumps that, after optimization, becomes a jump to an address looked up in a table, exactly like a virtual function call is.
If you have to jump depending on a type, you must first select the type. If the selection cannot be done at compile time (essentially because it depends on the input), you must always perform two operations: select & jump. The syntactic tool you use to select doesn't change the performance, since both optimize the same way.
In fact, you are reinventing the vtable.
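A sketch of that equivalence (the shapes and bodies are illustrative):

// Hand-rolled dispatch: store a tag, select on it, then jump...
enum class Kind { Circle, Square };

struct TaggedShape { Kind kind; };

double area(const TaggedShape& s)
{
    switch (s.kind) // select...
    {
        case Kind::Circle: return 3.14; // ...and jump
        case Kind::Square: return 1.0;
    }
    return 0.0;
}

// ...is morally the same select-and-jump the compiler generates here,
// except that the jump table (the vtable) is maintained for you:
struct Shape
{
    virtual ~Shape() = default;
    virtual double area() const = 0;
};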
You see the design issues associated with purely template-based polymorphism: while looking at a virtual base class gives you a pretty good idea of what is expected from a derived class, this gets much harder in heavily templated designs. One can easily demonstrate this by introducing a syntax error while using one of the Boost libraries.
On the other hand, you are fearful of performance issues when using virtual functions. Proving that this will be a problem is much harder.
IMHO this is a non-question. Stick with virtual functions until indicated otherwise. Virtual function calls are a lot faster than most people think (Calling a function from a dynamically linked library also adds a layer of indirection. No one seems to think about that).
I would only consider a templated design if it makes the code easier to read (generic algorithms), you use one of the few cases known to be slow with virtual functions (numeric algorithms) or you already identified it as a performance bottleneck.
Static polymorphism may provide a significant advantage if the called method can be inlined by the compiler.
For example, if the virtual method looks like this:
protected:
    virtual bool is_my_class_fast_enough() override { return true; }
then static polymorphism should be the preferred way (otherwise, the method should be honest and return false :).
A "true" virtual call (in most cases) can't be inlined.
Other differences (such as the additional indirection of the vtable call) are negligible.
[EDIT]
However, if you really need runtime polymorphism (if the caller shouldn't know the method's implementation and, therefore, the method can't be inlined on the caller's side), then do not reinvent the vtable (as Emilio Garavaglia mentioned); just use it.
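A minimal sketch of that inlining difference (the names are illustrative):

struct Base
{
    virtual ~Base() = default;
    virtual bool fast_enough() const { return true; }
};

bool via_virtual(const Base& b)
{
    // The dynamic type of b is unknown here, so the compiler generally
    // cannot inline the call; it must go through the vtable.
    return b.fast_enough();
}

template <typename T>
bool via_template(const T& t)
{
    // The concrete type is known at instantiation time, so a one-line
    // body is trivially inlined.
    return t.fast_enough();
}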

Inheritance & virtual functions Vs Generic Programming

I need to understand whether inheritance & virtual functions are really unnecessary in C++, and whether one can achieve everything using generic programming. This came from Alexander Stepanov; the lecture I was watching is Alexander Stepanov: STL and Its Design Principles.
I always like to think of templates and inheritance as two orthogonal concepts, in the very literal sense: To me, inheritance goes "vertically", starting with a base class at the top and going "down" to more and more derived classes. Every (publicly) derived class is-a base class in terms of its interface: a poodle is a dog is an animal.
On the other hand, templates go "horizontal": Each instance of a template has the same formal code content, but two distinct instances are entirely separate, unrelated pieces that run in "parallel" and don't see each other. Sorting an array of integers is formally the same as sorting an array of floats, but an array of integers is not at all related to an array of floats.
Since these two concepts are entirely orthogonal, their application is, too. Sure, you can contrive situations in which you could replace one by another, but when done idiomatically, both template (generic) programming and inheritance (polymorphic) programming are independent techniques that both have their place.
Inheritance is about making an abstract concept more and more concrete by adding details. Generic programming is essentially code generation.
As my favourite example, let me mention how the two technologies come together beautifully in a popular implementation of type erasure: A single handler class holds a private polymorphic pointer-to-base of an abstract container class, and the concrete, derived container class is determined by a templated, type-deducing constructor. We use template code generation to create an arbitrary family of derived classes:
// internal helper base
class TEBase { /* ... */ };

// internal helper derived TEMPLATE class (unbounded family!)
template <typename T> class TEImpl : public TEBase { /* ... */ };

// single public interface class
class TE
{
    TEBase* impl;
public:
    // "infinitely many" constructors:
    template <typename T> TE(const T& x) : impl(new TEImpl<T>(x)) { }
    // ...
};
They serve different purposes. Generic programming (at least in C++) is about compile-time polymorphism, and virtual functions about run-time polymorphism.
If the choice of the concrete type depends on the user's input, you really need runtime polymorphism; templates won't help you.
Polymorphism (i.e. dynamic binding) is crucial for decisions that are based on runtime data. Generic data structures are great but they are limited.
Example: Consider an event handler for a discrete event simulator: It is very cheap (in terms of programming effort) to implement this with a pure virtual function, but is verbose and quite inflexible if done purely with templated classes.
As rule of thumb: If you find yourself switching (or if-else-ing) on the value of some input object, and performing different actions depending on its value, there might exist a better (in the sense of maintainability) solution with dynamic binding.
Some time ago I thought about a similar question and I can only dream about giving you such a great answer I received. Perhaps this is helpful: interface paradigm performance (dynamic binding vs. generic programming)
It seems like a very academic question. As with most things in life, there are lots of ways to solve a problem, and in C++ you have a number of options. There is no need to take an XOR attitude to things.
In the ideal world, you would use templates for static polymorphism to give you the best possible performance in instances where the type is not determined by user input.
The reality is that templates force most of your code into headers and this has the consequence of exploding your compile times.
I have done some heavy generic programming leveraging static polymorphism to implement a generic RPC library (https://github.com/bytemaster/mace (rpc_static_poly branch)). In this instance the protocol (JSON-RPC), the transport (TCP/UDP/Stream/etc), and the types are all known at compile time, so there is no reason to do a vtable dispatch... or is there?
When I run the code through the preprocessor for a single .cpp, it results in 250,000 lines and takes 30+ seconds to compile a single object file. I implemented 'identical' functionality in Java and C# and it compiles in about a second.
Almost every STL or Boost header you include adds thousands or tens of thousands of lines of code that must be processed per object file, most of it redundant.
Do compile times matter? In most cases they have a more significant impact on the final product than 'maximally optimized vtable elimination'. The reason is that every 'bug' requires a 'try fix, compile, test' cycle, and if each cycle takes 30+ seconds, development slows to a crawl (note the motivation for Google's Go language).
After spending a few days with java and C# I decided that I needed to 're-think' my approach to C++. There is no reason a C++ program should compile much slower than the underlying C that would implement the same function.
I now opt for runtime polymorphism unless profiling shows that the bottleneck is in vtable dispatches. I now use templates to provide 'just-in-time' polymorphism and type-safe interface on top of the underlying object which deals with 'void*' or an abstract base class. In this way users need not derive from my 'interfaces' and still have the 'feel' of generic programming, but they get the benefit of fast compile times. If performance becomes an issue then the generic code can be replaced with static polymorphism.
The results are dramatic, compile times have fallen from 30+ seconds to about a second. The post-preprocessor source code is now a couple thousand lines instead of 250,000 lines.
On the other side of the discussion, I was developing a library of 'drivers' for a set of similar but slightly different embedded devices. In this instance the embedded device had little room for 'extra code' and no use for 'vtable' dispatch. With C our only option was 'separate object files' or runtime 'polymorphism' via function pointers. Using generic programming and static polymorphism we were able to create maintainable software that ran faster than anything we could produce in C.
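A hedged sketch of that embedded approach: one templated driver body stamped out per device policy at compile time (DeviceA/DeviceB and the register values are hypothetical):

#include <cstdint>

struct DeviceA
{
    static void writeReg(std::uint8_t, std::uint8_t) { /* MMIO write for A */ }
};

struct DeviceB
{
    static void writeReg(std::uint8_t, std::uint8_t) { /* MMIO write for B */ }
};

// No vtable, no function pointers; each call inlines down to the
// register write for its instantiation.
template <typename Device>
class Driver
{
public:
    void reset() { Device::writeReg(0x00, 0x01); }
};

Driver<DeviceA> driver_a; // a separate, fully static driver per device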

Achieving Interface functionality in C++

A big reason why I use OOP is to create code that is easily reusable. For that purpose Java style interfaces are perfect. However, when dealing with C++ I really can't achieve any sort of functionality like interfaces... at least not with ease.
I know about pure virtual base classes, but what really ticks me off is that they force me into really awkward code with pointers. E.g. map<int, Node*> nodes; (where Node is the virtual base class).
This is sometimes ok, but sometimes pointers to base classes are just not a possible solution. E.g. if you want to return an object packaged as an interface you would have to return a base-class-casted pointer to the object.. but that object is on the stack and won't be there after the pointer is returned. Of course you could start using the heap extensively to avoid this but that's adding so much more work than there should be (avoiding memory leaks).
Is there any way to achieve interface-like functionality in C++ without having to awkwardly deal with pointers and the heap? (Honestly, for all that trouble and awkwardness I'd rather just stick with C.)
You can use boost::shared_ptr<T> to avoid the raw pointers. As a side note, the reason why you don't see a pointer in the Java syntax has nothing to do with how C++ implements interfaces vs. how Java implements interfaces, but rather it is the result of the fact that all objects in Java are implicit pointers (the * is hidden).
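A sketch of the map example from the question rewritten that way (shown with std::shared_ptr, the standard equivalent of the Boost pointer; Leaf/makeNode are illustrative):

#include <map>
#include <memory>

class Node
{
public:
    virtual ~Node() = default;
    virtual void visit() = 0;
};

class Leaf : public Node
{
public:
    void visit() override { /* ... */ }
};

// Returning an object "packaged as an interface" without the dangling
// stack pointer problem: the heap allocation is owned by the pointer.
std::shared_ptr<Node> makeNode()
{
    return std::make_shared<Leaf>();
}

int main()
{
    std::map<int, std::shared_ptr<Node>> nodes;
    nodes[0] = makeNode();
    nodes[0]->visit(); // dynamic dispatch, no manual delete anywhere
}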
Template MetaProgramming is a pretty cool thing. The basic idea? "Compile time polymorphism and implicit interfaces", Effective C++. Basically you can get the interfaces you want via templated classes. A VERY simple example:
template <class T>
bool foo(const T& _object)
{
    if (_object != _someStupidObject && _object > 0)
        return true;
    return false;
}
So in the above code, what can we say about the object T? Well, it must be compatible with '_someStupidObject', or it must be convertible to a type which is. It must be comparable with an integral value, or again convertible to a type which is. So we have now defined an interface for the class T. The book "Effective C++" offers a much better and more detailed explanation. Hopefully the above code gives you some idea of the "interface" capability of templates. Also have a look at pretty much any of the Boost libraries; they are almost all chock-full of templates.
C++ doesn't have generic parameter constraints like C#, but if you can get away with it you can use boost::concept_check. Of course, this only works in limited situations, but if you can use it as your solution then you'll certainly have faster code with smaller objects (less vtable overhead).
Dynamic dispatch that uses vtables (for example, pure virtual bases) will make your objects grow in size as they implement more interfaces. Managed languages do not suffer from this problem (this is a .NET link, but Java is similar).
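That growth is easy to observe; a tiny sketch (exact sizes are ABI-specific, so the program prints rather than asserts; the point is only that Fat > Plain):

#include <cstdio>

struct Plain { int x; };

struct IDrawable     { virtual ~IDrawable() = default;     virtual void draw() = 0; };
struct ISerializable { virtual ~ISerializable() = default; virtual void save() = 0; };

// Implementing multiple pure virtual bases typically adds one vptr per base.
struct Fat : IDrawable, ISerializable
{
    void draw() override {}
    void save() override {}
    int x;
};

int main()
{
    std::printf("%zu vs %zu\n", sizeof(Plain), sizeof(Fat));
}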
I think the answer to your question is no - there is no easier way. If you want pure interfaces (well, as pure as you can get in C++), you're going to have to put up with all the heap management (or try using a garbage collector. There are other questions on that topic, but my opinion on the subject is that if you want a garbage collector, use a language designed with one. Like Java).
One big way to ease your heap management pain somewhat is auto pointers. Boost has a nice automatic pointer that does a lot of heap management work for you. The std::auto_ptr works, but it's quite quirky in my opinion.
You might also evaluate whether you really need those pure interfaces or not. Sometimes you do, but sometimes (like some of the code I work with), the pure interfaces are only ever instantiated by one class, and thus just become extra work, with no benefit to the end product.
While auto_ptr has some weird rules of use that you must know*, it exists to make this kind of thing work easily.
auto_ptr<Base> getMeAThing()
{
    return auto_ptr<Base>(new Derived()); // auto_ptr's constructor is explicit
}

void something()
{
    auto_ptr<Base> myThing = getMeAThing();
    myThing->foo(); // Calls Derived::foo, if virtual
    // The Derived object will be deleted on exit from this function.
}
*Never put auto_ptrs in containers, for one. Understanding what they do on assignment is another.
This is actually one of the cases in which C++ shines. The fact that C++ provides templates and functions that are not bound to a class makes reuse much easier than in pure object-oriented languages. The reality, though, is that you will have to adjust the manner in which you write your code in order to make use of these benefits. People who come from pure OO languages often have difficulty with this, but in C++ an object's interface includes non-member functions as well. In fact, it is considered good practice in C++ to use non-member functions to implement an object's interface whenever possible. Once you get the hang of using non-member template functions to implement interfaces, it is a somewhat life-changing experience.
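As a closing sketch of that non-member-function style (Temperature and its helpers are illustrative, not from any library):

#include <iostream>

class Temperature
{
public:
    explicit Temperature(double celsius) : c_(celsius) {}
    double celsius() const { return c_; }
private:
    double c_;
};

// Non-member functions extending the class's interface: they need only
// the public API, so the class itself stays minimal.
double fahrenheit(const Temperature& t) { return t.celsius() * 9.0 / 5.0 + 32.0; }

std::ostream& operator<<(std::ostream& os, const Temperature& t)
{
    return os << t.celsius() << " C";
}

int main()
{
    Temperature t(21.0);
    std::cout << t << " = " << fahrenheit(t) << " F\n";
}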