Interface overhead - C++

I have a simple class that looks like Boost.Array. There are two template parameters, T and N. One drawback of Boost.Array is that every method that uses such an array has to be a template with parameter N (T is fine). The consequence is that the whole program tends to become a template. One idea is to create an interface (an abstract class with only pure virtual functions) that depends only on T (something like ArrayInterface). Now every other class accesses only the interface and therefore needs only the template parameter T (which, in contrast to N, is more or less always known). The drawback here is the overhead of the virtual call (or rather, the missed opportunity to inline calls) whenever the interface is used. Up to here, these are just the facts.
template<typename T>
class ArrayInterface {
public:
    virtual ~ArrayInterface() {}
    virtual T Get(int i) = 0;
};

template<typename T, int N>
class Array : public ArrayInterface<T> {
public:
    T Get(int i) { ... }
};

template<typename T, int N>
class ArrayWithoutInterface {
public:
    T Get(int i) { ... }
};
But my real problem lies somewhere else. When I extend Boost.Array with an interface, a direct instantiation of Boost.Array gets slow (by a factor of 4 in one case where it matters). If I remove the interface, Boost.Array is as fast as before. I understand that if a method is called through ArrayInterface there is an overhead; that is OK. But I don't understand why a call to a method gets slower when there is only an additional interface with only pure virtual methods and the class is called directly.
Array<int, 1000> a;
a.Get(0); // Slow
ArrayWithoutInterface<int, 1000> awi;
awi.Get(0); // Fast
GCC 4.4.3 and Clang 1.1 show the same behavior.

This behaviour is expected: you’re invoking a virtual method. Whether you’re invoking it directly or via a base class pointer isn’t relevant at first: in both cases, the call has got to go through the virtual function table.
For a simple call such as Get (that simply dereferences an array cell, presumably without bounds checking), this can indeed make a factor 4 difference.
Now, a good compiler could see that the added indirection isn’t necessary here since the dynamic type of the object (and hence the method call target) is known at compile time. I’m mildly surprised that GCC apparently doesn’t optimize this (did you compile with -O3?). Then again, it’s only an optimization.
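For what it's worth, on a newer toolchain you can usually help the optimizer devirtualize such calls when the dynamic type is fixed. A minimal sketch, reusing ArrayInterface<T> from the question and assuming a C++11-or-later compiler (final and override postdate GCC 4.4); the data_ member is only a placeholder for the real storage:

// 'final' promises there is no further derived class, so a call made on an
// Array<T, N> object itself (not through the interface) can be dispatched,
// and usually inlined, directly by compilers that perform devirtualization.
template<typename T, int N>
class Array final : public ArrayInterface<T> {
public:
    T Get(int i) override { return data_[i]; }
private:
    T data_[N];
};

// A qualified call also suppresses virtual dispatch explicitly:
//     Array<int, 1000> a;
//     a.Array<int, 1000>::Get(0);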

I'd disagree with your conclusion that 'the whole program tends to be a template': it looks to me like you're trying to solve a non-problem.
However, it's unclear what you mean by 'extend Boost.Array with an interface': are you modifying the source of boost::array by introducing your interface? If so, every array instance you create has to drag a vtable pointer along, whether or not you use the virtual methods. The existence of virtual methods may also make the compiler wary of applying aggressive optimizations on non-virtual methods that would otherwise be possible in a purely header-defined class.
Edited: ...and of course, you are using the virtual method. It takes pretty advanced code analysis techniques for the compiler to be certain a virtual call can be optimized away.

Two reasons:
late binding is slow
virtual methods cannot be inlined

If you have a virtual method that never gets overridden in a derived class, it is quite possible that the compiler can optimize out the virtual part of the call. During a normal non-virtual method call, the program flow goes directly from the caller to the method. However, when the method is marked virtual, the CPU must first jump to a virtual table, then find the method you're looking for, and then jump to that method.
Now this normally isn't too noticeable. If the method you're calling takes 100ms to execute, even if the virtual table lookup takes 1ms, it's not going to matter. But if, as in the case of an array accessor, your method takes 0.5ms to execute, that 1ms performance drop is going to be quite noticeable.
There's not much you can do about it, except don't extend Boost.Array, or rename your methods so they don't override.

Related

C++ Low latency Design: Function Dispatch v/s CRTP for Factory implementation

As part of a system design, we need to implement a factory pattern. In combination with the Factory pattern, we are also using CRTP, to provide a base set of functionality which can then be customized by the Derived classes.
Sample code below:
class FactoryInterface {
public:
    virtual void doX() = 0;
};

// force all derived classes to implement custom_X_impl
template <typename Derived, typename Base = FactoryInterface>
class CRTP : public Base
{
public:
    void doX() {
        // do common processing..... then
        static_cast<Derived*>(this)->custom_X_impl();
    }
};

class Derived : public CRTP<Derived>
{
public:
    void custom_X_impl() {
        // do custom stuff
    }
};
Although this design is convoluted, it does provide a few benefits. All the calls after the initial virtual function call can be inlined. The derived class custom_X_impl call is also made efficiently.
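To make the dispatch path concrete, here is a minimal usage sketch built on the FactoryInterface/CRTP/Derived snippet above (run and main are purely illustrative; a real factory returning owning pointers would also want a virtual destructor on FactoryInterface):

// A call site that only knows about the abstract interface.
void run(FactoryInterface& f) {
    // One virtual dispatch lands in CRTP<Derived>::doX(); the subsequent
    // static_cast call to Derived::custom_X_impl() is resolved at compile
    // time and is a normal candidate for inlining.
    f.doX();
}

int main() {
    Derived d;
    run(d);
}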
I wrote a comparison program to compare the behavior of a similar implementation (tight loop, repeated calls) using function pointers and virtual functions. This design came out on top for gcc 4.8 with -O2 and -O3.
A C++ guru, however, told me yesterday that any virtual function call in a large executing program can take a variable amount of time because of cache misses, and that I could achieve potentially better performance using C-style function-table look-ups and gcc hotlisting of functions. However, I still see 2x the cost in my sample program mentioned above.
My questions are as below:
1. Is the guru's assertion true? Either way, are there any links I can refer to?
2. Is there any low-latency implementation I can refer to, where a base class invokes a custom function in a derived class using function pointers?
3. Any suggestions on improving the design?
Any other feedback is always welcome.
Your guru refers to the hot attribute of the gcc compiler. The effect of this attribute is:
The function is optimized more aggressively and on many targets it is placed into a special subsection of the text section so all hot functions appear close together, improving locality.
So yes, in a very large code base, the hotlisted function may remain in cache, ready to be executed without delay, because it avoids cache misses.
You can perfectly use this attribute for member functions:
#include <iostream>

struct X {
    void test() __attribute__ ((hot)) { std::cout << "hello, world!\n"; }
};
But...
When you use virtual functions, the compiler generally generates a vtable that is shared between all objects of the class. This table is a table of pointers to functions. And indeed -- your guru is right -- nothing guarantees that this table remains in cached memory.
But, if you manually create a "C-style" table of function pointers, the problem is EXACTLY THE SAME. While the function may remain in cache, nothing ensures that your function table remains in cache as well.
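To illustrate the point, a hand-rolled dispatch table is just another array of pointers living somewhere in data memory. A minimal sketch (the names Widget, op_table, do_x and so on are made up purely for illustration):

#include <cstdio>

struct Widget {};  // hypothetical object type, for illustration only

// A hand-rolled "vtable": an ordinary array of function pointers.
using op_fn = void (*)(Widget&);

void do_x(Widget&) { std::puts("do_x"); }
void do_y(Widget&) { std::puts("do_y"); }

// This table lives in data memory exactly like a compiler-generated vtable,
// and nothing guarantees it stays in cache either.
const op_fn op_table[] = { &do_x, &do_y };

void dispatch(Widget& w, int op) {
    op_table[op](w);  // load the pointer from the table, then make an indirect call
}

int main() {
    Widget w;
    dispatch(w, 0);  // calls do_x through the table
}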
The main difference between the two approaches is that:
in the case of virtual functions, the compiler knows that the virtual function is a hot spot, and could decide to make sure to keep the vtable in cache as well (I don't know if gcc can do this or if there are plans to do so).
in the case of the manual function pointer table, your compiler will not easily deduce that the table belongs to a hot spot. So this attempt of manual optimization might very well backfire.
My opinion: never try to optimize yourself what a compiler can do much better.
Conclusion
Trust your benchmarks. And trust your OS: if your function or your data is frequently accessed, there is a good chance that a modern OS will take this into account in its virtual memory management, regardless of what the compiler generates.

Mimicking C# 'new' (hiding a virtual method) in a C++ code generator

I'm developing a system which takes a set of compiled .NET assemblies and emits C++ code which can then be compiled to any platform having a C++ compiler. Of course, this involves some extensive trickery due to various things .NET does that C++ doesn't.
One such situation is the ability to hide virtual methods, such as the following in C#:
class A
{
    public virtual void MyMethod()
    { ... }
}
class B : A
{
    public override void MyMethod()
    { ... }
}
class C : B
{
    public new virtual void MyMethod()
    { ... }
}
class D : C
{
    public override void MyMethod()
    { ... }
}
I came up with a solution to this that seemed clever and did work, as in the following example:
namespace impdetails
{
template<class by_type>
struct redef {};
}
struct A
{
virtual void MyMethod( void );
};
struct B : A
{
virtual void MyMethod( void );
};
struct C : B
{
virtual void MyMethod( impdetails::redef<C> );
};
struct D : C
{
virtual void MyMethod( impdetails::redef<D> );
};
This does of course require that all the call sites for C::MyMethod and D::MyMethod construct and pass the dummy object, as in this example:
C *c_d = &d;
c_d->MyMethod( impdetails::redef<C>() );
I'm not worried about this extra source code overhead; the output of this system is mainly not intended for human consumption.
Unfortunately, it turns out this actually causes runtime overhead. Intuitively, one would expect that because impdetails::redef<> is empty, it would take no space and passing it would involve no code.
However, the C++ standard, for reasons I understand but don't totally agree with, mandates that objects cannot have zero size. This leaves us with a situation where the compiler actually emits code to create and pass the object.
In fact, at least on VC2008, I found that it even went to the trouble of zeroing the dummy byte, even in release builds! I'm not sure why that was necessary, but it makes me even more not want to do it this way.
If all else fails I could always change the actual name of the function, such as perhaps having MyMethod, MyMethod$1, and MyMethod$2. However, this causes more problems. For instance, $ is actually not legal in C++ identifiers (although compilers I've tested will allow it.) A totally acceptable identifier in the output program could also be an identifier in the input program, which suggests a more complex approach would be needed, making this a less attractive option.
It also turns out that there are other situations in this project where it would be nice to be able to modify method signatures using arbitrary type arguments, similar to how I'm passing a type to impdetails::redef<>.
Is there any other clever way to get around this, or am I stuck between adding overhead at every call site or mangling names?
After considering some other aspects of the system as well such as interfaces in .NET, I am starting to think maybe it's better - perhaps even more-or-less necessary - to not even use the C++ virtual calling mechanism at all. The more I consider, the messier using that mechanism is getting.
In this approach, each user object class would have a separate struct for the vtable (perhaps kept in a separate namespace like vtabletype::). The generated class would have a pointer member that would be initialized through some trickery to point to a static instance of the vtable. Virtual calls would explicitly use a member pointer from that vtable.
If done properly this should have the same performance as the compiler's own implementation would. I've confirmed it does on VC2008. (By contrast, just using straight C, which is what I was planning on earlier, would likely not perform as well, since compilers often optimize this into a register.)
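For concreteness, a minimal sketch of what such generated code might look like (all names here, vtabletype::MyClass_vtable, vptr, MyClass_MyMethod and so on, are invented for illustration and are not from the actual generator):

struct MyClass;  // the generated object class, forward-declared

namespace vtabletype {
    // One such struct (and one static instance of it) per generated class.
    struct MyClass_vtable {
        void (*MyMethod)(MyClass* self);
    };
}

// The generated class carries an explicit pointer to its class's vtable.
struct MyClass {
    const vtabletype::MyClass_vtable* vptr;  // written in during allocation
    int some_field;
};

void MyClass_MyMethod(MyClass* self) { /* generated method body */ }

// The single shared vtable instance for MyClass.
const vtabletype::MyClass_vtable MyClass_vtable = { &MyClass_MyMethod };

int main() {
    MyClass obj = { &MyClass_vtable, 0 };
    obj.vptr->MyMethod(&obj);  // a "virtual" call as the generator would emit it
}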
It would be hellish to write code like this manually, but of course this isn't a concern for a generator. This approach does have some advantages in this application:
Because it's a much more explicit approach, one can be more sure that it's doing exactly what .NET specifies it should be doing with respect to newslot as well as selection of interface implementations.
It might be more efficient (depending on some internal details) than a more traditional C++ approach to interfaces, which would tend to invoke multiple inheritance.
In .NET, objects are considered to be fully constructed when their .ctor runs. This impacts how virtual functions behave. With explicit knowledge of the vtables, this could be achieved by writing it in during allocation. (Although putting the .ctor code into a normal member function is another option.)
It might avoid redundant data when implementing reflection.
It provides better control and knowledge of object layout, which could be useful for the garbage collector.
On the downside, it totally loses the C++ compiler's overloading feature with regard to the vtable entries: those entries are data members, not functions, so there is no overloading. In this case it would be tempting to just number the members (say _0, _1...) This may not be so bad when debugging, since once the pointer is followed, you'll see an actual, properly-named member function anyway.
I think I may end up doing it this way but by all means I'd like to hear if there are better options, as this is admittedly a rather complex approach (and problem.)

Dynamic vs Static Polymorphism in C++: which is preferable?

I understand that dynamic/static polymorphism depends on the application design and requirements. However, is it advisable to ALWAYS choose static polymorphism over dynamic if possible? In particular, I can see the following 2 design choices in my application, both of which seem to be advised against:
Implement static polymorphism using CRTP: no vtable lookup overhead while still providing an interface in the form of a template base class. But it uses a lot of switch and static_cast to access the correct class/method, which is hazardous
Dynamic polymorphism: implement interfaces (pure virtual classes), incurring a lookup cost even for trivial functions like accessors/mutators
My application is very time critical, so I am in favor of static polymorphism. But I need to know whether using too many static_casts is an indication of poor design, and how to avoid that without incurring latency.
EDIT: Thanks for the insight. Taking a specific case, which of these is a better approach?
class IMessage_Type_1
{
public:
    virtual long getQuantity() = 0;
    ...
};

class Message_Type_1_Impl : public IMessage_Type_1
{
public:
    long getQuantity() { return _qty; }
    ...
};

OR

template <class T>
class TMessage_Type_1
{
public:
    long getQuantity() { return static_cast<T*>(this)->getQuantity(); }
    ...
};

class Message_Type_1_Impl : public TMessage_Type_1<Message_Type_1_Impl>
{
public:
    long getQuantity() { return _qty; }
    ...
};
Note that there are several mutators/accessors in each class, and I do need to specify an interface in my application. In static polymorphism, I switch just once - to get the message type. However, in dynamic polymorphism, I am using virtual functions for EACH method call. Doesn't that make it a case for static polymorphism? I believe the static_cast in CRTP is quite safe and has no performance penalty (it is compile-time bound)?
Static and dynamic polymorphism are designed to solve different problems, so there are rarely cases where both would be appropriate. In such cases, dynamic polymorphism will result in a more flexible and easier to manage design. But most of the time, the choice will be obvious, for other reasons.
One rough categorisation of the two: virtual functions allow different implementations for a common interface; templates allow different interfaces for a common implementation.
A switch is nothing more than a sequence of jumps that, after optimization, becomes a jump to an address looked up in a table. Exactly like a virtual function call.
If you have to jump depending on a type, you must first select the type. If the selection cannot be done at compile time (essentially because it depends on the input), you must always perform two operations: select & jump. The syntactic tool you use to select doesn't change the performance, since both optimize the same way.
In fact you are reinventing the v-table.
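To make that concrete, here is a minimal sketch of such a hand-made dispatch (the message types are placeholders, not taken from your code):

#include <cstdio>

enum class MsgType { Type1, Type2 };

struct MsgBase { MsgType type; };
struct Msg1 : MsgBase { long qty1; };
struct Msg2 : MsgBase { long qty2; };

// Runtime selection followed by a jump: with enough cases a compiler lowers
// this switch to a jump table, i.e. a hand-made vtable.
long getQuantity(const MsgBase& m) {
    switch (m.type) {
        case MsgType::Type1: return static_cast<const Msg1&>(m).qty1;
        case MsgType::Type2: return static_cast<const Msg2&>(m).qty2;
    }
    return 0;
}

int main() {
    Msg1 m;
    m.type = MsgType::Type1;
    m.qty1 = 42;
    std::printf("%ld\n", getQuantity(m));
}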
You see the design issues associated with purely template-based polymorphism. While looking at a virtual base class gives you a pretty good idea of what is expected from a derived class, this gets much harder in heavily templated designs. One can easily demonstrate that by introducing a syntax error while using one of the boost libraries.
On the other hand, you are fearful of performance issues when using virtual functions. Proving that this will be a problem is much harder.
IMHO this is a non-question. Stick with virtual functions until indicated otherwise. Virtual function calls are a lot faster than most people think (Calling a function from a dynamically linked library also adds a layer of indirection. No one seems to think about that).
I would only consider a templated design if it makes the code easier to read (generic algorithms), you use one of the few cases known to be slow with virtual functions (numeric algorithms) or you already identified it as a performance bottleneck.
Static polymorphism may provide a significant advantage if the called method can be inlined by the compiler.
For example, if the virtual method looks like this:
protected:
virtual bool is_my_class_fast_enough() override {return true;}
then static polymorphism should be the preferred way (otherwise, the method should be honest and return false :).
"True" virtual call (in most cases) can't be inlined.
Other differences (such as the additional indirection in the vtable call) are negligible.
[EDIT]
However, if you really need runtime polymorphism (if the caller shouldn't know the method's implementation and, therefore, the method can't be inlined on the caller's side), then do not reinvent the vtable (as Emilio Garavaglia mentioned), just use it.

Why bother with virtual functions in C++?

This is not a question about how they work or how they are declared; that, I think, is pretty clear to me. The question is about why to implement this.
I suppose the practical reason is to simplify a bunch of other code by letting it declare its variables as the base type, while still handling objects and their specific methods from many different subclasses?
Could this be done by templating and typechecking, like I do in Objective-C? If so, which is more efficient? I find it confusing to declare an object as one class and instantiate it as another, even if it is its child.
Sorry for the stupid questions, but I haven't done any real projects in C++ yet, and since I am an active Objective-C developer (it is a much smaller language and thus relies heavily on SDK functionality, as on OS X and iOS), I need to have a clear view of the parallel features of both cousins.
Yes, this can be done with templates, but then the caller must know what the actual type of the object is (the concrete class) and this increases coupling.
With virtual functions the caller doesn't need to know the actual class - it operates through a pointer to a base class, so you can compile the client once and the implementor can change the actual implementation as much as it wants and the client doesn't have to know about that as long as the interface is unchanged.
Virtual functions implement polymorphism. I don't know Obj-C, so I cannot compare both, but the motivating use case is that you can use derived objects in place of base objects and the code will work. If you have a compiled and working function foo that operates on a reference to base you need not modify it to have it work with an instance of derived.
You could do that (assuming that you had runtime type information) by obtaining the real type of the argument and then dispatching directly to the appropriate function with a switch of sorts, but that would require either manually modifying the switch for each new type (high maintenance cost) or having reflection (unavailable in C++) to obtain the method pointer. Even then, after obtaining a method pointer you would have to call it, which is as expensive as the virtual call.
As to the cost associated to a virtual call, basically (in all implementations with a virtual method table) a call to a virtual function foo applied on object o: o.foo() is translated to o.vptr[ 3 ](), where 3 is the position of foo in the virtual table, and that is a compile time constant. This basically is a double indirection:
From the object o obtain the pointer to the vtable, index that table to obtain the pointer to the function, and then call. The extra cost compared with a direct non-polymorphic call is just the table lookup. (In fact, there can be other hidden costs when using multiple inheritance, as the implicit this pointer might have to be shifted.) But the cost of the virtual dispatch is very small.
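Schematically, the generated machinery for o.foo() looks something like the following hand-written equivalent (a simplification of a typical single-inheritance layout, not what any particular compiler literally emits; all names are invented for illustration):

#include <cstdio>

struct Object;  // schematic object type

// The vtable is just a static table of function pointers, one per class.
struct VTable {
    void (*slots[4])(Object*);  // each entry receives the hidden 'this' pointer
};

struct Object {
    const VTable* vptr;         // hidden pointer installed at construction time
    // ... user data members ...
};

void foo_impl(Object*) { std::puts("foo"); }

// One shared table instance; here foo happens to live in slot 3.
const VTable object_vtable = { { nullptr, nullptr, nullptr, &foo_impl } };

// Hand-written equivalent of the virtual call "o.foo()":
void call_foo(Object& o) {
    const VTable* vt = o.vptr;  // first load: fetch the vtable pointer from the object
    vt->slots[3](&o);           // second load: fetch slot 3, then make an indirect call
}

int main() {
    Object o{ &object_vtable };
    call_foo(o);
}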
I don't know the first thing about Objective-C, but here's why you want to "declare an object as one class and instantiate it as another": the Liskov Substitution Principle.
Since a PDF is a document, and an OpenOffice.org document is a document, and a Word Document is a document, it's quite natural to write
Document *d;
if (ends_with(filename, ".pdf"))
    d = new PdfDocument(filename);
else if (ends_with(filename, ".doc"))
    d = new WordDocument(filename);
else
    d = new PlainTextDocument(filename); // you get the point
d->print();
Now, for this to work, print would have to be virtual, or be implemented using virtual functions, or be implemented using a crude hack that reinvents the virtual wheel. The program needs to know at runtime which of the various print methods to apply.
Templating solves a different problem, where you determine at compile time which of the various containers you're going to use (for example) when you want to store a bunch of elements. If you operate on those containers with template functions, then you don't need to rewrite them when you switch containers, or add another container to your program.
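For contrast, a minimal sketch of that compile-time flavour (the container choice and the sum function are purely illustrative):

#include <vector>
#include <list>
#include <numeric>

// Works with any container of numbers whose type is known at compile time;
// a separate instantiation is generated per container type, and each call
// can be fully inlined -- no virtual dispatch involved.
template <typename Container>
long long sum(const Container& c) {
    return std::accumulate(c.begin(), c.end(), 0LL);
}

int main() {
    std::vector<int> v = {1, 2, 3};
    std::list<int> l = {4, 5, 6};
    long long total = sum(v) + sum(l);  // switching containers needs no rewrite
    (void)total;
}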
A virtual function is important in inheritance. Think of an example where you have a CMonster class and then a CRaidBoss and CBoss class that inherit from CMonster.
Both need to be drawn. A CMonster has a Draw() function, but the way a CRaidBoss and a CBoss are drawn is different. Thus, the implementation is left to them by utilizing the virtual function Draw.
Well, the idea is simply to allow the compiler to perform checks for you.
It's like a lot of features: ways to hide what you don't want to have to do yourself. That's abstraction.
Inheritance, interfaces, etc. allow you to provide an interface to the compiler for the implementation code to match.
If you didn't have the virtual function mechanism, you would have to write:
class A
{
public:
    void do_something();
};

class B : public A
{
public:
    void do_something(); // this one "hides" A::do_something() and replaces it
};

void DoSomething( A* object )
{
    // Calling object->do_something() here would ALWAYS call A::do_something();
    // that's not what you want if object is really a B...
    // so we have to check manually (treat this as pseudocode: a real
    // dynamic_cast also requires A to be polymorphic, i.e. to have at least
    // one virtual function):
    B* b_object = dynamic_cast<B*>( object );
    if( b_object != NULL ) // ok, it's a B object, call B::do_something()
    {
        b_object->do_something();
    }
    else
    {
        object->do_something(); // it's a plain A, call A::do_something()
    }
}
Here there are several problems:
you have to write this for each function redefined in a class hierarchy.
you have one additional if for each child class.
you have to touch this function again each time you add a definition to the whole hierarchy.
it's visible code; you can get it wrong easily, each time.
So, marking functions virtual does this correctly in an implicit way, automatically rerouting the function call, at runtime, to the correct implementation depending on the dynamic type of the object.
You don't have to write any of that logic, so you can't get it wrong, and you have one less thing to worry about.
It's the kind of thing you don't want to bother with as it can be done by the compiler/runtime.
The use of templates is also technically known as polymorphism by theorists. Yep, both are valid approaches to the problem. The implementation techniques employed explain the better or worse performance of each.
For example, Java's generics look like templates but are implemented through type erasure. This means that they only appear to be templates; under the surface it is plain old dynamic polymorphism.
C++ has very powerful templates. The use of templates makes code quicker, though each use of a template instantiates it for the given type. This means that, if you use an std::vector for ints, doubles and strings, you'll have three different vector classes: this means that the size of the executable will suffer.

If classes with virtual functions are implemented with vtables, how is a class with no virtual functions implemented?

In particular, wouldn't there have to be some kind of function pointer in place anyway?
I think that the phrase "classes with virtual functions are implemented with vtables" is misleading you.
The phrase makes it sound like classes with virtual functions are implemented "in way A" and classes without virtual functions are implemented "in way B".
In reality, classes with virtual functions, in addition to being implemented as classes are, they also have a vtable. Another way to see it is that "'vtables' implement the 'virtual function' part of a class".
More details on how they both work:
All classes (with virtual or non-virtual methods) are structs. The only difference between a struct and a class in C++ is that, by default, members are public in structs and private in classes. Because of that, I'll use the term class here to refer to both structs and classes. Remember, they are almost synonyms!
Data Members
Classes are (as are structs) just blocks of contiguous memory where each member is stored in sequence. Note that sometimes there will be gaps between members for CPU architectural reasons, so the block can be larger than the sum of its parts.
Methods
Methods or "member functions" are an illusion. In reality, there is no such thing as a "member function". A function is always just a sequence of machine code instructions stored somewhere in memory. To make a call, the processor jumps to that position of memory and starts executing. You could say that all methods and functions are 'global', and any indication of the contrary is a convenient illusion enforced by the compiler.
Obviously, a method acts like it belongs to a specific object, so clearly there is more going on. To tie a particular call of a method (a function) to a specific object, every member method has a hidden argument that is a pointer to the object in question. The argument is hidden in that you don't add it to your C++ code yourself, but there is nothing magical about it -- it's very real. When you say this:
void CMyThingy::DoSomething(int arg)
{
    // do something
}
The compiler really does this:
void CMyThingy_DoSomething(CMyThingy* this, int arg)
{
    // do something
}
Finally, when you write this:
myObj.doSomething(aValue);
the compiler says:
CMyThingy_DoSomething(&myObj, aValue);
No need for function pointers anywhere! The compiler knows already which method you are calling so it calls it directly.
Static methods are even simpler. They don't have a this pointer, so they are implemented exactly as you write them.
That's it! The rest is just convenient syntax sugaring: the compiler knows which class a method belongs to, so it makes sure it doesn't let you call the function without specifying which one. It also uses that knowledge to translate myItem to this->myItem when it's unambiguous to do so.
(yeah, that's right: member access in a method is always done indirectly via a pointer, even if you don't see one)
(Edit: Removed last sentence and posted separately so it can be criticized separately)
Non-virtual member functions are really just syntactic sugar, as they are almost like an ordinary function but with access checking and an implicit object parameter.
struct A
{
void foo ();
void bar () const;
};
is basically the same as:
struct A
{
};
void foo (A * this);
void bar (A const * this);
The vtable is needed so that we call the right function for our specific object instance. For example, if we have:
struct A
{
virtual void foo ();
};
The implementation of 'foo' might approximate to something like:
void foo (A * this) {
void (*realFoo)(A *) = lookupVtable (this->vtable, "foo");
(realFoo)(this); // Make the call to the most derived version of 'foo'
}
Virtual methods are required when you want to use polymorphism. The virtual modifier puts the method in the VMT for late binding, and then at runtime it is decided which method from which class is executed.
If the method is not virtual, it is decided at compile time which class's implementation will be executed.
Function pointers are used mostly for callbacks.
If a class with a virtual function is implemented with a vtable, then a class with no virtual function is implemented without a vtable.
A vtable contains the function pointers needed to dispatch a call to the appropriate method. If the method isn't virtual, the call target is known from the static type, and no indirection is needed.
For a non-virtual method the compiler can generate a normal function invocation (e.g., CALL to a particular address with this pointer passed as a parameter) or even inline it. For a virtual function, the compiler doesn't usually know at compile time at which address to invoke the code, therefore it generates code that looks up the address in the vtable at runtime and then invokes the method. True, even for virtual functions the compiler can sometimes correctly resolve the right code at compile time (e.g., methods on local variables invoked without a pointer/reference).
(I pulled this section from my original answer so that it can be criticized separately. It is a lot more concise and to the point of your question, so in a way it's a much better answer)
No, there are no function pointers; instead, the compiler turns the problem inside-out.
The compiler calls a global function with a pointer to the object instead of calling some pointed-to function inside the object.
Why? Because it's usually a lot more efficient that way. Indirect calls are expensive instructions.
There's no need for function pointers, as the call target can't change at runtime.
Branches are generated directly to the compiled code for the methods; just like if you have functions that aren't in a class at all, branches are generated straight to them.
The compiler/linker links directly which methods will be invoked. No need for a vtable indirection. BTW, what does that have to do with "stack vs. heap"?