C++ Pimpl vs Pure Virtual Interface Performance - c++

I realize there are quite a few posts on this subject, but I am having trouble finding the answer to this exact question.
For function calls, which is faster, a pure-virtual interface or a pimpl?
At first glance, it seems to me that the pure-virtual interface would be faster, because using the pimpl would cost two function calls instead of one... or would some kind of clever compiler trick take over in this case?
edit:
I am trying to decide which of these I should use to abstract away the system-dependent portions of a few objects that may end up having to be spawned quite frequently, and in large numbers.
edit:
I suppose it's worth saying at this point that the root of my problem was that I mistook the Abstract Factory design pattern for a method of making my code work on multiple platforms, when its real purpose is switching implementations for a given interface at runtime.

The two options are not equivalent and should not be compared on performance, as their focus is different. Even if they were equivalent, the performance difference would be minimal to unimportant in most situations. If you are in the rare case where you know that dispatch is an issue, then you have the tools to measure the difference yourself.

Why do you ask? The question doesn't seem to make sense.
One generally uses virtual functions when one wants polymorphism: when you want them to be overridden in derived classes.
One generally uses pimpl when one wants to remove implementation details from header files.
The two really aren't interchangeable. Off the top of my head, I cannot think of any reasonable situations where you would use one and consider replacing it with the other.
Anyway, that said: for a typical implementation of virtual functions, a function call involves reading the object to find the virtual function table pointer, then reading the virtual function table to find the function pointer, and finally calling through that function pointer.
For a class implemented via pimpl, one function call is forced, but it could be absolutely anything 'under the hood'. Despite what you suggest, no second function call is implied by the paradigm.
Finally, don't forget the usual guidelines for optimization apply: you have to actually implement and measure. Trying to "think" up the answer tends to lead to poor results, even from people experienced at this sort of thing.
And, of course, the most important rule of optimization: make sure something matters before you devote a lot of time trying to optimize it. Otherwise, you are going to wind up wasting a lot of time and energy.

Related

pointer to functions vs simpler switch in high speed audio programming

I am looking at simplifying some class structures by combining a number of classes and either
a. having a simple switch statement, using variables assigned on initialisation of the class, that changes some of the function behaviour within the class.
or
b. using functions pointers to define different behaviours in a class. These pointers would be assigned in the class initialisation.
In audio, which requires speed, (a) looks a lot cleaner and is maybe safer than (b).
My question is: is a switch statement (a) that much slower than function pointers (b)?
Is there a simpler method, like a template-style class that changes according to an initialisation variable but has the same input variables for all variations, or am I wishing for too much?
Thanks in advance
Since this is C++, it seems like the obvious answer is "c": use virtual functions.
Your (a) vs. (b) question is almost impossible to answer, mostly because such questions should only be answered with a profiler. Beware premature optimization. Trying to be tricky for the sake of speed can be problematic because compiler writers and hardware designers tend to optimize for common idioms. If you do weird stuff then you'll miss out on that.

How can I better learn to "not pay for what you don't use"?

I've just gotten answers to this question which, at the bottom line, tell me: "Doing X doesn't make sense since it would make you pay for things you might not use."
I find this maxim difficult to follow; my instincts lean more towards what I consider clear semantics, with things defined "in their place". More generally, it's not immediately obvious to me what the hidden costs and secret tariffs of a particular design choice would be.
Is this covered by (non-reference) books on C++? Is there someplace relevant online to better enlighten myself on following this principle?
In the case you are presenting, it is not as general a statement as it seems.
"Doing X doesn't make sense since it would make you pay for things you might not use."
This is merely a statement that if you can, avoid using virtual functions. They add overhead to the function call.
Virtual functions can often be designed away by using templates and regular function calls. One standard-library example is std::vector: in Java, a Vector implements interfaces so it can be used in algorithms, which is accomplished through virtual function calls; std::vector instead exposes iterators, so algorithms are resolved at compile time.
Despite the question being overly broad and asking for off-site material, I think it is interesting enough to deserve an answer. Remember that C++ was originally just "C with classes", and it is still possible today to write what is basically C code without using any of the nice abstractions that C++ gives you. For example, if you don't want the cost of exceptions, don't use them; if you don't want the cost of RTTI and virtual dispatch, don't make functions virtual; if you don't want the overhead of templates... etc.
As for resources, I'm going to break the rules and recommend Game Programming Patterns which despite the name is a good general purpose guide to writing performant C++.
The key to "not paying for what you don't use" is abstractions. When you clearly understand the purpose of a class or a function, you add the data and arguments that are absolutely necessary for the class and the function to work correctly, with as little overhead as possible.
You have to be very vigilant about adding member variables and member functions (virtual as well as non-virtual) to a class. Every member variable adds to the memory requirements of the class. Every member function requires maintenance. The presence of virtual member functions adds to the memory requirements of the class as well as a small penalty at run time.
You have to be very vigilant about the arguments to a function. You don't want the user to be burdened with supplying arguments that don't make sense. You also don't want to leave out any arguments by making hidden assumptions.

class instance pointers or function pointers?

This is a C++ question. We would like to have two utility functions that have different implementations depending on a certain parameter; at runtime, this parameter determines which implementation should be called. Which design would be best in terms of memory usage and performance? We are thinking of two approaches, but we can't determine the improvement gained from either:
- Defining an interface for these two utility functions and having multiple classes implement it, then creating a map with instances of these implementations (eager initialisation)
- Defining all these functions in one class as static functions and invoking them through function pointers
Virtual dispatch is usually realized using function pointers (a vtable), so both of your ideas boil down to the same thing from the compiler's point of view.
On second thought, you are considering the performance of something as basic as a function call. Are you 100% sure you're optimizing the part that is the bottleneck? It's extremely easy to get sidetracked when optimizing and spend days on something that has a 0-1% impact on performance. So stick to the golden rule: prove which part really slows you down. If you write tests for it, it'll be easy to benchmark both solutions and see which one is faster.

Moving from void* and casting to an ABC with PVFs (will there be a speed hit?)

I've just inherited (ahem) a QNX realtime project which uses a void*/downcasting/case-statement mechanism to handle messaging. I'd prefer to switch to an abstract base class with pure virtual functions instead, but I'm wondering if the original solution was done like that for speed reasons. It looks a lot like it was written originally in C and moved at some point to C++, so I'm guessing that could be the reason behind it.
Any thoughts on this are appreciated. I don't want to make the code nice, safe and neat and then have it fail for performance reasons during testing.
I doubt that performance will be a concern. If there are sufficiently disparate values in the switch/case, your compiler may not even optimize it into a jump table, which sets up the possibility that the virtual dispatch could be faster than the switch.
If a pure virtual interface makes sense design-wise I would definitely go that way (prototype and profile it if you're really concerned).

Why is boost so heavily templated?

There are many places in boost where I see a templated class and can't help but think why the person who wrote it used templates.
For example, the mutex class(es). All the mutex concepts are implemented as templates where one could simply create a few base classes or abstract classes with an interface that matches the concept.
edit after answers: I thought about the cost of virtual functions, but isn't it sometimes worth giving away a very small amount of performance for better understanding? I mean, sometimes (especially with boost) it's really hard to understand templated code and to decipher the compiler errors that result from misusing templates.
Templates can be highly optimized at compile time, without the need for virtual functions. A lot of template tricks can be thought of as compile-time polymorphism: since you know at compile time which behaviours you want, why should you pay for a virtual function call every time you use the class? As a bonus, a lot of templated code can be easily inlined to eliminate even the most basic function-call overhead.
In addition, templates in C++ are extremely powerful and flexible: they have been shown to be a Turing-complete language in their own right. There are some things that are easy to do with templates that require much more work with runtime polymorphism.
Templates allow you to write a generic version of an algorithm or a generic container. You no longer have to worry about types, and what you produce need no longer be tied to one type. Boost is a collection of libraries that tries to address the needs of a wide variety of people using C++ in their day-to-day lives.