Does the usage of interfaces slow down programs? [duplicate] - c++

Possible Duplicate:
What is the performance cost of having a virtual method in a C++ class?
Is it true that interfaces slow down programs? I have heard that this is the case because, at run time, each use of an object through the interface requires a decision about which class implementing the interface the object belongs to.
I am especially interested in an answer for C++, but also in general. And if this is true, some numbers would be helpful, too.
Thank you very much!

Yes, but not by much, and certainly not enough to matter if you need the flexibility that interfaces provide. (Bear in mind that if you're using an interface heavily, the relevant bits of the vtables are going to end up in L1 or L2 cache and so won't cost nearly as much as you fear.)

Dynamic dispatch (i.e. using virtual functions) is more expensive than a direct call.
But it would have to be an unusual program for this to be the performance limiter. Things like disk/network access, updating the UI, or memory bandwidth are far more likely to limit performance.
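For a feel of what is actually being compared, here is a minimal sketch (the class names are invented for the example):

```cpp
#include <cstdio>

struct Direct {
    int value(int x) { return x * 2; }   // non-virtual: resolved at compile time, inlinable
};

struct Interface {
    virtual ~Interface() = default;
    virtual int value(int x) = 0;        // virtual: resolved through the vtable at run time
};

struct Impl : Interface {
    int value(int x) override { return x * 2; }
};

int main() {
    Direct d;
    Impl i;
    Interface* p = &i;                   // calls through p use dynamic dispatch
    std::printf("%d %d\n", d.value(21), p->value(21));
}
```

The virtual call costs one extra indirection (load the vtable pointer, load the function pointer, call) and usually prevents inlining; that is the whole of the overhead being discussed.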

Although Billy points out that this is a lot like the other post on SO, I think it's not exactly the same... mainly because of the way this question is worded.
Because Olga talks about a "decision", I almost thought that she was getting mixed up between using interfaces vs. using a derived class, and determining if the pointer to the object is of a particular class via dynamic_cast.
If you are talking about using dynamic_cast, then from what I understand (and this is not based on concrete performance numbers), you will get a pretty significant performance hit.
If you are talking about using interfaces, well, then I feel that the minor hit in doing a vtable lookup and extra call(s) is far outweighed by a better software design.
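Roughly, the two cases being distinguished look like this (Shape, Circle and Square are hypothetical names, not from the question):

```cpp
struct Shape {
    virtual ~Shape() = default;
    virtual double area() const = 0;
};

struct Circle : Shape {
    double r = 1.0;
    double area() const override { return 3.14159265 * r * r; }
};

struct Square : Shape {
    double s = 2.0;
    double area() const override { return s * s; }
};

// "Decision" made via RTTI: one dynamic_cast attempted per candidate class.
double area_via_cast(const Shape& sh) {
    if (auto c = dynamic_cast<const Circle*>(&sh)) return 3.14159265 * c->r * c->r;
    if (auto q = dynamic_cast<const Square*>(&sh)) return q->s * q->s;
    return 0.0;
}

// "Decision" made by the vtable: one indirect call, no per-class checks.
double area_via_vtable(const Shape& sh) {
    return sh.area();
}
```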

If you use the interface pattern (i.e. abstract classes in C++), then yes, there will be an overhead on the virtual function calls. But if you implemented your own, non-abstract class mechanism to achieve the same thing, you would also have an overhead, probably greater than a VF call. So in reality, there is no extra overhead.
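A minimal sketch of why that is, with invented names: the hand-rolled mechanism ends up making the same indirect call the compiler would generate for a virtual function.

```cpp
// Hand-rolled dispatch: a function pointer stored per object.
struct HandRolled {
    int (*compute)(const HandRolled*, int);  // our own "vtable slot"
    int data;
};

int hand_rolled_times_two(const HandRolled* self, int x) { return self->data * x * 2; }

// Compiler-generated equivalent: a hidden vtable pointer per object,
// and the same single indirect call at the call site.
struct Virtual {
    virtual ~Virtual() = default;
    virtual int compute(int x) const = 0;
};

struct TimesTwo : Virtual {
    int data = 1;
    int compute(int x) const override { return data * x * 2; }
};

int use(const HandRolled& h, const Virtual& v) {
    return h.compute(&h, 21) + v.compute(21);  // both: load a pointer, call indirectly
}
```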

You're probably talking about virtual functions in C++. The performance penalty is minor if the virtual calls are not in critical code paths. Basically the overhead is the same as an additional function call.

Related

pointer to functions vs simpler switch in high speed audio programming

I am looking at simplifying some class structures by combining a number of classes and either
a. having a simple switch statement, using variables assigned on initialisation of the class, that changes some of the function behaviour within the class.
or
b. using functions pointers to define different behaviours in a class. These pointers would be assigned in the class initialisation.
In audio, which requires speed, a. is a lot cleaner looking and maybe safer than b.
My question is: is a switch statement (a.) that much slower than function pointers (b.)?
Is there a simpler method, like a template-type class which changes according to an initialisation variable but has the same input variables for all variations, or am I wishing for too much?
Thanks in advance
Since this is C++, it seems like the obvious answer is "c": use virtual functions.
Your (a) vs. (b) question is almost impossible to answer, mostly because such questions should only be answered with a profiler. Beware premature optimization. Trying to be tricky for the sake of speed can be problematic because compiler writers and hardware designers tend to optimize for common idioms. If you do weird stuff then you'll miss out on that.
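For completeness, option (c) might look something like this sketch; the names are illustrative rather than taken from the question:

```cpp
struct Voice {
    virtual ~Voice() = default;
    virtual float process(float in) = 0;  // behaviour chosen once, at construction
};

struct LowPass : Voice {
    float state = 0.0f;
    float process(float in) override { state += 0.1f * (in - state); return state; }
};

struct Distortion : Voice {
    float process(float in) override {
        return in > 1.0f ? 1.0f : (in < -1.0f ? -1.0f : in);  // hard clip
    }
};

void render(Voice& v, float* buf, int n) {
    for (int i = 0; i < n; ++i)
        buf[i] = v.process(buf[i]);       // one indirect call per sample
}
```

The behaviour is selected once when the object is constructed, and the per-sample cost is the same single indirect call a function pointer would cost.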

class instance pointers or function pointers?

This is a C++ question. We would like to have 2 utility functions that have different implementations depending on a certain parameter. At runtime it is determined, based on this parameter, which implementation should be called. What design would be best in terms of memory usage and performance? We are thinking of two approaches, but we can't determine the improvement gained in either:
- Defining an interface for these 2 utility functions and having multiple classes implement it, then creating a map with instances of these implementations (eager initialisation)
- Defining all these functions in one class as static functions and invoking them through function pointers
Virtual dispatch is usually realized using function pointers, so both of your ideas boil down to the same thing (from the compiler's point of view).
On second thought, you are considering the performance of something as basic as a function call. Are you 100% sure you're optimizing the part that is the bottleneck? It's extremely easy to get sidetracked when optimizing, and to spend days on something which has 0 or 1% impact on performance. So stick to the golden rule: prove which part really slows you down. If you write tests for it, it'll be easy to try both solutions and see which one is faster.
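If it helps with writing such a test, the two approaches from the question could be sketched like this (all class and function names are invented):

```cpp
#include <map>
#include <memory>
#include <string>

// Approach 1: interface plus a map of eagerly-created implementations.
struct Utility {
    virtual ~Utility() = default;
    virtual int run(int x) = 0;
};

struct FastUtility : Utility { int run(int x) override { return x + 1; } };
struct SafeUtility : Utility { int run(int x) override { return x > 0 ? x + 1 : 0; } };

std::map<std::string, std::unique_ptr<Utility>> make_registry() {
    std::map<std::string, std::unique_ptr<Utility>> m;
    m["fast"] = std::make_unique<FastUtility>();
    m["safe"] = std::make_unique<SafeUtility>();
    return m;
}

// Approach 2: static functions selected through a plain function pointer.
struct Utils {
    static int run_fast(int x) { return x + 1; }
    static int run_safe(int x) { return x > 0 ? x + 1 : 0; }
};

int call_via_pointer(bool fast, int x) {
    int (*fn)(int) = fast ? &Utils::run_fast : &Utils::run_safe;
    return fn(x);  // the same kind of indirect call the vtable would make
}
```

Either way the actual call is one indirect jump; whatever difference remains will come from the lookup (map vs. branch), which is exactly what a test would show.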

C++ : inheritance without virtuality

I wonder if what I'm currently doing is a shame for C++, or if it is OK.
I work on a code for computational purposes. For some classes, I use a normal inheritance scheme with virtuality/polymorphism. But I need some classes to do intensive computation, and it would be great to avoid the overhead due to virtuality.
Basically, I want to use these classes without pointers or indirection: inheritance is just here to avoid a lot of copy/paste of code (the file size of the base class is about 60 KB, which is a lot of code). So no virtual functions, and no virtual destructor.
I wonder if it is perfectly OK from a C++ point of view or if it can create side effects (the concerned classes will be used a lot in the program).
Thank you very much.
Using polymorphism in C++ is neither good nor bad. Polymorphism serves a purpose, as does a lack of polymorphism. There is nothing wrong with using inheritance without using polymorphism on its own.
Since polymorphism serves a purpose, and the lack of polymorphism also serves a purpose, you should design your classes with those purposes in mind. If, for example, you need runtime binding of behavior to class instances, you need polymorphism.
That all being said, there are right and wrong reasons for choosing one approach over the other. If you are designing your classes without polymorphism strictly because you want to "avoid overhead" that is likely a wrong reason. This is an instance of premature optimization so long as you are making design changes or decisions without having profiled your code and proved that polymorphism is an actual problem.
Design by architectural requirements first. Later go back and refactor if the design proves to be non-performant.
I would rephrase the question:
What does inheritance bring that composition could not achieve, if you eschew polymorphism?
If the answer is nothing, which I suspect, then perhaps inheritance is not required in the first place.
Not using virtual members/inheritance is perfectly OK. C++ is designed to serve a vast audience, and it doesn't restrict anyone to a particular paradigm.
You can use C++ to write procedural, generic, object-oriented code, or any mix of them. Just try to make the best of it.
I wonder if what I'm currently doing is a shame for C++, or if it is OK.
Not at all.
Rather, imposing an OO design you don't need, just for the sake of it, would be a shame.
Basically, I want to use these classes without pointers or indirection ...
In fact you are going in the right direction. Raw pointers, arrays, and other such low-level features are better suited to advanced programming. Prefer the likes of std::shared_ptr, std::vector, and the standard library containers instead.
Basically, you are using inheritance without polymorphism. And that's OK.
Object-oriented programming has features other than polymorphism. If you can benefit from them, just use them.
In general, it is not a good idea to use inheritance to reuse code. Inheritance is rather to be used by code that was designed to use your base class. I would suggest a different approach to the problem. Consider some of the alternatives, like composition, changing the functionality to be implemented in free functions rather than a base class, or static polymorphism (through the use of templates).
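As a sketch of that last alternative, static polymorphism (here via CRTP, with hypothetical solver names) shares the base-class code while resolving calls at compile time:

```cpp
// The base class holds the shared code once; the call into the derived
// class is resolved at compile time, so there is no vtable and the
// compiler is free to inline everything.
template <typename Derived>
struct SolverBase {
    double solve(double x) {
        double prepared = x * 0.5;                      // shared preprocessing
        return static_cast<Derived*>(this)->step(prepared);
    }
};

struct FastSolver : SolverBase<FastSolver> {
    double step(double x) { return x * x; }
};

struct AccurateSolver : SolverBase<AccurateSolver> {
    double step(double x) { return x * x + 1e-9; }
};
```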
It's not a performance problem until you can prove it.
Check out that answer and the "Fastest possible delegates" article.

C++ Pimpl vs Pure Virtual Interface Performance

I realize there are quite a few posts on this subject, but I am having trouble finding the answer to this exact question.
For function calls, which is faster, a pure-virtual interface or a pimpl?
At first glance, it seems to me that the pure-virtual interface would be faster, because using the pimpl would cost two function calls instead of one... or would some kind of clever compiler trick take over in this case?
edit:
I am trying to decide which of these I should use to abstract away the system-dependent portions of a few objects that may end up having to be spawned quite frequently, and in large numbers.
edit:
I suppose it's worth saying at this point that the root of my problem was that I mistook the Abstract Factory design pattern for a method of making my code work on multiple platforms, when its real purpose is switching implementations for a given interface at runtime.
The two options are not equivalent, and they should not be compared on performance, as the focus of each is different. Even if they were equivalent, the performance difference would be minimal to unimportant in most situations. If you are in the rare case where you know that dispatch is an issue, then you have the tools to measure the difference yourself.
Why do you ask? The question doesn't seem to make sense.
One generally uses virtual functions when one wants polymorphism: when you want them to be overridden in derived classes.
One generally uses pimpl when one wants to remove implementation details from header files.
The two really aren't interchangeable. Off the top of my head, I cannot think of any reasonable situations where you would use one and consider replacing it with the other.
Anyways, that said, for a typical implementation of virtual functions, a function call involves reading the object to find the virtual function table pointer, then reading the virtual function table to find the function pointer, and finally calling the function pointer.
For a class implemented via pimpl, one function call is forced, but it could be absolutely anything 'under the hood'. Despite what you suggest, no second function call is implied by the paradigm.
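To make the contrast concrete, a rough sketch (Widget and Gadget are invented names):

```cpp
#include <memory>

// Pure virtual interface: every call through a Widget* goes via the vtable.
struct Widget {
    virtual ~Widget() = default;
    virtual void draw() = 0;
};

// Pimpl: the public functions are ordinary non-virtual calls that forward
// into a hidden implementation. Only Impl's definition is out of sight.
class Gadget {
public:
    Gadget();
    ~Gadget();
    void draw();                 // non-virtual; forwards to impl_
private:
    struct Impl;                 // normally defined in the .cpp file
    std::unique_ptr<Impl> impl_;
};

// What would live in the .cpp file:
struct Gadget::Impl {
    void draw() { /* platform-specific work */ }
};
Gadget::Gadget() : impl_(std::make_unique<Impl>()) {}
Gadget::~Gadget() = default;
void Gadget::draw() { impl_->draw(); }  // one direct call, no vtable
```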
Finally, don't forget the usual guidelines for optimization apply: you have to actually implement and measure. Trying to "think" up the answer tends to lead to poor results, even from people experienced at this sort of thing.
And, of course, the most important rule of optimization: make sure something matters before you devote a lot of time trying to optimize it. Otherwise, you are going to wind up wasting a lot of time and energy.

Moving from void* and casting to an ABC with PVFs (will there be a speed hit?)

I've just inherited (ahem) a QNX realtime project which uses a void*/downcasting/case-statement mechanism to handle messaging. I'd prefer to switch to an abstract base class with pure virtual functions instead, but I'm wondering if the original solution was done like that for speed reasons. It looks a lot like it was written originally in C and got moved at some point to C++, so I'm guessing that could be the reason behind it.
Any thoughts on this are appreciated. I don't want to make the code nice, safe and neat and then have it fail for performance reasons during testing.
I doubt that performance will be a concern. If the values in the switch/case are sufficiently disparate, your compiler may not even optimize it into a jump table, setting up the possibility that the virtual dispatch could be faster than the switch.
If a pure virtual interface makes sense design-wise I would definitely go that way (prototype and profile it if you're really concerned).
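For reference, the before/after shapes of such a refactoring might look like this; the message types and names are hypothetical:

```cpp
#include <cstdint>

// Before: a type tag plus a switch, with void* payloads downcast per case.
enum class MsgType : std::uint8_t { Start, Stop };

struct RawMessage {
    MsgType type;
    void*   payload;
};

void handle_raw(const RawMessage& m) {
    switch (m.type) {                 // compare chain or jump table
        case MsgType::Start: /* cast payload, handle start */ break;
        case MsgType::Stop:  /* cast payload, handle stop  */ break;
    }
}

// After: an abstract base class with a pure virtual handler.
struct Message {
    virtual ~Message() = default;
    virtual void handle() = 0;        // one indirect call, no downcasts
};

struct StartMessage : Message { void handle() override { /* handle start */ } };
struct StopMessage  : Message { void handle() override { /* handle stop  */ } };
```

Both forms end in a single indirect transfer of control (a jump-table entry or a vtable slot), which is why the decision usually comes down to whichever design is clearer and safer rather than raw speed.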