class instance pointers or function pointers? - c++

This is a C++ question. We would like to have two utility functions with different implementations; which implementation should be called is determined at runtime based on a certain parameter. What design would be best in terms of memory usage and performance? We are considering two approaches, but we can't determine how much either one gains:
- Define an interface for these two utility functions, have multiple classes implement it, and create a map of instances of these implementations (eagerly initialised)
- Define all of these functions in one class as static functions and invoke them through function pointers
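A minimal sketch of the two options (not from the original post; the names Utility, FastUtility, registry, and the mode strings are invented purely for illustration), showing that both end in an indirect call:

```cpp
#include <map>
#include <memory>
#include <string>

// Option 1: an interface plus one eagerly-constructed instance per implementation.
struct Utility {
    virtual ~Utility() = default;
    virtual int transform(int x) const = 0;
};

struct FastUtility : Utility {
    int transform(int x) const override { return x * 2; }
};

struct SafeUtility : Utility {
    int transform(int x) const override { return x; }
};

const std::map<std::string, std::unique_ptr<Utility>>& registry() {
    static std::map<std::string, std::unique_ptr<Utility>> r = [] {
        std::map<std::string, std::unique_ptr<Utility>> m;
        m.emplace("fast", std::make_unique<FastUtility>());
        m.emplace("safe", std::make_unique<SafeUtility>());
        return m;
    }();
    return r;
}

// Option 2: static/free functions selected through a plain function pointer.
namespace fast { int transform(int x) { return x * 2; } }
namespace safe { int transform(int x) { return x; } }

using TransformFn = int (*)(int);

TransformFn pickTransform(const std::string& mode) {
    return mode == "fast" ? &fast::transform : &safe::transform;
}

int main() {
    int a = registry().at("fast")->transform(21); // indirect call through the vtable
    int b = pickTransform("safe")(21);            // indirect call through a function pointer
    return (a == 42 && b == 21) ? 0 : 1;
}
```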

Virtual dispatch is usually realized using function pointers (a vtable), so both of your ideas boil down to the same thing from the compiler's point of view.
On second thought, you are considering the performance of something as basic as a function call. Are you 100% sure you're optimizing the part that is actually the bottleneck? It's extremely easy to get sidetracked when optimizing and spend days on something that has a 0-1% impact on performance. So stick to the golden rule: prove which part really slows you down. If you write tests for it, it'll be easy to benchmark both solutions and see which one is faster.

Related

How can I better learn to "not pay for what you don't use"?

I've just gotten answers to this question which, at the bottom line, tell me: "Doing X doesn't make sense since it would make you pay for things you might not use."
I find this maxim difficult to follow; my instincts lean more towards what I consider clear semantics, with things defined "in their place". More generally, it's not immediately obvious to me what the hidden costs and secret tariffs of a particular design choice would be.
Is this covered by (non-reference) books on C++? Is there someplace relevant online to better enlighten myself on following this principle?
In the case you are presenting, it is not as general a statement as it seems.
"Doing X doesn't make sense since it would make you pay for things you might not use."
This is merely a statement that, if you can, you should avoid virtual functions; they add overhead to every call.
Virtual functions can often be designed away by using templates and regular function calls. One standard-library example is std::vector. In Java, for instance, a Vector implements interfaces so that it can be used by generic algorithms, which is accomplished through virtual function calls; std::vector instead exposes iterators, so the algorithms are resolved at compile time.
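To make the contrast concrete (a hedged sketch with invented names, not taken from the answer): an algorithm written against an abstract interface pays a virtual call per element, while the same algorithm written as a template over iterators is resolved, and usually inlined, at compile time.

```cpp
#include <vector>

// Interface-based: every element access goes through a virtual call.
struct IntSequence {
    virtual ~IntSequence() = default;
    virtual std::size_t size() const = 0;
    virtual int at(std::size_t i) const = 0;
};

long long sumVirtual(const IntSequence& s) {
    long long total = 0;
    for (std::size_t i = 0; i < s.size(); ++i)
        total += s.at(i);               // dynamic dispatch on each call
    return total;
}

// Template-based: the iterator type is known at compile time, so calls can inline.
template <typename It>
long long sumRange(It first, It last) {
    long long total = 0;
    for (; first != last; ++first)
        total += *first;                // resolved statically, no vtable involved
    return total;
}

int main() {
    std::vector<int> v{1, 2, 3, 4};
    return sumRange(v.begin(), v.end()) == 10 ? 0 : 1;
}
```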
Despite the question being overly broad and asking for off-site material, I think it is interesting enough to deserve an answer. Remember that C++ was originally just "C with classes", and it is still possible today to write what is basically C code without using any of the nicer abstractions that C++ gives you. For example, if you don't want the cost of exceptions, don't use them; if you don't want the cost of RTTI or virtual functions, don't use them; if you don't want the overhead of templates, don't use them; and so on.
As for resources, I'm going to break the rules and recommend Game Programming Patterns which despite the name is a good general purpose guide to writing performant C++.
How can I better learn to “not pay for what you don't use”?
The key to "not paying for what you don't use" is abstractions. When you clearly understand the purpose of a class or a function, you add the data and arguments that are absolutely necessary for the class and the function to work correctly, with as little overhead as possible.
You have to be very vigilant about adding member variables and member functions (virtual as well as non-virtual) to a class. Every member variable adds to the memory requirements of the class. Every member function requires maintenance. The presence of virtual member functions adds to the memory requirements of the class and carries a small penalty at run time.
You have to be very vigilant about the arguments to a function. You don't want the user to be burdened with supplying arguments that don't make sense. You also don't want to leave out any arguments by making hidden assumptions.
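A small, compiler-dependent illustration of the memory point (my own sketch; the sizes in the comments assume a typical 64-bit ABI and will vary):

```cpp
#include <cstdio>

struct Plain {
    int x;                       // no virtual functions, so no vtable pointer
    void get() const {}
};

struct WithVirtual {
    int x;
    virtual void get() const {}  // adds a vptr to every instance
    virtual ~WithVirtual() = default;
};

int main() {
    // On a typical 64-bit implementation: sizeof(Plain) == 4,
    // sizeof(WithVirtual) == 16 (4-byte int + 8-byte vptr + padding).
    std::printf("%zu %zu\n", sizeof(Plain), sizeof(WithVirtual));
}
```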

Should I move to Object Oriented Programming(black boxes) and how?

Right now I have a DirectX engine with a couple of classes - Application, Graphics, Sound - each of them is around 1k lines and they all reference each other. I initially tried to limit the use of classes and things like passing the D3D device around, and instead made it global for all classes to use, but I see in everyone else's engine that everything is split up into many classes and they have things like Engine->GetRenderer()->Render(MyD3DContext); isn't that terribly inefficient? Why not just make MyD3DContext global and use it directly in the Render function? And one last thing I don't get: how are you supposed to make classes that work independently of each other? Sounds weird.
Firstly, why do you think that's terribly inefficient? Besides being much easier to code and maintain, it is also blazingly fast. OOP isn't a bottleneck; it's a boon for large projects with multiple developers and multiple concerns (such as real-world games).
Let me give you an example, since you mentioned "games":
The game is a Simulation
The simulation contains entities (objects)
Objects can do things and have attributes; an object is an encapsulation of attributes and actions. This is what makes the "Object" in "Object-Oriented Programming". You can think of objects as being created in a fictional factory in your simulator; the blueprint for an object is the "class". Bundling data and behaviour together like this is encapsulation.
Each of these objects is bound to your world, probably through some sort of highly mathematical, Half-Life 2 (Source) level physics engine. You wouldn't want to code the physics separately for each class. Instead you would inherit from a class (or interface) such as "IPhysics", and then whenever you change the gravity from 10.0 to 15.0, that value is propagated throughout the whole "world". This is inheritance.
An object in your game - say Gordon Freeman in Half-Life 2 - can at the same time act as a "Player" and as something "scriptable", if you know what I mean. This is polymorphism: one object acting through different types.
So you see, it is pretty easy (and terribly EFFICIENT) to model and present the fictional game in OOP.
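A toy sketch of those three ideas in C++ (all names here are invented for illustration, not taken from any real engine):

```cpp
#include <memory>
#include <vector>

// Encapsulation: attributes and actions bundled behind one type.
class Entity {
public:
    virtual ~Entity() = default;
    virtual void update(float dt) = 0;   // every entity can be updated
protected:
    float x = 0.0f, y = 0.0f;            // shared attributes
};

// Inheritance: physics behaviour written once, reused by all derived entities.
class PhysicsEntity : public Entity {
public:
    void update(float dt) override { y -= gravity * dt; }
    static float gravity;                // change once, affects the whole world
};
float PhysicsEntity::gravity = 10.0f;

// Polymorphism: the simulation loop treats every entity the same way.
class Player : public PhysicsEntity {};
class Crate  : public PhysicsEntity {};

int main() {
    std::vector<std::unique_ptr<Entity>> world;
    world.push_back(std::make_unique<Player>());
    world.push_back(std::make_unique<Crate>());
    PhysicsEntity::gravity = 15.0f;      // propagated to everything that inherits it
    for (auto& e : world) e->update(0.016f);
}
```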
It isn't terribly inefficient. And you definitely need an introduction to OOP of some sort. Maybe even something online.
Yes.
As the project becomes larger, having one global anything will cause a vast list of problems. It's also not particularly inefficient to traverse a few pointers. Worry about efficiency in the right areas - the areas you have proven by running tests to be inefficient - and try to maintain code clarity and separation at all times.
If you're worried about inefficiency, why not knock together a test app that has exactly that kind of structure and time how long all that dereferencing takes? You'll find it insignificant compared to, say, building up the list of polys in sight.
The only way you'll see the benefit of having well encapsulated non-global objects will be as your project grows and you change things around.
There are a couple of big tenets of OO design, in particular code reuse/modularity and scope/isolation. Globals are generally frowned upon these days because they just don't scale well to large development efforts and always end up causing problems, so OO attempts to limit the scope of any given call to the minimum required to perform the function.
As for modularity/reuse: the larger a sub-module grows, generally the more specific it becomes, and the less likely it is to fit all the purposes it could serve if it were broken apart into more modular chunks. Keeping modules small means you spend less time rewriting the same code for a slightly tweaked purpose, and you are less likely to break adjacent functionality while implementing your new feature. That makes it more efficient to implement, though there may or may not be some slight cost at runtime - likely not, though. Remember, it doesn't take a lick more binary to run Render() whether it's defined in a root module or composed several layers deep in an object graph; it's still just a function call in the end.
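As a quick illustration of that last point (invented names, not the asker's engine): `engine.renderer().render(ctx)` is a couple of pointer loads plus a call, negligible next to the rendering work itself.

```cpp
struct Context { /* device, command lists, ... */ };

class Renderer {
public:
    void render(Context& ctx) { (void)ctx; /* draw the scene here */ }
};

class Engine {
public:
    Renderer& renderer() { return renderer_; }
private:
    Renderer renderer_;
};

int main() {
    Engine engine;
    Context ctx;
    // A few dereferences, then the same function body runs either way.
    engine.renderer().render(ctx);
}
```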
These are just general concepts, so take what you like.
Hope that helps.

C++ Pimpl vs Pure Virtual Interface Performance

I realize there are quite a few posts on this subject, but I am having trouble finding the answer to this exact question.
For function calls, which is faster, a pure-virtual interface or a pimpl?
At first glance, it seems to me that the pure-virtual interface would be faster, because using the pimpl would cost two function calls instead of one... or would some kind of clever compiler trick take over in this case?
edit:
I am trying to decide which of these I should use to abstract away the system-dependent portions of a few objects that may end up having to be spawned quite frequently, and in large numbers.
edit:
I suppose it's worth saying at this point that the root of my problem was that I mistook the Abstract Factory design pattern for a method of making my code work on multiple platforms, when its real purpose is switching implementations for a given interface at runtime.
The two options are not equivalent, and they should not be compared on performance, as their focus is different. Even if they were equivalent, the performance difference would be minimal to unimportant in most situations. If you are in the rare case where you know that dispatch is an issue, you have the tools to measure the difference yourself.
Why do you ask? The question doesn't seem to make sense.
One generally uses virtual functions when one wants polymorphism: when you want them to be overridden in derived classes.
One generally uses pimpl when one wants to remove implementation details from header files.
The two really aren't interchangeable. Off the top of my head, I cannot think of any reasonable situations where you would use one and consider replacing it with the other.
Anyways, that said, for a typical implementation of virtual functions, a function call involves reading the object to find the virtual function table pointer, then reading the virtual function table to find the function pointer, and finally calling the function pointer.
For a class implemented via pimpl, one function call is forced, but it could be absolutely anything 'under the hood'. Despite what you suggest, no second function call is implied by the paradigm.
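A bare-bones sketch of the two shapes being compared (names invented; this is a simplification, not a recommendation): the pimpl forwards a non-virtual call to a hidden implementation, while the pure-virtual interface dispatches through the vtable.

```cpp
#include <memory>

// --- Pimpl: one non-virtual forwarding call into the hidden implementation ---
// (Widget.h)
class Widget {
public:
    Widget();
    ~Widget();
    void poke();          // non-virtual; forwards to the impl
private:
    struct Impl;
    std::unique_ptr<Impl> impl_;
};

// (Widget.cpp)
struct Widget::Impl {
    void poke() { ++count; }
    int count = 0;
};
Widget::Widget() : impl_(std::make_unique<Impl>()) {}
Widget::~Widget() = default;
void Widget::poke() { impl_->poke(); }   // often collapses to a single call

// --- Pure virtual interface: one call through the vtable ---
class IWidget {
public:
    virtual ~IWidget() = default;
    virtual void poke() = 0;
};

class ConcreteWidget : public IWidget {
public:
    void poke() override { ++count; }
private:
    int count = 0;
};

int main() {
    Widget w;
    w.poke();                             // non-virtual forwarding call

    std::unique_ptr<IWidget> iw = std::make_unique<ConcreteWidget>();
    iw->poke();                           // dynamic dispatch
}
```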
Finally, don't forget the usual guidelines for optimization apply: you have to actually implement and measure. Trying to "think" up the answer tends to lead to poor results, even from people experienced at this sort of thing.
And, of course, the most important rule of optimization: make sure something matters before you devote a lot of time trying to optimize it. Otherwise, you are going to wind up wasting a lot of time and energy.

How should I design a mechanism in C++ to manage relatively generic entities within a simulation?

I would like to start my question by stating that this is a C++ design question, more than anything, limiting the scope of the discussion to what is achievable in that language.
Let us pretend that I am working on a vehicle simulator that is intended to model modern highway systems. As part of this simulation, entities will be interacting with each other to avoid accidents, stop at stop lights and perhaps eventually even model traffic enforcement with radar guns and subsequent exciting high speed chases.
Being a spatial simulation written in C++, it seems like it would be ideal to start with some kind of Vehicle hierarchy, with cars and trucks deriving from some common base class. However, a common problem I have run into is that such a hierarchy is usually very rigidly defined, and introducing unexpected changes - modeling a boat, for instance - adds complexity that tends to grow over time into something quite unwieldy.
This simple approach seems to suffer from a combinatorial explosion of classes. Imagine I created a MoveOnWater interface and a MoveOnGround interface, and used them to define Car and Boat. Then let's say I add RadarEquipment: now I have to add classes like RadarBoat and RadarCar. Add a few more capabilities this way and the whole thing rapidly becomes quite unreasonable.
One approach I have been investigating to address this inflexibility is to do away with the inheritance hierarchy altogether. Instead of trying to come up with a type-safe way to define everything that could ever be in this simulation, I define one class - call it 'Entity' - and the capabilities that make up an entity (can it drive, can it fly, can it use radar) are all created as interfaces and added to a kind of capability list that the Entity class contains. At runtime, the proper capabilities are created and attached to the entity, and functions that want to use these interfaces must first query the entity object and check for their existence. This approach seems to be the most obvious alternative, and is working well for the time being. I do, however, worry about the maintenance issues it will have: effectively any arbitrary thing can be added, and there is no single location in which all possible capabilities are defined. It's not a problem currently, when the total number of things is quite small, but I worry that it might become one when someone else starts trying to use and modify the code.
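A rough sketch of what such a capability list might look like (my illustration; Capability, MoveOnRoad, Radar, and Entity::get are invented names, not the asker's code):

```cpp
#include <memory>
#include <typeindex>
#include <typeinfo>
#include <unordered_map>
#include <utility>

struct Capability {
    virtual ~Capability() = default;
};

struct MoveOnRoad : Capability {
    void drive(float metres) { (void)metres; }
};

struct Radar : Capability {
    bool contactAhead() const { return false; }
};

class Entity {
public:
    template <typename C, typename... Args>
    void add(Args&&... args) {
        caps_[typeid(C)] = std::make_unique<C>(std::forward<Args>(args)...);
    }

    // Returns nullptr if the entity does not have the capability.
    template <typename C>
    C* get() const {
        auto it = caps_.find(typeid(C));
        return it == caps_.end() ? nullptr : static_cast<C*>(it->second.get());
    }

private:
    std::unordered_map<std::type_index, std::unique_ptr<Capability>> caps_;
};

int main() {
    Entity car;
    car.add<MoveOnRoad>();
    car.add<Radar>();

    // Decision logic driven by what the entity is capable of:
    if (auto* road = car.get<MoveOnRoad>()) road->drive(5.0f);
    if (auto* radar = car.get<Radar>())     (void)radar->contactAhead();
}
```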
As one potential alternative, I pondered using the template system to achieve type safety while keeping the same kind of flexibility. I imagine I could create entities that inherit whatever combination of interfaces I want. Using these objects would entail creating a template class or function that uses any combination of the interfaces. One example might be simple movement on a road using just the MoveOnRoad interface, whereas more complex logic, like a "high-speed freeway chase", could use methods from both the MoveOnRoad and Radar interfaces.
Of course, making this approach usable mandates the use of Boost Concept Check just to make debugging feasible. It also has the unfortunate side effect of making "optional" interfaces all but impossible: it is not simple to write a function that does one thing if the entity has a RadarEquipment interface and something else if it doesn't. In this regard, type safety is somewhat of a curse. I think some trickery with Boost.Any might be able to pull it off, but I haven't figured out how to make that work, and it seems like way too much complexity for what I am trying to achieve.
Thus, we are left with the dynamic "list of capabilities" approach, with which the goal of having decision logic that drives behavior based on what the entity is capable of becomes trivial to achieve.
Now, with that background in mind, I am open to any design gurus telling me where I erred in my reasoning. I am eager to learn of a design pattern or idiom that is commonly used to address this issue, and the sort of tradeoffs I will have to make.
I also want to mention that I have been contemplating an even more random design. Even though my gut tells me that this should be designed as a high-performance C++ simulation, a part of me wants to do away with the Entity class and the object-oriented foo altogether and use a relational model to define all of these entity states. My initial thought is to treat the entities as an in-memory database and use procedural query logic to read and write the various state information, with the behavior logic that drives those queries written in C++. I am somewhat concerned about performance, although it would not surprise me if that were a non-issue. I am perhaps more concerned about the maintenance issues and additional complexity this would introduce, compared with the relatively simple list-of-capabilities approach.
"Encapsulate what varies" and "Prefer object composition to inheritance" are the two OOAD principles at work here.
Check out the Bridge design pattern. I visualize the Vehicle abstraction as one thing that varies, and the other aspect that varies is the "Medium": Boat/Bus/Car are all Vehicle abstractions, while Water/Road/Rail are all Mediums. A sketch of this split follows the quoted applicability notes below.
I believe that with such a mechanism there may be no need to maintain any capability list. For example, if a Bus cannot move on Water, that can be modelled as a NOP behavior in the Vehicle abstraction.
Use the Bridge pattern when:
- you want to avoid a permanent binding between an abstraction and its implementation. This might be the case, for example, when the implementation must be selected or switched at run-time.
- both the abstractions and their implementations should be extensible by subclassing. In this case, the Bridge pattern lets you combine the different abstractions and implementations and extend them independently.
- changes in the implementation of an abstraction should have no impact on clients; that is, their code should not have to be recompiled.
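A minimal sketch of how that Bridge split might look (Vehicle and Medium are taken from the answer above; everything else is invented for illustration):

```cpp
#include <iostream>
#include <memory>

// Implementor: the "Medium" hierarchy.
class Medium {
public:
    virtual ~Medium() = default;
    virtual const char* surface() const = 0;
};
class Road  : public Medium { const char* surface() const override { return "road"; } };
class Water : public Medium { const char* surface() const override { return "water"; } };

// Abstraction: the Vehicle hierarchy holds a Medium and delegates to it.
class Vehicle {
public:
    explicit Vehicle(std::shared_ptr<Medium> m) : medium_(std::move(m)) {}
    virtual ~Vehicle() = default;
    virtual void move() const {
        std::cout << "moving on " << medium_->surface() << "\n";
    }
protected:
    std::shared_ptr<Medium> medium_;
};

class Bus  : public Vehicle { public: using Vehicle::Vehicle; };
class Boat : public Vehicle { public: using Vehicle::Vehicle; };

int main() {
    Bus  bus(std::make_shared<Road>());
    Boat boat(std::make_shared<Water>());
    bus.move();   // both sides can be extended independently
    boat.move();
}
```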
Now, with that background in mind, I am open to any design gurus telling me where I erred in my reasoning.
You may be erring in using C++ to define a system for which you as yet have no need/no requirements:
This approach seems to be the most obvious alternative, and is working well for the time being. I, however, worry about the maintenance issues that this approach will have. Effectively any arbitrary thing can be added, and there is no single location in which all possible capabilities are defined. It's not a problem currently, when the total number of things is quite small, but I worry that it might be a problem when someone else starts trying to use and modify the code.
Maybe you should be considering principles like YAGNI as opposed to BDUF.
Some of my personal favourites are from Systemantics:
"15. A complex system that works is invariably found to have evolved from a simple system that works"
"16. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over, beginning with a working simple system."
You're also worrying about performance when you have no defined performance requirements and no demonstrated performance problems:
I am somewhat concerned about performance, although it would not surprise me if that was a non-issue.
Also, I hope you know about double-dispatch, which might be useful for implementing anything-to-anything interactions (it's described in some detail in More Effective C++ by Scott Meyers).
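For what it's worth, a compressed sketch of the double-dispatch idea (class names invented; Meyers' treatment is more thorough), where the concrete types of both objects are resolved through two virtual calls:

```cpp
#include <iostream>

class Car; class Truck;

class Vehicle {
public:
    virtual ~Vehicle() = default;
    virtual void collideWith(Vehicle& other) = 0;    // first dispatch: on *this
    virtual void collideWithCar(Car&) = 0;           // second dispatch: on other
    virtual void collideWithTruck(Truck&) = 0;
};

class Car : public Vehicle {
public:
    void collideWith(Vehicle& other) override { other.collideWithCar(*this); }
    void collideWithCar(Car&) override       { std::cout << "car-car\n"; }
    void collideWithTruck(Truck&) override   { std::cout << "truck-car\n"; }
};

class Truck : public Vehicle {
public:
    void collideWith(Vehicle& other) override { other.collideWithTruck(*this); }
    void collideWithCar(Car&) override       { std::cout << "car-truck\n"; }
    void collideWithTruck(Truck&) override   { std::cout << "truck-truck\n"; }
};

int main() {
    Car c; Truck t;
    Vehicle& a = c; Vehicle& b = t;
    a.collideWith(b);   // resolves to Truck::collideWithCar -> "car-truck"
}
```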

Does the usage of interfaces slow down programs? [duplicate]

Possible Duplicate:
What is the performance cost of having a virtual method in a C++ class?
Is it true that interfaces slow down programs? I have heard that this is the case because, at run time, on each use of an object implementing an interface, a decision has to be made as to which class implementing the interface the object belongs to.
I am especially interested in an answer for C++, but also in general. And if this is true, some numbers would be helpful, too.
Thank you very much!
Yes, but not by much, and certainly not enough to matter if you need the flexibility that interfaces give you. (Bear in mind that if you're using an interface heavily, the relevant bits of the vtables are going to end up in the L1 or L2 cache and so won't cost nearly as much as you fear.)
Dynamic dispatch (i.e. using virtual functions) is more expensive than a direct call.
But it would have to be an unusual program for this to be the performance limiter. More likely to limit performance are things like disk/network access, updating the UI, or memory bandwidth.
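If you do want numbers for your own setup, a crude micro-benchmark along these lines is easy to put together (invented names; results vary by compiler and hardware, and an optimizer may devirtualize or hoist calls if the benchmark is too simple):

```cpp
#include <chrono>
#include <cstdio>
#include <memory>

struct Base {
    virtual ~Base() = default;
    virtual int step(int x) const { return x + 1; }
};
struct Derived : Base {
    int step(int x) const override { return x + 2; }
};

int directStep(int x) { return x + 2; }

int main() {
    constexpr long long N = 100'000'000;
    std::unique_ptr<Base> obj = std::make_unique<Derived>();

    auto t0 = std::chrono::steady_clock::now();
    long long a = 0;
    for (long long i = 0; i < N; ++i) a += obj->step(static_cast<int>(i & 0xff));
    auto t1 = std::chrono::steady_clock::now();

    long long b = 0;
    for (long long i = 0; i < N; ++i) b += directStep(static_cast<int>(i & 0xff));
    auto t2 = std::chrono::steady_clock::now();

    using ms = std::chrono::duration<double, std::milli>;
    std::printf("virtual: %.1f ms, direct: %.1f ms (checksums %lld %lld)\n",
                ms(t1 - t0).count(), ms(t2 - t1).count(), a, b);
}
```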
Although Billy points out that this is a lot like the other post on SO, I think it's not exactly the same... mainly because of the way this question is worded.
Because Olga talks about a "decision", I almost thought that she was getting mixed up between using interfaces vs. using a derived class, and determining if the pointer to the object is of a particular class via dynamic_cast.
If you are talking about using dynamic_cast, then from what I understand (and this is not based on concrete performance numbers), you will get a pretty significant performance hit.
If you are talking about using interfaces, well, then I feel that the minor hit in doing a vtable lookup and extra call(s) is far outweighed by a better software design.
If you use the interface pattern (i.e. abstract classes in C++), then yes, there will be an overhead on the virtual function calls. But if you implemented your own, non-abstract-class mechanism to achieve the same thing, you would also have an overhead, probably greater than a virtual function call. So in practice, there is no extra overhead.
You're probably talking about virtual functions in C++. The performance penalty is minor as long as the virtual calls are not in critical code paths; basically the overhead is comparable to an additional indirect function call.