Template abuse? - C++

I wanted to transform dynamic_casts from a base class to a derived class from this style:
auto derived = dynamic_cast<Derived*>(object);
To something more compact. For that, I added the following template to the Base class:
template<typename T>
T As() { return dynamic_cast<T>(this); }
So now the previous statement would be rewritten as
auto derived = object->As<Derived*>();
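For reference, here is a minimal compilable sketch of the whole idea (assuming Base is polymorphic, i.e. it already has at least one virtual function; Derived is just an illustrative name):

#include <iostream>

struct Base {
    virtual ~Base() = default;               // Base must be polymorphic for dynamic_cast to work

    template<typename T>
    T As() { return dynamic_cast<T>(this); }
};

struct Derived : Base {
    void Hello() { std::cout << "Derived\n"; }
};

int main() {
    Derived d;
    Base* object = &d;

    if (auto derived = object->As<Derived*>())   // same as dynamic_cast<Derived*>(object)
        derived->Hello();
}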
I like this style more. But I know there might be readability issues (subjective), or maybe increased memory usage for the class? If I am correct, this will generate a function for each derived type I cast to. This number can be potentially large (100 different derived classes).
Should I just stick to plain dynamic_cast?

If you read material from a number of experts who have participated in the design of C++ (Stroustrup, Sutter, the list goes on) you will find that dynamic_cast (and all the _casts) are verbose and clumsy for the programmer BY DESIGN.
Wherever possible, it is considered best to AVOID using them. While all of the _cast operators have their place (i.e. there are circumstances in which they are genuinely the best solution to a problem), they are also blunt instruments that can be used to work around problems caused by bad design. Unfortunately, given a choice, a lot of programmers will reach for such blunt instruments rather than applying a bit more effort to learn appropriate techniques and to clean up their design - which has benefits such as making the code easier to get working right, and easier to maintain.
dynamic_cast is, arguably, the worst of the _cast operators, since it almost invariably introduces an overhead at run time. If it is used to work around deficiencies due to bad design, there is a distinct run-time penalty.
Making the syntax clumsy and verbose encourages a programmer to find alternatives (e.g. design types and operations on types, in a way that avoids the need for such conversions).
What you're asking for is a way to allow programmers to use dynamic_cast easily and with less thought. That will encourage bad design, by allowing a programmer to easily use the _cast operators to work around design problems, when they would often be better off applying a bit more effort to avoid a need for such conversions in the first place. There is plenty of information available about techniques that can be used to avoid use of operations like dynamic_cast.
So, yes, if you really need to use such conversions, I suggest you stick to use of dynamic_cast.
Better yet, you might want to also apply effort to learn design techniques and idioms that reduce how often you need to use it.

Related

Abstract classes vs. templates - good practices

Let's say I have some kind of class that represents an algorithm, and this algorithm requires something special from the data (e.g. some member function).
For example, we can do:
                                      <<interface>>
+------------------------+           +------------+
| Algorithm              | <<uses>>  | Data       |
+------------------------+---------->+------------+
| + doJob(inData : Data) |           | +getPixel()|
+------------------------+           +------------+
And we can force the user of Algorithm to inherit from Data every time he wants to use the Algorithm class. We can also do a template:
template<typename T>
void doJob(T&& inputData) {
    // implementation
}
(a free function rather than a class member, to simplify things)
And we force our client to create classes that have methods with the proper names, but we do not make him implement our abstract class (an interface in other languages) (maybe a little better performance?)
And my question is:
Which approach is better?
When having the choice, should we implement things in a template way or in an abstract way in a library?
Is there a reason for the standard not to define some standard "Interfaces" like std::container or std::factory (just examples)?
You actually have more than one question, so, let's answer them one by one:
Which approach is better?
Neither is better in general. Each has its strengths and weaknesses. But you do come to an interesting point: on a more abstract level, those two are pretty much the same.
When having the choice, should we implement things in a template way or in an abstract way in a library?
With templates you get:
In general, faster execution. It can be much faster, because a lot of inlining, and then optimization, can be done. OTOH, with an advanced de-virtualization compiler/linker and functions that can't be much inlined/optimized, you might get pretty much the same speed.
Slower compile times. It can be much slower, especially if you go the "fancy template-meta-programming" way.
Worse compiler errors. They can be much worse, especially if you go the "fancy template-meta-programming" way. When C++ gets support for concepts, one should be able to avoid this.
If you design it carefully, improved type-safety. OTOH, if you're not careful, you'll end up with worse duck-typing than Smalltalk. Concepts would be a tool that could help here, too.
With virtual functions / interfaces, you get:
De-coupled design, where, if you're careful, changes from one file won't require a re-compilation of others, and compile times can be much faster.
Run-time polymorphism, meaning you can dynamically load code (it ain't as easy as it sounds, but, it's possible)
Something that looks more familiar to someone who's experienced in OO.
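To make the trade-off concrete, here is a rough sketch of the same operation written both ways (the Data/getPixel names come from the question; the rest is illustrative):

#include <cstdint>

// Interface approach: the client must inherit from Data and pay for a virtual call.
struct Data {
    virtual ~Data() = default;
    virtual std::uint8_t getPixel(int x, int y) const = 0;
};

void doJob(const Data& inData) {
    auto p = inData.getPixel(0, 0);   // dispatched at run time
    // ... rest of the algorithm ...
    (void)p;
}

// Template approach: the client only needs a getPixel member with a compatible signature.
template<typename T>
void doJobT(T&& inData) {
    auto p = inData.getPixel(0, 0);   // resolved (and possibly inlined) at compile time
    // ... rest of the algorithm ...
    (void)p;
}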
Is there a reason for the standard not to define some standard "Interfaces" like std::container or std::factory (just examples)?
One could find a lot of "low-level" reasons, I guess, but the fundamental reason is performance. That is, the STL was designed to be "as fast as can be", and putting some (useful) interfaces "on top of it" now is pretty much impossible.
It seems to be an opinion-based question. The best way to force a client to fulfill its obligations is to make him sign a contract, that contract being an interface.

Practical uses of exploiting RTTI in C++

Having finished the 1st Vol. of Thinking in C++ by Bruce Eckel, I have started reading the 2nd Vol. The chapter devoted to RTTI (Run-Time Type Identification) amazes me the most. I have been reading about typeid, dynamic_cast, etc.
But, I have a question floating in my mind. Are there any practical uses of exploiting RTTI through the operators mentioned, i.e. some examples from real-life projects? Also, what were the limitations encountered which made its use necessary?
dynamic_cast can be useful for adding optional functionality
void foo(ICoolStuff *cs)
{
    auto ecs = dynamic_cast<IEvenCoolerStuff*>(cs);
    if (ecs != nullptr)
    {
        ecs->DoEvenCoolerStuff();
    }
    cs->DoCoolStuff();
}
When you design from scratch it might be possible to put DoEvenCoolerStuff into ICoolStuff and have empty implementations in classes which don't support it, but it's often not feasible when you need to change existing code.
Another use is messaging system implementation where one might use dynamic_cast for distinguishing messages you are interested in. More generally speaking you might need it when faced with the expression problem.
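A rough sketch of that messaging pattern, with hypothetical message types (not taken from any particular framework):

#include <iostream>
#include <memory>
#include <string>
#include <vector>

struct Message { virtual ~Message() = default; };
struct TextMessage : Message { std::string text; };
struct QuitMessage : Message {};

// A handler that only cares about TextMessage uses dynamic_cast to filter the stream.
void handle(const Message& msg) {
    if (auto text = dynamic_cast<const TextMessage*>(&msg))
        std::cout << "text: " << text->text << '\n';
    // other message types are simply ignored by this handler
}

int main() {
    std::vector<std::unique_ptr<Message>> queue;
    queue.push_back(std::make_unique<TextMessage>());
    queue.push_back(std::make_unique<QuitMessage>());

    for (const auto& m : queue)
        handle(*m);
}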
The most common example of RTTI in production code that I have seen in my travels is dynamic_cast, but it is almost always used as a band-aid for a poor design.
dynamic_cast is useful primarily for polymorphic classes, and then for going from base to derived. But think about it. If you have a base pointer to a properly designed polymorphic class, why would you ever need a pointer to a derived type? You should, in theory, only ever need to call the virtual functions, and have the actual instantiation deal with the implementation details.
Now that being said there are cases where even though dynamic_cast is a band-aid, it is still the lesser of two evils. This is particularly true when "fixing" the broken design would imply a large maintenance project, and would have no performance implications. Suppose you have a 1 MLOC application, and fixing something that is academically broken would mean having to touch 100k lines of code. If there is no performance reason to make that change, then you are fixing it purely for the sake of fixing it, but you run the risk of creating dozens or hundreds of new bugs. It might not be worth it.

C++ : inheritance without virtuality

I wonder if what I'm currently doing is a shame for C++, or if it is OK.
I work on a code for computational purpose. For some classes, I use a normal inheritance scheme with virtuality/polymorphism. But I need some classes to do intensive computation, and it would be great to avoid overhead due to virtuality.
Basically, I want to use these classes without pointers or redirection: inheritance is just there to avoid a lot of copy/pasting of code (the file size of the base class is about 60 KB, which is a lot of code). So no virtual functions, and no virtual destructor.
I wonder if it is perfectly OK from a C++ point of view or if it can create side effects (the concerned classes will be used a lot in the program).
Thank you very much.
Using polymorphism in C++ is neither good nor bad. Polymorphism serves a purpose, as does a lack of polymorphism. There is nothing wrong with using inheritance without using polymorphism on its own.
Since polymorphism serves a purpose, and the lack of polymorphism also serves a purpose, you should design your classes with those purposes in mind. If, for example, you need runtime binding of behavior to class instances, you need polymorphism.
That all being said, there are right and wrong reasons for choosing one approach over the other. If you are designing your classes without polymorphism strictly because you want to "avoid overhead" that is likely a wrong reason. This is an instance of premature optimization so long as you are making design changes or decisions without having profiled your code and proved that polymorphism is an actual problem.
Design by architectural requirements first. Later go back and refactor if the design proves to be non-performant.
I would rephrase the question:
What does inheritance bring that composition could not achieve, if you eschew polymorphism?
If the answer is nothing, which I suspect, then perhaps inheritance is not required in the first place.
Not using virtual members/inheritance is perfectly OK. C++ is designed to serve a vast audience and it doesn't restrict anyone to a particular paradigm.
You can use C++ to write procedural, generic, object-oriented code, or any mix of them. Just try to make the best of it.
I'm currently doing is a shame for C++, or if it is OK.
Not at all.
Rather, if you don't need an OO design and still impose it just for the sake of it, that would be a shame.
Basically, I want to use these classes without pointers or redirection ...
In fact you are going in the right direction. Pointers, arrays and other such low-level features are better suited to advanced programming. Prefer instead the likes of std::shared_ptr, std::vector, and the other standard library containers.
Basically, you are using inheritance without polymorphism. And that's ok.
Object-oriented programming has other features than polymorphism. If you can benefit from them, just use them.
In general, it is not a good idea to use inheritance to reuse code. Inheritance is rather to be used by code that was designed to use your base class. I would suggest a different approach to the problem. Consider some of the alternatives, like composition, changing the functionality to be implemented in free functions rather than a base class, or static polymorphism (through the use of templates).
It's not a performance problem until you can prove it.
Check out that answer and the "Fastest possible delegates" article.

Fast dynamic casting progress

A little while ago, I found that very interesting paper on a very neat performance upgrade for dynamic_cast in C++: http://www2.research.att.com/~bs/fast_dynamic_casting.pdf.
Basically, it makes dynamic_cast in C++ much faster than the traditional walk of the inheritance tree. As stated in the paper, the method provides for a fast, constant-time dynamic casting algorithm.
This paper was published in 2005. Now, I am wondering if the technique was ever implemented somewhere or if there are plans to implement it anywhere?
I do not know what implementations various compilers use besides GCC (which isn't linear). However, it is important to stress that the paper does not necessarily propose a method that is always faster than existing implementations for all (or even common) usage. It proposes a general solution that is asymptotically better as inheritance hierarchies grow.
However, it is rarely a good design to have large inheritance hierarchies, as they tend to force the application to become monolithic and inflexible to change. Programs with flexible design tend to only have hierarchies mostly with 2 levels, an abstract base and an implementation of runtime polymorphic roles to support the Open/Closed Principle. In these cases, walking the inheritance graph can be as simple as a single pointer dereference and compare, which can be faster than the index-sum-then-dereference-then-compare presented by Gibbs and Stroustrup.
Also, it is important to stress that it is never necessary to write a program that uses dynamic_cast unless your own business rules require it. The use of dynamic_cast is always an indication that polymorphism is not being properly used and reuse is being compromised. If you need a behavior based on casting up a hierarchy, adding a virtual method gives the clean solution. If you have a code section that does dynamic_cast-checks on types, that section of code will never "close" (in the meaning of the Open/Closed Principle), and will need to be updated for every new type added to the system. A virtual dispatch, on the other hand, is added only on new types, allowing you to remain open to expansion and yet closing the behaviors operating on the base type.
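As an illustration of that point (the Shape names are hypothetical): the first function has to be edited for every new type added to the system, while the virtual version stays closed to modification.

struct Shape { virtual ~Shape() = default; };
struct Circle : Shape { double r = 1.0; };
struct Square : Shape { double side = 1.0; };

const double pi = 3.14159265358979;

// Open-ended type switch: adding a Triangle later forces an edit here.
double area(const Shape& s) {
    if (auto c = dynamic_cast<const Circle*>(&s)) return pi * c->r * c->r;
    if (auto q = dynamic_cast<const Square*>(&s)) return q->side * q->side;
    return 0.0;
}

// Virtual dispatch: a new shape brings its own area(), and no existing code changes.
struct VShape {
    virtual ~VShape() = default;
    virtual double area() const = 0;
};
struct VCircle : VShape {
    double r = 1.0;
    double area() const override { return pi * r * r; }
};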
So this is really a rather academic suggestion (equating to changing a map to a hash_map algorithmically) that shouldn't have real world effects if good design is followed. If business rules forbid good design (some shops may have code barriers or code ownership issues where you cannot change existing architectures the way they need to be, nor do they allow adaptors to be built as would commonly be used for 3rd party libraries), then it is best not to make the decision on which compiler to use based on what algorithm is implemented. As always, if performance is key and you have to use a feature like dynamic_cast, profile your code. It is possible (and likely in many cases) that the tree-walking implementation is faster in practice.
See also the standards committee's review of implementations, including dynamic_cast, and a well-known look at C++ in embedded environments and good use (which mentions Gibbs and Stroustrup in passing).

C++ style vs. performance?

C++ style vs. performance - is using C-style things that are faster than some C++ equivalents really such a bad practice? For example:
Don't use atoi(), itoa(), atol(), etc.! Use std::stringstream <- it's probably better sometimes, but always? What's so bad about using the C functions? Yes, it's C-style, not C++, but so what? This is C++, we're looking for performance all the time..
Never use raw pointers, use smart pointers instead - OK, they're really useful, everyone knows that, I know that, I use them all the time and I know how much better they are than raw pointers, but sometimes it's completely safe to use raw pointers. Why not? "Not C++ style"? <- is this enough?
Don't use bitwise operations - too C-style? WTH? Why not, when you're sure what you're doing? For example - don't do bitwise exchange of variables ( a ^= b; b ^= a; a ^= b; ) - use standard 3-step exchange. Don't use left-shift for multiplying by two. Etc, etc.. (OK, that's not C++ style vs. C-style, but still "not good practice" )
And finally, the most expensive - "Don't use enum-s to return codes, it's too C-style, use exceptions for different errors"? Why? OK, when we're talking about error handling at deep levels - OK, but why always? What's so wrong with this, for example - when we're talking about a function that returns different error codes, and the error handling will be implemented only in the function that calls the first one? I mean - no need to pass the error codes to an upper level. Exceptions are rather slow, and they're exceptions - for exceptional situations, not for... beauty.
etc., etc., etc.
Okay, I know that good coding style is very, very important <- the code should be easy to read and understand. I know that there's no need for micro-optimizations, as modern compilers are very smart and compiler optimizations are very powerful. But I also know how expensive exception handling is, how (some) smart pointers are implemented, and that there's no need for smart_ptr all the time.. I know that, for example, atoi is not as "safe" as std::stringstream is, but still.. What about performance?
EDIT: I'm not talking about some really hard things that are C-specific. I mean - I'm not wondering whether to use function pointers or virtual methods and that kind of stuff, which a C++ programmer may not know if he has never used such things (while C programmers do this all the time). I'm talking about some more common and easy things, such as in the examples.
In general, the thing you're missing is that the C way often isn't faster. It just looks more like a hack, and people often think hacks are faster.
Never use raw pointers, use smart pointers instead - OK, they're really useful, everyone knows that, I know that, I use them all the time and I know how much better they are than raw pointers, but sometimes it's completely safe to use raw pointers. Why not?
Let's turn the question on its head. Sometimes it's safe to use raw pointers. Is that alone a reason to use them? Is there anything about raw pointers that is actually superior to smart pointers? It depends. Some smart pointer types are slower than raw pointers. Others aren't. What is the performance rationale for using a raw pointer over a std::unique_ptr or a boost::scoped_ptr? Neither of them has any overhead, they just provide safer semantics.
This isn't to say that you should never use raw pointers. Just that you shouldn't do it just because you think you need performance, or just because "it seems safe". Do it when you need to represent something that smart pointers can't. As a rule of thumb, use pointers to point to things, and smart pointers to take ownership of things. But it's a rule of thumb, not a universal rule. Use whichever fits the task at hand. But don't blindly assume that raw pointers will be faster. And when you use smart pointers, be sure you are familiar with them all. Too many people just use shared_ptr for everything, and that is just awful, both in terms of performance and the very vague shared ownership semantics you end up applying to everything.
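A small sketch of that rule of thumb (the Widget type is just for illustration): the smart pointer owns, the raw pointer merely points.

#include <iostream>
#include <memory>

struct Widget { int value = 42; };

// Non-owning observer: a raw pointer just points, it never deletes.
void print(const Widget* w) {
    if (w) std::cout << w->value << '\n';
}

int main() {
    auto owner = std::make_unique<Widget>();   // unique ownership, no overhead over new/delete
    print(owner.get());                        // hand out a raw, non-owning pointer
}                                              // the Widget is released here, automatically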
Don't use bitwise operations - too C-style? WTH? Why not, when you're sure what you're doing? For example - don't do bitwise exchange of variables ( a ^= b; b ^= a; a ^= b; ) - use standard 3-step exchange. Don't use left-shift for multiplying by two. Etc, etc.. (OK, that's not C++ style vs. C-style, but still "not good practice" )
That one is correct - and the reason is, precisely, "it's faster": the plain versions are faster. Bitwise exchange is problematic in many ways:
it is slower on a modern CPU
it is more subtle and easier to get wrong
it works with a very limited set of types
And when multiplying by two, multiply by two. The compiler knows about this trick, and will apply it if it is faster. And once again, shifting has many of the same problems. It may, in this case, be faster (which is why the compiler will do it for you), but it is still easier to get wrong, and it works with a limited set of types. In particular, it might compile fine with types that you think it is safe to do this trick with... and then blow up in practice. Bit shifting on negative values, especially, is a minefield. Let the compiler navigate it for you.
Incidentally, this has nothing to do with "C style". The exact same advice applies in C. In C, a regular swap is still faster than the bitwise hack, and bitshifting instead of a multiply will still be done by the compiler if it is valid and if it is faster.
But as a programmer, you should use bitwise operations for one thing only: to do bitwise manipulation of integers. You've already got a multiplication operator, so use that when you want to multiply. And you've also got a std::swap function. Use that if you want to swap two values. One of the most important tricks when optimizing is, perhaps surprisingly, to write readable, meaningful code. That allows your compiler to understand the code and optimize it. std::swap can be specialized to do the most efficient exchange for the particular type it's used on. And the compiler knows several ways to implement multiplication, and will pick the fastest one depending on circumstance... If you tell it to. If you tell it to bit shift instead, you're just misleading it. Tell it to multiply, and it will give you the fastest multiply it has.
And finally, the most expensive - "Don't use enum-s to return codes, it's too C-style, use exceptions for different errors" ?
Depends on who you ask. Most C++ programmers I know of find room for both. But keep in mind that one unfortunate thing about return codes is that they're easily ignored. If that is unacceptable, then perhaps you should prefer an exception in this case. Another point is that RAII works better together with exceptions, and a C++ programmer should definitely use RAII wherever possible. Unfortunately, because constructors can't return error codes, exceptions are often the only way to indicate errors.
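A minimal sketch of that interaction (the File class is illustrative): the constructor can only report failure by throwing, and RAII guarantees the resource is released no matter how the scope is left.

#include <cstdio>
#include <stdexcept>
#include <string>

class File {
public:
    explicit File(const std::string& path)
        : handle_(std::fopen(path.c_str(), "r")) {
        if (!handle_)
            throw std::runtime_error("cannot open " + path);  // no way to return an error code
    }
    ~File() { if (handle_) std::fclose(handle_); }             // RAII: always released

    File(const File&) = delete;
    File& operator=(const File&) = delete;

private:
    std::FILE* handle_;
};

void use() {
    File f("data.txt");   // either a usable object or an exception - never a half-open state
    // ... read from f; even if this throws, the destructor closes the file ...
}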
but still.. What about performance?
What about it? Any decent C programmer would be happy to tell you not to optimize prematurely.
Your CPU can execute perhaps 8 billion instructions per second. If you make two calls to a std::stringstream in that second, is that going to make a measurable dent in the budget?
You can't predict performance. You can't make up a coding guideline that will result in fast code. Even if you never throw a single exception, and never ever use stringstream, your code still won't automatically be fast. If you try to optimize while you write the code, then you're going to spend 90% of the effort optimizing the 90% of the code that is hardly ever executed. In order to get a measurable improvement, you need to focus on the 10% of the code that makes up 95% of the execution time. Trying to make everything fast just results in a lot of wasted time with little to show for it, and a much uglier code base.
I'd advise against atoi, and atol as a rule, but not just on style grounds. They make it essentially impossible to detect input errors. While a stringstream can do the same job, strtol (for one example) is what I'd usually advise as the direct replacement.
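A sketch of what that error detection looks like with strtol (whereas atoi cannot distinguish "0" from garbage):

#include <cerrno>
#include <cstdlib>
#include <iostream>

bool parseLong(const char* text, long& out) {
    char* end = nullptr;
    errno = 0;
    long value = std::strtol(text, &end, 10);
    if (end == text || *end != '\0' || errno == ERANGE)
        return false;                 // no digits, trailing junk, or out of range
    out = value;
    return true;
}

int main() {
    long n;
    std::cout << parseLong("123", n) << ' ' << parseLong("12x", n) << '\n';   // prints: 1 0
}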
I'm not sure who's giving that advice. Use smart pointers when they're helpful, but when they're not, there's nothing wrong with using a raw pointer.
I really have no idea who thinks it's "not good practice" to use bitwise operators in C++. Unless there were some specific conditions attached to that advice, I'd say it was just plain wrong.
This one depends heavily on where you draw the line between an exceptional input, and (for example) an input that's expected, but not usable. Generally speaking, if you're accepting input direct from the user, you can't (and shouldn't) classify anything as truly exceptional. The main good point of exceptions (even in a situation like this) is ensuring that errors aren't just ignored. OTOH, I don't think that's always the sole criterion, so you can't say it's the right way to handle every situation.
All in all, it sounds to me like you've gotten some advice that's dogmatic to the point of ignoring reality. It's probably best ignored or at least viewed as one rather extreme position about how C++ could be written, not necessarily how it always (or ever, necessarily) should be written.
Adding to @Jerry Coffin's answer, which I think is extremely useful, I would like to present some subjective observations.
The thing is that programmers tend to get fancy. That is, most of us really like writing fancy code just for the sake of it. This is perfectly fine as long as you are doing the project on your own. Remember, good software is software whose binary works as expected, not software whose source code is clean. However, when it comes to larger projects which are developed and maintained by lots of people, it is economically better to write simpler code, so that no one on the team loses time understanding what you meant - even at the cost of runtime (a naturally minor cost). That's why many people, including myself, would discourage using the xor trick instead of assignment (you may be surprised, but there are a great many programmers out there who haven't heard of the xor trick). The xor trick works only for integers anyway, and the traditional way of swapping integers is very fast anyway, so using the xor trick is just being fancy.
using itoa, atoi, etc. instead of streams is faster. Yes, it is. But how much faster? Not much. Unless most of your program does nothing but convert between text and numbers, you won't notice the difference. Why do people use itoa, atoi, etc.? Well, some of them do because they are unaware of the C++ alternative. Another group does because it's just one LOC. For the former group - shame on you; for the latter - why not boost::lexical_cast?
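For reference, the boost::lexical_cast version is also a one-liner, but unlike atoi it reports bad input instead of silently returning a number (this assumes Boost is available):

#include <boost/lexical_cast.hpp>
#include <iostream>
#include <string>

int main() {
    int n = boost::lexical_cast<int>("123");               // one line, like atoi
    std::string s = boost::lexical_cast<std::string>(n);   // and the reverse direction

    try {
        boost::lexical_cast<int>("12x");                   // atoi would quietly return 12
    } catch (const boost::bad_lexical_cast&) {
        std::cout << "not a number\n";
    }
    std::cout << s << '\n';
}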
exceptions... ah ... yeah, they can be slower than return codes but in most cases not really. Return codes can contain information, which is not an error. Exceptions should be used to report severe errors, ones which cannot be ignored. Some people forget about this and use exceptions for simulating some weird signal/slot mechanisms (believe me, I have seen it, yuck). My personal opinion is that there is nothing wrong using return codes, but severe errors should be reported with exceptions, unless the profiler has shown that refraining from them would considerably boost the performance
raw pointers - My own opinion is this: never use smart pointers when it's not about ownership. Always use smart pointers when it's about ownership. Naturally with some exceptions.
bit-shifting instead of multiplication by powers of two. This, I believe, is a classic example of premature optimization. x << 3; I bet at least 25% of your co-workers will need some time before they understand/realize this means x * 8. Obfuscated (at least for 25%) code, for what exact reason? Again, if the profiler tells you this is the bottleneck (which I doubt will be the case except in extremely rare cases), then green light, go ahead and do it (leaving a comment that in fact this means x * 8).
To sum it up: a good professional acknowledges the "good styles", understands why and when they are good, and rightfully makes exceptions because he knows what he's doing. Average/bad professionals fall into 2 types: the first type doesn't acknowledge good style and doesn't even understand what it is or why it matters. Fire them. The other type treats the style as a dogma, which is not always good.
What's a best practice? Wikipedia's words are better than mine would be:
A best practice is a technique, method, process, activity, incentive, or reward which conventional wisdom regards as more effective at delivering a particular outcome than any other technique, method, process, etc. when applied to a particular condition or circumstance.
[...]
A given best practice is only applicable to particular condition or circumstance and may have to be modified or adapted for similar circumstances. In addition, a "best" practice can evolve to become better as improvements are discovered.
I believe there is no such thing as universal truth in programming: if you think that something is a better fit in your situation than a so-called "best practice", then do what you believe is right, but know perfectly well why you do it (i.e. prove it with numbers).
Functions with mutable char* arguments are bad in C++ because it's too difficult to handle their memory manually, and we have alternatives. They aren't generic either: we can't easily switch from char to wchar_t the way basic_string allows. Also, lexical_cast is a more direct replacement for atoi and itoa.
If you don't really need the smartness of a smart pointer, don't use it.
To swap, use swap. Use bitwise operations only for bitwise operations - checking/setting/inverting flags, etc.
Exceptions are fast. They allow removing error-checking branches, so if they really "never happen", they can even increase performance.
Multiplication by bitshifting doesn't improve performance in C, the compiler will do that for you. Just be sure to multiply or divide by 2^n values for performance.
Bitwise (xor) swapping is also something that'll probably just confuse your compiler.
I'm not very experienced with string handling in C++, but from what I know, it's hard to believe it's more flexible than scanf and printf.
Also, these "you should never" statements, I generally regard them as recommendations.
All of your questions are a priori. What I mean is you are asking them in the abstract, not in the context of any specific program whose performance is your concern.
That's like trying to swim without being in water.
If you do tuning on a specific concrete program, you will find performance problems, and chances are they will have almost nothing whatsoever to do with these abstract questions. They will most likely all be things you could not have thought of a priori.
For a specific example of this, look here.
If I could generalize from experience, a major source of performance problems is galloping generality.
That is, while data structure abstraction is generally considered a good thing, any good thing can be massively over-used, and then it becomes a crippling bad thing. This is not rare. In my experience it is typical.
I think you're answering big parts of your question on your own. I personally prefer easy-to-read code (even if you understand C style, maybe the next person to read your code will have more trouble with it) and safe code (which suggests stringstream, exceptions, smart pointers...).
If you really have something where it makes sense to consider bitwise operations - ok. But often I see C programmers use a char instead of a couple of bools. I do NOT like this.
Speed is important, but most of the time is usually spent in a few hotspots of a program. So unless you measure that some technique is a problem (or you know pretty surely that it will become one), I would rather use what you call C++ style.
Why the expensiveness of exceptions is an argument? Exceptions are exceptions because they are rare. Their performance doesn't influence the overall performance. The steps you have to take to make your code exception-safe do not influence the performance either. But on the other hand exceptions are convenient and flexible.
This is not really an "answer", but if you work in a project where performance is important (e.g. embedded/games), people usually do the faster C way instead of the slower C++ way in the ways you described.
The exception may be bitwise operations, where not as much is gained as you might think. For example, "Don't use left-shift for multiplying by two." A half-way decent compiler will generate the same code for << 1 and * 2.