Branch prediction between objects of same class - c++

I'm optimizing a program, and trying to avoid branch misprediction. I have two objects of a class. In the class's primary function there are several if branches. Each object takes a different direction on each of those branches, and they each run the function one after another. My questions:
Since they're instances of the same class, and are therefore sharing that function, are they also sharing the same branch prediction? Essentially, am I making the system go TFTFTFTF...
Or, since they're separate objects, do they have their own branch predictions, and therefore maintain consistent predictions (TTTTTTT... and FFFFFFFF...)?

Yes, the method is shared between instances of a class: both objects execute the same machine code, and prediction is keyed off the branch instruction's address, so the branches hit the same predictor entries.
That means the predictions are shared as well.
However, there is more to branch prediction than the "last" time. The processor will remember some of the last results and identify "easy" (cyclic) patterns. Therefore, if you constantly swap between your two objects and the pattern ends up TFTFTFTFTF then the processor will correctly guess that the next result will be a T.
From a semantic point of view, however, have you thought about using a base class and two different derived classes (plus the usual virtual mechanism)?
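For illustration, a minimal sketch of that base/derived idea (the class and method names here are hypothetical):

struct Strategy
{
    virtual ~Strategy() = default;
    virtual void run() = 0;          // the "primary function"
};

// Each derived class hard-codes one direction, so the data-dependent
// branches inside the original function simply disappear.
struct StrategyA : Strategy
{
    void run() override { /* the code for the "true" path */ }
};

struct StrategyB : Strategy
{
    void run() override { /* the code for the "false" path */ }
};

Note that this trades the conditional branches for indirect calls, which have their own prediction behavior; as several answers below point out, whether that is a win has to be measured.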

Don't worry about low-level details like branch prediction (they vary from one processor model to the next). Leave that optimization to the compiler (it is probably good enough).
If you want to improve your application, work more on the algorithms themselves. And use profiling & measurements. Don't forget that premature optimization is evil.

Since a branch misprediction typically costs on the order of 10 to 20 cycles, it really only matters inside a loop that is executed millions of times a second. Modern CPUs do a pretty good job of branch prediction anyway, so it's pretty rare to have to worry about this kind of thing (compared to, say, 5 - 10 years ago).

Related

Branch-aware programming

I've been reading that branch misprediction can be a hot bottleneck for application performance. As I can see, people often show assembly code that reveals the problem and state that programmers can usually predict where a branch will go most of the time and so avoid branch mispredictions.
My questions are:
Is it possible to avoid branch mispredictions using some high level programming technique (i.e. no assembly)?
What should I keep in mind to produce branch-friendly code in a high level programming language (I'm mostly interested in C and C++)?
Code examples and benchmarks are welcome.
people often ... and state that programmers usually can predict where a branch could go
(*) Experienced programmers often point out that human programmers are very bad at predicting that.
1- Is it possible to avoid branch mispredictions using some high level programming technique (i.e. no assembly)?
Not in standard C++ or C. At least not for a single branch. What you can do is minimize the depth of your dependency chains so that a branch mis-prediction has little effect. Modern CPUs speculatively execute past a branch and drop the work if the prediction turns out to be wrong. There's a limit to how much they can keep in flight, however, which is why branch mis-prediction mainly hurts in deep dependency chains.
Some compilers provide extensions for suggesting the prediction manually, such as __builtin_expect in gcc. Here is a stackoverflow question about it. Even better, some compilers (such as gcc) support profiling the code and automatically detecting the optimal predictions. It's smarter to use profiling than manual work because of (*).
2- What should I keep in mind to produce branch-friendly code in a high level programming language (I'm mostly interested in C and C++)?
Primarily, keep in mind that branch mis-prediction is only going to affect you in the most performance-critical parts of your program, and don't worry about it until you've measured and found a problem.
But what can I do when some profiler (valgrind, VTune, ...) tells that on line n of foo.cpp I got a branch prediction penalty?
Lundin gave very sensible advice:
Measure to find out whether it matters.
If it matters, then
1. Minimize the depth of the dependency chains of your calculations. How to do that can be quite complicated; it's beyond my expertise, and there's not much you can do without diving into assembly. What you can do in a high-level language is to minimize the number of conditional checks (**). Otherwise you're at the mercy of compiler optimization. Avoiding deep dependency chains also allows more efficient use of out-of-order superscalar processors.
2. Make your branches consistently predictable. The effect of that can be seen in this stackoverflow question. In the question, there is a loop over an array, and the loop contains a branch that depends on the value of the current element. When the data was sorted, the loop could be demonstrated to be much faster when compiled with a particular compiler and run on a particular cpu (a short sketch follows after this list). Of course, keeping all your data sorted will also cost cpu time, possibly more than the branch mis-predictions do, so: measure.
3. If it's still a problem, use profile-guided optimization (if available).
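Point 2 is essentially what the well-known "sorted array" question demonstrates. A minimal sketch of that kind of loop, with a hypothetical threshold of 128:

#include <algorithm>
#include <cstdint>
#include <vector>

// Sums only the elements above a threshold. The branch inside the loop
// depends on the data, so with random data it mispredicts often.
std::uint64_t sum_large(const std::vector<int>& data)
{
    std::uint64_t sum = 0;
    for (int v : data)
    {
        if (v >= 128)   // data-dependent branch
            sum += v;
    }
    return sum;
}

// Sorting the data first means the branch outcome flips only once over the
// whole loop, making it almost perfectly predictable. Whether the sort pays
// for itself is exactly the kind of thing that must be measured.
void make_predictable(std::vector<int>& data)
{
    std::sort(data.begin(), data.end());
}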
Order of 2. and 3. may be switched. Optimizing your code by hand is a lot of work. On the other hand, gathering the profiling data can be difficult for some programs as well.
(**) One way to do that is to transform your loops, for example by unrolling them (a rough sketch follows below). You can also let the optimizer do it automatically. You must measure, though, because unrolling will affect the way you interact with the cache and may well end up being a pessimization.
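A rough sketch of such an unrolling (compilers can often do this automatically; measure before keeping it):

// Straightforward loop: one loop-condition check per element.
long sum_simple(const int* a, int n)
{
    long s = 0;
    for (int i = 0; i < n; ++i)
        s += a[i];
    return s;
}

// Unrolled by four: one loop-condition check per four elements,
// plus a small cleanup loop for the remainder.
long sum_unrolled(const int* a, int n)
{
    long s = 0;
    int i = 0;
    for (; i + 4 <= n; i += 4)
        s += a[i] + a[i + 1] + a[i + 2] + a[i + 3];
    for (; i < n; ++i)
        s += a[i];
    return s;
}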
As a caveat, I'm not a micro-optimization wizard. I don't know exactly how the hardware branch predictor works. To me it's a magical beast against which I play scissors-paper-stone and it seems to be able to read my mind and beat me all the time. I'm a design & architecture type.
Nevertheless, since this question was about a high-level mindset, I might be able to contribute some tips.
Profiling
As said, I'm not a computer architecture wizard, but I do know how to profile code with VTune and measure things like branch mispredictions and cache misses, and I do it all the time, since I work in a performance-critical field. That's the very first thing you should be looking into if you don't know how to do this (profiling). Most of these micro-level hotspots are best discovered in hindsight with a profiler in hand.
Branch Elimination
A lot of people are giving some excellent low-level advice on how to improve the predictability of your branches. You can even manually try to aid the branch predictor in some cases and also optimize for static branch prediction (e.g., writing if statements to check for the common cases first). There's a comprehensive article on the nitty-gritty details here from Intel: https://software.intel.com/en-us/articles/branch-and-loop-reorganization-to-prevent-mispredicts.
However, doing this beyond a basic common case/rare case anticipation is very hard, and it is almost always best saved for later, after you measure. It's just too difficult for humans to accurately predict the nature of the branch predictor. It's far harder to predict than things like page faults and cache misses, and even those are almost impossible for a human to predict perfectly in a complex codebase.
However, there is an easier, high-level way to mitigate branch misprediction, and that's to avoid branching completely.
Skipping Small/Rare Work
One of the mistakes I commonly made earlier in my career and see a lot of peers trying to do when they're starting out, before they've learned to profile and are still going by hunches, is to try to skip small or rare work.
An example of this is memoizing into a large look-up table to avoid repeatedly doing some relatively cheap computations, like using a look-up table that spans megabytes to avoid repeatedly calling cos and sin. To a human brain, this seems like it's saving work to compute it once and store it, except that loading from this giant LUT down through the memory hierarchy into a register often ends up being even more expensive than the computations it was intended to save.
Another case is sprinkling little branches throughout the code to avoid small computations which are harmless to do unnecessarily (they won't impact correctness), as a naive attempt at optimization, only to find that the branching costs more than just doing the unnecessary computations.
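A tiny illustration of that pattern (hypothetical names; assumes the operands are ordinary finite values so the redundant work really is harmless):

// Naive "optimization": branch to skip a cheap multiply-add.
void accumulate_branchy(double& acc, double w, double x)
{
    if (w != 0.0)      // this check may cost more than the work it skips
        acc += w * x;
}

// Often faster in practice: just do the cheap work unconditionally.
void accumulate_plain(double& acc, double w, double x)
{
    acc += w * x;      // adding w * x with w == 0 changes nothing here
}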
This naive attempt at branching as an optimization can also apply even for slightly-expensive but rare work. Take this C++ example:
struct Foo
{
    ...
    Foo& operator=(const Foo& other)
    {
        // Avoid unnecessary self-assignment.
        if (this != &other)
        {
            ...
        }
        return *this;
    }
    ...
};
Note that this is somewhat of a simplistic/illustrative example as most people implement copy assignment using copy-and-swap against a parameter passed by value and avoid branching anyway no matter what.
In this case, we're branching to avoid self-assignment. Yet if self-assignment is only doing redundant work and doesn't hinder the correctness of the result, it can often give you a boost in real-world performance to simply allow the self-copying:
struct Foo
{
    ...
    Foo& operator=(const Foo& other)
    {
        // Don't check for self-assignment.
        ...
        return *this;
    }
    ...
};
... this can help because self-assignment tends to be quite rare. We're slowing down the rare case by redundantly self-assigning, but we're speeding up the common case by avoiding the need to check in all other cases. Of course that's unlikely to reduce branch mispredictions significantly since there is a common/rare case skew in terms of the branching, but hey, a branch that doesn't exist can't be mispredicted.
A Naive Attempt at a Small Vector
As a personal story, I formerly worked in a large-scale C codebase which often had a lot of code like this:
char str[256];
// do stuff with 'str'
... and naturally since we had a pretty extensive user base, some rare user out there would eventually type in a name for a material in our software that was over 255 characters in length and overflow the buffer, leading to segfaults. Our team was getting into C++ and started porting a lot of these source files to C++ and replacing such code with this:
std::string str = ...;
// do stuff with 'str'
... which eliminated those buffer overruns without much effort. However, at least back then, containers like std::string and std::vector were heap (free store)-allocated structures, and we found ourselves trading efficiency for correctness/safety. Some of these replaced areas were performance-critical (called in tight loops), and while we eliminated a lot of bug reports with these mass replacements, the users started noticing the slowdowns.
So then we wanted something which was like a hybrid between these two techniques. We wanted to be able to slap something in there to achieve safety over the C-style fixed-buffer variants (which were perfectly fine and very efficient for common-case scenarios), but still work for the rare-case scenarios where the buffer wasn't big enough for user inputs. I was one of the performance geeks on the team and one of the few using a profiler (I unfortunately worked with a lot of people who thought they were too smart to use one), so I got called into the task.
My first naive attempt was something like this (vastly simplified: the actual one used placement new and so forth and was a fully standard-compliant sequence). It involves using a fixed-size buffer (size specified at compile-time) for the common case and a dynamically-allocated one if the size exceeded that capacity.
template <class T, int N>
class SmallVector
{
public:
    ...
    T& operator[](int n)
    {
        return num < N ? buf[n] : ptr[n];
    }
    ...
private:
    T buf[N];   // fixed-size inline buffer for the common case
    T* ptr;     // dynamically-allocated buffer used once the size exceeds N
    int num;    // current number of elements
};
This attempt was an utter failure. While it didn't pay the price of the heap/free store to construct, the branching in operator[] made it even worse than std::string and std::vector<char>, and it was showing up as a profiling hotspot instead of malloc (our vendor implementation of std::allocator and operator new used malloc under the hood). So I quickly got the idea to simply point ptr at buf in the constructor. Now ptr points to buf even in the common-case scenario, and operator[] can be implemented like this:
T& operator[](int n)
{
    return ptr[n];
}
... and with that simple branch elimination, our hotspots went away. We now had a general-purpose, standard-compliant container we could use that was just about as fast as the former C-style, fixed-buffer solution (only difference being one additional pointer and a few more instructions in the constructor), but could handle those rare-case scenarios where the size needed to be larger than N. Now we use this even more than std::vector (but only because our use cases favor a bunch of teeny, temporary, contiguous, random-access containers). And making it fast came down to just eliminating a branch in operator[].
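For completeness, the constructor trick described above might look roughly like this (a simplified sketch; the real container also handled growth, placement new, destruction and so on):

template <class T, int N>
class SmallVector
{
public:
    SmallVector() : ptr(buf), num(0) {}   // ptr starts out pointing at the inline buffer

    T& operator[](int n)
    {
        return ptr[n];                    // no branch: ptr is always valid
    }
    // ... push_back would switch ptr to a heap allocation once num exceeds N ...
private:
    T buf[N];
    T* ptr;
    int num;
};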
Common Case/Rare Case Skewing
One of the things learned after profiling and optimizing for years is that there's no such thing as "absolutely-fast-everywhere" code. A lot of the act of optimization is trading an inefficiency there for greater efficiency here. Users might perceive your code as absolutely-fast-everywhere, but that comes from smart tradeoffs where the optimizations are aligning with the common case (common case being both aligned with realistic user-end scenarios and coming from hotspots pointed out from a profiler measuring those common scenarios).
Good things tend to happen when you skew the performance towards the common case and away from the rare case. For the common case to get faster, often the rare case must get slower, yet that's a good thing.
Zero-Cost Exception-Handling
An example of common case/rare case skewing is the exception-handling technique used in a lot of modern compilers. They apply zero-cost EH, which isn't really "zero-cost" all across the board. In the case that an exception is thrown, they're now slower than ever before. Yet in the case where an exception isn't thrown, they're now faster than ever before and often faster in successful scenarios than code like this:
if (!try_something())
    return error;
if (!try_something_else())
    return error;
...
When we use zero-cost EH here instead and avoid checking for and propagating errors manually, things tend to go even faster in the non-exceptional cases than the style of code above. Crudely speaking, that's due to the reduced branching. Yet in exchange, something far more expensive has to happen when an exception is thrown. Nevertheless, that skew between common case and rare case tends to aid real-world scenarios. We don't care quite as much about the speed of failing to load a file (the rare case) as loading it successfully (the common case), and that's why a lot of modern C++ compilers implement "zero-cost" EH. It is again in the interest of skewing the common case and the rare case, pushing them further apart in terms of performance.
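For contrast, the exception-based counterpart of the snippet above might look roughly like this (a sketch; the throwing do_something/do_something_else functions are hypothetical stand-ins for try_something/try_something_else):

#include <stdexcept>

void do_something();        // throws on failure instead of returning a code
void do_something_else();

// Success path: no explicit checks, no error-propagation branches.
void do_work()
{
    do_something();
    do_something_else();
}

// The rare failure path is handled (expensively) somewhere up the call chain.
void caller()
{
    try
    {
        do_work();
    }
    catch (const std::exception&)
    {
        // rare, slow path
    }
}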
Virtual Dispatch and Homogeneity
Object-oriented code in which the dependencies flow towards abstractions (the stable abstractions principle, e.g.) can have the large bulk of its branching (besides loops, of course, which play well with the branch predictor) in the form of dynamic dispatch (virtual function calls or function pointer calls).
In these cases, a common temptation is to aggregate all kinds of sub-types into a polymorphic container storing a base pointer, looping through it and calling virtual methods on each element in that container. This can lead to a lot of branch mispredictions, especially if this container is being updated all the time. The pseudocode might look like this:
for each entity in world:
    entity.do_something() // virtual call
A strategy to avoid this scenario is to start sorting this polymorphic container based on its sub-types. This is a fairly old-style optimization popular in the gaming industry. I don't know how helpful it is today, but it is a high-level kind of optimization.
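One way to express that sort in C++ is to order the base pointers by dynamic type, so objects of the same concrete type get processed back to back (a sketch; Entity is a hypothetical base class and typeid ordering is just one possible sort key):

#include <algorithm>
#include <memory>
#include <typeinfo>
#include <vector>

struct Entity
{
    virtual ~Entity() = default;
    virtual void do_something() = 0;
};

void sort_by_type(std::vector<std::unique_ptr<Entity>>& world)
{
    std::sort(world.begin(), world.end(),
              [](const std::unique_ptr<Entity>& a, const std::unique_ptr<Entity>& b)
              {
                  // Group objects of the same dynamic type together so the
                  // indirect call target stays stable for long runs.
                  return typeid(*a).before(typeid(*b));
              });
}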
Another way that achieves a similar effect, and which I've found to definitely still be useful even in recent cases, is to break the polymorphic container apart into multiple containers, one for each sub-type, leading to code like this:
for each human in world.humans():
    human.do_something()
for each orc in world.orcs():
    orc.do_something()
for each creature in world.creatures():
    creature.do_something()
... naturally this hinders the maintainability of the code and reduces the extensibility. However, you don't have to do this for every single sub-type in this world. We only need to do it for the most common. For example, this imaginary video game might consist, by far, of humans and orcs. It might also have fairies, goblins, trolls, elves, gnomes, etc., but they might not be nearly as common as humans and orcs. So we only need to split the humans and orcs away from the rest. If you can afford it, you can also still have a polymorphic container that stores all of these subtypes which we can use for less performance-critical loops. This is somewhat akin to hot/cold splitting for optimizing locality of reference.
Data-Oriented Optimization
Optimizing for branch prediction and optimizing memory layouts tend to blur together. I've only rarely attempted optimizations specifically for the branch predictor, and that was only after I had exhausted everything else. Yet I've found that focusing a lot on memory and locality of reference did make my measurements show fewer branch mispredictions (often without my knowing exactly why).
Here it can help to study data-oriented design. I've found some of the most useful knowledge relating to optimization comes from studying memory optimization in the context of data-oriented design. Data-oriented design tends to emphasize fewer abstractions (if any), and bulkier, high-level interfaces that process big chunks of data. By nature such designs tend to reduce the amount of disparate branching and jumping around in code with more loopy code processing big chunks of homogeneous data.
It often helps, even if your goal is to reduce branch misprediction, to focus on consuming data more quickly. I've found some great gains before from branchless SIMD, for example, but the mindset was still in the vein of consuming data more quickly (which it did, thanks in part to some help from folks here on SO, like Harold).
TL;DR
So anyway, these are some strategies to potentially reduce branch mispredictions throughout your code from a high-level standpoint. They're devoid of the highest level of expertise in computer architecture, but I'm hoping this is an appropriate kind of helpful response given the level of the question being asked. A lot of this advice is kind of blurred with optimization in general, but I've found that optimizing for branch prediction often needs to be blurred with optimizing beyond it (memory, parallelization, vectorization, algorithmic). In any case, the safest bet is to make sure you have a profiler in your hand before you venture deep.
The Linux kernel defines likely and unlikely macros based on gcc's __builtin_expect builtin:
#define likely(x) __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)
(See here for the macro definitions in include/linux/compiler.h)
You can use them like:
if (likely(a > 42)) {
/* ... */
}
or
if (unlikely(ret_value < 0)) {
/* ... */
}
In general it's a good idea to keep hot inner loops well proportioned to the cache sizes most commonly encountered. That is, if your program handles data in lumps of, say, less than 32kbytes at a time and does a decent amount of work on it then you're making good use of the L1 cache.
In contrast if your hot inner loop chews through 100MByte of data and performs only one operation on each data item, then the CPU will spend most of the time fetching data from DRAM.
This is important because part of the reason CPUs have branch prediction in the first place is to be able to pre-fetch operands for the next instruction. The performance consequences of a branch mis-prediction can be reduced by arranging your code so that there's a good chance the next data comes from L1 cache no matter which way the branch goes. This isn't a perfect strategy, but L1 cache sizes seem to be universally stuck at 32 or 64 KB; it's almost a constant across the industry. Admittedly, coding in this way is not often straightforward, and relying on profile-driven optimisation, etc., as recommended by others, is probably the most straightforward way ahead.
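As a rough sketch of that chunking idea (the sizes are illustrative and need tuning against the actual cache):

#include <algorithm>
#include <cstddef>
#include <vector>

// Process a large buffer in chunks small enough to stay resident in L1,
// doing all the passes over one chunk before moving on to the next.
void process_in_chunks(std::vector<float>& data)
{
    constexpr std::size_t kChunk = 8 * 1024;   // 8K floats = 32 KB; tune per target
    for (std::size_t base = 0; base < data.size(); base += kChunk)
    {
        const std::size_t end = std::min(base + kChunk, data.size());
        for (std::size_t i = base; i < end; ++i)  // pass 1 over the chunk
            data[i] *= 2.0f;
        for (std::size_t i = base; i < end; ++i)  // pass 2 over the same, still-cached chunk
            data[i] += 1.0f;
    }
}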
Regardless of anything else, whether or not a problem with branch mis-prediction will occur varies according to the CPU's cache sizes, what else is running on the machine, what the main memory bandwidth / latency is, etc.
Perhaps the most common technique is to use separate paths for normal and error returns. C has no choice, but C++ has exceptions. Compilers are aware that exception branches are exceptional and therefore unexpected.
This means that exception branches are indeed slow, as they're unpredicted, but the non-error branch is made faster. On average, this is a net win.
1- Is it possible to avoid branch mispredictions using some high level programming technique (i.e. no assembly)?
Avoid? Perhaps not. Reduce? Certainly...
2- What should I keep in mind to produce branch-friendly code in a high level programming language (I'm mostly interested in C and C++)?
It is worth noting that optimisation for one machine isn't necessarily optimisation for another. With that in mind, profile-guided optimisation is reasonably good at rearranging branches, based on whichever test input you give it. This means you don't need to do any programming to perform this optimisation, and it should be relatively tailored to whichever machine you're profiling on. Obviously, the best results will be achieved when your test input and the machine you profile on roughly match common expectations... but those are also considerations for any other optimisation, branch-prediction related or otherwise.
To answer your questions, let me explain how branch prediction works.
First of all, there is a branch penalty even when the processor correctly predicts taken branches. If the processor predicts a branch as taken, then it has to know the target of the predicted branch, since execution flow will continue from that address. Assuming the branch target address is already stored in the Branch Target Buffer (BTB), it has to fetch new instructions from the address found in the BTB. So you are still wasting a few clock cycles even if the branch is correctly predicted.
Since the BTB is organized as an associative cache, the target address might not be present, in which case more clock cycles are wasted.
On the other hand, if the CPU predicts a branch as not taken and the prediction is correct, there is no penalty, since the CPU already knows where the subsequent instructions are.
As explained above, predicted not-taken branches have higher throughput than predicted taken branches.
Is it possible to avoid branch misprediction using some high level programming technique (i.e. no assembly)?
Yes, it is possible. You can avoid them by organizing your code so that all branches follow a repetitive pattern, such as always taken or always not taken.
But if you want higher throughput, you should organize your branches so that they are most likely to be not taken, as explained above.
What should I keep in mind to produce branch-friendly code in a high level programming language (I'm mostly interested in C and C++)?
Eliminate branches where possible. When that isn't possible and you are writing if-else or switch statements, check the most common cases first so that the branches are most likely to be not taken. You can also use __builtin_expect(condition, 1) to tell the compiler the condition is expected to be true, so it can lay out the code with the likely path as the fall-through (not-taken) path.
Branchless isn't always better, even if both sides of the branch are trivial. When branch prediction works, it's faster than a loop-carried data dependency.
See gcc optimization flag -O3 makes code slower than -O2 for a case where gcc -O3 transforms an if() to branchless code in a case where it's very predictable, making it slower.
Sometimes you are confident that a condition is unpredictable (e.g. in a sort algorithm or binary search). Or you care more about the worst-case not being 10x slower than about the fast-case being 1.5x faster.
Some idioms are more likely to compile to a branchless form (like a cmov x86 conditional move instruction).
x = x>limit ? limit : x; // likely to compile branchless
if (x>limit) x=limit; // less likely to compile branchless, but still can
The first way always writes to x, while the second way doesn't modify x in one of the branches. This seems to be the reason that some compilers tend to emit a branch instead of a cmov for the if version. This applies even when x is a local int variable that's already live in a register, so "writing" it doesn't involve a store to memory, just changing the value in a register.
Compilers can still do whatever they want, but I've found this difference in idiom can make a difference. Depending on what you're testing, it's occasionally better to help the compiler mask and AND rather than doing a plain old cmov. I did it in that answer because I knew that the compiler would have what it needed to generate the mask with a single instruction (and from seeing how clang did it).
TODO: examples on http://gcc.godbolt.org/

Setting or consulting boolean. Which has the best performance?

Just out of curiosity, which operation is faster: setting a boolean's value (e.g. changing it from true to false) or simply checking its value (e.g. if(boolean)...)?
The problem I have with "which is faster" questions is that they are too underspecified to actually be answered conclusively, and at the same time too broad to yield useful conclusions even if answered conclusively. The only productive avenue your curiosity can take is to build a mental model of the machine and run both cases through it.
foo = true stores the value true to the location allocated to foo. That raises the question: Where and how is foo stored? This is impossible to answer without actually running the complete source code through the compiler with the right settings. It could be anywhere in RAM, or it could be in a register, or it could use no storage at all, being completely eliminated by compiler optimizations. Depending on where foo resides, the cost of overwriting it can vary: Hundreds of CPU cycles (if in RAM and not in cache), a couple cycles (in RAM and cache), one cycle (register), zero cycles (not stored).
if (foo) generally means reading foo and then performing a conditional branch based on it. Regarding the aspects I'll discuss here (I have to omit many details and some major categories), reading is effectively like writing. The conditional branch that follows is even harder to pin down, as its cost depends on the run-time behavior of the program. If the branch is always taken, branch prediction will make it virtually free (a few cycles). If it's unpredictable, it may take tens of cycles and blow the pipeline (further reducing throughput). However, it's also possible that the conditional code will be predicated, invalidating most of the above concerns and replacing them with reasoning about instruction latency, data dependencies, and the gory details of the pipeline.
As you can see from the sheer volume written about it (despite omitting many details and even some important secondary effects), it's virtually impossible to really answer this in any generality. You need to look at concrete, complete programs to make any sort of prediction, and you need to know your whole system from top to bottom. Note that I had to assume a very specific kind of machine to even get this far: for a GPGPU or an embedded system or an early-90's consumer CPU, I'd have to rewrite almost all of that.
I think this is so dependent on the context:
CPU/architecture
whether the boolean is stored in memory, cache, or a register
whether what's behind the if induces a jump or only a conditional move
whether the value of the boolean is predictable or not
that the question ultimately doesn't make sense.

High-performance code in c++ (inheritance, pointers to functions, if)

Suppose you have a very large graph with lots of processing upon its nodes (like tens of millions of operations per node). The core routine is the same for each node, but there are some additional operations based on internal conditions. There can be 2 such conditions, which produce 4 cases: (0,0), (1,0), (0,1), (1,1). E.g. (1,1) means that both conditions hold. Conditions are established once (one set for each node independently) in a program run and, from then on, never change. Unfortunately, they are determined at runtime and in a fully unpredictable way (based on data received via HTTP from an external server).
What is the fastest in such a scenario (taking into account modern compiler optimizations, which I know little about)?
simply using "IFs": if (condition X) perform additional operation X.
using inheritance to derive four classes from a base class (exposing method OPERATION) to get the proper operation and save tens of millions of "ifs" (but I am not sure if this is a real saving; inheritance must have its overhead too).
using a pointer to function to assign the function based on the conditions once.
It would take me long to reach the point where I can test this myself (I don't have such big data yet, and this will be integrated into a bigger project, so it would not be easy to test all versions).
Reading the answers: I know that I probably have to experiment with it. But apart from everything, this is sort of a question of what is faster:
tens of millions of IF statements with normal, statically known function calls, vs. function pointer calls, vs. inheritance (which I think is not the best idea in this case and which I am thinking of eliminating from further inspection). Thanks for any constructive answers (not saying that I shouldn't care about such minor things ;)
There is no real answer except to measure the actual code on the real data. At times, in the past, I've had to deal with such problems, and in the cases I've actually measured, virtual functions were faster than if's. But that doesn't mean much, since the cases I measured were in a different program (and thus a different context) than yours. For example, a virtual function call will generally prevent inlining, whereas an if is inline by nature, and inlining may open up additional optimization possibilities.

Also, the machines I measured on handled virtual functions pretty well; I've heard that some other machines (HP's PA, for example) are very ineffective in their implementation of indirect jumps (including not only virtual function calls, but also the return from the function---again, the lost opportunity to inline costs).
If you absolutely have to have the fastest way, and the processing order of the nodes is not relevant, make four different types, one for each case, and define a process function for each. Then, in a container class, have four vectors, one for each node type. Upon creation of a new node, get all the data you need to create it, including the conditions, and create a node of the correct type and push it into the correct vector. Then, when you need to process all your nodes, process them type by type: first all the nodes in the first vector, then the second, and so on.
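A stripped-down sketch of that layout (hypothetical names; the four node types correspond to the four condition combinations):

#include <vector>

struct Node00 { /* data */ void process() { /* core routine only          */ } };
struct Node10 { /* data */ void process() { /* core routine + extra op X  */ } };
struct Node01 { /* data */ void process() { /* core routine + extra op Y  */ } };
struct Node11 { /* data */ void process() { /* core routine + both extras */ } };

struct NodeContainer
{
    std::vector<Node00> n00;
    std::vector<Node10> n10;
    std::vector<Node01> n01;
    std::vector<Node11> n11;

    void process_all()
    {
        // Each loop is branch-free with respect to the conditions and walks
        // contiguous memory, one node type at a time.
        for (auto& n : n00) n.process();
        for (auto& n : n10) n.process();
        for (auto& n : n01) n.process();
        for (auto& n : n11) n.process();
    }
};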
Why would you want to do this:
No ifs for the state switching
No vtables
No function indirection
But much more importantly:
No instruction cache thrashing (you're not jumping to a different part of your code for every next node)
No branch prediction misses for state switching ifs (since there are none)
Even if you had inheritance with virtual functions, and thus function indirection through vtables, simply sorting the nodes by type in your vector may already make a world of difference in performance: any possible instruction cache thrashing would essentially be gone, and depending on the branch prediction scheme, the branch prediction misses could also be reduced.
Also, don't make a vector of pointers, but a vector of objects. If they are pointers you have an extra addressing indirection, which in itself is not that worrisome, but the problem is that it may lead to data cache thrashing if the objects are spread pretty much randomly throughout your memory. If, on the other hand, your objects are placed directly in the vector, the processing will basically go through memory linearly, the cache will basically hit every time, and cache prefetching might actually be able to do a good job.
Note though that you will pay heavily in data structure creation if you don't do it correctly: if at all possible, reserve enough capacity in each vector for all your nodes up front, because reallocating and moving every time a vector runs out of space can become expensive.
Oh, and yes, as James mentioned, always, always measure! What you think may be the fastest way may not be, sometimes things are very counter intuitive, depending on all kinds of factors like optimizations, pipelining, branch prediction, cache hits/misses, data structure layout, etc. What I wrote above is a pretty general approach, but it is not guaranteed to be the fastest and there are definitely ways to do it wrong. Measure, Measure, Measure.
P.S. Inheritance with virtual functions is roughly equivalent to using function pointers. Virtual functions are usually implemented via a vtable pointer at the head of the object, which points to a table of function pointers to the implementations of the virtuals for the actual type of the object. Whether ifs are faster than virtuals, or the other way around, is a very, very difficult question to answer and depends completely on the implementation, compiler and platform used.
I'm actually quite impressed with how effective branch prediction can be, and only the if solution allows inlining, which can also be dramatic. Virtual functions and pointers to functions also involve a load from memory and could possibly cause cache misses.
But you have four cases, so branch misses can be expensive.
Without the ability to test and verify, the question really can't be answered, especially since it's not even clear that this would be a big enough performance bottleneck to warrant the optimization effort.
In cases like this, I would err on the side of readability and ease of debugging, and go with ifs.
Many programmers have taken classes and read books that go on about certain favorite subjects: pipelining, caching, branch prediction, virtual functions, compiler optimizations, big-O algorithms, etc., and the performance of those.
If I could make an analogy to boating, these are things like trimming weight, tuning power, adjusting balance and streamlining, assuming you are starting from some speedboat that's already close to optimal.
Never mind you may actually be starting from the Queen Mary, and you're assuming it's a speedboat.
It may very well be that there are ways to speed up the code by large factors, just by cutting away fat (masquerading as good design), if only you knew where it was.
Well, you don't know where it is, and guessing where is a waste of time, unless you value being wrong.
When people say "measure and use a profiler" they are pointing in the right direction, but not far enough.
Here's an example of how I do it, and I made a crude video of it, FWIW.
Unless there's a clear pattern to these attributes, no branch predictor in existence can effectively predict this data-dependent condition for you. Under such circumstances, you may be better off avoiding control speculation (and paying the penalty of a branch misprediction), and just waiting for the actual data to arrive and resolve the control flow (which is more likely to happen with virtual functions). You'll of course have to benchmark to verify that, as it depends on the actual pattern (e.g. whether you have even small groups of similarly "tagged" elements).
The sorting suggested above is nice and all, but note that it converts a problem that's just plain O(n) into an O(n log n) one, so for large sizes you'll lose unless you can sort once and traverse many times, or otherwise cheaply maintain the sorted state.
Note that some predictors may also attempt to predict the address of the function call, so you might be facing the same problem there.
However, I must agree with the comments regarding early optimization - do you know for sure that the control flow is your bottleneck? What if fetching the actual data from memory takes longer? In general, it would seem that your elements can be processed in parallel, so even if you run this on a single thread (and much more so if you use multiple cores), you should be bandwidth-bound and not latency-bound.

Using virtual functions instead of IF statements is faster?

I remember reading online somewhere that in EXTREMELY low-latency situations it's better to use virtual functions as a substitute for IF statements.
Is this true? Are they basically saying dynamic polymorphism is better for speed situations?
Do any users have any other C++ low latency "quirks" they could share?
I very much doubt that a single if/else statement would be slower than using a virtual function: the virtual function typically enforces a pipeline stall and limits the optimization opportunities. An if statement may stall the pipeline, but if it is often executed the prediction may go the right way. However, if your alternative is a cascade of several if/else statements vs. just one virtual function call, then the latter may be faster. Also, if the total code executed through virtual functions ends up substantially smaller, it may cause fewer misses in the instruction cache. That is, it depends on the situation; the best way is to measure. Note that measuring artificial code which merely tries to investigate the difference between the two approaches, but doesn't really do any processing, yields misleading results. However, when you need to produce very low-latency code you typically can spend more time coming up with it, i.e. experimenting with multiple different approaches may be viable.
Although my colleagues tend to frown upon my template approaches for avoiding run-time branching, the code I end up with is often very slow to compile but very fast to run. Of course, this depends on the functions or branches being known at compile time. In the areas where I have used this, e.g. message processing, it is often sufficient to have one dynamic decision, e.g. one per message (i.e. one virtual function call), followed by processing which doesn't involve any dynamic types (there are still conditionals, e.g. for the number of values in a table).
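As a rough illustration of that style (a sketch with hypothetical names: one dynamic decision per message up front, then templated processing whose condition branches fold away at compile time):

#include <cstddef>
#include <cstdint>

// The "either/or" decisions are template parameters, so the inner loop
// contains no run-time branches for them; the compiler folds them away.
template <bool Validate, bool Compress>
std::uint32_t process_message(const unsigned char* data, std::size_t len)
{
    std::uint32_t sum = 0;
    for (std::size_t i = 0; i < len; ++i)
    {
        if (Validate) { /* compile-time constant: validation work, or nothing */ }
        if (Compress) { /* likewise */ }
        sum += data[i];            // stand-in for the real per-byte work
    }
    return sum;
}

// One dynamic decision per message selects the fully specialized routine.
std::uint32_t dispatch(int kind, const unsigned char* data, std::size_t len)
{
    switch (kind)
    {
        case 0:  return process_message<false, false>(data, len);
        case 1:  return process_message<true,  false>(data, len);
        case 2:  return process_message<false, true >(data, len);
        default: return process_message<true,  true >(data, len);
    }
}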

Speed comparison - Template specialization vs. Virtual Function vs. If-Statement

Just to get it out of the way...
Premature optimization is the root of all evil
Make use of OOP
etc.
I understand. Just looking for some advice regarding the speed of certain operations that I can store in my grey matter for future reference.
Say you have an Animation class. An animation can be looped (plays over and over) or not looped (plays once), it may have unique frame times or not, etc. Let's say there are 3 of these "either or" attributes. Note that any method of the Animation class will at most check for one of these (i.e. this isn't a case of a giant branch of if-elseif).
Here are some options.
1) Give it boolean members for the attributes given above, and use an if statement to check against them when playing the animation to perform the appropriate action.
Problem: Conditional checked every single time the animation is played.
2) Make a base animation class, and derive other animations classes such as LoopedAnimation and AnimationUniqueFrames, etc.
Problem: Vtable check upon every call to play the animation given that you have something like a vector<Animation>. Also, making a separate class for all of the possible combinations seems code bloaty.
3) Use template specialization, and specialize those functions that depend on those attributes. Like template<bool looped, bool uniqueFrameTimes> class Animation.
Problem: The problem with this is that you couldn't just have a vector<Animation> for something's animations. Could also be bloaty.
I'm wondering what kind of speed each of these options offer? I'm particularly interested in the 1st and 2nd option because the 3rd doesn't allow one to iterate through a general container of Animations.
In short, what is faster - a vtable fetch or a conditional?
(1) Not that the size of the generated assembly matters anymore these days, but this is what it generates (approximately, assuming MSVC on x86):
mov eax, [ecx+12] ; 'this' pointer stored in ecx, eax is scratch
cmp eax, 0 ; test for 0
jz .somewhereElse ; jump if the bool isn't set
The optimizing compiler will intersperse other instructions there, making it more pipeline-friendly. The contents of your class will most likely be in your cache anyway, and if it's not, it will be needed a few cycles later anyway. So, in retrospect, that's maybe a few cycles, and for something that will be called at most a few times per frame, that's nothing.
(2) This is approximately the assembly that will be generated every time your play() method is called:
mov eax, [ebp+4] ; pointer to your Animation* somewhere on the stack, eax is scratch
mov eax, [eax+12] ; dereference the vtable
call eax ; call it
Then, you'll have some duplicate code, or another function call, inside your specialized play() function, since there'll definitely be some common stuff, so that incurs some overhead (in code size and/or execution speed). So, this is definitely slower.
Also, this makes it a lot harder to load generic animations. Your graphics department won't be happy.
(3) To use this effectively, you'll end up making a base class for your templated version anyway, with virtual functions (in that case, see (2)), OR you'll do it manually by checking types in places where you call your animation thing, in which case also see (2).
This also makes it MUCH harder to load generic animations. Your graphics department will be even less happy.
(4) What you need to worry about is not some microoptimization for tiny things done at most a few times a frame. From reading your post, i actually identified another problem that's commonly overlooked. You're mentioning std::vector<Animation>. Nothing against the STL, but that's bad voodoo. A single memory allocation will cost you more cycles than all the boolean checks in your play() or update() methods for probably the entire time your application is running. Putting Animations in and out of std::vectors (especially if you're putting in instances and not pointers (smart or dumb) to instances) will cost you way more.
You need to look at different places to optimize. This is such a ridiculous microoptimization that will bring you no benefit except make it harder to generalize and make your graphics department happy. What will matter, however, is worrying about memory allocation, and THEN, when you're done programming that part, starting a profiler and looking where the hot spots are.
If keeping your animations is actually becoming a bottleneck, the std::vector (nice as it is) is where you might want to look. Have you looked at, say, an intrusive linked list? That will actually be more benefit than worrying about this.
(Edited for brevity.)
The compiler, CPU, and OS all can change the answer, here:
CPU: instruction/data cache size, architecture, and behavior, especially any intelligent prefetch
CPU: branch prediction and speculative execution behavior
CPU: the penalty for a mispredicted branch
compiler and CPU: the availability and relative cost of conditionally-executed instructions (helps with branch cases that only cover a few instructions)
compiler or linker: optimizations that may transform your code and remove branches
In short, as Blindy said in the comments: test it. =)
If you're writing for a modern desktop OS or OSes, enlist the help of a profiling tool (valgrind, shark, codeanalyst, vtune, etc) -- it may give you details you never even knew you could look for, such as cache misses, branch mispredicts, etc.
Even if you don't find a great answer, you'll learn something from applying the tool. I often find looking at the disassembly quite instructive, too (see some of the other answers in this thread).
Some slightly more speculative notes:
vtable tends to result in a load (this+0), offset, second load, and then branch on the contents of the register. You can see this in some of the other answers. Most CPUs that I'm familiar with are miserable at predicting branches from registers.
the bool may be near other data you're using and as such may already be cached. The branch target is also likely to be fixed and therefore a lot more friendly for prediction and/or speculative execution.
on some processors (rarer these days), it costs more to load a bool than an int.
on an ARM processor I work with, we occasionally tuck the vtables in "tightly coupled memory" on the processor core. Decreases the indirect load time considerably -- it's as if the vtable is always in-cache or better.
As you mentioned, the usual rule applies: do what fits requirements and is flexible/maintainable/readable first, then optimize.
Further reading / other patterns to pursue:
FastDelegate, which makes component based systems much easier to deal with
Pitfalls of Object-Oriented Programming slides, which discuss how to get more out of CPU and CPU caches in general
Both the "Data Oriented Design" and the "Component-Based Entity" paradigms are useful to keep in your brain for games, multimedia engines, and other things where you have a greater-than-average demand for performance and still want to keep your code somewhat organized. YMMV, of course. =)
Vtable dispatch is very, very fast. So are simple conditionals. Both translate to a handful of CPU instructions. Worrying about performance at this level gets you into the murky waters of compiler optimisations, where you can't easily tell what the compiler is doing. Chances are that very subtle changes in your program will swamp the minute differences between an if statement and a vtable.
I did a little test a while ago comparing RTTI-based multiple dispatch and vtable dispatch. In release mode, a dispatch between three objects (two vtable calls) done over two million iterations took 62 milliseconds. That is simply not worth worrying about.
Who says #3 makes it impossible to have a generic container of animations? There are several approaches one can use. They do all boil down to eventually making a polymorphic call but the options are there. Consider this:
std::vector<boost::any> generic_container;
function(generic_container[0]);

void function(boost::any & a)
{
    my_operation::execute(a.type().name(), a);
}
my_operation just needs to have a way of registering and filtering operations by type name. It searches for a functor that operates on whatever a represents, and uses it. The functor then any_casts to the appropriate type and does the type-specific operation.
Or use a visitor framework. The above is sort of a variation of that but at too generic a level to really qualify.
And there are more possible methods. Instead of storing animations you could store a type that hides the specifics and executes the correct view options when activated. One virtual is called but it is specific to switching out concrete types that do more complex operations on each other.
There is no general answer to your question in other words. Depending on what you need you could reach all kinds of levels of complexity to make almost your entire program compile time polymorphic as opposed to run-time.