Let's say I have a very simple function called foo. foo can return two values; I'll use x and y as arbitrary placeholder variables.
I define it like so:
int foo(bool expression)
{
    static const int x = ..., y = ...;
    if (expression)
        return x;
    else
        return y;
}
This is obviously a branching statement.
I was thinking that doing something like the following could remove any branching:
int foo(bool expression)
{
    static const int array[] = {y, x};   // index 0 is the false case, index 1 the true case
    return array[expression];
}
Yet I'm not sure whether using a C array like this still incurs branching. Does it? Do C++ std::array or std::vector cause branching?
Is it worth it to attempt to read from the array, or is it a waste of memory and execution speed?
And lastly, if the expression contains a logical operator such as &&, does this mean it will still branch?
As long as the condition relies on a boolean value to decide what to do next, it's definitely branching. It's reasonable to say that the code needs to wait on that condition and branch to decide which element of the array to access and return.
By the same reasoning, && or any other short-circuiting logical operator implies branching.
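For example (a minimal sketch; p, check, and use are hypothetical names), && must short-circuit, so its right-hand side can only run once the left-hand side is known:

// check(*p) must not execute when p is null, so the compiler
// generally needs a conditional jump (or equivalent) here.
if (p != nullptr && check(*p))
    use(*p);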
The question is complicated by the fact that in one case you are showing a bool expression, in the other case you are showing an int condition.
If your expression naturally evaluates to an int, then using this int to pick up an item from the array will not involve any branching. If the most natural type that your expression evaluates to is bool, then you will need to convert it to an int, and this conversion is likely to internally involve branching, so you are probably not going to gain anything. I am saying "probably" because a lot depends on the compiler and on the underlying CPU instruction set, so you will not know unless you have your compiler produce disassembly and examine the disassembly.
That having been said, I would add that your quest to eliminate a branch is rather an exercise in futility. There is nothing inherently evil about branching, nor does it perform badly. True, it is best to eliminate branches, but only if it is trivial to do so. If, in order to eliminate branching, you introduce an array that you otherwise wouldn't have, then you are probably adding an order of magnitude more overhead than you are saving. If you introduce a vector instead of an array, you may be introducing twice the overhead of the array. So, my recommendation would be: do not worry about the branching.
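If you do want to know for your particular compiler and target, a minimal sketch is to put both versions in one file and inspect the generated assembly (for example with g++ -O2 -S, or an online compiler explorer); the constants 1 and 2 stand in for the elided x and y:

// branch.cpp -- both versions side by side for comparison.
int foo_branch(bool expression)
{
    static const int x = 1, y = 2;       // concrete stand-ins for the test
    if (expression)
        return x;
    return y;
}

int foo_array(bool expression)
{
    static const int array[] = {2, 1};   // {y, x}: index 0 is the false case
    return array[expression];
}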
The following code doesn't compile
#include <vector>

int main()
{
    std::vector<bool> enable(10);
    enable[0] |= true;
    return 0;
}
giving the error
no match for ‘operator|=’ (operand types are ‘std::vector<bool>::reference {aka std::_Bit_reference}’ and ‘bool’)
In my real life code I have a bit field with values I want to |= with the result of a function.
There are easy ways to express the same idea, but is there any good reason for such an operator not to be available?
The main reason would be that std::vector<bool> is special, and its specification specifically permits an implementation to minimise memory usage.
For vectors of anything other than bool, the reference type can actually be a true reference (i.e. std::vector<int>::reference can actually be an int &) - usually directly referencing an element of the vector itself. So it makes sense for the reference type to support all operations that the underlying type can. This works because vector<int> effectively manages a contiguous array of int internally. The same goes for all types other than bool.
However, to minimise memory usage, a std::vector<bool> may not (in fact probably will not) work internally with an actual array of bool. Instead it might use some packed data structure, such as an array of unsigned char internally, where each unsigned char is a bitfield containing 8 bits. So a vector<bool> of length 800 would actually manage an array of 100 unsigned char, and the memory it consumes would be 100 bytes (assuming no over-allocation). If the vector<bool> actually contained an array of 800 bool, its memory usage would be a minimum of 800 bytes (since sizeof(bool) must be at least 1, by definition).
To permit such memory optimisation by implementers of vector<bool>, the return type of vector<bool>::operator[] (i.e. std::vector<bool>::reference) cannot simply be a bool &. Internally, it would probably contain a reference to the underlying type (e.g. an unsigned char) and information to track which bit it actually affects. This would make all op= operators (+=, -=, |=, etc.) somewhat expensive operations (e.g. bit fiddling) on the underlying type.
The designers of std::vector<bool> would then have faced a choice between:
1. Specify that std::vector<bool>::reference supports all the op= operators, and hear continual complaints about runtime inefficiency from programmers who use those operators.
2. Don't support those op= operators, and field complaints from programmers who think such things are okay ("cleaner code", etc.) even though they would be inefficient.
It appears the designers of std::vector<bool> opted for option 2. A consequence is that the only assignment operators supported by std::vector<bool>::reference are the stock standard operator=() (with operands either of type reference, or of type bool) not any of the op=. The advantage of this choice is that programmers get a compilation error if trying to do something which is actually a poor choice in practice.
After all, although bool supports all the op=, using them doesn't achieve much anyway. For example, some_bool |= true has the same net effect as some_bool = true.
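To make the cost concrete, here is a rough, hypothetical sketch of what such a proxy reference might look like internally (BitRef and its members are invented for illustration; the real std::vector<bool>::reference is implementation-defined):

// Hypothetical proxy: refers to one bit inside a byte of packed storage.
struct BitRef
{
    unsigned char* byte;   // the storage unit containing the bit
    unsigned char  mask;   // single bit set at the referenced position

    operator bool() const { return (*byte & mask) != 0; }

    BitRef& operator=(bool b)      // plain assignment: masked read-modify-write
    {
        if (b) *byte |= mask;
        else   *byte &= static_cast<unsigned char>(~mask);
        return *this;
    }

    BitRef& operator|=(bool b)     // a compound op= would be more of the same bit fiddling
    {
        if (b) *byte |= mask;
        return *this;
    }
};

Every assignment, compound or not, turns into a masked read-modify-write on the containing byte, which is the kind of hidden cost the second option declines to paper over.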
Why don't you just do the following?
enable[0] = enable[0] | true;
You should be able to make one yourself pretty easily. Something like:
// Take the proxy by value: operator[] on a vector<bool> returns a temporary
// proxy object, which cannot bind to a non-const lvalue reference.
std::vector<bool>::reference operator|=(std::vector<bool>::reference a, bool b)
{
    if (b)
        a = true;
    return a;
}
Alternatively, std::bitset is a good fit.
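A minimal sketch of the std::bitset route for the example above (note that std::bitset's per-bit proxy doesn't provide |= either, but set() and the whole-set operator|= are available):

#include <bitset>

int main()
{
    std::bitset<10> enable;             // all bits start out false
    enable.set(0, enable[0] | true);    // per-bit update via set()
    enable |= std::bitset<10>{1};       // or OR in a whole mask (here with bit 0 set)
    return 0;
}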
Short and sweet answer: std::vector<bool> should be avoided; use std::vector<wchar_t> instead. With vector<bool> you get back a container in which the bools are packed into bits, which gives it different behaviour from other vectors and slower code, and no-one cares about the memory anyway. I guess by now nobody likes this design any more, but turning back the clock would break too much code...
Say I have a simple function that does something like this:
template<typename T>
T get_half(T a) {
    return 0.5 * a;
}
this function will typically be evaluated with T being double or float.
The standard specifies that the literal 0.5 has type double (you would write 0.5f for a float).
How can I write the above code so that 0.5 is always of type T, so that there is no conversion when evaluating either the product or the return?
What I want is 0.5 to be a constant of type T at compile time. The point of this question is that I want to avoid conversion at run time.
For example, if I write:
template<typename T>
T get_half(T a) {
    return T(0.5) * a;
}
Can I be absolutely sure that T(0.5) is evaluated at compile time?
If not, what would be the proper approach to accomplish this? I'm OK with using C++11 if that is needed.
Thank you in advance.
In C++11 I have a numeric_traits class, something like the following (within a header file):
template<typename Scalar>
struct numeric_traits {
    static constexpr Scalar one_half = 0.5;
    // Many other useful constants ...
};
so within my code I would use this as:
template<typename T>
T get_half(T a) {
    return numeric_traits<T>::one_half * a;
}
This does what I want, i.e. 0.5 is resolved at compile time with the precision I need and no conversions happen at run time. However, the downsides are:
I need to modify numeric_traits every time I need a new constant
The syntax is rather verbose and annoying (not a big issue really, of course).
It'd be nice to have something like constant(0.5) which resolves to type T at compile time.
Thank you in advance again.
There isn't and cannot be any way of forcing constants to never be computed at run-time, because some machines simply don't have a single instruction that can load all possible values of a type. For instance, machines may only have a 16-bit load constant instruction, where 0x12345678 would need to be computed, at run-time, as 0x1234 << 16 | 0x5678. Alternatively, such a constant might be loaded from memory, but that could be an even more costly operation than computing it.
You need to trust your compiler a little bit. On systems where it is feasible, any compiler that has any amount of optimisation at all will translate T(0.5) the same way it will translate 0.5f, assuming T is float. And 0.5f will be computed in the most sensible way for your platform. That might involve loading it as a constant, or that might involve computing it. Or who knows, your compiler might change T(0.5)*a to a/2 if that gives the same results.
In your question you give an example of adding a numeric_traits helper class. This, IMO, is overkill. In the extremely unlikely case that constexpr makes a difference, you can just write
template <typename T>
T get_half(T a) {
    constexpr T half = 0.5;
    return half * a;
}
However, this still does more harm than good, in my opinion: your get_half can now no longer be used with non-literal types. It requires the type to support conversions from double in constant expressions. Suppose you have an arbitrary-precision rational type, written without constexpr in mind. Now your get_half can not be used, because the initialisation constexpr T half = 0.5; is invalid, even if 0.5 * a might otherwise have compiled.
This is the case even with your numeric_traits helper class; it's not invalid just because I moved it into the function body.
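To illustrate, here is a rough sketch (Rational is a hypothetical, deliberately non-constexpr type invented for this example):

// Hypothetical non-literal type: its constructor is not constexpr.
struct Rational {
    long long num, den;
    Rational(double d) : num(static_cast<long long>(d * 1000000)), den(1000000) {}
    Rational operator*(const Rational& rhs) const {
        return Rational(static_cast<double>(num) / den *
                        (static_cast<double>(rhs.num) / rhs.den));
    }
};

template <typename T>
T get_half_plain(T a) { return T(0.5) * a; }   // fine: ordinary run-time conversion

template <typename T>
T get_half_constexpr(T a) {
    constexpr T half = 0.5;                    // ill-formed for T = Rational:
    return half * a;                           // Rational is not a literal type
}

int main() {
    Rational r(3.0);
    get_half_plain(r);            // compiles
    // get_half_constexpr(r);     // would fail to compile
    return 0;
}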
Say I have a class C that I want to be able to implicitly cast to bool to use in if statements.
class C {
public:
    ...
    operator bool() { return data ? true : false; }
private:
    void * data;
};
and
C c;
...
if (c) ...
But the cast operator has a conditional, which is technically overhead (even if relatively insignificant). If data were public I could do if (c.data) instead, which is entirely possible and does not involve any conditionals. I doubt that the compiler will do any implicit conversion involving a conditional in the latter scenario, since it will likely generate a "jump if zero" or "jump if not zero", which doesn't really need any Boolean value, a concept the CPU most likely has no notion of anyway.
My question is whether the typecast operator overload will indeed be less efficient than directly using the data member.
Note that I did establish that if the typecast directly returns data it also works, probably using the same type of implicit (hypothetical and not really happening in practice) conversion that would be used in the case of if (c.data).
Edit: Just to clarify, the point of the matter is actually a bit hypothetical. The dilemma is that Boolean is itself a hypothetical construct (which didn't initially exist in C/C++), in reality it is just integers. As I mentioned, the typecast can directly return data or use != instead, but it is really not very readable, but even that is not the issue. I don't really know how to word it to make sense of it better, the C class has a void * that is an integer, the CPU has conditional jumps which use integers, the issue is that abiding to the hypothetical Boolean construct that sits in the middle mandates the extra conditional. Dunno if that "clarification" made things any more clear though...
My question is whether the typecast operator overload will indeed be less efficient than directly using the data member.
Only examining your compiler output - with the specific optimisation flags you'd like to use - can tell you for sure, and then it might change after some seemingly irrelevant change like adding an extra variable somewhere in the calling context, or perhaps with the next compiler release etc....
More generally, C++ wouldn't be renowned for speed if the optimisers didn't tend to handle this kind of situation perfectly, so your odds are very good.
Further, write working code then profile it and you'll learn a lot more about what performance problems are actually significant.
It depends on how smart your compiler's optimizer is. I think they should be smart enough to remove the useless ? true: false operation, because the typecast operation should be inlined.
Or you could just write this and not worry about it:
operator bool() { return data; }
Since there's a built-in implicit typecast from void* to bool, data gets typecast on the way out the function.
I don't remember if the conditional in if expects bool or void*; at one point, before C++ added bool, it was the latter. (The iostream classes provided an operator void* conversion for exactly this back then.)
On modern compilers these two functions produce the same machine code:
bool toBool1(void* ptr) {
    return ptr ? true : false;
}

bool toBool2(void* ptr) {
    return ptr;
}
So it really doesn't matter.
I have a class that exposes an enum. I am trying to check the validity of the values in the setter function, like so:
enum abc
{
    X,
    Y
};
int my_class::set_abc(abc value)
{
    if (static_cast<int>(value) > static_cast<int>(Y))
        return -1;
    ...
}
There is a similar check for value being less than X.
I see that the compiler removes the check completely. I have Googled for the reason and come across many pages explaining the rules for integer conversions in C++, but I couldn't find any clarification about converting enums to ints, or about checking validity.
What is the correct way to accomplish this?
It seems arbitrary to test against Y, so I would add some limits. This also allows you to add more elements between min and max, and not be concerned with the ordering.
enum abc
{
    ABC_MIN = 0,
    X,
    Y,
    ABC_MAX
};
int my_class::set_abc(abc value)
{
    assert(value > ABC_MIN && value < ABC_MAX);   // needs <cassert>
    ...
}
Since 0 and 1 are the only valid values of the type abc, whoever passes in a value larger or smaller than that has already invoked undefined behavior in order to create it.
You can't easily write code in C++ to detect conditions that have previously caused UB -- as you observe, the compiler has a tendency to optimize based on what is permitted or forbidden by the language.
You could write an int overload of the function that checks the value and then converts to the enum type, and not bother checking in the abc overload since it's someone else's problem to avoid invoking UB.
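A minimal sketch of that int overload, reusing the names from the question:

// Validate while the value is still an int, then forward to the enum overload.
int my_class::set_abc(int value)
{
    if (value < static_cast<int>(X) || value > static_cast<int>(Y))
        return -1;
    return set_abc(static_cast<abc>(value));
}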
Alternatively, you could avoid your test being redundant, by putting some arbitrary additional values in the enum. Then the compiler can't remove it.
In C++, you can't directly assign integers to enum variables without an explicit cast.
If your code uses the enum type everywhere, then there's no point to checking that's valid. It should be valid from the start and should remain valid.
If, however, your code gets the value as an integer and you need to convert it to an enum (or you perhaps do some arithmetic operation on an enum value), then you should validate the value at that site.
I'm browsing through some code and I found a few ternary operators in it. This code is a library that we use, and it's supposed to be quite fast.
I'm thinking if we're saving anything except for space there.
What's your experience?
Performance
The ternary operator shouldn't differ in performance from a well-written equivalent if/else statement... they may well resolve to the same representation in the Abstract Syntax Tree, undergo the same optimisations, etc.
Things you can only do with ? :
If you're initialising a constant or reference, or working out which value to use inside a member initialisation list, then if/else statements can't be used but ? : can be:
const int x = f() ? 10 : 2;

X::X(int n) : n_(n > 0 ? 2 * n : 0) { }
Factoring for concise code
Key reasons to use ? : include localisation, and avoiding redundant repetition of other parts of the same statements/function calls, for example:
if (condition)
    return x;
else
    return y;
...is only preferable to...
return condition ? x : y;
...on readability grounds if dealing with very inexperienced programmers, or some of the terms are complicated enough that the ? : structure gets lost in the noise. In more complex cases like:
fn(condition1 ? t1 : f1, condition2 ? t2 : f2, condition3 ? t3 : f3);
An equivalent if/else:
if (condition1)
    if (condition2)
        if (condition3)
            fn(t1, t2, t3);
        else
            fn(t1, t2, f3);
    else if (condition3)
        fn(t1, f2, t3);
    else
        fn(t1, f2, f3);
else
    if (condition2)
        ...etc...
That's a lot of extra function calls that the compiler may or may not optimise away.
Further, ? : allows you to select an object, then use a member thereof:
(f() ? x : y).fn((g() ? c : d).field_name);
The equivalent if/else would be:
if (f())
    if (g())
        x.fn(c.field_name);
    else
        x.fn(d.field_name);
else
    if (g())
        y.fn(c.field_name);
    else
        y.fn(d.field_name);
Can't named temporaries improve the if/else monstrosity above?
If the expressions t1, f1, t2 etc. are too verbose to type repeatedly, creating named temporaries may help, but then:
To get performance matching ? : you may need to use std::move, except when the same temporary is passed to two && parameters in the function called: then you must avoid it. That's more complex and error-prone.
c ? x : y evaluates c then either but not both of x and y, which makes it safe to, say, test that a pointer isn't nullptr before using it, while providing some fallback value/behaviour. The code only gets the side effects of whichever of x and y is actually selected. With named temporaries, you may need if/else around their initialisation, or ? : inside it, to prevent unwanted code executing, or code executing more often than desired.
Functional difference: unifying result type
Consider:
#include <iostream>

void is(int)    { std::cout << "int\n"; }
void is(double) { std::cout << "double\n"; }

void f(bool expr)
{
    is(expr ? 1 : 2.0);

    if (expr)
        is(1);
    else
        is(2.0);
}
In the conditional operator version above, 1 undergoes a Standard Conversion to double so that its type matches that of 2.0, meaning the is(double) overload is called even for the true/1 situation. The if/else statement doesn't trigger this conversion: the true/1 branch calls is(int).
You also can't mix an expression of type void with a non-void one in a conditional operator (unless one operand is a throw-expression), whereas each branch of an if/else is free to be any statement.
Emphasis: value-selection before/after action needing values
There's a different emphasis:
An if/else statement emphasises the branching first and what's to be done is secondary, while a ternary operator emphasises what's to be done over the selection of the values to do it with.
In different situations, either may better reflect the programmer's "natural" perspective on the code and make it easier to understand, verify and maintain. You may find yourself selecting one over the other based on the order in which you consider these factors when writing the code - if you've launched into "doing something" then find you might use one of a couple (or few) values to do it with, ? : is the least disruptive way to express that and continue your coding "flow".
The only potential benefit to ternary operators over plain if statements in my view is their ability to be used for initializations, which is particularly useful for const:
E.g.
const int foo = (a > b ? b : a - 10);
Doing this with an if/else block is impossible without using a function call as well (see the sketch after the list below). If you happen to have lots of const things like this, you might find there's a small gain from initializing a const properly over assignment with if/else. Measure it! It probably won't even be measurable, though. The reason I tend to do this is that by marking it const, the compiler catches me if I later do something that could/would accidentally change something I thought was fixed.
Effectively what I'm saying is that the ternary operator is important for const-correctness, and const-correctness is a great habit to be in:
This saves a lot of your time by letting the compiler help you spot mistakes you make
This can potentially let the compiler apply other optimizations
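For reference, a minimal sketch of the "function call" route mentioned above, using an immediately-invoked lambda (C++11), for when the logic outgrows ? : (a and b as in the example):

// The lambda runs immediately; its result initializes the const.
const int foo = [&]() -> int {
    if (a > b)
        return b;
    return a - 10;
}();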
Well...
I did a few tests with GCC and this function call:
add(argc,
    (argc > 1) ? ((argv[1][0] > 5) ? 50 : 10) : 1,
    (argc > 2) ? ((argv[2][0] > 5) ? 50 : 10) : 1,
    (argc > 3) ? ((argv[3][0] > 5) ? 50 : 10) : 1);
The resulting assembler code with gcc -O3 had 35 instructions.
The equivalent code with if/else + intermediate variables had 36. With nested if/else using the fact that 3 > 2 > 1, I got 44. I did not even try to expand this into separate function calls.
Now I did not do any performance analysis, nor did I do a quality check of the resulting assembler code, but for something simple like this with no loops etc., I believe shorter is better.
It appears that there is some value to ternary operators after all :-)
That is only if code speed is absolutely crucial, of course. If/else statements are much easier to read when nested than something like (c1)?(c2)?(c3)?(c4)?1:2:3:4:5. And having huge expressions as function arguments is not fun.
Also keep in mind that nested ternary expressions make refactoring the code - or debugging by placing a bunch of handy printfs() at a condition - a lot harder.
If you're worried about it from a performance perspective then I'd be very surprised if there were any difference between the two.
From a look 'n feel perspective it's mainly down to personal preference. If the condition is short and the true/false parts are short then a ternary operator is fine, but anything longer tends to be better in an if/else statement (in my opinion).
You assume that there must be a distinction between the two when, in fact, there are a number of languages which forgo the "if-else" statement in favor of an "if-else" expression (in that case, they may not even have the ternary operator, which is no longer needed).
Imagine:
x = if (t) a else b
Anyway, the ternary operator is an expression in some languages (C, C#, C++, Java, etc.) which do not have "if-else" expressions, and thus it serves a distinct role there.