My code has a lot of patterns like
int a, b.....
bool c = x ? a >= b : a <= b;
and similarly for other inequality comparison operators. Is there a way to write this to achieve better performance/branchlessness on x86?
Please spare me the "have you benchmarked your code? Is this really your bottleneck?" type of comment. I am asking for other ways to write this so I can benchmark and test.
EDIT:
x is a bool.
Original expression:
x ? a >= b : a <= b
Branch-free equivalent expression without short-circuit evaluation:
!!x & a >= b | !x & a <= b
This is an example of a generic pattern without resorting to arithmetic trickery. Watch out for operator precedence; you may need parentheses for more complex examples.
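For instance, a minimal sketch of that pattern as a function (the function name cmp_dir is mine), parenthesized defensively against the precedence pitfalls just mentioned:

#include <stdbool.h>

/* Branch-free form of: x ? a >= b : a <= b */
bool cmp_dir(int a, int b, bool x)
{
    return ((!!x) & (a >= b)) | ((!x) & (a <= b));
}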
Another way would be:
bool c = (2*x - 1) * (a - b) >= 0;
This generates branch-less code, as you can see here: https://godbolt.org/z/1nAp7G (watch out, though: a - b can overflow, which is undefined behavior, if a and b can be far apart).
#include <stdbool.h>

bool foo(int a, int b, bool x)
{
    return (2*x - 1) * (a - b) >= 0;
}
------------------------------------------
foo:
movzx edx, dl
sub edi, esi
lea eax, [rdx-1+rdx]
imul eax, edi
not eax
shr eax, 31
ret
Since you're just looking for equivalent expressions, this comes from patching @AlexanderZhang's comment (with x in place of the result variable):
(a == b) || (x != (a < b))
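Wrapped up as a sketch (the function name is mine; != on two bools acts as XOR here, selecting between a < b and its negation):

#include <stdbool.h>

bool cmp_patch(int a, int b, bool x)
{
    return (a == b) || (x != (a < b));
}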
The way you currently have it is possibly unbeatable.
But for positive integral a and b and bool x you can use
a / b * x + b / a * !x
(You could adapt this, at the cost of extra CPU burn, by replacing a with a + 1, and similarly for b, if you need to support zero.)
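Under those assumptions, a minimal sketch (function name mine; a and b must be strictly positive so the divisions are defined):

#include <stdbool.h>

bool cmp_div(unsigned a, unsigned b, bool x)
{
    /* For positive operands, a/b is nonzero exactly when a >= b, and
       b/a is nonzero exactly when a <= b, so the selected quotient is
       nonzero exactly when the selected comparison holds. */
    return a / b * x + b / a * !x;
}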
If a >= b, then a - b is non-negative and its first bit (the sign bit) is 0. Otherwise a - b is negative and the sign bit is 1.
So we can simply XOR the sign bit of a - b with the value of x. (Watch the edge case a == b with x false: the sign bit is 0, so this returns false even though a <= b holds.)
constexpr auto shiftBit = sizeof(int) * 8 - 1;

bool foo(bool x, int a, int b)
{
    return x ^ bool((a - b) >> shiftBit);
}
foo(bool, int, int):
sub esi, edx
mov eax, edi
shr esi, 31
xor eax, esi
ret
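A quick sanity sketch against the foo above (the assertion harness is mine; note the a == b caveat mentioned earlier):

#include <cassert>

int main()
{
    assert(foo(true, 3, 2));    // x true,  a >= b holds
    assert(!foo(true, 2, 3));   // x true,  a >= b fails
    assert(foo(false, 2, 3));   // x false, a <= b holds
    // Caveat: foo(false, 3, 3) returns false even though a <= b is true.
}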
Related
I have to check whether the K-th bit of a number is set or not.
Example what I mean:
Input: N = 4, K = 0
Output: false
Explanation: The binary representation of 4 is b'100, in which the 0th bit (counting from the LSB) is not set, so it returns false.
This code does not work when the not-equal-to comparison is in the condition:
bool checkKthBit(int n, int k)
{
    if (n & (1 << k) != 0)
        return true;
    else
        return false;
}
However, after removing the not-equal operator, the code works perfectly fine:
bool checkKthBit(int n, int k)
{
    if (n & (1 << k))
        return true;
    else
        return false;
}
Why is this happening?
If you put the expression into brackets, it does what you want:
https://godbolt.org/z/enTcjhrzT
bool checkA(int n, int k) {
    if (n & (1 << k) != 0) return true; else return false;
}

bool checkB(int n, int k) {
    if ((n & (1 << k)) != 0) return true; else return false;
}

bool checkC(int n, int k) {
    if ((n & (1 << k))) return true; else return false;
}
This will produce the assembly below. Notice that checkB and checkC are identical (with correct brackets the precedence is right, and the != operator can stay), while checkA gives precedence to the != comparison, whose result is then and-ed with n and used for the return.
checkA(int, int):
mov eax, 1
mov ecx, esi
sal eax, cl
test eax, eax
setne al
movzx eax, al
and eax, edi
ret
checkB(int, int):
mov ecx, esi
sar edi, cl
mov eax, edi
and eax, 1
ret
checkC(int, int):
mov ecx, esi
sar edi, cl
mov eax, edi
and eax, 1
ret
See the C precedence table, where binary & is precedence 8 and != is precedence 7 (a lower number binds tighter), so != is applied before &:
Reference from: https://en.cppreference.com/w/c/language/operator_precedence
The C++ table is numbered the same way: there != is 10 and & is 11, so the relative order is identical and != still binds tighter than &:
Reference from: https://en.cppreference.com/w/cpp/language/operator_precedence
Assuming GCC as the toolchain: if you do not provide any dialect option, the default will be used:
The default, if no C language dialect options are given, is
-std=gnu17.
Or for C++:
The default, if no C++ language dialect options are given, is
-std=gnu++17.
GCC reference: https://gcc.gnu.org/onlinedocs/gcc/Standards.html
Broadly speaking, it's better to use more parentheses rather than fewer. Instead of depending on precedence order and trusting that you and everybody who reads the code understand its caveats, it's considered safer to make the order explicit:
https://softwareengineering.stackexchange.com/questions/201175/should-i-use-parentheses-in-logical-statements-even-where-not-necessary
How much will it affect performance if I use:
n>>1 instead of n/2
n&1 instead of n%2!=0
n<<3 instead of n*8
n++ instead of n+=1
and so on...
And if it does increase performance, please explain why.
Any half-decent compiler will optimize the two versions into the same thing. For example, GCC compiles this:
unsigned int half1(unsigned int n) { return n / 2; }
unsigned int half2(unsigned int n) { return n >> 1; }
bool parity1(int n) { return n % 2; }
bool parity2(int n) { return n & 1; }
int mult1(int n) { return n * 8; }
int mult2(int n) { return n << 3; }
void inc1(int& n) { n += 1; }
void inc2(int& n) { n++; }
to
half1(unsigned int):
mov eax, edi
shr eax
ret
half2(unsigned int):
mov eax, edi
shr eax
ret
parity1(int):
mov eax, edi
and eax, 1
ret
parity2(int):
mov eax, edi
and eax, 1
ret
mult1(int):
lea eax, [0+rdi*8]
ret
mult2(int):
lea eax, [0+rdi*8]
ret
inc1(int&):
add DWORD PTR [rdi], 1
ret
inc2(int&):
add DWORD PTR [rdi], 1
ret
One small caveat: in the first example, if n could be negative (i.e. it is signed and the compiler can't prove that it's non-negative), then the division and the bit-shift are not equivalent, and the division needs some extra instructions. Other than that, compilers are smart and will optimize operations with constant operands, so use whichever version makes more sense logically and is more readable.
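To make the signed caveat concrete, here is a small illustration (assuming a typical two's-complement target where >> on a negative int is an arithmetic shift):

#include <cstdio>

int main()
{
    int n = -3;
    // Division truncates toward zero; an arithmetic shift rounds toward
    // negative infinity, so the two disagree for negative odd values.
    std::printf("%d %d\n", n / 2, n >> 1);   // prints: -1 -2
}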
Strictly speaking, in most cases, yes.
This is because bit manipulation is a simpler operation for a CPU to perform: the relevant circuitry in the ALU is much simpler and requires fewer discrete steps (clock cycles) to complete.
As others have mentioned, any compiler worth a damn will automatically detect arithmetic operations with constant operands that have bitwise analogs (like those in your examples) and convert them to the appropriate bitwise operations under the hood.
Keep in mind that if the relevant operand (the divisor, modulus, or multiplier) is a runtime value, such optimizations generally cannot occur.
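For instance (a sketch, function names mine), only the version whose divisor is a compile-time constant can be strength-reduced:

// Compilers reduce this to a shift (n >> 4).
unsigned div_const(unsigned n) { return n / 16; }

// The divisor is only known at runtime, so this generally
// needs a real division instruction.
unsigned div_runtime(unsigned n, unsigned d) { return n / d; }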
I saw the chosen answer to this post.
I was surprised that (x & 255) == (x % 256) if x is an unsigned integer, and I wondered if it makes sense to always replace % with & in x % n for n = 2^a (a = [1, ...]) when x is a positive integer.
This is a special case in which I as a human can decide, because I know which values the program will deal with, while the compiler does not. Can I gain a significant performance boost if my program uses a lot of modulo operations?
Sure, I could just compile and look at the dissassembly. But this would only answer my question for one compiler/architecture. I would like to know if this is in principle faster.
If your integral type is unsigned, the compiler will optimize it, and the result will be the same. If it's signed, something is different...
This program:
int mod_signed(int i) {
    return i % 256;
}
int and_signed(int i) {
    return i & 255;
}
unsigned mod_unsigned(unsigned int i) {
    return i % 256;
}
unsigned and_unsigned(unsigned int i) {
    return i & 255;
}
will be compiled (by GCC 6.2 with -O3; Clang 3.9 produces very similar code) into:
mod_signed(int):
mov edx, edi
sar edx, 31
shr edx, 24
lea eax, [rdi+rdx]
movzx eax, al
sub eax, edx
ret
and_signed(int):
movzx eax, dil
ret
mod_unsigned(unsigned int):
movzx eax, dil
ret
and_unsigned(unsigned int):
movzx eax, dil
ret
The resulting assembly of mod_signed is different because
If both operands to a multiplication, division, or modulus expression have the same sign, the result is positive. Otherwise, the result is negative. The result of a modulus operation's sign is implementation-defined.
and AFAICT, most implementations decide that the sign of the result of a modulus expression is always the same as the sign of the first operand. See this documentation.
Hence, mod_signed is optimized to (from nwellnhof's comment):
int d = i < 0 ? 255 : 0;
return ((i + d) & 255) - d;
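A quick spot-check of that rewrite over a range of signed values (the harness is mine):

#include <assert.h>

int main(void)
{
    for (int i = -1000; i <= 1000; ++i) {
        int d = i < 0 ? 255 : 0;
        assert((((i + d) & 255) - d) == i % 256);
    }
    return 0;
}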
Logically, we can prove that i % 256 == i & 255 for all unsigned integers; hence, we can trust the compiler to do its job.
I did some measurements with GCC:
if the argument of a / or % is a compile-time constant that's a power of 2, GCC can turn it into the corresponding bit operation.
Here are some of my benchmarks for divisions:
What has a better performance: multiplication or division? As you can see, the running times with divisors that are statically known powers of two are noticeably lower than with other statically known divisors.
So if / and % with statically known power-of-two arguments describe your algorithm better than bit ops, feel free to prefer / and %.
You shouldn't lose any performance with a decent compiler.
bool x = false, y = false, z = true;
if(x || y || z){}
or
if(x | y | z){}
Does the second if statement perform a bitwise "or" operation on all the booleans, treating them as if they were bytes? E.g. (0000 | 0000 | 0001) = true...
Or does it act like a Java | on booleans, where it will evaluate every bool in the expression even if the first was true?
I want to know how bitwise operators work on bool values. Is it equivalent to integer bitwise operations?
Efficiency depends. The logical-or operator || is a short-circuit operator,
meaning that if x in your example is true, it will not evaluate y or z.
If it were a logical and &&, then if x is false, it would not test y or z.
It's important to note that short-circuit evaluation does not exist as a single instruction,
so it has to be implemented with test and jump instructions. This means branching, which slows things down, since modern CPUs are pipelined.
But the real answer is that it depends, like many other questions of this nature: sometimes the benefit of short-circuiting outweighs the cost.
In the following extremely simple example you can see that bitwise or | is superior.
#include <iostream>

bool test1(bool a, bool b, bool c)
{
    return a | b | c;
}

bool test2(bool a, bool b, bool c)
{
    return a || b || c;
}

int main()
{
    bool a = true;
    bool b = false;
    bool c = true;
    test1(a,b,c);
    test2(a,b,c);
    return 0;
}
The following are the Intel-style assembly listings produced by gcc-4.8 with -O3:
test1 assembly:
_Z5test1bbb:
.LFB1264:
.cfi_startproc
mov eax, edx
or eax, esi
or eax, edi
ret
.cfi_endproc
test2 assembly:
_Z5test2bbb:
.LFB1265:
.cfi_startproc
test dil, dil
jne .L6
test sil, sil
mov eax, edx
jne .L6
rep; ret
.p2align 4,,10
.p2align 3
.L6:
mov eax, 1
ret
.cfi_endproc
You can see that it has branch instructions, which can mess up the pipeline.
Sometimes, however, short-circuiting is worth it, such as in
return x && deep_recursion_function();
Disclaimer:
I would always use logical operators on bools, unless performance really is critical, or perhaps in simple cases like test1 and test2 (but with lots of bools).
In either case, first verify that you actually get an improvement.
The second acts like a Java | on integers: a bitwise or. As C originally didn't have a boolean type, the if statement reads any non-zero value as true, so you can use it that way, but it is often more efficient to use the short-circuiting operator || instead, especially when calling functions that compute the conditions.
I would also like to point out that short-circuiting lets you check unsafe conditions, like if(myptr == NULL || myptr->struct_member < 0) return -1;, whereas using the bitwise or there will give you a segfault when myptr is NULL.
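For contrast, a sketch of the unsafe bitwise version (the struct and names are mine, mirroring the snippet above):

#include <stddef.h>

struct s { int struct_member; };

int check_unsafe(struct s *myptr)
{
    /* Both operands of | are always evaluated, so myptr is
       dereferenced even when it is NULL: undefined behavior. */
    if ((myptr == NULL) | (myptr->struct_member < 0))
        return -1;
    return 0;
}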
Just out of curiosity. If I have something like:
if (x < 0)
    x = 0;
if (x > some_maximum)
    x = some_maximum;
return x;
Is there a way to not branch? This is C++.
Addendum: I mean no branch instructions in the assembly. It's a MIPS architecture.
There are bit-tricks to find the minimum or maximum of two numbers, so you could use those to find min(max(x, 0), some_maximum). From here:
y ^ ((x ^ y) & -(x < y)); // min(x, y)
x ^ ((x ^ y) & -(x < y)); // max(x, y)
As the source states, though, it's probably faster to do it the normal way, despite the branch.
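Composed into a clamp for the question's range, as a sketch (the function name is mine; the -(x < y) trick assumes two's complement, so the mask is all ones when the comparison holds):

int clamp_bithack(int x, int some_maximum)
{
    int t = x ^ (x & -(x < 0));   // max(x, 0)
    return some_maximum ^ ((t ^ some_maximum) & -(t < some_maximum));   // min(t, some_maximum)
}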
This is going to be compiler- and processor-dependent, but if you use ?: it can be translated to a conditional move (at least on Intel-based processors) which does not use a branch.
x = x < 0 ? 0 : x;
x = x > max ? max : x;
This can use the CMOV instruction (see http://www.intel.com/software/products/documentation/vlin/mergedprojects/analyzer_ec/mergedprojects/reference_olh/mergedProjects/instructions/instruct32_hh/vc35.htm), whose purpose is to avoid branching (and thus branch prediction penalties).
Edit: this thread may be of interest to you. Benchmarks show that conditional moves will give you speed gains only on branches that are not very predictable, whereas highly predictable branches (such as that of a long-running loop) prefer the standard approach.
In C++17 you can use std::clamp
Defined in header <algorithm>
template<class T>
constexpr const T& clamp( const T& v, const T& lo, const T& hi ); (1) (since C++17)
template<class T, class Compare>
constexpr const T& clamp( const T& v, const T& lo, const T& hi, Compare comp ); (2) (since C++17)
If v compares less than lo, returns lo; otherwise if hi compares
less than v, returns hi; otherwise returns v. Uses operator< to
compare the values.
Same as (1), but uses comp to compare the values.
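Applied to the question's snippet, a usage sketch (function name mine):

#include <algorithm>

int clamp_x(int x, int some_maximum)
{
    return std::clamp(x, 0, some_maximum);
}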
Using the ternary operator :)
return x < 0 ? 0 : x > some_maximum ? some_maximum : x;
Depends on your architecture. For ARM, at least, the compiler would probably emit conditionally executed instructions, and the resulting machine code wouldn't contain a branch. I can't think of a good way to make that explicit in the C program, though.
If it's possible to limit the range to powers of 2 (non-inclusive), then you can just go with
int newx = x & ((highest power of 2) - 1)
though I'm not quite sure how to handle the x < 0 case, or the generic x < n case.
For future problems like this, the bit hack page might be useful: http://graphics.stanford.edu/~seander/bithacks.html.
Since the bithack for min and max was already posted, here is a different one:
// CHAR_BIT is number of bits per byte.
// sign = 1 if x < 0, sign = 0 otherwise (according to the page above)
int sign = (int)((unsigned int)((int)x) >> (sizeof(int) * CHAR_BIT - 1));
int y = (1-sign)*x; // if x < 0, then y = 0, else y = x.
// Depending on arch, the below _might_ cause a branch.
// (on x64 it does not cause a branch, not sure about MIPS)
int z = !(y/some_maximum); // if 0 <= y < some_maximum, z = 1, else z = 0
int ret = z*y + (1-z)*some_maximum; // if z =1, then ret = y; else ret = some_maximum.
return ret;
I just tried it out, and it worked for the few test cases I had.
Here is the assembly code from my computer (Intel arch), which shows no branches.
int cap(int x)
{
00F013A0 push ebp
00F013A1 mov ebp,esp
00F013A3 sub esp,0FCh
00F013A9 push ebx
00F013AA push esi
00F013AB push edi
00F013AC lea edi,[ebp-0FCh]
00F013B2 mov ecx,3Fh
00F013B7 mov eax,0CCCCCCCCh
00F013BC rep stos dword ptr es:[edi]
int some_maximum = 100;
00F013BE mov dword ptr [some_maximum],64h
// CHAR_BIT is number of bits per byte.
// sign = 1 if x < 0, sign = 0 otherwise (according to the page above)
int sign = (int)((unsigned int)((int)x) >> (sizeof(int) * CHAR_BIT - 1));
00F013C5 mov eax,dword ptr [x]
00F013C8 shr eax,1Fh
00F013CB mov dword ptr [sign],eax
int y = (1-sign)*x; // if x < 0, then y = 0, else y = x.
00F013CE mov eax,1
00F013D3 sub eax,dword ptr [sign]
00F013D6 imul eax,dword ptr [x]
00F013DA mov dword ptr [y],eax
// Depending on arch, the below _might_ cause a branch.
// (on x64 it does not cause a branch, not sure about MIPS)
int z = !(y/some_maximum); // if 0 <= y < some_maximum, z = 1, else z = 0
00F013DD mov eax,dword ptr [y]
00F013E0 cdq
00F013E1 idiv eax,dword ptr [some_maximum]
00F013E4 neg eax
00F013E6 sbb eax,eax
00F013E8 add eax,1
00F013EB mov dword ptr [z],eax
int ret = z*y + (1-z)*some_maximum; // if z =1, then ret = y; else ret = some_maximum.
00F013EE mov eax,dword ptr [z]
00F013F1 imul eax,dword ptr [y]
00F013F5 mov ecx,1
00F013FA sub ecx,dword ptr [z]
00F013FD imul ecx,dword ptr [some_maximum]
00F01401 add eax,ecx
00F01403 mov dword ptr [ret],eax
return ret;
00F01406 mov eax,dword ptr [ret]
}
00F01409 pop edi
00F0140A pop esi
00F0140B pop ebx
00F0140C mov esp,ebp
00F0140E pop ebp
00F0140F ret
x = min(max(x,0),100);
The branching is hidden away nicely inside functions with normal names.
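With the standard library, that reads roughly as below (function name mine; whether the result is branch-free is up to the compiler and target):

#include <algorithm>

int clamp_minmax(int x)
{
    return std::min(std::max(x, 0), 100);
}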
I suggest creating a clip_by template.
x = ((int)(x > some_maximum)) * some_maximum
+ ((int)(x > 0 && x <= some_maximum)) * x;