Why is this code not efficient? - C++

I want to improve the following code, which calculates the mean and standard deviation of an 8x8 patch:
void calculateMeanStDev8x8Aux(cv::Mat* patch, int sx, int sy, int& mean, float& stdev)
{
    unsigned sum=0;
    unsigned sqsum=0;
    const unsigned char* aux=patch->data + sy*patch->step + sx;
    for (int j=0; j< 8; j++) {
        const unsigned char* p = (const unsigned char*)(j*patch->step + aux ); // Pointer to the start of the matrix
        for (int i=0; i<8; i++) {
            unsigned f = *p++;
            sum += f;
            sqsum += f*f;
        }
    }
    mean = sum >> 6;
    int r = (sum*sum) >> 6;
    stdev = sqrtf(sqsum - r);
    if (stdev < .1) {
        stdev=0;
    }
}
I also rewrote the following inner loop with NEON intrinsics:
for (int i=0; i<8; i++) {
    unsigned f = *p++;
    sum += f;
    sqsum += f*f;
}
This is the NEON code that replaces that inner loop:
int32x4_t vsum= { 0 };
int32x4_t vsum2= { 0 };
int32x4_t vsumll = { 0 };
int32x4_t vsumlh = { 0 };
int32x4_t vsumll2 = { 0 };
int32x4_t vsumlh2 = { 0 };
uint8x8_t f= vld1_u8(p); // VLD1.8 {d0}, [r0]
// 16-bit ints / 8 elements
int16x8_t val = (int16x8_t)vmovl_u8(f);
// 32-bit ints / 4 elements x 2
int32x4_t vall = vmovl_s16(vget_low_s16(val));
int32x4_t valh = vmovl_s16(vget_high_s16(val));
// update 4 partial sum of products vectors
vsumll2 = vmlaq_s32(vsumll2, vall, vall);
vsumlh2 = vmlaq_s32(vsumlh2, valh, valh);
// sum 4 partial sum of product vectors
vsum = vaddq_s32(vall, valh);
vsum2 = vaddq_s32(vsumll2, vsumlh2);
// do scalar horizontal sum across final vector
sum += vgetq_lane_s32(vsum, 0);
sum += vgetq_lane_s32(vsum, 1);
sum += vgetq_lane_s32(vsum, 2);
sum += vgetq_lane_s32(vsum, 3);
sqsum += vgetq_lane_s32(vsum2, 0);
sqsum += vgetq_lane_s32(vsum2, 1);
sqsum += vgetq_lane_s32(vsum2, 2);
sqsum += vgetq_lane_s32(vsum2, 3);
But it is roughly 30 ms slower. Does anyone know why?
All of the code produces correct results.

To add to Lundin's answer: yes, on instruction sets like ARM, where you have a register-based index or some reach with an immediate index, you might benefit from encouraging the compiler to use indexing. On the other hand, ARM for example can increment its pointer register in the load instruction, basically *p++ in one instruction.
It is always a toss-up between p[i] or p[i++] and *p or *p++; on some instruction sets it is much more obvious which path to take.
Likewise with your loop index: if you are not otherwise using it, counting down instead of up can save an instruction per loop, maybe more. Some compilers might do this:
inc reg
cmp reg,#7
bne loop_top
If you were counting down though you might save an instruction per loop:
dec reg
bne loop_top
or even one processor I know of
decrement_and_jump_if_not_zero loop_top
The compilers usually know this and you don't have to encourage them. BUT if you use the p[i] form where the memory read order is important, then the compiler can't, or at least should not, arbitrarily change the order of the reads. So for that case you would want to have the code count down.
So I tried all of these things:
unsigned fun1 ( const unsigned char *p, unsigned *x )
{
    unsigned sum;
    unsigned sqsum;
    int i;
    unsigned f;
    sum = 0;
    sqsum = 0;
    for(i=0; i<8; i++)
    {
        f = *p++;
        sum += f;
        sqsum += f*f;
    }
    //to keep the compiler from optimizing
    //stuff out
    x[0]=sum;
    return(sqsum);
}
unsigned fun2 ( const unsigned char *p, unsigned *x )
{
    unsigned sum;
    unsigned sqsum;
    int i;
    unsigned f;
    sum = 0;
    sqsum = 0;
    for(i=8;i--;)
    {
        f = *p++;
        sum += f;
        sqsum += f*f;
    }
    //to keep the compiler from optimizing
    //stuff out
    x[0]=sum;
    return(sqsum);
}
unsigned fun3 ( const unsigned char *p, unsigned *x )
{
    unsigned sum;
    unsigned sqsum;
    int i;
    sum = 0;
    sqsum = 0;
    for(i=0; i<8; i++)
    {
        sum += (unsigned)p[i];
        sqsum += ((unsigned)p[i])*((unsigned)p[i]);
    }
    //to keep the compiler from optimizing
    //stuff out
    x[0]=sum;
    return(sqsum);
}
unsigned fun4 ( const unsigned char *p, unsigned *x )
{
    unsigned sum;
    unsigned sqsum;
    int i;
    sum = 0;
    sqsum = 0;
    for(i=8; i;i--)
    {
        sum += (unsigned)p[i-1];
        sqsum += ((unsigned)p[i-1])*((unsigned)p[i-1]);
    }
    //to keep the compiler from optimizing
    //stuff out
    x[0]=sum;
    return(sqsum);
}
with both gcc and llvm (clang). And of course both unrolled the loop since the count was a constant. gcc produced the same code for each of the experiments, in some cases with a subtle register-mix change. And I would argue a bug, as in at least one of them the reads were not in the order described by the code.
gcc's solution for all four was this, with some read reordering; notice the reads being out of order relative to the source code. If this were against hardware/logic that relied on the reads being in the order described by the code, you would have a big problem.
00000000 <fun1>:
0: e92d05f0 push {r4, r5, r6, r7, r8, sl}
4: e5d06001 ldrb r6, [r0, #1]
8: e00a0696 mul sl, r6, r6
c: e4d07001 ldrb r7, [r0], #1
10: e02aa797 mla sl, r7, r7, sl
14: e5d05001 ldrb r5, [r0, #1]
18: e02aa595 mla sl, r5, r5, sl
1c: e5d04002 ldrb r4, [r0, #2]
20: e02aa494 mla sl, r4, r4, sl
24: e5d0c003 ldrb ip, [r0, #3]
28: e02aac9c mla sl, ip, ip, sl
2c: e5d02004 ldrb r2, [r0, #4]
30: e02aa292 mla sl, r2, r2, sl
34: e5d03005 ldrb r3, [r0, #5]
38: e02aa393 mla sl, r3, r3, sl
3c: e0876006 add r6, r7, r6
40: e0865005 add r5, r6, r5
44: e0854004 add r4, r5, r4
48: e5d00006 ldrb r0, [r0, #6]
4c: e084c00c add ip, r4, ip
50: e08c2002 add r2, ip, r2
54: e082c003 add ip, r2, r3
58: e023a090 mla r3, r0, r0, sl
5c: e080200c add r2, r0, ip
60: e5812000 str r2, [r1]
64: e1a00003 mov r0, r3
68: e8bd05f0 pop {r4, r5, r6, r7, r8, sl}
6c: e12fff1e bx lr
The index used for the loads and subtle register mixing were the only differences between the functions gcc produced; all of the operations were the same, in the same order.
llvm/clang:
00000000 <fun1>:
0: e92d41f0 push {r4, r5, r6, r7, r8, lr}
4: e5d0e000 ldrb lr, [r0]
8: e5d0c001 ldrb ip, [r0, #1]
c: e5d03002 ldrb r3, [r0, #2]
10: e5d08003 ldrb r8, [r0, #3]
14: e5d04004 ldrb r4, [r0, #4]
18: e5d05005 ldrb r5, [r0, #5]
1c: e5d06006 ldrb r6, [r0, #6]
20: e5d07007 ldrb r7, [r0, #7]
24: e08c200e add r2, ip, lr
28: e0832002 add r2, r3, r2
2c: e0882002 add r2, r8, r2
30: e0842002 add r2, r4, r2
34: e0852002 add r2, r5, r2
38: e0862002 add r2, r6, r2
3c: e0870002 add r0, r7, r2
40: e5810000 str r0, [r1]
44: e0010e9e mul r1, lr, lr
48: e0201c9c mla r0, ip, ip, r1
4c: e0210393 mla r1, r3, r3, r0
50: e0201898 mla r0, r8, r8, r1
54: e0210494 mla r1, r4, r4, r0
58: e0201595 mla r0, r5, r5, r1
5c: e0210696 mla r1, r6, r6, r0
60: e0201797 mla r0, r7, r7, r1
64: e8bd41f0 pop {r4, r5, r6, r7, r8, lr}
68: e1a0f00e mov pc, lr
Much easier to read and follow, perhaps with the cache in mind, getting the reads done all in one shot. llvm also got the reads out of order in at least one case:
00000144 <fun4>:
144: e92d40f0 push {r4, r5, r6, r7, lr}
148: e5d0c007 ldrb ip, [r0, #7]
14c: e5d03006 ldrb r3, [r0, #6]
150: e5d02005 ldrb r2, [r0, #5]
154: e5d05004 ldrb r5, [r0, #4]
158: e5d0e000 ldrb lr, [r0]
15c: e5d04001 ldrb r4, [r0, #1]
160: e5d06002 ldrb r6, [r0, #2]
164: e5d00003 ldrb r0, [r0, #3]
Yes, for averaging some values from RAM, order is not an issue; moving on.
So the compilers chose the unrolled path and didn't care about the micro-optimizations. Because of the size of the loop, both chose to burn a bunch of registers, holding one loaded value per iteration and then performing the adds and the multiplies from those temporary reads. If we increased the size of the loop a little, I would expect to see sum and sqsum accumulations within the unrolled loop as the compiler runs out of registers, or the threshold would be reached where they choose not to unroll the loop at all.
If I pass the length in and replace the 8's in the code above with that passed-in length, forcing the compiler to make a loop out of this, you sort of see the optimizations; instructions like this are used:
a4: e4d35001 ldrb r5, [r3], #1
And this being ARM, they modify the loop register in one place and branch-if-not-equal a number of instructions later... because they can.
Granted this is a math function, but using float is painful, and using multiplies is painful; divides are much worse. Fortunately a shift was used, and fortunately this was unsigned so that the shift could be used (the compiler would/should have known to use an arithmetic shift, if available, had you divided a signed number).
So basically, focus on micro-optimizations of the inner loop, since it gets run many times, and see if this line can be turned into shifts and adds, if possible, or if the data can be arranged so that the computation can be taken out of the loop (if possible; don't waste other copy loops elsewhere to do this):
const unsigned char* p = (const unsigned char*)(j*patch->step + aux );
You could get some speed that way. I didn't try it, but because it is a loop inside a loop the compiler probably won't unroll that loop...
Long story short, you might get some gains against a dumber compiler depending on the instruction set, but this code is not really bad, so the compiler can optimize it about as well as you can.
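As an illustration of taking that row-address arithmetic out of the loop, here is a minimal sketch (my own, untested) that strength-reduces the per-row j*step multiply into a pointer increment, assuming the same patch layout as in the question:
const unsigned char* row = patch->data + sy*patch->step + sx;
unsigned sum = 0;
unsigned sqsum = 0;
for (int j = 0; j < 8; j++) {
    const unsigned char* p = row;
    for (int i = 0; i < 8; i++) {
        unsigned f = p[i];
        sum += f;
        sqsum += f*f;
    }
    row += patch->step;   // advance one row per iteration, no multiply needed
}
Whether this actually beats the original depends on what the compiler was already doing, so check the generated assembly.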

First of all, you will probably get very good, detailed answers to questions like this if you post them on Code Review instead.
Some comments regarding efficiency and suspicious variable types:
unsigned f = *p++; You will probably be better off if you access p through array indexing, i.e. use p[i] to access the data. This is highly dependent on the compiler, cache memory optimizations etc. (some ARM guru can give better advice than me in this matter).
By the way, the whole const char to int conversion looks highly suspicious. I take it those chars are to be regarded as 8-bit unsigned integers? Standard C uint8_t is likely a better type for this; char has various implementation-defined signedness issues that you want to avoid.
Also, why are you doing wild mixing of unsigned and int? You are asking for implicit integer balancing bugs.
stdev < .1. Just a minor thing: change this to .1f, or you force an implicit promotion of your float to double, since .1 is a double literal.
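Putting those suggestions together, a version of the original function along these lines might look like the sketch below. This is just my own illustration of the advice above (fixed-width types, indexed access, a float literal), not tested code:
#include <cstdint>
#include <cmath>
#include <opencv2/core/core.hpp>   // for cv::Mat, as in the question

void calculateMeanStDev8x8Aux(cv::Mat* patch, int sx, int sy, int& mean, float& stdev)
{
    uint32_t sum = 0;
    uint32_t sqsum = 0;
    const uint8_t* aux = patch->data + sy*patch->step + sx;
    for (int j = 0; j < 8; j++) {
        const uint8_t* p = aux + j*patch->step;
        for (int i = 0; i < 8; i++) {      // indexed access instead of *p++
            uint32_t f = p[i];
            sum += f;
            sqsum += f*f;
        }
    }
    mean = (int)(sum >> 6);
    uint32_t r = (sum*sum) >> 6;
    stdev = sqrtf((float)(sqsum - r));
    if (stdev < .1f) {                     // float literal, no promotion to double
        stdev = 0.0f;
    }
}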

As your data is being read in groups of 8 bytes, depending on your hardware bus and the alignment of the array itself, you can probably get some gains by doing the inner loop's reads as a single long long read, then either manually splitting the number into separate values, or using ARM intrinsics to do the adds in parallel, with some inline asm using the add8 instruction (which adds 4 pairs of bytes at a time in one register), or doing a touch of shifting and using add16 to allow the values to overflow into 16 bits' worth of space. There is also a dual signed multiply-and-accumulate instruction which makes your first accumulation loop nearly perfectly supported by ARM with just a little help. Also, if the data coming in could be massaged into 16-bit values, that could also speed this up.
As to why the NEON is slower, my guess is that the overhead of setting up the vectors, along with the added data you are pushing around with larger types, is killing any performance it might gain on such a small set of data. The original code is very ARM friendly to begin with, which means the setup overhead is probably killing you. When in doubt, look at the assembly output. That will tell you what's truly going on. Perhaps the compiler is pushing and popping data all over the place when trying to use the intrinsics - it wouldn't be the first time I've seen this sort of behavior.
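A rough sketch of the "one wide read, then split it manually" idea, in plain C++ and without the add8/add16 parts (my own illustration; accumulateRow8 is a hypothetical helper, and whether it actually wins depends on the compiler, the bus, and alignment):
#include <cstdint>
#include <cstring>

// Accumulate sum and sum of squares for one 8-byte row,
// fetched with a single 64-bit read instead of eight byte loads.
static inline void accumulateRow8(const uint8_t* p, unsigned& sum, unsigned& sqsum)
{
    uint64_t row;
    std::memcpy(&row, p, sizeof(row));        // one wide read; memcpy sidesteps alignment issues
    for (int i = 0; i < 8; i++) {
        unsigned f = (unsigned)(row & 0xFF);  // split out one byte
        row >>= 8;
        sum += f;
        sqsum += f*f;
    }
}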

Thanks to Lundin, dwelch and Michel.
I made the following improvement and it seems to be the best for my code.
I'm trying to decrease the number of cycles by improving the cache access, because the cache is only accessed one time.
int step = patch->step;
for (int j = 0; j < 8; j++) {
    p = (uint8_t*)(j*step + aux);
    i = 8;
    do {
        f = p[i-1];          // count down over p[7]..p[0]
        sum += f;
        sqsum += f*f;
    } while (--i);
}

Related

Simple Assembly Language doubts

I worked out some code for my assignment, and something tells me that I'm not doing it correctly. I hope someone can take a look at it.
Thank you!
AREA Reset, CODE, READONLY
ENTRY
LDR r1, = 0x13579BA0
MOV r3, #0
MOV r4, #0
MOV r2, #8
Loop CMP r2, #0
BGE DONE
LDR r5, [r1, r4]
AND r5, r5, #0x00000000
ADD r3, r3, r5
ADD r4, r4, #4
SUB r2, r2, #1
B Loop
LDR r0, [r3]
DONE B DONE
END
Write an ARM assembly program that will add the hexadecimal digits in register 1 and save the sum in register 0. For example, if r1 is initialized as follows:
LDR r1, =0x120A760C
When your program has run to completion, register 0 will contain the sum of 1+2+0+A+7+6+0+C.
You will need to use the following in your solution:
· An 8-iteration loop
· Logical shift right instruction
· The AND instruction (used to force selected bits to 0)
I know that I did not even use LSR. Where should I put it? I'm just getting started with assembly; I hope someone can suggest some improvements to this code.
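For reference, the digit-summing algorithm the assignment describes (shift right by one nibble each iteration, AND off the low 4 bits, accumulate) corresponds to something like this C sketch. It is only an illustration of the intended logic, not the required assembly:
unsigned sumHexDigits(unsigned r1)
{
    unsigned r0 = 0;
    for (int i = 0; i < 8; i++) {   // 8-iteration loop: eight nibbles in a 32-bit word
        r0 += r1 & 0xF;             // AND forces all but the low 4 bits to 0
        r1 >>= 4;                   // logical shift right by one hex digit
    }
    return r0;
}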

GCC generates different code depending on array index value

This code (arm):
void blinkRed(void)
{
for(;;)
{
bb[0x0008646B] ^= 1;
sys.Delay_ms(14);
}
}
...is compiled to the following asm code:
08000470: ldr r4, [pc, #20] ; (0x8000488 <blinkRed()+24>) // r4 = 0x422191ac
08000472: ldr r6, [pc, #24] ; (0x800048c <blinkRed()+28>)
08000474: movs r5, #14
08000476: ldr r3, [r4, #0]
08000478: eor.w r3, r3, #1
0800047c: str r3, [r4, #0]
0800047e: mov r0, r6
08000480: mov r1, r5
08000482: bl 0x80001ac <CSTM32F100C6::Delay_ms(unsigned int)>
08000486: b.n 0x8000476 <blinkRed()+6>
It is ok.
But if I just change the array index (by -0x400)...
void blinkRed(void)
{
for(;;)
{
bb[0x0008606B] ^= 1;
sys.Delay_ms(14);
}
}
...I get code that is not as well optimized:
08000470: ldr r4, [pc, #24] ; (0x800048c <blinkRed()+28>) // r4 = 0x42218000
08000472: ldr r6, [pc, #28] ; (0x8000490 <blinkRed()+32>)
08000474: movs r5, #14
08000476: ldr.w r3, [r4, #428] ; 0x1ac
0800047a: eor.w r3, r3, #1
0800047e: str.w r3, [r4, #428] ; 0x1ac
08000482: mov r0, r6
08000484: mov r1, r5
08000486: bl 0x80001ac <CSTM32F100C6::Delay_ms(unsigned int)>
0800048a: b.n 0x8000476 <blinkRed()+6>
The difference is that in the first case r4 is loaded with the target address immediately (0x422191ac) and then memory is accessed with 2-byte instructions, but in the second case r4 is loaded with an intermediate
address (0x42218000) and then memory is accessed with a 4-byte instruction using an offset (+0x1ac) to reach the target address (0x422181ac).
Why does the compiler do this?
I use:
arm-none-eabi-g++ -mcpu=cortex-m3 -mthumb -g2 -Wall -O1 -std=gnu++14 -fno-exceptions -fno-use-cxa-atexit -fstrict-volatile-bitfields -c -DSTM32F100C6T6B -DSTM32F10X_LD_VL
bb is:
__attribute__ ((section(".bitband"))) volatile u32 bb[0x00800000];
In .ld it is defined as:
in MEMORY section:
BITBAND(rwx): ORIGIN = 0x42000000, LENGTH = 0x02000000
in SECTIONS section:
.bitband (NOLOAD) :
SUBALIGN(0x02000000)
{
KEEP(*(.bitband))
} > BITBAND
I would consider it an artefact / missed optimization opportunity of -O1.
It can be understood in more detail if we look at the code generated with -O0 to load bb[...]:
First case:
movw r2, #:lower16:bb
movt r2, #:upper16:bb
movw r3, #37292
movt r3, 33
adds r3, r2, r3
ldr r3, [r3, #0]
Second case:
movw r3, #:lower16:bb
movt r3, #:upper16:bb
add r3, r3, #2195456 ; 0x218000 = 4*0x86000
add r3, r3, #428
ldr r3, [r3, #0]
The code in the second case is better and it can be done this way because the constant can be added with two add instructions (which is not the case if the index is 0x0008646B).
-O1 only performs optimizations which are not time consuming. So apparently it merges the add and the ldr early, and thus misses the later opportunity to load the whole address with one pc-relative ldr.
Compile with -O2 (or with -fgcse) and the code looks as expected.

CMP/BEQ not working, always branching (ARM)

I'm getting very mad at this and can't figure out why my BEQ statement is always executed.
The program should replace chars located in memory (address in R0):
_ should become +
C should become A
A should become B
B should become C
This is what I have so far:
MOV R11, #0 ; Initialise the count of copies done
MOV R10, #43 ; R10 = +
MOV R9, #'_' ; R9 = _
MOV R8, #'A' ; R8 = A
MOV R7, #'B' ; R7 = B
MOV R6, #'C' ; R6 = C
TOP:
LDRB R5, [R0, R11] ; Copy element X into R5
CMP R5, R9
BEQ PLUS
CMP R5, R8
BEQ A
CMP R5, R7
BEQ B
CMP R5, R6
BEQ C
PLUS: ; Branch taken if _
STRB R10, [R0, R11]
A: ; Branch taken if A
STRB R8, [R0, R11]
B: ; Branch taken if B
STRB R7, [R0, R11]
C: ; Branch taken if C
STRB R6, [R0, R11]
ADDS R11, R11, #1 ; ++copiesDone
CMP R11, R1 ; Check the loop condition
BNE TOP
Apparently it's not only C's switch() that confuses people...
So, what you're currently doing is the equivalent of
for (size_t i = 0; i < n; i++)
{
switch(chararray[i])
{
default:
case '_': chararray[i] = '+';
case 'C': chararray[i] = 'A';
case 'A': chararray[i] = 'B';
case 'B': chararray[i] = 'C';
}
}
You're missing the break; after every case.
Edit, because it seems I have to make it really obvious:
for (size_t i = 0; i < n; i++)
{
switch(chararray[i])
{
default:
break;
case '_': chararray[i] = '+';
break;
case 'C': chararray[i] = 'A';
break;
case 'A': chararray[i] = 'B';
break;
case 'B': chararray[i] = 'C';
break; //unnecessary, but I put it in for regularity
}
}
To expand on EOF's answer, you can see what's going on by tracing through a sample execution instruction-by-instruction - a debugger always helps, but this is simple enough to do by hand. Let's consider a couple of different situations:
Instruction                   case char=='A'          case char=='Z'
---------------------------------------------------------------------
...
LDRB R5, [R0, R11]            executes, r5='A'        executes, r5='Z'
CMP R5, R9                    executes, flags=ne      executes, flags=ne
BEQ PLUS                      flags!=eq, not taken    flags!=eq, not taken
CMP R5, R8                    executes, flags=eq      executes, flags=ne
BEQ A                         flags==eq, taken        flags!=eq, not taken
CMP R5, R7                    /                       executes, flags=ne
BEQ B                         /                       flags!=eq, not taken
CMP R5, R6                    /                       executes, flags=ne
BEQ C                         /                       flags!=eq, not taken
PLUS: STRB R10, [R0, R11]     V                       executes: oops!
A:    STRB R8, [R0, R11]      executes                executes: oops!
B:    STRB R7, [R0, R11]      executes: oops!         executes: oops!
C:    STRB R6, [R0, R11]      executes: oops!         executes: oops!
ADDS R11, R11, #1             executes                executes
...
So no matter what happens, everything ends up as 'C' regardless! (note there's a register mixup for 'A', 'B', and 'C' - if you match r8, you jump to storing r8, etc.) Implementing the equivalent of break is a case of making sure instructions are skipped when you don't want them executing:
...
CMP R5, R6
BEQ C
B LOOP ; no match, skip everything
PLUS: STRB R10, [R0, R11]
B LOOP ; we've stored '+', skip 'B', 'C', and 'A'
A: STRB R7, [R0, R11]
B LOOP ; we've stored 'B', skip 'C' and 'A'
B: STRB R6, [R0, R11]
B LOOP ; we've stored 'C', skip 'A'
C: STRB R8, [R0, R11] ; nothing to skip, just fall through to the loop
LOOP: ADDS R11, R11, #1
...
However, note that unlike most architectures, ARM's conditional execution applies to most instructions. Thus an alternative approach, given a small number of simple routines (1-3 instructions) is to actually remove all the branches, and let conditional execution take care of it:
...
LDRB R5, [R0, R11]
CMP R5, R9
STRBEQ R10, [R0, R11]
CMP R5, R8
STRBEQ R7, [R0, R11]
CMP R5, R7
STRBEQ R6, [R0, R11]
CMP R5, R6
STRBEQ R8, [R0, R11]
ADDS R11, R11, #1
...
That way, everything gets "executed", but any stores which fail their condition check just do nothing.

What's the most efficient way to swap two register variables in CUDA?

I'm starting to write some CUDA code, and I want to do the equivalent of std::swap() for two variables within a kernel; they're in the register file (no spillage, not in some buffer, etc.). Suppose I have the following device code:
__device__ void foo(/* some args here */) {
/* etc. */
int x = /* value v1 */;
int y = /* value v2 */;
/* etc. */
swap(x,y);
/* etc. */
}
Now, I could just write
template <typename T> void swap ( T& a, T& b )
{
T c(a); a=b; b=c;
}
but I wonder - isn't there some CUDA built-in for this functionality?
Notes:
Yes, I want this to run for all threads.
Never mind about whether I have enough registers or not. Assume that I have them.
I have considered the following test program
template <typename T> __device__ void inline swap_test_device1(T& a, T& b)
{
T c(a); a=b; b=c;
}
template <typename T> __device__ void inline swap_test_device2(T a, T b)
{
T c(a); a=b; b=c;
}
__global__ void swap_test_global(const int* __restrict__ input1, const int* __restrict__ input2, int* output1, int* output2) {
int tx = threadIdx.x + blockIdx.x * blockDim.x;
int x = input1[tx]*input1[tx];
int y = input2[tx]*input2[tx];
//swap_test_device2(x,y);
swap_test_device1(x,y);
output1[tx] = x;
output2[tx] = y;
}
and I have disassembled it. The result when using swap_test_device1 and swap_test_device2 is the same. The common disassembled code is the following
MOV R1, c[0x1][0x100];
S2R R0, SR_CTAID.X;
S2R R2, SR_TID.X;
MOV32I R9, 0x4;
IMAD R3, R0, c[0x0][0x8], R2;
IMAD R6.CC, R3, R9, c[0x0][0x28];
IMAD.HI.X R7, R3, R9, c[0x0][0x2c];
IMAD R10.CC, R3, R9, c[0x0][0x20];
LD.E R2, [R6]; loads input1[tx] and stores it in R2
IMAD.HI.X R11, R3, R9, c[0x0][0x24];
IMAD R4.CC, R3, R9, c[0x0][0x30];
LD.E R0, [R10]; loads input2[tx] and stores it in R0
IMAD.HI.X R5, R3, R9, c[0x0][0x34];
IMAD R8.CC, R3, R9, c[0x0][0x38];
IMAD.HI.X R9, R3, R9, c[0x0][0x3c];
IMUL R2, R2, R2; R2 = R2 * R2
ST.E [R4], R2; stores input1[tx]*input1[tx] in global memory
IMUL R0, R0, R0; R0 = R0 * R0
ST.E [R8], R0; stores input2[tx]*input2[tx] in global memory
EXIT ;
It seems that there is no explicit swap in the disassembled code. In other words, for this simple example the compiler is capable of optimizing the code by directly writing x and y to the proper global memory locations.
EDIT
I have now considered the following more involved test case
__global__ void swap_test_global(const char* __restrict__ input1, const char* __restrict__ input2, char* output1, char* output2) {
int tx = threadIdx.x + blockIdx.x * blockDim.x;
char x = input1[tx];
char y = input2[tx];
//swap_test_device2(x,y);
swap_test_device1(x,y);
output1[tx] = (x >> 3) & y;
output2[tx] = (y >> 5) & x;
}
with the same above __device__ functions. The disassembled code is
MOV R1, c[0x1][0x100];
S2R R0, SR_CTAID.X;
S2R R2, SR_TID.X;
IMAD R0, R0, c[0x0][0x8], R2; R0 = threadIdx.x + blockIdx.x * blockDim.x
BFE R7, R0, 0x11f;
IADD R8.CC, R0, c[0x0][0x28];
IADD.X R9, R7, c[0x0][0x2c];
IADD R10.CC, R0, c[0x0][0x20];
LD.E.S8 R4, [R8]; R4 = x = input1[tx]
IADD.X R11, R7, c[0x0][0x24];
IADD R2.CC, R0, c[0x0][0x30];
LD.E.S8 R5, [R10]; R5 = y = input2[tx]
IADD.X R3, R7, c[0x0][0x34];
IADD R12.CC, R0, c[0x0][0x38];
IADD.X R13, R7, c[0x0][0x3c];
SHR.U32 R0, R4, 0x3; R0 = x >> 3
SHR.U32 R6, R5, 0x5; R6 = y >> 5
LOP.AND R5, R0, R5; R5 = (x >> 3) & y
LOP.AND R0, R6, R4; R0 = (y >> 5) & x
ST.E.U8 [R2], R5; global memory store
ST.E.U8 [R12], R0; global memory store
EXIT ;
As can be seen, there is still no apparent register swap.
To the best of my knowledge, this is all completely irrelevant.
x and y are not "real" objects: they only exist in the abstract machine described by the C++ standard. In particular, they do not correspond to registers.
You might imagine that the compiler when creating your program would assign them to registers, but that's really not how things work. The things being stored in registers can get shuffled around, duplicated, changed into something else, or even eliminated entirely.
In particular, unconditionally swapping two variables that are stored in registers usually doesn't generate any code at all — its only effect is for the compiler to adjust its internal tables of what objects are being stored in what registers at that point in time.
(even for a conditional swap, you're still usually better off letting the compiler do its thing)

Is i=(i+1)&3 faster than i=(i+1)%4?

I am optimizing some C++ code.
At one critical step, I want to implement the following function y=f(x):
f(0)=1
f(1)=2
f(2)=3
f(3)=0
Which one is faster: using a lookup table, i=(i+1)&3, or i=(i+1)%4? Or is there any better suggestion?
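For concreteness, the lookup-table option the answers below compare against could look something like this (my own sketch, assuming i stays in the range 0..3):
// Lookup-table version of f: f(0)=1, f(1)=2, f(2)=3, f(3)=0
static const int next[4] = { 1, 2, 3, 0 };

inline int f(int i)
{
    return next[i];   // assumes 0 <= i <= 3
}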
Almost certainly the lookup table is going to be slowest. In a lot of cases, the compiler will generate the same assembly for (i+1)&3 and (i+1)%4; however depending on the type/signedness of i, they may not be strictly equivalent and the compiler won't be able to make that optimization. For example for the code
int foo(int i)
{
return (i+1)%4;
}
unsigned bar(unsigned i)
{
return (i+1)%4;
}
on my system, gcc -O2 generates:
0000000000000000 <foo>:
0: 8d 47 01 lea 0x1(%rdi),%eax
3: 89 c2 mov %eax,%edx
5: c1 fa 1f sar $0x1f,%edx
8: c1 ea 1e shr $0x1e,%edx
b: 01 d0 add %edx,%eax
d: 83 e0 03 and $0x3,%eax
10: 29 d0 sub %edx,%eax
12: c3 retq
0000000000000020 <bar>:
20: 8d 47 01 lea 0x1(%rdi),%eax
23: 83 e0 03 and $0x3,%eax
26: c3 retq
so as you can see because of the rules about signed modulus results, (i+1)%4 generates a lot more code in the first place.
Bottom line, you're probably best off using the (i+1)&3 version if that expresses what you want, because there's less chance for the compiler to do something you don't expect.
I won't get into the discussion of premature optimization. But the answer is that they will be the same speed.
Any sane compiler will compile them to the same thing. Division/modulus by a power of two will be optimized to bitwise operations anyway.
So use whichever you find (or others will find) to be more readable.
EDIT: As Roland has pointed out, it does sometimes behave differently depending on the signedness:
Unsigned &:
int main(void)
{
unsigned x;
cin >> x;
x = (x + 1) & 3;
cout << x;
return 0;
}
mov eax, DWORD PTR _x$[ebp]
inc eax
and eax, 3
push eax
Unsigned Modulus:
int main(void)
{
unsigned x;
cin >> x;
x = (x + 1) % 4;
cout << x;
return 0;
}
mov eax, DWORD PTR _x$[ebp]
inc eax
and eax, 3
push eax
Signed &:
int main(void)
{
int x;
cin >> x;
x = (x + 1) & 3;
cout << x;
return 0;
}
mov eax, DWORD PTR _x$[ebp]
inc eax
and eax, 3
push eax
Signed Modulus:
int main(void)
{
int x;
cin >> x;
x = (x + 1) % 4;
cout << x;
return 0;
}
mov eax, DWORD PTR _x$[ebp]
inc eax
and eax, -2147483645 ; 80000003H
jns SHORT $LN3#main
dec eax
or eax, -4 ; fffffffcH
Chances are good that you won't find any difference: any reasonably modern compiler knows to optimize both into the same code.
Have you tried benchmarking it? As an offhand guess, I'll assume that the &3 version will be faster, as that's a simple addition and a bitwise AND operation, both of which should be single-cycle operations on any modern-ish CPU.
The %4 could go a few different ways, depending on how smart the compiler is. It could be done via division, which is much slower than addition, or it could be translated into a bitwise AND operation as well and end up being just as fast as the &3 version.
Same as Mystical's answer, but in C and for ARM:
int fun1 ( int i )
{
return( (i+1)&3 );
}
int fun2 ( int i )
{
return( (i+1)%4 );
}
unsigned int fun3 ( unsigned int i )
{
return( (i+1)&3 );
}
unsigned int fun4 ( unsigned int i )
{
return( (i+1)%4 );
}
creates:
00000000 <fun1>:
0: e2800001 add r0, r0, #1
4: e2000003 and r0, r0, #3
8: e12fff1e bx lr
0000000c <fun2>:
c: e2802001 add r2, r0, #1
10: e1a0cfc2 asr ip, r2, #31
14: e1a03f2c lsr r3, ip, #30
18: e0821003 add r1, r2, r3
1c: e2010003 and r0, r1, #3
20: e0630000 rsb r0, r3, r0
24: e12fff1e bx lr
00000028 <fun3>:
28: e2800001 add r0, r0, #1
2c: e2000003 and r0, r0, #3
30: e12fff1e bx lr
00000034 <fun4>:
34: e2800001 add r0, r0, #1
38: e2000003 and r0, r0, #3
3c: e12fff1e bx lr
For negative numbers the mask and the modulo are not equivalent; they match only for positive/unsigned numbers. For those cases your compiler should know that %4 is the same as &3 and use the less expensive one (&3), as gcc does above. clang/llc output below:
fun3:
add r0, r0, #1
and r0, r0, #3
mov pc, lr
fun4:
add r0, r0, #1
and r0, r0, #3
mov pc, lr
Of course & is faster than %, as shown by many previous posts. Also, since i is a local variable, you can use ++i instead of i+1, as it is better handled by most compilers; i+1 may (or may not) be optimized into ++i.
UPDATE: Perhaps I was not clear; I meant the function should just do return((++i)&3);