compiler generating a mov back and forth on eax - c++

int test1(int a, int b) {
if (__builtin_expect(a < b, 0))
return a / b;
return b;
}
was compiled by clang with -O3 -march=native to
test1(int, int): # #test1(int, int)
cmp edi, esi
jl .LBB0_1
mov eax, esi
ret
.LBB0_1:
mov eax, edi
cdq
idiv esi
mov esi, eax
mov eax, esi # moving eax back and forth
ret
Why is eax being moved back and forth after the idiv?
gcc has similar behavior, so this seems to be intended.
gcc with -O3 -march=native compiled the code to
test1(int, int):
mov r8d, esi
cmp edi, esi
jl .L4
mov eax, r8d
ret
.L4:
mov eax, edi
cdq
idiv esi
mov r8d, eax
mov eax, r8d #back and forth mov
ret
godbolt

This is not a complete solution to the puzzle but should give some clues.
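For reference, the variant without the hint is the same function with a plain condition (reconstructed here from the assembly below, which names it test2):
int test2(int a, int b) {
if (a < b)
return a / b;
return b;
}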
Without the __builtin_expect, clang generates:
test2(int, int): # #test2(int, int)
mov ecx, esi
cmp edi, esi
jge .LBB1_2
mov eax, edi
cdq
idiv ecx
mov ecx, eax
.LBB1_2:
mov eax, ecx
ret
While the register allocation is still odd here, it at least makes sense: if the branch is taken, the value of b in ecx is transferred to eax as the return value. If it is not taken, the result of the division (in eax) has to be transferred to ecx so that both paths leave the value in the same register.
It could be that __builtin_expect convinces the compiler to special-case the branch-taken path late in the compilation process, orphaning the .LBB1_2 label and causing it to be ultimately absent from the assembly.

idiv esi is 32-bit operand-size, so EAX is already zero-extended to fill RAX. Therefore copying to ESI or R8D and back has no effect on the value in EAX. (And the calling convention doesn't require zero-extension or sign-extension to 64-bit anyway; 32-bit types are returned in 32-bit registers with possible garbage in the upper 32.)
This looks like purely a missed optimization. (There's no microarchitectural performance reason that this would be a good thing either.)

Related

Finding max number between two, which implementation to choose

I am trying to figure out which implementation has an edge over the other when finding the max of two numbers. As an example, let's examine two implementations:
Implementation 1:
int findMax (int a, int b)
{
return (a > b) ? a : b;
}
// Assembly output: (gcc 11.1)
push rbp
mov rbp, rsp
mov DWORD PTR [rbp-4], edi
mov DWORD PTR [rbp-8], esi
mov eax, DWORD PTR [rbp-4]
cmp eax, DWORD PTR [rbp-8]
jle .L2
mov eax, DWORD PTR [rbp-4]
jmp .L4
.L2:
mov eax, DWORD PTR [rbp-8]
.L4:
pop rbp
ret
Implementation 2:
int findMax(int a, int b)
{
int diff, s, max;
diff = a - b;
s = (diff >> 31) & 1;
max = a - (s * diff);
return max;
}
// Assembly output: (gcc 11.1)
push rbp
mov rbp, rsp
mov DWORD PTR [rbp-20], edi
mov DWORD PTR [rbp-24], esi
mov eax, DWORD PTR [rbp-20]
sub eax, DWORD PTR [rbp-24]
mov DWORD PTR [rbp-4], eax
mov eax, DWORD PTR [rbp-4]
shr eax, 31
mov DWORD PTR [rbp-8], eax
mov eax, DWORD PTR [rbp-8]
imul eax, DWORD PTR [rbp-4]
mov edx, eax
mov eax, DWORD PTR [rbp-20]
sub eax, edx
mov DWORD PTR [rbp-12], eax
mov eax, DWORD PTR [rbp-12]
pop rbp
ret
The second one produces more assembly instructions, but the first one has a conditional jump. I am just trying to understand whether both are equally good.
First you need to turn on compiler optimizations (I used -O2 for the following). And you should compare to std::max. Then this:
#include <algorithm>
int findMax (int a, int b)
{
return (a > b) ? a : b;
}
int findMax2(int a, int b)
{
int diff, s, max;
diff = a - b;
s = (diff >> 31) & 1;
max = a - (s * diff);
return max;
}
int findMax3(int a,int b){
return std::max(a,b);
}
results in:
findMax(int, int):
cmp edi, esi
mov eax, esi
cmovge eax, edi
ret
findMax2(int, int):
mov ecx, edi
mov eax, edi
sub ecx, esi
mov edx, ecx
shr edx, 31
imul edx, ecx
sub eax, edx
ret
findMax3(int, int):
cmp edi, esi
mov eax, esi
cmovge eax, edi
ret
Your first version results in identical assembly to std::max, while your second variant does more work. When trying to optimize, you also need to specify what you are optimizing for. There are several options that typically require a trade-off: runtime, memory usage, size of the executable, readability of the code, etc. Typically you cannot get it all at once.
When in doubt, do not reinvent the wheel; use the existing, already optimized std::max. And do not forget that the code you write is not instructions for your CPU; rather, it is a high-level, abstract description of what the program should do. It is the compiler's job to figure out how that can best be achieved.
Last but not least, your second variant is actually broken. See the example here, compiled with -O2 -fsanitize=signed-integer-overflow; it results in:
/app/example.cpp:13:10: runtime error: signed integer overflow: -2147483648 - 2147483647 cannot be represented in type 'int'
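For context, here is a minimal reproduction of that failure (my own driver code, not the linked example); on typical two's-complement wrap-around the result is simply wrong, returning INT_MIN instead of INT_MAX:
#include <climits>
#include <cstdio>
int findMax2(int a, int b)
{
int diff = a - b; // overflows (undefined behavior) for a = INT_MIN, b = INT_MAX
int s = (diff >> 31) & 1;
return a - (s * diff);
}
int main()
{
// With -fsanitize=signed-integer-overflow this triggers the runtime error quoted above.
std::printf("%d\n", findMax2(INT_MIN, INT_MAX)); // expected INT_MAX
}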
You should favor correctness over speed. The fastest code is not worth a thing when it is wrong. And because of that, readability is next on the list: code that is difficult to read and understand is also difficult to prove correct. I was only able to spot the problem in your code with the help of the compiler, while std::max(a,b) is unlikely to cause undefined behavior (and even if it does, at least it isn't your fault ;).
For two ints, you can compute max(a, b) without branching using a technique you probably learnt at school:
a ^ ((a ^ b) & -(a < b));
But no sane person would write this in their code. Always use std::max and trust the compiler to pick the best way. You may well find it adopts the above for int arguments with optimisations set appropriately, although I suspect that a compare and jump is probably the best way on the whole, even at the expense of a pipeline flush.
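A minimal sketch of that bithack, purely for illustration (the function name is mine); it relies on (a < b) being 0 or 1, so -(a < b) is either 0 or an all-ones mask:
int branchlessMax(int a, int b)
{
int mask = -(a < b); // 0 if a >= b, ~0 (all ones) if a < b
// mask == 0  -> a ^ 0       == a
// mask == ~0 -> a ^ (a ^ b) == b
return a ^ ((a ^ b) & mask);
}
In real code, std::max(a, b) remains the readable, correct-by-default choice.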
Using std::max gives the compiler the best optimisation hint.
Implementation 1 performs well on a CISC CPU like a modern x64 AMD/Intel CPU.
Implementation 2 (the branchless one) performs well on a GPU, such as those from NVIDIA or AMD, where divergent branches are more expensive.
The term "performs well" is only significant in a tight loop.

C++ inline asm move WCHAR in 32-bit register

I am trying to practice inline ASM in C++ :) Maybe it is outdated, but it is interesting to know how the CPU executes the code.
What I am trying to do here is to loop through the processes and get a handle to the one I need :) For that I am using the existing functions from tlhelp32.
I have this code:
HANDLE RetHandle = nullptr, snap;
int SizeOfPE = sizeof(PROCESSENTRY32), pid; PROCESSENTRY32 pe;
int PA = PROCESS_ALL_ACCESS;
const char* Pname = "explorer.exe";
__asm
{
mov eax, pe
mov ebx, this
mov ecx, [ebx]pe.dwSize
mov ecx, SizeOfPE
mov[ebx]pe.dwSize, ecx
mov eax, PA
mov ebx,0
call CreateToolhelp32Snapshot
mov eax,snap
label1:
mov eax, snap
mov ebx, [pe]
call Process32First
cmp eax,1
jne exitLabel
Process32NextLoop:
mov eax, snap
mov ebx, [pe]
call Process32Next
cmp eax, 1
jne Process32NextLoop
mov edx, pe
mov ecx, [edx].szExeFile
cmp ecx, Pname
je ExitLoop
jne Process32NextLoop
ExitLoop:
mov eax, [ebx].th32ProcessID
mov pid, eax
ExitLabel:
ret
}
Apparently, it is throwing an error on th32ProcessID as well, even though it is just a regular int.
I have been searching, but haven't found the equivalent of movl in C++.
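For comparison, this is what the same process walk looks like in plain C++ using the Toolhelp32 API the question refers to (a sketch of my own; it assumes a non-UNICODE build so szExeFile is a char array, matching the const char* Pname in the question, and keeps error handling minimal):
#include <windows.h>
#include <tlhelp32.h>
#include <cstring>
HANDLE OpenProcessByName(const char* name)
{
HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
if (snap == INVALID_HANDLE_VALUE)
return nullptr;
PROCESSENTRY32 pe;
pe.dwSize = sizeof(pe); // dwSize must be set before Process32First
HANDLE result = nullptr;
if (Process32First(snap, &pe)) {
do {
if (_stricmp(pe.szExeFile, name) == 0) { // case-insensitive name compare
result = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pe.th32ProcessID);
break;
}
} while (Process32Next(snap, &pe));
}
CloseHandle(snap);
return result;
}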

Understanding what clang is doing in assembly, decrementing for a loop that is incrementing

Consider the following code, in C++:
#include <cstdlib>
std::size_t count(std::size_t n)
{
std::size_t i = 0;
while (i < n) {
asm volatile("": : :"memory");
++i;
}
return i;
}
int main(int argc, char* argv[])
{
return count(argc > 1 ? std::atoll(argv[1]) : 1);
}
It is just a loop that increments its value and returns it at the end. The asm volatile prevents the loop from being optimized away. We compile it under g++ 8.1 and clang++ 5.0 with the arguments -Wall -Wextra -std=c++11 -g -O3.
Now, if we look at what compiler explorer is producing, we have, for g++:
count(unsigned long):
mov rax, rdi
test rdi, rdi
je .L2
xor edx, edx
.L3:
add rdx, 1
cmp rax, rdx
jne .L3
.L2:
ret
main:
mov eax, 1
xor edx, edx
cmp edi, 1
jg .L25
.L21:
add rdx, 1
cmp rdx, rax
jb .L21
mov eax, edx
ret
.L25:
push rcx
mov rdi, QWORD PTR [rsi+8]
mov edx, 10
xor esi, esi
call strtoll
mov rdx, rax
test rax, rax
je .L11
xor edx, edx
.L12:
add rdx, 1
cmp rdx, rax
jb .L12
.L11:
mov eax, edx
pop rdx
ret
and for clang++:
count(unsigned long): # #count(unsigned long)
test rdi, rdi
je .LBB0_1
mov rax, rdi
.LBB0_3: # =>This Inner Loop Header: Depth=1
dec rax
jne .LBB0_3
mov rax, rdi
ret
.LBB0_1:
xor edi, edi
mov rax, rdi
ret
main: # #main
push rbx
cmp edi, 2
jl .LBB1_1
mov rdi, qword ptr [rsi + 8]
xor ebx, ebx
xor esi, esi
mov edx, 10
call strtoll
test rax, rax
jne .LBB1_3
mov eax, ebx
pop rbx
ret
.LBB1_1:
mov eax, 1
.LBB1_3:
mov rcx, rax
.LBB1_4: # =>This Inner Loop Header: Depth=1
dec rcx
jne .LBB1_4
mov rbx, rax
mov eax, ebx
pop rbx
ret
Understanding the code generated by g++ is not that complicated; the loop is:
.L3:
add rdx, 1
cmp rax, rdx
jne .L3
every iteration increments rdx and compares it to rax, which holds the loop bound.
Now, I have no idea what clang++ is doing. Apparently it uses dec, which is weird to me, and I don't even understand where the actual loop is. My question is the following: what is clang doing?
(I am looking for comments about the clang assembly code to describe what is done at each step and how it actually works).
The effect of the function is to return n, either by counting up to n and returning the result, or by simply returning the passed-in value of n. The clang code does the latter. The counting loop is here:
mov rax, rdi
.LBB0_3: # =>This Inner Loop Header: Depth=1
dec rax
jne .LBB0_3
mov rax, rdi
ret
It begins by copying the value of n into rax. It decrements the value in rax, and if the result is not 0, it jumps back to .LBB0_3. If the value is 0 it falls through to the next instruction, which copies the original value of n into rax and returns.
There is no i stored, but the code does the loop the prescribed number of times, and returns the value that i would have had, namely, n.
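In C++ terms, clang's count is roughly equivalent to the following sketch (a hand-written illustration of the generated code's logic, not something the compiler literally emits):
#include <cstddef>
std::size_t count_like_clang(std::size_t n)
{
if (n == 0)
return 0;
std::size_t i = n;
do {
// body of the original loop (the asm volatile barrier keeps it from being deleted)
} while (--i != 0); // the dec/jne pair: run exactly n iterations
return n; // i's final value is provably n, so just return n
}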

Access violation on 'ret' instruction

I've got this function, which consists mostly of inline asm.
long *toarrayl(int members, ...){
__asm{
push esp
mov eax, members
imul eax, 4
push eax
call malloc
mov edx, eax
mov edi, eax
xor ecx, ecx
xor esi, esi
loopx:
cmp ecx, members
je done
mov esi, 4
imul esi, ecx
add esi, ebp
mov eax, [esi+0xC]
mov [edi], eax
inc ecx
add edi, 4
jmp loopx
done:
mov eax, edx
pop esp
ret
}
}
And upon running, I get an access violation on the return instruction.
I'm using VC++ 6, and it sometimes actually means the line above, so possibly it is on 'pop esp'.
If you could help me out, it'd be great.
Thanks, iDomo.
You are failing to manage the stack pointer correctly. In particular, your call to malloc unbalances the stack, and your pop esp ends up popping the wrong value into esp. The access violation therefore occurs when you try to ret from an invalid stack (the CPU cannot read the return address). It's unclear why you are pushing and popping esp; that accomplishes nothing.
As you spotted, you should never use the instruction POP ESP - when you see that in code, you know something extremely wrong has happened. Of course, calling malloc inside assembler code is also rather a bad thing to do - you have, for example, forgotten to check whether it returned NULL, so you may well crash. Keep that outside your assembler - and check for NULL; it's much easier to debug "Couldn't allocate memory at line 54 in file mycode.c" than "Somewhere in the assembler, we got a crash".
Here are some suggestions for improvement, which should also speed up your loop a bit:
long *toarrayl(int members, ...){
__asm{
mov eax, members
imul eax, 4
push eax
call malloc
add esp, 4 ; cdecl: the caller removes the argument it pushed
mov edi, eax ; edi = destination; keep the malloc result in eax as the return value
mov ecx, members ; loop counter (assumes members >= 1)
lea esi, [ebp+0xC] ; address of the first variadic argument
loopx:
mov edx, [esi] ; copy one argument without clobbering eax
mov [edi], edx
add edi, 4
add esi, 4
dec ecx
jnz loopx
; eax still holds the malloc'ed buffer; MSVC uses EAX as the function's return value
}
}
Improvements: remove the multiply by four on every iteration and just step esi by 4 instead. Count down in ecx instead of counting up, loading it with members before the loop; this allows one jump in the loop rather than two. Remove the redundant move through edx and keep the result in eax directly.
I've figured out the answer on my own.
For those who have had this same or a similar problem:
The actual exception was occurring after the user code, when VC++ automatically pops/restores the registers to their states before the function was called. Since I misaligned the stack pointer when calling malloc, there was an access violation when popping from the stack. I wasn't able to see this in the editor because it wasn't my code, so it was just shown as the last line of my code in the function.
To correct this, just add an add esp, (size of the parameters of the previous call) after the calls you make.
Fixed code:
long *toarrayl(int members, ...){
__asm{
mov eax, members
imul eax, 4
push eax
call malloc
add esp, 4
mov edx, eax
mov edi, eax
xor ecx, ecx
xor esi, esi
loopx:
cmp ecx, members
je done
mov esi, 4
imul esi, ecx
add esi, ebp
mov eax, [esi+0xC]
mov [edi], eax
inc ecx
add edi, 4
jmp loopx
done:
mov eax, edx
ret
}
//return (long*)0;
}
Optimized code:
long *toarrayl(int members, ...){
__asm{
mov eax, members
shl eax, 2
push eax
call malloc
add esp, 4
;cmp eax, 0
;je _error
mov edi, eax
mov ecx, members
lea esi, [ebp+0xC]
loopx:
mov edx, [esi]
mov [edi], edx
add edi, 4
add esi, 4
dec ecx
jnz loopx
}
}
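A usage sketch (mine, not from the original post): the buffer comes from malloc inside the asm block, so release it with free (from <cstdlib>) rather than delete[]:
long *vals = toarrayl(3, 10L, 20L, 30L); // copies the three variadic arguments into a new buffer
// ... use vals[0], vals[1], vals[2] ...
free(vals); // allocated with malloc, so free(), not delete[]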

Call not returning properly [X86_ASM]

This is C++ using x86 inline assembly [Intel syntax]
Function:
DWORD *Call ( size_t lArgs, ... ){
DWORD *_ret = new DWORD[lArgs];
__asm {
xor edx, edx
xor esi, esi
xor edi, edi
inc edx
start:
cmp edx, lArgs
je end
push eax
push edx
push esi
mov esi, 0x04
imul esi, edx
mov ecx, esi
add ecx, _ret
push ecx
call dword ptr[ebp+esi] //Doesn't return to the next instruction, returns to the caller of the parent function.
pop ecx
mov [ecx], eax
pop eax
pop edx
pop esi
inc edx
jmp start
end:
mov eax, _ret
ret
}
}
The purpose of this function is to call multiple functions/addresses without calling them individually.
Why am I having you debug it? I have to start school for the day, and I need to have it done by evening.
Thanks a lot, iDomo
Thank you for a complete compilable example; it makes solving problems much easier.
According to your Call function signature, when the stack frame is set up, lArgs is at ebp+8 and the pointers start at ebp+0xC. And you have a few other issues. Here's a corrected version with some push/pop optimizations and cleanup, tested on MSVC 2010 (16.00.40219.01):
DWORD *Call ( size_t lArgs, ... ) {
DWORD *_ret = new DWORD[lArgs];
__asm {
xor edx, edx
xor esi, esi
xor edi, edi
inc edx
push esi
start:
cmp edx, lArgs
; since you started counting at 1 instead of 0
; you need to stop *after* reaching lArgs
ja end
push edx
; you're trying to call [ebp+0xC+edx*4-4]
; a simpler way of expressing that - 4*edx + 8
; (4*edx is the same as edx << 2)
mov esi, edx
shl esi, 2
add esi, 0x8
call dword ptr[ebp+esi]
; and here you want to write the return value
; (which, btw, your printfs don't produce, so you'll get garbage)
; into _ret[edx*4-4] , which equals ret[esi - 0xC]
add esi, _ret
sub esi, 0xC
mov [esi], eax
pop edx
inc edx
jmp start
end:
pop esi
mov eax, _ret
; ret ; let the compiler clean up, because it created a stack frame and allocated space for the _ret pointer
}
}
And don't forget to delete[] the memory returned from this function after you're done.
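For reference, the same intent in plain C++ without inline asm would look roughly like this (a sketch of my own; it assumes each variadic argument is a pointer to a function taking no arguments and returning a DWORD, which is how the asm version uses them):
#include <windows.h>
#include <cstdarg>
#include <cstddef>
DWORD *CallAll(size_t lArgs, ...)
{
DWORD *results = new DWORD[lArgs];
va_list ap;
va_start(ap, lArgs);
for (size_t i = 0; i < lArgs; ++i) {
typedef DWORD (*Fn)(); // assumed signature of each passed-in function
Fn fn = va_arg(ap, Fn);
results[i] = fn(); // store each return value, like mov [esi], eax in the asm
}
va_end(ap);
return results; // caller must delete[] it, as noted above
}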
I notice that, before calling, you push EAX, EDX, ESI, ECX (in order), but don't pop in the reverse order after returning. If the first CALL returns properly, but subsequent ones don't, that could be the issue.