I am currently analyzing a binary and came across the following three instructions:
movzx ecx, byte [rax+r9]
movzx edx, byte [rbx+r9]
lea ecx, [rcx+rdx]
The meaning of each of these instructions is clear to me, but the first and third in combination do not make sense to me. The first movzx copies the byte at [rax+r9] into ecx, and then ecx seems to be overwritten again by the lea instruction. Why do we need the first movzx here?
I guess I am just missing something and this is a nasty compiler trick, so I appreciate any help.
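One thing worth noting is that lea ecx, [rcx+rdx] reads rcx as one of its addends, so the result of the first movzx is consumed rather than discarded. A purely hypothetical C++ pattern that could compile to these three instructions (the names and types below are my own invention, not taken from the binary) is:
#include <cstddef>
// Hypothetical source shape: add corresponding bytes of two buffers, each
// zero-extended to int first. rax/rbx would hold the two base pointers and
// r9 the shared index.
int add_bytes(const unsigned char* a, const unsigned char* b, std::size_t i) {
    int x = a[i];    // movzx ecx, byte [rax+r9]
    int y = b[i];    // movzx edx, byte [rbx+r9]
    return x + y;    // lea ecx, [rcx+rdx] -- an add that also reads rcx
}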
Related
Sometimes compilers generate code with weird instruction duplications that can safely be removed. Consider the following piece of code:
int gcd(unsigned x, unsigned y) {
    return x == 0 ? y : gcd(y % x, x);
}
Here is the assembly code (generated by clang 5.0 with optimizations enabled):
gcd(unsigned int, unsigned int): # #gcd(unsigned int, unsigned int)
mov eax, esi
mov edx, edi
test edx, edx
je .LBB0_1
.LBB0_2: # =>This Inner Loop Header: Depth=1
mov ecx, edx
xor edx, edx
div ecx
test edx, edx
mov eax, ecx
jne .LBB0_2
mov eax, ecx
ret
.LBB0_1:
ret
In the following snippet:
mov eax, ecx
jne .LBB0_2
mov eax, ecx
If the jump doesn't happen, eax is reassigned for no obvious reason.
The other example is the two ret's at the end of the function: one would work perfectly well.
Is the compiler simply not intelligent enough, or is there a reason not to remove the duplications?
Compilers can perform optimisations that are not obvious to people and removing instructions does not always make things faster.
A small amount of searching shows that various AMD processors have branch prediction problems when a RET is immediately after a conditional branch. By filling that slot with what is essentially a no-op, the performance problem is avoided.
Update:
For an example reference, section 6.2 of the "Software Optimization Guide for AMD64 Processors" (see http://support.amd.com/TechDocs/25112.PDF) says:
Specifically, avoid the following two situations:
Any kind of branch (either conditional or unconditional) that has the single-byte near-return RET instruction as its target. See “Examples.”
A conditional branch that occurs in the code directly before the single-byte near-return RET instruction.
It also goes into detail on why jump targets should be aligned, which likely also explains the duplicate RETs at the end of the function.
Any compiler will have a bunch of transformations for register renaming, unrolling, hoisting, and so on. Combining their outputs can lead to suboptimal cases such as the one you have shown. Marc Glisse offers good advice: it's worth a bug report. You are describing an opportunity for a peephole optimizer to discard instructions that either
don't affect the state of registers & memory at all, or
don't affect any state that matters for the function's post-conditions, i.e. anything observable through its public API.
Sounds like an opportunity for symbolic execution techniques. If the constraint solver finds no branch points for a given MOV, perhaps it is really a NOP.
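A much simpler, purely local flavour of the same idea can be sketched in a few dozen lines. The following is a toy illustration with made-up names and a made-up instruction representation (not how any real compiler pass is written, and ignoring the micro-architectural reason above for keeping the duplicate): delete a mov whose destination provably already holds its source, and forget everything at labels, since control may arrive there from unexamined code.
#include <iostream>
#include <map>
#include <string>
#include <vector>
// Toy instruction: either a label (a possible branch target) or an operation
// with at most one written and one read register.
struct Insn {
    std::string op;   // "label", "mov", "jne", "ret", ...
    std::string dst;  // register written, "" if none
    std::string src;  // register read, "" if none
};
// Drop a "mov dst, src" when dst provably already holds src. Knowledge is
// discarded at labels and whenever a register is rewritten.
std::vector<Insn> drop_redundant_movs(const std::vector<Insn>& code) {
    std::vector<Insn> out;
    std::map<std::string, std::string> copy_of;  // reg -> reg it currently duplicates
    auto forget = [&](const std::string& reg) {
        copy_of.erase(reg);
        for (auto it = copy_of.begin(); it != copy_of.end(); ) {
            if (it->second == reg) it = copy_of.erase(it);
            else ++it;
        }
    };
    for (const Insn& in : code) {
        if (in.op == "label") {
            copy_of.clear();                 // unknown predecessors: forget everything
        } else if (in.op == "mov") {
            auto it = copy_of.find(in.dst);
            if (it != copy_of.end() && it->second == in.src)
                continue;                    // redundant copy: drop the instruction
            forget(in.dst);                  // dst now holds a fresh copy of src
            copy_of[in.dst] = in.src;
        } else if (!in.dst.empty()) {
            forget(in.dst);                  // conservative: any other write invalidates
        }
        out.push_back(in);
    }
    return out;
}
int main() {
    // The tail of the gcd loop from the question: the second mov is redundant
    // on the fall-through path, because jne writes no register.
    std::vector<Insn> code = {
        {"mov", "eax", "ecx"},
        {"jne", "", ""},
        {"mov", "eax", "ecx"},
        {"ret", "", ""},
    };
    for (const Insn& in : drop_redundant_movs(code))
        std::cout << in.op << ' ' << in.dst << ' ' << in.src << '\n';
}
Run on the tail of the gcd loop, this drops the second mov eax, ecx, which is exactly the instruction the question flags as redundant; whether a real compiler should actually do so is the trade-off discussed above.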
I have some unknown C++ code that was compiled in Release build, so it's optimized. The point I'm struggling with is:
xor al, al
add esp, 8
cmp byte ptr [ebp+userinput], 31h
movzx eax, al
This is my understanding:
xor al, al ; set eax to 0x??????00 (clear last byte)
add esp, 8 ; for some unclear reason, set the stack pointer higher
cmp byte ptr [ebp+userinput], 31h ; set zero flag if user input was "1"
movzx eax, al ; set eax to AL and extend with zeros, so eax = 0x000000??
I don't care about lines 2 and 3. They might be there in this order for pipelining reasons and IMHO have nothing to do with EAX.
However, I don't understand why I would clear AL first, just to clear the rest of EAX later. The result will IMHO always be EAX = 0, so this could also be
xor eax, eax
instead. What is the advantage or "optimization" of that piece of code?
Some background info:
I will get the source code later. It's a short C++ console demo program, maybe 20 lines of code only, so nothing that I would call "complex" code. IDA shows a single loop in that program, but not around this piece. The Stud_PE signature scan didn't find anything, but it's likely the Visual Studio 2013 or 2015 compiler.
xor al,al is already slower than xor eax,eax on most CPUs. For example, on Haswell/Skylake it needs an ALU uop and doesn't break the dependency on the old value of eax/rax. It's about equally bad on AMD CPUs, or on Atom/Silvermont. (Well, maybe not exactly equal, because AMD doesn't eliminate xor eax,eax at issue/rename, but xor al,al still carries a false dependency that could serialize the new dependency chain with whatever used eax last.)
On CPUs that do rename al separately from the rest of the register (Intel pre-IvyBridge), the xor al,al may still be recognized as a zeroing idiom, but unless you actively want to preserve the upper bytes of the register, the best way to zero al is xor eax,eax.
Doing movzx on top of that just makes it even worse.
I'm guessing your compiler somehow got confused and decided it needed a 1-byte zero, but then realized it needed to promote it to 32 bits. xor sets flags, so it couldn't xor-zero after the cmp, and it failed to notice that it could have just xor-zeroed eax before the cmp.
Either that or it's something like Jester's suggestion, where the movzx is a branch target. Even if that's the case, xor eax,eax would still have been better because zero-extending into eax follows unconditionally on this code path.
I'm curious what compiler produced this from what source.
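For what it's worth, a purely hypothetical source shape that commonly produces a byte compare followed by a bool widened with movzx (this is an assumption for illustration, not the actual program) would be:
// Hypothetical reconstruction, not the real source: compare a char against '1'
// (0x31) and return the bool widened to int. Typical codegen for the return is
// sete al ; movzx eax, al -- which would explain the movzx, though not the
// xor al,al seen in the snippet above.
int entered_one(char userinput) {
    bool ok = (userinput == 0x31);   // cmp byte ptr [...], 31h
    return ok;                       // movzx eax, al
}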
I am a teaching assistant for computer science and one of my students submitted the following code to check whether an integer is odd or even:
int is_odd (int i) {
    if((i % 2 == 1) && (i % 2 == -1));
    else;
}
Surprisingly (at least for me) this code gives correct results. I tested numbers up to 100000000, and I honestly cannot explain why this code is behaving as it does.
We are using GCC 6.2.1 and C++.
I know that this is not a typical question for SO, but I hope to find some help.
Flowing off the end of a value-returning function without returning anything is undefined behaviour, regardless of what happens to work with your compiler. Note that if you pass -O3 to GCC, or use Clang, you get different results.
As for why you actually see the "correct" answer, this is the x86 assembly which GCC 6.2 produces at -O0:
push rbp
mov rbp, rsp
mov DWORD PTR [rbp-4], edi
mov eax, DWORD PTR [rbp-4]
cdq
shr edx, 31
add eax, edx
and eax, 1
sub eax, edx
cmp eax, 1
nop
pop rbp
ret
Don't worry if you can't read x86. The important thing to note is that eax is used for the return value, and all the intermediate calculations for the if statement use eax as their destination. So when the function exits, eax just happens to have the result of the branch check in it.
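For reference, the cdq / shr / add / and / sub sequence is the usual branch-free way of computing the signed remainder i % 2; rewritten in C++ purely as an illustration (this is not anything taken from the source):
// What the -O0 assembly computes: edx receives the sign bit of i, and eax
// ends up holding i % 2 (0 for even inputs, +1 or -1 for odd ones).
int remainder_mod_2(int i) {
    int sign = static_cast<unsigned>(i) >> 31;  // cdq ; shr edx, 31
    return ((i + sign) & 1) - sign;             // add ; and ; sub
}
The final cmp eax, 1 only sets flags, so eax still holds that remainder when the function returns, which is why odd inputs happen to look "true" to the caller.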
Of course, this is all a purely academic discussion; the student's code is wrong and I'd certainly give it zero marks, regardless of whether it passes whatever tests you run it through.
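For completeness, one well-defined way to write the check (not the student's code) is simply:
// A correct replacement that actually returns a value.
int is_odd(int i) {
    return i % 2 != 0;   // also handles negative i, where i % 2 can be -1
}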
I was trying to build a function in assembly (FASM) that uses more than 4 parameters. In x86 it works fine, but I know that in x64 with fastcall you have to spill the parameters into the shadow space in the order rcx, rdx, r8, r9. I read that parameters 5 and up have to be passed on the stack, but I don't know how to do this. This is what I tried, but it keeps saying invalid operand. I know that I am handling the first 4 parameters correctly, because I have made x64 functions before, but it is the last 3 I don't know how to spill.
proc substr,inputstring,outputstring,buffer1,buffer2,buffer3,startposition,length
;spill
mov [inputstring],rcx
mov [outputstring],rdx
mov [buffer1],r8
mov [buffer2],r9
mov [buffer3],[rsp+8*4]
mov [startposition],[rsp+8*5]
mov [length],[rsp+8*6]
If I try
mov [buffer3],rsp+8*4
it says extra characters on the line.
I also saw that some people use rsp+20h, rsp+28h, etc., but that does not work either.
How do I pass more than 4 parameters using fastcall on x64?
Also, do I have to make room on the stack? I saw that some people put add rsp,20h right before their spill code. I tried that and it did not help with the invalid operand.
thanks
Update
After playing around with it for a little bit, I found that the only way it seems to work is if I spill the first 4 parameters and then ignore the rest (5 and up):
proc substr,inputstring,outputstring,buffer1,buffer2,buffer3,startposition,length
;spill
mov [inputstring],rcx
mov [outputstring],rdx
mov [buffer1],r8
mov [buffer2],r9
;start the regular code. ignore spilling buffer3,startposition and length
On x86/x64 CPUs, the following instructions do not exist, because mov cannot copy directly from memory to memory:
mov [buffer3],[rsp+8*4]
mov [startposition],[rsp+8*5]
mov [length],[rsp+8*6]
Workaround: use the rax register to read each value from the stack and then write it to the memory location:
mov rax,[rsp+8*4]
mov [buffer3],rax
mov rax,[rsp+8*5]
mov [startposition],rax
mov rax,[rsp+8*6]
mov [length],rax
I've been trying to convert this code to C++ without any inlining, and I cannot figure it out.
Say you have this line:
sub edx, (offset loc_42C1F5+5)
Hex-Rays gives me
edx -= (uint)((char*)loc_42C1F5 + 5);
But how would it really look without the loc_42C1F5 symbol?
I would think it would be
edx -= 0x42C1FA;
But is that correct? (I can't really step through this code in any assembly-level debugger, as it's well protected.)
loc_42C1F5 is actually a label:
seg000:0042C1F5 loc_42C1F5: ; DATA XREF: sub_4464A0+2B5o
seg000:0042C1F5 mov edx, [esi+4D98h]
seg000:0042C1FB lea ebx, [esi+4D78h]
seg000:0042C201 xor eax, eax
seg000:0042C203 xor ecx, ecx
seg000:0042C205 mov [ebx], eax
loc_42C1F5 is a symbol. Given the information you've provided, I cannot say what its offset is. It may be 0x42C1F5 or it may be something else entirely.
If it is 0x42C1F5, then your translation should be correct.
IDA has incorrectly identified 0x42C1FA as an offset, and Hex-Rays used that interpretation. Just convert it to a plain number (press O) and all will be well. That's why it's called the Interactive Disassembler :)