I used an online compiler to build a simple C++ program:
int main()
{
int a = 4;
int&& b = 2;
}
and the main function part of the assembly code compiled by GCC 11.2 is shown below:
main:
push rbp
mov rbp, rsp
mov DWORD PTR [rbp-4], 4
mov eax, 2
mov DWORD PTR [rbp-20], eax
lea rax, [rbp-20]
mov QWORD PTR [rbp-16], rax
mov eax, 0
pop rbp
ret
I notice that when initializing a, the instruction simply moves an immediate operand directly to memory, while for the r-value reference b it first stores the immediate value into the register eax and then moves it to memory. There is also an unused 4-byte region of the stack between [rbp-8] and [rbp-4]. My thinking is that an immediate value has to exist somewhere, or perhaps it is initialized by some other mechanism (my guess); I want to know more about the underlying logic.
So my questions are:
Why does the initialization differ?
Why is there an empty, unused 4-byte region on the stack?
Let me address the second question first.
Note that there are actually three objects defined in this function: the int variable a, the reference b (implemented as a pointer), and the unnamed temporary int with a value of 2 that b points to. In unoptimized compilation, each of these objects needs to be stored at some unique location on the stack, and the compiler allocates stack space naively, processing the variables one by one and assigning each one space below the previous. It evidently chooses to handle them in the following order:
The variable a, an int needing 4 bytes. It goes in the first available stack slot, at [rbp-4].
The reference b, stored as a pointer needing 8 bytes. You might think it would go at [rbp-12], but the x86-64 ABI requires that pointers be naturally aligned on 8-byte boundaries. So the compiler moves down another 4 bytes to achieve this alignment, putting b at [rbp-16]. The 4 bytes at [rbp-8] are unused so far.
The temporary int, also needing 4 bytes. The compiler puts it right below the previously placed variable, at [rbp-20]. True, there was space at [rbp-8] that could have been used instead, which would be more efficient; but since you told the compiler not to optimize, it doesn't perform this optimization. It would if you used one of the -O flags.
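The same 8-byte alignment requirement is easier to see in struct layout. Below is a minimal C++ sketch (a hypothetical Frame struct, not what GCC literally does for stack slots) showing how a 4-byte int followed by an 8-byte pointer forces 4 bytes of padding:
#include <cstddef>
#include <cstdio>
// Hypothetical mirror of the three objects: a, the pointer behind b, and the temporary.
// The pointer member must be 8-byte aligned, so 4 bytes of padding appear after a.
struct Frame {
    int  a;     // offset 0
    int* b;     // offset 8 (4 bytes of padding before it)
    int  temp;  // offset 16
};
int main() {
    std::printf("a: %zu  b: %zu  temp: %zu  sizeof: %zu\n",
                offsetof(Frame, a), offsetof(Frame, b),
                offsetof(Frame, temp), sizeof(Frame));
    // On x86-64 this typically prints: a: 0  b: 8  temp: 16  sizeof: 24
}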
As to why a is initialized with an immediate store to memory, whereas the temporary is initialized via a register: to really answer this, you'd have to read the details of the GCC source code, and frankly I don't think you'll find that there is anything very interesting behind it. Presumably there are different code paths in the compiler for creating and initializing named variables versus temporaries, and the code for temporaries may happen to be written as two steps.
It may be that for convenience, the programmer chose to create an extra object in the intermediate representation (GIMPLE or RTL), perhaps because it simplifies the compiler code in handling more general cases. They wouldn't take any trouble to avoid this, because they know that later optimization passes will clean it up. But if you have optimization turned off, this doesn't happen and you get actual instructions emitted for this unnecessary transfer.
In
int a = 4;
you declare a (typically) 4-byte variable and ask the compiler to fill it with the bit representation of 4.
In
int&& b = 2;
you declare a reference ("r-value reference") to, well, to what? To a literal? Is it possible? In C++ references are typically translated, on the assembly level, into pointers. So one can expect that b will be "a pointer in disguise", that is, without the * and -> semantics. But it will likely occupy 64 bits on a 64-bit machine. Now, pointers must point to some memory stored in RAM, not in registers, cache(s) etc. So the compiler most likely creates a temporary (unnamed) integer, initializes it with 2, and then binds its address to b. I write "most likely" because I doubt the standard standardizes this in such great detail. What we know for sure is that there is an extra unnamed variable involved in the initialization of b in int&& b = 2;.
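To make that concrete, here is roughly what the compiler has to materialize for int&& b = 2; written out by hand (just a sketch: the names __tmp and __b_ptr are invented, and the real temporary's lifetime is extended to match b's):
int main()
{
    int&& b = 2;            // what the question writes
    // Roughly the equivalent, spelled out with invented names:
    int  __tmp   = 2;       // the unnamed temporary -- [rbp-20] in the asm above
    int* __b_ptr = &__tmp;  // the reference itself, stored as a pointer -- [rbp-16]
    // Both read the same value through one level of indirection.
    return (b == *__b_ptr) ? 0 : 1;   // returns 0
}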
As for the assembler, I have too little knowledge of it to dare explain anything to you. I guess, however, that the concept of a temporary variable and a pointer behind the && reference solves all your problems here.
I am currently learning x86 assembly. However, something is still not clear to me about using the stack for function calls. I understand that the call instruction pushes the return address onto the stack and then loads the program counter with the address of the function to call. The ret instruction loads this address back into the program counter.
My confusion is, does it matter when the ret instruction is called within the procedure/function? Will it always find the correct return address stored on the stack, or must the stack pointer be currently pointing to where the return address was stored? If that's the case, can't we just use push and pop instead of call and ret?
For example, the code below could be at the start of a function. If we push various registers onto the stack, must the ret instruction only be executed after those registers have been popped in reverse order, so that after the pop %ebp instruction the stack pointer points to the place on the stack where the return address is? Or will it still find the return address regardless of where it is called? Thanks in advance.
push %ebp
mov %esp, %ebp
//push other registers
...
//pop other registers
mov %ebp, %esp
// (could the ret instruction go here, for example, and still pop the correct return address?)
pop %ebp
ret
You must leave the stack and non-volatile registers as you found them. The calling function has no clue what you might have done with them otherwise - the calling function will simply continue to its next instruction after ret. Only ret after you're done cleaning up.
ret will always look to the top of the stack for its return address and will pop it into EIP. If the ret is a "far" return then it will also pop the code segment into the CS register (which would also have been pushed by call for a "far" call). Since these are the first things pushed by call, they must be the last things popped by ret. Otherwise you'll end up returning somewhere undefined.
The CPU has no idea what a function is. The ret instruction simply fetches the value from the memory pointed to by esp and jumps there. For example, you can do things like this (to illustrate that the CPU is not interested in how you structurally organize your source code):
; slow alternative to "jmp continue_there_address"
push continue_there_address
ret
continue_there_address:
...
Also, you don't need to restore the registers from the stack (or even restore them to their original registers); as long as esp points to the return address when ret is executed, it will be used:
call SomeFunction
...
SomeFunction:
push eax
push ebx
push ecx
add esp,8 ; forget about last 2 push
pop ecx ; ecx = original eax
ret ; returns back after call
If your function should be callable from other parts of the code, you may still want to store/restore the registers as required by the calling convention of the platform you are programming for, so that from the caller's point of view you don't modify register values that should be preserved. But none of that bothers the CPU when executing the ret instruction: it just loads the value from the stack ([esp]) and jumps there.
Also, when the return address is stored on the stack, it does not differ in any way from other values pushed onto the stack; all of them are just values written to memory. So ret has no way to somehow find a "return address" on the stack and skip over "values": to the CPU the values in memory all look the same, each 32-bit value is just that, a 32-bit value. Whether it was stored by call, push, mov, or something else doesn't matter; that information (the origin of a value) is not stored, only the value itself.
If that's the case, can't we just use push and pop instead of call and ret?
You can certainly push a preferred return address onto the stack (my first example). But you can't do pop eip; there's no such instruction. That is effectively what ret does, so pop eip would amount to the same thing, but no such mnemonic or encoding exists in the pop family. You can of course pop the return address into a different register, like eax, and then do jmp eax, as a slow ret alternative (which also modifies eax).
That said, complex modern x86 CPUs do keep track of call/ret pairings (to predict where the next ret will return, so they can prefetch the code ahead quickly), so if you use one of those alternative, non-standard approaches, at some point the CPU will realize its return-address prediction is out of sync with the real state, and it will have to drop those prefetches and re-fetch everything from the real eip value. So you may pay a performance penalty for confusing it.
In the example code, if the return was done before pop %ebp, it would attempt to return to the "address" that was in ebp at the start of the function, which would be the wrong address to return to.
In the assembly of the C++ source below, why is RAX pushed onto the stack?
RAX, as I understand it from the ABI, could contain anything from the calling function. But we save it here, and then later move the stack pointer back by 8 bytes. So the RAX on the stack is, I think, only relevant for the std::__throw_bad_function_call() operation...?
The code:-
#include <functional>
void f(std::function<void()> a)
{
a();
}
Output, from gcc.godbolt.org, using Clang 3.7.1 -O3:
f(std::function<void ()>): # #f(std::function<void ()>)
push rax
cmp qword ptr [rdi + 16], 0
je .LBB0_1
add rsp, 8
jmp qword ptr [rdi + 24] # TAILCALL
.LBB0_1:
call std::__throw_bad_function_call()
I'm sure the reason is obvious, but I'm struggling to figure it out.
Here's a tailcall without the std::function<void()> wrapper for comparison:
void g(void(*a)())
{
a();
}
The trivial:
g(void (*)()): # #g(void (*)())
jmp rdi # TAILCALL
The 64-bit ABI requires that the stack is aligned to 16 bytes before a call instruction.
call pushes an 8-byte return address on the stack, which breaks the alignment, so the compiler needs to do something to align the stack again to a multiple of 16 before the next call.
(The ABI design choice of requiring alignment before a call instead of after has the minor advantage that if any args were passed on the stack, this choice makes the first arg 16B-aligned.)
Pushing a don't-care value works well, and can be more efficient than sub rsp, 8 on CPUs with a stack engine. (See the comments).
The reason push rax is there is to align the stack back to a 16-byte boundary, conforming to the 64-bit System V ABI, in the case where the je .LBB0_1 branch is taken. The value placed on the stack isn't relevant. Another way would have been subtracting 8 from RSP with sub rsp, 8. The ABI states the alignment requirement this way:
The end of the input argument area shall be aligned on a 16 (32, if __m256 is
passed on stack) byte boundary. In other words, the value (%rsp + 8) is always
a multiple of 16 (32) when control is transferred to the function entry point. The stack pointer, %rsp, always points to the end of the latest allocated stack frame.
Prior to the call to function f the stack was 16-byte aligned per the calling convention. After control was transferred via a CALL to f, the return address was placed on the stack, misaligning the stack by 8. push rax is a simple way of subtracting 8 from RSP and realigning it again. If the branch is taken to call std::__throw_bad_function_call(), the stack will be properly aligned for that call to work.
In the case where the comparison falls through, the stack will appear just as it did at function entry once the add rsp, 8 instruction is executed. The return address of the CALLER to function f will now be back at the top of the stack and the stack will be misaligned by 8 again. This is what we want because a TAIL CALL is being made with jmp qword ptr [rdi + 24] to transfer control to the function a. This will JMP to the function not CALL it. When function a does a RET it will return directly back to the function that called f.
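Here is a source-level picture of what the generated code is doing (a sketch only: the offsets 16 and 24 are libstdc++ implementation details of std::function, and the explicit check below is exactly what the single line a(); already implies, since calling an empty std::function throws):
#include <functional>
// Roughly what f compiles to, spelled out at the C++ level.
void f_equivalent(std::function<void()> a)
{
    if (!a)                               // cmp qword ptr [rdi + 16], 0 -- is a callable stored?
        throw std::bad_function_call();   // je .LBB0_1 -> call std::__throw_bad_function_call()
    a();                                  // jmp qword ptr [rdi + 24]   -- tail-call the invoker
}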
At a higher optimization level I would have expected that the compiler should be smart enough to do the comparison, and let it fall through directly to the JMP. What is at label .LBB0_1 could then align the stack to a 16-byte boundary so that call std::__throw_bad_function_call() works properly.
As @CodyGray pointed out, if you use GCC (not CLANG) with an optimization level of -O2 or higher, the code produced does seem more reasonable. GCC 6.1 output from Godbolt is:
f(std::function<void ()>):
cmp QWORD PTR [rdi+16], 0 # MEM[(bool (*<T5fc5>) (union _Any_data &, const union _Any_data &, _Manager_operation) *)a_2(D) + 16B],
je .L7 #,
jmp [QWORD PTR [rdi+24]] # MEM[(const struct function *)a_2(D)]._M_invoker
.L7:
sub rsp, 8 #,
call std::__throw_bad_function_call() #
This code is more in line with what I would have expected. In this case it would appear that GCC's optimizer may handle this code generation better than CLANG.
In other cases, clang typically fixes up the stack before returning with a pop rcx.
Using push has an upside for efficiency in code-size (push is only 1 byte vs. 4 bytes for sub rsp, 8), and also in uops on Intel CPUs. (No need for a stack-sync uop, which you'd get if you access rsp directly because the call that brought us to the top of the current function makes the stack engine "dirty").
This long and rambling answer discusses the worst-case performance risks of using push rax / pop rcx for aligning the stack, and whether or not rax and rcx are good choices of register. (Sorry for making this so long.)
(TL:DR: looks good, the possible downside is usually small and the upside in the common case makes this worth it. Partial-register stalls could be a problem on Core2/Nehalem if al or ax are "dirty", though. No other 64-bit capable CPU has big problems (because they don't rename partial regs, or merge efficiently), and 32-bit code needs more than 1 extra push to align the stack by 16 for another call unless it was already saving/restoring some call-preserved regs for its own use.)
Using push rax instead of sub rsp, 8 introduces a dependency on the old value of rax, so you'd think it might slow things down if the value of rax is the result of a long-latency dependency chain (and/or a cache miss).
e.g. the caller might have done something slow with rax that's unrelated to the function args, like var = table[ x % y ]; var2 = foo(x);
# example caller that leaves RAX not-ready for a long time
mov rdi, rax ; prepare function arg
div rbx ; very high latency
mov rax, [table + rdx] ; rax = table[ value % something ], may miss in cache
mov [rsp + 24], rax ; spill the result.
call foo ; foo uses push rax to align the stack
Fortunately out-of-order execution will do a good job here.
The push doesn't make the value of rsp dependent on rax. (It's either handled by the stack engine, or on very old CPUs push decodes to multiple uops, one of which updates rsp independently of the uops that store rax. Micro-fusion of the store-address and store-data uops lets push be a single fused-domain uop, even though stores always take 2 unfused-domain uops.)
As long as nothing depends on the output of push rax / pop rcx, it's not a problem for out-of-order execution. If push rax has to wait because rax isn't ready, it won't cause the ROB (ReOrder Buffer) to fill up and eventually block the execution of later independent instructions. The ROB would fill up even without the push, because the instruction that's slow to produce rax, and whatever instruction in the caller consumes rax before the call, are even older and can't retire either until rax is ready. Retirement has to happen in-order in case of exceptions / interrupts.
(I don't think a cache-miss load can retire before the load completes, leaving just a load-buffer entry. But even if it could, it wouldn't make sense to produce a result in a call-clobbered register without reading it with another instruction before making a call. The caller's instruction that consumes rax definitely can't execute/retire until our push can do the same.)
When rax does become ready, push can execute and retire in a couple cycles, allowing later instructions (which were already executed out of order) to also retire. The store-address uop will have already executed, and I assume the store-data uop can complete in a cycle or two after being dispatched to the store port. Stores can retire as soon as the data is written to the store buffer. Commit to L1D happens after retirement, when the store is known to be non-speculative.
So even in the worst case, where the instruction that produces rax was so slow that it led to the ROB filling up with independent instructions that are mostly already executed and ready to retire, having to execute push rax only causes a couple extra cycles of delay before independent instructions after it can retire. (And some of the caller's instructions will retire first, making a bit of room in the ROB even before our push retires.)
A push rax that has to wait will tie up some other microarchitectural resources, leaving one fewer entry for finding parallelism between other later instructions. (A sub rsp, 8 that could execute would only be consuming a ROB entry, and not much else.)
It will use up one entry in the out-of-order scheduler (aka Reservation Station / RS). The store-address uop can execute as soon as there's a free cycle, so only the store-data uop will be left. The pop rcx uop's load address is ready, so it should dispatch to a load port and execute. (When the pop load executes, it finds that its address matches the incomplete push store in the store buffer (aka memory order buffer), so it sets up the store-forwarding which will happen after the store-data uop executes. This probably consumes a load buffer entry.)
Even an old CPU like Nehalem has a 36-entry RS, vs. 54 in Sandybridge, or 97 in Skylake. Keeping 1 entry occupied for longer than usual in rare cases is nothing to worry about. The alternative of executing two uops (stack-sync + sub) is worse.
(off topic)
The ROB is larger than the RS, 128 (Nehalem), 168 (Sandybridge), 224 (Skylake). (It holds fused-domain uops from issue to retirement, vs. the RS holding unfused-domain uops from issue to execution). At 4 uops per clock max frontend throughput, that's over 50 cycles of delay-hiding on Skylake. (Older uarches are less likely to sustain 4 uops per clock for as long...)
ROB size determines the out-of-order window for hiding a slow independent operation. (Unless register-file size limits are a smaller limit). RS size determines the out-of-order window for finding parallelism between two separate dependency chains. (e.g. consider a 200 uop loop body where every iteration is independent, but within each iteration it's one long dependency chain without much instruction-level parallelism (e.g. a[i] = complex_function(b[i])). Skylake's ROB can hold more than 1 iteration, but we can't get uops from the next iteration into the RS until we're within 97 uops of the end of the current one. If the dep chain wasn't so much larger than RS size, uops from 2 iterations could be in flight most of the time.)
There are cases where push rax / pop rcx can be more dangerous:
The caller of this function knows that rcx is call-clobbered, so won't read the value. But it might have a false dependency on rcx after we return, like bsf rcx, rax / jnz or test eax,eax / setz cl. Recent Intel CPUs don't rename low8 partial registers anymore, so setcc cl has a false dep on rcx. bsf actually leaves its destination unmodified if the source is 0, even though Intel documents it as an undefined value. AMD documents leave-unmodified behaviour.
The false dependency could create a loop-carried dep chain. On the other hand, a false dependency can do that anyway, if our function wrote rcx with instructions dependent on its inputs.
It would be worse to use push rbx/pop rbx to save/restore a call-preserved register that we weren't going to use. The caller likely would read it after we return, and we'd have introduced a store-forwarding latency into the caller's dependency chain for that register. (Also, it's maybe more likely that rbx would be written right before the call, since anything the caller wanted to keep across the call would be moved to call-preserved registers like rbx and rbp.)
On CPUs with partial-register stalls (Intel pre-Sandybridge), reading rax with push could cause a stall of 2-3 cycles on Core2 / Nehalem if the caller had done something like setcc al before the call. Sandybridge doesn't stall while inserting a merging uop, and Haswell and later don't rename low8 registers separately from rax at all.
It would be nice to push a register that was less likely to have had its low8 used. If compilers tried to avoid REX prefixes for code-size reasons, they'd avoid dil and sil, so rdi and rsi would be less likely to have partial-register issues. But unfortunately gcc and clang don't seem to favour using dl or cl as 8-bit scratch registers, using dil or sil even in tiny functions where nothing else is using rdx or rcx. (Although lack of low8 renaming in some CPUs means that setcc cl has a false dependency on the old rcx, so setcc dil is safer if the flag-setting was dependent on the function arg in rdi.)
pop rcx at the end "cleans" rcx of any partial-register stuff. Since cl is used for shift counts, and functions do sometimes write just cl even when they could have written ecx instead. (IIRC I've seen clang do this. gcc more strongly favours 32-bit and 64-bit operand sizes to avoid partial-register issues.)
push rdi would probably be a good choice in a lot of cases, since the rest of the function also reads rdi, so introducing another instruction dependent on it wouldn't hurt. It does stop out-of-order execution from getting the push out of the way if rax is ready before rdi, though.
Another potential downside is using cycles on the load/store ports. But they are unlikely to be saturated, and the alternative is uops for the ALU ports. With the extra stack-sync uop on Intel CPUs that you'd get from sub rsp, 8, that would be 2 ALU uops at the top of the function.
I have a simple bit reader which uses the SHLD instruction (__shiftleft128) to read a bit stream.
This works great. However, I have been doing some profiling and I notice that whatever instruction comes after the SHLD instruction takes a lot of time.
Assembly CPU Time Instructions Retired
add r10b, r9b 19.000ms 92,000,000
cmp r10b, 0x40 58.000ms 180,000,000
jb 0x140016fa6 <Block 24>
Block 23:
and r10b, 0x3f 43.000ms 204,000,000
mov r15, r11 30.000ms 52,000,000
mov qword ptr [rbp+0x20], r11
add rbx, 0x8 16.000ms 78,000,000
mov qword ptr [rbp+0x10], rbx
mov r11, qword ptr [rbx] 6.000ms 44,000,000
bswap r11 2.000ms
mov qword ptr [rbp+0x28], r11 8.000ms 20,000,000
Block 24:
mov rdx, r15 61.000ms 208,000,000
movzx ecx, r10b 1.000ms 6,000,000
**shld** rdx, r11, cl 24.000ms 58,000,000
inc edi **127.000ms** 470,000,000
As you can see in the table above, the inc instruction after the shld instruction takes a lot of time (8% of CPU time).
I would like to know a bit more about why this is the case and how I can avoid it. Are there any instructions that can run in parallel with shld at the CPU level?
I remember reading about shld in some AMD optimization manual, but I can't find it again.
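For context, shld rdx, r11, cl shifts rdx left by cl and fills the vacated low bits from the top bits of r11. A portable C++ sketch of that semantics (the helper name is invented; this is not my actual bit-reader code):
#include <cstdint>
// Portable equivalent of `shld dst, src, cl` with 64-bit operands: shift dst
// left by count and fill the vacated low bits from the high bits of src.
// Count is assumed to be in [0, 63]; a count of 0 leaves dst unchanged, which
// the early return mirrors (and it avoids a shift by 64, which is UB in C++).
inline uint64_t shld64(uint64_t dst, uint64_t src, unsigned count)
{
    if (count == 0)
        return dst;
    return (dst << count) | (src >> (64 - count));
}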
Hard to tell, but it seems like the delay is a result of some exception-handling routine.
Behavior
However, the Intel manual specifies a few cases for shld where the behavior is undefined:
The destination operand can be a register or a memory location; the source operand is a register. The count operand is an unsigned integer that can be stored in an immediate byte or in the CL register. If the count operand is CL, the shift count is the logical AND of CL and a count mask. In non-64-bit modes and default 64-bit mode, only bits 0 through 4 of the count are used. This masks the count to a value between 0 and 31. If a count is greater than the operand size, the result is undefined. If the count is 1 or greater, the CF flag is filled with the last bit shifted out of the destination operand and the SF, ZF, and PF flags are set according to the value of the result. For a 1-bit shift, the OF flag is set if a sign change occurred; otherwise, it is cleared. For shifts greater than 1 bit, the OF flag is undefined. If a shift occurs, the AF flag is undefined. If the count operand is 0, the flags are not affected. If the count is greater than the operand size, the flags are undefined.
Exceptions for shld:
In Protected Mode --> #GP(0),#SS(0),#PF(fault-code),#AC(0),#UD
UPDATE: Gotcha!
First, the definition:
Instructions Retired — Event select C0H, Umask 00H. This event counts the number of instructions at retirement. For instructions that consist of multiple micro-ops, this event counts the retirement of the last micro-op of the instruction. An instruction with a REP prefix counts as one instruction (not per iteration). Faults before the retirement of the last micro-op of a multi-op instruction are not counted. This event does not increment under VM-exit conditions. Counters continue counting during hardware interrupts, traps, and inside interrupt handlers.
inc edi **127.000ms** 470,000,000 (instructions retired)
From the above definition it's quite clear that either this instruction breaks into too many micro-ops or some interrupt handler is running at the same time.
I've been trying to find the OCaml calling convention so that I can manually interpret the stack traces that gdb can't parse. Unfortunately, it seems like nothing has ever been written down in English except for general observations. E.g., people will comment on blogs that OCaml passes many arguments in registers. (If there is English documentation somewhere, a link would be much appreciated.)
So I've been trying to puzzle it out from the ocamlopt source. Could anyone confirm the accuracy of these guesses?
And, if I'm right about the first ten arguments being passed in registers, is it just not generally possible to recover the arguments to a function call? In C, the arguments would still be pushed onto the stack somewhere, if only I walk back up to the correct frame. In OCaml, it would seem that callees are free to destroy their callers' arguments.
Register allocation (from /asmcomp/amd64/proc.ml)
For calling into OCaml functions,
The first 10 integer and pointer arguments are passed in the registers rax, rbx, rdi, rsi, rdx, rcx, r8, r9, r10 and r11
The first 10 floating-point arguments are passed in the registers xmm0 - xmm9
Additional arguments are pushed onto the stack (leftmost-first-in?), floats and ints and pointers intermixed
The trap pointer (see Exceptions below) is passed in r14
The allocation pointer (presumably for the minor heap as described in this blog post) is passed in r15
The return value is passed back in rax if it is an integer or pointer, and in xmm0 if it is a float
All registers are caller-save?
For calling into C functions, the standard amd64 C convention is used:
The first six integer and pointer arguments are passed in rdi, rsi, rdx, rcx, r8, and r9
The first eight float arguments are passed in xmm0 - xmm7
Additional arguments are pushed onto the stack
The return value is passed back in rax or xmm0
The registers rbx, rbp, and r12 - r15 are callee-save
Return address (from /asmcomp/amd64/emit.mlp)
The return address is the first pointer pushed into the call frame, in accordance with amd64 C convention. (I'm guessing the ret instruction assumes this layout.)
Exceptions (from /asmcomp/linearize.ml)
The code try (...body...) with (...handler...); (...rest...) gets linearized like this:
Lsetuptrap .body
(...handler...)
Lbranch .join
Llabel .body
Lpushtrap
(...body...)
Lpoptrap
Llabel .join
(...rest...)
and then emitted as assembly like this (destinations on the right):
call .body
(...handler...)
jmp .join
.body:
pushq %r14
movq %rsp, %r14
(...body...)
popq %r14
addq $8, %rsp
.join:
(...rest...)
Somewhere in the body, there's a linearized opcode Lraise which gets emitted as this exact assembly:
movq %r14, %rsp
popq %r14
ret
Which is really neat! Instead of this setjmp/longjmp business, we create a dummy frame whose return address is the exception handler and whose only local is the previous such dummy frame. /asmcomp/amd64/proc.ml has a comment calling %r14 the "trap pointer", so I'll call this dummy frame the trap frame. When we want to raise an exception, we set the stack pointer to the most recent trap frame, set the trap pointer to the trap frame before that, and then "return" into the exception handler. And I bet if the exception handler can't handle this exception, it just reraises it.
The exception is in %eax.
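For contrast, here is a minimal C++ sketch of the setjmp/longjmp scheme that the trap-frame trick replaces (hypothetical names throughout; OCaml instead threads dummy stack frames through %r14 so that raising is just "restore %rsp and %r14, then ret"):
#include <csetjmp>
#include <cstdio>
// Innermost handler, playing the role of the trap pointer in %r14.
static std::jmp_buf* current_trap = nullptr;
void raise_exn()
{
    std::longjmp(*current_trap, 1);      // jump to the innermost handler
}
void body()
{
    std::puts("in body, raising");
    raise_exn();
}
int main()
{
    std::jmp_buf handler;
    std::jmp_buf* saved = current_trap;  // remember the enclosing handler
    current_trap = &handler;             // "push" this trap frame
    if (setjmp(handler) == 0)
        body();                          // (...body...)
    else
        std::puts("in handler");         // (...handler...)
    current_trap = saved;                // "pop" the trap frame
}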
This is more an answer than a question! The bit I know on this topic I have learned by looking at the source, just like you, so don't expect these additional details to be much more authoritative than your post.
Yes, I think OCaml uses specialized calling conventions with caller-save registers only. A benefit of this choice is that it simplifies tail-calls: when you jump through a tail-call¹, you don't have to spill or reload any register.
¹: for non-self tail calls, this only works when there are not too many arguments, and therefore we don't need to spill. If stack allocation is needed, the call is turned into a non-tail call.
Note that calling conventions still depend strongly on the target architecture. On x86, for example, a small number of globals is used when the registers are exhausted, before spilling onto the stack, to preserve tail calls.
I also agree on "leftmost-first-in": arguments are traversed in order by calling_conventions in proc.ml and stored in offset order by slot_offset in emit.mlp; they were computed right-to-left, but returned in order, in selectgen.ml.
Yes, you cannot recover the arguments from a call, as OCaml tries to reuse registers as much as possible and will thus destroy their contents when they are no longer useful in the remainder of the function. Debuggers have no way to print the arguments; they can only, at a given point in the function, print the variables that are still live, and even for that you would need to modify ocamlopt to emit DWARF information to recover the values.