gfortran does not find symbol fabsq_ in libquadmath - fortran

I'm trying to get quad precision to work in my Fortran code. I have to replace some intrinsic functions with those from libquadmath, e.g. the dabs function with fabsq.
Unfortunately, if I compile the following test program
program test
integer dp
parameter (dp=10)
real(kind=dp) a
a= -5.0_dp
a=fabsq(a)
write(*,*) "abs(a)", a
end program
I get the following error (an undefined reference at link time):
gfortran -lquadmath -o test.out test.f
/tmp/ccwhLFWr.o: In function `MAIN__':
test.f:(.text+0x2e): undefined reference to `fabsq_'
collect2: error: ld returned 1 exit status
but
nm /usr/lib64/gcc/x86_64-suse-linux/4.8/libquadmath.a | grep -c fabsq
gives me some value greater than 0. What's going wrong here?

In general, things like "dabs" were from the days before type-generic intrinsics. In particular, there is no corresponding "qabs/absq" or whatever you might want to call it, but rather only the type-generic "abs" which is then resolved to the correct library symbol at compile time.
Secondly, your choice of "abs" to test with is a bit unfortunate, since it turns out the compiler expands that inline so you'll never see any function calls. A better choice is e.g. "sin". Consider the example code
function my_sintest(a)
real(16) :: a, my_sintest
my_sintest = sin(a)
end function my_sintest
function my_abstest(a)
real(16) :: a, my_abstest
my_abstest = abs(a)
end function my_abstest
Compiling this with "gfortran -c -O2 -S qm.f90" and inspecting the generated code, one sees:
.file "qm.f90"
.text
.p2align 4,,15
.globl my_sintest_
.type my_sintest_, #function
my_sintest_:
.LFB0:
.cfi_startproc
movdqa (%rdi), %xmm0
jmp sinq
.cfi_endproc
.LFE0:
.size my_sintest_, .-my_sintest_
.p2align 4,,15
.globl my_abstest_
.type my_abstest_, #function
my_abstest_:
.LFB1:
.cfi_startproc
movdqa (%rdi), %xmm0
pand .LC0(%rip), %xmm0
ret
.cfi_endproc
.LFE1:
.size my_abstest_, .-my_abstest_
.section .rodata.cst16,"aM",#progbits,16
.align 16
.LC0:
.long 4294967295
.long 4294967295
.long 4294967295
.long 2147483647
.ident "GCC: (Ubuntu 4.8.2-19ubuntu1) 4.8.2"
.section .note.GNU-stack,"",#progbits
So one sees that the call to "abs" is handled by inline code; there is no need to call an external function. OTOH, the call to the generic "sin()" function is resolved to "sinq", the quad precision version of the sine function that you can find in libquadmath. There is no need to try to call "sinq" explicitly; in fact, it wouldn't work.
Note also the usage of "real(16)", which is unportable, but a quick-and-dirty way of getting quad precision reals in gfortran.
PS: Another thing: with gfortran there is no need to explicitly link with libquadmath; it's included automatically, just like libgfortran, libm, etc.

Try gfortran -o test.out test.f -lquadmath. The linker resolves symbols from libraries left to right, so a library should come after the source or object files that reference it.

Related

How to remove "noise" from GCC/clang assembly output?

I want to inspect the assembly output of applying boost::variant in my code in order to see which intermediate calls are optimized away.
When I compile the following example (with GCC 5.3 using g++ -O3 -std=c++14 -S), it seems as if the compiler optimizes away everything and directly returns 100:
(...)
main:
.LFB9320:
.cfi_startproc
movl $100, %eax
ret
.cfi_endproc
(...)
#include <boost/variant.hpp>
struct Foo
{
int get() { return 100; }
};
struct Bar
{
int get() { return 999; }
};
using Variant = boost::variant<Foo, Bar>;
int run(Variant v)
{
return boost::apply_visitor([](auto& x){return x.get();}, v);
}
int main()
{
Foo f;
return run(f);
}
However, the full assembly output contains much more than the above excerpt, and to me it looks like that extra code is never called. Is there a way to tell GCC/clang to remove all that "noise" and just output what is actually called when the program is run?
full assembly output:
.file "main1.cpp"
.section .rodata.str1.8,"aMS",#progbits,1
.align 8
.LC0:
.string "/opt/boost/include/boost/variant/detail/forced_return.hpp"
.section .rodata.str1.1,"aMS",#progbits,1
.LC1:
.string "false"
.section .text.unlikely._ZN5boost6detail7variant13forced_returnIvEET_v,"axG",#progbits,_ZN5boost6detail7variant13forced_returnIvEET_v,comdat
.LCOLDB2:
.section .text._ZN5boost6detail7variant13forced_returnIvEET_v,"axG",#progbits,_ZN5boost6detail7variant13forced_returnIvEET_v,comdat
.LHOTB2:
.p2align 4,,15
.weak _ZN5boost6detail7variant13forced_returnIvEET_v
.type _ZN5boost6detail7variant13forced_returnIvEET_v, #function
_ZN5boost6detail7variant13forced_returnIvEET_v:
.LFB1197:
.cfi_startproc
subq $8, %rsp
.cfi_def_cfa_offset 16
movl $_ZZN5boost6detail7variant13forced_returnIvEET_vE19__PRETTY_FUNCTION__, %ecx
movl $49, %edx
movl $.LC0, %esi
movl $.LC1, %edi
call __assert_fail
.cfi_endproc
.LFE1197:
.size _ZN5boost6detail7variant13forced_returnIvEET_v, .-_ZN5boost6detail7variant13forced_returnIvEET_v
.section .text.unlikely._ZN5boost6detail7variant13forced_returnIvEET_v,"axG",#progbits,_ZN5boost6detail7variant13forced_returnIvEET_v,comdat
.LCOLDE2:
.section .text._ZN5boost6detail7variant13forced_returnIvEET_v,"axG",#progbits,_ZN5boost6detail7variant13forced_returnIvEET_v,comdat
.LHOTE2:
.section .text.unlikely._ZN5boost6detail7variant13forced_returnIiEET_v,"axG",#progbits,_ZN5boost6detail7variant13forced_returnIiEET_v,comdat
.LCOLDB3:
.section .text._ZN5boost6detail7variant13forced_returnIiEET_v,"axG",#progbits,_ZN5boost6detail7variant13forced_returnIiEET_v,comdat
.LHOTB3:
.p2align 4,,15
.weak _ZN5boost6detail7variant13forced_returnIiEET_v
.type _ZN5boost6detail7variant13forced_returnIiEET_v, #function
_ZN5boost6detail7variant13forced_returnIiEET_v:
.LFB9757:
.cfi_startproc
subq $8, %rsp
.cfi_def_cfa_offset 16
movl $_ZZN5boost6detail7variant13forced_returnIiEET_vE19__PRETTY_FUNCTION__, %ecx
movl $39, %edx
movl $.LC0, %esi
movl $.LC1, %edi
call __assert_fail
.cfi_endproc
.LFE9757:
.size _ZN5boost6detail7variant13forced_returnIiEET_v, .-_ZN5boost6detail7variant13forced_returnIiEET_v
.section .text.unlikely._ZN5boost6detail7variant13forced_returnIiEET_v,"axG",#progbits,_ZN5boost6detail7variant13forced_returnIiEET_v,comdat
.LCOLDE3:
.section .text._ZN5boost6detail7variant13forced_returnIiEET_v,"axG",#progbits,_ZN5boost6detail7variant13forced_returnIiEET_v,comdat
.LHOTE3:
.section .text.unlikely,"ax",#progbits
.LCOLDB4:
.text
.LHOTB4:
.p2align 4,,15
.globl _Z3runN5boost7variantI3FooJ3BarEEE
.type _Z3runN5boost7variantI3FooJ3BarEEE, #function
_Z3runN5boost7variantI3FooJ3BarEEE:
.LFB9310:
.cfi_startproc
subq $8, %rsp
.cfi_def_cfa_offset 16
movl (%rdi), %eax
cltd
xorl %edx, %eax
cmpl $19, %eax
ja .L7
jmp *.L9(,%rax,8)
.section .rodata
.align 8
.align 4
.L9:
.quad .L30
.quad .L10
.quad .L7
.quad .L7
.quad .L7
.quad .L7
.quad .L7
.quad .L7
.quad .L7
.quad .L7
.quad .L7
.quad .L7
.quad .L7
.quad .L7
.quad .L7
.quad .L7
.quad .L7
.quad .L7
.quad .L7
.quad .L7
.text
.p2align 4,,10
.p2align 3
.L7:
call _ZN5boost6detail7variant13forced_returnIiEET_v
.p2align 4,,10
.p2align 3
.L30:
movl $100, %eax
.L8:
addq $8, %rsp
.cfi_remember_state
.cfi_def_cfa_offset 8
ret
.p2align 4,,10
.p2align 3
.L10:
.cfi_restore_state
movl $999, %eax
jmp .L8
.cfi_endproc
.LFE9310:
.size _Z3runN5boost7variantI3FooJ3BarEEE, .-_Z3runN5boost7variantI3FooJ3BarEEE
.section .text.unlikely
.LCOLDE4:
.text
.LHOTE4:
.globl _Z3runN5boost7variantI3FooI3BarEEE
.set _Z3runN5boost7variantI3FooI3BarEEE,_Z3runN5boost7variantI3FooJ3BarEEE
.section .text.unlikely
.LCOLDB5:
.section .text.startup,"ax",#progbits
.LHOTB5:
.p2align 4,,15
.globl main
.type main, #function
main:
.LFB9320:
.cfi_startproc
movl $100, %eax
ret
.cfi_endproc
.LFE9320:
.size main, .-main
.section .text.unlikely
.LCOLDE5:
.section .text.startup
.LHOTE5:
.section .rodata
.align 32
.type _ZZN5boost6detail7variant13forced_returnIvEET_vE19__PRETTY_FUNCTION__, #object
.size _ZZN5boost6detail7variant13forced_returnIvEET_vE19__PRETTY_FUNCTION__, 58
_ZZN5boost6detail7variant13forced_returnIvEET_vE19__PRETTY_FUNCTION__:
.string "T boost::detail::variant::forced_return() [with T = void]"
.align 32
.type _ZZN5boost6detail7variant13forced_returnIiEET_vE19__PRETTY_FUNCTION__, #object
.size _ZZN5boost6detail7variant13forced_returnIiEET_vE19__PRETTY_FUNCTION__, 57
_ZZN5boost6detail7variant13forced_returnIiEET_vE19__PRETTY_FUNCTION__:
.string "T boost::detail::variant::forced_return() [with T = int]"
.ident "GCC: (Ubuntu 5.3.0-3ubuntu1~14.04) 5.3.0 20151204"
.section .note.GNU-stack,"",#progbits
Stripping out the .cfi directives, unused labels, and comment lines is a solved problem: the scripts behind Matt Godbolt's compiler explorer are open source on its github project. It can even do colour highlighting to match source lines to asm lines (using the debug info).
You can set it up locally so you can feed it files that are part of your project with all the #include paths and so on (using -I/...). And so you can use it on private source code that you don't want to send out over the Internet.
Matt Godbolt's CppCon2017 talk “What Has My Compiler Done for Me Lately? Unbolting the Compiler's Lid” shows how to use it (it's pretty self-explanatory but has some neat features if you read the docs on github), and also how to read x86 asm, with a gentle introduction to x86 asm itself for total beginners, and to looking at compiler output. He goes on to show some neat compiler optimizations (e.g. for dividing by a constant), and what kind of functions give useful asm output for looking at optimized compiler output (function args, not int a = 123;).
On the Godbolt compiler explorer, it can be useful to use -g0 -fno-asynchronous-unwind-tables if you want to uncheck the filter option for directives, e.g. because you want to see the .section and .p2align stuff in the compiler output. The default is to add -g to your options to get the debug info it uses to colour-highlight matching source and asm lines, but this means .cfi directives for every stack operation, and .loc for every source line, among other things.
With plain gcc/clang (not g++), -fno-asynchronous-unwind-tables avoids .cfi directives. Possibly also useful: -fno-exceptions -fno-rtti -masm=intel. Make sure to omit -g.
Copy/paste this for local use:
g++ -fno-asynchronous-unwind-tables -fno-exceptions -fno-rtti -fverbose-asm \
-Wall -Wextra foo.cpp -O3 -masm=intel -S -o- | less
Or -Os can be more readable, e.g. using div for division by non-power-of-2 constants instead of a multiplicative inverse even though that's a lot worse for performance and only a bit smaller, if at all.
But really, I'd recommend just using Godbolt directly (online or set it up locally)! You can quickly flip between versions of gcc and clang to see if old or new compilers do something dumb. (Or what ICC does, or even what MSVC does.) There's even ARM / ARM64 gcc 6.3, and various gcc for PowerPC, MIPS, AVR, MSP430. (It can be interesting to see what happens on a machine where int is wider than a register, or isn't 32-bit. Or on a RISC vs. x86).
For C instead of C++, you can use -xc -std=gnu11 to avoid flipping the language drop-down to C, which resets your source pane and compiler choices, and has a different set of compilers available.
Useful compiler options for making asm for human consumption:
Remember, your code only has to compile, not link: passing a pointer to an external function like void ext(void*p) is a good way to stop something from optimizing away. You only need a prototype for it, with no definition, so the compiler can't inline it or make any assumptions about what it does. (Or inline asm like Benchmark::DoNotOptimize can force a compiler to materialize a value in a register, or forget that it is a known constant, if you know GNU C inline asm syntax well enough to use constraints and understand the effect you're having on the compiler's output.)
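For instance, here is a minimal sketch of that sink-function trick (the names ext and compute_something are just placeholders, not from any real library):
void ext(void *p);          // declared but never defined: the code only has to compile, not link

int compute_something(int x)
{
    int result = x * x + 1;
    ext(&result);           // opaque call: the optimizer must actually materialize result
    return result;
}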
I'd recommend using -O3 -Wall -Wextra -fverbose-asm -march=haswell for looking at code. (-fverbose-asm can just make the source look noisy, though, when all you get are numbered temporaries as names for the operands.) When you're fiddling with the source to see how it changes the asm, you definitely want compiler warnings enabled. You don't want to waste time scratching your head over the asm when the explanation is that you did something that deserves a warning in the source.
To see how the calling convention works, you often want to look at caller and callee without inlining.
You can use __attribute__((noipa)) foo_t foo(bar_t x) { ... } on a definition, or compile with gcc -O3 -fno-inline-functions -fno-inline-functions-called-once -fno-inline-small-functions to disable inlining. (But those command line options don't disable cloning a function for constant-propagation. noipa = no Inter-Procedural Analysis. It's even stronger than __attribute__((noinline,noclone)).) See From compiler perspective, how is reference for array dealt with, and, why passing by value(not decay) is not allowed? for an example.
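As a quick sketch of that attribute in use (add and caller are made-up example functions, and noipa needs a reasonably recent gcc):
__attribute__((noipa))
int add(int a, int b) { return a + b; }

int caller(void)
{
    return add(2, 3);   // stays a real call to add; not inlined, not folded to 5
}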
Or if you just want to see how functions pass / receive args of different types, you could use different names but the same prototype so the compiler doesn't have a definition to inline. This works with any compiler. Without a definition, a function is just a black box to the optimizer, governed only by the calling convention / ABI.
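A hedged sketch of that idea (Pair, bar and foo are hypothetical names, chosen only to show argument/return passing):
struct Pair { int a, b; };

Pair bar(Pair p, double d);       // prototype only, no definition anywhere

Pair foo(Pair p, double d)
{
    return bar(p, d);             // compiles to plain calling-convention code: argument setup plus a call
}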
-ffast-math will get many libm functions to inline, some to a single instruction (esp. with SSE4 available for roundsd). Some will inline with just -fno-math-errno, or other "safer" parts of -ffast-math, without the parts that allow the compiler to round differently. If you have FP code, definitely look at it with/without -ffast-math. If you can't safely enable any of -ffast-math in your regular build, maybe you'll get an idea for a safe change you can make in the source to allow the same optimization without -ffast-math.
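For example, a tiny function like the following (my_root is just an illustrative name) shows the difference: built with -fno-math-errno (or -ffast-math), gcc can inline it to a single sqrtsd, while the default build also keeps a fallback call to sqrt() so errno can be set for negative inputs, the same sqrtsd-plus-call-sqrt pattern that shows up in the std::sqrt question further down.
#include <cmath>

double my_root(double x)
{
    return std::sqrt(x);
}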
-O3 -fno-tree-vectorize will optimize without auto-vectorizing, so you can get full optimization without it if you want to compare with -O2 (which doesn't enable autovectorization on gcc11 and earlier, but does on all clang).
-Os (optimize for size and speed) can be helpful to keep the code more compact, which means less code to understand. clang's -Oz optimizes for size even when it hurts speed, even using push 1 / pop rax instead of mov eax, 1, so that's only interesting for code golf.
Even -Og (minimal optimization) might be what you want to look at, depending on your goals. -O0 is full of store/reload noise, which makes it harder to follow, unless you use register vars. The only upside is that each C statement compiles to a separate block of instructions, and it makes -fverbose-asm able to use the actual C var names.
clang unrolls loops by default, so -fno-unroll-loops can be useful in complex functions. You can get a sense of "what the compiler did" without having to wade through the unrolled loops. (gcc enables -funroll-loops with -fprofile-use, but not with -O3). (This is a suggestion for human-readable code, not for code that would run faster.)
Definitely enable some level of optimization, unless you specifically want to know what -O0 did. Its "predictable debug behaviour" requirement makes the compiler store/reload everything between every C statement, so you can modify C variables with a debugger and even "jump" to a different source line within the same function, and have execution continue as if you did that in the C source. -O0 output is so noisy with stores/reloads (and so slow) not just from lack of optimization, but forced de-optimization to support debugging. (also related).
To get a mix of source and asm, use gcc -Wa,-adhln -c -g foo.c | less to pass extra options to as. (More discussion of this in a blog post, and another blog.). Note that the output of this isn't valid assembler input, because the C source is there directly, not as an assembler comment. So don't call it a .s. A .lst might make sense if you want to save it to a file.
Godbolt's color highlighting serves a similar purpose, and is great at helping you see when multiple non-contiguous asm instructions come from the same source line. I haven't used that gcc listing command at all, so IDK how well it does, and how easy it is for the eye to see, in that case.
I like the high code density of godbolt's asm pane, so I don't think I'd like having source lines mixed in. At least not for simple functions. Maybe with a function that was too complex to get a handle on the overall structure of what the asm does...
And remember, when you want to just look at the asm, leave out the main() and the compile-time constants. You want to see the code for dealing with a function arg in a register, not for the code after constant-propagation turns it into return 42, or at least optimizes away some stuff.
Removing static and/or inline from functions will produce a stand-alone definition for them, as well as a definition for any callers, so you can just look at that.
Don't put your code in a function called main(). gcc knows that main is special and assumes it will only be called once, so it marks it as "cold" and optimizes it less.
The other thing you can do: If you did make a main(), you can run it and use a debugger. stepi (si) steps by instruction. See the bottom of the x86 tag wiki for instructions. But remember that code might optimize away after inlining into main with compile-time-constant args.
__attribute__((noinline)) may help on a function that you don't want inlined. gcc will also make constant-propagation clones of functions, i.e. a special version with one of the args as a constant, for call-sites that know they're passing a constant. The symbol name will be .clone.foo.constprop_1234 or something in the asm output. You can use __attribute__((noclone)) to disable that, too.
For example
If you want to see how the compiler multiplies two integers: I put the following code on the Godbolt compiler explorer to get the asm (from gcc -O3 -march=haswell -fverbose-asm) for the wrong way and the right way to test this.
// the wrong way, which people often write when they're used to creating a runnable test-case with a main() and a printf
// or worse, people will actually look at the asm for such a main()
int constants() { int a = 10, b = 20; return a * b; }
mov eax, 200 #,
ret # compiles the same as return 200; not interesting
// the right way: compiler doesn't know anything about the inputs
// so we get asm like what would happen when this inlines into a bigger function.
int variables(int a, int b) { return a * b; }
mov eax, edi # D.2345, a
imul eax, esi # D.2345, b
ret
(This mix of asm and C was hand-crafted by copy-pasting the asm output from godbolt into the right place. I find it's a good way to show how a short function compiles in SO answers / compiler bug reports / emails.)
You can always look at the generated assembly from the object file, instead of using the compiler's assembly output. objdump comes to mind.
You can even tell objdump to intermix source with assembly, making it easier to figure out what source line corresponds to what instructions. Example session:
$ cat test.cc
int foo(int arg)
{
return arg + 1;
}
$ g++ -g -O3 -std=c++14 -c test.cc -o test.o && objdump -dS -M intel test.o
test.o: file format elf64-x86-64
Disassembly of section .text:
0000000000000000 <_Z3fooi>:
int foo(int arg)
{
return arg + 1;
0: 8d 47 01 lea eax,[rdi+0x1]
}
3: c3 ret
Explanation of objdump flags:
-d disassembles all executable sections
-S intermixes assembly with source (-g required while compiling with g++)
-M intel chooses Intel syntax over the ugly AT&T syntax (optional)
I like to insert labels that I can easily grep out of the objdump output.
int main() {
asm volatile ("interesting_part_begin%=:":);
do_something();
asm volatile ("interesting_part_end%=:":);
}
I haven't had a problem with this yet, but asm volatile can be very hard on a compiler's optimizer because it tends to leave such code untouched.

assembly output of a simple C++ program

I am trying to understand the assembly output of a simple c++ program. This is my C++ program.
void func()
{}
int main()
{
func();
}
When I use g++ with the --save-temps option to get the assembly code for the above program, I get the following assembly:
.file "main.cpp"
.text
.globl _Z4funcv
.type _Z4funcv, #function
_Z4funcv:
.LFB0:
.cfi_startproc
pushq %rbp
.cfi_def_cfa_offset 16
.cfi_offset 6, -16
movq %rsp, %rbp
.cfi_def_cfa_register 6
popq %rbp
.cfi_def_cfa 7, 8
ret
.cfi_endproc
.LFE0:
.size _Z4funcv, .-_Z4funcv
.globl main
.type main, #function
main:
.LFB1:
.cfi_startproc
pushq %rbp
.cfi_def_cfa_offset 16
.cfi_offset 6, -16
movq %rsp, %rbp
.cfi_def_cfa_register 6
call _Z4funcv
movl $0, %eax
popq %rbp
.cfi_def_cfa 7, 8
ret
.cfi_endproc
.LFE1:
.size main, .-main
.ident "GCC: (Ubuntu 4.8.2-19ubuntu1) 4.8.2"
.section .note.GNU-stack,"",#progbits
According to my knowledge of assembly, there should be 3 sections in any assembly program: data, text and bss. Also, the text section should start with 'global _start'. I can't see any of that in this assembly code.
Can someone please help me understand the above assembly code? If you can relate it to the C++ code as well, it would be great.
Any kind of help is greatly appreciated.
Well, here it is line by line...
.file "main.cpp" # Debugging info (not essential)
.text # Start of text section (i.e. your code)
.globl _Z4funcv # Let the function _Z4funcv be callable
# from outside (e.g. from your main routine)
.type _Z4funcv, #function # Debugging info (possibly not essential)
_Z4funcv: # _Z4funcv is effectively the "name" of your
# function (C++ "mangles" the name; exactly
# how depends on your compiler -- Google "C++
# name mangling" for more).
.LFB0: # Debugging info (possibly not essential)
.cfi_startproc # Provides additional debug info (ditto)
pushq %rbp # Store base pointer of caller function
# (standard function prologue -- Google
# "calling convention" or "cdecl")
.cfi_def_cfa_offset 16 # Provides additional debug info (ditto)
.cfi_offset 6, -16 # Provides additional debug info (ditto)
movq %rsp, %rbp # Reset base pointer to a sensible place
# for this function to put its local
# variables (if any). Standard function
# prologue.
.cfi_def_cfa_register 6 # Debug ...
popq %rbp # Restore the caller's base pointer
# Standard function epilogue
.cfi_def_cfa 7, 8 # Debug...
ret # Return from function
.cfi_endproc # Debug...
.LFE0: # Debug...
.size _Z4funcv, .-_Z4funcv # Debug...
.globl main # Declares that the main function
# is callable from outside
.type main, #function # Debug...
main: # Your main routine (name not mangled)
.LFB1: # Debug...
.cfi_startproc # Debug...
pushq %rbp # Store caller's base pointer
# (standard prologue)
.cfi_def_cfa_offset 16 # Debug...
.cfi_offset 6, -16 # Debug...
movq %rsp, %rbp # Reset base pointer
# (standard prologue)
.cfi_def_cfa_register 6 # Debug...
call _Z4funcv # Call `func` (note name mangled)
movl $0, %eax # Put `0` in eax (eax is return value)
popq %rbp # Restore caller's base pointer
# (standard epilogue)
.cfi_def_cfa 7, 8 # Debug...
ret # Return from main function
.cfi_endproc # Debug...
.LFE1:
.size main, .-main # Debug...
.ident "GCC: (Ubuntu 4.8.2-19ubuntu1) 4.8.2" # fluff
.section .note.GNU-stack,"",#progbits # fluff
The linker knows to look for main (and not start) if it is using the standard C or C++ library (which it usually is, unless you tell it otherwise). It links some stub code (which contains start) into the final executable.
So, really, the only important bits are...
.text
.globl _Z4funcv
_Z4funcv:
pushq %rbp
movq %rsp, %rbp
popq %rbp
ret
.globl main
main:
pushq %rbp
movq %rsp, %rbp
call _Z4funcv
movl $0, %eax
popq %rbp
ret
If you want to start from scratch, and not have all the complicated standard library stuff getting in the way of your discovery, you can do something like this and achieve the same result as your C++ code:
.text
.globl _func
_func: # Just as above, really
push %ebp
mov %esp, %ebp
pop %ebp
ret
.globl _start
_start: # A few changes here
push %ebp
mov %esp, %ebp
call _func
movl $1, %eax # Invoke the Linux 'exit' syscall
movl $0, %ebx # With a return value of 0 (pick any char!)
int $0x80 # Actual invocation
The exit syscall is a bit painful, but necessary. If you don't have it, it tries to keep going and run the code that is "past" your code. As that could be important code or data, the machine should stop you with a Segmentation Fault error. Having the exit call avoids all this. If you are using the standard library (as will happen automatically in your C++ example) the exit stuff is taken care of by the linker.
Compile with gcc -nostdlib -o test test.s (noting that gcc is specifically told not to use the standard library). I should say that this is for a 32-bit system, and quite likely will not work on 64-bit. I don't have a 64-bit system to test on, but perhaps some helpful StackOverflower will chip in with a 64-bit translation.
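Since a 64-bit translation was asked for above, here is a hedged sketch of just the exit part, written as C++ with GCC-style inline asm rather than a standalone .s file: on x86-64 Linux the exit system call is number 60 (passed in rax), the exit status goes in rdi, it is invoked with the syscall instruction, and the kernel clobbers rcx and r11.
// Minimal sketch, assuming x86-64 Linux and a GCC/Clang-compatible compiler.
int main()
{
    long status = 0;                       // exit status (pick any small number)
    asm volatile ("syscall"
                  :                        // no outputs
                  : "a"(60L), "D"(status)  // rax = 60 (SYS_exit), rdi = status
                  : "rcx", "r11", "memory");
    __builtin_unreachable();               // the syscall never returns
}
Compiled normally (g++ test.cpp), the program exits through the raw syscall before main ever returns, so you can observe the same behaviour as the hand-written 32-bit version.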

Where is the one to one correlation between the assembly and cpp code?

I tried to examine what this code would look like in assembly:
int main(){
if (0){
int x = 2;
x++;
}
return 0;
}
I was wondering what if (0) means.
I used the shell command g++ -S helloWorld.cpp in Linux
and got this code:
.file "helloWorld.cpp"
.text
.globl main
.type main, #function
main:
.LFB0:
.cfi_startproc
pushq %rbp
.cfi_def_cfa_offset 16
.cfi_offset 6, -16
movq %rsp, %rbp
.cfi_def_cfa_register 6
movl $0, %eax
popq %rbp
.cfi_def_cfa 7, 8
ret
.cfi_endproc
.LFE0:
.size main, .-main
.ident "GCC: (Ubuntu/Linaro 4.6.1-9ubuntu3) 4.6.1"
.section .note.GNU-stack,"",#progbits
I expected that the assembly would contain some JZ, but where is it?
How can I compile the code without optimization?
There is no direct, guaranteed relationship between C++ source code and the generated assembler. The C++ source code defines a certain semantics, and the compiler outputs machine code which will implement the observable behavior of those semantics. How the compiler does this, and the actual code it outputs, can vary enormously, even over the same underlying hardware; I would be very disappointed in a compiler which generated code which compared 0 with 0, and then did a conditional jump if the results were equal, regardless of what the C++ source code was.
In your example, the only observable behavior in your code is to return 0 to the OS. Anything the compiler generates must do this (and have no other observable behavior). The code you show isn't optimal for this:
xorl %eax, %eax
ret
is really all that is needed. But of course, the compiler is free to generate a lot more if it wants. (Your code, for example, sets up a frame to support local variables, even though there aren't any. Many compilers do this systematically, because most debuggers expect it, and get confused if there is no frame.)
With regards to optimization, this depends on the compiler. With g++, -O0 (that's the letter O followed by the number zero) turns off all optimization. This is the default, however, so it is effectively what you are seeing. In addition to having several different levels of optimization, g++ supports turning individual optimizations off or on. You might want to look at the complete list:
http://gcc.gnu.org/onlinedocs/gcc-4.6.2/gcc/Optimize-Options.html#Optimize-Options.
The compiler eliminates that code as dead code, i.e. code that will never run. What you're left with is establishing the stack frame and setting the return value of the function. if(0) is never true, after all. If you want to get a JZ, then you should probably do something like if(variable == 0). Keep in mind that the compiler is in no way required to actually emit the JZ instruction; it may use any other means to achieve the same thing. Compiling a high-level language to assembly is very rarely a clear, one-to-one correlation.
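A hedged sketch of that suggestion (check and variable are just illustrative names): because the compared value is only known at run time, the compiler has to emit a compare of the argument against zero followed by a conditional jump (or a conditional move) instead of deleting the branch.
int check(int variable)
{
    if (variable == 0)
        return 1;   // taken only when the runtime argument is zero
    return 2;
}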
The code has probably been optimized.
if (0){
int x = 2;
x++;
}
has been eliminated.
movl $0, %eax is where the return value is set. The other instructions are just the function's standard prologue and epilogue.
There is a possibility that the compiler optimized it away, since it's never true.
The optimizer removed the if conditional and all of the code inside, so it doesn't show up at all.
The if (0) {} block has been optimized out by the compiler, as it will never be executed.
So your function only returns 0 (movl $0, %eax).

how does the std::sqrt() function work? [duplicate]

This question already has answers here:
How is the square root function implemented? [closed]
(15 answers)
Closed 4 years ago.
Does anyone know how the std::sqrt() function works? (or at least have an idea?)
I've seen methods on the internet that seemed really slow, using lots of approximations and iterations.
Everyone knows the sqrt() function is slow, but I'd like to know how the one from std works so I could have a vague idea of when it is beneficial to avoid it. (Yes, if I want to be sure I can profile, but it's still nice to have a vague idea.)
EDIT: I didn't really formulate the question too well... What I'm interested in:
what would the fastest C++ function for calculating a square root look like? (More or less, I just want to know the actual logic behind it.)
Nowadays, on modern machines, floating point functions are passed off to the hardware (floating point unit or math-coprocessor).
Sometimes, it uses what the CPU offers:
$ cat main.cc
#include <cmath>
#include <ctime>
#include <cstdlib>
int main(){
srand (clock());
const double d = rand();
return std::sqrt(d) > 2 ? 1 : 0;
}
(the blahblah is just so nothing relevant is optimized away, don't run that program!)
$ g++ -S main.cc
$ cat main.s
.file "main.cc"
.text
.p2align 4,,15
.globl main
.type main, #function
main:
.LFB106:
.cfi_startproc
subq $8, %rsp
.cfi_def_cfa_offset 16
call clock
movl %eax, %edi
call srand
call rand
cvtsi2sd %eax, %xmm1
sqrtsd %xmm1, %xmm0
ucomisd %xmm0, %xmm0
jp .L5
.L2:
xorl %eax, %eax
ucomisd .LC0(%rip), %xmm0
seta %al
addq $8, %rsp
.cfi_remember_state
.cfi_def_cfa_offset 8
ret
.L5:
.cfi_restore_state
movapd %xmm1, %xmm0
call sqrt
jmp .L2
.cfi_endproc
.LFE106:
.size main, .-main
.section .rodata.cst8,"aM",#progbits,8
.align 8
.LC0:
.long 0
.long 1073741824
.ident "GCC: (Ubuntu/Linaro 4.5.2-8ubuntu4) 4.5.2"
.section .note.GNU-stack,"",#progbits
(hint: it is using the sqrtsd CPU instruction)
The sqrt() function behind the scenes:
It repeatedly checks mid-points, like a binary search.
Example: sqrt(16) == 4 and sqrt(4) == 2.
Now if you give any input between 4 and 16, like sqrt(10), it finds the midpoint of 2 and 4, call it x; then, because the answer lies above x, it finds the midpoint of x and 4 (the lower half is excluded for this input). It repeats this step again and again until it gets close enough to the exact answer, i.e. sqrt(10) == 3.16227766017, which lies between 2 and 4. All these in-built functions are created using calculus, differentiation and integration.
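Purely to illustrate the midpoint idea described above (this is not how std::sqrt is actually implemented; real libraries use the hardware instruction or faster iterations), a bisection search might look like this:
#include <iostream>

// Repeatedly halve an interval known to contain sqrt(n) until it is narrow enough.
double bisect_sqrt(double n, double tolerance = 1e-12)
{
    double lo = 0.0;
    double hi = (n < 1.0) ? 1.0 : n;     // sqrt(n) always lies in [0, max(1, n)]
    while (hi - lo > tolerance) {
        double mid = 0.5 * (lo + hi);
        if (mid * mid < n)
            lo = mid;                    // the root is in the upper half
        else
            hi = mid;                    // the root is in the lower half
    }
    return 0.5 * (lo + hi);
}

int main()
{
    std::cout << bisect_sqrt(10.0) << '\n';   // prints roughly 3.16228
}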
The standard does not specify a particular implementation.
One option is to look at a typical implementation, but you'll probably find it's heavily-optimised assembler.

Questions re: assembly generated from my C++ by gcc

Compiling this code:
int main ()
{
return 0;
}
using:
gcc -S filename.cpp
...generates this assembly:
.file "heloworld.cpp"
.text
.globl main
.type main, #function
main:
.LFB0:
.cfi_startproc
.cfi_personality 0x0,__gxx_personality_v0
pushl %ebp
.cfi_def_cfa_offset 8
movl %esp, %ebp
.cfi_offset 5, -8
.cfi_def_cfa_register 5
movl $0, %eax
popl %ebp
ret
.cfi_endproc
.LFE0:
.size main, .-main
.ident "GCC: (Ubuntu 4.4.3-4ubuntu5) 4.4.3"
.section .note.GNU-stack,"",#progbits
My questions:
Is everything after "." a comment?
What is .LFB0:?
What is .LFE0:?
Why is it so big code only for "int main ()" and "return 0;"?
P.S. I have read a lot of assembly books online and a lot (at least 30) of tutorials, and all I can do is copy the code and paste it or rewrite it. Now I'm trying a different approach to learn it somehow. The problem is that I do understand what movl, pop, etc. are, but I don't understand how to combine these things to make the code "flow". I don't know where or how to correctly start writing a program in asm. I'm still static, not dynamic as in C++, but I want to learn assembly.
As others have said, .file, .text, ... are assembler directives and .LFB0, .LFE0 are local labels. The only instructions in the generated code are:
pushl %ebp
movl %esp, %ebp
movl $0, %eax
popl %ebp
ret
The first two instructions are the function prologue. The caller's frame pointer is stored on the stack and then updated. The next instruction stores 0 in the eax register (the i386 ABI states that integer return values are returned via the eax register). The last two instructions are the function epilogue. The frame pointer is restored, and then the function returns to its caller via the ret instruction.
If you compile your code with -O3 -fomit-frame-pointer, the code will be compiled to just two instructions:
xorl %eax,%eax
ret
The first sets eax to 0 (the xor only takes two bytes to encode, while movl $0,%eax takes 5 bytes), and the second is the ret instruction. The frame pointer manipulation is there to ease debugging (it is possible to get a backtrace without it, but it is more difficult).
.file, .text, etc are assembler directives.
.LFB0, .LFE0 are local labels, which are normally used as branch destinations within a function.
As for the size, there are really only a few actual instructions - most of the above listing consists of directives, etc. For future reference you might also want to turn up the optimisation level to remove otherwise redundant instructions, e.g. gcc -Wall -O3 -S ....
It's just that there's a lot going on behind your simple program.
If you intend to read assembler output, by no means compile C++. Use plain C; the output is far clearer, for a number of reasons.