Include of iostream leads to a different binary - C++

Compiling the following code
int main() {
return 0;
}
gives the assembly
main:
xorl %eax, %eax
ret
https://gcc.godbolt.org/z/oQvRDd
If iostream is now included
#include <iostream>
int main() {
return 0;
}
this assembly is created.
main:
xorl %eax, %eax
ret
_GLOBAL__sub_I_main:
subq $8, %rsp
movl $_ZStL8__ioinit, %edi
call std::ios_base::Init::Init() [complete object constructor]
movl $__dso_handle, %edx
movl $_ZStL8__ioinit, %esi
movl $_ZNSt8ios_base4InitD1Ev, %edi
addq $8, %rsp
jmp __cxa_atexit
Full optimization is turned on (-O3).
https://gcc.godbolt.org/z/EtrEX8
Can someone explain why including an unused header changes the binary? And what is _GLOBAL__sub_I_main?

Each translation unit that includes <iostream> contains its own copy of an ios_base::Init object:
static ios_base::Init __ioinit;
This object initializes the standard streams (std::cout and its friends). The technique is called a Schwarz counter (also known as a nifty counter), and it ensures that the standard streams are always initialized before their first use (provided the <iostream> header has been included).
The function _GLOBAL__sub_I_main is code the compiler generates for each translation unit to call the constructors of the global objects in that translation unit; it also arranges, via __cxa_atexit, for the corresponding destructors to be invoked at exit. This code is run by the C++ standard library start-up code before main is called.
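To make the mechanism concrete, here is a minimal sketch of the Schwarz counter idiom. The names StreamInitializer, stream_init.h and the counter are made up for illustration; the real <iostream> machinery is more elaborate, but the shape is the same:
// stream_init.h -- every including translation unit gets its own
// static initializer object, just like __ioinit in <iostream>.
#ifndef STREAM_INIT_H
#define STREAM_INIT_H

struct StreamInitializer {
    StreamInitializer();   // constructs the "streams" on first inclusion
    ~StreamInitializer();  // tears them down when the last copy goes away
};

static StreamInitializer stream_initializer;

#endif

// stream_init.cpp -- the counter lives in exactly one translation unit.
#include "stream_init.h"

static int counter = 0;

StreamInitializer::StreamInitializer() {
    if (counter++ == 0) {
        // construct the stream objects here (the real library uses placement new)
    }
}

StreamInitializer::~StreamInitializer() {
    if (--counter == 0) {
        // flush and destroy the stream objects here
    }
}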

Including the <iostream> header has the effect of adding the definition of a static std::ios_base::Init object. The constructor of this static object initializes the standard stream objects std::cout, std::cerr and so forth.
This is done to avoid the static initialization order fiasco: it ensures the stream objects are initialized before any static object in another translation unit tries to use them.
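As an illustration of the problem being avoided, consider a hypothetical other.cpp with its own global object. Without the per-translation-unit Init object, whether std::cout were already constructed when this constructor runs would depend on the unspecified order of dynamic initialization across translation units:
// other.cpp -- a hypothetical second translation unit with its own global
#include <iostream>

struct Logger {
    Logger() {
        // Safe only because including <iostream> above injects an
        // ios_base::Init object into this translation unit, guaranteeing
        // that std::cout is constructed before this constructor runs.
        std::cout << "logger ready\n";
    }
};

static Logger logger;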

Related

Throwing exception causes SIGSEGV on OSX 10.11.4 + clang

Given the following code:
#include <stdexcept>
#include <string>
using namespace std;
class exception_base : public runtime_error {
public:
    exception_base()
        : runtime_error(string()) { }
};

class my_exception : public exception_base {
public:
};

int main() {
    throw my_exception();
}
This works fine on GNU/Linux and Windows, and it used to work fine on OS X before the latest update to version 10.11.4. By "fine" I mean that, since nothing catches the exception, std::terminate is called.
However, on OSX 10.11.4 using clang (LLVM 7.3.0), the program crashes with segmentation fault. The stack trace is not helpful:
Program received signal SIGSEGV, Segmentation fault.
0x0000000100000ad1 in main () at test.cpp:17
17 throw my_exception();
(gdb) bt
#0 0x0000000100000ad1 in main () at test.cpp:17
(gdb)
Nor is what valgrind has to say about this:
==6500== Process terminating with default action of signal 11 (SIGSEGV)
==6500== General Protection Fault
==6500== at 0x100000AD1: main (test.cpp:17)
I don't think that code violates the standard in any way. Am I missing something here?
Note that even if I add a try-catch around the throw, the code still crashes with SIGSEGV.
If you look at the disassembly, you will see that a general-protection (GP) exception is occurring on an SSE movaps instruction:
a.out`main:
0x100000ad0 : pushq %rbp
0x100000ad1 : movq %rsp, %rbp
0x100000ad4 : subq $0x20, %rsp
0x100000ad8 : movl $0x0, -0x4(%rbp)
0x100000adf : movl $0x10, %eax
0x100000ae4 : movl %eax, %edi
0x100000ae6 : callq 0x100000dea ; symbol stub for: __cxa_allocate_exception
0x100000aeb : movq %rax, %rdi
0x100000aee : xorps %xmm0, %xmm0
-> 0x100000af1 : movaps %xmm0, (%rax)
0x100000af4 : movq %rdi, -0x20(%rbp)
0x100000af8 : movq %rax, %rdi
0x100000afb : callq 0x100000b40 ; my_exception::my_exception
...
Before the my_exception::my_exception() constructor is even called, a movaps instruction is used to zero out the block of memory returned by __cxa_allocate_exception(size_t). However, this pointer (0x0000000100103498 in my case) is not guaranteed to be 16-byte aligned. When the source or destination operand of a movaps instruction is a memory operand, the operand must be aligned on a 16-byte boundary or else a GP exception is generated.
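The alignment rule itself is easy to reproduce with SSE intrinsics. The snippet below is only an illustration of that rule (it deliberately invokes undefined behavior and should be built without optimization so the stores are not removed); it is not the compiler-generated code discussed above:
#include <xmmintrin.h>

int main() {
    alignas(16) char buffer[32];
    float *p = reinterpret_cast<float *>(buffer + 8); // only 8-byte aligned
    __m128 zero = _mm_setzero_ps();
    _mm_storeu_ps(p, zero); // unaligned store (movups): fine
    _mm_store_ps(p, zero);  // aligned store (movaps): typically faults with SIGSEGV
    return 0;
}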
One way to fix the problem temporarily is to compile without SSE instructions (-mno-sse). It's not an ideal solution because SSE instructions can improve performance.
I think that this is related to http://reviews.llvm.org/D18479 :
r246985 made changes to give a higher alignment for exception objects on the grounds that Itanium says _Unwind_Exception should be "double-word" aligned and the structure is normally declared with __attribute__((aligned)) guaranteeing 16-byte alignment. It turns out that libc++abi doesn't declare the structure with __attribute__((aligned)) and therefore only guarantees 8-byte alignment on 32-bit and 64-bit platforms. This caused a crash in some cases when the backend emitted SIMD store instructions that requires 16-byte alignment (such as movaps).
This patch makes ItaniumCXXABI::getAlignmentOfExnObject return an 8-byte alignment on Darwin to fix the crash.
.. which patch was committed on March 31, 2016 as r264998.
There's also https://llvm.org/bugs/show_bug.cgi?id=24604 and https://llvm.org/bugs/show_bug.cgi?id=27208 which appear related.
UPDATE: I installed Xcode 7.3.1 (released yesterday) and the problem appears to be fixed; the generated assembly is now:
a.out`main:
0x100000ac0 : pushq %rbp
0x100000ac1 : movq %rsp, %rbp
0x100000ac4 : subq $0x20, %rsp
0x100000ac8 : movl $0x0, -0x4(%rbp)
0x100000acf : movl $0x10, %eax
0x100000ad4 : movl %eax, %edi
0x100000ad6 : callq 0x100000dea ; symbol stub for: __cxa_allocate_exception
0x100000adb : movq %rax, %rdi
0x100000ade : movq $0x0, 0x8(%rax)
0x100000ae6 : movq $0x0, (%rax)
0x100000aed : movq %rdi, -0x20(%rbp)
0x100000af1 : movq %rax, %rdi
0x100000af4 : callq 0x100000b40 ; my_exception::my_exception
...

Devirtualizing a non-final method

Suppose I have a class setup like the following:
#include <cstdio>

class A {
public:
    virtual void foo() { printf("default implementation\n"); }
};

class B : public A {
public:
    void foo() override { printf("B implementation\n"); }
};

class C : public B {
public:
    inline void foo() final { A::foo(); }
};

int main(int argc, char **argv) {
    auto c = new C();
    c->foo();
}
In general, can the call to c->foo() be devirtualized and inlined down to the printf("default implementation") call? Is this guaranteed, for example in GCC? My intuition is that the qualified call A::foo() inside C::foo() is non-virtual because the class is named explicitly, and so the printf will always be inlined.
You're asking about optimizations, so in general we have to pick a compiler and try it. We can look at the assembly output to determine if the compiler is optimizing the way you want.
Let's try GCC 5.2:
.LC0:
.string "B implementation"
B::foo():
movl $.LC0, %edi
jmp puts
.LC2:
.string "default implementation"
A::foo():
movl $.LC2, %edi
jmp puts
C::foo():
movl $.LC2, %edi
jmp puts
main:
subq $8, %rsp
movl $8, %edi
call operator new(unsigned long)
movl $.LC2, %edi
call puts
xorl %eax, %eax
addq $8, %rsp
ret
And let's try out Clang 3.6:
main: # #main
pushq %rax
movl $.Lstr, %edi
callq puts
xorl %eax, %eax
popq %rdx
retq
.Lstr:
.asciz "default implementation"
In both cases you can see pretty clearly that all of the virtual calls have been inlined.
"Is this guaranteed, for example in gcc?"
If the compiler is confident about what the actual type of an object is, then I suspect this optimization will always happen. I don't have anything to back up this claim though.
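One way to give the compiler that certainty, reusing the A/B/C classes from the question, is to construct the object with automatic storage so its dynamic type is obvious at the call site. In my tests GCC and Clang fold this to the same direct puts call at -O2, but that is observed behavior, not a guarantee:
// (assumes the class definitions from the question plus #include <cstdio>)
int main() {
    C c;       // static and dynamic type are both C, no heap allocation
    c.foo();   // devirtualized and inlined: prints "default implementation"
}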

When can/will a function be inlined in C++? Can inline behavior be forced?

I am trying to get the expected behavior when I use the keyword inline.
I tried calling the function from different files, templating the function, and using different implementations of the inline function, but whatever I do, the compiler never inlines the function.
So in which cases exactly will the compiler choose to inline a function in C++?
Here is the code I have tried :
inline auto Add(int i) -> int {
    return i+1;
}

int main() {
    Add(1);
    return 0;
}
In this case, I get:
Add(int):
pushq %rbp
movq %rsp, %rbp
movl %edi, -4(%rbp)
movl -4(%rbp), %eax
addl $1, %eax
popq %rbp
ret
main:
pushq %rbp
movq %rsp, %rbp
movl $1, %edi
call Add(int)
movl $0, %eax
popq %rbp
ret
Or again,
template<typename T>
inline auto Add(const T &i) -> decltype(i+1) {
    return i+1;
}

int main() {
    Add(1);
    return 0;
}
And I got:
main:
pushq %rbp
movq %rsp, %rbp
subq $16, %rsp
movl $1, -4(%rbp)
leaq -4(%rbp), %rax
movq %rax, %rdi
call decltype ({parm#1}+(1)) Add<int>(int const&)
movl $0, %eax
leave
ret
decltype ({parm#1}+(1)) Add<int>(int const&):
pushq %rbp
movq %rsp, %rbp
movq %rdi, -8(%rbp)
movq -8(%rbp), %rax
movl (%rax), %eax
addl $1, %eax
popq %rbp
ret
I used https://gcc.godbolt.org/ to get the assembly code here, but I also tried on my machine with clang and gcc (with and without optimization options).
EDIT:
OK, I was missing something with the optimization options. If I set GCC to use the -O3 optimization level, my function is inlined.
But still, how does GCC, or another compiler, know when it is better to inline a function or not?
As a rule, a function is guaranteed to be inlined only if you specify:
__attribute__((always_inline))
e.g. (from the GCC documentation):
inline void foo (const char) __attribute__((always_inline));
Though it is almost never a good idea to force your compiler to inline your code.
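For completeness, here is a sketch of the attribute applied to the Add function from the question (GCC/Clang syntax). With it, the call is expanded at the call site even at low optimization levels, although an out-of-line copy of the function may still be emitted:
__attribute__((always_inline)) inline int Add(int i) {
    return i + 1;
}

int main() {
    return Add(1) - 2;   // Add's body is expanded here instead of being called
}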
You may set a high optimization level (through the -O flag) to achieve maximum inlining, but for more details please see the GCC documentation.
Inlining is actually controlled by a number of parameters. You can set them using the -finline-* options. You can have a look at them here
By the way, the trailing-return-type syntax you used:
inline auto Add(int i) -> int {
is just another way of writing the classic form:
inline int Add(int i) {
Both declare exactly the same function.

What's the easiest way to write an instrumenting profiler for C/C++?

I've seen a few tools like Pin and DynInst that do dynamic code manipulation in order to instrument code without having to recompile. These seem like heavyweight solutions to what should be a straightforward problem: retrieving accurate function call data from a program.
I want to write something such that in my code, I can write
void SomeFunction() {
    StartProfiler();
    ...
    StopProfiler();
}
and post-execution, retrieve data about what functions were called between StartProfiler() and StopProfiler() (the whole call tree) and how long each of them took.
Preferably I could read out debug symbols too, to get function names instead of addresses.
Here's one interesting hint at a solution I discovered.
gcc (and llvm >= 3.0) has a -pg option when compiling, which is traditionally for gprof support. When you compile your code with this flag, the compiler adds a call to the function mcount at the beginning of every function definition. You can override this function, but you'll need to do it in assembly; otherwise the mcount function you define will itself be instrumented with a call to mcount, and you'll quickly run out of stack space before main even gets called.
Here's a little proof of concept:
foo.c:
#include <stdio.h>

int total_calls = 0;   /* incremented by the mcount override in foo.s */

void foo(int c) {
    if (c > 0)
        foo(c-1);
}

int main() {
    foo(4);
    printf("%d\n", total_calls);
}
foo.s:
.globl mcount
mcount:
movl _total_calls(%rip), %eax
addl $1, %eax
movl %eax, _total_calls(%rip)
ret
compile with clang -pg foo.s foo.c -o foo. Result:
$ ./foo
6
That's 1 for main and 5 for foo (the initial call plus four recursive calls); printf comes from the C library, which is not compiled with -pg, so it is not counted.
Here's the asm that clang emits for foo:
_foo:
pushq %rbp
movq %rsp, %rbp
subq $16, %rsp
movl %edi, -8(%rbp) ## 4-byte Spill
callq mcount
movl -8(%rbp), %edi ## 4-byte Reload
...
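If you want a call tree with timings rather than a bare counter, a commonly used alternative to -pg/mcount is -finstrument-functions (supported by GCC and Clang). Its hooks can be written in plain C, because the no_instrument_function attribute keeps them from instrumenting themselves. The sketch below only prints addresses; recording timestamps and resolving names (e.g. with dladdr) is left out:
#include <stdio.h>

/* Called on entry to and exit from every instrumented function. */
__attribute__((no_instrument_function))
void __cyg_profile_func_enter(void *this_fn, void *call_site) {
    fprintf(stderr, "enter %p (called from %p)\n", this_fn, call_site);
}

__attribute__((no_instrument_function))
void __cyg_profile_func_exit(void *this_fn, void *call_site) {
    fprintf(stderr, "exit  %p\n", this_fn);
}

/* Build with: cc -finstrument-functions your_code.c profiler_hooks.c */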

C++ function pointer inlining

I know I can pass a function pointer as a template parameter and get a call to it inlined, but I wondered whether any compilers these days can inline an 'obvious' inlineable function like:
inline static void Print()
{
std::cout << "Hello\n";
}
....
void (*func)() = Print;
func();
Under Visual Studio 2008 it's clever enough to reduce it to a direct call instruction, so it seems a shame it can't take it a step further.
Newer releases of GCC (4.4 and up) have an option named -findirect-inlining. If GCC can prove to itself that the function pointer is constant then it makes a direct call to the function or inlines the function entirely.
GNU's g++ 4.5 inlines it for me starting at optimization level -O1
main:
subq $8, %rsp
movl $6, %edx
movl $.LC0, %esi
movl $_ZSt4cout, %edi
call _ZSt16__ostream_insertIcSt11char_traitsIcEERSt13basic_ostreamIT_T0_E
movl $0, %eax
addq $8, %rsp
ret
where .LC0 is the .string "Hello\n".
To compare, with no optimization, g++ -O0, it did not inline:
main:
pushq %rbp
movq %rsp, %rbp
subq $16, %rsp
movq $_ZL5Printv, -8(%rbp)
movq -8(%rbp), %rax
call *%rax
movl $0, %eax
leave
ret
Well, the compiler doesn't really know whether that variable will be overwritten somewhere or not (maybe in another thread?), so it errs on the side of caution and keeps it as a function call.
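A variant that removes that uncertainty is to make the pointer itself a compile-time constant. In my tests GCC and Clang fold the indirect call at -O1 and above (std::puts is used here just to keep the output short); this is still only an optimization, not a guarantee:
#include <cstdio>

inline static void Print() {
    std::puts("Hello");
}

int main() {
    constexpr void (*func)() = Print;  // the pointer can never change
    func();                            // emitted as a direct call / fully inlined
}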
I just checked in VS2010 in a release build and it didn't get inlined.
By the way, decorating the function as inline doesn't help here: once you take the address of a function, the compiler has to emit a stand-alone definition of it anyway, so the inline hint on its own can't make the call disappear.
Edit: note however that while your function didn't get inlined, the variable IS gone. In the disassembly the call uses a direct address; it doesn't load the variable into a register and call through it.