Have a look at this piece of code:
#include <iostream>
#include <string>

void foo(int (*f)()) {
    std::cout << f() << std::endl;
}

void foo(std::string (*f)()) {
    std::string s = f();
    std::cout << s << std::endl;
}

int main() {
    auto bar = [] () -> std::string {
        return std::string("bla");
    };
    foo(bar);
    return 0;
}
Compiling it with
g++ -o test test.cpp -std=c++11
leads to:
bla
as it should. Compiling it with
clang++ -o test test.cpp -std=c++11 -stdlib=libc++
leads to:
zsh: illegal hardware instruction ./test
And compiling it with
clang++ -o test test.cpp -std=c++11 -stdlib=libstdc++
also leads to:
zsh: illegal hardware instruction ./test
Clang/GCC Versions:
clang version 3.2 (tags/RELEASE_32/final)
Target: x86_64-pc-linux-gnu
Thread model: posix
gcc version 4.7.2 (Gentoo 4.7.2-r1 p1.5, pie-0.5.5)
Does anyone have any suggestions as to what is going wrong?
Thanks in advance!
Yes, it is a bug in Clang++. I can reproduce it with Clang 3.2 on i386-pc-linux-gnu.
And now some random analysis...
I've found that the bug is in the conversion from lambda to pointer-to-function: the compiler creates a kind of thunk with the appropriate signature that calls the lambda, but it ends with the instruction ud2 instead of ret.
The instruction ud2, as you all probably know, is an instruction that explicitly raises the "Invalid Opcode" exception. That is, an instruction intentionally left undefined.
Take a look at the disassembly; this is the thunk function:
main::$_0::__invoke():
pushl %ebp
movl %esp, %ebp
subl $8, %esp
movl 8(%ebp), %eax
movl %eax, (%esp)
movl %ecx, 4(%esp)
calll main::$_0::operator()() const ; this calls the real lambda
subl $4, %esp
ud2 ; <<<-- What the...!!!
So a minimal example of the bug is simply:
#include <string>

int main() {
    std::string (*f)() = [] () -> std::string {
        return "bla";
    };
    f();
    return 0;
}
Curiously enough, the bug doesn't happen if the return type is a simple type, such as int. Then the generated thunk is:
main::$_0::__invoke():
pushl %ebp
movl %esp, %ebp
subl $8, %esp
movl %eax, (%esp)
calll main::$_0::operator()() const
addl $8, %esp
popl %ebp
ret
I suspect that the problem is in the forwarding of the return value. If it fits in a register, such as eax, all goes well. But if it is a big object, such as std::string, it is returned on the stack; the compiler gets confused and emits the ud2 in desperation.
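For contrast, a sketch of the int-returning variant (the value 42 is arbitrary); here the result travels back in eax, and on these clang 3.2 builds the generated thunk ends with a proper ret:
int main() {
    int (*f)() = [] () -> int {
        return 42;
    };
    return f(); // returned through a register, so no ud2
}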
This is most likely a bug in clang 3.2. I can't reproduce the crash with clang trunk.
Related
When I use AddressSanitizer (clang v3.4) to detect memory leaks, I found that using any -O option (other than -O0) always leads to a no-leak-detected result.
The code is simple:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main()
{
    int* array = (int *)malloc(sizeof(int) * 100);
    for (int i = 0; i < 100; i++) // Initialize
        array[i] = 0;
    return 0;
}
When compiling with -O0,
clang -fsanitize=address -g -O0 main.cpp
it detects the memory leak correctly:
==2978==WARNING: Trying to symbolize code, but external symbolizer is not initialized!
=================================================================
==2978==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 400 byte(s) in 1 object(s) allocated from:
#0 0x4652f9 (/home/mrkikokiko/sdk/MemoryCheck/a.out+0x4652f9)
#1 0x47b612 (/home/mrkikokiko/sdk/MemoryCheck/a.out+0x47b612)
#2 0x7fce3603af44 (/lib/x86_64-linux-gnu/libc.so.6+0x21f44)
SUMMARY: AddressSanitizer: 400 byte(s) leaked in 1 allocation(s).
However, when -O is added,
clang -fsanitize=address -g -O main.cpp
nothing is detected! And I found nothing about this in the official documentation.
This is because your code is completely optimized away. The resulting assembly is something like:
main: # #main
xorl %eax, %eax
retq
Without any call to malloc, there is no memory allocation... and therefore no memory leak.
In order to have AddressSanitizer detect the memory leak, you can either:
Compile with optimizations disabled, as Simon Kraemer mentioned in the comments.
Mark array as volatile, preventing the optimization (the resulting assembly is shown next, followed by a sketch of the change):
main: # #main
pushq %rax
movl $400, %edi # imm = 0x190
callq malloc # <<<<<< call to malloc
movl $9, %ecx
.LBB0_1: # =>This Inner Loop Header: Depth=1
movl $0, -36(%rax,%rcx,4)
movl $0, -32(%rax,%rcx,4)
movl $0, -28(%rax,%rcx,4)
movl $0, -24(%rax,%rcx,4)
movl $0, -20(%rax,%rcx,4)
movl $0, -16(%rax,%rcx,4)
movl $0, -12(%rax,%rcx,4)
movl $0, -8(%rax,%rcx,4)
movl $0, -4(%rax,%rcx,4)
movl $0, (%rax,%rcx,4)
addq $10, %rcx
cmpq $109, %rcx
jne .LBB0_1
xorl %eax, %eax
popq %rcx
retq
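For completeness, a sketch of that change (this reads "mark array as volatile" as qualifying the pointer variable itself; qualifying the pointed-to data with volatile int * would also keep the work alive):
#include <stdlib.h>

int main()
{
    // A volatile-qualified pointer: the compiler may no longer assume the
    // allocation is unused, so the malloc call (and the leak) survive -O.
    int * volatile array = (int *)malloc(sizeof(int) * 100);
    for (int i = 0; i < 100; i++) // Initialize
        array[i] = 0;
    return 0;
}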
Look into the generated code.
Both GCC and Clang actually know about the semantics of malloc, because on my Linux/Debian system <stdlib.h> contains
extern void *malloc (size_t __size) __THROW __attribute_malloc__ __wur;
and __attribute_malloc__ and __wur (and __THROW) are macros defined elsewhere. Read about Common Function Attributes in the GCC documentation; the Clang documentation says:
Clang aims to support a broad range of GCC extensions.
I strongly suspect that with -O the call to malloc is simply optimized away (removed entirely).
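As a rough illustration (the exact expansions vary between glibc versions, so treat these as assumptions, and my_alloc is just a hypothetical name), those macros boil down to GCC function attributes that you can also put on your own allocator:
#include <stddef.h>

/* Roughly what the glibc macros expand to:
 *   __attribute_malloc__ -> __attribute__ ((__malloc__))
 *   __wur                -> __attribute__ ((__warn_unused_result__))
 * With the malloc attribute, the compiler may assume the returned pointer
 * does not alias any other live pointer. */
extern void *my_alloc(size_t size)
    __attribute__ ((__malloc__, __warn_unused_result__));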
On my Linux/x86-64 machine using clang -O -S psbshdk.c (with clang 3.8) I am indeed getting:
.globl main
.align 16, 0x90
.type main,#function
main: # #main
.cfi_startproc
# BB#0:
xorl %eax, %eax
retq
.Lfunc_end0:
.size main, .Lfunc_end0-main
.cfi_endproc
The address sanitizer is working on the emitted binary (which won't contain any malloc call).
BTW, you could compile with clang -O -g and then use valgrind, or compile with clang -O -fsanitize=address -g. Both clang and gcc are able to optimize and still give some debug information (which might be "approximate" when optimizing a lot).
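For example, the valgrind route would look something like this (whether a leak is actually reported still depends on the malloc call surviving optimization, as discussed above):
clang -O -g main.cpp
valgrind --leak-check=full ./a.out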
Suppose I have a class setup like the following:
#include <cstdio>

class A {
public:
    virtual void foo() { printf("default implementation\n"); }
};

class B : public A {
public:
    void foo() override { printf("B implementation\n"); }
};

class C : public B {
public:
    inline void foo() final { A::foo(); }
};

int main(int argc, char **argv) {
    auto c = new C();
    c->foo();
}
In general, can the call to c->foo() be devirtualized and inlined down to the printf("default implementation") call? Is this guaranteed, for example in gcc? My intuition is that the call to A::foo() inside C::foo() is non-virtual because the class is named explicitly, and so the printf will always be inlined.
You're asking about optimizations, so in general we have to pick a compiler and try it. We can look at the assembly output to determine if the compiler is optimizing the way you want.
Let's try GCC 5.2:
.LC0:
.string "B implementation"
B::foo():
movl $.LC0, %edi
jmp puts
.LC2:
.string "default implementation"
A::foo():
movl $.LC2, %edi
jmp puts
C::foo():
movl $.LC2, %edi
jmp puts
main:
subq $8, %rsp
movl $8, %edi
call operator new(unsigned long)
movl $.LC2, %edi
call puts
xorl %eax, %eax
addq $8, %rsp
ret
And let's try out Clang 3.6:
main: # #main
pushq %rax
movl $.Lstr, %edi
callq puts
xorl %eax, %eax
popq %rdx
retq
.Lstr:
.asciz "default implementation"
In both cases you can see pretty clearly that all of the virtual calls have been devirtualized and inlined.
"Is this guaranteed, for example in gcc?"
If the compiler is confident about what the actual type of an object is, then I suspect this optimization will always happen. I don't have anything to back up this claim though.
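For contrast, here is a sketch (reusing A from the question, with a hypothetical factory make_object defined in some other translation unit) of a case where the compiler generally cannot devirtualize, because the dynamic type is unknown at the call site:
// Defined elsewhere; may return an A, a B, or a C.
A *make_object(int which);

void call_it(int which) {
    A *p = make_object(which);
    p->foo(); // dynamic type unknown here, so this normally stays a virtual call
}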
Environment Details:
Machine: Core i5 M540 processor running 64-bit CentOS in a virtual machine in VMware Player.
GCC: 4.8.2 built from source tar.
Issue:
I am trying to learn more about SIMD functions in C/C++ and for that I created the following helloworld program.
#include <iostream>
#include <pmmintrin.h>

int main(void) {
    __m128i a, b, c;
    a = _mm_set_epi32(1, 1, 1, 1);
    b = _mm_set_epi32(2, 3, 4, 5);
    c = _mm_add_epi32(a, b);
    std::cout << "Value of first int: " << c[0];
}
When I look at the assembly output generated with the following command, I do not see any SIMD instructions.
g++ -S -I/usr/local/include/c++/4.8.2 -msse3 -O3 hello.cpp
Sample of the assembly generated:
movl $.LC2, %esi
movl $_ZSt4cout, %edi
call _ZStlsISt11char_traitsIcEERSt13basic_ostreamIcT_ES5_PKc
movabsq $21474836486, %rsi
movq %rax, %rdi
call _ZNSo9_M_insertIxEERSoT_
xorl %eax, %eax
Please advise on the correct way of writing or compiling SIMD code.
Thank you!!
It looks like your compiler is optimizing away the _mm_*_epi32 calls, since all the values are known at compile time. Try taking all the relevant inputs from the user and see what happens (a sketch follows below).
Alternately, compile with -O0 instead of -O3 and see what happens.
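For instance, a sketch of the first suggestion (untested; _mm_set1_epi32 and _mm_cvtsi128_si32 are standard SSE2 intrinsics), where the operands come from the user and therefore cannot be folded at compile time:
#include <iostream>
#include <pmmintrin.h>

int main(void) {
    int x, y;
    std::cin >> x >> y;                   // operands unknown at compile time
    __m128i a = _mm_set1_epi32(x);        // broadcast x into all four lanes
    __m128i b = _mm_set1_epi32(y);
    __m128i c = _mm_add_epi32(a, b);
    // extract the lowest 32-bit lane of the vector result
    std::cout << "Value of first int: " << _mm_cvtsi128_si32(c) << std::endl;
    return 0;
}
Compiled with g++ -msse3 -O3, this should leave a paddd (plus the set/extract moves) in the assembly.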
I've seen a few tools like Pin and DynInst that do dynamic code manipulation in order to instrument code without having to recompile. These seem like heavyweight solutions to what should be a straightforward problem: retrieving accurate function-call data from a program.
I want to write something such that in my code, I can write
void SomeFunction() {
    StartProfiler();
    ...
    StopProfiler();
}
and post-execution, retrieve data about what functions were called between StartProfiler() and StopProfiler() (the whole call tree) and how long each of them took.
Preferably I could read out debug symbols too, to get function names instead of addresses.
Here's one interesting hint at a solution I discovered.
gcc (and clang/LLVM >= 3.0) has a -pg option when compiling, which is traditionally for gprof support. When you compile your code with this flag, the compiler adds a call to the function mcount at the beginning of every function definition. You can override this function, but you'll need to do it in assembly; otherwise the mcount you define will itself be instrumented with a call to mcount, and you'll quickly run out of stack space before main even gets called.
Here's a little proof of concept:
foo.c:
#include <stdio.h>

int total_calls = 0;

void foo(int c) {
    if (c > 0)
        foo(c-1);
}

int main() {
    foo(4);
    printf("%d\n", total_calls);
}
foo.s:
.globl mcount
mcount:
movl _total_calls(%rip), %eax
addl $1, %eax
movl %eax, _total_calls(%rip)
ret
Compile with clang -pg foo.s foo.c -o foo. Result:
$ ./foo
6
That's 1 for main and 5 for foo (foo(4) down through foo(0)); printf lives in libc, which isn't compiled with -pg, so it doesn't bump the counter.
Here's the asm that clang emits for foo:
_foo:
pushq %rbp
movq %rsp, %rbp
subq $16, %rsp
movl %edi, -8(%rbp) ## 4-byte Spill
callq mcount
movl -8(%rbp), %edi ## 4-byte Reload
...
I know I can pass a function pointer as a template parameter and get a call to it inlined, but I wondered whether any compilers these days can inline an 'obviously' inlineable function like:
#include <iostream>

inline static void Print()
{
    std::cout << "Hello\n";
}

....

void (*func)() = Print;
func();
Under Visual Studio 2008 it's clever enough to get it down to a direct call instruction, so it seems a shame it can't take it a step further.
Newer releases of GCC (4.4 and up) have an option named -findirect-inlining. If GCC can prove to itself that the function pointer is constant, then it makes a direct call to the function or inlines the function entirely.
GNU's g++ 4.5 inlines it for me starting at optimization level -O1:
main:
subq $8, %rsp
movl $6, %edx
movl $.LC0, %esi
movl $_ZSt4cout, %edi
call _ZSt16__ostream_insertIcSt11char_traitsIcEERSt13basic_ostreamIT_T0_E
movl $0, %eax
addq $8, %rsp
ret
where .LC0 is the .string "Hello\n".
For comparison, with no optimization (g++ -O0), it did not inline the call:
main:
pushq %rbp
movq %rsp, %rbp
subq $16, %rsp
movq $_ZL5Printv, -8(%rbp)
movq -8(%rbp), %rax
call *%rax
movl $0, %eax
leave
ret
Well, the compiler doesn't really know whether that variable will be overwritten somewhere or not (maybe in another thread?), so it errs on the side of caution and implements it as a function call.
I just checked in VS2010 in a release build and it didn't get inlined.
By the way, decorating the function as inline is of little use here: inline is only a non-binding hint, and once you take a function's address the compiler has to emit a stand-alone definition of it anyway.
Edit: note, however, that while your function didn't get inlined, the variable IS gone. In the disassembly the call uses a direct address; it doesn't load the variable into a register and call through it.
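If you want to make the optimizer's job easier, here is a sketch of the same setup with the pointer itself declared const, which removes the "could it be overwritten?" question; recent GCC and Clang releases will typically inline the call at -O1 and above, though nothing guarantees it:
#include <iostream>

inline static void Print()
{
    std::cout << "Hello\n";
}

int main()
{
    void (*const func)() = Print; // const pointer: provably never reassigned
    func();                       // the compiler can see the target and may inline it
    return 0;
}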