Meaning of this=<optimized out> in GDB - c++

I understand the general concept of using optimization flags like -O2 and ending up with things optimized out; that makes sense. But what does it mean for the 'this' function parameter in a GDB frame to be optimized out? Does it mean the use of an object was determined to be entirely pointless, and that it, along with the following function call, was elided from existence? Is it indicative of the function having been inlined, or of the function call having been elided?
How would I go about investigating further? This occurs with both -O0 and -Og.
If it makes any difference, this is with a process running on ARM. I'm doing remote debugging using GNU gdbserver (GDB) 7.12.1.20170417-git and 'gdb-multiarch' GNU gdb (Ubuntu 8.1.1-0ubuntu1) 8.1.1.

But what does it mean for the 'this' function parameter in a gdb frame to be optimized out?
It means that GDB doesn't have sufficient debug info to understand the current value of this.
It could happen for two reasons:
1. the compiler failed to emit the relevant debug info, or
2. the info is there, but GDB failed to understand it.
GCC used to do (1) a lot at -O2 and higher optimization levels, but that was significantly improved around 2015-2016. I have never seen <optimized out> with GCC at -O0.
Clang still does (1) with -O2 and above on x86_64 in 2022, but again I've never seen it do that at -O0.
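As a minimal sketch of how this can show up (a made-up example, not from the original question): with a trivially inlinable member function, a recent g++ at -O2 will usually inline the call, and depending on the quality of the emitted debug info a breakpoint inside it may show this=<optimized out>.

struct Counter {
    int n = 0;
    void bump() { ++n; }        // trivially inlinable; likely folded into main() at -O2
};

int main() {
    Counter c;
    for (int i = 0; i < 1000; ++i)
        c.bump();               // try: g++ -O2 -g, then "break Counter::bump" and "print this"
    return c.n;
}

Whether GDB can still reconstruct this at that point depends on the compiler version and target; at -O0 it should always be available.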
How would I go about investigating further?
You can run readelf --debug-dump ./a.out and see what info is present in the binary. Beware -- there is a lot of info, and making sense of it requires understanding of what's supposed to be there.
Or you could file a bugzilla issue with exact compiler and debugger versions and compilation command, attach a small binary, and hope that someone will look.
But first make sure you still get this behavior from the latest released version of GCC and GDB (or the current tip-of-trunk versions if you can build them).
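For a more targeted look (Foo::bar is a placeholder for the function whose frame shows the problem), you can first ask GDB how it thinks this is located:

(gdb) frame 0
(gdb) info address this          # the location expression GDB uses for 'this'
(gdb) info scope Foo::bar        # every symbol GDB knows about in that scope

On the readelf side, the thing to look for under the function's DW_TAG_subprogram entry is a DW_TAG_formal_parameter named this (usually marked DW_AT_artificial): if it has no DW_AT_location, or its location list does not cover the current PC, GDB reports <optimized out>.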

Related

Assembler Messages: no such instruction when Compiling C++

I am attempting to compile C++ code using gcc/5.3 on Scientific Linux release 6.7, but I keep getting the following errors whenever I run my Makefile:
/tmp/ccjZqIED.s: Assembler messages:
/tmp/ccjZqIED.s:768: Error: no such instruction: `shlx %rax,%rdx,%rdx'
/tmp/ccjZqIED.s:1067: Error: no such instruction: `shlx %rax,%rdx,%rdx'
/tmp/ccjZqIED.s: Assembler messages:
/tmp/ccjZqIED.s:6229: Error: no such instruction: `mulx %r10,%rcx,%rbx'
/tmp/ccjZqIED.s:6248: Error: no such instruction: `mulx %r13,%rcx,%rbx'
/tmp/ccjZqIED.s:7109: Error: no such instruction: `mulx %r10,%rcx,%rbx'
/tmp/ccjZqIED.s:7128: Error: no such instruction: `mulx %r13,%rcx,%rbx'
I've attempted to follow the advice from this question with no change to my output:
Compile errors with Assembler messages
My compiler options are currently:
CXXFLAGS = -g -Wall -O0 -pg -std=c++11
Does anyone have any idea what could be causing this?
This means that GCC is outputting an instruction that your assembler doesn't support. Either it comes from inline asm in the source code, or it shouldn't happen at all, which suggests you compiled GCC on a different machine with a newer assembler and then copied it to another machine where it doesn't work properly.
Assuming those instructions aren't used explicitly in an asm statement, you should be able to tell GCC not to emit them with a suitable flag such as -mno-avx (shlx and mulx are actually BMI2 instructions, so -mno-bmi2 is the flag that targets them), or whatever flag is appropriate to disable use of those particular instructions.
Jonathan Wakely's answer is correct in that the assembler invoked by your compiler does not understand the assembly code your compiler generates.
As to why that happens, there are multiple possibilities:
1. You installed the newer compiler by hand without also updating your assembler.
2. Your compiler generates 64-bit instructions, but your assembler is limited to 32-bit ones for some reason.
Disabling AVX (-mno-avx) is unlikely to help, because those instructions are not explicitly requested either -- there is no -march in the quoted CXXFLAGS. If it did help, then you did not show us all of the compiler flags -- it would have been best if you had simply included the entire compiler command line.
If my suspicion in 1. above is correct, then you should build and/or install the latest binutils package, which will provide an as that understands these instructions, among other things. You would then need to rebuild the compiler with the --with-as=/path/to/the/updated/as flag passed to configure.
If your Linux installation is 32-bit only (suspicion 2.), then you should not be generating 64-bit binaries at all. It is possible, but not trivial...
Do post the output of uname -a and your entire compiler command-line leading to the above error-messages.
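To check suspicion 1., it can also help to confirm which as your gcc actually runs and how old it is (exact paths and output will differ on your system):

g++ -print-prog-name=as     # path of the assembler the compiler will invoke
as --version                # binutils version; the stock Scientific Linux 6 binutils is likely too old for shlx/mulx (BMI2)
uname -m                    # i686 vs x86_64 userland, for suspicion 2.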

-mimplicit-it compiler flag not recognized

I am attempting to compile a C++ library for a Tegra TK1. The library links to TBB, which I pulled using the package manager. During compilation I got the following error
/tmp/cc4iLbKz.s: Assembler messages:
/tmp/cc4iLbKz.s:9541: Error: thumb conditional instruction should be in IT block -- `strexeq r2,r3,[r4]'
A bit of googling and this question led me to try adding -mimplicit-it=thumb to CMAKE_CXX_FLAGS, but the compiler doesn't recognize it.
I am compiling on the Tegra with kernel 3.10.40-grinch-21.3.4, using the gcc 4.8.4 compiler (that's what comes back when I type c++ -v).
I'm not sure what the initial error message means, though I think it has something to do with the linked TBB library rather than the source I'm compiling. The problem with the fix is also mysterious. Can anyone shed some light on this?
-mimplicit-it is an option to the assembler, not to the compiler. Thus, in the absence of specific assembler flags in your makefile (which you probably don't have, given that you don't appear to be using a separate assembler step), you'll need to use the -Wa option to the compiler to pass it through, i.e. -Wa,-mimplicit-it=thumb.
The source of the issue is almost certainly some inline assembly - possibly from a static inline in a header file if you're really only linking pre-built libraries - which contains conditionally-executed instructions (I'm going to guess it's something like a cmpxchg implementation). Since your toolchain could well be configured to compile to the Thumb instruction set by default - which requires a preceding it (If-Then) instruction to set up conditional instructions - another alternative might be to just compile with -marm (and/or remove -mthumb if appropriate) and sidestep the issue by not using Thumb at all.
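With the CMake setup from the question, passing the flag through would look roughly like this (where exactly you add it depends on your build; the -marm line is the alternative suggested above):

# CMakeLists.txt -- pass the assembler flag through the compiler driver
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wa,-mimplicit-it=thumb")
# or sidestep Thumb entirely:
# set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -marm")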
Adding the compiler option
-Wa,-mimplicit-it=thumb
should solve the problem.

This pointer is 0xfffffffc, potential causes?

I'm compiling the Crypto++ library at -O3. According to Undefined Behavior Sanitizer (UBsan) and Address Sanitizer (Asan), it's OK. The program runs fine at -O2 (and at -O3 on many platforms).
It's also OK according to Valgrind under -O2. At -O3, Valgrind dies with "Your program just tried to execute an instruction that Valgrind does not understand". I'm fairly certain that's because of SSE4 instructions and vectorization at -O3.
However, I'm catching a crash on some platforms with -O3. This particular machine is Fedora 22 i686, and it has GCC 5.2.1. The frame in question shows this=0xfffffffc:
Program received signal SIGSEGV, Segmentation fault.
0x0807be29 in CryptoPP::DL_GroupParameters_IntegerBased::GetEncodedElementSize
(this=0xfffffffc, reversible=0x1) at gfpcrypt.h:55
55 unsigned int GetEncodedElementSize(bool reversible) const {return GetModulus().ByteCount();}
The best I can tell, there's nothing located around that address:
(gdb) info shared
From To Syms Read Shared Object Library
0xb7fdd860 0xb7ff6b30 Yes (*) /lib/ld-linux.so.2
0xb7eb63d0 0xb7f7a344 Yes (*) /lib/libstdc++.so.6
0xb7e005f0 0xb7e32bd8 Yes (*) /lib/libm.so.6
0xb7951060 0xb7980cc4 Yes (*) /lib/libubsan.so.0
0xb7932090 0xb7948001 Yes (*) /lib/libgcc_s.so.1
0xb7916840 0xb79238d1 Yes (*) /lib/libpthread.so.0
0xb775d3f0 0xb78a0b6b Yes (*) /lib/libc.so.6
0xb7741a90 0xb7742a31 Yes (*) /lib/libdl.so.2
I've seen this=0x00000000 if a static class object declared in one translation unit is used in another translation unit before initialization is complete. But I don't recall seeing 0xfffffffc in the past.
What are some potential reasons for this=0xfffffffc? Or how can I troubleshoot it further?
If you have a 32-bit machine, 0xfffffffc is ((int*)nullptr)-1. So perhaps you are taking the element just before a nil pointer (e.g. by wrongly using some reverse iterator; see the sketch at the end of this answer).
Use the bt or backtrace command of gdb to understand what has happened. I guess that the trouble is in the caller (or its caller, etc...)
Try also some other compiler (e.g. an older version of GCC, or several versions of Clang/LLVM). You could have some undefined behavior that your other tools did not detect as such. You need to understand whether the bug is inside Crypto++ (or perhaps, but very unlikely, inside GCC itself; in that case report a bug on the GCC bugzilla). If you suspect the compiler, pass -S -fverbose-asm -fdump-tree-all -O3 to g++ to understand what GCC is doing (this will dump hundreds of files, including the generated .s assembler code).
Ask also on the Crypto++ mailing lists; perhaps report the bug on the Crypto++ bug tracker. Test with other versions or a snapshot of that library.
BTW, I'm not sure that -fsanitize=undefined or -fsanitize=address should be used with -O3; I guess that they are more suitable with -O0 -g or -Og -g
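To make the reverse-iterator scenario concrete, here is a made-up reduction (not taken from Crypto++) showing how a member function can end up with this just below null:

#include <vector>

struct Elem {
    int v = 0;
    int get() const { return v; }   // 'this' is whatever the caller dereferenced
};

int main() {
    std::vector<Elem> vec;          // empty: the internal buffer is typically a null pointer
    auto rit = vec.rbegin();        // equal to rend() for an empty vector
    return rit->get();              // UB: dereferences one element *before* the buffer,
                                    // i.e. this == 0xfffffffc on a 32-bit build (4-byte Elem)
}

This is undefined behavior, so the exact address depends on the standard library implementation, but it shows one plausible way to arrive at a this of nullptr minus one element.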

Trace gcc compilation and what code slows it down

I want to find out what code causes slow compilation times in gcc. I previously had code that compiled slowly, and someone told me the command-line switch that makes gcc print each step as it compiles, including each function/variable/symbol and so on. That helped a lot (I could literally see in the console where gcc chokes), but I forgot what the switch was.
I found it (from the gcc man page):
-Q
Makes the compiler print out each function name as it is compiled, and print some statistics about each pass when it finishes.
See also this answer to a quite similar question.
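As a concrete (hypothetical) invocation, compiling the slow translation unit by hand with -Q added lets you watch the function names go by and see where the compiler stalls:

g++ -Q -O2 -std=c++11 -c slow_file.cpp     # slow_file.cpp is a placeholder for your file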
You very probably want to invoke GCC with -time (which reports the time spent by each subprocess, such as cc1 or cc1plus, the compiler proper started by the gcc or g++ command) or, more probably, -ftime-report (which shows the time spent in each internal phase or pass of the GCC compiler). Don't forget that optimization, debugging, and warning flags (e.g. -Wall -O -g) all slow down the compilation.
You'll learn that for C programs, parsing is a small fraction of the compilation time as soon as you ask for some optimization, e.g. -O1 or -O2. (This is less true for C++, where parsing can take half of the time, notably since template expansion is counted as parsing.)
Empirically, what slows down GCC is very long function bodies. It is better to have 50 functions of 1000 lines each than a single function of 50000 lines (and this can happen in programs that generate some of their C++ code, e.g. RefPerSys or, as of spring 2021, Bismon).
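For example (the file name is again a placeholder), both reports can be obtained in one run:

g++ -time -ftime-report -O2 -c slow_file.cpp

-time prints how long each subprocess (cc1plus, as, ...) took, while -ftime-report prints a table of the compiler's internal passes with the time spent in each.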
Try the -v (verbose) compilation.
See this link:
http://www.network-theory.co.uk/docs/gccintro/gccintro_75.html
edit:
I understand. Maybe this will help:
gcc -fdump-tree-all -fdump-rtl-all
and the like (-fdump-passes). See here: http://fizz.phys.dal.ca/~jordan/gcc-4.0.1/gcc/Debugging-Options.html

Find out compilation optimization flag from executable

I have an executable here without knowing its build environment, other than the assumption that gcc/g++ was used to build it.
Is there a way to find out the optimization flag used during compilation (like O0, O2, ...)?
All means are welcome, whether by analyzing the binary or via some debugging test in gdb (if we assume the -g flag was available during compilation).
If you are lucky, the command line is present in the executable file itself, depending on the operating system and file format used. If it is an ELF file, try dumping its contents using objdump from GNU binutils.
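One concrete thing to try (assuming the binary was built with -g by a reasonably recent GCC, which by default records its command-line options in the DWARF DW_AT_producer string; the .GNU.command.line section only exists if -frecord-gcc-switches was used):

readelf --debug-dump=info ./a.out | grep -m1 -A2 DW_AT_producer
readelf -p .GNU.command.line ./a.out

If either of these shows the producer/switches string, the -O level used at compile time is right there (note that each compilation unit records its own producer, so different files may have been built with different flags).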
I really don't know if this can help, but checking for -O0 is quite easy in the disassembly (objdump -d), since the generated code has no optimization at all and adds a few instructions to simplify debugging.
Typically on x86, the prologue of a function includes setting up the frame pointer (for backtraces, I presume). So if you locate, for example, the main function, you should see something like:
... main:
... push %rbp
... mov %rsp,%rbp
And you should see this "pattern" at almost every beginning of the functions.
For other targets (I don't know what your target platform is), you should see more or less similar assembly sequences in the prologues or before the function calls.
For other optimization levels, things are way way trickier.
Sorry for remaining vague and not answering the entire question... Just hoping it will help.