"this" pointer is 0xfffffffc, potential causes? - c++

I'm compiling the Crypto++ library at -O3. According to Undefined Behavior Sanitizer (UBSan) and Address Sanitizer (ASan), it's OK. The program runs fine at -O2 (and at -O3 on many platforms).
It's also OK according to Valgrind under -O2. At -O3, Valgrind dies with "Your program just tried to execute an instruction that Valgrind does not understand". I'm fairly certain that's because of SSE4 instructions and vectorization at -O3.
However, I'm catching a crash on some platforms at -O3. This particular machine is Fedora 22 i686, and it has GCC 5.2.1. The frame in question shows this=0xfffffffc:
Program received signal SIGSEGV, Segmentation fault.
0x0807be29 in CryptoPP::DL_GroupParameters_IntegerBased::GetEncodedElementSize
(this=0xfffffffc, reversible=0x1) at gfpcrypt.h:55
55 unsigned int GetEncodedElementSize(bool reversible) const {return GetModulus().ByteCount();}
The best I can tell, there's nothing located around that address:
(gdb) info shared
From To Syms Read Shared Object Library
0xb7fdd860 0xb7ff6b30 Yes (*) /lib/ld-linux.so.2
0xb7eb63d0 0xb7f7a344 Yes (*) /lib/libstdc++.so.6
0xb7e005f0 0xb7e32bd8 Yes (*) /lib/libm.so.6
0xb7951060 0xb7980cc4 Yes (*) /lib/libubsan.so.0
0xb7932090 0xb7948001 Yes (*) /lib/libgcc_s.so.1
0xb7916840 0xb79238d1 Yes (*) /lib/libpthread.so.0
0xb775d3f0 0xb78a0b6b Yes (*) /lib/libc.so.6
0xb7741a90 0xb7742a31 Yes (*) /lib/libdl.so.2
I've seen this=0x00000000 if a static class object declared in one translation unit is used in another translation unit before initialization is complete. But I don't recall seeing 0xfffffffc in the past.
What are some potential reasons for this=0xfffffffc? Or how can I troubleshoot it further?

If you have a 32-bit machine, 0xfffffffc is ((int*)nullptr) - 1. So perhaps you are taking the element before a null pointer (e.g. by wrongly using some reverse iterator, etc.).
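As a minimal sketch of that arithmetic (hypothetical code, not from Crypto++; the subtraction itself is undefined behavior and is shown only to explain where the address comes from):
#include <stdio.h>
int main(void) {
    int *p = (int *)0;          /* null pointer */
    p = p - 1;                  /* UB, but on common 32-bit targets this wraps to -4 */
    printf("%p\n", (void *)p);  /* typically prints 0xfffffffc when built with -m32 */
    return 0;
}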
Use the bt or backtrace command of gdb to understand what happened. I guess the trouble is in the caller (or its caller, etc.).
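For example, a generic session might look like this (the frame number is hypothetical): bt gives the full backtrace; then select the caller's frame and inspect its arguments and the object:
(gdb) bt
(gdb) frame 1
(gdb) info args
(gdb) print *this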
Try also some other compilers (e.g. an older version of GCC, and several versions of Clang/LLVM). You could have some undefined behavior that your other tools did not detect as such. You need to understand whether the bug is inside Crypto++ (or perhaps, though very unlikely, inside GCC itself; if so, report a bug on GCC Bugzilla). If you suspect the compiler, pass -S -fverbose-asm -fdump-tree-all -O3 to g++ to understand what GCC is doing (this will dump hundreds of files, including the generated .s assembler code).
Ask also on the Crypto++ mailing lists; perhaps report the bug on the Crypto++ bug tracker. Test with other versions or snapshots of that library.
BTW, I'm not sure that -fsanitize=undefined or -fsanitize=address should be used with -O3; I guess they are more suitable with -O0 -g or -Og -g.

Related

Meaning of this=<optimized out> in GDB

I understand the general concept of using optimization flags like -O2 and ending up with things optimized out; that makes sense. But what does it mean for the 'this' function parameter in a gdb frame to be optimized out? Does it mean the use of an object was determined to be entirely pointless, and that it and the following function call were elided from existence? Is it indicative of the function having been inlined, or of the function call having been elided?
How would I go about investigating further? This occurs with both -O0 and -Og.
If it makes any difference, this is on an ARM target. I'm doing remote debugging using GNU gdbserver (GDB) 7.12.1.20170417-git and 'gdb-multiarch' GNU gdb (Ubuntu 8.1.1-0ubuntu1) 8.1.1.
But what does it mean for the 'this' function parameter in a gdb frame to be optimized out?
It means that GDB doesn't have sufficient debug info to understand the current value of this.
It could happen for two reasons:
the compiler failed to emit relevant debug info
the info is there, but GDB failed to understand it
GCC used to do (1) a lot with -O2 and higher optimization levels, but that has been significantly improved around 2015-2016. I have never seen <optimized out> with GCC at -O0.
Clang still does (1) with -O2 and above on x86_64 in 2022, but again I've never seen it do that at -O0.
How would I go about investigating further?
You can run readelf --debug-dump ./a.out and see what info is present in the binary. Beware -- there is a lot of info, and making sense of it requires understanding of what's supposed to be there.
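For example (the =info and =loc selectors are standard binutils readelf options; the binary name is a placeholder):
readelf --debug-dump=info ./a.out | less
readelf --debug-dump=loc ./a.out | less
The first dumps the DWARF debug-info entries (variables, types, functions); the second dumps the location lists, which are where an <optimized out> value usually originates.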
Or you could file a bugzilla issue with exact compiler and debugger versions and compilation command, attach a small binary, and hope that someone will look.
But first make sure you still get this behavior from the latest released version of GCC and GDB (or the current tip-of-trunk versions if you can build them).

What does DT_TEXTREL mean and how to solve? [duplicate]

64 bit Linux uses the small memory model by default, which puts all code and static data below the 2GB address limit. This makes sure that you can use 32-bit absolute addresses. Older versions of gcc use 32-bit absolute addresses for static arrays in order to save an extra instruction for relative address calculation. However, this no longer works. If I try to make a 32-bit absolute address in assembly, I get the linker error:
"relocation R_X86_64_32S against `.data' can not be used when making a shared object; recompile with -fPIC".
This error message is misleading, of course, because I am not making a shared object and -fPIC doesn't help.
What I have found out so far is this: gcc version 4.8.5 uses 32-bit absolute addresses for static arrays; gcc version 6.3.0 doesn't (and version 5 probably doesn't either). The linker in binutils 2.24 allows 32-bit absolute addresses; version 2.28 does not.
The consequence of this change is that old libraries have to be recompiled and legacy assembly code is broken.
Now I want to ask: When was this change made? Is it documented somewhere? And is there a linker option that makes it accept 32-bit absolute addresses?
Your distro configured gcc with --enable-default-pie, so it's making position-independent executables by default (allowing for ASLR of the executable as well as libraries). Most distros are doing that these days.
You actually are making a shared object: PIE executables are sort of a hack using a shared object with an entry-point. The dynamic linker already supported this, and ASLR is nice for security, so this was the easiest way to implement ASLR for executables.
32-bit absolute relocations aren't allowed in an ELF shared object; that would stop it from being loaded outside the low 2GiB (for sign-extended 32-bit addresses). 64-bit absolute addresses are allowed, but generally you only want them for jump tables or other static data, not as part of instructions (see footnote 1).
The recompile with -fPIC part of the error message is bogus for hand-written asm; it's written for the case of people compiling with gcc -c and then trying to link with gcc -shared -o foo.so *.o, with a gcc where -fPIE is not the default. The error message should probably change because many people are running into this error when linking hand-written asm.
How to use RIP-relative addressing: basics
Always use RIP-relative addressing for simple cases where there's no downside. See also footnote 1 below and this answer for syntax. Only consider using absolute addressing when it's actually helpful for code size instead of harmful, e.g. with NASM's default rel at the top of your file.
In AT&T syntax use foo(%rip); in GAS .intel_syntax noprefix, use [rip + foo].
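A minimal NASM sketch (msg is a placeholder symbol):
default rel                ; all [msg] references below assemble as [rel msg]
section .rodata
msg: db "hello", 0
section .text
    lea rdi, [msg]         ; RIP-relative: address of msg into a register
    mov eax, [msg]         ; RIP-relative load of the first 4 bytes of msg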
Disable PIE mode to make 32-bit absolute addressing work
Use gcc -fno-pie -no-pie to override this back to the old behaviour. -no-pie is the linker option, -fno-pie is the code-gen option. With only -fno-pie, gcc will make code like mov eax, offset .LC0 that doesn't link with the still-enabled -pie.
(clang can have PIE enabled by default, too: use clang -fno-pie -nopie. A July 2017 patch made -no-pie an alias for -nopie, for compat with gcc, but clang 4.0.1 doesn't have it.)
Performance cost of PIE for 64-bit (minor) or 32-bit code (major)
With only -no-pie (but still -fpie), compiler-generated code (from C or C++ sources) will be slightly slower and larger than necessary, but will still be linked into a position-dependent executable which won't benefit from ASLR. "Too much PIE is bad for performance" reports an average slowdown of 3% for x86-64 on SPEC CPU2006 (I don't have a copy of the paper, so I don't know what hardware that was on). But in 32-bit code, the average slowdown is 10%, worst-case 25% (on SPEC CPU2006).
The penalty for PIE executables is mostly for stuff like indexing static arrays, as Agner describes in the question, where using a static address as a 32-bit immediate or as part of a [disp32 + index*4] addressing mode saves instructions and registers vs. a RIP-relative LEA to get an address into a register. Also 5-byte mov r32, imm32 instead of 7-byte lea r64, [rel symbol] for getting a static address into a register is nice for passing the address of a string literal or other static data to a function.
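A sketch of that indexing difference for an int table[] indexed by rdi (NASM syntax; table is a placeholder symbol):
; position-dependent code (-no-pie): one instruction, no extra register
mov eax, [table + rdi*4]
; PIE: materialize the base with a RIP-relative LEA first
lea rcx, [rel table]
mov eax, [rcx + rdi*4]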
-fPIE still assumes no symbol-interposition for global variables / functions, unlike -fPIC for shared libraries which have to go through the GOT to access globals (which is yet another reason to use static for any variables that can be limited to file scope instead of global). See The sorry state of dynamic libraries on Linux.
Thus -fPIE is much less bad than -fPIC for 64-bit code, but still bad for 32-bit because RIP-relative addressing isn't available. See some examples on the Godbolt compiler explorer. On average, -fPIE has a very small performance / code-size downside in 64-bit code. The worst case for a specific loop might only be a few %. But 32-bit PIE can be much worse.
None of these -f code-gen options make any difference when just linking, or when assembling .S hand-written asm. gcc -fno-pie -no-pie -O3 main.c nasm_output.o is a case where you want both options.
Checking your GCC config
If your GCC was configured this way, gcc -v |& grep -o -e '[^ ]*pie' prints --enable-default-pie. Support for this config option was added to gcc in early 2015. Ubuntu enabled it in 16.10, and Debian around the same time in gcc 6.2.0-7 (leading to kernel build errors: https://lkml.org/lkml/2016/10/21/904).
Related: Build compressed x86 kernels as PIE was also affected by the changed default.
Why doesn't Linux randomize the address of the executable code segment? is an older question about why it wasn't the default earlier, or was only enabled for a few packages on older Ubuntu before it was enabled across the board.
Note that ld itself didn't change its default. It still works normally (at least on Arch Linux with binutils 2.28). The change is that gcc defaults to passing -pie as a linker option, unless you explicitly use -static or -no-pie.
In a NASM source file, I used a32 mov eax, [abs buf] to get an absolute address. (I was testing if the 6-byte way to encode small absolute addresses (address-size + mov eax,moffs: 67 a1 40 f1 60 00) has an LCP stall on Intel CPUs. It does.)
nasm -felf64 -Worphan-labels -g -Fdwarf testloop.asm &&
ld -o testloop testloop.o # works: static executable
gcc -v -nostdlib testloop.o # doesn't work
...
..../collect2 ... -pie ...
/usr/bin/ld: testloop.o: relocation R_X86_64_32 against `.bss' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: final link failed: Nonrepresentable section on output
collect2: error: ld returned 1 exit status
gcc -v -no-pie -nostdlib testloop.o # works
gcc -v -static -nostdlib testloop.o # also works: -static implies -no-pie
GCC can also make a "static PIE" with -static-pie: ASLRed, but with no dynamic libraries or ELF interpreter. Not the same thing as -static -pie; those conflict with each other (you get a static non-PIE), although that might possibly get changed.
related: building static / dynamic executables with/without libc, defining _start or main.
Checking if an existing executable is PIE or not
This has also been asked at: How to test whether a Linux binary was compiled as position independent code?
file and readelf say that PIEs are "shared objects", not ELF executables. ELF-type EXEC can't be PIE.
$ gcc -fno-pie -no-pie -O3 hello.c
$ file a.out
a.out: ELF 64-bit LSB executable, ...
$ gcc -O3 hello.c
$ file a.out
a.out: ELF 64-bit LSB shared object, ...
## Or with a more recent version of file:
a.out: ELF 64-bit LSB pie executable, ...
gcc -static-pie is a special thing that GCC doesn't do by default, even with -nostdlib. It shows up as LSB pie executable, dynamically linked with current versions of file. (See What's the difference between "statically linked" and "not a dynamic executable" from Linux ldd?). It has ELF-type DYN, but readelf shows no .interp, and ldd will tell you it's statically linked. GDB starti and /proc/maps confirm that execution starts at the top of its _start, not in an ELF interpreter.
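A quick way to check all of that on a hypothetical hello.c (output wording varies with tool versions):
gcc -static-pie -O2 hello.c -o hello
file hello                        # recent file: "... pie executable ..."
readelf -l hello | grep INTERP    # no output: no ELF interpreter requested
ldd hello                         # reports it as statically linked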
Semi-related (but not really): another recent gcc feature is gcc -fno-plt. Finally, calls into shared libraries can be just call [rip + symbol@GOTPCREL] (AT&T call *puts@GOTPCREL(%rip)), with no PLT trampoline.
The NASM version of this is call [rel puts wrt ..got]
as an alternative to call puts wrt ..plt. See Can't call C standard library function on 64-bit Linux from assembly (yasm) code. This works in a PIE or non-PIE, and avoids having the linker build a PLT stub for you.
Some distros have started enabling it. It also avoids needing writeable + executable memory pages, so it's good for security against code injection. (I think modern PLT implementations don't need that either, just updating a GOT pointer rather than rewriting a jmp rel32 instruction, so there might not be a security difference.)
It's a significant speedup for programs that make a lot of shared-library calls, e.g. x86-64 clang -O2 -g compiling tramp3d goes from 41.6s to 36.8s on whatever hardware the patch author tested on. (clang is maybe a worst-case scenario for shared library calls, making lots of calls to small LLVM library functions.)
It does require early binding instead of lazy dynamic linking, so it's slower for big programs that exit right away. (e.g. clang --version or compiling hello.c). This slowdown could be reduced with prelink, apparently.
This doesn't remove the GOT overhead for external variables in shared library PIC code, though. (See the godbolt link above).
Footnote 1
64-bit absolute addresses actually are allowed in Linux ELF shared objects, with text relocations to allow loading at different addresses (ASLR and shared libraries). This allows you to have jump tables in section .rodata, or static const int *foo = &bar; without a runtime initializer.
So mov rdi, qword msg works (NASM/YASM syntax for 10-byte mov r64, imm64, aka AT&T syntax movabs, the only instruction which can use a 64-bit immediate). But that's larger and usually slower than lea rdi, [rel msg], which is what you should use if you decide not to disable -pie. A 64-bit immediate is slower to fetch from the uop cache on Sandybridge-family CPUs, according to Agner Fog's microarch pdf. (Yes, the same person who asked this question. :)
You can use NASM's default rel instead of specifying it in every [rel symbol] addressing mode. See also Mach-O 64-bit format does not support 32-bit absolute addresses. NASM Accessing Array for some more description of avoiding 32-bit absolute addressing. OS X can't use 32-bit addresses at all, so RIP-relative addressing is the best way there, too.
In position-dependent code (-no-pie), you should use mov edi, msg when you want an address in a register; 5-byte mov r32, imm32 is even smaller than RIP-relative LEA, and more execution ports can run it.

GCC: conflicting optimizations

Due to long build times, I haven't been able to sufficiently narrow down the culprit leading to internal compiler error: Segmentation fault (I have managed to rule out LTO, though). Since it is present in GCC versions 4.8.2, 4.8.3, and 4.9.1, rather than a bug I'm suspecting a conflict between the various remaining optimization strategies:
Generic: most likely unrelated, here for completeness
-pipe
-march=native
-O3
-msse2
-mfpmath=sse
-ffast-math
Graphite: loop optimization with regard to memory access
-floop-interchange
-floop-strip-mine
-floop-block
Graphite: not really sure
-fgraphite-identity
ISL: loop optimizations with regard to memory access and automatic parallelism
-floop-nest-optimize
Graphite: loop optimization with regard to automatic parallelism
-floop-parallelize-all
-ftree-parallelize-loops=2
These sets of options seem to share significant functional overlap. If that overlap is likely what has been leading to the segmentation fault during compilation, which options should I preserve and which should I cull in order to maximize performance?
I finally narrowed the segfault down to the -ffast-math and -floop-parallelize-all options, exclusively. This issue is identical to [4.8/4.9 Regression] [graphite] Segmentation fault with -Ofast -floop-paralle..., and should be fixed upstream. Since the fix was pushed on Jun 29, while GCC 4.9.1 was released on Jul 16 (having been branched in April), it is not included in the 4.8.3 and 4.9.1 releases.

Jsoncpp - very simple test crashes when Json::Reader goes out of scope

I have downloaded and installed the jsoncpp library. I then try to use the library in my own application:
#include <json/json.h>
#include <cstdlib>  // for exit()

void parseJson() {
    Json::Reader reader;
}

int main(int argc, char ** argv) {
    parseJson();
    exit(0);
}
The program compiles and links fine, but it crashes with SIGSEGV when running. The gdb backtrace looks like this:
(gdb) bt
#0 0x0000003a560b7672 in __gnu_cxx::__exchange_and_add () from /usr/lib64/libstdc++.so.6
#1 0x00000000004031e9 in std::string::_Rep::_M_dispose (this=0xffffffffffffffe9, __a=@0x7fffbfe60e57)
at /usr/lib/gcc/x86_64-redhat-linux/4.1.2/../../../../include/c++/4.1.2/bits/basic_string.h:232
#2 0x0000000000403236 in ~basic_string (this=0x7fffbfe60fb0)
at /usr/lib/gcc/x86_64-redhat-linux/4.1.2/../../../../include/c++/4.1.2/bits/basic_string.h:478
#3 0x00000000004038d4 in ~Reader (this=0x7fffbfe60eb0) at /private/joaho/Parser/opm-parser/external/json/json-cpp/include/json/reader.h:23
#4 0x0000000000402990 in parseJson () at /private/joaho/Parser/opm-parser/opm/parser/eclipse/ExternalTests/ExternalTests.cpp:51
#5 0x00000000004029ab in main (argc=1, argv=0x7fffbfe610c8)
at /home/user/Parser/opm-parser/opm/parser/eclipse/ExternalTests/ExternalTests.cpp:56
I.e., to me it seems to crash in the destructor. As far as I can tell, Json::Reader does not have its own destructor, so this must be a default destructor. As you can see, I am running a quite old version of g++ - could that be the problem?
As I commented:
When compiled with GCC version 4.8.1 on Debian/Sid (so libjsoncpp-dev 0.6.0~rc2-3) as g++-4.8 -g -Wall -I/usr/include/jsoncpp/ esjson.cc -ljsoncpp -o esjson, your program compiles without warnings and does not crash when run.
And GCC 4.1.2 is really old (February 2007!), is not supported anymore, and is not very conforming to the C++ standard (GCC, now at version 4.8.1, has made huge progress on C++ standard conformance since 4.1).
So I am not sure GCC 4.1 is faulty, but I won't be surprised if it is: it had a bad C++ reputation, and both the C++ standard and the GCC compiler have been improved a lot since then. Upgrading your GCC is worth the effort, both for better support of C++ and for improved diagnostics and optimizations.
So I suggest you use a newer GCC; if you don't have root access, consider compiling it from its source tarball; build it outside of the source tree with e.g. ../gcc-4.8.1/configure --program-suffix=-4.8 --prefix=$HOME/pub, then make, then make install - after having installed its dependencies.

How to remove unused C/C++ symbols with GCC and ld?

I need to optimize the size of my executable severely (ARM development), and I noticed that in my current build scheme (gcc + ld) unused symbols are not getting stripped.
Using arm-strip --strip-unneeded on the resulting executables/libraries doesn't change the output size of the executable (I have no idea why; maybe it simply can't).
What would be the way (if it exists) to modify my building pipeline, so that the unused symbols are stripped from the resulting file?
I wouldn't even think of this, but my current embedded environment isn't very "powerful", and saving even 500K out of 2M results in a very nice loading performance boost.
Update:
Unfortunately, the gcc version I currently use doesn't have the -dead-strip option, and the -ffunction-sections... + --gc-sections combination for ld doesn't give any significant difference in the resulting output.
I'm shocked that this even became a problem, because I was sure that gcc + ld should automatically strip unused symbols (why do they even have to keep them?).
For GCC, this is accomplished in two stages:
First compile the data but tell the compiler to separate the code into separate sections within the translation unit. This will be done for functions, classes, and external variables by using the following two compiler flags:
-fdata-sections -ffunction-sections
Link the translation units together using the linker optimization flag (this causes the linker to discard unreferenced sections):
-Wl,--gc-sections
So if you had one file called test.cpp that had two functions declared in it, but one of them was unused, you could omit the unused one with the following command to gcc (g++):
gcc -Os -fdata-sections -ffunction-sections test.cpp -o test -Wl,--gc-sections
(Note that -Os is an additional compiler flag that tells GCC to optimize for size)
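A concrete sketch of such a test.cpp (hypothetical names, matching the description above):
// test.cpp
#include <cstdio>
void used()   { std::puts("used"); }           // referenced from main: kept
void unused() { std::puts("never called"); }   // gets its own section under -ffunction-sections
int main() { used(); }
With -ffunction-sections, unused() lands in its own .text section, which --gc-sections can then discard because nothing references it.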
If this thread is to be believed, you need to supply -ffunction-sections and -fdata-sections to gcc, which will put each function and data object in its own section. Then you pass --gc-sections to GNU ld to remove the unused sections.
You'll want to check your docs for your version of gcc & ld:
However, for me (OS X, gcc 4.0.1) I find these options for ld:
-dead_strip
Remove functions and data that are unreachable by the entry point or exported symbols.
-dead_strip_dylibs
Remove dylibs that are unreachable by the entry point or exported symbols. That is, it suppresses the generation of load commands for dylibs which supplied no symbols during the link. This option should not be used when linking against a dylib which is required at runtime for some indirect reason, such as the dylib having an important initializer.
And this helpful option
-why_live symbol_name
Logs a chain of references to symbol_name. Only applicable with -dead_strip. It can help debug why something that you think should be dead-strip removed is not removed.
There's also a note in the gcc/g++ man that certain kinds of dead code elimination are only performed if optimization is enabled when compiling.
While these options/conditions may not hold for your compiler, I suggest you look for something similar in your docs.
Programming habits could help too; e.g. add static to functions that are not accessed outside a specific file; use shorter names for symbols (this can help a bit, though likely not much); use const char x[] where possible; ... This paper, though it talks about dynamic shared objects, contains suggestions that, if followed, can help make your final binary output smaller (if your target is ELF).
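A small C sketch of the first and last of those habits (names are made up):
static int helper(int x) { return 2 * x; }  /* internal linkage: not visible outside this
                                               file, so the toolchain can drop it if unused */
static const char msg[] = "ok";             /* const char x[]: the data itself, with no
                                               separate pointer object to relocate */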
The answer is -flto. You have to pass it to both your compilation and link steps, otherwise it doesn't do anything.
It actually works very well - reduced the size of a microcontroller program I wrote to less than 50% of its previous size!
Unfortunately it did seem a bit buggy - I had instances of things not being built correctly. It may have been due to the build system I'm using (QBS; it's very new), but in any case I'd recommend you only enable it for your final build if possible, and test that build thoroughly.
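For reference, a minimal sketch of using it (file names are placeholders; the key point is that -flto appears in both steps):
gcc -Os -flto -c module1.c module2.c
gcc -Os -flto module1.o module2.o -o app    # the whole-program optimization happens at this link step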
While not strictly about symbols, if going for size - always compile with -Os and -s flags. -Os optimizes the resulting code for minimum executable size and -s removes the symbol table and relocation information from the executable.
Sometimes - if small size is desired - playing around with different optimization flags may - or may not - have significance. For example toggling -ffast-math and/or -fomit-frame-pointer may at times save you even dozens of bytes.
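Putting this answer's flags together, a size-focused build might look like the following (include -ffast-math only if your code tolerates its relaxed floating-point semantics):
gcc -Os -s -fomit-frame-pointer main.c -o main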
It seems to me that the answer provided by Nemo is the correct one. If those instructions do not work, the issue may be related to the version of gcc/ld you're using. As an exercise, I compiled an example program using the instructions detailed here:
#include <stdio.h>
void deadcode() { printf("This is d dead codez\n"); }
int main(void) { printf("This is main\n"); return 0; }
Then I compiled the code using progressively more aggressive dead-code removal switches:
gcc -Os test.c -o test.elf
gcc -Os -fdata-sections -ffunction-sections test.c -o test.elf -Wl,--gc-sections
gcc -Os -fdata-sections -ffunction-sections test.c -o test.elf -Wl,--gc-sections -Wl,--strip-all
These compilation and linking parameters produced executables of size 8457, 8164, and 6160 bytes, respectively, with the most substantial contribution coming from the 'strip-all' option. If you cannot produce similar reductions on your platform, then maybe your version of gcc does not support this functionality. I'm using gcc (4.5.2-8ubuntu4) and ld (2.21.0.20110327) on Linux Mint, 2.6.38-8-generic x86_64.
strip --strip-unneeded only operates on the symbol table of your executable. It doesn't actually remove any executable code.
The standard libraries achieve the result you're after by splitting all of their functions into separate object files, which are combined using ar. If you then link the resultant archive as a library (i.e. give the option -l your_library to ld), ld will only include the object files, and therefore the symbols, that are actually used.
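A sketch of that scheme with hypothetical file names:
gcc -c f1.c f2.c                 # one function per source file, one object file each
ar rcs libmine.a f1.o f2.o       # bundle the objects into a static archive
gcc main.o -L. -lmine -o app     # ld pulls in only the members that resolve a referenced symbol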
You may also find some of the responses to this similar question of use.
I don't know if this will help with your current predicament as this is a recent feature, but you can specify the visibility of symbols in a global manner. Passing -fvisibility=hidden -fvisibility-inlines-hidden at compilation can help the linker to later get rid of unneeded symbols. If you're producing an executable (as opposed to a shared library) there's nothing more to do.
More information (and a fine-grained approach for e.g. libraries) is available on the GCC wiki.
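For the global approach, a small hypothetical C++ sketch (the visibility attribute is GCC's):
// build with: g++ -fvisibility=hidden -fvisibility-inlines-hidden -c lib.cpp
__attribute__((visibility("default"))) void api_entry() { }  // explicitly kept visible
void internal_helper() { }  // hidden by the flag; not exported from a shared object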
From the GCC 4.2.1 manual, section -fwhole-program:
Assume that the current compilation unit represents the whole program being compiled. All public functions and variables, with the exception of main and those merged by attribute externally_visible, become static functions and in effect are optimized more aggressively by interprocedural optimizers. While this option is equivalent to proper use of the static keyword for programs consisting of a single file, in combination with the --combine option this flag can be used to compile most smaller-scale C programs, since the functions and variables become local to the whole combined compilation unit, not to the single source file itself.
You can use the strip binary on an object file (e.g. an executable) to strip all symbols from it.
Note: it changes the file in place and doesn't create a copy.