What is gcc compiler option "-unsigned" meant for? - c++

I am compiling legacy code with gcc 4.8.5 on RHEL7. All C and C++ files have "-unsigned" as one of the default flags for both gcc and g++ compilation. The compiler accepts this option and compiles successfully. However, I cannot find any documentation of this option anywhere, either in the gcc manual or online. Does anyone know what this option is? I have to port the code and am unsure whether this compiler option needs to be carried over or not.

I suspect it was just a mistake in the Makefile, or whatever is used to compile the code.
gcc does not support a -unsigned option. However, it does pass options to the linker, and GNU ld has a -u option:
'-u SYMBOL'
'--undefined=SYMBOL'
Force SYMBOL to be entered in the output file as an undefined
symbol. Doing this may, for example, trigger linking of additional
modules from standard libraries. '-u' may be repeated with
different option arguments to enter additional undefined symbols.
This option is equivalent to the 'EXTERN' linker script command.
The space between -u and the symbol name is optional.
So the intent might have been to do something with unsigned types, but -unsigned is actually parsed as -u nsigned, and the effect, at least with modern versions of gcc, is to enter nsigned in the output file as an undefined symbol.
This seems to have no effect in the quick test I did (compiling and running a small "hello, world" program). The program runs correctly, and the output of nm hello includes:
U nsigned
With an older version of gcc (2.95.2) on a different system, I get a fatal link-time error about the undefined symbol.

I suspect the intention is for the program to be compiled with -funsigned-char. I don't know whether this instead used to be -unsigned, back in dinosaur times, and perhaps GCC actually still heeds that spelling; there's no way to know from the manual because, if it does, it's an undocumented feature.
I'd ask the original author about the intention here, since they didn't see fit to document it.
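If the intent was indeed unsigned plain char, a quick check like this (my own toy example, not taken from the legacy code base) tells you whether the flag matters for the code being ported:
#include <cstdio>
int main()
{
    char c = '\xFF';
    // Prints -1 when plain char is signed (the usual default on x86),
    // and 255 when compiled with -funsigned-char.
    std::printf("%d\n", static_cast<int>(c));
}
If the output changes between builds with and without -funsigned-char, the legacy code may well rely on that behaviour.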

Related

Is it allowed to name a global variable `read` or `malloc` in C++?

Consider the following C++17 code:
#include <iostream>
int read;
int main() {
    std::ios_base::sync_with_stdio(false);
    std::cin >> read;
}
It compiles and runs fine on Godbolt with GCC 11.2 and Clang 12.0.1, but results in a runtime error if compiled with the -static flag.
As far as I understand, there is a POSIX(?) function called read (see man read(2)), so the example above actually causes an ODR violation and the program is essentially ill-formed even when compiled without -static. GCC even emits a warning if I try to name a variable malloc: built-in function 'malloc' declared as non-function
Is the program above valid C++17? If not, why? If yes, is it a compiler bug which prevents it from running?
The code shown is valid (in all C++ Standard versions, I believe). The relevant restrictions are all listed in [reserved.names]. Since read is not declared in the C++ standard library, nor in the C standard library, nor in older versions of the standard libraries, and is not otherwise listed there, it's fair game as a name in the global namespace.
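For contrast, here is my own illustration (not from the question) of the difference between a name the standard actually reserves and read:
int printf; // reserved: printf is a C standard library name with external linkage ([extern.names])
int read;   // fine as far as ISO C++ is concerned: read is not a standard library name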
So is it an implementation defect that it fails with -static? (Not a "compiler bug" - the compiler piece of the toolchain is fine, and there's nothing forbidding a warning on valid code.) It does at least work with default settings (though only because the GNU linker doesn't mind duplicated symbols in an unused object of a dynamic library), and one could argue that's all that's needed for Standard compliance.
We also have, at [intro.compliance]/8:
A conforming implementation may have extensions (including additional library functions), provided they do not alter the behavior of any well-formed program. Implementations are required to diagnose programs that use such extensions that are ill-formed according to this International Standard. Having done so, however, they can compile and execute such programs.
We can consider POSIX functions such an extension. This is intentionally vague on when or how such extensions are enabled. The g++ driver of the GCC toolset links a number of libraries by default, and we can consider that as adding not only the availability of non-standard #include headers but also additional translation units to the program. In theory, different arguments to the g++ driver might make it work without the underlying link step using libc.so. But good luck - one could argue it's a problem that there's no simple way to link only names from the C++ and C standard libraries without also pulling in other unreserved names.
(Does "not altering the behavior of any well-formed program" even mean that an implementation extension can't use non-reserved names for its additional libraries? I hope not, but I could see a strict reading implying that.)
So I haven't claimed a definitive answer to the question, but the practical situation is unlikely to change, and a Standard Defect Report would in my opinion be more nit-picking than a useful clarification.
Here is some explanation of why it produces a runtime error only with -static.
The https://godbolt.org/z/asKsv95G5 link in the question indicates that the runtime error with -static is Program returned: 139. The output of kill -l in Bash on Linux contains 11) SIGSEGV (and 128 + 11 = 139), so the process exits with the fatal signal SIGSEGV (Segmentation fault), indicating an invalid memory reference. The reason is that the process tries to run the contents (4 bytes) of the read variable as machine code. (Eventually std::cin >> ... calls read.) Either something fails in those 4 bytes accidentally interpreted as machine code, or it fails because the memory page containing those 4 bytes is not executable.
The reason why it succeeds without -static is that with dynamic linking it's possible to have multiple symbols with the same name (read): one in the program executable, and another one in the shared library (libc.so.6). std::cin >> ... (in libstdc++.so.6) links against libc.so.6, so when the dynamic linker tries to find the symbol read at program load time (to be used by libstdc++.so.6), it will look at libc.so.6 first, finding read there, and ignoring the read symbol in the program executable.
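As a side note (my own illustration, not from the question): the clash becomes visible to the compiler as soon as the POSIX declaration is actually in scope; the original program only compiles because <iostream> happens not to declare read:
#include <unistd.h> // declares ssize_t read(int, void*, size_t)
int read;           // error: 'int read' redeclared as a different kind of entity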

Mingw64 Linker error when trying to include -lhid [duplicate]

Context: I'm using Qt 5.9.3 on Windows, building for MinGW 32-bit. The Qt part is a side issue though - the problem seems to be with MinGW. The version of MinGW is 4.3.0, supplied prebuilt as part of the Qt installation.
I'm building a library which talks to a USB device over HID. Everything compiles fine, but it fails at the link stage with
./..\..\object\debug\usb_hid_device.o: In function `ZN8MyApp3USB5Win3213getDevicePathB5cxx11Ell':
<MYPATH>/../../source/win32/usb_hid_device.cpp:99: undefined reference to `HidD_GetAttributes(void*, _HIDD_ATTRIBUTES*)@8'
./..\..\object\debug\usb_hid_device.o: In function `ZN8MyApp3USB5Win3214CHIDDeviceImplC2EllRNS_15LogPerComponentE':
<MYPATH>/../../source/win32/usb_hid_device.cpp:200: undefined reference to `HidD_FlushQueue(void*)@4'
The linker command is
g++ -shared -mthreads -Wl,-subsystem,windows -Wl,--out-implib,<MYPATH>\bin\debug\libusb_hid_comms.a -o <MYPATH>\bin\debug\usb_hid_comms.dll object_script.usb_hid_comms.Debug -lhid -lsetupapi -LC:\Qt\Qt5.9.3\5.9.3\mingw53_32\lib C:\Qt\Qt5.9.3\5.9.3\mingw53_32\lib\libQt5Guid.a C:\Qt\Qt5.9.3\5.9.3\mingw53_32\lib\libQt5Cored.a
If I omit -lhid I get the same errors. I also get the same errors if I remove -lhid and explicitly set the path and filename to libhid.a. If I deliberately mistype the path and filename, it comes up with an error, so I know the command-line is getting parsed correctly. But for whatever reason, MinGW appears to not be linking with one of its own library files.
I've also tried removing -lsetupapi and I get the linker errors I'd expect for the functions defined in there. Likewise the Qt library files. But it seems that specifically for libhid.a, MinGW can see the library file but just isn't going to link with it.
Has anyone else seen this? Or can anyone else with the same (or similar) version of MinGW confirm or deny that they can link with libhid.a? Or is there something obviously wrong with what I'm doing?
I've just found the answer. I'm posting an answer myself so that other people know in future, because I think this is still a valid question which people might want to know about.
The problem is the include file hidsdi.h. The majority of other header files which pull in Win32 API calls have extern "C" around the function declarations. However this one doesn't! The result is that we end up with C++ name mangling for linker symbols, instead of the C-style "_" in front of the linker symbols.
The solution is to use
extern "C"
{
#include <hidsdi.h>
}
and then everything works fine.
The version of hidsdi.h with the older version of MinGW (which I'm porting from) did have that protection around the function declarations. However it looks like it's gone in the newer version.
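For anyone wondering why the wrapper matters, here is a minimal illustration with made-up names (not the real HID API): a C import library such as libhid.a only contains plain C symbols, so the mangled reference produced without extern "C" can never be satisfied.
extern "C" int hid_call_c(void* dev); // reference to the undecorated C symbol (plus the @N stdcall suffix for the real HID functions)
int hid_call_cpp(void* dev);          // reference to a C++-mangled symbol such as _Z12hid_call_cppPv, which libhid.a does not provide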

Barebones C++ without standard library?

Compilers such as GCC and Clang allow compiling C++ programs without the C++ standard library, e.g. using the -nostdlib command line flag. It seems that such programs often fail to link though, for example:
void f() noexcept { throw 42; }
int main() { f(); }
Usually fails to link due to undefined symbols like __cxa_allocate_exception, typeinfo for int, __cxa_throw, __gxx_personality_v0, __clang_call_terminate, __cxa_begin_catch, std::terminate() etc.
Even a simple
int main() {}
Fails to link with
ld: warning: cannot find entry symbol _start; defaulting to 0000000000400120
and is killed by the OS upon execution. Using -c the compiler still runs the linker which blatantly fails with:
ld: error in mytest(.eh_frame); no .eh_frame_hdr table will be created.
Is it a realistic goal to program and compile C++ applications or libraries without using and linking to the standard library? How can I compile my code using GCC or Clang on Linux? What core language features would one be unable to use without the standard library?
You will basically find all of your questions answered at osdev.org, but I'll give a brief summary anyway.
When you give GCC -nostdlib, you are saying "no startup or library files". This includes:
crti.o, crtbegin.o, crtend.o and crtn.o. Generally kernel developers only care about implementing crti.o and crtn.o and let GCC supply crtbegin.o and crtend.o (located with gcc -print-file-name= and passed to the linker). crti.o and crtn.o are just stubs providing the prologue and epilogue of the .init and .fini sections respectively, leaving room for GCC to shove the contents of crtbegin.o and crtend.o in between. These files are necessary for calling global constructors/destructors.
You can't avoid linking libgcc (the "low-level runtime library", -lgcc), because even if you pass -nostdlib, GCC will emit calls to its helper functions wherever it needs them, leading to inexplicable linking errors for seemingly no reason. This is the case even when you're implementing/porting a C library.
You don't "need" libstdc++ no, but typically kernel developers want it. Porting a C library then implementing the C++ standard library from scratch is an extremely difficult task.
Since you only want to get rid of the "standard library" while keeping libc (on a Linux system), you're essentially programming C++ with just a C library. Of course, there's nothing wrong with this and you do you, but ultimately I don't see the point unless you plan on developing a kernel.
Required reading:
OSDev's C++ page - If you really care about RTTI/exception support, it's more annoying to implement than it sounds. Typically people just pass -fno-rtti or -fno-exceptions and then worry about it down the line or not at all.
"Standard" is a misnomer. In this context it doesn't mean "the library (set of functions, classes etc) as defined by the C++ standard" but "the usual set of libraries and objects (compiled files in a certain format) gcc links with by default". Some of those are necessary for most or even all programs to function.
If you use this flag, it's your responsibility to provide any missing functionality. There are several ways to do so:
Cherry-pick libraries and objects that your program really needs out of the default set. (Makes little sense as the result will most probably be exactly the same as with the default link flags).
Provide your own implementation of missing functionality.
Explicitly disable, through compiler flags, language features your program isn't using. I know of two such features: exceptions and RTTI. This is needed because the compiler needs to generate exception-related code and RTTI info even if these features are not explicitly used in a given module.
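To make that concrete, here is a minimal sketch of my own (assuming x86-64 Linux, not taken from the question) of a program that really does link with -nostdlib, by providing its own entry point and exiting through a raw system call:
// build (illustrative): g++ -nostdlib -fno-exceptions -fno-rtti -static start.cpp -o start
extern "C" void _start()
{
    // No standard library here: no iostream, no malloc, no exception support.
    asm volatile("mov $60, %%rax\n\t"  // SYS_exit
                 "xor %%rdi, %%rdi\n\t" // exit status 0
                 "syscall" ::: "rax", "rdi");
}
Anything beyond this (global constructors, exceptions, RTTI, heap) is exactly the functionality you would have to supply yourself, as described above.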

How to debug GCC/LD linking process for STL/C++

I'm working on a bare-metal cortex-M3 in C++ for fun and profit. I use the STL library as I needed some containers. I thought that by simply providing my allocator it wouldn't add much code to the final binary, since you get only what you use.
I actually didn't even expect any linking against the STL at all (given my allocator), as I thought it was all template code.
I am compiling with -fno-exceptions by the way.
Unfortunately, about 600KB or more are added to my binary. I looked up what symbols are included in the final binary with nm and it seemed like a joke to me. The list is so long I won't try to paste it, although there are some weak symbols.
I also looked in the .map file generated by the linker and I even found the scanf symbols
.text
0x000158bc 0x30 /CodeSourcery/Sourcery_CodeBench_Lite_for_ARM_GNU_Linux/bin/../arm-none-linux-gnueabi/libc/usr/lib/libc.a(sscanf.o)
0x000158bc __sscanf
0x000158bc sscanf
0x000158bc _IO_sscanf
And:
$ arm-none-linux-gnueabi-nm binary | grep scanf
000158bc T _IO_sscanf
0003e5f4 T _IO_vfscanf
0003e5f4 T _IO_vfscanf_internal
000164a8 T _IO_vsscanf
00046814 T ___vfscanf
000158bc T __sscanf
00046814 T __vfscanf
000164a8 W __vsscanf
000158bc T sscanf
00046814 W vfscanf
000164a8 W vsscanf
How can I debug this? First, I wanted to understand what exactly GCC is using for linking (I'm linking through GCC). I know that if a symbol is found in a text segment, the whole segment is used, but still that's too much.
Any suggestion on how to tackle this would really be appreciated.
Thanks
Using GCC's -v and -Wl,-v options will show you the linker commands (and version info of the linker) being used.
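For example (file names here are illustrative), something along these lines shows the exact link command and also produces a map file with a symbol cross-reference table, which lets you grep for what first drags in sscanf:
arm-none-linux-gnueabi-g++ -v -Wl,-v,-Map=binary.map,--cref main.o -o binary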
Which version of GCC are you using? I made some changes for GCC 4.6 (see PR 44647 and PR 43863) to reduce code size to help embedded systems. There's still an outstanding enhancement request (PR 43852) to allow disabling the inclusion of the IO symbols you're seeing - some of them come from the verbose terminate handler, which prints a message when the process is terminated with an active exception. If you're not using exceptions then some of that code is useless to you.
The problem is not about the STL, it is about the Standard library.
The STL itself is pure (in a way), but the Standard Library also includes all those streams packages and it seems that you also managed to pull in the libc as well...
The problem is that the Standard Library has never been meant to be picked apart, so there might not have been much concern about reusing stuff from the C Standard Library...
You should first try to identify which files are pulled in when you compile (using strace, for example); that way you can verify that you only ever use header-only files.
Then you can try to remove the linking that occurs. There are options you can pass to gcc to specify that you would like a standard-library-free build, -nostdlib for example, however I am not well versed enough in those to give you exact instructions here.
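As a concrete example of the first step (the exact invocation is only a suggestion), something like
strace -f -e trace=open,openat g++ -c yourfile.cpp 2>&1 | grep '\.h'
lists every header that is actually opened during compilation.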

g++ compilation of a separately preprocessed file gives error depending on the architecture

I am using g++ version 4.1.2 on an x86_64 GNU/Linux machine. The code base is huge and I don't have a sufficient understanding of the makefiles used in the project. The code compiles fine as it is.
For debugging purposes, I need to preprocess (g++ -E) a few source files individually and then re-compile them. I am giving the required include paths using -I. Ideally the compilation should go fine.
But I am getting a few discrepancies in standard headers, like:
typedef unsigned long size_t; causes errors with the operator new() declaration generated by the compiler (if I change it to unsigned int manually then this error disappears)
In library functions like unsigned long numeric_limits<>::max(), the compiler complains about big numbers such as 922...807L; it generates the error "integer constant is too large for long type"
A mismatched declaration of __errno_location() gives a compiler error
I am having a hard time finding out what is going wrong. Why does compilation go fine when I do make on the unchanged file, and why do the standard headers start cribbing when I run g++ -I <> -E on an individual file?
(Note that there is no problem with the code we have written; it's just on the standard library side. I tried locating the stddef.h which has unsigned int as the typedef, but that just fixes the first problem.)
Any idea how to fix these errors would be highly appreciated.
Don't preprocess and compile separately, or if you must then use consistent compiler options and a consistent environment.
It sounds as though you're running the preprocessor on a 32-bit machine (or with the -m32 option) and then compiling on a 64-bit machine.
When compiling the output of the preprocessor, make sure that you use the -fpreprocessed compiler option so that the preprocessor will not run again.
If you don't pass that option, certain constructs that produce identifiers looking like macros may get expanded again into something they shouldn't be expanded to. It's hard for me to come up with a case that shows a difference (I'm sure I could, but it would take a bit of puzzling out and would be pretty contrived). However, the implementation headers may well use some arcane macro techniques that are sensitive to this option.
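For reference, a consistent two-step build looks something like this (file names and flags are illustrative; the important part is using the same -m32/-m64 and -I options in both steps):
g++ -m64 -I<includes> -E foo.cpp -o foo.ii
g++ -m64 -fpreprocessed -c foo.ii -o foo.o
The .ii extension also tells g++ that the input is already-preprocessed C++.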