`Multiple definitions` error points to pthread.h - c++

Using gcc 4.6.1, I get some pretty bizarre errors at link time. I've defined various objects in a namespace SpacetimeAlgebra, and the linker claims that they're already defined in pthread.h and in std::_Vector_base<double>::_M_deallocate. The errors look like this:
build/temp.linux-x86_64-2.7/Waveforms.o: In function `~vector':
GWFrames/Code/Waveforms.cpp:4978: multiple definition of `SpacetimeAlgebra::I3'
build/temp.linux-x86_64-2.7/SpacetimeAlgebra/SpacetimeAlgebra.o:/usr/include/pthread.h:1112: first defined here
Obviously, pthread.h does not actually contain anything involving my objects, and certainly not in that namespace. I seriously doubt the error is in my code, since it compiles just fine with other compilers, and the error itself is so ridiculous.
It strikes me as particularly bizarre that the "first defined" references change from pthread to _M_deallocate for objects that are defined in the same place. I don't have any using directives involving SpacetimeAlgebra. Is there something else I might be doing wrong?
The compiler command and full error output are at this gist. The command was created by python's distutils. The code itself is here, in the hpp and cpp files. (These were mostly generated by Gaigen, with a few tweaks from me.)
On a related note, the compilation works without any trouble with Apple LLVM 5.1. The compiler used here is on a cluster that many people use successfully all the time for heavy builds, so the toolchain itself usually works.

GWFrames/Code/Waveforms.cpp:4978: multiple definition of `SpacetimeAlgebra::psI'
build/temp.linux-x86_64-2.7/SpacetimeAlgebra/SpacetimeAlgebra.o:/usr/include/pthread.h:1112: first defined here
You're reading the error message wrong, because it's confusing (and possibly buggy, though before blaming the compiler I'd check that you have debug symbols in all your .o files).
The error is telling you that you've defined SpacetimeAlgebra::psI inside Waveforms.cpp and inside SpacetimeAlgebra.cpp.
Unfortunately, we cannot see Waveforms.cpp, so this cannot be verified.
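For reference, the usual way to hit this class of error is a header that defines an object at namespace scope instead of merely declaring it; every translation unit that includes the header then emits its own definition. A minimal sketch with hypothetical names and types (the real Gaigen-generated declarations may differ):

// SpacetimeAlgebra.hpp -- included by both SpacetimeAlgebra.cpp and Waveforms.cpp
namespace SpacetimeAlgebra {
  double I3[8];           // definition: duplicated in every .cpp that includes this header
  extern double psI[8];   // fix: declaration only here...
}

// SpacetimeAlgebra.cpp
double SpacetimeAlgebra::psI[8];  // ...and exactly one definition, in a single .cpp

The nonsensical pthread.h:1112 location is most likely just the line that the debug info in SpacetimeAlgebra.o happens to attribute the duplicated symbol to, which is why the "first defined here" references jump around.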

Is it allowed to name a global variable `read` or `malloc` in C++?

Consider the following C++17 code:
#include <iostream>

int read;

int main() {
    std::ios_base::sync_with_stdio(false);
    std::cin >> read;
}
It compiles and runs fine on Godbolt with GCC 11.2 and Clang 12.0.1, but results in a runtime error if compiled with the -static flag.
As far as I understand, there is a POSIX(?) function called read (see man read(2)), so the example above actually causes an ODR violation, and the program is essentially ill-formed even when compiled without -static. GCC even emits a warning if I try to name a variable malloc: built-in function 'malloc' declared as non-function
Is the program above valid C++17? If not, why? If so, is it a compiler bug that prevents it from running?
The code shown is valid (in all C++ Standard versions, I believe). The relevant restrictions are all listed in [reserved.names]. Since read is not declared in the C++ standard library, nor in the C standard library, nor in older versions of those libraries, and is not otherwise listed there, it's fair game as a name in the global namespace.
So is it an implementation defect that it won't link with -static? (Not a "compiler bug" - the compiler piece of the toolchain is fine, and there's nothing forbidding a warning on valid code.) It does at least work with default settings (though only because the GNU dynamic linker doesn't mind duplicated symbols when one of them sits in an unused object of a dynamic library), and one could argue that's all that's needed for Standard compliance.
We also have at [intro.compliance]/8
A conforming implementation may have extensions (including additional library functions), provided they do not alter the behavior of any well-formed program. Implementations are required to diagnose programs that use such extensions that are ill-formed according to this International Standard. Having done so, however, they can compile and execute such programs.
We can consider POSIX functions such an extension. This is intentionally vague on when or how such extensions are enabled. The g++ driver of the GCC toolset links a number of libraries by default, and we can consider that as adding not only the availability of non-standard #include headers but also adding additional translation units to the program. In theory, different arguments to the g++ driver might make it work without the underlying link step using libc.so. But good luck - one could argue it's a problem that there's no simple way to link only names from the C++ and C standard libraries without including other unreserved names.
(Does not altering a well-formed program even mean that an implementation extension can't use non-reserved names for the additional libraries? I hope not, but I could see a strict reading implying that.)
So I haven't claimed a definitive answer to the question, but the practical situation is unlikely to change, and a Standard Defect Report would in my opinion be more nit-picking than a useful clarification.
Here is an explanation of why it produces a runtime error with -static only.
The https://godbolt.org/z/asKsv95G5 link in the question indicates that the runtime error with -static is Program returned: 139. The output of kill -l in Bash on Linux contains 11) SIGSEGV (and 128 + 11 = 139), so the process exits with the fatal signal SIGSEGV (Segmentation fault), indicating an invalid memory reference. The reason is that the process tries to run the contents (4 bytes) of the read variable as machine code. (Eventually std::cin >> ... calls read.) Either something fails in those 4 bytes accidentally interpreted as machine code, or it fails because the memory page containing those 4 bytes is not executable.
The reason why it succeeds without -static is that with dynamic linking it's possible to have multiple symbols with the same name (read): one in the program executable, and another one in the shared library (libc.so.6). std::cin >> ... (in libstdc++.so.6) links against libc.so.6, so when the dynamic linker tries to find the symbol read at program load time (to be used by libstdc++.so.6), it will look at libc.so.6 first, finding read there, and ignoring the read symbol in the program executable.
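For completeness, a minimal sketch (my addition, not from the answers above) of how to sidestep the clash entirely: give the variable internal linkage or put it in a namespace, so its mangled symbol can no longer collide with libc's read:

#include <iostream>

namespace app { int read; }   // or: static int read;

int main() {
    std::ios_base::sync_with_stdio(false);
    std::cin >> app::read;    // no global symbol named 'read' is emitted
}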

What is gcc compiler option "-unsigned" meant for?

I am compiling legacy code using gcc 4.8.5 on RHEL 7. All C and C++ files have "-unsigned" as one of the default flags for both gcc and g++ compilation. The compiler accepts this option and compiles successfully. However, I cannot find any documentation for this option anywhere, either in the gcc manual or online. Does anyone know what this option is? I have to port the code and am unsure whether this compiler option needs to be carried over or not.
I suspect it was just a mistake in the Makefile, or whatever is used to compile the code.
gcc does not support a -unsigned option. However, it does pass options to the linker, and GNU ld has a -u option:
'-u SYMBOL'
'--undefined=SYMBOL'
    Force SYMBOL to be entered in the output file as an undefined
    symbol.  Doing this may, for example, trigger linking of additional
    modules from standard libraries.  '-u' may be repeated with
    different option arguments to enter additional undefined symbols.
    This option is equivalent to the 'EXTERN' linker script command.
The space between -u and the symbol name is optional.
So the intent might have been to do something with unsigned types, but the effect, at least with modern versions of gcc, is to enter nsigned in the output file as an undefined symbol.
This seems to have no effect in the quick test I did (compiling and running a small "hello, world" program). The program runs correctly, and the output of nm hello includes:
U nsigned
With an older version of gcc (2.95.2) on a different system, I get a fatal link-time error about the undefined symbol.
I suspect the intention was for the program to be compiled with -funsigned-char. I don't know whether that option used to be spelled -unsigned back in dinosaur times, or whether GCC still honors that spelling; there's no way to tell from the manual, because if it does, it's an undocumented feature.
I'd ask the original author about the intent here, since they didn't see fit to document it.
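If the intent really was -funsigned-char, here is a quick check you could compile both ways (my example, not from the code base in question):

#include <cstdio>

int main() {
    char c = '\xFF';
    // Prints -1 where char is signed (the default on x86),
    // 255 when built with -funsigned-char.
    std::printf("%d\n", c);
}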

How to debug GCC/LD linking process for STL/C++

I'm working on a bare-metal Cortex-M3 in C++ for fun and profit. I use the STL as I needed some containers. I thought that by simply providing my own allocator it wouldn't add much code to the final binary, since you only get what you use.
I actually didn't expect any linking against the STL at all (given my allocator), as I thought it was all template code.
I am compiling with -fno-exceptions, by the way.
Unfortunately, about 600 KB or more are added to my binary. I looked at what symbols are included in the final binary with nm, and the list seemed like a joke to me; it's so long I won't paste it here, although there are some weak symbols.
I also looked in the .map file generated by the linker, and I even found the scanf symbols:
.text
0x000158bc 0x30 /CodeSourcery/Sourcery_CodeBench_Lite_for_ARM_GNU_Linux/bin/../arm-none-linux-gnueabi/libc/usr/lib/libc.a(sscanf.o)
0x000158bc __sscanf
0x000158bc sscanf
0x000158bc _IO_sscanf
And:
$ arm-none-linux-gnueabi-nm binary | grep scanf
000158bc T _IO_sscanf
0003e5f4 T _IO_vfscanf
0003e5f4 T _IO_vfscanf_internal
000164a8 T _IO_vsscanf
00046814 T ___vfscanf
000158bc T __sscanf
00046814 T __vfscanf
000164a8 W __vsscanf
000158bc T sscanf
00046814 W vfscanf
000164a8 W vsscanf
How can I debug this? First, I wanted to understand what exactly GCC is passing to the linker (I'm linking through GCC). I know that when a symbol is needed, the whole object file containing it gets pulled in, but even so, this is too much.
Any suggestion on how to tackle this would really be appreciated.
Thanks
Using GCC's -v and -Wl,-v options will show you the linker commands (and version info of the linker) being used.
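For example (hypothetical file names):

g++ -v -Wl,-v main.o -o binary

The -v output includes the full collect2/ld command line that GCC constructs, which shows exactly which libraries and startup objects are being handed to the linker.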
Which version of GCC are you using? I made some changes for GCC 4.6 (see PR 44647 and PR 43863) to reduce code size to help embedded systems. There's still an outstanding enhancement request (PR 43852) to allow disabling the inclusion of the IO symbols you're seeing - some of them come from the verbose terminate handler, which prints a message when the process is terminated with an active exception. If you're not using exceptions then some of that code is useless to you.
The problem is not about the STL, it is about the Standard library.
The STL itself is pure (in a way), but the Standard Library also includes all those stream packages, and it seems that you managed to pull in libc as well...
The problem is that the Standard Library was never meant to be picked apart, so there may not have been much concern about re-using stuff from the C Standard Library...
You should first try to identify which files are pulled in when you compile (using strace for example), this way you can verify that you only ever use header-only files.
Then you can try to remove the linking that occurs. There are options to pass to gcc to specify that you would like a build free of the standard library, such as -nostdlib, however I am not well versed enough in those to give you exact instructions here.
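One widely used technique worth adding here (my suggestion, not from the answers above): compile with per-function and per-data sections, then let the linker discard everything unreferenced.

arm-none-linux-gnueabi-g++ -Os -fno-exceptions -ffunction-sections -fdata-sections -c main.cpp -o main.o
arm-none-linux-gnueabi-g++ main.o -Wl,--gc-sections -Wl,-Map=binary.map -o binary

The map file then records which input sections survived, which makes it much easier to trace where symbols like sscanf are being dragged in from.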

g++ compilation of a separately preprocessed file gives error depending on the architecture

I am using g++ version 4.1.2 on an x86_64 GNU/Linux machine. The code base is very large, and I don't have a sufficient understanding of the makefiles used in the project. The code compiles fine as it is.
For debugging purposes, I need to preprocess (g++ -E) a few source files individually and then re-compile them. I am giving the required include paths using -I. Ideally the compilation should go fine.
But I am getting a few discrepancies in standard headers, such as:
- typedef unsigned long size_t; causes errors with the operator new() declaration generated by the compiler (if I change it to unsigned int manually, this error disappears)
- in library functions like unsigned long numeric_limits<>::max(), the compiler complains about big numbers such as 922...807L; it generates the error "integer constant is too large for long type"
- a mismatched declaration of __errno_location() gives a compiler error
I am having a hard time finding out what is going wrong. Why does compilation go fine when I run make on the unchanged file, and why do the standard headers start complaining when I use the g++ -I <> -E option on an individual file?
(Note that there is no problem with the code we have written; it's just from the standard library side. I tried locating the stddef.h which has unsigned int as the typedef, but that only fixes the first problem.)
Any ideas on how to fix these errors would be highly appreciated.
Don't preprocess and compile separately, or if you must then use consistent compiler options and a consistent environment.
It sounds as though you're running the preprocessor on a 32-bit machine (or using the -m32 option) and then compiling on a 64-bit machine.
When compiling the output of the preprocessor, make sure that you use the -fpreprocessed compiler option so that the preprocessor will not run again.
If you don't pass that option, certain constructs that produce identifiers looking like macros may get expanded again into something they shouldn't be. It's hard for me to come up with a case that shows a difference (I'm sure I could, but it would take a bit of puzzling and would be pretty contrived). However, the implementation headers may well use some arcane macro techniques that are sensitive to this option.
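A minimal sketch of the two-step build described above (file names are mine; the point is that the flags and the machine must match the normal build exactly, here assuming a 64-bit target):

g++ -m64 -I<includes> -E Foo.cpp -o Foo.ii
g++ -m64 -fpreprocessed -c Foo.ii -o Foo.o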

Cuda with Boost

I am currently writing a CUDA application and want to use the boost::program_options library to get the required parameters and user input.
The trouble I am having is that NVCC cannot handle compiling the boost file any.hpp giving errors such as
1>C:\boost_1_47_0\boost/any.hpp(68): error C3857: 'boost::any': multiple template parameter lists are not allowed
I searched online and found it is because NVCC cannot handle certain constructs used in the Boost code, but that NVCC should delegate compilation of host code to the C++ compiler. In my case I am using Visual Studio 2010, so host code should be passed to cl.
Since NVCC seemed to be getting confused, I even wrote a simple wrapper around the Boost stuff and stuck it in a separate .cpp (instead of .cu) file, but I am still getting build errors. Weirdly, the error is thrown while compiling my main.cu rather than wrapper.cpp, yet it is still caused by Boost, even though main.cu doesn't include any Boost code.
Does anybody know of a solution or even workaround for this problem?
Dan, I have written a CUDA application using boost::program_options in the past, and I looked back at it to see how I dealt with your problem. There are certainly some quirks in the nvcc compile chain. I believe you can generally deal with this if you've decomposed your classes appropriately, keeping in mind that while NVCC often can't handle C++ code/headers, your C++ compiler can handle the CUDA-related headers just fine.
I essentially have a main.cpp which includes my program_options header and the parsing stuff dictating what to do with the options. The program_options header then includes the CUDA-related headers/class prototypes. The important part (as I think you've seen) is to just not have the CUDA code and its accompanying headers include that options header. Pass your objects to an options function and have it fill in the relevant info. Something like an ugly version of the Strategy pattern. Condensed:
main.cpp:
#include "myprogramoptionsparser.hpp"
(...)
CudaObject* MyCudaObj = new CudaObject;
GetCommandLineOptions(argc,argv,MyCudaObj);
myprogramoptionsparser.hpp:
#include <boost/program_options.hpp>
#include "CudaObject.hpp"
void GetCommandLineOptions(int argc, char **argv, CudaObject* obj) {
    (do stuff to the cuda object)
}
CudaObject.hpp:
(do not include myprogramoptionsparser.hpp)
CudaObject.cu:
#include "CudaObject.hpp"
It can be a bit annoying, but the nvcc compiler seems to be getting better at handling more C++ code. This has worked fine for me in VC2008/2010, and linux/g++.
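For illustration, here is roughly what CudaObject.hpp might look like under this layout (the members are hypothetical; the point is that the header is plain C++ with no Boost and no CUDA syntax, so both the host compiler and nvcc can parse it):

// CudaObject.hpp
#ifndef CUDAOBJECT_HPP
#define CUDAOBJECT_HPP

class CudaObject {
public:
    CudaObject() : deviceId(0), blockSize(256) {}
    void SetDeviceId(int id) { deviceId = id; }   // filled in by GetCommandLineOptions
    void SetBlockSize(int n) { blockSize = n; }
    void Run();                                   // implemented in CudaObject.cu
private:
    int deviceId;
    int blockSize;
};

#endif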
You have to split the code in two parts:
- the kernel has to be compiled by nvcc
- the program that invokes the kernel has to be compiled by g++
Then link the two objects together, and everything should work.
nvcc is required only to compile the CUDA kernel code.
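A sketch of that split build on the command line (file and library names are illustrative):

nvcc -c kernel.cu -o kernel.o
g++ -c main.cpp -o main.o
g++ main.o kernel.o -L/usr/local/cuda/lib64 -lcudart -lboost_program_options -o app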
Thanks to @ronag's comment, I realised I was still indirectly including boost/program_options.hpp in my header, since some member variables in my wrapper class definition needed it.
To get around this, I moved those variables out of the class definition and into the .cpp file. They are no longer member variables but globals inside wrapper.cpp.
This seems to work but it is ugly and I have the feeling nvcc should handle this gracefully; if anybody else has a proper solution please still post it :)
Another option is to wrap host-only code in an #ifndef __CUDACC__ guard, since nvcc defines the __CUDACC__ macro whenever it is the one doing the compiling:
#ifndef __CUDACC__
// C++-only code that nvcc should never see, e.g. the Boost includes
#endif