GCC dies trying to compile 64-bit code on OS X 10.6 - c++

I have a brand-new, off-the-CD OS X 10.6 installation.
I'd now like to compile the following trivial C program as a 64bit binary:
#include <stdio.h>

int main()
{
    printf("hello world");
    return 0;
}
I invoke gcc as follows:
gcc -m64 hello.c
However, this fails with the following error:
Undefined symbols:
"___gxx_personality_v0", referenced from:
_main in ccUAOnse.o
CIE in ccUAOnse.o
ld: symbol(s) not found
collect2: ld returned 1 exit status
What's going on here? Why is gcc dying?
Compiling without the -m64 flag works fine.

Two things:
I don't think you actually used gcc -m64 hello.c. The error you got is usually the result of doing something like gcc -m64 hello.cc, i.e., using the C compiler driver to compile C++ code.
shell% gcc -m64 hello.c
shell% ./a.out
hello world (note: no trailing newline, since the program doesn't print one)
shell% cp hello.c hello.cc
shell% gcc -m64 hello.cc
Undefined symbols:
"___gxx_personality_v0", referenced from:
_main in ccYaNq32.o
CIE in ccYaNq32.o
ld: symbol(s) not found
collect2: ld returned 1 exit status
You can "get this to work" with the following:
shell% gcc -m64 hello.cc -lstdc++
shell% ./a.out
hello world
Second, -m64 is not the preferred way of specifying that you'd like to generate 64-bit code on Mac OS X. The preferred way is to use -arch ARCH, where ARCH is one of ppc, ppc64, i386, or x86_64. More (or fewer) architectures may be available depending on how your tools are set up (e.g., iPhone ARM added, ppc64 deprecated, etc). Also, on 10.6, gcc defaults to -arch x86_64, i.e., it generates 64-bit code by default.
Using this style, it's possible to have the compiler create "fat binaries" automatically: you can pass -arch multiple times. For example, to create a "Universal Binary":
shell% gcc -arch x86_64 -arch i386 -arch ppc hello.c
shell% file a.out
a.out: Mach-O universal binary with 3 architectures
a.out (for architecture x86_64): Mach-O 64-bit executable x86_64
a.out (for architecture i386): Mach-O executable i386
a.out (for architecture ppc7400): Mach-O executable ppc
EDIT: The following was added to answer the OP's question "I did make a mistake and call my file .cc instead of .c. I'm still confused about why this should matter?"
Well... that's a sort of complicated answer. I'll give a brief explanation, but I'll ask that you have a little faith that "there's actually a good reason."
It's fair to say that "compiling a program" is a fairly complicated process. For both historical and practical reasons, when you execute gcc -m64 hello.cc, it's actually broken up into several discrete steps behind the scenes. These steps, each of which feeds its result to the next, are approximately:
Run the C Pre-Processor, cpp, on the source code that is being compiled. This step is responsible for performing all the #include statements, various #define macro expansions, and other "pre-processing" stuff.
Run the C compiler proper on the C Pre-Processed results. The output of this step is a .s file, or the result of the C code compiled to assembly language.
Run the as assembler on the .s source. This assembles the assembly language into a .o object file.
Run the ld linker on the .o file(s) to link the compiled object files and the various static and dynamically linked libraries into a usable executable.
Note: This is a "typical" flow for most compilers. An individual compiler implementation doesn't have to follow the above steps. Some compilers combine multiple steps into one for performance reasons. Modern versions of gcc, for example, don't use a separate cpp pass. The tcc compiler, on the other hand, performs all the above steps in one pass, using no additional external tools or intermediate steps.
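You can watch these stages yourself by telling the driver to stop after each one (a sketch; the intermediate file names are just the usual conventions):
shell% gcc -E hello.c -o hello.i    # preprocess only
shell% gcc -S hello.i               # compile to assembly, producing hello.s
shell% as hello.s -o hello.o        # assemble to object code
shell% gcc hello.o -o hello         # let the driver run ld with the right libraries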
In the above, traditional compiler tool chain flow, the cc (or, in our case, gcc) command is called a "compiler driver". It's a "logical front end" to all of the above tools and steps, and it knows how to intelligently apply the tools (like the assembler and linker) in order to create a final executable. In order to do this, though, it usually needs to know the kind of file it is dealing with. You can't really feed an assembled .o file to the C compiler, for example. Therefore, there are a number of "standard" .* designations used to specify the kind of file (see man gcc for more info):
.c, .h C source code and C header files.
.m Objective-C source code.
.cc, .cp, .cpp, .cxx, .c++ C++ Source code.
.hh C++ header file.
.mm, .M Objective-C++ source code.
.s Assembly language source code.
.o Assembled object code.
.a ar archive or static library.
.dylib Dynamic shared library.
It's also possible to override this automatically determined file type using various compiler flags (see man gcc for how to do this), but it's generally MUCH easier to just stick with the standard conventions so that everything "just works" automatically.
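For example, the -x flag forces the driver to treat its input as a particular language, regardless of extension (a quick sketch):
shell% gcc -x c++ -m64 hello.c -lstdc++   # hello.c is now compiled as C++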
And, in a round about way, if you had used the C++ "compiler driver", or g++, in your original example, you wouldn't have encountered this problem:
shell% g++ -m64 hello.cc
shell% ./a.out
hello world
The reason for this is that gcc essentially says "use C rules when driving the tool chain" and g++ says "use C++ rules when driving the tool chain". g++ knows that to create a working executable, it needs to pass -lstdc++ to the linker stage, whereas gcc doesn't think this is necessary, even though it knew to use the C++ compiler at the "compile the source code" stage because of the .cc file ending.
Some of the other C/C++ compilers available to you on Mac OS X 10.6 by default: gcc-4.0, gcc-4.2, g++-4.0, g++-4.2, llvm-gcc, llvm-g++, llvm-gcc-4.0, llvm-g++-4.0, llvm-gcc-4.2, llvm-g++-4.2, clang. These tools (usually) swap out the first two steps in the tool chain flow and use the same lower-level tools, like the assembler and linker. The llvm- compilers use the gcc front end to parse the C code and turn it into an intermediate representation, and then use the llvm tools to transform that intermediate representation into machine code. Since the llvm tools use a "low-level virtual machine" as their near-final output, they allow for a richer set of optimization strategies, the most notable being that they can perform optimizations across different, already compiled .o files. This is typically called link-time optimization. clang is a completely new C compiler that also targets the llvm tools as its output, allowing for the same kinds of optimizations.
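As a rough illustration, with a modern clang you request link-time optimization like this (a sketch; flag support varies by toolchain version, and the 10.6-era tools may have spelled this differently, if they supported it at all):
clang -O2 -flto -c a.c
clang -O2 -flto -c b.c
clang -O2 -flto a.o b.o -o prog   # the optimizer can now work across a.o and b.o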
So, there you go. The not so short explanation of why gcc -m64 hello.cc failed for you. :)
EDIT: One more thing...
It's a common "compiler driver technique" to have commands like gcc and g++ symlink to the same "all-in-one" compiler driver executable. Then, at run time, the compiler driver checks the path and file name that was used to create the process and dynamically switches rules based on whether that file name ends with gcc or g++ (or equivalent). This allows the developer of the compiler to reuse the bulk of the front-end code and then just change the handful of differences required between the two.
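A minimal sketch of that dispatch trick (a hypothetical driver, not the actual gcc source):

#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
    (void)argc;  /* unused; we only care about the invocation name */

    /* Find the base name of the command used to start this process. */
    const char *name = strrchr(argv[0], '/');
    name = name ? name + 1 : argv[0];

    /* Invoked under a "g++"-style name? Then switch to C++ rules. */
    if (strstr(name, "g++") != NULL)
        printf("driving the tool chain with C++ rules (pass -lstdc++ to ld)\n");
    else
        printf("driving the tool chain with C rules\n");
    return 0;
}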

I don't know why this happens (works fine for me), but
Try compiling with g++, or link against libstdc++. ___gxx_personality_v0 is a symbol used by GNU C++ exception handling (it drives the stack unwinding that runs destructors), so for some reason C++ code is creeping into your C code. or
Remove the -m64 flag. Binaries generated by GCC 4.2 on 10.6 default to 64-bit as far as I know. You can check by running file on the output and ensuring it reads "Mach-O 64-bit executable x86_64". or
Reinstall the latest Xcode from http://developer.apple.com/technology/xcode.html.

OK:
adding -m64 doesn't do anything; a plain gcc invocation with no options already produces a 64-bit compile
if you are really just off-the-CD, then you should update and install a newer Xcode
your program works fine for me with or without -m64 on 10.6.2

We had a similar issue with CocoaPods when creating and using a pod that contained Objective-C++ code, so I thought it's also worth mentioning:
You should edit the .podspec of the pod that contains the C++ code, and add a line like:
s.xcconfig = {
  'OTHER_LDFLAGS' => '$(inherited) -lstdc++',
}

Related

Building and running with different gcc versions

environment A: CentOS 7 (same OS) / gcc 7.3.1 (newer gcc)
environment B: CentOS 7 (same OS) / gcc 4.8.5 (older gcc)
I have built a C++ executable in environment A and run it in environment B.
I haven't had a problem so far, but could there be a problem with this approach?
Technically, building a static executable improves portability.
So compile your C++ files with g++ -O -Wall -Wextra -g and link the object files with g++ -static *.o
Alternatively, link dynamically only against libc.so, not the C++ standard library. So link with g++ *.o -Wl,-Bstatic <your libraries> -Wl,-Bdynamic -lc (the -Wl, prefix passes the -B flags through to the linker).
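A minimal sketch of the fully static route (prog.cpp is a placeholder name):
g++ -O -Wall -Wextra -g -c prog.cpp
g++ -static prog.o -o prog
file prog   # should report "statically linked"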
For details, read the documentation of GCC, of GNU binutils, and perhaps of GNU bash, plus the Program Library HOWTO and Drepper's paper How To Write Shared Libraries.
Notice that to run an executable (see elf(5), execve(2), and other syscalls(2)) you don't need a compiler.
You may want to use strace(1) and ldd(1) on machine B.
Don't forget to have many test cases.
Notice that GCC, Bash, and Binutils are free software. You are allowed to download and study their source code and improve it (there are licensing conditions when you redistribute an improved binary; read their GPL license).
You could have legal or licensing issues (e.g. if your C++ code uses Qt). You may need to consult a lawyer about them.
See also LinuxFromScratch.

Undefined reference when combining C++ and Fortran [duplicate]

I am trying to link a .o file generated using g++ and another .o file generated using gfortran.
g++ -c mycppcode.cpp
produces the file mycppcode.o and the command
gfortran -c myfortrancode.f
produces the file myfortrancode.o
When I link these two files to get an output file
g++ -O mycppcode.o myfortrancode.o
I get the following error
Undefined symbols for architecture x86_64:
"__gfortran_pow_c8_i4", referenced from:
Could someone help me with this? Should I use another compiler? Also, I would like to know which functions or subroutines call "__gfortran_pow_c8_i4", so that I can try to avoid those functions or subroutines in Fortran in the future.
The following assumes you are using the GNU compiler tools. Things may be slightly different if you are using other compilers.
You can use either compiler to link the two together, but you need to provide the appropriate libraries.
Typically, you can use either
gfortran fortobj.o cppobj.o -lstdc++
or
g++ fortobj.o cppobj.o -lgfortran
This assumes that you are using a setup where both compilers know about each other's libraries (e.g., if you installed both through a Linux distribution's repository).
In the case of the OP, the C compilers came from Xcode and gfortran is from Homebrew. In that case, gfortran knows about the g++ libraries (since they were used to compile the compiler), but g++ doesn't know about the gfortran libraries. This is why using gfortran to link worked as advertised above. However, to link with g++ you need to add the path to libgfortran.* when you call the linker, using the -L flag, like
g++ fortobj.o cppobj.o -L/path/to/fortran/libs -lgfortran
If for some reason your gfortran compiler is unaware of your g++ libs, you would do
gfortran fortobj.o cppobj.o -L/path/to/c++/libs -lstdc++
Note that there shouldn't be any difference in the final executable. I'm no compiler expert, but my understanding is that using the compiler to link your objects together is a convenience for calling the linker (ld on UNIX-like OSes) with the appropriate libraries for the language you are using. Therefore, using one compiler or the other to link shouldn't matter, as long as the right libraries are included.
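If you're not sure what path to pass with -L, the gcc-family drivers can report where their own runtime libraries live (a sketch; if the library isn't found, the name is simply echoed back unchanged):
gfortran -print-file-name=libgfortran.a
g++ -print-file-name=libstdc++.a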

Code size is doubled when compiling with GCC ARM Embedded?

I've just ported an STM32 microcontroller project from Keil uVision (using the Keil ARM compiler) to CooCox CoIDE (using the GCC ARM Embedded compiler).
Problem is, the code size is double when compiled in CoIDE with GCC compared to Keil uVision.
How can this be? What can I do?
Code size in Keil: 54632b (.text)
Code size in CoIDE: 100844b (.text)
GCC compiler flags:
arm-none-eabi-gcc -mcpu=cortex-m3 -mthumb -g2 -Wl,-Map=project.map -Os
-Wl,--gc-sections -Wl,-TC:\arm-gcc-link.ld -g -o project.elf -L -lm
I suspect CoIDE and GCC are compiling a lot of functions and files that are present in the project but aren't used (yet). Is it possible that whole files get compiled in even if I only use 1 function out of 20 in them (even though I have -Os)?
It's hard to say which files really end up in your final binary from the information you give. I suppose it takes all the C files it finds in your project if you did not explicitly specify which ones to compile, or if you don't use your own Makefile.
But judging from the compiler options you give, the linker flag --gc-sections can't garbage-collect much unless you also compile with -ffunction-sections -fdata-sections. Try adding those options to strip all unused functions and data at link time.
Since the question was tagged C++, you may also want to disable exceptions and RTTI; those take quite a bit of code. Add -fno-exceptions -fno-rtti to your compiler flags.
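Putting those suggestions together, the adjusted build might look roughly like this (a sketch based on the flags quoted above; file names are placeholders, and the g++ driver is used since the sources are C++):
arm-none-eabi-g++ -mcpu=cortex-m3 -mthumb -Os \
  -ffunction-sections -fdata-sections -fno-exceptions -fno-rtti -c main.cpp
arm-none-eabi-g++ -mcpu=cortex-m3 -mthumb -Os \
  -Wl,--gc-sections -Wl,-Map=project.map -T arm-gcc-link.ld main.o -o project.elf -lm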

Static linking to libcrypto++, with g++

I am trying to compile a program on my system, which runs Debian Wheezy with g++ 4.7. I want it to be able to run on another system running Debian Squeeze (with no recent g++). I can't compile the program on the Squeeze machine, because I use certain C++11 features the old g++ does not support, as well as a newer Boost version and libcrypto++9.
As far as I understand, the usual way to get around this problem is to statically link the libraries not available on the other system, in my case libstdc++, Boost, and crypto++.
My (linking) compiler call right now is
g++-4.7 .obj/btcmirco.o -Wl,-Bstatic -lboost_program_options -lboost_system -lcrypto++ -Wl,-Bdynamic -lcurl -static-libgcc -std=c++11 -o MyProgram
However, I seem to have missed something, because the build throws a lot of undefined reference errors. It works fine if I dynamically link crypto++ (and only statically link libstdc++ and Boost).
Can anyone tell me what's wrong, or whether there is a fundamental error in my approach?
The linker errors I get are (shortened):
`.text._ZN8CryptoPP22BufferedTransformationD2Ev' referenced in section `.text._ZN8CryptoPP22BufferedTransformationD1Ev[_ZN8CryptoPP22BufferedTransformationD1Ev]' of /usr/lib/gcc/x86_64-linux-gnu/4.7/../../../../lib/libcrypto++.a(cryptlib.o): defined in discarded section `.text._ZN8CryptoPP22BufferedTransformationD2Ev[_ZN8CryptoPP22BufferedTransformationD5Ev]' of /usr/lib/gcc/x86_64-linux-gnu/4.7/../../../../lib/libcrypto++.a(cryptlib.o)
`.text._ZN8CryptoPP25MessageAuthenticationCodeD2Ev' referenced in section `.text._ZN8CryptoPP25MessageAuthenticationCodeD1Ev[_ZN8CryptoPP25MessageAuthenticationCodeD1Ev]' of /usr/lib/gcc/x86_64-linux-gnu/4.7/../../../../lib/libcrypto++.a(cryptlib.o): defined in discarded section `.text._ZN8CryptoPP25MessageAuthenticationCodeD2Ev[_ZN8CryptoPP25MessageAuthenticationCodeD5Ev]' of /usr/lib/gcc/x86_64-linux-gnu/4.7/../../../../lib/libcrypto++.a(cryptlib.o)
I experienced the same problem, and it has to do with the fact that you are trying to mix code generated by g++-4.7 (your program) with code generated by a previous version of g++ (the crypto++ library).
The reason behind this is that when you compile the library by executing the make command, it uses the default version of g++ set up for your system, usually the one that comes with the OS.
In order to solve the issue, compile the crypto++ library with g++-4.7.
For that, build the library by executing make CXX=g++-4.7. The resulting static library shouldn't give you the error when linked against your code.
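In other words, something like the following (the clean and install steps depend on how your crypto++ source tree is set up):
cd cryptopp
make clean
make CXX=g++-4.7
sudo make install
After that, re-run your original link command unchanged.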

equivalent gcc flags for g++ call

I'm playing around with a toolchain that seems to wrap gcc (qcc), but also uses g++ for a few things. This caused a bit of confusion when I couldn't link libs I built with g++ using g(q)cc, even though it was for the same architecture (I got missing-lib errors). After a bit more research, I found that g++ is basically gcc with a few default flags and a slightly different interpretation mechanism for file extensions (there may be other differences I've glossed over). I'd like to know exactly which flags can be passed to gcc to produce the equivalent of a g++ call. For instance:
g++ -g -c hello.cpp // I know at the very least that this links in stl
gcc -g -c -??? // I want the exact same result as I got with g++... what flags do I use?
The way the tool chain is set up makes it sort of difficult to simply replace the gcc calls with g++. It'd be much easier to know which flags I need to pass.
The differences between using gcc vs. g++ to compile C++ code are that (a) g++ compiles files with the .c, .h, and .i extensions as C++ instead of C, and (b) it automatically links with the C++ standard library (-lstdc++). See the man page.
So, assuming that you're not compiling .c, .h, or .i files as C++, all you need to do to make gcc act like g++ is add the -lstdc++ option to your linker flags. If you are compiling those other files as C++, you can add -x c++, but I'd advise you instead to rename them to use the .cc or .ii extensions (.h can stay as it is, if you're using precompiled headers).
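For the example above, an equivalent pair of gcc calls would look roughly like this (the compile step is already identical, since the .cpp extension selects the C++ front end by itself; only the link step needs the extra flag):
gcc -g -c hello.cpp                 # same as g++ here: .cpp is compiled as C++
gcc -g hello.o -lstdc++ -o hello    # the link step needs the C++ runtime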