Expose debugging symbols for a program compiled with multiple languages - c++

As per official instructions, to compile a program with debugging support you can run
g++ -std=c++11 -O0 -g -c -o program1.o program1.cpp
Now to do the same with a C program, it's just:
gcc -O0 -g -c -o program2.o program2.c
In order to link both types together, I could use:
g++ -std=c++11 -O0 -g -o program program1.o program2.o
Then, to debug:
gdb program
(gdb) run <PARAMS>
This eventually worked, but only after several attempts at tinkering with the compiler options (the options above are from a working version). In some cases the C symbols would load, but the C++ symbols would not.
Can someone shed some light on what the recommended options are to enable debugging for non-trivial programs that mix several compiled languages? All of the documentation I have found only covers trivial examples.

Note that if you use just the -g option, the compiler will use the operating system's native debug format, which can vary. You can explicitly specify the format instead using the other -g... varieties (for example -gdwarf-3 or -gstabs). This lets you guarantee that your object files all carry a consistent debug format, irrespective of where they were built.
You can also disable gdb-only extensions with this approach, should you wish to use other debuggers. See the GCC documentation on debugging options for details.
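For example, to pin every translation unit to the same DWARF version regardless of the platform default (just a sketch; version 4 is an arbitrary choice, pick whatever your gdb supports):
g++ -std=c++11 -O0 -gdwarf-4 -c -o program1.o program1.cpp
gcc -O0 -gdwarf-4 -c -o program2.o program2.c
g++ -std=c++11 -O0 -gdwarf-4 -o program program1.o program2.o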

In each compilation and linking step, add the -g option to the compiler flags. -O0 is recommended too for debug builds so you don't get the compiler optimizing functions away. Being inconsistent may result in no debug symbols or partial debug symbols.
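If you want to confirm that each object actually carries debug information before you link, the binutils tools will show it (file names as in the question above; an object built without -g produces essentially empty output here):
readelf --debug-dump=info program1.o | head
objdump -g program2.o | head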

Related

Compile different files where functions and common blocks have the same name

I am currently trying to integrate some 3rd-party code (program A) into another 3rd-party program (program B). Unfortunately, it seems that some COMMON blocks and subroutines share names between the two codes. This is not detected by the compiler (I suspect because the compilation process involves many different files and the creation of a shared object), but the program crashes when accessing certain common blocks / subroutines with very generic names (e.g. BASIS, JACOBIAN), and renaming them alleviates the problem. However, renaming all common blocks and subroutines within program A is not feasible because of its size.
At the moment, I have two separate directories of code. I compile both separately with the Intel compiler into .o files and then create a shared object from both:
ifort -c -fPIC -fp-model precise codeA.f
ifort -c -fPIC -fp-model precise codeB.f
ifort -c -fPIC -fp-model precise code_coupling.F90
ld -shared -o library.so codeA.o codeB.o code_coupling.o
The code in code_coupling.F90 is for coupling both codes and it is called within codeB.f, which I cannot change.
Is it possible to compile codeA.f with some additional compiler flags so that the names of the COMMON blocks and subroutines don't interfere with each other?
Is there some other way I can prevent the names from interfering with each other?
One (slightly hacky) solution I have discovered is to compile codeA.f with the flag -assume nounderscore and to manually append a trailing underscore, within codeA.f, to the names of the subroutines that code_coupling.F90 needs to call:
ifort -c -fPIC -fp-model precise -assume nounderscore codeA.f
ifort -c -fPIC -fp-model precise codeB.f
ifort -c -fPIC -fp-model precise code_coupling.F90
ld -shared -o library.so codeA.o codeB.o code_coupling.o
Rename the subroutine codeA_subroutine within codeA.f to codeA_subroutine_.
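You can sanity-check the renaming with nm on the resulting object files (using the object names from above):
nm codeA.o | grep -i basis
nm codeB.o | grep -i basis
With -assume nounderscore, the symbols from codeA.o should show up without the trailing underscore (e.g. basis instead of basis_), so they no longer clash with the identically named, underscored symbols from codeB.o.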

Debian gcc undesirable behaviour

I am creating a gcc shared library that has a static library dependency.
I compile the parts for the static library as follows:
gcc -c -m64 -O2 -fPIC -std=c99 -Wall ms*.c   # there are 10 C files, no warnings
Next I create a static library with:
ar rc static_lib.a ms*.o
Next I compile the parts for my program as follows:
g++ -c -m64 -O2 -fPIC -std=c++14 -Wall ab*.cpp   # there are 5 C++ files, just -Wunused-variable warnings
Then I create the shared library as follows:
g++ -shared -g -Wall ab*.o static_lib.a -o shared_lib.so
In the normal case, this shared_lib.so will be called by a Ruby program through a foreign function interface. There is no problem if I do this on Ubuntu or macOS (.dylib), but if I try it on Debian Stretch I get an error related to the static library, as if the configuration is not set properly. If I bypass the foreign function interface, for example by creating a tester with a main function in a .cpp file and running:
> g++ -o library_test ab*.o static_lib.a
> ./library_test
There is no problem!
My question is: what configuration for creating a shared library might be missing here and causing this undesirable behaviour, especially on Debian Stretch 9.5?
Or is there a way for me to check whether there is a problem in the shared library itself?
From the comments, you indicate the problem is with a #define. Those are preprocessor directives. Libraries are for the linker.
You might be confused because g++ does include the preprocessor phase, and might call the linker depending on the requested output. Still, g++ follows the C++ language rules.
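As a minimal illustration of that distinction (hypothetical file and macro names, not taken from the actual code):
/* config.h - hypothetical header used while building static_lib.a */
#ifndef MS_BUFFER_SIZE
#define MS_BUFFER_SIZE 64      /* expanded by the preprocessor at compile time */
#endif

/* ms_buffer.c - hypothetically one of the ms*.c files */
#include "config.h"
int ms_buffer_size(void)
{
    return MS_BUFFER_SIZE;     /* the value 64 is baked into ms_buffer.o right here */
}
Once ms_buffer.o sits inside static_lib.a, passing something like -DMS_BUFFER_SIZE=128 while building shared_lib.so changes nothing: the macro no longer exists at link time, only the already-compiled code does. If a #define has to differ per platform, the static library's objects have to be recompiled with that define.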

Why does gcc produce a different result when building from source compared to linking a static library?

I have a single C++14 file, my.cpp, and from within it I'm trying to use a C99 library called open62541. For the latter, both full source open62541.c/.h and a library libopen62541.a exist. In my.cpp, where I include the open62541.h, I'm using C++ specific code (e.g. iostream), so technically I'm mixing C and C++.
I can get my.cpp to compile successfully by referencing the libopen62541.a:
gcc -x c++ -std=c++14 -Wall my.cpp -l:libopen62541.a -lstdc++ -o out
This outputs no warnings, and creates an executable out.
However, if I try to compile using source code only:
gcc -x c++ -std=c++14 -Wall my.cpp open62541.c -lstdc++ -o out
I get a lot of ISO C++ warnings (e.g. "ISO C++ forbids converting a string constant to ‘char*’") and some "jump to label" errors originating from within open62541.c, resulting in compilation failure.
I can get compilation to succeed by using the -fpermissive switch:
gcc -x c++ -std=c++14 -Wall my.cpp open62541.c -lstdc++ -fpermissive -o out
which still outputs a lot of warnings, but creates the executable successfully. However, I'm unsure if doing this is a good idea.
Perhaps worth mentioning is that open62541.h has an extern "C" guard for C++ at the beginning:
#ifdef __cplusplus
extern "C" {
#endif
Given that the .a library, which comes bundled with the open62541 code, is supposedly built from the same source, why are the first two approaches not consistent in terms of the warnings and errors generated? Why does one work while the other doesn't?
Should one method - linking the .a vs. compiling the .c directly - be preferred over the other? I was under the impression that they should be equivalent, but apparently they aren't.
Is using -fpermissive in this case more of a hack that could mask potential problems, and should thus be avoided?
The errors (and warnings) you see are compilation errors (and warnings) emitted by a C++ compiler when compiling C code.
For instance, in C "literal" has type char[] while in C++ it has type const char[].
If you had a C++ compiler build libopen62541.a from open62541.c, you would see the same errors (and warnings). But a C compiler may be perfectly happy with it (depending on the state of that C source file).
On the other hand, when you compile my.cpp and link it against libopen62541.a, the compiler doesn't see that offending C code, so there are no errors (or warnings).
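To make that concrete, this is the kind of line that triggers it (a made-up one-liner, not taken from open62541):
char *s = "hello";  /* fine in C: the literal has type char[6]                 */
                    /* in C++ the literal is const char[6], so g++ emits
                       "ISO C++ forbids converting a string constant to 'char*'" */
A C compiler accepts it silently, a C++ compiler diagnoses it, which is exactly the asymmetry you see between your two build commands.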
From here, you basically have two options:
Use the precompiled library if it suits you as is
g++ -std=c++14 -Wall -Wextra -Werror my.cpp -l:libopen62541.a -o out
Compile the library's code as a first step if you need to modify it
gcc -Wall -Wextra -Werror -c open62541.c
g++ -std=c++14 -Wall -Wextra -Werror -c my.cpp
g++ open62541.o my.o -o out
gcc -x c++ -std=c++14 -Wall my.cpp open62541.c -lstdc++ -o out
This command forces the C code in open62541.c to be compiled as C++. That file apparently contains constructs that are valid in C but not C++.
What you should be doing is compiling each file as its own language and then linking them together:
gcc -std=gnu11 -Wall -c open62541.c
g++ -std=gnu++14 -Wall -c my.cpp
g++ -o out my.o open62541.o
Wrapping up those commands in an easily repeatable package is what Makefiles are for.
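A minimal Makefile along those lines could look like this (a sketch only, using the file names from the question and assuming both sources include open62541.h; remember that recipe lines must start with a tab):
CXX      = g++
CC       = gcc
CXXFLAGS = -std=gnu++14 -Wall
CFLAGS   = -std=gnu11 -Wall

out: my.o open62541.o
	$(CXX) -o out my.o open62541.o

my.o: my.cpp open62541.h
	$(CXX) $(CXXFLAGS) -c my.cpp

open62541.o: open62541.c open62541.h
	$(CC) $(CFLAGS) -c open62541.c

clean:
	rm -f out my.o open62541.o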
If you're wondering why I changed from the strict -std=c++14 to the loose -std=gnu++14 mode, it's because the strict mode is so strict that it may break the system headers! You don't need to deal with that on top of everything else. If you want a more practical additional amount of strictness, try adding -Wextra and -Wpedantic instead ... but be prepared for them to produce lots of warnings on the third-party code that don't actually indicate bugs.

Where can I set the -O optimization compiler option for C++ in NetBeans?

When I build a Release C++ project in NetBeans, it automatically configures it with the -O2 option.
I don't see anywhere in the compiler options where I can override this value. I know it's set to -O2 because I can see the cmdlines it uses in the Build window: g++ -O2 ...
If I add -O1 into the "Additional Otions" within the compiler settings it doesn't honour it because the cmdline now becomes g++ -O1 -O2 ... and so the -O2 supersedes my own setting.
So, where in the IDE can I set the -O optimization level compile setting?
I am using GNU compile tools on Linux if that makes any difference.
I finally found the solution by exploring a bit more. In the dialog shown in the question there is an option, 'Development Mode', which is currently set to 'Release'. There are a number of modes to choose from, and each corresponds to a different set of optimization and/or debug compile flags:
No Flags -c
Debug -c -g
Performance Debug -c -g -O
Test Coverage -g -c
Diagnosable Release -c -g -O2
Release -c -O2
Performance Release -c -O3
Although there doesn't seem to be an option for -O1, that's basically the intended way for you to select different optimization levels in NetBeans.
Please look at the nbproject/Makefile-Release.mk file.
nekto@ubuntu:~/host/ex/dt-netbeans-samples-cpp-Welcome$ grep -r O2 *
nbproject/Makefile-Release.mk: $(COMPILE.cc) -O2 -MMD -MP -MF "$@.d" -o ${OBJECTDIR}/welcome.o welcome.cc
It looks like the presence of the -O2 option in the Release configuration is the default and cannot be changed; however, you can always create your own build configuration (and, as I can see, you did).
Each build configuration has its own nbproject/Makefile-<configuration name>.mk file, which contains the following lines:
# CC Compiler Flags
CCFLAGS=-O1
CXXFLAGS=-O1
I've created a new configuration, made it active, and set the -O1 option from the NetBeans project properties window under C++ Compiler -> Additional Options, and my compilation line no longer contained the -O2 option.

Significantly slower code when compiling with G++ instead of LLVM

I am experimenting with an algorithm I programmed in C++ using Xcode 7.0. When I compare the performance of the binary created by the standard LLVM compiler in Xcode with the binary created by g++ (5.2.0), the LLVM binary is more than an order of magnitude (>10x) faster than the g++ one.
I am using the -o3 code optimisation flag for the g++ compiler as follows:
/usr/local/Cellar/gcc/5.2.0/bin/g++-5 -o3 -fopenmp -DNDEBUG main.cpp \
PattersonInstance.cpp \
... \
-o RROTprog
The g++ compilation is needed because the algorithm has to be compiled and run on a high-performance computer where I cannot use the LLVM compiler. Plus, I would like to use OpenMP to make the code faster.
Any ideas about what is causing this speed difference and how it could be resolved are more than welcome.
Thanks in advance for the help!
L
I can bet that what happens is the following: you pass -o3 to the compiler instead of -O3 (i.e. with a capital O), so -o3 just instructs the compiler to write the executable to a file called 3. However, you use -o RROTprog later in the same command line, and the last -o is the one the compiler honours when naming the executable.
The net effect: the -O3 is not present, hence no optimization is being done.
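For reference, with the flag corrected the command from the question becomes (only the optimisation flag changes; the ... stands for the remaining source files, as before):
/usr/local/Cellar/gcc/5.2.0/bin/g++-5 -O3 -fopenmp -DNDEBUG main.cpp \
PattersonInstance.cpp \
... \
-o RROTprog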