I have code that includes a main program and many modules in separate files that I am linking. Currently I have a makefile that creates a .o file for each module (one per line) and then puts them all together, such as here:
mpif90 - modutils
mpif90 -c modvarsym
mpif90 -c s1_Phi.f90
mpif90 -c s2_Lambda.f90
mpif90 maincode.f90 modutils.o modvarsym.o s1_Phi.o s2_Lambda.o -o maincode
The above compiles fine and runs OK, except that I suspect array bound problems in my variables. So I add -fbounds-check to the final command, like this:
mpif90 maincode.f90 modutils.o modvarsym.o s1_Phi.o s2_Lambda.o -o -fbounds-check maincode
That's when numerous "multiple definition" errors appear, and the code will no longer compile. I imagine that is because of -fbounds-check: rather than just enabling checking for array bounds, it probably does some additional checks. I also suspect that the error is in the way I list the files in the makefile; however, I could not find a way that works. In these files, both modvarsym and modutils are used by the main code as well as by the other two modules. The main code uses all four modules.
There is no include statement in these files. Maincode is the only file with the program statement, and the variables are declared only once, in modvarsym. Overall, the code compiles and runs without -fbounds-check. However, I really want to use -fbounds-check to make sure the arrays do not overrun. Would anybody be able to put me on the right track? Thank you.
This is the answer @dave_thompson_085 gave in his comments; it solves the problem.
First, I assume your first command is meant to have -c, and your first two are meant to have .f90 (or .f95 or similar) suffix as otherwise the compiler shouldn't do anything for them. Second, -o -fbounds-check maincode (in the absence of -c) means to put the linked output in file -fbounds-check and include maincode (if it exists) among the files linked. Since you have already linked all your routines into maincode, linking those same routines again PLUS maincode produces duplicates.
Move -fbounds-check before the -o at least; even better, it is usual style (though not required) to put options that affect parsing and code generation before the source file(s) as well, and in your example that is maincode.f90. Also note that this generates bounds checks only for the routines in maincode; if there are any subscripting errors in the other routines they won't be caught. When you have a bug in a compiled language, the place where a problem is detected may not be the actual origin, and it is usually best to apply debugging options to everything you can.
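A minimal sketch of the corrected sequence, assuming the first two sources are actually named modutils.f90 and modvarsym.f90: the flag is applied to every compile so checks cover all routines, and it sits before -o at link time.

mpif90 -fbounds-check -c modutils.f90
mpif90 -fbounds-check -c modvarsym.f90
mpif90 -fbounds-check -c s1_Phi.f90
mpif90 -fbounds-check -c s2_Lambda.f90
mpif90 -fbounds-check maincode.f90 modutils.o modvarsym.o s1_Phi.o s2_Lambda.o -o maincode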
Related
Let's say I'm including a header file that has tons of functions.
#include "1000Functions.h"
Function1(42);
Function2("Hello");
Function1000("geeks!");
But, I just want to use a few of the functions from the header. After preprocessing, compiling, and linking (for example, with g++), would my program include all 1000 functions, or just the 3 that I used?
I found this article that was useful. Using objdump -tC ProgramName can show you the unnecessary code that ends up in .text when your program is loaded into memory.
Link-time optimization was what I was looking for. It worked for me once I added both of these flags to the linking command, not just -flto:
-O2 -flto
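For reference, a minimal sketch of the full build with LTO enabled (the file names here are placeholders): -flto must appear on both the compile and the link steps, and the optimization level on the link command is what drives the cross-file optimization.

g++ -O2 -flto -c 1000Functions.cpp
g++ -O2 -flto -c main.cpp
g++ -O2 -flto 1000Functions.o main.o -o program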
I'm doing the "Hello World" in the GTKMM tutorial, the "app" uses three files, the main.cc, helloworld.h and helloworld.cc.
At the beginning I thought that compiling the main.cc :
g++ -o HW main.cc $(pkg-config ... )
would be enough, but it gives an error (undefined reference to Helloworld::Helloworld), etc.
In other words, it compiles main and the header, but not the Helloworld class, and this makes sense because the header is included in main.cc but not helloworld.cc. The thing is, I'm kinda scared of including it because I read in another question that "including everything is a bad practice".
That being said, when I compile using all the files in the same command:
g++ -o HW main.cc helloworld.cc $(pkg-config ... )
the "app" works without errors.
So, since using the last command works, is compiling in this way a good practice?
What happens if my app uses a big ton of classes?
Must I manually write them all down in the command?
If not, must I use #include?
Is it good practice using #include for all cc used files?
Is it normal to list all the cpp/cc files when compiling with g++?
Yes, completely.
How else will it know what source code you want it to compile?
The thing is I'm kinda scared of including it because I read in another question that including everything is a bad practice.
#includeing excess headers is bad practice.
Passing your complete source code to the compiler is not.
Is it good practice using #include for all cc used files?
Absolutely not.
What happens if my app uses a big ton of classes? Must I manually write them all down in the command?
No. You should be using a build system that handles this for you. That could be an IDE which takes all the files in your project and passes them to the compiler in turn, or it could be a CMakeLists.txt/Makefile with a *.cpp wildcard in it (although I actually recommend listing source files explicitly, one by one; it's not hard).
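As a minimal sketch of the explicit-listing approach for the three-file tutorial (the gtkmm-3.0 package name is a guess at the question's elided pkg-config invocation; substitute your version):

CXX      = g++
GTKFLAGS = $(shell pkg-config --cflags --libs gtkmm-3.0)
SOURCES  = main.cc helloworld.cc

HW: $(SOURCES) helloworld.h
	$(CXX) -o HW $(SOURCES) $(GTKFLAGS)

This naive version rebuilds everything whenever any listed file changes; a real build system compiles each .cc separately.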
Invoking g++ manually on the command-line is fine for a quick test, but for real usage you don't want to be clowning around with such machinery.
Is it good practice using #include for all cc used files?
It's not just bad practice: never do it.
In order to create an executable you actually have to do two things:
Compile all the source code files to object files or libraries.
Link all the object files and needed libraries into an executable.
You seem to be missing the point that the link phase is where symbols defined in separate source files are resolved or linked.
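Concretely, for the GTKMM example above, the two phases look like this (keeping the question's elided pkg-config invocation):

g++ -c main.cc $(pkg-config ... )        # compile: produces main.o
g++ -c helloworld.cc $(pkg-config ... )  # compile: produces helloworld.o
g++ -o HW main.o helloworld.o $(pkg-config ... )  # link: resolves Helloworld::Helloworld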
Must I manually write them all down in the command?
For the compiler to know about the DEFINITION of the symbols DECLARED in your headers, you must compile all source files. Exceptions to this rule can be (but are not limited to) headers containing template metaprogramming (TMP) code, which usually exists entirely in header files.
What happens if my app uses a big ton of classes?
Most large C++ projects use a build configuration tool such as CMake to handle the generation of makefiles for them.
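For instance, a minimal CMakeLists.txt for the hello-world above might look like this sketch (GTKMM flags omitted for brevity):

cmake_minimum_required(VERSION 3.10)
project(HW CXX)
# List the sources explicitly; CMake generates the per-file compile and link rules.
add_executable(HW main.cc helloworld.cc)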
I am creating *.h5 files so I have been compiling with:
h5c++ -o output myFile.cpp
However, I added MPI to speed up the code in one of the sections. The same compilation gives me an undefined reference error.
undefined reference to `MPI_Init'
How do I compile the code so that I can use MPI as well as HDF5?
You can tell the HDF5 wrapper to use the MPI wrapper instead of your C++ compiler.
For example, if your MPI wrapper is mpiCC, you can simply:
export HDF5_CXX=mpiCC
export HDF5_CLINKER=mpiCC
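With those variables set, the original command should then compile and link against both libraries (assuming mpiCC is indeed the name of the MPI C++ wrapper on your system):

h5c++ -o output myFile.cpp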
Neither mpicc nor h5cc (nor their C++ counterparts) is a compiler; they are only wrappers that add some flags to the compiler call. These flags typically include linked libraries and include paths. You can actually inspect them:
$ mpicc --showme # OpenMPI
$ mpicc -show # MPICH
$ h5cc -show
So the answer to your question is: make a compiler call with all flags from both wrappers.
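For illustration only, with made-up flags standing in for whatever your wrappers actually print:

# Suppose the C++ wrappers reported these (hypothetical) flags:
#   mpic++ -show  ->  g++ -I/usr/include/mpi -lmpi_cxx -lmpi
#   h5c++ -show   ->  g++ -I/usr/include/hdf5 -lhdf5_cpp -lhdf5
# Then one call with the union of both flag sets builds the program:
g++ myFile.cpp -I/usr/include/mpi -I/usr/include/hdf5 -lmpi_cxx -lmpi -lhdf5_cpp -lhdf5 -o output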
However, typically you should just leave this to a build system like CMake, which will assemble all relevant compiler flags.
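A sketch of the CMake route (this assumes CMake >= 3.9 for the imported MPI target; the HDF5 variables come from CMake's stock FindHDF5 module):

find_package(MPI REQUIRED)
find_package(HDF5 REQUIRED COMPONENTS CXX)
add_executable(output myFile.cpp)
target_include_directories(output PRIVATE ${HDF5_INCLUDE_DIRS})
target_link_libraries(output PRIVATE MPI::MPI_CXX ${HDF5_LIBRARIES})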
This problem is not specific to Fubi, but is a general linker issue. These past few days (read: 5) have been full of linking errors, but I've managed to narrow them down to just a handful.
I'm trying to compile Fubi (Full Body Interaction framework) under the Linux environment. It has only been tested on Windows 7, and the web is lacking resources for compiling on a *nix platform.
Now, like I mentioned above, I had a plethora of linking problems that dealt mostly with incorrect g++ flags. Fubi requires OpenNI and NITE (as well as OpenCV, if you want) in order to provide its basic functionality. I've been able to successfully compile the samples from both the OpenNI and NITE frameworks.
As far as I understand, Fubi is a framework, thus I would need to compile a shared library and not a binary file.
When I try to compile it as a binary file using the following command
g++ *.cpp -lglut -lGL -lGLU -lOpenNI -lXnVNite_1_5_2 -I/usr/include/nite -I/usr/include/ni -I/usr/include/GL -I./GestureRecognizer/ -o FubiBin
and I get the output located here. (It's kind of long and I did not want to ruin the format)
If I instead compile into object files (-c flag), no errors appear and it builds the object files successfully. Note, I'm using the following command:
g++ -c *.cpp -lglut -lGL -lGLU -lOpenNI -lXnVNite_1_5_2 -I/usr/include/nite -I/usr/include/ni -I/usr/include/GL -I./GestureRecognizer/
I then am able to use the ar command to generate a statically linked library. No error occurs, probably (this is only a guess on my end) because the code has not been run through the linker yet, so those errors won't appear.
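For reference, that archiving step is along these lines (the archive name is hypothetical):

ar rcs libFubi.a *.o   # bundles the object files; no symbol resolution happens here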
Thanks for being patient and reading all of that. Finally, question time:
1) Is the first error regarding the undefined reference to main normal when trying to compile to a binary file? I searched all of the files within that folder and not a single main function exists.
2) The rest of the undefined reference errors complain that they cannot find the functions mentioned. All of these functions are located in .cpp and .h files in the subdirectory GestureRecognizer/ which is a subdirectory of the path I'm compiling in. So wouldn't the parameter -I./GestureRecognizer/ take care of this issue?
I want to be sure that when I do create the shared library that I won't have any linking issues during run-time. Would all of these errors disappear when trying to compile to a binary file if they were initially linked properly?
You are telling the compiler to create an executable in the first invocation and an executable needs a main() function, which it can't find. So no, the error is not normal. In order to create a shared library, use GCC's "-shared" option for that. Trying some test code here, on my system it also wants "-fPIC" when compiling, but that might differ. Best idea is to dissect the compiler and linker command lines of a few other libraries that build correctly on your system.
In order to add the missing symbols from the subdirs, you have to compile those, too: g++ *.cpp ./GestureRecognizer/*.cpp .... The "-I..." only tells the compiler where to search when it finds an #include .... I wouldn't be surprised if this wasn't even necessary, many projects use #include "GestureRecognizer/Foo.h" to achieve that directly.
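Putting those two points together, a shared-library build might look roughly like this sketch (the output name is assumed, and whether -fPIC is required varies by system):

g++ -shared -fPIC *.cpp GestureRecognizer/*.cpp -I/usr/include/nite -I/usr/include/ni -I/usr/include/GL -I./GestureRecognizer/ -lglut -lGL -lGLU -lOpenNI -lXnVNite_1_5_2 -o libFubi.so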
BTW:
Consider activating warnings when running the compiler ("-W...").
You can split between compiling ("-c") and linking. In both cases, use "g++" though. This should decrease your turnaround time when testing different linker settings.
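Combining both suggestions, a split build of the same (assumed) library could look like this; trying different linker settings then only requires rerunning the second command:

g++ -Wall -Wextra -fPIC -c *.cpp GestureRecognizer/*.cpp -I/usr/include/nite -I/usr/include/ni -I/usr/include/GL -I./GestureRecognizer/
g++ -shared *.o -o libFubi.so -lglut -lGL -lGLU -lOpenNI -lXnVNite_1_5_2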
When compiling with the GNU G++ compiler, every time I do a build without changing the source code I get a different binary object file. Is there a compile option that will give me the same binary each time?
Copied from the GCC man-page:
-frandom-seed=string
    This option provides a seed that GCC uses when it would otherwise use
    random numbers. It is used to generate certain symbol names that have to
    be different in every compiled file. It is also used to place unique
    stamps in coverage data files and the object files that produce them.
    You can use the -frandom-seed option to produce reproducibly identical
    object files.

    The string should be different for every file you compile.
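For example, using each source file's own (stable) name as its seed satisfies the "different for every file" requirement while staying identical from build to build:

g++ -frandom-seed=source.cpp -c source.cpp -o source.o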
You would be better off using make. This way, if your source didn't change, the compilation will be skipped, so the object files won't be changed.
Edit: after some thinking, it's possible to address your comment with a makefile that separates preprocessing from actual compilation, plus some dirty tricks.
Example makefile:
all: source

# Relink only when the preprocessed source actually differs from the previous build.
source: source.i.cpp
	@cmp -s source.i.cpp source.i.prev || g++ source.i.cpp -o source
	@touch source
	@cp source.i.cpp source.i.prev

# Preprocess only (-E); comments are stripped at this stage.
source.i.cpp: source.cpp
	@g++ -E source.cpp > source.i.cpp
Please note that the executable's timestamp changes, but its contents do not (if you changed only the comments, not the actual code).