binary object file changing in each build - c++

When compiling with the GNU g++ compiler, every time I do a build without changing the source code I get a different binary object file. Is there a compile option that will give me the same binary each time?

Copied from the GCC man-page:
-frandom-seed=string
    This option provides a seed that GCC uses when it would otherwise use random numbers. It is used to generate certain symbol names that have to be different in every compiled file. It is also used to place unique stamps in coverage data files and the object files that produce them. You can use the -frandom-seed option to produce reproducibly identical object files. The string should be different for every file you compile.
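For example, one common approach (a sketch; any string that is unique per source file works) is to use each file's own name as its seed via a GNU make pattern rule:

%.o: %.cpp
	g++ -frandom-seed=$< -c $< -o $@

Here $< is the source file name and $@ the object file name, so every file gets a distinct seed that is stable across builds.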

Better still, use make. That way, if your source didn't change, the compilation is skipped, so the object files won't be touched.
Edit: after some thought, it's possible to address your comment with a makefile that separates preprocessing from the actual compilation, plus some dirty tricks.
Example makefile:
all: source

source: source.i.cpp
	cmp -s source.i.cpp source.i.prev || g++ source.i.cpp -o source
	touch source
	cp source.i.cpp source.i.prev

source.i.cpp: source.cpp
	g++ -E source.cpp > source.i.cpp
Note that the executable's timestamp changes, but its contents do not (if you changed only comments, not actual code).

Related

Check if files were compiled with certain flags in Makefile

I have multiple files in my project which are compiled to object files either with -fPIC or without it. When those files go into a shared library this option is necessary; otherwise it is not. Thus when I compile the project into a shared library, the option must have been used.
Nevertheless, I am never sure whether the last compilation was done for the shared library or not. If it wasn't, and I then want to build the shared library, the compilation fails and I have to delete the generated object files. Recompiling those object files takes quite a lot of time, which I would like to avoid.
Is there a way for the makefile to detect, right at the beginning, whether the object files were compiled with or without this option, so that the build can either continue or recompile everything, without producing an error or wasting time in an unnecessary recompilation loop?
Q: "Is there a way for the makefile to detect if the object files have been compiled with or without this option"
Short answer: No
Long answer: if source files can be built with different options and you need access to the different builds at the same time, then you must have several output folders, and the makefile must place each build in the correct folder.
Say something like this for a simple debug/release ("-g" flag) :
|- src
|- include
|- BUILD
   |- obj
   |  |- debug
   |  |- release
   |- bin
      |- debug
      |- release
Of course, this approach has limitations. For example, if you need to have both "debug/release" and "PIC/not PIC", then you will need 4 output folders.
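A minimal GNU make sketch of this layout (the variable and path names are hypothetical):

MODE ?= release                   # invoke as: make MODE=debug
CXXFLAGS.debug   := -g
CXXFLAGS.release := -O2
OBJDIR := BUILD/obj/$(MODE)

$(OBJDIR)/%.o: src/%.cpp
	mkdir -p $(OBJDIR)
	g++ $(CXXFLAGS.$(MODE)) -c $< -o $@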
You can also mix my approach with the one proposed by @BasileStarynkevitch (generating specific names).
A possible approach could be to have your own convention about object files and their naming. For example, file foo.c would be compiled into foo.pic.o when it is position-independent code, and into foo.o when it is not. And you could adapt that idea to other cases (e.g. foo.dbg.o for a file with DWARF debug information, compiled with -g). You could also use subdirectories, e.g. compile foo.c into PICOBJ/foo.o or PICOBJ/foo.pic.o for a PIC object file, but this might be less convenient with make (beware of recursive makes).
Writing appropriate rules for your Makefile is then quite easy.
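For instance, with GNU make pattern rules implementing that naming convention (a sketch, assuming C sources):

%.pic.o: %.c
	gcc -fPIC -c $< -o $@

%.o: %.c
	gcc -c $< -o $@

A shared library is then linked from the *.pic.o files and an ordinary executable from the plain *.o files, so the two kinds of objects never collide.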
Be aware of other build automation systems, e.g. ninja.

What is the difference between 'compiling and linking' and just 'compiling' (with g++)?

I'm new to C++ and have been learning how to create a makefile. I've noticed that one of the examples I have (which has to do with recompiling changed files and ignoring unchanged ones) contains the following rule:
# sysCompiler is set to g++
.cpp.o:
	$(sysCompiler) -c $<
According to the manual for g++, this compiles or assembles the source files but doesn't link them.
-c
    Compile or assemble the source files, but do not link. The linking stage simply is not done. The ultimate output is in the form of an object file for each source file.
    By default, the object file name for a source file is made by replacing the suffix '.c', '.i', '.s', etc., with '.o'.
    Unrecognized input files, not requiring compilation or assembly, are ignored.
In other words, I am just wondering what exactly 'not linking' means when it comes to compiling in C++.
The code of a single C or C++ program may be split among multiple C or C++ files. These files are called translation units.
Compiling transforms each translation unit to a special format representing the binary code that belongs to a single translation unit, along with some additional information to connect multiple units together.
For example, one could define a function in one file, a.c, and call it from another file, b.c. Compiling a.c places the binary code of the function into a.o and records the location at which the function's code starts; compiling b.c records all references to the function in b.o.
Linking connects references from b.o to the function from a.o, producing the final executable of the program.
Splitting the translation process into two stages, compilation and linking, is done to improve translation speed. When you modify a single file from a set of a dozen files or so, only that one file needs to be compiled, while the remaining .o files can be reused from the previous translation.
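To make the a.c/b.c example concrete (a minimal sketch):

/* a.c */
int add(int x, int y) { return x + y; }

/* b.c */
int add(int x, int y);   /* declaration only; the definition lives in a.c */
int main(void) { return add(1, 2); }

gcc -c a.c               # compile only: produces a.o
gcc -c b.c               # produces b.o with an unresolved reference to add
gcc a.o b.o -o prog      # linking resolves the reference into an executable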

gfortran: multiple definitions of... first defined here

I have code that includes a main program and many modules in separate files that I am linking. Currently I have a makefile that creates .o files for each module (each on a separate line) and then puts them all together, such as here:
mpif90 - modutils
mpif90 -c modvarsym
mpif90 -c s1_Phi.f90
mpif90 -c s2_Lambda.f90
mpif90 maincode.f90 modutils.o modvarsym.o s1_Phi.o s2_Lambda.o -o maincode
The above compiles fine and runs OK, except that I suspect array bound problems in my variables. So I include -fbounds-check in the maincode statement, such as here:
mpif90 maincode.f90 modutils.o modvarsym.o s1_Phi.o s2_Lambda.o -o -fbounds-check maincode
That's when numerous "multiple definition" errors appear, and the code no longer compiles. I imagine that is because of -fbounds-check: rather than just enabling checking for array bounds, it probably does some additional checks. I also suspect that the error is in the way that I enter files in the makefile, but I could not find a way that works. In these files, both modvarsym and modutils are used by the main code as well as by the other two modules. The main code uses all four modules.
There is no include statement in these files. Maincode is the only file with a program statement, and the variables are declared only once, in modvarsym. Overall, the code compiles and runs without -fbounds-check. However, I really want to use -fbounds-check to make sure the arrays do not overrun. Would anybody be able to put me on the right track? Thank you.
This is the answer @dave_thompson_085 gave in his comments; it seems to solve the problem.
First, I assume your first command is meant to have -c, and your first two are meant to have .f90 (or .f95 or similar) suffix as otherwise the compiler shouldn't do anything for them. Second, -o -fbounds-check maincode (in the absence of -c) means to put the linked output in file -fbounds-check and include maincode (if it exists) among the files linked. Since you have already linked all your routines into maincode, linking those same routines again PLUS maincode produces duplicates.
Move -fbounds-check before the -o at least; even better, it is usual style (though not required) to put options that affect parsing and code generation before the source file(s) as well, and in your example that is maincode.f90. Also note that this generates bounds checks only for the routines in maincode; if there are any subscripting errors in the other routines they won't be caught. When you have a bug in a compiled language, the place where a problem is detected may not be the actual origin, and it is usually best to apply debugging options to everything you can.
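Applying that advice, the whole build might look like this (a sketch; the .f90 suffixes on the module sources are assumed, per the first point above):

mpif90 -fbounds-check -c modutils.f90
mpif90 -fbounds-check -c modvarsym.f90
mpif90 -fbounds-check -c s1_Phi.f90
mpif90 -fbounds-check -c s2_Lambda.f90
mpif90 -fbounds-check maincode.f90 modutils.o modvarsym.o s1_Phi.o s2_Lambda.o -o maincode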

Creating several precompiled header files using GNU make

I use gcc (running as g++) and GNU make.
I use gcc to precompile a header file precompiled.h, creating precompiled.h.gch; the following line in a Makefile does it:
# MYCCFLAGS is a list of command-line parameters, e.g. -g -O2 -DNDEBUG
precompiled.h.gch: precompiled.h
	g++ $(MYCCFLAGS) -c $< -o $@
All was well until I had to run g++ with different command-line parameters. In that case, even though precompiled.h.gch exists, it cannot be used, and the compilation is much slower.
In the gcc documentation I have read that to handle this situation, I have to make a directory called precompiled.h.gch and put the precompiled header files there, one file for each set of g++ command-line parameters.
So now I wonder how I should change my Makefile to tell g++ to create the gch files this way. Maybe I can run g++ just to test whether it can use any existing file in the precompiled.h.gch directory, and if not, generate a new precompiled header with a unique file name. Does gcc have support for doing such a test? Or maybe I can implement what I want in another way?
It seems weird to answer my own question; anyway, here goes.
To detect whether a suitable precompiled header file exists, I add a deliberate error to my header file:
// precompiled.h
#include <iostream>
#include <vector>
...
#error Precompiled header file not found
This works because if gcc finds a precompiled header, it will not read the .h file, and will not encounter the error.
To "compile" such a file, i remove the error first, placing the result in a temporary file:
grep -v '#error' precompiled.h > precompiled.h.h
g++ -c -x c++ $(MYCCFLAGS) precompiled.h.h -o MORE_HACKERY
Here MORE_HACKERY is not just a plain file name: it stands for some code that creates a file with a unique name (mktemp), omitted here for clarity.
There is a simpler way than introducing an #error in precompiled.h: never create this file at all. Neither G++ nor Visual C++ (at least up to 2005) expect the "real" file to be there, if a precompiled version is around (and if they get the necessary compilation flags).
Let's say the file with the list of #includes that we want to precompile is called "to_be_precompiled.cpp". The filename extension doesn't matter much, but I don't like to call this a .h file, since it has to be used in a way different from genuine header files, and it's easier in Visual C++ if it is a .cpp. Then pick a different name to refer to it throughout the code, let's say "precompiled_stuff". Again, I don't like to call this a .h file, because it's not a file at all; it's a name that refers to precompiled data.
Then in all other source files, the statement #include "precompiled_stuff" is not a genuine include, but simply loads precompiled data. It's up to you to prepare the precompiled data.
For g++, you need a build rule to create "precompiled_stuff.gch" from a source file whose name doesn't matter to the compiler (but would be "to_be_precompiled.cpp" here).
In Visual C++, the string "precompiled_stuff" equals the value of the /Yu flag and the precompiled data loaded comes from a .pch file with an unrelated name, that you also created from an unrelated source file (again "to_be_precompiled.cpp" here).
Only when building with a compiler without precompiled header support does a build rule need to generate an actual file called "precompiled_stuff", preferably in the build directory, away from the real source files. That file is either a copy of "to_be_precompiled.cpp", a hard or symbolic link to it, or a small file containing #include "to_be_precompiled.cpp".
In other words, you take the viewpoint that every compiler supports precompilation, but it's just a dumb copy for some compilers.
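For g++, the build rule could then look something like this (a sketch using the names above; -x c++-header makes g++ emit a precompiled header):

precompiled_stuff.gch: to_be_precompiled.cpp
	g++ $(MYCCFLAGS) -x c++-header $< -o $@

When g++ later sees #include "precompiled_stuff", it looks for precompiled_stuff.gch and uses it if the stored compilation options are compatible.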

What is *.o file?

I'm compiling my own project, and it halted with this error:
LINK||fatal error LNK1181: cannot open input file 'obj\win\release\src\lua\bindings.o'|
Compiling using Code::Blocks with the VS 2005/2008 compiler under Win7.
There are also a lot of other empty directories where *.o files are missing.
What do they do?
A file ending in .o is an object file. The compiler creates an object file for each source file, before linking them together, into the final executable.
You've gotten some answers, and most of them are correct, but miss what (I think) is probably the point here.
My guess is that you have a makefile you're trying to use to create an executable. In case you're not familiar with them, makefiles list dependencies between files. For a really simple case, it might have something like:
myprogram.exe: myprogram.o
	$(CC) -o myprogram.exe myprogram.o

myprogram.o: myprogram.cpp
	$(CC) -c myprogram.cpp
The first line says that myprogram.exe depends on myprogram.o. The second line tells how to create myprogram.exe from myprogram.o. The third and fourth lines say that myprogram.o depends on myprogram.cpp, and how to create myprogram.o from myprogram.cpp, respectively.
My guess is that in your case, you have a makefile like the one above that was created for gcc. The problem you're running into is that you're using it with MS VC instead of gcc. As it happens, MS VC uses ".obj" as the extension for its object files instead of ".o".
That means when make (or its equivalent built into the IDE in your case) tries to build the program, it looks at those lines to try to figure out how to build myprogram.exe. To do that, it sees that it needs to build myprogram.o, so it looks for the rule that tells it how to build myprogram.o. That says it should compile the .cpp file, so it does that.
Then things break down: the VC++ compiler produces myprogram.obj instead of myprogram.o as the object file, so when make tries to go on to produce myprogram.exe from myprogram.o, it finds that its attempt at creating myprogram.o simply failed. It did what the rule said to do, but that didn't produce myprogram.o as promised. It doesn't know what to do, so it quits and gives you an error message.
The cure for that specific problem is probably pretty simple: edit the make file so all the object files have an extension of .obj instead of .o. There's room for a lot of question whether that will fix everything though -- that may be all you need, or it may simply lead to other (probably more difficult) problems.
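For instance, the example rules above would become (a sketch):

myprogram.exe: myprogram.obj
	$(CC) -o myprogram.exe myprogram.obj

myprogram.obj: myprogram.cpp
	$(CC) -c myprogram.cpp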
A .o object file (also .obj on Windows) contains compiled object code (that is, machine code produced by your C or C++ compiler), together with the names of the functions and other objects the file contains. Object files are processed by the linker to produce the final executable. If your build process has not produced these files, there is probably something wrong with your makefile/project files.
It is important to note that object files contain binary code in a relocatable format, a form that allows the linker to load the assembled code anywhere in memory and combine it with code from other object files.
Instructions that refer to labels will not yet have an address assigned for these labels in the .o file.
These labels will be written as '0' and the assembler creates a relocation record for these unknown addresses. When the file is linked and output to an executable the unknown addresses are resolved and the program can be executed.
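You can see those relocation records with objdump (a sketch; foo.o stands for any object file that calls a function defined elsewhere, and the exact relocation types vary by platform):

objdump -r foo.o     # lists the relocation entries the linker must resolve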
You can use the nm tool on an object file to list the symbols defined in a .o file.
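For example (hypothetical object file; the address and the T, which marks a symbol defined in the text section, are illustrative):

$ nm foo.o
0000000000000000 T my_function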
Ink-Jet is right. More specifically, a .o (.obj) file, or object file, is a single source file compiled into machine code (I'm not sure whether the "machine code" is the same as, or similar to, that of an executable). Ultimately, it's an intermediate between an executable program and a plain-text source file.
The linker uses the .o files to assemble the final executable.
Wikipedia may have more detailed information. I'm not sure how much info you'd like or need.