What is the difference between a linker and a makefile?

A linker receives object files and libraries to create executables or other libraries. But so does a makefile (and it can start from sources on top of that). What is the difference between the two?

No, a Makefile does not.
A Makefile is essentially a script that tells the make program which files to send to which programs as part of the build process.
So a Makefile can invoke the compiler, linker, etc. with the appropriate source/object files, but it doesn't actually do the work itself.
I think you have missed the whole concept of a Makefile, so I suggest you do some further reading.

A linker and a makefile have almost nothing in common. A makefile is a set of instructions used by make. They can be instructions that build a program, install some files, or whatever you want make to do. A linker is a program that takes object files as input and combines them into an output executable (or shared library).

A makefile is just a set of rules that defines what needs to be compiled and which build step depends on what. That way the make program can automate the process.
Using make and makefiles, however, does not mean that you are not using a regular compiler and linker. Those still run; you just don't invoke them yourself, having defined the process beforehand in the makefile.
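For illustration, a minimal makefile along those lines might look like this (the file names are invented, and recipe lines must start with a tab):

# make decides *when* to run these commands; the compiler and
# linker invoked below do the actual work.
OBJS = main.o util.o

app: $(OBJS)
	g++ -o app $(OBJS)      # link step: make invokes the linker (via g++)

%.o: %.cpp
	g++ -c -o $@ $<         # compile step: make invokes the compiler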

Make uses a makefile to evaluate a set of rules that decide whether to invoke the compiler on a source file (most likely because it changed) and eventually invoke the linker to create the final executable.
You can read more about make here: http://www.gnu.org/software/make/

Related

How can I configure cmake to compile a file twice with two different compilers?

I'm adding a SYCL/OpenCL kernel to a parallel C++ program which is built with cmake. Using SYCL means I need to get cmake to compile my C++ source file twice: once with the SYCL compiler, and once with the project's default compiler, which is GCC. Both compilations produce outputs which need to be included when linking.
I'm completely new to cmake. I've added the GCC compile and link steps to the project's CMakeLists.txt, but what's the best way to add the SYCL compile step? I'm currently trying the "add_custom_command" option with "PRE_BUILD", but the command which is run doesn't seem to know about the paths which are provided to the normal compile and link steps: the current working directory, include directories, source directories, etc. I'm having to specify all of these manually, and I'm having to figure some of them out first.
It feels like I'm doing this the hard way. Is there a recommended (or at least better) way to get cmake to compile a file twice with two different compilers?
Also, there used to be a SYCL tag, but it's disappeared. Can someone recreate it, please?
Be aware that PRE_BUILD only works as PRE_BUILD in Visual Studio 7; for other generators it is just PRE_LINK.
If you need to use two compilers on the same source file, just add a dependency from the GCC compile-and-link target to the custom target you are using, so that GCC is executed after the SYCL compiler.
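A hedged sketch of that dependency arrangement; the compiler name syclcc and all file names are assumptions, not something given in the question:

# Run the SYCL compiler on kernel.cpp via a custom command/target.
add_custom_command(
    OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/kernel_sycl.o
    COMMAND syclcc -c ${CMAKE_CURRENT_SOURCE_DIR}/kernel.cpp
            -I${CMAKE_CURRENT_SOURCE_DIR}/include
            -o ${CMAKE_CURRENT_BINARY_DIR}/kernel_sycl.o
    DEPENDS kernel.cpp
    COMMENT "Compiling kernel.cpp with the SYCL compiler")
add_custom_target(sycl_pass
    DEPENDS ${CMAKE_CURRENT_BINARY_DIR}/kernel_sycl.o)

add_executable(myprog main.cpp kernel.cpp)   # the normal GCC compile and link
add_dependencies(myprog sycl_pass)           # ensures GCC runs after SYCL
target_link_libraries(myprog
    ${CMAKE_CURRENT_BINARY_DIR}/kernel_sycl.o)  # include the SYCL output when linking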
I can think of a couple of other ways to do it:
1. Generate two build configurations
2. Write a script to call both compilers
The first method is probably the easiest. You might need to maintain two separate CMakeLists.txt files, or possibly just parameterize the compiler and options and pass them as arguments to CMake when you generate (CC=gcc, CXX=g++, CFLAGS/CXXFLAGS, etc.). You might be able to do the same with the underlying build system (e.g. make) and just run it twice.
The second method is a bit more complicated. Write a simple script that accepts both sets of compiler options and compiles each file using the compilers in sequence; the script could then be configured as CC/CXX.
So, the command options would look something like this...
--cc1 sycl --cc2 gcc --cc1opts ... --cc2opts ...
I'm not familiar with SYCL though, so I don't know how it's normally used.
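For what it's worth, here is a minimal sketch of such a wrapper in POSIX shell, using the made-up option names from above; real option handling would need to be more robust:

#!/bin/sh
# ccwrap: compile one source file with two compilers in sequence.
while [ $# -gt 1 ]; do
    case "$1" in
        --cc1)     CC1=$2;     shift 2 ;;
        --cc2)     CC2=$2;     shift 2 ;;
        --cc1opts) CC1OPTS=$2; shift 2 ;;
        --cc2opts) CC2OPTS=$2; shift 2 ;;
        *) break ;;
    esac
done
SRC=$1
# Each compiler writes its own object file so the outputs don't collide.
$CC1 $CC1OPTS -c "$SRC" -o "${SRC%.cpp}.sycl.o" || exit 1
$CC2 $CC2OPTS -c "$SRC" -o "${SRC%.cpp}.o"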

make SCons compile everything in one gcc line?

I have a rather complex SCons script that compiles a big C++ project.
This gcc manual page says:
The compiler performs optimization based on the knowledge it has of the program. Compiling multiple files at once to a single output file mode allows the compiler to use information gained from all of the files when compiling each of them.
So it's better to give all my files to a single g++ invocation and let it drive the compilation however it pleases.
But SCons does not do this. It calls g++ separately for every single C++ file in the project and then links them using ld.
Is there a way to make SCons do this?
The main reason to have a build system with the ability to express dependencies is to support some kind of conditional/incremental build. Otherwise you might as well just use a script with the one command you need.
That being said, the effect of having gcc/g++ optimize as the manual describes is substantial, in particular if you have C++ templates you use often. Good for run-time performance, bad for recompile performance.
I suggest you try and make your own builder doing what you need. Here is another question with an inspirational answer: SCons custom builder - build with multiple files and output one file
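As a rough starting point, such a builder might look like this in an SConstruct (the builder name WholeProgram is invented, and flags are elided):

# Hand every source file to a single g++ invocation.
env = Environment()
whole = Builder(action='g++ $CXXFLAGS -o $TARGET $SOURCES',
                suffix='', src_suffix='.cpp')
env.Append(BUILDERS={'WholeProgram': whole})
# One compile-and-link step for the whole program:
env.WholeProgram('myprog', Glob('*.cpp'))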
Currently the answer is no.
Logic similar to this was developed for MSVC only.
You can see this in the man page (http://scons.org/doc/production/HTML/scons-man.html) as follows:
MSVC_BATCH
When set to any true value, specifies that SCons should batch compilation of object files when calling the Microsoft Visual C/C++ compiler. All compilations of source files from the same source directory that generate target files in the same output directory and were configured in SCons using the same construction environment will be built in a single call to the compiler. Only source files that have changed since their object files were built will be passed to each compiler invocation (via the $CHANGED_SOURCES construction variable). Any compilations where the object (target) file base name (minus the .obj) does not match the source file base name will be compiled separately.
As always patches are welcome to add this in a more general fashion.
In general this should be left up to the program developer. Trying to compile everything together as an amalgamation may introduce unintended behaviour into the program, if it even compiles in the first place. Your best bet, if you want this kind of optimisation without editing the source yourself, is to use a compiler with interprocedural optimisation, like icc -ipo.
An example where an amalgamation of two .c files would not compile: each file defines an identical static symbol with different functionality.
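A minimal illustration (hypothetical files):

/* a.c */
static int helper(void) { return 1; }
int fa(void) { return helper(); }

/* b.c */
static int helper(void) { return 2; }
int fb(void) { return helper(); }

/* Compiled separately, each helper() is private to its own file.
   Concatenated into one amalgamated file, helper() is defined twice,
   which is a compile error. */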

Compile and link many source files from different folders with g++ and make

What is the proper way to compile and link many .cpp files that come from different folders into one executable using a makefile?
For example I have following file structure:
./Foo/Wow/Bar/example1.cpp
./Foo/Bar/example2.cpp
./Foo/example3.cpp
./main.cpp
Now I want to compile and link all of these files into one executable. What is the proper way to do this with a makefile?
There is no one correct way to do this.
One possibility would be to separate the code into libraries, one per subdirectory. Each of those libraries would have its own makefile.
Then the project root would have a makefile that invoked make recursively to ensure those libraries were all up to date, then used the libraries to build the main executable(s).
Others object (sometimes vociferously) to that whole notion. It might be all right to use existing libraries (and their existing makefiles), but they oppose the basic idea of turning code in the subdirectories into libraries just for the sake of having a single result file to link into the final executable. (OTOH, few seem to have a solid explanation of why you'd put code into a subdirectory to start with if it didn't embody some logical concept that would probably qualify it as a meaningful library.)
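Either way, for the layout in the question a single non-recursive makefile is also enough. A minimal sketch, using the paths from the question (the target name myprog is invented, and recipe lines must start with a tab):

SRCS := main.cpp Foo/example3.cpp Foo/Bar/example2.cpp Foo/Wow/Bar/example1.cpp
OBJS := $(SRCS:.cpp=.o)

# Link all objects into one executable.
myprog: $(OBJS)
	$(CXX) $(LDFLAGS) -o $@ $^

# Pattern rule; it matches sources in subdirectories too.
%.o: %.cpp
	$(CXX) $(CXXFLAGS) -c -o $@ $<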

C++ Compile on different platforms

I am currently developing a C++ command line utility to be distributed as an open-source utility on Github. However, I want people who download the program to be able to easily compile and run the program on any platform (specifically Mac, Linux, and Windows) in as few steps as possible. Assuming only small changes have to be made to the code to make it compatible with the various platform-independent C++ compilers (g++ and win32), how can I do this? Are makefiles relevant?
My advice is: do not use makefiles. Maintaining the files for big enough projects is tedious, and errors sometimes happen that you don't catch immediately (because a stale *.o file is still there).
See this question here
Makefiles are indeed highly relevant. You may find that you need (at least) two different makefiles to compensate for the fact that you have different compilers.
It's hard to be specific about how you solve this, since it depends on how complex the project is. It may be easiest to write a script/batch file and just document it ("Use the command build.sh on Linux/Unix, and build.bat on Windows"), then let the respective files deal with, for example, setting up the name of the compiler, flags, etc.
Or you can have an include in the makefile that is determined by the architecture. Or different makefiles.
If the project is REALLY simple, it may be enough to provide just a basic makefile, but that's unlikely: compiling x.cpp on Linux/MacOS makes an object file called x.o, while on Windows the object file is called x.obj. Libraries have different names, DLLs have different names, and on Linux/MacOS the final executable (typically) has no extension, so it's called "myprog", where the executable under Windows is called "myprog.exe".
These sorts of differences mean that the makefile needs to be different.
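As a small sketch of the conditional approach (GNU make syntax; myprog and main.o are placeholders):

# Pick platform-specific names in one makefile.
ifeq ($(OS),Windows_NT)
    EXEEXT := .exe
else
    EXEEXT :=
endif

myprog$(EXEEXT): main.o
	$(CXX) -o $@ main.o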

Compile the Python interpreter statically?

I'm building a special-purpose embedded Python interpreter and want to avoid having dependencies on dynamic libraries so I want to compile the interpreter with static libraries instead (e.g. libc.a not libc.so).
I would also like to statically link all dynamic libraries that are part of the Python standard library. I know this can be done using Freeze.py, but is there an alternative so that it can be done in one step?
I found this (mainly concerning static compilation of Python modules):
http://bytes.com/groups/python/23235-build-static-python-executable-linux
Which describes a file used for configuration located here:
<Python_Source>/Modules/Setup
If this file isn't present, it can be created by copying:
<Python_Source>/Modules/Setup.dist
The Setup file has tons of documentation in it and the README included with the source offers lots of good compilation information as well.
I haven't tried compiling yet, but I think with these resources, I should be successful when I try. I will post my results as a comment here.
Update
To get a pure-static python executable, you must also configure as follows:
./configure LDFLAGS="-static -static-libgcc" CPPFLAGS="-static"
Once you build with these flags enabled, you will likely get lots of warnings about "renaming because library isn't present". This means that you have not configured Modules/Setup correctly and need to:
a) add a single line (near the top) like this:
*static*
(that is: an asterisk, the word "static", and another asterisk, with no spaces)
b) uncomment all modules that you want to be available statically (such as math, array, etc...)
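For example, the top of an edited Modules/Setup might look like this (the exact module lines vary by Python version; these are illustrative):

*static*
math mathmodule.c _math.c -lm
array arraymodule.c
time timemodule.c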
You may also need to add specific linker flags (as mentioned in the link I posted above). My experience so far has been that the libraries are working without modification.
It may also be helpful to run make as follows:
make 2>&1 | grep 'renaming'
This will show all modules that are failing to compile due to being statically linked.
CPython CMake Buildsystem offers an alternative way to build Python, using CMake.
It can build the Python lib statically, and include in that lib all the modules you want. Just set the CMake options
BUILD_SHARED OFF
BUILD_STATIC ON
and set the BUILTIN_<extension> options you want to ON.
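A hedged example invocation, where BUILTIN_MATH stands in for whichever BUILTIN_<extension> options you need and the path is a placeholder:

cmake -DBUILD_SHARED=OFF -DBUILD_STATIC=ON -DBUILTIN_MATH=ON \
      /path/to/cpython-cmake-buildsystem
make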
Using freeze doesn't prevent doing it all in one run (no matter what approach you use, you will need multiple build steps - e.g. many compiler invocations). First, you edit Modules/Setup to include all extension modules that you want. Next, you build Python, getting libpythonxy.a. Then, you run freeze, getting a number of C files and a config.c. You compile these as well, and integrate them into libpythonxy.a (or create a separate library).
You do all this once, for each architecture and Python version you want to integrate. When building your application, you only link with libpythonxy.a, and the library that freeze has produced.
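Sketched as shell steps, run from the Python source tree (file and library names are illustrative, and freeze's exact outputs vary by version):

./configure && make                      # step 1: build libpythonxy.a
python Tools/freeze/freeze.py myapp.py   # step 2: emits C files plus config.c
cc -c -IInclude -I. M_*.c config.c       # step 3: compile freeze's output
ar rs libpython2.7.a M_*.o config.o      # fold the objects into the static lib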
You can try ELF STATIFIER. I've used it before and it works fairly well. I just had problems with it in a couple of cases, and then I had to use another similar program called Ermine. Unfortunately, that one is a commercial program.