C++ automake precompiled header support on CentOS

Some background on this question: my project (C++) contains many files and pulls in large headers from boost, thrift, zookeeper, etc. Compilation now takes too long.
As you know, Visual Studio supports precompiled headers, and so does GCC. Since I use automake to manage the build, what I want to ask is whether automake supports precompiled headers, and if so, how do I write the automake files for it?
Thanks in advance for your answers.

The whole thing is not about automake, but rather about writing makefiles.
Actually, you can try the .gch approach, but there are restrictions (a minimal example follows this list):
write one common header that includes everything (like stdafx.h)
it must be the first include in all your sources
use the same CFLAGS to compile all your sources
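By way of illustration, here is a minimal sketch of the mechanism done by hand (the file names and the -O2 -Wall flags are placeholders, not part of the original answer). GCC silently uses stdafx.h.gch whenever a compilation includes stdafx.h with matching options:
# build the precompiled header with the same flags as the real compilation
g++ -O2 -Wall -x c++-header -o stdafx.h.gch stdafx.h
# this compile includes "stdafx.h" first, so GCC loads stdafx.h.gch instead of reparsing the header
g++ -O2 -Wall -c -o foo.o foo.cpp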
It's relatively new functionality, so you'd want to detect support for it in your configure.ac.
Write a rule in your Makefile.am named stdafx.gch. Make its recipe empty if .gch is not supported:
stdafx.gch: stdafx.h
	$(OPTIONAL_COMPILE_GCH)
.PHONY: $(OPTIONAL_STDAFX_GCH)
Make your _OBJECTS depend on stdafx.gch:
$(foo_SOURCES:.cpp=.$(OBJEXT)): stdafx.gch
# or (is foo_OBJECTS a documented variable?)
$(foo_OBJECTS): stdafx.gch
You can use the documented CXXCOMPILE command to be sure that all CXXFLAGS are the same.
Detect in your configure.ac:
if ...; then
  [OPTIONAL_COMPILE_GCH='$(CXXCOMPILE) -c -o $@ $<']
  [OPTIONAL_STDAFX_GCH=]
else
  [OPTIONAL_COMPILE_GCH=]
  [OPTIONAL_STDAFX_GCH='stdafx.gch']
fi
AC_SUBST(OPTIONAL_COMPILE_GCH)
AC_SUBST(OPTIONAL_STDAFX_GCH)
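The condition itself is left open above. As one possible test (an assumption on my part, not from the original answer), you could simply treat any GNU C++ compiler as capable, since AC_PROG_CXX sets GXX=yes for g++ and .gch support has existed since GCC 3.4:
AC_PROG_CXX
# crude check: assume every g++ understands precompiled headers
if test "x$GXX" = xyes; then
  OPTIONAL_COMPILE_GCH='$(CXXCOMPILE) -c -o $@ $<'
  OPTIONAL_STDAFX_GCH=
else
  OPTIONAL_COMPILE_GCH=
  OPTIONAL_STDAFX_GCH='stdafx.gch'
fi
AC_SUBST(OPTIONAL_COMPILE_GCH)
AC_SUBST(OPTIONAL_STDAFX_GCH)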
Update
You want your .gch to be recompiled when any header file indirectly used by stdafx.h is modified.
Then you could add a stdafx.cpp that does nothing but include stdafx.h, and make the .gch depend on stdafx.o. This way the dependency tracking would be managed by automake, but there's a problem:
stdafx.o itself could make use of the .gch to compile faster, but adding such a dependency would be circular. It's not easy for me to find a solution.
Here's an example that uses status files to solve this: https://gist.github.com/basinilya/e00ea0055f74092b4790
Ideally, we would override the compilation command for stdafx.o so that it first creates the .gch and then runs the standard automake compilation, but automake does not use $(CXXCOMPILE) as is. It creates a complex recipe from it (and the recipe depends on the automake version).
Update 2
Another solution is to use gcc dependency tracking directly:
stdafx.h.gch: stdafx.h
	g++ -MT stdafx.h.gch -MD -MP -MF .deps/stdafx.Tpo -c -o stdafx.h.gch stdafx.h
	mv .deps/stdafx.Tpo .deps/stdafx.Po
-include .deps/stdafx.Po
By default, if a precompiled header is used, gcc -MD will not put the list of header files into the generated dependencies file, just the .gch. There's an option to fix this: -fpch-deps (see bug), but perhaps the default behavior is not that bad: if only the .gch depends on the headers, make will have less work to do.
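If you do want the individual headers listed in the per-source dependency files as well, one way (a sketch; where to put the flag is my assumption) is to add -fpch-deps to the flags automake uses for the ordinary object compilations, e.g. in Makefile.am:
# makes the generated .Po files list the headers behind stdafx.h.gch,
# not just the .gch itself
AM_CXXFLAGS = -fpch-deps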

Related

GCC, CMake, precompiled headers and maintaining dependencies

I'm trying to figure out how to maintain the dependencies of my precompiled header. It includes STL headers, some third-party libraries like boost, and some of our rarely changing infrastructure headers.
I came up with something like this:
SET(PCH_DIR ${CMAKE_CURRENT_BINARY_DIR})
SET(PCH_HEADER ${CMAKE_CURRENT_SOURCE_DIR}/../include/server/server.h)
SET(PCH_DST server.h.gch)
ADD_CUSTOM_TARGET(serverPCH DEPENDS ${PCH_DST})
ADD_CUSTOM_COMMAND(OUTPUT ${PCH_DST} ${PCH_DEP}
    COMMAND ${CMAKE_CXX_COMPILER} -x c++-header ${COMMON_CXXFLAGS} ${COMPILER_DEFINITIONS} -std=gnu++1z -c ${PCH_HEADER} -o ${PCH_DST} -I${CMAKE_SOURCE_DIR}/lib/include/server -I${CMAKE_SOURCE_DIR}/lib/include
    MAIN_DEPENDENCY ${PCH_HEADER}
    WORKING_DIRECTORY ${PCH_DIR}
    COMMENT "Building precompiled header"
    VERBATIM)
It looks like it's doing its job and the PCH gets recompiled once the header is edited. However, PCH recompilation is not triggered when one of the files included in server.h is changed. Is there a way to trigger recompilation if any of the headers included in server.h is changed?
Well, 2 years later. CMake now supports precompiled headers and unity builds.
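For reference, a minimal sketch of the built-in support (available since CMake 3.16; the target name server and the header path are placeholders). With target_precompile_headers, CMake also tracks what the precompiled header itself includes, which should address the dependency question above:
target_precompile_headers(server PRIVATE include/server/server.h)
# unity builds can be enabled per target (or globally via CMAKE_UNITY_BUILD)
set_target_properties(server PROPERTIES UNITY_BUILD ON)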

How to eliminate certain (non system) headers from dependency files(.d)?

We link in a library(TAO) which is composed of many header files.
Every time I run the preprocessor command on a .cpp file (g++ -MM $< > $@), these library headers are automatically included in every .d file generated.
These are obviously not system files and, as far as we're concerned, they almost never change, so I would like to eliminate them from my .d files.
Short of filtering out these header files using sed, is there any built in way to accomplish this?
You may instruct gcc to treat some paths as system header directories with -isystem, and g++ -MM ignores system headers.
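As a sketch (the TAO include path is a placeholder for wherever the library is installed on your system), the dependency rule could look like this; because the headers are now found via -isystem, -MM leaves them out of the .d file:
%.d: %.cpp
	g++ -MM -isystem /opt/ACE_TAO/include $< > $@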

How to use a library with headers and .so files?

I'm new to C and wanted to use a library (the MLT Multimedia Framework).
I've built it and it produced the following directories: include lib share
Inside lib there are .so .a .la files
Inside include there are .h files
Now, I'm instructed to do this:
#include <framework/mlt.h> which is inside include/mlt/framework/
Questions:
Why do I need to include the header file that contains only function prototypes? Where are the real functions, then? Are they linked somehow to the ones in the lib directory?
Where to place my own files and how to compile them?
How to learn more about the topics:
Dynamic/Static libraries
Building / making / installing
How to use any C library
If you don't have the function prototypes, how would the compiler know what functions exist in the library? The short answer is: it doesn't. The longer answer: the compiler doesn't care about library files, static (files ending in .a) or shared (files ending in .so); all it cares about is the current translation unit. It's up to the linker to handle resolving undefined references.
When you use libraries, you include the header files that contain the needed declarations (structures, classes, types, function prototypes) into the source file. The source file plus all included header files forms the translation unit that the compiler uses to generate code. If there are undefined references (for example a call to a function in the library), the compiler adds special information about them to the generated object file. The linker then looks through all object files, and if it finds an unresolved reference it tries to find it in the other object files and the provided libraries. If all references are resolved, the linker generates the final executable; otherwise it reports the unresolved references as errors.
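To make the two stages concrete, here is a minimal sketch of them on the command line (the file name main.c and the library name -lmlt are assumptions for this question's MLT case):
# compile only: calls into the library remain undefined references inside main.o
gcc -c main.c -o main.o
# link: the linker resolves those references against libmlt.so, or reports errors
gcc main.o -lmlt -o myplayer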
To answer your other questions:
Where to place my own files and how to compile them?
This is two questions. The answer to the first one (about the placement of your files) is that it doesn't really matter. For a small project with only a few source and header files, it's common to place all files in a single project directory.
As for the second question, about compiling, there are different ways to do it too. If there are only one or two source files, you could use the compiler frontend (e.g. gcc) to compile, link, and generate your executable all in one go:
$ gcc -Wall -g source1.c source2.c -o your_program_name
The above command takes two source files, compiles and links them into the program your_program_name.
If you need to use a library, there are one or two things that you need to add to the above command line:
You need to tell the linker to link with the library; this is done with e.g. the -l (lowercase L) option:
$ gcc -Wall -g source1.c source2.c -o your_program_name -lthe_library
It's important to note that the_library is the base name of the library. If the library file is named libthe_library.so then only the_library part is needed, the linker will add the other parts automatically.
If the library is not in a standard location, then you need to tell the compiler and linker where the library files are. This is done with the -I (capital i) option to tell the preprocessor where the header files are, and the -L (capital L) option to tell the linker where the library files are.
Something like
$ gcc -Wall -g -Ilocation/of/headers source1.c source2.c -o your_program_name -Llocation/of/libraries -lthe_library
If you have more than a couple of source files, it's common to use so-called makefiles that list all source files, their dependencies, and the compiler and linker flags, and contain rules on how to build object files and link the final program. Such a makefile could look like this:
CFLAGS = -Wall -g
LDFLAGS = -g
SOURCES = source1.c source2.c
OBJECTS = $(SOURCES:.c=.o)
TARGET = your_program_name

.PHONY: all
all: $(TARGET)

# link through the compiler frontend rather than ld directly,
# so the C runtime and standard library are added automatically
$(TARGET): $(OBJECTS)
	$(CC) $(LDFLAGS) $^ -o $@

%.o: %.c
	$(CC) $(CFLAGS) $< -c -o $@
The above makefile should do just about the same as the previous command line. The big difference is that it's much easier to add more source files and special rules for particular files, and, most importantly, the make program handles dependencies, so a source file that hasn't been modified since the last build won't be recompiled. That last bit makes big projects with many source files build much more quickly when only one or a few source files have been modified.
How to learn more about the topics [...]
By going to your favorite search engine, and looking for those topics there. I also recommend e.g. Wikipedia.
Of course, if you use an Integrated Development Environment (a.k.a. an IDE), you don't have to compile from the command line or write your own makefiles; the IDE handles all that for you. It will also have dialogs for the project settings where you can enter include paths, library paths, and the libraries to link with.
Why do I need to include the header file that contains only function prototypes?
So as to satisfy your compiler with declarations of those functions and classes. As C++ is a statically type-checked language, the compiler must know the types of the objects it will be using.
Where to place my own files and how to compile them?
You can place your code anywhere in your filesystem; just make sure to add the .h files to the include path and the libraries to the library path when compiling. Usually you need to adjust these paths.
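For example, assuming MLT was installed under the prefix /usr/local and the library file is libmlt.so (both are assumptions; check what your build actually produced), a compile command might look like:
# -I points at the directory containing framework/mlt.h,
# -L points at the directory containing libmlt.so, and -lmlt links against it
gcc -I/usr/local/include/mlt -L/usr/local/lib myplayer.c -o myplayer -lmlt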
You can check about building on this link:
https://en.wikipedia.org/wiki/GNU_build_system
Check the README file that came with the code. It should tell you how to install it into the system properly. Usually there is an install build target which installs the resulting files into the proper directories.
The usual sequence of commands to build and install most products is:
$ ./configure
$ make
$ sudo make install

Build CUDA and C++ using Autotools

I'm setting up Autotools for a large scientific code written primarily in C++, but also some CUDA. I've found an example for compiling & linking CUDA code to C code within Autotools, but I cannot duplicate that success with C++ code. I've heard that this is much easier with CMake, but we're committed to Autotools, unfortunately.
In our old hand-written Makefile, we simply use a make rule to compile 'cuda_kernels.cu' into 'cuda_kernels.o' using nvcc, and add cuda_kernels.o to the list of objects to be compiled into the final binary. Nice, simple, and it works.
The basic strategy with Autotools, on the other hand, seems to be to use Libtool to compile the .cu files into a 'libcudafiles.la', and then link the rest of the code against that library. However, this fails upon linking, with a whole bunch of "undefined reference to ..." statements coming from the linker. This seems like it might be a name-mangling issue with g++ vs. the nvcc compiler (which would explain why it works with C code), but I'm not sure what to do at this point.
All .cpp and .cu files are in the top/src directory, and all the compilation is done in the top/obj directory. Here are the relevant details of obj/Makefile.am:
cuda_kernels.cu.o:
	$(NVCC) -gencode=arch=compute_20,code=sm_20 -o $@ -c $<
libcudafiles_la_LINK = $(LIBTOOL) --mode=link $(CXX) -o $@ $(CUDA_LDFLAGS) $(CUDA_LIBS)
noinst_LTLIBRARIES = libcudafiles.la
libcudafiles_la_SOURCES = ../src/cuda_kernels.cu
___bin_main_LDADD += libcudafiles.la
___bin_main_LDFLAGS += -static
For reference, the example which I managed to get working on our GPU cluster is available at clusterchimps.org.
Any help is appreciated!
libtool in conjunction with automake currently generates foo.lo (libtool-object metadata) files, the non-PIC (static) object foo.o, and the PIC object .libs/foo.o.
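For reference, the .lo metadata file is just a small text file pointing at those two real objects; a typical one looks roughly like this (paths illustrative):
# foo.lo - a libtool object file
pic_object='.libs/foo.o'
non_pic_object='foo.o'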
For consistent .lo files, I'd use a rule like:
.cu.lo:
	$(LIBTOOL) --tag=CC --mode=compile $(NVCC) [options...] -c $<
I have no idea if, or how, -PIC flags are handled by nvcc. More options here. I don't know what calls you are making from the program, but are you forward declaring CUDA code with C linkage? e.g.,
extern "C" void cudamain (....);
It seems others have run up against the libtool problem. At worst, you might need a 'script' solution that mimics the .lo syntax and file locations, as described on the clusterchimps site.
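As a sketch of what the C-linkage suggestion looks like on both sides (file and function names are assumptions, not from the question): the definition compiled by nvcc and the declaration seen by g++ must both use extern "C", otherwise the two compilers mangle the symbol differently and the link fails with undefined references.
// cuda_kernels.cu -- compiled by nvcc
extern "C" void cudamain(const float *data, int n)
{
    /* allocate device memory, copy data, launch kernels ... */
}

// main.cpp -- compiled by g++
extern "C" void cudamain(const float *data, int n);  // matching C-linkage declaration

int main()
{
    float data[16] = {0};
    cudamain(data, 16);
    return 0;
}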

Makefile for compiling a number of .cpp and .h into a lib

I am running Windows 7 with gcc/g++ under Cygwin. What would be the Makefile format (and extension; I think it's .mk?) for compiling a set of .cpp (C++ source) and .h (header) files into a library (.dll)? Say I have a variable set of files:
file1.cpp
file1.h
file2.cpp
file2.h
file3.cpp
file3.h
....
What would be the makefile format (and extension) for compiling these into such a library? (I'm very new to makefiles.) What would be the fastest way to do this?
The extension would be none at all, and the file is called Makefile (or makefile) if you want GNU Make to find it automatically.
GNU Make, at least, lets you rely on certain predefined variables that alone give you control over much of the build process with C/C++ files as input. These variables include CC, CPP, CFLAGS, CPPFLAGS, CXX, CXXFLAGS, and LDFLAGS. They control the switches that make passes to the C/C++ preprocessor, compiler, and linker (the program that in the end assembles your program).
GNU Make also includes a lot of implicit rules designed to let it automatically build programs from C/C++ source code, so you don't [always] have to write your own rules.
For instance, even without a makefile, if you run make foobar, GNU Make will attempt to build foobar from foobar.c or foobar.cpp if it finds either, by invoking the appropriate compiler and linking the result against the system libraries. In short, GNU Make knows how to build the foobar program even without a makefile being present, thanks to implicit rules. You can see these rules by invoking make with the -p switch.
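A quick demonstration of the above, assuming a directory containing nothing but foobar.cpp (the exact command echoed varies with your make and compiler defaults):
$ ls
foobar.cpp
$ make foobar
g++ foobar.cpp -o foobar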
Some people like to rely on GNU Make's implicit rule database to keep their makefiles lean and short, specifying only what is specific to their project, while others go as far as disabling the entire implicit rule database (using the -r switch) and taking full control of the build process by specifying everything in their makefile(s). I won't comment on the superiority of either strategy; rest assured both work to some degree.
There are a lot of options you can set when building a dll, but here's a basic command that you could use if you were doing it from the command line:
gcc -shared -o mydll.dll file1.o file2.o file3.o
And here's a makefile (typically called Makefile) that will handle the whole build process:
# You will have to modify this line to list the actual files you use.
# You could set it to use all the "fileN" files that you have,
# but that's dangerous for a beginner.
FILES = file1 file2 file3
OBJECTS = $(addsuffix .o,$(FILES)) # This is "file1.o file2.o..."
# This is the rule it uses to assemble file1.o, file2.o... into mydll.dll
mydll.dll: $(OBJECTS)
	gcc -shared $^ -o $@ # The whitespace at the beginning of this line is a TAB.
# This is the rule it uses to compile fileN.cpp and fileN.h into fileN.o
$(OBJECTS): %.o : %.cpp %.h
	g++ -c $< -o $@ # Again, a TAB at the beginning.
Now to build mydll.dll, just type "make".
A couple of notes: if you just type "make" without specifying the makefile or the target (the thing to be built), Make will try to use the default makefile ("GNUmakefile", "makefile" or "Makefile") and the default target (the first one in the makefile, in this case mydll.dll).