How do I compile a project with multiple source files? (C++)

I have a main source file (.cpp) and some other files (.cpp, each with a corresponding .h file) which contain the functions that are used in my main program. I'd like to know how to compile my files and run the main code.

This depends on your compiler. For example, with GCC you might write:
g++ foo.cpp bar.cpp baz.cpp -o foo # compile program
./foo # run program

Basile's right on: you need build machinery to have software you can easily work on. You really want to be able to just run make or something similar and get your project rebuilt quickly, in a repeatable fashion. You also want to make sure that your entire project is kept up to date, rebuilding everything that depends upon something that has changed. While you could just keep around a shell script containing g++ lines as in ruakh's answer, that would needlessly recompile more than necessary. (This isn't a big deal with small projects, but it becomes important once projects grow beyond trivial.)
I've only taken fifteen minutes to skim the OMake documentation, but it looks very promising. make is standard on Unix-like systems and provides a huge array of pre-defined build rules, including rules for C++, so it isn't very difficult to write Makefiles for new projects.
It'll look something like this:
# the compiler and its flags
CXX = g++
CXXFLAGS = -Wall
PROGRAM = foo
OBJECTS = foo_a.o foo_b.o foo_c.o foo_d.o
.PHONY: all
.DEFAULT: all
all: $(PROGRAM)
$(PROGRAM): $(OBJECTS)
	$(CXX) $(CXXFLAGS) -o $@ $(OBJECTS)
An object file such as foo_a.o will be built from a correspondingly named source file: foo_a.c, foo_a.cc, foo_a.C, foo_a.cpp, foo_a.p, foo_a.f, foo_a.F, etc., for C, C++, Pascal, Fortran, and so on (there are built-in rules for Lex, Yacc, and more as well).
The O'Reilly make book is sadly quite dated at this point; the old suffix (.SUFFIXES) rules have been supplanted by pattern rules, and there is no coverage of pattern rules in the O'Reilly book. That said, it is still a good starting point if the full GNU Make documentation isn't a good fit for you.
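For reference, a pattern rule for C++ looks like this; a minimal sketch (the recipe line must start with a tab):
%.o: %.cpp
	$(CXX) $(CXXFLAGS) -c $< -o $@
Here $< expands to the first prerequisite (the source file) and $@ to the target (the object file).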

But the real answer is to use some build machinery, e.g. have a Makefile and use GNU Make.
There are better builders than GNU Make, like OMake.
Builders are very useful because, in the example by ruakh, you may not want to recompile foo.cpp when only bar.cpp has changed, so you need to take dependencies into account.
(And there are Makefile generators, like automake, cmake, ...)
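To make the dependency point concrete, here is a minimal hand-written Makefile sketch for ruakh's example (the .h file names are assumed from the question; recipe lines must start with a tab). Touching only bar.cpp rebuilds only bar.o before relinking:
foo: foo.o bar.o baz.o
	g++ -o foo foo.o bar.o baz.o
foo.o: foo.cpp foo.h
	g++ -Wall -c foo.cpp
bar.o: bar.cpp bar.h
	g++ -Wall -c bar.cpp
baz.o: baz.cpp baz.h
	g++ -Wall -c baz.cpp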

Related

Is it normal to list all the cpp/cc files when compiling with g++?

I'm doing the "Hello World" in the GTKMM tutorial; the "app" uses three files: main.cc, helloworld.h and helloworld.cc.
At the beginning I thought that compiling only main.cc:
g++ -o HW main.cc $(pkg-config ... )
would be enough, but it gives an error (undefined reference to Helloworld::Helloworld), etc.
In other words, it compiles main and the header, but not the Helloworld class, and this makes sense because the header is included in main but Helloworld.cc is not. The thing is, I'm kinda scared of including it, because I read in another question that "including everything was a bad practice".
That being said, when I compile using all the files in the same command:
g++ -o HW main.cc helloworld.cc $(pkg-config ... )
the "app" works without errors.
So, since using the last command works, is compiling in this way a good practice?
What happens if my app uses a big ton of classes?
Must I manually write them all down in the command?
If not, must I use #include?
Is it good practice using #include for all cc used files?
Is it normal to list all the cpp/cc files when compiling with g++?
Yes, completely.
How else will it know what source code you want it to compile?
The thing is, I'm kinda scared of including it, because I read in another question that including everything was a bad practice.
#includeing excess headers is bad practice.
Passing your complete source code to the compiler is not.
Is it good practice using #include for all cc used files?
Absolutely not.
What happens if my app uses a big ton of classes? Must I manually write them all down in the command?
No. You should be using a build system that handles this for you. That could be an IDE which takes all the files in your project and passes them to the compiler in turn, or it could be a CMakeLists.txt/Makefile with a *.cpp wildcard in (although I actually recommend listing source files explicitly, one-by-one; it's not hard).
Invoking g++ manually on the command-line is fine for a quick test, but for real usage you don't want to be clowning around with such machinery.
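As a sketch of the explicit-listing approach, a hypothetical Makefile for the question's files might look like this (the pkg-config arguments from the question are omitted; recipe lines must start with a tab):
SOURCES = main.cc helloworld.cc
OBJECTS = $(SOURCES:.cc=.o)
HW: $(OBJECTS)
	g++ $(OBJECTS) -o HW
%.o: %.cc
	g++ -c $< -o $@
Adding a new class then means adding one file name to SOURCES.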
Is it good practice using #include for all cc used files?
It's not merely bad practice: never do it.
In order to create an executable you actually have to do two things:
Compile all the source code files to object files or libraries.
Link all the object files and needed libraries into an executable.
You seem to be missing the point that the link phase is where symbols defined in separate source files are resolved or linked.
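Using the file names from the question, the two phases look like this (a sketch; the pkg-config flags are again omitted):
g++ -c main.cc -o main.o             # compile
g++ -c helloworld.cc -o helloworld.o # compile
g++ main.o helloworld.o -o HW        # link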
Must I manually write them all down in the command?
For the compiler to know about the DEFINITION of the symbols DECLARED in your headers, you must compile and link all the source files. Exceptions to this rule can be (but are not limited to) template metaprogramming (TMP) code, which usually lives entirely in header files.
What happens if my app uses a big ton of classes?
Most large C++ projects use build-configuration tools such as CMake to handle the generation of makefiles for them.
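The typical command-line workflow with a reasonably recent CMake looks like this (a sketch, assuming a CMakeLists.txt at the project root):
cmake -S . -B build     # generate makefiles (or another backend) into build/
cmake --build build     # run the generated build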

Check if files were compiled with certain flags in Makefile

I have multiple files in my project which are compiled to object files either with or without -fPIC. The option is necessary when the object files go into a shared library, and unnecessary otherwise.
Nevertheless, I am never sure whether the last compilation was done for the shared library or not. If it wasn't, and I then want to build the shared library, the compilation fails and I have to delete the generated object files. Recompiling those object files takes quite a lot of time, which I would like to avoid.
Is there a way for the makefile to detect, right at the start, whether the object files were compiled with or without this option, so that the build can either continue or recompile everything, without producing an error or wasting time in an unnecessary recompilation loop?
Q: "Is there a way for the makefile to detect if the object files have been compiled with or without this option"
Short answer: No
Long answer: if source files can be built with different options, and you need access to the different builds at the same time, then you must have several output folders, and the makefiles must place their outputs in the correct folder.
Say, something like this for a simple debug/release split (the -g flag):
src/
include/
BUILD/
    obj/
        debug/
        release/
    bin/
        debug/
        release/
Of course, this approach has limitations. For example, if you need both debug/release and PIC/non-PIC builds, then you will need four output folders.
You can also mix my approach with the one proposed by @BasileStarynkevitch (generating specific names).
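A minimal GNU Make sketch of this layout (the program and source names are hypothetical; recipe lines must start with a tab):
MODE ?= release                  # run "make MODE=debug" for the debug flavor
OBJDIR := BUILD/obj/$(MODE)
BINDIR := BUILD/bin/$(MODE)
CXXFLAGS := $(if $(filter debug,$(MODE)),-g,-O2)

$(BINDIR)/app: $(OBJDIR)/main.o | $(BINDIR)
	g++ $^ -o $@

$(OBJDIR)/%.o: src/%.cpp | $(OBJDIR)
	g++ $(CXXFLAGS) -Iinclude -c $< -o $@

$(OBJDIR) $(BINDIR):
	mkdir -p $@
Each flavor keeps its own objects, so switching between flavors never forces a full rebuild.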
A possible approach could be to have your own convention about object files and their naming. For example, file foo.c would be compiled into foo.pic.o when it is position-independent code, and into foo.o when it is not. And you could adapt that idea to other cases (e.g. foo.dbg.o for a file with DWARF debug information, compiled with -g). You could also use subdirectories, e.g. compile foo.c into PICOBJ/foo.o or PICOBJ/foo.pic.o for a PIC object file, but this might be less convenient with make (beware of recursive makes).
Writing appropriate rules for your Makefile is then quite easy.
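For instance, a pair of hypothetical pattern rules implementing the foo.pic.o convention could look like this (recipe lines must start with a tab):
%.pic.o: %.c
	$(CC) $(CFLAGS) -fPIC -c $< -o $@
%.o: %.c
	$(CC) $(CFLAGS) -c $< -o $@
A shared-library target then depends on the .pic.o files and an ordinary executable on the plain .o files, so the two kinds never collide.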
Be aware of other build automation systems, e.g. ninja.

SCons and/or CMake: any way to automatically map from "header included during compilation" to "corresponding object file must be linked"?

Super-simple, totally boring setup: I have a directory full of .hpp and .cpp files. Some of these .cpp files need to be built into executables; naturally, these .cpp files #include some of the .hpp files in the same directory, which may then include others, etc. etc. Most of those .hpp files have corresponding .cpp files, which is to say: if some_application.cpp #includes foo.hpp, either directly or transitively, then chances are there's also a foo.cpp file that needs to be compiled and linked into the some_application executable.
Super-simple, but I'm still clueless about what the "best" way to build it is, either in SCons or CMake (neither of which I have any expertise in yet, other than staring at documentation for the last day or so and becoming sad). I fear that the sort of solution I want may actually be impossible (or at least grossly overcomplicated) to pull off in most build systems, but if so, it'd be nice to know that so I can just give up and be less picky. Naturally, I'm hoping I'm wrong, which wouldn't be surprising given how ignorant I am about build systems (in general, and about CMake and SCons in particular).
CMake and SCons can, of course, both automatically detect that some_application.cpp needs to be recompiled whenever any of the header files it depends on (either directly or transitively) changes, since they can "parse" C++ files well enough to pick out those dependencies. OK, great: we don't have to list each .cpp-#includes-.hpp dependency by hand. But: we still need to decide what subset of object files need to get sent to the linker when it's time to actually generate each executable.
As I understand it, the two most straightforward alternatives to dealing with that part of the problem are:
A. Explicitly and laboriously enumerating the "anything using this object file needs to use these other object files too" dependencies by hand, even though those dependencies are exactly mirrored by the corresponding-.cpp-transitively-includes-the-corresponding-.hpp dependencies that the build system already went to the trouble of figuring out for us. Why? Because computers.
B. Dumping all the object files in this directory into a single "library", and then having all executables depend on and link in that one library. This is much simpler, and what I understand most people would do, but it's also kinda sloppy. Most of the executables don't actually need everything in that library, and wouldn't actually need to be rebuilt if only the contents of one or two .cpp files changed. Isn't this setting up exactly the kind of unnecessary computation a supposed "build system" should be avoiding? (I suppose maybe they wouldn't need to be rebuilt if the library were dynamically linked, but suffice it to say I dislike dynamically linked libraries for other reasons.)
Can either CMake or SCons do better than this in any remotely straightforward fashion? I see a bunch of limited ways to twiddle the automatically generated dependency graph, but no general-purpose way to do so interactively ("OK, build system, what do you think the dependencies are? Ah. Well, based on that, add the following dependencies and think again: ..."). I'm not too surprised about that. I haven't yet found a special-purpose mechanism in either build system for dealing with the super-common case where link-time dependencies should mirror corresponding compile-time #include dependencies, though. Did I miss something in my (admittedly somewhat cursory) reading of the documentation, or does everyone just go with option (B) and quietly hate themselves and/or their build systems?
Your statement in point A), "anything using this object file needs to use these other object files too", is something that will indeed need to be done by hand. Compilers don't automatically find object files needed by a binary; you have to explicitly list them at link time. If I understand your question correctly, you don't want to have to explicitly list the objects needed by a binary, but want the build tool to find them automatically. I doubt there is any build tool that does this: SCons and CMake definitely don't.
If you have an application some_application.cpp that includes foo.hpp (or other headers used by these cpp files), and subsequently needs to link the foo.cpp object, then in SCons, you will need to do something like this:
env = Environment()
env.Program(target='some_application',
            source=['some_application.cpp', 'foo.cpp'])
This will only recompile and relink when some_application.cpp, foo.hpp, or foo.cpp have changed. Assuming g++, it effectively translates to something like the following, independently of SCons or CMake:
g++ -c foo.cpp -o foo.o
g++ some_application.cpp foo.o -o some_application
You mention you have "a directory full of .hpp and .cpp files"; I would suggest you organize those files into libraries. Not all in one library, but logically organized into smaller, cohesive libraries. Your applications/binaries would then link only the libraries they need, minimizing recompilation due to unused objects.
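On the command line, grouping objects into a static library is straightforward; a sketch with hypothetical file and library names:
g++ -c foo.cpp bar.cpp                     # produces foo.o and bar.o
ar rcs libutil.a foo.o bar.o               # archive them into a static library
g++ some_application.cpp -L. -lutil -o some_application
The linker pulls out of a static archive only the object files that resolve symbols actually referenced, which already trims some fat compared to listing every .o by hand.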
I had more or less the same problem as you have and I solved it as follows:
import SCons.Scanner
import os

def header_to_source(header_file):
    """Specify the location of the source file corresponding to a given
    header file."""
    return header_file.replace('include/', 'src/').replace('.hpp', '.cpp')

def source_files(main_file, env):
    """Returns the list of source files the given main_file depends on.
    With the function header_to_source one must specify where to look
    for the source file corresponding to a given header. The resulting
    list is filtered for existing files and contains main_file as its
    first element."""
    ## get the dependencies
    node = File(main_file)
    scanner = SCons.Scanner.C.CScanner()
    path = SCons.Scanner.FindPathDirs("CPPPATH")(env)
    deps = node.get_implicit_deps(env, scanner, path)
    ## collect corresponding source files
    root_path = env.Dir('#').get_abspath()
    res = [main_file]
    for dep in deps:
        source_path = header_to_source(
            os.path.relpath(dep.get_abspath(), root_path))
        if os.path.exists(os.path.join(root_path, source_path)):
            res.append(source_path)
    return res
The header_to_source function is the one you need to modify so that it returns the source file corresponding to a given header file. The function source_files then gives you all the source files needed to build the given main_file (with main_file as the first element); non-existing files are automatically removed. So the following should be sufficient to define the target for an executable:
env.Program(source_files('main.cpp', env))
I am not sure whether this works in all possible setups, but at least for me it works.

Generalizing include statements in C++ files when building with make

Hello. (I am using Windows, the MinGW g++ compiler, and mingw32-make.)
To generalize my question, I would like to learn how to write a C++ source file as follows:
Assuming that foo.cpp depends on foo.h, where foo.cpp is in src\ and foo.h is in include\:
// foo.cpp
#include "foo.h"
Normally I would just write it like this:
//foo.cpp
#include "..\include\foo.h"
but I have found that, as my project grows and I begin to need more organization, this method isn't flexible enough. The reason is that I would have to change every include in every file if I wanted to move foo.h to a new directory (say include\bar\foo.h). Is there a way for make to achieve this? If so, can it be done for header file dependencies as well?
As a side note, I am new to makefiles. I am not even sure make knows these includes are there, since they are within the code (in fact, from what I understand, it doesn't). That leads to an unfortunate secondary question: can make see these includes? If not, is it possible to change things so that it can? Feel free to answer how you would approach this problem, because I have a feeling I am going about it the wrong way by putting the includes in the file rather than handling them in the makefile.
The compiler always looks in a set of default paths for .h files, and you can add your own path.
For example, gcc accepts multiple -I arguments, each naming a directory to search. In your foo.cpp you write:
#include "foo.h"
and when compiling you say:
g++ -I../include foo.cpp -c [other options]
Regarding the second part of your question: the makefile, and the call to make, does not normally know anything about the files to be compiled or about your project. However, there are several default variables and implicit rules in make which can give that impression: it could be that in your environment you only need to add the -I argument to the CFLAGS or CPPFLAGS variable and it will work.
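A minimal sketch of that: with GNU Make's built-in rules, a makefile as small as this builds foo.o with the right include path, because the built-in %.o: %.cpp rule expands $(CPPFLAGS):
CPPFLAGS += -I../include
foo.o: foo.cpp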
Patrick B has answered very well on how to make the compiler know where to include from, but not the following bit:
As a side note I am new to makefiles. I am not even sure that it knows
these includes are there since they are within the code (in fact from
what I understand it doesn't). That would lead me to an unfortunate
secondary question, which is can make see these includes? If not is it
possible to change it so that it can?
No, make doesn't understand what your source files contain or how they depend on other files. (make also doesn't really care whether you are programming in C, C++, Fortran, Pascal, ADA, Lisp, Cobol or Haskell: as long as there is an "if you have a file like this, and want a file like that, do something" relationship between files, make will sort it out for you.)
There are several ways to do this. You can manually add:
foo.cpp: foo.h
Or you can use a dependency file for your include-files, and let make build it automatically, by adding this, for example:
SOURCES = foo.cpp         # Add any further source files here.
INCLUDES = -I../include   # Add other include directories if needed.
CFLAGS += ${INCLUDES}
TARGET = foo.exe          # In Windows. Just foo in Linux/MacOS.

all: ${TARGET} deps.mk

${TARGET}: ${SOURCES}
	g++ ${CFLAGS} -o $@ $^

deps.mk: ${SOURCES}
	g++ -MM ${INCLUDES} $^ > $@

include deps.mk
Note that makefiles RELY on indentation with tabs. This post uses spaces, so you will need to "tabify" the recipes. Also note that in a "proper" makefile you'd compile foo.o from foo.cpp, etc., and link all the different .o files together; that way the build is a fair bit quicker for large projects. I've simplified it for readability.
Maybe I should expand a little bit:
gcc -MM gives a list (on standard output) of the file being "compiled" and all of its dependencies. It doesn't actually compile the code, and as long as the code is at least SOMEWHAT close to being compilable, it will happily process your files.
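For instance, the generated dependency lines look like this (a sketch; the exact output depends on your headers):
$ g++ -MM -I../include foo.cpp
foo.o: foo.cpp ../include/foo.h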
For more details on gcc -MM and related, have a look at the GCC invocation documentation.
The $@ and $^ are what make calls "automatic variables". They expand to the target (easy to remember, as it looks sort of like a target for shooting arrows at) and to all the prerequisites (no visual clue here, I'm afraid, and every now and again I have to remind myself) respectively. See the GNU Make documentation on automatic variables for more details.

Using a class from another project in C++

I have access to a large C++ project, full of files, with a very complicated makefile courtesy of automake & friends.
Here is an idea of the directory structure.
otherproject/
    folder1/
        some_headers.h
        some_files.cpp
    ...
    folderN/
        more_headers.h
        more_files.cpp
    build/
        lots_of_things_here
        objs/
            lots_of_stuff.o
        an_executable_I_dont_need.exe
my_stuff/
    my_program.cpp
I want to use a class from the big project, declared in, say, "some_header.h"
/* my_program.cpp */
#include "some_header.h"

int main()
{
    ThatClass x;
    x.frobnicate();
}
I managed to compile my file by painstakingly passing lots of "-I" options to g++ so that it could find all the header files:
g++ my_program.cpp -c -o myprog.o -I../other/folder1 ... -I../other/folderN
When it comes to linking, I have to manually include all of his ".o"s, which is probably overkill:
g++ -o my_executable myprog.o ../other/build/objs/*.o
However, not only do I have to do things like manually removing his "main.o" from the list, but this isn't even enough since I forgot to also link against all the libraries that he happened to use.
otherproject/build/objs/StreamBuffer.h:50: undefined reference to `gzread'
At this point I am starting to feel I am probably doing something very wrong. How should I proceed? What is the usual, and what is the best, approach to this kind of issue?
I need this to work on Linux in case something platform-specific needs to be done.
Generally the project's .o files should come grouped together into a library (on Linux, .a file if it's a static library, or .so if it's a dynamic library), and you link to the library using the -L option to specify the location and the -l option to specify the library name.
For example, if the library file is at /path/to/big_project/libbig_project.a, you would add the options -L /path/to/big_project -l big_project to your gcc command line.
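Putting that together with the question's object file, the link line would become something like this (a sketch; the library name is hypothetical, and libraries the project itself uses still have to be listed, e.g. zlib's -lz for the gzread error above):
g++ -o my_executable myprog.o -L/path/to/big_project -lbig_project -lz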
If the project doesn't have a library file that you can link to (e.g. it's not a library but an executable program, and you just want some of the code it uses), you might want to ask the project's author to create such a library file (if they are familiar with "automake and friends" it shouldn't be too much trouble), or try doing so yourself.
EDIT: Another suggestion. You said the project comes with a makefile. Try making it with that makefile and watch what the compiler command lines look like. Do they have many includes and individual object files as well?
Treating an application which was not developed as a library as if it was a library isn't likely to work. As an offhand example, omitting the main might wind up cutting out initialization code that the class you want depends upon.
The responsible thing to do here is to read the code, understand it, and turn the functionality you want into a proper library. Build the "exe you don't need" with debug symbols and set breakpoints in the constructors and methods of the class. Step into them so you get a grasp on the functionality and what parts of the program are relevant and irrelevant to your needs.
Hopefully the code is under some kind of version control system that supports branching (such as Git). If not, make your own repository that does. Edit the files until you've organized them into a library and code that uses the library. Make sure it works properly within the context of the original program. Then turn around and use this library in your own program.
If you've done a good job, you might be able to convince the original authors to accept the separation back into their original codebase. If not, at least version control has your back so you can manage integration of future changes.