How #include works in C++ using Eclipse - C++

I learned that if I compile main.cpp, the compiler simply replaces all includes with the actual content of the included file, i.e. #include "LongClassName.h" with the text of that file. This is done recursively inside LongClassName.h as well. In the end the compiler sees a huge "virtual" file with the complete code of all .cpp and .h files.
But it seems to be much more complicated in real projects. I had a look at the Makefile Eclipse created for my Qt project and it seems that there is an entry file.o for every file, whose dependencies are file.cpp and file.h. So that means that Eclipse compiles each .cpp separately(?)
Does that mean that class.cpp will know nothing about global stuff in main.cpp or about a class higher up in the include hierarchy?
I stumbled upon this problem while trying to create an alias for a long class name. It is my main class and I wanted to call static functions with a shorter name: Ln::globalFunction() instead of LongClassName::globalFunction()
I have a class LongClassName whose header I include in main.cpp. This is the main class. All other classes are included in it.
LongClassName.h
#define PI 3.14159265
#include <QDebug>
class LongClassName
{
...
public:
...
private:
...
};
typedef LongClassName Ln;
LongClassName.cpp
#include "Class1.h"
#include "Class2.h"
#include "Class3.h"
/*implementations of LongClassName's functions*/
So I assumed that when the code is included in one single "virtual" file by the compiler, every class will be inserted after this source code, and because of that every class should know that Ln is an alias for LongClassName.
This didn't work.
So what is the best way to propagate this alias to all classes?
I want to avoid including LongClassname.h in all classes because of reverse dependencies. LongClassName includes all other classes in its implementation. And almost all the other classes use some static functions of LongClassName.
(At the moment I have a separate class Ln but am trying to merge it with LongClassName because it seems more logical.)

The compiler knows how to compile a .cpp file (if it's a C++ compiler) into a .o file called an 'object file', which is your code translated (and probably manipulated, optimized, etc.) to machine code. Actually the compiler creates assembly code, which is translated to machine code by the assembler.
So each cpp file is compiled to a different object file, and knows nothing about variables declared in other cpp files, unless you include declarations you want the object file to know about, either in the cpp file or in an h file it includes.
Although the compilation is done separately for each cpp, the linker links all object files into a single executable (or a library), so a variable declared in the global namespace is indeed global, and every declaration not explicitly placed in a named namespace is placed in the global namespace.
You will probably benefit from reading about all stages of "compiling", for example here: http://www.network-theory.co.uk/docs/gccintro/gccintro_83.html
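To make this concrete, here is a minimal sketch (the file and variable names are invented for illustration) of how a declaration placed in a shared header lets separately compiled .cpp files refer to one and the same global variable:

// globals.h - hypothetical shared header
#ifndef GLOBALS_H
#define GLOBALS_H
extern int counter;          // declaration only: "this variable exists somewhere"
#endif

// globals.cpp
#include "globals.h"
int counter = 0;             // the single definition the linker will resolve to

// user.cpp
#include "globals.h"
void bump() { ++counter; }   // compiles on its own; the object file keeps a placeholder
                             // that the linker later fills with counter's address

Each .cpp is compiled to its own object file, and only the link step ties the use in user.cpp to the definition in globals.cpp.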

In the end the compiler sees a huge "virtual" file with the complete code of all .cpp and .h files.
This is wrong. In the .cpps you should include just the .hs (or .hpps if you like), almost never the .cpps. The .h files in general contain just the declarations of the classes and of the methods, not their actual bodies¹ (i.e. their definitions), so when you compile each .cpp the compiler still knows nothing about the definitions of the functions defined in other .cpps; it just knows their declarations, and with those it can perform syntactical checks, generate code for function calls, and so on. Still, it will generate an "incomplete" object file (.o), which will contain several "placeholders" ("here goes the address of this function defined somewhere else", "here goes the address of this extern variable", and so on).
After all the object files have been generated, it's the linker that has to take care of these placeholders, by plumbing all the object files together and linking their references to the actual code (which can now be found, since we have all the object files).
For some more info about the classical compile+link model, see here.
Does that mean that class.cpp will know nothing about global stuff in main.cpp or about a class higher up in the include hierarchy?
Yes, it's exactly like that.
But why doesn't the Makefile created by Eclipse simply compile main.cpp? Why isn't this enough? main.cpp contains all the dependencies. Why compile every .cpp separately?
main.cpp doesn't contain all the code, but just the declarations. You don't include all the code in the same .cpp (e.g. by including the other .cpps) mainly to decrease compilation time.
I want to avoid including LongClassname.h in all classes because of reverse dependencies. LongClassName includes all other classes in its implementation. And almost all the other classes use some static functions of LongClassName.
If you use header guards, you shouldn't have problems.
1. Ok, they also contain inline and template functions, but they are the exception, not the rule.
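Applied to the asker's header, the guard suggestion might look like the following sketch (the guard name and the static member are assumed here; the question only mentions Ln::globalFunction()). With the guard in place, LongClassName.h can be included from every class that needs the Ln alias without causing redefinition errors:

// LongClassName.h - sketch with an include guard added
#ifndef LONGCLASSNAME_H
#define LONGCLASSNAME_H

#define PI 3.14159265
#include <QDebug>

class LongClassName
{
public:
    static void globalFunction();   // hypothetical static member, called as Ln::globalFunction()
private:
    // ...
};

typedef LongClassName Ln;           // every file that includes this header sees the alias

#endif // LONGCLASSNAME_H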

Related

Why does including the .h also make the .cpp source come along with it?

I'm an experienced programmer, but only in high level languages; I'm doing my first really large project in C++ right now.
I've got two classes, ClassA and ClassB; a ClassA is (among other things) an index of ClassBs, so ClassA needs to know what a ClassB is to build arrays out of it, and a ClassB needs to know what a ClassA is so it can update the index when something changes. Both of these classes are in their own .h & .cpp files.
I figured including each from the other would just cause infinite recursion, so I decided to instead have #include "ClassA.cpp" and #include "ClassB.cpp" at the beginning of main.cpp; but doing this just caused the compiler to warn about multiple definitions of every class and method in those files.
After some experimentation I found out that including ClassA.h and ClassB.h produces the desired behavior - but this doesn't make any sense, I'm only including the prototypes of those classes. Surely the code that actually makes them up never gets mixed in? And yet it does.
What's going on here that I don't understand? Why does including ClassA.h also make the actual code for ClassA show up with it? And why does including ClassA.cpp cause every include of ClassA.h to trigger "multiple definition" errors even though they're in a header shield or whatever it's called?
The missing step is that the definitions in ClassA.cpp and ClassB.cpp will not be seen by the linker unless those files are also compiled at some point. If you did something like this:
g++ main.cpp ClassA.cpp ClassB.cpp
then all references to definitions in ClassA.cpp and ClassB.cpp from main.cpp would be resolved. However, if you only did
g++ main.cpp
then the linker would have no idea where to find the definitions in ClassA.cpp and ClassB.cpp and you would probably get an error.
If you're using an IDE, this detail is hidden from you: the IDE ensures that as long as you add a .cpp file to your "project", it will be compiled into the final binary when you build the project.
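For the mutual dependency between the two classes, a common approach (sketched here with assumed member names, not taken from the original answer) is to forward-declare the other class in each header and include the full headers only in the .cpp files:

// ClassA.h
#ifndef CLASSA_H
#define CLASSA_H
#include <vector>

class ClassB;                        // forward declaration: enough for pointers/references

class ClassA {
public:
    void add(ClassB* b) { index_.push_back(b); }
private:
    std::vector<ClassB*> index_;     // the index of ClassBs
};
#endif

// ClassB.h
#ifndef CLASSB_H
#define CLASSB_H
class ClassA;                        // forward declaration again, no #include needed here

class ClassB {
public:
    explicit ClassB(ClassA* owner) : owner_(owner) {}
    void changed();                  // defined in ClassB.cpp, which #includes ClassA.h
private:
    ClassA* owner_;
};
#endif

Only ClassA.cpp and ClassB.cpp include both headers, so there is no infinite recursion and no multiple definition.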
This is the way C++ is designed:
Your classes don't need to know anything more than the prototypes of other classes, so you don't have to include more than the headers.
Why is this so? Well, compilation of an entire application is the combination of two steps: compilation of the code itself and then linking (actually, there is a third step preceding these: pre-processing, but we could consider this one as part of code compilation).
Example function call: it is sufficient (exception: inline functions!) to know that a function with a specific prototype exists. The compiler can then generate all the code necessary to do the function call, except for the actual address of the function - for which it leaves some kind of placeholder.
The linker then combines all the code generated during the compilation step into a single unit. As it now knows where every function is located, it can fill the actual addresses into the placeholders, wherever they may appear.
C++ code is compiled to one *.obj file per .cpp file, and it is the link process that turns the .obj files into an executable.
Never include *.cpp files, because that usually causes redefinition issues.
For each *.h file, add an include guard to avoid multiple inclusion:
#ifndef XXX_H
#define XXX_H
//your code goes here
#endif

dealing with includes and using headers

I have "Hello World" code that uses the function fhi from another file, hi.cpp, which has its own header.
Correct me if my understanding is wrong regarding the following:
I can include a cpp file like #include "c:\c\hi.cpp" instead of using a header without any problems, except for the fact that it looks more readable in a header file.
If I include a header like hi.h in my main program, must hi.h include hi.cpp, or is that done automatically because of the matching file name hi? I'm wondering how the compiler knows where the body of function fhi is.
Is it possible to have different names for header and cpp files?
Program:
#include "stdafx.h"
#include "c:\c\hi.h"
int _tmain(int argc, _TCHAR* argv[])
{
fhi(1);
return 0;
}
hi.h
#include <cstdlib>
#include <iostream>
int var;
int fhi(int f);
hi.cpp
#include <cstdlib>
#include <iostream>
int fhi(int f)
{
return 0;
}
must hi.h include hi.cpp
No. hi.h contains only declarations, which can be used by other .cpp files.
I'm wondering how the compiler knows where the body of function fhi is.
It doesn't. You need to compile all *.cpp files into object files. In your case, you will have two object files: program.o and hi.o. The linker can now take these two object files and spit out the executable. References to other functions (in this case the actual definition of fhi(...)) are resolved in this stage.
Also why are you using absolute paths in #includes? It will break when you move the "c" directory around.
What normally happens is that the build system compiles the .cpp files into object files, that then are used to build the main executable. The means to tell this to the build system vary greatly.
One important point is that your hi.cpp must include hi.h. You should also put an include guard in hi.h, to make it safe to be included more than once in a translation unit.
I can include a cpp file like #include "c:\c\hi.cpp" instead of using a header without any problems, except for the fact that it looks more readable in a header file.
Yes, you can do so, but it is not recommended. One of the problems is encapsulation: you are not hiding implementation details. Readability, as you mention, is also a concern; a header is easier to read since it clearly shows which methods are public.
If I include a header like hi.h in my main program, must hi.h include hi.cpp, or is that done automatically because of the matching file name hi? I'm wondering how the compiler knows where the body of function fhi is.
The header needs to be explicitly included in hi.cpp and in any .cpp file that uses what is declared in the header.
Is it possible to have different names for header and cpp files?
Yes, but it is not recommended; it makes it more difficult to find things.
As a general rule: keep in mind that other programmers may want to look at your code, so you need to structure it so that it is easy to read and understand, and so that it is easier for you, two years down the road, to remember where things are.
In Visual Studio all CPP files included in the project will be compiled to produce OBJ files. These OBJ files will be linked together to form the EXE or DLL.
Including a file is similar to pasting the contents of that file at that location. The only difference is that this pasting is done by the preprocessor during compilation.
Finding out where a function body resides is done either by the compiler, if the function is inline, or by the linker, when the final binary is created.
First, if the header file is in the same directory as the source file including it, you can use just
#include "hi.h"
In other words, you don't have to use a full path. (See e.g. the inclusion of "stdafx.h".)
Second, in your header file you don't need to include other header files unless you need types from them. Your header file doesn't contain anything that needs the header files you include.
Third, you should protect header files from being included more than once in the same source file. This can be done with a so-called include guard, or in some compilers via a special directive called #pragma once.
Fourth, in your header file you define a global variable var. This variable will then be defined in every source file you include the header file in, which will lead to errors. You need to declare the variable as extern:
extern int var;
Then in one source file you define the variable like you do now.
Fifth, you should never include source files in a header file (with some special exceptions that you don't have to think about yet). Instead you add all source files to the project (I assume you are in MS Visual Studio) and they will all be built and linked together automatically.
Sixth, since you seem to be using VisualC++, then you are probably using something called precompiled headers. This is something the compiler uses to speed up compilation. However, for this to work you have to include "stdafx.h" in all source files. That include actually has to be the first non-comment line in each source file.
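Putting those points together, a sketch of how the three files might look (keeping the asker's names; the exact contents are assumed):

// hi.h
#ifndef HI_H
#define HI_H

extern int var;      // declaration only; defined exactly once in hi.cpp
int fhi(int f);      // function prototype

#endif

// hi.cpp
#include "hi.h"      // the .cpp includes its own header

int var = 0;         // the single definition of var
int fhi(int f)
{
    return 0;
}

// main program
#include "stdafx.h"
#include "hi.h"      // plain relative include; no absolute path needed

int _tmain(int argc, _TCHAR* argv[])
{
    fhi(1);
    return 0;
}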

Does putting a whole class definition in a ".h" make the executable larger?

We define a C++ class in a .h and define its methods in a .cpp, but it makes the code look less organized.
I want to put all the methods' definitions in the class definition, which is in a .h file, but I'm worried that the compiler generates duplicated code for the same methods/functions when one class header file is included by different files.
Does the linker find out and merge the duplicated code pieces to reduce the file size?
If not, is it better to use .hpp instead? I heard that a .hpp is for this.
And it does make a minor difference when I just change a .h file to a .hpp (I don't know why), compiling with G++.
Yes. It may create a larger executable, and that is because member functions which are defined inside the class itself are inline by default, whether you mention the keyword inline in the definition or not. Usually, an inline function causes a larger executable because the compiler may expand its body at every place it is called from.
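As a small illustration (class and member names invented), the first member function below is implicitly inline because it is defined inside the class, while the second keeps a single out-of-line definition in the .cpp file:

// Widget.h
#ifndef WIDGET_H
#define WIDGET_H

class Widget {
public:
    int area() const { return w_ * h_; }   // defined in the class: implicitly inline
    int perimeter() const;                 // declared only; defined in Widget.cpp
private:
    int w_ = 0, h_ = 0;
};

#endif

// Widget.cpp
#include "Widget.h"

int Widget::perimeter() const { return 2 * (w_ + h_); }  // exactly one definition in the program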
The .h vs .hpp question is roughly 90% equivalent to
#include <cmath> vs #include <math.h>
Some people prefer to use .hpp when they are doing exclusive C++ programming. You will see .hpp in libraries like Boost.
However, the other 10% is really important. For example, taking the Boost library documentation as a reference, they explain the reason for using .hpp over .h:
Most Boost libraries are header-only: they consist entirely of header
files containing templates and inline functions, and require no
separately-compiled library binaries or special treatment when
linking.
If you fall into that case, you should use .hpp, but this can cost you longer compilation times. Otherwise, you might want to keep the .h style. That's just my personal taste. It isn't C-oriented at all, in my honest opinion.
Further reading:
Splitting templated C++ classes into .hpp/.cpp files--is it possible?
Condensing Declaration and Implementation into an HPP file
C++ templates declare in .h, define in .hpp
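For what "header-only" means in practice, here is a tiny sketch (file and function names invented): everything is defined in the header, marked inline or written as a template, so no separately compiled .cpp is needed:

// mathutil.hpp
#ifndef MATHUTIL_HPP
#define MATHUTIL_HPP

// inline functions and templates may be defined in the header itself; every translation
// unit that includes this file gets the definition, and the linker merges the copies.
inline double square(double x) { return x * x; }

template <typename T>
T clamp_to_zero(T v) { return v < T(0) ? T(0) : v; }

#endif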
You have nothing to worry about. It makes absolutely no difference how it's broken up; it's what your files describe that makes it bigger, not how that description is spread out.
.h or .hpp makes no difference as well.
To answer your question about a larger executable: yes, it will make your executable larger. When you #include a header file in a source or header file, the preprocessor replaces the #include with the contents of the header file. This is why it is necessary to protect your header files with the following header protection:
#ifndef HDR_H
#define HDR_H
...
#endif
However, you will get linker errors if you include a header file that has function definitions in multiple files that are part of the same executable. It would be wise for you to split class and function definitions and declarations into .cpp and .hpp files, respectively. This will greatly reduce the number of linker headaches.
Also, .h = .hpp. Doesn't matter which one you choose. Personal preference...
Here's all you need: Header files, pros and cons of putting all your code in them. Hope it helps!
Using header files results in quicker compile time and smaller executable. It also looks considerably cleaner because you can get a quick overview of your class by looking at its .h declaration.

Why use #ifndef CLASS_H and #define CLASS_H in .h file but not in .cpp?

I have always seen people write
class.h
#ifndef CLASS_H
#define CLASS_H
//blah blah blah
#endif
The question is, why don't they also do that for the .cpp file that contain definitions for class functions?
Let's say I have main.cpp, and main.cpp includes class.h. The class.h file does not include anything, so how does main.cpp know what is in the class.cpp?
First, to address your first inquiry:
When you see this in .h file:
#ifndef FILE_H
#define FILE_H
/* ... Declarations etc here ... */
#endif
This is a preprocessor technique for preventing a header file from being included multiple times, which can be problematic for various reasons. During compilation of your project, each .cpp file (usually) is compiled. In simple terms, this means the compiler will take your .cpp file, open any files #included by it, concatenate them all into one massive text file, perform syntax analysis, convert it to some intermediate code, optimize/perform other tasks, and finally generate the assembly output for the target architecture. Because of this, if a file is #included multiple times under one .cpp file, the compiler will append its file contents twice, so if there are definitions within that file, you will get a compiler error telling you that you redefined a variable.

When the file is processed by the preprocessor step in the compilation process, the first time its contents are reached the first two lines will check whether FILE_H has been defined for the preprocessor. If not, it will define FILE_H and continue processing the code between it and the #endif directive. The next time that file's contents are seen by the preprocessor, the check against FILE_H will be false, so it will immediately scan down to the #endif and continue after it. This prevents redefinition errors.
And to address your second concern:
In C++ programming, as a general practice, we separate development into two file types. One has the extension .h and we call it a "header file." Header files usually provide declarations of functions, classes, structs, global variables, typedefs, preprocessing macros and definitions, and so on. Basically, they just provide you with information about your code. Then we have the .cpp extension, which we call a "code file." This provides definitions for those functions, class members, any struct members that need definitions, global variables, etc. So the .h file declares code, and the .cpp file implements that declaration. For this reason, during compilation we generally compile each .cpp file into an object file and then link those objects (because you almost never see one .cpp file include another .cpp file).
How these externals are resolved is a job for the linker. When your compiler processes main.cpp, it gets declarations for the code in class.cpp by including class.h. It only needs to know what these functions or variables look like (which is what a declaration gives you). So it compiles your main.cpp file into some object file (call it main.obj). Similarly, class.cpp is compiled into a class.obj file. To produce the final executable, a linker is invoked to link those two object files together. For any unresolved external variables or functions, the compiler will place a stub where the access happens. The linker will then take this stub and look for the code or variable in another listed object file, and if it's found, it combines the code from the two object files into an output file and replaces the stub with the final location of the function or variable. This way, your code in main.cpp can call functions and use variables in class.cpp IF AND ONLY IF THEY ARE DECLARED IN class.h.
I hope this was helpful.
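A stripped-down version of the files discussed above might look like this (the class name and member are invented; the question only gives the file names):

// class.h
#ifndef CLASS_H
#define CLASS_H

class MyClass {
public:
    void greet();            // declaration only
};

#endif

// class.cpp
#include "class.h"
#include <iostream>

void MyClass::greet()        // the definition the linker will resolve
{
    std::cout << "hello\n";
}

// main.cpp
#include "class.h"           // brings in the declaration only

int main()
{
    MyClass c;
    c.greet();               // the compiler emits a call through a stub here;
                             // the linker patches in the address found in class.o
    return 0;
}

Compiling main.cpp and class.cpp separately and then linking the two object files produces the executable.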
The CLASS_H is an include guard; it's used to avoid the same header file being included multiple times (via different routes) within the same CPP file (or, more accurately, the same translation unit), which would lead to multiple-definition errors.
Include guards aren't needed on CPP files because, by definition, the contents of the CPP file are only read once.
You seem to have interpreted the include guards as having the same function as import statements in other languages (such as Java); that's not the case, however. The #include itself is roughly equivalent to the import in other languages.
It doesn't - at least during the compilation phase.
The translation of a C++ program from source code to machine code is performed in three phases:
Preprocessing - The preprocessor parses all source code for lines beginning with # and executes the directives. In your case, the contents of your file class.h are inserted in place of the line #include "class.h". Since you might be including your header file in several places, the #ifndef clauses avoid duplicate-declaration errors, since the preprocessor symbol is undefined only the first time the header file is included.
Compilation - The Compiler does now translate all preprocessed source code files to binary object files.
Linking - The linker links (hence the name) the object files together. A reference to your class or one of its methods (which should be declared in class.h and defined in class.cpp) is resolved to the respective offset in one of the object files. I write 'one of the object files' since your class does not need to be defined in a file named class.cpp; it might be in a library which is linked to your project.
In summary, the declarations can be shared through a header file, while the mapping of declarations to definitions is done by the linker.
That's the distinction between declaration and definition. Header files typically include just the declaration, and the source file contains the definition.
In order to use something you only need to know its declaration, not its definition. Only the linker needs to know the definition.
So this is why you will include a header file inside one or more source files but you won't include a source file inside another.
Also you mean #include and not import.
That's done for header files so that the contents only appear once in each preprocessed source file, even if it's included more than once (usually because it's included from other header files). The first time it's included, the symbol CLASS_H (known as an include guard) hasn't been defined yet, so all the contents of the file are included. Doing this defines the symbol, so if it's included again, the contents of the file (inside the #ifndef/#endif block) are skipped.
There's no need to do this for the source file itself since (normally) that's not included by any other files.
For your last question, class.h should contain the definition of the class, and declarations of all its members, associated functions, and whatever else, so that any file that includes it has enough information to use the class. The implementations of the functions can go in a separate source file; you only need the declarations to call them.
main.cpp doesn't have to know what is in class.cpp. It just has to know the declarations of the functions/classes that it goes to use, and these declarations are in class.h.
The linker links between the places where the functions/classes declared in class.h are used and their implementations in class.cpp
.cpp files are not included (using #include) into other files. Therefore they don't need include guarding. Main.cpp will know the names and signatures of the class that you have implemented in class.cpp only because you have specified all that in class.h - this is the purpose of a header file. (It is up to you to make sure that class.h accurately describes the code you implement in class.cpp.) The executable code in class.cpp will be made available to the executable code in main.cpp thanks to the efforts of the linker.
It is generally expected that modules of code such as .cpp files are compiled once and linked to in multiple projects, to avoid unnecessary repetitive compilation of logic. For example, g++ -c class.cpp would produce class.o, which you could then link against from multiple projects using g++ main.cpp class.o.
We could use #include as our linker, as you seem to be implying, but that would just be silly when we know how to link properly using our compiler with fewer keystrokes and less wasteful repetition of compilation, rather than using our code with more keystrokes and more wasteful repetition of compilation...
The header files are still required to be included into each of the multiple projects, however, because this provides the interface for each module. Without these headers the compiler wouldn't know about any of the symbols introduced by the .o files.
It is important to realise that the header files are what introduce the definitions of symbols for those modules; once that is realised then it makes sense that multiple inclusions could cause redefinitions of symbols (which causes errors), so we use include guards to prevent such redefinitions.
It's because header files define what the class contains (members, data structures) and .cpp files implement it.
And of course, the main reason for this is that you could include one .h file multiple times in other .h files, but this would result in multiple definitions of a class, which is invalid.

C/C++ - Re-using functions across multiple programs

In Python, whenever I had a bunch of functions that I wanted to use across multiple programs, I'd make another .py file and then just import that wherever I needed it. How would I do that in C/C++? Do I dump both prototype and implementation into an .h file? Or do I need to place the function prototypes in the .h file and the implementations in a separate .cpp file with the same name as the .h file, and #include the .h wherever I need it?
You need to do a couple of things:
1. Add the prototype to a header file.
2. Write a new source file with the function definitions.
3. In a source file that just wants to use the shared function, you need to add #include "header.h" (replacing header.h with the name of the file from step 1) someplace before you try to call the shared function (normally you put all includes at the top of the source file).
4. Make sure your build compiles the new source file and includes that in the link.
A couple of other comments. It's normal to have foo.h as the header for the foo.c but that is only a style guideline.
When using headers, you want to add include guards to protect against the multiple include issue.
In C/C++ we usually put declarations in .h files and implementations in .c/.cpp files.
(Note: there are many other ways, for example includes, templates, inline, extern, ... so you may find some code only in header files or only in .c/.cpp files - for example some of the STL and templates.)
Then you need to "link" the files with your program, which works like the "import" in the Python interpreter but actually works by statically linking object files together into a single executable file.
However the "link" command and syntax depends on your compiler and OS linker. So you need to check your compiler for more information, for example "ld" on UNIX and "link.exe" on DOS/Windows. Moreover, usually the C compiler will invoke the linker automatically.
For example, say you have 2 files: a.c and b.c (with a.h and b.h), on gcc:
gcc -o a.out a.c b.c
On MSVC:
cl a.c b.c
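The file layout those commands assume might look like this minimal sketch (the shared function is invented for illustration):

// a.h
#ifndef A_H
#define A_H
int shared_add(int x, int y);   // prototype of the shared function
#endif

// a.c
#include "a.h"
int shared_add(int x, int y) { return x + y; }   // the single definition

// b.c
#include <stdio.h>
#include "a.h"                  // only the declaration is needed here
int main(void)
{
    printf("%d\n", shared_add(2, 3));
    return 0;
}

Both source files are handed to the compiler, and the linker resolves the call in b.c to the definition in a.c.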
There are two ways to approach this that differ only slightly. As others have said, the first steps are:
-Create a header file which contains your function prototypes. You'll want to mark this with
# ifndef myheader_h
# define myheader_h
// prototypes go here...
# endif
to prevent problems with multiple inclusions.
-Create a .c file which contains the actual definitions.
Here's where the solutions branch.
If you want to include the source directly in your project, make the .c file part of your compilation stage as well as your link stage.
However, if you really plan on using this across multiple projects, you'll probably want to compile this source file independently, and reference the object file from your other projects. This is loosely what a "library" is, though libraries may consist of multiple object modules - each of which has been compiled but not yet linked.
update
Someone pointed out that this really only keeps the header from being included in a single cpp file. News flash: that's all you need to do.
Compilers treat each cpp file individually. The header files included by each cpp source file tell the compiler, "hey! This thing is defined in another source file! Assume references that match this prototype are A-OK and keep moving on."
The LINKER, on the other hand, is responsible for fixing up these references, and IT will throw a fit if the same symbol is defined in multiple object files. For that to happen, a function would have to be defined in two separate source files - a real definition with a body, not just an extern prototype - OR the object file that contains its body/definition would have to be included in the link command more than once.
Re: "inline"
Use of "inline" is meant as an optimization feature. Functions declared as inline have their bodies expanded inline at each place where they are called. Using this to get around multiple definition errors is very, very bad. This is similar to macro expansion.
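A compact sketch of the pitfall and the recommended fix (names invented):

// util.h
#ifndef UTIL_H
#define UTIL_H

// BAD: a full, non-inline definition in the header. Every .cpp that includes util.h
// emits its own copy of twice(), and the linker reports a multiply-defined symbol.
// int twice(int x) { return 2 * x; }

// BETTER (as recommended above): only the prototype lives in the header...
int twice(int x);

#endif

// util.cpp - ...and the single definition lives in exactly one source file.
#include "util.h"
int twice(int x) { return 2 * x; }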
See Francis's answer. The sentence that you wrote, "or do I need to place the function prototypes in the .h file and the implementations in a separate .cpp file with the same name as the .h file and #include the .h wherever I need it?", is pretty-much correct. You don't have to do things exactly this way, but it works.
It's up to you how you do this; the compiler doesn't care. But if you put your functions in a .h file, you should declare them __inline, otherwise if you include the header file in more than one .cpp file, you will have multiply defined symbols.
On the other hand, if you make them __inline, you will tend to get a copy created in each place that you use the function. This will bloat the size of your program. So unless the functions are quite small, it's probably best to put the functions in a .cpp and create a parallel .h with function prototypes and public structures. This is the way most programmers work.
On the other hand, in the STL (Standard Template Library), virtually all of the code is in header files (without the .h extension).