I've never really understood why C++ needs a separate header file with the same functions as in the .cpp file. It makes creating classes and refactoring them very difficult, and it adds unnecessary files to the project. And then there is the problem with having to include header files, but having to explicitly check if it has already been included.
C++ was ratified in 1998, so why is it designed this way? What advantages does having a separate header file have?
Follow up question:
How does the compiler find the .cpp file with the code in it, when all I include is the .h file? Does it assume that the .cpp file has the same name as the .h file, or does it actually look through all the files in the directory tree?
Some people consider header files an advantage:
It is claimed that it enables/enforces/allows separation of interface and implementation -- but usually, this is not the case. Header files are full of implementation details (for example, member variables of a class have to be specified in the header, even though they're not part of the public interface), and functions can be, and often are, defined inline in the class declaration in the header, again destroying this separation.
It is sometimes said to improve compile time because each translation unit can be processed independently. And yet C++ is probably the slowest language in existence when it comes to compile times. Part of the reason is the many repeated inclusions of the same headers: a header included by multiple translation units has to be parsed once for each of them.
Ultimately, the header system is an artifact from the 1970s, when C was designed. Back then, computers had very little memory, and keeping an entire module in memory just wasn't an option. A compiler had to start reading the file at the top and then proceed linearly through the source code. The header mechanism enables this: the compiler doesn't have to consider other translation units, it just has to read the code from top to bottom.
And C++ retained this system for backwards compatibility.
Today, it makes no sense. It is inefficient, error-prone and overcomplicated. There are far better ways to separate interface and implementation, if that was the goal.
However, one of the proposals for C++0x was to add a proper module system, allowing code to be compiled similar to .NET or Java, into larger modules, all in one go and without headers. This proposal didn't make the cut in C++0x, but I believe it's still in the "we'd love to do this later" category. Perhaps in a TR2 or similar.
You seem to be asking about separating definitions from declarations, although there are other uses for header files.
The answer is that C++ doesn't "need" this. If you mark everything inline (which is automatic anyway for member functions defined in a class definition), then there is no need for the separation. You can just define everything in the header files.
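As a minimal sketch of that approach (the file widget.h and its contents are hypothetical): functions defined inside the class body are implicitly inline, while a free function defined in a header needs the keyword spelled out.
// widget.h - everything defined in the header; no widget.cpp exists
#ifndef WIDGET_H
#define WIDGET_H

#include &lt;string&gt;

class Widget {
public:
    // defined in the class body, therefore implicitly inline
    void rename(const std::string&amp; n) { name_ = n; }
    std::string name() const { return name_; }
private:
    std::string name_;
};

// a free function in a header must be marked inline explicitly, or
// the linker reports multiple definitions as soon as two .cpp files
// include this header
inline Widget makeWidget(const std::string&amp; n) {
    Widget w;
    w.rename(n);
    return w;
}

#endif // WIDGET_H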
The reasons you might want to separate are:
To improve build times.
To link against code without having the source for the definitions.
To avoid marking everything "inline".
If your more general question is, "why isn't C++ identical to Java?", then I have to ask, "why are you writing C++ instead of Java?" ;-p
More seriously, though, the reason is that the C++ compiler can't just reach into another translation unit and figure out how to use its symbols, in the way that javac can and does. The header file is needed to declare to the compiler what it can expect to be available at link time.
So #include is a straight textual substitution. If you define everything in header files, the preprocessor ends up creating an enormous copy and paste of every source file in your project, and feeding that into the compiler. The fact that the C++ standard was ratified in 1998 has nothing to do with this, it's the fact that the compilation environment for C++ is based so closely on that of C.
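A minimal sketch of what that substitution looks like (the files point.h and main.cpp are hypothetical; you can see the real result with a preprocess-only flag such as g++ -E main.cpp):
// point.h
#ifndef POINT_H
#define POINT_H
struct Point { int x; int y; };
#endif

// main.cpp
#include "point.h"
int main() { Point p{3, 4}; return p.x + p.y; }

// What the compiler actually compiles for main.cpp, after the
// preprocessor has pasted the header in place of the #include:
struct Point { int x; int y; };
int main() { Point p{3, 4}; return p.x + p.y; }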
Converting my comments to answer your follow-up question:
How does the compiler find the .cpp file with the code in it
It doesn't, at least not at the time it compiles the code that used the header file. The functions you're linking against don't even need to have been written yet, never mind the compiler knowing what .cpp file they'll be in. Everything the calling code needs to know at compile time is expressed in the function declaration. At link time you will provide a list of .o files, or static or dynamic libraries, and the header in effect is a promise that the definitions of the functions will be in there somewhere.
C++ does it that way because C did it that way, so the real question is why did C do it that way? Wikipedia speaks a little to this.
Newer compiled languages (such as Java, C#) do not use forward declarations; identifiers are recognized automatically from source files and read directly from dynamic library symbols. This means header files are not needed.
To my (limited - I'm not a C developer normally) understanding, this is rooted in C. Remember that C does not know what classes or namespaces are, it's just one long program. Also, functions have to be declared before you use them.
For example, the following should give a compiler error:
#include &lt;stdio.h&gt;

void SomeFunction() {
    SomeOtherFunction();   /* error: not declared yet at this point */
}

void SomeOtherFunction() {
    printf("What?");
}
The error should be that "SomeOtherFunction is not declared" because you call it before its declaration. One way of fixing this is by moving SomeOtherFunction above SomeFunction. Another approach is to declare the function's signature first:
#include &lt;stdio.h&gt;

void SomeOtherFunction();   /* forward declaration: signature only */

void SomeFunction() {
    SomeOtherFunction();    /* fine: the declaration above is visible */
}

void SomeOtherFunction() {
    printf("What?");
}
This lets the compiler know: somewhere in the code there is a function called SomeOtherFunction that returns void and does not take any parameters. So if you encounter code that tries to call SomeOtherFunction, do not panic, and instead go looking for it.
Now, imagine you have SomeFunction and SomeOtherFunction in two different .c files. You would then have to #include "SomeOther.c" in Some.c. Now, add some "private" functions to SomeOther.c. As C has no notion of private functions, those functions would be available in Some.c as well.
This is where .h files come in: they specify all the functions (and variables) that you want to "export" from a .c file so that they can be accessed from other .c files. That way, you gain something like public/private scope. Also, you can give the .h file to other people without having to share your source code - .h files work against compiled .lib files as well.
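A hedged sketch of that arrangement (file names hypothetical): the header exports one function, while static keeps a helper private to its own .c file. The code below is valid as both C and C++.
/* some_other.h - the "exported" interface of some_other.c */
#ifndef SOME_OTHER_H
#define SOME_OTHER_H
void SomeOtherFunction(void);
#endif

/* some_other.c */
#include &lt;stdio.h&gt;
#include "some_other.h"

/* static gives this helper internal linkage: other .c files cannot
   call it, which is as close to "private" as C gets */
static void Helper(void) {
    printf("internal detail\n");
}

void SomeOtherFunction(void) {
    Helper();
}

/* some.c - a client: it includes only the header, never some_other.c */
#include "some_other.h"
void SomeFunction(void) {
    SomeOtherFunction();
}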
So the main reason is really for convenience, for source code protection and to have a bit of decoupling between the parts of your application.
That was C though. C++ introduced classes and private/public modifiers, so while you could still ask whether headers are needed, C++ AFAIK still requires declaration of functions before using them. Also, many C++ developers are or were C developers as well and carried their concepts and habits over to C++ - why change what isn't broken?
First advantage: If you don't have header files, you would have to include source files in other source files. This would cause the including files to be compiled again when the included file changes.
Second advantage: It allows sharing the interfaces without sharing the code between different units (different developers, teams, companies, etc.).
The need for header files results from the limits on what the compiler can know about the type information for functions and/or variables in other modules. The compiled program or library does not include the type information required by the compiler to bind to any objects defined in other compilation units.
In order to compensate for this limitation, C and C++ allow for declarations and these declarations can be included into modules that use them with the help of the preprocessor's #include directive.
Languages like Java or C# on the other hand include the information necessary for binding in the compiler's output (class-file or assembly). Hence, there is no longer a need for maintaining standalone declarations to be included by clients of a module.
The reason for the binding information not being included in the compiler output is simple: it is not needed at runtime (any type checking occurs at compile time). It would just waste space. Remember that C/C++ come from a time where the size of an executable or library did matter quite a bit.
Well, C++ was ratified in 1998, but it had been in use for a lot longer than that, and the ratification was primarily setting down current usage rather than imposing structure. And since C++ was based on C, and C has header files, C++ has them too.
The main reason for header files is to enable separate compilation of files, and minimize dependencies.
Say I have foo.cpp, and I want to use code from the bar.h/bar.cpp files.
I can #include "bar.h" in foo.cpp, and then program and compile foo.cpp even if bar.cpp doesn't exist. The header file acts as a promise to the compiler that the classes/functions in bar.h will exist at run-time, and it has everything it needs to know already.
Of course, if the functions in bar.h don't have bodies when I try to link my program, then it won't link and I'll get an error.
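A minimal sketch of that arrangement (the contents of these files are hypothetical):
// bar.h - the "promise": somewhere, a function named bar exists
#ifndef BAR_H
#define BAR_H
int bar(int x);
#endif

// foo.cpp - compiles to foo.o with only the declaration in view,
// even if bar.cpp has not been written yet:   g++ -c foo.cpp
#include "bar.h"
int foo() { return bar(21); }

// bar.cpp - provides the definition; the linker matches the call in
// foo.o to this body later:   g++ -c bar.cpp &amp;&amp; g++ foo.o bar.o ...
#include "bar.h"
int bar(int x) { return 2 * x; }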
A side-effect is that you can give users a header file without revealing your source code.
Another is that if you change the implementation of your code in the *.cpp file, but do not change the header at all, you only need to recompile the *.cpp file instead of everything that uses it. Of course, if you put a lot of implementation into the header file, then this becomes less useful.
C++ was designed to add modern programming language features to the C infrastructure, without unnecessarily changing anything about C that wasn't specifically about the language itself.
Yes, at this point (10 years after the first C++ standard and 20 years after it began seriously growing in usage) it is easy to ask why it doesn't have a proper module system. Obviously any new language being designed today would not work like C++. But that isn't the point of C++.
The point of C++ is to be evolutionary, a smooth continuation of existing practise, only adding new capabilities without (too often) breaking things that work adequately for its user community.
This means that it makes some things harder (especially for people starting a new project), and some things easier (especially for those maintaining existing code) than other languages would do.
So rather than expecting C++ to turn into C# (which would be pointless as we already have C#), why not just pick the right tool for the job? Myself, I endeavour to write significant chunks of new functionality in a modern language (I happen to use C#), and I have a large amount of existing C++ that I am keeping in C++ because there would be no real value in re-writing it all. They integrate very nicely anyway, so it's largely painless.
It doesn't need a separate header file with the same functions as the .cpp file. It only needs one if you develop an application using multiple code files and you use a function that was not previously declared.
It's really a scope problem.
C++ was ratified in 1998, so why is it designed this way? What advantages does having a separate header file have?
Actually, header files become very useful when examining a program for the first time: checking out the header files (using only a text editor) gives you an overview of the architecture of the program, unlike other languages where you have to use sophisticated tools to view classes and their member functions.
If you want the compiler to find symbols defined in other files automatically, you need to force the programmer to put those files in predefined locations (the way the Java package structure determines the folder structure of a project). I prefer header files. Also, you would need either the sources of the libraries you use or some uniform way to put the information needed by the compiler into the binaries.
I think the real (historical) reason behind header files was making life easier for compiler developers... but then, header files do give advantages.
Check this previous post for more discussions...
Well, you can perfectly well develop C++ without header files. In fact, some libraries that make intensive use of templates do not use the header/code file paradigm (see Boost). But in C/C++ you cannot use something that is not declared, and one practical way to deal with that is to use header files. Plus, you gain the advantage of sharing an interface without sharing the code/implementation. And I think one thing was not envisioned by the C creators: when you use shared header files, you have to use the famous:
#ifndef MY_HEADER_SWEET_GUARDIAN
#define MY_HEADER_SWEET_GUARDIAN
// [...]
// my header
// [...]
#endif // MY_HEADER_SWEET_GUARDIAN
This is not really a language feature but a practical way to deal with multiple inclusion.
So, I think that when C was created, the problems with forward declarations were underestimated, and now, when using a high-level language like C++, we have to deal with this sort of thing.
Another burden for us poor C++ users ...
I'm not sure how to phrase the question, so please feel free to rephrase it if there is a better way:
Normally code is divided between a .h and a .cpp file in the following manner:
Things like class declarations, function prototypes, and enumerations typically go in header files. In a word, "definitions".
Code files (.cpp) are designed to provide the implementation information that only needs to be known in one file. In general, function bodies, and internal variables that should/will never be accessed by other modules, are what belong in .cpp files. In a word, "implementations".
With reference to apps using native code (C++) submitted to the iOS and Android app stores, what information can an outsider learn by inspecting the package contents? For example, I have heard that one can discover class definitions and function names. This makes me think this is because of what is normally in .h files. But if function bodies are included in the .h files, will they be visible as well? If someone is inspecting my app, does how I separate my code between .h and .cpp files affect what they can discover?
EDIT: There is no problem that I am trying to solve. I am just trying to learn what people can see based on what code is in my .h file.
Disclaimer: this is solely focusing on the C++ part of the project and its compile options (as per the tags of the question), not the other languages.
After the compilation of your native library, there's no difference between the header and the source file. They get combined together into a single "translation unit" by the preprocessor (#includes literally inserted in their respective places etc.) and from that point on it makes no difference at all which line came from which file.
What is visible are the names of your exported functions. You can't change that, but you can make the attackers' life a bit harder by giving them obscure names. And you need exported functions because they are the common language that the C++ and Objective-C/Java parts of your project share: without any interface, your native library could not be accessed in any way.
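As a hedged illustration (the function name below is made up): an exported entry point like this keeps its name in the binary's symbol table, where tools such as nm or objdump can list it, even though its body ships only as machine code.
// native_api.cpp - compiled into the shared library.
// extern "C" disables C++ name mangling, so the symbol appears
// verbatim in the symbol table for the Java/Objective-C side to find.
extern "C" int compute_score(int input) {
    return input * 3;   // the body itself is only machine code in the binary
}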
As far as code goes – as Some Programmer Dude said, if your attacker wants it badly enough, they will find out what your code does. The C++ statements aren't directly recoverable (unless you compile with debugging symbols, like -g in GCC or Clang) the way they are from Java, so that's a level of obfuscation, but no machine code is completely immune to reverse engineering. If nothing else, your processor needs to ultimately understand the steps it is asked to follow, and one can simply simulate a processor doing so.
TLDR: code compiled by Java is quite easy to "read", code compiled by C++ isn't, regardless of .cpp / .h separation.
In C++, I've heard that you need to separate the implementation from the interface in order to reduce compile times. In this question, the answerer basically says that if you change the header then the source is recompiled, so if you separate the interface from the implementation then there is less chance you will need to change the header. But here are the two scenarios I see:
Have all code in the header file: whenever you change any code, the entire file is recompiled.
Have code in separate header and source files: since the header file is effectively copied and pasted (via #include) into the source file, a change to either the header or the source still forces the whole translation unit (both header and source) to be recompiled, as if they were one file.
So then what is the advantage of separating interface from implementation? You might say it is so that you can have multiple implementations for a single interface by means of inheritance, but can't you still do that when you have all the code in the header file? You could also say that it is so that the interface does not see any implementation details, but what is the use of that?
The only reason I can see for separating interface from implementation would be if the C++ compiler skipped over the part of the source file that comes from an unchanged header pasted into the source code, and just compiled the rest of the source file. So is this the case? Do C++ compilers skip compilation of certain parts of files that have not changed? I know that that is probably not what happens, but I cannot think of any other explanation.
Edit
I've already seen this question. What I am asking is why compilation times are faster with separate implementation and interface files. I understand that there are advantages to separate implementation and interface files, such as what Lightness Races in Orbit stated, but I am asking why compile times are better, if they are at all.
You missed one crucial fact: you should be minimising changes to header files.
The headers contain your interface, which should remain unchanged for as long as possible to keep your code stable. Most of your changes going forwards ought to be in the implementation files, which may be recompiled in isolation.
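A small sketch of that division (names hypothetical): as long as counter.h below stays untouched, edits to counter.cpp trigger a rebuild of only that one translation unit, not of every file that includes the header.
// counter.h - the stable interface; changing this file forces every
// .cpp file that includes it to recompile
#ifndef COUNTER_H
#define COUNTER_H
class Counter {
public:
    void increment();
    int value() const;
private:
    int count_ = 0;
};
#endif

// counter.cpp - the implementation; rewrite this as often as you
// like, and only counter.o is rebuilt
#include "counter.h"
void Counter::increment() { count_ += 1; }
int Counter::value() const { return count_; }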
The real benefit of separating interface from implementation is so that users of your code/library can be given a distributable that contains your header files, allowing them to interface with your code. They do not need to handle, manage, build etc the far more bulky implementation code. If you are running Linux, look at your /usr/include and /usr/lib folders and you'll see what I mean — all the implementation can be shipped as a compiled binary, which is far less unwieldy (and harder to modify in-place).
Frankly this seems like common sense to me. When you buy a product, you receive with it an instruction manual, not the steps to manufacture it.
It's also next to impossible to mock out the implementation for unit tests if you lump everything together into a single translation unit. The C compilation model was simply not designed for that.
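One sketch of how the split enables that (names hypothetical): both builds include the same header, but the test build links a mock .cpp in place of the real one.
// clock_api.h - the interface both builds share
#ifndef CLOCK_API_H
#define CLOCK_API_H
long long currentTimeMs();
#endif

// clock_api.cpp - real implementation, linked into the product binary
#include "clock_api.h"
#include &lt;chrono&gt;
long long currentTimeMs() {
    using namespace std::chrono;
    return duration_cast&lt;milliseconds&gt;(
        steady_clock::now().time_since_epoch()).count();
}

// mock_clock.cpp - linked into the unit-test binary instead, giving
// tests a deterministic time without changing any calling code
#include "clock_api.h"
long long currentTimeMs() { return 42; }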
Why does C++ have a .h and a .cpp file, and not just one file like C# and Java?
For historical reasons. Specifically, for compatibility with C. That language was originally designed to run on (for 70s standards) low-end machines; header files were (and often still are) substituted inline by a separate program, to keep the memory use of the compiler down. It still helps to keep libraries small.
Because C# and Java do not require forward declarations, while C++ does.
C++ has a pre-processor that it inherited from C. The pre-processor has many interesting features, but one of the things that it is used for is to structure the code into headers and source files.
Structuring the code into headers and source files allows you to determine which parts of the code are visible to other source files (i.e. the parts you put in the header) and which ones aren't.
It also means that you don't need any special tools to know which classes are available to you, as you would if you had import instead of #include: you have part of the source code to work with instead (which you can read with any text editor).
The advantages notwithstanding, the pre-processor is really a legacy from C++'s parent, C, and has survived the evolution of C++ almost entirely intact.