Splitting program changes memory? - C++

I want to split up all classes from my program into cpp and hpp files, each file containing a few classes from the same topic, like this:
main.cpp:
#include <cstdio>
using namespace std;
class TopicFoo_Class1 {
    ... (Functions, variables, public/private sections, etc.)
};
class TopicFoo_Class2 {
    ... (Functions, variables, public/private sections, etc.)
};
class TopicBar_Class1 {
    ... (Stuff)
};
class TopicBar_Class2 {
    ... (Stuff)
};
int main(int argc, const char** argv) { ... }
into:
foo.hpp:
class TopicFoo_Class1 {
    ... (Declarations)
};
class TopicFoo_Class2 {
    ... (Declarations)
};
foo.cpp:
#include <cstdio>
#include "foo.hpp"
void TopicFoo_Class1::function1() { ... }
void TopicFoo_Class2::function1() { ... }
bar.hpp:
class TopicBar_Class1 {
    ... (Declarations)
};
class TopicBar_Class2 {
    ... (Declarations)
};
bar.cpp:
#include <cstdio>
#include "bar.hpp"
void TopicBar_Class1::function1() { ... }
void TopicBar_Class2::function1() { ... }
main.cpp:
#include "foo.hpp"
#include "bar.hpp"
int main(int argc, const char** argv) { ... }
The plan is to compile foo.cpp and bar.cpp into foo.o and bar.o, then compile main.cpp along with those object files to form foo_bar_executable, instead of just compiling one big main.cpp into foo_bar_executable.
This is just an example, header guards and better names will be included.
I'm wondering, will this affect program speed? Some cpps will depend on other topics' hpps to compile, and multiple cpps will depend on one hpp.
Could the multiple includes of the same file by different cpp files cause lag?
Is there a better way to split up my code?
Which one is faster?
Is it possible to run g++ main.cpp foo.cpp bar.cpp -o foo_bar_executable?
How would the above command work?
Should I make foo.hpp contain most required includes and include it in most files? This might make it faster(?)

I'm wondering, will this affect program speed? Some cpps will depend on other topics' hpps to compile, and multiple cpps will depend on one hpp.
You are mixing up things that affect build speed with the run-time speed of your executable. The run-time speed shouldn't change. For a small project, the difference in build time may be negligible. For larger projects, initial builds may be long, but subsequent ones become much shorter, because you only need to rebuild what changed and then re-link.
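For example, a typical separate-compilation cycle with the question's file names might look like this (plain g++ commands; a makefile would automate the "what changed" part):
g++ -c foo.cpp -o foo.o
g++ -c bar.cpp -o bar.o
g++ -c main.cpp -o main.o
g++ main.o foo.o bar.o -o foo_bar_executable
After editing only bar.cpp, you recompile just that one file and re-link:
g++ -c bar.cpp -o bar.o
g++ main.o foo.o bar.o -o foo_bar_executable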
Could the multiple includes of the same file by different cpp files cause lag?
Including a file always adds some delta to the build time, but it's something you'd need to measure. Nowadays compilers are pretty good at handling includes in a smart fashion. If you couple that with disciplined headers (no superfluous includes in headers, forward declarations and such) and precompiled headers, you shouldn't see a significant slowdown.
Is there a better way to split up my code?
Depends on the code. It's highly subjective.
Which one is faster?
Measure for yourself, we can't predict it for you.
Is it possible to run g++ main.cpp foo.cpp bar.cpp -o foo_bar_executable?
Last I checked the GCC docs, it was.
How would the above command work?
It will compile the given source files and link the results together into a single executable.
Should I make foo.hpp contain most required includes and include it in most files? This might make it faster(?)
I wouldn't recommend that. Include the bare minimum to make the single line program #include "foo.hpp" compile successfully. Headers should strive to be minimal and complete (kind of like a certain quality of posts on SO).
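As a sketch of what minimal-and-complete could look like for the question's foo.hpp (the member function signature and the guard name are invented for illustration):
#ifndef FOO_HPP
#define FOO_HPP

#include <string> // needed only because the interface below mentions std::string

class TopicFoo_Class1 {
public:
    void function1(const std::string &s); // hypothetical member from the question
};

class TopicFoo_Class2 {
public:
    void function1();
};

#endif // FOO_HPP
Nothing else is included: any file that only uses these declarations pulls in no more than <string>.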

I'm wondering, will this affect program speed?
No.
Could the multiple includes of the same file by different cpp files cause lag?
No.
Which one is faster?
Speed is not really important to most programs, and how you arrange your files has no effect on run-time performance.
Is it possible to run g++ main.cpp foo.cpp bar.cpp -o foo_bar_executable?
Yes
How would the above command work?
RTFM
Hey, I'm thirteen and a half!
We don't care.

I'm wondering, will this affect program speed?
It can, but it might not.
When a function is not defined in the translation unit where it is called, the compiler cannot optimize the call using inline expansion. However, if enabled, some linkers can perform inlining across translation units.
On the other hand, your program might not benefit from inlining optimization.
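With GCC, for instance, cross-translation-unit inlining can be requested via link-time optimization; a sketch using the question's file names (-flto is a real GCC switch, the rest is the question's layout):
g++ -O2 -flto -c main.cpp foo.cpp bar.cpp
g++ -O2 -flto main.o foo.o bar.o -o foo_bar_executable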
Some cpps will depend on other topics' hpps to compile, and multiple cpps will depend on one hpp.
This is irrelevant to the speed of the compiled program.
Could the multiple includes of the same file by different cpp files cause lag?
It may have a (possibly insignificant) effect on compilation time from scratch.
Is there a better way to split up my code?
This is subjective. The more you split your code, the less you need to recompile when you make changes. The less you split, the faster it is to compile the entire project from scratch.
Which one is faster?
Possibly neither.
Is it possible to run g++ main.cpp foo.cpp bar.cpp -o foo_bar_executable?
Yes.
How would the above command work?
Use the man g++ command.
Should I make foo.hpp contain most required includes and include it in most files? This might make it faster(?)
No. Including unneeded files slows compilation. Besides, this severely reduces the biggest advantage of splitting translation units: not needing to recompile the entire project when a small part changes.

No, it will not affect speed, except if you're relying on heavy optimizations; but as a self-described "newbie", you likely won't be worrying about this yet. In the trade-off between structuring code to help the optimizer and structuring it for maintainability, maintainability will usually be the higher priority.
It might make compilation longer, but won't affect the executable. With a proper makefile, you might see compilation actually improve.
It's all subjective. Some packages split up the source per function.
No effect on the executable.
Yes, but I would recommend learning about makefiles, so that you compile only what needs to be compiled.
It will compile the files, link to some default libraries, and output the executable. If you're interested in what is happening behind the scenes, compile with verbosity turned on. You can also compile to assembler, which can be really interesting to look at.
Ideally, each source file should include only the headers it needs.

Related

C++ modules to speed up template functions

In general, using function templates makes compilation significantly longer.
A friend suggested that I check out modules (C++20) as an optimization.
I don't think it will affect compilation speed at all.
I have no idea how to test this, so I'm asking here.
Will the following code somehow magically optimize the build process?
The definition will still have to be instantiated and compiled, so it won't make any difference, will it?
math.ixx:
module;
#include <typeinfo>
export module math;
import <iostream>;
export
template<typename T>
T square(T x) {
    std::cout << typeid(T).name() << std::endl;
    return x * x;
}
main.cpp:
import math;
int main() {
    square(int());
    square(double());
}
The code example is too trivial for modules to be of any real use. One file which includes a second file, and nothing includes anything else is not a compilation problem. It's like trying to benchmark how fast adding two integer literals is and then making a statement about the quality of C++'s addition operator.
From a performance perspective, modules solve the following problem: they keep the cost of recompiling a single file from being equal to the cost of recompiling every file that the first file includes, regardless of whether the included files changed.
If you #include <vector> in a simple program, your source file now contains thousands of lines of code. If you change that source file, the compiler will have to recompile thousands of lines of code which did not change. If you have 1000 files that each include <vector>, you now have 1000 identical copies of <vector> which the compiler must compile every time you compile all of those files.
This is the sort of thing that modules prevent. If you import a module for a library, changing your source file will not necessitate recompiling that library's headers. If you import dozens of modules across hundreds or thousands of files, this adds up pretty quickly.
Pre-modules, making a small change to a widely included header prompts a full recompilation of your entire project. In a fully modularized codebase, there will still be a lot of files that get recompiled. But what doesn't happen is recompiling code that didn't rely on the change. You may have changed a widely used header, but you didn't change the C++ standard library. So if you included it via modules, then <vector> and such won't get recompiled.
This is where modules save build time.
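As an aside, the exact build commands are still compiler-specific. Assuming GCC's experimental -fmodules-ts mode (a real but experimental switch; other compilers use different flags entirely), building the example might look roughly like this, with the <iostream> header unit precompiled first:
g++ -std=c++20 -fmodules-ts -x c++-system-header iostream
g++ -std=c++20 -fmodules-ts -x c++ -c math.ixx -o math.o
g++ -std=c++20 -fmodules-ts main.cpp math.o -o main
Editing only main.cpp then re-runs only the last command; math.o and the precompiled header unit are untouched.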

C++ class without header

Ok, so I don't have a problem, but a question:
When using C++, you can move a class to another file and include it without creating a header, like this:
foo.cpp:
#include <iostream>
#include <string> // added: <iostream> is not guaranteed to provide std::string
using namespace std;
class foo
{
public:
    string str;
    foo(string inStr)
    {
        str = inStr;
    }
    void print()
    {
        cout << str << endl;
    }
};
main.cpp:
#include "foo.cpp"
using namespace std;
int main()
{
    foo Foo("That's a string");
    Foo.print();
    return 0;
}
So the question is: is this method any worse than using header files? It's much easier and much cleaner, but is it any slower, any more bug-inducing, etc.?
I've searched for this topic for a long time now but I haven't seen a single topic on the internet considering this even an option...
So the question is: is this method any worse than using header files?
You might consider reviewing the central idea of what the "C++ translation unit" is.
In your example, what the preprocessor does is as if it inserts a copy of foo.cpp into an internal copy of main.cpp. The preprocessor does this, not the compiler.
So ... the compiler never sees your code as separate files. It is this single, concatenated 'translation unit' that is submitted to the compiler. There is no magic in .hh or .cc, except that they fulfill your peers' (or boss's) expectations.
Now think about your question ... the translation unit is neither of your source files, nor any of your system include files, but it is one stream of text, one thing, put together by the preprocessor. So how would it be better or worse?
It's much easier and much cleaner,
It can be. I often take this 'different' approach in my 'private' coding efforts.
When I did a quick eval of using gmpxx.h (mpz_class) to compute a factorial, I did indeed take just these kinds of shortcuts, and did not need a .hpp file to properly create my compilation unit. FYI, the factorial of 12345 is more than 45,000 bytes as decimal text; it is pointless to read the characters, too.
For a 'more formal' effort (a job, cooperation, etc.), I always use headers, separate compilation, and the building of a library of functions useful to the app, as part of how things should be done. Especially if I might share this code or contribute to a company's archives. There are too many good reasons for me to describe why I recommend you learn these techniques.
but is it any slower, any more bug-inducing etc?
I think not, on both counts. There is one compilation unit, and concatenating the parts has to be done right, but I think it is no more difficult.
I've searched for this topic for a long time now but I haven't seen a single
topic on the internet considering this even an option...
I'm not sure I've ever seen it discussed either; I simply picked up the practice along the way. Separate compilation and library development are generally perceived to save development time. (Time is money, right?)
Also, a library, and header files, are how you package your success for others to use, how you can improve your value to a team.
There's no semantic difference between naming your files .cpp or .hpp (or .c / .h).
People will be surprised by #include "foo.cpp"; the compiler doesn't care.
You've still created a "header file", but you've given it the ".cpp" extension. File extensions are for the programmer, the compiler doesn't care.
From the compiler's point of view, there is no difference between your example and
foo.h :
#include <iostream>
using namespace std;
class foo
{
//...
};
main.cpp :
#include "foo.h"
using namespace std;
int main()
{
// ...
}
A "header file" is just a file that you include at the beginning i.e. the head of another file (technically, headers don't need to be at the beginning and sometimes are not but typically they are, hence the name).
You've simply created a header file named foo.cpp.
Naming header files with an extension that is conventionally used for source files is not a good idea. Some IDEs and other tools may erroneously assume that your header is a source file and therefore attempt to compile it as if it were one, wasting resources if nothing else.
Not to mention the confusion it may cause your colleagues. Source files may contain definitions that the C++ standard allows to appear exactly once (see the one definition rule, ODR) precisely because source files are not included in other files. If you name your header as if it were a source file, someone might assume that they can put such ODR definitions there when they can't.
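For example, a plain (non-inline) function definition is legal in a real source file, but breaks as soon as the file doubles as a header for two translation units (names below are made up for illustration):
// util.cpp - intended as a "header" and included elsewhere
int answer() { return 42; } // non-inline: exactly one definition allowed program-wide

// a.cc
#include "util.cpp"

// b.cc
#include "util.cpp"

// g++ a.cc b.cc -o prog  ->  linker error: multiple definition of 'answer()'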
If you ever build some larger project, the two main differences will become clear to you:
If you deliver your code as a library to others, you have to give them all your code - all your IP - instead of only the headers of the exposed classes plus a compiled library.
If you change one letter in any file, you will need to recompile everything. Once compile times for a larger project hit minutes, you will lose a lot of productivity.
Otherwise, of course it works, and the result is the same.
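For comparison, a conventional split of the question's foo class into a header plus source file would look like this (same class, just reorganized):
// foo.h
#ifndef FOO_H
#define FOO_H
#include <string>

class foo
{
public:
    std::string str;
    foo(std::string inStr);
    void print();
};
#endif // FOO_H

// foo.cpp
#include "foo.h"
#include <iostream>

foo::foo(std::string inStr) { str = inStr; }

void foo::print() { std::cout << str << std::endl; }
Only foo.cpp needs recompiling when the implementation changes, and main.cpp now includes foo.h instead of foo.cpp.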

How to speed up C++ compilation

Do you have any tips to really speed up compilation of a large C++ codebase?
I compiled Qt 5 with the Visual Studio 2013 compiler; this took at least 3 hours on an Intel quad-core at 3.2 GHz with 8 GB memory and an SSD drive.
What options do I have if I want to do this in 30 minutes?
Thanks.
Forward declarations and PIMPL.
example.h:
// no include
class UsedByExample;
class Example
{
    // ...
    UsedByExample *ptr; // forward declaration is good enough
    UsedByExample &ref; // forward declaration is good enough
};
example.cpp:
#include "used_by_example.h"
// ...
UsedByExample object; // need #include
A little-known / underused fact is that forward declarations are also good enough for function return values:
class Example;
Example f(); // forward declaration is good enough
Only the code which calls f() and has to operate on the returned Example object actually needs the definition of Example.
The purpose of PIMPL, an idiom depending on forward declarations, is to hide private members completely from outside compilation units. This can thus also reduce compile time.
So, if you have this class:
example.h:
#include "other_class.h"
#include "yet_another_class.h"
#include "and_yet_another_class.h"
class Example
{
    // ...
public:
    void f();
    void g();
private:
    OtherClass a;
    YetAnotherClass b;
    AndYetAnotherClass c;
};
You can actually turn it into two classes, one being the implementation and the other the interface.
example.h:
// no more includes
class ExampleImpl; // forward declaration
class Example
{
    // ...
public:
    void f();
    void g();
private:
    ExampleImpl *impl;
};
example_impl.h:
#include "other_class.h"
#include "yet_another_class.h"
#include "and_yet_another_class.h"
class ExampleImpl
{
    // ...
    void f();
    void g();
    // ...
    OtherClass a;
    YetAnotherClass b;
    AndYetAnotherClass c;
};
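For completeness, a sketch of the matching example.cpp, the only file that ever sees the implementation class (this assumes ExampleImpl's members are accessible, e.g. public, and omits construction/destruction of impl for brevity):
// example.cpp
#include "example.h"
#include "example_impl.h"

void Example::f() { impl->f(); } // forwards to the hidden implementation
void Example::g() { impl->g(); }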
Disadvantages may include higher complexity, memory-management issues and an added layer of indirection.
Use a fast SSD setup. Or even create a RAM disk, if suitable on your system.
Project -> Properties -> Configuration Properties -> C/C++ -> General -> Multi-processor Compilation: Yes (/MP)
Tools -> Options -> Projects and Solutions -> Build and Run: and set the maximum number of parallel project builds. (already set to 8 on my system, probably determined on first run of VS2013)
1. Cut down on the number of dependencies, so that if one part of the code changes, the rest doesn't have to be recompiled. I.e. any .c/.cpp/.cc file that includes a particular header needs to be recompiled when that header is changed. So forward-declare stuff if you can.
2. Avoid compiling as much as possible. If there are modules you don't need, leave them out. If you have code that rarely changes, put it in a static library.
3. Don't use excessive amounts of templates. The compiler has to generate a new copy of each version of the template, and all code for a template goes in the header and needs to be re-read over and over again. That in itself is not a problem, but it is the opposite of forward-declaring, and adds dependencies.
4. If you have headers that every file uses, and which change rarely, see if you can put them in a precompiled header. A precompiled header is only compiled once and saved in a format specific to the compiler that is fast to read, and so for classes used a lot, this can lead to great speed-ups (see the sketch below).
Note that this only works with code you have written. For code from third parties, only #2 and #4 can help, but will not improve absolute compile times by much, only reduce the number of times code needs to be analyzed again after you've built it once.
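As a concrete sketch for point 4: with GCC you compile the header once to produce a .gch file, which is then picked up automatically whenever that header is included (common.hpp is a made-up name for your rarely changing includes; MSVC uses /Yc and /Yu instead):
g++ -x c++-header common.hpp -o common.hpp.gch
g++ -c some_file.cpp -o some_file.o   # an #include "common.hpp" inside now hits the .gch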
To actually make things faster, your options are more limited. You already have an SSD, so you're probably not hard disk bound anymore, and swapping with an SSD should also be faster, so RAM is probably not the most pressing issue. So you are probably CPU-bound.
There are a couple of options:
1: You can make one (or a few) .cpp files that include lots of the .cpp files from your project.
Then, you compile only those combined files and ignore the rest (see the sketch after this option's advantages and disadvantages).
Advantages:
compilation is a lot faster on one machine
there should already be tools to generate those files for you. In any case the technology is really simple: you could write a small script to parse the project files and emit the combined compilation units; after that you only need to include those in the project and ignore the rest of the files
Disadvantages:
changing one .cpp file will trigger a rebuild of the combined unit that includes it, so minor changes take a while longer to compile
you might need to change a bit of code to make it work; it might not work out of the box. For example, if you have a file-local function with the same name in two different .cpp files, you will need to rename one of them.
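Such a combined file is nothing more than a list of includes; a made-up sketch:
// unity_build_1.cpp - compiled instead of the individual files it includes
#include "foo.cpp"
#include "bar.cpp"
#include "baz.cpp"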
2: Use a tool like IncrediBuild.
Advantages:
works out of the box for your project. Install the app and you can already compile your project
compilation is really fast even for small changes
Disadvantages:
is not free
you will need more computers to achieve a speedup
You might find alternatives for option 2; here is a related question.
Other tips to improve compilation time: move as much of the code as possible into .cpp files and avoid inline definitions in headers. Also, extensive use of metaprogramming adds build time.
Simply find your bottleneck, and improve that part of your PC. For example, HDD / SSD performance is often a bottleneck.
On the code side of things, use forward declarations to avoid including headers where possible, and thus improve compilation time further.
Don't use templates. Really. If every class you use is templated (and if everything is a class), you have only a single translation unit. Consequently, your build system is powerless to reduce compilation to only the few parts that actually need rebuilding.
If you have a large number of templated classes, but a fair amount of untemplated classes, the situation is not much better: Any templated class that is used in more than one compilation unit has to be compiled several times!
Of course, you don't want to throw out the small, useful templated helper classes, but for all code you write, you should think twice before you make a template out of it. Especially, you should avoid using templates for complex classes that use five or more different templated classes. Your code might actually get a lot more readable as a result.
If you want to go radical, write in C. The only thing that compiles faster than that is assembler (thanks for reminding me of that #black).
In addition to what's been said previously, you should avoid things like:
1. Dynamic binding. The more complex it is, the more work the compiler will have to do.
2. High levels of optimization: compiling for a certain architecture with heavy optimization (e.g. /Ox) takes longer.
Thanks for all your answers.
I have to enable multicore compilation and apply small optimizations a little everywhere.
Most of the time cost is because of templates.
thanks.

Including sources and headers into a toplevel C wrapper

I stumbled upon the following code:
//
// Top-level file that includes all of the C/C++ files required
//
// The C code may be compiled by compiling this top file only,
// or by compiling individual files then linking them together.
#ifdef __cplusplus
extern "C" {
#endif
#include <stdlib.h>
#include "my_header.h"
#include "my_source1.cc"
#include "my_source2.cc"
#ifdef __cplusplus
}
#endif
This is definitely unusual but is it considered bad practice and if so why?
One potential negative I can think of is that a typical build system would have difficulty analysing dependencies. Are there any other reasons that this technique isn't widely used?
First off: extern "C" { #include "my_cpp_file.cc" } just doesn't add up... anyway, I'll attempt to answer your question using a practical example.
Note that sometimes, you do see #include "some_file.c" in a source file. Often this is done because the code in the other file is under development, or it's not certain that the feature that is being developed in that file will make the release.
Another reason is quite simple: to improve readability (not having to scroll too much), or even to reflect your threading. To some, having the child thread's code in a separate file helps, especially when learning threading.
Of course, the major benefit of including translation units into one master translation unit (which, to me, is abusing the preprocessor, but that's not the point) is simple: less I/O while compiling, hence faster compilation. It's all been explained here.
That's one side of the story, though. This technique is not perfect. Here are a couple of considerations. And just to balance out the "magic of unity builds" article, here's the "evils of unity builds" article.
Anyway, here's a short list of my objections, and some examples:
static global variables (be honest, we've all used them)
extern and static functions alike: both are callable everywhere
Debugging would require you to build everything, unless (as the "pro" article suggests) you have both a unity build and a modular build ready for the same project. IMO a bit of a faff.
Not suitable if you're looking to extract a lib from your project that you'd like to reuse later on (think generic shared libraries or DLLs).
Just compare these two situations:
//foo.h
struct foo
{
    char *value;
    int checksum;
    struct foo *next;
};

extern struct foo * get_foo(const char *val);
extern void free_foo(struct foo **foo);
//foo.c
#include <stdlib.h>
#include <string.h>
#include "foo.h"

static int get_checksum(const char *val); /* definition omitted here */

struct foo * get_foo(const char *val)
{
    struct foo *retVal = malloc(sizeof *retVal);
    retVal->value = calloc(strlen(val) + 1, 1);
    strcpy(retVal->value, val);
    retVal->checksum = get_checksum(val);
    retVal->next = NULL;
    return retVal;
}

void free_foo(struct foo **foo)
{
    free((*foo)->value);
    if ((*foo)->next != NULL)
        free_foo(&(*foo)->next);
    free(*foo);
    *foo = NULL;
}
If I were to include this C file in another source file, the static get_checksum function would become callable in that file, too, because it would be part of the same translation unit. Compiled separately, as here, this is not the case.
Name conflicts would be a lot more common, too.
Imagine, too, if you wrote some code to easily perform certain quick MySQL queries. I'd write my own header and source files, and compile them like so:
gcc -c -Wall -std=c99 mysql_file.c `mysql_config --cflags` -o mysql.o
And simply use that compiled mysql.o file in other projects, by linking it like this:
//another_file.c
#include "mysql_file.h"

int main(void)
{
    my_own_mysql_function();
    return 0;
}
Which I can then compile like so:
gcc another_file.c mysql.o `mysql_config --libs` -o my_bin
This saves development time and compilation time, and makes your projects easier to manage (provided you know your way around a makefile).
Another advantage of these .o files shows when collaborating on projects. Suppose I announce a new feature for our mysql.o module. All projects that have my code as a dependency can safely continue to use the last stable compiled mysql.o while I'm working on my piece of the code.
Once I'm done, we can test my module using stable dependencies (other .o files) and make sure I didn't add any bugs.
The problem is that each of your *.cc files will be compiled every time the header is included.
For example, if you have:
// foo.cc:
// also includes implementations of all the functions
// due to my_source1.cc being included
#include "main_header.h"
And:
// bar.cc:
// implementations included (again!)
// ... you get far more object code at best, and a linker error at worst
#include "main_header.h"
Unrelated, but still relevant: Sometimes, compilers have trouble when your headers include C stdlib headers in C++ code.
Edit: As mentioned above, there is also the problem of having extern "C" around your C++ sources.
This is definitely unusual but is it considered bad practice and if so why?
You're likely looking at a "unity build". Unity builds are a fine approach, if configured correctly. It can be problematic to configure an existing library to be built this way, because there may be conflicts due to expanded visibility, including implementations which were intended by an author to be private to a translation unit.
However, the definitions (in *.cc) should be outside of the extern "C" block.
One potential negative I can think of is that a typical build system would have difficulty analysing dependencies. Are there any other reasons that this technique isn't widely used?
It reduces dependency/complexity because the translation unit count goes down.

Define C++ class in Java way

This may not be a real question; please close it if it's not appropriate.
In C++, you can't mix the header and the implementation in a single source file; for example, the class will be defined twice if it's included by both a.cc and b.cc:
// foo.h
#pragma once
class Foo {
void bar() { ... }
};
// a.cc
#include "foo.h"
class A {};
// b.cc
#include "foo.h"
class B {};
// Link all
$ gcc -c -o a.o a.cc
$ gcc -c -o b.o b.cc
$ gcc -o main main.cc foo.o a.o b.o
Then, Foo::bar() is defined, ambiguously, in three object files!
Instead, we must separate the implementation into another file:
// foo.h
#pragma once
class Foo {
void bar();
};
// foo.cc
#include "foo.h"
void Foo::bar() { ... }
Though maybe this is not a big problem, because usually the binary code for Foo::bar() in a.o and b.o is the same. But at least there are some redundant declarations, aren't there? And this redundancy causes some more confusion:
Where to give the default values for the optional parameters?
class Foo {
void bar(int x = 100);
};
or,
void Foo::bar(int x = 100) { ... }
or, both?
Move between inline & not inline...?
If you want to switch a non-inlined function to an inlined one, you have to move the code from foo.cc to foo.h and add the inline keyword as a prefix. And maybe two seconds later you regret what you've done, so you move the inlined one from foo.h back to foo.cc and remove the inline keyword again.
But you wouldn't need to do any of this if the declaration and the definition sat together.
And there are more of this kind of minor headaches.
The idea is: if you write the definition of a function right along with its declaration, there is no way a compiler couldn't infer the function's prototype.
For example, by using a single source.cc and importing only the type information, say with a hypothetical directive:
#include_typedef "source.cc"
Things would be simpler. It would be easy for a compiler to ignore variable allocations and function definitions by just filtering at parse time, even before constructing the AST, wouldn't it?
I'm used to programming with separate source/header files, so I'm certainly capable of doing the separation. You can argue about programming style, but that degrades the question from "what's the correct way to represent the logic" to "what's the better programming style". I don't think Java is a better programming style, but in the source/header separation context, Java gives the correct way.
Do you mind separating the headers from the classes?
If possible, how would you mix them into one?
Is there any C++ front end which can split the declarations and definitions into separate files from mixed sources? Or how could I write such a tool? (I mean for GCC.)
EDIT
I'm not bashing C++; I'm sorry if you got the wrong impression from this question.
As C++ is a multi-paradigm language, you can see how MFC sets up message bindings by using magic macros, and how the Boost libraries implement everything using templates. When the separation comes into the problem domain (thanks to ONeal for pointing out that the domain belongs to the packaging system), one can try to figure out how to resolve it. There are so many kinds of programming styles; to me, because I have spent so much time programming in C++, any small convenience accumulates into a big convenience. Writing the implementation along with the declaration is one such convenience. I guess I could reduce the source lines by at least 10% if I didn't need to separate them. You may ask: if convenience is so important, why not just use Java? Obviously C++ is different from Java; they are not interchangeable.
Inline functions might resolve the problem, but they change the semantics entirely.
With this question, you're bringing up a fundamental shortcoming of the usual C++ compilation system. But leaving that aside, there seems to be something broken in your first example. Indeed, such a style, where you put all of your classes completely inline, works perfectly well. The moment you start working with templates more in-depth, it is even the only way of making things work. E.g. the Boost libraries are mostly just headers; even the most involved technical details are written inline, just because it wouldn't work otherwise.
My guess is that you missed the so-called header guards, and thus got the redefinition warning.
// foo.h --------
#ifndef FOO_H
#define FOO_H
class Foo {
void bar() { ... }
};
#endif // FOO_H
This should just work fine
Where to give the default values for the optional parameters?
The header file. Default arguments are just syntactic sugar which is injected at the call site. Giving them at both the declaration and the definition will not compile.
Move between inline & not inline...?
Depends on two things: A. whether performance matters, and B. whether you have a compiler supporting whole program optimization, also called "link-time code generation" (both GCC and MSVC++ support this; I'm not sure about others). If your compiler does not support LTCG, then you might get additional performance by putting the function in the header file, because the compiler will know more about the code in each translation unit. If performance doesn't matter, try to put as much as possible into the implementation file; this way, if you change a bit of implementation you don't need to recompile your whole codebase. On the other hand, if your compiler supports LTCG then it doesn't matter even from a performance perspective, because the compiler optimizes across all translation units at once.
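For reference, with MSVC++ that means compiling with /GL and linking with /LTCG (GCC's rough equivalent is -flto at both steps); a sketch with made-up file names:
cl /c /O2 /GL foo.cpp main.cpp
link /LTCG foo.obj main.obj /OUT:app.exe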
Do you mind separating the headers from the classes? If possible, how would you mix them into one?
There are advantages to the Java/C#/friends package system, and there are advantages to C++'s include system. Unfortunately for C++, machines with mountains of memory have made Java/C#'s system probably better. The issue with a package system like that is that you need to be able to keep the entire program structure in memory at any given time, whereas with the include system, only a single translation unit needs to be in memory at any given time. This might not matter to the average C++ or Java programmer, but when compiling a codebase of many millions of lines of code, doing something like what C++ does was necessary on extremely under-spec'd machines (like those C was designed to run on in 1972). Nowadays it's not so impractical to keep the program database in memory, so this limitation no longer really applies. But we're going to be stuck with these sorts of things, at least for a while, simply because that's the way the language is structured. Perhaps a future revision of the C++ language will add a packaging system; there's at least one proposal to do so after C++0x.
As for "mind" from a programmer's perspective: I don't mind using either system, though C++'s separation of the class declaration and function definitions is sometimes nice because it saves an indentation level.
The first example you gave is in fact perfectly valid C++; the functions defined in the body of the class declaration are the same as if you had defined them as inline. You must have included the .h twice without some kind of include guards to make sure the compiler only saw the first copy. You may have seen this before:
#ifndef FOO_H
#define FOO_H 1
...
#endif
This ensures that if foo.h is included twice, the second copy sees that FOO_H is defined and thus skips the code.
Default parameters must be declared in the .h, not the .cc (.cpp). The reason for this is that the compiler must create a full function call with all parameters when the calling code is compiled, so it must know the value of any default parameters at the point of the call. Defining them again in the code definition is redundant and generally treated as an error.
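A short sketch of the rule (file names invented for illustration):
// widget.h
class Widget {
public:
    void bar(int x = 100); // default value belongs here, in the declaration
};

// widget.cpp
#include "widget.h"
void Widget::bar(int x) // no default repeated here
{
    // ...
}

// In a caller, w.bar() is compiled exactly as if it were w.bar(100).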