I stumbled upon the following code:
//
// Top-level file that includes all of the C/C++ files required
//
// The C code may be compiled by compiling this top file only,
// or by compiling individual files then linking them together.
#ifdef __cplusplus
extern "C" {
#endif
#include <stdlib.h>
#include "my_header.h"
#include "my_source1.cc"
#include "my_source2.cc"
#ifdef __cplusplus
}
#endif
This is definitely unusual but is it considered bad practice and if so why?
One potential negative I can think of is that a typical build system would have difficulty analysing dependencies. Are there any other reasons that this technique isn't widely used?
First off: extern "C" { #include "my_cpp_file.cc" } just doesn't add up... anyway, I'll attempt to answer your question using a practical example.
Note that sometimes, you do see #include "some_file.c" in a source file. Often this is done because the code in the other file is under development, or it's not certain that the feature that is being developed in that file will make the release.
Another reason is quite simple: to improve readability (not having to scroll so much), or even to reflect your threading structure. To some, having the child thread's code in a separate file helps, especially when learning threading.
Of course, the major benefit of including translation units into one master translation unit (which, to me, is abusing the pre-processor, but that's not the point) is simple: less I/O while compiling, hence, faster compilation. It's all been explained here.
That's one side of the story, though. This technique is not perfect. Here's a couple of considerations. And just to balance out the "the magic of unity builds" article, here's the "the evils of unity builds" article.
Anyway, here's a short list of my objections, and some examples:
static global variables (be honest, we've all used them)
extern and static functions alike: both are callable everywhere
Debugging would require you to build everything, unless (as the "pro" article suggests) you have both a unity build and a modular build ready for the same project. IMO a bit of a faff
Not suitable if you're looking to extract a lib from your project you'd like to re-use later on (think generic shared libraries or DLLs)
Just compare these two situations:
//foo.h
struct foo
{
char *value;
int checksum;
struct foo *next;
};
extern struct foo * get_foo(const char *val);
extern void free_foo( struct foo **foo);
//foo.c
#include <stdlib.h>
#include <string.h>
#include "foo.h"
static int get_checksum(const char *val);
struct foo * get_foo(const char *val)
{
    struct foo *retVal = malloc(sizeof *retVal);
    retVal->value = calloc(strlen(val) + 1, 1);
    strcpy(retVal->value, val);
    retVal->checksum = get_checksum(val);
    retVal->next = NULL;
    return retVal;
}
void free_foo(struct foo **foo)
{
    free((*foo)->value);
    if ((*foo)->next != NULL)
        free_foo(&(*foo)->next);
    free(*foo);
    *foo = NULL;
}
If I were to include this C file in another source file, the get_checksum function would be callable in that file, too. With separate compilation, that is not the case: static limits its visibility to this translation unit.
Name conflicts would be a lot more common, too.
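Here's a minimal sketch of that (the variable name is invented for illustration), using the file names from your question:
//my_source1.cc
static int counter = 0;   // file-local in a normal, separate build
//my_source2.cc
static int counter = 0;   // fine: a different translation unit
//top-level file
#include "my_source1.cc"
#include "my_source2.cc"  // error: redefinition of 'counter' -- both now live in one translation unit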
Imagine, too, that you wrote some code to easily perform certain quick MySQL queries. I'd write my own header and source files, and compile them like so:
gcc -Wall -std=c99 -c mysql_file.c `mysql_config --cflags` -o mysql.o
And simply use that compiled mysql.o file in other projects, by linking it like this:
//another_file.c
#include "mysql_file.h"
int main ( void )
{
my_own_mysql_function();
return 0;
}
Which I can then compile like so:
gcc another_file.c mysql.o `mysql_config --libs` -o my_bin
This saves development time, compilation time, and makes your projects easier to manage (provided you know your way around a makefile).
Another advantage of these .o files shows up when collaborating on projects. Suppose I announce a new feature for our mysql.o file. All projects that have my code as a dependency can safely continue to use the last stable compiled mysql.o file while I'm working on my part of the code.
Once I'm done, we can test my module using stable dependencies (other .o files) and make sure I didn't add any bugs.
The problem is that each of your *.cc files will be compiled every time the header is included.
For example, if you have:
// foo.cc:
// also includes implementations of all the functions
// due to my_source1.cc being included
#include "main_header.h"
And:
// bar.cc:
// implementations included (again!)
// ... you get far more object code at best, and a linker error at worst
#include "main_header.h"
Unrelated, but still relevant: Sometimes, compilers have trouble when your headers include C stdlib headers in C++ code.
Edit: As mentioned above, there is also the problem of having extern "C" around your C++ sources.
This is definitely unusual but is it considered bad practice and if so why?
You're likely looking at a "Unity Build". Unity builds are a fine approach, if configured correctly. It can be problematic to configure a library to be built this way initially because there may be conflicts due to expanded visibility -- including implementations which were intended by an author to be private to a translation.
However, the definitions (in *.cc) should be outside of the extern "C" block.
One potential negative I can think of is that a typical build system would have difficulty analysing dependencies. Are there any other reasons that this technique isn't widely used?
It reduces dependency/complexity because the translation unit count goes down.
Related
Ok, so I don't have a problem, but a question:
When using C++, you can move a class to another file and include it without creating a header, like this:
foo.cpp :
#include <iostream>
#include <string>
using namespace std;
class foo
{
public:
string str;
foo(string inStr)
{
str = inStr;
}
void print()
{
cout<<str<<endl;
}
};
main.cpp :
#include "foo.cpp"
using namespace std;
int main()
{
foo Foo("That's a string");
Foo.print();
return 0;
}
So the question is: is this method any worse than using header files? It's much easier and much more clean, but is it any slower, any more bug-inducing etc?
I've searched for this topic for a long time now but I haven't seen a single topic on the internet considering this even an option...
So the question is: is this method any worse than using header files?
You might consider reviewing the central idea of what the "C++ translation unit" is.
In your example, what the preprocessor does is as if it inserts a copy of foo.cpp into an internal copy of main.cpp. The preprocessor does this, not the compiler.
So ... the compiler never sees your code as separate files. It is this single, concatenated 'translation unit' that is submitted to the compiler. There is no magic in .hh nor .cc, except that they fulfill your peers' (or boss's) expectations.
Now think about your question ... the translation unit is neither of your source files, nor any of your system include files, but it is one stream of text, one thing, put together by the preprocessor. So how would it be better or worse?
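If you want to see that single stream of text for yourself, you can ask the compiler to stop after preprocessing (GCC shown here; other compilers have an equivalent switch):
g++ -E main.cpp -o main.ii
main.ii is the actual translation unit the compiler would compile, with foo.cpp (and <iostream>) pasted in.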
It's much easier and much more clean,
It can be. I often take this 'different' approach in my 'private' coding efforts.
When I did a quick evaluation of using gmpxx.h (mpz_class) for a factorial program, I did indeed take just these kinds of shortcuts, and did not need a .hpp file to properly create my compilation unit. FYI, the factorial of 12345 is more than 45,000 digits long; it is pointless to read the characters anyway.
For a 'more formal' effort (a job, a collaboration, etc.), I always use headers, separate compilation, and build a library of functions useful to the app, as part of how things should be done, especially if I might share the code or contribute to a company's archives. There are too many good reasons for me to describe here why I recommend you learn these issues.
but is it any slower, any more bug-inducing etc?
I think not. There is one compilation unit either way, and concatenating the parts has to be done right, but I think it is no more difficult.
I've searched for this topic for a long time now but I haven't seen a single topic on the internet considering this even an option...
I'm not sure I've ever seen it discussed either; I picked up the information along the way. Separate compilation and library development are generally perceived to save development time. (Time is money, right?)
Also, a library, and header files, are how you package your success for others to use, how you can improve your value to a team.
There's no semantic difference between naming your files .cpp or .hpp (or .c / .h).
People will be surprised by the #include "foo.cpp", but the compiler doesn't care.
You've still created a "header file", but you've given it the ".cpp" extension. File extensions are for the programmer, the compiler doesn't care.
From the compiler's point of view, there is no difference between your example and
foo.h :
#include <iostream>
using namespace std;
class foo
{
//...
};
main.cpp :
#include "foo.h"
using namespace std;
int main()
{
// ...
}
A "header file" is just a file that you include at the beginning i.e. the head of another file (technically, headers don't need to be at the beginning and sometimes are not but typically they are, hence the name).
You've simply created a header file named foo.cpp.
Naming header files with an extension that is conventionally used for source files is not a good idea. Some IDEs and other tools may erroneously assume that your header is a source file and therefore attempt to compile it as such, wasting resources if nothing else.
Not to mention the confusion it may cause your colleagues. Source files may contain definitions that the C++ standard allows to appear only once in the entire program (see the one definition rule, ODR), because source files are not included in other files. If you name your header as if it were a source file, someone might assume that they can put such definitions there when they can't.
If you ever build some larger project, the two main differences will become clear to you:
If you deliver your code as a library to others, you have to give them all your code - all your IP - instead of only the headers of the exposed classes plus a compiled library.
If you change one letter in any file, you will need to recompile everything. Once compile times for a larger project hits minutes, you will lose a lot of productivity.
Otherwise, of course it works, and the result is the same.
I've recently picked up C++ as part of my course, and I'm trying to understand in more depth the partnership between headers and classes. From every example or tutorial I've looked up on header files, they all use a class file with a constructor and then follow up with methods if any were included. However, I'm wondering if it's fine to just use header files to hold a group of related functions without the need to make an object of a class every time you want to use them.
//main file
#include <iostream>
#include "Example.h"
#include "Example2.h"
int main()
{
//Example 1
Example a; //I have to create an object of the class first
a.square(4); //Then I can call the function
//Example 2
square(4); //I can call the function without the need of a constructor
std::cin.get();
}
In the first example I create an object and then call the function; I use the two files 'Example.h' and 'Example.cpp'.
//Example1 cpp
#include <iostream>
#include "Example.h"
void Example::square(int i)
{
i *= i;
std::cout << i << std::endl;
}
//Example1 header
class Example
{
public:
void square(int i);
};
In example2 I call the function directly from file 'Example2.h' below
//Example2 header
#include <iostream>
void square(int i)
{
i *= i;
std::cout << i;
}
Ultimately I guess what I'm asking is whether it's practical to use just a header file to hold a group of related functions without creating a related class file. And if the answer is no, what's the reason behind that? Either way I'm sure I've overlooked something, but as ever I appreciate any kind of insight from you guys on this!
Of course, it's just fine to have only headers (as long as you consider the One Definition Rule as already mentioned).
You can as well write C++ sources without any header files.
Strictly speaking, headers are nothing more than pieces of source code stored in files which might be #included (i.e. pasted) into multiple C++ source files (i.e. translation units). Remembering this basic fact has sometimes been quite helpful for me.
I made the following contrived counter-example:
main.cc:
#include <iostream>
// define float
float aFloat = 123.0;
// make it extern
extern float aFloat;
/* This should be included from a header
* but instead I prevent the pre-processor usage
* and simply do it by myself.
*/
extern void printADouble();
int main()
{
std::cout << "printADouble(): ";
printADouble();
std::cout << "\n"
"Surprised? :-)\n";
return 0;
}
printADouble.cc:
/* This should be included from a header
 * but instead I prevent the pre-processor usage
 * and simply do it by myself.
 *
 * This is intentionally of the wrong type
 * (to show how it can be done wrong).
 */
#include <iostream>
// use extern aFloat
extern double aFloat;
// make it extern
extern void printADouble();
void printADouble()
{
    std::cout << aFloat;
}
Hopefully, you have noticed that I declared
extern float aFloat in main.cc
extern double aFloat in printADouble.cc
which is a disaster.
Problem when compiling main.cc? No. The translation unit is consistent syntactically and semantically (for the compiler).
Problem when compiling printADouble.cc? No. The translation unit is consistent syntactically and semantically (for the compiler).
Problem when linking this mess together? No. Linker can resolve every needed symbol.
Output:
printADouble(): 5.55042e-315
Surprised? :-)
as expected (assuming you, like me, expected nothing sensible).
Live Demo on wandbox
printADouble() accessed the defined float variable (4 bytes) as double variable (8 bytes). This is undefined behavior and goes wrong on multiple levels.
So, using headers doesn't enforce modular programming in C++; it merely enables (some kind of) it. (I didn't recognize the difference until I once had to use a C compiler which did not (yet) have a pre-processor. The issue sketched above hit me very hard, but it was really enlightening for me, too.)
IMHO, header files are a pragmatic replacement for an essential feature of modular programming (i.e. the explicit definition of interfaces and the separation of interfaces and implementations as a language feature). This seems to have annoyed other people as well. Have a look at A Few Words on C++ Modules to see what I mean.
C++ has a One Definition Rule (ODR). This rule states that functions and objects should be defined only once. Here's the problem: headers are often included more than once. Your square(int) function might therefore be defined twice.
The ODR is not an absolute rule. If you declare square as
//Example2 header
inline void square(int i)
// ^^^
{
i *= i;
std::cout << i;
}
then the compiler will inform the linker that there are multiple definitions possible. It's your job then to make sure all inline definitions are identical, so don't redefine square(int) elsewhere.
Templates and class definitions are exempt; they can appear in headers.
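For instance, a header like this (the names are purely illustrative) can safely be included from many .cpp files:
// math_utils.h -- hypothetical example
#pragma once
#include <iostream>
class Squarer
{
public:
    void square(int i) { std::cout << i * i << std::endl; }   // defined in-class: implicitly inline
};
template <typename T>
T cube(T x) { return x * x * x; }                              // template definitions may live in headers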
C++ is a multi paradigm programming language, it can be (at least):
procedural (driven by conditions and loops)
functional (driven by recursion and specialization)
object oriented
declarative (providing compile-time arithmetic)
See a few more details in this quora answer.
Object oriented paradigm (classes) is only one of the many that you can leverage programming in C++.
You can mix them all, or just stick to one or a few, depending on what's the best approach for the problem you have to solve with your software.
So, to answer your question:
yes, you can group a bunch of (better if) inter-related functions in the same header file. This is more common in "old" C programming language, or more strictly procedural languages.
That said, as in MSalters' answer, just be conscious of the C++ One Definition Rule (ODR). Use the inline keyword if you put the definition of the function (its body) in the header, and not only its declaration (templates exempted).
See this SO answer for description of what "declaration" and "definition" are.
Additional note
To enforce the answer, and extend it to also other programming paradigms in C++,
in the latest few years there is a trend of putting a whole library (functions and/or classes) in a single header file.
This can be commonly and openly seen in open source projects; just go to GitHub or GitLab and search for "header-only".
The common way is and always has been to put code in .cpp files (or whatever extension you like) and declarations in headers.
There is occasionally some merit to putting code in the header, this can allow more clever inlining by the compiler. But at the same time, it can destroy your compile times since all code has to be processed every time it is included by the compiler.
Finally, it is often annoying to have circular object relationships (sometimes desired) when all the code is the headers.
One exception is templates. Many newer "modern" libraries such as Boost make heavy use of templates and are often "header only." However, this should only be done when dealing with templates, as it is the only way to make them work.
Some downsides of writing header only code
If you search around, you will see quite a lot of people trying to find a way to reduce compile times when dealing with Boost. For example: How to reduce compilation times with Boost Asio, which describes a 14 s compile of a single 1K file with Boost included. 14 s may not seem to be "exploding", but it is certainly a lot longer than typical and can add up quite quickly when dealing with a large project. Header-only libraries do affect compile times in a quite measurable way; we just tolerate it because Boost is so useful.
Additionally, there are many things which cannot be done in headers only (even boost has libraries you need to link to for certain parts such as threads, filesystem, etc). A Primary example is that you cannot have simple global objects in header only libs (unless you resort to the abomination that is a singleton) as you will run into multiple definition errors. NOTE: C++17's inline variables will make this particular example doable in the future.
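For reference, the C++17 escape hatch looks roughly like this (the variable name is made up):
// globals.h -- C++17 or later
#pragma once
inline int request_count = 0;   // one shared object, even when this header is included in many .cpp files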
To be more specific about Boost: Boost is a library, not user-level code, so it doesn't change that often. In user code, if you put everything in headers, every little change will cause you to have to recompile the entire project. That's a monumental waste of time (and is not the case for libraries that don't change from compile to compile). When you split things between header/source and, better yet, use forward declarations to reduce includes (see the sketch below), you can save hours of recompiling when added up across a day.
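Here's what that forward-declaration trick can look like (class names invented for illustration):
// widget.h
#pragma once
class Renderer;                  // forward declaration -- no #include "renderer.h" needed here
class Widget
{
public:
    void draw(Renderer& r);      // references and pointers only need the class to be declared
private:
    Renderer* renderer_ = nullptr;
};
// widget.cpp would #include "renderer.h" to actually call into Renderer.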
I want to split up all classes from my program into cpp and hpp files, each file containing few classes from the same topic. Like this:
main.cpp:
#include <cstdio>
using namespace std;
class TopicFoo_Class1 {
    ... (Functions, variables, public/privates, etc.)
};
class TopicFoo_Class2 {
    ... (Functions, variables, public/privates, etc.)
};
class TopicBar_Class1 {
    ... (Stuff)
};
class TopicBar_Class2 {
    ... (Stuff)
};
int main(int argc, const char** argv) { ... }
into:
foo.hpp:
class TopicFoo_Class1 {
    ... (Declarations)
};
class TopicFoo_Class2 {
    ... (Declarations)
};
foo.cpp:
#include <cstdio>
#include "foo.hpp"
void TopicFoo_Class1::function1() { ... }
void TopicFoo_Class2::function1() { ... }
bar.hpp:
class TopicBar_Class1 {
    ... (Declarations)
};
class TopicBar_Class2 {
    ... (Declarations)
};
bar.cpp:
#include <cstdio>
#include "bar.hpp"
void TopicBar_Class1::function1() { ... }
void TopicBar_Class2::function1() { ... }
main.cpp:
#include "foo.hpp"
#include "bar.hpp"
int main(int argc, const char** argv) { ... }
The plan is to compile foo.o and bar.o, then compile main.cpp along with the object files to form foo_bar_executable, instead of just compiling a big main.cpp into foo_bar_executable.
This is just an example, header guards and better names will be included.
I'm wondering, will this affect program speed? Some cpps will depend on other topics' hpps to compile, and multiple cpps will depend on one hpp.
Could the multiple includes of the same file by different cpp files cause lag?
Is there a better way to split up my code?
Which one is faster?
Is it possible to run g++ main.cpp foo.cpp bar.cpp -o foo_bar_executable?
How would the above command work?
Should I make foo.hpp contain most required includes and include it in most files? This might make it faster(?)
I'm wondering, will this affect program speed? Some cpps will depend on other topics' hpps to compile, and multiple cpps will depend on one hpp.
You are mixing things that affect the build speed with run-time speed of your executable. The run-time speed shouldn't change. For a small project the difference in build time may be negligible. For larger projects, initial build times may be long, but subsequent ones may become much shorter. The reason is that you only need to rebuild what changed, and re-link.
Could the multiple includes of the same file by different cpp files cause lag?
Including a file always adds some delta to the build time. But it's something you'd need to measure. Nowadays compilers are pretty good with doing that in a smart fashion. If you couple that with smart header specification (no superfluous includes in headers, forward declarations and such), and precompiled headers, you shouldn't see a significant slowdown.
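With GCC, for example, a precompiled header can be produced and used roughly like this (the header name is illustrative):
g++ -x c++-header common.hpp -o common.hpp.gch
g++ -c foo.cpp
GCC will use common.hpp.gch automatically whenever foo.cpp starts with #include "common.hpp", falling back to the plain header if the precompiled one is stale.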
Is there a better way to split up my code?
Depends on the code. It's highly subjective.
Which one is faster?
Measure for yourself, we can't predict it for you.
Is it possible to run g++ main.cpp foo.cpp bar.cpp -o foo_bar_executable?
Last I checked the GCC docs, it was.
How would the above command work?
It will take the above source files and produce a single executable
Should I make foo.hpp contain most required includes and include it in most files? This might make it faster(?)
I wouldn't recommend that. Include the bare minimum to make the single line program #include "foo.hpp" compile successfully. Headers should strive to be minimal and complete (kind of like a certain quality of posts on SO).
I'm wondering, will this affect program speed?
No.
Could the multiple includes of the same file by different cpp files cause lag?
No.
Which one is faster?
Speed is not really important to most programs, and how you arrange your files has no effect on run-time performance.
Is it possible to run g++ main.cpp foo.cpp bar.cpp -o foo_bar_executable
Yes
How would the above command work?
RTFM
I'm wondering, will this affect program speed?
It can, but it might not.
When functions are not defined in a single translation unit, the compiler can not optimize the function calls using inline expansion. However, if enabled, some linkers can perform inlining across translation units.
On the other hand, your program might not benefit from inlining optimization.
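With GCC or Clang, that cross-translation-unit inlining is enabled by link-time optimization; a hedged sketch of the invocation:
g++ -O2 -flto main.cpp foo.cpp bar.cpp -o foo_bar_executable
-flto lets the optimizer see all three files at link time, so splitting into multiple .cpp files costs little even for hot call paths.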
Some cpps will depend on other topics' hpps to compile, and multiple cpps will depend on one hpp.
This is irrelevant to the speed of the compiled program.
Could the multiple includes of the same file by different cpp files cause lag?
It may have a (possibly insignificant) effect on compilation time from scratch.
Is there a better way to split up my code?
This is subjective. The more you split your code, the less you need to recompile when you make changes. The less you split, the faster it is to compile the entire project from scratch.
Which one is faster?
Possibly neither.
Is it possible to run g++ main.cpp foo.cpp bar.cpp -o foo_bar_executable?
Yes.
How would the above command work?
Use the man g++ command.
Should I make foo.hpp contain most required includes and include it in most files? This might make it faster(?)
No. Including unneeded files slows compilation. Besides, this severely reduces the biggest advantage of splitting translation units, which is not needing to recompile the entire project when a small part changes.
No, it will not affect speed except if you're relying on heavy optimizations, but as a self-described "newbie" you likely won't be worrying about this yet. In the trade-off between structuring code to improve optimization vs. improving maintainability, maintenance will usually be the higher priority.
It might make compilation longer, but won't affect the executable. With a proper makefile, you might see compilation actually improve.
It's all subjective. Some packages split up the source per function.
No effect on the executable.
Yes, but I would recommend learning about makefiles; then you're compiling only what needs to be compiled.
It will compile the files, link to some default libraries, and output the executable. If you're interested in what is happening behind the scenes, compile with verbosity turned on. You can also compile to assembler, which can be really interesting to look at.
Ideally, each source file should include only the headers it needs.
This maybe not a real question, please close this if it's not appropriate.
In C++, you can't mix the header and implementation in a single source file, for example, the class will be defined twice if it's included by a.cc and b.cc:
// foo.h
#pragma once
class Foo {
void bar() { ... }
};
// a.cc
#include "foo.h"
class A {};
// b.cc
#include "foo.h"
class B {};
// Link all
$ gcc -c -o a.o a.cc
$ gcc -c -o b.o b.cc
$ gcc -o main main.cc foo.o a.o b.o
Then Foo::bar() is ambiguous across three object files!
Instead, we must separate the implementation into another file:
// foo.h
#pragma once
class Foo {
void bar();
};
// foo.cc
#include "foo.h"
void Foo::bar() { ... }
Though maybe it's not a big problem, because usually the binary code for Foo::bar() in a.o and b.o is the same. But at least there are some redundant declarations, aren't there? And there is some more confusion caused by this redundancy:
Where to give the default values for the optional parameters?
class Foo {
void bar(int x = 100);
};
or,
void Foo::bar(int x = 100) { ... }
or, both?
Move between inline & not inline...?
If you want to switch a non-inlined function to an inlined one, you should move the code from foo.cc to foo.h and add the inline keyword prefix. And maybe two seconds later you regret what you've done, so you move the inlined one from foo.h back to foo.cc and remove the inline keyword again.
But you wouldn't need to do so if the declaration and definition sat together.
And there are more minor headaches of this kind.
The idea is, if you write the definition of a function right along with its declaration, there is no reason a compiler couldn't infer the function's prototype.
For example, by using a single source.cc and importing the type information only, say via
#include_typedef "source.cc"
things would be simpler. It's easy for a compiler to ignore variable allocations and function definitions by just filtering at parse time, even before constructing the AST, isn't it?
I'm used to programming with separate source/header files, so I'm certainly capable of doing the separation. You can argue about programming style, but that degrades the question from "what's the correct way to represent the logic" to "what's the better programming style". I don't think Java has a better programming style in the source/header-separation context, but Java gives the correct way.
Do you mind separating the headers from the classes?
If possible, how would you mix them into one?
Is there any C++ front-end which can separate the declaration and definition into separate files from mixed sources? Or how could I write one? (I mean for GCC.)
EDIT
I'm not bashing C++ in any way; I'm sorry if you got the wrong impression from this question.
As C++ is a multi-paradigm language, you can see how MFC sets up message bindings by using magic macros, and how the Boost library implements everything using templates. When the separation becomes part of the problem domain (thanks to ONeal for pointing out that the domain belongs to the packaging system), one can try to figure out how to resolve it. There are so many kinds of programming styles; to me, because I have spent so much time programming C++, any small convenience accumulates into a big convenience, and writing the implementation along with the declaration is one of those conveniences. I guess I could reduce the source lines by at least 10% if I didn't need to separate them. You may ask: if convenience is so important, why not just use Java? Obviously C++ is different from Java; they are not interchangeable.
Inline functions may resolve the problem, but they change the semantics.
With this question, you're bringing up a fundamental shortcoming of the usual C++ compilation system. But leaving that aside, there seems to be something broken with your first example. Indeed, such a style, where you put all of your classes completely inline, works perfectly well. The moment you start working with templates more in depth, it is even the only way of making things work. E.g. the Boost libraries are mostly just headers; even the most involved technical details are written inline, just because it wouldn't work otherwise.
My guess is that you missed the so-called header guards -- and thus got the redefinition warning.
// foo.h --------
#ifndef FOO_H
#define FOO_H
class Foo {
void bar() { ... }
};
#endif // FOO_H
This should just work fine
Where to give the default values for the optional parameters?
The header file. Default arguments are just syntactic sugar that is injected at the call site. Giving them at the definition point should not compile.
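A small sketch (the caller is invented for illustration) of why the default has to be visible where the call is compiled:
// foo.h
class Foo {
public:
    void bar(int x = 100);   // the default lives with the declaration that callers see
};
// caller.cc
#include "foo.h"
void call(Foo& f)
{
    f.bar();                 // the compiler rewrites this as f.bar(100) right here, at the call site
}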
Move between inline & not inline...?
Depends on two things: A. if performance matters, and B. if you have a compiler supporting Whole Program Optimization (both GCC and MSVC++ support this (not sure about others); also called "link-time code generation"). If your compiler does not support LTCG, then you might get additional performance by putting the function in the header file, because the compiler will know more about the code in each translation unit. If performance doesn't matter, try to put as much as possible into the implementation file. This way, if you change a bit of implementation you don't need to recompile your whole codebase. On the other hand, if your compiler supports LTCG then it doesn't matter even from a performance perspective, because the compiler optimizes across all translation units at once.
Do you mind to separate the headers from the classes? If possible, how will you mix them into one?
There are advantages to Java/C#/friends' package system, and there are advantages to C++'s include system. Unfortunately for C++, machines with mountains of memory have made Java/C#'s system probably better. The issue with a package system like that is that you need to be able to keep the entire program structure in memory at any given time, whereas with the include system, only a single translation unit needs to be in memory at any given time. This might not matter for the average C++ or Java programmer, but when compiling a codebase of many millions of lines of code, doing something like what C++ does would have been necessary for extremely under-spec'd machines (like those C was designed to run on in 1972). Nowadays it's not so impractical to keep the program database in memory, so this limitation no longer really applies. But we're going to be stuck with these sorts of things, at least for a while, simply because that's the way the language is structured. Perhaps a future revision of the C++ language will add a packaging system -- there's at least one proposal to do so after C++0x.
As for "mind" from a programmer's perspective, I don't mind using either system, though C++'s separation of the class declaration and function definition is sometimes nice because it saves an indentation level.
The first example you gave is in fact perfectly valid C++; the functions defined in the body of the class declaration are the same as if you had defined them as inline. You must have included the .h twice without some kind of include guards to make sure the compiler only saw the first copy. You may have seen this before:
#ifndef FOO_H
#define FOO_H 1
...
#endif
This ensures that if foo.h is included twice that the second copy sees that FOO_H is defined and thus skips the code.
Default parameters must be declared in the .h, not the .cc (.cpp). The reason for this is that the compiler must create a full function call with all parameters when the calling code is compiled, so it must know the value of any default parameters at the point of the call. Defining them again in the code definition is redundant and generally treated as an error.
I have a .h file which is used almost throughout the source code (in my case, it is just one directory with .cc and .h files). Basically, I keep two versions of the .h file: one with some debugging info for code analysis, and the regular one. The debugging version has only one extra macro and an extern function declaration. I switch pretty regularly between the two versions. However, this causes a 20-minute recompilation.
How would you recommend avoiding this recompilation? Perhaps setting some flags, or creating a different tree? What are the common solutions, and how do I apply them?
The new .h file contains:
extern void (foo)(/*some params*/);
/*** extra stuff ****/
#define foo(...) (call_some_function(), (foo)(__VA_ARGS__))
/* some functions for debugging */
As you can see, that will cause a recompilation. I build with gcc on Linux AS 3.
Thanks
To avoid the issue with the external function, you could leave the prototype in both versions; it does no harm being there if it's not used. But with the macro there's no chance, you can forget it: it needs recompilation because of the code replacement.
I would make intensive use of precompiled headers to speed up recompilation (as it cannot be avoided). See GCC and Precompiled Headers; for other compilers, use your favorite search engine. Any modern compiler should support this feature; for large-scale projects you pretty much have to use it, otherwise you'll be really unproductive.
Besides this, if you have enough disk space, I would check out two working copies, each of them compiled with different settings. You would have to commit and update each time to transfer changes to the other working copy, but that will surely take less than 20 minutes ;-)
You need to minimize the amount of your code (specifically, the number of files) that depends on that header file. Other than that you can't do much: when you need to change the header, you will face recompilation of everything that includes it.
So you need to reorganize your code in such a way that only a select few files include the header. For example, you could move the functions that need its contents into a separate source file (or several files) and include the header only in those, not in other files.
If the debugging macros are actually used in most of the files that include the header, then they need to be recompiled anyway! In this case, you have two options:
Keep two sets of object files, one without debugging code and one with. Use different makefiles/build configurations to allow them to be kept in separate locations.
Use a global variable, along these lines:
In your common.h:
extern int debug;
In your debug.c:
int debug = 1;
Everywhere else (you can use a macro for this; see the sketch below):
if (debug) {
/* do_debug_stuff */
}
A slight variation of the concept is to call an actual function in debug.c that might just do nothing if debugging is disabled.
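The macro mentioned above could look something like this (DEBUG_PRINT is an invented name; only debug.c needs recompiling to change the behaviour):
/* in common.h, next to the extern declaration */
#include <stdio.h>
extern int debug;
#define DEBUG_PRINT(msg) do { if (debug) fprintf(stderr, "%s\n", (msg)); } while (0)
/* in debug.c */
int debug = 1;   /* flip to 0 to silence the output without touching the header */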
I don't exactly understand your problem. As I understand it, you are trying to create a test framework. I can suggest something: you may move the changing stuff into a .c file, as follows.
In new.h
extern void (foo)(/*some params*/);
/*** extra stuff ****/
#define foo(...) (call_some_function_dummy(), (foo)(__VA_ARGS__))
/* some functions for debugging */
In new.c
void call_some_function_dummy(void)
{
#ifdef _DEBUG
    call_some_function();
#endif
}
Now if you switch to debug mode, only new.c needs to be recompiled, and compilation will be much faster. Hope this helps you.
Solution 2:
In New.h
extern void (foo)(/*some params*/);
/*** extra stuff ****/
#define foo(...) (call_some_function[0](), (foo)(__VA_ARGS__))
/* some functions for debugging */
In New.c
#ifdef _DEBUG
void (*call_some_function[])(void) =
{
    call_some_function0,
    call_some_function1
};
#else
void (*call_some_function[])(void) =
{
    dummy_nop,
    dummy_nop
};
#endif
Why not move the macro to its own header and only include it where needed?
Just another thought.
I cannot see how you can avoid recompiling the dependent source files. However, you may be able to speed up the rest of the build.
For example, you could use a form of precompiled headers, and only include your header in the code files and not in other headers. Another way could be to parallelise the build, or perhaps use a fast piece of hardware such as a solid-state drive.
Remember that hardware is cheap and programmers are expensive, to quote whatshisname.