How to handle optimizations in code - C++

I am currently writing various optimizations for some code. Each of these optimizations has a big impact on the code's efficiency (hopefully) but also on the source code. However, I want to keep the possibility of enabling and disabling any of them for benchmarking purposes.
I traditionally use the #ifdef OPTIM_X_ENABLE/#else/#endif method, but the code quickly becomes too hard to maintain.
One can also create SCM branches for each optimization. That is much better for code readability, until you want to enable or disable more than a single optimization.
Is there any other, hopefully better, way to work with optimizations?
EDIT :
Some optimizations cannot work simultaneously. I may need to disable an old optimization to benchmark a new one and see which one I should keep.

I would create a branch for an optimization, benchmark it until you know it yields a significant improvement, and then simply merge it back to trunk. I wouldn't bother with the #ifdefs once it's back on trunk; why would you need to disable it once you know it's good? You always have the repository history if you want to be able to roll back a particular change.

There are many ways of choosing which parts of your code will execute. Conditional inclusion using the preprocessor is usually the hardest to maintain, in my experience, so try to minimize it if you can. You can separate the functionality (optimized, unoptimized) into different functions and then call one or the other depending on a flag. Or you can create an inheritance hierarchy and use virtual dispatch. Of course it depends on your particular situation. Perhaps if you could describe it in more detail you would get better answers.
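For the virtual-dispatch option just mentioned, a minimal sketch of what it could look like (class and function names are illustrative, not from the question):

#include <memory>

// Common interface; each optimization variant becomes a subclass.
struct Algorithm
{
    virtual ~Algorithm() = default;
    virtual void run() = 0;
};

struct ReadableAlgorithm : Algorithm
{
    void run() override { /* straightforward implementation */ }
};

struct OptimizedAlgorithm : Algorithm
{
    void run() override { /* tuned implementation */ }
};

// Pick a variant at runtime, e.g. from a benchmark harness flag.
std::unique_ptr<Algorithm> make_algorithm(bool optimized)
{
    if (optimized)
        return std::make_unique<OptimizedAlgorithm>();
    return std::make_unique<ReadableAlgorithm>();
}

The virtual call costs one indirection per invocation, which is usually negligible unless the function is tiny and sits in a hot loop.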
However, here's a simple method that might work for you: create two sets of functions (or classes, whichever paradigm you are using). Separate them into different namespaces, one for optimized code and one for readable code. Then simply choose which set to use conditionally. Something like this:
#include <iostream>
#include "optimized.h"
#include "readable.h"

#define USE_OPTIMIZED

#if defined(USE_OPTIMIZED)
using namespace optimized;
#else
using namespace readable;
#endif

int main()
{
    f();
}
Then in optimized.h:
#include <iostream>

namespace optimized
{
    void f() { std::cout << "optimized selected" << std::endl; }
}
and in readable.h:
#include <iostream>

namespace readable
{
    void f() { std::cout << "readable selected" << std::endl; }
}
This method does unfortunately still need the preprocessor, but the usage is minimal. Of course you can improve on it by introducing a wrapper header:
wrapper.h:
#include "optimized.h"
#include "readable.h"
#define USE_OPTIMIZED
#if defined(USE_OPTIMIZED)
using namespace optimized;
#else
using namespace readable;
#endif
Now simply include this header and you further minimize the preprocessor usage. By the way, the usual separation of header/cpp should still be done.
Good luck!

I would work at class level (or file level for C), embed all the various versions in the same working software (no #ifdef), and choose one implementation or the other at runtime through some configuration file or command-line options.
It should be quite easy, as optimizations should not change anything at the internal API level.
Another way, if you're using C++, can be to instantiate templates to avoid duplicating high-level code or selecting a branch at run-time (even if the latter is often an acceptable option; some switches here and there are usually not such a big issue).
In the end the various optimized backends could even be turned into libraries.
Unit tests should work, without modification, with every variant of the implementation.
My rationale is that embedding every variant mostly changes software size, and that is very rarely a problem. This approach also has other benefits: you can easily cope with a changing environment. An optimization for some OS or some hardware may not be one on another. In many cases it will even be easy to choose the best version at runtime.
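A minimal sketch of this kind of runtime selection, here with a command-line switch and a function pointer (all names are illustrative):

#include <cstring>
#include <iostream>

// Both variants are always compiled into the binary.
int transform_basic(int x)     { return x * 2; }
int transform_optimized(int x) { return x << 1; } // stand-in for a real optimization

int main(int argc, char** argv)
{
    // e.g. ./app --optimized
    const bool use_opt = (argc > 1 && std::strcmp(argv[1], "--optimized") == 0);
    int (*transform)(int) = use_opt ? transform_optimized : transform_basic;
    std::cout << transform(21) << '\n';
}

The same pattern extends naturally to reading the choice from a configuration file, and a benchmark harness can then flip variants without recompiling.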

You may have two (three/more) versions of the function you optimise, with names like:
function
function_optimized
which have identical arguments and return the same results.
Then you may #define a selector in some header like:
#if OPTIM_X_ENABLE
#define OPT(f) f##_optimized
#else
#define OPT(f) f
#endif
Then call functions having optimized variants as OPT(function)(argument, argument...). This method is not very aesthetic, but it does the job.
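Put together, a minimal self-contained sketch of this technique (function names are illustrative):

#include <iostream>

// Two variants with identical signatures.
int sum(const int* v, int n)
{
    int s = 0;
    for (int i = 0; i < n; ++i) s += v[i];
    return s;
}

int sum_optimized(const int* v, int n)
{
    // pretend this one is the tuned version
    int s = 0;
    for (int i = 0; i < n; ++i) s += v[i];
    return s;
}

#define OPTIM_X_ENABLE 1

#if OPTIM_X_ENABLE
#define OPT(f) f##_optimized // token pasting picks the _optimized variant
#else
#define OPT(f) f
#endif

int main()
{
    int v[] = { 1, 2, 3 };
    std::cout << OPT(sum)(v, 3) << '\n'; // calls sum_optimized here
}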
You may go further and use #define to rename all your optimized functions:
#if OPTIM_X_ENABLE
#define foo foo_optimized
#define bar bar_optimized
...
#endif
And leave the caller code as is; the preprocessor does the function substitution for you. I like this one most, because it works transparently while being per-function (and also per-datatype and per-variable) grained, which is enough in most cases for me.
A more exotic method is to make separate .c files for non-optimized and optimized code and compile only one of them. They may have the same names but different paths, so switching can be done by changing a single option on the command line.

I'm confused. Why don't you just find out where each performance problem is, fix it, and continue? Here's an example.

Related

C++ best way to switch between configs

I have some code that I need to run for separate cases. I would have to switch mostly some enums and statics for those cases. So, let's say I have enums
enum class City { NY, LA, W_DC, ... };
City capital = City::W_DC;
and for the other case
enum class City { LDN, BMH, EDB, ... };
City capital = City::LDN;
Assuming I have a lot of those enums, what is the best way to reuse most of the code and switch between those configurations? To be clear, this is not meant to happen during runtime; the program is supposed to compile for one case and be oblivious to anything else.
EDIT: following StackOverflowUser's suggestion to use macros,
would it be a good approach to store the different enum configs in different namespaces and then do
#ifdef USE_NAMESPACE_A
using namespace namespace_a;
#else
using namespace namespace_b;
#endif
Creating macros and using #ifdef MACRONAME is the best way to check things before run time, in my opinion.
You can also create constexpr variables and use ifs to evaluate those variables' values. Since the variables are constexpr, the compiler will almost certainly optimize the dead branch away (and C++17's if constexpr makes that pruning explicit).
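A minimal sketch of the constexpr-flag approach (names and values are hypothetical; if constexpr requires C++17, though a plain if is folded just as well here):

#include <iostream>

constexpr bool use_config_a = true; // flip per build target

int capital_population_a() { return 700000; }  // stand-ins for config-specific code
int capital_population_b() { return 9000000; }

int main()
{
    if constexpr (use_config_a)
        std::cout << capital_population_a() << '\n';
    else
        std::cout << capital_population_b() << '\n';
}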
One option is to create separate source files, each containing the enum you want. You then make different compile targets that compile the relevant file as part of the build.
Another option is to use the #ifdef...#else preprocessor macros as stated previously, but then you'll likely have different compile targets to define the macro that includes the correct file. Rather than setting it up so you have to change code AND change the build, just put it in the build.
But, honestly, enums are probably not the best way to do what you want to do. A lookup from a file, database, or some other data source at runtime would likely be a more maintainable approach. It requires a bit more work obviously, but if this is something to maintain long-term you'll thank yourself later.
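As a minimal sketch of that runtime-lookup idea (file name and format are hypothetical):

#include <fstream>
#include <iostream>
#include <string>

int main()
{
    // cities.cfg might contain a single line naming the capital, e.g. "W_DC"
    std::ifstream cfg("cities.cfg");
    std::string capital;
    if (std::getline(cfg, capital))
        std::cout << "configured capital: " << capital << '\n';
    else
        std::cerr << "could not read cities.cfg\n";
}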

Are there techniques to greatly improve C++ building time for 3D applications?

There are many slim laptops that are cheap and great to use. Programming has the advantage that it can be done in any place where there is silence and comfort, since being able to concentrate for long hours is an important factor in doing effective work.
I'm kind of old fashioned, as I like my statically compiled C or C++, and those languages can take pretty long to compile on those power-constrained laptops, especially C++11 and C++14.
I like to do 3D programming, and the libraries I use can be large and unforgiving: Bullet Physics, Ogre3D, SFML, not to mention the power hunger of modern IDEs.
There are several solutions to make building just faster:
Solution A: don't use those large libraries, and come up with something lighter of your own to relieve the compiler. Write appropriate makefiles; don't use an IDE.
Solution B: set up a build server elsewhere, have a makefile set up on a powerful machine, and automatically download the resulting executable. I don't think this is a casual solution, as you have to target your laptop's CPU.
Solution C: use the (still unofficial) C++ modules
???
Any other suggestions?
Compilation speed is something that can really be boosted, if you know how. It is always wise to think carefully about a project's design (especially in the case of large projects consisting of multiple modules) and modify it so the compiler can produce output efficiently.
1. Precompiled headers.
A precompiled header is a normal header (.h file) that contains the most common declarations, typedefs and includes. During compilation, it is parsed only once - before any other source is compiled. During this process, the compiler generates data in some internal (most likely binary) format, and then uses this data to speed up code generation.
This is a sample:
#pragma once
#ifndef __Asx_Core_Prerequisites_H__
#define __Asx_Core_Prerequisites_H__

//Include common headers
#include "BaseConfig.h"
#include "Atomic.h"
#include "Limits.h"
#include "DebugDefs.h"
#include "CommonApi.h"
#include "Algorithms.h"
#include "HashCode.h"
#include "MemoryOverride.h"
#include "Result.h"
#include "ThreadBase.h"
//Others...

namespace Asx
{
    //Forward declare common types
    class String;
    class UnicodeString;

    //Declare global constants
    enum : Enum
    {
        ID_Auto = Limits<Enum>::Max_Value,
        ID_None = 0
    };

    enum : Size_t
    {
        Max_Size         = Limits<Size_t>::Max_Value,
        Invalid_Position = Limits<Size_t>::Max_Value
    };

    enum : Uint
    {
        Timeout_Infinite = Limits<Uint>::Max_Value
    };

    //Other things...
}

#endif /* __Asx_Core_Prerequisites_H__ */
In a project, when PCH is used, every source file usually contains an #include of this file (I don't know about others, but in VC++ this is actually a requirement - every source attached to a project configured for using PCH must start with #include "PrecompiledHeaderName.h"). Configuration of precompiled headers is very platform-dependent and beyond the scope of this answer.
Note one important matter: things that are defined/included in the PCH should be changed only when absolutely necessary - every change can cause recompilation of the whole project (and other dependent modules)!
More about PCH:
Wiki
GCC Doc
Microsoft Doc
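As a concrete illustration, with GCC a header is precompiled by compiling it like any other file; a minimal sketch (file names are hypothetical):

g++ -x c++-header Prerequisites.h   # emits Prerequisites.h.gch next to the header
g++ -c Module.cpp                   # its #include "Prerequisites.h" now picks up the .gch automatically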
2. Forward declarations.
When you don't need a whole class definition, forward declare it to remove unnecessary dependencies in your code. This also implies extensive use of pointers and references when possible. Example:
#include "BigDataType.h"
class Sample
{
protected:
BigDataType _data;
};
Do you really need to store _data as a value? Why not this way:
class BigDataType; //That's enough, #include not required

class Sample
{
protected:
    BigDataType* _data; //So much better now
};
This is especially profitable for large types.
3. Do not overuse templates.
Meta-programming is a very powerful tool in a developer's toolbox, but don't try to use templates when they are not necessary.
They are great for things like traits, compile-time evaluation, static reflection and so on. But they introduce a lot of trouble:
Error messages - if you have ever seen errors caused by improper usage of std:: iterators or containers (especially the complex ones, like std::unordered_map), then you know what this is all about.
Readability - complex templates can be very hard to read/modify/maintain.
Quirks - many of the techniques templates are used for are not so well known, so maintenance of such code can be even harder.
Compile time - the most important for us now:
Remember, if you define a function as:
template <class Tx, class Ty>
void sample(const Tx& xv, const Ty& yv)
{
    //body
}
it will be compiled for each distinct combination of Tx and Ty. If such a function is used often (and for many such combinations), it can really slow down the compilation process. Now imagine what will happen if you start to overuse templating for whole classes...
4. Using the PIMPL idiom.
This is a very useful technique that allows us to:
hide implementation details
speed up code generation
update easily, without breaking client code
How does it work? Consider a class that contains a lot of data (for example, representing a person). It could look like this:
class Person
{
protected:
    string name;
    string surname;
    Date birth_date;
    Date registration_date;
    string email_address;
    //and so on...
};
Our application evolves and we need to extend/change Person's definition. We add some new fields, remove others... and everything crashes: the size of Person changes, names of fields change... cataclysm. In particular, all client code that depends on Person's definition needs to be changed/updated/fixed. Not good.
But we can do it the smart way - hide the details of Person:
class Person
{
protected:
    class Details;
    Details* details;
};
Now we gain a few nice things:
clients cannot write code that depends on how Person is defined
no recompilation is needed as long as we don't modify the public interface used by client code
we reduce the compilation time, because the definitions of string and Date no longer need to be present (in the previous version, we had to include the appropriate headers for these types, which added additional dependencies).
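To complete the picture, here is a sketch of the hidden side in the corresponding .cpp file (it assumes Person also declares a constructor and destructor, which the snippet above omits, and uses std::string plus a hypothetical Date.h):

#include "Person.h"
#include "Date.h" // hypothetical header defining Date
#include <string>

// The full definition is visible only inside this translation unit.
class Person::Details
{
public:
    std::string name;
    std::string surname;
    Date birth_date;
    //...
};

Person::Person() : details(new Details()) {}
Person::~Person() { delete details; }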
5. #pragma once directive.
Although it may give no speed boost, it is clearer and less error-prone. It is basically the same thing as using include guards:
#ifndef __Asx_Core_Prerequisites_H__
#define __Asx_Core_Prerequisites_H__
//Content
#endif /* __Asx_Core_Prerequisites_H__ */
It prevents multiple parses of the same file. Although #pragma once is not standard (in fact, no pragma is - pragmas are reserved for compiler-specific directives), it is quite widely supported (examples: VC++, GCC, Clang, ICC) and can be used without worrying - compilers should ignore unknown pragmas (more or less silently).
6. Eliminating unnecessary dependencies.
Very important point! When code is being refactored, dependencies often change. For example, if you decide to do some optimizations and use pointers/references instead of values (see points 2 and 4 of this answer), some includes can become unnecessary. Consider:
#include "Time.h"
#include "Day.h"
#include "Month.h"
#include "Timezone.h"
class Date
{
protected:
Time time;
Day day;
Month month;
Uint16 year;
Timezone tz;
//...
};
This class has been changed to hide implementation details:
//These are no longer required!
//#include "Time.h"
//#include "Day.h"
//#include "Month.h"
//#include "Timezone.h"

class Date
{
protected:
    class Details;
    Details* details;
    //...
};
It is good to track such redundant includes, whether by hand, with built-in tools (like the VS Dependency Visualizer) or with external utilities (for example, GraphViz).
Visual Studio also has a very nice option - if you right-click on any file, you will see the option 'Generate Graph of include files' - it will generate a nice, readable graph that can be easily analyzed and used to track unnecessary dependencies.
As Mr. Yellow indicated in a comment, one of the best ways to improve compile times is to pay careful attention to your use of header files. In particular:
Use precompiled headers for any header that you don't expect to change, including operating system headers, third party library headers, etc.
Reduce the number of headers included from other headers to the minimum necessary.
Determine whether an include is needed in the header or whether it can be moved to the cpp file. This sometimes causes a ripple effect because someone else was depending on you to include the header for them, but it is better in the long term to move the include to the place where it's actually needed.
Using forward-declared classes, etc. can often eliminate the need to include the header in which that class is declared. Of course, you still need to include the header in the cpp file, but that only happens once, as opposed to happening every time the corresponding header file is included.
Use #pragma once (if it is supported by your compiler) rather than include guard symbols. This means the compiler does not even need to open the header file to discover the include guard. (Of course many modern compilers figure that out for you anyway.)
Once you have your header files under control, check your make files to be sure you no longer have unnecessary dependencies. The goal is to rebuild everything you need to, but no more. Sometimes people err on the side of building too much because that is safer than building too little.
If you've tried all of the above, there's a commercial product that does wonders, assuming you have some available PCs on your LAN. We used to use it at a previous job. It's called IncrediBuild (www.incredibuild.com) and it shrank our build time from over an hour (C++) to about 10 minutes. From their website:
IncrediBuild accelerates build time through efficient parallel computing. By harnessing idle CPU resources on the network, IncrediBuild transforms a network of PCs and servers into a private computing cloud that can best be described as a “virtual supercomputer.” Processes are distributed to remote CPU resources for parallel processing, dramatically shortening build time by up to 90% or more.
Another point that's not mentioned in the other answers: Templates. Templates can be a nice tool, but they have fundamental drawbacks:
The template, and all the templates it depends upon, must be included. Forward declarations don't work.
Template code is frequently compiled several times. In how many .cpp files do you use a std::vector<>? That is how many times your compiler will need to compile it!
(I'm not advocating against the use of std::vector<>, on the contrary you should use it frequently; it's simply an example of a really frequently used template here.)
When you change the implementation of a template, you must recompile everything that uses that template.
With template-heavy code, you often have relatively few compilation units, but each of them is huge. Of course, you can go all-template and have only a single .cpp file that pulls in everything. This would avoid multiple compilations of template code, but it renders make useless: every compilation will take as long as a compilation after a clean.
I would recommend going in the opposite direction: avoid template-heavy or template-only libraries, and avoid creating complex templates. The more interdependent your templates become, the more repeated compilation is done, and the more .cpp files need to be rebuilt when you change a template. Ideally any template you have should not make use of any other template (unless that other template is std::vector<>, of course...).

How to speed up C++ compilation

Do you have any tips to really speed up compilation of a large C++ source base?
I compiled Qt 5 with the latest Visual Studio 2013 compiler; this took at least 3 hours with an Intel quad core 3.2GHz, 8GB memory and an SSD drive.
What solutions do I have if I want to do this in 30 minutes?
thanks.
Forward declarations and PIMPL.
example.h:
// no include
class UsedByExample;

class Example
{
    // ...
    UsedByExample *ptr; // forward declaration is good enough
    UsedByExample &ref; // forward declaration is good enough
};
example.cpp:
#include "used_by_example.h"
// ...
UsedByExample object; // need #include
A little-known / underused fact is that forward declarations are also good enough for function return values:
class Example;
Example f(); // forward declaration is good enough
Only the code which calls f() and has to operate on the returned Example object actually needs the definition of Example.
The purpose of PIMPL, an idiom depending on forward declarations, is to hide private members completely from outside compilation units. This can thus also reduce compile time.
So, if you have this class:
example.h:
#include "other_class.h"
#include "yet_another_class.h"
#include "and_yet_another_class.h"
class Example
{
// ...
public:
void f();
void g();
private:
OtherClass a;
YetAnotherClass b;
AndYetAnotherClass c;
};
You can actually turn it into two classes, one being the implementation and the other the interface.
example.h:
// no more includes
class ExampleImpl; // forward declaration

class Example
{
    // ...
public:
    void f();
    void g();
private:
    ExampleImpl *impl;
};
example_impl.h:
#include "other_class.h"
#include "yet_another_class.h"
#include "and_yet_another_class.h"
class ExampleImpl
{
// ...
void f();
void g();
// ...
OtherClass a;
YetAnotherClass b;
AndYetAnotherClass c;
};
Disadvantages may include higher complexity, memory-management issues and an added layer of indirection.
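For completeness, a sketch of how the two halves could be tied together in example.cpp (assuming Example also declares a constructor and destructor, which the snippets above omit):

#include "example.h"
#include "example_impl.h"

Example::Example() : impl(new ExampleImpl()) {}
Example::~Example() { delete impl; }

// The public members just forward through one level of indirection.
void Example::f() { impl->f(); }
void Example::g() { impl->g(); }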
Use a fast SSD setup, or even create a RAM disk, if suitable on your system.
Project -> Properties -> Configuration Properties -> C/C++ -> General -> Multi-processor Compilation: Yes (/MP)
Tools -> Options -> Projects and Solutions -> Build and Run: and set the maximum number of parallel project builds. (already set to 8 on my system, probably determined on first run of VS2013)
1. Cut down on the number of dependencies, so that if one part of the code changes, the rest doesn't have to be recompiled. I.e. any .cpp/.cc file that includes a particular header needs to be recompiled when that header is changed. So forward-declare stuff if you can.
2. Avoid compiling as much as possible. If there are modules you don't need, leave them out. If you have code that rarely changes, put it in a static library.
3. Don't use excessive amounts of templates. The compiler has to generate a new copy of each version of the template, and all code for a template goes in the header and needs to be re-read over and over again. That in itself is not a problem, but it is the opposite of forward-declaring, and adds dependencies. (One mitigation, sketched after this list, is explicit instantiation with extern template.)
4. If you have headers that every file uses, and which change rarely, see if you can put them in a precompiled header. A precompiled header is only compiled once and saved in a format specific to the compiler that is fast to read, and so for classes used a lot, can lead to great speed-ups.
Note that this only works with code you have written. For code from third parties, only #2 and #4 can help, but they will not improve absolute compile times by much, only reduce the number of times code needs to be analyzed again after you've built it once.
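On point 3, one mitigation available since C++11 is extern template, which stops a heavily used instantiation from being compiled in every translation unit; a minimal sketch with illustrative names:

// matrix.h
template <class T>
class Matrix
{
    // ...lots of template code...
};

extern template class Matrix<double>; // declaration only: don't instantiate implicitly

// matrix_instantiations.cpp
#include "matrix.h"
template class Matrix<double>; // the single explicit instantiation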
To actually make things faster, your options are more limited. You already have an SSD, so you're probably not hard disk bound anymore, and swapping with an SSD should also be faster, so RAM is probably not the most pressing issue. So you are probably CPU-bound.
There are a couple of options:
1: you can make one (or a few) .cpp files that include lots of .cpp files from your project (a "unity build"; see the sketch after this list).
Then, you compile only those files and ignore the rest.
Advantages:
compilation is a lot faster on one machine
there should already be tools to generate those files for you. In any case the technique is really simple: you could build a simple script that parses the project files and builds the combined compilation units; after that you only need to include those in the project and ignore the rest of the files
Disadvantages:
changing one .cpp file will trigger a rebuild of the combined file that includes it, so for minor changes it takes a while longer to compile
you might need to change a bit of code to make it work; it might not work out of the box - for example, if you have a function with the same name in two different .cpp files, you need to change that.
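A minimal sketch of such a combined file (file names are hypothetical; the included .cpp files are excluded from the regular build so each is compiled only once, here):

// unity1.cpp - hand only this file to the compiler
#include "renderer.cpp"
#include "physics.cpp"
#include "input.cpp"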
2: use a tool like IncrediBuild
Advantages:
works out of the box for your project. Install the app and you can already compile your project
compilation is really fast even for small changes
Disadvantages:
is not free
you will need more computers to achieve a speedup
You might find alternatives for option 2; here is a related question.
Other tips to improve compilation time are to move as much of the code as possible into .cpp files and avoid inline declarations. Extensive use of metaprogramming also adds build time.
Simply find your bottleneck, and improve that part of your PC. For example, HDD / SSD performance is often a bottleneck.
On the code side of things, use things like forward declarations to avoid including headers where possible, and thus improve compilation time further.
Don't use templates. Really. If every class you use is templated (and if everything is a class), you effectively have only a single translation unit. Consequently, your build system is powerless to reduce compilation to only the few parts that actually need rebuilding.
If you have a large number of templated classes, but a fair amount of untemplated classes, the situation is not much better: any templated class that is used in more than one compilation unit has to be compiled several times!
Of course, you don't want to throw out the small, useful templated helper classes, but for all the code you write, you should think twice before you make a template out of it. Especially, you should avoid using templates for complex classes that use five or more different templated classes. Your code might actually get a lot more readable as a result.
If you want to go radical, write in C. The only thing that compiles faster than that is assembler (thanks for reminding me of that, @black).
In addition to what's been said previously, you should avoid things like:
1. Dynamic binding. The more complex it is, the more work the compiler will have to do.
2. High levels of optimization: compiling for a certain architecture using optimized code (/Ox) takes longer.
Thanks for all your answers.
I have to enable multicore compilation and do some optimizations a little everywhere.
Most of the time cost is because of templates.
thanks.

A macro for long and short function names in C++

I am currently working on a general-purpose C++ library.
Well, I like using full-word function names, and my project actually has a consistent function-naming system. Functions (or methods) start with a verb if they do not return bool (in that case they start with is_).
The problem is that such names can be bothersome for some programmers. Consider this function:
#include "something.h"
int calculate_geometric_mean(int* values)
{
//insert code here
}
I think such names read formally, so that is how I name my functions.
However, I designed a simple macro system for the user to switch function names:
#define SHORT_NAMES
#include "something.h"

#ifdef SHORT_NAMES
int calc_geometric_mean(int* values)
#else
int calculate_geometric_mean(int* values)
#endif
{
    //some code
}
Is this wiser than using aliases (since each alias of a function would take up memory), or is this solution pure evil?
FWIW, I don't think this dual-naming system adds a lot of value. It does, however, have the potential to cause a lot of confusion (to put it mildly).
In any case, if you are convinced it is a great idea, I would implement it through inline functions rather than macros:
// something.h
int calculate_geometric_mean(int* values); // defined in the .cpp file

inline int calc_geo_mean(int* values)
{
    return calculate_geometric_mean(values);
}
What symbols will be exported to the object file/library? What if you attempt to use the other version? Will you distribute two binaries with their own symbols?
So - no, bad idea.
Usually, the purpose behind a naming system is to aid the readability and understanding of the code.
Now you effectively have two systems, each of which has a rationale. You're already forcing the reader/maintainer to keep two approaches to naming in mind, which dilutes the end goal of readability. Never mind the ugly #defines that end up polluting your code base.
I'd say choose one system and stick to it, because consistency is the key. I wouldn't say this solution is pure evil per se - I would say that this is not a solution to begin with.

C++ One Header Multiple Sources

I have a large class Foo [1]:
class Foo {
public:
    void apples1();
    void apples2();
    void apples3();

    void oranges1();
    void oranges2();
    void oranges3();
};
Splitting the class is not an option [2], but the foo.cpp file has grown rather large. Are there any major design flaws in keeping the definition of the class in foo.h and splitting the implementation of the functions into foo_apples.cpp and foo_oranges.cpp?
The goal here is purely readability and organization for myself and other developers working on the system that includes this class.
1"Large" means some 4000 lines, not machine-generated.
[2] Why? Well, apples and oranges are actually categories of algorithms that operate on graphs but use each other quite extensively. They can be separated, but due to the research nature of the work I'm constantly rewiring the way each algorithm works, which I have found does not (at this early stage) jibe well with classic OOP principles.
Are there any major design flaws in keeping the definition of the class in foo.h and splitting the implementation of the functions into foo_apples.cpp and foo_oranges.cpp?
To pick nits: are there any major design flaws in keeping the declaration of the class in foo.h and splitting the definitions of the methods into foo_apples.cpp and foo_oranges.cpp?
1) apples and oranges may use the same private helpers. An example of this would be implementations found in an anonymous namespace.
In that case, one requirement would be to ensure your static data is not multiply defined. Inline functions are not really a problem if they do not use static data (although their definitions may be multiply exported).
To overcome those problems, you may then be inclined to utilise storage in the class - which could introduce dependencies by exposing data/types that would have otherwise been hidden. In either event, it can increase complexity or force you to write your program differently.
2) It increases the complexity of static initialization.
3) It increases compile times.
The alternative I use (which, by the way, many devs detest) in really large programs is to create a collection of exported local headers. These headers are visible only to the package/library. In your example, it can be illustrated by creating the following headers: Foo.static.exported.hpp (if needed) + Foo.private.exported.hpp (if needed) + Foo.apples.exported.hpp + Foo.oranges.exported.hpp.
Then you would write Foo.cpp like so:
#include "DEPENDENCIES.hpp"
#include "Foo.static.exported.hpp" /* if needed */
#include "Foo.private.exported.hpp" /* if needed */
#include "Foo.apples.exported.hpp"
#include "Foo.oranges.exported.hpp"
/* no definitions here */
You can easily adjust how those files are divided based on your needs. If you write your programs using C++ conventions, there are rarely collisions across huge TUs. If you write like a C programmer (lots of globals, preprocessor abuse, low warning levels and free declarations), then this approach will expose a lot of issues you probably won't care to correct.
From a technical standpoint, there is no penalty to doing this at all, but I have never seen it done in practice. This is simply an issue of style, and in that spirit, if it helps you to better read the class, then you would be doing yourself a disservice by not using multiple source files.
edit: Adding to that though, are you physically scrolling through your source, like, with your middle mouse wheel? As someone else already mentioned, IDEs almost universally let you right-click on a function declaration and go to the definition. And even if that's not the case for your IDE, and you use Notepad or something, it will at least have Ctrl+F. I would be lost without find and replace.
Yes, you can define the class in one header file and split the function implementations across multiple source files. It is not the common practice, but yes, it will work and there will be no overhead.
If the aim is just plain readability, then perhaps it is not a good idea, because it is not common practice to have class function definitions across multiple source files, and it might just confuse someone.
Actually, I don't see any reason to split the implementation, because other developers should work with the interface, not the implementation.
Also, any normal IDE provides an easy way to jump from a function declaration to its definition, so there is no reason to search for function implementations manually.