Let me first state that I know that inline does not mean that the compiler will always inline a function...
In C++ there really are two places for a non-template non-constexpr function implementation to go:
A header (in which case the definition should be marked inline)
A source file
There are benefits/negatives to placing the implementation in one or the other:
inline function definition
compiler can inline the function
slower compile times, both from having to parse the definitions and from having to include the implementation's dependencies.
multiple copies of the function get compiled, one in every translation unit that uses it
source file definition
compiler can never inline the function (maybe that's not true with LTO?)
can avoid recompilation if the file hasn't changed
only one copy of the function in the final binary
I am in the midst of writing a reusable math library where inlining can offer significant speedups. I only have test code and snippets to work with right now, so profiling isn't an option for helping me decide. Are there any rules - or just rules of thumb - on deciding where to define the function? Are there certain types of functions, like those with exceptions, which are known to always generate large amounts of code that should be relegated to a source file?
If you have no data, keep it simple.
Libraries that suck to develop don't get finished, and those that suck to use don't get used. So split h/cpp by default; that makes build times slower and development faster.
Then get data. Write tests and see if you get significant speedups from inlining. Then go and learn how to profile, realize your speedups were spurious, and write better tests.
How to profile and determine what is spurious and what is microbenchmark noise is somewhere between a chapter of a book and a whole book in length. Read SO questions about performance in C++ and you'll at least learn that the ten most common ways people microbenchmark are not accurate.
For general rules, smallish bits of code in tight loops benefit from inlining, as do cases where external vectorization is plausible, and where false aliasing could block compiler optimizations.
Often you can hoist the benefits of inlining into your library by offering vector operations.
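As a sketch of that idea (all names here are invented for illustration): rather than having users call a small scalar function in their own loops, the library can expose an operation over a whole array, so the hot loop sits next to the definition and the compiler is free to inline:

// math_lib.cpp -- hypothetical library source file
#include <cmath>
#include <cstddef>

// Small scalar helper. Callers in other translation units only see its
// declaration, so they cannot inline it (absent LTO).
float decibels_to_gain(float db) {
    return std::pow(10.0f, db / 20.0f);
}

// "Hoisted" vector operation exported by the library: the hot loop lives in
// the same translation unit as the helper, so the compiler can inline the
// call and amortize the call boundary over the whole array.
void decibels_to_gain(const float* db, float* gain, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        gain[i] = decibels_to_gain(db[i]);
}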
Generally speaking, if you are statically linking (as opposed to DLL/DSO methods), then the compiler/linker will basically ignore inline and do what's sensible.
The old rule of thumb (which everyone seems to ignore) is that inline should only be used for small functions. The one problem with inlining is that all too often I see people doing some timed test, e.g.
auto startTime = getTime();
for(int i = 0; i < BIG_NUM; ++i)
{
    doThing();
}
auto endTime = getTime();
The immediate conclusion from that test is that inline is good for performance everywhere. But that isn't the case.
Inlining also increases the size of your compiled exe. This has a nasty side effect in that it increases the burden placed on the instruction and uop caches, which can cause a performance loss. So in the case of a large-scale app, more often than not you'll find that removing inline from commonly used functions can actually be a performance win.
One of the nastiest problems with inline is that if it's applied to the wrong method, it's very hard to get a profiler to point out a hot spot - the code is just a little warmer than it needs to be in multiple points in the codebase.
My rule of thumb - if the code for a method can fit on one line, inline it. If the code doesn't fit on one line, put it in the cpp file until a profiler indicates moving it to the header would be beneficial.
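In code, that rule of thumb might look like this (Vec3 is an invented example, not anything from the question):

// vec3.h -- invented example
#include <cmath>

struct Vec3 {
    float x, y, z;

    // Fits on one line: define it in the header. Member functions defined
    // inside the class body are implicitly inline.
    float sum() const { return x + y + z; }

    // Doesn't fit on one line: declare it here...
    Vec3 normalized() const;
};

// vec3.cpp
// ...and define it here, until a profiler indicates that moving it to the
// header would be beneficial.
Vec3 Vec3::normalized() const {
    const float len = std::sqrt(x * x + y * y + z * z);
    return Vec3{x / len, y / len, z / len};
}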
The rule of thumb I work by is simple: No function definitions in headers, and all function definitions in a source file, unless I have a specific reason to do otherwise.
Generally speaking, C++ code (like code in many languages) is easier to maintain if there is a clear separation of interface from implementation. Maintenance effort is (quite often) a cost driver in non-trivial programs, because it translates into developer time and salary costs.

In C++, interface is represented by declarations of functions (without definitions), type declarations, struct and class definitions, etc. - i.e. the things that are typically placed in a header, if the intent is to use them in more than one source file. Changing the interface (e.g. changing a function's argument types or return type, adding a member to a class, etc.) means that everything which depends on that interface must be recompiled. In the long run, it often works out that header files need to change less often than source files - as long as interface is kept separate from implementation. Whenever a header changes, all source files which use that header (i.e. that #include it) must be recompiled. If a header doesn't change, but a function definition changes, then only the source file which contains the changed function definition needs to be recompiled.
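As a minimal sketch of that separation (the Account class is invented purely for illustration):

// account.h -- interface only: declarations, no function definitions
class Account {
public:
    void deposit(double amount);
    double balance() const;
private:
    double balance_ = 0.0;
};

// account.cpp -- implementation details; editing these bodies does not force
// the source files that merely #include "account.h" to recompile
#include "account.h"

void Account::deposit(double amount) { balance_ += amount; }
double Account::balance() const { return balance_; }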
In large projects (say, with hundreds of source and header files) this sort of thing can make the difference between incremental builds taking a few seconds or a few minutes to recompile a few changed source files versus significantly longer to recompile a large number of source files because a header they all depend on has changed.
Then the focus can be on getting the code working correctly - in the sense of producing the same observable output given a set of inputs, meeting its functional requirements, and passing suitable test cases.
Once the code is working correctly enough, attention can turn to other program characteristics, such as performance. If profiling shows that a function is called many times and represents a performance hot-spot, then you can look at options for improving performance. One option that MIGHT be relevant for improving performance of a program that is otherwise correct is to selectively inline functions. But, every time this is done, it amounts to deciding to accept a greater maintenance burden in order to get performance. But it is necessary to have evidence of the need (e.g. by profiling).
Like most rules of thumb, there are exceptions. For example, templated functions and classes in C++ generally do need to be defined in headers, since, more often than not, the compiler needs to see their definition in order to instantiate the template. However, that is not a justification for inlining everything (and it is not a justification for turning every class or function into a template).
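A tiny illustration of why templates are the exception (invented function name):

// clamp_to.h -- the definition must be visible to every translation unit
// that instantiates clamp_to<T>, so it lives in the header.
template <typename T>
T clamp_to(T value, T lo, T hi) {
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}
// If only a declaration were in the header and the definition sat in a .cpp
// file, calls such as clamp_to<int>(...) from other translation units would
// fail to link unless that instantiation were explicitly provided.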
Without profiling or other evidence, I would rarely bother to inline functions. Inlining is a hint to the compiler, which the compiler may well ignore, so the effort of inlining may not even be worth it. Doing such a thing without evidence may achieve nothing - in which case it is simply premature optimisation.
I'm working on a big project in C++.
I have many classes which have methods that do completely different things (like one dumps, another modifies the object, another checks it to see if it's valid and so on...).
Is it good style to put the implementations of one method for all the classes in one source file (or in a group of object files that may be archived together), and all of the implementations of another method of those classes in another archive?
Could this be good for linking - for example, when someone doesn't need the dumping methods - or is it better to keep the method implementations of the same class in the same source file, so as not to create confusion?
There are trade-offs.
When you change the implementation of any function, the entire translation unit must be re-compiled into a new object file.
If you write only a single function per translation unit, you minimize the length of compilation time caused by unnecessary rebuilds.
On the other hand, writing a single function per translation unit maximizes the length of compilation from scratch, because it's slower to compile many small TUs than a few big TUs.
The optimal solution is personal preference, but usually somewhere in between "single function per TU" and "one massive TU for the entire program" (rather than exactly one of those extremes). For member functions, one TU per class is a popular heuristic, but not necessarily always the best choice.
Another consideration is optimisation. Calls to non-inline functions can be expanded inline, but only within the same translation unit. Therefore, it is easier for the compiler to optimize a single massive TU.
Of course, you can choose to define the functions inline, in the header file, but that causes a re-building problem, because if any of the inline functions change, then everyone who includes the header must re-build. This is a worse problem than simply having bigger TUs, but not as bad as having one massive TU.
So, defining related non-inline functions within the same TU allows the compiler to decide on optimization within that TU, while preventing a re-build cascade. This is advantageous if those related functions would benefit from inline expansion and call each other a lot.
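A small sketch of that situation, with hypothetical names - both exported functions call a helper defined in the same TU, so the compiler can expand it inline without the header ever exposing it:

// geometry.cpp -- hypothetical TU grouping related functions
#include <cmath>

namespace {
// Internal helper: not declared in any header. Because its definition is in
// this TU, the compiler can expand it inline into both callers below.
double squared_length(double x, double y) { return x * x + y * y; }
}

double length(double x, double y) { return std::sqrt(squared_length(x, y)); }

bool longer_than(double x, double y, double limit) {
    return squared_length(x, y) > limit * limit;
}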
This advantage is mitigated by whole program optimisation.
Third consideration is organisation. It is likely that a programmer who looks at a member function of a class will also be interested in other member functions of that class. Having them in the same source file will let them spend less time searching for the correct file.
The organizational advantage of grouping all class functions into a common source file is somewhat mitigated by modern IDEs that allow for quickly jumping from source file to header and from there to the other function.
Fourth consideration is the performance of the editor. Parsing a file of tens of thousands of lines or more can be slow and may use a lot of memory depending on parsing technique. One massive TU doesn't necessarily cause this, because you can use separate files that are only included together.
On the other hand, a massive number of files can be problematic for some file browsers (probably not so much these days) and also for version control systems.
Finally, my opinion: I think that one source file per class is a decent heuristic. But it should not be followed religiously when it's not appropriate.
Some organizations have rules that mandate one definition per unit. In these organizations, a header file can define only one class, and a translation unit can define only one function. Other organizations mandate at most one source file for each header file (some header files have no implementation).
The optimal thing to do is somewhere in between. I generally don't care about compiler or linker performance. I do care a lot about code readability and maintainability. A source file that implements some class that is thousands of lines long is hard to navigate. It's better to break that file into multiple files. Breaking it into hundreds of files, one file per function, makes for a directory structure that is difficult to navigate. Breaking it into chunks of closely related functions keeps the directory structure and the contents of each file navigable.
However, and this is a big however: Why is your class so large that you have to worry about this? A class whose implementation spans thousands of lines or dozens of files is a code smell.
The pimpl (also: compiler firewall) idiom is used to shorten compile times, at the cost of readability and a little runtime performance. Once a project takes too long to compile, how do you identify the best pimpl candidates?
I have experience using pimpl, shortening a project's compile time from two hours to ten minutes, but I did this just by following my instincts: I reasoned that class header files that include (1) a lot of source code and (2) complex/template classes are the best candidates to apply the pimpl idiom to.
Is there a tool that points out which classes are good pimpl candidates objectively?
It is true that Pimpl is useful for incremental compilation.
But the main reason to use Pimpl is to preserve ABI compatibility. This was the rule in my past company for almost all public classes in the API.
Another advantage is that you can also distribute your library as a package whose headers do not expose implementation details.
For this I will say: use Pimpl wherever possible.
A very good article on the Qt Pimpl implementation details and its benefits: https://wiki.qt.io/D-Pointer
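For reference, a minimal pimpl sketch along those lines (names invented; Qt's D-pointer described in the article above is a more elaborate variant of the same idea):

// widget.h -- public header: no implementation details, stable ABI surface
#include <memory>

class Widget {
public:
    Widget();
    ~Widget();                  // defined in widget.cpp, where Impl is complete
    void draw();
private:
    struct Impl;                // defined only in widget.cpp
    std::unique_ptr<Impl> d;    // private members can change without touching this header
};

// widget.cpp
#include "widget.h"
// heavy/private headers would be included here instead of in widget.h

struct Widget::Impl {
    int frame_count = 0;
};

Widget::Widget() : d(std::make_unique<Impl>()) {}
Widget::~Widget() = default;
void Widget::draw() { ++d->frame_count; }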
The compile time problem must be addressed with:
using precompiled headers
dividing your big projects into smaller ones by how often the code is touched. Parts that do not change often can be compiled into a library and published in a local repository that other projects reference by version.
...
I'm not aware of an existing tool to do this, but I would suggest:
First, measure the stand-alone cost of including every header by itself. Make a list of all headers, and for each header, preprocess it. The simplest measure of the cost of that header is the number of lines that result from preprocessing. A possibly more accurate measure would be to count the occurrences of 'template', as processing template definitions seems to dominate compilation time in my experience. You could also count occurrences of 'inline', as I've seen large numbers of inline functions defined in headers be an issue too (but be aware that inline definitions of class methods don't necessarily use the keyword).
Next, measure the number of translation units (TUs) that include that header. For each main file of a TU (e.g., .cpp file), preprocess that file and gather the set of distinct headers that appear in the output (in the # lines). Afterward, invert that to get a map from header to number of TUs that use it.
Finally, for each header, multiply its stand-alone cost by the number of TUs that include it. This is a measure of the cumulative effect of this header on total compilation time. Sort that list and go through it in descending order, moving private implementation details into the associated implementation file and trimming the public header accordingly.
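As a rough, hypothetical starting point for the first step, a sketch like the following runs the preprocessor over each header under a directory and counts the output lines; it assumes a POSIX environment with g++ on the PATH and headers that preprocess stand-alone (swap in a count of 'template' occurrences if you prefer that metric):

// header_cost.cpp -- hypothetical sketch, not a polished tool
#include <cstdio>
#include <filesystem>
#include <iostream>
#include <string>

namespace fs = std::filesystem;

// Preprocess one header and count the lines produced, as a crude
// stand-alone cost metric.
long preprocessed_lines(const fs::path& header) {
    std::string cmd = "g++ -std=c++17 -x c++ -E -P \"" + header.string() + "\" 2>/dev/null | wc -l";
    FILE* pipe = popen(cmd.c_str(), "r");   // popen is POSIX, not standard C++
    if (!pipe) return -1;
    long lines = -1;
    std::fscanf(pipe, "%ld", &lines);
    pclose(pipe);
    return lines;
}

int main(int argc, char** argv) {
    fs::path root = argc > 1 ? argv[1] : ".";
    for (const auto& entry : fs::recursive_directory_iterator(root)) {
        if (!entry.is_regular_file()) continue;
        const auto ext = entry.path().extension();
        if (ext == ".h" || ext == ".hpp")
            std::cout << preprocessed_lines(entry.path()) << '\t'
                      << entry.path().string() << '\n';
    }
}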
Now, the main issue with this or any such approach to measuring the benefit of private implementations is you probably won't see much change at first because, in the absence of engineering discipline to do otherwise, usually there will be many headers that include many others, with lots of overlap. Consequently, optimizing one heavily-used header will simply mean that some other heavily-used header that includes almost as much will keep compilation times high. But once you break through the critical mass of commonly used headers that have many dependencies, optimizing most or all of them, compilation times should start to drop dramatically.
One way to focus the effort, so it's not so "pie in the sky", is to begin by selecting the single TU that takes the most time to compile, and work on optimizing only the headers that it depends on. Once you've significantly reduced the time for that TU, look again at the big picture. And if you can't significantly improve that one TU's compilation time through the private implementation technique, then that suggests you need to consider other approaches for that code base.
Maybe my question is stupid, but I didn't find any answer and I really want to know. When we have a program with functions which are never called (they are, for example, only prepared for future implementation), I think the compiler still has to read those lines (at a minimum the function declarations). That might be no problem, but what about performance in bigger projects? Is there anything we should avoid (for example certain allocations or include files) that has a bigger impact?
For example:
//never called/used
class abc{
    ...
};

//never called/used
float function_A(float x, int y){
    ...
}

int main(){
    ...
}
This is just a short example but I think everyone know what I mean.
Thank you very much!
Current compiler implementations will not generate code for some kinds of functions, as you can read here. Unused code is typically not a performance hit, especially if you only declare and do not define the functions. Only functions with a lot of instructions can be a performance hit, and for that I recommend you read about instruction caching.
In bigger libraries you should care about include files. If you use and (more importantly) include them intelligently, you can gain performance at compile time, i.e. use forward declarations in header files and include the full headers only in .cpp files.
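A tiny sketch of what that looks like in practice (Report and Printer are invented names):

// report.h -- a forward declaration is enough here, because the header only
// uses Printer by reference; files that include report.h do not pull in printer.h
class Printer;

class Report {
public:
    void print_to(Printer& printer) const;
};

// report.cpp -- the full definition of Printer is needed only here
#include "report.h"
#include "printer.h"

void Report::print_to(Printer& printer) const {
    printer.write("report body");   // assumes the invented Printer class has a write() member
}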
Another thing is that if you split your code across several files, the linker can skip whole .o files (which the compiler creates during compilation) at link time if they are not used.
Hope this helped you a bit
If you mean application performance, leaving in unused code will have no impact. The compiler does dead code elimination. But having to go through more code, the compiler will slow down slightly, so you will have to wait a bit longer for program compilation. Not including unused header files is a good idea, as one header file can pull in dozens or hundreds of others. (But precompiled headers can also help with that.)
Instruction caching may still be an issue if unremoved dead code reduces locality of the program as a whole.
Imagine two functions A and B, where B is called from A repeatedly. If A and B fit into the same cache line, calling B from A is unlikely to produce a cache miss. But if a third function is placed between the two by the linker, so that A and B no longer share a cache line, cache misses when calling B become more likely, reducing overall execution speed.
While the effect may not be very measurable and depends on a lot of other factors, reducing dead code is generally a good idea.
If the compiler can detect it as dead code, it will remove it completely and probably print a warning. If not, it will increase the object code size. With static linkage, the linker will remove unused functions.
Suppose you have a program in C, C++ or any other language that employs the "compile-objects-then-link-them" scheme.
When your program is not small, it is likely to comprise several files, in order to ease code management (and shorten compilation time). Furthermore, after a certain degree of abstraction you likely have a deep call hierarchy. Especially at the lowest level, where tasks are most repetitive and most frequent, you want to impose a general framework.
However, if you fragment your code into different object files and use a very abstract architecture for your code, it might hurt performance (which is bad if you or your supervisor emphasizes performance).
One way to circumvent this might be extensive inlining - this is the approach of template meta-programming: in each translation unit you include all the code of your general, flexible structures, and count on the compiler to counteract performance issues. I want to do something similar without templates - say, because they are too hard to handle or because you use plain C.
You could write all your code in one single file. That would be horrible. What about writing a script which merges all your code into one source file and compiles it? Provided your source files are not written too wildly, a compiler could then probably apply much more optimization (inlining, dead code elimination, compile-time arithmetic, etc.).
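For concreteness, such a generated "master file" might be nothing more than a list of includes (file names invented):

// everything.cpp -- generated by the merge script; compile only this one file
#include "parser.cpp"
#include "math_core.cpp"
#include "main.cpp"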
Do you have any experience with, or objections against, this "trick"?
Pointless with a modern compiler. MSVC, GCC and Clang all support link-time code generation (GCC and Clang call it 'link-time optimisation'), which allows for exactly this. Plus, combining multiple translation units into one large one prevents you from parallelising the compilation process, and (at least in the case of C++) makes RAM usage go through the roof.
in each translation unit you include all the code of your general, flexible structures, and count on the compiler to counteract performance issues.
This is not a feature, and it's not related to performance in any way. It's an annoying limitation of compilers and the include system.
This is a semi-valid technique; IIRC, KDE used to use this to speed up compilation back in the day when most people had one CPU core. There are caveats though: if you decide to do something like this, you need to write your code with it in mind.
Some samples of things to watch out for:
Anonymous namespaces - namespace { int x; }; in two source files.
Using-directives that affect the following code. using namespace foo; in a .cpp file can be OK on its own - but the appended sources may not agree
The C version of anon namespaces, static globals. static int i; at file scope in several cpp files will cause problems.
#define's in .cpp files - will affect source files that don't expect it
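For example, the static-global / anonymous-namespace pitfall looks like this once the files are merged (hypothetical files):

// a.cpp
static int counter = 0;       // fine on its own: internal linkage, private to a.cpp
namespace { int state = 0; }  // likewise private to a.cpp

// b.cpp
static int counter = 0;       // also fine on its own...
namespace { int state = 0; }

// merged.cpp -- the unity/"uber" file
#include "a.cpp"
#include "b.cpp"
// ...now both definitions of 'counter' and 'state' end up in one translation
// unit and the build fails with redefinition errors.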
Modern compilers/linkers are fully able to optimize across translation units (link-time code generation) - I don't think you'll see any noticeable difference using this approach.
It would be better to profile your code for bottlenecks, and apply inlining and other speed hacks only where appropriate. Optimization should be performed with a scalpel, not with a shotgun.
Though it is not recommended, using #include statements for C files is essentially the same as appending the entire contents of the included file to the current one.
This way, if you include all of your files in one "master file", that file will essentially compile as if all the source code were appended to it.
SQlite does that with its Amalgamation source file, have a look at:
http://www.sqlite.org/amalgamation.html
Do you mind if I share some experience about what makes software slow, especially when the call tree gets bushy? The cost to enter and exit functions is almost totally insignificant except for functions that
do very little computation and (especially) do not call any further functions,
and are actually in use for a significant fraction of the time (i.e. random-time samples of the program counter are actually in the function for 10% or more of the time).
So in-lining helps performance only for a certain kind of function.
However, your supervisor could be right that software with layers of abstraction has performance problems.
It's not because of the cycles spent entering and leaving functions.
It's because of the temptation to write function calls without real awareness of how long they take.
A function is a bit like a credit card. It begs to be used. So it's no mystery that with a credit card you spend more than you would without it.
However, it's worse with functions, because functions call functions call functions, over many layers, and the overspending compounds exponentially.
If you get experience with performance tuning like this, you come to recognize the design approaches that result in performance problems. The one I see over and over is too many layers of abstraction, excess notification, overdesigned data structure, stuff like that.
First off, I am not looking for a way to force the compiler to inline the implementation of every function.
To reduce the level of misguided answers, make sure you understand what the inline keyword actually means. Here is a good description: inline vs static vs extern.
So my question: why not mark every function definition inline? I.e., ideally, the only compilation unit would be main.cpp. Or possibly a few more for the functions that cannot be defined in a header file (pimpl idiom, etc.).
The theory behind this odd request is it would give the optimizer maximum information to work with. It could inline function implementations of course, but it could also do "cross-module" optimization as there is only one module. Are there other advantages?
Has anyone tried this with a real application? Did the performance increase? Decrease?!?
What are the disadvantages of marking all function definitions inline?
Compilation might be slower and will consume much more memory.
Iterative builds are broken, the entire application will need to be rebuilt after every change.
Link times might be astronomical
All of these disadvantages only affect the developer. What are the runtime disadvantages?
Did you really mean #include everything? That would give you only a single module and let the optimizer see the entire program at once.
Actually, Microsoft's Visual C++ does exactly this when you use the /GL (Whole Program Optimization) switch, it doesn't actually compile anything until the linker runs and has access to all code. Other compilers have similar options.
sqlite uses this idea. During development it uses a traditional source structure. But for actual use there is one huge C file (112k lines). They do this for maximum optimization, and claim about a 5-10% performance improvement.
http://www.sqlite.org/amalgamation.html
We (and some other game companies) did try it via making one uber-.CPP that #included all others; it's a known technique. In our case, it didn't seem to affect runtime much, but the compile-time disadvantages you mention turned out to be utterly crippling. With a half-hour compile after every single change, it becomes impossible to iterate effectively. (And this is with the app divvied up into over a dozen different libraries.)
We tried making a different configuration such that we would have multiple .objs while debugging and then have the uber-CPP only in release-opt builds, but then ran into the problem of the compiler simply running out of memory. For a sufficiently large app, the tools simply are not up to compiling a multimillion line cpp file.
We tried LTCG as well, and that provided a small but nice runtime boost, in the rare cases where it didn't simply crash during the link phase.
Interesting question! You are certainly right that all of the listed disadvantages are specific to the developer. I would suggest, however, that a disadvantaged developer is far less likely to produce a quality product. There may be no runtime disadvantages, but imagine how reluctant a developer will be to make small changes if each compile takes hours (or even days) to complete.
I would look at this from a "premature optimization" angle: modular code in multiple files makes life easier for the programmer, so there is an obvious benefit to doing things this way. Only if a specific application turns out to run too slow, and it can be shown that inlining everything makes a measured improvement, would I even consider inconveniencing the developers. Even then, it would be after a majority of the development has been done (so that it can be measured) and would probably only be done for production builds.
This is semi-related, but note that Visual C++ does have the ability to do cross-module optimization, including inline across modules. See http://msdn.microsoft.com/en-us/library/0zza0de8%28VS.80%29.aspx for info.
To add an answer to your original question, I don't think there would be a downside at run time, assuming the optimizer was smart enough (hence why it was added as an optimization option in Visual Studio). Just use a compiler smart enough to do it automatically, without creating all the problems you mention. :)
Little benefit
On a good compiler for a modern platform, inline will affect only a very few functions. It is just a hint to the compiler; modern compilers are fairly good at making this decision themselves, and the overhead of a function call has become rather small (often, the main benefit of inlining is not to reduce call overhead, but to open up further optimizations).
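A minimal sketch of that last point (invented names): once the call is inlined, the argument becomes a visible constant and the whole computation can be folded away:

inline int clamp01(int v) { return v < 0 ? 0 : (v > 1 ? 1 : v); }

int always_one() {
    // Once clamp01(5) is inlined, the argument is a visible constant, so the
    // compiler can fold the whole body down to "return 1;" -- the follow-on
    // optimization matters more than the saved call instruction.
    return clamp01(5);
}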
Compile time
However, since inline also changes semantics, you will have to #include everything into one huge compile unit. This usually increases compile time significantly, which is a killer on large projects.
Code Size
If you move away from current desktop platforms and their high-performance compilers, things change a lot. In this case, the increased code size generated by a less clever compiler will be a problem - so much that it makes the code significantly slower. On embedded platforms, code size is usually the first restriction.
Still, some projects can and do profit from "inline everything". It gives you the same effect as link time optimization, at least if your compiler doesn't blindly follow the inline.
That's pretty much the philosophy behind Whole Program Optimization and Link Time Code Generation (LTCG) : optimization opportunities are best with global knowledge.
From a practical point of view it's sort of a pain because now every single change you make will require a recompilation of your entire source tree. Generally speaking you need an optimized build less frequently than you need to make arbitrary changes.
I tried this in the Metrowerks era (it's pretty easy to setup with a "Unity" style build) and the compilation never finished. I mention it only to point out that it's a workflow setup that's likely to tax the toolchain in ways they weren't anticipating.
It is done already in some cases. It is very similar to the idea of unity builds, and the advantages and disadvantages are not far from what you describe:
more potential for the compiler to optimize
link time basically goes away (if everything is in a single translation unit, there is nothing to link, really)
compile time goes, well, one way or the other. Incremental builds become impossible, as you mentioned. On the other hand, a complete build is going to be faster than it would be otherwise (as every line of code is compiled exactly once; in a regular build, code in headers ends up being compiled in every translation unit where the header is included)
But in cases where you already have a lot of header-only code (for example if you use a lot of Boost), it might be a very worthwhile optimization, both in terms of build time and executable performance.
As always though, when performance is involved, it depends. It's not a bad idea, but it's not universally applicable either.
As far as build time goes, you have basically two ways to optimize it:
minimize the number of translation units (so your headers are included in fewer places), or
minimize the amount of code in headers (so that the cost of including a header in multiple translation units decreases)
C code typically takes the second option, pretty much to its extreme: almost nothing apart from forward declarations and macros are kept in headers.
C++ often lies around the middle, which is where you get the worst possible total build time (but PCH's and/or incremental builds may shave some time off it again), but going further in the other direction, minimizing the number of translation units can really do wonders for the total build time.
The assumption here is that the compiler cannot optimize across functions. That is a limitation of specific compilers and not a general problem. Using this as a general solution for a specific problem might be bad. The compiler may very well just bloat your program with what could have been reusable functions at the same memory address (getting to use the cache) being compiled elsewhere (and losing performance because of the cache).
Big functions in general cost on optimization; there is a balance between the overhead of local variables and the amount of code in the function. Keeping the number of variables in the function (passed in, local, and global) within the number of disposable registers for the platform means most everything can stay in registers and doesn't have to be evicted to RAM, and a stack frame may not be required (depending on the target), so function-calling overhead is noticeably reduced. That is hard to do in real-world applications all the time, but with the alternative - a small number of big functions with lots of local variables - the code is going to spend a significant amount of time evicting and loading registers with variables to/from RAM (depending on the target).
Try llvm - it can optimize across the entire program, not just function by function. Release 2.7 had caught up to gcc's optimizer, at least for a test or two; I didn't do exhaustive performance testing. And 2.8 is out, so I assume it is better. Even with a few files, the number of tuning-knob combinations is too many to mess with. I find it best not to optimize at all until you have the whole program in one file, then perform your optimization, giving the optimizer the whole program to work with - basically what you are trying to do with inlining, but without the baggage.
Suppose foo() and bar() both call some helper(). If everything is in one compilation unit, the compiler might choose not to inline helper(), in order to reduce total instruction size. This causes foo() to make a non-inlined function call to helper().
The compiler doesn't know that a nanosecond improvement to the running time of foo() adds $100/day to your bottom line in expectation. It doesn't know that a performance improvement or degradation of anything outside of foo() has no impact on your bottom line.
Only you as the programmer know these things (after careful profiling and analysis of course). The decision not to inline bar() is a way of telling the compiler what you know.
The problem with inlining is that you want high performance functions to fit in cache. You might think function call overhead is the big performance hit, but in many architectures a cache miss will blow the couple pushes and pops out of the water. For example, if you have a large (maybe deep) function that needs to be called very rarely from your main high performance path, it could cause your main high performance loop to grow to the point where it doesn't fit in L1 icache. That will slow your code down way, way more than the occasional function call.