What are __builtin_ functions for in C++?

I am debugging a transaction-processing system which is performance sensitive.
I found code which uses __builtin_memcpy and __builtin_memset instead of memcpy and memset.
What are __builtin_ functions for?
Are they there to prevent dependency problems across architectures or compilers?
Or is there a performance reason why __builtin_ functions are preferred?
Thank you :D

With traditional library functions, the standard memcpy is just a call to a function. Unfortunately, memcpy is often called for very small copies, and the overhead of calling a function, shuffling a few bytes, and returning is substantial (especially since memcpy adds extra work at the start of the function to deal with unaligned memory, loop unrolling, etc., to do well on LARGE copies).
So, for the compiler to optimise those calls, it needs to "know" how to do, for example, memcpy. The solution for this is to have the function "built in" to the compiler, which then contains code such as this:
int generate_builtin_memcpy(expr arg1, expr arg2, expr size)
{
    if (is_constant(size) && eval(size) < SOME_NUMBER)
    {
        ... do magic inline memory copy ...
    }
    else
    {
        ... call "real" memcpy ...
    }
}
[For retargetable compilers, there is typically one of these functions for each CPU architecture, with different configurations as to under what conditions the "real" memcpy gets called and when an inline memcpy is used.]
The key here is that you MAY actually write your own memcpy function that ISN'T based on __builtin_memcpy(); your version is ALWAYS a real function, and it doesn't have to do the same thing as the normal memcpy. [You'd be in a bit of trouble if you changed its behaviour a lot, since the C standard library probably calls memcpy in a few thousand places, but for example gathering statistics over how many times memcpy is called, and what sizes are copied, could be one such use-case.]
Another big reason for using __builtin_* is that they provide code that would otherwise have to be written in inline assembler, or might not be available to the programmer at all. Setting/getting special registers is one such thing.
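For instance (a small, self-contained illustration using two GCC/Clang builtins that really exist), bit-counting operations map directly onto single machine instructions where the hardware has them, and would otherwise need inline assembler or a hand-written loop:

#include <cstdio>

int main() {
    unsigned x = 0x00F0u;
    // __builtin_popcount / __builtin_clz typically compile to single
    // instructions (e.g. POPCNT and LZCNT/BSR on x86) where available,
    // with a library fallback otherwise. __builtin_clz(0) is undefined,
    // so x must be nonzero here.
    std::printf("bits set: %d\n", __builtin_popcount(x));
    std::printf("leading zeros: %d\n", __builtin_clz(x));
    return 0;
}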
There are other techniques to solve this problem; for example, clang has a library-call simplification pass that recognizes calls to common library functions and substitutes cheaper alternatives. Since printf is much "heavier" than puts, it replaces suitable calls like printf("constant string with no formatting\n") with puts("constant string with no formatting"), and many trigonometric and other math functions are folded to simple constant values when called with constant arguments, etc.
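Conceptually, at the source level, that transformation is equivalent to the following (the pass really operates on the compiler's intermediate representation; this snippet merely shows the before and after forms side by side):

#include <cstdio>

int main() {
    // What the programmer wrote:
    std::printf("constant string with no formatting\n");
    // What the optimizer effectively emits instead: puts appends the
    // newline itself and skips all format-string parsing.
    std::puts("constant string with no formatting");
    return 0;
}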
Calling __builtin_* directly for functions like memcpy or sin is probably the WRONG thing to do: it just makes your code less portable and is not at all certain to be faster. Calling __builtin_special_function when there is no other way is typically the solution in some tricky situations, but you should probably wrap it in your own function, e.g.
int get_magic_property()
{
    return __builtin_get_magic_property();
}
That way, when you port to Windows, you can easily do:
int get_magic_property()
{
#if WIN32
    return Win32GetMagicPropertyEx();
#else
    return __builtin_get_magic_property();
#endif
}

__builtin_* functions are optimised functions provided by the compiler or its libraries. These might be built-in versions of standard library functions, such as memcpy, and perhaps more typically some of the maths functions.
Alternatively, they might be highly optimised functions for typical tasks on that particular target; e.g., a DSP might have built-in FFT functions.
Which functions are provided as __builtin_* is determined by the developers of the compiler and will be documented in the compiler's manuals.
Different CPU types and compilers are designed for different use cases, and this will be reflected in the range of built-in functions provided.
Built-in functions might make use of specialised instructions in the target processor, or might trade off accuracy for speed by using lookup tables rather than calculating values directly, or any other reasonable optimisation, all of which should be documented.
These are definitely not there to reduce dependency on a particular compiler or CPU; in fact, quite the opposite: they add a dependency, and so they might be wrapped up in preprocessor checks, e.g.
#ifdef SOME_CPU_FLAG
#define MEMCPY __builtin_memcpy
#else
#define MEMCPY memcpy
#endif

On a compiler note: __builtin_memcpy can fall back to emitting a call to the memcpy function, and it also gives less-capable compilers the ability to simplify, by choosing the slow path of unconditionally emitting a memcpy call.
http://lwn.net/Articles/29183/


If a function is only called from one place, is it always better to inline it? [duplicate]

If a function is only used in one place and some profiling shows that it's not being inlined, will there always be a performance advantage in forcing the compiler to inline it?
Obviously "profile and see" (and in the case of the function in question, it did prove to be a small perf boost). I'm mostly asking out of curiosity -- are there any performance disadvantages to this with a reasonably smart compiler?
No, there are notable exceptions. Take this code for example:
void do_a_lot_of_work(void); /* defined elsewhere */
static long x;

void do_something_often(void) {
    x++;
    if (x == 100000000) {
        do_a_lot_of_work();
    }
}
Let's say do_something_often() is called very often and from many places. do_a_lot_of_work() is called very rarely (one out of every one hundred million calls). Inlining do_a_lot_of_work() into do_something_often() doesn't gain you anything. Since do_something_often() does almost nothing, it would be much better if it got inlined into the functions that call it, and in the rare case that they need to call do_a_lot_of_work(), they call it out of line. In that way, they are saving a function call almost every time, and saving code bloat at every call site.
One legitimate case where it makes sense not to inline a function, even if it's only called from a single location, is if the call to the function is rare and almost always skipped. Keeping the instructions before the function call and the instructions after the function call closely together in memory may allow those instructions to be kept in the processor cache, when that would be impossible if those blocks of instructions were separated in memory.
It would still be possible for the compiler to compile the function call as if using goto, avoiding having to keep track of a return address, but if the compiler has already determined that the function call is rare, then it makes sense not to spend as much effort optimising that call.
You can't "force" the compiler to inline it, unless you are considering some implementation-specific tools that you have not mentioned, so the question is entirely moot.
If your compiler is already not doing so then it has a reason.
If the function is called only once, there should be no performance disadvantages in inlining it. However, that does not mean you should blindly inline all functions. For example, if the code in question is Linux kernel code and you're using the BUG_ON or WARN_ON statement to print a stack trace, you don't get the full stack trace which includes the inline function. Instead, the stack trace contains only the name of the calling function.
And, as the other answer explained, inline doesn't actually force the compiler to inline the function; it is just a hint to the compiler. However, there is an attribute __attribute__((always_inline)) in GCC which should force the compiler to inline the function.
Make sure that the function definition is not exported. If it is, it obviously needs to be compiled as a standalone function, and that means that if your function is big, the call will probably not be inlined. (Remember, it's the call that gets inlined, not the function. A function might get inlined in one place and called in another, etc.)
So even if you know that the function is called only from one place, the compiler might not. Make sure to hide the definition of your function from other object files, for example by defining it in an anonymous namespace.
That being said, even if it is called from only one place, it does not mean that it is always a good idea to inline it. If your function is called rarely, it might waste a lot of memory in the CPU cache.
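A minimal sketch of hiding the definition (the names helper and compute are invented for illustration):

// some_file.cpp
namespace {
    // Internal linkage: the compiler knows no other translation unit
    // can call this, so with a single caller it can inline the body
    // and discard the out-of-line copy entirely.
    int helper(int x) {
        return x * x + 1;
    }
}

int compute(int x) {
    return helper(x); // likely inlined; no symbol needs to be exported
}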
Depending on how you wrote your function.
In some cases, yes!
int someCalculations(int x); /* defined elsewhere */

void doSomething(int *src, int *dst,
                 const int loopCountInner, const int loopCountOuter)
{
    int i, j;
    for (i = 0; i < loopCountOuter; i++) {
        for (j = 0; j < loopCountInner; j++) {
            *dst = someCalculations(*src);
            src++;
            dst++;
        }
    }
}
In this example, if this function is compiled as non-inlined, the compiler has essentially no knowledge of the trip counts of the two loops. This is a big deal for implementations that rely strongly on compile-time optimizations.
I came across an even worse case: the compiler assumed loopCountInner to be a large value and optimized for that case, but loopCountInner was actually 3 or 5, so the best choice would have been to fully unroll the inner loop!
For C++, probably the best way to handle this is to make the trip counts template parameters, as sketched below; but for C, the only way to generate differently optimized code for different use cases is to inline the function.
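A hedged sketch of that template alternative (the names mirror the example above; making the trip counts template parameters is one reading of what the answer suggests):

int someCalculations(int x); // assumed to exist elsewhere

// Each instantiation sees its trip counts as compile-time constants,
// so the compiler can fully unroll or vectorize without having to
// inline the function into its callers.
template <int LoopCountOuter, int LoopCountInner>
void doSomething(int *src, int *dst) {
    for (int i = 0; i < LoopCountOuter; i++)
        for (int j = 0; j < LoopCountInner; j++)
            *dst++ = someCalculations(*src++);
}

// Usage, e.g.: doSomething<100, 3>(src, dst); lets the compiler fully
// unroll the three-iteration inner loop for that call site.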
No: if the code is a rarely used function, then keeping it off the 'hot path' will be beneficial. An inlined function will use up cache space [in the instruction cache] whether or not the code is actually used. Tools like LTCG combined with profile-guided optimisation (in the MSFT world; not sure about Linux) go to great lengths to keep rarely used code off the hot path, and this can make a significant difference.

Is there a reason why not to use link-time optimization (LTO)?

GCC, MSVC, LLVM, and probably other toolchains have support for link-time (whole program) optimization to allow optimization of calls among compilation units.
Is there a reason not to enable this option when compiling production software?
I assume that by "production software" you mean software that you ship to the customers / goes into production. The answers at Why not always use compiler optimization? (kindly pointed out by Mankarse) mostly apply to situations in which you want to debug your code (so the software is still in the development phase -- not in production).
6 years have passed since I wrote this answer, and an update is necessary. Back in 2014, the issues were:
Link time optimization occasionally introduced subtle bugs, see for example Link-time optimization for the kernel. I assume this is less of an issue as of 2020. Safeguard against these kinds of compiler and linker bugs: Have appropriate tests to check the correctness of your software that you are about to ship.
Increased compile time. There are claims that the situation has significantly improved since 2014, for example thanks to slim objects.
Large memory usage. This post claims that the situation has drastically improved in recent years, thanks to partitioning.
As of 2020, I would try to use LTO by default on any of my projects.
This recent question raises another possible (but rather specific) case in which LTO may have undesirable effects: if the code in question is instrumented for timing, and separate compilation units have been used to try to preserve the relative ordering of the instrumented and instrumenting statements, then LTO has a good chance of destroying the necessary ordering.
I did say it was specific.
If you have well-written code, it should only be advantageous. You may hit a compiler/linker bug, but this goes for all types of optimisation; such bugs are rare.
The biggest downside is that it drastically increases link time.
Apart from this, consider a typical example from an embedded system:
void function1(void) { /* Do something */ } // located at address 0x1000
void function2(void) { /* Do something */ } // located at address 0x1100
void function3(void) { /* Do something */ } // located at address 0x1200
With predefined addresses, the functions can be called through those fixed addresses, like below:
((void (*)(void))0x1000)(); // expected to call function1
((void (*)(void))0x1100)(); // expected to call function2
((void (*)(void))0x1200)(); // expected to call function3
LTO can lead to unexpected behavior here, since it may inline or relocate these functions so that they are no longer at the expected addresses.
Updated:
In automotive embedded SW development, multiple parts of the SW are compiled and flashed onto separate sections.
The boot-loader, application(s), and application-configurations are independently flashable units. The boot-loader has special capabilities to update the application and application-configuration. At every power-on cycle, the boot-loader checks the SW application's and application-configuration's compatibility and consistency via hard-coded locations for SW versions, CRCs, and many more parameters. Linker-definition files are used to hard-code the variable locations and some function locations.
Given that the code is implemented correctly, link-time optimization should not have any impact on the functionality. However, there are scenarios where code that is not 100% correct will typically just work without link-time optimization, but will stop working with link-time optimization enabled. There are similar situations when switching to higher optimization levels, e.g. from -O2 to -O3 with gcc.
That is, depending on your specific context (age of the code base, size of the code base, depth of tests, are you starting your project or are you close to final release, ...) you would have to judge the risk of such a change.
One scenario where link-time-optimization can lead to unexpected behavior for wrong code is the following:
Imagine you have two source files, read.c and client.c, which you compile into separate object files. In read.c there is a function read that does nothing other than reading from a specific memory address. The content at this address, however, should be marked volatile, but unfortunately that was forgotten. From client.c the function read is called several times from the same function. Since read only performs one single read from the address, and there is no optimization beyond the boundaries of the read function, read will, every time it is called, access the respective memory location. Consequently, every time read is called from client.c, the code in client.c gets a freshly read value from the address, just as if volatile had been used.
Now, with link-time optimization, the tiny function read from read.c is likely to be inlined wherever it is called from client.c. Due to the missing volatile, the compiler will now realize that the code reads several times from the same address, and may therefore optimize away the memory accesses. Consequently, the code starts to behave differently.
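A minimal sketch of that scenario (the file names follow the answer; the address and the polling loop are invented for illustration):

// read.c -- the bug: the location should have been read through a
// volatile-qualified pointer, but the qualifier was forgotten.
unsigned read(void) {
    return *(unsigned *)0x4000; // hypothetical device register
}

// client.c -- without LTO each call is opaque, so the address really
// is read every time. With LTO, read() gets inlined, the compiler sees
// repeated loads of the same non-volatile location, and may fold them
// into a single load, turning this into an infinite loop.
void wait_for_ready(void) {
    while (read() == 0) {
        /* spin */
    }
}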
Rather than mandating that all implementations support the semantics necessary to accomplish all tasks, the Standard allows implementations intended to be suitable for various tasks to extend the language by defining semantics in corner cases beyond those mandated by the C Standard, in ways that would be useful for those tasks.
An extremely popular extension of this form is to specify that cross-module function calls will be processed in a fashion consistent with the platform's Application Binary Interface without regard for whether the C Standard would require such treatment.
Thus, if one makes a cross-module call to a function like:
#include <stdint.h>

uint32_t read_uint32_bits(void *p)
{
    return *(uint32_t*)p;
}
the generated code would read the bit pattern in a 32-bit chunk of storage at address p, and interpret it as a uint32_t value using the platform's native 32-bit integer format, without regard for how that chunk of storage came to hold that bit pattern. Likewise, if a compiler were given something like:
uint32_t read_uint32_bits(void *p);
uint32_t f1bits, f2bits;

void test(void)
{
    float f;
    f = 1.0f;
    f1bits = read_uint32_bits(&f);
    f = 2.0f;
    f2bits = read_uint32_bits(&f);
}
the compiler would reserve storage for f on the stack, store the bit pattern for 1.0f to that storage, call read_uint32_bits and store the returned value, store the bit pattern for 2.0f to that storage, call read_uint32_bits and store that returned value.
The Standard provides no syntax to indicate that the called function might read the storage whose address it receives using type uint32_t, nor to indicate that the pointer the function was given might have been written using type float, because implementations intended for low-level programming had already extended the language to support such semantics without using special syntax.
Unfortunately, adding in Link Time Optimization will break any code that relies upon that popular extension. Some people may view such code as broken, but if one recognizes the Spirit of C principle "Don't prevent programmers from doing what needs to be done", the Standard's failure to mandate support for a popular extension cannot be viewed as intending to deprecate its usage if the Standard fails to provide any reasonable alternative.
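For reference, a standards-defined way to express the same bit read, without relying on that extension, is memcpy-based punning (a minimal sketch; compilers typically recognize the pattern and emit a single load with no function call):

#include <stdint.h>
#include <string.h>

uint32_t read_uint32_bits(const void *p) {
    uint32_t result;
    memcpy(&result, p, sizeof result); // copies the bit pattern without
                                       // an incompatible-pointer dereference
    return result;
}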
LTO could also reveal edge-case bugs in code-signing algorithms. Consider a code-signing algorithm based on certain expectations about the TEXT portion of some object or module. Now LTO optimizes the TEXT portion away, or inlines stuff into it in a way the code-signing algorithm was not designed to handle. Worst case scenario, it only affects one particular distribution pipeline but not another, due to a subtle difference in which encryption algorithm was used on each pipeline. Good luck figuring out why the app won't launch when distributed from pipeline A but not B.
LTO support is buggy, and LTO-related issues have the lowest priority for compiler developers. For example: mingw-w64-x86_64-gcc-10.2.0-5 works fine with LTO, while mingw-w64-x86_64-gcc-10.2.0-6 segfaults with a bogus address. We had just noticed that the Windows CI stopped working.
Please refer to the following issue as an example.

Can I selectively (force) inline a function?

In the book Clean Code (and a couple of others I have come across and read) it is suggested to keep the functions small and break them up if they become large. It also suggests that functions should do one thing and one thing only.
In Optimizing software in C++ Agner Fog states that he does not like the rule of breaking up a function just because it crosses a certain threshold of a number of lines. He states that this results in unnecessary jumps which degrade performance.
First off, I understand that it will not matter if the code I am working on is not in a tight loop and that the functions are heavy so that the time it takes to call them is dwarfed by the time the code in the function takes to execute. But let's assume that I am working with functions that are, most of the time, used by other objects/functions and are performing relatively trivial tasks. These functions follow the suggestions listed in the first paragraph (that is, perform one single function and are small/comprehensible). Then I start programming a performance critical function that utilizes these other functions in a tight loop and is essentially a frame function. Lastly, assume that in-lining them has a benefit for the performance critical function but no benefit whatsoever to any other function (yes, I have profiled this, albeit with a lot of copying and pasting which I want to avoid).
Immediately, one can say: tag the function inline and let the compiler choose. But what if I don't want all those functions to be in a .inl file or exposed in the header? In my current situation, the performance-critical functions and the other functions they use are all in the same source file.
To sum up: can I selectively force inlining of a function (or functions) for a single caller, so that the end code behaves like one big function instead of several calls to other functions?
There is nothing that prevents you from putting inline on a static function in a .cpp file.
Some compilers have the option to force inlining of a function; see e.g. the GCC __attribute__((always_inline)) and a ton of options to fine-tune inlining optimizations (see the various -finline-* options).
My recommendation is to use inline or, even better, static inline wherever you see fit, and let the compiler decide. They usually do it pretty well.
You cannot force the inline. Also, function calls are pretty cheap on modern CPUs, compared to the cost of the work done. If your functions are large enough to need to be broken down, the additional time taken to do the call will be essentially nothing.
Failing that, you could ... try ... to use a macro.
No, inline is a recommendation to the compiler; it does not force it to do anything. Also, if you're working with MSVC++, note that __forceinline is a misnomer as well; it's just a stronger recommendation than inline.
This is as much about good old-fashioned straight C as it is about C++. I was pondering this the other day, because in an embedded world, where both speed and space need to be carefully managed, this can really matter (as opposed to the all-too-common "don't worry about it, your compiler is smart and memory is cheap" attitude prevalent in desktop/server development).
A possible solution that I have yet to vet is to basically use two names for the different variants, something like
inline int _max(int a, int b) {
    return a > b ? a : b;
}
and then
int max(int a, int b) {
    return _max(a, b);
}
This would give one the ability to selectively call either _max() or max(), while still defining the algorithm once and only once.
Inlining – For example, if there exists a function A that frequently calls function B, and function B is relatively small, then profile-guided optimizations will inline function B in function A.
VS Profile-Guided Optimizations
You can use the automated Profile Guided Optimization for Visual C++ plug-in in the Performance and Diagnostics Hub to simplify and streamline the optimization process within Visual Studio, or you can perform the optimization steps manually in Visual Studio or on the command line. We recommend the plug-in because it is easier to use. For information on how to get the plug-in and use it to optimize your app, see Profile Guided Optimization Plug-In.
If you have a known-hot function and want the compiler to inline more aggressively than usual, the flatten attribute offered by gcc/clang might be something to look into. In contrast to the inline keyword and attributes, it applies to inlining decisions for the functions called within the marked function.
__attribute__((flatten)) void hot_code() {
    // functions called here will be inlined if possible
}
See https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html and https://clang.llvm.org/docs/AttributeReference.html#flatten for official documentation.
Compilers are actually really really good at generating optimized code.
I would suggest just organizing your code into logical groupings (using additional functions if that enhanced readability), marking them inline if appropriate, and letting the compiler decide what code to optimally generate.
Quite surprised this hasn't been mentioned yet, but as of now you can tell the compiler (I believe it may only work with GCC/G++) to force-inline a function and ignore a couple of restrictions associated with it.
You can do so via __attribute__((always_inline)).
Example of it in use:
inline __attribute__((always_inline)) int pleaseInlineThis() {
    return 5;
}
Normally you should avoid forcing an inline, as the compiler knows better than you what is best; however, there are several use cases, such as OS/microcontroller development, where a real function call would break the functionality and the call must be inlined.
C++ compilers usually aren't very friendly to such controlled environments without some hacks.
As people have mentioned, you should avoid forcing inlining, as the compiler usually makes better decisions. There are several optimizations that you can enable to improve performance; these will inline functions if needed:
LTO: link-time optimization or interprocedural optimization
Profile guided optimization: optimizations based on a runtime profile
BOLT: Binary Optimization and Layout Tool
Polly: a high-level loop and data-locality optimizer

When should I use __forceinline instead of inline?

Visual Studio includes support for __forceinline. The Microsoft Visual Studio 2005 documentation states:
The __forceinline keyword overrides the cost/benefit analysis and relies on the judgment of the programmer instead.
This raises the question: When is the compiler's cost/benefit analysis wrong? And, how am I supposed to know that it's wrong?
In what scenario is it assumed that I know better than my compiler on this issue?
You know better than the compiler only when your profiling data tells you so.
The one place I am using it is licence verification.
One important factor in protecting against easy* cracking is to verify being licenced in multiple places rather than only one, and you don't want these places to be the same function call.
*) Please don't turn this into a discussion about how everything can be cracked: I know. Also, this alone does not help much.
The compiler is making its decisions based on static code analysis, whereas if you profile as don says, you are carrying out a dynamic analysis that can be much farther reaching. The number of calls to a specific piece of code is often largely determined by the context in which it is used, e.g. the data. Profiling a typical set of use cases will do this. Personally, I gather this information by enabling profiling on my automated regression tests. In addition to forcing inlines, I have unrolled loops and carried out other manual optimizations on the basis of such data, to good effect. It is also imperative to profile again afterwards, as sometimes your best efforts can actually lead to decreased performance. Again, automation makes this a lot less painful.
More often than not though, in my experience, tweaking algorithms gives much better results than straight code optimization.
I've developed software for limited resource devices for 9 years or so and the only time I've ever seen the need to use __forceinline was in a tight loop where a camera driver needed to copy pixel data from a capture buffer to the device screen. There we could clearly see that the cost of a specific function call really hogged the overlay drawing performance.
The only way to be sure is to measure performance with and without. Unless you are writing highly performance critical code, this will usually be unnecessary.
For SIMD code.
SIMD code often uses constants/magic numbers. In a regular function, every const __m128 c = _mm_setr_ps(1,2,3,4); becomes a memory reference.
With __forceinline, the compiler can load it once and reuse the value, unless your code exhausts the registers (usually 16).
CPU caches are great, but registers are still faster.
P.S. Just got a 12% performance improvement from __forceinline alone.
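A sketch of the kind of helper being described (the function and its constants are hypothetical; __forceinline is MSVC-specific, and the intrinsics live in xmmintrin.h):

#include <xmmintrin.h>

// Out of line, _mm_setr_ps amounts to a memory load of the constant
// on every call; force-inlined into a loop, the compiler can keep the
// constant vector in a register for the whole loop.
__forceinline __m128 apply_gain(__m128 v) {
    const __m128 c = _mm_setr_ps(1.0f, 2.0f, 3.0f, 4.0f); // magic numbers
    return _mm_mul_ps(v, c);
}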
The inline directive typically has no effect when used for functions which are:
recursive,
long, or
composed of loops.
If you want to force inlining in spite of this, use __forceinline.
Actually, even with the __forceinline keyword, Visual C++ sometimes chooses not to inline the code. (Source: the resulting assembly source code.)
Always look at the resulting assembly code where speed is of importance (such as tight inner loops that need to run on each frame).
Sometimes using #define instead of inline will do the trick. (Of course you lose a lot of checking by using #define, so use it only when and where it really matters.)
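For instance, the classic function-like macro (a sketch, with its classic caveat spelled out):

// Arguments are substituted textually, so they can be evaluated twice:
// MAX(i++, j) increments i once or twice depending on the comparison.
#define MAX(a, b) (((a) > (b)) ? (a) : (b))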
Actually, boost is loaded with it.
For example
BOOST_CONTAINER_FORCEINLINE flat_tree& operator=(BOOST_RV_REF(flat_tree) x)
    BOOST_NOEXCEPT_IF( (allocator_traits_type::propagate_on_container_move_assignment::value ||
                        allocator_traits_type::is_always_equal::value) &&
                       boost::container::container_detail::is_nothrow_move_assignable<Compare>::value)
{ m_data = boost::move(x.m_data); return *this; }
BOOST_CONTAINER_FORCEINLINE const value_compare &priv_value_comp() const
{ return static_cast<const value_compare &>(this->m_data); }
BOOST_CONTAINER_FORCEINLINE value_compare &priv_value_comp()
{ return static_cast<value_compare &>(this->m_data); }
BOOST_CONTAINER_FORCEINLINE const key_compare &priv_key_comp() const
{ return this->priv_value_comp().get_comp(); }
BOOST_CONTAINER_FORCEINLINE key_compare &priv_key_comp()
{ return this->priv_value_comp().get_comp(); }
public:
// accessors:
BOOST_CONTAINER_FORCEINLINE Compare key_comp() const
{ return this->m_data.get_comp(); }
BOOST_CONTAINER_FORCEINLINE value_compare value_comp() const
{ return this->m_data; }
BOOST_CONTAINER_FORCEINLINE allocator_type get_allocator() const
{ return this->m_data.m_vect.get_allocator(); }
BOOST_CONTAINER_FORCEINLINE const stored_allocator_type &get_stored_allocator() const
{ return this->m_data.m_vect.get_stored_allocator(); }
BOOST_CONTAINER_FORCEINLINE stored_allocator_type &get_stored_allocator()
{ return this->m_data.m_vect.get_stored_allocator(); }
BOOST_CONTAINER_FORCEINLINE iterator begin()
{ return this->m_data.m_vect.begin(); }
BOOST_CONTAINER_FORCEINLINE const_iterator begin() const
{ return this->cbegin(); }
BOOST_CONTAINER_FORCEINLINE const_iterator cbegin() const
{ return this->m_data.m_vect.begin(); }
There are several situations where the compiler is not able to determine categorically whether it is appropriate or beneficial to inline a function. Inlining may involve trade-offs that the compiler is unwilling to make, but you are (e.g., code bloat).
In general, modern compilers are actually pretty good at making this decision.
When you know that the function is going to be called in one place several times for a complicated calculation, then it is a good idea to use __forceinline. For instance, a matrix multiplication for animation may need to be called so many times that the calls to the function will start to be noticed by your profiler. As said by the others, the compiler can't really know about that, especially in a dynamic situation where the execution of the code is unknown at compile time.
A Case for noinline
I wanted to pitch in with an unusual suggestion and actually vouch for __declspec(noinline) in MSVC or the noinline attribute/pragma in GCC and ICC as an alternative to try out first over __forceinline and its equivalents when staring at profiler hotspots. YMMV, but I've gotten so much more mileage (measured improvements) out of telling the compiler what to never inline than what to always inline. It also tends to be far less invasive and can produce much more predictable and understandable hotspots when profiling the changes.
While it might seem very counter-intuitive and somewhat backward to try to improve performance by telling the compiler what not to inline, I'd claim based on my experience that it's much more harmonious with how optimizing compilers work and far less invasive to their code generation. A detail to keep in mind that's easy to forget is this:
Inlining a callee can often result in the caller, or the caller of the caller, to cease to be inlined.
This is what makes force inlining a rather invasive change to the code generation that can have chaotic results on your profiling sessions. I've even had cases where force inlining a function reused in several places completely reshuffled all top ten hotspots with the highest self-samples all over the place in very confusing ways. Sometimes it got to the point where I felt like I'm fighting with the optimizer making one thing faster here only to exchange a slowdown elsewhere in an equally common use case, especially in tricky cases for optimizers like bytecode interpretation. I've found noinline approaches so much easier to use successfully to eradicate a hotspot without exchanging one for another elsewhere.
It would be possible to inline functions much less invasively if we could inline at the call site instead of determining whether or not every single call to a function should be inlined. Unfortunately, I've not found many compilers supporting such a feature besides ICC. It makes much more sense to me, if we are reacting to a hotspot, to respond by inlining at the call site instead of making every single call of a particular function forcefully inlined. Lacking this wide support among most compilers, I've gotten far more successful results with noinline.
Optimizing With noinline
So the idea of optimizing with noinline is still with the same goal in mind: to help the optimizer inline our most critical functions. The difference is that instead of trying to tell the compiler what they are by forcefully inlining them, we are doing the opposite and telling the compiler what functions definitely aren't part of the critical execution path by forcefully preventing them from being inlined. We are focusing on identifying the rare-case non-critical paths while leaving the compiler still free to determine what to inline in the critical paths.
Say you have a loop that executes a million iterations, and a function baz which is only very rarely called in that loop, once every few thousand iterations on average in response to very unusual user inputs, even though it has only 5 lines of code and no complex expressions. You've already profiled this code, and the profiler shows in the disassembly that calling the function foo, which then calls baz, has the largest number of samples, with lots of samples distributed around the calling instructions. The natural temptation might be to force-inline foo. I would suggest instead trying to mark baz as noinline and timing the results. I've managed to make certain critical loops execute 3 times faster this way.
Analyzing the resulting assembly, the speedup came from the foo function now being inlined as a result of no longer inlining baz calls into its body.
I've often found in cases like these that marking the analogical baz with noinline produces even bigger improvements than force inlining foo. I'm not a computer architecture wizard to understand precisely why but glancing at the disassembly and the distribution of samples in the profiler in such cases, the result of force inlining foo was that the compiler was still inlining the rarely-executed baz on top of foo, making foo more bloated than necessary by still inlining rare-case function calls. By simply marking baz with noinline, we allow foo to be inlined when it wasn't before without actually also inlining baz. Why the extra code resulting from inlining baz as well slowed down the overall function is still not something I understand precisely; in my experience, jump instructions to more distant paths of code always seemed to take more time than closer jumps, but I'm at a loss as to why (maybe something to do with the jump instructions taking more time with larger operands or something to do with the instruction cache). What I can definitely say for sure is that favoring noinline in such cases offered superior performance to force inlining and also didn't have such disruptive results on the subsequent profiling sessions.
So anyway, I'd suggest giving noinline a try instead, and reaching for it first before force inlining.
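A minimal sketch of the pattern (foo and baz are the hypothetical names used above; the attribute syntax is GCC/Clang, while MSVC spells it __declspec(noinline)):

__attribute__((noinline)) void baz() {
    // rare-case work: runs once every few thousand iterations
}

void foo(int x) {
    // hot-path work ...
    if (x == 0) {   // almost never taken
        baz();      // kept out of line, so foo itself stays small
    }               // enough for the optimizer to inline into the loop
}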
Human vs. Optimizer
In what scenario is it assumed that I know better than my compiler on this issue?
I'd refrain from being so bold as to assume. At least I'm not good enough to do that. If anything, I've learned over the years the humbling fact that my assumptions are often wrong once I check and measure the things I try with the profiler. I have gotten past the stage (over a couple of decades of making my profiler my best friend) of taking completely blind stabs in the dark only to face humbling defeat and revert my changes, but at my best I'm still making, at most, educated guesses. Still, I, and hopefully most of us programmers, have always known better than our compilers how our product is supposed to be designed and how it is most likely going to be used by our customers. That at least gives us some edge in understanding the common-case and rare-case branches of code, which compilers don't possess (at least without PGO, and I've never gotten the best results with PGO). Compilers don't possess this type of runtime information and foresight of common-case user inputs. It is when I combine this user-end knowledge with a profiler in hand that I've found the biggest improvements, nudging the optimizer here and there, teaching it things like what to inline or, more commonly in my case, what to never inline.

What is wrong with using inline functions?

While it would be very convenient to use inline functions in some situations,
Are there any drawbacks with inline functions?
Conclusion:
Apparently, there is nothing wrong with using inline functions.
But it is worth noting the following points!
Overuse of inlining can actually make programs slower. Depending on a function's size, inlining it can cause the code size to increase or decrease. Inlining a very small accessor function will usually decrease code size while inlining a very large function can dramatically increase code size. On modern processors smaller code usually runs faster due to better use of the instruction cache. - Google Guidelines
The speed benefits of inline functions tend to diminish as the function grows in size. At some point the overhead of the function call becomes small compared to the execution of the function body, and the benefit is lost - Source
There are a few situations where an inline function may not work:
For a function returning values, if a return statement exists.
For a function not returning any values, if a loop, switch or goto statement exists.
If a function is recursive. - Source
The __inline keyword causes a function to be inlined only if you specify the optimize option. If optimize is specified, whether or not __inline is honored depends on the setting of the inline optimizer option. By default, the inline option is in effect whenever the optimizer is run. If you specify optimize, you must also specify the noinline option if you want the __inline keyword to be ignored. - Source
It is worth pointing out that the inline keyword is actually just a hint to the compiler. The compiler may ignore the inline and simply generate code for the function someplace.
The main drawback to inline functions is that it can increase the size of your executable (depending on the number of instantiations). This can be a problem on some platforms (eg. embedded systems), especially if the function itself is recursive.
I'd also recommend making inline'd functions very small - The speed benefits of inline functions tend to diminish as the function grows in size. At some point the overhead of the function call becomes small compared to the execution of the function body, and the benefit is lost.
It could increase the size of the executable, and I don't think compilers will always actually make them inline even though you used the inline keyword. (Or is it the other way around, like what Vaibhav said?...)
I think it's usually OK if the function has only 1 or 2 statements.
Edit: Here's what the linux CodingStyle document says about it:
Chapter 15: The inline disease
There appears to be a common misperception that gcc has a magic "make me faster" speedup option called "inline". While the use of inlines can be appropriate (for example as a means of replacing macros, see Chapter 12), it very often is not. Abundant use of the inline keyword leads to a much bigger kernel, which in turn slows the system as a whole down, due to a bigger icache footprint for the CPU and simply because there is less memory available for the pagecache. Just think about it; a pagecache miss causes a disk seek, which easily takes 5 milliseconds. There are a LOT of cpu cycles that can go into these 5 milliseconds.
A reasonable rule of thumb is to not put inline at functions that have more than 3 lines of code in them. An exception to this rule are the cases where a parameter is known to be a compiletime constant, and as a result of this constantness you know the compiler will be able to optimize most of your function away at compile time. For a good example of this latter case, see the kmalloc() inline function.
Often people argue that adding inline to functions that are static and used only once is always a win since there is no space tradeoff. While this is technically correct, gcc is capable of inlining these automatically without help, and the maintenance issue of removing the inline when a second user appears outweighs the potential value of the hint that tells gcc to do something it would have done anyway.
There is a problem with inline: once you define a function in a header file (which implies inline, either explicitly or implicitly by defining the body of a member function inside the class), there is no simple way to change it without forcing your users to recompile (as opposed to relink). Often this causes problems, especially if the function in question is defined in a library and the header is part of its interface.
I agree with the other posts:
inline may be superfluous because the compiler will do it
inline may bloat your code
A third point is that it may force you to expose implementation details in your headers, e.g.,
class OtherObject;

class Object {
public:
    void someFunc(OtherObject& otherObj) {
        otherObj.doIt(); // Yikes, requires the full definition of OtherObject!
    }
};
Without the inline, a forward declaration of OtherObject was all you needed. With the inline, your header needs the full definition of OtherObject.
As others have mentioned, the inline keyword is only a hint to the compiler. In actual fact, most modern compilers will completely ignore this hint. The compiler has its own heuristics to decide whether to inline a function, and quite frankly doesn't want your advice, thank you very much.
If you really, really want to make something inline, if you've actually profiled it and looked at the disassembly to ensure that overriding the compiler heuristic actually makes sense, then it is possible:
In VC++, use the __forceinline keyword
In GCC, use __attribute__((always_inline))
The inline keyword does have a second, valid purpose, however: defining functions in header files but not inside a class definition. The inline keyword is needed so that the multiple definitions of the function (one per translation unit that includes the header) don't cause a multiple-definition error.
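A minimal sketch of that second purpose (clamp01 is a hypothetical example):

// utils.h -- included from several .cpp files. Without 'inline' this
// would produce one definition of clamp01 per translation unit and a
// multiple-definition link error; 'inline' tells the toolchain to
// merge them. Whether the call is actually inlined is a separate
// decision the compiler makes on its own.
inline double clamp01(double v) {
    return v < 0.0 ? 0.0 : (v > 1.0 ? 1.0 : v);
}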
I doubt it. Even the compiler automatically inlines some functions for optimization.
I don't know if my answer's related to the question but:
Be very careful about inline virtual methods! Some buggy compilers (previous versions of Visual C++ for example) would generate inline code for virtual methods where the standard behaviour was to do nothing but go down the inheritance tree and call the appropriate method.
You should also note that the inline keyword is only a request. The compiler may choose not to inline it; likewise, the compiler may choose to inline a function you did not declare inline if it thinks the speed/size trade-off is worth it.
This decision is generally made based on a number of things, such as the setting between optimising for speed (avoiding the function call) and optimising for size (inlining can cause code bloat, so it isn't great for large, repeatedly used functions).
With the VC++ compiler you can override this decision by using __forceinline.
So, in general:
Use inline if you really want to have a function in a header; elsewhere there's little point, because if you're going to gain anything from it, a good compiler will be making it inline for you anyway.
Inlining larger functions can make the program larger, resulting in more instruction cache misses and making it slower.
Deciding when a function is small enough that inlining will increase performance is quite tricky. Google's C++ Style Guide recommends only inlining functions of 10 lines or less.
(Simplified) Example:
Imagine a simple program that just calls function "X" 5 times.
If X is small and all calls are inlined: Potentially all instructions will be prefetched into the instruction cache with a single main memory access - great!
If X is large, let's say approaching the capacity of the instruction cache:
Inlining X will potentially result in fetching instructions from memory once for each inline instance of X.
If X isn't inlined, instructions may be fetched from memory on the first call to X, but could potentially remain in the cache for subsequent calls.
Excessive inlining of functions can increase the size of the compiled executable, which can have a negative impact on cache performance; but nowadays compilers decide about function inlining on their own (depending on many criteria) and ignore the inline keyword.
Among other issues with inline functions, which I've seen heavily overused (I've seen inline functions of 500 lines), what you have to be aware of are:
build instability
Changing the source of an inline function causes all the users of the header to recompile
#includes leak into the client. This can be very nasty if you rework an inlined function and remove a no-longer used header which some client has relied on.
executable size
Every time a function is inlined instead of called, the compiler has to generate the whole code of the function again at that site. This is OK if the code of the function is short (one or two lines), not so good if the function is long.
Some functions can produce a lot more code than at first appears. A case in point is a 'trivial' destructor of a class that has a lot of non-POD member variables (or two or three member variables with rather messy destructors). A call has to be generated for each destructor.
execution time
This is very dependent on your CPU cache and shared libraries, but locality of reference is important. If the code you might be inlining happens to be held in the CPU cache in one place, a number of clients can find the code and not suffer a cache miss and the subsequent memory fetch (and worse, should it happen, a disk fetch). Sadly this is one of those cases where you really need to do performance analysis.
The coding standard where I work limits inline functions to simple setters/getters, and specifically says destructors should not be inline, unless you have performance measurements showing that the inlining confers a noticeable advantage.
In addition to the other great answers: at least once I saw a case where forced inlining actually slowed down the affected code by 1.5x. There was a nested loop inside (a pretty small one), and when this function was compiled as a separate unit, the compiler managed to efficiently unroll and optimize it. But when the same function was inlined into a much bigger outer function, the compiler (MSVC 2017) failed to optimize this loop.
As other people have said, an inline function can create a problem if the code is large. As each instruction is stored in a specific memory location, overuse of inline functions makes the code take more time to execute.
There are a few other situations where inline may not work:
It does not work in the case of a recursive function.
It may also not work with static variables.
It also does not work where a loop, switch, etc. is used, or, we can say, with multiple statements.
And the function main cannot work as an inline function.