Possible Duplicate: Inline functions vs Preprocessor macros
Hello, can somebody please explain what exactly an inline function is, and what the difference is from a regular macro? (I know that it works at compile time rather than in the preprocessor, but so what?) Thanks in advance for any help; I looked on Google but didn't find anything understandable.
(Assuming here that you're talking about C/C++.)
An inline function has its code copied into the points where it's called, much like a macro would.
The big reason you would use inline functions rather than macros to accomplish this is that the macro language is much weaker than actual C/C++ code; it's harder to write understandable, re-usable, non-buggy macros. A macro doesn't create a lexical scope, so variables in one can collide with those already in scope where it's used. It doesn't type-check its arguments. It's easy to introduce unexpected syntactic errors, since all a macro does is basically search-and-replace.
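To make the scoping and search-and-replace points concrete, here is a minimal sketch (the SWAP macro and the names in it are illustrative, not from the original answer):

#define SWAP(a, b) int tmp = (a); (a) = (b); (b) = tmp;

void demo()
{
    int x = 1, y = 2;
    int tmp = 99;   // already in scope at the call site
    // SWAP(x, y);  // after text replacement: redefinition of 'tmp' -- compile error
}

// An inline function introduces its own lexical scope, so no collision is
// possible, and its arguments are type-checked:
inline void swap_ints(int& a, int& b)
{
    int tmp = a;    // local to the function
    a = b;
    b = tmp;
}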
Also, IIRC, the compiler can choose to ignore an inline directive if it thinks that's really boneheaded; it can't do that with a macro.
Or, to rephrase this in a more opinionated and short way: macros (in C/C++; not, say, in Lisp dialects) are an awful kludge, and inline functions let you avoid them.
Also, keep in mind that it's often not a great idea to mark a function as inline. Compilers will generally inline or not as they see fit; by marking a function inline, you're taking responsibility for a low-level decision that, most of the time, the compiler is going to know more about than you do.
There is a big difference between macros and inline functions:
inline is only a hint to the compiler that the function might be inlined. It does not guarantee that it will be, and the compiler might inline functions not marked with inline.

Macro invocation does not perform type checking, so macros are not type safe (see the sketch after this list).

Using a function instead of a macro is more likely to give you nice, understandable compiler output if there is an error.

It is easier to debug functions; using macros complicates debugging a lot. Most compilers will give you an option to enable or disable inlining; this is not possible with macros.
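As an illustration of the type-checking point, a minimal sketch (the macro and function names are made up):

#include <string>

#define SQUARE_MACRO(x) ((x) * (x))         // accepts any token sequence

inline int square(int x) { return x * x; }  // arguments are type-checked

int main()
{
    square(3);                       // fine
    // square(std::string{});        // clear error: no conversion to int
    // SQUARE_MACRO(std::string{});  // confusing error about operator* in the expanded text
    return 0;
}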
The general rule in C++ is that macros should be avoided whenever possible.
Possible Duplicate: When should I write the keyword 'inline' for a function/method?
So this is a question that has bugged me for a while, and I can't get a definitive answer. My understanding is that a good compiler will generally realise when it is both safe and advantageous to inline a function and, if optimisation is switched on, it will inline all such functions whether or not they are explicitly identified as inline by the programmer. Also, a compiler will recognise when it is not safe or sensible to inline a function and will simply ignore the programmer's request in such cases.
Thus, I would like to know: what is the advantage of explicitly declaring a function inline? As long as optimisation is switched on, the compiler will inline all the functions it deems sensible to inline, and only those functions.
I have found some discussion of inline protecting against multiple definitions due to nested header files, but surely include-guarding the header (#ifndef/#define) is better practice, which again renders the keyword inline moot?
You're spot on about the compiler optimizations. You're just wrong in your assumption of what inline is for. Despite the name, inline is not for optimization. inline is primarily there to "violate" the one definition rule with impunity. Basically, it tells the linker that many translation units may see that definition, so it should not barf on finding it in multiple translation units.
Some compilers may treat it as a hint to inline the function, but that's totally up to the compiler and perfectly valid to just ignore that hint.
Header guards only protect against multiple definitions within the same translation unit. They do not work across translation units.

Header guards don't protect against multiple-definition errors: multiple definitions are encountered by the linker, and occur when the same definition is included into separate compilation units.
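A minimal sketch of what that means in practice (file names are hypothetical):

// util.h -- include-guarded, but included by two different .cpp files
#ifndef UTIL_H
#define UTIL_H

// Without 'inline', a.o and b.o would each contain a definition of twice(),
// and the link would fail with "multiple definition of 'twice'".
// The guard cannot prevent that: it only stops double inclusion per file.
inline int twice(int x) { return 2 * x; }

#endif

// a.cpp: #include "util.h"
// b.cpp: #include "util.h"
// With 'inline', the linker is told the identical definitions are one entity.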
Apparently, preprocessor macros in C++ are justifiably feared and shunned by the C++ community. However, there are several cases where C++ macros are beneficial.

Seeing as preprocessor macros can be extremely useful and can reduce repetitive code in a very straightforward manner, this leaves me with the question: what exactly is it that makes preprocessor macros "evil"? Or, as the question title says, which feature (or removal of a feature) would be needed to make them useful as a "good" development tool, instead of a fill-in that everyone's ashamed of using? (After all, the Lisp languages seem to embrace macros.)
Please Note: This is not about #include or #pragma or #ifdef. This is about #define MY_MACRO(...) ...
Note: I do not intend for this question to be subjective. Should you think it is, feel free to vote to move it to programmers.SE.
Macros are widely considered evil because the preprocessor is a stupid text-replacement tool that has little to no knowledge of C/C++.
Four very good reasons why macros are evil can be found in the C++ FAQ Lite.
Where possible, templates and inline functions are a better choice. The only reason I can think of why C++ still needs the preprocessor is for #includes and comment removal.
A widely disputed advantage is using them to reduce code repetition; but as you can see from the Boost.Preprocessor library, much effort has to be put into abusing the preprocessor for simple logic such as loops, leading to ugly syntax. In my opinion, it is a better idea to write scripts in a real high-level programming language for code generation instead of using the preprocessor.
Most preprocessor abuse comes from misunderstanding. To quote Paul Mensonides (the author of the Boost.Preprocessor library):
Virtually all issues related to the misuse of the preprocessor stems from attempting to make object-like macros look like constant variables and function-like macro invocations look like underlying-language function calls. At best, the correlation between function-like macro invocations and function calls should be incidental. It should never be considered to be a goal. That is a fundamentally broken mentality.
As the preprocessor is well integrated into C++, it's easier to blur the line, and most people don't see a difference. For example, ask someone to write a macro to add two numbers together; most people will write something like this:
#define ADD(x, y) ((x) + (y))
This is completely wrong. Run it through the preprocessor:
#define ADD(x, y) ((x) + (y))
ADD(1, 2) // outputs ((1) + (2))
But the answer should be 3, since adding 1 to 2 is 3. Yet instead, a macro is written to generate a C++ expression. Not only that, it could be thought of as a C++ function, but it's not. This is where it leads to abuse: it's just generating a C++ expression, and a function is a much better way to go (see the sketch below).
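For contrast, a minimal sketch of the function alternative (add is my name for it):

// The function actually computes the sum, with type checking and ordinary
// call semantics; constexpr even lets the compiler evaluate it at compile time.
constexpr int add(int x, int y) { return x + y; }

static_assert(add(1, 2) == 3, "a value, not pasted text");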
Furthermore, macros don't work like functions at all. The preprocessor works through a process of scanning and expanding macros, which is very different from using a call stack to call functions.
There are times when it can be acceptable for macros to generate C++ code, as long as they aren't blurring the lines. Just as you might use Python as a preprocessor to generate code, the C preprocessor can do the same, with the advantage that it doesn't need an extra build step.
Also, the preprocessor can be used to build DSLs, like here and here, but these DSLs have a predefined grammar in the preprocessor that they use to generate C++ code. It's not really blurring the lines, since they use a different grammar.
Macros have one notable feature: they are very easy to abuse and rather hard to debug. You can write just about anything with macros; then macros are expanded into one-liners, and when nothing works, you have a very hard time debugging the resulting code.
That feature alone makes one think ten times about whether and how to use macros for a task.
And don't forget that macros are expanded before actual compilation, so they automatically ignore namespaces, scopes, type safety and a ton of other things.
The most important thing about macros is that they have no scope and do not care about context. They are almost a dumb text-replacement tool. So when you #define max(...), then everywhere you have a max it gets replaced; if someone adds an overly generic macro name in a header, it tends to influence code it was never intended to.
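A minimal sketch of that collision (a classic: a legacy header defining max):

#include <algorithm>

#define max(a, b) ((a) > (b) ? (a) : (b))  // e.g. dragged in by some legacy header

int f()
{
    // return std::max(1, 2);  // the preprocessor rewrites the 'max' token first,
                               // expanding to std::((1) > (2) ? (1) : (2)) -- a syntax error
    return max(1, 2);          // only the macro spelling still works
}

#undef max  // the cleanup recommended below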
Another thing is that, when used without care, they lead to hard-to-read code, since no one can easily see what the macro might evaluate to, especially when multiple macros are nested.
A good guideline is to choose unique names, and when generating boilerplate code, #undef them as soon as possible to not pollute the namespace.
Additionally, they do not offer type safety or overloading.
Sometimes macros are arguably a good tool for generating boilerplate code; for example, with the help of Boost.PP you could create a macro that helps you create enums like:
ENUM(xenum,(a,b,(c,7)));
which could expand to
enum xenum { a, b, c=7 };
std::string to_string( xenum x ) { .... }
Things like assert() that need to react to NDEBUG are also often easier to implement as macros.
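A minimal sketch of such a macro (MY_ASSERT is a hypothetical name; assert() in <cassert> works along these lines):

#include <cstdio>
#include <cstdlib>

#ifdef NDEBUG
#define MY_ASSERT(cond) ((void)0)  // vanishes entirely in release builds
#else
#define MY_ASSERT(cond)                                               \
    ((cond) ? (void)0                                                 \
            : (std::fprintf(stderr, "%s:%d: assertion failed: %s\n",  \
                            __FILE__, __LINE__, #cond),               \
               std::abort()))
#endif

// Only a macro can capture the caller's __FILE__/__LINE__ and stringise the
// condition with #cond; a function would see its own location and a bool.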
There are many cases where a C developer uses macros and a C++ developer uses templates.

There are obviously corner cases where macros are useful, but most of the time it's bad habits from the C world applied to C++ by people who believe there is such a language as "C/C++".

So it's easier to say "macros are evil" than to risk a developer misusing them.
Macros do not offer type safety.

Parameters can be evaluated twice, e.g. #define MAX(a,b) ((a)>(b) ? (a) : (b)) applied to MAX(i++, y--) (see the sketch below).

Debugging is harder, as macro names do not appear in the symbol table.
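A minimal demonstration of the double-evaluation point (my_max is a made-up replacement):

#include <iostream>

#define MAX(a, b) ((a) > (b) ? (a) : (b))

// A function template evaluates each argument exactly once:
template <typename T>
inline T my_max(T a, T b) { return a > b ? a : b; }

int main()
{
    int i = 3, y = 5;
    int m = MAX(i++, y--);   // expands to ((i++) > (y--) ? (i++) : (y--)):
                             // y-- runs twice on this path
    std::cout << m << ' ' << i << ' ' << y << '\n';  // prints: 4 4 3

    int a = 3, b = 5;
    int n = my_max(a++, b--);  // each side effect happens exactly once
    std::cout << n << ' ' << a << ' ' << b << '\n';  // prints: 5 4 4
}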
Forcing programmers to use proper naming for macros, plus better tools for tracking macro replacement, would fix most of my problems. I can't really say I've had major issues so far; it's something you burn yourself with and learn to take special care about later on. But macros badly need better integration with IDEs and debuggers.
According to the Wikipedia C++ article:
C++ is designed to give the programmer choice, even if this makes it possible for the programmer to choose incorrectly.
If it is designed this way, why is there no standard way to force the compiler to inline something, even if I might be wrong?

Or, to ask it another way: why is the inline keyword just a hint?
I think I have no choice here.
In the OOP world we call methods on objects, and directly accessing members should be avoided. If we can't force accessors to be inlined, then we are unable to write high-performance but still maintainable applications.

(I know many compilers implement their own way of forcing inlining, but it's ugly. Using macros to make inline accessors on a class is ugly too.)
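For illustration, a minimal sketch of the kind of accessor I mean (the class is made up):

class Point
{
public:
    int x() const { return x_; }  // defined in the class body, so implicitly inline
    int y() const { return y_; }
private:
    int x_ = 0;
    int y_ = 0;
};

// If x() and y() are not actually inlined, every access in a hot loop pays
// call overhead that direct member access would avoid.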
Does the compiler always do it better than the programmer?
How would a compiler inline a recursive function (especially if the compiler does not support tail-call optimisation; and even if it does, the function may not be tail-call optimisable)?

This is just one reason why the compiler should decide whether inlining is practical or not. There may be others that I can't think of right now.
Does the compiler always do it better than the programmer?
No, not always... but the programmer is far more error-prone, and less likely to maintain the optimal tuning over a span of years. The bottom line is that inlining only helps performance if the function is really small (for at least one common/important code path), but then it can help by about an order of magnitude, depending on many things of course. It's often impractical for the programmer to assess, let alone keep a careful eye on, how trivial a function is, and the thresholds can vary with compiler implementation choices, command-line options, CPU model, etc. There are so many things that can suddenly bloat a function: any non-builtin type can trigger all sorts of different behaviours (especially in templates), any operator (even new) can be overloaded, and the verbosity of calling conventions and exception-handling steps isn't generally obvious to the programmer.
The chances are that if the compiler isn't inlining something small enough that you'd expect a useful performance improvement from inlining it, then the compiler is aware of some implementation issue you're not that would actually make it worse. In those grey cases where the compiler might go either way and you're just over some threshold, the performance difference isn't likely to be significant anyway.
Further, some programmers (myself included) can be lazy and deliberately abuse inline as a convenient way to put an implementation in a header file, getting around the ODR, even though they know those functions are large and that it would be disastrous if the compiler actually inlined them (or were required to). This doesn't preclude a forced-inline keyword/notation, though; it just explains why it's hard to change the expectations around the current inline keyword.
Or, to ask it another way: why is the inline keyword just a hint?
Because you "might" know better than the compiler.
Most of the time, for functions not marked inline (and correctly declared/defined), the compiler, depending on its configuration and implementation, will itself evaluate whether the function can be inlined or not.
For example, most compilers will automatically inline member functions that are fully defined in the header, if the code isn't too long and/or too complex. Since the function is available in the header, why not inline it as much as we can?
However, this doesn't happen in, for example, Debug mode in Visual Studio: in Debug builds the debug information still needs to map to the binary code of the functions, so the compiler avoids inlining, but will still inline functions marked inline, because the user asked for it. That's useful if you want to mark functions you don't need debug-time information for (like simple getters) while getting better performance at debug time.
In Release mode (by default) the compiler will aggressively inline everything it can, making it harder to debug some parts of the code even if you activate debugging information.
So the general idea is that if you write code in a way that helps the compiler inline, it will inline as much as it can. If you write your code in ways that are hard or impossible to inline, it will avoid doing so. If you mark something inline, you just tell the compiler that if it finds inlining hard, but not impossible, it should do it anyway.
As inlining depends on both contexts of the caller and the callee, there is no "rule".
What's often advised is to not explicitly mark functions inline except in two cases:

if you need to put a function definition in a header, it simply has to be inline; this is often the case for template (member or not) functions, and for other utility functions that are just shortcuts;

if you want a specific compiler to behave in a specific way at compile time, such as marking some member functions inline so they are inlined even in the Debug configuration on Visual Studio compilers.
Does the compiler always do it better than the programmer?
No, and that's why using the inline keyword can sometimes help. The programmer can sometimes have a better overall view of what's needed than the compiler. For example, if the programmer wants the binary to be as small as possible, then depending on the code, inlining can be harmful; in applications where speed is required, aggressive inlining can help very much. How would the compiler know what's required? It has to be configured and allowed to know, in a fine-grained way, what is really wanted inline.
Mistaken assumption.

There is a way. It's spelled #define, and for many early C projects that was good enough. inline was sufficiently different - a hint, with better semantics - that it could be added alongside macros. But once you had both, there was little room left for a third option in between: one with the nicer semantics, but non-optional.
If you really need to force a function to be inlined (why?), you can do it: copy the code and paste it, or use a macro.
Suppose I have a 10-line function. If I add the inline keyword, let's say there is a 50% chance the compiler will inline it.

If I have a 2-line function, there might be a 90% chance it will be inlined.

Can I split the code in the 10-line function into 5 functions to give it a better chance of being inlined?
There may be a reason why the compiler isn't inlining it, possibly something to look at. In addition, the function call overhead becomes less of an issue with longer functions, so inlining them may not be as important (if that's your only reason).
Splitting the function into 5 small functions will just make a mess of your code, and possibly confuse the compiler and end up with it not inlining anything. I would not recommend that.
Depending on your C++ compiler, you may be able to force it to inline the function. Visual C++ has the __forceinline attribute, as well as a setting for how inlining should be handled and how often it should be used in the project settings. As Tony mentions, the GCC equivalent is __attribute__((always_inline)).
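A minimal sketch of the usual portability wrapper for this (FORCE_INLINE is a name I made up; the two compiler-specific spellings are real):

#if defined(_MSC_VER)
#define FORCE_INLINE __forceinline
#elif defined(__GNUC__)
#define FORCE_INLINE inline __attribute__((always_inline))
#else
#define FORCE_INLINE inline  // fall back to the plain hint
#endif

FORCE_INLINE int lerp(int a, int b, int t)
{
    return a + (b - a) * t;
}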
You may also be able to use some preprocessor trickery to inline the code itself, but I would not typically recommend that.
If it makes the code more readable, go for it. If not, trust the compiler and don't go messing up your code on the off chance that it'll help. The compiler's a lot smarter than you think, and generally knows better than you do when inlining will help -- and when it won't, or worse, will break stuff.
Here is a small question about inline functions in C++.

At what stage of compilation in C++ are inline functions actually inlined at the call site? How does that basically work?

Let's say the compiler has decided that a particular function should be inlined after the programmer requested it with the inline keyword in front of the function. When does the compiler do that for the programmer? I mean, at what stage of compilation? Is it at the preprocessing stage, the way macros are expanded in C?
It will vary by compiler. And some stages in some compilers will have no corresponding stages in other compilers. So your question doesn't really have a definite answer.
But generally it's done after the parse tree for the function is created, but before code is actually generated or many optimizations are done. This is the optimal place to do it, because you want the maximum amount of information available for the optimizer to work with.
Doing it like a preprocessor macro expansion would generally be too early. The compiler wouldn't have enough information then to do the appropriate type checking, and it would also be easier to make mistakes that cause side effects to happen more than once, and so on.
And GMan provided an excellent Wikipedia link in a comment that goes into much more detail about the function inlining process than I do here. My answer is generally true, but there is a LOT of variation, even more than I thought there was.