Apparently the preprocessor macros in C++ are
justifiably feared and shunned by the C++ community.
However, there are several cases where C++ macros are beneficial.
Seeing as preprocessor macros can be extremely useful and can reduce repetitive code in a very straightforward manner, this leaves me with the question: what exactly is it that makes preprocessor macros "evil"? Or, as the question title says, which feature (or removal of a feature) would be needed to make preprocessor macros useful as a "good" development tool, instead of a fill-in that everyone's ashamed of using? (After all, the Lisp languages seem to embrace macros.)
Please Note: This is not about #include or #pragma or #ifdef. This is about #define MY_MACRO(...) ...
Note: I do not intend for this question to be subjective. Should you think it is, feel free to vote to move it to programmers.SE.
Macros are widely considered evil because the preprocessor is a stupid text replacement tool that has little to no knowledge of C/C++.
Four very good reasons why macros are evil can be found in the C++ FAQ Lite.
Where possible, templates and inline functions are a better choice. The only reason I can think of why C++ still needs the preprocessor is for #includes and comment removal.
A widely disputed advantage is using it to reduce code repetition; but as you can see from the Boost preprocessor library, much effort has to be put into abusing the preprocessor for simple logic such as loops, leading to ugly syntax. In my opinion, it is a better idea to write scripts in a real high-level programming language for code generation instead of using the preprocessor.
Most preprocessor abuse comes from misunderstanding. To quote Paul Mensonides (the author of the Boost.Preprocessor library):

Virtually all issues related to the misuse of the preprocessor stems from attempting to make object-like macros look like constant variables and function-like macro invocations look like underlying-language function calls. At best, the correlation between function-like macro invocations and function calls should be incidental. It should never be considered to be a goal. That is a fundamentally broken mentality.
As the preprocessor is well integrated into C++, it's easier to blur the line, and most people don't see a difference. For example, ask someone to write a macro to add two numbers together, and most people will write something like this:
#define ADD(x, y) ((x) + (y))
This is completely wrong. Run this through the preprocessor:
#define ADD(x, y) ((x) + (y))
ADD(1, 2) // outputs ((1) + (2))
But the answer should be 3, since adding 1 to 2 is 3. Yet instead a macro is written to generate a C++ expression. Not only that, it could be mistaken for a C++ function, but it's not. This is where it leads to abuse. It's just generating a C++ expression, and a function is a much better way to go.
Furthermore, macros don't work like functions at all. The preprocessor works through a process of scanning and expanding macros, which is very different than using a call stack to call functions.
There are times when it can be acceptable for macros to generate C++ code, as long as it isn't blurring the lines. Just as you might use Python as a preprocessor to generate code, the preprocessor can do the same, and it has the advantage that it doesn't need an extra build step.
Also, the preprocessor can be used with DSLs, like here and here, but these DSLs have a predefined grammar in the preprocessor that they use to generate C++ code. It's not really blurring the lines since it uses a different grammar.
Macros have one notable feature: they are very easy to abuse and rather hard to debug. You can write just about anything with macros; then they are expanded into one-liners, and when nothing works you have a very hard time debugging the resulting code.
This feature alone should make one think ten times about whether and how to use macros for a task.
And don't forget that macros are expanded before actual compilation, so they automatically ignore namespaces, scopes, type safety and a ton of other things.
The most important thing about macros is that they have no scope and do not care about context. They are almost a dumb text replacement tool. So when you #define max(..., then everywhere you have a max it gets replaced; if someone adds overly generic macro names in their headers, they tend to influence code that they were never intended to touch.
Another thing is that, when used without care, they lead to quite hard-to-read code, since no one can easily see what the macro could evaluate to, especially when multiple macros are nested.
A good guideline is to choose unique names, and when generating boilerplate code, #undef them as soon as possible to not pollute the namespace.
Additionally, they do not offer type safety or overloading.
Sometimes macros are arguably a good tool for generating boilerplate code; for example, with the help of Boost.Preprocessor you could create a macro that helps you create enums like:
ENUM(xenum,(a,b,(c,7)));
which could expand to
enum xenum { a, b, c=7 };
std::string to_string( xenum x ) { .... }
Things like assert() that need to react to NDEBUG are also often easier to implement as macros.
There are many cases where a C developer uses macros and a C++ developer uses templates.
There are obviously corner cases where macros are useful, but most of the time it's bad habits from the C world applied to C++ by people who believe there is such a language as "C/C++".
So it's easier to say "they're evil" than to risk a developer misusing them.
Macros do not offer type safety
Problems where parameters are evaluated twice, e.g. #define MAX(a,b) ((a)>(b) ? (a) : (b)) applied to MAX(i++, y--)
Problems with debugging as their names do not occur in the symbol table.
Forcing programmers to use proper naming for their macros, plus better tools to track the replacement of macros, would fix most of my problems. I can't really say I've had major issues so far. It's something you burn yourself with and learn to take special care with later on. But macros badly need better integration with IDEs and debuggers.
Though I am a fan of macros in general, I don't understand why the Arduino makers chose to use macros instead of actual functions for some of their arithmetic "functions". To name a few examples:
min()
max()
constrain()
Their website warns you not to call functions from within these "functions" or to use pre/postfix operators inside the parentheses, because they are actually macros.
Considering that the Arduino language is actually C++, they could have easily used (inline) functions instead and prevented users from falling into one of the well-known macro pitfalls.
People usually do things for reasons, and so far I have not found these reasons. So my question: why did the Arduino makers choose to use macros instead of functions?
Arduino is built on much older code and libraries, such as avr-libc, where macros were used extensively long before Arduino even existed.
In modern programming, macros are discouraged in favor of inline functions, as they do no type checking, produce poor compile-time errors, and if not crafted carefully can lead to side effects.
In C, when you'd like to do generic programming, your only language-supported option is macros. They work great and are widely used, but are discouraged if you can get by with inline functions or regular functions instead. (If using gcc, you can also use gcc statement expressions, which avoid the double-evaluation "bug". Example.)
C++, however, has done away with the so-called "evils" of macros by creating templates. I'm still somewhat new to the full-blown gigantic behemoth of a language that is C++ (I assess it must have like 4 or 5x as many features and language constructs as C), and generally have favored macros or gcc statement expressions, but am being pressured more and more to use templates in their place. This begs the question: in general, when doing generic programming in C++, which will produce smaller executables: macros or templates?
If you say, "size doesn't matter, choose safety over size", I'm going to go ahead and stop you right there. For large computers and application programming, this may be true, but for microcontroller programming on an Arduino, ATTiny85 with 8KB Flash space for the program, or other small devices, that's hogwash. Size matters too, so tradeoffs must be made.
Which produces smaller executables for the same code when doing generic programming? Macros or templates? Any additional insight is welcome.
Related:
Do c++ templates make programs slow?
Side note:
Some things can only be done with macros, NOT templates. Take, for example, non-name-mangled stringizing/stringifying and X macros. More on X macros:
Real-world use of X-Macros
https://www.geeksforgeeks.org/x-macros-in-c/
https://en.wikipedia.org/wiki/X_Macro
At this point in history, 2020, this is solely the job of the optimizer. You can achieve better speed with assembly, but the point is that it's not worth it, in either size or speed. With proper C++ programming your code will be fast enough and small enough. Getting faster or smaller by wrecking the readability of the code is not worth the trouble.
That said, macros replace text at the preprocessor level, while templates work at the compile level. You may get faster compilation with macros, but a good compiler will optimize templates at least as well as macros. This means you can end up with the same executable size, or possibly smaller, with templates.
The vast majority, 99%, of speed or size troubles in an application come from programmer errors, not from the language. Very often I discover that some photo resources are PNG instead of proper JPG in my executable, and voila, I have bloat. Or that I accidentally forgot to use weak_ptr to break a cycle, and now I have two shared pointers sharing 100MB of memory that will never be freed. It's almost always human error.
... in general, when doing generic programming in C++, which will produce smaller executables: macros or templates?
Measure it. There shouldn't be a significant difference assuming you do a good job writing both versions (see the first point above), and your compiler is decent, and your code is equally sympathetic to both.
If you write something that is much bigger with templates - ask a specific question about that.
Note that the linked question's answer is talking about multiple non-mergeable instantiations. IME function templates are very often inlined, in which case they behave very much like type-safe macros, and there's no reason for the inlining site to be larger if it's otherwise the same code. If you start taking the addresses of function template instantiations, for example, that changes.
... C++ ... generally have favored macros ... being pressured more and more to use templates in their place.
You need to learn C++ properly. Perhaps you already have, but don't use templates much: that still leaves you badly-placed to write a good comparison.
This begs the question
No, it prompts the question.
How useful is the C++ preprocessor, really? Even in C#, it still has some functionality, but I've been thinking of ditching its use altogether for a hypothetical future language. I guess there are some languages, like Java, that survive without such a thing. Is a language without a preprocessing step competitive and viable? What steps do programs written in languages without a preprocessor take to emulate its functionality, e.g. different code for debug and release builds, and how do these compare to #ifdef DEBUG?
In fact, most languages get along very well without a preprocessor. I'd go further and say that the necessity of using a preprocessor with C/C++ is rooted in their lack of several pieces of functionality.
For example:
Most languages don't need header files and include guards, because they have the notion of a "module".
Conditional compilation can be easily obtained through static ifs or an analogous mechanism.
Code repetition can almost always be reduced in more clear ways than what you can achieve with the preprocessor: using templates/generics, a reflection system, etc, etc.
So my conclusion is: for most "features" you can get with preprocessor and metaprogramming, a more clear alternative exists which is safer and more convenient to use.
The D programming language, as a compiled low-level language, is a nice example on "how to provide most features usually done via preprocessor, without actually preprocessing" - includes all I've mentioned, plus string mixins and template mixins, and probably some other clever solutions to problems usually solved with preprocessing in C/C++.
I would say no, macros are not needed for a language to be viable and competitive.
That doesn't mean macros are not needed in some languages.
If you need macros it's probably because there is a deficiency in your language. (Or because you're trying to be compatible with some other deficient language, like C++ is with C. :)). Make the language "good enough" and you will need macros so rarely that the language can do without them.
(Of course, it depends what your language's goals are what "good enough" actually means, and whether or not macros are a good way to achieve certain things or just a band-aid for missing concepts/features.)
Even in a "good enough" language, there may still be the odd time where you wish macros were there. Maybe they would still be worth having. Maybe not. Depends on what they bring to the language and what problems they introduce in terms of complexity (to the compiler/runtime and to the programmer). But that is true of any language feature. You wouldn't design a language with the aim to "have every single feature" so you have to pick & choose based on the trade-offs and benefits.
e.g. Templates are a fantastically powerful feature in C++, and something I find I miss occasionally in C#, but should C# have them? Should every language have the feature? Perhaps not, given the complexity they would bring with them (and the fact you can usually use C++/CLI for that kind of work).
BTW, I'm not saying "any good language doesn't have macros"; I'm just saying a good language doesn't need them. And FWIW, it used to irritate me that Java didn't have them, but that was probably because Java was lacking in certain areas and not because macros are essential.
It is very useful; however, it should be used with care.
A few examples where you need it:
Currently there is no standard way to handle #include properly other than the preprocessor, as it is part of the standard. You also need #define to create include guards. But this is a very C++-specific issue that does not exist in other languages.
The preprocessor is very useful for conditional compilation. When you need to configure your system to work with different APIs, different OSes, or different toolkits, it is the only way to go (unless you want to create abstract interfaces and then do conditional compilation at the build-system level).
With the current C++ standard (2003), which lacks variadic templates, it makes life much easier in certain situations. For example, when you need to create a bunch of classes like:
template<typename R>
class function<R()> { ... }
template<typename R,typename T1>
class function<R(T1)> { ... }
template<typename R,typename T1,typename T2>
class function<R(T1,T2)> { ... }
...
It is almost impossible to do this properly without the preprocessor in the current C++ standard. (In C++0x, variadic templates make it much easier.)
In fact, great tools like boost::function, boost::signal, and boost::bind require quite complicated preprocessor-driven template handling to make this stuff work with current compilers.
Sometimes macros provide very nice constructs that are impossible without the preprocessor, for example:
assert(ptr!=0);
That aborts the program, printing:
Assertion failed in foo.cpp, line 134 "ptr!=0"
And of course it is really useful for unit testing:
TEST(3.14159 <=pi && pi < 3.141599);
That aborts the program, printing:
Test failed in foo.cpp, line 134 "3.14159 <=pi && pi < 3.141599"
Logging. Usually logging is something much easier to implement with macros. Why?
You need either to write:
if(should_log(info))
log(info) << "this is the message in file foo.cpp, line 10, foo::doit()" << "Value is " << x;
or simpler:
LOG_INFO() << "Value is " << x;
which already includes the file, line number, function name, and log level. Very valuable.
In fact, boost::log and Apache's logging libraries use such things.
Yes, the preprocessor is sometimes evil, but in many cases it is extremely useful, so use it smartly and with care and it is fine.
However, if you use it for:
macros instead of inline functions,
unreadable and unclear macros to "reduce the code",
constants,
then you are doing it wrong.
Bottom line
Like every tool it can be abused (and believe me, I have seen very crazy preprocessor abuse in real code), but today it is a very useful thing.
You do not need a preprocessing step to implement conditional compilation. You do not really need macros (one can live without the stringize and token-paste operators, and even those could be done without the preprocessor). #include is a very special kind of nightmare that models referencing other code all wrong.
What else is so important?
It would depend upon what you consider 'viable', but
In C++, a lot of the necessity for/desirability of using macros has been obviated by features such as templates, inlining and namespaces. The only reason I find myself using macros in C++ is for integration with C (#ifdef __cplusplus in headers and processing definitions). With other languages, this is unnecessary with tools like JNI or SWIG to generate C headers/libraries.
Rather than use macros to control 'debug' and 'nodebug' builds, it would make more sense for the compiler to include a debug compile option to include/enable the necessary features.
Several languages just work without macros; if you're looking for a C-like language to compare with, I suggest D http://www.digitalmars.com/d/2.0/comparison.html
Possible Duplicate:
Inline functions vs Preprocessor macros
Hello, can somebody please explain what exactly an inline function means, and what the difference is from a regular macro? (I know that it works at compile time rather than in the preprocessor, but so what?) Thanks in advance for any help; I looked on Google but didn't find anything understandable.
(Assuming here that you're talking about C/C++.)
An inline function has its code copied into the points where it's called, much like a macro would.
The big reason you would use inline functions rather than macros to accomplish this is that the macro language is much weaker than actual C/C++ code; it's harder to write understandable, re-usable, non-buggy macros. A macro doesn't create a lexical scope, so variables in one can collide with those already in scope where it's used. It doesn't type-check its arguments. It's easy to introduce unexpected syntactic errors, since all a macro does is basically search-and-replace.
Also, IIRC, the compiler can choose to ignore an inline directive if it thinks that's really boneheaded; it can't do that with a macro.
Or, to rephrase this in a more opinionated and short way: macros (in C/C++, not, say, Lisp dialects) are an awful kludge, inline functions let you not use them.
Also, keep in mind that it's often not a great idea to mark a function as inline. Compilers will generally inline or not as they see fit; by marking a function inline, you're taking over responsibility for a low-level function that most of the time, the compiler's going to know more about than you are.
There is a big difference between macros and inline functions:

inline is only a hint to the compiler that the function might be inlined. It does not guarantee that it will be, and the compiler might inline functions not marked with inline.

Macro invocation does not perform type checking, so macros are not type safe.

Using a function instead of a macro is more likely to give you nice and understandable compiler output in case there is an error.

It is easier to debug functions; using macros complicates debugging a lot. Most compilers will give you an option of enabling or disabling inlining; this is not possible with macros.
The general rule in C++ is that macros should be avoided whenever possible.
I'm trying to get a better understanding of what place (if any) Macros have in modern C++ and Visual C++, also with reference to Windows programming libraries: What problem (if any) do Macros solve in these situations that cannot be solved without using them?
I remember reading about Google Chrome's use of WTL for Macros (amonst other things) from this blog post, and they are also used in MFC - here is an example of a Macro from that blog post I'd like explained in a superb amount of detail if possible:
// CWindowImpl
BEGIN_MSG_MAP(Edit)
MSG_WM_CHAR(OnChar)
MSG_WM_CONTEXTMENU(OnContextMenu)
MSG_WM_COPY(OnCopy)
MSG_WM_CUT(OnCut)
MESSAGE_HANDLER_EX(WM_IME_COMPOSITION, OnImeComposition)
MSG_WM_KEYDOWN(OnKeyDown)
MSG_WM_LBUTTONDBLCLK(OnLButtonDblClk)
MSG_WM_LBUTTONDOWN(OnLButtonDown)
MSG_WM_LBUTTONUP(OnLButtonUp)
MSG_WM_MBUTTONDOWN(OnNonLButtonDown)
MSG_WM_MOUSEMOVE(OnMouseMove)
MSG_WM_MOUSELEAVE(OnMouseLeave)
MSG_WM_NCCALCSIZE(OnNCCalcSize)
MSG_WM_NCPAINT(OnNCPaint)
MSG_WM_RBUTTONDOWN(OnNonLButtonDown)
MSG_WM_PASTE(OnPaste)
MSG_WM_SYSCHAR(OnSysChar) // WM_SYSxxx == WM_xxx with ALT down
MSG_WM_SYSKEYDOWN(OnKeyDown)
END_MSG_MAP()
I've read these articles to do with Macros on MSDN but am trying to also pick up best practices for writing or avoiding Macros and where/when to use them.
All of those macros are defined in public SDK header files, so you can go read what they do yourself if you want. Basically, what is happening here is that you're generating a WndProc function using macros. Each MSG_WM_* entry is a case statement that handles the given window message by translating its arguments from wParam and lParam back into their original types and calling a helper function (which you then get to implement) that takes those types as arguments (if appropriate; some window messages have 0 or 1 arguments).
In my opinion, all this is crap. You save writing a bunch of boilerplate up front, but in exchange you make things much harder to debug later. Macros don't have much of a place in my modern programming world, other than checking whether they are defined or not. Macros that do flow control, or that assume specific local variables are always defined, make me quite unhappy. Especially when I have to debug them.
Peter, you might as well ask if vi or EMACS is the better editor. (EMACS, by the way.) There are a lot of people who think the C preprocessor is a horrible idea; Stroustrup and Jim Gosling among them. That's why Java has no preprocessor, and there are innumerable things Stroustrup put into C++, from const to templates, to avoid using the preprocessor.
There are other people who find it convenient to be able to add a new language, as in that code.
If you read the original Bourne shell code, you'll find it looks like
IF a == b THEN
do_something();
do_more();
ELSE
do_less();
FI
Steve Bourne basically used macros to give it an Algol68-like look (that being the Really Cool Language then.) It could be very difficult to work with for anyone but Steve Bourne.
Then have a look at the Obfuscated C contest, where some of the most amazing obfuscations take advantage of the #define.
Personally, I don't mind too much, although that example is a little daunting (how do you debug that?) But doing that kind of thing is working without a net; there aren't many Wallendas out there.
(The question is kind of fragmented, but ignoring the Windows part):
X-Macros use the preprocessor to avoid code duplication.