Though I am a fan of macros in general, I do not get why the Arduino makers chose to use macros instead of actual functions for some of their arithmetic "functions". To name a few examples:
min()
max()
constrain()
Their website warns against calling functions from within these "functions", or using pre/post-increment operators inside the parentheses, because they are actually macros.
Considering that the Arduino language is actually C++, they could easily have used (inline) functions instead and prevented users from falling into one of the well-known macro pitfalls.
People usually do things for reasons, and so far I have not found these reasons. So my question: why did the Arduino makers choose to use macros instead of functions?
Arduino is built on much older code and libraries, such as AVR-libc, where macros were used extensively long before Arduino even existed.
In modern programming, macros are not recommended (versus inline functions) because they do no type checking, produce confusing compile errors, and, if not crafted carefully, can cause unintended side effects.
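For illustration, here is a minimal sketch of the classic double-evaluation pitfall. The macro below is a simplified version of what older Arduino cores use for min(); the exact definition in any given core may differ.

#include <cstdio>

// Simplified min() in the style of the Arduino core (assumption: real
// core definitions may differ slightly between versions).
#define min(a, b) ((a) < (b) ? (a) : (b))

int main() {
    int i = 1, j = 2;
    // Expands to ((i++) < (j++) ? (i++) : (j++)): the winning argument
    // is evaluated twice, so both the result and the side effects surprise.
    int m = min(i++, j++);
    std::printf("m=%d i=%d j=%d\n", m, i, j);  // prints m=2 i=3 j=3,
                                               // not the expected m=1 i=2 j=3
    return 0;
}

An inline function template (e.g. template <class T> T min(T a, T b)) evaluates each argument exactly once and avoids this entirely.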
I have a design question. In C++ there are recommended feature-test macros: http://en.cppreference.com/w/cpp/experimental/feature_test
The explanation on the page linked above says, quoting verbatim:
The following macros expand to a numeric value corresponding to the year and month when the feature has been included in the working draft. When a feature changes significantly, the macro will be updated accordingly.
I would like to make use of these feature-test macros to simplify the build system and, at the same time, to get rid of the preprocessor in my own code.
My questions are:
Given a __cpp_feature_xyz macro: it is defined with a numeric value if the feature exists, but if the feature does not exist, is it #defined at all? This would make it possible to use it inside if constexpr (see the sketch after these questions).
When modules come out, are these macros supposed to keep working no matter what happens with macro propagation? Some of these macros appear to be built in, but for the library features it seems you need to include the corresponding header, so I wonder whether they would be propagated, or whether there would be an option to do so. I am aware this is one of the controversial design points of modules.
If macro propagation is finally removed from modules, are there any planned replacement mechanisms for feature detection?
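For reference, this is the kind of usage I have in mind (a minimal sketch; __has_include and the <version> header are later additions, and __cpp_lib_optional is just one example macro). My understanding is that when a feature is absent, its macro is simply not #defined at all, which forces the detection into #if rather than if constexpr:

// Library feature-test macros live in the corresponding header (or, since
// C++20, all of them in <version>); language macros are predefined.
#if defined(__has_include)
#  if __has_include(<version>)
#    include <version>
#  endif
#endif

#if defined(__cpp_lib_optional) && __cpp_lib_optional >= 201606L
#include <optional>
std::optional<int> parse(int raw) { return raw; }
#else
// When the feature is missing, the macro is not #defined at all, so the
// fallback must be selected here, in the preprocessor.
int parse(int raw) { return raw; }
#endif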
I am looking at various STL headers provided with compilers and I can't imagine the developers actually writing all this code by hand.
All the macros and the weird names of variables and classes: they would have to remember all of them! That seems error-prone to me.
Are parts of the headers result of some text preprocessing or generation?
I've maintained Visual Studio's implementation of the C++ Standard Library for 7 years (VC's STL was written by and licensed from P.J. Plauger of Dinkumware back in the mid-90s, and I work with PJP to pick up new features and maintenance bugfixes), and I can tell you that I do all of my editing "by hand" in a plain text editor. None of the STL's headers or sources are automatically generated (although Dinkumware's master sources, which I have never seen, go through automated filtering in order to produce customized drops for Microsoft), and the stuff that's checked into source control is shipped directly to users without any further modification (now, that is; previously we ran them through a filtering step that caused lots of headaches).

I am notorious for not using IDEs/autocomplete, although I do use Source Insight to browse the codebase (especially the underlying CRT, whose guts I am less familiar with), and I extensively rely on grep. (And of course I use diff tools; my favorite is an internal tool named "odd".)

I do engage in very, very careful cut-and-paste editing, but for the opposite reason as novices: I do this when I understand the structure of code completely and wish to exactly replicate parts of it without accidentally leaving things out. (For example, different containers need very similar machinery to deal with allocators; it should probably be centralized, but in the meantime when I need to fix basic_string I'll verify that vector is correct and then copy its machinery.)

I've generated code perhaps twice: once when stamping out the C++14 transparent operator functors that I designed (plus<>, multiplies<>, greater<>, etc. are highly repetitive), and again when implementing/proposing variable templates for type traits (recently voted into the Library Fundamentals Technical Specification, probably destined for C++17). IIRC, I wrote an actual program for the operator functors, while I used sed for the variable templates. The plain text editor that I use (Metapad) has search-and-replace capabilities that are quite useful, although weaker than outright regexes; I need stronger tools if I want to replicate chunks of text (e.g. is_same_v = is_same<T>::value).
How do STL maintainers remember all this stuff? It's a full-time job. And of course, we're constantly consulting the Standard/Working Paper for the required interfaces and behavior of code. (I recently discovered that I can, with great difficulty, enumerate all 50 US states from memory, but I would surely be unable to enumerate all STL algorithms from memory. However, I have memorized the longest name, as a useless bit of trivia. :->)
The weird look is deliberate, to some extent. The standard library code needs to avoid conflicts with names used in user programs, including macros, and there are almost no restrictions on what can appear in a user program.
They are most probably hand-written, and as others have mentioned, if you spend some time looking at them you will figure out what the coding conventions are, how variables are named, and so on. One of the few restrictions on user code is that it cannot use identifiers starting with an underscore followed by a capital letter, or containing two consecutive underscores; so you will find many names in the standard headers that look like _M_xxx or __yyy. It might surprise you at first, but after some time you just ignore the prefix...
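As an illustration, here is a minimal sketch in that style (the class and member names are hypothetical, invented for this example; only the reservation rules are real). Implementation headers stick to reserved identifiers precisely so that no conforming user #define can collide with them:

// Hypothetical fragment in the style of a standard-library header.
// Names beginning with _Uppercase or containing __ are reserved for the
// implementation, so user macros cannot clash with them.
template <class _Ty>
class _Simple_box {                      // hypothetical class name
public:
    explicit _Simple_box(const _Ty& __val) : _M_value(__val) {}
    const _Ty& _M_get() const { return _M_value; }
private:
    _Ty _M_value;                        // libstdc++-style _M_ member prefix
};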
Apparently, preprocessor macros in C++ are "justifiably feared and shunned by the C++ community."
However, there are several cases where C++ macros are beneficial.
Seeing as preprocessor macros can be extremely useful and can reduce repetitive code in a very straightforward manner, this leaves me with the question: what exactly is it that makes preprocessor macros "evil"? Or, as the question title says, which feature (or removal of a feature) would preprocessor macros need in order to become a "good" development tool, instead of a fill-in that everyone is ashamed of using? (After all, the Lisp languages seem to embrace macros.)
Please Note: This is not about #include or #pragma or #ifdef. This is about #define MY_MACRO(...) ...
Note: I do not intend for this question to be subjective. Should you think it is, feel free to vote to move it to programmers.SE.
Macros are widely considered evil because the preprocessor is a stupid text-replacement tool that has little to no knowledge of C/C++.
Four very good reasons why macros are evil can be found in the C++ FAQ Lite.
Where possible, templates and inline functions are a better choice. The only reason I can think of why C++ still needs the preprocessor is for #includes and comment removal.
A widely disputed advantage is using them to reduce code repetition; but as you can see from the Boost preprocessor library, much effort has to be put into abusing the preprocessor for simple logic such as loops, leading to ugly syntax. In my opinion, it is a better idea to write scripts in a real high-level programming language for code generation instead of using the preprocessor.
Most preprocessor abuse comes from misunderstanding. To quote Paul Mensonides (the author of the Boost.Preprocessor library):
Virtually all issues related to the misuse of the preprocessor stem from attempting to make object-like macros look like constant variables and function-like macro invocations look like underlying-language function calls. At best, the correlation between function-like macro invocations and function calls should be incidental. It should never be considered to be a goal. That is a fundamentally broken mentality.
As the preprocessor is well integrated into C++, it's easy to blur the line, and most people don't see a difference. For example, ask someone to write a macro to add two numbers together, and most people will write something like this:
#define ADD(x, y) ((x) + (y))
This is completely wrong. Run it through the preprocessor:
#define ADD(x, y) ((x) + (y))
ADD(1, 2) // outputs ((1) + (2))
But the answer should be 3, since adding 1 to 2 is 3. Instead, a macro has been written to generate a C++ expression. It might be thought of as a C++ function, but it's not. This is where the abuse begins: it is just generating a C++ expression, and a function is a much better way to go.
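For comparison, a minimal sketch of the function alternative (the name add is ours, for illustration): the value 3 is computed at the language level rather than pasted together as text.

// A constexpr function template: type-checked, each argument evaluated
// exactly once, and usable by the compiler itself.
template <typename T>
constexpr T add(T x, T y) { return x + y; }

static_assert(add(1, 2) == 3, "computed by the compiler, not the preprocessor");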
Furthermore, macros don't work like functions at all. The preprocessor works through a process of scanning and expanding macros, which is very different from using a call stack to call functions.
There are times when it can be acceptable for macros to generate C++ code, as long as the lines aren't blurred. Just as you might use Python as a preprocessor to generate code, the preprocessor can do the same, with the advantage that it doesn't need an extra build step.
Also, the preprocessor can be used for DSLs, like here and here, but these DSLs have a predefined grammar in the preprocessor which is used to generate C++ code. It doesn't really blur the lines, since it uses a different grammar.
Macros have one notable feature: they are very easy to abuse and rather hard to debug. You can write just about anything with macros; then the macros are expanded into one-liners, and when nothing works you have a very hard time debugging the resulting code.
That alone should make one think ten times about whether and how to use macros for a given task.
And don't forget that macros are expanded before actual compilation, so they automatically ignore namespaces, scopes, type safety and a ton of other things.
The most important thing about macros is that they have no scope and do not care about context. They are almost a dumb text-replacement tool. So when you #define max(..., everywhere you have a max it gets replaced; if someone puts an overly generic macro name in a header, it tends to influence code it was never intended to touch.
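A minimal sketch of that failure mode (the min/max macros dragged in by <windows.h> are the classic real-world case):

#include <limits>

// Some header the user happens to include defines an overly generic macro:
#define max(a, b) ((a) > (b) ? (a) : (b))

// int worst = std::numeric_limits<int>::max();   // error: the preprocessor
//                                                // tries to expand max(...)
int worst = (std::numeric_limits<int>::max)();    // extra parentheses keep the
                                                  // function-like macro from
                                                  // matching; this compiles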
Another thing is that, when used without care, they lead to quite hard-to-read code, since no one can easily see what the macro could evaluate to, especially when multiple macros are nested.
A good guideline is to choose unique names, and when generating boilerplate code, #undef them as soon as possible to not pollute the namespace.
Additionally, they do not offer type safety or overloading.
Sometimes macros are arguably a good tool for generating boilerplate code; for example, with the help of Boost.PP you could create a macro that helps you create enums like:
ENUM(xenum,(a,b,(c,7)));
which could expand to
enum xenum { a, b, c=7 };
std::string to_string( xenum x ) { .... }
Things like assert() that need to react to NDEBUG are also often easier to implement as macros.
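A minimal sketch of why (MY_ASSERT is a hypothetical name): only a macro can both vanish completely under NDEBUG and capture the call site via __FILE__, __LINE__, and the stringized expression.

#include <cstdio>
#include <cstdlib>

#ifdef NDEBUG
#define MY_ASSERT(expr) ((void)0)   // compiles away entirely in release builds
#else
#define MY_ASSERT(expr)                                               \
    ((expr) ? (void)0                                                 \
            : (std::fprintf(stderr, "%s:%d: assertion failed: %s\n",  \
                            __FILE__, __LINE__, #expr),               \
               std::abort()))
#endif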
There are many uses where a C developer uses macros and a C++ developer uses templates.
There are obviously corner cases where they're useful, but most of the time it's bad habits from the C world applied to C++ by people who believe there is such a language called C/C++.
So it's easier to say "they're evil" than to risk a developer misusing them.
Macros do not offer type safety.
Parameters can be evaluated twice: e.g. take #define MAX(a,b) ((a)>(b) ? (a) : (b)) and apply it to MAX(i++, y--).
They cause problems with debugging, as their names do not occur in the symbol table.
Forcing programmers to use proper naming for macros, plus better tools for tracking macro replacement, would fix most of my problems. I can't really say I've had major issues so far; it's something you burn yourself with once and learn to take special care with later on. But macros badly need better integration with IDEs and debuggers.
The Evolution WG Issues List of 14 February 2004 has ...
EP003. #nomacros. See EI001. Note by Stroustrup to be written.
In rough (or exact) terms, what is #nomacros, and is it available as an extension anywhere? It would have been a useful diagnostic tool in a recent project involving porting thousands of files of 1995-vintage C++ to a 2005 compiler, compared to the alternative of running the code through the preprocessor and examining the .i files for surprise packages.
It is just a proposal under consideration for inclusion in C++, and it is not available in current compilers. If you read further down the page, it says:
ES042. #nospam.
Provide a preprocessor mechanism for limiting macros entering and exiting a scope. For example:
#nomacros
#in A B
…
#out A X
#endnomacros
No macros are expanded between #nomacros and #endnomacros unless explicitly enabled by #in. No macros defined between #nomacros and #endnomacros will be defined after #endnomacros unless explicitly enabled by #out.
Suggestion by Bjarne Stroustrup. After discussion in the EWG it was decided to look for a solution that allowed macros used by macros allowed in by “#in” to be used in the expansion of such macros only.
#nomacros should nest.
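#nomacros was never adopted, but as a point of comparison, a rough approximation of macro scoping is available today through the push_macro/pop_macro pragmas (supported by MSVC, GCC, and Clang). This is a different mechanism from the proposal, shown here only as a sketch:

#define max(a, b) ((a) > (b) ? (a) : (b))   // some legacy macro we must live with

#pragma push_macro("max")
#undef max
// Within this region, max is an ordinary identifier again.
#include <algorithm>
int safe = std::max(1, 2);
#pragma pop_macro("max")
// The legacy max macro is restored from here on.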