Is it possible with the C++ preprocessor to emit an #error if a particular #define is used? Something like this:
#define this_must_not_be_used #error You shouldn't use that.
In C++11,
#define this_must_not_be_used static_assert(false, "You shouldn't use that.");
In C11,
#define this_must_not_be_used _Static_assert(0, "You shouldn't use that.");
For C before C11 or C++ before C++11, you'll have to think up some other invalid expression that contains a string that will show up in the error message. I'm thinking along the lines of
#define this_must_not_be_used ("You shouldn't use that",)
There is no standard way of defining a macro in such a way that its use, wherever it is, will give you a compilation error, especially not one that gives a clear and useful error message. For all you know, the code that uses it might just be stringizing its result, or it might be part of an assert condition that gets removed by the preprocessor.
For most practical purposes, putting something that cannot possibly be part of a valid C (or C++) program will be good enough.
Some implementations do have implementation-specific methods of achieving exactly what you ask for, though. For instance, with GCC, you can use
#pragma GCC poison this_should_not_be_used
where its subsequent use, no matter how it ends up used, will give:
error: attempt to use poisoned "this_should_not_be_used"
You may want to look at your own compiler's documentation to see if it has anything similar. You can also use conditional macro definitions, so that with GCC you use this approach, with your compiler you use your compiler's approach, and with an unknown compiler you fall back to the standard method of providing a macro definition that will probably lead to a difficult-to-read error message.
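For example, a minimal sketch of such a conditional definition might look like this (it simply reuses the techniques already shown in this thread; adjust the compiler checks to the compilers you actually care about):
#if defined(__GNUC__)
#pragma GCC poison this_must_not_be_used
#elif __cplusplus >= 201103L
#define this_must_not_be_used static_assert(false, "You shouldn't use that.")
#else
/* fallback: an invalid expression that carries the message */
#define this_must_not_be_used ("You shouldn't use that",)
#endif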
Do you mean something like the following:
#ifdef this_must_not_be_used
#error "You shouldn't use that."
#endif
Related
I know that my question is similar to this one or this one, but I find that it is not really the same and, moreover, the second one does not have an accepted answer, so I decided to ask: is it correct to add preprocessor directives when a function-like macro is called?
In my case I have a function-like macro:
#define FUNC_MACRO(a, b) // do something with the variables
and somewhere in the code I call it with specific difference if some other macro is defined:
// ...
FUNC_MACRO(aVal
#ifdef ANOTHER_MACRO
+ offset
#endif // ANOTHER_MACRO
, bVal);
// ...
I tested it on my machine (Linux, with GCC 4.8) and it worked fine (with and without the preprocessor directives, and with and without ANOTHER_MACRO defined), but is it safe to do so?
I read paragraph 16.3/9 from the answer to the first similar question, but does it hold for my case too?
The C language leaves this as undefined behavior in 6.10.3 Macro replacement, ¶11:
If there are sequences of preprocessing tokens within the list of arguments that would otherwise act as preprocessing directives, the behavior is undefined.
So indeed it's wrong to do it.
GCC and perhaps other popular compilers don't catch it, which is probably why many users of the language are not aware of it. I encountered this when some of my code failed to compile on PCC (and I promptly fixed the bug in my code).
Update: PJTraill asked in the comments for a case where it would be "misleading or meaningless" to have preprocessor directives inside a macro expansion. Here's an obvious one:
foo(a, b,
#ifdef BAR
c);
#else
d);
#endif
I'm not sure whether it would have been plausible for the language to specify that balanced preprocessor conditionals inside the macro expansion are okay, but I think you'd run into problems there too with ambiguities in the order in which they should be processed.
Do the following instead?
#ifdef ANOTHER_MACRO
FUNC_MACRO(aVal + offset, bVal);
#else
FUNC_MACRO(aVal, bVal);
#endif
EDIT: Addressing the concern raised in a comment: I do not know whether the OP's method is specifically wrong (I think the other answers cover that). However, succinctness and clarity are two aspects held to be pretty important when coding in C.
As such I would much prefer to find better ways to achieve what the OP seems to be trying to do, by slightly rethinking the situation, as I have offered above. I guess the OP may have used a trivialised example, but I usually find in most C situations that if something is becoming overly complex, or is attempting something the language does not seem to allow, then there are better ways to achieve what is needed (one such variant is sketched below).
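For instance, one such variant (only a sketch; the local variable name and its type are illustrative) hoists the conditional part out of the macro invocation entirely:
#ifdef ANOTHER_MACRO
int first_arg = aVal + offset;
#else
int first_arg = aVal;
#endif
FUNC_MACRO(first_arg, bVal);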
I am reading Effective C++ (an older edition) and have some doubts.
Here, for example, it says:
When you do something like this
#define ASPECT_RATIO 1.653
the symbolic name ASPECT_RATIO may never be seen by the compiler; it may be removed by the preprocessor before the source code ever gets compiled. As a result, ASPECT_RATIO may never get entered into the symbol table. It can be confusing if you get an error during compilation involving the constant, because the error message may refer to 1.653 and not ASPECT_RATIO.
I don't understand this paragraph. How can anything be removed by the preprocessor, just like that? What could be the reasons, and how feasible are they in the real world?
Thanks
I don't understand this paragraph in quotes. How can anything be removed by the preprocessor, just like that? What could be the reasons, and how feasible are they in the real world?
Basically, what it describes is exactly how the C and C++ preprocessor works. The reason is to replace macros/constants (made using the #define directive) with their actual values, instead of repeating the same values over and over again. In C++ it is considered bad style to use C-style macros, but they're supported for C compatibility.
The preprocessor, as the name suggests, runs prior to the actual compilation, and basically changes the source code as directed by the preprocessor directives (those starting with #). This includes replacement of macros with their values, inclusion of header files as directed by the #include directive, and so on.
This is used in order to avoid code repetitions, magic numbers, to share interfaces (header files) and many other useful things.
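As a small illustration (the width and height variables are made up for the example), the compiler never sees the macro name, only its replacement:
#define ASPECT_RATIO 1.653

double height = width / ASPECT_RATIO;
/* after preprocessing, the compiler is handed:
   double height = width / 1.653;             */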
It's simply a global search and replace of "ASPECT_RATIO" with "1.653" in the file before passing it to the compiler.
That's why macros are so dangerous. If you have #define max 123 and a variable int max = 100, the compiler will see int 123 = 100 and you will get a confusing error message.
The pre-processor will replace all instances of the token ASPECT_RATIO that appear in the code with the actual token 1.653, so the compiler will never see the token ASPECT_RATIO. By the time it compiles the code, it only sees the literal token 1.653 that was substituted in by the pre-processor.
Basically the "problem" you will encounter with this approach is that ASPECT_RATIO will not be seen as a symbol by the compiler, so in a debugger, etc., you can't query the value of ASPECT_RATIO as if it were a variable. It's not a value that will have a memory address like a static const int may have (I say "may", because an optimizing compiler may decide to act like the pre-processor and optimize out the need for an explicit memory address to store the constant value, instead simply substituting the literal value wherever it appears in the code). In a larger function macro it also won't have an instruction address like an actual C/C++ function will have, so you can't set break-points inside a function macro. But in a more general sense I'm not sure I would call this a "problem" unless you were intending to use the macro as a debug symbol and/or set debugging break-points inside your macro. Otherwise the macro is doing its job.
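For completeness, the alternative the book is leading toward is a real, typed constant that the compiler (and debugger) can see as a symbol; a one-line sketch using the same value:
const double AspectRatio = 1.653;   // visible in the symbol table, unlike the #define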
Once in a while there's a need for a no-op statement in C++. For example, when implementing assert(), which is disabled in the non-debug configuration (also see this question):
#ifdef _DEBUG
#define assert(x) if( !(x) ) { \
    ThrowException(__FILE__, __LINE__); \
} else { \
    /* noop here */ \
}
#else
#define assert(x) /* noop here */
#endif
So far I'm under impression that the right way is to use (void)0; for a no-op:
(void)0;
however, I suspect that it might trigger warnings on some compilers, something like C4555: expression has no effect; expected expression with side-effect (a Visual C++ warning that is not emitted for this particular case, but is emitted when there is no cast to void).
Is it universally portable? Is there a better way?
The simplest no-op is just having no code at all:
#define noop
Then user code will have:
if (condition) noop; else do_something();
The alternative that you mention is also a no-op: (void)0;, but if you are going to use that inside a macro, you should leave the ; aside for the caller to add:
#define noop (void)0
if (condition) noop; else do_something();
(If the ; were part of the macro, then there would be an extra ; there.)
I suspect that it might trigger warnings on some compilers
Unlikely, since ((void)0) is what the standard assert macro expands to when NDEBUG is defined. So any compiler that issues warnings for it will issue warnings whenever code that contains asserts is compiled for release. I expect that would be considered a bug by the users.
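For reference, this is exactly the definition the C standard prescribes for <assert.h> when NDEBUG is defined:
#ifdef NDEBUG
#define assert(ignore) ((void)0)
#endif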
I suppose a compiler could avoid that problem by warning for your proposal (void)0 while treating only ((void)0) specially. So you might be better off using ((void)0), but I doubt it.
In general, casting something to void, with or without the extra enclosing parens, idiomatically means "ignore this". For example in C code that casts function parameters to void in order to suppress warnings for unused variables. So on that score too, a compiler that warned would be rather unpopular, since suppressing one warning would just give you another one.
Note that in C++, standard headers are permitted to include each other. Therefore, if you are using any standard header, assert might have been defined by that. So your code is non-portable on that account. If you're talking "universally portable", you normally should treat any macro defined in any standard header as a reserved identifier. You could undefine it, but using a different name for your own assertions would be more sensible. I know it's only an example, but I don't see why you'd ever want to define assert in a "universally portable" way, since all C++ implementations already have it, and it doesn't do what you're defining it to do here.
How about do { } while(0)? Yes it adds code, but I'm sure most compilers today are capable of optimizing it away.
; is considered the standard no-op. Note that it is possible that the compiler will not generate any code for it.
I think the objective here, and the reason not to define the macro to nothing, is to require the user to add a ;. For that purpose, anywhere a statement is legal, (void)0 (or ((void)0), or other variations thereupon) is fine.
I found this question because I needed to do the same thing at global scope, where a plain old statement is illegal. Fortunately, C++11 gives us an alternative: static_assert(true, "NO OP"). This can be used anywhere, and accomplishes my objective of requiring a ; after the macro. (In my case, the macro is a tag for a code generation tool that parses the source file, so when compiling the code as C++, it will always be a NO-OP.)
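A sketch of that idea with a made-up macro name; the expansion is legal at namespace scope and still forces the trailing semicolon:
#define CODEGEN_TAG() static_assert(true, "NO OP")

CODEGEN_TAG();   // expands to a harmless static_assert; the ';' is required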
I'm rather late to the party on this one, but I needed the same for loop() in an Arduino project where all processing is done in timer interrupt service routines (ISRs). I found that this inline assembler code worked for me without defining a function:
void loop(){
__asm__("nop\n\t"); // Do nothing.
}
I recommend using:
static_cast<void> (0)
And what about:
#define NOP() ({(void)0;})
or just
#define NOP() ({;})
AFAIK, it is universally portable (though note that statement expressions of the form ({ ... }) are a GCC extension, not standard C or C++).
#define MYDEFINE()
will do as well.
Another option may be something like this:
void noop(...) {}
#define MYDEFINE() noop()
However, I'd stick to (void)0 or use intrinsics like __noop
inline void noop( ) {}
Self-documenting
This code will not be omitted by optimization:
static void nop_func() { }
typedef void (*nop_func_t)();
static nop_func_t nop = &nop_func;
for (...)
{
nop();
}
There are many ways, and here is a comparison I've made of some of them with the MSVS 2019 C++ compiler.
__asm {
nop
}
This translates to a nop operation in the disassembly, which takes 1 machine cycle.
do {} while (0); generates some instructions, which take a few more cycles.
A simple ; generates nothing.
When implementing stubs etc. you want to avoid "unused variable" warnings. I've come across a few alternative UNUSED() macros over the years, but never one that is proven to work for "all" compilers, or one that is air-tight by the standard.
Or are we stuck with #ifdef blocks for each build platform?
EDIT: Due to a number of answers with non-C-compliant alternatives, I'd like to clarify that I'm looking for a definition which is valid for both C and C++, all flavours, etc.
According to this answer by user GMan the typical way is to cast to void:
#define UNUSED(x) (void)(x)
but if x is marked as volatile, that would enforce reading from the variable and thus have a side effect, so the actual way to almost guarantee a no-op and suppress the compiler warning is the following:
// use expression as sub-expression,
// then make type of full expression int, discard result
#define UNUSED(x) (void)(sizeof((x), 0))
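A hypothetical stub showing how either definition would typically be used:
void on_event(int id, void *context)   /* illustrative stub */
{
    UNUSED(id);
    UNUSED(context);
    /* not implemented yet */
}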
In C++, just comment out the names.
void MyFunction(int /* name_of_arg1 */, float /* name_of_arg2*/)
{
...
}
The universal way is not to turn on warnings options that spam warnings for clearly-correct code. Any "unused variable" warning option that includes function arguments in its analysis is simply wrong and should be left off. Don't litter your code with ugliness to quiet broken compilers.
You might also try sending a bug report to the compiler maintainer/vendor.
Is there Q_OBSOLETE or Q_DEPRECATED in C++ with Qt 4.7?
Or is there a similar C++ macro or keyword?
If you use Q_DECL_DEPRECATED you should get the outcome you are looking for e.g.:
Q_DECL_DEPRECATED void foo();
Pull the real function out of public scope.
Create another function with the same name in public scope.
Insert your warning/fail code in that function.
Call the original from the new one (a rough sketch follows below).
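A rough sketch of that idea (the class and function names are purely illustrative); the original implementation is moved out of public scope, renamed here to avoid an overload clash, and the public replacement warns before forwarding to it:
class Widget {
public:
    void legacyCall() {                                  // new public function
        qWarning("Widget::legacyCall() is obsolete");    // warning/fail code
        legacyCallImpl();                                 // call the original
    }
private:
    void legacyCallImpl() { /* original implementation */ }
};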
Just use the
#warning
directive
although it is not standard C++, it is quite unlikely you will encounter a compiler that does not support it (see this SO question).
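For instance, a hypothetical obsolete header could announce itself wherever it is included (the file names and message are illustrative):
/* old_api.h */
#warning "old_api.h is obsolete; include new_api.h instead"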
You might want to do something similar yourself:
#ifdef Q_TREAT_OBSOLETE_AS_ERRORS
#define Q_OBSOLETE(X) \
BOOST_STATIC_ASSERT(false); \
X
#else
#define Q_OBSOLETE(X) X
#endif
This construction simply substitutes the deprecated code / part of the code if Q_TREAT_OBSOLETE_AS_ERRORS is not defined, and generates a compile-time error otherwise.
Note that BOOST_STATIC_ASSERT has no scope limitations, and neither does the Q_OBSOLETE macro.
Probably this is not the best way to solve your problem and actually I'm not sure this is useful.
You might just mark the code as #obsolete or simply point it out in the comments.
By "deprecated constructs", you really mean "deprecated member functions". You're asking for a compile-time warning to draw your attention to the call site of any deprecated function.
This isn't possible in any reasonable way in standard C++, and I don't see any attributes in G++ that would support this either. Qt can't really add a feature like that if the compiler doesn't have some support for it already.
However, Microsoft Visual C++ supports a __declspec(deprecated) extension, and I would imagine it's possible to write a compiler plugin for G++ 4.5 that adds a similar feature.
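A small illustration of that MSVC extension (the function names are made up); calling the marked function produces warning C4996 at the call site:
__declspec(deprecated("use newFoo() instead")) void foo();

void bar()
{
    foo();   // MSVC emits warning C4996 at this call
}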