Sometimes a local variable is used for the sole purpose of checking it in an assert(), like so:
int Result = Func();
assert( Result == 1 );
When compiling code in a Release build, assert()s are usually disabled, so this code may produce a warning about Result being set but never read.
A possible workaround is:
int Result = Func();
if ( Result != 1 )
{
assert( 0 );
}
But this requires too much typing, isn't easy on the eyes, and causes the condition to always be checked (yes, the compiler may optimize the check away, but still).
I'm looking for an alternative way to express this assert() in a way that wouldn't cause the warning, but still be simple to use and avoid changing the semantics of assert().
(disabling the warning using a #pragma in this region of code isn't an option, and lowering warning levels to make it go away isn't an option either...).
We use a macro to specifically indicate when something is unused:
#define _unused(x) ((void)(x))
Then in your example, you'd have:
int Result = Func();
assert( Result == 1 );
_unused( Result ); // make production build happy
That way (a) the production build compiles without the warning, and (b) it is obvious in the code that the variable is unused by design, not that it's just been forgotten about. This is especially helpful when parameters to a function are not used.
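For instance, a minimal sketch of silencing an intentionally unused parameter; the callback and its body are invented for illustration:
#include <cstdio>

#define _unused(x) ((void)(x))

// Hypothetical callback whose signature is fixed by some API,
// but which never needs its 'context' parameter.
void onEvent(int eventId, void *context)
{
    _unused(context);                   // deliberately unused, not forgotten
    std::printf("event %d\n", eventId);
}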
I couldn't give a better answer than this one, which addresses that problem and many more:
Stupid C++ Tricks: Adventures in assert
#ifdef NDEBUG
#define ASSERT(x) do { (void)sizeof(x); } while (0)
#else
#include <assert.h>
#define ASSERT(x) assert(x)
#endif
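Putting that together with the question's example, a sketch of how it behaves (Func is a stand-in):
#include <cassert>

#ifdef NDEBUG
#define ASSERT(x) do { (void)sizeof(x); } while (0)
#else
#define ASSERT(x) assert(x)
#endif

int Func() { return 1; }   // stand-in for the question's Func()

int main()
{
    int Result = Func();
    // Debug: a normal assert. Release: the expression lands inside sizeof,
    // so it is never evaluated, yet Result still appears in the code,
    // which is what keeps the "set but never read" warning away.
    ASSERT(Result == 1);
}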
As of C++17, the variable can be decorated with an attribute.
[[maybe_unused]] int Result = Func();
assert( Result == 1 );
See https://en.cppreference.com/w/cpp/language/attributes/maybe_unused for details.
This is better than the (void)Result trick because you directly decorate the variable declaration, rather than add something as an afterthought.
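The attribute also works in other declarations, for example on a function parameter that is only read inside an assert; this sketch's function and parameters are invented:
#include <cassert>

void resize(int newSize, [[maybe_unused]] int maxSize)
{
    assert(newSize <= maxSize);   // maxSize is only used here
    // ... perform the resize ...
}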
You could create another macro that allows you to avoid using a temporary variable:
#ifndef NDEBUG
#define Verify(x) assert(x)
#else
#define Verify(x) ((void)(x))
#endif
// asserts that Func()==1 in debug mode, or calls Func() and ignores return
// value in release mode (any braindead compiler can optimize away the comparison
// whose result isn't used, and the cast to void suppresses the warning)
Verify(Func() == 1);
int Result = Func();
assert( Result == 1 );
This situation means that in release mode, you really want:
Func();
But Func is non-void, i.e. it returns a result, i.e. it is a query.
Presumably, besides returning a result, Func modifies something (otherwise, why bother calling it and not using its result?), i.e. it is a command.
By the command-query separation principle (1), Func shouldn't be a command and a query at the same time. In other words, queries shouldn't have side effects, and the "result" of commands should be represented by the available queries on the object's state.
Cloth c;
c.Wash(); // Wash is void
assert(c.IsClean());
Is better than
Cloth c;
bool is_clean = c.Wash(); // Wash returns a bool
assert(is_clean);
The former doesn't produce the warning you describe; the latter does.
So, in short, my answer is: don't write code like this :)
Update (1): You asked for references about the Command-Query Separation Principle. Wikipedia is rather informative. I read about this design technique in Object Oriented Software Construction, 2nd Edition by Bertrand Meyer.
Update (2): j_random_hacker comments "OTOH, every "command" function f() that previously returned a value must now set some variable last_call_to_f_succeeded or similar". This is only true for functions that don't promise anything in their contract, i.e. functions that might "succeed" or not, or a similar concept. With Design by Contract, a relevant number of functions will have postconditions, so after "Empty()" the object will be "IsEmpty()", and after "Encode()" the message string will be "IsEncoded()", with no need to check. In the same way, and somewhat symmetrically, you don't call a special function "IsXFeasible()" before each and every call to a procedure "X()", because you usually know by design that you're fulfilling X's preconditions at the point of your call.
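As a sketch of that contract style (the class and its members are invented for illustration):
#include <cassert>
#include <string>

class MessageBuffer {
public:
    void Clear() { data_.clear(); }                 // command: returns nothing
    bool IsEmpty() const { return data_.empty(); }  // query: no side effects
private:
    std::string data_;
};

int main()
{
    MessageBuffer buf;
    buf.Clear();
    assert(buf.IsEmpty());   // postcondition of Clear(), checked via a query
}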
You could use:
Check( Func() == 1 );
And implement your Check( bool ) function as you want. It may either use assert, throw a particular exception, write to a log file or to the console, have different implementations in debug and release, or a combination of all.
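For example, a minimal sketch of one possible Check, asserting in debug builds and logging in release builds:
#include <cassert>
#include <cstdio>

void Check(bool condition)
{
#ifndef NDEBUG
    assert(condition);                           // debug: hard stop
#else
    if (!condition)
        std::fprintf(stderr, "Check failed\n");  // release: just log
#endif
}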
With C++17 we can do:
[[maybe_unused]] int Result = Func();
though it involves a bit of extra typing compared to an assert substitution. See this answer.
Note: Added this because this is the first Google hit for "c++ assert unused variable".
You should move the assert inside the function, before the value is returned. Then you know the checked value is not an unreferenced local variable at the call site.
Plus it makes more sense to be inside the function anyway, because it creates a self contained unit that has its OWN pre- and post-conditions.
Chances are that if the function is returning a value, you should be doing some kind of error checking in release mode on this return value anyway. So it shouldn't be an unreferenced variable to begin with.
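A sketch of what that looks like; computeSomething is a hypothetical stand-in for the real work:
#include <cassert>

int computeSomething();   // hypothetical worker function

int Func()
{
    int result = computeSomething();
    assert(result == 1);   // postcondition checked inside Func itself
    return result;
}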
Edit: but in this case the postcondition should be X (see comments):
I strongly disagree with this point; one should be able to determine the postcondition from the input parameters and, if it's a member function, any object state. If a global variable modifies the output of the function, then the function should be restructured.
Most answers suggest using the static_cast<void>(expression) trick in Release builds to suppress the warning, but this is actually suboptimal if your intention is to make checks truly Debug-only. The goals of an assertion macro in question are:
Perform checks in Debug mode
Do nothing in Release mode
Emit no warnings in all cases
The problem is that the void-cast approach fails to reach the second goal. While there is no warning, the expression that you've passed to your assertion macro will still be evaluated. If you, for example, just do a variable check, that is probably not a big deal. But what if you call some function in your assertion check, like ASSERT(fetchSomeData() == data); (which is very common in my experience)? The fetchSomeData() function will still be called. It may be fast and simple, or it may not be.
What you really need is not only warning suppression but perhaps more importantly - non-evaluation of the debug-only check expression. This can be achieved with a simple trick that I took from a specialized Assert library:
void myAssertion(bool checkSuccessful)
{
if (!checkSuccessful)
{
// debug break, log or what not
}
}
#define DONT_EVALUATE(expression) \
{ \
true ? static_cast<void>(0) : static_cast<void>((expression)); \
}
#ifdef DEBUG
# define ASSERT(expression) myAssertion((expression))
#else
# define ASSERT(expression) DONT_EVALUATE((expression))
#endif // DEBUG
int main()
{
int a = 0;
ASSERT(a == 1);
ASSERT(performAHeavyVerification());
return 0;
}
All the magic is in the DONT_EVALUATE macro. It is obvious that, at least logically, the evaluation of your expression is never needed inside of it. To strengthen that, the C++ standard guarantees that only one of the branches of the conditional operator will be evaluated. Here is the quote:
5.16 Conditional operator [expr.cond]
logical-or-expression ? expression : assignment-expression
Conditional expressions group right-to-left. The first expression is
contextually converted to bool. It is evaluated and if it is true, the
result of the conditional expression is the value of the second
expression, otherwise that of the third expression. Only one of these
expressions is evaluated.
I have tested this approach in GCC 4.9.0, clang 3.8.0, VS2013 Update 4, and VS2015 Update 4 with the harshest warning levels. In all cases there are no warnings, and the checking expression is never evaluated in Release builds (in fact the whole thing is completely optimized away). Bear in mind, though, that with this approach you will get in trouble really fast if you put expressions that have side effects inside the assertion macro, though this is a very bad practice in the first place.
Also, I would expect that static analyzers may warn about "result of an expression is always constant" (or something like that) with this approach. I've tested for this with clang, VS2013, VS2015 static analysis tools and got no warnings of that kind.
The simplest thing is to only declare/assign those variables if the asserts will exist. The NDEBUG macro is specifically defined when asserts are not in effect (done that way round just because -DNDEBUG is a convenient way to disable debugging, I think), so this tweaked copy of @Jardel's answer should work (cf. comment by @AdamPeterson on that answer):
#ifndef NDEBUG
int Result =
#endif
Func();
assert(Result == 1);
or, if that doesn't suit your tastes, all sorts of variants are possible, e.g. this:
#ifndef NDEBUG
int Result = Func();
assert(Result == 1);
#else
Func();
#endif
In general with this stuff, be careful that there's never a possibility for different translation units to be built with different NDEBUG macro states -- especially re. asserts or other conditional content in public header files. The danger is that you, or users of your library, might accidentally instantiate a different definition of an inline function from the one used inside the compiled part of the library, quietly violating the one definition rule and making the runtime behaviour undefined.
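A sketch of how that mismatch can bite, with invented names:
// shared.h
#include <cassert>

inline int checked_index(int i, int size)
{
    assert(i >= 0 && i < size);   // expands differently depending on NDEBUG
    return i;
}

// If one translation unit includes this header with NDEBUG defined and
// another without, the program contains two different definitions of
// checked_index: a one-definition-rule violation, so the behaviour is
// undefined.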
This is a bad use of assert, IMHO. Assert is not meant as an error-reporting tool; it's meant to assert preconditions. If Result is not used elsewhere, it's not a precondition.
Presumably you use a macro to control your assert definition, such as "_ASSERT". If so, you can do this:
#ifdef _ASSERT
int Result =
#endif /*_ASSERT */
Func();
assert(Result == 1);
int Result = Func();
assert( Result == 1 );
Result;
This will make the compiler stop complaining about Result not being used.
But you should think about using a version of assert that does something useful at run-time, like log descriptive errors to a file that can be retrieved from the production environment.
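A minimal sketch of such an assert; the macro name, log file name, and message format are placeholders:
#include <cassert>
#include <cstdio>

#ifndef NDEBUG
#define CHECKED(x) assert(x)
#else
#define CHECKED(x)                                                \
    do {                                                          \
        if (!(x)) {                                               \
            if (std::FILE *f = std::fopen("errors.log", "a")) {   \
                std::fprintf(f, "check failed: %s (%s:%d)\n",     \
                             #x, __FILE__, __LINE__);             \
                std::fclose(f);                                   \
            }                                                     \
        }                                                         \
    } while (0)
#endif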
I'd use the following:
#ifdef _DEBUG
#define ASSERT(FUNC, CHECK) assert(FUNC == CHECK)
#else
#define ASSERT(FUNC, CHECK)
#endif
...
ASSERT(Func(), 1);
This way, for release builds, the compiler doesn't even need to produce any code for the assert (note that Func() is then not called at all in release builds).
If this code is inside a function, then act on and return the result:
bool bigPicture() {
//Check the results
bool success = (Func() == 1);
assert(success && "Bad times");
//Success is given, so...
actOnIt();
//and
return success;
}
// Value is always computed. We also call assert(value) if assertions are
// enabled. Value is discarded either way. You do not get a warning either
// way. This is useful when (a) a function has a side effect (b) the function
// returns true on success, and (c) failure seems unlikely, but we still want
// to check sometimes.
template < class T >
void assertTrue(T const &value)
{
assert(value);
}
template < class T >
void assertFalse(T const &value)
{
assert(!value);
}
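Usage, building on the templates above, might look like this (Func as in the question):
int Func();   // as in the question

void example()
{
    // Func() is always called, so its side effect survives release builds;
    // only the check itself disappears when asserts are disabled.
    assertTrue(Func() == 1);
}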
I haven't succeeded in using [[maybe_unused]], but you can use the GCC unused attribute:
int Result __attribute__((__unused__)) = Func();
gcc Variable-Attributes
Related
I've inherited a sizeable codebase where someone, somehow, has written several conditionals like so:
enum
{
FOO_TYPE_A,
FOO_TYPE_B,
FOO_TYPE_C,
FOO_TYPE_D
};
void bar(int fooType)
{
if (fooType == FOO_TYPE_A || FOO_TYPE_B) // <-- This will always be true, since FOO_TYPE_B is nonzero!
{
// Do something intended for only type A or B
}
// Do things general to A,B,C,D
}
where the condition check should clearly be:
if (fooType == FOO_TYPE_A || fooType == FOO_TYPE_B)
Is there a warning in gcc I can turn on to find them all, similar to MSDN's C4127?
Specifically, I'm using the Android NDK r9d.
If not, why not? It seems like a useful catch-all for unintentional assignment, unsigned >= 0 comparisons, as well as the above foolishness.
EDIT: Made the code more verbose to illustrate the problem.
I do not see a warning that corresponds to MSDN C4127. GCC does have a warning that is somewhat similar in intent, but not to address your problem: -Wtype-limits
Warn if a comparison is always true or always false due to the
limited range of the data type, but do not warn for constant
expressions. For example, warn if an unsigned variable is compared
against zero with < or >=. This warning is also enabled by
-Wextra.
As you can see, GCC explicitly states it does not warn about constant expressions. The motivation for this may be due to the common use of constant expressions to leverage the compiler's dead code elimination, so that macros (or portions of them) can be optimized away by using a compile-time constant. It would be used as an alternative to conditional compilation (#if defined() and #if X == Y), as the macro reads more naturally like a regular function. As a hypothetical example:
#define VERIFY(E) \
do { \
if (NO_VERIFY) break; \
if (!(E) && (VERIFY_LOG_LEVEL >= log_level() || VERIFY_LOG_ALWAYS)) { \
log("validation error for: " #E); \
} \
} while (0)
I think the problem is that the variable is a #define or a const. CONSTANT_2 being a non-zero constant also causes this problem.
I am trying to understand the use of static_assert and assert and what the difference is between them; however, there are very few sources/explanations about this.
here is some code
// ConsoleApplication3.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include "conio.h"
#include "cassert"
#include "iostream"
int main()
{
assert(2+2==4);
std::cout << "Execution continues past the first assert\n";
assert(2+2==5);
std::cout << "Execution continues past the second assert\n";
_getch();
}
comments on redundancy will be appreciated (since I am learning "how to c++")
output in cmd
Execution continues past the first assert
Assertion failed: 2+2==5, file c:\users\charles\documents\visual studio 2012\pro
jects\consoleapplication3\consoleapplication3\consoleapplication3.cpp, line 14
I have been trying to find out the different methods and uses of it, but as far as I understand it is a runtime check and another "type" of if statement.
Could someone clarify the use and explain what each one does and the difference between them?
You can think of assertions as sanity checks. You know that some condition should be true unless you've screwed something up, so the assertion should pass. If you have indeed screwed something up, the assertion will fail and you'll be told that something is wrong. It's just there to ensure the validity of your code.
A static_assert can be used when the condition is a constant expression. This basically means that the compiler is able to evaluate the assertion before the program ever actually runs. You will be alerted that a static_assert has failed at compile-time, whereas a normal assert will only fail at run time. In your example, you could have used a static_assert, because the expressions 2+2==4 and 2+2==5 are both constant expressions.
static_asserts are useful for checking compile-time constructs such as template parameters. For example, you could assert that a given template argument T must be a POD type with something like:
static_assert(std::is_pod<T>::value, "T must be a POD type");
Note that you generally only want run-time assertions to be checked during debugging, so you can disable assert by #defineing NDEBUG.
assert() is a macro whose expansion depends on whether macro NDEBUG is defined. If so, assert() doesn't expand to anything - it's a no-op. When NDEBUG is not defined, assert(x) expands into a runtime check, something like this:
if (!x) cause_runtime_to_abort()
It is common to define NDEBUG in "release" builds and leave it undefined in "debug" builds. That way, assert()s are only executed in debug code and do not make it into release code at all. You normally use assert() to check things which should always be true - a function's preconditions and postconditions or a class's invariants, for example. A failed assert(x) should mean "the programmer thought that x holds, but a bug somewhere in the code (or in their reasoning) made that untrue."
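For example, a small sketch of that style with an invented function:
#include <cassert>
#include <cmath>

double checked_sqrt(double x)
{
    assert(x >= 0.0);                                  // precondition
    double r = std::sqrt(x);
    assert(std::fabs(r * r - x) <= 1e-9 * (x + 1.0));  // postcondition (approximate)
    return r;
}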
static_assert() (which was introduced in C++11) is a keyword - similar to e.g. typedef. It can only be used on compile-time expressions, and if it fails, it results in a compilation error. It does not result in any object code and is not executed at all.
static_assert() is primarily useful in templates, if you want to prevent instantiating a template incorrectly. For example:
template <class IntType>
IntType foo(IntType x)
{
static_assert(std::is_integral<IntType>::value, "foo() may be used with integral types only.");
// rest of the code
}
That way, trying to call foo() with e.g. a float will result in a compile-time error with a sensible message.
Occasionally, static_assert() can also be useful outside of templates, e.g. like this:
static_assert(sizeof(void*) > 4, "This code does not work in 32 bits");
A static_assert is evaluated at compile time; an assert is evaluated at runtime.
I doubt you really can't find any sources for this, but I'll nevertheless give some explanation.
assert is for runtime checks which should never ever fail. If they fail, the program will terminate ungracefully. You can disable assertions for a release build by specifying a compile time flag. As assertions should never fail (they are assertions after all), in a working program, that should not make any difference. However, the expressions inside an assert must not have side effects, because there is no guarantee that they are executed. Example:
unsigned int add_non_negative_numbers(int a, int b)
{
// good
assert(a >= 0);
assert(b >= 0);
return (unsigned int)a + (unsigned int)b;
}
void checked_read(int fd, char *buffer, int count)
{
// BAD: if assertions are disabled, read() will _not_ be called
assert(read(fd, buffer, count) == count);
/* better: not perfect though, because read can always fail
int read_bytes = read(fd, buffer, count);
assert(read_bytes == count);
*/
}
static_assert is new in C++11 and can be used to perform compile-time checks. These checks produce a compiler error when they fail and block the program from compiling:
static_assert(sizeof(char) == 1, "Compiler violates the standard");
I have encountered this claim multiple times and can't figure out what it is supposed to mean. Since the resulting code is compiled using a regular C compiler, it will end up being type checked just as much (or as little) as any other code.
So why are macros not type safe? It seems to be one of the major reasons why they should be considered evil.
Consider the typical "max" macro, versus function:
#define MAX(a,b) a > b ? a : b
int max(int a, int b) {return a > b ? a : b;}
Here's what people mean when they say the macro is not type-safe in the way the function is:
If a caller of the function writes
char *foo = max("abc","def");
the compiler will warn.
Whereas, if a caller of the macro writes:
char *foo = MAX("abc", "def");
the preprocessor will replace that with:
char *foo = "abc" < "def" ? "abc" : "def";
which will compile with no problems, but almost certainly not give the result you wanted.
Additionally of course the side effects are different, consider the function case:
int x = 1, y = 2;
int a = max(x++,y++);
the max() function will operate on the original values of x and y; each variable is incremented exactly once, when the arguments are evaluated before the call.
In the macro case:
int x = 1, y = 2;
int b = MAX(x++,y++);
that second line is preprocessed to give:
int b = x++ > y++ ? x++ : y++;
Again, no compiler warnings or errors but will not be the behaviour you expected.
Macros aren't type safe because they don't understand types.
You can't tell a macro to only take integers. The preprocessor recognises a macro usage and it replaces one sequence of tokens (the macro with its arguments) with another set of tokens. This is a powerful facility if used correctly, but it's easy to use incorrectly.
With a function you can define a function void f(int, int) and the compiler will flag if you try to use the return value of f or pass it strings.
With a macro - no chance. The only check that gets made is that it is given the correct number of arguments. Then it replaces the tokens appropriately and passes the result on to the compiler.
#define F(A, B)
will allow you to call F(1, 2), or F("A", 2) or F(1, (2, 3, 4)) or ...
You might get an error from the compiler, or you might not, if something within the macro requires some sort of type safety. But that's not down to the preprocessor.
You can get some very odd results when passing strings to macros that expect numbers, as the chances are you'll end up using string addresses as numbers without a squeak from the compiler.
Well they're not directly type-safe... I suppose in certain scenarios/usages you could argue they can be indirectly (i.e. resulting code) type-safe. But you could certainly create a macro intended for integers and pass it strings... the pre-processor handling the macros certainly doesn't care. The compiler may choke on it, depending on usage...
Since macros are handled by the preprocessor, and the preprocessor doesn't understand types, it will happily accept variables that are of the wrong type.
This is usually only a concern for function-like macros, and any type errors will often be caught by the compiler even if the preprocessor doesn't, but this isn't guaranteed.
An example
In the Windows API, if you wanted to show a balloon tip on an edit control, you'd use Edit_ShowBalloonTip. Edit_ShowBalloonTip is defined as taking two parameters: the handle to the edit control and a pointer to an EDITBALLOONTIP structure. However, Edit_ShowBalloonTip(hwnd, peditballoontip); is actually a macro that evaluates to
SendMessage(hwnd, EM_SHOWBALLOONTIP, 0, (LPARAM)(peditballoontip));
Since configuring controls is generally done by sending messages to them, Edit_ShowBalloonTip has to do a typecast in its implementation, but since it's a macro rather than an inline function, it can't do any type checking in its peditballoontip parameter.
A digression
Interestingly enough, sometimes C++ inline functions are a bit too type-safe. Consider the standard C MAX macro
#define MAX(a, b) ((a) > (b) ? (a) : (b))
and its C++ inline version
template<typename T>
inline T max(T a, T b) { return a > b ? a : b; }
MAX(1, 2u) will work as expected, but max(1, 2u) will not. (Since 1 and 2u are different types, max can't be instantiated on both of them.)
This isn't really an argument for using macros in most cases (they're still evil), but it's an interesting result of C and C++'s type safety.
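If you do need mixed argument types with the template, a sketch of two ways to resolve the deduction conflict:
#include <iostream>

template<typename T>
inline T max(T a, T b) { return a > b ? a : b; }

int main()
{
    // max(1, 2u);                              // error: T deduced as both int and unsigned
    std::cout << max<unsigned>(1, 2u) << '\n';  // OK: T fixed explicitly, 1 converts
    std::cout << max(1u, 2u) << '\n';           // OK: argument types agree up front
}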
There are situations where macros are even less type-safe than functions. E.g.
void printlog(int iter, double obj)
{
printf("%.3f at iteration %d\n", obj, iteration);
}
Calling this with the arguments reversed will cause truncation and erroneous results, but nothing dangerous. By contrast,
#define PRINTLOG(iter, obj) printf("%.3f at iteration %d\n", obj, iter)
causes undefined behavior. To be fair, GCC warns about the latter, but not about the former, but that's because it knows printf -- for other varargs functions, the results are potentially disastrous.
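For your own varargs functions you can hand GCC/Clang the same knowledge printf enjoys via the format attribute; printlog2 here is an invented example:
#include <cstdarg>
#include <cstdio>

// Argument 1 is a printf-style format string and the varargs start at
// argument 2, so the compiler can type-check calls the way it checks printf.
__attribute__((format(printf, 1, 2)))
void printlog2(const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    std::vfprintf(stderr, fmt, ap);
    va_end(ap);
}

// printlog2("%.3f at iteration %d\n", 3, 0.5);  // now warns about the swap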
When the macro is expanded, it just does text substitution in your source files. This happens before any compilation, so the preprocessor is not aware of the datatypes of anything it changes.
Macros aren't type safe, because they were never meant to be type safe.
The compiler does the type checking after macros have been expanded.
Macros and their expansion are meant as a helper to the ("lazy") author (in the sense of writer/reader) of C source code. That's all.
I'm in a position where this design would greatly improve the clarity and maintenance-needs for my code.
What I'm looking for is something like this:
#define MY_MACRO(arg) #if (arg)>0 cout<<((arg)*5.0)<<endl; #else cout<<((arg)/5.0)<<endl; #endif
The idea here:
The pre-processor substitutes different lines of code depending on the compile-time (constant) value of the macro argument. Of course, I know this syntax doesn't work, as the # is seen as the string-ize operator instead of the standard #if, but I think this demonstrates the pre-processor functionality I am trying to achieve.
I know that I could just put a standard if statement in there, and the compiler/runtime would be left to check the value. But this is unnecessary work for the application when arg will always be passed a constant value, like 10.8 or -12.5 that only needs to be evaluated at compile-time.
Performance needs for this number-crunching application require that all unnecessary runtime conditions be eliminated whenever possible, and many constant values and macros have been used (in place of variables and functions) to make this happen. The ability to continue this trend without having to mix pre-processor code with real if conditions would make this much cleaner - and, of course, code-cleanliness is one of the biggest concerns when using macros, especially at this level.
As far as I know, you cannot have #if (or anything similar) inside your macro.
However, if the condition is known at compile time, you can safely use a normal if statement. The compiler will optimise it away (assuming you have optimisations turned on).
It's called "Dead code elimination"
Simple, use real C++:
template <bool B> void foo_impl (int arg) { cout << arg*5.0 << endl; }
template < > void foo_impl<false>(int arg) { cout << arg/5.0 << endl; }
template <int I> void foo ( ) { foo_impl< (I>0) >(I); }
[edit]
Or in modern C++, if constexpr.
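A sketch of the if constexpr form (C++17):
#include <iostream>

template <int I>
void foo()
{
    if constexpr (I > 0)
        std::cout << I * 5.0 << std::endl;
    else
        std::cout << I / 5.0 << std::endl;   // false branch discarded at compile time
}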
In a coding style question about infinite loops, some people mentioned they prefer the for(;;) style because the while(true) style gives warning messages on MSVC about a conditional expression being constant.
This surprised me greatly, since the use of constant values in conditional expressions is a useful way of avoiding #ifdef hell. For instance, you can have in your header:
#ifdef CONFIG_FOO
extern int foo_enabled;
#else
#define foo_enabled 0
#endif
And the code can simply use a conditional and trust the compiler to elide the dead code when CONFIG_FOO isn't defined:
if (foo_enabled) {
...
}
Instead of having to test for CONFIG_FOO every time foo_enabled is used:
#ifdef CONFIG_FOO
if (foo_enabled) {
...
}
#endif
This design pattern is used all the time in the Linux kernel (for instance, include/linux/cpumask.h defines several macros to 1 or 0 when SMP is disabled and to a function call when SMP is enabled).
What is the reason for that MSVC warning? Additionally, is there a better way to avoid #ifdef hell without having to disable that warning? Or is it an overly broad warning which should not be enabled in general?
A warning doesn't automatically mean that code is bad, just suspicious-looking.
Personally I start from a position of enabling all the warnings I can, then turn off any that prove more annoying than useful. That one that fires anytime you cast anything to a bool is usually the first to go.
I think the reason for the warning is that you might inadvertently have a more complex expression that evaluates to a constant without realizing it. Suppose you have a declaration like this in a header:
const int x = 0;
then later on, far from the declaration of x, you have a condition like:
if (x != 0) ...
You might not notice that it's a constant expression.
I believe it's to catch things like
if( x=0 )
when you meant
if( x==0 )
A simple way to avoid the warning would be:
#ifdef CONFIG_FOO
extern int foo_enabled;
#else
extern int foo_enabled = 0;
#endif
(Note that extern int foo_enabled = 0; is a definition despite the extern keyword; if this snippet lives in a header, that definition should go in a single source file instead, to avoid multiple-definition errors.)