Dangerous ways of removing compiler warnings? - c++

I like to enforce a no-warnings policy when I review someone's code. Any warning that remains has to be explicitly documented, since some warnings aren't easy to remove, or removing them would cost too many cycles or too much memory, etc.
But there is a down-side to this policy: warnings can be removed in ways that are potentially dangerous, i.e. the method used actually hides the problem rather than fixing it.
The one I'm most acutely aware of is explicit casts, which can hide a bug.
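For example (a made-up snippet, not code from any actual review):

#include <iostream>

int main() {
    long long fileSize = 5000000000LL;      // too big for a 32-bit int
    // int size = fileSize;                 // would warn about possible loss of data
    int size = static_cast<int>(fileSize);  // warning gone, value silently truncated
    std::cout << size << '\n';
}

The cast tells the compiler "I meant to do that", whether or not that is true.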
What other potentially dangerous ways of removing compiler warnings in C(++) are there that I should look out for?

const correctness can cause a few problems for beginners:
// following should have been declared as f(const int & x)
void f( int & x ) {
...
}
later:
// n is only used to pass the parameter "4"
int n = 4;
// really wanted to say f(4)
f( n );
Edit1: In a somewhat similar vein, marking all member variables as mutable, because your code often changes them when const correctness says it really shouldn't.
Edit2: Another one I've come across (possibly from Java programmers) is to tack throw() specifications onto functions, whether they can actually throw or not.
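A minimal sketch of why that backfires (hypothetical function, under the pre-C++17 exception-specification rules):

#include <stdexcept>

// throw() is checked at runtime: if an exception escapes anyway,
// std::unexpected() is called, which by default terminates the program.
void parse(const char* s) throw() {
    if (!s)
        throw std::invalid_argument("null input");
}

int main() {
    parse(0); // aborts instead of letting a caller catch the exception
}

So instead of an honest compiler diagnostic you get a runtime abort.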

Well, there's the obvious way - disabling a specific warning for parts of the code:
#pragma warning( disable : 4507 34 )
EDIT: As has been pointed out in the comments, this is sometimes necessary in cases where you know the warnings are OK (if it weren't a useful feature, there would have been no reason to put it in in the first place). However, it is also a very easy way to "ignore" warnings in your code and still get it to compile silently, which is what the original question was about.
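If you do have to suppress a warning, scoping the pragma at least keeps the suppression from leaking into the rest of the translation unit (MSVC syntax; the warning numbers are just the ones from the snippet above):

#pragma warning( push )
#pragma warning( disable : 4507 34 )
// code that triggers the warnings, with a comment explaining why they are OK
#pragma warning( pop )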

I think it's a delicate issue. My view is that warnings should be checked thoroughly to see if the code is correct / does what is intended. But often there is correct code that will produce warnings, and trying to eliminate them just convolutes the code or forces a rewrite in a less natural way.
I recall that in a previous release I had correct and solid code that produced a couple of warnings, and coworkers started complaining about this. The code was much cleaner and did what it was intended to do. In the end the code went into production with the warnings.
Also, different compiler versions will produce different warnings, so it becomes rather pointless to enforce a "no warnings" policy when the result depends on the mood of the compiler developers.
I want to emphasize how important it is to check all warnings at least once.
Btw I develop in C and C++ for embedded systems.

Commenting out (or worse, deleting) the code that generates the warning. Sure, the warning goes away, but you are more than a little likely to end up with code that doesn't do what you intend.

I also enforce a no-warnings rule, but you are right that you can't just remove the warning without careful consideration. And to be honest, at times I've left warnings in for a while because the code was right. Eventually I clean it up somehow, because once you have more than a dozen warnings in the build, people stop paying attention to them.
What you described is not a problem unique to warnings. I can't tell you how many times I've seen someone's bug fix for a crash amount to "I added a couple of NULL checks". You have to go to the root cause: should that variable be NULL? If not, why was it?
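A hypothetical sketch of that kind of "fix":

struct Widget { void draw() {} };

// The crash goes away, but the real question - why was w ever
// null in the first place? - is never answered.
void redraw(Widget* w) {
    if (w == 0) return;  // band-aid added after the crash report
    w->draw();
}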
This is why we have code reviews.

The biggest risk is that someone will spend hours of development time solving a minor warning that has no effect on the code. That would be a waste of time. Sometimes it's just easier to keep a warning and add a line of comment explaining why the warning occurs (until someone has time to resolve these trivial warnings).
In my experience, resolving trivial warnings often adds a couple more days of work for developers. That can make the difference between finishing before or after the deadline.

Related

What will a "single variable as a statement" do?

Below is a C++ function from a project I took over recently. Each of the last two statements is just a variable name, with no assignment. What does such a statement do? I've been seeing this kind of statement a lot lately.
__fastcall TCardActionArea::TCardActionArea(TComponent* Owner)
    : TArea(Owner, "CardActionArea")
{
    // Get the thread id
    ThreadId = std::__threadid();
    this->Visible = false;
    m_pBackGroundPicture = NULL;
    m_pActionButtonMap.clear();
    m_ActionsButtonDisplayed.clear();
    m_changecnt = 0;
    m_isNextbtn = true;
    m_PictureParamPath1;
    m_PictureParamPath2;
}
Normally, these statements do not do anything, and it is definitely not a common practice to write them.
Maybe the author just wanted to explicitly note that they do not need to assign any values to these members (although a comment would do better).
Maybe this is some hack for a particular compiler to prevent some optimization (e.g. to prevent the member from being optimized away), but it would be a very fragile hack that might not survive the next compiler version.
Maybe the author intended to assign something to these variables and just forgot to do this, so this may be a bug.
Or maybe the author just had some kind of template, e.g. listing all the members to make sure they did not forget anything, and kept the parts of the template they did not need to change.
The only time I've seen statements like this used was to silence compiler warnings about unreferenced variables (usually function arguments). I haven't checked whether MSVC (which, judging by features of this code, was used at least originally) issues such warnings about unused members, though that seems a stretch, as it would only work in some whole-program analysis mode.
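For comparison, the usual idiom for the unreferenced-argument case is a cast to void, which both silences the warning and documents the intent (hypothetical function):

void callback(int id, void* context) {
    (void)context;  // deliberately unused
    // ... do something with id ...
}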

Is this a compiler bug in MSVC++ 2017 update 3

#include <vector>

std::vector<int>::iterator foo();
void bar(void*) {}

int main()
{
    void* p;
    while (foo() != foo() && (p = 0, true))
    {
        bar(p);
    }
    return 0;
}
Results in error:
c:\users\jessepepper\source\repos\testcode\consoleapplication1\consoleapplication1.cpp(15): error C4703: potentially uninitialized local pointer variable 'p' used
It's kind of a bug, but very typical for the kind of code you write.
First, this isn't an error, it's a warning. C4703 is a level 4 warning (meaning it isn't even enabled by default). So in order to get it reported as an error (and thus interrupt compilation), compiler arguments or pragmas were passed to enable this warning and turn it into an error (/W4 plus /WX, or the /sdl checks, which treat C4703 as an error, are the most likely, I think).
Then there's a trade-off in the compiler. How complex should the data flow analysis be to determine whether a variable is actually uninitialized? Should it be interprocedural? The more complex it is, the slower the compiler gets (and because of the halting problem, the issue may be undecidable anyway). The simpler it is, the more false positives you get because the condition that guarantees initialization is too complex for the compiler to understand.
In this case, I suspect that the compiler's analysis works as follows: the assignment to p is behind a conditional (it only happens if foo() != foo()). The usage of p is also behind a conditional (it only happens if that complex and-expression is true). The compiler cannot establish a relationship between these conditions (the analysis is not complex enough to realize that foo() != foo() is a precondition to the entire while loop condition being true). Thus, the compiler errs on the side of assuming that the access could happen without prior initialization and emits the warning.
So it's an engineering trade-off. You could report the bug, but if you do, I suggest you supply a more compelling real-world example of idiomatic code to argue in favor of making the analysis more complex. Are you sure you can't restructure your original code to make it more approachable to the compiler, and more readable for humans at the same time?
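For what it's worth, the cheapest restructuring is probably just to initialize the variable, which makes the code trivially approachable to the analysis (a sketch based on the snippet from the question):

#include <vector>

std::vector<int>::iterator foo();
void bar(void*) {}

int main()
{
    void* p = nullptr;  // explicit initialization: C4703 has nothing to complain about
    while (foo() != foo() && (p = 0, true))
    {
        bar(p);
    }
    return 0;
}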
I did some experimenting with VC++2017 Preview.
It's definitely a bug. It makes it impossible to compile and link code that might be correct, albeit smelly.
A warning would be acceptable. (See Sebastian Redl's answer.) But in the latest and greatest VC++2017 it is being treated as an error, not a warning, even with warnings turned off and "Treat warnings as errors" set to No. Something odd is happening. The "error" is thrown late - after it says "Generating code". I would guess, and it's only a guess, that the "Generating code" pass is doing global analysis to determine whether uninitialized access is possible, and it's getting it wrong. Even then, you should be able to disable the error, IMO.
I do not know if this is new behavior. Reading Sebastian's answer, I presume it is. When I get any kind of warning at any level, I always fix it in the code, so I would not know.
Jesse, click on the triangular flag near the top right of Visual Studio, and report it.
For sure it's a bug. I tried to remove it in every possible way, including #pragma. The real issue is that this is reported as an error, not as a warning as Microsoft says. This is a big mistake on Microsoft's part. It's NOT a WARNING, it's an ERROR. Please do not keep repeating that it's a warning, because it's NOT.
What I'm doing is trying to compile some third-party library whose sources I do not want to fix in any way, and which should compile in the normal case, but it DOESN'T compile in VS2017 because of the infamous "error C4703: potentially uninitialized local pointer variable *** used".
Has anyone found a solution for this?

Should I really massively introduce the explicit keyword?

When I used the (recently released) Cppcheck 1.69 on my code1, it showed a whole lot of messages where I expected none. Disabling noExplicitConstructor proved that all of them were of exactly this kind.
But I found that I'm not the only one with a lot of new Cppcheck messages; look at the results of the analysis of LibreOffice (which I'm allowed to show in public).
What would an experienced programmer do:
Suppress the check?
Massively introduce the explicit keyword?
1 This is of course not my code but code I have to work on at work; it's legacy code: a mix of C and C++ in several (pre-)standard flavors (let's say C++98), and it's a pretty large code base.
I've been bitten in the past by performance hits and outright bugs introduced by implicit conversions. So I tend to always use explicit for all constructors that I do not want to participate in implicit conversions, so that the compiler can help me catch my errors - and I then try to always add a "// implicit intended" comment to the ctors where I explicitly intend for them to be used as converting ctors. I find that this helps me write more correct code with fewer surprises.
… So I'd say "yes, go add explicit" - in the long run you'll be glad you did - that's what I did when I first learned about it, and I'm glad I did.
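A small sketch of that style (Buffer is a made-up class):

struct Buffer {
    explicit Buffer(int size) : size_(size) {}  // no silent int -> Buffer conversion
    Buffer(const char*) : size_(0) {}           // implicit intended
    int size_;
};

void consume(const Buffer&) {}

int main() {
    // consume(42);          // with explicit, this no longer compiles
    consume(Buffer(42));     // the conversion is now visible at the call site
    consume("hello");        // the converting ctor still kicks in implicitly
}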

Expressions with no side effects in C++

See, what I don't get is, why should programs like the following be legal?
int main()
{
    static const int i = 0;
    i < i > i;
}
I mean, surely, nobody actually has any current programs that have expressions with no side effects in them, since that would be very pointless, and it would make parsing & compiling the language much easier. So why not just disallow them? What benefit does the language actually gain from allowing this kind of syntax?
Another example being like this:
int main() {
    static const int i = 0;
    int x = (i);
}
What is the actual benefit of such statements?
And things like the most vexing parse. Does anybody, ever, declare functions in the middle of other functions? I mean, we got rid of things like implicit function declaration, and things like that. Why not just get rid of them for C++0x?
Probably because banning them would make the specification more complex, which would make compilers more complex.
it would make parsing & compiling the language much easier
I don't see how. Why is it easier to parse and compile i < i > i if you're required to issue a diagnostic than it is to parse it if you're allowed to do anything you damn well please, provided that the emitted code has no side effects?
The Java compiler forbids unreachable code (as opposed to code with no effect), which is a mixed blessing for the programmer, and requires a little more work from the compiler (basic block dependency analysis) than what a C++ compiler is actually required to do. Should C++ forbid unreachable code? Probably not. Even though C++ compilers certainly do enough optimization to identify unreachable basic blocks, in some cases they may do too much. Should if (foo) { ... } be an illegal unreachable block if foo is a false compile-time constant? What if it's not a compile-time constant, but the optimizer has figured out how to calculate the value: should the block be legal then, with the compiler having to realize that the reason it can remove it is implementation-specific, so as not to give an error? More special cases.
nobody actually has any current programs that have expressions with no side effects in them
Loads. For example, if NDEBUG is defined, then assert expands to a void expression with no effect. So that's yet more special cases needed in the compiler to permit some useless expressions but not others.
The rationale, I believe, is that if it expanded to nothing then (a) compilers would end up throwing warnings for things like if (foo) assert(bar);, and (b) code like this would be legal in release but not in debug, which is just confusing:
assert(foo) // oops, forgot the semi-colon
foo.bar();
things like the most vexing parse
That's why it's called "vexing". It's a backward-compatibility issue really. If C++ now changed the meaning of those vexing parses, the meaning of existing code would change. Not much existing code, as you point out, but the C++ committee takes a fairly strong line on backward compatibility. If you want a language that changes every five minutes, use Perl ;-)
Anyway, it's too late now. Even if we had some great insight that the C++0x committee had missed, why some feature should be removed or incompatibly changed, they aren't going to break anything in the FCD unless the FCD is definitively in error.
Note that for all of your suggestions, any compiler could issue a warning for them (actually, I don't understand what your problem is with the second example, but certainly for useless expressions and for vexing parses in function bodies). If you're right that nobody does it deliberately, the warnings would cause no harm. If you're wrong that nobody does it deliberately, your stated case for removing them is incorrect. Warnings in popular compilers could pave the way for removing a feature, especially since the standard is authored largely by compiler-writers. The fact that we don't always get warnings for these things suggests to me that there's more to it than you think.
It's convenient sometimes to put useless statements into a program and compile it just to make sure they're legal - e.g. that the types involved can be resolved/matched etc.
Especially in generated code (macros, as well as more elaborate external mechanisms, and templates where policies or types may introduce meaningless expansions in some no-op cases), having fewer special uncompilable cases to avoid keeps things simpler.
There may be some temporarily commented code that removes the meaningful usage of a variable, but it could be a pain to have to similarly identify and comment all the variables that aren't used elsewhere.
While in your examples you show the variables being "int" immediately above the pointless usage, in practice the types may be much more complicated (e.g. operator<()) and whether the operations have side effects may even be unknown to the compiler (e.g. out-of-line functions), so any benefit's limited to simpler cases.
C++ needs a good reason to break backwards (and retained C) compatibility.
Why should doing nothing be treated as a special case? Furthermore, whilst the above cases are easy to spot, one could imagine far more complicated programs where it's not so easy to identify that there are no side effects.
As an iteration of the C++ standard, C++0x has to be backward compatible. Nobody can assert that the statements you wrote do not exist in some piece of critical software written/owned by, say, NASA or the DoD.
Anyway, regarding your very first example: the parser cannot assume that i is a static constant expression and that i < i > i is a useless expression - e.g. if i were a template, i < i > i would be an "invalid variable declaration", not a "useless computation", and still not a parse error.
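A small sketch of why the parse depends on what the names denote (made-up types):

template <typename T> struct A {};
struct B {};

int main() {
    A<B> x;             // same name < name > name shape, but a declaration
    int a = 1, b = 2, c = 3;
    bool r = a < b > c; // here it's plain comparisons: (a < b) > c
    (void)x; (void)r;
}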
Maybe the operators were overloaded to have side effects, like cout << i; this is the reason why they cannot be removed now. On the other hand, C# forbids expressions other than assignments and method calls from being used as statements, and I believe this is a good thing, as it makes the code clearer and more semantically correct. However, C# had the opportunity to forbid this from the very beginning, which C++ did not.
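A sketch of that point (a contrived type, just to show the operators firing):

#include <iostream>

struct Chatty {
    Chatty operator<(Chatty) const { std::cout << "less\n"; return Chatty(); }
    Chatty operator>(Chatty) const { std::cout << "greater\n"; return Chatty(); }
};

int main() {
    Chatty i;
    i < i > i;  // not useless at all: prints "less" then "greater"
}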
Expressions with no side effects can turn up more often than you think in templated and macro code. If you've ever declared std::vector<int>, you've instantiated template code with no side effects. std::vector must destruct all its elements when releasing itself, in case you stored a class as type T. This requires, at some point, a statement similar to ptr->~T(); to invoke the destructor. int has no destructor though, so the call has no effect and will be removed entirely by the optimizer. It's also likely to be inside a loop; then the entire loop has no side effects, so the optimizer removes it as a whole.
So if you disallowed expressions with no side effects, std::vector<int> wouldn't work, for one.
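A simplified sketch of the destroy loop described above (not any particular library's actual code):

template <typename T>
void destroy_range(T* first, T* last) {
    for (; first != last; ++first)
        first->~T();  // for T = int, this is an expression with no effect
}

int main() {
    int buf[4] = {1, 2, 3, 4};
    destroy_range(buf, buf + 4);  // for int, the whole loop optimizes away
}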
Another common case is assert(a == b). In release builds you want these asserts to disappear - but you can't define assert as an empty macro: then if (x) assert(a == b) with a forgotten semicolon would silently pull the next statement into the if - a disaster! In this case assert(x) can be redefined as ((void)0), which is a statement that has no side effects. Now the if statement works correctly in release builds too - it just does nothing.
These are just two common cases. There are many more you probably don't know about. So, while expressions with no side effects seem redundant, they're actually functionally important. An optimizer will remove them entirely so there's no performance impact, too.
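A sketch of the classic pattern (using a made-up MY_ASSERT rather than the real <cassert> implementation):

#include <cstdio>
#include <cstdlib>

#ifdef NDEBUG
#define MY_ASSERT(cond) ((void)0)  // an effect-free expression, still a valid statement
#else
#define MY_ASSERT(cond) \
    ((cond) ? (void)0 : (std::fprintf(stderr, "assert failed: %s\n", #cond), std::abort()))
#endif

int main() {
    int x = 1;
    if (x) MY_ASSERT(x == 1);  // well-formed whether or not NDEBUG is defined
}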

c++ optimization

I'm working on some existing c++ code that appears to be written poorly, and is very frequently called. I'm wondering if I should spend time changing it, or if the compiler is already optimizing the problem away.
I'm using Visual Studio 2008.
Here is an example:
void someDrawingFunction(....)
{
    GetContext().DrawSomething(...);
    GetContext().DrawSomething(...);
    GetContext().DrawSomething(...);
    .
    .
    .
}
Here is how I would do it:
void someDrawingFunction(....)
{
    MyContext& c = GetContext();
    c.DrawSomething(...);
    c.DrawSomething(...);
    c.DrawSomething(...);
    .
    .
    .
}
Don't guess at where your program is spending time. Profile first to find your bottlenecks, then optimize those.
As for GetContext(), that depends on how complex it is. If it's just returning a class member variable, then chances are that the compiler will inline it. If GetContext() has to perform a more complicated operation (such as looking up the context in a table), the compiler probably isn't inlining it, and you may wish to only call it once, as in your second snippet.
If you're using GCC, you can also tag the GetContext() function with the pure attribute. This will allow it to perform more optimizations, such as common subexpression elimination.
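A sketch of what that looks like (GCC/Clang extension; MyContext/GetContext are the names assumed from the question, not a real API):

struct MyContext { void DrawSomething() {} };

static MyContext g_ctx;

// 'pure' promises the result depends only on the arguments and global
// memory the function reads, so repeated calls can be merged.
__attribute__((pure)) static MyContext& GetContext() { return g_ctx; }

void someDrawingFunction() {
    // With the attribute (and nothing writing memory in between),
    // the compiler may fold these into a single call:
    GetContext().DrawSomething();
    GetContext().DrawSomething();
}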
If you're sure it's a performance problem, change it. If GetContext is a function call (as opposed to a macro or an inline function), then the compiler is going to HAVE to call it every time, because the compiler can't necessarily see what it's doing, and thus, the compiler probably won't know that it can eliminate the call.
Of course, you'll need to make sure that GetContext ALWAYS returns the same thing, and that this 'optimization' is safe.
If it is logically correct to do it the second way, i.e. calling GetContext() once or multiple times does not affect your program logic, I'd do it the second way even if profiling proves there is no performance difference either way, so the next developer looking at this code will not ask the same question again.
Obviously, if GetContext() has side effects (I/O, updating globals, etc.) then the suggested optimization will produce different results.
So unless the compiler can somehow detect that GetContext() is pure, you should optimize it yourself.
If you're wondering what the compiler does, look at the assembly code.
That is such a simple change, I would do it.
It is quicker to fix it than to debate it.
But do you actually have a problem?
Just because it's called often doesn't mean it's called TOO often.
If it seems qualitatively piggy, sample it to see what it's spending time at.
Chances are excellent that it is not what you would have guessed.