Is this a compiler bug in MSVC++ 2017 update 3?

#include <vector>
std::vector<int>::iterator foo();
void bar(void*) {}
int main()
{
    void* p;
    while (foo() != foo() && (p = 0, true))
    {
        bar(p);
    }
    return 0;
}
Results in error:
c:\users\jessepepper\source\repos\testcode\consoleapplication1\consoleapplication1.cpp(15): error C4703: potentially uninitialized local pointer variable 'p' used

It's kind of a bug, but very typical for the kind of code you write.
First, this isn't an error, it's a warning. C4703 is a level 4 warning (meaning that it isn't even enabled by default). So in order to get it reported as an error (and thus interrupt compilation), compiler options or pragmas were passed to enable this warning and turn it into an error (/W4 together with /WX is the most likely combination, or /sdl, which promotes this particular warning to an error).
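For reference, a hedged sketch of how that promotion can also happen in source, using MSVC's pragma syntax (this is just one way to reproduce the behaviour, not necessarily what the project in question does):

#pragma warning(error: 4703)  // promote "potentially uninitialized local pointer variable used" to an error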
Then there's a trade-off in the compiler. How complex should the data flow analysis be to determine whether a variable is actually uninitialized? Should it be interprocedural? The more complex it is, the slower the compiler gets (and because of the halting problem, the issue may be undecidable anyway). The simpler it is, the more false positives you get because the condition that guarantees initialization is too complex for the compiler to understand.
In this case, I suspect that the compiler's analysis works as follows: the assignment to p is behind a conditional (it only happens if foo() != foo()). The usage of p is also behind a conditional (it only happens if that complex and-expression is true). The compiler cannot establish a relationship between these conditions (the analysis is not complex enough to realize that foo() != foo() is a precondition to the entire while loop condition being true). Thus, the compiler errs on the side of assuming that the access could happen without prior initialization and emits the warning.
So it's an engineering trade-off. You could report the bug, but if you do, I suggest you supply a more compelling real-world example of idiomatic code to argue in favor of making the analysis more complex. Are you sure you can't restructure your original code to make it more approachable to the compiler, and more readable for humans at the same time?
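For example, a minimal sketch of one such restructuring (the dummy container v and the trivial foo body are made up here just so the snippet also links): initializing p up front keeps the logic identical and gives the flow analysis nothing to complain about.

#include <vector>

static std::vector<int> v;  // hypothetical backing container so the snippet links
std::vector<int>::iterator foo() { return v.begin(); }
void bar(void*) {}

int main()
{
    void* p = nullptr;  // explicit initialization: the analysis no longer has to prove anything
    while (foo() != foo() && (p = nullptr, true))  // same logic as the original snippet
    {
        bar(p);
    }
    return 0;
}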

I did some experimenting with VC++2017 Preview.
It's definitely a bug. It makes it impossible to compile and link code that might be correct, albeit smelly.
A warning would be acceptable (see Sebastian Redl's answer). But in the latest and greatest VC++ 2017, it is being treated as an error, not a warning, even with warnings turned off and "Treat warnings as errors" set to No. Something odd is happening. The "error" is thrown late - after the compiler says "Generating code". I would guess, and it's only a guess, that the "Generating code" pass is doing global analysis to determine whether uninitialized access is possible, and it's getting it wrong. Even then, you should be able to disable the error, IMO.
I do not know if this is new behavior. Reading Sebastian's answer, I presume it is. When I get any kind of warning at any level, I always fix it in the code, so I would not know.
Jesse, click on the triangular flag near the top right of Visual Studio, and report it.

For sure it's a bug. I tried to remove it in all possible ways, including #pragma. The real issue is that this is reported as an error, not as a warning as Microsoft says. This is a big mistake from Microsoft. It's NOT a WARNING, it's an ERROR. Please, do not repeat again that it's a warning, because it's NOT.
What I'm doing is trying to compile some third-party library whose sources I do not want to modify in any way and which should compile in normal cases, but it DOESN'T compile in VS2017 because of the infamous "error C4703: potentially uninitialized local pointer variable *** used".
Has anyone found a solution for that?

Related

Why isn't g++ -Wreorder smarter?

Looking at What's the point of g++ -Wreorder, I fully understand what -Wreorder is useful for. But it doesn't seem unreasonable that the compiler would be able to detect whether such a reordering is harmless:
struct Harmless {
    Harmless() : b(1), a(2) {}
    int a;
    int b;
};
or broken:
struct Broken {
    Broken() : b(1), a(b + 1) {}
    int a;
    int b;
};
My question is then: why doesn't GCC detect (and warn about) the actual use of an undefined member in an initializer instead of this blanket warning on the ordering of initializers?
As far as I understand, -Wuninitialized only applies to automatic variables, and indeed it does not detect the error above.
EDIT:
A stab at formalizing the behavior I want:
Given initializer list : a1(expr1), a2(expr2), a3(expr3) ... an(exprn), I want a warning if (and only if) the execution of any of the initializers, in the order they will be executed, would reference an uninitialized value. I.e. in the same manner as -Wuninitialized warns about use of uninitialized automatic variables.
Some additional background: I work in a mostly Windows-based company, where basically everybody but me uses Visual Studio. VS does not have this warning, thus nobody cares about having the correct order (and has no means of knowing when they screw up the ordering except manual inspection), leaving me with endless warnings that I have to fix every time someone breaks something. I would like to be informed only about the cases that are really problematic and ignore the benign cases. So my question is maybe better phrased as: is it technically feasible to implement a warning/error like this? My gut feeling says it is, but the fact that it isn't already implemented makes me doubt it.
My speculation is that it's for the same reason we have -Wold-style-cast: safety, erring on the side of being too conservative. All it takes is a moment's inattention to turn Harmless into Broken. Maybe the developer is in a hurry, or has an older version of GCC, or sees that it's "just a warning" and presses on.
This is basically true among many warnings. They often are spurious and require a little bit of restructuring to compile cleanly, but on some occasions they represent real problems. Every good programmer will prefer working through some false positives if that means getting fewer false negatives.
I would be surprised if there's a valid direct answer to the question. There's no technical reason I see that it couldn't be done. It's just . . . why bother trying to figure out if something questionable is actually okay? Programming is the human's job.
As a personal reason, I think initializing variables in the order you declare them often makes sense.
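For instance, a hedged sketch of the fix that the warning is nudging you toward (Fixed is a made-up name): declare b before a, so that initialization order matches the dependency and the dependent initializer reads an already-constructed member.

struct Fixed {
    Fixed() : b(1), a(b + 1) {}  // members are initialized in declaration order: b first, then a
    int b;  // declared first, therefore initialized first
    int a;  // a(b + 1) now reads an already-initialized b
};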

Strange behavior of VS2013

Recently I was bothered by a crash of my program in release mode while it ran fine in debug mode.
By inspecting my code closely I found that I had forgotten to return true at the end of a function, which caused the crash. The function should return false when it fails; otherwise, it returns true.
I am wondering whether this is a defect of the compiler (VS 2013), as it (maybe) added the return true statement at the end of the function for me in debug mode, but did not when building the release. Consequently, the programmer will spend lots of time debugging the fault, even though the programmer is the one to blame.
:)
Flowing off the end of a function that is supposed to return a value is undefined behavior. Undefined behavior means the compiler can do anything and still be compliant. Giving a warning message is compliant. Not giving a warning message is compliant. Erasing your hard drive: That's also compliant. Fortunately for me, that hasn't happened yet. I've had the misfortune of invoking undefined behavior many, many times.
One reason this is undefined behavior is because there are some weird cases where flow analysis can't decide whether a function returns a value. Another reason is that you might have used assembly to set the return value in a way that works just fine on your computer. A third reason is that the compiler has to do flow analysis to make this determination; this is something many compilers don't do unless optimization is enabled.
That said, a missing return before the closing brace will often prompt a compiler to check whether the function returns a value on every path. The compiler was being nice to you when it issued a warning.
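A minimal sketch of the situation the question describes (the function name is made up): flowing off the end of a non-void function compiles, typically with a warning, but using the result is undefined behavior.

#include <cstdio>

// Hypothetical function: supposed to return false on failure and true otherwise.
bool validate(int x)
{
    if (x < 0)
        return false;
    // forgot "return true;" here -- compilers typically warn (MSVC C4715, g++ -Wreturn-type),
    // but the program still builds, and using the result is undefined behavior
}

int main()
{
    if (validate(42))  // may appear to work in a debug build and misbehave or crash in release
        std::puts("ok");
}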
That you received a warning message and ignored it -- Never do that. Compile with flags set to a reasonably high level and address each and every warning. Code should always compile clean. Always.
C and C++ are tolerant languages. When the programmer writes code that the compiler can compile even though it looks weird, the compiler emits a warning. The warning means you are writing something that may contain an error, but the decision is yours.
This tolerance lets you make certain optimisations deliberately. For example, you can use a 2D array as a 1D array, which could not be done in some other languages. But the counterpart is: never ignore a warning unless you are sure you know why you are forcing the compiler to accept something it does not like.
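As an illustration of that 2D-as-1D idiom, here is a sketch; it relies on the array's contiguous layout, and a pedantic reading of the standard does not bless stepping a pointer across row boundaries, even though compilers accept it.

#include <cstddef>

int main()
{
    int grid[3][4] = {};
    int* flat = &grid[0][0];  // pointer to the first element of the contiguous block

    // Treat the 3x4 array as 12 consecutive ints. This relies on the guaranteed
    // contiguous layout; strictly speaking, indexing past a row boundary through
    // this pointer is outside what the standard promises, yet compilers accept it
    // (at most with a warning), which is exactly the tolerance described above.
    for (std::size_t i = 0; i < sizeof(grid) / sizeof(int); ++i)
        flat[i] = static_cast<int>(i);

    return grid[2][3];  // 11
}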
Conclusion: as soon as the programmer ignores a warning that ultimately leads to the error, the programmer is to blame ;-)

Why is there no warning when I write an empty main?

If I write a program like the following one, g++ and Visual Studio have the courtesy of warning me that the local variable a is never used:
int main()
{
    int a; // An unused variable? Warning! Warning!
}
If I remove the unused variable (to make the compiler happy), it leaves me with the following program:
int main()
{
    // An empty main? That's fine.
}
Now, I am left with a useless program.
Maybe I am missing something, but if an unused variable is bad enough to raise a warning, why would an empty program be ok?
The example above is pretty simple. But in real life, I might have a big program with an empty main (because I forgot to put anything in it). In that case, having a warning would be a good thing, wouldn't it?
Maybe I am missing an option in g++ or visual studio that can raise a warning/error when the main is empty?
The reason for this is simple: if there is no return statement in main, it behaves as if it ended with return 0; (i.e. success), as defined by the standard.
So an empty main is fine, no return needed, no function calls needed, nothing.
As to why GCC doesn't warn you: warnings are there to help you with common mistakes. Leaving a variable unused can lead to confusing errors and code bloat.
However, leaving main entirely empty isn't a common mistake for anyone but a beginner, and it isn't worth warning about (because it's entirely legal as well).
I suspect a lot of it is that compilers generally try to warn about things that are potential problems, but aren't necessarily apparent.
Now it's certainly true that if all your main contains is the definition of a variable that's never used, that's fairly apparent -- but if you've defined 16 variables (or whatever) and one of them is no longer used, that may not be so obvious.
I suppose the same could happen with an empty main -- for example, you could have a whole web of #ifdef/#elif/etc. that leaves main entirely empty for some particular platform. I'm pretty sure I've never run across this though, and I'm pretty sure I've never heard of anybody else seeing it either. At least to me, that suggests it probably doesn't arise often enough in practice for most people to care much about the possibility.
if an unused variable is bad enough to raise a warning, why would an empty program be ok?
First of all, an empty main does not equal an empty program. There could be static objects with non-trivial constructors/destructors. These would get invoked irrespective of whether main is empty.
Secondly, one could think of lots and lots of potential errors that a compiler could warn about, but most compilers don't. I think this particular one doesn't come up very often (and takes seconds to figure out). I therefore don't see a compelling case for specifically diagnosing it.
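A small sketch of that first point (the names here are made up): even with an empty main, this program prints something, because the static object's constructor and destructor run around main.

#include <iostream>

// Hypothetical static object with a non-trivial constructor and destructor.
struct Logger {
    Logger()  { std::cout << "starting up\n"; }
    ~Logger() { std::cout << "shutting down\n"; }
};

static Logger logger;  // constructed before main runs, destroyed after it returns

int main()
{
    // main is empty, yet the program still does observable work
}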
When I was cleaning up inherited C code that comprised the customized runner for Informix 4GL, I fixed every warning, having set the warning flags to catch everything, and there were lots of warnings.
I haven't used Visual C++ in a long time. Can't VC++ be configured to flag the most severe warnings? It is probably not the default setting, but one you have to change.
It is possible then that at least the unused variable would be flagged.
In a global sense, int main() is just the definition of the program's main function, which returns success (0) when it finishes.
The main function is the point where all C++ programs start their execution, independently of its location within the source code.
So this:
int main()
{
    // An empty main? That's fine.
    // notice that the "return 0;" part is here by default, whether you wrote it or not
}
is just the definition of a function which returns an admissible value.
So everything is ok, that's why the compiler is silent.

“Uninitialized use” warning in the g++ compiler

I’m using g++ with warning level -Wall -Wextra and treating warnings as errors (-Werror).
Now I’m sometimes getting an error “variable may be used uninitialized in this function”.
By “sometimes” I mean that I have two independent compilation units that both include the same header file. One compilation unit compiles without error, the other gives the above error.
The relevant piece of code in the header file is as follows. Since the function is pretty long, I’ve only reproduced the relevant bit below.
The exact error is:
'cmpres' may be used uninitialized in this function
And I’ve marked the line with the error by * below.
for (; ;) {
    int cmpres; // *
    while (b <= c and (cmpres = cmp(b, pivot)) <= 0) {
        if (cmpres == 0)
            ::std::iter_swap(a++, b);
        ++b;
    }
    while (c >= b and (cmpres = cmp(c, pivot)) >= 0) {
        if (cmpres == 0)
            ::std::iter_swap(d--, c);
        --c;
    }
    if (b > c) break;
    ::std::iter_swap(b++, c--);
}
(cmp is a functor that takes two pointers x and y and returns –1, 0 or +1 if *x < *y, *x == *y or *x > *y respectively. The other variables are pointers into the same array.)
This piece of code is part of a larger function but the variable cmpres is used nowhere else. Hence I fail to understand why this warning is generated. Furthermore, the compiler obviously understands that cmpres will never be read uninitialized (or at least, it doesn’t always warn, see above).
Now I have two questions:
Why the inconsistent behaviour? Is this warning generated by a heuristic? (This is plausible since emitting this warning requires a control flow analysis which is NP hard in the general case and cannot always be performed.)
Why the warning? Is my code unsafe? I have come to appreciate this particular warning because it has saved me from very hard to detect bugs in other cases – so this is a valid warning, at least sometimes. Is it valid here?
An algorithm that diagnoses uninitialized variables with no false negatives or positives must (as a subroutine) include an algorithm that solves the Halting Problem. Which means there is no such algorithm. It is impossible for a computer to get this right 100% of the time.
I don't know how GCC's uninitialized variable analysis works exactly, but I do know it's very sensitive to what early optimization passes have done to the code. So I'm not at all surprised you get false positives only sometimes. It does distinguish cases where it's certain from cases where it can't be certain --
int foo() { int a; return a; }
produces "warning: ‘a’ is used uninitialized in this function" (emphasis mine).
EDIT: I found a case where recent versions of GCC (4.3 and later) fail to diagnose an uninitialized variable:
int foo(int x)
{
    int a;
    return x ? a : 0;
}
Early optimizations notice that if x is nonzero, the function's behavior is undefined, so they assume x must be zero and replace the entire body of the function with "return 0;" This happens well before the pass that generates the used-uninitialized warnings, so there's no diagnostic. See GCC bug 18501 for gory details.
I bring this up partially to demonstrate that production-grade compilers can get uninitialized-variable diagnostics wrong both ways, and partially because it's a nice example of the point that undefined behavior can propagate backward in execution time. There's nothing undefined about testing x, but because code control-dependent on x has undefined behavior, a compiler is allowed to assume that the control dependency is never satisfied and discard the test.
There was an interesting discussion on the Clang dev mailing list related to these heuristics this week.
The bottom line is: it's actually quite difficult to diagnose uninitialized values without getting exponential behavior...
Apparently (from the discussion), gcc uses a predicate-based approach, but given your experience it seems that it is not always sufficient.
I suspect it's got something to do with the fact that the assignment is mixed within the condition (and after a short-circuiting operator at that...). Have you tried without?
I think both the gcc and clang folks would be very interested by this example since it's relatively common practice in C or C++ and thus could benefit from some tuning.
The code is correct, but the compiler is failing to identify that the variable is never used without initialization.
I would suggest that it's likely a heuristic error - that's what the "may" is for. I suspect that not many loop conditions look quite like that. That code is not unsafe, because on all control paths cmpres is assigned before use. However, I certainly wouldn't find it wrong to initialize it first.
You could, however, have some kind of variable shadowing going on here. That would be the only explanation I could think of for only one of the two translation units giving errors.
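As suggested above, the least intrusive fix is usually to give cmpres a value at its declaration. A sketch of the same fragment from the question (still not self-contained; only the declaration line changes):

for (; ;) {
    int cmpres = 0;  // redundant on every path that actually reads it, but it removes the "may be used uninitialized" diagnosis
    while (b <= c and (cmpres = cmp(b, pivot)) <= 0) {
        if (cmpres == 0)
            ::std::iter_swap(a++, b);
        ++b;
    }
    while (c >= b and (cmpres = cmp(c, pivot)) >= 0) {
        if (cmpres == 0)
            ::std::iter_swap(d--, c);
        --c;
    }
    if (b > c) break;
    ::std::iter_swap(b++, c--);
}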

Dangerous ways of removing compiler warnings?

I like to enforce a policy of no warnings when I review someone's code. Any warnings that appear have to be explicitly documented, as sometimes it's not easy to remove a warning, or doing so might require too many cycles or too much memory, etc.
But there is a downside to this policy: warnings can be removed in ways that are potentially dangerous, i.e. the method used actually hides the problem rather than fixing it.
The one I'm most acutely aware of is explicit casts, which might hide a bug.
What other potentially dangerous ways of removing compiler warnings in C(++) are there that I should look out for?
const correctness can cause a few problems for beginners:
// following should have been declared as f(const int & x)
void f( int & x ) {
...
}
later:
// n is only used to pass the parameter "4"
int n = 4;
// really wanted to say f(4)
f( n );
Edit1: In a somewhat similar vein, marking all member variables as mutable, because your code often changes them when const correctness says it really shouldn't.
Edit2: Another one I've come across (possibly from Java programmers) is to tack throw() specifications onto functions, whether they could actually throw or not.
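A sketch of why that is dangerous (hypothetical function, pre-C++20 semantics): if a function marked throw() actually throws, the exception never reaches the caller's handler; the violation goes through std::unexpected, which by default terminates the program.

#include <exception>
#include <iostream>
#include <stdexcept>

// Hypothetical function that was annotated throw() just to "document the interface".
void parse_config(bool broken) throw()
{
    if (broken)
        throw std::runtime_error("bad config");  // violates the empty exception specification
}

int main()
{
    try {
        parse_config(true);
    } catch (const std::exception& e) {
        // Never reached: violating throw() routes through std::unexpected(),
        // which by default calls std::terminate() -- the handler cannot help.
        std::cout << e.what() << '\n';
    }
}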
Well, there's the obvious way - disabling a specific warning for parts of the code:
#pragma warning( disable : 4507 34 )
EDIT: As has been pointed out in the comments, it is sometimes necessary to use this in cases where you know that the warnings are OK (if it weren't a useful feature, there would have been no reason to put it in in the first place). However, it is also a very easy way to "ignore" warnings in your code and still get it to compile silently, which is what the original question was about.
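If suppression really is the right call for a known-benign spot, one less blunt variant (MSVC pragma syntax; GCC and Clang have an analogous #pragma GCC diagnostic push/pop) is to confine it to the smallest possible region:

// Confine the suppression to the one construct that was reviewed and judged benign,
// instead of silencing the warning for the rest of the translation unit.
#pragma warning(push)
#pragma warning(disable : 4507 34)
// ... the reviewed code that triggers the warning ...
#pragma warning(pop)  // warnings are back in force from here on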
I think it's a delicate issue. My view is that warnings should be checked thoroughly to see if the code is correct / does what is intended. But often there is correct code that will produce warnings, and trying to eliminate them just convolutes the code or forces a rewrite in a less natural way.
I recall that in a previous release I had correct and solid code that produced a couple of warnings, and coworkers started complaining about this. The code was much cleaner and did what it was intended to do. In the end the code went into production with the warnings.
Also, different compiler versions will produce different warnings, so it becomes more pointless to enforce a "no warnings" policy when the result depends on the mood of the compiler developers.
I want to emphasize how important it is to check all warnings at least once.
Btw I develop in C and C++ for embedded systems.
Commenting out (or worse, deleting) the code that generates the warning. Sure, the warning goes away, but you are more than a little likely to end up with code that doesn't do what you intend.
I also enforce a no-warnings rule, but you are right that you can't just remove the warning without careful consideration. And to be honest, at times I've left warnings in for a while because the code was right. Eventually I clean it up somehow, because once you have more than a dozen warnings in the build, people stop paying attention to them.
What you described is not a problem unique to warnings. I can't tell you how many times I've seen someone's bug-fix for a crash be "I added a couple of NULL checks". You have to go to the root cause: Should that variable be NULL? If not, why was it?
This is why we have code reviews.
The biggest risk would be that someone would spend hours of development time to solve a minor warning that has no effect on the code. That would be a waste of time. Sometimes it's just easier to keep a warning and add a line of comment explaining why the warning occurs. (Until someone has time to resolve these trivial warnings.)
In my experience, resolving trivial warnings often adds two more days of work for developers. Those could make the difference between finishing before and after the deadline.