I am having a weird optimisation-only bug, so I am trying to determine which flag is causing it. The error (incorrect computation) occurs with -O1, but not with -O0. Therefore, I thought I could use all of the -f flags that -O1 includes to narrow down the culprit. However, when I try that (using this list http://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html), it works fine again!
Can anyone explain this, or give other suggestions of what to look for? I've run the code through valgrind, and it does not report any errors.
EDIT
I found that the computation is correct with -O0, incorrect with -O1, but correct again with -O1 -ffloat-store. Any thoughts of what to look for that would cause it not to work without -ffloat-store?
EDIT2
If I compile with my normal release flags, there is a computation error. However, if I add either:
-ffloat-store
or
-mpc64
to the list of flags, the error goes away.
Can anyone suggest a way to track down the line at which this flag is making a difference so I could potentially change it instead of requiring everyone using the code to compile with an additional flag?
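For reference, here is a minimal sketch (not my actual code; the flag behaviour is assumed from the GCC docs) of the x87 excess-precision effect that -ffloat-store and -mpc64 both influence:

#include <cstdio>

int main()
{
    // On 32-bit x86 the x87 FPU computes in 80-bit registers. Whether a
    // value stays in a register or is spilled to memory (rounding it to
    // 64 bits) can change comparisons at -O1 but not at -O0.
    volatile double x = 1.0;     // volatile defeats constant folding
    double a = x / 3.0;          // may be held in an 80-bit register
    volatile double b = a;       // forced through memory: rounded to 64 bits
    std::printf("%d\n", a == b); // may print 0 without -ffloat-store/-mpc64
    return 0;
}

If forcing a suspect intermediate through memory like this (volatile, or a store to a global) makes the error move or vanish, that localises the offending line without changing everyone's flags.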
From back in my GCC/C++ days, the optimisation bug like this that I remember involved functions with no return value specified: with -O0, such a function would often return the last value of that type computed in the function (probably what you wanted to return, right?), whereas with optimisations on it returned the default value for the type instead (this might only be true for value types; I can't remember). This meant you could develop for ages with the debug flags on and everything would look fine, and then it would stop working when you optimised.
For me, not specifying a return value is a compilation error nowadays, but that was C++ back then.
The solution to this was to switch on the strongest set of warnings and then treat all warnings as errors: that will highlight things like this. (If you are not already doing this, then you are in for a whole load of pain!)
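A minimal sketch of that failure mode, together with GCC/Clang flags that catch it (file and function names are made up):

// missing_return.cpp -- compile with: g++ -Wall -Wextra -Werror missing_return.cpp
// -Wall enables -Wreturn-type; -Werror turns the diagnostic into a hard error.
int pick(bool flag)
{
    if (flag)
        return 1;
    // oops: no return on this path
    // warning: control reaches end of non-void function [-Wreturn-type]
}

int main()
{
    return pick(true);
}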
If you already have all of the errors / warnings on then the only other option is that a method call with side-effects is being optimised out. That is going to be harder to track down.
#include <vector>

std::vector<int>::iterator foo();
void bar(void*) {}

int main()
{
    void* p;
    while (foo() != foo() && (p = 0, true))
    {
        bar(p);
    }
    return 0;
}
Results in error:
c:\users\jessepepper\source\repos\testcode\consoleapplication1\consoleapplication1.cpp(15): error C4703: potentially uninitialized local pointer variable 'p' used
It's kind of a bug, but very typical for the kind of code you write.
First, this isn't an error, it's a warning. C4703 is a level 4 warning (meaning that it isn't even enabled by default). So in order to get it reported as an error (and thus interrupt compilation), compiler arguments or pragmas were passed to enable this warning and turn it into an error (/W4 and /WX are the most likely, I think).
Then there's a trade-off in the compiler. How complex should the data flow analysis be to determine whether a variable is actually uninitialized? Should it be interprocedural? The more complex it is, the slower the compiler gets (and because of the halting problem, the issue may be undecidable anyway). The simpler it is, the more false positives you get because the condition that guarantees initialization is too complex for the compiler to understand.
In this case, I suspect that the compiler's analysis works as follows: the assignment to p is behind a conditional (it only happens if foo() != foo()). The usage of p is also behind a conditional (it only happens if that complex and-expression is true). The compiler cannot establish a relationship between these conditions (the analysis is not complex enough to realize that foo() != foo() is a precondition to the entire while loop condition being true). Thus, the compiler errs on the side of assuming that the access could happen without prior initialization and emits the warning.
So it's an engineering trade-off. You could report the bug, but if you do, I suggest you supply a more compelling real-world example of idiomatic code to argue in favor of making the analysis more complex. Are you sure you can't restructure your original code to make it more approachable to the compiler, and more readable for humans at the same time?
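For instance, initializing the pointer at its declaration keeps the logic intact while making the flow analysis trivial (one possible restructuring, sketched from the snippet above):

#include <vector>

std::vector<int>::iterator foo();
void bar(void*) {}

int main()
{
    void* p = nullptr; // initialized up front: nothing left for C4703 to flag
    while (foo() != foo() && (p = nullptr, true))
    {
        bar(p);
    }
    return 0;
}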
I did some experimenting with VC++2017 Preview.
It's definitely a bug. It makes it impossible to compile and link code that might be correct, albeit smelly.
A warning would be acceptable. (See Sebastian Redl's answer.) But in the latest and greatest VC++2017, it is being treated as an error, not a warning, even with warnings turned off and "Treat warnings as errors" set to No. Something odd is happening. The "error" is thrown late, after the compiler says "Generating code". I would guess, and it's only a guess, that the "Generating code" pass is doing global analysis to determine whether uninitialized access is possible, and it's getting it wrong. Even then, you should be able to disable the error, IMO.
I do not know if this is new behavior. Reading Sebastian's answer, I presume it is. When I get any kind of warning at any level, I always fix it in the code, so I would not know.
Jesse, click on the triangular flag near the top right of Visual Studio, and report it.
For sure it's a bug. I tried to remove it in all possible ways, including #pragma. The real issue is that this is reported as an error, not as a warning as Microsoft says. This is a big mistake on Microsoft's part. It's NOT a WARNING, it's an ERROR. Please do not keep repeating that it's a warning, because it's NOT.
What I'm doing is trying to compile a third-party library whose sources I do not want to fix in any way. It should compile in normal cases, but it DOESN'T compile in VS2017 because of the infamous "error C4703: potentially uninitialized local pointer variable *** used".
Has anyone found a solution for this?
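One thing worth checking (this is from memory, so treat it as a hint rather than gospel): C4703 is among the warnings that the /sdl option promotes to errors, and in that mode neither /wd4703 nor #pragma warning will suppress it, which would match the behaviour described above. A possible workaround, assuming your build can live without the SDL checks:

// Project settings: C/C++ -> General -> SDL checks -> No (/sdl-)
// or on the command line:
//   cl /sdl- consoleapplication1.cpp
// With /sdl in effect, #pragma warning(disable : 4703) has no effect.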
Recently I was bothered by a crash of my program in release mode, while it ran fine in debug mode.
By inspecting my code closely, I found that I had forgotten to return true at the end of a function, which caused the crash. The function should return false when it fails; otherwise, it returns true.
I am wondering whether this is a defect of the compiler (VS 2013), as it (maybe) added the return true statement at the end of the function for me in debug builds, but did not in release builds. Consequently, the programmer will spend lots of time debugging the fault, although the programmer is the one to blame.
:)
Flowing off the end of a function that is supposed to return a value is undefined behavior. Undefined behavior means the compiler can do anything and still be compliant. Giving a warning message is compliant. Not giving a warning message is compliant. Erasing your hard drive: That's also compliant. Fortunately for me, that hasn't happened yet. I've had the misfortune of invoking undefined behavior many, many times.
One reason this is undefined behavior is because there are some weird cases where flow analysis can't decide whether a function returns a value. Another reason is that you might have used assembly to set the return value in a way that works just fine on your computer. A third reason is that the compiler has to do flow analysis to make this determination; this is something many compilers don't do unless optimization is enabled.
That said, a missing return before the closing brace will often prompt the compiler to check whether the function returns a value on every path. The compiler was being nice to you when it issued a warning.
You received a warning message and ignored it. Never do that. Compile with flags set to a reasonably high level and address each and every warning. Code should always compile clean. Always.
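A sketch of the pattern from the question, plus the single targeted flag (GCC and Clang both accept it) that turns this exact mistake into a compile error:

// validate.cpp -- compile with: g++ -Werror=return-type validate.cpp
bool validate(int value)
{
    if (value < 0)
        return false;
    // forgot "return true;" here: undefined behaviour at run time,
    // but a hard compile-time error with -Werror=return-type
}

int main()
{
    return validate(-1) ? 1 : 0;
}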
C and C++ are tolerant languages. When the programmer writes code that the compiler can compile even if it looks weird, the compiler emits a warning. The warning means you are writing something that may contain an error, but the decision is yours.
This lets you do certain things deliberately. For example, you can treat a 2D array as a 1D array (see the sketch below), which could not be done in some other languages. But the counterpart is: never ignore a warning unless you are sure you know why you are forcing the compiler to do something it does not like.
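A sketch of that idiom; the compiler may grumble, and pedantic readings of the standard dislike it, but it is a deliberate, well-understood choice:

#include <cstdio>

int main()
{
    int grid[3][4] = { 0 };
    int* flat = &grid[0][0];         // view the 2D array as 12 contiguous ints
    for (int i = 0; i < 3 * 4; ++i)
        flat[i] = i;                 // strictly, this walks past grid[0]'s end
    std::printf("%d\n", grid[2][3]); // prints 11
    return 0;
}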
Conclusion: as soon as the programmer ignores a warning that ultimately leads to the error, the programmer is to blame ;-)
We have a large binary compiled with the -g and -O compiler flags. The issue is that setting a breakpoint on some file/line does not break at that file/line, or breaks at some other line, while debugging with gdb. I understand that this could be due to the -O compiler flag (used for optimization). Unfortunately, I am not in a position to remove the -O flag, as there are many scripts at the build level that I would need to take care of.
How can I ensure that the code breaks at the file/line I want? Is there a line of code I can add that will never be optimised away, or that will reliably break when debugging with gdb? I tried something like this:
int x;
int y;
But even then the GDB breakpoint did not work properly. How can I set it correctly?
There are two problems I can think of, inlining and optimisation. Since there is no standard way to tell the compiler to disable inlining and/or optimisation, you'll only be able to do it in a compiler specific way.
To disable inlining in GCC, you can use __attribute__((noinline)) on the method.
To prevent the compiler from optimising functions away (and, untested, to give you a stable line of code where you can set your breakpoint), just add this to the code:
asm ("");
Both of these are documented at this page.
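Putting the two together, a sketch of a stable breakpoint anchor (GCC syntax; the function names are mine):

__attribute__((noinline)) void debug_anchor()
{
    asm(""); // basic asm is implicitly volatile, so the call cannot be elided
}

void hot_function()
{
    // ... optimised code ...
    debug_anchor(); // "break debug_anchor" in gdb reliably stops here
    // ... more optimised code ...
}

int main()
{
    hot_function();
    return 0;
}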
I've got a bit of a problem with debugging a C++ program using GDB.
When I use print object.member, it doesn't always print the value of the variable correctly. Instead, it prints the value of one of the arguments to the function I'm debugging. And it doesn't change through the function, although I change the value of object.member throughout.
And the thing is, the program is rather large and consists of several modules, with partially specialised templates and such, so I can't post it all here.
Now I tried to create a minimal test case, but however simple I made it, I can't make it fail the same way. I mean, I can't make it not work.
So all I can ask is, has anybody ever seen this behaviour in GDB, and have you found out what caused it and how to solve it?
There are questions here about similar behaviour, but those amount to the program not being compiled properly (optimisation levels too high, etc.). I compiled it with -Wall -Wextra -pedantic -g -O0, so that can't be it.
And the program runs fine; I can cout << object.member; and that outputs the expected value, so I don't know what to try now.
I've seen similar behaviour before. Unfortunately, gdb is really 'C' based so although it will deal with C++, I've found it occasionally to be quite picky about displaying values.
When displaying more complex items (such as maps, strings or the dereferenced contents of smart pointers) you have to sometimes be quite explicit about dereferencing and casting variables.
Another possibility is the function itself - anything unusual about it? Is it templated for example?
Can you create a reference to this variable in your code and try displaying that? Or take the address of the variable and dereference the contents; only if it's publicly available, of course.
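For example, in a gdb session, something along these lines sometimes coaxes out the right value when a plain print does not (object, member, and MyType are placeholders for your names):

(gdb) print object.member
(gdb) print &object.member
(gdb) print *(&object.member)
(gdb) print (MyType) object.member

The second and third commands take the address and dereference it explicitly; the last one casts to the member's concrete type.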
Naturally, the source code must match what you've compiled (so it must be older than the exe), but gdb will normally warn you about such things.
I like to enforce a policy of no warnings when I check someone's code. Any warnings that appear have to be explicitly documented, since sometimes it's not easy to remove a warning, or doing so would cost too many cycles or too much memory, etc.
But there is a downside to this policy: warnings may be removed in ways that are potentially dangerous, i.e. the method used actually hides the problem rather than fixing it.
The one I'm most acutely aware of is explicit casts, which might hide a bug.
What other potentially dangerous ways of removing compiler warnings in C(++) are there that I should look out for?
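To make the cast example concrete, here is a hypothetical sketch of the kind of thing I mean:

#include <cstdint>

// A narrowing warning (-Wconversion on GCC/Clang, C4244 on MSVC) flags the
// conversion below; a cast makes the warning vanish without making the code
// correct:
int32_t to_millis(int64_t nanos)
{
    return (int32_t)(nanos / 1000000); // silently truncates once the result
                                       // no longer fits in 32 bits (~24.8 days)
}

int main()
{
    return to_millis(2000000) == 2 ? 0 : 1;
}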
const correctness can cause a few problems for beginners:
// following should have been declared as f( const int & x )
void f( int & x ) {
    ...
}

later:

// n exists only to pass the parameter "4"
int n = 4;
// really wanted to say f(4), but that won't bind to a non-const reference
f( n );
Edit1: In a somewhat similar vein, marking all member variables as mutable, because your code often changes them when const correctness says it really shouldn't.
Edit2: Another one I've come across (possibly from Java programmers) is tacking throw() specifications onto functions, whether they could actually throw or not.
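Sketches of both anti-patterns (all names made up):

#include <string>

// Edit1's pattern: mutable used to silence const errors, so "const" no
// longer promises anything to the caller.
class Config {
    mutable std::string path_; // mutable only to make the const method compile
public:
    void setPath(const std::string& p) const { path_ = p; } // a const setter!
};

// Edit2's pattern: a blanket throw() specification regardless of what the
// body might really do; in C++03, throwing through it calls std::unexpected().
void load(const Config&) throw()
{
    // may call code that throws; the specification just hides the possibility
}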
Well, there's the obvious way - disabling a specific warning for parts of the code:
#pragma warning( disable : 4507 34 )
EDIT: As has been pointed out in the comments, it is sometimes necessary to use this in cases where you know that the warnings are OK (if it weren't a useful feature, there would have been no reason to put it in in the first place). However, it is also a very easy way to "ignore" warnings in your code and still get it to compile silently, which is what the original question was about.
I think it's a delicate issue. My view is that warnings should be checked thoroughly to see if the code is correct / does what is intended. But often there is correct code that will produce warnings, and trying to eliminate them just convolutes the code or forces a rewrite in a less natural way.
I recall that in a previous release I had correct and solid code that produced a couple of warnings, and coworkers started complaining about it. The code was much cleaner and did what it was intended to do. In the end, the code went into production with the warnings.
Also, different compiler versions will produce different warnings, so it becomes rather pointless to enforce a "no warnings" policy when the result depends on the mood of the compiler developers.
I want to emphasize how important it is to check all warnings at least once.
Btw I develop in C and C++ for embedded systems.
Commenting out (or worse, deleting) the code that generates the warning. Sure, the warning goes away, but you are more than a little likely to end up with code that doesn't do what you intend.
I also enforce a no-warnings rule, but you are right that you can't just remove the warning without careful consideration. And to be honest, at times I've left warnings in for a while because the code was right. Eventually I clean it up somehow, because once you have more than a dozen warnings in the build, people stop paying attention to them.
What you described is not a problem unique to warnings. I can't tell you how many times I've seen someone's bug fix for a crash be "I added a couple of NULL checks". You have to go to the root cause: should that variable be NULL? If not, why was it?
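The pattern usually looks something like this (a hypothetical sketch):

#include <cstddef>

struct Widget { void draw() {} };

void render(Widget* w)
{
    if (w == NULL) // the "fix" added after the crash report...
        return;    // ...but why was w ever NULL? That question is still open,
                   // and the real bug has just moved somewhere else.
    w->draw();
}

int main()
{
    render(NULL); // no crash any more, but no answer either
    return 0;
}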
This is why we have code reviews.
The biggest risk would be that someone would spend hours of development time to solve a minor warning that has no effect on the code. That would be a waste of time. Sometimes it's just easier to keep a warning and add a line of comment explaining why the warning occurs. (Until someone has time to resolve these trivial warnings.)
In my experience, resolving trivial warnings often adds two more days of work for developers. Those two days could make the difference between finishing before the deadline and missing it.