I have the following C++ code:
#include <memory>
#include <vector>
#include <string>
#include <unordered_map>

void erase_from_vector(std::vector<std::weak_ptr<int>> &mvec) {
    for (auto mvec_it = mvec.begin(); mvec_it != mvec.end(); )
        mvec_it = mvec.erase(mvec_it);
}

int main(void) {
#if 0
    std::vector<std::weak_ptr<int>> mvec;
    for (auto mvec_it = mvec.begin(); mvec_it != mvec.end(); )
        mvec_it = mvec.erase(mvec_it);
#endif
}
GCC generates a warning when I compile it this way:
ppk@fif-cloud-dev:~$ g++ --version
g++ (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609
ppk@fif-cloud-dev:~$ g++ -fstrict-overflow -Wstrict-overflow=5 -O2 -std=c++14 warn1.cc
warn1.cc: In function ‘void erase_from_vector(std::vector<std::weak_ptr<int> >&)’:
warn1.cc:6:6: warning: assuming signed overflow does not occur when changing X +- C1 cmp C2 to X cmp C2 -+ C1 [-Wstrict-overflow]
void erase_from_vector(std::vector<std::weak_ptr<int>> &mvec) {
^
But when I change the -O2 flag to -O1, it compiles without any warnings. When I keep the -O2 flag and uncomment the code in main(), it also compiles without any warnings. The Clang compiler does not report any warnings either.
I suppose that this warning comes from the std::weak_ptr destructor, where the counter is decremented, but I have no idea why it appears in my code.
Is the warning caused by an error of mine or by an error in the compiler?
Most likely a quirk of gcc 5.4. It's gone as soon as you get to gcc 6.1, and I don't see it reappear again in any later version.
gcc 5.4 (warnings)
gcc 6.1 (no warnings)
It's especially damning that Clang doesn't reproduce the behavior.
It should be noted that such behavior isn't exactly a bug, according to the doc (emphasis mine)
An optimization that assumes that signed overflow does not occur is perfectly safe if the values of the variables involved are such that overflow never does, in fact, occur. Therefore this warning can easily give a false positive: a warning about code that is not actually a problem.
That you're using -Wstrict-overflow=5 makes it even more likely, as this is the highest warning level that comes with its own disclaimer:
this warning level gives a very large number of false positives
My suggestion is to either upgrade your compiler or accept that gcc 5.4 is going to give you a false positive here.
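Purely as a side note from me, and assuming the real loop conditionally drops expired weak_ptrs rather than unconditionally erasing everything, the manual erase loop can also be rewritten with the erase-remove idiom; whether that happens to dodge this particular false positive on gcc 5.4 is something I haven't verified:

#include <algorithm>
#include <memory>
#include <vector>

// Hypothetical variant of erase_from_vector: drop only the expired entries.
void erase_expired(std::vector<std::weak_ptr<int>> &mvec) {
    // remove_if shifts the elements to keep to the front, and a single
    // erase() call then trims the tail, instead of erasing one element
    // per loop iteration.
    mvec.erase(std::remove_if(mvec.begin(), mvec.end(),
                              [](const std::weak_ptr<int> &p) { return p.expired(); }),
               mvec.end());
}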
After a compiler update from g++ v7.5 (Ubuntu 18.04) to v11.2 (Ubuntu 22.04), the following code triggers the maybe-uninitialized warning:
#include <cstdint>

void f(std::uint16_t v)
{
    (void) v;
}

int main()
{
    unsigned int pixel = 0;
    std::uint16_t *f16p = reinterpret_cast<std::uint16_t*>(&pixel);
    f(*f16p);
}
Compiling on Ubuntu 22.04, g++ v11.2 with -O3 -Wall -Werror.
The example code is a reduced form of a real use case, and its ugliness is not the issue here.
Since we have -Werror enabled, this leads to a build error, so I'm trying to figure out how to deal with it. Is this an instance of this gcc bug, or is there another explanation for the warning?
https://godbolt.org/z/baaMTxhae
You don't initialize any object of type std::uint16_t, so the std::uint16_t being read is used uninitialized.
This error is suppressed by -fno-strict-aliasing, allowing pixel to be accessed through a uint16_t lvalue.
Note that -fstrict-aliasing is the default at -O2 or higher.
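For example, with the reduced test case from the question (the file name is assumed), adding the flag makes the warning go away:

g++ -O3 -Wall -Werror -fno-strict-aliasing main.cpp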
It could also be fixed with a may_alias pointer:
using aliasing_uint16_t = std::uint16_t [[gnu::may_alias]];
aliasing_uint16_t* f16p = reinterpret_cast<std::uint16_t*>(&pixel);
f(*f16p);
Or you can use std::start_lifetime_as:
static_assert(sizeof(int) >= sizeof(std::uint16_t) && alignof(int) >= alignof(std::uint16_t));
// (C++23, not implemented in g++11 or 12 yet)
auto *f16p = std::start_lifetime_as<std::uint16_t>(&pixel);
// (C++17)
auto *f16p = std::launder(reinterpret_cast<std::uint16_t*>(new (&pixel) char[sizeof(std::uint16_t)]));
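A further workaround, not mentioned above but commonly used for this kind of type punning, is to copy the bytes into a real std::uint16_t with std::memcpy; compilers generally optimize the copy away, and because an actual std::uint16_t object gets initialized, there should be nothing left for -Wmaybe-uninitialized to flag. A minimal sketch:

#include <cstdint>
#include <cstring>

void f(std::uint16_t v)
{
    (void) v;
}

int main()
{
    unsigned int pixel = 0;
    std::uint16_t low16;
    // Copy the first two bytes of pixel into a genuine std::uint16_t object;
    // no pointer of the wrong type is ever dereferenced.
    std::memcpy(&low16, &pixel, sizeof low16);
    f(low16);
}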
Our project uses C++11/14, and we want to use nullptr instead of 0 or NULL with pointers, even when 0 (as an integer literal) is allowed.
I have the following code:
int main()
{
    int *ptr1 = nullptr; // #1
    int *ptr2 = 0;       // #2
}
If I compile with GCC (5.3.0) and the flag -Wzero-as-null-pointer-constant, it warns on #2, but I can't find a similar flag in Clang. If I compile the code with Clang (3.7.1) and the flag -Weverything, I don't get any warning about #2.
So, is there any way to get a similar warning for this in Clang?
clang has this warning as of 5.0; I added it here.
Clang doesn't support this kind of warning (i.e., there's no -Wzero-as-null-pointer-constant equivalent in Clang). You can see it for yourself if you add the -Weverything option (mind, do it only for testing), which enables all of Clang's warnings.
Live Demo
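For completeness, with a Clang that has the flag (5.0 or newer, per the first answer), the warning can be requested explicitly; the file name here is just a placeholder:

clang++ -std=c++14 -Wzero-as-null-pointer-constant main.cpp

This should diagnose the line marked #2 while leaving #1 alone.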
I have some code using large integer literals as follows:
if(nanoseconds < 1'000'000'000'000)
This gives the compiler warning integer constant is too large for 'long' type [-Wlong-long]. However, if I change it to:
if(nanoseconds < 1'000'000'000'000ll)
...I instead get the warning use of C++11 long long integer constant [-Wlong-long].
I would like to disable this warning just for this line, rather than disabling -Wlong-long (i.e. using -Wno-long-long) for the entire project. I have tried surrounding it with:
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wlong-long"
...
#pragma GCC diagnostic pop
but that does not seem to work here with this warning. Is there something else I can try?
I am building with -std=gnu++1z.
Edit: minimal example for the comments:
#include <cstdlib>  // for EXIT_SUCCESS
#include <iostream>

auto main()->int {
    double nanoseconds = 10.0;
    if(nanoseconds < 1'000'000'000'000ll) {
        std::cout << "hello" << std::endl;
    }
    return EXIT_SUCCESS;
}
Building with g++ -std=gnu++1z -Wlong-long test.cpp gives test.cpp:6:20: warning: use of C++11 long long integer constant [-Wlong-long]
I should mention this is on a 32bit platform, Windows with MinGW-w64 (gcc 5.1.0) - the first warning does not seem to appear on my 64bit Linux systems, but the second (with the ll suffix) appears on both.
It seems that the C++11 warning when using the ll suffix may be a gcc bug. (Thanks @praetorian)
A workaround (inspired by @nate-eldredge's comment) is to avoid using the literal and have it produced at compile time with constexpr:
constexpr int64_t trillion = int64_t(1'000'000) * int64_t(1'000'000); // needs <cstdint>
if(nanoseconds < trillion) ...
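For reference, here is a self-contained version of the question's minimal example with the workaround dropped in; it is simply the two snippets above combined:

#include <cstdint>
#include <cstdlib>
#include <iostream>

auto main()->int {
    double nanoseconds = 10.0;
    // Build the 10^12 constant at compile time from two smaller literals,
    // so no single literal needs the ll suffix that triggers -Wlong-long.
    constexpr std::int64_t trillion = std::int64_t(1'000'000) * std::int64_t(1'000'000);
    if(nanoseconds < trillion) {
        std::cout << "hello" << std::endl;
    }
    return EXIT_SUCCESS;
}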
I seem not to be getting sign-compare warnings on my g++, gcc version 5.1.1 20150618 (Red Hat 5.1.1-4) (GCC).
When I compile the following with these options I don't get a warning, but when I get rid of the const, the warning shows up.
Question: Is this expected behavior?
g++ typecompat.cxx -Wsign-compare -std=c++14
This is the code:
#include <stddef.h>
#include <stdio.h>
#include <assert.h>
#include <vector>

int main() {
    std::vector<int> v {1,2,3};
    const int b = 2;
    assert(v.size()>b);
    return 0;
}
From a godbolt session of this code it looks like gcc is performing constant folding:
cmpq $2, %rax
So it knows for sure the comparison is fine: any attempt to modify a constant variable is undefined behavior, and the compiler assumes that undefined behavior does not occur. If we look at the documentation, it says:
Warn when a comparison between signed and unsigned values could produce an incorrect result when the signed value is converted to unsigned
This gcc bug report, Bogus warning with -Wsign-compare, looks like it covers this kind of false positive:
To be useful, these warnings are meant to be smart - not warning if it can
be proved that the signed value is nonnegative [...]
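For contrast, here is a sketch of the non-const case where the warning does fire, together with one conventional way to silence it; the cast is my addition, not something from the answer or the bug report:

#include <assert.h>
#include <stddef.h>
#include <vector>

int main() {
    std::vector<int> v {1,2,3};
    int b = 2; // no const: gcc can no longer prove b stays nonnegative
    // assert(v.size() > b);  // warns: comparison between signed and unsigned
    assert(v.size() > static_cast<size_t>(b)); // cast is only safe if b is known to be >= 0
    return 0;
}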
Using gcc (4.7.2 here) I get warnings about unused auto variables, but not about other variables:
// cvars.h
#ifndef CVARS_H_
#define CVARS_H_
const auto const_auto = "const_auto";
const char const_char_array[] = "const_char_array";
const char * const_char_star = "const_char_star";
const char use_me = 'u';
#endif // CVARS_H_
//---
//comp_unit.cpp
#include "cvars.h"
void somef()
{
    //const_auto // commented out - unused
    use_me; // not using any of the others either
}
// compile with $ g++ -std=c++11 -Wunused-variable -c comp_unit.cpp
// gcc outputs warning: ‘cvars::const_auto’ defined but not used [-Wunused-variable]
// but does not complain about the other variables
Is this an inconsistency in GCC?
1.1 If so, what should happen in all cases, warning or no warning?
1.2 If not, what is the reason for the difference in behavior?
Note: Concerning 1.1, I imagine no warning should be printed in this case (this is what clang does). Otherwise, any compilation unit including a constant-defining header but not using all the constants within would contain lots of warnings.
These warnings are entirely up to the implementation, so there is no "should". But, yes, I agree: constants would ideally not generate these warnings even when declared using auto.
Since I can reproduce your observation in GCC 4.7 and GCC 4.8.0, but not in GCC 4.8.1 or 4.9, I'd say the guys over at GNU would agree too. In fact, I believe you're seeing bug 57183.
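If the warning is a nuisance on the affected gcc versions, one workaround (my suggestion, not part of the answer above) is to mark the header constant with GCC's unused attribute, which tells the compiler that it is deliberate if a translation unit never reads it:

// cvars.h
#ifndef CVARS_H_
#define CVARS_H_

// The attribute keeps -Wunused-variable quiet in translation units that
// include this header but never use the constant.
const auto const_auto __attribute__((unused)) = "const_auto";

#endif // CVARS_H_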