How to silence long long integer constant warning from GCC - c++

I have some code using large integer literals as follows:
if(nanoseconds < 1'000'000'000'000)
This gives the compiler warning integer constant is too large for 'long' type [-Wlong-long]. However, if I change it to:
if(nanoseconds < 1'000'000'000'000ll)
...I instead get the warning use of C++11 long long integer constant [-Wlong-long].
I would like to disable this warning just for this line, but without disabling -Wlong-long or using -Wno-long-long for the entire project. I have tried surrounding it with:
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wlong-long"
...
#pragma GCC diagnostic pop
but that does not seem to work here with this warning. Is there something else I can try?
I am building with -std=gnu++1z.
Edit: minimal example for the comments:
#include <iostream>
#include <cstdlib>

auto main() -> int {
    double nanoseconds = 10.0;
    if (nanoseconds < 1'000'000'000'000ll) {
        std::cout << "hello" << std::endl;
    }
    return EXIT_SUCCESS;
}
Building with g++ -std=gnu++1z -Wlong-long test.cpp gives test.cpp:6:20: warning: use of C++11 long long integer constant [-Wlong-long]
I should mention this is on a 32-bit platform, Windows with MinGW-w64 (gcc 5.1.0). The first warning does not seem to appear on my 64-bit Linux systems, but the second (with the ll suffix) appears on both.

It seems that the C++11 warning when using the ll suffix may be a gcc bug. (Thanks @praetorian.)
A workaround (inspired by @nate-eldredge's comment) is to avoid writing the literal directly and have it produced at compile time with constexpr:
constexpr int64_t trillion = int64_t(1'000'000) * int64_t(1'000'000);
if(nanoseconds < trillion) ...
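Putting the workaround into the minimal example from the question, a self-contained sketch (the extra includes are mine) builds cleanly here with g++ -std=gnu++1z -Wlong-long:
#include <iostream>
#include <cstdlib>
#include <cstdint>

auto main() -> int {
    double nanoseconds = 10.0;
    // Build the 10^12 constant at compile time instead of writing a
    // literal that needs the ll suffix (which triggers -Wlong-long).
    constexpr std::int64_t trillion = std::int64_t(1'000'000) * std::int64_t(1'000'000);
    if (nanoseconds < trillion) {
        std::cout << "hello" << std::endl;
    }
    return EXIT_SUCCESS;
}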

Related

Why does g++ 11 trigger "maybe uninitialized" warning in connection with this pointer casting

After a compiler update from g++ v7.5 (Ubuntu 18.04) to v11.2 (Ubuntu 22.04), the following code triggers the maybe-uninitialized warning:
#include <cstdint>
void f(std::uint16_t v)
{
    (void) v;
}

int main()
{
    unsigned int pixel = 0;
    std::uint16_t *f16p = reinterpret_cast<std::uint16_t*>(&pixel);
    f(*f16p);
}
Compiling on Ubuntu 22.04, g++ v11.2 with -O3 -Wall -Werror.
The example code is a reduced form of a real use case, and its ugliness is not the issue here.
Since we have -Werror enabled, this leads to a build error, so I'm trying to figure out how to deal with it. Is this an instance of this gcc bug, or is there another explanation for the warning?
https://godbolt.org/z/baaMTxhae
You don't initialize an object of type std::uint16_t, so it is used uninitialized.
The warning is suppressed by -fno-strict-aliasing, which allows pixel to be accessed through a uint16_t lvalue.
Note that -fstrict-aliasing is enabled by default at -O2 or higher.
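As an aside (not part of this answer, just a sketch of a common alternative), the aliasing problem can be avoided entirely by copying the bytes with std::memcpy, which optimizers typically turn into a single load:
#include <cstdint>
#include <cstring>

void f(std::uint16_t v)
{
    (void) v;
}

int main()
{
    unsigned int pixel = 0;
    std::uint16_t low16;
    // Copy the first two bytes of pixel instead of reading it through
    // a uint16_t lvalue; no aliasing rule is violated and the source
    // object is fully initialized.
    std::memcpy(&low16, &pixel, sizeof(low16));
    f(low16);
}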
It could also be fixed with a may_alias pointer:
using aliasing_uint16_t = std::uint16_t [[gnu::may_alias]];
aliasing_uint16_t* f16p = reinterpret_cast<std::uint16_t*>(&pixel);
f(*f16p);
Or you can use std::start_lifetime_as:
static_assert(sizeof(int) >= sizeof(std::uint16_t) && alignof(int) >= alignof(std::uint16_t));
// (C++23, not implemented in g++11 or 12 yet)
auto *f16p = std::start_lifetime_as<std::uint16_t>(&pixel);
// (C++17)
auto *f16p = std::launder(reinterpret_cast<std::uint16_t*>(new (&pixel) char[sizeof(std::uint16_t)]));
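For reference, here is a complete version of the reduced example with the may_alias fix applied (a sketch; it should compile without the warning under -O3 -Wall -Werror on g++ 11, but verify on your exact toolchain):
#include <cstdint>

void f(std::uint16_t v)
{
    (void) v;
}

int main()
{
    unsigned int pixel = 0;
    // The may_alias attribute tells g++ that this pointer type may
    // alias objects of other types, so the strict-aliasing based
    // maybe-uninitialized diagnostic no longer applies.
    using aliasing_uint16_t = std::uint16_t [[gnu::may_alias]];
    aliasing_uint16_t *f16p = reinterpret_cast<std::uint16_t*>(&pixel);
    f(*f16p);
}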

My g++ generates weird warning with vector<weak_ptr> erase() method

I have following C++ code:
#include <memory>
#include <vector>
#include <string>
#include <unordered_map>
void erase_from_vector(std::vector<std::weak_ptr<int>> &mvec) {
    for (auto mvec_it = mvec.begin(); mvec_it != mvec.end(); )
        mvec_it = mvec.erase(mvec_it);
}

int main(void) {
#if 0
    std::vector<std::weak_ptr<int>> mvec;
    for (auto mvec_it = mvec.begin(); mvec_it != mvec.end(); )
        mvec_it = mvec.erase(mvec_it);
#endif
}
GCC generates a warning when I compile it this way:
ppk@fif-cloud-dev:~$ g++ --version
g++ (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609
ppk@fif-cloud-dev:~$ g++ -fstrict-overflow -Wstrict-overflow=5 -O2 -std=c++14 warn1.cc
warn1.cc: In function ‘void erase_from_vector(std::vector<std::weak_ptr<int> >&)’:
warn1.cc:6:6: warning: assuming signed overflow does not occur when changing X +- C1 cmp C2 to X cmp C2 -+ C1 [-Wstrict-overflow]
void erase_from_vector(std::vector<std::weak_ptr<int>> &mvec) {
^
But when I change the -O2 flag to -O1, it compiles without any warnings. When I keep -O2 and uncomment the code in main(), it also compiles without any warnings. Clang does not report any warnings either.
I suppose this warning comes from the std::weak_ptr destructor, where a counter is decremented, but I have no idea why it appears in my code.
Is the warning caused by an error of mine or by an error in the compiler?
Most likely a quirk of gcc 5.4. It's gone as soon as you get to gcc 6.1, and I don't see it reappear again in any later version.
gcc 5.4 (warnings)
gcc 6.1 (no warnings)
It's especially damning that Clang doesn't reproduce the behavior.
It should be noted that such behavior isn't exactly a bug; according to the docs (emphasis mine):
An optimization that assumes that signed overflow does not occur is perfectly safe if the values of the variables involved are such that overflow never does, in fact, occur. Therefore this warning can easily give a false positive: a warning about code that is not actually a problem.
That you're using -Wstrict-overflow=5 makes it even more likely, as this is the highest warning level that comes with its own disclaimer:
this warning level gives a very large number of false positives
My suggestion is to either upgrade your compiler or accept that gcc 5.4 is going to give you a false positive here.
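If upgrading is not an option, you could also try wrapping just that function in GCC's diagnostic pragmas, the same mechanism used in the first question above. This is my sketch, not something the answer suggests, and optimizer-generated warnings do not always respect these pragmas, so verify that it actually takes effect on gcc 5.4:
#include <memory>
#include <vector>

#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wstrict-overflow"
void erase_from_vector(std::vector<std::weak_ptr<int>> &mvec) {
    for (auto mvec_it = mvec.begin(); mvec_it != mvec.end(); )
        mvec_it = mvec.erase(mvec_it);
}
#pragma GCC diagnostic pop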

Which Clang warning is equivalent to Wzero-as-null-pointer-constant from GCC?

Our project uses C++11/14, and we want to use nullptr instead of 0 or NULL with pointers, even when 0 (as an integer literal) is allowed.
I have the following code:
int main()
{
    int *ptr1 = nullptr; // #1
    int *ptr2 = 0;       // #2
}
If I compile with GCC (5.3.0) and the flag -Wzero-as-null-pointer-constant, it warns on #2, but I can't find a similar flag in Clang. If I compile the code with Clang (3.7.1) and the flag -Weverything, I don't get any warning about #2.
So, is there any way to get a similar warning for this in Clang?
clang has this warning as of 5.0; I added it here.
Clang (as of 3.7) didn't support this kind of warning, i.e., there was no -Wzero-as-null-pointer-constant equivalent in Clang. You can see it yourself by adding the -Weverything option (mind, do this only for testing), which enables all of Clang's warnings.
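As the first answer notes, Clang gained the flag in 5.0 with the same spelling as GCC's, so a quick check could look like this (test.cpp is a hypothetical file name):
// test.cpp -- compile with: clang++ -std=c++14 -Wzero-as-null-pointer-constant test.cpp
int main()
{
    int *ptr1 = nullptr; // #1: fine
    int *ptr2 = 0;       // #2: Clang 5.0+ should warn about zero as a null pointer constant
}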

unused-variable warning different for auto variables

Using gcc (4.7.2 here) I get warnings about unused auto variables, but not about other variables:
// cvars.h
#ifndef CVARS_H_
#define CVARS_H_
const auto const_auto = "const_auto";
const char const_char_array[] = "const_char_array";
const char * const_char_star = "const_char_star";
const char use_me = 'u';
#endif // CVARS_H_
//---
//comp_unit.cpp
#include "cvars.h"
void somef()
{
    //const_auto // commented out - unused
    use_me; // not using any of the others either
}
// compile with $ g++ -std=c++11 -Wunused-variable -c comp_unit.cpp
// gcc outputs warning: ‘cvars::const_auto’ defined but not used [-Wunused-variable]
// but does not complain about the other variables
Is this an inconsistency in GCC?
1.1 If so, what should happen in all cases, warning or no warning?
1.2 If not, what is the reason for the difference in behavior?
Note: Concerning 1.1, I imagine no warning should be printed in this case (this is what clang does). Otherwise, any compilation unit that includes a constant-defining header but does not use all the constants within would produce lots of warnings.
These warnings are entirely up to the implementation, so there is no "should". But, yes, I agree: constants would ideally not generate these warnings even when declared using auto.
Since I can reproduce your observation in GCC 4.7 and GCC 4.8.0, but not in GCC 4.8.1 or 4.9, I'd say the guys over at GNU would agree too. In fact, I believe you're seeing bug 57183.
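If upgrading GCC is not possible, a workaround (my sketch, not part of the answer) is to mark the header constant with GCC's unused attribute, which should silence -Wunused-variable for that variable in every translation unit that includes the header:
// cvars.h (workaround sketch for older GCC)
#ifndef CVARS_H_
#define CVARS_H_
// The unused attribute tells GCC not to warn when a translation unit
// includes this header but never reads the constant.
const auto const_auto __attribute__((unused)) = "const_auto";
#endif // CVARS_H_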

%llx format specifier: invalid warning?

Edited to remove the first warning
The following code works as expected in g++ 4.4.0 under mingw32:
#include <cstdio>
int main()
{
    long long x = 0xdeadbeefc0defaceLL;
    printf("%llx\n", x);
}
But if I enable all warnings with -Wall, it says:
f.cpp: In function 'int main()':
f.cpp:5: warning: unknown conversion type character 'l' in format
f.cpp:5: warning: too many arguments for format
It's the same with %lld. Is this fixed in newer versions?
Edited again to add:
The warning doesn't go away if I specify -std=c++0x, even though (i) long long is a standard type, and (ii) %lld and %llx seem to be officially supported. For instance, from 21.5 Numeric conversions para 7:
Each function returns a string object holding the character representation of the value of its argument that would be generated by calling sprintf(buf, fmt, val) with a format specifier of "%d", "%u", "%ld", "%lu", "%lld", "%llu", "%f", "%f", or "%Lf", respectively, where buf designates an internal character buffer of sufficient size.
So this is a bug, surely?
long long x = 0xdeadbeefc0defaceLL; // note the LL at the end
And there is no ll length specifier for printf. The best you can get is:
printf ("%lx\n", x); // l is for long int
I've tested your sample with my g++; it compiles without warnings even without the -std=c++0x flag:
~$ g++ -Wall test.cpp
~$ g++ --version
g++ (Ubuntu 4.4.3-4ubuntu5) 4.4.3
So, yes, this is fixed in newer versions.
For the first warning, I can say that you must use 0xdeadbeefc0defaceLL instead of 0xdeadbeefc0deface. After that, the other warnings may go away as well.
I get the same warning compiling C using windows/mingw32.
warning: unknown conversion type character 'l' in format
So yes, probably a compiler/platform-specific bug.
It's a MinGW-specific issue, because it calls the native Windows runtime for certain things, including this. See this answer.
%I64d works for me. In the answer linked above there is a more portable albeit less readable solution as well.
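The portable approach is presumably along the lines of the <cinttypes> format macros (my sketch, assuming that is what the linked answer shows); the macro expands to whatever conversion specifier the target runtime expects for a 64-bit value:
#include <cinttypes>
#include <cstdint>
#include <cstdio>

int main()
{
    std::uint64_t x = 0xdeadbeefc0defaceULL;
    // PRIx64 expands to the correct 64-bit hex specifier for the
    // platform's runtime, so the format string stays portable.
    std::printf("%" PRIx64 "\n", x);
}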