"narrowing conversion" warning for hex literal with no-suffix? - c++

static int var[2] __attribute__ ((aligned (8))) =
{
0x0255cfa8,
0xfdfcddfc
};
Why am I getting a warning: narrowing conversion of '4261207548u' from 'unsigned int' to 'int' inside { } is ill-formed in C++11 [-Wnarrowing]?
Even though the numbers have no u or U suffix they seem to be taken as unsigned?

If int is 32 bits on your platform, then 0xfdfcddfc is an unsigned int. That's because you've used hexadecimal notation: a hex literal that doesn't fit in int takes the first type in the list int, unsigned int, long, unsigned long, ... that can represent it.
Your helpful compiler is warning you that the number is too big for an int.
Note that if you had written the decimal equivalent, it would have been a signed type (long or long long) and the compiler would have issued a subtly different warning.
Reference: https://en.cppreference.com/w/cpp/language/integer_literal#The_type_of_the_literal

Related

Clang-Tidy Narrowing Conversion from uint8_t to float

I'm getting a clang-tidy warning that reads narrowing conversion from 'int' to 'float' when I convert from a uint8_t to a float, which to my understanding is not a narrowing conversion since float can represent every integer that a uint8_t can.
Code example:
uint8_t testVar = 8;
float test2 = 2.0f * testVar;
clang-tidy flags the second line of that with the warning cppcoreguidelines-narrowing-conversions: narrowing conversion from 'int' to 'float'. In my IDE, the squiggle shows up under the testVar.
According to the reference, this warning should be flagged if we convert from an integer to a narrower floating-point (e.g. uint64_t to float), but to the best of my knowledge, float is not narrower than uint8_t.
Am I fundamentally misunderstanding these data types, or is something else going on here?
I'm on LLVM version 11.0.0 if that matters.

Are signed hexadecimal literals possible?

I have an array of bitmasks, the idea being to use them to clear a specified number of the least significant bits of an integer that is being used as a set of flags. It is defined as follows:
int clearLow[10]=
{
0xffffffff, 0xfffffffe, 0xfffffffc, 0xfffffff8, 0xfffffff0, 0xffffffe0, 0xffffffc0, 0xffffff80, 0xffffff00, 0xfffffe00
};
Having recently switched to gcc 4.8, I have found that this array starts throwing warnings:
warning: narrowing conversion of ‘4294967295u’ from ‘unsigned int’ to ‘int’ inside { } is ill-formed in C++11
and so on, one per element.
Clearly my hexadecimal literals are being taken as unsigned ints, and the fix is easy since, honestly, I do not care whether this array is int or unsigned int; it just needs to have the appropriate bits set in each cell. But my question is this:
Are there any ways to set literals in hexadecimal, for the purposes of simply setting bits, without the compiler assuming them to be unsigned?
You describe that you just want to use the values as operands to bit operations. Since that is the case, just always use unsigned types. That's the simple solution.
It looks like you just want an array of unsigned int to use for your bit masking:
const unsigned clearLow[] = {
0xffffffff, 0xfffffffe, 0xfffffffc, 0xfffffff8, 0xfffffff0, 0xffffffe0, 0xffffffc0, 0xffffff80, 0xffffff00, 0xfffffe00
};

Signed arithmetic

I'm running this piece of code, and I'm getting the output value as (converted to hex) 0xFFFFFF93 and 0xFFFFFF94.
#include <iostream>
using namespace std;
int main()
{
char x = 0x91;
char y = 0x02;
unsigned out;
out = x + y;
cout << out << endl;
out = x + y + 1;
cout << out << endl;
}
I'm confused about the arithmetic going on here. Is it because all the higher bits in out are taken to be 1 by default?
When I typecast out to an int, I get the answers as (in int) -109 and -108. Any idea why this is happening?
So there are a couple of things going on here. One, char can be either signed or unsigned; in your case it is signed. Two, assignment will convert the right-hand side to the type of the left-hand side. Using the right warning flags would help; clang with the -Wconversion flag warns:
warning: implicit conversion changes signedness: 'int' to 'unsigned int' [-Wsign-conversion]
out = x + y;
~ ~~^~~
In this case, to do this conversion it will in effect add or subtract the unsigned max + 1 to the number being converted.
We can see the same results using the limits header:
#include <limits>
//....
std::cout << std::hex << (std::numeric_limits<unsigned>::max() + 1) + (x+y) << std::endl ;
//...
and the result is:
ffffff93
For reference the draft C++ standard section 5.17 Assignment and compound assignment operators says:
If the left operand is not of class type, the expression is implicitly converted (Clause 4) to the cv-unqualified type of the left operand.
Clause 4 under 4.7 Integral conversions says:
If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n where n is the number of bits used to represent the unsigned type). [ Note: In a two’s complement representation, this conversion is conceptual and there is no change in the bit pattern (if there is no truncation). —end note ]
which is equivalent to adding or subtracting UMAX + 1.
Whether a plain char behaves as signed or unsigned is, for reasons of compatibility with C, not specified by the standard and is implementation-dependent (it is usually signed). You can always get the distinct signed or unsigned arithmetic behavior by explicitly writing the signed / unsigned keyword.
Try replacing your char definitions like this
unsigned char x = 0x91;
unsigned char y = 0x02;
to get the results you expect!
Negative numbers are represented internally in two's complement, so their most significant bit is 1. When you work in hex (and print in hex), those high-order 1 bits show up as the leading f digits in numbers like the ones you showed.
C++ doesn't specify whether char is signed or unsigned. Here they are signed, so when they are promoted to int's, the negative value is used which is then converted to unsigned. Use or cast to unsigned char.

Narrowing conversion from char to uint8_t

Compiling the following snippet using C++11:
#include <stdint.h>
int main() {
const uint8_t foo[] = {
'\xf2'
};
}
Will trigger a warning (at least on GCC 4.7), indicating that there's a narrowing conversion when converting '\xf2' to uint8_t.
Why is this? sizeof(char) is always 1, which should be the same as sizeof(uint8_t), shouldn't it?
Note that when using other char literals such as '\x02', there's no warning.
Although char doesn't necessarily have to be 8 bits long, that's not the problem here. You are converting from a signed char to an unsigned type (uint8_t); that's the reason for the warning.
This:
const int8_t foo[] = {
'\xf2'
};
will compile fine.
Looking at my system, the constant \xf2 is out of range for a signed char, so it is represented as -14. That value is then treated as the int -14 (bit pattern 0xfffffff2, i.e. 4294967282 as unsigned), which cannot be represented in a uint8_t. Narrowing it produces this warning:
warning: narrowing conversion of ‘'\37777777762'’
Using (unsigned char)'\xf2' removes the warning.

Conversion to std::array<unsigned char, 1ul>::value_type from int may alter its value

The -Wconversion GCC parameter produces the warning from the title when compiling this program:
#include <iostream>
#include <array>
#include <string>
int main ()
{
std::string test = "1";
std::array<unsigned char, 1> byteArray;
byteArray[0] = byteArray[0] | test[0];
return 0;
}
Here is how I compile it: g++ -Wall -Wextra -Wconversion -pedantic -std=c++0x test.cpp, and I'm using GCC 4.5.
Am I doing something illegal here? Can it cause problems in certain scenarios? Why would the | produce an int?
Am I doing something illegal here?
You're converting from a signed type to an unsigned type. If the signed value were negative, the unsigned result would be that value reduced modulo 2^n, a non-negative value and therefore not the same as the initial value.
Can it cause problems in certain scenarios?
Only if the value might be negative. That might be the case on somewhat exotic architectures where sizeof (char) == sizeof (int), or if your code were doing something more complicated than combining two values with |.
Why would the | produce an int?
Because all integer values are promoted before being used in arithmetic operations. If their type is smaller than int, then they are promoted to int. (There's somewhat more to promotion than that, but that's the rule that's relevant to this question).
Yes: a string is made up of char, which is signed here, while you have an array of unsigned char.
As for | producing an int, that's called integer promotion. Basically the compiler makes both operands ints, does the |, and the assignment then converts the result back to unsigned char.
It can run into a problem, though. By the C/C++ standards, integer promotion happens if the type promoted to can hold all values of the type promoted from, so both the unsigned char and the signed char are promoted to int. Promotion of a signed value sign-extends it. So say you have (char)-1, i.e. 0xFF or 11111111. That is sign-extended to the int 11111111111111111111111111111111 ((int)-1). Obviously |'ing with that gives a different result than |'ing with the 8-bit pattern 11111111 would intuitively suggest.
The result of unsigned char | char is int, per the integer promotion rules. When you assign that int back to unsigned char, the assignment can truncate its value; the compiler does not know how big the value in the int is.
To silence the compiler:
byteArray[0] = static_cast<unsigned char>(byteArray[0] | test[0]);