Left shifting with a uint64_t - gives a warning - C++

I am trying to do the following, but I am not sure where I am going wrong:
uint64_t x = (1 << 46);
std::cout << x;
I get this warning:
warning: left shift count >= width of type [-Wshift-count-overflow]
and the output is 0. I was expecting the decimal value of a binary number like this:
1 0000........00 (46 0s)
My question is: why am I getting this warning? Isn't uint64_t 64 bits? And why am I getting the output 0?

The problem is that you are not shifting a 64-bit constant: 1 is a constant of type int, which is less than 64 bits on your platform (probably 32 bits; it is implementation-defined).
You can fix this by wrapping the constant in the UINT64_C macro:
uint64_t x = (UINT64_C(1) << 46);

1 is a 32-bit constant. The compiler (correctly) computes the constant expression as 0 --- the 1 is shifted beyond the width of a 32-bit int. If the arguments to << were variables, an x86 CPU would return (1 << 14), i.e. 1 << (46 % 32).
Try "1ULL << 46".

Related

Unsigned integer wrapping, different behaviour

Here is a code in C:
#include <stdio.h>
int main()
{
printf("Int width = %lu\n", sizeof(unsigned int)); // Gives 32 bits on my computer
unsigned int n = 32;
unsigned int y = ((unsigned int)1 << n) - 1; // This is line 8
printf("%u\n", y);
unsigned int x = ((unsigned int)1 << 32) - 1; // This is line 11
printf("%u", x);
return 0;
}
It outputs:
main.c:11:39: warning: left shift count >= width of type [-Wshift-count-overflow]
Int width = 4
0
4 294 967 295 (= 2^32-1)
The warning for the line 11 is expected as explained in these links: wiki.sei.cmu.edu and https://stackoverflow.com/a/11270515
left-shift operations [...] if the value of the right operand is negative or is greater than or equal to the width of the promoted left operand, the behavior is undefined.
There is no warning for line 8, but I was expecting the same warning as for line 11. Furthermore, the results are entirely different! What am I missing?
This behaviour is similar for C++:
#include <iostream>
#include <cstdint>
using namespace std;
int main()
{
    cout << "Int width = " << sizeof(uint64_t) << "\n"; // 8 bytes, i.e. 64 bits, on my computer
    int n = 64;
    uint64_t y = ((uint64_t)1 << n) - 1; // This is line 8
    cout << "y = " << y;
    uint64_t x = ((uint64_t)1 << 64) - 1; // This is line 11
    cout << "\nx = " << x;
    return 0;
}
Which outputs:
main.cpp:11:34: warning: left shift count >= width of type [-Wshift-count-overflow]
Int width = 8
y = 0
x = 18 446 744 073 709 551 615 (= 2^64-1)
I used the OnlineGDB C compiler for the C code and the OnlineGDB C++ compiler for the C++ code.
Here are the links to the code: C code and C++ code.
For line 8, the compiler has to prove that in ((unsigned int)1 << n), n is 32 or more. That can be difficult since n is not const, so its value could be changed. The compiler would have to do more static analysis to give you the warning.
On the other hand, with ((unsigned int)1 << 32) the compiler knows that the shift amount is 32 or more and can easily warn. This requires almost no time to detect, since the type and the shift amount are both compile-time "literals".
If you switch to using const int n = 64; in your C++ code, then you will get an error at OnlineGDB. You can see that here. I tried that with the C version but it still doesn't warn.
There is no warning for line 8, but I was expecting the same warning as for line 11.
The C standard does not require a compiler to diagnose an excessive shift amount. (Generally, it does not require C implementations to diagnose errors other than those explicitly listed in “Constraints” clauses.)
The compiler you are using diagnoses the error with the integer constant expression (32), as this is easy. It does not diagnose the error with the variable n, as that involves more work and the compiler authors have not implemented it.
Furthermore, the results are entirely different!
With the integer constant expression, the compiler evaluates the shift during compilation, using whatever software is built into it. That apparently produces zero for (unsigned int) 1 << 32. With the variable, the compiler generates an instruction to perform the shift during program execution. That instruction likely uses only the low five bits of the right operand, so an operand of 32 (100000 in binary) yields a shift of zero bits, so shifting (unsigned int) 1 produces one.
Both behaviors are allowed by the C standard.
It's likely because n is a variable: the compiler doesn't verify it, and since it doesn't know its value it doesn't issue a warning. If you turn it into a constant, i.e. const int n = 64;, the warning is issued.
https://godbolt.org/z/4s5jz6
As for the results, undefined behavior is what it is; for the sake of curiosity you can analyze a particular case and try to figure out what the compiler did, but the results can't be reasoned about because there is no correct result.
Even the warnings are optional; gcc is nice enough to warn you when a constant or constant literal is used, but it didn't have to.
Undefined behaviour (UB) means undefined behaviour. Literally anything can happen. Compilers are not required to tell you of UB, but are permitted to.
If the value of the right operand is negative or is greater than or equal to the width of the promoted left operand, the behavior is undefined.
So max unsigned (i.e. -1), or zero, or formatting your SSD while installing viruses on your cloud-stored files and emailing your browser history to your contact list are all permitted ways for your compiler to handle a shift by 32 bits of a 32-bit integer.
As a Quality of Implementation issue, the last is rare.
The compiler is optimizing (1<<x)-1 as x 1 bits, not even doing a shift operation, in one case. Within the bounds of defined shift operations, this is equivalent, so this is a valid optimization. So when you pass 32, it writes 0xffffffff.
In the other case, it is probably setting the nth bit, reading only the low 5 bits of the shift count to see which bit to set. Also valid within the range of defined behaviour, and utterly different.
Welcome to UB.
I would expect further changes based on optimization level.
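As an aside, a minimal sketch (my own, not from the answers above) of building an n-bit mask for a 32-bit unsigned int without ever shifting by the full width, so the result is well defined for every n:
#include <cstdint>
#include <iostream>

// Mask with the low n bits set, defined for 0 <= n <= 32,
// avoiding the undefined shift-by-32.
uint32_t low_mask(unsigned n) {
    return n >= 32 ? UINT32_MAX : (uint32_t{1} << n) - 1;
}

int main() {
    std::cout << low_mask(32) << "\n"; // 4294967295
    std::cout << low_mask(8) << "\n";  // 255
    return 0;
}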

Reversing two's complement for an 18-bit int

I have an 18 bit integer that is in two's complement and I'd like to convert it to a signed number so I can better use it. On the platform I'm using, ints are 4 bytes (i.e. 32 bits). Based on this post:
Convert Raw 14 bit Two's Complement to Signed 16 bit Integer
I tried the following to convert the number:
using SomeType = uint64_t;
SomeType largeNum = 0x32020e6ed2006400;
int twosCompNum = (largeNum & 0x3FFFF);
int regularNum = (int) ((twosCompNum << 14) / 8192);
I shifted the number left 14 places to get the sign bit as the most significant bit and then divided by 8192 (in binary, it's 1 followed by 13 zeroes) to restore the magnitude (as mentioned in the post above). However, this doesn't seem to work for me. As an example, inputting 249344 gives me -25600, which prima facie doesn't seem correct. What am I doing wrong?
The almost-portable way (with assumption that negative integers are natively 2s-complement) is to simply inspect bit 17, and use that to conditionally mask in the sign bits:
constexpr SomeType sign_bits = ~SomeType{} << 18;
int regularNum = twosCompNum & 1<<17 ? twosCompNum | sign_bits : twosCompNum;
Note that this doesn't depend on the size of your int type.
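For concreteness, a minimal sketch of that approach applied to the 18-bit value 249344 (0x3CE00) discussed in this question; the final cast to a signed type assumes the usual two's-complement conversion (guaranteed since C++20, and what common compilers do anyway):
#include <cstdint>
#include <iostream>

int main() {
    using SomeType = uint64_t;
    constexpr SomeType sign_bits = ~SomeType{} << 18;   // ones in bits 18..63
    SomeType raw = 0x3CE00;                             // 249344, the raw 18-bit field
    // If bit 17 (the 18-bit sign bit) is set, fill the upper bits with ones.
    SomeType extended = (raw & (SomeType{1} << 17)) ? (raw | sign_bits) : raw;
    std::cout << static_cast<int64_t>(extended) << "\n"; // prints -12800
    return 0;
}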
The constant 8192 is wrong, it should be 16384 = (1<<14).
int regularNum = (twosCompNum << 14) / (1<<14);
With this, the answer is correct, -12800.
It is correct, because the input (unsigned) number is 249344 (0x3CE00). It has its highest bit set, so it is a negative number. We can calculate its signed value by subtracting "max unsigned value+1" from it: 0x3CE00-0x40000=-12800.
Note, that if you are on a platform, for which right signed shift does the right thing (like on x86), then you can avoid division:
int regularNum = (twosCompNum << 14) >> 14;
This version can be slightly faster (but has implementation-defined behavior), if the compiler doesn't notice that division can be exactly replaced by a shift (clang 7 notices, but gcc 8 doesn't).
Two problems: first, your test input is not an 18-bit two's complement number. With n bits, two's complement permits -(2 ^ (n - 1)) <= value < 2 ^ (n - 1). In the case of 18 bits, that's -131072 <= value <= 131071. You say you input 249344, which is outside of this range and would actually be interpreted as -12800.
The second problem is that your powers of two are off. In the answer you cite, the solution offered is of the form
mBitOutput = (mBitCast)(nBitInput << (m - n)) / (1 << (m - n));
For your particular problem, you desire
int output = (nBitInput << (32 - 18)) / (1 << (32 - 18));
// or equivalent
int output = (nBitInput << 14) / 16384;
Try this out.
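A minimal sketch of that corrected formula with the question's input 249344; the left shift is done on an unsigned type to sidestep signed-overflow issues, and the conversion back to int assumes two's-complement wrapping:
#include <iostream>

int main() {
    unsigned int raw = 0x3CE00;                    // 249344, the raw 18-bit field
    int shifted = static_cast<int>(raw << 14);     // 18-bit sign bit now sits in bit 31
    int output = shifted / 16384;                  // divide by 1 << 14 to restore the magnitude
    std::cout << output << "\n";                   // prints -12800
    return 0;
}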

Somewhat unexpected behaviour from left shift <<

This is a 32-bit MFC application currently running on Windows 10. Compiled with Visual C++ 2013.
std::cout << "sizeof(long long) = " << sizeof(long long) << std::endl;
int rot{ 32 };
long long bits{ (1 << rot) };
std::cout << "bits with variable = " << bits << std::endl;
long long bits2 = (1 << 32);
std::cout << "bits2 with constant = " << bits2 << std::endl;
system("pause");
The size of long long is 8 bytes, sufficient to manage my 32 bits, I was thinking. Here is the output of the debug build:
sizeof(long long) = 8
bits with variable = 1
bits2 with constant = 0
Press any key to continue . . .
And here is the output of the release build:
sizeof(long long) = 8
bits with variable = 0
bits2 with constant = 0
Press any key to continue . . .
So apparently my single bit is left-shifted into oblivion even with a 64-bit data type. But I'm really puzzled as to why the debug build produces different output when I shift by a variable compared to a constant.
You need a long long type for 64 bits.
The expression 1 << 32 will be evaluated with int types for the operands, irrespective of the type of the variable to which this result is assigned.
You will have more luck with 1LL << 32, and 1LL << rot. That causes the expression to be evaluated using long long types.
Currently the behaviour of your program is undefined as you are overshifting a type when you write 1 << 32. Note also that 1 << 32 is a compile time evaluable constant expression whereas 1 << rot isn't. That probably accounts for the observed difference between using a variable and a constant.
The expression 1 << rot, when rot is an int, will give you an int result. It doesn't matter if you then place it into a long long since the damage has already been done(a).
Use 1LL << rot instead.
(a) And, by damage, I mean undefined behaviour, as per C11 6.5.7 Bitwise shift operators:
The integer promotions are performed on each of the operands. The type of the result is that of the promoted left operand. If the value of the right operand is negative or is greater than or equal to the width of the promoted left operand, the behavior is undefined.
As to "why the debug build produces different outputs if I shift with a variable as a parameter compared to a constant", that's one of the vagaries of undefined behaviour - literally anything that's possible is allowed to happen. It's perfectly within its rights to play derisive_laughter.ogg and format your hard disk :-)

Weird behavior of right shift operator (1 >> 32)

I recently faced a strange behavior using the right-shift operator.
The following program:
#include <cstdio>
#include <cstdlib>
#include <iostream>
#include <stdint.h>
int foo(int a, int b)
{
    return a >> b;
}
int bar(uint64_t a, int b)
{
    return a >> b;
}
int main(int argc, char** argv)
{
    std::cout << "foo(1, 32): " << foo(1, 32) << std::endl;
    std::cout << "bar(1, 32): " << bar(1, 32) << std::endl;
    std::cout << "1 >> 32: " << (1 >> 32) << std::endl; // warning here
    std::cout << "(int)1 >> (int)32: " << ((int)1 >> (int)32) << std::endl; // warning here
    return EXIT_SUCCESS;
}
Outputs:
foo(1, 32): 1 // Should be 0 (but I guess I'm missing something)
bar(1, 32): 0
1 >> 32: 0
(int)1 >> (int)32: 0
What happens with the foo() function? I understand that the only difference between what it does and the last two lines is that the last two lines are evaluated at compile time. And why does it "work" if I use a 64-bit integer?
Any lights regarding this will be greatly appreciated !
Surely related, here is what g++ gives:
> g++ -o test test.cpp
test.cpp: In function 'int main(int, char**)':
test.cpp:20:36: warning: right shift count >= width of type
test.cpp:21:56: warning: right shift count >= width of type
It's likely the CPU is actually computing
a >> (b % 32)
in foo; meanwhile, the 1 >> 32 is a constant expression, so the compiler will fold the constant at compile-time, which somehow gives 0.
Since the standard (C++98 §5.8/1) states that
The behavior is undefined if the right operand is negative, or greater than or equal to the length in bits of the promoted left operand.
there is no contradiction having foo(1,32) and 1>>32 giving different results.
On the other hand, in bar you provided a 64-bit unsigned value, as 64 > 32 it is guaranteed the result must be 1 / 2^32 = 0. Nevertheless, if you write
bar(1, 64);
you may still get 1.
Edit: The logical right shift (SHR) behaves like a >> (b % 32/64) on x86/x86-64 (Intel #253667, Page 4-404):
The destination operand can be a register or a memory location. The count operand can be an immediate value or the CL register. The count is masked to 5 bits (or 6 bits if in 64-bit mode and REX.W is used). The count range is limited to 0 to 31 (or 63 if 64-bit mode and REX.W is used). A special opcode encoding is provided for a count of 1.
However, on ARM (armv6&7, at least), the logical right-shift (LSR) is implemented as (ARMISA Page A2-6)
(bits(N), bit) LSR_C(bits(N) x, integer shift)
assert shift > 0;
extended_x = ZeroExtend(x, shift+N);
result = extended_x<shift+N-1:shift>;
carry_out = extended_x<shift-1>;
return (result, carry_out);
where (ARMISA Page AppxB-13)
ZeroExtend(x,i) = Replicate('0', i-Len(x)) : x
This guarantees a right shift of ≥32 will produce zero. For example, when this code is run on the iPhone, foo(1,32) will give 0.
This shows that shifting a 32-bit integer by ≥32 is non-portable.
OK. So it's in 5.8/1:
The operands shall be of integral or enumeration type and integral promotions are performed. The type of the result is that of the promoted left operand. The behavior is undefined if the right operand is negative, or greater than or equal to the length in bits of the promoted left operand.
So you have an Undefined Behaviour(tm).
What happens in foo is that the shift width is greater than or equal to the size of the data being shifted. In the C99 standard that results in undefined behaviour. It's probably the same in whatever C++ standard MS VC++ is built to.
The reason for this is to allow compiler designers to take advantage of any CPU hardware support for shifts. For example, the i386 architecture has an instruction to shift a 32 bit word by a number of bits, but the number of bits is defined in a field in the instruction that is 5 bits wide. Most likely, your compiler is generating the instruction by taking your bit shift amount and masking it with 0x1F to get the bit shift in the instruction. This means that shifting by 32 is the same as shifting by 0.
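If you need a shift whose result does not depend on what the hardware does with the count, one option (a sketch of mine, not from the answers) is to guard the count explicitly:
#include <cstdint>
#include <iostream>

// Right shift that is well defined for any count: once every bit
// has been shifted out, the result is simply 0.
uint32_t shr_safe(uint32_t x, unsigned count) {
    return count >= 32 ? 0u : x >> count;
}

int main() {
    std::cout << shr_safe(1, 32) << "\n"; // 0 on every platform
    std::cout << shr_safe(8, 3) << "\n";  // 1
    return 0;
}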
I compiled it on 32-bit Windows using the VC9 compiler. It gave me the following warning. Since sizeof(int) is 4 bytes on my system, the compiler is indicating that right shifting by 32 bits results in undefined behavior. Since it is undefined, you cannot predict the result. Just to check, I right shifted by 31 bits and all the warnings disappeared and the result was as expected (i.e. 0).
I suppose the reason is that the int type holds 32 bits (on most systems), but one bit is used for the sign as it is a signed type, so only 31 bits are used for the actual value.
The warning says it all!
But in fairness I got bitten by the same error once.
int a = 1;
cout << ( a >> 32);
is completely undefined. In fact, in my experience the compiler generally gives different results than the runtime. What I mean by this is that if the compiler can evaluate the shift expression at compile time, it may give you a different result than the same expression evaluated at runtime.
foo(1,32) performs a rotate-shift, so bits that should disappear on the right reappear on the left. If you do it 32 times, the single bit set to 1 is back to its original position.
bar(1,32) is the same, but the bit is in the 64-32+1=33rd bit, which is above the representable numbers for a 32-bit int. Only the 32 lowest bits are taken, and they are all 0's.
1 >> 32 is performed by the compiler. No idea why gcc uses a non-rotating shift here and not in the generated code.
Same thing for ((int)1 >> (int)32)

C++ what does >> do

What does >> do in this situation?
int n = 500;
unsigned int max = n>>4;
cout << max;
It prints out 31.
What did it do to 500 to get it to 31?
Bit shifted!
Original binary of 500:
111110100
Shifted 4
000011111 which is 31!
Original: 111110100
1st Shift:011111010
2nd Shift:001111101
3rd Shift:000111110
4th Shift:000011111 which equals 31.
This is equivalent to doing integer division by 16.
500/16 = 31
500/2^4 = 31
Some facts pulled from here: http://www.cs.umd.edu/class/spring2003/cmsc311/Notes/BitOp/bitshift.html (because explaining this from my head results in unproductive rambling... these folks state it much more cleanly than I could)
Shifting left using << causes 0's to be shifted in from the least significant end (the right side), and causes bits to fall off from the most significant end (the left side).
Shifting right using >> causes 0's to be shifted in from the most significant end (the left side) if the number is unsigned, and causes bits to fall off from the least significant end (the right side).
Bitshifting doesn't change the value of the variable being shifted. Instead, a temporary value is created with the bitshifted result.
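A tiny sketch that just checks the equivalence described above (the variable names are mine):
#include <cassert>
#include <iostream>

int main() {
    int n = 500;
    unsigned int max = n >> 4;   // drop the low 4 bits of 111110100
    assert(max == 31);
    assert(500 / 16 == 31);      // same as integer division by 2^4
    std::cout << max << "\n";    // prints 31
    return 0;
}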
500 got bit shifted to the right 4 times.
x >> y mathematically means x / 2^y (for non-negative x).
Hence 500 / 2^4, which is equal to 500 / 16. With integer division the result is 31.
It divided 500 by 16 using integer division.
>> is a right-shift operator, which shifted the bits of the binary representation of n to the right 4 times. This is equivalent to dividing n by 2 4 times, i. e. dividing it by 2^4=16. This is integer division, so the decimal part got truncated.
It shifts the bits of 500 to the right by 4 bit positions, tossing out the rightmost bits as it does so.
500 = 111110100 (binary)
111110100 >> 4 = 11111 = 31
111110100 is 500 in binary. Move the bits to the right and you are left with 11111 which is 31 in binary.
500 in binary is [1 1111 0100]
(4 + 16 + 32 + 64 + 128 + 256)
Shift that to the right 4 times and you lose the lowest 4 bits, resulting in:
[1 1111]
which is 1 + 2 + 4 + 8 + 16 = 31
You can also examine it in Hex:
500(decimal) is 0x1F4(hex).
Then shift to the right 4 bits, or one nibble:
0x1F == 31(dec).
The >> and << operators are shifting operators.
http://www-numi.fnal.gov/offline_software/srt_public_context/WebDocs/Companion/cxx_crib/shift.html
Of course they may be overloaded just to confuse you a little more!
C++ has nice classes to animate what is going on at the bit level
#include <bitset>
#include <iostream>
int main() {
    std::bitset<16> s(500);
    for (int i = 0; i < 4; i++) {
        std::cout << s << std::endl;
        s >>= 1;
    }
    std::cout << s
              << " (dec " << s.to_ulong() << ")"
              << std::endl;
}