This question already has answers here:
Why doesn't left bit-shift, "<<", for 32-bit integers work as expected when used more than 32 times?
Why is the result of
uint32_t s = 64;
uint64_t val = 1ull << s;
and
uint64_t s = 64;
uint64_t val = 1ull << s;
1?
But
uint64_t val = 1ull << 0x40;
gets optimized to 0?
I really don't understand why it equals 1. It does so regardless of whether I use the VC++ or the g++ compiler.
And how can I ensure that 1ull << s equals 0 when s equals 64, which is, in my opinion, the correct result? My program also needs that result.
This is because on x64, the instruction SHL (when operating on a 64-bit source/destination operand) only uses the bottom 6 bits of the shift amount. In effect, you are shifting by 0 bits.
From the "Intel 64 and IA-32 Architecture Software Developer's Manual" (can be downloaded from Intel in PDF form, which is hard to link into,) under the entry for "SAL/SAR/SHL/SHR - Shift" instructions:
The count is masked to 5 bits (or 6 bits if in 64-bit mode and REX.W is used). The count range is limited to 0 to 31 (or 63 if 64-bit mode and REX.W is used).
As commented below, it is also undefined behavior in C++ to shift an integer by a count greater than or equal to its width in bits. (Thanks to @sgarizvi for the reference.) The C++ standard, under Section 8.5.7 (Shift Operators), states that:
The behavior is undefined if the right operand is negative, or greater than or equal to the length in bits of the promoted left operand...
That's why the compiler is producing code that gives different results under different conditions (constant or variable shift count, optimized or not, etc.)
As for how to "fix" it, I have no clever tricks. You can do something like this:
#include <type_traits>

template <typename IntT>
IntT ShiftLeft (IntT x, unsigned count) {
    // Optional check; depends on how much you want to embrace C++!
    static_assert(std::is_integral_v<IntT>, "This shift only works for integral types.");
    if (count < sizeof(IntT) * 8)
        return x << count;
    else
        return 0;
}
This code works for signed and unsigned integer types (though note that left-shifting a negative signed value is itself undefined behavior before C++20).
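A minimal usage sketch, assuming the ShiftLeft template above is in scope:

#include <cstdint>
#include <iostream>

int main () {
    uint64_t s = 64;
    std::cout << ShiftLeft(1ull, s) << '\n'; // prints 0, unlike the raw 1ull << s
    std::cout << ShiftLeft(1ull, 3) << '\n'; // prints 8, as usual
}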
Related
I have an 18-bit integer that is in two's complement and I'd like to convert it to a signed number so I can better use it. On the platform I'm using, ints are 4 bytes (i.e. 32 bits). Based on this post:
Convert Raw 14 bit Two's Complement to Signed 16 bit Integer
I tried the following to convert the number:
using SomeType = uint64_t;
SomeType largeNum = 0x32020e6ed2006400;
int twosCompNum = (largeNum & 0x3FFFF);
int regularNum = (int) ((twosCompNum << 14) / 8192);
I shifted the number left 14 places to get the sign bit as the most significant bit and then divided by 8192 (in binary, it's 1 followed by 13 zeroes) to restore the magnitude (as mentioned in the post above). However, this doesn't seem to work for me. As an example, inputting 249344 gives me -25600, which prima facie doesn't seem correct. What am I doing wrong?
The almost-portable way (assuming that negative integers are natively two's complement) is to simply inspect bit 17, and use that to conditionally mask in the sign bits:
constexpr SomeType sign_bits = ~SomeType{} << 18;
int regularNum = (twosCompNum & (1 << 17)) ? (twosCompNum | sign_bits) : twosCompNum;
Note that this doesn't depend on the size of your int type.
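A self-contained sketch of this approach, using the value 0x3CE00 (249344) from the question. The final conversion of the out-of-range unsigned value to int is itself implementation-defined before C++20, but behaves as expected on two's-complement platforms:

#include <cstdint>
#include <iostream>

using SomeType = uint64_t;

int main() {
    constexpr SomeType sign_bits = ~SomeType{} << 18;    // 0xFFFFFFFFFFFC0000
    SomeType twosCompNum = 0x3CE00;                      // 18-bit pattern, bit 17 set
    int regularNum = (twosCompNum & (1 << 17))
                         ? (int)(twosCompNum | sign_bits) // negative: OR in the sign bits
                         : (int)twosCompNum;              // positive: use as-is
    std::cout << regularNum << '\n';                      // prints -12800
}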
The constant 8192 is wrong; it should be 16384 = (1 << 14).
int regularNum = (twosCompNum << 14) / (1<<14);
With this, the answer is correct, -12800.
It is correct, because the input (unsigned) number is 249344 (0x3CE00). It has its highest bit set, so it is a negative number. We can calculate its signed value by subtracting "max unsigned value + 1" from it: 0x3CE00 - 0x40000 = -12800.
Note that if you are on a platform where signed right shift does the right thing (like x86), you can avoid the division:
int regularNum = (twosCompNum << 14) >> 14;
This version can be slightly faster (but has implementation-defined behavior) if the compiler doesn't notice that the division can be exactly replaced by a shift (clang 7 notices, but gcc 8 doesn't).
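A quick sketch of that variant. The left shift into the sign bit technically overflows a signed int, and the arithmetic right shift is implementation-defined, but both behave as intended with common x86 compilers:

#include <iostream>

int main() {
    int twosCompNum = 0x3CE00;                  // 249344, the 18-bit pattern
    int regularNum = (twosCompNum << 14) >> 14; // shift up to the sign bit, then back down
    std::cout << regularNum << '\n';            // prints -12800 on such platforms
}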
Two problems: first, your test input is not a valid 18-bit two's complement number. With n bits, two's complement permits -(2 ^ (n - 1)) <= value <= 2 ^ (n - 1) - 1. In the case of 18 bits, that's -131072 <= value <= 131071. You say you input 249344, which is outside this range and would actually be interpreted as -12800.
The second problem is that your powers of two are off. In the answer you cite, the solution offered is of the form
mBitOutput = (mBitCast)(nBitInput << (m - n)) / (1 << (m - n));
For your particular problem, you desire
int output = (nBitInput << (32 - 18)) / (1 << (32 - 18));
// or equivalent
int output = (nBitInput << 14) / 16384;
Try this out.
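A quick check of the second form. Note that the left shift overflows a signed int, which is technically undefined behavior, but this is the construction the cited answer uses:

#include <iostream>

int main() {
    int nBitInput = 0x3CE00;   // 249344 as an 18-bit pattern
    int output = (nBitInput << (32 - 18)) / (1 << (32 - 18));
    std::cout << output << '\n';   // prints -12800 on typical compilers
}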
I am developing C++ libraries for the Arduino Mega 2560 and I have come across an interesting bug.
uint8_t resolution = 15;
uint32_t numDiscreteLevels = (1 << resolution); //yields a value of 0xFFFF8000
uint32_t numDiscreteLevels = ((uint32_t)1 << resolution); //yields 0x8000 (correct value)
It seems that in the first line, sign bits are padded onto the value before it is assigned to the variable. According to the promotion rules, I believe the 1 should be cast to an unsigned integer. But even then, I thought sign padding only occurs when you shift right.
On the AVR architecture, an int is 16 bits -- not 32! This means that all numbers, including integer constants, are treated as int16_t unless otherwise specified.
This means that 1 << 15 is (int16_t) 0x8000, not (int32_t) 0x00008000 as it would be on a 32-bit platform. Since this is a signed value and it has its high bit set, it's negative (specifically, -32768), and sign-extending it to a uint32_t gives 0xffff8000.
You can provide the constant as an unsigned value directly to get the expected behavior:
uint8_t resolution = 15;
uint32_t numDiscreteLevels = 1u << resolution;
1u << 15 is 0x8000u, whereas 1 << 15 as a 16-bit signed value wraps into the sign bit, giving -32768.
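The same mechanism can be reproduced on a 32-bit platform by forcing the intermediate through a 16-bit type. A sketch, where int16_t stands in for the AVR's 16-bit int (the narrowing conversion is implementation-defined before C++20):

#include <cstdint>
#include <cstdio>

int main() {
    uint8_t resolution = 15;
    int16_t avrInt = (int16_t)(1 << resolution);   // wraps to -32768, like AVR's 16-bit int
    uint32_t bad  = (uint32_t)avrInt;              // sign-extended: 0xffff8000
    uint32_t good = (uint32_t)1 << resolution;     // 0x00008000, the intended value
    printf("%08x %08x\n", (unsigned)bad, (unsigned)good);   // prints: ffff8000 00008000
}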
This question already has answers here:
What's the best way to toggle the MSB?
If, for example, I have the number 20:
0001 0100
I want to set the highest valued 1 bit, the left-most, to 0.
So
0001 0100
will become
0000 0100
I was wondering which is the most efficient way to achieve this.
Preferably in C++.
I tried subtracting the largest power of two from the original number, like this:
unsigned long long int originalNumber;
unsigned long long int x = originalNumber;
// smear the highest set bit into every lower position
x |= x >> 1;
x |= x >> 2;
x |= x >> 4;
x |= x >> 8;
x |= x >> 16;
x |= x >> 32; // needed for 64-bit values
// x >> 1 now masks everything below the highest set bit
x >>= 1;
originalNumber &= x;
but I need something more efficient.
The tricky part is finding the most significant bit, or counting the number of leading zeroes. Everything else can be done more or less trivially with left-shifting 1 (by one less), subtracting 1 followed by negation (building an inverse mask), and the & operator.
The well-known bit hacks site has several implementations for the problem of finding the most significant bit, but it is also worth looking into compiler intrinsics, as all mainstream compilers have an intrinsic for this purpose, which they implement as efficiently as the target architecture allows (I tested this a few years ago using GCC on x86; it came out as a single instruction). Which is fastest is impossible to tell without profiling on your target architecture (fewer lines of code, or fewer assembly instructions, are not always faster!), but it is a fair assumption that compilers implement these intrinsics no worse than you could, and likely faster.
Using an intrinsic with a somewhat intelligible name may also turn out easier to comprehend than some bit hack when you look at it 5 years from now.
Unluckily, although this is not an entirely uncommon need, there is no such standardized function in the C or C++ libraries, at least none that I'm aware of. (C++20 has since added std::countl_zero in the <bit> header.)
For GCC, you're looking for __builtin_clz; Visual Studio calls it _BitScanReverse, and Intel's compiler calls it _bit_scan_reverse.
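For instance, a sketch of the whole operation built on the GCC/Clang intrinsic (an _BitScanReverse version would look similar). Note that __builtin_clzll is undefined for an input of 0, so that case is handled first:

#include <cstdint>

uint64_t clearHighestBit(uint64_t x) {
    if (x == 0) return 0;                   // __builtin_clzll(0) is undefined
    unsigned msb = 63 - __builtin_clzll(x); // index of the most significant set bit
    return x & ~(uint64_t{1} << msb);       // build the inverse mask and clear the bit
}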
Alternatively to counting leading zeroes, you may look into what the same Bit Twiddling site has under "Round up to the next power of two", which you would only need to follow up with a right shift by 1 and an AND with the complemented mask. Note that the implementation given on the site is for 32-bit integers; for 64-bit wide values you have to add one more shift step (x |= x >> 32), as in the sketch below.
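A sketch of that route; the smearing steps are the same ones as in the question, plus the extra 64-bit step, and after smearing, x >> 1 is a mask of every bit below the MSB:

#include <cstdint>

uint64_t clearHighestBitSmear(uint64_t v) {
    uint64_t x = v;
    x |= x >> 1;          // propagate the highest set bit downwards...
    x |= x >> 2;
    x |= x >> 4;
    x |= x >> 8;
    x |= x >> 16;
    x |= x >> 32;         // ...including the extra step for 64-bit values
    return v & (x >> 1);  // keep only the bits below the highest set bit
}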
#include <limits.h>
#include <stdint.h>

uint32_t unsetHighestBit(uint32_t val) {
    // a signed loop counter: an unsigned one would wrap past 0 and never terminate
    for (int i = sizeof(uint32_t) * CHAR_BIT - 1; i >= 0; i--) {
        if (val & (UINT32_C(1) << i)) {
            val &= ~(UINT32_C(1) << i);
            break;
        }
    }
    return val;
}
Explanation
Here we take the size of the type uint32_t, which is 4 bytes. Each byte has 8 bits, so we iterate 32 times starting with i having values 31 to 0.
In each iteration we shift the value 1 by i to the left and then bitwise-and (&) it with our value. If this returns a value != 0, the bit at i is set. Once we find a bit that is set, we bitwise-and (&) our initial value with the bitwise negation (~) of the bit that is set.
For example if we have the number 44, its binary representation would be 0010 1100. The first set bit that we find is bit 5, resulting in the mask 0010 0000. The bitwise negation of this mask is 1101 1111. Now when bitwise and-ing & the initial value with this mask, we get the value 0000 1100.
In C++ with templates
This is an example of how this can be solved in C++ using a template:
#include <limits>

template<typename T> T unsetHighestBit(T val) {
    // unsigned char has CHAR_BIT digits (the signed version would give one less);
    // again, keep the loop counter signed to avoid wrap-around
    for (int i = sizeof(T) * std::numeric_limits<unsigned char>::digits - 1; i >= 0; i--) {
        if (val & (T{1} << i)) {
            val &= ~(T{1} << i);
            break;
        }
    }
    return val;
}
If you're constrained to 8 bits (as in your example), then just precalculate all possible values in an array (byte[256]) using any algorithm, or just type it in by hand.
Then you just look up the desired value:
x = lookup[originalNumber]
Can't be much faster than that. :-)
UPDATE: so I read the question wrong.
But if using 64-bit values, then break them apart into 8 bytes, maybe by casting to a byte[8] or overlaying a union or something more clever. After that, find the first byte which is not zero and proceed as in my answer above with that particular byte. Not as efficient, I'm afraid, but still at most 8 tests (4.5 on average) plus one lookup.
Of course, creating a byte[65536] lookup will double the speed.
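A sketch of the 256-entry table, built once at startup here instead of typed in by hand; lookup[b] is b with its highest set bit cleared:

#include <cstdint>

static uint8_t lookup[256];

void initLookup() {
    for (int b = 1; b < 256; ++b) {
        int msb = 7;
        while (!(b & (1 << msb))) --msb;        // find the highest set bit of b
        lookup[b] = (uint8_t)(b & ~(1 << msb)); // clear it; lookup[0] stays 0
    }
}
// usage, for an 8-bit originalNumber: x = lookup[originalNumber];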
The following code will turn off the highest set bit:
bool found = false;
unsigned bit;
int bitCounter = 31;
while (!found) {
    bit = x & (1u << bitCounter);   // 1u avoids signed-overflow UB at bit 31
    if (bit != 0) {
        x &= ~(1u << bitCounter);
        found = true;
    }
    else if (bitCounter == 0)
        found = true;
    else
        bitCounter--;
}
I know a method to set the rightmost non-zero bit to 0:
a & (a - 1)
It is from the book Hacker's Delight by Henry S. Warren, Jr.
You can reverse the bits, clear the rightmost set bit, and reverse back. But I do not know an efficient way to reverse the bits in your case.
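For illustration, the rightmost-bit trick itself is a one-liner (a quick sketch):

#include <cstdio>

int main() {
    unsigned a = 0x14;               // 0001 0100
    unsigned cleared = a & (a - 1);  // 0001 0000: the rightmost set bit is gone
    printf("%02x\n", cleared);       // prints: 10
}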
I have a question about the left shift operator:
int i = 1;
i <<= (sizeof (int) *8);
cout << i;
It prints 1.
i has been initialized to 1.
While shifting, the vacated low bits are filled with 0s, and since the 1 is shifted past the top of the integer, I was expecting the output to be 0.
How and why is it 1?
Let's say sizeof(int) is 4 on your platform. Then the expression becomes:
i = i << 32;
The standard says:
6.5.7-3
If the value of the right operand is negative or is greater than or equal to the width of the promoted left operand, the behavior is undefined.
As cnicutar said, your example exhibits undefined behaviour. That means the compiler is free to do whatever the vendor sees fit, including making demons fly out of your nose or just doing nothing to the value at hand.
What you can do to convince yourself that left-shifting by the full number of bits will produce 0 is this:
int i = 1;
i <<= (sizeof (int) *4);
i <<= (sizeof (int) *4);
cout << i;
Expanding on the previous answer...
On the x86 platform your code would get compiled down to something like this:
; 32-bit ints:
mov cl, 32
shl dword ptr i, cl
The CPU will shift the dword in the variable i by the value contained in the cl register, modulo 32. So, 32 modulo 32 yields 0. Hence, the shift doesn't really occur. And that's perfectly fine per the C standard. In fact, 6.5.7-3 reads the way it does because this CPU behavior was quite common back in the day and influenced the standard.
As already mentioned by others, according to C standard the behavior of the shift is undefined.
That said, the program prints 1. A low-level explanation of why it prints 1 is as follows:
When compiling without optimizations the compiler (GCC, clang) emits the SHL instruction:
...
mov $32,%ecx
shll %cl,0x1c(%esp)
...
The Intel documentation for SHL instruction says:
SAL/SAR/SHL/SHR—Shift
The count is masked to 5 bits (or 6 bits if in 64-bit mode and REX.W is used). The count range is limited to 0 to 31 (or 63 if 64-bit mode and REX.W is used).
Masking the shift count 32 (binary 00100000) to 5 bits yields 0 (binary 00000000). Therefore the shll %cl,0x1c(%esp) instruction isn't doing any shifting and leaves the value of i unchanged.
I recently faced strange behavior with the right-shift operator.
The following program:
#include <cstdio>
#include <cstdlib>
#include <iostream>
#include <stdint.h>
int foo(int a, int b)
{
return a >> b;
}
int bar(uint64_t a, int b)
{
return a >> b;
}
int main(int argc, char** argv)
{
std::cout << "foo(1, 32): " << foo(1, 32) << std::endl;
std::cout << "bar(1, 32): " << bar(1, 32) << std::endl;
std::cout << "1 >> 32: " << (1 >> 32) << std::endl; //warning here
std::cout << "(int)1 >> (int)32: " << ((int)1 >> (int)32) << std::endl; //warning here
return EXIT_SUCCESS;
}
Outputs:
foo(1, 32): 1 // Should be 0 (but I guess I'm missing something)
bar(1, 32): 0
1 >> 32: 0
(int)1 >> (int)32: 0
What happens with the foo() function? I understand that the only difference between it and the last two lines is that the last two lines are evaluated at compile time. And why does it "work" if I use a 64-bit integer?
Any light on this will be greatly appreciated!
Surely related, here is what g++ gives:
> g++ -o test test.cpp
test.cpp: In function 'int main(int, char**)':
test.cpp:20:36: warning: right shift count >= width of type
test.cpp:21:56: warning: right shift count >= width of type
It's likely the CPU is actually computing
a >> (b % 32)
in foo; meanwhile, the 1 >> 32 is a constant expression, so the compiler will fold the constant at compile-time, which somehow gives 0.
Since the standard (C++98 §5.8/1) states that
The behavior is undefined if the right operand is negative, or greater than or equal to the length in bits of the promoted left operand.
there is no contradiction having foo(1,32) and 1>>32 giving different results.
On the other hand, in bar you provided a 64-bit unsigned value; as 64 > 32, the result is guaranteed to be 1 / 2^32 = 0. Nevertheless, if you write
bar(1, 64);
you may still get 1.
Edit: The logical right shift (SHR) behaves like a >> (b % 32) (or b % 64 in 64-bit mode) on x86/x86-64 (Intel #253667, Page 4-404):
The destination operand can be a register or a memory location. The count operand can be an immediate value or the CL register. The count is masked to 5 bits (or 6 bits if in 64-bit mode and REX.W is used). The count range is limited to 0 to 31 (or 63 if 64-bit mode and REX.W is used). A special opcode encoding is provided for a count of 1.
However, on ARM (armv6&7, at least), the logical right-shift (LSR) is implemented as (ARMISA Page A2-6)
(bits(N), bit) LSR_C(bits(N) x, integer shift)
assert shift > 0;
extended_x = ZeroExtend(x, shift+N);
result = extended_x<shift+N-1:shift>;
carry_out = extended_x<shift-1>;
return (result, carry_out);
where (ARMISA Page AppxB-13)
ZeroExtend(x,i) = Replicate('0', i-Len(x)) : x
This guarantees a right shift of ≥32 will produce zero. For example, when this code is run on the iPhone, foo(1,32) will give 0.
This shows that shifting a 32-bit integer by ≥32 is non-portable.
OK. So it's in 5.8.1:
The operands shall be of integral or enumeration type and integral promotions are performed. The type of the result is that of the promoted left operand. The behavior is undefined if the right operand is negative, or greater than or equal to the length in bits of the promoted left operand.
So you have an Undefined Behaviour(tm).
What happens in foo is that the shift width is greater than or equal to the size of the data being shifted. In the C99 standard that results in undefined behaviour. It's probably the same in whatever C++ standard MS VC++ is built to.
The reason for this is to allow compiler designers to take advantage of any CPU hardware support for shifts. For example, the i386 architecture has an instruction to shift a 32 bit word by a number of bits, but the number of bits is defined in a field in the instruction that is 5 bits wide. Most likely, your compiler is generating the instruction by taking your bit shift amount and masking it with 0x1F to get the bit shift in the instruction. This means that shifting by 32 is the same as shifting by 0.
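As a rough sketch of what the compiler effectively generates for a variable shift count on i386 (this reflects the hardware masking, not anything the C standard guarantees):

unsigned shl_i386(unsigned x, unsigned count) {
    // the SHL instruction keeps only the low 5 bits of the count,
    // so "shift by 32" becomes "shift by 32 & 0x1F == 0"
    return x << (count & 0x1F);
}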
I compiled it on 32-bit Windows using the VC9 compiler. It gave me the following warning. Since sizeof(int) is 4 bytes on my system, the compiler is indicating that right-shifting by 32 bits results in undefined behavior. Since it is undefined, you cannot predict the result. Just for checking, I right-shifted by 31 bits; all the warnings disappeared and the result was as expected (i.e. 0).
I suppose the reason is that the int type holds 32 bits (on most systems), but one bit is used for the sign, as it is a signed type, so only 31 bits are used for the actual value.
The warning says it all!
But in fairness I got bitten by the same error once.
int a = 1;
cout << ( a >> 32);
is completely undefined. In fact, the compiler generally gives different results than the runtime, in my experience. What I mean is that if the compiler can evaluate the shift expression at compile time, it may give you a different result than the same expression evaluated at runtime.
foo(1,32) compiles to a hardware shift whose count is masked, so it actually shifts by 32 % 32 = 0 bits; the single bit set to 1 stays exactly where it was.
bar(1,32) operates on a 64-bit value, for which a shift by 32 is well-defined: 1 >> 32 is 0, and truncating that to a 32-bit int is still 0.
1 >> 32 is evaluated by the compiler at compile time; since the behavior is undefined, the constant folder is free to produce 0 instead of mimicking the hardware's masked shift.
Same thing for ((int)1 >> (int)32).