Why does this code not write 0 as the last element, but 18446744073709551615?
(compiled with g++)
#include <iostream>
using namespace std;
int main(){
    unsigned long long x = (unsigned long long) (-1);
    for(int i=0; i <= 64; i++)
        cout << i << " " << (x >> i) << endl;
    cout << (x >> 64) << endl;
    return 0;
}
When you shift a value by more bits than the word size, the shift count is usually reduced modulo the word size. Shifting by 64 then means shifting by 0 bits, which is no shift at all. You shouldn't rely on this, though: it is not defined by the standard and can differ between architectures.
Shifting a value by a number of bits equal to or larger than its width is undefined behavior. You can only safely shift a 64-bit integer between 0 and 63 positions.
This warning from the compiler should be a hint:
"warning: right shift count >= width of type"
This results in undefined behavior:
http://sourcefrog.net/weblog/software/languages/C/bitshift.html
Well, you are shifting one too many times. You are shifting from 0 to 64 inclusive, which is a total of 65 shifts. You generally want:
for(int i=0; i < 64; i++)
....
You overflow the shift. As you may have noticed, GCC even warns you:
warning: right shift count >= width of type
How come? You include 64 as a valid shift count, which is undefined behavior.
Counting from 0 to 64 there are 65 numbers (0 included), with 0 being the first bit (much like array indices).
#include <iostream>
using namespace std;
int main(){
    unsigned long long x = (unsigned long long) (-1);
    for(int i=0; i < 64; i++)
        cout << i << " " << (x >> i) << endl;
    cout << (x >> 63) << endl;
    return 0;
}
This will produce the output you'd expect.
You can use:
static inline pack_t lshift_fix64(pack_t shiftee, short_idx_t shifter){
    return (shiftee << shifter) & (-(shifter < 64));
}
for such a trick. Here
(-(shifter < 64)) == 0xFFFFFFFFFFFFFFFF
if shifter < 64, and
(-(shifter < 64)) == 0x0
otherwise.
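A minimal usage sketch, assuming pack_t is a 64-bit unsigned type and short_idx_t an integer index (both names come from the answer above, not from any library). Note that, read strictly, the out-of-range shift inside the masked expression is itself undefined behavior, so a branch is the fully portable variant:

#include <cstdint>
#include <iostream>

typedef std::uint64_t pack_t; // assumed definition
typedef int short_idx_t;      // assumed definition

// Fully defined variant: guard the shift instead of masking its result.
static inline pack_t lshift_safe64(pack_t shiftee, short_idx_t shifter){
    return shifter < 64 ? (shiftee << shifter) : 0;
}

int main(){
    std::cout << lshift_safe64(1, 3) << std::endl;  // 8
    std::cout << lshift_safe64(1, 64) << std::endl; // 0, well-defined
}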
I get:
test.c:8: warning: right shift count >= width of type
so perhaps it's undefined behavior?
The bit pattern of -1 looks like 0xFFFFFFFFFFFFFFFF in hex, for 64-bit types. Thus if you print it as an unsigned variable you will see the largest value an unsigned 64-bit variable can hold, i.e. 18446744073709551615.
When bit shifting we don't care what the value means: the variable is unsigned, so x >> 1 is a logical shift that moves all bits one step to the right and fills with zero. (For signed types, right-shifting a negative value is implementation-defined, so there signedness does matter.)
Another trap for the unwary: I know this is an old thread, but I came here looking for help. I got caught out on a 64-bit machine using 1<<k when I meant 1L<<k; no help from the compiler in this case :(
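A minimal sketch of that trap (the names here are illustrative only): the literal 1 has type int, so the shift happens at int width no matter how wide the destination is.

#include <cstdint>
#include <iostream>

int main(){
    int k = 40;
    // 1 << k would be undefined behavior here: the literal 1 is an int,
    // typically 32 bits, so shifting it by 40 exceeds its width.
    std::uint64_t good = 1ULL << k; // 64-bit literal, well-defined shift
    std::cout << good << std::endl; // prints 1099511627776
}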
Related
I am trying to interleave (for calculating a Morton code) two signed long numbers, say x and y (32 bits), with values
case 1 :
x = 10; //1010
y = 10; //1010
result will be :
11001100
case 2:
x = -10;
y = 10;
The binary representations are
x = 1111111111111111111111111111111111111111111111111111111111110110
y = 1010
For interleaving, I am considering only the 32-bit representation, where I can interleave the 31st bit of x with the 31st bit of y, using the following code:
signed long long x_y;
for (int i = 31; i >= 0; i--)
{
    unsigned long long xbit = ((unsigned long) x) & (1 << i);
    x_y |= (xbit << i);
    unsigned long long ybit = ((unsigned long) y) & (1 << i);
    if (i != 0)
    {
        x_y |= (x_y << (i - 1));
    }
    else
    {
        (x_y = x_y << 1) |= ybit;
    }
}
The above code works fine if x is positive and y is negative, but case 2 fails. Please help me: what is going wrong?
Negative numbers use 64 bits, whereas positive numbers use 32 bits. Correct me if I am wrong.
I think the code below works according to your requirement.
A Morton code is 64 bits, and we are making a 64-bit number from two 32-bit numbers by interleaving.
Since the numbers are signed, we have to handle negative numbers like this:
if (x < 0) // value is represented as two's complement, hence uses all 64 bits
{
    value = x; // value is 32 bits, so use only the lower 32 bits
    cout << value;
    value &= ~(1 << 31); // clear the sign bit, as it does not contribute to the real value
}
Do the same for y.
The following code does the interleaving:
unsigned long long x_y_copy = 0; // a copy of your Morton code
// loop over each bit of the two 32-bit numbers, starting from the MSB
for (int i = 31; i >= 0; i--)
{
    // reset mort to 0, because shifting would otherwise lose data
    mort = 0;
    // take bit i from x
    int xbit = ((unsigned long)x) & (1 << i);
    mort = (mort |= xbit) << i+1; /* shifting */
    // copy the formed code into the copy, so the value is preserved for appending
    x_y_copy |= mort;
    mort = 0;
    // take bit i from 'y' as well
    int ybit = ((unsigned long)y) & (1 << i);
    mort = (mort |= ybit) << i;
    x_y_copy |= mort;
}
// This is important when 'y' is negative: the first snippet above cleared the sign bit of 'y', and when the bits of 'y' are moved into the Morton code that 0 lands in the high interleaved position, which has to be set back to 1 to preserve the sign.
if (y < 0)
{
    x_y_copy = (x_y_copy) | (4611686018427387904ULL); // 4611686018427387904 == 1ULL << 62
}
I hope this helps.:)
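For comparison, a minimal alternative sketch (not the method above, but the classic "expand then OR" Morton encoding): casting x and y to unsigned 32-bit values up front makes the sign bit just another data bit, so negative inputs need no special case.

#include <cstdint>
#include <iostream>

// Spread the 32 bits of v out to the even bit positions of a 64-bit value.
static std::uint64_t expand_bits(std::uint32_t v){
    std::uint64_t b = v;
    b = (b | (b << 16)) & 0x0000FFFF0000FFFFULL;
    b = (b | (b << 8))  & 0x00FF00FF00FF00FFULL;
    b = (b | (b << 4))  & 0x0F0F0F0F0F0F0F0FULL;
    b = (b | (b << 2))  & 0x3333333333333333ULL;
    b = (b | (b << 1))  & 0x5555555555555555ULL;
    return b;
}

// Interleave: bits of x land on odd positions, bits of y on even positions.
std::uint64_t morton_encode(std::int32_t x, std::int32_t y){
    return (expand_bits(static_cast<std::uint32_t>(x)) << 1)
         |  expand_bits(static_cast<std::uint32_t>(y));
}

int main(){
    std::cout << morton_encode(10, 10) << std::endl; // 204 == 11001100 in binary, matching case 1
}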
I am running the following program: (URL: http://ideone.com/aoJoI5)
#include <iostream>
#include <cmath>
using namespace std;
int main()
{
    long long int N = pow(2, 36);
    cout << N << endl;
    int count = 0;
    cout << "Positions where bits are set : " << endl;
    for(int j=0; j<sizeof(long long int)*8; ++j){
        if(N&(1<<j)){
            ++count;
            cout << j << endl;
        }
    }
    return 0;
}
This program gives me output as:
68719476736
Positions where bits are set :
31
63
Now, as I am using N=2^36, the 36th bit should be 1 and nothing else, but why does the program give me positions 31 and 63? Is anything wrong with my program?
One observation: if I use N=2^{exp} where exp >= 32, it always gives the set-bit positions as 31 and 63. Can anybody please explain why this happens?
If int is 32 bits long, 1<<j shifts too far once j reaches 32 and invokes undefined behavior.
Here is my guess of the cause:
When j becomes 31, the 1 bit comes to the sign bit.
Seeing the sign bit being 1, the value is sign-extended to calculate the bitwise AND with N, so bits from the 31st to the 63rd (0-origin) become 1.
The 36th bit (0-origin) in N is 1, so the result of bitwise AND will be nonzero.
The condition is evaluated as true and the number is printed.
When j is 63, if you use IA-32 CPU, the width to be shifted is masked to 5 bits, so it will be interpreted as 31 and the same thing will happen.
To avoid this undefined behavior, shift an unsigned long long value instead, as in 1ull<<j.
Note that using long long is not good, because shifting the 1 bit into the sign bit invokes undefined behavior.
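A minimal corrected sketch of the loop along those lines; 1ULL << 36 also replaces pow(2, 36), since pow returns a double:

#include <iostream>
using namespace std;

int main()
{
    long long int N = 1ULL << 36; // exact 64-bit constant, no pow/double detour
    cout << N << endl;
    cout << "Positions where bits are set : " << endl;
    for(int j = 0; j < (int)(sizeof(long long int)*8); ++j){
        if(N & (1ULL << j)) // 64-bit literal keeps every shift well-defined
            cout << j << endl;
    }
    return 0; // prints 68719476736, then position 36
}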
My objective is to write an algorithm that would be able to convert a long number into a binary number stored in a string.
Here is my current block of code:
#include <iostream>
#define LONG_SIZE 64; // size of a long type is 64 bits
using namespace std;
string b10_to_b2(long x)
{
    string binNum;
    if(x < 0) // determine if the number is negative; a two's complement number is negative if its first bit is one
    {
        binNum = "1";
    }
    else
    {
        binNum = "0";
    }
    int i = LONG_SIZE - 1;
    while(i > 0)
    {
        i --;
        if( (x & ( 1 << i) ) == ( 1 << i) )
        {
            binNum = binNum + "1";
        }
        else
        {
            binNum = binNum + "0";
        }
    }
    return binNum;
}
int main()
{
    cout << b10_to_b2(10) << endl;
}
The output of this program is:
00000000000000000000000000000101000000000000000000000000000001010
I want the output to be:
00000000000000000000000000000000000000000000000000000000000001010
Can anyone identify the problem? For whatever reason the function outputs 10 represented by 32 bits concatenated with another 10 represented by 32 bits.
Why would you assume long is 64 bits?
Try const size_t LONG_SIZE = sizeof(long)*8;
Check this: the program works correctly with my changes:
http://ideone.com/y3OeB3
Edit: and as @Mats Petersson pointed out, you can make it more robust by changing this line
if( (x & ( 1 << i) ) == ( 1 << i) )
to something like
if( (x & ( 1UL << i) ) ), where that UL is important; you can see his explanation in the comments.
Several suggestions:
Make sure you use a type that is guaranteed to be 64-bit, such as uint64_t, int64_t or long long.
Use the above mentioned 64-bit type for the constant in 1 << i (e.g. 1LL << i) to guarantee that the shift is calculated correctly. This is because shift is only defined by the standard when the number of bits shifted is strictly less than the number of bits in the type being shifted - and 1 has type int, which for most modern platforms (evidently including yours) is 32 bits.
Don't put a semicolon at the end of your #define LONG_SIZE - or better yet, use const int long_size = 64;, as this allows all manner of better behaviour: for example, in the debugger print long_size will give 64, whereas print LONG_SIZE, where LONG_SIZE is a macro, will yield an error.
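A minimal sketch pulling these suggestions together (fixed-width type, 64-bit shift literal, a constant instead of the macro):

#include <cstdint>
#include <iostream>
#include <string>

const int long_size = 64; // a constant instead of a macro, no stray semicolon

std::string b10_to_b2(std::int64_t x)
{
    std::string binNum;
    // Walk from the most significant bit down; 1ULL keeps each shift 64-bit.
    for(int i = long_size - 1; i >= 0; --i)
        binNum += (static_cast<std::uint64_t>(x) & (1ULL << i)) ? '1' : '0';
    return binNum;
}

int main()
{
    std::cout << b10_to_b2(10) << std::endl;
    // 0000000000000000000000000000000000000000000000000000000000001010
}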
I need to know whether an integer is exactly 32 bits long (8 hexadecimal characters). How could I achieve this in C++? Should I do this with the hexadecimal representation or with the unsigned int one?
My code is as follows:
mistream.open("myfile.txt");
if(mistream)
{
    for(int i=0; i<longArray; i++)
    {
        mistream >> hex >> datos[i];
    }
}
mistream.close();
Where mistream is of type ifstream, and datos is an unsigned int array
Thank you
std::numeric_limits<unsigned>::digits
is a static integer constant (or constexpr in C++11) giving the number of bits (since unsigned is stored in base 2, it gives binary digits).
You need to #include <limits> to get this, and you'll notice here that this gives the same value as Thomas' answer (while also being generalizable to other primitive types)
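A minimal usage sketch:

#include <iostream>
#include <limits>

int main()
{
    // Typically prints 32 on platforms where unsigned is 32 bits wide.
    std::cout << std::numeric_limits<unsigned>::digits << std::endl;
}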
For reference (you changed your question after I answered), every integer of a given type (eg, unsigned) in a given program is exactly the same size.
What you're now asking is not the size of the integer in bits, because that never varies, but whether the top bit is set. You can test this trivially with
bool isTopBitSet(uint32_t v) {
    return v & 0x80000000u;
}
(replace the unsigned hex literal with something like T{1} << (std::numeric_limits<T>::digits-1) if you want to generalise to unsigned T other than uint32_t).
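A sketch of that generalization, building the mask from the type's own digit count:

#include <cstdint>
#include <limits>

template <typename T>
bool isTopBitSet(T v) {
    // T{1} keeps the shifted literal the same width as the tested value.
    return (v & (T{1} << (std::numeric_limits<T>::digits - 1))) != 0;
}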
As already hinted in a comment by @chux, you can use a combination of the sizeof operator and the CHAR_BIT macro constant. The former tells you (at compile time) the size (in multiples of sizeof(char), i.e. bytes) of its argument type. The latter is the number of bits per byte (usually 8).
You can encapsulate this nicely into a function template.
#include <climits>  // CHAR_BIT
#include <cstddef>  // std::size_t
#include <iostream> // std::cout, std::endl

template <typename T>
constexpr std::size_t
bit_size() noexcept
{
    return sizeof(T) * CHAR_BIT;
}

int
main()
{
    std::cout << bit_size<int>() << std::endl;
    std::cout << bit_size<long>() << std::endl;
}
On my implementation, it outputs 32 and 64.
Since the function is constexpr, you can use it in static contexts, such as in static_assert(bit_size<int>() >= 32, "too small");.
Try this:
#include <climits>
unsigned int bits_per_byte = CHAR_BIT;
unsigned int bits_per_integer = CHAR_BIT * sizeof(int);
The identifier CHAR_BIT represents the number of bits in a char.
The sizeof returns the number of char locations occupied by the integer.
Multiplying them gives us the number of bits for an integer.
OP said "if it's exactly 32 bits long (8 hexadecimal characters)" and further "... interested in knowing if the value is between power(2, 31) and power(2, 32) - 1". So it is a little fuzzy about negative 32-bit numbers.
Certainly OP wants to know the result based on the value and not the type.
bool integer_is_32_bits_long(int x) {
    return
        // cope with 32-bit int
        ((INT_MAX == 0x7FFFFFFF) && (x < 0)) ||
        // cope with int wider than 32 bits
        ((INT_MAX > 0x7FFFFFFF) && (x >= 0x80000000) && (x <= 0xFFFFFFFF));
}
Of course if int is 16-bit, then the result is always false.
I want to know if it's exactly 32 bits long (8 hexadecimal characters)
I am interested in knowing if the value is between power(2, 31) and power(2, 32) - 1
So you want to know if the upper bit is set? Then you can simply test if the number is negative:
bool upperBitSet(int x)
{
    return x < 0;
}
For unsigned numbers, you can simply shift left and back right and then check if you lost data:
bool upperBitSet(unsigned x)
{
    return (x << 1 >> 1) != x;
}
The simplest way probably is to check if the 32nd bit is set:
bool isReally32bitsLong(uint32_t in) {
    return (in >> 31) != 0;
}
bool isExactly32BitsLong(uint64_t in) {
    return ((in >> 31) != 0) && ((in >> 32) == 0);
}
As part of my master's thesis, I get a number (e.g. 5 bits) with 2 significant bits (the 2nd and the 4th). This means, for example, x1x0x, where $x \in \{0,1\}$ (x could be 0 or 1) and 1, 0 are bits with fixed values.
My first task is to compute all the combinations of the above given number: $2^3 = 8$. This is called the S_1 group.
Then I need to compute the S_2 group, which is all the combinations of the two numbers x0x0x and x1x1x (that is, one mismatch in the significant bits); this should give us $\binom{2}{1} \cdot 2^3 = 2 \cdot 2^3 = 16$.
EDIT
Each number, x1x1x and x0x0x, differs from the original number, x1x0x, at one significant bit.
The last group, S_3, is of course two mismatches in the significant bits; this means all the numbers of the form x0x1x, 8 possibilities.
The computation could be done recursively or independently; that is not a problem.
I would be happy if someone could give a starting point for these computations, since what I have is not very efficient.
EDIT
Maybe I chose my words poorly in using "significant bits". What I meant to say is that at specific places in a five-bit number the bits are fixed. Those places I defined as specific bits.
EDIT
I saw already 2 answers and it seems I should have been clearer. What I am more interested in is finding the numbers x0x0x, x1x1x and x0x1x, keeping in mind that this is a simple example. In reality, the group S_1 (in this example x1x0x) would be built from numbers at least 12 bits long that could contain 11 significant bits. Then I would have 12 groups...
If something is still not clear please ask ;)
#include <vector>
#include <string>
#include <iostream>
#include <iomanip>
using namespace std;

int main()
{
    string format = "x1x0x";
    unsigned int sigBits = 0;
    unsigned int sigMask = 0;
    unsigned int numSigBits = 0;
    for (unsigned int i = 0; i < format.length(); ++i)
    {
        sigBits <<= 1;
        sigMask <<= 1;
        if (format[i] != 'x')
        {
            sigBits |= (format[i] - '0');
            sigMask |= 1;
            ++numSigBits;
        }
    }
    unsigned int numBits = format.length();
    unsigned int maxNum = (1 << numBits);
    vector<vector<unsigned int> > S;
    for (unsigned int i = 0; i <= numSigBits; i++)
        S.push_back(vector<unsigned int>());
    for (unsigned int i = 0; i < maxNum; ++i)
    {
        unsigned int changedBits = (i & sigMask) ^ sigBits;
        unsigned int distance = 0;
        for (unsigned int j = 0; j < numBits; j++)
        {
            if (changedBits & 0x01)
                ++distance;
            changedBits >>= 1;
        }
        S[distance].push_back(i);
    }
    for (unsigned int i = 0; i <= numSigBits; ++i)
    {
        cout << dec << "Set with distance " << i << endl;
        vector<unsigned int>::iterator iter = S[i].begin();
        while (iter != S[i].end())
        {
            cout << hex << showbase << *iter << endl;
            ++iter;
        }
        cout << endl;
    }
    return 0;
}
sigMask has a 1 where all your specific bits are. sigBits has a 1 wherever your specific bits are 1. changedBits has a 1 wherever the current value of i is different from sigBits. distance counts the number of bits that have changed. This is about as efficient as you can get without precomputing a lookup table for the distance calculation.
Of course, it doesn't actually matter what the fixed-bit values are, only that they're fixed. xyxyx, where y is fixed and x isn't, will always yield 8 potential values. The number of combinations of two such groups where y varies between them is always a simple multiplication: for each state the first may be in, the second may be in each of its states.
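A minimal sketch of that counting argument (using C++14 binary literals): enumerating every value whose fixed positions match yields exactly the 8 members of S_1.

#include <iostream>

int main()
{
    const unsigned sigMask = 0b01010; // where the fixed bits of x1x0x sit
    const unsigned sigBits = 0b01000; // their required values: bit 3 = 1, bit 1 = 0
    for (unsigned i = 0; i < 32; ++i)
        if ((i & sigMask) == sigBits) // fixed bits match, free bits vary
            std::cout << i << '\n';   // prints the 8 members of S_1
}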
Use bit logic:
// x1x1x: the fixed positions (01010 in binary, i.e. 0x0A) must all hold 1s
if ((test_byte & 0x0A) == 0x0A) // --> implies that the positions where the 1s are are 1
There's probably a number-theoretic solution, but, this is very simple.
This needs to be done with a fixed-width integer type. Some dynamic languages (Python, for example) will extend the bits out if they think it's a good idea.
This is not hard, but it is time consuming, and TDD would be particularly appropriate here.