I'm attempting to implement circular bit-shifting in C++. It kind of works, except after a certain point I get a bunch of zeroes.
for (int n = 0; n < 12; n++) {
    unsigned char x = 0x0f;
    x = ((x << n) | (x >> (8 - n))); // chars are 8 bits
    cout << hex << "0x" << (int)x << endl;
}
My output is:
0xf
0x1e
0x3c
0x78
0xf0
0xe1
0xc3
0x87
0xf
0x0
0x0
0x0
As you can see, I start getting 0x0's instead of the expected 0x1e, 0x3c, etc.
If I expand the for loop to iterate 60 times or so, the numbers come back correctly (after a bunch of zeroes).
I'm assuming that a char occupies more space than I expect, and the "gaps" of unused data are zeroes. My understanding is a bit limited, so any suggestions would be appreciated. Is there a way to toss out those zeroes?
Shifting by a negative amount is undefined behavior.
You loop n from 0 to 11, but you have 8 - n in your shifts, so that goes negative once n exceeds 8.
If you want to handle n > 8, you'll need to take the shift count modulo 8 (assuming you want an 8-bit circular shift).
for (int n=0; n < 12; n++) {
unsigned char x = 0x0f;
int shift = n % 8; // Wrap modulus
x = ((x << shift) | (x >> (8 - shift))); //chars are 8 bits
cout << hex << "0x" << (int)x << endl;
}
Shifting a byte left by more than 7 will always result in 0.
Also, shifting by a negative amount is not defined.
In order to fix this you have to limit the shift to the size of the type.
Basically:
unsigned char x = 0xf;
int shift = n & 7; // limit the shift to 0..7
x = ((x << shift) | (x >> (8 - shift)));
What would be the fastest way possible to reverse the nibbles (i.e. the hex digits) of a hexadecimal number in C++?
Here's an example of what I mean: 0x12345 -> 0x54321
Here's what I already have:
unsigned int rotation (unsigned int hex) {
    unsigned int result = 0;
    while (hex) {
        result = (result << 4) | (hex & 0xF);
        hex >>= 4;
    }
    return result;
}
This problem can be split into two parts:
1. Reverse the nibbles of the integer: reverse the bytes, then swap the two nibbles within each byte.
2. Shift the reversed result right by some amount to adjust for the "variable length". There are std::countl_zero(x) & -4 (the number of leading zero bits, rounded down to a multiple of 4) leading zero bits that correspond to leading zeroes in the hexadecimal representation; shifting right by that amount makes them not participate in the reversal.
For example, using some of the new functions from <bit>:
#include <stdint.h>
#include <bit>
uint32_t reverse_nibbles(uint32_t x) {
// reverse bytes
uint32_t r = std::byteswap(x);
// swap adjacent nibbles
r = ((r & 0x0F0F0F0F) << 4) | ((r >> 4) & 0x0F0F0F0F);
// adjust for variable-length of input
int len_of_zero_prefix = std::countl_zero(x) & -4;
return r >> len_of_zero_prefix;
}
That requires C++23 for std::byteswap, which may be a bit optimistic; you can substitute some other byteswap.
It is easily adaptable to uint64_t too.
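For instance, a drop-in substitute built from shifts and masks might look like this (a sketch for the 32-bit case; the name byteswap32 is illustrative):

#include <cstdint>

// Swap the four bytes of a 32-bit value, for compilers without C++23.
uint32_t byteswap32(uint32_t x) {
    return (x >> 24)
         | ((x >> 8) & 0x0000FF00u)
         | ((x << 8) & 0x00FF0000u)
         | (x << 24);
}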
I would do it without loops, based on the assumption that the input is 32 bits:
result = (hex & 0x0000000f) << 28
| (hex & 0x000000f0) << 20
| (hex & 0x00000f00) << 12
....
I don't know if it's faster, but I find it more readable.
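Written out in full, the pattern continues like this (a sketch assuming a 32-bit unsigned input; note this reverses all eight nibble positions and does not include the variable-length adjustment from the answer above):

unsigned int reverse_nibbles_unrolled(unsigned int hex) {
    return (hex & 0x0000000f) << 28
         | (hex & 0x000000f0) << 20
         | (hex & 0x00000f00) << 12
         | (hex & 0x0000f000) << 4
         | (hex & 0x000f0000) >> 4
         | (hex & 0x00f00000) >> 12
         | (hex & 0x0f000000) >> 20
         | (hex & 0xf0000000) >> 28;
}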
I always struggled with bitwise operators and their practical usage. I found an example online for something I am doing in C++ and was wondering what is going on there.
for (int i = 0; i < size / 2; ++i)
{
queue->push(temp[i] & 0xff);
queue->push((temp[i] >> 8) & 0xff);
}
I know roughly what the "and" and the "shift" operators do, but how does that affect the temp variable and the result? Can anyone help me understand that?
The first statement, temp[i] & 0xff, extracts the lower 8 bits because 0xff = 1111 1111.
The second statement, (temp[i] >> 8) & 0xff, first shifts the bits in temp[i] to the right by 8 positions, so the bits from position 8 to position 15 now occupy positions 0 to 7. When you then do the bitwise & with 0xFF, you keep exactly those new bits in positions 0 to 7.
For example -
Let's say, temp[i] = 0x01020304
then temp[i] & 0xff = 0x04
and (temp[i] >> 8) & 0xff = 0x03
The temp variable is not affected by either operation. The first operation isolates the low-order 8 bits of temp[i] and pushes them onto the queue; the second isolates the next 8 bits (bits 8 through 15) and pushes those onto the queue. These two operations are repeated size/2 times.
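As a sanity check, the two pushed bytes can be reassembled into the original 16-bit value. A minimal sketch (the value 0x0304 echoes the example above; only the low 16 bits of temp[i] are involved):

#include <cstdint>

int main() {
    uint16_t value = 0x0304;
    uint8_t lo = value & 0xff;        // 0x04, pushed first in the loop
    uint8_t hi = (value >> 8) & 0xff; // 0x03, pushed second
    uint16_t rebuilt = uint16_t((hi << 8) | lo); // 0x0304 again
    return rebuilt == value ? 0 : 1;
}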
My suggestion: Learn to love std::bitset. It comes in handy quite often when juggling with bits. Consider this code:
#include <bitset>
#include <iostream>
int main() {
    unsigned long x = 12345678;
    std::bitset<32> a{ x };
    std::bitset<32> b{ x & 0xff };
    std::bitset<32> c{ (x >> 8) & 0xff };
    std::bitset<32> d{ 0xff };
    std::cout << a << '\n' << b << '\n' << c << '\n' << d;
}
Its output is (comments added)
00000000101111000110000101001110 // x
00000000000000000000000001001110 // x & 0xff
00000000000000000000000001100001 // (x>>8) & 0xff
00000000000000000000000011111111 // 0xff
So what is happening here is...
0xff is an integer literal with the 8 lower bits set. The bitwise and & lets you mask bits, i.e. the result has exactly those bits set that are set in both operands. The shift operator >> shifts the bits right by the specified amount (8 in this case).
In total, your code first pushes the low 8 bits and then the next 8 bits of each element onto the queue.
My task seems simple: I need to calculate the minimum number of bytes required to represent an integer value (for example, if the integer is 5, I would like to return 1; if it is 300, I would like to return 2). I'm not referring to the data type int, which is, as pointed out in comments, always just sizeof(int); I'm referring to a mathematical integer. And I almost have a solution. Here is my code:
int i;
cin >> i;
int length = 0;
while (i != 0) {
    i >>= 8;
    length++;
}
The problem is that this doesn't work for negative numbers (I have not been able to determine why), or for some positive numbers where an extra byte is needed for the sign bit (e.g. 128 needs two bytes so that the most significant bit can be 0)... Are there any hints or advice on how to account for those cases?
Stored as a single byte,
Positive numbers are in the range 0x00 to 0x7F
Negative numbers are in the range 0x80 to 0xFF
As 2-bytes,
Positive numbers are in the range 0x0000 to 0x7FFF
Negative numbers are in the range 0x8000 to 0xFFFF
As 4-bytes,
Positive numbers are in the range 0x00000000 to 0x7FFFFFFF
Negative numbers are in the range 0x80000000 to 0xFFFFFFFF
You can use a function like the following to get the minimum size:
#include <cstdint>

int getmin(int64_t i)
{
    if (i == (int8_t)(i & 0xFF))
        return 1;
    if (i == (int16_t)(i & 0xFFFF))
        return 2;
    if (i == (int32_t)(i & 0xFFFFFFFF))
        return 4;
    return 8;
}
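Plugging in the question's examples (the results follow directly from the casts above):

getmin(5);    // 1
getmin(300);  // 2
getmin(-128); // 1, still fits in a single signed byte
getmin(-129); // 2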
Then, when decoding, a single byte 0x80 translates to -128, while 0x7F translates to 127, and a two-byte value like 0x0801 translates to a positive number (its sign bit is 0).
Note that this will be difficult and rather pointless, and should generally be avoided. It also doesn't accommodate storing numbers in three bytes; for that, you would have to make up your own format.
The range of signed numbers possible to store in x bytes in 2's complement is -2^(8x-1) to 2^(8x-1)-1. For example, 1 byte can store signed integers from -128 to 127. Your example would incorrectly calculate that only 1 byte is needed to represent 128 (if we are talking about signed numbers), since right-shifting by 8 yields zero, but that extra byte is required to show that this is not a negative number.
For handling negatives, turning the number into a positive one and subtracting one (because negative numbers can represent one extra value) lets you reuse the right-shift loop; note that ~i computes exactly -i - 1.
int i;
cin >> i;
unsigned bytes = 1;
unsigned max = 128;
if (i < 0) {
    i = ~i; // equivalent to -i - 1
}
while (max <= i) {
    i >>= 8;
    bytes++;
}
cout << bytes;
Another option is to use __builtin_clz() if you are using GCC. This returns the number of leading zero bits, which you can then use to determine the minimum number of bytes, for example:
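A sketch of that idea, assuming GCC/Clang and a 32-bit int (__builtin_clz(0) is undefined, so zero is handled first; the function name is illustrative):

#include <cstdint>

int min_bytes(int32_t i) {
    // Fold negatives onto non-negatives: ~i == -i - 1 needs exactly
    // as many value bits as i does.
    uint32_t u = (i < 0) ? ~uint32_t(i) : uint32_t(i);
    if (u == 0)
        return 1;                         // 0 and -1 fit in one byte
    int bits = 32 - __builtin_clz(u) + 1; // value bits plus one sign bit
    return (bits + 7) / 8;                // round up to whole bytes
}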
Can someone please explain what this code is doing? I have to interpret this code and use it as a checksum, but I am not sure it is absolutely correct. In particular, how do the overflows work, and what do *cp, const char* cp and sum & 0xFFFF mean? The basic idea is to take an input string from the user and convert it to binary form 16 bits at a time, then sum all those 16-bit words together (in binary) to get a 16-bit sum. If there is any overflow bit in the addition, add it to the LSB of the final sum, then take the ones' complement of the result.
How close is this code to doing the above?
unsigned int packet::calculateChecksum()
{
    unsigned int c = 0;
    int i;
    string j;
    int k;
    cout << "enter a message" << message;
    getline(cin, message); // Some string.

    std::vector<uint16_t> bitvec;
    const char* cp = message.c_str() + 1;
    while (*cp) {
        uint16_t bits = *(cp-1) >> 8 + *(cp);
        bitvec.push_back(bits);
        cp += 2;
    }

    uint32_t sum = 0;
    uint16_t overflow = 0;
    uint32_t finalsum = 0;

    // Compute the sum. Let overflows accumulate in upper 16 bits.
    for (auto j = bitvec.begin(); j != bitvec.end(); ++j)
        sum += *j;

    // Now fold the overflows into the lower 16 bits. Loop until no overflows.
    do {
        sum = (sum & 0xFFFF) + (sum >> 16);
    } while (sum > 0xFFFF);

    // Return the 1s complement sum in finalsum
    finalsum = 0xFFFF & sum;
    //cout << "the finalsum is" << c;
    c = finalsum;
    return c;
}
I see several issues in the code:
cp is a pointer into the zero-terminated char array holding the input message. The while (*cp) test has a problem: inside the loop body cp is incremented by 2, so it is easy to step right over the terminating \0 of the array (e.g. when the input message has 2 characters) and read out of bounds, which can result in a segmentation fault.
*(cp) and *(cp-1) fetch two neighbouring characters (bytes) of the input message. But why is the two-byte word formed by *(cp-1)>>8 + *(cp)? That expression doesn't even parse the way it looks: + binds tighter than >>, so it means *(cp-1) >> (8 + *(cp)). It would make more sense to form the 16-bit word as (*(cp-1) << 8) + *(cp), i.e. the preceding character sits in the higher byte and the following character in the lower byte of the 16-bit word.
To answer your question: sum & 0xFFFF just computes a number whose upper 16 bits are zero and whose lower 16 bits are the same as in sum; 0xFFFF is a bit mask.
The funny thing is, even though the above code does not do exactly what you stated as the requirement (note it also never takes the final ones' complement; finalsum = 0xFFFF & sum only masks the sum), as long as the sending and receiving parties use the same piece of incorrect code, your checksum creation and verification will pass, since both ends are consistent with each other :)
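For reference, a minimal sketch of the computation the question actually describes (assumptions: big-endian word assembly, an odd-length message padded with a zero byte; the function name is illustrative):

#include <cstdint>
#include <string>

uint16_t simpleChecksum(const std::string& message) {
    uint32_t sum = 0;
    for (std::size_t i = 0; i < message.size(); i += 2) {
        // Preceding character in the high byte, following one in the low byte.
        uint16_t word = uint16_t((unsigned char)message[i] << 8);
        if (i + 1 < message.size())
            word |= (unsigned char)message[i + 1];
        sum += word;
    }
    // Fold any overflow bits back into the low 16 bits.
    while (sum > 0xFFFF)
        sum = (sum & 0xFFFF) + (sum >> 16);
    return uint16_t(~sum); // ones' complement of the folded sum
}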
I'm having a little trouble grabbing n bits from a byte.
I have an unsigned integer. Let's say our number in hex is 0x2A, which is 42 in decimal. In binary it looks like this: 0010 1010. How would I grab the first 5 bits which are 00101 and the next 3 bits which are 010, and place them into separate integers?
I know how to extract a whole byte, which is simply

int x = (number >> (8*n)) & 0xff; // n being the byte index

which I saw in another post on Stack Overflow, but I wasn't sure how to get separate bits out of a byte. If anyone could help me out, that'd be great! Thanks!
Integers are represented inside a machine as a sequence of bits; fortunately for us humans, programming languages provide a mechanism to show us these numbers in decimal (or hexadecimal), but that does not alter their internal representation.
You should review the bitwise operators &, |, ^ and ~ as well as the shift operators << and >>, which will help you understand how to solve problems like this.
The last 3 bits of the integer are:
x & 0x7
The five bits above them (bits 3 through 7) are:

(x >> 3) & 0x1F // drop the last three bits, then keep the low five
"grabbing" parts of an integer type in C works like this:
You shift the bits you want to the lowest position.
You use & to mask the bits you want - ones means "copy this bit", zeros mean "ignore"
So, in you example. Let's say we have a number int x = 42;
first 5 bits:
(x >> 3) & ((1 << 5)-1);
or
(x >> 3) & 31;
To fetch the lower three bits:
(x >> 0) & ((1 << 3)-1)
or:
x & 7;
Say you want hi bits from the top, and lo bits from the bottom. (5 and 3 in your example)
top = (n >> lo) & ((1 << hi) - 1)
bottom = n & ((1 << lo) - 1)
Explanation:
For the top, first get rid of the lower bits (shift right), then mask what remains with an "all ones" mask (if you have a binary number like 0010000, subtracting one gives 0001111 - the same number of 1s as you had 0s in the original number).
For the bottom it's the same, except you don't need the initial shift.
top = (42 >> 3) & ((1 << 5) - 1) = 5 & (32 - 1) = 5 = 00101b
bottom = 42 & ((1 << 3) - 1) = 42 & (8 - 1) = 2 = 010b
You could use bitfields for this. Bitfields are special structs where you can specify variables in bits.
typedef struct {
    unsigned char a : 5;
    unsigned char b : 3;
} my_bit_t;

unsigned char c = 0x42;
my_bit_t *n = (my_bit_t *)&c; // reinterpret the byte as the bitfield struct
int first = n->a;
int sec = n->b;
Bit fields are described in more detail at http://www.cs.cf.ac.uk/Dave/C/node13.html#SECTION001320000000000000000
The charm of bit fields is that you do not have to deal with shift operators etc.; the notation is quite easy. As always when manipulating bits, there is a portability issue: the allocation order of bit-fields within a byte is implementation-defined.
int x = (number >> 3) & 0x1f;
will give you an integer where the last 5 bits are the 8-4 bits of number and zeros in the other bits.
Similarly,
int y = number & 0x7;
will give you an integer with the last 3 bits set the last 3 bits of number and the zeros in the rest.
just get rid of the 8* in your code.
int input = 42;
int high3 = input >> 5;
int low5 = input & (32 - 1); // 32 = 2^5
bool isBit3On = input & 4; // 4 = 2^(3-1)