I don't understand what this code is doing at all, could someone please explain it?
long input; //just here to show the type, assume it has a value stored
unsigned int output( input >> 4 & 0x0F );
Thanks
This bitshifts the input 4 bits to the right, then masks the result so that only its lower 4 bits remain.
Take this example 16 bit number: (the dots are just for visual separation)
1001.1111.1101.1001 >> 4 = 0000.1001.1111.1101
0000.1001.1111.1101 & 0x0F = 1101 (or 0000.0000.0000.1101 to be more explicit)
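As a runnable sketch of that exact expression (the value is the one from the worked example; note that >> binds tighter than &, so it parses as (input >> 4) & 0x0F):
#include <stdio.h>

int main(void) {
    long input = 0x9FD9;  /* 1001.1111.1101.1001, the example value above */

    /* Shift right 4 bits, then keep only the lowest 4 bits of the result. */
    unsigned int output = input >> 4 & 0x0F;

    printf("0x%X\n", output);  /* prints 0xD, i.e. 1101 */
    return 0;
}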
& is the bitwise AND operator. "& 0x0F" is often used to clear everything above the lowest 4 bits of a value, i.e. to ignore the first (leftmost) 4 bits of a byte.
0x0f = 00001111. So a bitwise & operation of 0x0f with any other bit pattern will retain only the rightmost 4 bits, clearing the left 4 bits.
If the input has a value of 01010001, after doing &0x0F, we'll get 00000001 - which is a pattern we get after clearing the left 4 bits.
Just as another example, this is a code I've used in a project:
Byte verflag = (Byte)(bIsAck & 0x0f) | ((version << 4) & 0xf0). Here I'm combining two values into a single Byte value to save space because it's being used in a packet header structure. bIsAck is a BOOL and version is a Byte whose value is very small. So both these values can be contained in a single Byte variable.
The first nibble in the resulting variable will contain the value of version and the second nibble will contain the value of bIsAck. At the receiving end I can retrieve the values into separate variables by shifting right 4 bits when extracting version.
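A minimal runnable sketch of that pack/unpack round trip (a typedef stands in for Byte here, and bIsAck is shown as a plain 0/1 value rather than a BOOL):
#include <stdio.h>

typedef unsigned char Byte;

int main(void) {
    Byte bIsAck  = 1;  /* 0 or 1, fits in the low nibble */
    Byte version = 5;  /* small value, fits in the high nibble */

    /* Pack: bIsAck into the low 4 bits, version into the high 4 bits. */
    Byte verflag = (Byte)((bIsAck & 0x0F) | ((version << 4) & 0xF0));

    /* Unpack at the receiving end. */
    Byte ackOut = verflag & 0x0F;         /* 1 */
    Byte verOut = (verflag >> 4) & 0x0F;  /* 5 */

    printf("verflag=0x%02X ack=%d version=%d\n", verflag, ackOut, verOut);
    return 0;
}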
Hope this is somewhere near to what you asked for.
That is doing a bitwise right shift of the contents of "input" by 4 bits, then doing a bitwise AND of the result with 0x0F (binary 1111).
What it does depends on the contents and type of "input"; here the question declares it as a long, so the shift and bitwise AND are done on that integer value. (If "input" were a pointer, e.g. a string, the code wouldn't compile, because shift and bitwise AND aren't defined for pointers.)
Google for "c++ bitwise operations" for more details on what's going on under the hood.
Additionally, look at C++ operator precedence because the C/C++ precedence is not exactly the same as in many other languages.
I'm currently working on bitwise operations but I am confused right now... Here's the scoop and why
I have a byte 0xCD in bits this is 1100 1101
I am shifting the bits left 7, then I'm saying & 0xFF since 0xFF in bits is 1111 1111
unsigned int bit = (0xCD << 7) & 0xFF<<7;
Now I would make the assumption that both 0xCD and 0xFF get shifted to the left 7 times, and the remaining bit would be 1 & 1 = 1, but that's not the output I'm getting. I would also assume that shifting by 6 would give me bits 0 & 1 = 0, but again I'm getting a number above 1, like 205. Is there something incorrect about the way I am trying to process bit shifting in my head? If so, what is it that I am doing wrong?
Code Below:
unsigned char byte_now = 0xCD;
printf("Bits for byte_now: 0x%02x: ", byte_now);
/*
* We want to get the first bit in a byte.
* To do this we will shift the bits over 7 places for the last bit
* we will compare it to 0xFF since it's (1111 1111) if bit&1 then the bit is one
*/
unsigned int bit_flag = 0;
int bit_pos = 7;
bit_flag = (byte_now << bit_pos) & 0xFF;
printf("%d", bit_flag);
Is there something incorrect about the way I am trying to process bit shifting in my head?
There seems to be.
If so what is it that I am doing wrong?
That's unclear, so I offer a reasonably full explanation.
In the first place, it is important to understand that C does not perform any arithmetic directly on integer types smaller than int. Consider, then, your expression byte_now << bit_pos. The integer promotions are performed on the operands, resulting in the left operand being converted to the int value 0xCD. The result has the same pattern of least-significant value bits as byte_now, but also a bunch of leading zero bits.
Left shifting the result by 7 bits produces the bit pattern 110 0110 1000 0000, equivalent to 0x6680. You then perform a bitwise AND operation on the result, masking off all but the least-significant 8 bits, thus yielding 0x80. What happens when you assign that to bit_flag depends on the type of that variable, but if it is an integer type that is either unsigned or has more than 7 value bits then the assignment is well-defined and value-preserving. Note that it is bit 7 that is nonzero, not bit 0.
The type of bit_flag is more important when you pass it to printf(). You've paired it with a %d field descriptor, which is correct if bit_flag has type int and incorrect otherwise. If bit_flag does have type int, then I would expect the program to print 128.
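For comparison, here is a small sketch of what was presumably intended: shifting right rather than left, so that the target bit lands in position 0:
#include <stdio.h>

int main(void) {
    unsigned char byte_now = 0xCD;  /* 1100 1101 */
    int bit_pos = 7;

    /* Move bit 7 down to bit 0, then mask everything else off. */
    unsigned int bit_flag = (byte_now >> bit_pos) & 1u;

    printf("bit %d of 0x%02X is %u\n", bit_pos, byte_now, bit_flag);  /* prints 1 */
    return 0;
}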
I came across the following code to convert 16-bit numbers to 10-bit numbers and store it inside an integer. Could anyone maybe explain to me what exactly is happening with the AND 0x03?
// Convert the data to 10-bits
int xAccl = (((data[1] & 0x03) * 256) + data[0]);
if(xAccl > 511) {
xAccl -= 1024;
}
Link to where I got the code: https://www.instructables.com/id/Measurement-of-Acceleration-Using-ADXL345-and-Ardu/
The bitwise operator & applies a mask; in this case, it clears the 6 highest bits of the byte, keeping only the 2 lowest.
Basically, this code does a modulo % 1024 (for unsigned values).
data[1] takes the 2nd byte; & 0x03 masks that byte with binary 11, so it keeps just the lowest 2 bits; * 256 is the same as << 8, i.e. it pushes those 2 bits into the 9th and 10th positions; adding data[0] combines the two bytes (personally I'd have used |, not +).
So xAccl is now the low 10 bits, with the two bytes combined in little-endian order (data[0] is the least significant byte).
The > 511 seems to be a sign check; essentially, it is saying "if the 10th bit is set, treat the entire thing as a negative integer as though we'd used 10-bit two's complement rules".
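Here is a self-contained sketch of that conversion, with made-up sample bytes standing in for the sensor's data array:
#include <stdio.h>

int main(void) {
    /* Hypothetical raw reading, low byte first as on the ADXL345. */
    unsigned char data[2] = {0x9C, 0x02};  /* 10-bit value 0x29C = 668 */

    /* Keep 2 bits of the high byte, move them to bits 8-9, add the low byte. */
    int xAccl = ((data[1] & 0x03) * 256) + data[0];

    /* Interpret as 10-bit two's complement: 512..1023 maps to -512..-1. */
    if (xAccl > 511) {
        xAccl -= 1024;
    }

    printf("%d\n", xAccl);  /* 668 - 1024 = -356 */
    return 0;
}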
I am trying to perform this operation, and I'm getting the wrong output.
signed char temp3[3] = {0x0D, 0xFF, 0xC0};
double temp = ((temp3[0] & 0x03) << 10) | (temp3[1]) | ((temp3[2] & 0xC0) >> 6);
I am trying to form a 12-bit number: get the last 2 bits of 0x0D, all 8 bits of 0xFF, and the first 2 bits of 0xC0, to form the binary number 0111 1111 1111 = 2047. However, I am getting -1. When I break out the first mask and shift by 10, I get 0. I don't know if this is my problem, trying to shift an 8-bit character by 10 bits.
When bit twiddling, always use unsigned numbers.
Change the array to unsigned char.
Add the 'U' suffix to each constant, because each constant is a signed integer by default.
BTW, right shifting is implementation defined for negative signed integers.
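Putting those points together, a minimal all-unsigned sketch (note the middle byte must also be shifted left 2, which the original expression was missing):
#include <stdio.h>

int main(void) {
    unsigned char temp3[3] = {0x0DU, 0xFFU, 0xC0U};

    /* Bits 10-11 from temp3[0], bits 2-9 from temp3[1], bits 0-1 from temp3[2]. */
    unsigned int temp = ((temp3[0] & 0x03U) << 10)
                      | ((unsigned int)temp3[1] << 2)
                      | ((temp3[2] & 0xC0U) >> 6);

    printf("%u\n", temp);  /* 0x7FF = 2047 */
    return 0;
}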
There are a few things you need to address.
First up, C++ doesn't have 12-bit numbers. The closest you can get is 16 bits, where the top bit represents the sign in two's complement form.
You also need to be very careful about the type of the number you are shifting. In your example you are left shifting a char by over 8 bits. The char is promoted to int before the shift, so the bits aren't lost, but temp3[1] is a signed char holding 0xFF, i.e. -1, which sign-extends when promoted; ORing that in sets every bit, which is why you see -1.
The following example gives a correct implementation (for signed 12-bit numbers). There are no doubt more efficient ones.
// shift in top 2 bits
signed short test = static_cast<signed short>(temp3[0] & 0x03) << 10 ;
// shift in middle 8 bits
test |= (static_cast<signed short>(temp3[1]) << 2) & 0x03FC;
// rightshift, mask and append lower 2 bits
test |= (static_cast<signed short>(temp3[2]) >> 6) & 0x0003;
// sign extend top bits from 12 bits to 16 bits
test |= (temp3[0] & 0x02) == 0 ? 0x0000 : 0xF000;
I have been following the msdn example that shows how to hash data using the Windows CryptoAPI. The example can be found here: http://msdn.microsoft.com/en-us/library/windows/desktop/aa382380%28v=vs.85%29.aspx
I have modified the code to use the SHA1 algorithm.
I don't understand how the code that displays the hash (shown below) in hexadecimal works; more specifically, I don't understand what the >> 4 operator and the & 0xf operator do.
if (CryptGetHashParam(hHash, HP_HASHVAL, rgbHash, &cbHash, 0)){
printf("MD5 hash of file %s is: ", filename);
for (DWORD i = 0; i < cbHash; i++)
{
printf("%c%c", rgbDigits[rgbHash[i] >> 4],
rgbDigits[rgbHash[i] & 0xf]);
}
printf("\n");
}
I would be grateful if someone could explain this for me, thanks in advance :)
x >> 4 shifts x right four bits. x & 0xf does a bitwise and between x and 0xf. 0xf has its four least significant bits set, and all the other bits clear.
Assuming rgbHash is an array of unsigned char, this means the first expression retains only the four most significant bits and the second expression the four least significant bits of the (presumably) 8-bit input.
Four bits is exactly what will fit in one hexadecimal digit, so each of those is used to look up a hexadecimal digit in an array which presumably looks something like this:
char rgbDigits[] = "0123456789abcdef"; // or possibly upper-case letters
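As a standalone sketch (with a made-up 3-byte array standing in for the real digest), the whole technique looks like this:
#include <stdio.h>

int main(void) {
    const char rgbDigits[] = "0123456789abcdef";
    unsigned char hash[] = {0x3A, 0xF2, 0x07};  /* stand-in for rgbHash */

    for (size_t i = 0; i < sizeof hash; i++) {
        /* The high nibble selects the first digit, the low nibble the second. */
        printf("%c%c", rgbDigits[hash[i] >> 4], rgbDigits[hash[i] & 0xf]);
    }
    printf("\n");  /* prints 3af207 */
    return 0;
}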
This code uses a simple bit 'filtering' technique.
">> 4" means shift right by 4 places, which in turn means 'divide by 16'.
"& 0xf" is a bitwise AND operation which means 'take the lowest 4 bits'.
Both of these values are used to index into rgbDigits, which presumably maps each 4-bit value to a human-readable hexadecimal digit.
I am trying to understand how to use Bitwise AND to extract the values of individual bytes.
What I have is a 4-byte array and am casting the last 2 bytes into a single 2 byte value. Then I am trying to extract the original single byte values from that 2 byte value. See the attachment for a screen shot of my code and values.
The problem I am having is I am not able to get the value of the last byte in the 2 byte value.
How would I go about doing this with Bitwise AND?
The problem I am having is I am not able to get the value of the last byte in the 2 byte value.
Your 2-byte integer is formed from the values 3 and 4 (since your pointer is to a[1]). As you have already seen in your tests, you can get the 3 by applying the mask 0xFF. Now, to get the 4 you need to remove the lower bits and shift the value down. In your example, using the mask 0xFF00 effectively removes the 3 from the 16-bit number, but it leaves the 4 in the high byte of your 2-byte number, giving the value 1024 == 2^10: the 11th bit set, which is the third bit in the second byte (counting from the least significant).
You can shift that result 8 bits to the right to get your 4, or else you can ignore the mask altogether, since by just shifting to the right the lowest bits will disappear:
4 == ( x>>8 )
More interesting results for testing bitwise AND can be obtained by working with a single number:
int x = 7; // or char, for that matter:
(x & 0x1) == 1;
(x & (0x1<<1) ) == 2; // (x & 0x2)
(x & ~(0x2)) == 5;
You need to add some bit-shifting to convert the masked value from the upper byte to the lower byte.
The problem I am having is I am not able to get the value of the last byte in the 2 byte value.
Not sure where that "watch" table comes from or if there is more code involved, but it looks to me like the result is correct. Remember, one of them is a high byte and so the value is shifted << 8 places. On a little endian machine, the high byte would be the second one.
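A hedged sketch of the whole round trip (the 4-byte array and its contents are assumptions based on the question's description):
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t a[4] = {2, 3, 4, 5};  /* assumed values; the question's pointer was to a[1] */

    /* Build the 2-byte value explicitly, in little-endian order, matching */
    /* what the pointer cast would produce on a little-endian machine:     */
    /* low byte a[1] = 3, high byte a[2] = 4, so two_bytes == 0x0403.      */
    uint16_t two_bytes = (uint16_t)(a[1] | (a[2] << 8));

    uint8_t low  = two_bytes & 0xFF;           /* 3 */
    uint8_t high = (two_bytes & 0xFF00) >> 8;  /* 4: mask, then shift down */

    printf("low=%d high=%d\n", low, high);
    return 0;
}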