Shifting more than 8 bits - leading to wrong output - C++

I am trying to perform this operation and I'm getting the wrong output.
signed char temp3[3] = {0x0D, 0xFF, 0xC0};
double temp = ((temp3[0] & 0x03) << 10) | (temp3[1]) | ((temp3[2] & 0xC0) >> 6);
I am trying to form a 12-bit number: take the last 2 bits of 0x0D, all 8 bits of 0xFF, and the first 2 bits of 0xC0, giving the binary number 011111111111 = 2047. However, I am getting -1. When I break out just the first mask and shift by 10, I get 0. I don't know if my problem is trying to shift an 8-bit character by 10 bits.

When bit twiddling, always use unsigned numbers.
Change the array to unsigned char.
Add the 'U' suffix to each constant, because each constant is a signed integer by default.
BTW, right shifting is implementation defined for negative signed integers.
(Per comments, changed "undefined" to "implementation defined".)
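
Applying those changes to the question's snippet gives the expected 2047. A minimal sketch (note the middle byte also needs a << 2 to line up with the other pieces, which the next answer covers):

#include <cstdio>

int main() {
    unsigned char temp3[3] = {0x0DU, 0xFFU, 0xC0U};
    // All operands are now non-negative, so the promotions to int are harmless.
    unsigned int temp = ((temp3[0] & 0x03U) << 10)
                      | (temp3[1] << 2)
                      | ((temp3[2] & 0xC0U) >> 6);
    printf("%u\n", temp); // prints 2047
}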

There are a few things you need to address.
First up, C++ doesn't have 12-bit numbers. The best you can have is 16 bits, with the top bit representing the sign in two's complement form.
You also need to be very careful about the type of the number you are shifting. In your example, you are left shifting a char by over 8 bits; as a char is only 8 bits, storing such a result back into one zeroes it.
The following example gives a correct implementation (for signed 12-bit numbers). There are no doubt more efficient ones.
// shift in top 2 bits
signed short test = static_cast<signed short>(temp3[0] & 0x03) << 10;
// shift in middle 8 bits
test |= (static_cast<signed short>(temp3[1]) << 2) & 0x03FC;
// right shift, mask and append lower 2 bits
test |= (static_cast<signed short>(temp3[2]) >> 6) & 0x0003;
// sign extend top bits from 12 bits to 16 bits
test |= (temp3[0] & 0x02) == 0 ? 0x0000 : 0xF000;
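
For completeness, a self-contained check with the questioner's input (note this still left-shifts a negative value for the middle byte, which is technically undefined before C++20, though it behaves as expected on common compilers):

#include <cstdio>

int main() {
    signed char temp3[3] = {0x0D, static_cast<signed char>(0xFF), static_cast<signed char>(0xC0)};
    signed short test = static_cast<signed short>(temp3[0] & 0x03) << 10;
    test |= (static_cast<signed short>(temp3[1]) << 2) & 0x03FC;
    test |= (static_cast<signed short>(temp3[2]) >> 6) & 0x0003;
    test |= (temp3[0] & 0x02) == 0 ? 0x0000 : 0xF000;
    printf("%d\n", test); // prints 2047
}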

Related

What are these two bitwise operators doing?

Anyone mind explaining what the following expression is doing?
int someValue = (((buffer[position + 1] << 8) | buffer[position] & 0xff) << 16);
I get that buffer[position + 1] << 8 is shifting 8 bits to the left, and that buffer[position] & 0xff is basically extracting those 8 bits, but what's the role of the or (|), and why is the whole thing being shifted 16 bits to the left? Are bits being erased? Thanks in advance.
Basically this is transforming two bytes into a 16-bit integer. The two bytes are at buffer[position] and buffer[position + 1].
First, the byte at position + 1 is shifted left by 8 bits. Second, the byte at position has its high-order bits cleared by the & 0xff.
Then the two bytes are combined with the bitwise-or operator.
Then the combined number is shifted left 16 bits, presumably so that another 2 bytes can be put into the lower part of the integer.
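
For instance, reading four bytes into a 32-bit int in this style might look like the following sketch (the byte order and buffer contents here are assumptions for illustration):

#include <cstdio>

int main() {
    unsigned char buffer[] = {0x34, 0x12, 0x78, 0x56};
    int position = 0;
    // Bytes 0-1 form the upper 16 bits; bytes 2-3 fill the lower 16 bits.
    int someValue = (((buffer[position + 1] << 8) | (buffer[position] & 0xff)) << 16)
                  | ((buffer[position + 3] << 8) | (buffer[position + 2] & 0xff));
    printf("0x%08x\n", someValue); // prints 0x12345678
}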

C/C++ Bitwise Operations not resulting in expected output?

I'm currently working on bitwise operations but I am confused right now... Here's the scoop:
I have a byte 0xCD; in bits this is 1100 1101.
I am shifting the bits left 7, then I'm saying & 0xFF, since 0xFF in bits is 1111 1111:
unsigned int bit = (0xCD << 7) & 0xFF<<7;
Now I would make the assumption that both 0xCD and 0xFF would get shifted to the left 7 times and the remaining bit would be 1 & 1 = 1, but I'm not getting that as output. I would also assume that shifting 6 would give me bits 0 & 1 = 0, but again I'm getting a number above 1, like 205. Is there something incorrect about the way I am trying to process bit shifting in my head? If so, what is it that I am doing wrong?
Code Below:
unsigned char byte_now = 0xCD;
printf("Bits for byte_now: 0x%02x: ", byte_now);
/*
 * We want to get the first bit in a byte.
 * To do this we will shift the bits over 7 places; for the last bit
 * we will compare it to 0xFF, since it's (1111 1111): if bit & 1 then the bit is one.
 */
unsigned int bit_flag = 0;
int bit_pos = 7;
bit_flag = (byte_now << bit_pos) & 0xFF;
printf("%d", bit_flag);
Is there something incorrect about the way I am trying to process bit shifting in my head?
There seems to be.
If so what is it that I am doing wrong?
That's unclear, so I offer a reasonably full explanation.
In the first place, it is important to understand that C does not perform any arithmetic directly on integers smaller than int. Consider, then, your expression byte_now << bit_pos. The integer promotions are performed on the operands, resulting in the left operand being converted to the int value 0xCD. The result has the same pattern of least-significant value bits as byte_now, but also a bunch of leading zero bits.
Left shifting the result by 7 bits produces the bit pattern 0110 0110 1000 0000, equivalent to 0x6680. You then perform a bitwise AND on the result, masking off all but the least-significant 8 bits, thus yielding 0x80. What happens when you assign that to bit_flag depends on the type of that variable, but if it is an integer type that is either unsigned or has more than 7 value bits then the assignment is well-defined and value-preserving. Note that it is bit 7 that is nonzero, not bit 0.
The type of bit_flag is more important when you pass it to printf(). You've paired it with a %d field descriptor, which is correct if bit_flag has type int and incorrect otherwise. If bit_flag does have type int, then I would expect the program to print 128.
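
If the intent was a 0-or-1 flag for a given bit, shifting right instead of left gets it directly. A minimal sketch using the question's variables:

#include <cstdio>

int main(void) {
    unsigned char byte_now = 0xCD;  // 1100 1101
    int bit_pos = 7;
    unsigned int bit_flag = (byte_now >> bit_pos) & 1u;  // move bit 7 down to bit 0
    printf("%u\n", bit_flag);  // prints 1
}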

16-bit to 10-bit conversion code explanation

I came across the following code to convert 16-bit numbers to 10-bit numbers and store it inside an integer. Could anyone maybe explain to me what exactly is happening with the AND 0x03?
// Convert the data to 10-bits
int xAccl = (((data[1] & 0x03) * 256) + data[0]);
if (xAccl > 511) {
    xAccl -= 1024;
}
Link to where I got the code: https://www.instructables.com/id/Measurement-of-Acceleration-Using-ADXL345-and-Ardu/
The bitwise operator & makes a mask; in this case, it clears the 6 highest bits of the byte.
Basically, this code does a modulo % 1024 (for unsigned values).
data[1] takes the 2nd byte; & 0x03 masks that byte with binary 11, so it keeps 2 bits; * 256 is the same as << 8, i.e. it pushes those 2 bits into the 9th and 10th positions; adding data[0] combines the two bytes (personally I'd have used |, not +).
So xAccl is now the low 10 bits; data[0] is the low byte, i.e. the bytes arrive in little-endian order.
The > 511 check is a sign check; essentially, it says "if the 10th bit is set, treat the entire thing as a negative integer, as though we had used 10-bit two's complement rules".
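
Put together, the whole conversion looks like this (a sketch; the two raw bytes here are example values, low byte first as in the linked Instructable):

#include <cstdio>

int main() {
    unsigned char data[2] = {0x00, 0x02};            // example raw bytes from the sensor
    int xAccl = ((data[1] & 0x03) * 256) + data[0];  // keep only the low 10 bits
    if (xAccl > 511) {
        xAccl -= 1024;  // reinterpret as 10-bit two's complement
    }
    printf("%d\n", xAccl);  // prints -512 for this input
}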

Understanding bitwise operations - shifting and AND

uint8_t payload[] = { 0, 0 };
pin5 = analogRead(A0);
payload[0] = pin5 >> 8 & 0xff;
payload[1] = pin5 & 0xff;
This is code from the XBee library published by andrewrapp on GitHub. I was wondering how the bitwise operation worked.
So suppose pin 5 gets an analog value of 256, which, as I am using a Particle Photon board, comes in a 12-bit format: 000100000000. Does payload[0] get the last eight bits, i.e. 00000000, or does it get the value after shifting, i.e. 00000001? Also, what then becomes the value in payload[1]?
I want to add a 4-bit code of my own, using a bitmask, to the first four bits in the array, followed by the data bits. Can I & payload[1] with a 0x1 for this?
The code in your example splits pin5 into the two bytes of the payload array: the most significant byte is placed into payload[0] and the least significant byte is placed into payload[1].
If, for example, pin5 is 0x0A63, then payload would contain 0x63, 0x0A.
If pin5 holds a 12-bit value, the top four bits of payload[0] are free to store a four-bit value of your own. To make sure those upper bits start zeroed, use a 0x0F mask instead of 0xFF:
payload[0] = pin5 >> 8 & 0x0f;
// ^
Now you can move your data into the upper four bits with | operator:
payload[0] |= myFourBits << 4;
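
On the receiving side, the same masks and shifts undo the packing. A round-trip sketch, assuming the layout above (pin5 and myFourBits are example values):

#include <cstdio>

int main() {
    unsigned int pin5 = 0x0A63;      // example 12-bit reading
    unsigned char myFourBits = 0x9;  // example 4-bit user code
    unsigned char payload[2];
    payload[0] = (pin5 >> 8 & 0x0f) | (myFourBits << 4);  // pack
    payload[1] = pin5 & 0xff;
    unsigned char fourBits = payload[0] >> 4;                        // unpack user code
    unsigned int reading = ((payload[0] & 0x0f) << 8) | payload[1];  // unpack reading
    printf("0x%X 0x%X\n", fourBits, reading);  // prints 0x9 0xA63
}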
So you want to understand what the stated operations do. Let's see if we can clarify this by examining the pin5 variable and subdividing it into 2 parts:
pin5 000100000000
MMMMLLLLLLLL
M = 4 Most significant bits, L = 8 Least significant bits
payload[0] takes the result of some operations on pin5:
pin5 000100000000
>> 8 000000000001 Shifts all bits 8 positions to the right
00000000MMMM and fills the left part with zeroes
so you have the originally leading 4 bits right-aligned now, on which an additional operation is performed:
000000000001
& 0xFF 000011111111 Anding with FF
000000000001
Right-shifting a 12-bit variable by 8 positions leaves 4 significant positions; the leading 8 bits will always be 0. 0xFF is binary 11111111, i.e. 8 set bits. So what is done here is Anding the right-shifted value with those 8 set bits, in order to make sure that the 4 most significant bits get erased.
00000000xxxx Potentially set bits (you have 0001)
000011111111 & 0xFF
00000000xxxx Result
0000xxxx Storing in 8-bit variable
payload[0] = 00000001 in your case
In this case, the Anding operation is not useful and a complete waste of time, because Anding any variable with 0xFF never changes its 8 least significant bits in any way, and since the 4 most significant bits are never set anyway, there simply is no point in this operation.
(Technically, because the source is a 12-bit variable (presumably a 16-bit variable, though, with only 12 significant binary digits), 0x0F would have sufficed for the Anding mask. Can you see why? But even this would simply be a wasted CPU cycle.)
payload[1] also takes the result of an operation on pin5:
pin5 MMMMLLLLLLLL potentially set bits
& 0xFF 000011111111 mask to keep LLLLLLLL only
0000LLLLLLLL result (you have 00000000)
xxxxxxxx Storing in 8-bit variable
payload[1] = 00000000 in your case
In this case, Anding with 11111111 makes perfect sense, because it discards MMMM, which in your case is 0001.
So, all in all, your value
pin5 000100000000
MMMMLLLLLLLL
is split such, that payload[0] contains MMMM (0001 = decimal 1), and payload[1] contains LLLLLLLL (00000000 = decimal 0).
If the input was
pin5 101110010001
MMMMLLLLLLLL
instead, you would find in payload[0]: 1011 (decimal 8+2+1 = 11), and in payload[1]: 10010001 (decimal 128+16+1 = 145).
You would interpret this result as decimal 11 * 256 + 145 = 2961, the same result you obtain when converting the original 101110010001 from binary into decimal, for instance using calc.exe in Programmer mode (Alt+3), if you are using Windows.
Likewise, your original data is being interpreted as 1 * 256 + 0 = 256, as expected.
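
The split and the reconstruction are exact inverses, which a few lines can verify (a minimal sketch of the example above):

#include <cstdio>

int main() {
    unsigned int pin5 = 0xB91;      // 101110010001 in binary
    unsigned char payload[2];
    payload[0] = pin5 >> 8 & 0xff;  // 1011
    payload[1] = pin5 & 0xff;       // 10010001
    int restored = payload[0] * 256 + payload[1];
    printf("%d %d %d\n", payload[0], payload[1], restored);  // prints 11 145 2961
}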

PIC Bit Masking and Shifting for 4 Bit LCD Control

I have a question regarding both masking and bit shifting.
I have the following code:
void WriteLCD(unsigned char word, unsigned commandType, unsigned usDelay)
{
    // Most Significant Bits
    // Need to do bit masking for upper nibble, and shift left by 8.
    LCD_D = (LCD & 0x0FFF) | (word << 8);
    EnableLCD(commandType, usDelay); // Send Data

    // Least Significant Bits
    // Need to do bit masking for lower nibble, and shift left by 12.
    LCD_D = (LCD & 0x0FFF) | (word << 12);
    EnableLCD(commandType, usDelay); // Send Data
}
The "word" is 8 bits, and is being put through a 4 bit LCD interface. Meaning I have to break the most significant bits and least significant bits apart before I send the data.
LCD_D is a 16 bit number, in which only the most significant bits I pass to it I want to actually "do" something. I want the previous 12 bits preserved in case they were doing something else.
Is my logic in terms of bit masking and shifting the "word" correct in terms of passing the upper and lower nibbles appropriately to the LCD_D?
Thanks for the help!
Looks ok apart from needing to cast "word" to an unsigned short (16-bit) before the shift, in both cases, so that the shift is not performed on a char and loses data. E.g.:
LCD_D = (LCD & 0x0FFF) | ((unsigned short) word << 8);
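
And the same cast applied to the lower-nibble write, per the "in both cases" above (a sketch of the full corrected pair, using the question's registers and functions):

LCD_D = (LCD & 0x0FFF) | ((unsigned short) word << 8);   // upper nibble
EnableLCD(commandType, usDelay); // Send Data

LCD_D = (LCD & 0x0FFF) | ((unsigned short) word << 12);  // lower nibble
EnableLCD(commandType, usDelay); // Send Data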