What are these two bitwise operators doing? - c++

Anyone mind explaining what the following expression is doing?
int someValue = (((buffer[position + 1] << 8) | buffer[position] & 0xff) << 16)
I get that buffer[position + 1] << 8 is shifting 8 bits to the left, and that buffer[position] & 0xff is basically extracting those 8 bits, but what's the role of the "or" (|), and why is the whole thing being shifted 16 bits to the left? Are bits being erased? Thanks in advance.

Basically this is transforming two bytes into a 16-bit integer. The two bytes are at buffer[position] and buffer[position + 1].
First, the byte at position + 1 is shifted left by 8 bits. Second, the byte at position has its high-order bits cleared by & 0xff (this matters if char is signed, since the value would otherwise be sign-extended). Note that & binds tighter than |, so the masking happens before the combination.
Then the two bytes are combined with the bitwise-or operator.
Finally, the combined 16-bit value is shifted left 16 bits, presumably so that another two bytes can be placed into the lower half of the integer.

Related

16-bit to 10-bit conversion code explanation

I came across the following code to convert 16-bit numbers to 10-bit numbers and store it inside an integer. Could anyone maybe explain to me what exactly is happening with the AND 0x03?
// Convert the data to 10-bits
int xAccl = (((data[1] & 0x03) * 256) + data[0]);
if (xAccl > 511) {
    xAccl -= 1024;
}
Link to where I got the code: https://www.instructables.com/id/Measurement-of-Acceleration-Using-ADXL345-and-Ardu/
The bitwise operator & makes a mask; in this case it clears the 6 highest bits of the byte.
Basically, this code does a modulo % 1024 (for unsigned values).
data[1] is the 2nd byte; & 0x03 masks that byte with binary 11, keeping its lowest 2 bits; * 256 is the same as << 8, pushing those 2 bits into the 9th and 10th positions; adding data[0] combines the two bytes (personally I'd have used |, not +).
So xAccl now holds the 10-bit value; note the buffer is little-endian here (data[0] is the low byte, data[1] holds the high bits).
The > 511 seems to be a sign check; essentially, it is saying "if the 10th bit is set, treat the entire thing as a negative integer as though we'd used 10-bit twos complement rules".

C++ bit shift which way

I am having trouble understanding which way I should be shifting bits to convert from part of one structure to another. I am writing an application to be used only on Windows / Intel systems.
Old structure (DataByte):
Return Number 3 bits (bits 0 – 2)
Number of Returns 3 bits (bits 3 – 5)
Scan Direction Flag 1 bit (bit 6)
Edge of Flight Line 1 bit (bit 7)
New structure (ReturnData and DataByte):
Return Number 4 bits (bits 0 - 3)
Number of Returns (given pulse) 4 bits (bits 4 - 7)
Classification Flags 4 bits (bits 0 - 3)
Scanner Channel 2 bits (bits 4 - 5)
Scan Direction Flag 1 bit (bit 6)
Edge of Flight Line 1 bit (bit 7)
Bits 0 to 5 should be 0, as that data is unknown in the existing record. I think this converts to the new structure using a bit mask and shift:
New->ReturnData = (Old->DataByte & 0x07)>>1 | (Old->DataByte & 0x38)>>2;
New->DataByte = Old->DataByte & 0xC0;
Is that correct? The first 3 bits (& 0x07) shifted >> 1 become the first nibble and the second 3 bits (& 0x38) shifted >> 2 the second nibble, forming a byte... or is the shift the other way, since Intel is the other endianness?
Bit 0 is bit 0 regardless of endianness. Endianness affects byte order in memory, which should only matter if you want to do reinterpreting or sending data across the wire. Math is always internally consistent.
Bits 0-2 would be 0x07; bits 3-5 would be binary 0011 1000, which is 0x38. Now in the new data structure, the "return number" stays in the same place, and the "number of returns" just shifts up one (from bits 3-5 into the 4-7 field). So that's:
New->ReturnData = (Old->DataByte & 0x07) // these bits stay in the same place
| ((Old->DataByte & 0x38) << 1); // these shift up one
Your logic for Scan+Edge looks correct.

Shifting more than 8 bits - leading to wrong output

I am trying to perform this operation, and I'm getting the wrong output.
signed char temp3[3] = {0x0D, 0xFF, 0xC0};
double temp = ((temp3[0] & 0x03) << 10) | (temp3[1]) | ((temp3[2] & 0xC0) >> 6);
I am trying to form a 12-bit number: get the last 2 bits of 0x0D, all 8 of 0xFF, and the first 2 of 0xC0 to form the binary number (011111111111) = 2047; however, I am getting -1. When I break out the first mask and shift of 10, I get 0. I don't know if my problem is trying to shift an 8-bit character by 10 bits.
When bit twiddling, always use unsigned numbers.
Change the array to unsigned char.
Add the 'U' suffix to each constant, because each constant is a signed integer by default.
BTW, right shifting is implementation-defined (not undefined) for negative signed integers.
There are a few things you need to address.
First up, C++ doesn't have 12-bit numbers. The best you can have is 16 bits, where the top bit represents the sign in two's complement form.
You also need to be very careful about the type of the number you are shifting. In your example you are shifting signed chars: they are promoted to int before the shift, but because they are signed, 0xFF is sign-extended to -1 along the way, which is what corrupts your result.
The following example gives a correct implementation (for signed 12-bit numbers). There are no doubt more efficient ones.
// shift in top 2 bits
signed short test = static_cast<signed short>(temp3[0] & 0x03) << 10;
// shift in middle 8 bits
test |= (static_cast<signed short>(temp3[1]) << 2) & 0x03FC;
// right shift, mask and append lower 2 bits
test |= (static_cast<signed short>(temp3[2]) >> 6) & 0x0003;
// sign extend top bits from 12 bits to 16 bits
test |= (temp3[0] & 0x02) == 0 ? 0x0000 : 0xF000;

PIC Bit Masking and Shifting for 4 Bit LCD Control

I have a question regarding both masking and bit shifting.
I have the following code:
void WriteLCD(unsigned char word, unsigned commandType, unsigned usDelay)
{
    // Most significant bits:
    // need to do bit masking for the upper nibble, and shift left by 8.
    LCD_D = (LCD & 0x0FFF) | (word << 8);
    EnableLCD(commandType, usDelay); // Send data
    // Least significant bits:
    // need to do bit masking for the lower nibble, and shift left by 12.
    LCD_D = (LCD & 0x0FFF) | (word << 12);
    EnableLCD(commandType, usDelay); // Send data
}
The "word" is 8 bits, and is being put through a 4 bit LCD interface. Meaning I have to break the most significant bits and least significant bits apart before I send the data.
LCD_D is a 16 bit number, in which only the most significant bits I pass to it I want to actually "do" something. I want the previous 12 bits preserved in case they were doing something else.
Is my logic in terms of bit masking and shifting the "word" correct in terms of passing the upper and lower nibbles appropriately to the LCD_D?
Thanks for the help!
Looks OK, apart from needing to cast "word" to an unsigned short (16-bit) before the shift, in both cases, so that the shift is not performed on a char and loses the data. E.g.:
LCD_D = (LCD & 0x0FFF) | ((unsigned short) word << 8);

What is this doing: "input >> 4 & 0x0F"?

I don't understand what this code is doing at all, could someone please explain it?
long input; //just here to show the type, assume it has a value stored
unsigned int output( input >> 4 & 0x0F );
Thanks
Bitshifts the input 4 bits to the right, then masks off all but the lower 4 bits.
Take this example 16 bit number: (the dots are just for visual separation)
1001.1111.1101.1001 >> 4 = 0000.1001.1111.1101
0000.1001.1111.1101 & 0x0F = 1101 (or 0000.0000.0000.1101 to be more explicit)
& is the bitwise AND operator. "& 0x0F" is often used to keep only the lowest 4 bits of a value, ignoring the leftmost bits.
0x0f = 00001111 in binary, so a bitwise & with 0x0f retains only the rightmost 4 bits of any bit pattern, clearing the left 4 bits.
For example, if the input has the value 01010001, then after & 0x0F we get 00000001 - the left 4 bits have been cleared.
Just as another example, this is a code I've used in a project:
Byte verflag = (Byte)((bIsAck & 0x0f) | ((version << 4) & 0xf0)); Here I'm combining two values into a single Byte value to save space, because it's used in a packet header structure. bIsAck is a BOOL and version is a Byte whose value is very small, so both values fit in a single Byte variable.
The first nibble of the resulting variable contains the value of version and the second nibble contains the value of bIsAck. The receiving end can retrieve the values into separate variables by shifting right 4 bits when reading version.
Hope this is somewhere near to what you asked for.
That is doing a bitwise right shift of the contents of "input" by 4 bits, then a bitwise AND of the result with 0x0F (binary 1111).
What it does depends on the contents and type of "input". Is it an int? A long? A string (which would mean the shift and bitwise AND are being done on a pointer to the first byte).
Google for "c++ bitwise operations" for more details on what's going on under the hood.
Additionally, look at C++ operator precedence because the C/C++ precedence is not exactly the same as in many other languages.