How to concatenate two bitfields - C++

I have two separate bitfields that make up an "Identity" field; they are 11 + 18 bits in length (29 bits total).
In the bitfield they are of the expected size:
header a;
memset(a.arr, 0, sizeof(a.arr));
a = {0xA0,0xA0,0xA0,0xA0,0xA0,0xA0,0xA0,0xA0,0xA0}; // 1010 0000
cout << hex << a.BID << endl; // 010 0000 1010 -> 20a
cout << hex << a.IDEX << endl; // 00 1010 0000 1010 0000 -> a0a0
and what I need to do is combine these fields into a 29-bit segment, e.g. 010 0000 1010 00 1010 0000 1010 0000.
When attempting to concatenate the two bitfields, however, the result is not what I expect:
int BID = a.BID;
int IDEX = a.IDEX;
int result = (BID<<11) | IDEX;
cout << BID << endl;
printf("%x %d",result, result); // -> 10f0a0 (21 bits) where I expect 828A0A0 (29 bits)
It's important for me to have all 29 bits: within this 29-bit field there are various subfields, and I was going to take this output and put it through another bitfield to resolve those subfields.
Would you be able to assist in how I could combine BID and IDEX mentioned above into one combined bitfield of 29 bits? Unfortunately there are two ignored bits between the BID and IDEX fields and another in the header, which is why I cannot just declare a single 29-bit field.

You need to shift BID left by 18 bits (the width of IDEX) before doing the OR. For example:
int result = (BID<<18) | IDEX;
Otherwise you are overwriting the first block: shifting by only 11 bits leaves BID overlapping the 18-bit IDEX, so the OR corrupts those bits.
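A minimal sketch of the fix, assuming BID is 11 bits wide and IDEX is 18, with the values from the question's output:
#include <cstdint>
#include <cstdio>
int main() {
    // Hypothetical values taken from the question's output.
    uint32_t BID  = 0x20A;   // 11-bit field
    uint32_t IDEX = 0xA0A0;  // 18-bit field
    // Mask each field to its declared width, then shift BID past
    // all 18 bits of IDEX before ORing them together.
    uint32_t result = ((BID & 0x7FF) << 18) | (IDEX & 0x3FFFF);
    printf("%x\n", result);  // 828a0a0 -- the expected 29-bit value
}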

Related

How does ios::fmtflags work in C++? How does setf() work?

I am trying to understand the format flags of the iostream library. Can anyone please explain how this cout.setf(ios::hex | ios::showbase) thing works? I mean, how does the OR (|) operator work between the two ios format flags?
Please pardon my bad English.
std::ios_base::hex and std::ios_base::showbase are both enumerators of the BitmaskType std::ios_base::fmtflags. A BitmaskType is typically an enumeration type whose enumerators are distinct powers of two, kinda like this (1 << n means 2^n):
// simplified; can also be implemented with integral types, std::bitset, etc.
enum fmtflags : unsigned {
    dec      = 1 << 0, // 1
    oct      = 1 << 1, // 2
    hex      = 1 << 2, // 4
    // ...
    showbase = 1 << 9, // 512
    // ...
};
The | operator is the bitwise OR operator, which performs the OR operation on each pair of corresponding bits, so
hex             0000 0000 0000 0100
showbase        0000 0010 0000 0000
                -------------------
hex | showbase  0000 0010 0000 0100
This technique can be used to combine flags together, so every bit in the bitmask represents a separate flag (set or unset). Then, each flag can be
queried: mask & flag;
set: mask | flag;
unset: mask & (~flag).
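For instance, a small sketch of those three operations with setf(); note that hex belongs to the basefield group, so the two-argument setf() is used to clear the competing base bits:
#include <iostream>
int main() {
    // hex shares the basefield group with dec and oct, so use the
    // two-argument setf() to clear the other base bits while setting hex.
    std::cout.setf(std::ios::hex, std::ios::basefield);
    std::cout.setf(std::ios::showbase);        // set an independent flag
    std::cout << 255 << '\n';                  // prints 0xff
    // Query a flag by ANDing the current mask with the flag's bit.
    if (std::cout.flags() & std::ios::showbase)
        std::cout.unsetf(std::ios::showbase);  // unset it again
}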

Casting two bytes to a 12 bit short value?

I have a buffer of unsigned char data in which two bytes form a 12-bit value.
I found out that my system is little-endian. The first byte in the buffer gives me numbers from 0 to 255 on the console. The second byte always gives low numbers between 1 and 8 (it is measured data, so higher values up to the full 4 bits would be possible too).
I tried to shift them together so that I get a ushort with the correct 12-bit number.
Sadly at the moment I am totally confused about the endianess and what I have to shift how far in which direction.
I tried e.g. this:
ushort value = 0;
value = (ushort)firstByte << 8 | (ushort)secondByte << 4;
Sadly, the resulting value is quite often bigger than 12 bits.
Where is the mistake?
It depends on how the bits are packed within the two bytes exactly, but the solution for the most likely packing would be:
value = firstByte | (secondByte << 8);
This assumes that the second byte contains the 4 most significant bits (bits 8..11), while the first byte contains the 8 least significant bits (bits 0..7).
Note: the above solution assumes that firstByte and secondByte are sensible unsigned types (e.g. uint8_t). If they are not (e.g. if you have used char or some other possibly signed type), then you'll need to add some masking:
value = (firstByte & 0xff) | ((secondByte & 0xf) << 8);
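Wrapped up as a runnable sketch (the sample bytes and the packing are assumptions for illustration):
#include <cstdint>
#include <cstdio>
// Combine two buffer bytes into a 12-bit value, assuming the second
// byte carries the 4 most significant bits, as described above.
uint16_t combine12(uint8_t firstByte, uint8_t secondByte) {
    return static_cast<uint16_t>(firstByte) |
           (static_cast<uint16_t>(secondByte & 0x0F) << 8);
}
int main() {
    uint8_t buffer[2] = {0x5A, 0x0A};  // hypothetical sample bytes
    printf("%d\n", combine12(buffer[0], buffer[1]));  // 2650 (0xA5A)
}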
I think the main issue may not be with the values you're shifting alone. If these values are wider than their intended bit widths, they'll produce a large result unless the excess bits are ANDed out.
Picture the following:
0000 0000 1001 0010 << 8 | 0000 0000 0000 1101 << 4
1001 0010 0000 0000 | 0000 0000 1101 0000
You should notice the first problem here: the 4 lowest bits are not being used, and the result takes up 16 bits when you only wanted twelve. This should be modified like so:
(these are new numbers to demonstrate something else)
0000 1101 1001 0010 << 8 | 0000 0000 0000 1101
1101 1001 0010 0000 | (0000 0000 0000 1101 & 0000 0000 0000 1111)
This will create the following value:
1101 1001 0010 1101
Here, you should note that the value is still greater than 12 bits. If your numbers don't extend past the original 8-bit and 4-bit sizes, ignore this. Otherwise, you have to AND the bits to eliminate the leftmost 4 bits.
0000 1111 1111 1111 & 0000 1001 0010 1101
These mask values can be created using 0b... binary literals, the 2^bits - 1 pattern, or various other forms.
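For example, three equivalent spellings of a 12-bit mask:
#include <cstdint>
#include <cstdio>
int main() {
    uint16_t m1 = 0b0000111111111111;  // binary literal (C++14)
    uint16_t m2 = (1u << 12) - 1;      // the 2^bits - 1 pattern
    uint16_t m3 = 0x0FFF;              // plain hexadecimal
    printf("%x %x %x\n", m1, m2, m3);  // fff fff fff
}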

Map 10Bit buffer to 8Bit

I have a 10-bit SDI stream. When I receive it, it is stored into a uint8_t *buffer, and of course when I read it I get completely different values from what I expected, except for the first:
10Bit -> 00 0000 0001 | 00 0101 1010 → Hex: A5 10
8 Bit -> 0000 | 0000 0100 | 0101 1010 → Hex: A5 40
Is there a function I can use to map it correctly? (C++ style)
If it does not exist, how do I implement it?
Basically you need to use fread() with the correct parameters to read exactly 5 bytes, i.e. 40 bits, into a temporary buffer. You use that amount because it corresponds to a whole number of bytes on the input stream and also a whole number of 10-bit values (four of them).
You then use left and right shifts (<< and >>) and bitwise masks (&) to extract the four 10-bit values and store them in your output buffer.
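One possible sketch of that extraction; it assumes the stream packs values most-significant-bit first, so adjust the shifts if your SDI source packs differently:
#include <cstdint>
#include <cstdio>
// Unpack four 10-bit values from 5 packed bytes (40 bits),
// assuming most-significant-bit-first packing.
void unpack10(const uint8_t in[5], uint16_t out[4]) {
    out[0] = (in[0] << 2) | (in[1] >> 6);
    out[1] = ((in[1] & 0x3F) << 4) | (in[2] >> 4);
    out[2] = ((in[2] & 0x0F) << 6) | (in[3] >> 2);
    out[3] = ((in[3] & 0x03) << 8) | in[4];
}
int main() {
    // The first 20 bits encode the question's two sample values.
    uint8_t packed[5] = {0x00, 0x45, 0xA0, 0x00, 0x00};
    uint16_t v[4];
    unpack10(packed, v);
    printf("%03x %03x\n", v[0], v[1]);  // 001 05a
}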

Shift instructions in Golang

The Go spec says:
<<   left shift   integer << unsigned integer
What if the left side is type of uint8:
var x uint8 = 128
fmt.Println(x << 8) // it got 0, why ?
fmt.Println(int(x)<<8) // it got 32768, sure
Questions:
When x is of type uint8, why is there no compile error?
Why does x << 8 give 0?
For C/C++,
unsigned int a = 128;
printf("%d",a << 8); // result is 32768.
Could anyone explain? Thank you.
The left shift operator shifts the binary digits in the number to the left by X places, which has the effect of appending X 0s on the right-hand side of the number. A uint8 only holds 8 bits, so when you have 128 your variable holds
x = 1000 0000 == 128
x << 8
x= 1000 0000 0000 0000 == 32768
Since uint8 only holds 8 bits, we take the rightmost 8 bits, which is
x = 0000 0000 == 0
The reason you get the right number with an int is that Go's int is 32 or 64 bits depending on the platform, which is enough to store the entire result.
Because uint8 is an unsigned 8-bit integer type. That's what "u" stands for.
Because uint8(128) << 8 shifts the value, turning 1000 0000 into 0000 0000.
int(x) makes it 0000 0000 0000 0000 0000 0000 1000 0000 (on 32-bit systems, since int is architecture-dependent) and then the shift comes, making it 0000 0000 0000 0000 1000 0000 0000 0000, or 32768.
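For comparison with the C snippet in the question, a small C++ sketch of the same difference: C and C++ promote the uint8_t to int before shifting, while Go shifts at the operand's own width:
#include <cstdint>
#include <cstdio>
int main() {
    uint8_t x = 128;
    // C/C++ promote x to int before shifting, so the high bit survives.
    printf("%d\n", x << 8);             // 32768
    // Truncating back to 8 bits reproduces Go's uint8 result.
    printf("%d\n", (uint8_t)(x << 8));  // 0, like Go's x << 8
}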

How to interpret this bit shift

I have the following bit shift:
1011 & (~0 << 2)
How do I work out the answer to this? In particular I am confused about what ~0 << 2 means - I know that the << operator is a bit shift, and that ~ represents 'not'.
What I have read is that ~0 is a sequence of 1s - but how is that true, and how many 1s are there?
Usually, an int is a 32-bit/4-byte value. So ~0 = 1111 1111 1111 1111 1111 1111 1111 1111
In your case it really doesn't matter.
You want to solve 1011 & (~0 << 2)
Let's go through your example in steps.
First thing that happens is the parenthesis:
(~0 << 2)
This is the bits 1111 shifted left by two. When a left shift occurs, the newly added bits are 0s. Therefore (~0 << 2) equals:
(1111 << 2) = 1100
Finally you just need to do a bitwise AND between 1011 and 1100, which ends up as
1000 = 8
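A quick way to check the whole expression (assuming a 32-bit unsigned int):
#include <cstdio>
int main() {
    // ~0 is all ones; shifting left by 2 clears the two lowest bits.
    unsigned mask = ~0u << 2;  // ...1111 1100
    unsigned x = 0b1011;       // 11 decimal
    printf("%u\n", x & mask);  // 8 (binary 1000)
}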