bitwise shifts, unsigned chars - c++

Can anyone explain verbosely what this accomplishes? I'm trying to learn C and am having a hard time wrapping my head around it.
void tonet_short(uint8_t *p, unsigned short s) {
p[0] = (s >> 8) & 0xff;
p[1] = s & 0xff;
}
void tonet_long(uint8_t *p, unsigned long l)
{
p[0] = (l >> 24) & 0xff;
p[1] = (l >> 16) & 0xff;
p[2] = (l >> 8) & 0xff;
p[3] = l & 0xff;
}

Verbosely, here it goes:
As a direct answer: both of them store the bytes of a variable inside an array of bytes, from left to right (most significant byte first). tonet_short does that for unsigned short variables, which consist of 2 bytes; tonet_long does it for unsigned long variables, which consist of 4 bytes.
I will explain it for tonet_long, and tonet_short will just be the variation of it that you'll hopefully be able to derive yourself:
When unsigned variables are bitwise-shifted, their bits move towards the given side by the given number of positions, and the vacated bits are filled with zeros. I.e.:
unsigned char asd = 10; //which is 0000 1010 in base 2
asd <<= 2; //shifts the bits of asd 2 positions to the left
asd; //it is now 0010 1000 which is 40 in base 10
Keep in mind that this is for unsigned variables, and these may be incorrect for signed variables.
The bitwise-and & operator compares the bits of two operands on both sides, returns a 1 (true) if both are 1 (true), and 0 (false) if any or both of them are 0 (false); and it does this for each bit. Example:
unsigned char asd = 10; //0000 1010
unsigned char qwe = 6; //0000 0110
asd & qwe; //0000 0010 <-- this is what it evaluates to, which is 2
Now that we know the bitwise-shift and bitwise-and, let's get to the first line of the function tonet_long:
p[0] = (l >> 24) & 0xff;
Here, since l is unsigned long (assumed here to be 4 bytes wide), (l >> 24) evaluates to the topmost 4 * 8 - 24 = 8 bits of the variable l, which is the most significant byte of l. I can visualize the process like this:
abcd efgh ijkl mnop qrst uvwx yz.. .... //letters and dots stand for
//unknown zeros and ones
//shift this 24 times towards right
0000 0000 0000 0000 0000 0000 abcd efgh
Note that we do not change the l, this is just the evaluation of l >> 24, which is temporary.
Then 0xff (hexadecimal, base 16), which in binary is just 0000 0000 0000 0000 0000 0000 1111 1111, gets bitwise-ANDed with the shifted l. It goes like this:
0000 0000 0000 0000 0000 0000 abcd efgh
&
0000 0000 0000 0000 0000 0000 1111 1111
=
0000 0000 0000 0000 0000 0000 abcd efgh
Since a & 1 depends only on a, it simply evaluates to a; and the same goes for the rest. It looks like a redundant operation here, and it really is. It will, however, be important for the rest. This is because, for example, when you evaluate l >> 16, it looks like this:
0000 0000 0000 0000 abcd efgh ijkl mnop
Since we want only the ijkl mnop part, we have to discard the abcd efgh, and that will be done with the aid of 0000 0000 that 0xff has on its corresponding bits.
I hope this helps, the rest happens like it does this far, so... yeah.

These routines convert 16- and 32-bit values from native byte order to standard network (big-endian) byte order. They work by shifting and masking 8-bit chunks from the native value and storing them in order into a byte array.

If I see it right, it basically stores the bytes of the short and the long from most significant to least significant (on a little-endian machine, this reverses the byte order of the number) at an address which hopefully has enough space :)

explain verbosely - OK...
void tonet_short(uint8_t *p, unsigned short s) {
short is typically a 16-bit value (max: 0xFFFF)
The uint8_t is an unsigned 8-bit value, and p is a pointer to some number of unsigned 8-bit values (from the code we're assuming at least 2 sequential ones).
p[0] = (s >> 8) & 0xff;
This takes the "top half" of the value in s and puts it in the first element in the array p. So let's assume s==0x1234.
First s is shifted right by 8 bits (s >> 8 == 0x0012), then it's AND'ed with 0xFF and the result is stored in p[0] (p[0] == 0x12).
p[1] = s & 0xff;
Now note that when we did that shift, we never changed the original value of s, so s still has the original value of 0x1234. Thus in this second line we simply do another bitwise AND, and p[1] gets the "lower half" of the value of s (p[1] == 0x34).
The same applies for the other function you have there, but it's a long instead of a short, so we're assuming p in this case has enough space for all 32-bits (4x8) and we have to do some extra shifts too.

This code is used to serialize a 16-bit or 32-bit number into bytes (uint8_t). For example, to write them to disk, or to send them over a network connection.
A 16-bit value is split into two parts. One containing the most-significant (upper) 8 bits, the other containing least-significant (lower) 8 bits. The most-significant byte is stored first, then the least-significant byte. This is called big endian or "network" byte order. That's why the functions are named tonet_.
The same is done for the four bytes of a 32-bit value.
The & 0xff operations are actually redundant: when a 16-bit or 32-bit value is converted to an 8-bit value, only the lower 8 bits (0xff) are kept, so the masking happens implicitly.
The bit-shifts are used to move the needed byte into the lowest 8 bits. Consider the bits of a 32-bit value:
AAAAAAAABBBBBBBBCCCCCCCCDDDDDDDD
The most significant byte is the 8 bits named A. In order to move them into the lowest 8 bits, the value has to be right-shifted by 24.

The names of the functions are a big hint... "to net short" and "to net long".
If you think about decimal... say we have two pieces of paper so small we can only write one digit on each of them; we can therefore use both to record all the numbers from 0 to 99: 00, 01, 02... 08, 09, 10, 11... 18, 19, 20... 98, 99. Basically, one piece of paper holds the "tens" column (given we're in base 10 for decimal), and the other the "units".
Memory works like that, where each byte can store a number from 0..255, so we're working in base 256. If you have two bytes, one of them's going to be the "two-hundred-and-fifty-sixes" column, and the other the "units" column. To work out the combined value, you multiply the former by 256 and add the latter.
On paper we write numbers with the more significant ones on the left, but on a computer it's not clear if a more significant value should be in a higher or lower memory address, so different CPU manufacturers picked different conventions.
Consequently, some computers store 258 - which is 1 * 256 + 2 - as low=1 high=2, while others store low=2 high=1.
What these functions do is rearrange the memory from whatever your CPU happens to use to a predictable order - namely, the more significant value(s) go into the lower memory addresses, and eventually the "units" value is put into the highest memory address. This is a consistent way of storing the numbers that works across all computer types, so it's great when you want to transfer the data over the network; if the receiving computer uses a different memory ordering for the base-256 digits, it can move them from network byte ordering to whatever order it likes before interpreting them as CPU-native numbers.
So, "to net short" packs the most significant 8 bits of s into p[0] - the lower memory address. It didn't actually need the & 0xff: after taking the 16 input bits and shifting them 8 to the "right", all the left-hand 8 bits are guaranteed 0 anyway, which is the effect of & 0xFF - for example:
1010 1111 1011 0111 // = 0xAFB7, i.e. 175 * 256 + 183 in base-256 terms
>>8 0000 0000 1010 1111 // move right 8, with left-hand values becoming 0
0xff 0000 0000 1111 1111 // we're going to and the above with this
& 0000 0000 1010 1111 // the bits that were on in both the above 2 values
// (the and never changes the value)

Related

fastest way to convert int8 to int7

I've a function that takes int8_t val and converts it to int7_t.
//Bit [7] reserved
//Bits [6:0] = signed -64 to +63 offset value
// user who calls this function will use it correctly (-64 to +63)
uint8_t func_int7_t(int8_t val){
uint8_t val_6 = val & 0b01111111;
if (val & 0x80)
val_6 |= 0x40;
//...
//do stuff...
return val_6;
}
What is the best and fastest way to manipulate the int8 into an int7? Did I do it efficiently, or is there a better way?
The target is ARM Cortex M0+ if that matters
UPDATE:
After reading different answers I can say the question was asked wrongly (or my code in the question is what gave wrong assumptions to others). My intention was to convert an int8 to an int7.
So it can be done by doing nothing, because:
8bit:
63 = 0011 1111
62 = 0011 1110
0 = 0000 0000
-1 = 1111 1111
-2 = 1111 1110
-63 = 1100 0001
-64 = 1100 0000
7bit:
63 = 011 1111
62 = 011 1110
0 = 000 0000
-1 = 111 1111
-2 = 111 1110
-63 = 100 0001
-64 = 100 0000
The fastest way is probably:
uint8_t val_7 = (val & 0x3f) | ((val >> 1) & 0x40);
val & 0x3f gets the 6 lower bits (truncation), and ((val >> 1) & 0x40) moves the sign bit from position 8 down to position 7.
The advantage of not using an if is shorter code (you can even use an arithmetic if) and code without a branch, i.e. no break in the instruction sequence.
To clear the reserved bit, just
return val & 0x7f;
To leave the reserved bit exactly like how it was from input, nothing needs to be done
return val;
and the low 7 bits will contain the values in [-64, 63], because in two's complement, down-casting is done by simple truncation: the value remains the same. That's what happens in an assignment like (int8_t)some_int_value.
There's no such thing as 0bX1100001. There's no undefined bit in machine language. That state only exists in hardware, like the high-Z state or undefined state in Verilog or other hardware description languages.
Use a bit-field to narrow the value, and let the compiler choose whatever sequence of shifts and/or masks is most efficient for that on your platform.
inline uint8_t to7bit(int8_t x)
{
struct {uint8_t x:7;} s;
return s.x = x;
}
If you are not concerned about what happens to out-of-range values, then
return val & 0x7f;
is enough. This correctly handles values in the range -64 <= val <= 63.
You haven't said how you want to handle out-of-range values, so I have nothing to say about that.
Updated to add: The question has been updated so stipulate that the function will never be called with out-of-range values. So this method qualifies unambiguously as "best and fastest".
The user who calls this function knows they should pass data from -64 to +63.
So not considering any other values, the really fastest thing you can do is not doing anything at all!
You have a 7-bit value stored in eight bits. Any value within the specified range will have bit 7 equal to bit 6, and when you process the 7-bit value, you just ignore the MSB (of the 8-bit value), whether it is set or not, e.g.:
for(unsigned int bit = 0x40; bit; bit >>= 1)
// NOT: 0x80!
std::cout << (value & bit);
The other way round is more critical: whenever you receive these seven bits via some communication channel, then you need to do manual sign extension for eight (or more) bits to be able to correctly use that value.

How do I make a bit mask that only masks certain parts (indices) of 32 bits?

I am currently working on a programming assignment in which I have to mask only a certain index of the whole 32-bit number (EX: If I take 8 4-bit numbers into my 32-bit integer, I would have 8 indices with 4 bits in each). I want to be able to print only part of the bits out of the whole 32 bits, which can be done with masking.
If the bits were to only be in one place, I would not have a problem, for I would just create a mask that puts the 1s in a set place (EX: 00000000 00000000 00000000 00000001). However, I need to be able to shift the mask throughout only one index to print each bit (EX: I want to loop through the first index with my 0001 mask, shifting the 1 left every time, but I do not want to continue after the third bit of that index).
I know that I need a loop to accomplish this; however, I am having a difficult time wrapping my head around how to complete this part of my assignment. Any tips, suggestions, or corrections would be appreciated. Also, I'm sorry if this was difficult to understand, but I could not find a better way to word it. Thanks.
First of all, about representation. You need binary numbers to represent bits and masks, but there were no binary literals in the C/C++ languages before C++14. So before C++14 you had to use hexadecimal or octal to represent your binaries, i.e.
0000 1111 == 0x0F
1111 1010 == 0xFA
since c++14 you can use
0b00001111;
Now, if you shift your binary mask left or right, you will have the following pictures
00001111 (0xF) << 2 ==> 00111100 (0x3C)
00001111 (0xF) >> 2 ==> 00000011 (0x03)
Now, suppose you have a number in which you are interested in bits 4 to 7 (4 bits):
int bad = 0x0BAD; // == 0000 1011 1010 1101
you can create a mask as
int mask = 0x00F0; // == 0000 0000 1111 0000
and do bitwise and
int result = bad & mask; // ==> 0000 0000 1010 0000 (0x00A0)
This masks the 4 bits in the middle of the word, but it will print as 0xA0, probably not what you would expect. To print it as 0xA you would need to shift the result 4 bits right: result >> 4. I prefer doing it in a slightly different order, shifting bad first and then masking:
int result = (bad >> 4) & 0xF;
I hope the above will help you to understand bits.

Why does left shift and right shift in the same statement yields a different result?

Consider the following Example:
First Case:
short x=255;
x = (x<<8)>>8;
cout<<x<<endl;
Second Case:
short x=255;
x = x<<8;
x = x>>8;
cout<<x<<endl;
The output in the first case is 255, whereas in the second case it is -1. -1 as output does make sense, as C++ implementations typically do an arithmetic right shift on signed values. Here are the intermediate values of x that produce -1:
x: 0000 0000 1111 1111
x<<8:1111 1111 0000 0000
x>>8:1111 1111 1111 1111
Why doesn't the same mechanism happen in the first case?
The difference is a result of two factors.
The C++ standard does not specify the exact sizes of the integral types, only the minimum size of each integer type. On your platform, a short is a 16-bit value, and an int is at least a 32-bit value.
The second factor is two's complement arithmetic.
In your first example, the short value is naturally promoted to an int, which is at least 32 bits, so the left and the right shift operates on an int, before getting converted back to a short.
In your second example, after the first left shift operation the resulting value is once again converted back to a short, and due to two's complement arithmetic, it ends up being a negative value. The right shift ends up sign-extending the negative value, resulting in the final result of -1.
What you just observed is sign extension:
Sign extension is the operation, in computer arithmetic, of increasing the number of bits of a binary number while preserving the number's sign (positive/negative) and value. This is done by appending digits to the most significant side of the number, following a procedure dependent on the particular signed number representation used.
For example, if six bits are used to represent the number "00 1010" (decimal positive 10) and the sign extend operation increases the word length to 16 bits, then the new representation is simply "0000 0000 0000 1010". Thus, both the value and the fact that the value was positive are maintained.
If ten bits are used to represent the value "11 1111 0001" (decimal negative 15) using two's complement, and this is sign extended to 16 bits, the new representation is "1111 1111 1111 0001". Thus, by padding the left side with ones, the negative sign and the value of the original number are maintained.
You left-shift all the way to the point where your short becomes negative, and when you then shift back right, you get the sign extension.
This doesn't happen in the first case, as the shift isn't applied to a short: x undergoes integral promotion to int before the shift, and the result only gets converted back to short after it has already been shifted back:
on the stack: 0000 0000 0000 0000 0000 0000 1111 1111
<<8
on the stack: 0000 0000 0000 0000 1111 1111 0000 0000
>>8
on the stack: 0000 0000 0000 0000 0000 0000 1111 1111
convert to short: 0000 0000 1111 1111

How do you copy certain bits from a variable in c to another variable?

Say I have a long 64 bit integer that starts with these bits:
0100 0000 0110 1101 .... .... ....
And I want a specific integer to hold this value:
0b10000000110
Which, as you can see, are bits 2 through 12 in the original number.
How can I do this with bitwise operations? Is this possible?
Something like this should work:
uint64_t input = <0100 0000 0110 1101 .... .... ....>
uint64_t mask = (uint64_t)0x7FF << 52;
uint64_t output = (input & mask) >> 52;
0x7ff is eleven bits: 11111111111. Shift it left 52 bits to get it where you want, use it to mask the input value, and shift the return value back 52 bits.

selective access to bits on datatypes with C++

I'm using C++ for hardware-based model design with SystemC. SystemC as a C++ extension introduces specific datatypes useful for signal and byte descriptions.
How can I access the first bits of a datatype in general, like:
sc_bv<16> R0;
or access the first four bits of tmp.
int my_array[42];
int tmp = my_array[1];
sc_bv is a bit-vector data type that stores binary sequences. Now I want, e.g., the first four bits of that data type. My background is C# and Java, therefore I miss some of the OOP and Reflection based API constructs in general. I need to perform conversion on this low-level stuff. Useful introductory material would help a lot.
Thanks :),
wishi
For sc_bv, you can use the indexing operator []
For the int, just use normal bitwise operations with constants, e.g. the least significant bit in tmp is tmp & 1
I can't really speak for SystemC (sounds interesting though). In normal C you'd read out the lower four bits with a mask like so:
temp = R0 & 0xf;
and write into only the lower four bits (assuming a 32-bit register, and temp<16) like so:
R0 = (R0 & 0xfffffff0) | temp;
To access the first four bits of tmp (I assume you mean the four highest bits), i.e. to get their values, you use bit masks. So if you want to know whether, for example, the second-highest bit is set, you do the following:
int second_bit = (tmp & 0x40000000) >> 30;
now second_bit is 1 if the bit is set and zero otherwise. The idea behind this is the following:
Imagine tmp is (in binary)
1101 0000 0000 0000 0000 0000 0000 0000
Now you use bitwise AND ( the & ) with the following value
0100 0000 0000 0000 0000 0000 0000 0000 // which is 0x40000000 in hex
ANDing produces a 1 on the given bit if and only if both operands have corresponding bits set (they are both 1). So the result will be:
0100 0000 0000 0000 0000 0000 0000 0000
Then you shift this 30 bits to the right, which makes it be:
0000 0000 0000 0000 0000 0000 0000 0001 // which is 1
Note that if the original value had the tested bit zero, the result would be zero.
This way you can test any bit you like; you just need to provide the correct mask. Note that I assumed here that int is 32 bits wide, which should be true in most cases.
You will have to know a bit more about sc_bv to make sure you get the right information. Also, when you say the "first four bytes" I assume you mean the "first four bits." However, that is misleading as well, because you really want to delineate between the low-order and high-order bits.
In any event, you use the C bitwise operators for this kind of thing. However, you will need to know the size of the integer values AND the "endian-ness" of the runtime architecture to get that right.
But, if you REALLY want just the first four bits, then you would do something like this...
inline unsigned char
first_4_bits(void const * ptr)
{
return (*reinterpret_cast<unsigned char const *>(ptr) & 0xf0) >> 4;
}
and that will grab the very first 4 bits of what it being pointed at. So, if the first byte pointed-to is 0x38, then this function will return the first 4 bits, so the result will be 3.