Bit-shifting a std_logic_vector while keeping precision, and converting to signed - bit-manipulation

In VHDL I want to take a 14-bit input and append '00' on the end to give me a 16-bit number (the 14-bit input multiplied by 4), and then put this into a 17-bit signed variable such that it is positive (the input is always positive). How should I go about this?
like this? shiftedInput <= to_signed('0' & input & '00', 17);
Or maybe like this? shiftedInput <= to_signed(input sll 2, 17);
Or this? shiftedInput <= to_signed(input & '00', 17);
Does it see that the std_logic_vector it's getting is 16 bits and the signed variable is 17 bits, and therefore assume the most significant bit (the sign bit) is 0?
Or do I have to do this? shiftedInput <= to_signed('0' & input sll 2, 17);
e.g. If I read in the 14-bit number 17 as a std_logic_vector [i.e. (00 0000 0001 0001)] it should be converted to the signed number +68 [i.e. (0 0000 0000 0100 0100)].

std_logic_vector is compatible with the signed type of numeric_std. So the right tool is the type conversion signed (not to_signed, which converts integers to vectors):
shiftedInput <= signed('0' & input & "00");
should do it. Note the "00" instead of your '00': bit strings are double-quoted while single bits are single-quoted.

Related

C/C++ Bitwise Operations not resulting in expected output?

I'm currently working on bitwise operations but I am confused right now... Here's the scoop and why
I have a byte 0xCD in bits this is 1100 1101
I am shifting the bits left 7, then I'm saying & 0xFF since 0xFF in bits is 1111 1111
unsigned int bit = (0xCD << 7) & 0xFF<<7;
Now I would assume that both 0xCD and 0xFF get shifted to the left 7 times and the remaining bit would be 1 & 1 = 1, but I'm not getting that for output. I would also assume that shifting by 6 would give me bits 0 & 1 = 0, but again I'm getting a number above 1, like 205. Is there something incorrect about the way I am trying to process bit shifting in my head? If so, what is it that I am doing wrong?
Code Below:
unsigned char byte_now = 0xCD;
printf("Bits for byte_now: 0x%02x: ", byte_now);
/*
* We want to get the first bit in a byte.
* To do this we will shift the bits over 7 places for the last bit
* we will compare it to 0xFF since it's (1111 1111) if bit&1 then the bit is one
*/
unsigned int bit_flag = 0;
int bit_pos = 7;
bit_flag = (byte_now << bit_pos) & 0xFF;
printf("%d", bit_flag);
Is there something incorrect about the way I am trying to process bit shifting in my head?
There seems to be.
If so, what is it that I am doing wrong?
That's unclear, so I offer a reasonably full explanation.
In the first place, it is important to understand that C does not perform any arithmetic directly on integers smaller than int. Consider, then, your expression byte_now << bit_pos. The integer promotions are performed on the operands, resulting in the left operand being converted to the int value 0xCD. The result has the same pattern of least-significant value bits as byte_now, but also a bunch of leading zero bits.
Left shifting the result by 7 bits produces the bit pattern 110 0110 1000 0000, equivalent to 0x6680. You then perform a bitwise and operation on the result, masking off all but the least-significant 8 bits, thus yielding 0x80. What happens when you assign that to bit_flag depends on the type of that variable, but if it is an integer type that is either unsigned or has more than 7 value bits then the assignment is well-defined and value-preserving. Note that it is bit 7 that is nonzero, not bit 0.
The type of bit_flag is more important when you pass it to printf(). You've paired it with a %d field descriptor, which is correct if bit_flag has type int and incorrect otherwise. If bit_flag does have type int, then I would expect the program to print 128.
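If the goal was to test a single bit of the byte, shifting right (rather than left) before masking does it. A minimal sketch, assuming you want the most significant bit of the 8-bit value:
#include <stdio.h>

int main(void) {
    unsigned char byte_now = 0xCD;   /* 1100 1101 */
    int bit_pos = 7;                 /* most significant bit of the byte */

    /* Shift the wanted bit down to position 0, then mask off everything else. */
    unsigned int bit_flag = (byte_now >> bit_pos) & 1u;

    printf("bit %d of 0x%02x is %u\n", bit_pos, byte_now, bit_flag); /* prints 1 */
    return 0;
}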

How do I make a bit mask that only masks certain parts (indices) of 32 bits?

I am currently working on a programming assignment in which I have to mask only a certain index of the whole 32-bit number (EX: if I pack 8 4-bit numbers into my 32-bit integer, I would have 8 indices with 4 bits in each). I want to be able to print only part of the bits out of the whole 32 bits, which can be done with masking. If the bits were all in one place, I would not have a problem, for I would just create a mask that puts the 1s in a set place (EX: 00000000 00000000 00000000 00000001). However, I need to be able to shift the mask throughout only one index to print each bit (EX: I want to loop through the first index with my 0001 mask, shifting the 1 left every time, but I do not want to continue past the third bit of that index). I know that I need a loop to accomplish this; however, I am having a difficult time wrapping my head around how to complete this part of my assignment. Any tips, suggestions, or corrections would be appreciated. Also, I'm sorry if this was difficult to understand, but I could not find a better way to word it. Thanks.
First of all, about representation: you need binary numbers to represent bits and masks. There are no binary literals in C/C++ before C++14, so you had to use hexadecimal or octal to write your binary patterns, e.g.
0000 1111 == 0x0F
1111 1010 == 0xFA
Since C++14 you can use
0b00001111;
Now, if you shift your binary mask left or right, you get the following pictures:
00001111 (0xF) << 2 ==> 00111100 (0x3C)
00001111 (0xF) >> 2 ==> 00000011 (0x03)
Now, suppose you have a number in which you are interested in bits 4 to 7 (4 bits):
int bad = 0x0BAD; // == 0000 1011 1010 1101
you can create a mask as
int mask = 0x00F0; // == 0000 0000 1111 0000
and do bitwise and
int result = bad & mask; // ==> 0000 0000 1010 0000 (0x00A0)
This masks the 4 bits in the middle of the word, but it will print as 0xA0, probably not what you would expect. To print it as 0xA you would need to shift the result 4 bits right: result >> 4. I prefer doing it in a slightly different order, shifting bad first and then masking:
int result = (bad >> 4) & 0xF;
I hope the above will help you to understand bits.
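Tying this back to the assignment, here is a minimal sketch (the field width, index numbering, and variable names are my own assumptions) that walks a one-bit mask across a single 4-bit index of a packed 32-bit value:
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t packed = 0x0000BAD0u; /* example packed value */
    int index = 1;                 /* which 4-bit field to look at */
    int width = 4;                 /* bits per field */
    int base  = index * width;     /* bit position where that field starts */

    /* Walk a single-bit mask across the chosen field only. */
    for (int i = 0; i < width; ++i) {
        uint32_t mask = (uint32_t)1 << (base + i);
        printf("bit %d of index %d is %d\n", i, index, (packed & mask) ? 1 : 0);
    }
    return 0;
}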

bitmasking and binary arithmetic

I'd like to know the science behind the following: a 32-bit value is shifted left 32 times in a 64-bit type, then a division is performed. Somehow the precision is contained within the last 32 bits, and in order to retrieve the value as a floating-point number, I can multiply by 1 over the max value of an unsigned 32-bit int.
phase = ((uint64) 44100 << 32) / 48000;
(phase & 0xffffffff) * (1.0f / 4294967296.0f);// == 0.918749988
the same as
(float)44100/48000;// == 0.918749988
(...)
If you lose precision when dividing two integer numbers, you should remember the remainder.
The remainder in C++ can be taken by doing 44100 % 48000 in your case.
Actually these are constants and it's completely clear that 44100/48000 == 0, so remainder is all you have.
Well, the remainder will even be (guess what) 44100!
The float type (imposed by the explicit cast) has only about 6-7 significant decimal digits, so the quotient is only accurate to roughly that many digits. That's why this type isn't valuable for much more than playing around.
The best way to get a value of a fixed integer type in which all bits are set, without having to count the 'f's in 0xffffffff, is to apply the ~ operator to 0. In your case, ~uint32_t(0).
Well, I should have said this in the beginning: 44100.0/48000 should give you the result you want. :P
This is the answer I was looking for:
Shifting left by N bits provides N bits in which to store the fractional (precision) value from a division.
Dividing the integer value represented by those bits by 2 to the power of the shift amount returns that fractional value.
e.g.
0000 0001 * 2^8 = 1 0000 0000 = 256 (base 10)
1 0000 0000 / 2 = 1000 0000 = 128 (base 10)
128 / 2^8 = 0.5
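Putting the pieces together, this is 32.32 fixed point: the upper 32 bits hold the integer part and the lower 32 bits hold the fraction scaled by 2^32. A minimal, self-contained sketch of the computation from the question (the variable names are mine):
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* 32.32 fixed point: the low 32 bits hold the fraction, scaled by 2^32. */
    uint64_t phase = ((uint64_t)44100 << 32) / 48000;

    uint32_t frac  = (uint32_t)(phase & 0xffffffffu);
    double   value = frac * (1.0 / 4294967296.0); /* divide by 2^32 */

    printf("%.9f\n", value); /* prints roughly 0.918750000 */
    return 0;
}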

bitwise shifts, unsigned chars

Can anyone explain verbosely what this accomplishes? I'm trying to learn C and am having a hard time wrapping my head around it.
void tonet_short(uint8_t *p, unsigned short s) {
    p[0] = (s >> 8) & 0xff;
    p[1] = s & 0xff;
}
void tonet_long(uint8_t *p, unsigned long l)
{
    p[0] = (l >> 24) & 0xff;
    p[1] = (l >> 16) & 0xff;
    p[2] = (l >> 8) & 0xff;
    p[3] = l & 0xff;
}
Verbosely, here it goes:
As a direct answer: both of them store the bytes of a variable inside an array of bytes, from left to right. tonet_short does that for unsigned short variables, which consist of 2 bytes; tonet_long does it for unsigned long variables, which consist of 4 bytes.
I will explain it for tonet_long, and tonet_short will just be the variation of it that you'll hopefully be able to derive yourself:
When unsigned variables are bit-shifted, their bits move towards the given side by the given number of positions, and the vacated bits are filled with zeros. For example:
unsigned char asd = 10; //which is 0000 1010 in base 2
asd <<= 2; //shifts the bits of asd 2 places towards the left
asd; //it is now 0010 1000, which is 40 in base 10
Keep in mind that this is for unsigned variables; the behavior may differ for signed variables.
The bitwise-and operator & compares the corresponding bits of its two operands and yields 1 (true) where both bits are 1, and 0 (false) where either or both are 0; it does this for each bit position. Example:
unsigned char asd = 10; //0000 1010
unsigned char qwe = 6; //0000 0110
asd & qwe; //0000 0010 <-- this is what it evaluates to, which is 2
Now that we know the bitwise-shift and bitwise-and, let's get to the first line of the function tonet_long:
p[0] = (l >> 24) & 0xff;
Here, since l is unsigned long (4 bytes in this explanation), (l >> 24) evaluates to the first 4 * 8 - 24 = 8 bits of the variable l, which is its first (most significant) byte. I can visualize the process like this:
abcd efgh ijkl mnop qrst uvwx yz.. .... //letters and dots stand for
//unknown zeros and ones
//shift this 24 times towards right
0000 0000 0000 0000 0000 0000 abcd efgh
Note that we do not change l itself; this is just the evaluation of l >> 24, which is temporary.
Then 0xff, which in binary is just 0000 0000 0000 0000 0000 0000 1111 1111, gets bitwise-ANDed with the shifted l. It goes like this:
0000 0000 0000 0000 0000 0000 abcd efgh
&
0000 0000 0000 0000 0000 0000 1111 1111
=
0000 0000 0000 0000 0000 0000 abcd efgh
Since a & 1 depends only on a, the result is simply a, and the same goes for the rest... It looks like a redundant operation here, and it really is. It will, however, matter for the other lines. For example, when you evaluate l >> 16, it looks like this:
0000 0000 0000 0000 abcd efgh ijkl mnop
Since we want only the ijkl mnop part, we have to discard abcd efgh, and that is done with the zeros that 0xff has in the corresponding bit positions.
I hope this helps, the rest happens like it does this far, so... yeah.
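To see the result concretely, here is a small, hypothetical test harness (mine, not from the question) that runs a known value through tonet_long and prints each stored byte:
#include <stdio.h>
#include <stdint.h>

void tonet_long(uint8_t *p, unsigned long l)
{
    p[0] = (l >> 24) & 0xff;
    p[1] = (l >> 16) & 0xff;
    p[2] = (l >> 8) & 0xff;
    p[3] = l & 0xff;
}

int main(void) {
    uint8_t buf[4];
    tonet_long(buf, 0x12345678UL);

    /* The bytes come out most significant first: 12 34 56 78 */
    for (int i = 0; i < 4; ++i)
        printf("buf[%d] = 0x%02x\n", i, buf[i]);
    return 0;
}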
These routines convert 16- and 32-bit values from native byte order to standard network (big-endian) byte order. They work by shifting and masking 8-bit chunks of the native value and storing them in order into a byte array.
If I see it right, it basically switches the order of bytes in the short and in the long ... (reverses the byte order of the number) and stores the result at an address which hopefully has enough space :)
explain verbosely - OK...
void tonet_short(uint8_t *p, unsigned short s) {
short is typically a 16-bit value (max: 0xFFFF)
The uint8_t is an unsigned 8-bit value, and p is a pointer to some number of unsigned 8-bit values (from the code we're assuming at least 2 sequential ones).
p[0] = (s >> 8) & 0xff;
This takes the "top half" of the value in s and puts it in the first element in the array p. So let's assume s==0x1234.
First s is shifted by 8 bits (s >> 8 == 0x0012), then it's ANDed with 0xFF and the result is stored in p[0] (p[0] == 0x12).
p[1] = s & 0xff;
Now note that when we did that shift, we never changed the original value of s, so s still holds 0x1234. Thus when we do this second line we simply do another bitwise AND, and p[1] gets the "lower half" of the value of s (p[1] == 0x34).
The same applies to the other function you have there, but it's a long instead of a short, so we're assuming p in this case has enough space for all 32 bits (4 x 8), and we have to do some extra shifts too.
This code is used to serialize a 16-bit or 32-bit number into bytes (uint8_t). For example, to write them to disk, or to send them over a network connection.
A 16-bit value is split into two parts. One containing the most-significant (upper) 8 bits, the other containing least-significant (lower) 8 bits. The most-significant byte is stored first, then the least-significant byte. This is called big endian or "network" byte order. That's why the functions are named tonet_.
The same is done for the four bytes of a 32-bit value.
The & 0xff operations are actually redundant here: when a 16-bit or 32-bit value is converted to an 8-bit value, it is implicitly truncated to its lower 8 bits, which has the same effect as & 0xff.
The bit-shifts are used to move the needed byte into the lowest 8 bits. Consider the bits of a 32-bit value:
AAAAAAAABBBBBBBBCCCCCCCCDDDDDDDD
The most significant byte consists of the 8 bits named A. To move them into the lowest 8 bits, the value has to be right-shifted by 24.
The names of the functions are a big hint... "to net short" and "to net long".
If you think about decimal... say we have two pieces of paper so small we can only write one digit on each of them, we can therefore use both to record all the numbers from 0 to 99: 00, 01, 02... 08, 09, 10, 11... 18, 19, 20...98, 99. Basically, one piece of paper holds the "tens" column (given we're in base 10 for decimal), and the other the "units".
Memory works like that where each byte can store a number from 0..255, so we're working in base 256. If you have two bytes, one of them's going to be the "two-hundred-and-fifty-sixes" column, and the other the "units" column. To work out the combined value, you multiply the former by 256 and add the latter.
On paper we write numbers with the more significant ones on the left, but on a computer it's not clear if a more significant value should be in a higher or lower memory address, so different CPU manufacturers picked different conventions.
Consequently, some computers store 258 - which is 1 * 256 + 2 - as low=1 high=2, while others store low=2 high=1.
What these functions do is rearrange the memory from whatever your CPU happens to use to a predictable order - namely, the more significant value(s) go into the lower memory addresses, and eventually the "units" value is put into the highest memory address. This is a consistent way of storing the numbers that works across all computer types, so it's great when you want to transfer the data over the network; if the receiving computer uses a different memory ordering for the base-256 digits, it can move them from network byte ordering to whatever order it likes before interpreting them as CPU-native numbers.
So, "to net short" packs the most significant 8 bits of s into p[0] - the lower memory address. It didn't actually need to & 0xff as after taking the 16 input bits and shifting them 8 to the "right", all the left-hand 8 bits are guaranteed 0 anyway, which is the affect from & 0xFF - for example:
     1010 1111 1011 0111 // s = 0xAFB7 = 10*16^3 + 15*16^2 + 11*16 + 7 = 44983 decimal
>>8  0000 0000 1010 1111 // move right 8, with the left-hand bits becoming 0
0xff 0000 0000 1111 1111 // we're going to AND the above with this
&    0000 0000 1010 1111 // the bits that were on in both of the above two values
     // (the AND doesn't change the value here)
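For completeness, the reverse direction just shifts the bytes back into place. A minimal sketch; the name fromnet_short is my own, not part of the original code:
#include <stdint.h>

/* Reassemble a 16-bit value from two bytes stored most significant first. */
unsigned short fromnet_short(const uint8_t *p)
{
    return (unsigned short)(((unsigned)p[0] << 8) | p[1]);
}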

C++ Int bit manipulating is 2UL = 10UL?

I have a quick question.
I've been playing around with bit manipulation in C/C++ for a while and I recently discovered that when I compare 2UL and 10UL to a regular unsigned int they seem to return the same bit.
For example,
#define JUMP 2UL
#define FALL 10UL
unsigned int flags = 0UL;
this->flags |= FALL;
//this returns true
this->is(JUMP);
bool Player::is(const unsigned long &isThis)
{
    return ((this->flags & isThis) == isThis);
}
Please confirm whether 2U equals 10U here, and if so, how would I get around it if I need more than 8(?) flags in a single unsigned integer?
Kind regards,
-Markus
Of course. 10ul is 1010 in binary and 2 is 10. Therefore, doing x |= 10 sets the second bit too.
You probably wanted to use 0x10 and 0x2 as your flags. These would work as you expect.
As an aside: a single digit in hex notation represents 4 bits, not 8.
JUMP, 2: 0010
FALL, 10: 1010
FALL & JUMP = JUMP = 0010
Decimal 2 in binary is 0010, whereas decimal 10 is binary 1010. If you bitwise-AND them (2 & 10), that yields binary 0010, or decimal 2. So 10 & 2 is indeed equal to 2. Maybe your intention was to test for 1ul << 2 and 1ul << 10, which would be bits number 2 and 10 respectively. Or maybe you meant to use hexadecimal 10 (decimal 16, binary 10000), which is written 0x10.
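If the intent was one independent flag per bit, the usual pattern is to define each flag as 1 shifted into its own position, which also scales well past 8 flags. A minimal sketch reusing the question's flag names (DUCK is a made-up extra):
#include <stdio.h>

#define JUMP (1UL << 0) /* 0001 */
#define FALL (1UL << 1) /* 0010 */
#define DUCK (1UL << 2) /* 0100 */

int main(void) {
    unsigned long flags = 0UL;

    flags |= FALL; /* set only the FALL bit */
    printf("%d\n", (flags & JUMP) == JUMP); /* prints 0: JUMP is not set */
    printf("%d\n", (flags & FALL) == FALL); /* prints 1: FALL is set */
    return 0;
}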