The MIDI standard lets you encode delta-time durations as integer values (representing ticks).
For example, I have a delta time of 960.
The binary value of 960 is 1111000000.
The thing is that the MIDI standard doesn't encode the number as a plain 16-bit value.
It encodes it on 14 bits and then adds a 1 and a 0 in front of the two 7-bit halves to create another 16-bit value, 1 meaning that there is a following byte and 0 meaning that it is the last byte.
My question is: how can I easily calculate 960 as a binary value coded on 14 bits?
Cheers
In the bytes that make up a delta time, the most significant bit specifies whether another byte with more bits is following.
This means that a 14-bit value like 00001111000000 is split into two parts, 0000111 and 1000000, and encoded as follows:
1 0000111   0 1000000
^ ^^^^^^^   ^ ^^^^^^^
|    |      |    |
|    |      |    lower 7 bits
|    |      last byte
|    upper 7 bits
more bytes follow
In C, a 14-bit value could be encoded like this:
int value = 960;
write_byte(0x80 | ((value >> 7) & 0x7f));  /* upper 7 bits, continuation bit set */
write_byte(0x00 | ((value >> 0) & 0x7f));  /* lower 7 bits, continuation bit clear */
(Also see the function var_value() in arecordmidi.c.)
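For delta times that may need more than two bytes, the same scheme generalizes: emit 7 bits per byte, most significant group first, with the top bit set on every byte except the last. A minimal sketch in C (write_var_len is a name chosen here for illustration; write_byte again stands in for whatever routine emits one byte):

#include <stdio.h>

/* stand-in for whatever routine emits one byte to the output */
static void write_byte(unsigned char b)
{
    printf("%02X ", (unsigned)b);
}

static void write_var_len(unsigned long value)
{
    unsigned char buf[4];
    int i = 0;

    /* collect 7-bit groups, least significant group first */
    do {
        buf[i++] = value & 0x7f;
        value >>= 7;
    } while (value > 0 && i < 4);

    /* emit most significant group first; set the continuation
       bit on every byte except the last */
    while (i-- > 0)
        write_byte(buf[i] | (i > 0 ? 0x80 : 0x00));
}

int main(void)
{
    write_var_len(960);  /* prints: 87 40 */
    return 0;
}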
You can specify any number of bits as the length of a field inside a struct, like so:
struct customInt {
    unsigned int n:14;  // 14-bit unsigned integer field
};
Or you can write your own functions that take care of these kinds of specific calculations/values.
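A minimal sketch of how such a struct behaves (note that assignments to the field simply wrap modulo 2^14):

#include <stdio.h>

struct customInt {
    unsigned int n:14;     /* 14-bit unsigned integer field */
};

int main(void)
{
    struct customInt v;
    v.n = 960;             /* fits: 960 < 16384 */
    v.n = 20000;           /* wraps: 20000 % 16384 = 3616 */
    printf("%u\n", (unsigned)v.n);  /* prints 3616 */
    return 0;
}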
If you are using unsigned integers, just do the calculations normally.
Start with
value = 960;
To truncate the value to 14 bits, do
value &= 0x3FFF;
To add binary 10 at the front, do
value |= 0x8000;
(Note that this gives the literal "10 + 14 bits" layout you describe; in the actual MIDI file format the two marker bits sit at bit 15 and bit 7, one at the top of each byte, as the first answer above shows.)
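A quick way to check the result, written out as a small C program:

#include <stdio.h>

int main(void)
{
    unsigned int value = 960;   /* binary 1111000000 */
    value &= 0x3FFF;            /* keep the low 14 bits */
    value |= 0x8000;            /* put binary 10 in the top two bits */
    printf("0x%04X\n", value);  /* prints 0x83C0 */
    return 0;
}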
I came across the following code to convert 16-bit numbers to 10-bit numbers and store them in an integer. Could anyone maybe explain to me what exactly is happening with the AND 0x03?
// Convert the data to 10-bits
int xAccl = (((data[1] & 0x03) * 256) + data[0]);
if (xAccl > 511) {
    xAccl -= 1024;
}
Link to where I got the code: https://www.instructables.com/id/Measurement-of-Acceleration-Using-ADXL345-and-Ardu/
The bitwise operator & applies a mask, so in this case, it clears the 6 highest bits of data[1].
Basically, this code computes the value modulo 1024 (for unsigned values).
data[1] takes the 2nd byte; & 0x03 masks that byte with binary 11, keeping just 2 bits; * 256 is the same as << 8, i.e. it pushes those 2 bits into the 9th and 10th positions; adding data[0] then combines the two bytes (personally I'd have used |, not +).
So: xAccl now holds the full 10 bits, with data[0] as the low byte and data[1] as the high byte, i.e. the bytes arrive in little-endian order.
The > 511 test is a sign check; essentially, it is saying "if the 10th bit is set, treat the entire thing as a negative integer, as though we'd used 10-bit two's-complement rules".
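The same wrap-around can be written as a small helper. A sketch in C, assuming two's-complement conversion behavior (sign_extend and the sample bytes are chosen here for illustration):

#include <stdio.h>
#include <stdint.h>

/* interpret the low n bits of x as an n-bit two's-complement value */
static int32_t sign_extend(uint32_t x, unsigned n)
{
    uint32_t sign_bit = 1u << (n - 1);
    x &= (1u << n) - 1;                 /* keep the low n bits */
    return (int32_t)((x ^ sign_bit) - sign_bit);
}

int main(void)
{
    uint8_t data[2] = { 0x00, 0x02 };   /* raw reading: 10-bit value 0x200 */
    uint32_t raw = ((uint32_t)(data[1] & 0x03) << 8) | data[0];
    printf("%d\n", (int)sign_extend(raw, 10));  /* prints -512 */
    return 0;
}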
uint8_t payload[] = { 0, 0 };
pin5 = analogRead(A0);
payload[0] = pin5 >> 8 & 0xff;
payload[1] = pin5 & 0xff;
This is code from the XBee library published by andrewrapp on GitHub. I was wondering how the bitwise operation worked.
So suppose pin 5 gets an analog value of 256, which, as I am using a Particle Photon board, comes in a 12-bit format as 000100000000. Does payload[0] get the last eight bits, i.e. 00000000, or does it get the value after shifting, i.e. 00000001? And what then becomes the value in payload[1]?
I want to add a 4-bit code of my own, using a bitmask, to the first four bits in the array, followed by the data bits. Can I & payload[1] with 0x1 for this?
The code in your example reverses the content of pin5's two bytes into the payload array: the most significant byte is placed into payload[0] and the least significant byte is placed into payload[1].
If, for example, pin5 is 0x0A63, then payload would contain 0x63, 0x0A.
If pin5 has a 12-bit value, you can use its four most significant bits to store a four-bit value of your own. To make sure the upper bits are zeroed out, use a 0x0F mask instead of 0xFF:
payload[0] = pin5 >> 8 & 0x0f;
//                       ^^^^
Now you can move your data into the upper four bits with | operator:
payload[0] |= myFourBits << 4;
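Putting the masking and the OR together, a minimal sketch in C (the values of pin5 and myFourBits are arbitrary examples):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t pin5 = 0x0A63;     /* example 12-bit reading */
    uint8_t  myFourBits = 0x9;  /* your own 4-bit code */
    uint8_t  payload[2];

    payload[0] = (pin5 >> 8) & 0x0f;   /* upper 4 data bits */
    payload[0] |= myFourBits << 4;     /* your code in the upper nibble */
    payload[1] = pin5 & 0xff;          /* lower 8 data bits */

    /* prints: 9A 63 */
    printf("%02X %02X\n", (unsigned)payload[0], (unsigned)payload[1]);
    return 0;
}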
So you want to understand what the stated operations do. Let's see if we can clarify this by examining the pin5 variable and subdividing it into 2 parts:
pin5     000100000000
         MMMMLLLLLLLL

M = 4 most significant bits, L = 8 least significant bits
payload[0] takes the result of some operations on pin5:
pin5     000100000000
>> 8     000000000001   shifts all bits 8 positions to the right
         00000000MMMM   and fills the left part with zeroes
so you have the originally leading 4 bits right-aligned now, on which an additional operation is performed:
         000000000001
& 0xFF   000011111111   ANDing with 0xFF
         000000000001
Right-shifting a 12-bit variable by 8 positions leaves 4 significant positions; the leading 8 bits will always be 0. 0xFF is binary 11111111, i.e. it represents 8 set bits. So what is done here is ANDing a value that only occupies the 4 least significant bits with a mask covering the 8 least significant bits, in order to make sure that the 4 most significant bits get erased.
         00000000xxxx   potentially set bits (you have 0001)
& 0xFF   000011111111
         00000000xxxx   result
             0000xxxx   stored in an 8-bit variable

payload[0] = 00000001 in your case
In this case, the ANDing operation is not useful and a complete waste of time, because ANDing any variable with 0xFF never changes its 8 least significant bits in any way, and since the 4 most significant bits are never set anyway, there simply is no point in this operation.
(Technically, because the source is a 12-bit variable (presumably it is a 16-bit variable, though, with only 12 significant (relevant) binary digits), 0x0F would have sufficed as the ANDing mask. Can you see why? But even this would simply be a wasted CPU cycle.)
payload[1] also takes the result of an operation on pin5:
pin5     MMMMLLLLLLLL   potentially set bits
& 0xFF   000011111111   mask to keep LLLLLLLL only
         0000LLLLLLLL   result (you have 00000000)
             xxxxxxxx   stored in an 8-bit variable

payload[1] = 00000000 in your case
In this case, ANDing with 11111111 makes perfect sense, because it discards MMMM, which in your case is 0001.
So, all in all, your value
pin5     000100000000
         MMMMLLLLLLLL
is split such, that payload[0] contains MMMM (0001 = decimal 1), and payload[1] contains LLLLLLLL (00000000 = decimal 0).
If the input was
pin5     101110010001
         MMMMLLLLLLLL
instead, you would find in payload[0]: 1011 (decimal 8+2+1 = 11), and in payload[1]: 10010001 (decimal 128+16+1 = 145).
You would interpret this result as decimal 11 * 256 + 145 = 2961, the same result you obtain when converting the original 101110010001 from binary into decimal, for instance using calc.exe in Programmer mode (Alt+3), if you are using Windows.
Likewise, your original data is being interpreted as 1 * 256 + 0 = 256, as expected.
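The split and the reassembly can be verified in a few lines of C, using the second set of values from above:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t pin5 = 0xB91;            /* 101110010001 binary = 2961 */
    uint8_t  payload[2];

    payload[0] = (pin5 >> 8) & 0xff;  /* MMMM     -> 0x0B = 11  */
    payload[1] = pin5 & 0xff;         /* LLLLLLLL -> 0x91 = 145 */

    /* recombine: 11 * 256 + 145 = 2961 */
    printf("%d\n", payload[0] * 256 + payload[1]);
    return 0;
}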
In C++, I have the following code:
int x = -3;
x &= 0xffff;
cout << x;
This produces
65533
But if I remove the negative, so I have this:
int x = 3;
x &= 0xffff;
cout << x;
I simply get 3 as a result.
Why does the first result not produce a negative number? I would expect that -3 would be sign-extended to 16 bits, which should still give a two's-complement negative number, considering all those extended bits would be 1. Consequently, the most significant bit would be 1 too.
It looks like your system uses 32-bit ints with two's complement representation of negatives.
Constant 0xFFFF covers the least significant two bytes, while the upper two bytes are zero.
The value of -3 is 0xFFFFFFFD, so masking it with 0x0000FFFF you get 0x0000FFFD, or 65533 in decimal.
Positive 3 is 0x00000003, so masking with 0x0000FFFF gives you 3 back.
You would get the result that you expect if you specify 16-bit data type, e.g.
int16_t x = -3;
x &= 0xffff;
cout << x;
In your case, int is more than 2 bytes. You are probably running on a modern CPU, where these days an integer is usually 4 bytes (32 bits).
If you look at the way the system stores negative numbers, you will see that it's a two's-complement representation. And since your mask 0xFFFF covers only the last 2 bytes, you get only a part of it.
Your 2 options:
use short instead of int. It's usually half the size of an int and will be only 2 bytes
use a bigger mask like 0xFFFFFFFF so that it covers all the bits of your integer
NOTE: I say "usually" because the number of bits in your int and short depends on your CPU and compiler.
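Both options can be checked side by side. A minimal C sketch, assuming a two's-complement machine with 32-bit int (the fixed-width types from <stdint.h> avoid the "usually"):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int32_t wide = -3;
    wide &= 0xFFFFFFFF;           /* option 2: mask covers all 32 bits */
    printf("%d\n", (int)wide);    /* prints -3 */

    int16_t narrow = -3;
    narrow &= 0xFFFF;             /* option 1: 16-bit variable, 16-bit mask */
    printf("%d\n", (int)narrow);  /* prints -3 */
    return 0;
}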
I have a 32-bit integer that I treat as a bitfield. I'm interested in the value of the bits with an index of the form 3n, where n ranges from 0 to 6 (every third bit between 0 and 18). I'm not interested in the bits with an index of the form 3n+1 or 3n+2.
I can easily use the bitwise AND operator to keep the bits I'm interested in and set all the other bits to zero.
I would also need to "pack" the bits I'm interested in into the 7 least significant bit positions. So the bit at position 0 stays at 0, but the bit at position 3 is moved to position 1, the bit at position 6 moves to position 2, and so on.
I would like to do this in an efficient way, ideally without using a loop. Is there a combination of operations I could apply to an integer to achieve this?
Since we're only talking about integer arithmetic here, I don't think the programming language I plan to use matters. But if you need to know:
I'm gonna use JavaScript.
If the order of the bits is not important, they can be packed into bits 0-6 like this:
function packbits(a)
{
    // mask out the bits we're not interested in:
    var b = a & 299593; // 1001001001001001001 in binary
    // pack into the lower 7 bits:
    // b >> 8 brings bits 9 and 12 down to positions 1 and 4,
    // b >> 13 brings bits 15 and 18 down to positions 2 and 5,
    // bits 0, 3 and 6 stay put; & 127 drops everything else
    return (b | (b >> 8) | (b >> 13)) & 127;
}
If the initial bit ordering is like this:
bit 31                     bit 0
xxxxxxxxxxxxxGxxFxxExxDxxCxxBxxA

Then the packed ordering is like this:

bit 7  bit 0
0CGEBFDA
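For reference, the same function ported to C (0x49249 is 299593, i.e. 1001001001001001001 in binary; the sample input is arbitrary):

#include <stdio.h>
#include <stdint.h>

static uint32_t packbits(uint32_t a)
{
    /* mask out the bits we're not interested in */
    uint32_t b = a & 0x49249;
    /* pack into the lower 7 bits */
    return (b | (b >> 8) | (b >> 13)) & 0x7f;
}

int main(void)
{
    /* set input bits 0 (A) and 3 (B); per the ordering
       above they land at output bits 0 and 3 */
    printf("%02X\n", (unsigned)packbits((1u << 0) | (1u << 3)));  /* prints 09 */
    return 0;
}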
I have a byte array:
byte data[2];
I want to keep the 7 least significant bits from the first and the 3 most significant bits from the second.
I do this:
unsigned int the = ((data[0] << 8 | data[1]) << 1) >> 6;
Can you give me a hint why this does not work?
If I do it in separate steps, it works fine.
Can you give me a hint why this does not work?
Hint:
You have two bytes and want to preserve the 7 least significant bits from the first and the 3 most significant bits from the second:
data[0]: -xxxxxxx data[1]: xxx-----
-'s represent bits to remove, x's represent bits to preserve.
After this
(data[0]<<8 | data[1])<<1
you have:
the: 00000000 0000000- xxxxxxxx xx-----0
Then you apply >>6 and the result is:
the: 00000000 00000000 00000-xx xxxxxxxx
See, you did not remove the high bit from data[0].
Keep the 7 least significant bits from the first and the 3 most significant bits from the second.
Assuming the 10 preserved bits should be contiguous and form the least significant bits of the unsigned int value, with the 3 bits from data[1] as the least significant bits of the result, this should do the job:
unsigned int value = ((data[0] & 0x7F) << 3) | ((data[1] & 0xE0) >> 5);
You might not need all the masking operands; it depends in part on the definition of byte (probably unsigned char, or perhaps plain char on a machine where char is unsigned). But what's written should work anywhere, whether int is 16, 32 or 64 bits wide and whether byte is a signed or unsigned 8-bit (or 16-bit, or 32-bit, or 64-bit) type.
Your code does not remove the high bit from data[0] at any point, unless perhaps you're on a platform where unsigned int is a 16-bit value; but if that's the case, it is unusual enough these days to warrant a comment.
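A quick check of that expression with concrete bytes (the values are arbitrary examples):

#include <stdio.h>

int main(void)
{
    unsigned char data[2] = { 0xAB, 0xE5 };  /* -0101011 and 111----- */

    unsigned int value = ((data[0] & 0x7F) << 3) | ((data[1] & 0xE0) >> 5);

    /* 0101011 followed by 111 -> 0101011111 binary = 0x15F */
    printf("0x%03X\n", value);  /* prints 0x15F */
    return 0;
}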