I need to convert a 32 bit floating point number to 4 bytes for an embedded system using the Squirrel language. I was hoping I could just bit shift and mask the bytes into separate parts, doing something like:
bytes = [
    (myfloat >> 24) & 0xff,
    (myfloat >> 16) & 0xff,
    (myfloat >> 8)  & 0xff,
    myfloat         & 0xff,
]
However, this gives me a type error saying that you can't bit shift on a float.
The only other thing I see in the docs is the tointeger function, but casting to an integer would only give me the integer part, and even then I would still need to go the other way, from bytes back to a float, later.
Any ideas?
Aha, you have to read and write the float value to a blob:
local pi = 3.1415926;

// Write the float into a 4-byte blob
bl <- blob(4);
bl.writen(pi, 'f');

// Pull the blob apart into individual bytes
bytes <- [];
foreach (byte in bl) {
    server.log(byte);
    bytes.append(byte);
}

// Reassemble the bytes into a blob and read the float back out
back <- blob(4);
foreach (byte in bytes) {
    back.writen(byte, 'b');
}
back.seek(0, 'b');
server.log(back.readn('f'));
I have a uint32_t as follows:
uint32_t midiData=0x9FCC00;
I need to separate this uint32_t into smaller parts so that 9 becomes its own entity, F becomes its own entity, and CC becomes its own entity. If you're wondering what I am doing, I am trying to break up the parts of a MIDI message so that they are easier to manage in my program.
I found this solution, but the problem is that I don't know how to apply it to the CC section, and I am not sure that this method works in C++.
Here is what I have so far:
uint32_t midiData=0x9FCC00;
uint32_t status = 0x0FFFFF & midiData; // Retrieve 9
uint32_t channel = (0xF0FFFF & midiData)>>4; //Retrieve F
uint32_t note = (0xFF00FF & midiData) >> 8; //Retrieve CC
Is this correct for C++? The reason I ask is that I have never used C++ before, and its syntax of using > and < has always confused me (which is why I tend to avoid it).
You can use the bit-shift operator >> and the bit-masking operator & in C++ as well.
There are, however, some issues with how you use them:
The operator v1 & v2 gives a number built from those bits that are set in both v1 and v2, so that, for example, 0x12 & 0xF0 gives 0x10, not 0x02. Your masks therefore keep every digit except the one you want. Further, the bit-shift operator takes a number of bits, and a single digit in a hex number (which is usually called a nibble) consists of 4 bits (0x0..0xF requires 4 bits). So, if you have 0x12 and want to get 0x01, you have to write 0x12 >> 4.
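A quick sanity check of those two points (compile-time asserts, assuming a C++11 compiler):
static_assert((0x12 & 0xF0) == 0x10, "AND keeps only the bits set in both operands");
static_assert((0x12 >> 4) == 0x01, "shifting by one nibble means shifting by 4 bits");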
Hence, your shifts need to be adapted, too:
#define BITS_OF_A_NIBBLE 4
unsigned char status = (midiData & 0x00F00000) >> (5*BITS_OF_A_NIBBLE);
unsigned char channel = (midiData & 0x000F0000) >> (4*BITS_OF_A_NIBBLE);
unsigned char note = (midiData & 0x0000FF00) >> (2*BITS_OF_A_NIBBLE);
unsigned char theRest = (midiData & 0x000000FF);
You have it backwards, in a way.
In boolean logic (the & is a bitwise-AND), ANDing something with 0 will exclude it. Knowing that F in hex is 1111 in binary, a line like 0x9FCC00 & 0x0FFFFF will give you all the hex digits EXCEPT the 9, the opposite of what you want.
So, for status:
uint32_t status = 0xF00000 & midiData; // Retrieve 9
Actually, this will give you 0x900000. If you want 0x9 (also 9 in decimal), you need to bitshift the result over.
Now, the right bitshift operator (say, X >> 4) means move X 4 bits to the right; dividing by 16. That is 4 bits, not 4 hex digits. 1 hex digit == 4 bits, so to get 9 from 0x900000, you need 0x900000 >> 20.
So, to put them together, to get a status of 9:
uint32_t status = (0xF00000 & midiData) >> 20;
A similar process will get you the remaining values you want.
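For example, a sketch of the same mask-and-shift pattern for the other two fields (values shown for midiData = 0x9FCC00):
uint32_t channel = (0x0F0000 & midiData) >> 16; // 0xF
uint32_t note    = (0x00FF00 & midiData) >> 8;  // 0xCC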
In general I'd recommend shift first, then mask - it's less error prone:
uint8_t cmd = (midiData >> 16) & 0xff;
uint8_t note = (midiData >> 8) & 0x7f; // MSB can't be set
uint8_t velocity = (midiData >> 0) & 0x7f; // ditto
and then split the cmd variable:
uint8_t status = (cmd & 0xf0); // range 0x00 .. 0xf0
uint8_t channel = (cmd & 0x0f); // range 0 .. 15
I personally wouldn't bother mapping the status value back into the range 0 .. 15 - it's commonly understood that e.g. 0x90 is a "note on", and not the plain value 9.
I am in the process of rewriting code from a big-endian machine to a little-endian machine.
Let's say there is a variable called a, which is a 32-bit integer that holds the current timestamp of a user request.
On the big-endian machine, the code currently looks like this:
uint32 a = current_timestamp_of_user_request;
uint8 arr[3] = {0};
arr[0] = ((a >> (8 * 2)) & 0x000000FF);
arr[1] = ((a >> (8 * 1)) & 0x000000FF);
arr[2] = ((a >> (8 * 0)) & 0x000000FF);
Now, when I am writing the same logic for the little-endian machine, can I use the same code (method A), or should I convert the code this way (let's call this method B)?
uint32 a = current_timestamp_of_user_request;
uint32 b = htonl(a);
uint8 arr[3] = {0};
arr[0] = ((b >> (8 * 2)) & 0x000000FF);
arr[1] = ((b >> (8 * 1)) & 0x000000FF);
arr[2] = ((b >> (8 * 0)) & 0x000000FF);
I wrote this program to verify:
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <arpa/inet.h>   /* for htonl() */

int main(void) {
    uint32_t a = 3265973637u;
    uint32_t b = 0;
    int arr[3] = {0, 0, 0};

    arr[0] = ((a >> (8 * 2)) & 0x000000FF);
    arr[1] = ((a >> (8 * 1)) & 0x000000FF);
    arr[2] = ((a >> (8 * 0)) & 0x000000FF);
    printf("arr[0] = %d\t arr[1] = %d\t arr[2] = %d\n", arr[0], arr[1], arr[2]);

    b = htonl(a);
    arr[0] = ((b >> (8 * 2)) & 0x000000FF);
    arr[1] = ((b >> (8 * 1)) & 0x000000FF);
    arr[2] = ((b >> (8 * 0)) & 0x000000FF);
    printf("After htonl:\n");
    printf("arr[0] = %d\t arr[1] = %d\t arr[2] = %d\n", arr[0], arr[1], arr[2]);

    return 0;
}
Results:
Result with little endian machine:
bgl-srtg-lnx11: /scratch/nnandiga/test>./x86
arr[0] = 170 arr[1] = 205 arr[2] = 133
After htonl:
arr[0] = 205 arr[1] = 170 arr[2] = 194
Result with big endian machine:
arr[0] = 170 arr[1] = 205 arr[2] = 133
After htonl:
arr[0] = 170 arr[1] = 205 arr[2] = 133
It looks like the same logic, without the conversion to big-endian order (i.e. without htonl()), gives identical results when filling the array arr. So, should I use htonl() or not if I want the array to be the same on both little-endian and big-endian machines (the little-endian result should exactly match the big-endian result)?
Your code as originally written will do what you want on both big endian and little endian machines.
If for example the value of a is 0x00123456, then 0x12 goes in arr[0], 0x34 goes in arr[1], and 0x56 goes in arr[2]. This occurs regardless of what the endianness of the machine is.
When you use the >> and & operators, they operate on the value of the expression in question, not the representation of that value.
When you call htonl, you change the value to match a particular representation. So on a little endian machine htonl(0x00123456) will result in the value 0x56341200. Then when you operate on that value you get different results.
Where endianness matters is when the representation of a number using multiple bytes is read or written as bytes, i.e. to disk, over a network, or to/from a byte buffer.
For example, if you do this:
uint32_t a = 0x12345678;
...
write(fd, &a, sizeof(a));
Then the four bytes that a consists of are written to the file descriptor (be it a file or a socket) one at a time. A big endian machine will write 0x12, 0x34, 0x56, 0x78 in that order while a little endian machine will write 0x78, 0x56, 0x34, 0x12.
If you want the bytes to be written in a consistent order then you would first call a = htonl(a) before calling write. Then the bytes will always be written as 0x12, 0x34, 0x56, 0x78.
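A minimal sketch of that pattern (assuming fd is an open descriptor as above, with write() from <unistd.h> and htonl() from <arpa/inet.h>):
uint32_t a = 0x12345678;
uint32_t be = htonl(a);       /* representation is now big-endian on any machine */
write(fd, &be, sizeof(be));   /* always writes 0x12, 0x34, 0x56, 0x78 */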
Because your code operates on the value and not the individual bytes of the value, you don't need to worry about endianness.
You should use htonl(). On a big-endian machine this does nothing, it just returns the original value. On a little-endian machine it swaps the bytes appropriately. So by using this, you don't have to concern yourself with the endian-ness of the machine, you can use the same code after calling it.
I can read, for example, 4 bytes from a file using
ifstream r(filename, ios::binary | ios::in);
uint32_t readHere;
r.read((char*)&readHere, 4);
But how could I read 4.5 bytes, i.e. 4 bytes and 4 bits?
What came to my mind is
ifstream r(filename, ios::binary | ios::in);
uint64_t readHere;
r.read((char*)&readHere, 5);              // reading 5 bytes
uint64_t tmp = readHere & 0xFF;           // extract the 5th byte
tmp = tmp >> 4;                           // get the first half of the bits
readHere = ((readHere >> 8) << 8) | tmp;  // remove the 5th byte, then add the 4 bits
But I'm not sure which half of the byte I should take, the first 4 bits or the last 4.
Is there a better way to retrieve it?
The smallest unit that you can read or write, be it in a file or in memory, is a char (a byte on common systems (*)). You can walk through a longer element byte-wise, and endianness effectively matters there.
uint32_t u = 0xaabbccdd;
char *p = reinterpret_cast<char *>(&u);
char c = p[0]; // c is 0xdd on a little endian system and 0xaa on a big endian one
But as soon as you are inside a byte, all you can do is use bitwise ANDs and shifts to extract the low-order or high-order bits. There is no endianness inside a byte, unless you decide to impose a convention of your own.
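For example, a sketch of splitting one byte into its two nibbles (which half counts as "first" is purely a convention you choose):
unsigned char byte = 0xAB;
unsigned high = (byte >> 4) & 0x0F; // 0x0A, the high-order nibble
unsigned low  =  byte       & 0x0F; // 0x0B, the low-order nibble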
BTW, if you read from a network interface, or even from a serial line where bits are transferred individually, you still get one full byte at a time; there is no way to read only 4 bits in one read and the other 4 in the next.
(*) Older systems (CDC machines in the 80's) used to have 6 bits per character, but C++ did not exist at that time and I'm unsure whether C compilers existed for them.
It's still not clear whether this is a file format that you control, or if it's something else. Anyway, let's assume you have some integer data type that can hold a 36-bit unsigned value:
#include <cstdint>
#include <iostream>
using std::istream;
using std::ostream;

typedef uint64_t u36;
Now, regardless of whether your system uses big-endian or little-endian, you can write the value to a binary stream in a predictable order by doing them one byte at a time. Let's use big-endian, because it's slightly easier to picture the bits assembling together to create a value.
You can just use naive shifting and masking into a small buffer. The only thing to decide is where to truncate the half-byte. But if you follow the pattern of shifting each value by another 8 bits, the remaining 4 bits naturally fall into the high-order nibble of the final byte.
ostream & write_u36( ostream & s, u36 val )
{
    // casts avoid narrowing errors in the braced initializer
    char bytes[5] = {
        static_cast<char>((val >> 28) & 0xff),
        static_cast<char>((val >> 20) & 0xff),
        static_cast<char>((val >> 12) & 0xff),
        static_cast<char>((val >> 4 ) & 0xff),
        static_cast<char>((val << 4 ) & 0xf0)
    };
    return s.write( bytes, 5 );
}
But this isn't how you'd actually write a bunch of these numbers. You'd have to hold off the 5th byte until you were finished or you could pack the next value into it. Or you would always write two values at a time:
ostream & write_u36_pair( ostream & s, u36 a, u36 b )
{
    char bytes[9] = {
        static_cast<char>((a >> 28) & 0xff),
        static_cast<char>((a >> 20) & 0xff),
        static_cast<char>((a >> 12) & 0xff),
        static_cast<char>((a >> 4 ) & 0xff),
        static_cast<char>(((a << 4 ) & 0xf0) | ((b >> 32) & 0x0f)),
        static_cast<char>((b >> 24) & 0xff),
        static_cast<char>((b >> 16) & 0xff),
        static_cast<char>((b >> 8) & 0xff),
        static_cast<char>(b & 0xff)
    };
    return s.write( bytes, 9 );
}
And so now, you might see how to go about reading values and deserialising them back into integers. The simplest way is to read two at a time.
istream & read_u36_pair( istream & s, u36 & a, u36 & b )
{
    unsigned char bytes[9];   // unsigned, so widening to u36 below never sign-extends
    if( s.read( reinterpret_cast<char *>( bytes ), 9 ) )
    {
        a = (u36)bytes[0] << 28
          | (u36)bytes[1] << 20
          | (u36)bytes[2] << 12
          | (u36)bytes[3] << 4
          | (u36)bytes[4] >> 4;
        b = ((u36)bytes[4] & 0x0f) << 32
          | (u36)bytes[5] << 24
          | (u36)bytes[6] << 16
          | (u36)bytes[7] << 8
          | (u36)bytes[8];
    }
    return s;
}
If you wanted to read them one at a time, you'd need to keep track of some state so you knew how many bytes to read (either 5 or 4), and which shift operations to apply. Something naive like this:
struct u36deser {
    unsigned char bytes[5];   // unsigned for the same reason as above
    int which = 0;
};
istream & read_u36( istream & s, u36deser & state, u36 & val )
{
    if( state.which == 0 && s.read( reinterpret_cast<char *>( state.bytes ), 5 ) )
    {
        val = (u36)state.bytes[0] << 28
            | (u36)state.bytes[1] << 20
            | (u36)state.bytes[2] << 12
            | (u36)state.bytes[3] << 4
            | (u36)state.bytes[4] >> 4;
        state.which = 1;
    }
    else if( state.which == 1 && s.read( reinterpret_cast<char *>( state.bytes ), 4 ) )
    {
        val = ((u36)state.bytes[4] & 0x0f) << 32 // byte left over from the previous call
            | (u36)state.bytes[0] << 24
            | (u36)state.bytes[1] << 16
            | (u36)state.bytes[2] << 8
            | (u36)state.bytes[3];
        state.which = 0;
    }
    return s;
}
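As a quick usage sketch (hypothetical, the two 36-bit test values are arbitrary), the pair functions above round-trip through an in-memory stream:
#include <sstream>
#include <cassert>

void round_trip_demo()
{
    std::stringstream buf( std::ios::binary | std::ios::in | std::ios::out );
    write_u36_pair( buf, 0x123456789, 0xFEDCBA987 );
    u36 a = 0, b = 0;
    read_u36_pair( buf, a, b );
    assert( a == 0x123456789 && b == 0xFEDCBA987 );
}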
All of this is purely hypothetical, which seems to be the point of your question anyway. There are many other ways to serialise bits, and some of them are not at all obvious.
Given an unsigned integer, I need to end up with a 6-digits long hexadecimal value.
81892 (hex: 13FE4) should become 13FE40 or 013FE4
3285446057 (hex: C3D3EDA9) should become C3D3ED or D3EDA9
Since the project I'm contributing to uses Qt, I solve the problem this way:
unsigned int hex = qHash(name);
QString hexStr = (QString::number(hex, 16) + "000000").left(6);
bool ok;
unsigned int hexPat = hexStr.toUInt(&ok, 16);
This pads the hex number string on the right and then trims it after the sixth character from the left. To do the opposite, I would simply replace the second line:
QString hexStr = ("000000" + QString::number(hex, 16)).right(6);
The value will be used for RGB values, which is why I need six hex digits (three values between 0 and 255).
Is there a more efficient way to achieve either (or both) of these results without converting to string and then back?
The actual requirement of your problem is: given an unsigned integer, you need to extract three bytes.
There really isn't any need to convert to a string to extract them; it can be done more efficiently with bit operations.
To extract any byte from the integer, right-shift (>>) the corresponding number of bits (0, 8, 16 or 24), and AND the result with a mask that takes only the rightmost byte (0xFF, which is really 0x000000FF).
e.g. take the three least significant bytes:
uint c = hash(...);
BYTE r = (BYTE)((c >> 16) & 0xFF);
BYTE g = (BYTE)((c >> 8) & 0xFF);
BYTE b = (BYTE)(c & 0xFF);
or three most significant bytes:
uint c = hash(...);
BYTE r = (BYTE)((c >> 24) & 0xFF);
BYTE g = (BYTE)((c >> 16) & 0xFF);
BYTE b = (BYTE)((c >> 8) & 0xFF);
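If you then need those three bytes back as one six-hex-digit value (e.g. a 24-bit RGB colour), a sketch of packing them with shifts, no strings involved (shown for the least-significant-bytes variant):
unsigned int rgb = ((unsigned int)r << 16) | ((unsigned int)g << 8) | (unsigned int)b;
// or, equivalently, in one step from the hash value:
unsigned int hexPat = c & 0xFFFFFF;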
I have three numbers in decimal form that together make up one larger decimal number: <120, 111, 200> gives (120 * 256 + 111) * 256 + 200 = 7892936. I store the number this way because I have a variable number of bytes in which to write it.
Q: How can I carry out the reverse operation? That is, how do I convert 7892936 back to <120, 111, 200>?
You may use a bit mask and a right shift. The following may help:
#include <array>
#include <cstdint>

std::array<std::uint8_t, 4> convert(std::uint32_t u)
{
    // casts avoid narrowing errors in the braced return value
    return {
        static_cast<std::uint8_t>((u >> 24) & 0xFF),
        static_cast<std::uint8_t>((u >> 16) & 0xFF),
        static_cast<std::uint8_t>((u >> 8) & 0xFF),
        static_cast<std::uint8_t>(u & 0xFF)
    };
}
Live example
You just need to apply modulo and division repeatedly:
#include <iostream>

int main() {
    int val = 7892936;
    while (val > 0) {
        int mod = val % 256;
        std::cout << mod << "\n";  // least significant byte first
        val /= 256;
    }
}
So the result will be:
200
111
120
Use repeated modulo and division operations in a loop for generic integer radix conversion. Optimizations for specific bases are possible, but shouldn't really concern you yet.
Also, you probably don't have numbers in decimal form. You have numbers. Unless you store them as strings, it's up to the computer to store them, and it will store them as binary.
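A sketch of that generic approach as a small helper (the name to_digits is just illustrative):
#include <vector>

// Digits of n in the given base, least significant first.
std::vector<unsigned> to_digits(unsigned n, unsigned base)
{
    std::vector<unsigned> digits;
    do {
        digits.push_back(n % base);
        n /= base;
    } while (n > 0);
    return digits;
}
// to_digits(7892936, 256) yields {200, 111, 120}.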
int bigNumber = 7892936;
int a = bigNumber & 0xFF;
int b = (bigNumber & 0xFF00) >> 8;
int c = (bigNumber & 0xFF0000) >> 16;
Perform an AND with 0xFF, 0xFF00, and 0xFF0000 respectively (note the 0x prefix for hexadecimal literals).