How to send float over serial - c++

What's the best way to send float, double, and int16 values over serial on Arduino?
Serial.print() only sends values ASCII-encoded, but I want to send the values as bytes. Serial.write() accepts bytes and byte arrays, but what's the best way to convert the values to bytes?
I tried to cast an int16 to a byte*, without luck. I also used memcpy, but that uses too many CPU cycles. Arduino uses plain C/C++. It's an ATmega328 microcontroller.

hm. How about this:
void send_float (float arg)
{
// get access to the float as a byte-array:
byte * data = (byte *) &arg;
// write the data to the serial
Serial.write (data, sizeof (arg));
}
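If you also need the value back on the other side, here is a minimal sketch of a matching receiver (receive_float is a name I made up; it assumes both ends use the same byte order and the same 4-byte IEEE-754 float format):
// Hypothetical companion to send_float(): waits for 4 bytes on the serial
// port and reassembles them into a float in the order they were sent.
float receive_float ()
{
  float result;
  byte * data = (byte *) &result;
  for (unsigned int i = 0; i < sizeof (result); i++)
  {
    while (Serial.available () == 0)
      ;                                // busy-wait for the next byte
    data[i] = Serial.read ();
  }
  return result;
}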

Yes, to send these numbers you have to first convert them to ASCII strings. If you are working with C, sprintf() is, IMO, the handiest way to do this conversion:
[Added later: AAAGHH! I forgot that for ints/longs, the function's input argument wants to be unsigned. Likewise for the format string handed to sprintf(). So I changed it below. Sorry about my terrible oversight, which would have been a hard-to-find bug. Also, ulong makes it a little more general.]
char *
int2str( unsigned long num ) {
   static char retnum[21];   // Enough for 20 digits plus NUL (covers even a 64-bit unsigned long).
   sprintf( retnum, "%lu", num );
   return retnum;
}
And similarly for floats and doubles. The code doing the conversion has to know in advance what kind of entity it's converting, so you might end up with functions like char *float2str( float float_num ) and char *dbl2str( double dbl_num ).
You'll get a NUL-terminated left-adjusted (no leading blanks or zeroes) character string out of the conversion.
You can do the conversion anywhere/anyhow you like; these functions are just illustrations.
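One Arduino-specific caveat (my addition, not part of the answer above): the AVR build of sprintf() used by the Arduino core does not support %f by default, so for floats and doubles avr-libc's dtostrf() is the usual substitute. A rough sketch of the float2str() mentioned above, using dtostrf():
#include <stdlib.h>   // dtostrf() is declared here in avr-libc

// Illustration only: convert a float to a NUL-terminated string.
char *
float2str( float float_num ) {
   static char retnum[48];              // large enough even for the biggest float with 4 decimals
   dtostrf( float_num, 1, 4, retnum );  // min field width 1, 4 digits after the decimal point
   return retnum;
}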

Use the Firmata protocol. Quote:
Firmata is a generic protocol for communicating with microcontrollers
from software on a host computer. It is intended to work with any host
computer software package. Right now there is a matching object in a
number of languages. It is easy to add objects for other software to
use this protocol. Basically, this firmware establishes a protocol for
talking to the Arduino from the host software. The aim is to allow
people to completely control the Arduino from software on the host
computer.

The jargon word you need to look up is "serialization".
It is an interesting problem over a serial connection which might have restrictions on what characters can go end to end, and might not be able to pass eight bits per character either.
Restrictions on certain character codes are fairly common. Here are a few off the cuff:
If software flow control is in use, then conventionally the control characters DC1 and DC3 (Ctrl-Q and Ctrl-S, also sometimes called XON and XOFF) cannot be transmitted as data because they are sent to start and stop the sender at the other end of the cable.
On some devices, NUL and/or DEL characters (0x00 and 0x7F) may simply vanish from the receiver's FIFO.
If the receiver is a Unix tty, and the termio modes are not set correctly, then the character Ctrl-D (EOT or 0x04) can cause the tty driver to signal an end-of-file to the process that has the tty open.
A serial connection is usually configurable for byte width and the possible inclusion of a parity bit. Some connections will require that a 7-bit byte with parity be used rather than an 8-bit byte. For connections to (seriously old) legacy hardware, many serial ports can even be configured for 5-bit and 6-bit bytes. If fewer than 8 bits are available per byte, then a more complicated protocol is required to handle binary data.
ASCII85 is a popular technique for working around both 7-bit data and restrictions on control characters. It is a convention for re-writing binary data using only 85 carefully chosen ASCII character codes.
In addition, you certainly have to worry about byte order between sender and receiver. You might also have to worry about floating point format, since not every system uses IEEE-754 floating point.
The bottom line is that, often enough, choosing a pure ASCII protocol is the better answer. It has the advantage that it can be understood by a human, and it is much more resistant to issues with the serial connection. Unless you are sending gobs of floating point data, the inefficiency of representation may be outweighed by ease of implementation.
Just be liberal in what you accept, and conservative about what you emit.

Does size matter? If it does, you can encode each 32-bit group into 5 ASCII characters using Ascii85; see http://en.wikipedia.org/wiki/Ascii85.
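For illustration, a minimal sketch of how one 32-bit group becomes 5 characters in Ascii85 (standard alphabet '!' (33) through 'u' (117); the 'z' shorthand for an all-zero group and any framing are left out):
#include <stdint.h>

// Encode a single 32-bit value as 5 Ascii85 characters, most significant
// base-85 digit first. Decoding is the reverse: sum (out[i] - 33) times powers of 85.
void ascii85_encode_group(uint32_t value, char out[5]) {
    for (int i = 4; i >= 0; --i) {
        out[i] = (char)(value % 85 + 33);   // 33 is '!'
        value /= 85;
    }
}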

This simply works. Use the Serial.println() function:
void setup() {
Serial.begin(9600);
}
void loop() {
float x = 23.45585888;
Serial.println(x, 10);
delay(1000);
}
The output is the value printed with 10 decimal places once per second (only the first six or seven significant digits are meaningful, since that is all the precision a float carries).

Perhaps this is the best way to convert a float to bytes and bytes back to a float. -Hamid Reza
int breakDown(int index, unsigned char outbox[], float member)
{
unsigned long d = *(unsigned long *)&member;
outbox[index] = d & 0x00FF;
index++;
outbox[index] = (d & 0xFF00) >> 8;
index++;
outbox[index] = (d & 0xFF0000) >> 16;
index++;
outbox[index] = (d & 0xFF000000) >> 24;
index++;
return index;
}
float buildUp(int index, unsigned char outbox[])
{
unsigned long d;
d = (outbox[index+3] << 24) | (outbox[index+2] << 16)
| (outbox[index+1] << 8) | (outbox[index]);
float member = *(float *)&d;
return member;
}
regards.

Structures and unions solve that issue. Use a packed structure, and a byte-array union the same size as the structure. Overlap the structure and the union (or put the union inside the structure) so they share the same memory. Use Serial.write() to send the byte stream, and have a matching structure/union on the receiving end. As long as the byte order matches there is no issue; otherwise you can unpack using the C hton()/ntoh() family (htons, htonl, and so on). Add "header" info so the receiver can decode different structures/unions.
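A rough sketch of that idea (the struct layout, field names, and __attribute__((packed)) are just an example, assuming an AVR/GCC Arduino toolchain; both ends must agree on byte order and padding):
// A packed structure overlaid with a byte array via a union, so the whole
// record can be sent with one Serial.write() and decoded with a matching
// union on the receiving end.
struct __attribute__((packed)) Telemetry {
  int16_t id;
  float   value;   // on the ATmega328, double is also 4 bytes
};

union TelemetryPacket {
  Telemetry fields;
  byte      raw[sizeof(Telemetry)];
};

void sendTelemetry(const Telemetry &t) {
  TelemetryPacket p;
  p.fields = t;
  Serial.write(p.raw, sizeof(p.raw));
}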

For Arduino IDE:
float buildUp(int index, unsigned char outbox[])
{
unsigned long d;
d = (long(outbox[index +3]) << 24) | \
(long(outbox[index +2]) << 16) | \
(long(outbox[index +1]) << 8) | \
(long(outbox[index]));
float member = *(float *)&d;
return member;
}
otherwise it will not work: on the 8-bit Arduino, int is only 16 bits, so each byte must be promoted to long before it can be shifted left by 16 or 24 bits.

Related

how to read/write a sequence of bits bit by bit in C++

I have implemented the Huffman coding algorithm in C++, and it's working fine. I want to create a text compression algorithm.
Behind every file or piece of data in the digital world there are 0s and 1s.
I want to persist the sequence of bits (0/1) generated by the Huffman encoding algorithm in a file.
My goal is to reduce the number of bits the file takes to store; the metadata needed for decoding is stored in a separate file. I want to write the data to a file bit by bit, and then read it back the same way in C++.
The problem I'm facing with binary mode is that it does not let me put data into the file bit by bit.
I want to put "10101" into the file as five bits, but it puts the ASCII values (8 bits) of each character instead.
code
#include "iostream"
#include "fstream"
using namespace std;
int main(){
ofstream f;
f.open("./one.bin", ios::out | ios::binary);
f<<"10101";
f.close();
return 0;
}
Output: the file ends up containing the five ASCII characters "10101" (five bytes), not five bits.
Any help or pointer is appreciated. Thank you.
"Binary mode" means only that you have requested that the actual bytes you write are not corrupted by end-of-line conversions. (This is only a problem on Windows. No other system has the need to deliberately corrupt your data.)
You are still writing a byte at a time in binary mode.
To write bits, you accumulate them in an integer. For convenience, in an unsigned integer. This is your bit buffer. You need to decide whether to accumulate them from the least to most or from the most to least significant positions. Once you have eight or more bits accumulated, you write out one byte to your file, and remove those eight bits from the buffer.
When you're done, if there are bits left in your buffer, you write out those last one to seven bits to one byte. You need to carefully consider how exactly you do that, and how to know how many bits there were, so that you can properly decode the bits on the other end.
The accumulation and extraction are done using the bit operations in your language. In C++ (and many other languages), those are & (and), | (or), >> (right shift), and << (left shift).
For example, to insert one bit x into your buffer, and later three bits from y, ending up with the earliest bits in the most significant positions:
unsigned buf = 0, bits = 0;
...
// some loop
{
...
// write one bit (don't need the & if you know x is 0 or 1)
buf = (buf << 1) | (x & 1);
bits++;
...
// write three bits
buf = (buf << 3) | (y & 7);
bits += 3;
...
// write bytes from the buffer before it fills the integer length
if (bits >= 8) { // the if could be a while if expect 16 or more
// out is an ostream -- must be in binary mode if on Windows
bits -= 8;
out.put(buf >> bits);
}
...
}
...
// write any leftover bits (it is assumed here that bits is in 0..7 --
// if not, first repeat if or while from above to clear out bytes)
if (bits) {
out.put(buf << (8 - bits));
bits = 0;
}
...
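To make that concrete for the "10101" string in the question, here is a small self-contained version of the same bit-buffer idea (the name writeBits and the zero-padding of the last byte are my choices, not part of the answer above):
#include <fstream>
#include <string>

// Pack a string of '0'/'1' characters into bytes, most significant bit first,
// padding the final byte with zero bits, and write the bytes to the stream.
void writeBits(const std::string &bits, std::ofstream &out) {
    unsigned buf = 0, count = 0;
    for (char c : bits) {
        buf = (buf << 1) | (c == '1');          // append one bit
        if (++count == 8) {                     // flush a full byte
            out.put(static_cast<char>(buf));
            buf = 0;
            count = 0;
        }
    }
    if (count)                                  // flush the leftover bits, padded
        out.put(static_cast<char>(buf << (8 - count)));
}

int main() {
    std::ofstream f("./one.bin", std::ios::out | std::ios::binary);
    writeBits("10101", f);                      // writes a single byte: 0b10101000
    return 0;
}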

htonl without using network-related headers

We are writing embedded application code and validating a string for valid IPv4 format. I am able to do this using a string tokenizer, but now I need to convert the integers to host-to-network order using the htonl() function.
Since it is an embedded application, I cannot include the network headers and library just to make use of the htonl() function.
Is there any way / non-network header in C++ through which I can get htonl() functionality?
From htonl()'s man page:
The htonl() function converts the unsigned integer hostlong from host byte order to network byte order.
Network byte order is actually just big endian.
All you need to do is write (or find) a function that converts an unsigned integer to big-endian and use it in place of htonl. If your system is already big-endian, then you don't need to do anything at all.
You can use the following to determine the endianness of your system:
int n = 1;
// little endian if true
if(*(char *)&n == 1) {...}
Source
And you can convert a little-endian uint32_t to big-endian using the following:
uint32_t htonl(uint32_t x) {
    unsigned char *s = (unsigned char *)&x;
    return ((uint32_t)s[0] << 24) | ((uint32_t)s[1] << 16) | ((uint32_t)s[2] << 8) | (uint32_t)s[3];
}
Source
You don't strictly need htonl. If you have the IP address as individual bytes like this:
uint8_t a [4] = { 192, 168, 2, 1 };
You can just send these 4 bytes, in that exact order, over the network. That is, unless you specifically need it as a 4-byte integer, which you probably don't, since you presumably are not using sockaddr_in and friends.
If you already have the address as a 32-bit integer in host byte order, you can obtain such an array like this:
uint32_t ip = getIPHostOrder();
uint8_t a [4] = { (ip >> 24) & 0xFF, (ip >> 16) & 0xFF, (ip >> 8) & 0xFF, ip & 0xFF };
This has the advantage of not relying on implementation defined behaviour and being portable.

how to use char values as a signed variable

I am coding in the Arduino IDE, so basically C++, and I need to use a variable that is only 1 byte for transmission purposes, but I need it to be signed.
More specifically, I need to send an int (2 bytes), but this int holds 2 values: one is a byte where I don't care about the sign since it's always positive, but the other needs to be able to hold negatives.
I'm doing something like this:
int turn = -120;
int PromedioD_turn = PromedioD << 8 | (turn & 0b11111111);
Serial.println("test");
Serial.println(PromedioD);
Serial.println(turn & 0b11111111,DEC); //this is printing as 136
Serial.println(PromedioD_turn);
I can't understand why, or how to solve this. I need to be able to send the value and also break it down later.
Thanks to @bolov: by using int8_t I could convert it back to a signed value.
int PromedioD_turn = PromedioD << 8 | turn & 0b11111111;
Serial.println("test");
Serial.println(PromedioD);
Serial.println(turn);
Serial.println(PromedioD_turn);
Serial.println(int8_t(PromedioD_turn &0b11111111));
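For completeness, a small sketch of the same pack/unpack round trip (pack and unpack are my names; the cast through uint16_t avoids overflowing the Arduino's 16-bit int when the upper byte is 128 or more):
// Pack an unsigned byte and a signed byte into one int16_t, then recover both.
int16_t pack(uint8_t promedioD, int8_t turn) {
  return (int16_t)(((uint16_t)promedioD << 8) | (uint8_t)turn);
}

void unpack(int16_t packed, uint8_t &promedioD, int8_t &turn) {
  promedioD = (uint8_t)((uint16_t)packed >> 8);
  turn      = (int8_t)(packed & 0xFF);   // e.g. the 136 printed above becomes -120 again
}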

Endian-ness in a char array containing binary characters

I'm building some code to read a RIFF wav file and I've bumped into something odd.
The first 4 bytes of the file header are the word RIFF in big-endian ASCII coding:
0x5249 0x4646
I read this first element using:
char *fileID = new char[4];
filestream.read(fileID,4);
When I write this to screen the results are as expected:
std::cout << fileID << std::endl;
>> RIFF
Now, the next 4 bytes give the size of the file, but crucially they're little-endian.
So, I write a little function to flip the bytes, based on a union:
int flip4bytes(char* input){
    union flip {int flip_int; char flip_char[4];} f;
    f.flip_char[0] = input[3];
    f.flip_char[1] = input[2];
    f.flip_char[2] = input[1];
    f.flip_char[3] = input[0];
    return f.flip_int;
}
This looks good to me, except when I call it, the value returned is totally wrong. Interestingly, the following code (where the bytes are not reversed!) works correctly:
int flip4bytes(char* input){
    union flip {int flip_int; char flip_char[4];} f;
    f.flip_char[0] = input[0];
    f.flip_char[1] = input[1];
    f.flip_char[2] = input[2];
    f.flip_char[3] = input[3];
    return f.flip_int;
}
This has thoroughly confused me. Is the union somehow reversing the bytes for me?! If not, how are the bytes being converted to int correctly without being reversed?
I think there's some facet of endian-ness here that I'm ignorant to..
You are simply on a little-endian machine, and the "RIFF" string is just a string and thus neither little- nor big-endian, but merely a sequence of chars. You don't need to reverse the bytes on a little-endian machine, but you would need to when operating on a big-endian one.
You need to figure out the endianness of your machine. #include <sys/param.h> will help you do that.
You could also use the fact that network byte order is big-endian (if my memory serves me correctly - you need to check). In that case, convert to big-endian and use the ntohl function (ntohs is the 16-bit variant). That should work on any machine you compile the code on.
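One way to sidestep host endianness entirely (my sketch, not part of the answers above) is to assemble the little-endian size field from its individual bytes, which gives the right answer on both little- and big-endian machines:
#include <cstdint>

// Assemble a little-endian 32-bit value, such as the RIFF chunk size,
// from four raw bytes; correct regardless of the host's byte order.
uint32_t readLE32(const unsigned char *p) {
    return  (uint32_t)p[0]
         | ((uint32_t)p[1] << 8)
         | ((uint32_t)p[2] << 16)
         | ((uint32_t)p[3] << 24);
}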

C/C++: read a byte from hex input on stdin

I can't exactly find a way to do the following in C/C++.
Input: hexadecimal values, for example: ffffffffff...
I've tried the following code in order to read the input :
uint16_t twoBytes;
scanf("%x",&twoBytes);
That works fine and all, but how do I split the 2 bytes into 1-byte uint8_t values (or maybe even read only the first byte)? I would like to read the first byte from the input and store it in a byte matrix at a position of my choosing.
uint8_t matrix[50][50];
Since I'm not very skilled at formatting / reading input in C/C++ (and have only used scanf so far), any other ideas on how to do this easily (and fast, if possible) are greatly appreciated.
Edit: Found an even better method, using the fread function, as it lets one specify how many bytes it should read from the stream (stdin in this case) and save to a variable/array.
size_t fread ( void * ptr, size_t size, size_t count, FILE * stream );
Parameters
ptr - Pointer to a block of memory with a minimum size of (size*count) bytes.
size - Size in bytes of each element to be read.
count - Number of elements, each one with a size of size bytes.
stream - Pointer to a FILE object that specifies an input stream.
cplusplus ref
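A minimal sketch of that fread() approach (matrix and the row/column choice are from the question; note that fread() takes the input bytes as-is, so if stdin carries hex text such as "ffff...", each character arrives as its ASCII code rather than as a decoded byte):
#include <cstdio>
#include <cstdint>

int main() {
    uint8_t matrix[50][50] = {};
    // Read up to 50 raw bytes from stdin into the first row of the matrix.
    size_t got = fread(matrix[0], sizeof(uint8_t), 50, stdin);
    printf("read %u bytes, first byte = 0x%02x\n",
           (unsigned)got, (unsigned)matrix[0][0]);
    return 0;
}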
%x reads an unsigned int, not a uint16_t (though they may be the same on your particular platform).
To read only one byte, try this:
uint32_t byteTmp;
scanf("%2x", &byteTmp);
uint8_t byte = byteTmp;
This reads an unsigned int, but stops after reading two characters (two hex characters equals eight bits, or one byte).
You should be able to split the variable like this:
uint8_t LowerByte = twoBytes & 0xFF;
uint8_t HigherByte = twoBytes >> 8;
A couple of thoughts:
1) read it as characters and convert it manually - painful
2) If you know that there are a multiple of 4 hex digits, you can just read in twoBytes and then convert to one-byte values with high = twoBytes >> 8; low = twoBytes & 0xFF;
3) %2x