I am coding in the Arduino IDE, so basically C++, and I need a variable that is only 1 byte for transmission purposes, but it has to be signed.
More specifically, I need to send an int (2 bytes) that carries two values in it: one is a byte whose sign I don't care about since it's always positive, but the other needs to allow negative values.
I'm doing something like this.
int turn = -120;
int PromedioD_turn = PromedioD << 8 | (turn & 0b11111111);
Serial.println("test");
Serial.println(PromedioD);
Serial.println(turn & 0b11111111, DEC); // this prints 136
Serial.println(PromedioD_turn);
I can't understand why this happens or how to solve it; I need to be able to send the value and also break it back down later.
Thanks to #bolov: by using int8_t I could convert it back to a signed value.
int PromedioD_turn = PromedioD << 8 | turn & 0b11111111;
Serial.println("test");
Serial.println(PromedioD);
Serial.println(turn);
Serial.println(PromedioD_turn);
Serial.println(int8_t(PromedioD_turn & 0b11111111));
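To sum up the fix, here is a minimal sketch of the pack/unpack step (the helper names pack16, unpackHigh and unpackTurn are only illustrative, they are not from the original code):
// Pack an unsigned byte (0..255) into the high byte and a signed byte
// (-128..127) into the low byte of one 16-bit int.
int pack16(byte promedioD, int8_t turn) {
  return int((unsigned(promedioD) << 8) | uint8_t(turn));
}
// Recover both fields on the receiving end.
byte unpackHigh(int packed) {
  return byte((packed >> 8) & 0xFF);
}
int8_t unpackTurn(int packed) {
  return int8_t(packed & 0xFF);  // reinterpret the low byte as signed again
}
// Example: pack16(PromedioD, -120) stores 136 (0x88) in the low byte,
// and unpackTurn() turns it back into -120.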
I'm currently working on a project that encrypts files and adds them to the application's library. I need to version the file format, so I'm planning to prepend a file header to the encrypted file. The project is in Qt and currently targets Windows; later I will make the app for Android and Mac as well.
For this I made these structures for the version 1 file:
struct Header_Meta
{
    char signature[4];
    char version[4];
};

struct Header_v1
{
    char id[12];
    char flag[8];
    char name[128];
    long size;
};

union File_v1
{
    Header_Meta meta;
    Header_v1 header;
    byte null[512 - sizeof(Header_Meta) - sizeof(Header_v1)];
    byte data[MAX_HEADERv1];
};
The file is a binary file.
Now, in the getDetails() function, I'll read MAX_HEADERv1 bytes into file_v1.data and then pull the details out of the member variables.
My questions are:
Is there a better approach?
Is there any problem with writing the long size member of Header_v1 to the file, given platform differences?
The logic should work the same way on every device, even when reading a file written on another platform. Will this hold?
There is a slight possibility that you will end up having a lot of #ifdef BIG/LITTLE_ENDIAN's in the code, depending on the platforms you deploy your product to. For the size I would use something like unsigned char size[8] (this yields a 64 (= 8*8) bit value) and then rebuild it with a formula in your code, like:
uint64_t real_size = uint64_t(size[0]) | (uint64_t(size[1]) << 8) | (uint64_t(size[2]) << 16) | ...
and when calculating the individual size bytes you can do it like:
size[0] = real_size & 0xFF;
size[1] = (real_size & 0xFF00) >> 8;
size[2] = (real_size & 0xFF0000) >> 16;
and so on...
and from this point on you just need to worry about correctly writing out the bytes of size to their corresponding position.
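As a minimal sketch of that idea, assuming the size is always stored little-endian in the header (the function names write_size_le and read_size_le are only illustrative):
#include <cstdint>
#include <cstddef>

// Store a 64-bit size into 8 bytes, least significant byte first.
void write_size_le(uint64_t real_size, unsigned char size[8]) {
    for (size_t i = 0; i < 8; ++i)
        size[i] = (real_size >> (8 * i)) & 0xFF;
}

// Rebuild the 64-bit size from the 8 bytes, regardless of host endianness.
uint64_t read_size_le(const unsigned char size[8]) {
    uint64_t real_size = 0;
    for (size_t i = 0; i < 8; ++i)
        real_size |= uint64_t(size[i]) << (8 * i);
    return real_size;
}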
Regarding the version string you want to add to the header (char version[4]), it all depends on what you want to store there. If you put textual information in it (such as "v1.0") you will limit the possible versions you can have, so I would recommend instead storing a binary version, such as:
version[0] = VERSION // customers usually pay for an increase in this
version[1] = RELEASE // new functionality, it's up to you if customer pays or not :)
version[2] = MAINTENANCE // planned maintenance, usually customers don't pay for this
version[3] = PATCH // emergency patch, hopefully you never have to use this
This will allow version numbers of the form VERSION.RELEASE.MAINTENANCE.PATCH, and you can go up to 255.255.255.255.
Also, please pay attention to #Ben's comment: the union just feels wrong. Usually these fields should come one after the other, but in the union they all overlap, starting at the same location.
I'm building some code to read a RIFF wav file and I've bumped into something odd.
The first 4 bytes of the file header are the word RIFF in big-endian ascii coding:
0x5249 0x4646
I read this first element using:
char *fileID = new char[4];
filestream.read(fileID,4);
When I write this to screen the results are as expected:
std::cout << fileID << std::endl;
>> RIFF
Now, the next 4 bytes give the size of the file, but crucially they're little-endian.
So, I write a little function to flip the bytes, based on a union:
int flip4bytes(char* input){
    union flip {int flip_int; char flip_char[4];} flip;
    flip.flip_char[0] = input[3];
    flip.flip_char[1] = input[2];
    flip.flip_char[2] = input[1];
    flip.flip_char[3] = input[0];
    return flip.flip_int;
}
This looks good to me, except when I call it, the value returned is totally wrong. Interestingly, the following code (where the bytes are not reversed!) works correctly:
int flip4bytes(char* input){
    union flip {int flip_int; char flip_char[4];} flip;
    flip.flip_char[0] = input[0];
    flip.flip_char[1] = input[1];
    flip.flip_char[2] = input[2];
    flip.flip_char[3] = input[3];
    return flip.flip_int;
}
This has thoroughly confused me. Is the union somehow reversing the bytes for me?! If not, how are the bytes being converted to int correctly without being reversed?
I think there's some facet of endianness here that I'm ignorant of.
You are simply on a little-endian machine, and the "RIFF" string is just a string, thus neither little- nor big-endian, but simply a sequence of chars. You don't need to reverse the bytes on a little-endian machine, but you do need to when operating on a big-endian one.
You need to figure out the endianness of your machine. #include <sys/param.h> will help you do that.
You could also use the fact that network byte order is big-endian (if my memory serves me correctly; you should check). In that case, convert with the ntohl/ntohs functions, which go from big-endian network order to the host's order. That should work on any machine you compile the code on.
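If it helps, here is a minimal sketch of reading the RIFF size portably, assuming the four bytes hold an unsigned 32-bit little-endian value as the format specifies. It gives the same result on little- and big-endian hosts because it never relies on how the host lays an int out in memory:
#include <cstdint>

uint32_t read_le32(const char *input) {
    const unsigned char *b = reinterpret_cast<const unsigned char *>(input);
    return  uint32_t(b[0])        |
           (uint32_t(b[1]) << 8)  |
           (uint32_t(b[2]) << 16) |
           (uint32_t(b[3]) << 24);
}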
What's the best way to send float, double, and int16 values over serial on Arduino?
Serial.print() only sends values ASCII-encoded, but I want to send the values as raw bytes. Serial.write() accepts bytes and byte arrays, but what's the best way to convert the values to bytes?
I tried to cast an int16 to a byte*, without luck. I also used memcpy, but that uses too many CPU cycles. Arduino uses plain C/C++; it's an ATmega328 microcontroller.
hm. How about this:
void send_float (float arg)
{
// get access to the float as a byte-array:
byte * data = (byte *) &arg;
// write the data to the serial
Serial.write (data, sizeof (arg));
}
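A minimal sketch of the matching receive side, assuming both ends use the same 4-byte float representation and byte order (the function name receive_float is only illustrative):
float receive_float ()
{
  byte data[sizeof (float)];
  size_t n = 0;
  // block until all four bytes have arrived
  while (n < sizeof (data)) {
    if (Serial.available ())
      data[n++] = Serial.read ();
  }
  float value;
  memcpy (&value, data, sizeof (value));  // copy the raw bytes back into a float
  return value;
}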
Yes, to send these numbers you have to first convert them to ASCII strings. If you are working with C, sprintf() is, IMO, the handiest way to do this conversion:
[Added later: AAAGHH! I forgot that for ints/longs, the function's input argument wants to be unsigned. Likewise for the format string handed to sprintf(). So I changed it below. Sorry about my terrible oversight, which would have been a hard-to-find bug. Also, ulong makes it a little more general.]
char *
int2str( unsigned long num ) {
    static char retnum[21];   // Enough for 20 digits plus NUL from a 64-bit uint.
    sprintf( retnum, "%lu", num );
    return retnum;
}
And similar for floats and doubles. The code doing the conversion has to be known in advance: it has to be told what kind of entity it's converting, so you might end up with functions like char *float2str( float float_num ) and char *dbl2str( double dblnum ).
You'll get a NUL-terminated, left-adjusted (no leading blanks or zeroes) character string out of the conversion.
You can do the conversion anywhere/anyhow you like; these functions are just illustrations.
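On the Arduino side specifically, sprintf() with %f is usually not enabled in the default avr-libc setup, so a float2str along those lines is typically built on avr-libc's dtostrf() instead; a sketch (buffer size and precision are arbitrary here):
char *
float2str( float float_num ) {
    static char retnum[20];   // sign, digits, decimal point and NUL
    // dtostrf(value, minimum width, digits after the decimal point, buffer)
    dtostrf( float_num, 1, 4, retnum );
    return retnum;
}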
Use the Firmata protocol. Quote:
Firmata is a generic protocol for communicating with microcontrollers
from software on a host computer. It is intended to work with any host
computer software package. Right now there is a matching object in a
number of languages. It is easy to add objects for other software to
use this protocol. Basically, this firmware establishes a protocol for
talking to the Arduino from the host software. The aim is to allow
people to completely control the Arduino from software on the host
computer.
The jargon word you need to look up is "serialization".
It is an interesting problem over a serial connection which might have restrictions on what characters can go end to end, and might not be able to pass eight bits per character either.
Restrictions on certain character codes are fairly common. Here's a few off the cuff:
If software flow control is in use, then conventionally the control characters DC1 and DC3 (Ctrl-Q and Ctrl-S, also sometimes called XON and XOFF) cannot be transmitted as data because they are sent to start and stop the sender at the other end of the cable.
On some devices, NUL and/or DEL characters (0x00 and 0x7F) may simply vanish from the receiver's FIFO.
If the receiver is a Unix tty, and the termio modes are not set correctly, then the character Ctrl-D (EOT or 0x04) can cause the tty driver to signal an end-of-file to the process that has the tty open.
A serial connection is usually configurable for byte width and possible inclusion of a parity bit. Some connections will require that a 7-bit byte with a parity are used, rather than an 8-bit byte. It is even possible for connection to (seriously old) legacy hardware to configure many serial ports for 5-bit and 6-bit bytes. If less than 8-bits are available per byte, then a more complicated protocol is required to handle binary data.
ASCII85 is a popular technique for working around both 7-bit data and restrictions on control characters. It is a convention for re-writing binary data using only 85 carefully chosen ASCII character codes.
In addition, you certainly have to worry about byte order between sender and receiver. You might also have to worry about floating point format, since not every system uses IEEE-754 floating point.
The bottom line is that often enough choosing a pure ASCII protocol is the better answer. It has the advantage that it can be understood by a human and is much more resistant to issues with the serial connection. Unless you are sending gobs of floating point data, the inefficiency of representation may be outweighed by ease of implementation.
Just be liberal in what you accept, and conservative about what you emit.
Does size matter? If it does, you can encode each 32 bit group into 5 ASCII characters using ASCII85, see http://en.wikipedia.org/wiki/Ascii85.
This simply works. Use the Serial.println() function:
void setup() {
  Serial.begin(9600);
}

void loop() {
  float x = 23.45585888;
  Serial.println(x, 10);
  delay(1000);
}
The output is the value printed to 10 decimal places; note that an Arduino float only holds about 6 or 7 significant digits, so the trailing digits are not exact.
Perhaps this is the best way to convert a float to bytes and bytes back to a float. -Hamid Reza
int breakDown(int index, unsigned char outbox[], float member)
{
    unsigned long d = *(unsigned long *)&member;
    outbox[index] = d & 0x00FF;
    index++;
    outbox[index] = (d & 0xFF00) >> 8;
    index++;
    outbox[index] = (d & 0xFF0000) >> 16;
    index++;
    outbox[index] = (d & 0xFF000000) >> 24;
    index++;
    return index;
}

float buildUp(int index, unsigned char outbox[])
{
    unsigned long d;
    d = (outbox[index+3] << 24) | (outbox[index+2] << 16)
      | (outbox[index+1] << 8) | (outbox[index]);
    float member = *(float *)&d;
    return member;
}
regards.
Structures and unions solve that issue. Use a packed structure and a byte-sized union that matches (overlaps) the structure; either overlap pointers to the two, or put the union inside the structure. Use Serial.write to send the byte stream and have a matching structure/union on the receiving end. As long as the byte order matches there is no issue; otherwise you can unpack using the C hton/ntoh family of functions. Add header info so the receiver can decode different structures/unions.
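A minimal sketch of that approach on the sending side; the struct layout, field names and packing attribute are only an example, and the receiver needs the identical struct:
// One fixed-layout record; packed so the compiler inserts no padding.
struct __attribute__((packed)) Packet {
  uint8_t header;   // lets the receiver recognise / version the record
  int16_t turn;     // signed values survive as-is
  float   speed;
};

union PacketBytes {
  Packet  packet;
  uint8_t raw[sizeof(Packet)];
};

void sendPacket(const Packet &p) {
  PacketBytes pb;
  pb.packet = p;
  Serial.write(pb.raw, sizeof(pb.raw));  // stream the raw bytes
}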
For Arduino IDE:
float buildUp(int index, unsigned char outbox[])
{
    unsigned long d;
    d = (long(outbox[index + 3]) << 24) | \
        (long(outbox[index + 2]) << 16) | \
        (long(outbox[index + 1]) << 8)  | \
        (long(outbox[index]));
    float member = *(float *)&d;
    return member;
}
Otherwise it does not work: on the 8-bit AVR an int is only 16 bits, so without the long casts the shifted bytes are lost.
My background is PHP, so entering the world of low-level stuff like chars being bytes, bytes being bits, bits being binary values, etc. is taking some time to get the hang of.
What I am trying to do here is send some values from an Arduino board to openFrameworks (both are C++).
What this script currently does (and it works well for one sensor, I might add) when asked for the data to be sent is:
int value_01 = analogRead(0); // which outputs between 0-1024
unsigned char val1;
unsigned char val2;
//some Complicated bitshift operation
val1 = value_01 &0xFF;
val2 = (value_01 >> 8) &0xFF;
//send both bytes
Serial.print(val1, BYTE);
Serial.print(val2, BYTE);
Apparently this is the most reliable way of getting the data across.
So now that it is sent via the serial port, the bytes are added to a char string and converted back by:
int num = ( (unsigned char)bytesReadString[1] << 8 | (unsigned char)bytesReadString[0] );
So to recap, I'm trying to get four sensors' worth of data (which I am assuming will be eight of those Serial.prints?) and to end up with int num_01 through num_04 at the end of it all.
I'm assuming this (as with most things) might be quite easy for someone with experience in these concepts.
Write a function to abstract sending the data (I've gotten rid of your temporary variables because they don't add much value):
void send16(int value)
{
//send both bytes
Serial.print(value & 0xFF, BYTE);
Serial.print((value >> 8) & 0xFF, BYTE);
}
Now you can easily send any data you want:
send16(analogRead(0));
send16(analogRead(1));
...
Just send them one after the other.
Note that the serial driver lets you send one byte (8 bits) at a time. A value between 0 and 1023 inclusive (which looks like what you're getting) fits in 10 bits. So 1 byte is not enough. 2 bytes, i.e. 16 bits, are enough (there is some extra space, but unless transfer speed is an issue, you don't need to worry about this wasted space).
So, the first two bytes can carry the data for your first sensor. The next two bytes carry the data for the second sensor, the next two bytes for the third sensor, and the last two bytes for the last sensor.
I suggest you use the function that R Samuel Klatchko suggested on the sending side, and hopefully you can work out what you need to do on the receiving side.
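For the receiving side, a minimal sketch under the assumption that bytesReadString is a plain char buffer that already holds the eight raw bytes in the order they were sent, low byte first (the helper name read16 is only illustrative):
// Rebuild one 16-bit reading from two bytes, low byte first.
int read16(const char *buf, int offset)
{
    return (static_cast<int>(static_cast<unsigned char>(buf[offset + 1])) << 8) |
            static_cast<int>(static_cast<unsigned char>(buf[offset]));
}

// Eight received bytes -> four sensor values, in the order they were sent.
int num_01 = read16(bytesReadString, 0);
int num_02 = read16(bytesReadString, 2);
int num_03 = read16(bytesReadString, 4);
int num_04 = read16(bytesReadString, 6);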
int num = ( (unsigned char)bytesReadString[1] << 8 |
(unsigned char)bytesReadString[0] );
That code will not do what you expect.
When you shift an 8-bit unsigned char, you lose the extra bits.
11111111 << 3 == 11111000
11111111 << 8 == 00000000
i.e. any unsigned char, when shifted 8 bits, must be zero.
You need something more like this:
typedef unsigned uint;
typedef unsigned char uchar;
uint num = (static_cast<uint>(static_cast<uchar>(bytesReadString[1])) << 8 ) |
static_cast<uint>(static_cast<uchar>(bytesReadString[0]));
You might get the same result from:
typedef unsigned short ushort;
uint num = *reinterpret_cast<ushort *>(bytesReadString);
If the byte ordering is OK. Should work on Little Endian (x86 or x64), but not on Big Endian (PPC, Sparc, Alpha, etc.)
To generalise the "Send" code a bit --
void SendBuff(const void *pBuff, size_t nBytes)
{
const char *p = reinterpret_cast<const char *>(pBuff);
for (size_t i=0; i<nBytes; i++)
Serial.print(p[i], BYTE);
}
template <typename T>
void Send(const T &t)
{
SendBuff(&t, sizeof(T));
}
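With that in place, sending any plain value is a one-liner (assuming the receiver knows the order, sizes and byte order of the values):
Send(analogRead(0));   // sends a 2-byte int on the ATmega328
float reading = 12.34;
Send(reading);         // sends the 4 raw float bytes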
I have a binary array with hex values and I would like to convert it into decimal. Once that is done I would like to display it as a string.
BTW, I am working on an Intel processor under Ubuntu, and this will probably also be used on Sun and HP UNIX boxes.
When I do a straight-up assignment to an unsigned long it works, as in I get the decimal part. I still have not made that into a string yet, and I am not sure why it works. Furthermore, I read somewhere that when you put it in an unsigned long you will not have to worry about endianness.
Could I get some help on how to approach this, and on why the aforementioned works?
Please note: I am using C++ and not the .NET stuff, so BitConverter is not available.
EDIT:
First of all, thanks for the endian link; that IBM article really helped. Knowing that I was just putting the value into a register, rather than manually putting the byte values in consecutive memory locations, is what takes away the whole endianness issue; that made a big difference.
Here is a piece of code I wrote which I know can be better, but it's just something quick...
// use just first 6 bytes
byte truncHmac[6];
memset( truncHmac, 0x00, 6 );
unsigned long sixdigit[6];
for (int i = 0; i <= 5; i++)
{
    truncHmac[i] = hmacOutputBuffer[i];
    sixdigit[i] = truncHmac[i];
    std::cout << sixdigit[i] << std::endl;
}
the output is
HMAC: 76e061dc7512be8bcca2dce44e0b81608771714b
118
224
97
220
117
18
which makes sense that its take the first six bytes and converting them to decimals.
The only question I have now is how to make this into a string. Someone mentioned using manipulators? Could I get an example?
As for manipulators, use a stringstream instead of cout:
#include <sstream>

std::stringstream strs;
for (int i = 0; i < 6; ++i)
{
    strs << std::hex << static_cast<unsigned>(hmacOutputBuffer[i]) << std::endl;
}
strs.str(); // returns a std::string object.
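If the goal is a printable hex string like the HMAC line above, a small sketch building on the same idea (it assumes hmacOutputBuffer is an array of unsigned char / byte; the function name toHex is only illustrative):
#include <sstream>
#include <iomanip>
#include <string>
#include <cstddef>

std::string toHex(const unsigned char *buf, size_t len)
{
    std::ostringstream strs;
    for (size_t i = 0; i < len; ++i)
        strs << std::hex << std::setw(2) << std::setfill('0')
             << static_cast<unsigned>(buf[i]);  // cast so it prints as a number, not a char
    return strs.str();
}
// toHex(hmacOutputBuffer, 6) would give "76e061dc7512" for the HMAC above.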
EDIT: Writing loops as for (int i = 0; i < N; ++i) instead of for (int i = 0; i <= N - 1; ++i) will work better for you when you need to deal with C++ iterators, which define the end as "one past the last valid element".
Also, MAC addresses typically put - or : characters in between each byte. The use of : is deprecated because IPv6 addresses also use :.
Hex, decimal and char are just different ways to present the same value.
It is up to your application how to interpret the value.
You might want to read up on C++ iostream manipulators, especially on hex.
You should read up on Endianness.
IBM has a nice article on endianness in practice.