How to create a packet for serial communication with specific requirements? - c++

I want to communicate with a serial device. Each packet is sent as a continuous byte sequence, i.e. the first byte of the packet is PacCmd and the last byte of a packet is CheckSum. Multi-byte values are INTEL coded (little endian), i.e. the LSB is sent first. This INTEL coding applies wherever an individual data element of the packet consists of more than one byte (PacCmd, Data).
The datapack is described as follows:
DataPac ::= { PacCmd + DataLen + { DataByte } } + CheckSum
PacCmd - 2 Byte unsigned integer = unsigned short
DataLen - 1 Byte unsigned integer = unsigned char/uint8_t
DataByte - x Byte = Data to send/receive = mixed
CheckSum - 1 Byte unsigned integer = unsigned char/uint8_t
This much is no problem. But PacCmd is a problem. It consists of 3 control bits and a 13-bit integer value, the command. The description of PacCmd:
PacCmd ::= ( ReqBit | RspBit | MoreDataBit ) + Cmd
ReqBit - 1 Bit: If set, an acknowledgement is requested from the receiver
RspBit - 1 Bit: If set, this data packet is to be interpreted as an acknowledgement
MoreDataBit - 1 Bit: If set, an additional PacCmd field follows subsequent to the data belonging to this command
Cmd - 13 Bits Unsigned Integer: Identification via the content of the data field of this packet (only if the RspBit is not set)
My question is: How can I interpret the value of Cmd? How can I extract the 3 bits and the 13-bit integer? How can I set the 3 bits and the 13-bit integer? Any example code on how to extract the information? Do I just use the bitwise operator &? And what does the little-endian coding do to that?
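A minimal sketch of the masking and shifting this involves, assuming the three flag bits sit in the top three bits of the 16-bit PacCmd (ReqBit = bit 15, RspBit = bit 14, MoreDataBit = bit 13) and Cmd occupies the lower 13 bits; the bit positions and the helper names (makePacCmd, parsePacCmd, serializePacCmd) are illustrative assumptions, so check them against the device documentation. The little-endian rule only matters when the assembled 16-bit value is split into the two bytes that go on the wire:

#include <cstdint>

// Assumed layout (verify against the device spec!):
// bit 15 = ReqBit, bit 14 = RspBit, bit 13 = MoreDataBit, bits 12..0 = Cmd
const uint16_t REQ_BIT  = 1u << 15;
const uint16_t RSP_BIT  = 1u << 14;
const uint16_t MORE_BIT = 1u << 13;
const uint16_t CMD_MASK = 0x1FFF;          // lower 13 bits

// Build a PacCmd value from its parts
uint16_t makePacCmd(bool req, bool rsp, bool more, uint16_t cmd)
{
    uint16_t pacCmd = cmd & CMD_MASK;      // keep only the 13 command bits
    if (req)  pacCmd |= REQ_BIT;
    if (rsp)  pacCmd |= RSP_BIT;
    if (more) pacCmd |= MORE_BIT;
    return pacCmd;
}

// Extract the parts again
void parsePacCmd(uint16_t pacCmd, bool &req, bool &rsp, bool &more, uint16_t &cmd)
{
    req  = (pacCmd & REQ_BIT)  != 0;
    rsp  = (pacCmd & RSP_BIT)  != 0;
    more = (pacCmd & MORE_BIT) != 0;
    cmd  = pacCmd & CMD_MASK;
}

// Little endian: the low byte of PacCmd is transmitted first
void serializePacCmd(uint16_t pacCmd, unsigned char out[2])
{
    out[0] = pacCmd & 0xFF;                // LSB first
    out[1] = (pacCmd >> 8) & 0xFF;         // MSB second
}

uint16_t deserializePacCmd(const unsigned char in[2])
{
    return (uint16_t)(in[0] | (in[1] << 8));
}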

Related

16-bit to 10-bit conversion code explanation

I came across the following code to convert 16-bit numbers to 10-bit numbers and store it inside an integer. Could anyone maybe explain to me what exactly is happening with the AND 0x03?
// Convert the data to 10-bits
int xAccl = (((data[1] & 0x03) * 256) + data[0]);
if (xAccl > 511) {
    xAccl -= 1024;
}
Link to where I got the code: https://www.instructables.com/id/Measurement-of-Acceleration-Using-ADXL345-and-Ardu/
The bitwise operator & applies a mask; in this case it clears the 6 highest bits of that byte, keeping only the 2 lowest bits.
Basically, this code does a modulo % 1024 (for unsigned values).
data[1] takes the 2nd byte; & 0x03 masks that byte with binary 11, i.e. keeps 2 bits; * 256 is the same as << 8, pushing those 2 bits into the 9th and 10th bit positions; adding data[0] combines these two bytes (personally I'd have used |, not +).
So xAccl now holds the 10-bit value, assembled from little-endian source bytes (data[0] is the low byte, data[1] carries the top 2 bits).
The > 511 seems to be a sign check; essentially, it is saying "if the 10th bit is set, treat the entire thing as a negative integer as though we'd used 10-bit two's complement rules".
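Equivalently, the same conversion can be written with shifts and an explicit sign-extension test on bit 9; a minimal sketch using the same data[] layout as above (the function name convert10bit is just for illustration):

#include <stdint.h>

int16_t convert10bit(const uint8_t data[2])
{
    // combine the low byte and the 2 masked high bits, as in the snippet above
    int value = ((data[1] & 0x03) << 8) | data[0];

    // sign-extend from 10 bits: if bit 9 is set, interpret as negative
    if (value & 0x200)
        value -= 1024;

    return (int16_t)value;
}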

Convert a 40 byte long data frame from hex to binary and subsequently to decimal [closed]

I am an amateur at software programming.
However, I have a task to convert a data frame that is 40 bytes long and given in hex values into binary values and subsequently into decimal values. I tried converting the values from hex to binary after reading them byte by byte. That didn't work very well, because some of the fields in the frame are made up of more than a single byte.
Let me explain a little in detail. I have a 40 byte long data frame that reads in hex like this:
0 40 ffffff82 2 0 0 28 6d
ffffffaf ffffffc8 0 41 0 8 78 8
72 17 16 16 0 42 0 2
1 2 1 16 ffffffff ffffffff 0 43
0 0 3 0 0 2 8 0
The reason I prefer not to convert these data by reading one byte at a time is that a single byte on its own does not always carry a complete meaning. Please read on to understand what I mean by this.
For example:
The 1st to 6th bytes represent fields that are just 1 byte each: the 1st byte is status, the 2nd byte is unit voltage, the 3rd is unit current, and so forth.
The 7th and 8th bytes together represent a 2-byte field, unit SOC, meaning unit SOC is a 16-bit value.
The 9th, 10th and 11th bytes together indicate Module 1 cell failure information, i.e. the failure information is a 24-bit value.
The 12th, 13th and 14th bytes together indicate Module 2 cell failure information, etc.
This being the case, how can I convert the incoming data frame into binary and subsequently into decimal without treating it as one byte after another?
I would appreciate it if someone could lend a helping hand with this.
Suppose you have read your data frame into a buffer like this:
unsigned char inputbuffer[40];
Set a pointer pointing to the beginning of the buffer:
unsigned char *p = inputbuffer;
You can extract single-byte fields trivially:
int status = *p++; /* first byte */
int maxvoltage = *p++; /* second byte */
int current = *p++; /* third byte */
A two-byte field is only slightly more complicated:
int soc = *p++;
soc = (soc << 8) | *p++;
This reads two bytes for soc, concatenating them together as firstbyte+secondbyte. That assumes that the data frame uses what's called "big endian" byte order (that is, most-significant or "biggest" byte first). If that gives you crazy values, it's likely that the data uses "little endian" order, in which case you can flip the bytes around, yielding secondbyte+firstbyte, by reading them like this instead:
int soc = *p++;
soc = soc | (*p++ << 8);
Alternatively, you can dispense with the pointer p, and access various bytes out of the inputbuffer array directly, although in that case you need to remember that arrays in C are 0-based:
int status = inputbuffer[0]; /* first byte */
int maxvoltage = inputbuffer[1]; /* second byte */
int current = inputbuffer[2]; /* third byte */
int soc = (inputbuffer[6] << 8) | inputbuffer[7];
or
int soc = inputbuffer[6] | (inputbuffer[7] << 8);
You can almost follow the same pattern for your 24-bit fields, except that for portability (and especially if you're on an old 16-bit machine) you need to take care to use a long int:
long int module_1_cell_failure = *p++;
module_1_cell_failure = (module_1_cell_failure << 8) | *p++;
module_1_cell_failure = (module_1_cell_failure << 8) | *p++;
or
long int module_1_cell_failure = *p++;
module_1_cell_failure |= (*p++ << 8);
module_1_cell_failure |= ((unsigned long)*p++ << 16);
or
long int module_1_cell_failure =
inputbuffer[8] | (inputbuffer[9] << 8) |
((unsigned long)inputbuffer[10] << 16);
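All of these snippets follow the same pattern, so if the frame has many multi-byte fields it may be worth wrapping it in a small helper; a minimal sketch, assuming little-endian fields (the function name read_le is made up for illustration):

/* Read an n-byte little-endian field starting at **pp and advance the pointer.
   n must not exceed sizeof(unsigned long). */
unsigned long read_le(unsigned char **pp, int n)
{
    unsigned long value = 0;
    int i;
    for (i = 0; i < n; i++)
        value |= (unsigned long)*(*pp)++ << (8 * i);
    return value;
}

With p pointing into inputbuffer as before, the fields above become read_le(&p, 1), read_le(&p, 2), read_le(&p, 3), and so on; a mirror-image version that shifts the running value left by 8 each time handles the big-endian case.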

C++/C: Prepend length to Char[] in bytes (binary/hex)

I'm looking to send UDP datagrams from a client, to a server, and back.
The server needs to add a header to each datagram (represented as a char[]) in byte format, which I've struggled to find examples of. I know how to send it as actual text characters, but I want to send it in "effectively" binary form (e.g. if the length were 40 bytes, I'd want to prepend 0x28, or its 2-byte unsigned equivalent, rather than '0028' in ASCII char form or similar, which would be 4 bytes instead of a potential 2).
As far as I can work out my best option is below:
unsigned int length = dataLength; //length of the data received
char test[512] = { (char)length };
Is this approach valid, or will it cause problems later?
Further, this gives me a hard limit of 255 if I'm not mistaken. How can I best represent it as 2 bytes to extend my maximum length?
EDIT: I need the length of each datagram to be prepended because I will be building each datagram into a larger frame, and the recipient needs to be able to take the frame apart into its individual information elements, which I think means I need the length included so the recipient can work out where each element ends and the next begins.
You probably need something like this:
char somestring[] = "Hello World!";
char sendbuffer[1000];
int length = strlen(somestring);
sendbuffer[0] = length & 0xff; // put LSB of length
sendbuffer[1] = (length >> 8) & 0xff; // put MSB of length
strcpy(&sendbuffer[2], somestring); // copy the string right after the length
sendbuffer is the buffer that will be sent; I fixed it to a maximum length of 1000, allowing for strings up to a length of 997 being sent (1000 - 2 bytes for the length - 1 byte for the NUL terminator).
LSB means least significant byte and MSB means most significant byte. Here we put the LSB first and the MSB second; this convention is called little endian, and the other way round would be big endian. You need to make sure the length is correctly decoded on the receiver side. If the receiver's architecture has a different endianness than the sender's, the length may be decoded incorrectly depending on the code. Google "endianness" for more details.
sendbuffer will look like this in memory:
0x0c 0x00 0x48 0x65 0x6c 0x6c ...
|   12   | 'H'| 'e'| 'l'| 'l'| ...
//... Decoding (assuming short is a 16 bit type on the receiver side)
// first method (won't work if endiannnes is different on receiver side)
int decodedlength = *((unsigned short*)sendbuffer);
// second method (endiannness safe)
int decodedlength2 = (unsigned char)sendbuffer[0] | (unsigned char)sendbuffer[1] << 8;
char decodedstring[1000];
strcpy(decodedstring, &sendbuffer[2]);
Possible optimisation:
If the majority of the strings you send are shorter than 255 bytes, you can optimize by not systematically prepending two bytes but only one byte most of the time, but that's another story.
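One possible scheme for that, purely as an illustration (it is not part of the answer above and not a standard format): use the top bit of the first length byte as a flag that a second byte follows, so lengths up to 127 cost one byte and lengths up to 32767 cost two:

// Sketch of a variable-size length prefix (assumed scheme):
//   0..127     -> 1 byte:  0xxxxxxx
//   128..32767 -> 2 bytes: 1xxxxxxx xxxxxxxx  (high bit marks the 2-byte form)
int encode_length(unsigned char *out, unsigned int length)
{
    if (length < 0x80) {
        out[0] = (unsigned char)length;
        return 1;                                     // wrote 1 byte
    }
    out[0] = (unsigned char)(0x80 | (length >> 8));   // flag + upper 7 bits
    out[1] = (unsigned char)(length & 0xff);          // lower 8 bits
    return 2;                                         // wrote 2 bytes
}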

(C++) Integer on a specific number of bits (MIDI File)

The MIDI standard for music allows delta-time durations to be coded as integer values (representing ticks).
For example I have a delta time of 960.
The binary value of 960 is 1111000000.
The thing is that the MIDI standard doesn't code the number on 16 bits.
It codes it on 14 bits and then adds 10 as the first 2 bits to create another 16-bit value, the 1 meaning that there is a following byte and the 0 meaning that it is the last byte.
My question is: how can I easily calculate 960 as a binary value coded on 14 bits?
Cheers
In the bytes that make up a delta time, the most significant bit specifies whether another byte with more bits is following.
This means that a 14-bit value like 00001111000000 is split into two parts, 0000111 and 1000000, and encoded as follows:
1 0000111   0 1000000
^ ^^^^^^^   ^ ^^^^^^^
| |         | |
| |         | lower 7 bits
| |         last byte (0 = no byte follows)
| upper 7 bits
more bytes follow (1 = another byte follows)
In C, a 14-bit value could be encoded like this:
int value = 960;
write_byte(0x80 | ((value >> 7) & 0x7f));
write_byte(0x00 | ((value >> 0) & 0x7f));
(Also see the function var_value() in arecordmidi.c.)
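For delta times that need more than 14 bits, the same idea generalizes to a loop over 7-bit groups; a minimal sketch, assuming the same write_byte() helper as above (MIDI variable-length quantities use at most 4 bytes, i.e. 28 bits):

#include <stdint.h>

void write_byte(unsigned char b);   /* as used in the snippet above */

/* Encode a MIDI variable-length quantity: 7 data bits per byte,
   with the top bit set on every byte except the last. */
void write_var_len(uint32_t value)
{
    unsigned char buf[4];
    int n = 0;

    do {
        buf[n++] = value & 0x7f;    /* collect 7 bits at a time, lowest group first */
        value >>= 7;
    } while (value != 0 && n < 4);

    while (n-- > 0)                 /* emit highest group first */
        write_byte(buf[n] | (n > 0 ? 0x80 : 0x00));
}

For value = 960 this emits 0x87 0x40, i.e. exactly the two bytes 1 0000111 and 0 1000000 shown above.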
You can specify any number of bits as length inside a struct like so:
struct customInt {
    unsigned int n:14; // 14-bit long unsigned integer type
};
Or you can make your own functions that take care of these kinds of specific calculations/values.
If you are using unsigned integers, just do the calculations normally.
Start with
value = 960 ;
To convert the final output to 14 bits, do
value &= 0x3FFF ;
To add binary 10 to the front do
value |= 0x8000 ;

How to store two bytes in a BYTE array as an int (or something similar)?

I am writing a bittorrent client in C++ that receives a message from a tracker (server) containing several 6 byte strings. The first 4 bytes represent the IP address of a peer and the next 2 bytes represent the port number that the peer is listening on.
I have worked out how to convert the ip bytes into a human readable ip address but am struggling to convert the two bytes representing the port number into an int (or something similar)
Here are my efforts so far:
BYTE portbinary[2];
unsigned short peerport;
//trackers[i]->peersBinary[j * 6 + 4] is the first byte
portbinary[0] = trackers[i]->peersBinary[j * 6 + 4];
//trackers[i]->peersBinary[j * 6 + 5] is the second byte
portbinary[1] = trackers[i]->peersBinary[j * 6 + 5];
peerport = *portbinary;
Upon examination peerport only seems to contain the integer representation of the first byte, how might I be able to fix this?
Thanks in advance :)
I prefer using bitwise operations instead of type punning because it brings no issues with endianness at all (the port number comes as a big endian number, and many systems today are little endian).
int peerport = (portbinary[0] << 8) | portbinary[1];
Since it seems like the data is aligned, you can just use
peerport = ntohs(*(uint16_t *)(trackers[i]->peersBinary + j * 6 + 4));
Since portbinary is an array of BYTE, then *portbinary is equivalent to portbinary[0].
A portable way to achieve your result could be:
peerport = portbinary[0];
peerport = 256*peerport + portbinary[1];
This assumes portbinary was delivered in network byte order.
Solution with unions:
union port_extractor
{
    BYTE raw_port[2];
    unsigned short port_assembled;
};
This will work if your computer's endianness is the same as that of the representation you fetch from the network. Sorry, I don't know how the BitTorrent protocol sends it.
If the endianness is the opposite, then the solution you are bound to use is not as nice:
unsigned short port_assembled = (unsigned short)raw_port[first_byte] | ((unsigned short)raw_port[second_byte] << 8);
// first_byte = 0, second_byte = 1 or vice versa
// depending on source data endianess
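Putting the pieces together, a minimal sketch of decoding one 6-byte compact peer entry as described in the question (the function name decodePeer and the direct use of printf are illustrative; the tracker sends both the IP and the port most-significant byte first, i.e. in network byte order):

#include <cstdio>

// Decode one 6-byte entry: 4 bytes IPv4 address + 2 bytes port,
// both in network byte order (big endian).
void decodePeer(const unsigned char peer[6])
{
    char ip[16];
    std::snprintf(ip, sizeof ip, "%d.%d.%d.%d",
                  peer[0], peer[1], peer[2], peer[3]);

    // Assemble the port with shifts: works regardless of host endianness
    unsigned int port = ((unsigned int)peer[4] << 8) | peer[5];

    std::printf("%s:%u\n", ip, port);
}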