I'm going to connect to a device that uses the MODBUS protocol, and I have to calculate a CRC (CRC16). There are standard implementations of this protocol, but I have to compute my CRC using this formula:
X^15 + X^2 + 1 (there is a standard implementation with this formula: X^16 + X^15 + X^2 + 1)
I've tested different values for the CRC, but none of them gives me the correct answer. I should write some bytes to a port, and at the end of this byte string I should write two CRC bytes in order to obtain my device information.
What is your question?
I'm assuming your question is "How do I calculate a MODBUS CRC at the end of a message so the MODBUS device at the other end of the cable recognizes it as a valid MODBUS message?"
I try to get test vectors in hand first,
before I implement yet another checksum or CRC function.
Do you have any examples of valid messages, including the proper/expected CRC at the end?
According to Wikipedia: cyclic redundancy check,
"Since the high-order bit is always 1, and since an n-bit CRC must be defined by an (n+1)-bit divisor which overflows an n-bit register, some writers assume that it is unnecessary to mention the divisor's high-order bit."
So writers that say that MODBUS uses a "X^15 + X^2 + 1" polynomial (with an understood x^16, since it's a 16-bit CRC) are referring to exactly the same polynomial as other writers that say MODBUS uses a "X^16+X^15+X^2+1" polynomial.
Both authors will write code that generates exactly the same CRC, and their implementations are interoperable with each other.
Also, people calculating the standard MODBUS CRC in the "forward" direction often use the magic constant 0x8005.
Those calculating the standard MODBUS CRC in the "reverse" direction often use the magic constant 0xA001 instead (the bitwise reverse of 0x8005).
Both approaches generate exactly the same CRC bytes and are interoperable with each other.
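For illustration, here is an untested sketch (in the spirit of the bit-oriented code further down) of the "forward" direction using 0x8005. To get byte-identical MODBUS CRCs, each input byte must be bit-reversed on the way in and the register bit-reversed on the way out; the initial value 0xFFFF is its own bit-reverse, so it stays the same:
// warning: untested sketch of the "forward" (MSB-first) calculation
static uint8_t rev8(uint8_t b) { // reverse the bits of one byte
    b = (uint8_t)((b >> 4) | (b << 4));
    b = (uint8_t)(((b & 0xCC) >> 2) | ((b & 0x33) << 2));
    b = (uint8_t)(((b & 0xAA) >> 1) | ((b & 0x55) << 1));
    return b;
}
static uint16_t rev16(uint16_t w) { // reverse the bits of a 16-bit word
    uint16_t r = 0;
    for (int i = 0; i < 16; i++) { r = (uint16_t)((r << 1) | (w & 1)); w >>= 1; }
    return r;
}
uint16_t modbusCRC_forward(const uint8_t* data, int length) {
    uint16_t crc = 0xFFFF;
    for (int pos = 0; pos < length; pos++) {
        crc ^= (uint16_t)(rev8(data[pos])) << 8; // feed the bit-reversed byte into the top
        for (int i = 0; i < 8; i++) {
            if (crc & 0x8000)
                crc = (uint16_t)((crc << 1) ^ 0x8005); // MSB set: shift left, XOR 0x8005
            else
                crc = (uint16_t)(crc << 1);
        }
    }
    return rev16(crc); // bit-reverse the register to match the "reverse" result
}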
There are many implementations online of MODBUS CRC calculation;
perhaps you might find one of them useful.
online live CRC calculator with many options
http://www.zorc.breitbandkatze.de/crc.html
On-line CRC calculation
http://www.lammertbies.nl/comm/info/crc-calculation.html
and many others.
Many implementations are byte-oriented; these run a little faster but require a big lookup table
(and it's far from obvious to me how to test whether the lookup table is valid).
Many implementations are bit-oriented; these produce a much shorter program
and have fewer dusty corners where bugs can lurk (but take about 8 times as long to calculate the checksum), such as the following:
// warning: untested code
// optimized for size rather than speed
// (i.e., uses a bit-oriented calculation rather than a table-driven calculation)
#include <stdint.h>
#include <assert.h>

uint16_t modbusCRC(const uint8_t* data, int length) {
    uint16_t crc = 0xFFFF;
    for (int pos = 0; pos < length; pos++) {
        crc ^= (uint16_t)data[pos];
        for (int i = 0; i < 8; i++) {
            if (crc & 1) {      // LSB is set
                crc >>= 1;      // shift right
                crc ^= 0xA001;  // XOR 0xA001
            } else {            // LSB is not set
                crc >>= 1;
            }
        }
    }
    return crc;
}

void send_message(uint8_t* message, int length); // placeholder: write the bytes to your port

int main(void) {
    uint8_t message[80] = { // 6-byte test vector
        0x11, 0x03, 0x00, 0x6B, 0x00, 0x03
    };
    int message_length = 6;
    uint16_t the_CRC = modbusCRC(message, message_length); // find CRC of the message
    message[message_length++] = the_CRC & 0xFF; // send low byte first
    message[message_length++] = the_CRC >> 8;   // then send high byte
    assert(0x76 == message[6]); // test against known-good CRC
    assert(0x87 == message[7]);
    send_message(message, message_length); // send all 8 bytes of the message,
                                           // including the two MODBUS CRC bytes.
                                           // (Must send *all* 8 bytes of the message,
                                           // even though there are multiple 0x00 bytes.)
}
"Binary MODBUS" ("Modbus RTU Frame Format") sends all the data of the message as raw 8-bit bytes, including the 2 CRC bytes.
"ASCII MODBUS" ("Modbus ASCII Frame Format") sends the message as more-or-less plain ASCII text, including an 8 bit checksum.
(The checksum is transmitted as 2 bytes -- as 2 ASCII hexadecimal characters).
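If you need the ASCII variant, the LRC is easy: it is the two's complement of the 8-bit sum of the message bytes (before they are converted to hex text). A minimal untested sketch:
// warning: untested sketch of the Modbus ASCII LRC
uint8_t modbusLRC(const uint8_t* data, int length) {
    uint8_t lrc = 0;
    for (int i = 0; i < length; i++)
        lrc += data[i];      // 8-bit sum, carries discarded
    return (uint8_t)(-lrc);  // two's complement of the sum
}
The result is then transmitted as 2 ASCII hexadecimal characters, as noted above.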
Related
So I have a little piece of code that takes 2 uint8_t's and places them next to each other, and then returns a uint16_t. The point is not adding the 2 variables, but putting them next to each other and creating a uint16_t from them.
The way I expect this to work is that when the first uint8_t is 0, and the second uint8_t is 1, I expect the uint16_t to also be one.
However, this is in my code not the case.
This is my code:
uint8_t *bytes = new uint8_t[2];
bytes[0] = 0;
bytes[1] = 1;
uint16_t out = *((uint16_t*)bytes);
It is supposed to reinterpret the uint8_t pointer bytes as a uint16_t pointer, and then read the value through it. I expect that value to be 1, since x86 is little endian. However, it returns 256.
Setting the first byte to 1 and the second byte to 0 makes it work as expected. But I am wondering why I need to switch the bytes around in order for it to work.
Can anyone explain that to me?
Thanks!
There is no uint16_t or compatible object at that address, and so the behaviour of *((uint16_t*)bytes) is undefined.
I expect that value to be 1 since x86 is little endian. However it returns 256.
Even if the program was fixed to have well defined behaviour, your expectation is backwards. In little endian, the least significant byte is stored in the lowest address. Thus 2 byte value 1 is stored as 1, 0 and not 0, 1.
Does endianness also affect the order of the bits in the byte, or not?
There is no way to access a bit by "address" [1], so there is no concept of endianness. When converting to text, bits are conventionally shown most significant on the left and least on the right, just like digits of decimal numbers. I don't know if this is true in right-to-left writing systems.
[1] You can sort of create "virtual addresses" for bits using bitfields. The order of bitfields, i.e., whether the first bitfield is most or least significant, is implementation defined and not necessarily related to byte endianness at all.
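For illustration, a minimal bitfield sketch (the field names are made up; where first lands in the byte is entirely up to the compiler):
struct Flags {
    unsigned first : 1; // may occupy the most or the least significant bit
    unsigned rest  : 7; // the remaining 7 bits
};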
Here is a correct way to set the two octets of a uint16_t. The result will depend on the endianness of the system:
#include <cstddef> // std::byte
#include <cstdint> // uint16_t

// no need to complicate a simple example with dynamic allocation
uint16_t out;
// note that there is an exception in the language rules that
// allows accessing any object through narrow (unsigned) char
// or std::byte pointers; thus the following is well defined
std::byte* data = reinterpret_cast<std::byte*>(&out);
data[0] = std::byte{1}; // std::byte has no implicit conversion from int
data[1] = std::byte{0};
Note that assuming that input is in native endianness is usually not a good choice, especially when compatibility across multiple systems is required, such as when communicating through network, or accessing files that may be shared to other systems.
In these cases, the communication protocol, or the file format typically specify that the data is in specific endianness which may or may not be the same as the native endianness of your target system. De facto standard in network communication is to use big endian. Data in particular endianness can be converted to native endianness using bit shifts, as shown in Frodyne's answer for example.
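For example, here is a minimal sketch of decoding a 16-bit big-endian (network order) value from a message buffer; because it uses shifts rather than pointer casts, it behaves the same on any host:
// decode a big-endian 16-bit value; host endianness is irrelevant
uint16_t read_u16_be(const unsigned char* p) {
    return (uint16_t)((p[0] << 8) | p[1]); // first byte is the most significant
}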
In a little endian system the small bytes are placed first. In other words: The low byte is placed on offset 0, and the high byte on offset 1 (and so on). So this:
uint8_t* bytes = new uint8_t[2];
bytes[0] = 1;
bytes[1] = 0;
uint16_t out = *((uint16_t*)bytes);
Produces the out = 1 result you want.
However, as you can see this is easy to get wrong, so in general I would recommend that instead of trying to place stuff correctly in memory and then cast it around, you do something like this:
uint16_t out = lowByte + (highByte << 8);
That will work on any machine, regardless of endianness.
Edit: Bit shifting explanation added.
x << y means to shift the bits in x y places to the left (>> moves them to the right instead).
If X contains the bit-pattern xxxxxxxx, and Y contains the bit-pattern yyyyyyyy, then (X << 8) produces the pattern: xxxxxxxx00000000, and Y + (X << 8) produces: xxxxxxxxyyyyyyyy.
(And Y + (X<<8) + (Z<<16) produces zzzzzzzzxxxxxxxxyyyyyyyy, etc.)
A single shift to the left is the same as multiplying by 2, so X << 8 is the same as X * 2^8 = X * 256. That means that you can also do: Y + (X*256) + (Z*65536), but I think the shifts are clearer and show the intent better.
Note that again: Endianness does not matter. Shifting 8 bits to the left will always clear the low 8 bits.
You can read more here: https://en.wikipedia.org/wiki/Bitwise_operation. Note the difference between arithmetic and logical shifts - in C/C++, unsigned values use logical shifts, and signed values typically use arithmetic shifts (right-shifting a negative signed value is implementation-defined).
If p is a pointer to some multi-byte value, then:
"Little-endian" means that the byte at p is the least-significant byte, in other words, it contains bits 0-7 of the value.
"Big-endian" means that the byte at p is the most-significant byte, which for a 16-bit value would be bits 8-15.
Since Intel x86 is little-endian, bytes[0] contains bits 0-7 of the uint16_t value and bytes[1] contains bits 8-15. Since you are trying to set bit 0, you need:
bytes[0] = 1; // Bits 0-7
bytes[1] = 0; // Bits 8-15
Your code works, but you misinterpreted how to read "bytes":
#include <cstdint>
#include <cstddef>
#include <iostream>
int main()
{
    uint8_t *in = new uint8_t[2];
    in[0] = 3;
    in[1] = 1;
    uint16_t out = *((uint16_t*)in);
    std::cout << "out: " << out << "\n in: " << in[1]*256 + in[0] << std::endl;
    return 0;
}
By the way, you should take care of alignment when casting this way.
One way to think about numbers is in MSB/LSB order,
where the MSB is the highest bit and the LSB is the lowest bit of the value.
For example:
(u)int32: MSB: Bit 31 ... LSB: Bit 0
(u)int16: MSB: Bit 15 ... LSB: Bit 0
(u)int8 : MSB: Bit 7  ... LSB: Bit 0
with your cast to a 16-bit value, the bytes arrange like this:
16-bit value (MSB ... LSB)    BYTE[1] (Bit 7..0)   BYTE[0] (Bit 7..0)
0000 0001 0000 0000           0000 0001            0000 0000
which is 256 -> the correct value.
Taken from IEEE 802.3,
Mathematically, the CRC value corresponding to a given MAC frame is defined by the following procedure:
a) The first 32 bits of the frame are complemented.
b) The n bits of the protected fields are then considered to be the
coefficients of a polynomial M(x) of degree n - 1. (The first bit
of the Destination Address field corresponds to the x^(n-1) term and the last
bit of the MAC Client Data field (or Pad field if present) corresponds to the
x^0 term.)
c) M(x) is multiplied by x^32 and divided by G(x), producing a remainder R(x) of degree ≤ 31.
d) The coefficients of R(x) are considered to be a 32-bit sequence.
e) The bit sequence is complemented and the result is the CRC.
https://www.kernel.org/doc/Documentation/crc32.txt
A big-endian CRC written this way would be coded like:
for (i = 0; i < input_bits; i++) {
    multiple = remainder & 0x80000000 ? CRCPOLY : 0;
    remainder = (remainder << 1 | next_input_bit()) ^ multiple;
}
Where is part c), where M(x) is multiplied by x^32? I don't see 32 zeros appended to any number.
Also, the following piece of code makes no sense to me. The code and the math don't really match up.
Evaluating the differences in CRC-32 implementations
and
unsigned short
crc16_update(unsigned short crc, unsigned char nextByte)
{
    crc ^= nextByte;
    for (int i = 0; i < 8; ++i) {
        if (crc & 1)
            crc = (crc >> 1) ^ 0xA001;
        else
            crc = (crc >> 1);
    }
    return crc;
}
What are these implementations doing? None of them really resemble the original procedure.
Even after reading the very end of this it still makes no sense:
http://www.relisoft.com/science/crcmath.html
This tutorial (also here, here, and here for those who will complain about link rot), in particular "10. A Slightly Mangled Table-Driven Implementation", explains well the optimization to avoid feeding an extra 32 zero bits at the end.
The bottom line is that you feed the bits into the end of the register instead of the start, which has the same effect as feeding a register-length's worth of zeros at the end.
The tutorial also shows nicely how the implementation you quoted implements the long division over GF(2).
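For comparison, here is an untested sketch of the same trick for CRC-32: a reflected (shift-right) implementation using 0xEDB88320, the bit-reverse of the IEEE 802.3 polynomial 0x04C11DB7. XORing each message byte into the low end of the register has the same effect as appending 32 zero bits (the multiplication by x^32) in the textbook definition. Per steps a) and e), the caller starts with crc = 0xFFFFFFFF and complements the final result:
// warning: untested sketch; reflected CRC-32 byte update
uint32_t crc32_update(uint32_t crc, unsigned char nextByte) {
    crc ^= nextByte; // feed the byte into the *low* end of the register
    for (int i = 0; i < 8; ++i) {
        if (crc & 1)
            crc = (crc >> 1) ^ 0xEDB88320;
        else
            crc = (crc >> 1);
    }
    return crc;
}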
Problem:
I have a 6 DoF accelerometer that I can access through SDA/SCL via an Arduino UNO. I can read inputs through commands like:
LreadAccX()
There are 2 registers for the sensor, a LOW and HIGH value. The above would read the lower register's X acceleration. These are 8 bit numbers. For example, the above might return:
LreadAccX()
>>>> 01000010
41 in decimal, by the way. I need to get these out as fast as possible; we'd like 400 Hz to 1 kHz if possible. That means just spitting out binary data and then post-processing it.
Example:
0000 0001 1100 1001
This may be a value for the X acceleration. In decimal it says: 457. That's 3 different ASCII chars that I have to log, not 2 as in binary.
1111 0001 1100 1001
This is 61897, so 5 ASCII chars, vs. just the 2 binary ones. Obviously, I want to use binary to optimize for speed.
My Solution
void loop() {
    /*
    print data to processing. Data broken up into 2 parts, one for the high and low
    registers in the sensor.
    */
    print_bytes(HreadAccX(), LreadAccX(), HreadAccY(), LreadAccY(), HreadAccZ(), LreadAccZ());
    print_bytes(HreadGyroX(), LreadGyroX(), HreadGyroY(), LreadGyroY(), HreadGyroZ(), LreadGyroZ());
    Serial.print("A");
}
inline void print_bytes(int HaX, int LaX, int HaY, int LaY, int HaZ, int LaZ)
{
    char l = (LaZ >> 48) & 0xff;
    char k = (LaZ >> 44) & 0xff;
    char j = (HaZ >> 40) & 0xff;
    char i = (HaZ >> 36) & 0xff;
    char h = (LaY >> 32) & 0xff;
    char g = (LaY >> 28) & 0xff;
    char f = (HaY >> 24) & 0xff;
    char e = (HaY >> 20) & 0xff;
    char d = (LaX >> 16) & 0xff;
    char c = (LaX >> 12) & 0xff;
    char b = (HaX >> 8) & 0xff;
    char a = HaX & 0xff;
    putchar(l);
    putchar(k);
    putchar(j);
    putchar(i);
    putchar(h);
    putchar(g);
    putchar(f);
    putchar(e);
    putchar(d);
    putchar(c);
    putchar(b);
    putchar(a);
    Serial.print(l);
    Serial.print(k);
    Serial.print(j);
    Serial.print(i);
    Serial.print(h);
    Serial.print(g);
    Serial.print(f);
    Serial.print(e);
    Serial.print(d);
    Serial.print(c);
    Serial.print(b);
    Serial.print(a);
}
The output is something like:
>>>> asFkDi?g-g^&A
as Fk Di ?g -g ^& A (for clarity, the different 16-bit values have spaces between them, and the 'A' is
there to show a start/stop marker)
It's just garbage in ASCII, but it translates to meaningful 16-bit numbers.
However, this is Chewbacca as far as elegance goes. It also runs at about 200 Hz, way too slow. I feel that it's the
Serial.print();
functions that are slowing it down. However, if I try:
Serial.print(a+b+c+d+e+f+g+h+i+j+k+l);
all I get is the addition of the numbers.
My Question
How do I get Serial.print() to output just a string of binary numbers from a set of arguments?
You may want Serial.write(). Assuming HaX etc. are each one byte, you could use something like:
char data_bytes[] = {HaX, LaX, HaY, LaY, HaZ, LaZ};
Serial.write(data_bytes, sizeof(data_bytes));
To send all those 8-bit values.
Some issues though:
With raw binary on serial, there are no character codes left over for indicating the start and end of a frame, so you may have problems with synchronisation. Normally, binary serial protocols have a special start-of-frame character and a special escape character. Whenever one of these appears in the data stream, you need to "escape" it (send a modified 2-byte sequence in place of the original "special" byte) so as not to confuse the receiver; see the sketch after this list.
When you have a function call that gives separate high and low values and then combines them, you need to make sure that they are a matching pair. If there is a change in value between the calls, you can get errors.
You are more likely limited by the baud rate of the serial connection than in the time executing the serial printing functions. If you are sending a 12 byte sequence at best you will only get 160Hz at 19.2kbps, or 960Hz at 115.2kbps
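As an illustration of the framing point above, here is a sketch of one common byte-stuffing convention (borrowed from HDLC/PPP; the specific values 0x7E, 0x7D, and 0x20 are that convention's choices, not anything your receiver will know automatically):
// sketch: 0x7E marks the start of a frame, 0x7D escapes; an escaped
// byte is sent as 0x7D followed by the original byte XOR 0x20, so
// 0x7E never appears inside a frame and the receiver can resync on it
const uint8_t FRAME_MARK = 0x7E;
const uint8_t ESCAPE = 0x7D;

void send_frame(const uint8_t* payload, size_t len) {
    Serial.write(FRAME_MARK);
    for (size_t i = 0; i < len; i++) {
        uint8_t b = payload[i];
        if (b == FRAME_MARK || b == ESCAPE) {
            Serial.write(ESCAPE);
            Serial.write((uint8_t)(b ^ 0x20));
        } else {
            Serial.write(b);
        }
    }
}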
First of all, Serial.print() converts to characters; use Serial.write() to write binary data, as said by sj0h.
This:
Serial.print(a+b+c+d+e+f+g+h+i+j+k+l);
is just summing all the values into one char and discarding the overflow. Also, because of the sum, even if no overflow occurred, you can't get your values back.
Another trick is to pack the data so you use every bit; for example, IMUs normally use 10-bit sensors, so you can buffer 4 axis readings and put them in just 5 bytes, as shown in the sketch below.
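A sketch (untested) of that packing, assuming each reading fits in the low 10 bits of a uint16_t:
// pack four 10-bit readings into 5 bytes (40 bits), most significant first
void pack4x10(const uint16_t r[4], uint8_t out[5]) {
    uint64_t acc = 0;
    for (int i = 0; i < 4; i++)
        acc = (acc << 10) | (r[i] & 0x3FF); // append 10 bits per reading
    for (int i = 4; i >= 0; i--) {
        out[i] = (uint8_t)(acc & 0xFF);     // peel bytes off the low end
        acc >>= 8;
    }
}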
Finally, the real trick is to use the highest possible (even non-standard) baud rate, like 460,800; I've tested an Arduino at around 1,000,000, but technically it should go as fast as 2,000,000 (check the real value) with a 16 MHz clock.
Most likely you're running into the serial link speed limit rather than anything to do with your sensor or your code or your printing strategy. Typical serial link speed is 9600 baud (bits per second). In "round numbers" that's only roughly 1000 cps (characters per second). ["Round number" means convenient for back-of-the-envelope calculations in your head, the right order of magnitude, and only one (or maybe two) significant digits.] If each message were, for example, 5 characters, you could get a maximum of 200 messages per second ...which is very similar to the 200 Hz you mention. Also, what counts in serial communication is the number of characters that have to go over the serial link to be displayed: a binary 41 will come out as "101001", which counts as six characters, not one. So the rough rule of thumb is to not print numbers in binary.
Most likely the baud rate limit will never let you get even anywhere close to the sensor speed limit ...and if somehow you do succeed in getting all the readings over the serial link, they would flip by so fast you couldn't read them anyway. Turning the baud rate way up will only sorta solve your problem (if it works at all - serial communication at hyper speeds is quite sensitive to distance and interference, such as from fluorescent lights or parallel extension cords, and may not be sufficiently reliable for you); it still won't get real close to sensor speed, and it will be unreadable.
I suggest you instead dump only every Nth sensor reading rather than all of them. (If you're really concerned about the values of the other readings, you could send an "average" of N readings rather than every single one. If you're really concerned that a sensor reading happened -without caring so much about its exact value- you could use just one bit to indicate "it happened": so for example F would mean 4 readings happened, FF would mean eight readings, 7 would mean 3 readings, 3F would mean 6 readings, and so on.)
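For example, a minimal sketch of the every-Nth idea (read_sample() and send_sample() are hypothetical placeholders for your own sensor read and serial output routines):
const int N = 8;  // send one value per 8 readings
int counter = 0;
long sum = 0;

void loop() {
    sum += read_sample();      // hypothetical: one raw sensor reading
    if (++counter == N) {
        send_sample(sum / N);  // hypothetical: transmit the average
        sum = 0;
        counter = 0;
    }
}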
My key is a 64-bit address and the output is a 1-byte number (0-255). Collisions are allowed, but the probability of them occurring should be low. Also, assume that the number of elements to be inserted is low, let's say not more than 255, so as to minimize the pigeonhole effect.
The addresses are addresses of the functions in the program.
uint64_t addr = ...
uint8_t hash = addr & 0xFF;
I think that meets all of your requirements.
I would XOR together the 2 LSB (least significant bytes); if this distributes badly, then add a 3rd one, and so forth.
The rationale behind this is the following: function addresses do not distribute uniformly. The problem normally lies in the lower (LSB) bits. Functions usually need to begin at addresses divisible by 4/8/16, so the 2-4 least significant bits are probably meaningless. By XORing with the next byte, you should get rid of most of these problems, and it's still pretty fast.
Function addresses are, I think, quite likely to be aligned (see this question, for instance). That seems to indicate that you want to skip least significant bits, depending on the alignment.
So, perhaps take the 8 bits starting from bit 3, i.e. skipping the least significant 3 bits (bits 0 through 2):
const uint8_t hash = (address >> 3);
This should be obvious from inspection of your set of addresses. In hex, watch the rightmost digit.
How about:
uint64_t data = 0x1213121212121B12; // an example 64-bit input
uint32_t d1 = (uint32_t)(data >> 32) ^ (uint32_t)(data);
uint16_t d2 = (uint16_t)(d1 >> 16) ^ (uint16_t)(d1);
uint8_t d3 = (uint8_t)(d2 >> 8) ^ (uint8_t)(d2);
return d3;
It combines all the bits of your 8 bytes with three shifts and three XOR instructions.
I have instructions on creating a checksum of a message described like this:
The checksum consists of a single byte equal to the two’s complement sum of all bytes starting from the “message type” word up to the end of the message block (excluding the transmitted checksum). Carry from the most significant bit is ignored.
Another description I found was:
The checksum value contains the twos complement of the modulo 256 sum of the other words in the data message (i.e., message type, message length, and data words). The receiving equipment may calculate the modulo 256 sum of the received words and add this sum to the received checksum word. A result of zero generally indicates that the message was correctly received.
I understand this to mean that I sum the value of all bytes in message (excl checksum), get modulo 256 of this number. get twos complement of this number and that is my checksum.
But I am having trouble with an example message (from the design doc, so I must assume it has been encoded correctly).
unsigned char arr[] = {0x80,0x15,0x1,0x8,0x30,0x33,0x31,0x35,0x31,0x30,0x33,0x30,0x2,0x8,0x30,0x33,0x35,0x31,0x2d,0x33,0x32,0x31,0x30,0xe};
So the last byte, 0xE, is the checksum. My code to calculate the checksum is as follows:
bool isMsgValid(unsigned char arr[], int len) {
    int sum = 0;
    for (int i = 0; i < (len-1); ++i) {
        sum += arr[i];
    }
    // modulo 256 sum
    sum %= 256;
    char ch = sum;
    // twos complement
    unsigned char twoscompl = ~ch + 1;
    return arr[len-1] == twoscompl;
}
int main(int argc, char* argv[])
{
    unsigned char arr[] = {0x80,0x15,0x1,0x8,0x30,0x33,0x31,0x35,0x31,0x30,0x33,0x30,0x2,0x8,0x30,0x33,0x35,0x31,0x2d,0x33,0x32,0x31,0x30,0xe};
    int arrsize = sizeof(arr) / sizeof(arr[0]);
    bool ret = isMsgValid(arr, arrsize);
    return 0;
}
The spec is here: http://www.sinet.bt.com/227v3p5.pdf
I assume I have misunderstood the algorithm required. Any idea how to create this checksum?
Flippin spec writer made a mistake in their data example. Just spotted this then came back on here and found others spotted too. Sorry if I wasted your time. I will study responses because it looks like some useful comments for improving my code.
You miscopied the example message from the pdf you linked. The second parameter length is 9 bytes, but you used 0x08 in your code.
The document incorrectly states "8 bytes" in the third column when there are really 9 bytes in the parameter. The second column correctly states "00001001".
In other words, your test message should be:
{0x80,0x15,0x1,0x8,0x30,0x33,0x31,0x35,0x31,0x30,0x33,0x30, // param1
0x2,0x9,0x30,0x33,0x35,0x31,0x2d,0x33,0x32,0x31,0x30,0xe} // param2
^^^
With the correct message array, ret == true when I try your program.
Agree with the comment: looks like the checksum is wrong. Where in the .PDF is this data?
Some general tips:
Use an unsigned type as the accumulator; that gives you well-defined behavior on overflow, and you'll need that for longer messages. Similarly, if you store the result in a char variable, make it unsigned char.
But you don't need to store it; just do the math with an unsigned type, complement the result, add 1, and mask off the high bits so that you get an 8-bit result.
Also, there's a trick here, if you're on hardware that uses twos-complement arithmetic: just add all of the values, including the checksum, then mask off the high bits; the result will be 0 if the input was correct.
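A minimal sketch of isMsgValid rewritten along those lines (an unsigned accumulator, and summing the transmitted checksum byte in as well):
bool isMsgValid(const unsigned char arr[], int len) {
    unsigned int sum = 0;
    for (int i = 0; i < len; ++i) // note: the checksum byte is included
        sum += arr[i];
    return (sum & 0xFF) == 0;     // a valid message sums to 0 mod 256
}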
The receiving equipment may calculate the modulo 256 sum of the received words and add this sum to the received checksum word.
It's far easier to use this condition to understand the checksum:
{byte 0} + {byte 1} + ... + {last byte} + {checksum} = 0 mod 256
{checksum} = -( {byte 0} + {byte 1} + ... + {last byte} ) mod 256
As the others have said, you really should use unsigned types when working with individual bits. This is also true when doing modular arithmetic. If you use signed types, you leave yourself open to a rather large number of sign-related mistakes. OTOH, pretty much the only mistake you open yourself up to when using unsigned numbers is things like forgetting that 2u-3u is a positive number.
(do be careful about mixing signed and unsigned numbers together: there are a lot of subtleties involved in that too)
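A small example of both pitfalls (assuming the typical 32-bit unsigned int):
#include <iostream>

int main() {
    unsigned int a = 2u - 3u; // wraps around to 4294967295, not -1
    int b = -1;
    unsigned int c = 7;
    std::cout << a << "\n";
    std::cout << (b < c) << "\n"; // prints 0: b converts to a huge unsigned value
}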