I’m working on the SPI communication between a microcontroller and the L9963E. The datasheet of the L9963E shows little information about the CRC calculation, but mentions:
a CRC of 6 bits,
a polynomial of X^6 + X^4 + X^3 + 1 = 0b1011001
a seed of 0b111000
The documentation also mentions in the SPI protocol details that the CRC is a value between 0x00 and 0x3F, and is "calculated on the [39-7] field of the frame", see Table 22.
I'm wondering: What is meant by "field [39-7]"? The total frame length is 40 bits, of which the CRC makes up 6 bits. I would expect a CRC calculation over the remaining 34 bits, but field 39-7 would mean either 33 bits (bit 7 inclusive) or 32 bits (excluding bit 7).
Since I have access to an L9963E evaluation board with pre-loaded firmware, I have hooked up a logic analyser. I have found the following example frames sent to the L9963E from the eval board; I am assuming that these are valid, error-free frames.
0xC21C3ECEFD
0xC270080001
0xE330081064
0xC0F08C1047
0x827880800E
0xC270BFFFF9
0xC2641954BE
Could someone clear up the datasheet for me, and maybe advise me on how to implement this CRC calculation?
All of the special frames have CRC == 0 (here "xor E000000000" injects the seed 0b111000 into bits 39..34 of the frame, and "mod 59" is the carry-less GF(2) remainder modulo the polynomial 0b1011001 = 0x59):
(0000000016 xor E000000000) mod 59 = 00
(C1FCFFFC6C xor E000000000) mod 59 = 00
(C1FCFFFC87 xor E000000000) mod 59 = 00
(C1FCFFFCDE xor E000000000) mod 59 = 00
(C1FCFFFD08 xor E000000000) mod 59 = 00
and as noted by Mark Adler only one of the messages in the question works:
(C270BFFFF9 xor E000000000) mod 59 = 00
Bit ranges in datasheets like this are always inclusive.
I suspect that this is just a typo, or the person who wrote it temporarily forgot that the bits are numbered from zero.
Looking at the other bit-field boundaries in Table 19 of the document you linked, it wouldn't make sense to exclude the bottom bit of the data field from the CRC, so I suspect the datasheet should say bits 39 to 6 inclusive.
There is a tool called pycrc that can generate C code to calculate a CRC with an arbitrary polynomial.
If a 40-bit message is in the low bits of uint64_t x, then this:
x ^= (uint64_t)0x38 << 34;      /* inject the 6-bit seed 0b111000 at bits 39..34 */
for (int j = 0; j < 40; j++)    /* divide all 40 bits by x^6 + x^4 + x^3 + 1 */
    x = x & 0x8000000000 ? (x << 1) ^ ((uint64_t)0x19 << 34) : x << 1;
x &= 0xffffffffff;              /* keep the low 40 bits; zero means the frame checks out */
gives x == 0 for all of the example messages in the document. But only for one of your examples in the question. You may not be extracting the data correctly.
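If you also want to generate the CRC when building a frame (rather than just check a received one), here is a minimal bit-at-a-time sketch that is consistent with the check above, assuming the CRC covers bits 39..6 with the datasheet's seed 0b111000 and polynomial x^6 + x^4 + x^3 + 1; the function name l9963e_crc6 is purely illustrative:
#include <stdint.h>

/* Returns the 6-bit CRC to place in bits 5..0 of a 40-bit L9963E frame.
   Assumption: the CRC covers bits 39..6, seed 0b111000, poly x^6+x^4+x^3+1. */
uint8_t l9963e_crc6(uint64_t frame)
{
    uint8_t crc = 0x38;                   /* seed from the datasheet */
    for (int bit = 39; bit >= 6; bit--) { /* bits 39..6 inclusive, MSB first */
        uint8_t in = (frame >> bit) & 1;
        uint8_t msb = (crc >> 5) & 1;
        crc = (crc << 1) & 0x3F;
        if (in ^ msb)
            crc ^= 0x19;                  /* 0b1011001 with its x^6 term dropped */
    }
    return crc;
}
Checking a received frame then amounts to comparing l9963e_crc6(frame) against the frame's low 6 bits, which agrees with the x == 0 test above.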
I'm building something which uses the DS18B20 temperature sensor.
First I am trying to understand the example CRC in the Maxim application note 27, "Understanding and Using Cyclic Redundancy Checks with Maxim iButton Products" (https://www.analog.com/en/technical-articles/understanding-and-using-cyclic-redundancy-checks-with-maxim-1wire-and-ibutton-products.html).
It doesn't look too hard to code the conversion but my problem is that I cannot find any calculator that gives me the correct answer of 0xA2.
On page 5, example 2 the complete ROM code is given in hex as A2=(CRC), 00 00 00 01 B8 1C 02=(Family code).
The generator polynomial is 100110001 (X^8 + X^5 + X^4 + 1).
On the site https://crccalc.com/ it has a CRC-8/MAXIM algorithm which has the correct generator, but the RefIn and RefOut are both true, whereas I cannot see anything in the application note about reversing parts (although I have tried this).
On the site https://tomeko.net/online_tools/crc8.php?lang=en it claims to implement the CRC from the application note but it gives the same answers as crccalc.com. Also note that crccalc has the same lookup table for the Maxim algorithm as the application note so no surprise the two web sites are giving the same answers.
Finally I found a site, https://www.rndtool.info/CRC-step-by-step-calculator/, that allows me to add the polynomial and bit stream in binary and shows the 'hand' calculation of the CRC. This says nothing about input and output refs, so I assume they are false. It gives different answers to the other two sites, probably because of the ref values, but still does not give 0xA2.
Has anyone correctly calculated the given value in the application note?
I don't want to start programming until I understand what is going on, and I cannot read data from a device if I cannot decipher the CRC correctly. This is driving me mad at the moment. I've tried the number reflected and forwards, with the generator reflected and forwards, plus reversing the answer, but I never get 0xA2.
You didn't give a language in your tags. Here is an example in C:
#include <stdio.h>

unsigned crc8maximdow(unsigned char *data, size_t len) {
    unsigned crc = 0;                       /* initial value is zero */
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (unsigned k = 0; k < 8; k++)    /* reflected algorithm: 0x8c is 0x31 bit-reversed */
            crc = crc & 1 ? (crc >> 1) ^ 0x8c : crc >> 1;
    }
    return crc;
}

int main(void) {
    unsigned char data[] = {2, 0x1c, 0xb8, 1, 0, 0, 0};   /* ROM bytes, least significant byte first */
    printf("0x%02x\n", crc8maximdow(data, sizeof(data)));
    return 0;
}
That prints 0xa2.
Using Mark's answer above I worked out what I was doing wrong with the Maxim example. Tutorials were talking about 'reflecting' the input data, so I literally took the 56-bit bit pattern and reversed it, which gave the wrong answer on the site https://crccalc.com/. Instead I reversed the order of the hex bytes, so that 00 00 00 01 B8 1C 02 becomes 02 1C B8 01 00 00 00, which gives the correct CRC of 0xA2 on the website. Hope this may help anyone else having the same problem.
While working with the following PDF, I found an example in
Section 4: CRC-16 Code and Example
(page 95 or 91) that shows a serial packet with a CRC-16 value of 133 (LSB) and 24 (MSB).
However, I have tried different calculators, for example:
Lammert
Elaborate calculator
CRC calc
but I cannot get the CRC16 values that the PDF indicates, regardless of the byte combination I use.
How can I correctly calculate the CRC16 in the example, preferably using one of these calculators? (otherwise, C/C++ code should work).
Thanks.
This particular CRC is CRC-16/ARC. crcany generates the code for this CRC, which includes this simple bit-wise routine:
#include <stddef.h>
#include <stdint.h>

uint16_t crc16arc_bit(uint16_t crc, void const *mem, size_t len) {
    unsigned char const *data = mem;
    if (data == NULL)
        return 0;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (unsigned k = 0; k < 8; k++) {
            crc = crc & 1 ? (crc >> 1) ^ 0xa001 : crc >> 1;
        }
    }
    return crc;
}
The standard interface is to do crc = crc16arc_bit(0, NULL, 0); to get the initial value (zero in this case), and then crc = crc16arc_bit(crc, data, len); with successive portions of the message to compute the CRC.
If you do that on the nine-byte message in the appendix, {1, 2, 1, 0, 17, 3, 'M', 'O', 'C'}, the returned CRC is 0x1885, which has the least significant byte 133 in decimal and most significant byte 24 in decimal.
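To make that concrete, here is a small usage sketch of that interface (intended to be compiled together with the routine above) on the nine-byte message from the appendix:
#include <stdio.h>
#include <stdint.h>

int main(void) {
    unsigned char msg[] = {1, 2, 1, 0, 17, 3, 'M', 'O', 'C'};
    uint16_t crc = crc16arc_bit(0, NULL, 0);     /* initial value (zero for CRC-16/ARC) */
    crc = crc16arc_bit(crc, msg, sizeof(msg));   /* feed the whole message in one call */
    printf("0x%04x  LSB=%u  MSB=%u\n",
           (unsigned)crc, (unsigned)(crc & 0xff), (unsigned)(crc >> 8));
    /* prints 0x1885  LSB=133  MSB=24 */
    return 0;
}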
Faster table-driven routines are also generated by crcany.
If you give 01 02 01 00 11 03 4d 4f 43 as hex to Lammert Bies' calculator, the very first one, called "CRC-16" gives 0x1885.
If you give 0102010011034d4f43 to crccalc.com and hit Calc-CRC16, the second line is CRC-16/ARC, with the result 0x1885.
I'm trying to write a table-based CRC routine for receiving Mode S uplink interrogator messages. On the downlink side, the CRC is just the 24-bit CRC based on polynomial P=0x1FFF409. So far, so good -- I wrote a table-based implementation that follows the usual byte-at-a-time convention, and it's working fine.
On the uplink side, though, things get weird. The protocol specification says that the target uplink address is calculated by finding:
U' = x^24 * U / G(x)
...where U is the received message and G(x) is the encoding polynomial 0x1FFF409, resulting in:
U' = x^24 * m(x) + A(x) + r(x) / G(x)
...where m(x) is the original message, A(x) is the address, and r(x) is the remainder. I want the low-order part of the quotient, A(x); i.e., the result of the GF(2) polynomial division operation instead of the remainder. The remainder is effectively discarded. The target address is encoded with the transmitted checksum such that the receiving aircraft can validate the checksum by comparing it with its address.
This is great and all, and I have a bitwise implementation which follows from the above. Please ignore the weird shifting of the polynomial and checksum; this has been cribbed from this Pascal implementation (on page 15), which assumes 32-bit registers and makes optimizations based on that assumption. In reality the message and checksum come as a single, 56-bit transmission.
# This is the reference bit-shifting implementation. It is slow.
def uplink_bitshift_crc():
    p = 0xfffa0480    # polynomial (0x1FFF409 shifted left 7 bits)
    a = 0x00000000    # rx'ed uplink data (32 bits)
    adr = 0xcc5ee900  # rx'ed checksum (24 bits, shifted left 8 bits)
    ad = 0            # will hold division result low-order bits
    for j in range(56):
        # if MSBit is 1, xor w/poly
        if a & 0x80000000:
            a = a ^ p
        # shift off the top bit of A (we're done with it),
        # and shift in the top bit of adr
        a = ((a << 1) & 0xFFFFFFFF) + ((adr >> 31) & 1)
        # shift off the top bit of adr
        adr = (adr << 1) & 0xFFFFFFFF
        if j > 30:
            # shift ad left 1 bit and shift in the msbit of a
            # this extracts the LS 24 bits of the division operation
            # and ignores the remainder at the end
            ad = ad + ((a >> 31) & 1)
            ad = ((ad << 1) & 0xFFFFFFFF)
    # correct the ad
    ad = ad >> 2
    return ad
The above is of course slower than molasses in software and I'd really like to be able to construct a lookup table that would allow similar byte-at-a-time calculation of the received address, or massage the remainder (which is quickly calculated) into a quotient.
TL;DR:
Given a message, the encoding polynomial, and the remainder (calculated by the normal CRC method), is there a faster way to obtain the quotient of the polynomial division operation than by using shift registers to do polynomial division "longhand"?
You might take a look at the PyCRC library; I guess this may answer your questions.
Too late for the OP, but I'm posting this for others that might see this question. You can generate two tables to operate a byte at a time. The first table is 256 by 8 bits, indexed by the current leading 8 bits of the dividend (message); the 8-bit values are the quotients. The second table is 256 by 32 bits, indexed by the 8-bit quotient; the 32-bit values are the product of the 8-bit quotient times the 25-bit polynomial (since this is a carryless multiply, the product is 32 bits: x^7 * x^24 = x^31). You xor that product to the upper 32 bits of the dividend, which zeroes out the upper 8 bits of the dividend. Then loop back for the next 8 bits of the dividend.
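Here is one way that scheme might look in C, under the question's framing of a 56-bit dividend and the 25-bit polynomial 0x1FFF409. This is a sketch rather than a drop-in implementation: the names quo_tab, prod_tab and div56_quotient are illustrative, and mapping the resulting 32-bit quotient onto the Mode S address bits still has to follow the alignment used by the question's reference code.
#include <stdint.h>

#define POLY 0x1FFF409ULL              /* degree-24 Mode S polynomial (25 bits) */

static uint8_t  quo_tab[256];          /* leading dividend byte -> next 8 quotient bits */
static uint32_t prod_tab[256];         /* 8-bit quotient chunk -> chunk (x) POLY, 32 bits */

static void make_tables(void)
{
    for (int b = 0; b < 256; b++) {
        /* bit-at-a-time division of the leading byte to get its 8 quotient bits */
        uint64_t d = (uint64_t)b << 24;
        uint8_t q = 0;
        for (int j = 31; j >= 24; j--) {
            q <<= 1;
            if (d & (1ULL << j)) {
                q |= 1;
                d ^= POLY << (j - 24);
            }
        }
        quo_tab[b] = q;
    }
    for (int q = 0; q < 256; q++) {
        /* carryless product of the 8-bit chunk and the 25-bit polynomial */
        uint32_t prod = 0;
        for (int i = 0; i < 8; i++)
            if (q & (1 << i))
                prod ^= (uint32_t)(POLY << i);
        prod_tab[q] = prod;
    }
}

/* Quotient of a 56-bit dividend divided by POLY, one byte of quotient per step.
   The 24-bit remainder (the usual CRC) is returned through *rem if non-NULL. */
static uint32_t div56_quotient(uint64_t dividend, uint32_t *rem)
{
    uint32_t quot = 0;
    for (int shift = 48; shift >= 24; shift -= 8) {
        uint8_t top = (dividend >> shift) & 0xFF;            /* leading 8 bits */
        uint8_t q8 = quo_tab[top];                           /* next 8 quotient bits */
        quot = (quot << 8) | q8;
        dividend ^= (uint64_t)prod_tab[q8] << (shift - 24);  /* zeroes those 8 bits */
    }
    if (rem)
        *rem = (uint32_t)(dividend & 0xFFFFFF);
    return quot;
}
make_tables() has to be called once up front; the result can be cross-checked against the bit-at-a-time routine in the question (or a longhand division) before trusting the tables.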
A modern x86 CPU has the carryless multiply instruction PCLMULQDQ, which operates on 128-bit xmm registers, performing a 64-bit by 64-bit multiply to produce a 128-bit product (since it's a carryless multiply, bit 127 is always 0, so it's really a 127-bit product). A multiply of the 56-bit message by the 41-bit constant 2^64/G(x) will produce a 96-bit product, of which the upper 32 bits will be the quotient (the lower 64 bits are not used).
I've been scratching my head around calculating a checksum to communicate with Unitronics PLCs using binary commands. They offer the source code but it's in a Windows-only C# implementation, which is of little help to me other than basic syntax.
Specification PDF (the checksum calculation is near the end)
C# driver source (checksum calculation in Utils.cs)
Intended Result
Below is the byte index, message description and the sample which does work.
# 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 | 24 25 26 27 28 29 | 30 31 32
# sx--------------- id FE 01 00 00 00 cn 00 specific--------- lengt CHKSM | numbr ot FF addr- | CHKSM ex
# 2F 5F 4F 50 4C 43 00 FE 01 00 00 00 4D 00 00 00 00 00 01 00 06 00 F1 FC | 01 00 01 FF 01 00 | FE FE 5C
The specification calls for calculating the accumulated value of the 22-byte message header and, separately, of the 6+ byte detail, taking the sum modulo 65536, and then returning the two's complement of that value.
Attempt #1
My understanding is that the tilde (~) operator in Python is directly derived from C/C++. After a day of writing the Python that creates the message, I came up with this (stripped-down version):
#!/usr/bin/env python
def Checksum( s ):
    x = ( int( s, 16 ) ) % 0x10000
    x = ( ~x ) + 1
    return hex( x ).split( 'x' )[1].zfill( 4 )

Details = ''
Footer = ''
Header = ''
Message = ''
Details += '0x010001FF0100'
Header += '0x2F5F4F504C4300FE010000004D000000000001000600'
Header += Checksum( Header )
Footer += Checksum( Details )
Footer += '5C'
Message += Header.split( 'x' )[1].zfill( 4 )
Message += Details.split( 'x' )[1].zfill( 4 )
Message += Footer
print Message
Message: 2F5F4F504C4300FE010000004D000000000001000600600L010001FF010001005C
I see an L in there, which is a different result to yesterday, which wasn't any closer. If you want a quick formula result based on the rest of the message: Checksum(Header) should return F1FC and Checksum(Details) should return FEFE.
The value it returns is nowhere near the same as the specification's example. I believe the issue may be one of two things: either the Checksum method isn't calculating the sum of the hex string correctly, or the Python ~ operator is not equivalent to the C++ ~ operator.
Attempt #2
A friend has given me his C++ interpretation of what the calculation SHOULD be; I just can't get my head around this code, as my C++ knowledge is minimal.
short PlcBinarySpec::CalcHeaderChecksum( std::vector<byte> _header ) {
    short bytesum = 0;
    for ( std::vector<byte>::iterator it = _header.begin(); it != _header.end(); ++it ) {
        bytesum = bytesum + ( *it );
    }
    return ( ~( bytesum % 0x10000 ) ) + 1;
}
I'm not entirely sure what the correct code should be… but if the intention is for Checksum(Header) to return f705, and it's returning 08fb, here's the problem:
x = ( ~( x % 0x10000 ) ) + 1
The short version is that you want this:
x = (( ~( x % 0x10000 ) ) + 1) % 0x10000
The problem here isn't that ~ means something different. As the documentation says, ~x returns "the bits of x inverted", which is effectively the same thing it means in C (at least on 2s-complement platforms, which includes all Windows platforms).
You can run into a problem with the difference between C and Python types here (C integral types are fixed-size, and overflow; Python integral types are effectively infinite-sized, and grow as needed). But I don't think that's your problem here.
The problem is just a matter of how you convert the result to a string.
The result of calling Checksum(Header), up to the formatting, is -2299, or -0x08fb, in both versions.
In C, you can pretty much treat a signed integer as an unsigned integer of the same size (although you may have to ignore warnings to do so, in some cases). What exactly that does depends on your platform, but on a 2s-complement platform, signed short -0x08fb is the bit-for-bit equivalent of unsigned 0xf705. So, for example, if you do sprintf(buf, "%04hx", -0x08fb), it works just fine—and it gives you (on most platforms, including everything Windows) the unsigned equivalent, f705.
But in Python, there are no unsigned integers. The int -0x08fb has nothing to do with 0xf705. If you do "%04hx" % -0x08fb, you'll get -8fb, and there's no way to forcibly "cast it to unsigned" or anything like that.
Your code actually does hex(-0x08fb), which gives you -0x8fb, which you then split on the x, giving you 8fb, which you zfill to 08fb, which makes the problem a bit harder to notice (because that looks like a perfectly valid pair of hex bytes, instead of a minus sign and three hex digits), but it's the same problem.
Anyway, you have to explicitly decide what you mean by "unsigned equivalent", and write the code to do that. Since you're trying to match what C does on a 2s-complement platform, you can write that explicit conversion as % 0x10000. If you do "%04hx" % (-0x08fb % 0x10000), you'll get f705, just as you did in C. And likewise for your existing code.
It's quite simple. I checked your friend's algorithm by adding all the header bytes manually on a calculator, and it yields the correct result (0xfcf1).
Now, I don't actually know python, but it looks to me like you are adding up half-byte values. You have made your header string like this:
Header = '2F5F4F504C4300FE010000004D000000000001000600'
And then you go through converting each character in that string from hex and adding it. That means you are dealing with values from 0 to 15. You need to take the characters two at a time (one byte) and convert that (values from 0 to 255). Or you need to use actual binary data instead of a text representation of the binary data.
At the end of the algorithm, you don't really need to use the ~ operator if you don't trust it. Instead you can do (0xffff - (x % 0x10000)) + 1. Bear in mind that prior to adding 1, the value might actually be 0xffff, so you need to take the entire result modulo 0x10000 afterwards. Your friend's C++ version uses the short datatype, so no modulo is necessary at all because the short will naturally overflow.
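For anyone who wants to check the arithmetic in code, here is a small C sketch of that calculation over the 22 header bytes of the sample message (the function name header_checksum is just illustrative); it prints fcf1, which appears in the frame LSB first as F1 FC:
#include <stdio.h>
#include <stddef.h>

/* Sum the bytes, take the sum modulo 0x10000, then the two's complement,
   again modulo 0x10000 so the result fits in 16 bits. */
static unsigned header_checksum(const unsigned char *data, size_t len)
{
    unsigned sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += data[i];
    return (0x10000 - (sum % 0x10000)) % 0x10000;
}

int main(void)
{
    const unsigned char header[22] = {
        0x2F, 0x5F, 0x4F, 0x50, 0x4C, 0x43, 0x00, 0xFE, 0x01, 0x00, 0x00,
        0x00, 0x4D, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x06, 0x00
    };
    printf("%04x\n", header_checksum(header, sizeof header));   /* fcf1 */
    return 0;
}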
What I must do is open a file in binary mode that contains stored data that is intended to be interpreted as integers. I have seen other examples, such as Stackoverflow - Reading “integer” size bytes from a char* array, but I want to try taking a different approach (I may just be stubborn, or stupid :/). I first created a simple binary file in a hex editor that reads as follows.
00 00 00 47 00 00 00 17 00 00 00 41
This (should) equal 71, 23, and 65 if the 12 bytes were divided into 3 integers.
After opening this file in binary mode and reading 4 bytes into an array of chars, how can I use bitwise operations to make char[0]'s bits be the first 8 bits of an int, and so on, until the bits of each char are part of the int?
My integer = 00      00      00      00
         +   ^       ^       ^       ^
Chars        Char[0] Char[1] Char[2] Char[3]
             00      00      00      47
So my integer (hex) = 00 00 00 47 = numerical value of 71
Also, I don't know how the endianness of my system comes into play here, so is there anything that I need to keep in mind?
Here is a code snippet of what I have so far, I just don't know the next steps to take.
std::fstream myfile;
myfile.open("C:\\Users\\Jacob\\Desktop\\hextest.txt", std::ios::in | std::ios::out | std::ios::binary);
if(myfile.is_open() == false)
{
    std::cout << "Error" << std::endl;
}
char* mychar;
std::cout << myfile.is_open() << std::endl;
mychar = new char[4];
myfile.read(mychar, 4);
I eventually plan on dealing with reading floats from a file and maybe a custom data type eventually, but first I just need to get more familiar with using bitwise operations.
Thanks.
You want the bitwise left shift operator:
typedef unsigned char u8; // in case char is signed by default on your platform
unsigned num = ((u8)chars[0] << 24) | ((u8)chars[1] << 16) | ((u8)chars[2] << 8) | (u8)chars[3];
What it does is shift the left argument a specified number of bits to the left, adding zeros from the right as stuffing. For example, 2 << 1 is 4, since 2 is 10 in binary and shifting one to the left gives 100, which is 4.
This can be written in a more general loop form:
unsigned num = 0;
for (int i = 0; i != 4; ++i) {
    num |= (u8)chars[i] << (24 - i * 8); // += could have also been used
}
The endianness of your system doesn't matter here; you know the endianness of the representation in the file, which is constant (and therefore portable), so when you read in the bytes you know what to do with them. The internal representation of the integer in your CPU/memory may be different from that of the file, but the logical bitwise manipulation of it in code is independent of your system's endianness; the least significant bits are always at the right, and the most at the left (in code). That's why shifting is cross-platform -- it operates at the logical bit level :-)
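As a quick sanity check, here is the expression applied to the first four bytes of the question's file; it prints 71 (variable names are arbitrary):
#include <stdio.h>

typedef unsigned char u8;

int main(void) {
    char chars[4] = {0x00, 0x00, 0x00, 0x47};   /* first integer in the file */
    unsigned num = ((u8)chars[0] << 24) | ((u8)chars[1] << 16)
                 | ((u8)chars[2] << 8)  |  (u8)chars[3];
    printf("%u\n", num);                        /* prints 71 */
    return 0;
}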
Have you thought of using Boost.Spirit to make a binary parser? You might hit a bit of a learning curve when you start, but if you want to expand your program later to read floats and structured types, you'll have an excellent base to start from.
Spirit is very well-documented and is part of Boost. Once you get around to understanding its ins and outs, it's really mind-boggling what you can do with it, so if you have a bit of time to play around with it, I'd really recommend taking a look.
Otherwise, if you want your binary to be "portable" - i.e. you want to be able to read it on a big-endian and a little-endian machine, you'll need some sort of byte-order mark (BOM). That would be the first thing you'd read, after which you can simply read your integers byte by byte. Simplest thing would probably be to read them into a union (if you know the size of the integer you're going to read), like this:
union U
{
    unsigned char uc_[4];
    unsigned long ui_;
};
Read the data into the uc_ member, swap the bytes around if you need to change endianness, and read the value from the ui_ member. There's no shifting etc. to be done, except for the swapping if you want to change endianness.
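Here is a minimal sketch of that approach in C, assuming the file stores big-endian values and the machine is little-endian (so the bytes are swapped unconditionally); uint32_t is used for the integer member so that both union members are exactly four bytes:
#include <stdio.h>
#include <stdint.h>

union U
{
    unsigned char uc_[4];
    uint32_t ui_;
};

int main(void)
{
    FILE *f = fopen("hextest.txt", "rb");       /* the question's sample file */
    if (!f)
        return 1;
    union U u;
    while (fread(u.uc_, 1, 4, f) == 4) {
        /* file is big-endian; on a little-endian machine, reverse the bytes */
        unsigned char t;
        t = u.uc_[0]; u.uc_[0] = u.uc_[3]; u.uc_[3] = t;
        t = u.uc_[1]; u.uc_[1] = u.uc_[2]; u.uc_[2] = t;
        printf("%u\n", (unsigned)u.ui_);        /* 71, 23, 65 for the sample data */
    }
    fclose(f);
    return 0;
}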
HTH
rlc