First off, if there's a better site to ask this question then please do migrate this or close it and let me know where to go.
Secondly, we're discussing CRC in one of my classes, and neither we nor the professor understand why CRC polynomials are one bit longer than the name (or the resulting checksum) suggests. I've done some searching, but nothing seems to discuss why it's one bit longer.
A CRC is the remainder after dividing the message by the polynomial. By definition, the remainder must have a smaller degree than the divisor. Hence the CRC for a "33-bit" polynomial is 32 bits.
Note that the largest exponent of a "33-bit" polynomial is 32 (the lowest term has exponent zero), so the degree of the polynomial, as well as the length of the CRC, is 32.
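To make the arithmetic concrete, here is a minimal sketch (in Python, with an arbitrarily chosen message value) of mod-2 polynomial division: the divisor is the full 33-bit polynomial, and the remainder always fits in 32 bits.

```python
def mod2_remainder(dividend: int, poly: int) -> int:
    """Carry-less (mod-2) polynomial remainder of dividend / poly."""
    deg = poly.bit_length() - 1              # degree 32 for a "33-bit" polynomial
    while dividend.bit_length() > deg:
        # XOR (mod-2 subtract) the divisor, aligned to the dividend's top bit
        dividend ^= poly << (dividend.bit_length() - poly.bit_length())
    return dividend

POLY = 0x104C11DB7                           # standard CRC-32 polynomial, all 33 bits
r = mod2_remainder(0x123456789ABCDEF, POLY)  # arbitrary example message
assert r.bit_length() <= 32                  # the remainder is at most 32 bits
```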
I am confused about how to calculate the bit-reflected constants in the white paper "Fast CRC Computation for Generic Polynomials Using PCLMULQDQ Instruction".
In the posts Fast CRC with PCLMULQDQ NOT reflected and How the bit-reflect constant is calculated when we use CLMUL in CRC32, @rcgldr mentioned that "...are adjusted to compensate for the shift, so instead of x^(a) mod poly, it's (x^(a-32) mod poly)<<32...", but I do not understand what this means.
For example, the constant k1 = (x^(4*128+64) % P(x)) = 0x8833794c (on page 16) vs. k1' = (x^(4*128+64-32) % P(x) << 32)' = (0x154442db4 >> 1) (on page 22). I can't see any reflection relationship between those two values (10001000_00110011_01111001_01001100 vs. 10101010_00100010_00010110_11011010).
I guess my question is: why does the exponent need 32 subtracted to compensate for the 32-bit left shift, and why are k1 and k1' not reflections of each other?
Could you please help to interpret it? Thanks
I have searched carefully for the answer to this question on the internet, especially on Stack Overflow, and I tried to understand the related posts, but I need some experts to explain further.
I modified what were originally some Intel examples to work with Visual Studio on Windows, non-reflected and reflected, for 16-, 32-, and 64-bit CRC, in this GitHub repository:
https://github.com/jeffareid/crc
I added some missing comments and also added a program to generate the constants used in the assembly code for each of the 6 cases.
instead of x^(a) mod poly, it's (x^(a-32) mod poly)<<32
This is done for non-reflected CRC. The CRC is kept in the upper 32 bits, as well as the constants, so that the result of PCLMULQDQ ends up in the upper 64 bits, and then right shifted. Shifting the constants left 32 bits is the same as multiplying by 2^32, or with polynomial notation, x^32.
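The algebra behind that shift can be checked directly: multiplying by x^32 before or after reducing mod P(x) gives the same residue. A small sketch (Python, using the whitepaper's CRC-32 polynomial; the helper name is mine):

```python
def mod2(value: int, poly: int) -> int:
    """Carry-less (mod-2) remainder: value mod poly."""
    while value.bit_length() >= poly.bit_length():
        value ^= poly << (value.bit_length() - poly.bit_length())
    return value

P = 0x104C11DB7                  # CRC-32 polynomial used in the whitepaper
a = 4 * 128 + 64                 # the exponent used for k1

# x^a mod P  ==  ((x^(a-32) mod P) * x^32) mod P
lhs = mod2(1 << a, P)
rhs = mod2(mod2(1 << (a - 32), P) << 32, P)
assert lhs == rhs
```

The folding code stores the unreduced 64-bit value (x^(a-32) mod P) << 32 as its constant; the extra x^32 factor is what keeps the PCLMULQDQ result in the upper bits.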
For reflected CRC, the CRC is kept in the lower 32 bits, which are logically the upper 32 bits of a reflected number. The issue is that PCLMULQDQ produces a 127-bit product in bits 126 to 0, leaving bit 127 == 0; viewed as a reflected number, the product appears shifted right by 1 bit, which is a multiply by 2. To compensate for that, the constants are (x^(a) mod poly) << 1 (a left shift of a reflected number is a divide by 2).
The example code at that github site includes crc32rg.cpp, which is the program to generate the constants used by crc32ra.asm.
Another issue occurs when doing 64-bit CRC. For non-reflected CRC, sometimes the constant is 65 bits (for example, if the divisor is 7), but only the lower 64 bits are stored, and the 2^64 bit is handled with a few more instructions. For reflected 64-bit CRC, since the constants can't be shifted left, (x^(a-1) mod poly) is used instead.
@rcgldr I don't think I caught your point, to be honest... probably I didn't make my question clear...
If my understanding of the code (reverse CRC32) is correct, take the simplest scenario as an example: the procedure for a 1-fold of a 32-byte block is shown here. I don't understand why the exponents used in the constants are not 128 and 192 (= 128 + 64) respectively.
The algorithm to calculate a CRC involves dividing (mod 2) the data by a polynomial, and that, by nature, starts at the biggest bit using the basic long division algorithm and works down (unless you're taking shortcuts and using tables).
Now, the stream I'm dealing with has the requirement that the data is added little endian and the CRC remainder goes at the end, whereas if the CRC were applied and appended, the CRC remainder bits would appear at the leftmost point in the least significant bit, given the bitstream is little endian.
So here's the question. We have a little endian stream with the CRC remainder at the "unexpected" end (correct me if I'm wrong please), should the CRC remainder be added big endian at the end of the bytestream, and then the CRC run on the whole bytestream (this is what I expect from the requirements) or something else?
How in industry is this normally done?
Major update for clarity thanks to Mark Adler's highly helpful answer.
I've read a few posts, but seen nothing where there seems to be a little endian bytestream with a CRC in the MSB (rightmost).
The image above should describe what I'm doing. All the bytes are big endian bit order, but the issue is that the requirements state that the bytes should be little endian byte ordered, and then the CRC tacked on the end.
For the bytestream as a sequence of bits to be validated by the CRC remainder being placed at the end, the CRC remainder bytes should be added big endian, therefore allowing the message as a whole to be validated with the polynomial. However, this involves adding bytes to the stream in a mix of endiannesses, which sounds highly ugly and wrong.
I will assume that by "biggest" bit, you mean most significant. First off, you can take either the most-significant bit or the least-significant bit of the first byte as the highest power of x for polynomial division. Both are in common use. There is no "by nature" here. And that has nothing to do with whether tables are used or not. Taking the least-significant bit as the highest power of x, the one you would call "not by nature" is in very common use, due to slightly faster and simpler software implementations as compared to using the most-significant bit.
Second, bit streams are neither "little endian", nor "big endian". Those terms are used for how integers are broken up into a sequence of bytes. That has nothing to do with the interpretation of a stream of bits as a polynomial. The terms you seem to be looking for are "reflected" and "not reflected" bit streams in and CRCs out. "reflected" means that the highest power of x is the least significant bit, and "not reflected" means it is the most significant bit.
If you look at Greg Cook's catalogue of CRCs, you will see as part of each definition refin=false refout=false or refin=true refout=true, meaning that the data coming in is reflected or not, and the CRC coming out is reflected or not, referring to where the highest power of x is found. For the CRC, the entire n-bits is reflected or not. In actual implementations, no bits are flipped for the input data or the output CRC. Instead, the constant CRC polynomial is reflected to match the data and CRC reflections. That is done once as the code is written, never during execution. (There is one outlier CRC in Greg's catalogue, CRC-12/UMTS, that has refin=false refout=true. For that one, the implementation would in fact have to reflect the CRC result every time.)
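As an illustration, here are bit-at-a-time sketches (in Python) of two entries from that catalogue that share the same polynomial but differ only in reflection: CRC-32 (refin=true, refout=true) and CRC-32/BZIP2 (refin=false, refout=false). Note that the reflected version simply uses the bit-reversed polynomial constant and shifts right; no data or CRC bits are flipped at run time.

```python
def crc32_reflected(data: bytes) -> int:
    """CRC-32 (refin=true, refout=true): reversed polynomial, shift right."""
    crc = 0xFFFFFFFF
    for b in data:
        crc ^= b
        for _ in range(8):
            crc = (crc >> 1) ^ (0xEDB88320 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

def crc32_bzip2(data: bytes) -> int:
    """CRC-32/BZIP2 (refin=false, refout=false): polynomial as written, shift left."""
    crc = 0xFFFFFFFF
    for b in data:
        crc ^= b << 24
        for _ in range(8):
            crc = ((crc << 1) ^ 0x04C11DB7 if crc & 0x80000000 else crc << 1) & 0xFFFFFFFF
    return crc ^ 0xFFFFFFFF

# Check values from the catalogue, computed over the string "123456789":
assert crc32_reflected(b"123456789") == 0xCBF43926
assert crc32_bzip2(b"123456789") == 0xFC891918
```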
Given all that, I am left attempting to interpret your question. What do you mean by "the data is added little endian"? Does that mean the CRC is being calculated using the least-significant bit as the highest power of x (the opposite of your "by nature")? What does "the CRC remainder bits would appear at the leftmost point in the least significant bit given the bitstream is little endian" mean? That one is really confusing, since there is no leftmost point of a bit, and I can't tell at all what you're trying to say about the arrangement of the remainder bits.
The only thing I think I understand and can try to answer here is: "How in industry is this normally done?"
Well, as you can tell from the list of over a hundred CRCs, there is little normalcy established. What I can say is that CRCs have a special property that leads to a "natural" (now I can use that word) ordering of the CRC bits and bytes at the end of the stream of bits and bytes that the CRC was calculated on. That property is that if you append it properly, the CRC of the entire message, including the CRC at the end, will always be the same constant, if there are no errors in the message. Now little and big endian are useful terms, but only for the CRC itself, not the bit or byte stream. The proper order is little endian for reflected CRCs and big endian for non-reflected CRCs. (This assumes that the input and output have the same reflection, so this won't work for that one outlier CRC.)
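That property is easy to demonstrate for the reflected CRC-32, using Python's zlib as a convenient reference implementation: append the CRC little endian, and the CRC of the whole thing is always the same constant (0x2144DF1C, which is the residue after the final XOR).

```python
import struct
import zlib

msg = b"any message at all"
crc = zlib.crc32(msg)                  # reflected CRC-32
# Proper order for a reflected CRC: append it little endian
whole = msg + struct.pack("<I", crc)
# The CRC over the entire message, CRC included, is a constant
assert zlib.crc32(whole) == 0x2144DF1C
```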
Of course, I have seen many cases where a reflected CRC is used, but is appended to the stream big-endian, and vice versa, in which case this calculation of the CRC on the entire message doesn't work. That's ok, since the alternative way to check the CRC is to simply repeat what was done before transmission, which is to calculate the CRC only on the data portion of the message, then properly assemble the CRC from the bytes that follow it, and compare the two values. That is what would be done for any other hash that doesn't have that elegant mathematical property of CRCs.
Can anyone with good knowledge of CRC calculation verify that this code
https://github.com/psvanstrom/esphome-p1reader/blob/main/p1reader.h#L120
is actually calculating the CRC according to this description?
CRC is a CRC16 value calculated over the preceding characters in the data message (from "/" to "!") using the polynomial x^16 + x^15 + x^2 + 1. CRC16 uses no XOR in, no XOR out and is computed with least significant bit first. The value is represented as 4 hexadecimal characters (MSB first).
There's nothing in the linked code about where it starts and ends, and how the result is eventually represented, but yes, that code implements that specification.
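For reference, the specification quoted above describes what Greg Cook's catalogue calls CRC-16/ARC: polynomial x^16 + x^15 + x^2 + 1, zero initial value, no final XOR, least significant bit first. A compact Python sketch of that same computation:

```python
def crc16_arc(data: bytes) -> int:
    # x^16 + x^15 + x^2 + 1 is 0x18005; bit-reversed (lsb-first) that is 0xA001
    crc = 0                               # no XOR in
    for b in data:
        crc ^= b
        for _ in range(8):                # least significant bit first
            crc = (crc >> 1) ^ (0xA001 if crc & 1 else 0)
    return crc                            # no XOR out

assert crc16_arc(b"123456789") == 0xBB3D  # CRC-16/ARC check value
```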
I was looking at this page and I saw that the terms of this polynomial:
0xad0424f3 = x^32 + x^30 + x^28 + x^27 + x^25 + x^19 + x^14 + x^11 + x^8 + x^7 + x^6 + x^5 + x^2 + x + 1
which seems incorrect, since converting the hex:
0xad0424f3 is 10101101000001000010010011110011
It would become:
x^31 + x^29 + x^27 + x^26 + x^24 + x^18 + x^13 + x^10 + x^7 + x^6 + x^5 + x^4 + x^1 + x^0
Can you help me understand which one is correct?
What about the 64-bit ECMA polynomial,
0xC96C5795D7870F42
I want to know the number of terms in each polynomial 0xad0424f3 and 0xC96C5795D7870F42.
That page is on Koopman's web site, where he has his own notation for CRC polynomials. Since all CRC polynomials have a 1 term, he drops that term, divides the polynomial by x, and represents that in binary. That's what you're looking at.
The benefit is that with a 64-bit word, you can then represent all 64-bit and shorter CRC polynomials, with the length of the CRC denoted by the most significant 1 in the word.
The downside is that only Koopman uses that notation, as far as I know, resulting in some confusion by others. Like yourself.
As for your 64-bit CRC, the polynomial that you note from the Wikipedia page is actually the reversed version, and is not in Koopman's notation. The expansion into a polynomial is shown right there, underneath the hex representation. It has 34 terms.
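The conversions are mechanical; here is a short sketch (the helper name is mine):

```python
def koopman_to_full(k: int) -> int:
    """Koopman drops the +1 term and divides by x, so the full
    polynomial, including its top x^n term, is (k << 1) | 1."""
    return (k << 1) | 1

full = koopman_to_full(0xAD0424F3)
assert full.bit_length() - 1 == 32   # degree 32, i.e. a 32-bit CRC
assert bin(full).count("1") == 15    # 15 terms, matching the expansion above

# The reversed ECMA value is not Koopman notation; count its set bits
# and add one for the implicit top x^64 term:
assert bin(0xC96C5795D7870F42).count("1") + 1 == 34
```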
I learned about an error detection technique called CRC. CRC calculations are done in modulo-2 arithmetic, without carries in addition or borrows in subtraction. I wonder why CRC uses modulo-2 arithmetic rather than regular binary arithmetic. Is it easier to implement in a digital circuit?
Maybe better late than never for an answer. A CRC treats the data as a string of 1-bit coefficients of a polynomial, since the coefficients are numbers modulo 2. From a math perspective, for an n-bit CRC, the data polynomial is multiplied by x^n, effectively appending n 0-bit coefficients to the data; that data + zeroes is then divided by an (n+1)-bit CRC polynomial, resulting in an n-bit remainder, which is the CRC. When "encoding" the data with a CRC, the remainder is "subtracted" from the data + zeroes, but for single-bit coefficients adding and subtracting are both XOR, so the CRC is just appended to the data, replacing the zeroes that were appended to generate it.
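A sketch of that procedure in Python, using the 17-bit polynomial x^16 + x^15 + x^2 + 1 and an arbitrary short message:

```python
def mod2_rem(value: int, poly: int) -> int:
    """Carry-less (mod-2) remainder of value / poly."""
    while value.bit_length() >= poly.bit_length():
        value ^= poly << (value.bit_length() - poly.bit_length())
    return value

n = 16
poly = 0x18005                     # x^16 + x^15 + x^2 + 1: n+1 = 17 bits
data = 0b1101011011                # message bits as polynomial coefficients
crc = mod2_rem(data << n, poly)    # append n zero bits, take the remainder
codeword = (data << n) | crc       # "subtract" (XOR) the remainder == append the CRC
assert mod2_rem(codeword, poly) == 0   # the encoded message now divides evenly
```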
The reason for no carries or borrows across coefficients is because it's polynomial math.
Although not done for CRC, Reed-Solomon codes are somewhat similar, but the polynomial coefficients may be numbers modulo some prime other than 2, such as 929, and for primes other than 2 it's important to keep track of when addition versus subtraction is used.
https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction