Store a CRC in 16 bits with one extra value indicating an uninitialised CRC value?

For functional safety reasons I need to store a CRC-16 or similar to protect data. The data length would be up to 80 bytes. I need one value of the 16-bit range to indicate that the data was modified intentionally and the CRC has not been calculated yet.
As far as I understand, every 16-bit value can be the result of a CRC-16. There is no unused value that could indicate "uninitialised".
What is the best solution?
take "0" as the uninitialised value and store "1" if the calculation delivers "0"
use a smaller CRC, e.g. CRC-15
is there a better solution?
I use C and C++, but this should not matter much.
Update, taking into account the suggestion of rcgldr to use CRC-15:
I will store the CRC-15 value (which is in 0..32767) or the value 65432, which indicates that the data should be checked and the CRC calculated. I do not want to use just one bit, or 0x0000 or 0xFFFF, to invalidate the CRC, as those bit patterns are more likely to occur by accident than an arbitrary value outside the valid CRC-15 range, such as 65432.
Taking into account the suggestion of Adler:
I calculate the CRC-16 and, if the value is 65432, write e.g. 0xFFFF instead. 65432 is thus reserved for indicating the modification.
I have the feeling that CRC-15 looks cleaner, but Adler is right that I lose information. On the other hand, my data (calibration data) are stored in memory, and bit errors are less likely than with data transfer via a serial interface (that will be protected separately). The chance that an error goes undetected is about 1:32767.

Your first option. You can preserve most of the power of a 16-bit CRC by mapping a CRC value of 0 to 1. Then the value 1 appears twice as often as the other non-zero values, and 0 never appears. This very slightly weakens the power of the CRC to detect errors, and you have now freed up the zero value to indicate that the CRC has not been calculated.
Taking an entire bit for that indication weakens the power of the CRC, indicated by the probability of a false positive, by a factor of two.
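For illustration, here is a minimal sketch of that scheme in C++. The bitwise CRC-16/CCITT below is just a stand-in (any CRC-16 variant works with the same 0-to-1 mapping), and the function names are made up:

    #include <cstddef>
    #include <cstdint>

    constexpr std::uint16_t kCrcUninitialised = 0;  // reserved: CRC not yet calculated

    // Plain bitwise CRC-16/CCITT (poly 0x1021, init 0xFFFF), used here only
    // as a stand-in; the 0-to-1 mapping works with any CRC-16 variant.
    std::uint16_t crc16(const std::uint8_t* data, std::size_t len) {
        std::uint16_t crc = 0xFFFF;
        for (std::size_t i = 0; i < len; ++i) {
            crc ^= static_cast<std::uint16_t>(data[i]) << 8;
            for (int b = 0; b < 8; ++b)
                crc = (crc & 0x8000) ? static_cast<std::uint16_t>((crc << 1) ^ 0x1021)
                                     : static_cast<std::uint16_t>(crc << 1);
        }
        return crc;
    }

    // Store: map a computed CRC of 0 to 1, so that 0 stays reserved.
    std::uint16_t crc16_store(const std::uint8_t* data, std::size_t len) {
        std::uint16_t crc = crc16(data, len);
        return crc == 0 ? 1 : crc;
    }

    // Check: a stored value of 1 must accept a recomputed CRC of 0 or 1.
    bool crc16_ok(const std::uint8_t* data, std::size_t len, std::uint16_t stored) {
        if (stored == kCrcUninitialised) return false;  // not yet calculated
        std::uint16_t crc = crc16(data, len);
        return stored == 1 ? (crc == 0 || crc == 1) : crc == stored;
    }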

Without knowing why or how often a CRC is not initialized (calculated), the expected error rate, and how a non-initialized CRC is handled, I'm not sure what to recommend.
take "0" as the uninitialised value and store "1" if the calculation delivers "0"
Both 0 and 1 are possible valid and invalid (error-indicating) CRC-16 values, but as Mark Adler answered, the scheme only slightly weakens the CRC, since a calculated CRC of 0 or 1 is mapped to 1. When the data is received and the CRC is recalculated on just the data, then if the message CRC == 1, the code would accept a recalculated CRC of 0 or 1 as indicating no errors.
use a smaller CRC, e.g. CRC-15
Use a 15-bit CRC and a flag bit to indicate whether the 15-bit CRC has been calculated or not. As Mark Adler answered, this is weaker than having 0 and 1 mapped to 1, but if the error rate for 82 bytes is very low, it may not matter from a practical standpoint. A sketch of this layout follows below.
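Assuming the flag occupies the top bit of the stored 16-bit word (the names here are illustrative):

    #include <cstdint>

    // Top bit = "CRC has been calculated" flag, low 15 bits = the CRC-15 value.
    constexpr std::uint16_t kCrcValidFlag = 0x8000;

    std::uint16_t pack_crc15(std::uint16_t crc15) {   // crc15 in 0..0x7FFF
        return static_cast<std::uint16_t>(kCrcValidFlag | (crc15 & 0x7FFF));
    }

    bool crc15_initialised(std::uint16_t stored) {
        return (stored & kCrcValidFlag) != 0;
    }

    std::uint16_t crc15_value(std::uint16_t stored) {
        return static_cast<std::uint16_t>(stored & 0x7FFF);
    }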

Related

When calculating a CRC, should the remainder be set to something if it becomes 0?

Are there CRC algorithms in which the remainder is checked for zero during the generation, and changed when that happens?
When calculating a CRC, if you start with an initial value (the CRC register, aka the remainder) of 0, leading zeroes in the data being CRC'd will have no effect. So the CRC for "\0\0\0Hello" will be the same as the CRC for "Hello".
https://xcore.github.io/doc_tips_and_tricks/crc.html#the-initial-value
This means that if, as the CRC is being calculated, the value becomes zero at some point, any zeroes immediately following that point will have no effect on the CRC. If one or more of the zeroes is lost, the CRC will not be changed.
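A small demonstration of that effect, using a bitwise CRC-16 (poly 0x1021) with initial value 0 as an example; the same holds for any CRC width:

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    // Bitwise CRC-16, poly 0x1021, initial value 0: a zero register fed
    // zero bytes stays zero, so leading zeroes cannot affect the result.
    std::uint16_t crc16_init0(const unsigned char* d, std::size_t n) {
        std::uint16_t crc = 0;
        for (std::size_t i = 0; i < n; ++i) {
            crc ^= static_cast<std::uint16_t>(d[i]) << 8;
            for (int b = 0; b < 8; ++b)
                crc = (crc & 0x8000) ? static_cast<std::uint16_t>((crc << 1) ^ 0x1021)
                                     : static_cast<std::uint16_t>(crc << 1);
        }
        return crc;
    }

    int main() {
        const unsigned char a[] = {'H', 'e', 'l', 'l', 'o'};
        const unsigned char b[] = {0, 0, 0, 'H', 'e', 'l', 'l', 'o'};
        std::printf("%04x %04x\n", crc16_init0(a, 5), crc16_init0(b, 8));
        // Prints the same value twice: the leading zeroes are invisible.
    }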
I want to generate a CRC for some data, so that I can determine when a byte is lost. Is this an esoteric application of the CRC? When looking for examples of calculating a CRC, I have not found any where the CRC value is checked for 0 during the computation and some non-zero bits are fed in at that point. I would think that this would be the only way to be sure of detecting a loss of one or more bits at a point in the data where the CRC up to that point happened to be 0.
To detect and/or correct dropped bits, something like Marker Code or Watermark Code is needed.
https://link.springer.com/article/10.1007/BF03219806
https://ieeexplore.ieee.org/document/866775
Marker codes and watermark codes are usually implemented in hardware, such as the read/write logic in a hard drive, and operate at the bit level at the media interface. Additional bits are intermixed with data bits to allow detection of lost synchronization or signal, which could otherwise lead to dropped or inserted bits. Here is a better link that describes this. The turbopaper.pdf file below mentions LDPC, low-density parity-check codes, which Wikipedia has an article on. Note that when receiving or reading data, the marker/watermark code is handled first at the hardware interface, and the LDPC (or CRC or Reed-Solomon) check is done after the hardware has detected no dropped or inserted bits.
http://www.inference.org.uk/ear23/turbopaper.pdf
https://en.wikipedia.org/wiki/Low-density_parity-check_code
If you did that, then it wouldn't be a CRC anymore. So, no. By definition there are no such CRC algorithms.

crc32 hash default/invalid value?

I am building a simple string ID system using crc32 to generate 32-bit integer handles from my strings. I'd like the hash inside my StringID wrapper class to default to an invalid index. Is there a value that crc32 will never generate? Will I have to use a separate flag?
Clarification: I am not interested in language specific answers. I'd simply like to know if there is an integer outside of the crc32 range that can be used to represent an unhashed value.
Thanks!
Is there a value that crc32 will never generate?
No, it will generate any/all values in the range of a 32-bit integer.
Will I have to use a separate flag?
Not necessarily.
If you decide that (e.g.) 0x00000000 means "CRC not set" and non-zero is the CRC value; then after calculating the CRC (but before storing it or checking the stored value) you can do if(CRCvalue == 0) CRCvalue = 0xFFFFFFFF;.
This weakens the CRC by an extremely tiny amount. Specifically, for 2 random pieces of data, for pure CRC32 there's 1 chance in 4294967296 of the CRCs matching, and with "zero means unset" there's 1 chance in 4294967295.000000000232830643654 of the CRCs matching.
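A sketch of that remapping inside a hypothetical StringID wrapper (the bitwise CRC-32 below is only a stand-in for whatever implementation you actually use):

    #include <cstdint>
    #include <string_view>

    // Plain bitwise (reflected) CRC-32, poly 0xEDB88320, as a stand-in.
    std::uint32_t crc32(std::string_view s) {
        std::uint32_t crc = 0xFFFFFFFFu;
        for (unsigned char c : s) {
            crc ^= c;
            for (int b = 0; b < 8; ++b)
                crc = (crc & 1u) ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
        }
        return ~crc;
    }

    class StringID {
    public:
        static constexpr std::uint32_t kUnhashed = 0;  // reserved "no hash yet" value

        StringID() = default;                          // defaults to the invalid ID
        explicit StringID(std::string_view s) {
            const std::uint32_t h = crc32(s);
            hash_ = (h == kUnhashed) ? 0xFFFFFFFFu : h;  // remap the reserved value
        }

        bool valid() const { return hash_ != kUnhashed; }
        std::uint32_t value() const { return hash_; }

    private:
        std::uint32_t hash_ = kUnhashed;
    };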
There is an easy demonstration of the fact that you can generate any crc32 value. The CRC is the division mod P (where P is the generator polynomial) in a Galois field (which happens to be a field, as the real or complex numbers are). You can subtract from your message polynomial its remainder (this is a XOR operation, so adding and subtracting are indeed the same thing), giving a remainder of 0; then you can add to this multiple of the modulus any of the possible crc32 values (as they are already remainders of divisions, their crc32 is just themselves) to get any of the 2^32 possible values.
It is common practice to append as many zero bits as necessary to complete a full 32-bit word (this appears as a multiplication by the constant x^32), and then subtract (XOR) the remainder from that, making the result a multiple of the modulus (remember that addition and subtraction are the same thing --- a XOR operation) and so making crc32(pol) = 0x00000000.
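This is easy to check with a "pure" CRC-32 (poly 0x04C11DB7, initial value 0, no final XOR, non-reflected): appending the remainder to the message drives the CRC of the whole thing to zero. A sketch:

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    // "Pure" bitwise CRC-32: poly 0x04C11DB7, init 0, no final XOR, MSB first.
    std::uint32_t crc32_pure(const unsigned char* d, std::size_t n) {
        std::uint32_t crc = 0;
        for (std::size_t i = 0; i < n; ++i) {
            crc ^= static_cast<std::uint32_t>(d[i]) << 24;
            for (int b = 0; b < 8; ++b)
                crc = (crc & 0x80000000u) ? (crc << 1) ^ 0x04C11DB7u : crc << 1;
        }
        return crc;
    }

    int main() {
        unsigned char msg[9] = {'p', 'o', 'l', 'y', '!'};
        std::uint32_t r = crc32_pure(msg, 5);
        msg[5] = r >> 24; msg[6] = r >> 16;   // append the remainder big-endian,
        msg[7] = r >> 8;  msg[8] = r;         // i.e. subtract (XOR) it from M*x^32
        std::printf("%08x\n", crc32_pure(msg, 9));  // prints 00000000
    }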
Edit (easier to see):
Indeed, each of the possible 2^32 values for crc32, when divided by the generator polynomial, gives itself as the remainder (its degree is lower than the generator's, just as the numbers 0..N-1 are their own remainders when doing arithmetic modulo N on integers), so all of them are possible results of the crc32() operation.
The CRC operation, as implemented in many places, is not that simple... some implementations initialize the remainder register as 0xffffffff and look for 0xffffffff at termination (indeed, crc32 does this). If you do the maths, you'll see the reason for that: initializing the register to 0xffffffff is equivalent to having a previous remainder of 0xffffffff in a longer string, and looking for 0xffffffff at the end is like appending 0xffffffff to the original string. This has the effect of concatenating the bit string 0xffffffff before and after your string, making the remainder sensitive to strings of zeros appended before or after the CRC'd string (appending zeros at either side would alter the result). Anyway, this modification doesn't alter the underlying algorithm of calculating a polynomial remainder, so any of the 2^32 values is still possible in this case.
No. A CRC-32 can be any 32-bit value. You'll need to indicate an invalid index somewhere else.
My spoof code allows you to choose bit locations in a message to modify and the desired CRC, and will solve for which of those locations to flip to get exactly that CRC.

Maximum message length for CRC codes? [duplicate]

I've seen 8-bit, 16-bit, and 32-bit CRCs.
At what point do I need to jump to a wider CRC?
My gut reaction is that it is based on the data length:
1-100 bytes: 8-bit CRC
101 - 1000 bytes: 16-bit CRC
1001 - ??? bytes: 32-bit CRC
EDIT:
Looking at the Wikipedia page about CRC and Lott's answer, here's what we have:
<64 bytes: 8-bit CRC
<16K bytes: 16-bit CRC
<512M bytes: 32-bit CRC
It's not a research topic. It's really well understood: http://en.wikipedia.org/wiki/Cyclic_redundancy_check
The math is pretty simple. An 8-bit CRC boils all messages down to one of 256 values. If your message is more than a few bytes long, the probability of multiple messages having the same hash value goes up.
A 16-bit CRC, similarly, gives you one of the 65,536 available hash values. What are the odds of any two messages having the same value?
A 32-bit CRC gives you about 4 billion available hash values.
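To put rough numbers on those odds, here is a side calculation (not part of the original answer) using the standard birthday approximation:

    #include <cmath>
    #include <cstdio>

    // Birthday approximation: probability that at least two of k random
    // messages share the same n-bit CRC value.
    double collision_prob(double k, int n) {
        return 1.0 - std::exp(-k * (k - 1.0) / (2.0 * std::ldexp(1.0, n)));
    }

    int main() {
        std::printf("%.2f\n", collision_prob(300, 16));    // ~0.50: 300 messages, CRC-16
        std::printf("%.2f\n", collision_prob(77000, 32));  // ~0.50: 77000 messages, CRC-32
    }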
From the wikipedia article: "maximal total blocklength is equal to 2**r − 1". That's in bits. You don't need to do much research to see that 2**9 - 1 is 511 bits. Using CRC-8, multiple messages longer than 64 bytes will have the same CRC checksum value.
The effectiveness of a CRC is dependent on multiple factors. You not only need to select the SIZE of the CRC but also the GENERATING POLYNOMIAL to use. There are complicated and non-intuitive trade-offs depending on:
The expected bit error rate of the channel.
Whether the errors tend to occur in bursts or tend to be spread out (burst is common)
The length of the data to be protected - maximum length, minimum length and distribution.
The paper Cyclic Redundancy Code Polynomial Selection For Embedded Networks, by Philip Koopman and Tridib Chakravarty, published in the proceedings of the 2004 International Conference on Dependable Systems and Networks, gives a very good overview and makes several recommendations. It also provides a bibliography for further understanding.
http://www.ece.cmu.edu/~koopman/roses/dsn04/koopman04_crc_poly_embedded.pdf
The choice of CRC length versus file size is mainly relevant in cases where one is more likely to have an input which differs from the "correct" input by three or fewer bits than one which is massively different. Given two inputs which are massively different, the probability of a false match will be about 1/256 with most forms of 8-bit check value (including CRC), 1/65536 with most forms of 16-bit check value (including CRC), etc. The advantage of CRC comes from its treatment of inputs which are very similar.
With an 8-bit CRC whose polynomial generates two periods of length 128, the fraction of single, double, or triple bit errors in a packet shorter than that which go undetected won't be 1/256--it will be zero. Likewise with a 16-bit CRC of period 32768, using packets of 32768 bits or less.
If packets are longer than the CRC period, however, then a double-bit error will go undetected if the distance between the erroneous bits is a multiple of the CRC period. While that might not seem like a terribly likely scenario, a CRC8 will be somewhat worse at catching double-bit errors in long packets than at catching "packet is totally scrambled" errors. If double-bit errors are the second most common failure mode (after single-bit errors), that would be bad. If anything that corrupts some data is likely to corrupt a lot of it, however, the inferior behavior of CRCs with double-bit errors may be a non-issue.
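That failure mode can be reproduced directly. A sketch using the CRC-8 polynomial 0x07 (x^8 + x^2 + x + 1), whose period works out to 127 bits: two flipped bits spaced exactly one period apart cancel out.

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Bitwise CRC-8, poly 0x07 (x^8 + x^2 + x + 1), init 0.
    std::uint8_t crc8(const std::vector<unsigned char>& d) {
        std::uint8_t crc = 0;
        for (unsigned char byte : d) {
            crc ^= byte;
            for (int b = 0; b < 8; ++b)
                crc = (crc & 0x80) ? static_cast<std::uint8_t>((crc << 1) ^ 0x07)
                                   : static_cast<std::uint8_t>(crc << 1);
        }
        return crc;
    }

    int main() {
        // Period = multiplicative order of x modulo the polynomial.
        std::uint8_t r = 1;
        int period = 0;
        do {
            r = (r & 0x80) ? static_cast<std::uint8_t>((r << 1) ^ 0x07)
                           : static_cast<std::uint8_t>(r << 1);
            ++period;
        } while (r != 1);
        std::printf("period: %d bits\n", period);  // 127 for poly 0x07

        // All-zero packet with two bit errors spaced exactly one period apart:
        std::vector<unsigned char> e(period / 8 + 1, 0);
        e[0] ^= 0x80;                             // error at bit 0
        e[period / 8] ^= 0x80 >> (period % 8);    // error at bit 127
        std::printf("syndrome: %02x\n", crc8(e)); // 00: the double-bit error is missed
    }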
I think the size of the CRC has more to do with how unique a CRC you need than with the size of the input data. This is related to the particular usage and the number of items on which you're calculating a CRC.
The CRC should be chosen specifically for the length of the messages, it is not just a question of the size of the CRC: http://www.ece.cmu.edu/~koopman/roses/dsn04/koopman04_crc_poly_embedded.pdf
Here is a nice "real world" evaluation of CRC-N
http://www.backplane.com/matt/crc64.html
I use CRC-32 and file-size comparison and have NEVER, in the billions of files checked, run into a matching CRC-32 and File-Size collision. But I know a few exist, when not purposely forced to exist. (Hacked tricks/exploits)
When doing comparison, you should ALSO be checking "data-sizes". You will rarely have a collision of the same data-size, with a matching CRC, within the correct sizes.
Purposely manipulated data, to fake a match, is usually done by adding extra data until the CRC matches a target. However, that results in a data-size that no longer matches. Attempting to brute-force, or cycle through random or sequential data of the same exact size, would leave a really narrow collision-rate.
You can also have collisions within the data-size, just by the generic limits of the formulas used, and the constraints of using bits/bytes and base-ten systems, which depend on floating-point values that get truncated and clipped.
The point where you would want to think about going larger is when you start to see many collisions which cannot be "confirmed" as "originals" (when they both have the same data-size and, when tested backwards, a matching CRC: reversed bytes, reversed bits, or bit-offsets).
In any event, it should NEVER be used as the ONLY form of comparison, just for a quick form of comparison, for indexing.
You can use a CRC-8 to index the whole internet, and divide everything into one of N categories. You WANT those collisions. Now, with those pre-sorted, you only have to check one of N directories, looking for "file-size", or "reverse-CRC", or whatever other comparison you can do to that smaller data-set, fast...
Doing a CRC-32 forwards and backwards on the same blob of data is more reliable than using CRC-64 in just one direction. (Or an MD5, for that matter.)
You can detect a single-bit error with a CRC in any size packet. Detecting double-bit errors or correcting single-bit errors is limited by the number of distinct values the CRC can take: for 8 bits, that would be 256; for 16 bits, 65536; etc. (2^n). In practice, though, CRCs actually take on fewer distinct values for single-bit errors. For example, what I call the 'Y5' polynomial, 0x5935, only takes on up to 256 different values before they repeat going back farther; but on the other hand it is able to correct double-bit errors within that distance, which is 30 bytes plus 2 bytes for errors in the CRC itself.
The number of bits you can correct with forward error correction is also limited by the Hamming Distance of the polynomial. For example, if the Hamming distance is three, you have to flip three bits to change from a set of bits that represents one valid message with matching CRC to another valid message with its own matching CRC. If that is the case, you can correct one bit with confidence. If the Hamming distance were 5, you could correct two bits. But when correcting multiple bits, you are effectively indexing multiple positions, so you need twice as many bits to represent the indexes of two corrected bits rather than one.
With forward error correction, you calculate the CRC over the packet and its CRC together, and get a residual value. A good message with zero errors will always have the expected residual value (zero, unless there's a nonzero initial value for the CRC register), and each bit position of error has a unique residual value, so you can use it to identify the position. If you ever get a CRC result with that residual, you know which bit (or bits) to flip to correct the error.
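A sketch of that idea, using a pure CRC-16 (poly 0x1021, initial value 0, no final XOR) over a short packet; each single-bit error position yields a unique residual, so a table maps the residual back to the bit to flip:

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <map>

    // Pure bitwise CRC-16: poly 0x1021, init 0, no final XOR, MSB first.
    std::uint16_t crc16(const unsigned char* d, std::size_t n) {
        std::uint16_t crc = 0;
        for (std::size_t i = 0; i < n; ++i) {
            crc ^= static_cast<std::uint16_t>(d[i]) << 8;
            for (int b = 0; b < 8; ++b)
                crc = (crc & 0x8000) ? static_cast<std::uint16_t>((crc << 1) ^ 0x1021)
                                     : static_cast<std::uint16_t>(crc << 1);
        }
        return crc;
    }

    int main() {
        unsigned char pkt[7] = {'H', 'e', 'l', 'l', 'o'};
        std::uint16_t c = crc16(pkt, 5);
        pkt[5] = c >> 8;                   // append CRC big-endian:
        pkt[6] = c & 0xFF;                 // now crc16(pkt, 7) == 0

        // Residual for each possible single-bit error position (56 bits total).
        std::map<std::uint16_t, int> syndrome;
        for (int i = 0; i < 56; ++i) {
            unsigned char e[7] = {0};
            e[i / 8] = 0x80 >> (i % 8);
            syndrome[crc16(e, 7)] = i;
        }

        pkt[2] ^= 0x10;                    // inject a single-bit error
        std::uint16_t s = crc16(pkt, 7);   // nonzero residual identifies the bit
        if (s != 0) {
            int i = syndrome.at(s);
            pkt[i / 8] ^= 0x80 >> (i % 8); // flip the erroneous bit back
        }
        std::printf("residual after correction: %04x\n", crc16(pkt, 7));  // 0000
    }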

32 bit CRC with some inputs set to zero. Is this less accurate than dummy data?

Sorry if I should be able to answer this simple question myself!
I am working on an embedded system with a 32-bit CRC done in hardware for speed. A utility exists that I cannot modify; it takes 3 inputs (words) and returns a CRC.
If a standard 32-bit CRC were implemented, would generating a CRC from one 32-bit word of actual data and two 32-bit words consisting only of zeros produce a less reliable CRC than if I just made up/set some random values for the last two 32-bit words?
Depending on the CRC/polynomial, my limited understanding of CRC would say the more data you put in, the less accurate it is. But doesn't zeroed data reduce accuracy when performing the shifts?
Using zeros will be no different than some other value you might pick. The input word will be just as well spread among the CRC bits either way.
I agree with Mark Adler that zeros are mathematically no worse than other numbers. However, if the utility you can't change does something bad like set the initial CRC to zero, then choose non-zero pad words. An initial CRC=0 + Data=0 + Pads=0 produces a final CRC=0. This is technically valid, but routinely getting CRC=0 is undesirable for data integrity checking. You could compensate for a problem like this with non-zero pad characters, e.g. pad = -1.
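A tiny illustration of that failure mode, assuming (hypothetically) a CRC-32 register seeded with 0; real hardware often seeds with 0xFFFFFFFF precisely to avoid this:

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    // Bit-at-a-time CRC-32 (poly 0x04C11DB7) with a hypothetical initial value of 0.
    std::uint32_t crc32_init0(const std::uint32_t* words, std::size_t n) {
        std::uint32_t crc = 0;  // a zero register fed zero input stays zero
        for (std::size_t i = 0; i < n; ++i)
            for (int b = 31; b >= 0; --b) {
                std::uint32_t fb = ((crc >> 31) ^ (words[i] >> b)) & 1u;
                crc = (crc << 1) ^ (fb ? 0x04C11DB7u : 0u);
            }
        return crc;
    }

    int main() {
        std::uint32_t zero_pad[3] = {0, 0, 0};                      // data = 0, pads = 0
        std::uint32_t ones_pad[3] = {0, 0xFFFFFFFFu, 0xFFFFFFFFu};  // data = 0, pads = -1
        std::printf("%08x\n", crc32_init0(zero_pad, 3));  // 00000000: degenerate result
        std::printf("%08x\n", crc32_init0(ones_pad, 3));  // nonzero
    }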
