What does CCITT stand for? - crc

In the context of the 16-bit Cyclic Redundancy Check (CRC-16) CCITT algorithm, which uses the generator polynomial x¹⁶ + x¹² + x⁵ + 1, what does "CCITT" stand for? I can't seem to find this initialism written out in full.

CCITT stands for Comité Consultatif International Télégraphique et Téléphonique (in English, the International Telegraph and Telephone Consultative Committee), which has since become the ITU-T.
https://www.sigidwiki.com/wiki/CCITT
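For reference, here is a minimal bit-by-bit sketch of a CRC-16/CCITT computation with that generator polynomial (x¹⁶ + x¹² + x⁵ + 1, i.e. 0x1021). Note that the CCITT variants differ in initial value and bit ordering; the 0xFFFF initial value below is the common "CCITT-FALSE" convention and is an assumption, not the one true parameter set.

#include <cstddef>
#include <cstdint>
#include <cstdio>

// Bit-by-bit CRC-16 with the CCITT polynomial 0x1021 (x^16 + x^12 + x^5 + 1).
// Initial value 0xFFFF, no reflection, no final XOR ("CCITT-FALSE" parameters).
uint16_t crc16_ccitt(const uint8_t* data, std::size_t len, uint16_t crc = 0xFFFF) {
    for (std::size_t i = 0; i < len; ++i) {
        crc ^= static_cast<uint16_t>(data[i]) << 8;   // feed the next byte in at the top
        for (int bit = 0; bit < 8; ++bit)             // one shift / conditional XOR per bit
            crc = (crc & 0x8000) ? static_cast<uint16_t>((crc << 1) ^ 0x1021)
                                 : static_cast<uint16_t>(crc << 1);
    }
    return crc;
}

int main() {
    const char msg[] = "123456789";
    std::printf("%04X\n", crc16_ccitt(reinterpret_cast<const uint8_t*>(msg), 9));  // expect 29B1
}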

Related

64/72 bit SECDED ECC

I would like to know if parity/syndrome generation for 64/72-bit SEC-DED coding is standardized or a de-facto method is used. I am going through some papers and they all seem to use different combinations to generate the check bits.
There's no standard but the de facto method used is described here:
https://www.xilinx.com/support/documentation/application_notes/xapp383.pdf
or here:
https://www.youtube.com/watch?v=ms-Lnm1wJ48
The latter explains how it maps to DRAM (which uses 64/72b coding) though once you understand the general concept you can easily adapt it to any number of bits.
A variety of different H-matrices are used. The original paper, which gives a method of calculating them, is I believe from M. Y. Hsiao - A Class of Optimal Minimum Odd-weight-column SEC-DED Codes: https://people.eecs.berkeley.edu/~culler/cs252-s02/papers/hsiao70.pdf
Different matrices will have slightly different probabilities of miscorrecting triple errors or detecting quadruple errors. See Table 2.
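To illustrate the general concept on something tiny, here is a toy sketch of SEC-DED on 4 data bits (an extended Hamming (8,4) code). The bit layout and function names are illustrative choices; real 64/72-bit memory codes use a much larger, odd-weight-column (Hsiao-style) H-matrix, but the check-bit and syndrome logic follows the same pattern.

#include <cstdint>
#include <cstdio>

// Toy SEC-DED: extended Hamming (8,4). Bit 0 of the codeword is the overall
// parity bit; bits 1..7 form a Hamming(7,4) codeword with check bits at the
// power-of-two positions 1, 2 and 4.
uint8_t encode(uint8_t d)                        // d = 4 data bits
{
    uint8_t b[8] = {0};
    b[3] = (d >> 0) & 1;  b[5] = (d >> 1) & 1;
    b[6] = (d >> 2) & 1;  b[7] = (d >> 3) & 1;
    b[1] = b[3] ^ b[5] ^ b[7];                   // covers positions with bit 0 set
    b[2] = b[3] ^ b[6] ^ b[7];                   // covers positions with bit 1 set
    b[4] = b[5] ^ b[6] ^ b[7];                   // covers positions with bit 2 set
    uint8_t cw = 0, par = 0;
    for (int i = 1; i < 8; ++i) { cw |= (uint8_t)(b[i] << i); par ^= b[i]; }
    return (uint8_t)(cw | par);                  // bit 0 = overall parity
}

// Returns 0 = clean, 1 = single error corrected, 2 = double error detected.
int decode(uint8_t& cw, uint8_t& d)
{
    int syn = 0, overall = 0;
    for (int i = 0; i < 8; ++i) {
        int bit = (cw >> i) & 1;
        overall ^= bit;
        if (i && bit) syn ^= i;                  // syndrome = XOR of set-bit positions
    }
    int status = 0;
    if (syn && overall)       { cw ^= (uint8_t)(1 << syn); status = 1; }  // single error: flip it
    else if (!syn && overall) { cw ^= 1; status = 1; }                    // error in the parity bit itself
    else if (syn && !overall) status = 2;                                 // double error, not correctable
    d = (uint8_t)(((cw >> 3) & 1) | (((cw >> 5) & 1) << 1) |
                  (((cw >> 6) & 1) << 2) | (((cw >> 7) & 1) << 3));
    return status;
}

int main()
{
    uint8_t cw = encode(0xB);                    // data 1011
    cw ^= 1 << 5;                                // inject a single-bit error
    uint8_t d;
    int st = decode(cw, d);
    std::printf("status=%d data=%X\n", st, d);   // prints status=1 data=B
}

A 72/64 Hsiao code does the same thing with eight check bits and 64 data bits, using an H-matrix whose columns all have odd weight, which is what lets double errors be distinguished from single errors.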

openssl RAND_add() documentation refers to RFC1750. RFC1750 is silent on the matter

The documentation for the openssl library's RAND_add function has this to say about the entropy argument:
The entropy argument is (the lower bound of) an estimate of how much randomness is contained in buf, measured in bytes. Details about sources of randomness and how to estimate their entropy can be found in the literature, e.g. RFC 1750.
source: http://linux.die.net/man/3/rand_add
RFC 1750 can be found here: https://www.rfc-editor.org/rfc/rfc1750.html
... but of course it is completely silent on the subject of "entropy" (a text search reveals zero occurrences of this word in the document).
So here are my questions:
What specifically is the entropy argument supposed to be a measurement of?
What is the valid range of values (the argument is of type double)?
Many thanks.
What specifically is the entropy argument supposed to be a measurement of?
How unpredictable the input is to an attacker.
What is the valid range of values (the argument is of type double)?
The value should be the base-2 logarithm of the number of guesses you expect an attacker would need to guess the contents of the input, divided by 8. This is basically the number of bytes of entropy the input contains; for example, a buffer drawn uniformly from 2^32 equally likely values carries 32 bits, i.e. 4 bytes, of entropy.
It may not use the word, but the entire RFC is about entropy. It uses the word 'unpredictability' instead.
Per the "man" page you cited:
http://linux.die.net/man/3/rand_add
RAND_seed() is equivalent to RAND_add() when num == entropy.
Here is a discussion about the "entropy" argument:
https://www.mail-archive.com/openssl-dev@openssl.org/msg09806.html
Lutz Jaenicke wrote:
The entropy parameter should tell how much "uncertainty" is in the data provided. If we choose a value of 0, we mean that there may be entropy in it, but maybe an attacker can predict the value, so we use it but do not count it as a really unpredictable input.
So, if we know the entropy per character (byte), what's the correct formula for deriving the correct value for the entropy parameter?
If the entropy is 10% (compression ratio 1:10), the parameter is "number of bytes * 10%".
Please note again that the compression ratio is just the condensed amount of information in the message. If we don't know the message, it is more or less equivalent to the entropy (unpredictability) in it. If the message is known, the entropy (from the cryptographic point of view) is zero! It is therefore a difficult decision to finally estimate the entropy coming from the source, with the compressed size being kind of an upper bound.
In other words, if your buffer is "perfectly random", then entropy == bufsize.
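To make that concrete, here is a minimal sketch of feeding a timing sample into the PRNG with a deliberately conservative estimate. The half-byte figure is an illustrative assumption, not a measured value; you have to estimate the unpredictability of your own source.

#include <openssl/rand.h>
#include <chrono>

// Mix a high-resolution timestamp into the OpenSSL PRNG. We hand over 8 bytes
// of data but only claim half a byte (4 bits) of entropy, on the assumption
// that an attacker can predict most of the timestamp.
void seed_from_clock()
{
    auto t = std::chrono::steady_clock::now().time_since_epoch().count();
    RAND_add(&t, (int)sizeof t, 0.5);
}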
Finally, the RFC cited in the rand_add() man page, RFC 1750, has been superseded by RFC 4086:
https://www.rfc-editor.org/rfc/rfc4086

How can one verify the proper operation of a sieve close to 2^64?

Small primes - up to about 1,000,000,000,000 - are readily available from various sources. The Prime Pages (utm.edu) have lists for the first 50 million primes, primos.mat.br goes up to 10^12, and programs like the one available at primesieve.org go even higher.
However, when it comes to numbers close to 2^64 there are only the ten primes mentioned on the page Primes just less than a power of two at primes.utm.edu, and that seems to be it.
The primality test found there refuses to work on numbers that don't fit a double, others - elsewhere - fail to refuse and just print trash. The primesieve.org program refuses to work with numbers that aren't at least some 40 billion below 2^64, which doesn't exactly inspire confidence in the quality of their coding. The same result everywhere: nada, zilch, niente.
The cogs and gears of sieves start creaking around the 2^62 mark, and close to 2^64 there's hardly a cog that doesn't creak loudly threatening to break apart. Hence the need for testing the implementation is greatest where verification is most difficult, because of the scarcity/absence of reliable reference data. The primesieve.org program seems to be the only one that works at least up to 2^63 or thereabouts, but I don't trust it too much because of the above-mentioned issue.
So how then can one verify the proper operation of a sieve close to 2^64? Are there reliable lists somewhere for a million (or ten million or a hundred million) primes just below and above powers of two like 2^64, 2^63 and so on? Or are there reliable (trustworthy, verified, banged-on a lot) programs that yield such sequences or that can verify primes or lists of primes?
Once a sieve has been verified it can be used to produce handy lists with sums/checksums for loads of interesting ranges, but absent such lists the situation seems difficult...
P.S.: I determined the upper limit for the primesieve.org turbo siever to be UINT64_MAX - 10 * UINT32_MAX, or 0xFFFFFFF600000009. That means only the primes in the topmost 10 * UINT32_MAX numbers have no reference data at all so far...
Instead of looking for a pre-computed list, you could compare the output of your sieve to a different sieve. A good sieve, written by Tomás Oliveira e Silva, is available at http://sweet.ua.pt/tos/software/prime_sieve.html.
Another way to test your code is by testing the primality of all numbers your sieve reports as prime (or conversely, testing the non-primality of all numbers your sieve does not report as prime). A good way to do that is the Baillie-Wagstaff test. You can find a good-quality implementation by Thomas R. Nicely at http://www.trnicely.net/misc/bpsw.html.
You might also be interested in Jan Feitsma's tables of pseudoprimes at http://www.janfeitsma.nl/math/psp2/index, which are complete to 2^64.
First, thanks for sharing your program and working on correctness. I think it's important to do testing, and sieving near the size boundary was something I spent time working on for my code.
"The same result everywhere: nada, zilch, niente." You're not looking hard enough. There are plenty of tools that do this. It's too bad primesieve doesn't go all the way to 2^64-1, but that doesn't mean nothing else does.
"So how then can one verify the proper operation of a sieve close to 2^64?" One thing I did it is make an edge-case test that runs through all combinations of start/end points near 2^64-1, verifying a number of methods all generate the same results. This relies on having a list of these primes to start, but there are many ways to get these. Not only does this test the sieve at this range, but tests the start/end conditions to make sure there are no issues there.
Some ways to generate a million primes below 2^64:
time perl -Mntheory=:all -E 'forprimes { say } ~0-44347170,~0' | md5sum
Takes ~2s to generate 1M primes. We can force use of different code (Perl or GMP), use primality tests, etc. Lots of ways to do this, including just looping and calling is_provable_prime($n), for example. There are also other Perl modules including Math::Primality though they are much slower.
echo 'forprime(i=2^64-44347170,2^64-1,print(i))' | time gp -f -q | md5sum
Takes ~13s to generate 1M primes. As with the Perl module, there are lots of alternate ways including looping calling isprime which is a deterministic routine (assuming a non-ancient version of Pari/GP).
#include <stdio.h>
#include <gmp.h>

int main(void) {
    mpz_t n;
    mpz_init_set_str(n, "18446744073665204445", 10);
    mpz_nextprime(n, n);
    while (mpz_sizeinbase(n, 2) < 65) {
        /* If you don't trust mpz_nextprime, one could add this:
         *   if (!mpz_probab_prime_p(n, 100))
         *     { fprintf(stderr, "Bad nextprime!\n"); return -1; }
         */
        gmp_printf("%Zd\n", n);
        mpz_nextprime(n, n);
    }
    mpz_clear(n);
    return 0;
}
Takes about 30s and gets the same results. This one is more dubious as I don't trust its 25 preset-random-base MR test as much as BPSW or one of the proof methods, but it doesn't matter in this case as we see the results match. Adding the extra 100 tests is very expensive in time, but would make false results extremely unlikely (I suspect we have overlapping bases, so this is also wasteful).
from sympy import nextprime
n = 2**64 - 44347170
n = nextprime(n)
while n < 2**64:
    print(n)
    n = nextprime(n)
Using Python's SymPy. Unfortunately primerange uses crazy memory when given 2^64-1 so that's not possible to use. Doing the simple nextprime method isn't ideal -- it takes about 5 minutes, but generates the same results (the current SymPy isprime uses 46 prime bases, which is many more than needed for deterministic results under 2^64).
There are other tools, e.g. FLINT, GAP, etc.
I realize that since you're on Windows, the world is wonky and lots of things don't work right. I have tested Perl's ntheory on Windows and with both Cygwin and Strawberry Perl from command prompt I get the same results. The GMP code ought to work the same, assuming GMP works correctly.
Edit add: If your results don't match one of the comparison methods, then one of the two (or both) is wrong. It may be the comparison code that is wrong! It helps everyone if you find and report errors. It's unlikely but possible they are both wrong in the same way, which is why I like to compare with as many other sources as possible. To me that is more robust than picking one "golden" code to compare against. Especially if you're using an oddball platform that may not have been thoroughly tested.
For BPSW, there are a few implementations around:
Pari. Almost extra strong (AES) Lucas; it lives in the Pari source code, so I'm not sure how portable it is.
TR Nicely. Strong Lucas, standalone code.
David Cleaver. Standard, Strong or Extra Strong Lucas. Standalone library, very clear, very easy to use.
My non-GMP code, including asm Montgomery math for x86_64. Quite a bit faster than bigint codes of course.
My GMP code. Standard, Strong, AES, or Extra strong Lucas. Faster than the other bigint codes. Also has other Frobenius and other compositeness tests. Can be made standalone.
I have a version using LibTomMath that I hope to get into one of the Perl6 VMs. Only interesting if you want to use LTM.
All verified vs. the Feitsma data. I'm sure there are more implementations around as well. FLINT has a variation that is quite fast, but it isn't really BPSW (but it's been verified for numbers under 2^64).
In general, one must use less naive techniques than trial division, or be very patient.
(gp/PARI documentation)
For 64-bit integers, trial division takes millions of times as long as even a simple sieve, let alone thoroughbreds like Kim Walisch's program (primesieve.org) which is orders of magnitude faster.
The reference sieve I want to verify (there's a standalone .cpp on pastebin) finds about a million primes per second when sieving close to 2^64, whereas the trial division code I lifted out of the GMP implementation takes 20 seconds to find even one. Restricting trial division to presieved primes (stored as deltas with one byte per prime for fast iteration) speeds it up by an order of magnitude, but it still outputs less than one prime per second on my laptop.
Hence, trial division can deliver only homœopathic amounts of reference data, even if I use all cores I can lay hands on including Kindle, phone and toaster.
More sophisticated tests like Miller-Rabin or the Baillie-PSW test linked by user448810 are several orders of magnitude faster than trial division. For numbers up to 2^64 the Baillie-PSW test has been verified to be deterministic (no pseudoprimes below that threshold). Miller-Rabin may or may not be deterministic up to 2^64 when the first 12 primes are used as bases, or the 7-base set found by Jim Sinclair (meaning the 'net offers statements to that effect but apparently no proof).
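For what it's worth, here is a minimal sketch of such a deterministic Miller-Rabin for n < 2^64 with the first 12 primes as bases (the claim that this base set suffices below 2^64 is exactly the one discussed above; I'm taking it at face value here). It uses unsigned __int128 for the modular multiplication, which is a GCC/Clang extension.

#include <cstdint>
#include <cstdio>

// 64-bit modular multiplication and exponentiation via unsigned __int128.
static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t m) {
    return (uint64_t)((unsigned __int128)a * b % m);
}
static uint64_t powmod(uint64_t a, uint64_t e, uint64_t m) {
    uint64_t r = 1;
    a %= m;
    while (e) {
        if (e & 1) r = mulmod(r, a, m);
        a = mulmod(a, a, m);
        e >>= 1;
    }
    return r;
}

// Miller-Rabin with bases 2..37; reported (and checked against the Feitsma data)
// to be deterministic for all n < 2^64.
bool is_prime_u64(uint64_t n) {
    static const uint64_t bases[12] = {2,3,5,7,11,13,17,19,23,29,31,37};
    if (n < 2) return false;
    for (uint64_t p : bases)
        if (n % p == 0) return n == p;           // small n and multiples of the bases are settled here
    uint64_t d = n - 1;
    int s = 0;
    while ((d & 1) == 0) { d >>= 1; ++s; }       // n - 1 = d * 2^s with d odd
    for (uint64_t a : bases) {
        uint64_t x = powmod(a, d, n);
        if (x == 1 || x == n - 1) continue;
        bool witness = true;
        for (int i = 1; i < s && witness; ++i) {
            x = mulmod(x, x, n);
            if (x == n - 1) witness = false;
        }
        if (witness) return false;               // base a proves n composite
    }
    return true;
}

int main() {
    std::printf("%d\n", (int)is_prime_u64(18446744073709551557ULL));  // 2^64 - 59, expect 1
}

Cross-checking a sieve segment against something like this is slow, but it is completely independent of the sieve's own logic.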
With Baillie-PSW verified - and faster to boot - it seems like a good choice. Unfortunately it is also several orders of magnitude more complicated than a sieve, making it even more important to find trustworthy implementations that are ready to compile without lots of twiddling or - ideally - available as binaries.
Thomas Nicely's Baillie-PSW page has source code that uses GMP, and gp/PARI can use either GMP or its own code. The latter is also available as a binary, which is very fortunate since building GMP code on an exotic, off-beat platform like MinGW under Windows is a non-trivial undertaking, even if MPIR is used instead of GMP.
That gets us some bulk data but still nowhere near enough for verifying the sieve, since it is orders of magnitude too slow even for covering the blank area left by the cap of primesieve.org (10 * 2^32 numbers).
This is where Will Ness's bigint idea comes in. The operation of the sieve can be verified up to 1,000,000,000,000 using reference data from multiple, independent sources. Switching index variables from 32-bit to 64-bit eliminates most of the boundary cases that could cause the code to mess up in higher regions, leaving only a very few places where even uint64_t gets close to its limits. With those places thoroughly inspected and generously covered by test cases derived from the Baillie-PSW undertaking we can have reasonably high confidence that the sieve code is good. Add copious verification against primesieve.org in the range from 10^12 up to its cap, and it should be sufficient to regard the sieve implementation as trustworthy.
With the sieve up and running, it's easy to cover arbitrary ranges with bulk data. Or with digests, as a canned/compressed means of verification that can serve needs of any size and shape. That's what I use in the demo .cpp I mentioned earlier, although my real code uses a mixture of an optimised digest implementation for general work and a special raw 128-bit memory checksum for quick self-checks of factor sieve bitmaps.
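As a sketch of what such a digest could look like (the count/sum/xor combination here is an illustrative choice, not the format my real code uses): feed it the primes of a range, one per line, from two independent programs and compare the three numbers.

#include <cinttypes>
#include <cstdint>
#include <cstdio>

// Digest a stream of primes read from stdin: count, sum mod 2^64 and running xor.
// Two implementations that agree on all three for the same range almost
// certainly produced the same primes; wrap-around of the sum is harmless
// because both sides wrap identically.
int main() {
    uint64_t count = 0, sum = 0, x = 0, p;
    while (std::scanf("%" SCNu64, &p) == 1) {
        ++count;
        sum += p;
        x ^= p;
    }
    std::printf("count=%" PRIu64 " sum=%" PRIu64 " xor=%" PRIu64 "\n", count, sum, x);
    return 0;
}

The perl and gp one-liners quoted above can be piped straight into it for cross-checking against the sieve's output over the same range.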
SUMMARY
up to 1,000,000,000,000 verification against primos.mat.br or similar
up to 2^64 - 10 * 2^32 verification against primesieve.org
rest up to 2^64-1: verification of strategically chosen segments using Baillie-PSW (e.g. gp/PARI)

My code works, but I'm not satisfied with the efficiency. (C++ int to binary to scaled double)

Code pasted here
Hello SO. I just wrote my first semi-significant PC program, written purely for fun/to solve a problem, having not been assigned the problem in a programming class. I'm sure many of you remember the first significant program that you wrote for fun.
My issue is, I am not satisfied with the efficiency of my code. I'm not sure if it's the I/O limitation of my terminal or my code itself, but it seems to run quite slow for DAC resolutions of 8 bits or higher.
I haven't commented the code, so here is an explanation of the problem that I was attempting to solve with this program:
The output voltage of a DAC is determined by a binary number having bits Bn, Bn-1 ... B0, and a full-scale voltage.
The output voltage has an equation of the form:
Vo = G * ( Bn/2^0 + B(n-1)/2^1 + ... + B0/2^n )
where G is the gain that makes an input with all bits high produce the full-scale voltage.
If you run the code the idea will be quite clear.
My issue is, I think that what I'm outputting to the console can be achieved in much less than 108 lines of C++. Yes, it can easily be done by precomputing the step voltage and simply rendering the table by incrementation, but a "self-requirement" that I have for this program is that on some level it performs the series calculation described above for each binary represented input.
I'm not trying to be crazy with that requirement. I'd like this program to prove the nature of the formula, which it currently does. What I'm looking for is some suggestions on how to make my implementation generally cleaner and more efficient.
pow(2.0, x) is almost always a bad idea - especially if you're iterating over x. pow(2.0,0) == 1, and pow(2.0,x) == 2 * pow(2.0,x-1)
DtoAeqn returns an unbalanced string (one more )), that hurts my brain.
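For instance, a minimal sketch of carrying the bit weight along the loop instead of calling pow() at every step (the function name and bit packing are illustrative, not taken from the posted code):

#include <cstdio>

// Sum the series Bn/2^0 + B(n-1)/2^1 + ... + B0/2^n for a DAC input word,
// keeping a running weight instead of recomputing pow(2.0, k) each time.
double seriesSum(unsigned word, int nbits)
{
    double weight = 1.0, sum = 0.0;              // 2^0 for the MSB term
    for (int k = nbits - 1; k >= 0; --k) {       // walk from the MSB down to B0
        sum += weight * ((word >> k) & 1u);
        weight *= 0.5;                           // each lower bit weighs half as much
    }
    return sum;                                  // multiply by G to get the output voltage
}

int main()
{
    std::printf("%f\n", seriesSum(0xD, 4));      // 1101 -> 1 + 1/2 + 0 + 1/8 = 1.625
}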
You could use Horner's method to evaluate the formula efficiently. Here's an example I use to demonstrate it for converting binary strings to decimal:
0.1101 = (1*2^-1 + 1*2^-2 + 0*2^-3 + 1*2^-4).
To evaluate this expression efficiently, rewrite the terms from right to left, then nest and evaluate as follows:
(((1 * 2^-1 + 0) * 2^-1 + 1) * 2^-1 + 1) * 2^-1 = 0.8125.
This way, you've removed the "pow" function, and you only have to do one multiplication (division) for each bit.
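Here's a minimal sketch of the same idea applied to the DAC formula; the function name and the scaling convention (all-ones input maps to the full-scale voltage, per the question's definition of G) are illustrative, not taken from the posted code.

#include <cmath>
#include <cstdio>
#include <string>

// Evaluate the DAC transfer function with Horner's method: walk the bits from
// the LSB towards the MSB, halving and adding one bit per step -- no pow().
double dacOutput(const std::string& bits, double vfs)
{
    double frac = 0.0;
    for (std::size_t i = bits.size(); i-- > 0; )           // LSB first
        frac = (frac + (bits[i] == '1' ? 1.0 : 0.0)) / 2.0;
    // frac is now the binary fraction 0.b1 b2 ... bN; with all bits high it
    // equals 1 - 2^-N, so scale that case to the full-scale voltage.
    double fullScale = 1.0 - std::ldexp(1.0, -(int)bits.size());
    return vfs * frac / fullScale;
}

int main()
{
    std::printf("%.4f\n", dacOutput("1101", 5.0));          // 0.8125 / 0.9375 * 5 V
}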
By the way, I see your code allows up to 128 bits of precision. You won't be able to compute that accurately in a double.

AES Limitations and MixColumns

So we all agree keys have a fixed length of 128, 192 or 256 bits. If our plaintext is 50 bytes, then 50 % 16 = 2 bytes. So we encrypt the plaintext in three full blocks, but how will the remaining two bytes be stored in the State block? Should I pad them? The standard doesn't specify how to handle such conditions.
The MixColumns stage is the most complicated aspect of AES for me; I have been unable to understand the mathematical representation. I understand the matrix multiplication, but I'm surprised by the mathematical results. Multiplying a value by 2 means shifting it left by one position (or right, for the opposite bit ordering), and if the most significant bit was set (0x80), we XOR the shifted result with 0x1B. I thought multiplying by 3 would mean shifting the value by two positions.
I've checked various sources on Wikipedia, even the tutorial that provides a C implementation, but I'm more interested in completing my own implementation. Thank you for any possible input.
In the MixColumns stage the state bytes are multiplied as polynomials over GF(2), reduced modulo the AES polynomial.
take this example
AA*3
10101010*00000011
is
(x^7+x^5+x^3+x^1) * (x^1+x^0)
x^1+x^0 is 3 represented in polynomial form
x^7+x^5+x^3+x^1 is AA represented in polynomial form
first take x^1 and dot multiply it by the polynomial for AA.
that results in...
x^8+x^6+x^4+x^2 ... adding one to each exponent
then reduce this to 8 bits by XORing with 11B
11B is x^8+x^4+x^3+x^1+x^0 in polynomial form.
so...
x^8+x^6+x^4+x^2
x^8+x^4+x^3+x^1+x^0
leaves
x^6+x^3+x^2+x^1+x^0, which is AA*2
now take AA and dot multiply by x^0 (basically AA*1)
that gives you
x^7+x^5+x^3+x^1 ... a duplicate of the original value.
then exclusive or AA*2 with AA*1
x^7+x^5+x^3+x^1
x^6+x^3+x^2+x^1+x^0
which leaves
x^7+x^6+x^5+x^2+x^0 or 11100101 or E5
I hope that helps.
here also is a document detailing the specifics of how mix columns works.
mix_columns.pdf
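In code, the multiply-by-2 ("xtime") and multiply-by-3 steps used above can be sketched like this (the names are illustrative):

#include <cstdint>
#include <cstdio>

// GF(2^8) multiply-by-2 as used in AES MixColumns: shift left, and if the bit
// that fell off the top was set, reduce by XORing with 0x1B (the low 8 bits of
// the AES polynomial 0x11B).
static uint8_t xtime(uint8_t a) {
    return (uint8_t)((a << 1) ^ ((a & 0x80) ? 0x1B : 0x00));
}

// Multiply by 3 = multiply by 2, then add (XOR) the original value.
static uint8_t mul3(uint8_t a) {
    return (uint8_t)(xtime(a) ^ a);
}

int main() {
    std::printf("AA*2 = %02X\n", xtime(0xAA));   // 4F
    std::printf("AA*3 = %02X\n", mul3(0xAA));    // E5, matching the worked example above
}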
EDIT: Normal matrix multiplication does not apply to this, so forget about normal matrices.
In response to your questions:
If you want to encrypt a stream of bytes using AES, do not just break it into individual blocks and encrypt them individually. This is not cryptographically secure, and a clever attacker can recover a lot of information from your original plaintext. This mode is called electronic codebook (ECB), and if you follow the link and see what happens when you use it to encrypt Tux the Linux penguin, you can visually see its insecurities. Instead, consider using a known secure technique like cipher-block chaining (CBC) or counter mode (CTR). These are a bit more complex to implement, but it's well worth the effort so that you can ensure a clever attacker can't break your encryption indirectly.
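As a minimal sketch (assuming OpenSSL's EVP interface), encrypting a 50-byte message with AES-128-CBC looks roughly like this. Note that EVP applies PKCS#7 padding by default, which also answers the question about the two leftover bytes: the library pads the final block for you.

#include <openssl/evp.h>
#include <cstdio>
#include <cstring>

int main()
{
    unsigned char key[16] = {0};                     // demo key/IV only; never hard-code these
    unsigned char iv[16]  = {0};
    unsigned char plaintext[50];
    std::memset(plaintext, 'A', sizeof plaintext);
    unsigned char ciphertext[sizeof plaintext + 16]; // room for the padded final block
    int len = 0, total = 0;

    EVP_CIPHER_CTX* ctx = EVP_CIPHER_CTX_new();
    EVP_EncryptInit_ex(ctx, EVP_aes_128_cbc(), nullptr, key, iv);
    EVP_EncryptUpdate(ctx, ciphertext, &len, plaintext, (int)sizeof plaintext);
    total = len;
    EVP_EncryptFinal_ex(ctx, ciphertext + total, &len); // emits the padded last block
    total += len;
    EVP_CIPHER_CTX_free(ctx);

    std::printf("ciphertext is %d bytes (50 rounded up to a multiple of 16)\n", total);
    return 0;
}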
As for how the MixColumns stage works, I really don't understand much of the operation myself. It's based on a construction that involves fields of polynomials. If I can find a good explanation as to how it works, I'll let you know.
If you want to implement AES to further your understanding, that's perfectly fine and I encourage you to do so (though you are probably better off reading the mathematical intuition as to where the algorithm comes from). However, you should not use your own implementation for any actual cryptographic purposes. Without extreme care, you will render your implementation vulnerable to a side-channel attack that can compromise its security. The most famous example of this involves RSA encryption, in which without careful planning an attacker can actually watch the power draw of the computer as it does the encryption to recover the bits of the key. If you want to use AES to do encryption, consider using a known, tested, open-source implementation of the algorithm.
Hope this helps!
If you want to test the outcome of your own implementation (including any internal state during the computation), you can check this page:
http://www.keymolen.com/aes.jsp
It displays all internal states for any given plaintext, key and IV, including the MixColumns stage.