How is the Reed-Solomon code RS(255,233) formed?
I understand how RS(255,223) is formed because
n=2^8-1=255
r=32, k=n-r=223
but how about RS(255,233)?
I read somewhere on the internet that RS(255,233) has 32 redundant symbols, but why? Isn't it supposed to be 22 redundant symbols?
Any link that I can refer to would be appreciated. Thank you.
It was a mistake: RS(255,233) would have 22 parity symbols, and RS(255,223) would have 32 parity symbols.
https://www.cs.cmu.edu/~guyb/realworld/reedsolomon/reed_solomon_codes.html
Note that in some cases of RS(n,k), n-k is an odd number, giving 2t+1 parity symbols.
Another note: in the Wikipedia article, t means the number of parity symbols rather than the number of errors that can be corrected:
https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction
The Wikipedia article also covers the original view of RS codes, which is actually a different code with the same name. In this case, for GF(2^8), the maximum value for n is 256 instead of 255. Some erasure-only codes use original-view encoding, called "Vandermonde" encoding. Another encoding method is "Cauchy".
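To make the arithmetic concrete, here is a small illustrative sketch of the parameter relationships (the numbers are just the ones from this thread):

#include <cstdio>

int main() {
    int m      = 8;              // bits per symbol, so the code is over GF(2^8)
    int n      = (1 << m) - 1;   // codeword length: 2^8 - 1 = 255 symbols
    int k      = 223;            // data symbols
    int parity = n - k;          // 32 parity symbols for RS(255,223)
    int t      = parity / 2;     // corrects up to 16 symbol errors
    std::printf("RS(%d,%d): %d parity symbols, corrects up to %d errors\n",
                n, k, parity, t);
    // For RS(255,233): 255 - 233 = 22 parity symbols, t = 11.
    return 0;
}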
I want to use a CBC-MAC in C++. First I hope to find an implementation of a block cipher that I can use in CBC mode, which, as I understand it, gives a CBC-MAC. But I have two questions:
1) If the length of the message to be authenticated is not a multiple of the block cipher's block length, what shall I do?
2) To strengthen CBC-MAC, one recommended way, as mentioned on Wikipedia, is to put the length of the message in the first block. But how should I encode the length: as a string, or in binary? If the cipher's block length is, say, 64 bits, do I encode the number as a 64-bit number? E.g., if the message length is 230, should I use the following value as the first block:
00000000 00000000 00000000 00000000 00000000 00000000 00000000 11100110
?
The answer to your first question ties into the second. You must "pad" the message with something until it is a multiple of the block size. The pad bytes are added to the message before the MAC is calculated, but only the original message is transmitted/stored/etc.
For a MAC, the easiest thing to do is pad with zeros. However, this has a vulnerability: if the message ends in one or more zeros, an attacker could add or remove zeros without changing the MAC. But if you do step 2, this and another attack are both mitigated.
If you prepend the length of the message before the message (i.e., not just somewhere in the first block, but as the very first thing in the first block), it mitigates the ability to add or remove trailing zeros. It also mitigates an attacker's ability to forge a message with an entire arbitrary extra block appended. So this is a good thing to do. It is also a good idea for completely practical reasons: you know how many bytes the message is without relying on any external means.
It does not matter what the format of the length is - some encoded ASCII version or binary. As a practical matter, however, it should always be simple binary.
There is no reason that the number of bits in the length field must match the cipher block size. The length field only needs to be large enough to represent the possible message sizes. For example, if the message size could range from 0 to 1000 bytes, you could prepend an unsigned 16-bit integer.
This is done before the MAC is calculated, on both the sender's and the receiver's side. In essence, the length is verified at the same time as the rest of the message, eliminating an attacker's ability to forge a longer or shorter message.
There are many open source C implementations of block ciphers like AES that would be easy to find and get working.
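To illustrate the scheme (length prefix first, zero padding, CBC chaining, MAC = last block), here is a minimal sketch. The block cipher below is a do-nothing placeholder just so the structure is visible; a real implementation would substitute AES or another real cipher, and per the caveat below this is for learning only.

#include <cstdint>
#include <string>
#include <vector>

static const size_t BLOCK = 8; // 64-bit blocks, as in the question

// Placeholder "cipher": NOT a real block cipher. It only stands in for
// one (e.g. AES or DES) so that the CBC-MAC structure is visible.
void toy_encrypt_block(const uint8_t key[BLOCK], uint8_t block[BLOCK]) {
    for (size_t i = 0; i < BLOCK; ++i)
        block[i] = static_cast<uint8_t>((block[i] ^ key[i]) + i);
}

// CBC-MAC with a 16-bit big-endian length prefix and zero padding.
std::vector<uint8_t> cbc_mac(const uint8_t key[BLOCK], const std::string& msg) {
    // Prepend the length: 16 bits is enough for messages up to 65535 bytes.
    std::vector<uint8_t> buf;
    buf.push_back(static_cast<uint8_t>(msg.size() >> 8));
    buf.push_back(static_cast<uint8_t>(msg.size() & 0xFF));
    buf.insert(buf.end(), msg.begin(), msg.end());
    // Zero-pad to a multiple of the block size.
    while (buf.size() % BLOCK != 0)
        buf.push_back(0);
    // CBC chaining with a zero IV; the MAC is the final ciphertext block.
    uint8_t chain[BLOCK] = {0};
    for (size_t off = 0; off < buf.size(); off += BLOCK) {
        for (size_t i = 0; i < BLOCK; ++i)
            chain[i] ^= buf[off + i];
        toy_encrypt_block(key, chain);
    }
    return std::vector<uint8_t>(chain, chain + BLOCK);
}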
Caveat
Presumably the purpose of the question is just for learning. Any serious use should consider a stronger MAC, such as those suggested in other comments, and a good crypto library. There are other weaknesses and attacks that can be very subtle, so you should never attempt to implement your own crypto. Neither of us is a crypto expert, and this should be done only for learning.
BTW, I recommend both of the following books by Bruce Schneier:
http://www.amazon.com/Applied-Cryptography-Protocols-Algorithms-Source/dp/0471117099/ref=asap_bc?ie=UTF8
http://www.amazon.com/Cryptography-Engineering-Principles-Practical-Applications/dp/0470474246/ref=asap_bc?ie=UTF8
I need to use an error-correcting technique on short messages (between 100 and 200 bits). The space available for the redundant bits is constrained to 20-50%.
I will have to implement the encoding and decoding in C/C++, so it needs to be either open source or sufficiently easy to program. (I have had some experience in the past with decoding algorithms - they are dreadful!)
Can anyone advise a suitable error-correcting code to use (with relevant parameters)?
Take a look at Reed Solomon error correction.
A sample implementation in C++ is available here.
For a different option, look here - see item #11.
EDIT: If you want a commercial library - http://www.schifra.com/faq.html
Reed-Solomon encoders are described in the form RS(CAPACITY,PAYLOAD). The capacity is always 2^SYMBOL-1, where SYMBOL is the number of bits in each Reed-Solomon symbol. Quite often, this SYMBOL size is 8 bits (a normal byte). It can typically be anything from 3 to 16 bits. For an 8-bit symbol, the Reed-Solomon encoder will be named RS(255,PAYLOAD).
The PAYLOAD is the number of non-parity symbols. If you want 4 parity symbols, you would specify RS(255,251).
To effectively correct errors in your data block, you must first package the data as symbols (groups of bits, quite often just 8-bit bytes). Your goal is to try to arrange (if possible) for any errors to be clustered into the smallest number of symbols possible.
For example, if an error occurs on average every 8 bits, then an 8-bit symbol will not be appropriate; pretty much every symbol will have an error! You might go for 4-bit symbols and use an RS(15,11) codec -- up to 11 4-bit symbols at a time, producing 4 parity symbols per block. The smaller the symbol size, the lower the CAPACITY (e.g., for a SYMBOL size of 4 bits, the CAPACITY is 2^4-1 == 15 symbols).
But typically, you would use 8-bit symbols. If you have a more realistic error rate of, say, 10% of your 8-bit symbols being erroneous, then you might use an RS(255,205) -- 50 parity symbols per 255 symbol Reed-Solomon "codeword", with a maximum PAYLOAD of 205 bytes. This gives us ~25% parity, allowing us to correct a codeword containing up to ~12.5% errors.
Using the c++/ezpwd/rs Reed-Solomon API from https://github.com/pjkundert/ezpwd-reed-solomon, you would specify this as:
#include <ezpwd/rs>
...
ezpwd::RS<255,205> rscodec;
Put your data in a std::string (it can handle raw 8-bit binary data just fine) or a std::vector and call the API, adding the 50 symbols of parity:
std::string data;
// ... fill data with a fixed size block, up to 205 bytes
rscodec.encode( data );
Send your data, and later on, after you receive the data+parity, recover the original data (and discard the 50 parity symbols):
int corrected = rscodec.decode( data );
If the data could be recovered, the number of symbols corrected will be returned, or -1 if the Reed-Solomon codeword contained too many errors.
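Putting the pieces together, a round trip might look like the following sketch (hypothetical data; it assumes the ezpwd-reed-solomon headers are on your include path and the API behaves as described above):

#include <ezpwd/rs>
#include <cstdio>
#include <string>

int main() {
    ezpwd::RS<255,205> rscodec;
    std::string data(205, 'U');            // a fixed-size 205-byte payload
    rscodec.encode(data);                  // appends 50 parity symbols; size is now 255
    data[3]   = 'X';                       // simulate a couple of
    data[100] = '?';                       //   transmission errors
    int corrected = rscodec.decode(data);  // corrects in place; -1 if unrecoverable
    std::printf("corrected %d symbols\n", corrected);
    data.resize(205);                      // discard the 50 parity symbols
    return 0;
}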
Enjoy!
I need to develop an error-correcting code.
My alphabet is {0,1,2,3} (4 elements).
The codeword size n will be 8 or 12.
Expected error-correction capability: 1 digit.
Expected error-detection capability: 2 digits.
I have reviewed many ECC techniques (RS, LDPC, etc.), yet I still don't know where to start or how to proceed.
Can anybody please help me construct it?
Thanks.
Have you considered a checksum?
There are tons of ways to implement this, but a common approach would be to use a Reed-Solomon code.
Since you need to detect all two-symbol errors and correct all one-symbol errors, that means you will need two check symbols.
You say you have 2-bit (4-element) symbols, which limits your code length to 3 symbols.
Add that up and you have 1 data symbol and 2 check symbols in each 6-bit codeword.
Not very efficient, eh? At that rate, you might as well just send each symbol three times, which gives the same codeword size and the same detection and correction power.
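For comparison, a minimal sketch of that triple-repetition scheme (illustrative only): majority vote corrects any single corrupted copy, and a three-way disagreement is detected but not correctable.

#include <array>

// Encode one symbol from {0,1,2,3} as three identical copies.
std::array<int, 3> rep3_encode(int symbol) {
    return {symbol, symbol, symbol};
}

// Majority-vote decode: corrects one corrupted copy; returns -1
// when all three copies disagree (error detected, not correctable).
int rep3_decode(const std::array<int, 3>& word) {
    if (word[0] == word[1] || word[0] == word[2]) return word[0];
    if (word[1] == word[2]) return word[1];
    return -1; // no majority
}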
To use Reed-Solomon more effectively, you'll need to use large symbols. This is true for most other types of codes as well.
EDIT:
You may want to consider generalized BCH codes, which don't have quite as many limitations as Reed-Solomon codes (Reed-Solomon codes are a subset of BCH codes), at the expense of more complex decoding:
http://en.wikipedia.org/wiki/BCH_code
I have an application that uses the open-source library libgcrypt to encrypt/decrypt a data block (32 bytes). Now I am going to replace it with the Microsoft CryptoAPI. My problem is that the libgcrypt and CryptoAPI approaches generate different ciphertexts even though I use the same AES-256 algorithm in CFB mode, the same key, and the same IV, although each library can correctly decrypt its own ciphertext.
Could someone tell me what the problem is? Thanks.
Do the two libraries assume different endianness, or assign the bytes in the key/IV in different orders?
If the endianness assumptions are different, you may need to re-order the bytes in the key, IV and/or plaintext to get matching results. For example, if you are supplying bytes in the order abcdefgh, you may need to switch this to 'dcbahgfe' to get things to work.
There is an additional parameter for CFB, namely the "shift amount" at each iteration. The Wikipedia page on CFB has some information: you encrypt x bits per block encryption, where x is any value between 1 and the block size (128 for AES). I suspect that in your code, the Microsoft CryptoAPI and libgcrypt do not use the same value for x.
As explained in the documentation for CryptSetKeyParam(), Windows defaults to x=8 (i.e. one byte at a time). This is the KP_MODE_BITS parameter. On the other hand, libgcrypt defaults to x=n for a n-bit block cipher (i.e. x=128 for AES). I am not sure libgcrypt can be convinced to use another value.
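If you want to try aligning the Windows side, the relevant calls would be along these lines (a sketch only: error handling is omitted, hKey is assumed to be an AES key handle you have already acquired, and whether the provider accepts a feedback size other than 8 is provider-dependent):

#include <windows.h>
#include <wincrypt.h>

void configure_cfb(HCRYPTKEY hKey) {
    // Select CFB mode for this key.
    DWORD mode = CRYPT_MODE_CFB;
    CryptSetKeyParam(hKey, KP_MODE, reinterpret_cast<BYTE*>(&mode), 0);

    // Feedback size in bits; Windows defaults to 8 (CFB-8).
    DWORD bits = 8;
    CryptSetKeyParam(hKey, KP_MODE_BITS, reinterpret_cast<BYTE*>(&bits), 0);
}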
I think the problem is the block size. You said you are using a 32-byte block; make sure the block size is the same in both libraries and is supported by both, because some libraries fix the AES block size at 16 bytes.
What are the lengths of your key and IV?
Are the ciphertexts different if the length of the plaintext is exactly 256 bits?
I had the same problem, but with a different library. I noticed one thing in that library: if I pass an input of less than 32 bytes, it shows me the same encrypted data for both.
Is that what's happening in your case? If so, it means the problem is with the padding mechanism.
Okay, so I have a proprietary packed binary format that is basically a loose packing of several different raster datasets. In the past, reading and unpacking it was an easy task. But in the next version, the raster XML data is to be encrypted using AES-256 (not my choice, nor do we have a choice).
We were sent the AES key along with the SALT they are using, so we can modify our unpackager.
NOTE: THESE ARE NOT THE REAL KEYS, JUST AN EXAMPLE.
Each is a 63-byte-long ASCII string:
Key: "QS;x||COdn'YQ#vs-`X\/xf}6T7Fe)[qnr^U*HkLv(yF~n~E23DwA5^#-YK|]v."
Salt: "|$-3C]IWo%g6,!K~FvL0Fy`1s&N<|1fg24Eg#{)lO=o;xXY6o%ux42AvB][j#/&"
We want to use the C++ CryptoAPI to decrypt this. (I am also the only programmer here this week, and this goes live tomorrow. Not our fault.) I've looked around for a simple tutorial on implementing this, but unfortunately I cannot even find a tutorial that handles both the salt and the key separately. All I really have right now is a small function that takes in an array of BYTE along with its length. How can I do this?
I've spent most of the morning trying to make heads or tails of the CryptoAPI, but it's not going well. :(
EDIT
So I asked how they encrypt it. They use C#, with RijndaelManaged, which to my knowledge is not equivalent to AES.
EDIT2
Okay, I finally found out exactly what is going on, and they sent us the wrong keys.
They are doing the following:
Padding = PKCS7
CipherMode = CBC
The key is defined as a set of 32 bytes in hex.
The IV is defined as a set of 32 bytes in hex, too.
They dropped the salt when I asked them about it.
How hard is it to set these things up in CryptoAPI using the wincrypt.h header?
AES-256 uses 256-bit keys. Ideally, each key in your system should be equally likely. A 63-byte string would be 504 bits. You first need to figure out how the string of 63 characters is meant to be converted to 256 bits (the sample ones you gave are not Base64-encoded). Next, a "salt" isn't an intrinsic part of AES. You might be referring to an initialization vector (IV) for Cipher Block Chaining (CBC) mode, or to somehow updating the key.
If I were to guess, I'd assume that by "SALT" you mean the IV, and specifically CBC mode.
You will need to know all of this when using the CAPI functions (e.g., to decrypt).
If all of this sounds confusing, then it might be best to change your design so that you don't have to worry about getting all of this right. Crypto is hard. One bad step could invalidate all the security. Consider looking at this comment on my Stick Figure Guide to AES.
UPDATE: You can look at this for a rough starting point for the C++ CAPI. You'll need a 64-character hex string to get 256 bits (256 bits / (4 bits/char) == 64 chars). You can convert the chars to bits yourself.
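For the hex-to-bytes step, a minimal sketch (assumes the input is a valid 64-character hex string; no validation is done):

#include <cstdint>
#include <string>
#include <vector>

// Convert a 64-character hex string into the 32 raw key bytes.
std::vector<uint8_t> hex_to_bytes(const std::string& hex) {
    std::vector<uint8_t> bytes;
    for (size_t i = 0; i + 1 < hex.size(); i += 2)
        bytes.push_back(static_cast<uint8_t>(
            std::stoi(hex.substr(i, 2), nullptr, 16)));
    return bytes;
}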
Again, I must caution that playing fast and loose with IVs and keys can have disastrous consequences. I've studied AES/Rijndael in depth, down to the math and gate level, and have even written my own implementation. However, in my production code, I stick to a well-tested TLS implementation if at all possible for data in transit. Even for data at rest, it would be better to use a higher-level library.
Rijndael is the algorithm name for AES; with the block size set to 128 bits (the RijndaelManaged default), it is exactly AES.