I've been looking for a crypto library for C++ for a while, and finally came across Crypto++. The library seemed OK until I tried to use 3DES. The problem is that the key length for the DES algorithm implemented in this library is 64 bits (instead of the usual 56 bits). I spent some time searching for an explanation, and all I finally got was a couple of words from the Crypto++ documentation:
The DES implementation in Crypto++ ignores the parity bits (the least significant bits of each byte) in the key.
Does this mean that if I have a usual 56-bit key and want to decrypt some data with this library I have to "expand" my key by inserting a meaningless extra bit after every 7 bits of my key data? Or is there another way to use 56-bit keys with this wonderful library?
A usual DES key is 8 bytes; it is just that the LSB (least significant bit) of each byte, the parity bit, is ignored in almost all current implementations of DES.
But that raises the question of whether DES and/or 3DES (with 112-bit or 168-bit keys) should be used at all. The answer for DES: no; for 3DES: only for legacy compatibility. Both of these have been superseded by AES.
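If you do hold the key as 7 packed bytes, expanding it into the 8-byte form that DES implementations expect is a few lines of bit twiddling. A minimal sketch (the function name is mine, not part of Crypto++; the parity bit is set to give each byte odd parity, per the DES convention, even though Crypto++ ignores it anyway):

```cpp
#include <array>
#include <bitset>
#include <cstdint>

// Expand a 7-byte (56-bit) DES key into the standard 8-byte form:
// each output byte carries 7 key bits in its high bits, and the
// least significant bit is set so the byte has odd parity.
std::array<uint8_t, 8> expand_des_key(const std::array<uint8_t, 7>& key56) {
    uint64_t bits = 0;
    for (uint8_t b : key56)          // pack the 56 key bits into one word
        bits = (bits << 8) | b;

    std::array<uint8_t, 8> key64{};
    for (int i = 0; i < 8; ++i) {
        // take the next 7 bits, leaving the LSB free for parity
        uint8_t byte = static_cast<uint8_t>(((bits >> (49 - 7 * i)) & 0x7F) << 1);
        if (std::bitset<8>(byte).count() % 2 == 0)
            byte |= 1;               // odd parity: odd number of 1 bits
        key64[i] = byte;
    }
    return key64;
}
```

The reverse direction (dropping every 8th bit) works the same way, so a 56-bit key can round-trip through the 8-byte representation.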
So for a piece of code I am writing, I want to create a 128 bit hash - like the one in the MurmurHash3 library (https://pypi.python.org/pypi/mmh3/2.5.1)
Note: I also want to add a salt to the hash which I already have as a string
I was looking around and it was suggested to truncate a SHA256 hash to 128 bits, but is there a way to get SHA256 using Crystal?
I know it supports MD5 and SHA1 in its libraries, but could I even use the OpenSSL library in the code? Would this require the OS to be running OpenSSL?
EDIT:
There is an OpenSSL::Digest module in Crystal (https://crystal-lang.org/api/0.24.1/OpenSSL/Digest.html), but how can I generate a hash to eventually be truncated to 128 bits?
You CAN use the OpenSSL module to generate a SHA256 digest, or a digest with any other algorithm supported by OpenSSL. However, I would not suggest truncating, as the result is no longer a faithful representation of the full hash and has a higher chance of collision. Have you thought about porting murmur to Crystal? I am sure many people would love to see the library. My other suggestion is to just use the full 256 bits, as it is more secure anyway.
I'm writing a general LZW decoder program in C++ and I'm having trouble finding documentation on the length (in bits) of the codewords used. Some articles I've found say that codewords are 12 bits long, while others say 16 bits, while still others say that a variable bit length is used. So which is it? It would make sense to me that the bit length is variable, since that would give the best compression (i.e. initially start with 9 bits, then move to 10 when necessary, then to 11, etc.). But I can't find any "official" documentation on what the industry standard is.
For example, say I open up Microsoft Paint, create a simple 100x100-pixel all-black image, and save it as a TIFF. The image is saved in the TIFF using LZW compression. In this scenario, when I'm parsing the LZW codewords, should I read in 9 bits, 12 bits, or 16 bits for the first codeword? And how would I know which to use?
Thanks for any help you can provide.
LZW can be done any of these ways. By far the most common (at least in my experience) is start with 9 bit codes, then when the dictionary gets full, move to 10 bit codes, and so on up to some maximum size.
From there, you typically have a couple of choices. One is to clear the dictionary and start over. Another is to continue using the current dictionary, without adding new entries. In the latter case, you typically track the compression rate, and if it drops too far, then you clear the dictionary and start over.
I'd have to dig through docs to be sure, but if I'm not mistaken, the specific implementation of LZW used in TIFF starts at 9 and goes up to 12 bits (when it was being designed, MS-DOS was a major target, and the dictionary for 12-bit codes used most of the available 640K of RAM). If memory serves, it clears the table as soon as the last 12-bit code has been used.
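As a concrete sketch of the variable-width scheme: the decoder just widens its read size whenever the next dictionary code would no longer fit in the current width. Assuming TIFF-style parameters (start at 9 bits, cap at 12, with codes 256 and 257 reserved for clear and end-of-information), the bookkeeping looks roughly like this; note that real TIFF readers also handle an "early change" off-by-one quirk that this sketch omits:

```cpp
// Sketch of LZW code-width bookkeeping, TIFF-style parameters assumed:
// codes start at 9 bits wide and grow to at most 12; code 256 is the
// clear code and 257 is end-of-information, so new entries start at 258.
struct CodeWidth {
    int width = 9;
    int next_code = 258;

    // call after adding one entry to the dictionary
    void on_new_entry() {
        ++next_code;
        if (next_code == (1 << width) && width < 12)
            ++width;  // next code would not fit: read one more bit
    }
};
```

A decoder would read `width` bits per codeword and call `on_new_entry()` after each dictionary insertion; on a clear code it would reset to 9 bits and `next_code = 258`.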
I need to calculate CRC in order to form a hash function on an INTEL machine and came up with the following two intrinsic functions:
_mm_crc32_u32
_mm_crc32_u64
In my project, I am dealing with 32-bit variables, and my dilemma is between shifting and ORing each pair of variables (thus creating a 64-bit variable) and then using the 64-bit CRC, or running the 32-bit CRC on each of the two 32-bit variables.
I can't find the number of cycles that each of these functions takes anywhere, and from the Intel function specifications it is unclear which one is preferable.
The same dilemma also applies to the 16-bit version of the CRC function:
_mm_crc32_u16
I tried checking it by taking the time before and after the CRC. The results were pretty much the same. So I need a more sophisticated way of calculating it.
Don't use CRC for hash values. It's not the same kind of thing.
Use the murmurhash for classic computer science hashing needs (that is, not huge cryptographic strength hashes). That also has implementations for different widths.
I don't understand what you mean: you have two 32-bit values and want a hash of that? That might be sensible or might not, depending on why. Can you clarify what you are trying to accomplish?
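If the goal really is a non-cryptographic hash of two 32-bit values, a cheap alternative to the CRC intrinsics is to pack them into one 64-bit word and run a murmur-style finalizer over it. A sketch using the MurmurHash3 fmix64 constants (the function name is mine):

```cpp
#include <cstdint>

// Pack two 32-bit values into a 64-bit word and mix with the
// MurmurHash3 fmix64 finalizer -- a fast non-cryptographic hash.
uint64_t hash_two_u32(uint32_t a, uint32_t b) {
    uint64_t h = (static_cast<uint64_t>(a) << 32) | b;
    h ^= h >> 33;
    h *= 0xff51afd7ed558ccdULL;
    h ^= h >> 33;
    h *= 0xc4ceb9fe1a85ec53ULL;
    h ^= h >> 33;
    return h;
}
```

The finalizer is a bijection on 64-bit words, so distinct input pairs always produce distinct outputs; truncate the result if a smaller hash is needed.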
Say, if I have a byte array of a various length and a pass-phrase, what is the quickest way to encrypt it in a platform-independent way?
PS. I can make a SHA1 digest of the pass-phrase, but how do I apply it to the byte array? Doing a simple repeated XOR makes it too obvious.
PS2. Sorry, crypto guys, if I'm asking too obvious stuff...
A hash (like SHA-1) creates a one-way result; you cannot decrypt a hash. XORing the data is not secure by any means; don't do that.
If you need to be able to decrypt the data, then I suggest using something like Twofish, which is a symmetric-key block cipher and is not restricted by licensing or patents (thus you can find platform-independent reference code).
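To see why repeated-key XOR is weak: encrypting two messages with the same key and XORing the ciphertexts cancels the key completely, leaving the XOR of the two plaintexts. A small demonstration (the function name is mine):

```cpp
#include <cstddef>
#include <string>

// XOR data with a repeating key -- shown here only to demonstrate
// why this is NOT secure encryption.
std::string xor_with_key(const std::string& data, const std::string& key) {
    std::string out = data;
    for (std::size_t i = 0; i < out.size(); ++i)
        out[i] ^= key[i % key.size()];
    return out;
}
```

For two equal-length plaintexts p1 and p2 encrypted with the same key, c1 XOR c2 equals p1 XOR p2: the key drops out entirely, so an attacker learns relationships between the plaintexts without ever recovering the key.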
Hello all,
I need to encrypt text. What is the best encryption to use programmatically?
In general, I have an input file with a string that I need to encrypt; the application then reads the file and decrypts it for the application flow.
This is in C++.
The strongest encryption is to use a one-time pad (with XOR for example). The one time pad algorithm (unlike most other commonly used algorithms) is provably secure when used correctly.
One serious problem with this algorithm is that the distribution of the one-time pad must be done securely and this is often impractical. If it were possible to transmit the one time pad securely then it would typically also be possible to send the message securely using the same channel.
In situations where it is not possible to send information securely via another channel, public key cryptography is used. Generally the strength of these algorithms increases as the key length increases, unless some critical weakness is found in the algorithm. RSA is a commonly used public key algorithm.
To get strong encryption with public key cryptography the keys tend to be large (thousands of bits is not uncommon) and the algorithms are slow to compute. An alternative is to use a symmetric key algorithm instead. These can often get the same strength encryption with shorter keys and can be faster to encrypt and decrypt. Like one-time-pads this also has the problem of key distribution, but this time the key is very short so it is more feasible to be able to transfer it securely. An example of a commonly used symmetric key algorithm is AES.
The one-time pad is the strongest, but you are probably looking for something you can use easily in your application. Check this page to learn about the strength of algorithms: http://security.resist.ca/crypt.shtml. And here you have a C++ library, Crypto++ (the link points to a benchmark comparing the performance of different algorithms): http://www.cryptopp.com/benchmarks.html
The answer depends on what you mean by "strong encryption".
When cryptographers talk about strong encryption modes, they usually expect them to have at least two properties:
Confidentiality: it is not possible to learn any information about the plaintext given the ciphertext (with the possible exception of the plaintext length).
Integrity: it must not be possible for an adversary to modify the ciphertext without the receiver of the message noticing the modification.
When cryptographers call a cryptosystem "provably secure under some assumption", they typically mean that the cryptosystem is secure against chosen-ciphertext attacks as long as the assumption (e.g. that there is no efficient algorithm for some well-known problem) holds.
In particular, some of the other answers claim that the one-time pad is the most secure algorithm. However, the one-time pad alone does not provide any integrity: it is easy to modify a ciphertext without the receiver noticing the modification. That means the one-time pad only satisfies a rather weak security notion called "perfect secrecy". Nowadays it is quite misleading to call the one-time pad "provably secure" without mentioning that this only holds under a security model that does not include message integrity.
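The missing integrity is easy to demonstrate: flipping any bit of a one-time-pad ciphertext flips exactly the same bit of the decrypted plaintext, so an attacker can make targeted changes without knowing the key. A sketch (the function name is mine):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// One-time pad: encryption and decryption are the same XOR operation.
// Assumes pad.size() >= data.size() and that the pad is used only once.
std::vector<uint8_t> otp_xor(std::vector<uint8_t> data,
                             const std::vector<uint8_t>& pad) {
    for (std::size_t i = 0; i < data.size(); ++i)
        data[i] ^= pad[i];
    return data;
}
```

If an attacker XORs the ciphertext with a chosen delta, the receiver decrypts it to plaintext XOR delta and has no way to tell anything was changed, which is exactly the lack of integrity described above.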
To select a strong encryption mode, one might also look at practical aspects: e.g., how much cryptanalysis has gone into an encryption mode, or how well the cryptographic library that implements the algorithm has been analyzed. With that in mind, selecting a well-known cryptographic library and properly encrypting with AES, authenticating with HMAC, is going to be close to optimal.