HMAC with Cryptography API C/C++ - c++

I'm using the Microsoft example from http://msdn.microsoft.com/en-us/library/aa382379(v=vs.85).aspx. It compiles fine and appears to work, returning the result:
48 f2 57 38 29 29 43 16 fd f4 db 58 31 e1 0c 74 48 8e d4 e2
However, when I run the same calculation using the C# code below, I get a different hash:
4C-2D-E3-61-BA-89-58-55-8D-E3-D0-49-ED-1F-B5-C1-15-65-6E-65
HMAC hashMaker = new HMACSHA1(new byte[]{0x70,0x61,0x73,0x73,0x77,0x6F,0x72,0x64});
byte[] hash = hashMaker.ComputeHash(new byte[] {0x6D,0x65,0x73,0x73,0x61,0x67,0x65});
string hashStr = BitConverter.ToString(hash);
I fear the example C code is incorrect in some way. What is going on here?

The posted C# code and the linked (but not posted) C code do different things. The output will therefore be different as well.
The WinCrypt code performs the following:
Create a SHA1 hash of the key bytes
Derive a session key from the resulting hash using RC4
Initialize an HMAC_SHA1 digest using the derived key
Perform the HMAC CryptHashData with the HMAC
Request the resulting hash bytes
The C# code performs the following:
Create an HMAC_SHA1 using the key bytes as the actual key (no derivation)
Perform the HMAC via ComputeHash, returning the resulting hash digest
In other words, they're different because they're doing different things. Which one is "right" depends on what you're trying to do (which was not mentioned in the question). For reference, the WinCrypt flow is sketched below, followed by an OpenSSL equivalent to the C# code.
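WinCrypt flow (condensed sketch)
This is a hedged, condensed sketch of what the linked MSDN sample does, with error handling and cleanup omitted; the CALG_* choices mirror the steps listed above:
#include <windows.h>
#include <wincrypt.h>
HCRYPTPROV hProv = 0;
HCRYPTHASH hKeyHash = 0, hHmac = 0;
HCRYPTKEY  hKey = 0;
BYTE key[]  = { 0x70,0x61,0x73,0x73,0x77,0x6F,0x72,0x64 };   // "password"
BYTE data[] = { 0x6D,0x65,0x73,0x73,0x61,0x67,0x65 };        // "message"
CryptAcquireContext(&hProv, NULL, NULL, PROV_RSA_FULL, CRYPT_VERIFYCONTEXT);
CryptCreateHash(hProv, CALG_SHA1, 0, 0, &hKeyHash);           // 1. SHA1 hash of the key bytes
CryptHashData(hKeyHash, key, sizeof(key), 0);
CryptDeriveKey(hProv, CALG_RC4, hKeyHash, 0, &hKey);          // 2. derive an RC4 session key from that hash
CryptCreateHash(hProv, CALG_HMAC, hKey, 0, &hHmac);           // 3. HMAC hash object keyed with the *derived* key
HMAC_INFO hmacInfo = { CALG_SHA1 };
CryptSetHashParam(hHmac, HP_HMAC_INFO, (BYTE*)&hmacInfo, 0);
CryptHashData(hHmac, data, sizeof(data), 0);                  // 4. hash the message
BYTE digest[20];
DWORD cb = sizeof(digest);
CryptGetHashParam(hHmac, HP_HASHVAL, digest, &cb, 0);         // 5. fetch the resulting 20-byte digest
The point is only to make the key-derivation step visible; CryptDestroyHash, CryptDestroyKey and CryptReleaseContext calls are left out.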
OpenSSL Equivalent to the C# code
#include <openssl/hmac.h>
#include <openssl/evp.h>
unsigned char key[] = { 0x70,0x61,0x73,0x73,0x77,0x6F,0x72,0x64 };   // "password"
unsigned char bytes[] = { 0x6D,0x65,0x73,0x73,0x61,0x67,0x65 };      // "message"
unsigned char md[32] = {0};
unsigned int md_len = (unsigned int)sizeof(md);
// Key the HMAC with the raw key bytes, exactly as HMACSHA1 does in the C# code
// (OpenSSL 1.0.x API; 1.1.0 and later allocate the context with HMAC_CTX_new instead).
HMAC_CTX ctx;
HMAC_CTX_init(&ctx);
HMAC_Init(&ctx, key, (int)sizeof(key), EVP_sha1());
HMAC_Update(&ctx, bytes, sizeof(bytes));
HMAC_Final(&ctx, md, &md_len);
The resulting digest in md matches the C# output (omitted here, but take my word for it or test it yourself).

I believe it is not possible to perform an HMAC with a non-derived key using the CryptoAPI CryptCreateHash.
However, it is possible to perform HMAC-SHA1/SHA256 using the BCrypt family of functions in the Cryptography API: Next Generation (CNG). See this question for an example; a short sketch follows.
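For illustration, a minimal CNG sketch (HMAC-SHA1 keyed with the raw key bytes, no derivation; error checks trimmed, so treat it as a starting point rather than production code):
#include <windows.h>
#include <bcrypt.h>
// link with bcrypt.lib
BCRYPT_ALG_HANDLE hAlg = NULL;
BCRYPT_HASH_HANDLE hHash = NULL;
UCHAR key[]  = { 0x70,0x61,0x73,0x73,0x77,0x6F,0x72,0x64 };  // "password"
UCHAR data[] = { 0x6D,0x65,0x73,0x73,0x61,0x67,0x65 };       // "message"
UCHAR mac[20] = {0};                                          // HMAC-SHA1 output is 20 bytes
// The HMAC flag tells CNG to key the hash with the bytes passed to BCryptCreateHash.
BCryptOpenAlgorithmProvider(&hAlg, BCRYPT_SHA1_ALGORITHM, NULL, BCRYPT_ALG_HANDLE_HMAC_FLAG);
// On Windows 7 and later the hash-object buffer can be NULL and CNG allocates it.
BCryptCreateHash(hAlg, &hHash, NULL, 0, key, sizeof(key), 0);
BCryptHashData(hHash, data, sizeof(data), 0);
BCryptFinishHash(hHash, mac, sizeof(mac), 0);
BCryptDestroyHash(hHash);
BCryptCloseAlgorithmProvider(hAlg, 0);
This keys the HMAC directly with the supplied bytes, so the result should match the posted C# HMACSHA1 output rather than the WinCrypt derive-then-HMAC flow.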

Related

What is the default IV when encrypting with aes_256_cbc cipher?

I've generated a random 256-bit symmetric key (stored in a file) to encrypt some data with the OpenSSL command line; I later need to decrypt that data programmatically using the OpenSSL library. I'm not having success, and I think the problem might be the initialization vector I'm using (or not using).
I encrypt the data using this command:
/usr/bin/openssl enc -aes-256-cbc -salt -in input_filename -out output_filename -pass file:keyfile
I'm using the following call to initialize the decrypting of the data:
EVP_DecryptInit_ex(ctx, EVP_aes_256_cbc(), nullptr, keyfile.data(), nullptr)
keyfile is a vector<unsigned char> that holds the 32 bytes of the key. My question is regarding that last parameter. It's supposed to be an initialization vector to the cipher algorithm. I didn't specify an IV when encrypting, so some default must have been used.
Does passing nullptr for that parameter mean "use the default"? Is the default null, and nothing is added to the first cipher block?
I should mention that I'm able to decrypt from the command line without supplying an IV.
What is the default IV when encrypting with EVP_aes_256_cbc() [sic] cipher...
Does passing nullptr for that parameter mean "use the default"? Is the default null, and nothing is added to the first cipher block?
There is none. You have to supply it. For completeness, the IV should be non-predictable.
Non-predictable is slightly different from both unique and random. For example, SSLv3 used to use the last block of ciphertext for the next block's IV. It was unique, but it was neither random nor non-predictable, and it made SSLv3 vulnerable to chosen-plaintext attacks.
Other libraries do clever things like provide a null vector (a string of 0's). Their attackers thank them for it. Also see Why is using a Non-Random IV with CBC Mode a vulnerability? on Stack Overflow and Is AES in CBC mode secure if a known and/or fixed IV is used? on Crypto.SE.
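If you control the encryption side, the usual approach is to generate a fresh random IV for every message and send it along with the ciphertext (the IV does not need to be secret). A minimal OpenSSL sketch:
#include <openssl/rand.h>
unsigned char iv[16];                  // AES block size; CBC IVs are one block long
if (RAND_bytes(iv, sizeof(iv)) != 1) {
    // handle the error; RAND_bytes reports failure if the CSPRNG is not seeded
}
// pass iv as the last argument to EVP_EncryptInit_ex / EVP_DecryptInit_ex,
// and store or transmit it together with the ciphertext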
/usr/bin/openssl enc -aes-256-cbc...
I should mention that I'm able to decrypt from the command line without supplying an IV.
OpenSSL uses an internal mashup/key derivation function which takes the password and derives a key and IV from it. It's called EVP_BytesToKey, and you can read about it in the man pages. The man pages also say:
If the total key and IV length is less than the digest length and MD5 is used then the derivation algorithm is compatible with PKCS#5 v1.5 otherwise a non standard extension is used to derive the extra data.
There are plenty of examples of EVP_BytesToKey once you know what to look for. Openssl password to key is one in C. How to decrypt file in Java encrypted with openssl command using AES is one in Java.
EVP_DecryptInit_ex(ctx, EVP_aes_256_cbc(), nullptr, keyfile.data(), nullptr)
I didn't specify an IV when encrypting, so some default must have been used.
Check your return values. A call should have failed somewhere along the path. Maybe not at EVP_DecryptInit_ex, but surely before EVP_DecryptFinal.
If it's not failing, then please file a bug report.
EVP_DecryptInit_ex is an interface to the AES decryption primitive. That is just one piece of what you need to decrypt the OpenSSL encryption format. The OpenSSL encryption format is not well documented, but you can work it backwards from the code and some of the docs. The key and IV computation is explained in the EVP_BytesToKey documentation:
The key and IV is derived by concatenating D_1, D_2, etc. until enough data is available for the key and IV. D_i is defined as:
D_i = HASH^count(D_(i-1) || data || salt)
where || denotes concatenation, D_0 is empty, HASH is the digest algorithm in use, HASH^1(data) is simply HASH(data), HASH^2(data) is HASH(HASH(data)) and so on.
The initial bytes are used for the key and the subsequent bytes for the IV.
"HASH" here is MD5. In practice, this means you compute hashes like this:
Hash0 = ''
Hash1 = MD5(Hash0 + Password + Salt)
Hash2 = MD5(Hash1 + Password + Salt)
Hash3 = MD5(Hash2 + Password + Salt)
...
Then you pull off the bytes you need for the key, and then pull the bytes you need for the IV. For AES-128 that means Hash1 is the key and Hash2 is the IV. For AES-256, the key is Hash1+Hash2 (concatenated, not added) and Hash3 is the IV.
You need to strip off the leading Salted__ magic (8 bytes), read the 8-byte salt that follows it, then use the salt to compute the key and IV. Then you'll have the pieces to feed into EVP_DecryptInit_ex, as in the sketch below.
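A minimal sketch of that path, assuming the whole encrypted file is already in memory and the legacy openssl enc defaults described above (MD5, one round; newer OpenSSL releases default to SHA-256 and can use PBKDF2, so adjust the digest to match how the file was produced):
#include <openssl/evp.h>
#include <cstring>
#include <string>
// Derive the key/IV from the password and the salt embedded in the file header,
// then initialize the CBC decryptor. Returns false on any mismatch.
bool init_decrypt(EVP_CIPHER_CTX* ctx,
                  const unsigned char* file, size_t fileLen,
                  const std::string& password)
{
    if (fileLen < 16 || std::memcmp(file, "Salted__", 8) != 0)
        return false;                       // no "Salted__" magic header
    const unsigned char* salt = file + 8;   // 8-byte salt follows the magic
    unsigned char key[32], iv[16];
    // count = 1 and EVP_md5() match the legacy `openssl enc` behaviour this answer describes.
    if (EVP_BytesToKey(EVP_aes_256_cbc(), EVP_md5(), salt,
                       (const unsigned char*)password.data(),
                       (int)password.size(), 1, key, iv) != 32)
        return false;
    // The ciphertext proper starts at file + 16.
    return EVP_DecryptInit_ex(ctx, EVP_aes_256_cbc(), nullptr, key, iv) == 1;
}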
Since you're doing this in C++, though, you can probably just dig through the enc code and reuse it (after verifying its license is compatible with your use).
Note that the OpenSSL IV is randomly generated, since it's the output of a hashing process involving a random salt. The security of the first block doesn't depend on the IV being random per se; it just requires that a particular IV+Key pair never be repeated. The OpenSSL process ensures that as long as the random salt is never repeated.
It is possible that using MD5 this way entangles the key and IV in a way that leaks information, but I've never seen an analysis that claims that. If you have to use the OpenSSL format, I wouldn't have any hesitations over its IV generation. The big problems with the OpenSSL format are that it's fast to brute force (4 rounds of MD5 is not enough stretching) and that it lacks any authentication.

Microsoft CryptoAPI: how to convert PUBLICKEYBLOB to DER/PEM?

I have a generated RSA key pair stored as PRIVATEKEYBLOB and PUBLICKEYBLOB, and I need to be able to convert these keys to DER or PEM formats so I can use them in PHP or Python. I figured out that I could use the CryptEncodeObject function to convert my PRIVATEKEYBLOB to DER. In order to do that I need to use the PKCS_RSA_PRIVATE_KEY encoding flag. But I couldn't find any clue on how to convert a PUBLICKEYBLOB to DER.
Here is my code for the PRIVATEKEYBLOB conversion:
LPCSTR type = PKCS_RSA_PRIVATE_KEY;
DWORD encd = X509_ASN_ENCODING | PKCS_7_ASN_ENCODING;
DWORD dlen = 0;
if(!CryptEncodeObject(encd, type, key, NULL, &dlen))
{ LOG_ERROR(); return false; }
// Buffer allocation (der variable)
if(!CryptEncodeObject(encd, type, key, der, &dlen))
{ LOG_ERROR(); return false; }
I test my keys by comparing them to the output of openssl tool:
openssl rsa -pubin -inform MS\ PUBLICKEYBLOB -in pub.ms -outform DER -out pub.der
openssl rsa -inform MS\ PRIVATEKEYBLOB -in pri.ms -outform DER -out pri.der
ADDED: I tried RSA_CSP_PUBLICKEYBLOB with X509_ASN_ENCODING, but the result is different from the output of the openssl tool, and the key import fails. The openssl-exported DER is 25 bytes longer, and only the first 3 bytes are equal in both keys. Here is the picture of the key comparison:
If we look closely at this picture, we can see that openssl's version of the key has some kind of additional 24-byte header after the 3rd byte. I haven't figured out what it is yet, but if I concatenate this hardcoded header with the output I get from CryptEncodeObject with RSA_CSP_PUBLICKEYBLOB, it all works fine. Not sure if that header is always the same or not, though.
Use RSA_CSP_PUBLICKEYBLOB as documented in
https://msdn.microsoft.com/en-us/library/windows/desktop/aa378145(v=vs.85).aspx
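Following that documentation, the call pattern is the same as the PRIVATEKEYBLOB conversion in the question, just with the public type string. A hedged sketch (pubKeyBlob is assumed to point at the PUBLICKEYBLOB bytes, and LOG_ERROR is the question's own macro); note the result is a bare PKCS#1 RSAPublicKey, which is why it still lacks the SubjectPublicKeyInfo wrapping the questioner observed in the openssl output:
#include <vector>
LPCSTR type = RSA_CSP_PUBLICKEYBLOB;   // public-key counterpart of PKCS_RSA_PRIVATE_KEY
DWORD encd = X509_ASN_ENCODING;
DWORD dlen = 0;
// First call: query the required buffer size.
if(!CryptEncodeObject(encd, type, pubKeyBlob, NULL, &dlen))
{ LOG_ERROR(); return false; }
std::vector<BYTE> der(dlen);
// Second call: encode the blob into the buffer.
if(!CryptEncodeObject(encd, type, pubKeyBlob, der.data(), &dlen))
{ LOG_ERROR(); return false; }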
I struggled with PUBLICKEYBLOB -> PEM/DER formats until I found a post about pulling one from a smart card and converting it. The gist of it is, MS PUBLICKEYBLOB needs to have the first 32 bytes removed, then reverse the order and add 02 03 01 00 01 to the end. That will give you DER format. You can then base64 encode to get PEM and then add the requisite begin/end public key lines.
You can refer to the original post for background.

Generate SHA256 in c++

I need to generate a SHA-256 of some data. I found this example, which is a very good one. Now my question is: can I generate a SHA-256 using my own key?
EDIT:
First of all, sorry for the wrong question. I don't mean changing the key used to generate SHA-256. What I really need is to convert the following Java code to C++:
public static String calculateHMAC(String data, String key) throws Exception {
    String result;
    try {
        // get an hmac_sha2 key from the raw key bytes
        SecretKeySpec signingKey = new SecretKeySpec(key.getBytes(), HMAC_SHA2_ALGORITHM);
        // get an hmac_sha256 Mac instance and initialize with the signing key
        Mac sha256_HMAC = Mac.getInstance(HMAC_SHA2_ALGORITHM);
        sha256_HMAC.init(signingKey);
        // compute the hmac on input data bytes
        byte[] rawHmac = sha256_HMAC.doFinal(data.getBytes());
        // base64-encode the hmac
        StringBuilder sb = new StringBuilder();
        char[] charArray = Base64.encode(rawHmac);
        for (char a : charArray) {
            sb.append(a);
        }
        result = sb.toString();
    }
    catch (Exception e) {
        throw new SignatureException("Failed to generate HMAC : " + e.getMessage());
    }
    return result;
}
Edit (as OP changed the question):
There are lots of C++ libraries available for cryptographic operations:
OpenSSL (My personal choice, we use this library in our industry products).
Crypto++.
Here's an example of Generate sha256 with OpenSSL and C++.
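Since the Java snippet is really computing HMAC-SHA256 and then Base64-encoding it, here is a rough OpenSSL/C++ equivalent, sketched under the assumption that data and key are plain std::strings as in the Java version:
#include <openssl/hmac.h>
#include <openssl/evp.h>
#include <string>
std::string calculateHMAC(const std::string& data, const std::string& key)
{
    unsigned char mac[EVP_MAX_MD_SIZE];
    unsigned int macLen = 0;
    // One-shot HMAC-SHA256 over the raw data bytes, keyed with the raw key bytes.
    HMAC(EVP_sha256(),
         key.data(), (int)key.size(),
         (const unsigned char*)data.data(), data.size(),
         mac, &macLen);
    // Base64-encode the MAC; EVP_EncodeBlock writes 4 output bytes per 3 input bytes
    // plus a terminating NUL.
    char b64[4 * ((EVP_MAX_MD_SIZE + 2) / 3) + 1] = {0};
    int b64Len = EVP_EncodeBlock((unsigned char*)b64, mac, (int)macLen);
    return std::string(b64, b64Len);
}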
OLD ANSWER:
SHA-256 is a member of the SHA-2 family of cryptographic hash functions, and it generates a 256-bit (32-byte) hash from an input message.
It's not an "encryption" mechanism, which means you cannot regenerate the message from the hash (also known as the message digest, or simply the digest).
Therefore, we do not need any "keys" to generate a SHA-256 message digest.
Moreover, hash functions are considered practically impossible to invert, that is, to recreate the input data from the hash value (message digest) alone. So you can't "decrypt" a hash back to its input message; reversing is simply not possible for hashing. For example,
SHA256(plainText) -> digest
Then there is NO mechanism like inverseSHA256 which can do the following,
// we cannot do the following
inverseSHA256(digest) -> plainText
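To make the keyless case concrete, a minimal OpenSSL sketch of plain (unkeyed) SHA-256 might look like this:
#include <openssl/sha.h>
#include <string>
std::string message = "message";
unsigned char digest[SHA256_DIGEST_LENGTH];   // 32 bytes
// One-shot SHA-256 of the message bytes; no key is involved anywhere.
SHA256((const unsigned char*)message.data(), message.size(), digest);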
I would recommend the free Crypto++ library. Here's a sample for HMAC.

Generate signature using private key with OpenSSL API

I'm trying to implement the OpenSSL equivalent of:
openssl dgst -sha1 -binary -out digest.txt what.txt
openssl rsautl -sign -inkey key.pem -in what.txt -out signature.txt
I have my 7-line private key (304 chars):
const char* k = "-----BEGIN... \n... \n... END RSA.."
The code I use is this one:
BIO* bio = BIO_new_mem_buf((void*)k, (int)strlen(k));
RSA* privateKey = PEM_read_bio_RSAPrivateKey(bio, NULL, 0, NULL);
std::string appName = "hello";
unsigned char SHA[20];
SHA1((const unsigned char*)appName.c_str(), appName.size(), SHA);
Then I sign using the following code, passing the SHA and its size as buffer and bufferSize (which should always be 20 in this case, and it is correct), while the signature size is 32:
unsigned int privateKeySize = RSA_size(privateKey);
*signature = new unsigned char[ privateKeySize ];
RSA_sign(NID_sha1, buffer, bufferSize, *signature, signatureSize, privateKey);
RSA_sign returns 0, and I can't find any error code.
The SHA[] array content is the same as the file digest.txt, but the signature is totally wrong. I can't find any example of this. Do you have any idea how this should work, or can you point me in the right direction?
Thank You.
First of all, the digest is never treated as hexadecimals in a signature, which is what you seem to assume by the extension of digest.txt. The output of the dgst command within OpenSSL is also binary (although the manual pages on my system seem to indicate otherwise - openssl's documentation isn't known to be very precise). This is only a superficial problem as the rest of your code does seem to treat the SHA-1 hash as binary.
What's probably hurting you is that, although openssl rsautl -sign seems to pad the data, it doesn't seem to surround it with the structure that indicates the hash algorithm used. You need to append this structure yourself. This is probably a legacy from OpenSSL's support for SSLv3 RSA authentication where this structure may be absent.
A quick and dirty fix that is actually present in the PKCS#1 standard is to prefix the hash before signing. In the case of SHA-1 you should prefix the hash with the binary value of the following hex representation:
30 21 30 09 06 05 2b 0e 03 02 1a 05 00 04 14
This should be easy to do in your shell language of choice; even the Windows command line can do it through the COPY command. A C/C++ sketch of the same idea follows.
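If you would rather stay inside your existing OpenSSL code, here is a hedged sketch of the rsautl-style path (prepend the SHA-1 DigestInfo bytes, then apply the raw PKCS#1 v1.5 private-key operation); it reuses privateKey and the 20-byte SHA buffer from the question's code:
#include <openssl/rsa.h>
#include <openssl/err.h>
#include <cstring>
#include <vector>
// PKCS#1 DigestInfo prefix for SHA-1 (the hex bytes quoted above).
static const unsigned char kSha1DigestInfo[] = {
    0x30,0x21,0x30,0x09,0x06,0x05,0x2b,0x0e,0x03,0x02,0x1a,0x05,0x00,0x04,0x14
};
unsigned char digestInfo[sizeof(kSha1DigestInfo) + 20];
std::memcpy(digestInfo, kSha1DigestInfo, sizeof(kSha1DigestInfo));
std::memcpy(digestInfo + sizeof(kSha1DigestInfo), SHA, 20);   // SHA: the 20-byte digest from SHA1()
std::vector<unsigned char> signature(RSA_size(privateKey));
// RSA_PKCS1_PADDING applies the same type-1 padding that `openssl rsautl -sign` uses.
int sigLen = RSA_private_encrypt((int)sizeof(digestInfo), digestInfo,
                                 signature.data(), privateKey, RSA_PKCS1_PADDING);
if (sigLen < 0) {
    // inspect ERR_get_error() / ERR_error_string() for the reason
}
For what it's worth, RSA_sign(NID_sha1, ...) builds this same DigestInfo internally, so once the key loads correctly both paths should produce an identical signature; if RSA_sign returns 0, check ERR_get_error() and make sure PEM_read_bio_RSAPrivateKey did not return NULL.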

PBKDF in Crypto++

I have following code in C#
PasswordDeriveBytes DerivedPassword = new PasswordDeriveBytes(Password, SaltValueBytes,
HashAlgorithm, PasswordIterations);
byte[] KeyBytes = DerivedPassword.GetBytes(32);
.NET's PasswordDeriveBytes uses PBKDF1. I am using the "SHA1" hashing algorithm to generate a 32-byte key.
Kindly tell me how I can achieve this in Crypto++. In Crypto++, PBKDF2 generates a 20-byte key. How can I produce the 32-byte key that the C# code generates?
You can't, because PasswordDeriveBytes is not standard PBKDF1 (PBKDF1 can only produce as many bytes as the hash outputs, i.e. 20 for SHA-1, and PasswordDeriveBytes extends past that in a proprietary way), while the Crypto++ implementation is standard. Use the PBKDF2 implementation instead (Rfc2898DeriveBytes in .NET); a sketch follows.
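If both sides can switch to PBKDF2, a minimal Crypto++ sketch for the 32-byte key might look like this (the password, salt and iteration count below are hypothetical placeholders standing in for the C# Password, SaltValueBytes and PasswordIterations values):
#include <cryptopp/pwdbased.h>
#include <cryptopp/sha.h>
#include <string>
// Hypothetical inputs mirroring the C# variables.
std::string password = "secret";
const unsigned char salt[] = { 0x01,0x02,0x03,0x04,0x05,0x06,0x07,0x08 };
const unsigned int iterations = 1000;
unsigned char key[32];                 // the same 32 bytes GetBytes(32) would return in C#
CryptoPP::PKCS5_PBKDF2_HMAC<CryptoPP::SHA1> kdf;
kdf.DeriveKey(key, sizeof(key),
              0,                                                  // purpose byte, ignored by PBKDF2
              (const unsigned char*)password.data(), password.size(),
              salt, sizeof(salt),
              iterations);
On the .NET side this corresponds to Rfc2898DeriveBytes with the same salt and iteration count; the result will not match the PasswordDeriveBytes output, which is the point of switching both sides.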