Simple AES encryption using WinAPI - C++

I need to do simple single-block AES encryption / decryption in my Qt / C++ application. This is a "keep the honest people honest" implementation, so just a basic encrypt(key, data) is all I need; I'm not worried about initialization vectors, etc. My input and key will always be exactly 16 bytes.
I'd really like to avoid another dependency to compile / link / ship with my application, so I'm trying to use what's available on each platform. On the Mac, this was a one-liner to CCCrypt. On Windows, I'm getting lost in the API from WinCrypt.h. Their example of encrypting a file is almost 600 lines long. Seriously?
I'm looking at CryptEncrypt, but I'm falling down the rabbit hole of dependencies you have to create before you can call it.
Can anyone provide a simple example of doing AES encryption using the Windows API? Surely there's a way to do this in a line or two. Assume you already have a 128-bit key and 128 bits of data to encrypt.

Here's the best I've been able to come up with. Suggestions for improvement are welcome!
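The code below references a couple of helpers that aren't shown. For context, they would look something like this; the layout follows the documented PLAINTEXTKEYBLOB convention (a BLOBHEADER, then the key length, then the key bytes), though the names kAesBytes128 and AesBlob128 are this code's own:
const size_t kAesBytes128 = 16;  // AES-128 key / block size in bytes

// Key blob in the layout CryptImportKey expects for a PLAINTEXTKEYBLOB.
struct AesBlob128 {
    BLOBHEADER header;
    DWORD key_length;
    BYTE key_bytes[kAesBytes128];
};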
static void encrypt(const QByteArray &data,
                    const QByteArray &key,
                    QByteArray *encrypted) {
    // Create the crypto provider context.
    HCRYPTPROV hProvider = NULL;
    if (!CryptAcquireContext(&hProvider,
                             NULL,  // pszContainer = no named container
                             NULL,  // pszProvider = default provider
                             PROV_RSA_AES,
                             CRYPT_VERIFYCONTEXT)) {
        throw std::runtime_error("Unable to create crypto provider context.");
    }
    // Construct the blob necessary for the key import.
    AesBlob128 aes_blob;
    aes_blob.header.bType = PLAINTEXTKEYBLOB;
    aes_blob.header.bVersion = CUR_BLOB_VERSION;
    aes_blob.header.reserved = 0;
    aes_blob.header.aiKeyAlg = CALG_AES_128;
    aes_blob.key_length = kAesBytes128;
    memcpy(aes_blob.key_bytes, key.constData(), kAesBytes128);
    // Create the crypto key handle that Windows needs.
    HCRYPTKEY hKey = NULL;
    if (!CryptImportKey(hProvider,
                        reinterpret_cast<BYTE*>(&aes_blob),
                        sizeof(AesBlob128),
                        NULL,  // hPubKey = not encrypted
                        0,     // dwFlags
                        &hKey)) {
        CryptReleaseContext(hProvider, 0);
        throw std::runtime_error("Unable to create crypto key.");
    }
    // CryptEncrypt uses the *same* buffer for both the input and the output,
    // so we copy the data to be encrypted into the output array. Also, with
    // Final = TRUE the provider appends a full block of PKCS padding, so the
    // output buffer must be twice the block size; we chop off the excess
    // after we are done.
    encrypted->clear();
    encrypted->append(data);
    encrypted->resize(kAesBytes128 * 2);
    // On input, the number of bytes to be encrypted; on output, the number
    // of bytes written to the resulting encrypted data.
    DWORD length = kAesBytes128;
    if (!CryptEncrypt(hKey,
                      NULL,  // hHash = no hash
                      TRUE,  // Final
                      0,     // dwFlags
                      reinterpret_cast<BYTE*>(encrypted->data()),
                      &length,
                      encrypted->length())) {
        CryptDestroyKey(hKey);
        CryptReleaseContext(hProvider, 0);
        throw std::runtime_error("Encryption failed");
    }
    // See comment above: drop the padding block.
    encrypted->chop(length - kAesBytes128);
    CryptDestroyKey(hKey);
    CryptReleaseContext(hProvider, 0);
}
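For what it's worth, a minimal usage sketch of the function above (the key and data literals are illustrative; both must be exactly 16 bytes):
void example() {
    QByteArray key("0123456789abcdef");   // 16-byte key
    QByteArray data("block to encrypt");  // 16-byte plaintext
    QByteArray encrypted;
    encrypt(data, key, &encrypted);
    // encrypted now holds the 16-byte AES-128 ciphertext block.
}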

Related

How to generate a secure random STRING (Key and IV) for AES-256-CBC WinAPI? [C/C++]

I need to generate (in C/C++) a secure random string (32 bytes) to use as the key and another (16 bytes) to use as the IV for AES-256-CBC encryption using WinAPI. The problem is that I need to save the generated string in a text file in order to manually test the decryption with OpenSSL in a terminal.
What can I use to generate the secure random string other than CryptGenRandom? The problem with CryptGenRandom is that it generates a random byte sequence that I can't save/use as OpenSSL input because it's not ASCII text:
openssl aes-256-cbc -d -K "..." -iv ".." -in encrypted.txt
Is there an alternative?
This is my working code:
// Handles for the CSP and key. (Assumed defined elsewhere: DEFAULT_AES_KEY_SIZE = 32,
// DEFAULT_IV_SIZE = 16, BUFFER_FOR_PLAINTEXT = headroom for padding, and an
// AES256KEYBLOB struct of the form { BLOBHEADER bhHdr; DWORD dwKeySize; CHAR szBytes[33]; }.)
HCRYPTPROV hProv = NULL;
HCRYPTKEY hKey = NULL;
BYTE *szKey = (BYTE*)calloc(DEFAULT_AES_KEY_SIZE + 1, sizeof(BYTE));
BYTE *szIV = (BYTE*)calloc(DEFAULT_IV_SIZE + 1, sizeof(BYTE));
char *ciphertext = 0;
DWORD dwPlainSize = lstrlenA(*plaintext), dwBufSize = 0;
AES256KEYBLOB AESBlob;
memset(&AESBlob, 0, sizeof(AESBlob));

// Initialize key and IV (hard-coded for testing).
StrCpyA((LPSTR)szKey, "00112233445566778899001122334455");
StrCpyA((LPSTR)szIV, "4455667788990011");

// Generate key & IV.
//if (!CryptGenRandom(hProv, DEFAULT_AES_KEY_SIZE, szKey)) { goto error; }
//if (!CryptGenRandom(hProv, DEFAULT_IV_SIZE, szIV)) { goto error; }

// Blob data for the CryptImportKey() function (includes the key, version, and so on).
AESBlob.bhHdr.bType = PLAINTEXTKEYBLOB;
AESBlob.bhHdr.bVersion = CUR_BLOB_VERSION;
AESBlob.bhHdr.reserved = 0;
AESBlob.bhHdr.aiKeyAlg = CALG_AES_256;
AESBlob.dwKeySize = DEFAULT_AES_KEY_SIZE;
StrCpyA((LPSTR)AESBlob.szBytes, (LPCSTR)szKey);

// Create a cryptographic service provider (CSP).
if (!CryptAcquireContextA(&hProv, NULL, MS_ENH_RSA_AES_PROV_A, PROV_RSA_AES, CRYPT_VERIFYCONTEXT)) { goto error; }

// Import the key into the CSP and set the IV.
if (!CryptImportKey(hProv, (BYTE*)&AESBlob, sizeof(AES256KEYBLOB), NULL, CRYPT_EXPORTABLE, &hKey)) { goto error; }
if (!CryptSetKeyParam(hKey, KP_IV, szIV, 0)) { goto error; }

// Ciphertext allocation.
dwBufSize = BUFFER_FOR_PLAINTEXT + dwPlainSize;
ciphertext = (char*)calloc(dwBufSize, sizeof(char));
memcpy_s(ciphertext, dwBufSize, *plaintext, dwPlainSize);

// Encryption.
if (!CryptEncrypt(hKey, NULL, TRUE, 0, (BYTE*)ciphertext, &dwPlainSize, dwBufSize)) { goto error; }
The key and IV are given to OpenSSL as hexadecimal. There's no need to generate visible ASCII characters only. You could have figured that out yourself just by pretending that the generated key and IV were "Hello". You get an error message that's pretty helpful:
$> openssl aes-256-cbc -d -K Hello -iv Hello
hex string is too short, padding with zero bytes to length
non-hex digit
invalid hex iv value
And hey, look at your code:
// Initialize key and IV (hard-coded for testing).
StrCpyA((LPSTR)szKey, "00112233445566778899001122334455");
StrCpyA((LPSTR)szIV, "4455667788990011");
These strings are not only ASCII text, they happen to be hex as well. It looks as if you just need to convert hex to bytes in your code, or convert bytes to a hex string if you have the bytes in code and want to generate the command line.
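A minimal sketch of the bytes-to-hex direction (the helper name BytesToHex is illustrative, not a WinAPI function):
#include <string>

// Convert raw bytes (e.g. from CryptGenRandom) to a lowercase hex string
// suitable for openssl's -K and -iv arguments.
std::string BytesToHex(const BYTE *bytes, size_t len)
{
    static const char digits[] = "0123456789abcdef";
    std::string hex;
    hex.reserve(len * 2);
    for (size_t i = 0; i < len; ++i) {
        hex += digits[bytes[i] >> 4];
        hex += digits[bytes[i] & 0x0F];
    }
    return hex;
}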

Will this digital signing code work with a Qualified Digital Certificate?

I was recently asked if our existing signing code will work with a European Qualified Digital Certificate in place of the RSA signing certificate we normally use. As far as I can tell it should, but I haven't found anything that actually confirms this theory.
Short of figuring out how to acquire a qualified certificate and actually testing it, I'm not sure how to answer this question definitively. Anybody happen to have experience with this?
The code in question is shown below. It's a Windows-based C++ application that is using Microsoft's Cryptographic API for signing.
int SignData(
    const std::string &data,    // Data to be signed
    const char *containerName,  // Name of key container to use
    std::string &signature)     // Returns the signature
{
    HCRYPTPROV hProv = NULL;
    HCRYPTHASH hHash = NULL;
    DWORD dwLength;
    int status = 0;
    // Attempt to open the key container as a LocalMachine key.
    if (CryptAcquireContext(
            &hProv,
            containerName,
            NULL,
            PROV_RSA_FULL,
            CRYPT_MACHINE_KEYSET | CRYPT_SILENT))
    {
        // Create a SHA-1 hash context.
        if (CryptCreateHash(hProv, CALG_SHA1, 0, 0, &hHash))
        {
            // Calculate the hash of the buffer.
            if (CryptHashData(hHash, (const BYTE *)data.data(), (DWORD)data.size(), 0))
            {
                // Determine the size of the signature and allocate memory.
                if (CryptSignHash(hHash, AT_SIGNATURE, 0, 0, NULL, &dwLength))
                {
                    signature.resize(dwLength);
                    // Sign the hash object.
                    if (!CryptSignHash(hHash, AT_SIGNATURE, 0, 0, (BYTE*)&signature[0], &dwLength))
                        status = HandleCryptError("CryptSignHash failed");
                }
                else
                    status = HandleCryptError("CryptSignHash failed");
            }
            else
                status = HandleCryptError("CryptHashData failed");
            CryptDestroyHash(hHash);
        }
        else
            status = HandleCryptError("CryptCreateHash failed");
        CryptReleaseContext(hProv, 0);
    }
    else
        status = HandleCryptError("CryptAcquireContext failed");
    return status;
}
The above code uses the default Microsoft software cryptographic provider, which doesn't meet the requirements of eIDAS.
If you replace the default Microsoft provider with an eIDAS-compliant provider, then this approach to performing the signature is fine. Under the hood, the compliant CSP will take care of the non-repudiation requirement of the key usage, for example by displaying a PIN popup, or, if the key is stored remotely on a server, by sending a 2FA notification to the key owner.
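In practice that substitution happens in the CryptAcquireContext call: pass the vendor CSP's name as the pszProvider argument instead of NULL. A sketch, where "Vendor Qualified CSP" is a placeholder for whatever name the compliant provider's middleware registers (note that CRYPT_SILENT may have to be dropped so the CSP is allowed to display its PIN dialog):
HCRYPTPROV hProv = NULL;
if (!CryptAcquireContext(
        &hProv,
        containerName,
        "Vendor Qualified CSP",  // pszProvider: the eIDAS-compliant CSP
        PROV_RSA_FULL,
        CRYPT_MACHINE_KEYSET))   // CRYPT_SILENT omitted to allow PIN UI
{
    // handle error
}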

First 16 bytes of AES-128 CFB-8 decryption are damaged

I've been working on a project recently that connects to a server using a particular protocol. So far so good, but when I came to decrypt the packets, I quickly noticed that something is not working properly.
The first 16 bytes of all packets are decrypted incorrectly. I have tried it with different libraries, but that does not work either. I work in C++ and have so far used Crypto++ and OpenSSL for the decryption, without success.
Under this Link you can find the protocol, here the decryption protocol Link, and here is my corresponding code:
OpenSSL:
void init() {
    unsigned char* sharedSecret = new unsigned char[AES_BLOCK_SIZE];
    std::generate(sharedSecret,
                  sharedSecret + AES_BLOCK_SIZE,
                  std::bind(&RandomGenerator::GetInt, &m_RNG, 0, 255));
    for (int i = 0; i < 16; i++) {
        sharedSecretKey += sharedSecret[i];
    }
    // Initialize AES encryption and decryption
    if (!(m_EncryptCTX = EVP_CIPHER_CTX_new()))
        std::cout << "123" << std::endl;
    if (!(EVP_EncryptInit_ex(m_EncryptCTX, EVP_aes_128_cfb8(), nullptr,
                             (unsigned char*)sharedSecretKey.c_str(),
                             (unsigned char*)sharedSecretKey.c_str())))
        std::cout << "123" << std::endl;
    if (!(m_DecryptCTX = EVP_CIPHER_CTX_new()))
        std::cout << "123" << std::endl;
    if (!(EVP_DecryptInit_ex(m_DecryptCTX, EVP_aes_128_cfb8(), nullptr,
                             (unsigned char*)sharedSecretKey.c_str(),
                             (unsigned char*)sharedSecretKey.c_str())))
        std::cout << "123" << std::endl;
    m_BlockSize = EVP_CIPHER_block_size(EVP_aes_128_cfb8());
}

std::string result;
int size = 0;
result.resize(1000);
EVP_DecryptUpdate(m_DecryptCTX, &((unsigned char*)result.c_str())[0], &size,
                  &sendString[0], data.size());
Crypto++:
CryptoPP::CFB_Mode<CryptoPP::AES>::Decryption AESDecryptor(
    (byte*)sharedSecret.c_str(), (unsigned int)16, sharedSecret.c_str(), 1);
std::string sTarget("");
CryptoPP::StringSource ss(data, true,
    new CryptoPP::StreamTransformationFilter(AESDecryptor,
        new CryptoPP::StringSink(sTarget)));
I think it is important to mention that I use one and the same shared secret for both the key and the IV (initialization vector). In other posts this was often labeled as a problem, but I don't know how to fix it in this case, because the protocol wants it that way.
I look forward to constructive feedback.
EVP_EncryptInit_ex(m_EncryptCTX, EVP_aes_128_cfb8(), nullptr,
                   (unsigned char*)sharedSecretKey.c_str(),
                   (unsigned char*)sharedSecretKey.c_str())
And:
CFB_Mode<AES>::Decryption AESDecryptor((byte*)sharedSecret.c_str(),
    (unsigned int)16, sharedSecret.c_str(), 1);
std::string sTarget("");
StringSource ss(data, true,
    new StreamTransformationFilter(AESDecryptor, new StringSink(sTarget)));
It is not readily apparent, but you need to set the feedback size for the mode of operation of the block cipher in Crypto++. The Crypto++ feedback size is 128 bits by default.
The code to set the feedback size of CFB mode can be found at CFB Mode on the Crypto++ wiki. You want the 3rd or 4th example down the page.
AlgorithmParameters params =
    MakeParameters(Name::FeedbackSize(), 1 /*8-bits*/)
                  (Name::IV(), ConstByteArrayParameter(iv));
That is kind of an awkward way to pass parameters. It is documented in the source files and on the wiki at NameValuePairs. It allows you to pass arbitrary parameters through consistent interfaces. It is powerful once you acquire a taste for it.
And then use params to key the encryptor and decryptor:
CFB_Mode< AES >::Encryption enc;
enc.SetKey( key, key.size(), params );
// CFB mode must not use padding. Specifying
// a scheme will result in an exception
StringSource ss1( plain, true,
new StreamTransformationFilter( enc,
new StringSink( cipher )
) // StreamTransformationFilter
); // StringSource
I believe your calls would look something like this (if I am parsing the OpenSSL correctly):
const byte* ptr = reinterpret_cast<const byte*>(sharedSecret.c_str());
AlgorithmParameters params =
    MakeParameters(Name::FeedbackSize(), 1 /*8-bits*/)
                  (Name::IV(), ConstByteArrayParameter(ptr, 16));
CFB_Mode< AES >::Encryption enc;
enc.SetKey( ptr, 16, params );
In your production code you should use a unique key and IV. So do something like this using HKDF:
std::string seed(AES_BLOCK_SIZE, '0');
std::generate(seed.begin(), seed.end(),
              std::bind(&RandomGenerator::GetInt, &m_RNG, 0, 255));

SecByteBlock sharedSecret(32);
const byte usage[] = "Key and IV v1";

HKDF<SHA256> hkdf;
hkdf.DeriveKey(sharedSecret, 32,
               reinterpret_cast<const byte*>(&seed[0]), 16,
               usage, COUNTOF(usage), nullptr, 0);

AlgorithmParameters params =
    MakeParameters(Name::FeedbackSize(), 1 /*8-bits*/)
                  (Name::IV(), ConstByteArrayParameter(sharedSecret + 16, 16));

CFB_Mode<AES>::Encryption enc;
enc.SetKey(sharedSecret + 0, 16, params);
In the code above, sharedSecret is twice as large as it needs to be. You derive the key and IV from the seed using HKDF: sharedSecret+0 is the 16-byte key, and sharedSecret+16 is the 16-byte IV.
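Putting those pieces together, a self-contained CFB-8 round trip in Crypto++ might look like the sketch below; the fixed all-zero key and IV are for demonstration only, standing in for the protocol's shared secret:
#include "cryptopp/cryptlib.h"
#include "cryptopp/aes.h"
#include "cryptopp/modes.h"
#include "cryptopp/filters.h"
#include <iostream>
#include <string>

int main()
{
    using namespace CryptoPP;

    byte key[AES::DEFAULT_KEYLENGTH] = {};  // 16 zero bytes, test only
    byte iv[AES::BLOCKSIZE] = {};           // test only

    // 8-bit feedback size, as the protocol's CFB-8 requires.
    AlgorithmParameters params =
        MakeParameters(Name::FeedbackSize(), 1 /*8-bits*/)
                      (Name::IV(), ConstByteArrayParameter(iv, sizeof(iv)));

    std::string plain = "example packet payload", cipher, recovered;

    CFB_Mode<AES>::Encryption enc;
    enc.SetKey(key, sizeof(key), params);
    StringSource ss1(plain, true,
        new StreamTransformationFilter(enc, new StringSink(cipher)));

    CFB_Mode<AES>::Decryption dec;
    dec.SetKey(key, sizeof(key), params);
    StringSource ss2(cipher, true,
        new StreamTransformationFilter(dec, new StringSink(recovered)));

    std::cout << (plain == recovered ? "round trip OK" : "mismatch") << std::endl;
    return 0;
}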

What is the preferred way to get a device path for CreateFile() in a UWP C++ App?

I am converting a project to a UWP App, and thus have been following guidelines outlined in the MSDN post here. The existing project heavily relies on CreateFile() to communicate with connected devices.
There are many posts on SO that show how to get a CreateFile()-accepted device path using SetupAPI's SetupDiGetDeviceInterfaceDetail(). Is there an alternative way to do this using the PnP Configuration Manager API? Or an alternative user-mode way at all?
I had some hope when I saw this example in the Windows Driver Samples github, but quickly became dismayed when I saw that the function used in the sample is, ironically, not intended for developer use, as noted in this MSDN page.
The GetDevicePath function is correct in general and can be used as is. As for the difference between CM_*(..) and CM_*_Ex(.., HMACHINE hMachine): CM_*(..) simply calls CM_*_Ex(.., NULL), so for the local computer the versions with and without the _Ex suffix are the same.
Regarding the concrete GetDevicePath code: calling CM_Get_Device_Interface_List_Size and then CM_Get_Device_Interface_List only once is not 100% correct, because between these two calls a new device with this interface can arrive in the system, and the buffer size returned by CM_Get_Device_Interface_List_Size may no longer be large enough for CM_Get_Device_Interface_List. Of course the probability of this is very low, and you can ignore it, but I prefer to make the code theoretically correct and call this in a loop until I receive an error other than CR_BUFFER_SMALL. You also need to understand that CM_Get_Device_Interface_List returns multiple NULL-terminated Unicode strings, so we need to iterate here. The example always uses only the first returned symbolic link name of an interface instance, but there can be more than one, or none at all (empty). So a better name for the function is GetDevicePaths; note the s at the end. I would use code like this:
ULONG GetDevicePaths(LPGUID InterfaceClassGuid, PWSTR* pbuf)
{
    CONFIGRET err;
    // First try with some reasonable buffer size, without calling *_List_SizeW.
    ULONG len = 1024;
    for (PWSTR buf;;)
    {
        if (!(buf = (PWSTR)LocalAlloc(0, len * sizeof(WCHAR))))
        {
            return ERROR_NO_SYSTEM_RESOURCES;
        }
        switch (err = CM_Get_Device_Interface_ListW(InterfaceClassGuid, 0, buf, len,
                                                    CM_GET_DEVICE_INTERFACE_LIST_PRESENT))
        {
        case CR_BUFFER_SMALL:
            // Buffer too small: query the required size, free, and retry.
            err = CM_Get_Device_Interface_List_SizeW(&len, InterfaceClassGuid, 0,
                                                     CM_GET_DEVICE_INTERFACE_LIST_PRESENT);
            // fall through
        default:
            LocalFree(buf);
            if (err)
            {
                return CM_MapCrToWin32Err(err, ERROR_UNIDENTIFIED_ERROR);
            }
            continue;
        case CR_SUCCESS:
            *pbuf = buf;
            return NOERROR;
        }
    }
}
and a usage example:
void example()
{
    PWSTR buf, sz;
    if (NOERROR == GetDevicePaths((GUID*)&GUID_DEVINTERFACE_VOLUME, &buf))
    {
        sz = buf;
        while (*sz)
        {
            DbgPrint("%S\n", sz);
            HANDLE hFile = CreateFile(sz, FILE_GENERIC_READ, FILE_SHARE_VALID_FLAGS,
                                      0, OPEN_EXISTING, 0, 0);
            if (hFile != INVALID_HANDLE_VALUE)
            {
                // do something
                CloseHandle(hFile);
            }
            sz += 1 + wcslen(sz);
        }
        LocalFree(buf);
    }
}
So we must not simply use the first string in the returned device paths (sz); we must iterate over all of them:
while (*sz)
{
    // use sz
    sz += 1 + wcslen(sz);
}
I got a valid device path to a USB hub device, and used it successfully to get various device descriptors by sending some IOCTLs, using the function I posted in my own answer to another question.
I'm reporting the same function below:
This function returns a list of NULL-terminated device paths (that's what we get from CM_Get_Device_Interface_List()).
You need to pass it the DEVINST and the wanted interface GUID.
Since both the DEVINST and interface GUID are specified, it is highly likely that CM_Get_Device_Interface_List() will return a single device path for that interface, but technically you should be prepared to get more than one result.
It is the responsibility of the caller to delete[] the returned list if the function returns successfully (return code 0).
int GetDevInstInterfaces(DEVINST dev, LPGUID interfaceGUID,
                         wchar_t** outIfaces, ULONG* outIfacesLen)
{
    CONFIGRET cres;
    if (!outIfaces)
        return -1;
    if (!outIfacesLen)
        return -2;

    // Get System Device ID
    WCHAR sysDeviceID[256];
    cres = CM_Get_Device_ID(dev, sysDeviceID,
                            sizeof(sysDeviceID) / sizeof(sysDeviceID[0]), 0);
    if (cres != CR_SUCCESS)
        return -11;

    // Get list size
    ULONG ifaceListSize = 0;
    cres = CM_Get_Device_Interface_List_Size(&ifaceListSize, interfaceGUID, sysDeviceID,
                                             CM_GET_DEVICE_INTERFACE_LIST_PRESENT);
    if (cres != CR_SUCCESS)
        return -12;

    // Allocate memory for the list
    wchar_t* ifaceList = new wchar_t[ifaceListSize];

    // Populate the list
    cres = CM_Get_Device_Interface_List(interfaceGUID, sysDeviceID, ifaceList,
                                        ifaceListSize, CM_GET_DEVICE_INTERFACE_LIST_PRESENT);
    if (cres != CR_SUCCESS) {
        delete[] ifaceList;
        return -13;
    }

    // Return list
    *outIfaces = ifaceList;
    *outIfacesLen = ifaceListSize;
    return 0;
}
Please note that, as RbMm already said in his answer, you may get a CR_BUFFER_SMALL error from the last CM_Get_Device_Interface_List() call, since the device list may have changed between the CM_Get_Device_Interface_List_Size() and CM_Get_Device_Interface_List() calls.
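A usage sketch of the function above (assuming you already hold a DEVINST for the device, and using GUID_DEVINTERFACE_USB_HUB from usbiodef.h as an example interface):
void example(DEVINST dev)
{
    wchar_t* ifaces = nullptr;
    ULONG ifacesLen = 0;
    if (GetDevInstInterfaces(dev, (LPGUID)&GUID_DEVINTERFACE_USB_HUB,
                             &ifaces, &ifacesLen) == 0)
    {
        // Walk the multi-string: each entry is a device path for CreateFile().
        for (wchar_t* sz = ifaces; *sz; sz += 1 + wcslen(sz))
        {
            wprintf(L"%s\n", sz);
        }
        delete[] ifaces;  // the caller owns the returned list
    }
}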

MSDN HMAC-SHA1 example not working

The steps for creating an HMAC by using CryptoAPI are found here: http://msdn.microsoft.com/en-us/library/Aa379863
To compute an HMAC:
1. Get a pointer to the Microsoft Cryptographic Service Provider (CSP) by calling CryptAcquireContext.
2. Create a handle to an HMAC hash object by calling CryptCreateHash. Pass CALG_HMAC in the Algid parameter. Pass the handle of a symmetric key in the hKey parameter. This symmetric key is the key used to compute the HMAC.
3. Specify the type of hash to be used by calling CryptSetHashParam with the dwParam parameter set to the value HP_HMAC_INFO. The pbData parameter must point to an initialized HMAC_INFO structure.
4. Call CryptHashData to begin computing the HMAC of the data. The first call to CryptHashData causes the key value to be combined using the XOR operator with the inner string and the data. The result of the XOR operation is hashed, and then the target data for the HMAC (pointed to by the pbData parameter passed in the call to CryptHashData) is hashed. If necessary, subsequent calls to CryptHashData may then be made to finish the hashing of the target data.
5. Call CryptGetHashParam with the dwParam parameter set to HP_HASHVAL. This call causes the inner hash to be finished and the outer string to be combined using XOR with the key. The result of the XOR operation is hashed, and then the result of the inner hash (completed in the previous step) is hashed. The outer hash is then finished and returned in the pbData parameter and the length in the dwDataLen parameter.
I cannot, for the life of me, get this working. I have all the steps in order, and still cannot even run my program. Errors while running:
Error in CryptImportKey 0x8009007
Error in CryptCreatHash 0x8009003
Error in CryptSetHashParam 0x00000057
Error in CryptHashData 0x00000057
Error in CryptGetHashParam 0x00000057
Can anyone help?
#include <iostream>
#include <windows.h>
#include <wincrypt.h>
using namespace std;

#define CALG_HMAC CALG_SHA1

int main()
{
    //--------------------------------------------------------------------
    // Declare variables.
    HCRYPTPROV hProv = NULL;
    HCRYPTHASH hHash = NULL;
    HCRYPTKEY hKey = NULL;
    BYTE DesKeyBlob[] = { 0x70,0x61,0x73,0x73,0x77,0x6F,0x72,0x64 };
    HCRYPTHASH hHmacHash = NULL;
    PBYTE pbHash = NULL;
    DWORD dwDataLen = 20;
    BYTE Data[] = { 0x6D,0x65,0x73,0x73,0x61,0x67,0x65 };
    HMAC_INFO HmacInfo;

    //--------------------------------------------------------------------
    // Zero the HMAC_INFO structure
    ZeroMemory(&HmacInfo, sizeof(HmacInfo));
    HmacInfo.HashAlgid = CALG_HMAC;
    HmacInfo.pbInnerString = (BYTE*)0x36;
    HmacInfo.cbInnerString = 0;
    HmacInfo.pbOuterString = (BYTE*)0x5C;
    HmacInfo.cbOuterString = 0;

    // Step 1
    if (!CryptAcquireContext(
            &hProv,               // handle of the CSP
            NULL,                 // key container name
            NULL,                 // CSP name
            PROV_RSA_FULL,        // provider type
            CRYPT_VERIFYCONTEXT)) // no key access is requested
    {
        printf(" Error in AcquireContext 0x%08x \n", GetLastError());
    }

    //--------------------------------------------------------------------
    // Step 2
    // In step two, we need the hash key used to be imported?
    // This imports the key used... as hKey
    if (!CryptImportKey(
            hProv,
            DesKeyBlob,
            sizeof(DesKeyBlob),
            0,
            CRYPT_EXPORTABLE,
            &hKey))
    {
        printf("Error in !CryptImportKey 0x%08x \n", GetLastError());
    }

    if (!CryptCreateHash(
            hProv,       // handle of the CSP
            CALG_HMAC,   // hash algorithm to use
            hKey,        // hash key; this should point to a key used to compute the HMAC?
            0,           // reserved
            &hHmacHash)) // address of hash object handle
    {
        printf("Error in CryptCreateHash 0x%08x \n", GetLastError());
    }

    // Step 3
    if (!CryptSetHashParam(
            hHmacHash,        // handle of the HMAC hash object
            HP_HMAC_INFO,     // setting an HMAC_INFO object
            (BYTE*)&HmacInfo, // the HMAC_INFO object
            0))               // reserved
    {
        printf("Error in CryptSetHashParam 0x%08x \n", GetLastError());
    }

    // Step 4
    if (!CryptHashData(
            hHmacHash,    // handle of the HMAC hash object
            Data,         // message to hash
            sizeof(Data), // number of bytes of data to add
            0))           // flags
    {
        printf("Error in CryptHashData 0x%08x \n", GetLastError());
    }

    // Step 5
    if (!CryptGetHashParam(
            hHmacHash,  // handle of the HMAC hash object
            HP_HASHVAL, // query on the hash value
            pbHash,     // pointer to the HMAC hash value
            &dwDataLen, // length, in bytes, of the hash
            0))
    {
        printf("Error in CryptGetHashParam 0x%08x \n", GetLastError());
    }

    // Print the hash to the console.
    printf("The hash is: ");
    for (DWORD i = 0; i < dwDataLen; i++)
    {
        printf("%2.2x ", pbHash[i]);
    }
    printf("\n");

    int a;
    std::cin >> a;
    return 0;
}
You might (?1) need to specify what hash algorithm you want to use.
#define CALG_HMAC CALG_SHA1 // or CALG_MD5 etc
Edit
Why do you initialize dwDataLen = 20 (instead of 0)?
Why did you change the hash algorithm from SHA-1?
Why do you not exit on ErrorExit anymore? (That alone would give a proper error message instead of the crash.)
You use CryptImportKey instead of CryptDeriveKey; no such thing even exists in the sample on MSDN. It can't be a coincidence that CryptImportKey is the call failing with 0x80090005 (NTE_BAD_DATA). The key is not supported by your CSP!
For that to work, you need key access, so you'd at least need to change CRYPT_VERIFYCONTEXT into something else (I don't know what); I tried using:
if (!CryptAcquireContext(
        &hProv,
        NULL,
        MS_STRONG_PROV,        // allow 2048-bit keys, in case you need it
        PROV_RSA_FULL,
        CRYPT_MACHINE_KEYSET)) // just a guess
Now my program results in 0x80090016: Keyset does not exist. This might simply be because I don't have that keyset, or because I'm running on Linux under Wine.
Hope this helps.
1 Compiled on Linux using:
i586-mingw32msvc-g++ -m32 -O2 -g test.cpp -o test.exe
It did crash when run (without parameters), but that might be a Wine incompatibility (or the fact that I haven't read the source to see what it does :))
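For completeness, here is a sketch of the sequence the MSDN sample itself uses, with CryptDeriveKey in place of CryptImportKey, an HMAC_INFO left at its default inner/outer pads, and a real output buffer. It follows the documented steps but is untested here, so treat it as an assumption rather than verified code:
#include <windows.h>
#include <wincrypt.h>
#include <cstdio>

int main()
{
    HCRYPTPROV hProv = NULL;
    HCRYPTHASH hKeyHash = NULL, hHmacHash = NULL;
    HCRYPTKEY hKey = NULL;
    BYTE keyBytes[] = { 0x70,0x61,0x73,0x73,0x77,0x6F,0x72,0x64 }; // "password"
    BYTE data[] = { 0x6D,0x65,0x73,0x73,0x61,0x67,0x65 };          // "message"
    BYTE hash[20];                 // SHA-1 output size
    DWORD dwDataLen = sizeof(hash);
    HMAC_INFO hmacInfo;
    ZeroMemory(&hmacInfo, sizeof(hmacInfo));
    hmacInfo.HashAlgid = CALG_SHA1; // inner/outer pads default to 0x36/0x5C

    if (!CryptAcquireContext(&hProv, NULL, NULL, PROV_RSA_FULL, CRYPT_VERIFYCONTEXT))
        goto cleanup;
    // Derive a symmetric key from the password bytes instead of importing a
    // raw blob, which the default CSP rejects.
    if (!CryptCreateHash(hProv, CALG_SHA1, 0, 0, &hKeyHash))
        goto cleanup;
    if (!CryptHashData(hKeyHash, keyBytes, sizeof(keyBytes), 0))
        goto cleanup;
    if (!CryptDeriveKey(hProv, CALG_RC4, hKeyHash, 0, &hKey))
        goto cleanup;
    if (!CryptCreateHash(hProv, CALG_HMAC, hKey, 0, &hHmacHash))
        goto cleanup;
    if (!CryptSetHashParam(hHmacHash, HP_HMAC_INFO, (BYTE*)&hmacInfo, 0))
        goto cleanup;
    if (!CryptHashData(hHmacHash, data, sizeof(data), 0))
        goto cleanup;
    if (!CryptGetHashParam(hHmacHash, HP_HASHVAL, hash, &dwDataLen, 0))
        goto cleanup;

    printf("The hash is: ");
    for (DWORD i = 0; i < dwDataLen; i++)
        printf("%2.2x ", hash[i]);
    printf("\n");

cleanup:
    if (hHmacHash) CryptDestroyHash(hHmacHash);
    if (hKey) CryptDestroyKey(hKey);
    if (hKeyHash) CryptDestroyHash(hKeyHash);
    if (hProv) CryptReleaseContext(hProv, 0);
    return 0;
}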