Pass Powershell SecureString to C++ program? - c++

I have a native program that takes a password passed in on the command line. That password is showing up in server logs, so I want to obfuscate it by encrypting it before putting it on the command line. I would then decrypt it in the program and use it as before. The idea is that I use PowerShell to create a SecureString with the password, then make it a printable text string using ConvertFrom-SecureString. That string is then passed in on the command line to the native C++ program. From there, I decode it back to a binary encrypted form, and then decrypt it back to the original plain text password. Easy, right?
From scant documentation, I think ConvertFrom-SecureString does a Base64 encoding to make the binary SecureString into printable text. Can anyone confirm that?
I recover the binary bytes using ATL::Base64Decode(). This appears to work when comparing the first 20 bytes from the original and the decoded.
After that I'm trying to decrypt the SecureString bytes. Again, some documentation appears to imply that the SecureString encryption is done using the Machine Key (or User Session Key). Based on this, I'm trying to decrypt using the DPAPI CryptUnprotectData method. Here, though, I get a decrypt failure with "(0x8007000d) The data is invalid". Does this sound like it will work? If so, any idea where I'm off course?
Here's the decrypt method ...
// Decrypts an encoded and encrypted string with DPAPI and Machine Key, returns the decrypted string
static HRESULT Decrypt(CStringW wsEncryptedBase64, OUT CStringW& wsPlainText)
{
    HRESULT hr = S_OK;
    DATA_BLOB dbIn = { 0 };
    DATA_BLOB dbOut = { 0 };

    // Parse each pair of hex characters into one byte
    const wchar_t *pos = wsEncryptedBase64.GetString();
    dbIn.cbData = wsEncryptedBase64.GetLength() / 2;
    dbIn.pbData = (BYTE*)malloc(dbIn.cbData);
    unsigned int num = 0;
    for (DWORD i = 0; i < dbIn.cbData; i++)
    {
        swscanf_s(pos, L"%2x", &num);
        dbIn.pbData[i] = (BYTE)num;
        pos += 2;
    }

    if (::CryptUnprotectData(&dbIn, NULL, NULL, NULL, NULL,
                             CRYPTPROTECT_UI_FORBIDDEN, &dbOut))
    {
        // The protected payload is the UTF-16 text of the SecureString
        wsPlainText = CStringW(reinterpret_cast<wchar_t const*>(dbOut.pbData),
                               dbOut.cbData / sizeof(wchar_t));
        ::LocalFree(dbOut.pbData);
    }
    else
    {
        hr = HRESULT_FROM_WIN32(::GetLastError());
        if (hr == S_OK)
        {
            hr = SEC_E_DECRYPT_FAILURE;
        }
    }

    free(dbIn.pbData);
    return hr;
}

From what I can tell looking at the binary in dotPeek, ConvertFrom-SecureString is using SecureStringToCoTaskMemUnicode to convert the secure string payload to an array of bytes. That array of bytes is returned in hex form, e.g. byte.ToString("x2").
This assumes that you are using DPAPI, as you say, and not using the Key or SecureKey parameters on ConvertFrom-SecureString.
So in your C++ program do not use Base64Decode; just parse every two chars as a hex byte, then call CryptUnprotectData on the resulting byte array (stuffed into the DATA_BLOB).

Related

OpenSSL signature length is different for a different message

I have been struggling with a weird problem with RSA_verify. I am trying to RSA_sign using C and RSA_verify using C++. I have generated the private key and certificate using OpenSSL commands.
message = "1.2.0:08:00:27:2c:88:77"
When I use the message above, generate a hash and use RSA_sign to sign the digest, I get a signature of length 256 (strlen(signature)) and also the length returned from RSA_sign is 256. I use this length to verify and verification succeeds.
But when I use a message = "1.2.0:08:00:27:2c:88:08", the signature length is 60 and RSA_sign returns 256. When I use this length 60 to verify it fails. It fails to verify with length 256 as well. Also for some messages (1.2.0:08:00:27:2c:88:12) the signature generated is zero.
I am using SHA256 to hash the message and NID_SHA256 to RSA_sign and RSA_verify this digest. I have used -sha256 while generating the keys using the OpenSSL command.
I am forming the message by parsing an XML file reading some of the tags using some string operation.
Kindly suggest.
Below is the code used to sign.
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <openssl/pem.h>
#include <openssl/rsa.h>
#include <openssl/err.h>

char *generate_hash(void); /* defined elsewhere: builds the message digest */

int main(void)
{
    RSA *prikey;
    char *data;
    unsigned char *signature;
    unsigned int slen = 0;
    FILE *fp_priv = NULL;
    char *privfilepath = "priv.pem";

    ERR_load_crypto_strings();
    data = generate_hash();
    printf("Message after generate hash %s: %d\n", data, strlen(data));

    fp_priv = fopen(privfilepath, "r");
    if (fp_priv == NULL)
    {
        printf("Private key path not found..");
        return 1;
    }

    prikey = RSA_new();
    prikey = PEM_read_RSAPrivateKey(fp_priv, &prikey, NULL, NULL);
    if (prikey == NULL)
    {
        printf("Private key returned is NULL\n");
        return 1;
    }

    signature = (unsigned char*)malloc(RSA_size(prikey));
    if (signature == NULL)
        return 1;

    if (RSA_sign(NID_sha256, (unsigned char*)data, strlen(data),
                 signature, &slen, prikey) != 1) {
        ERR_print_errors_fp(stdout);
        return 1;
    }

    printf("Signature length while signing... %d : %d : %d ",
           strlen(signature), slen, strlen(data));

    FILE *sig_bin = fopen("sig_bin", "w");
    fprintf(sig_bin, "%s", signature);
    fclose(sig_bin);
    system("xxd -p -c256 sig_bin sig_hex");

    RSA_free(prikey);
    if (signature)
        free(signature);
    return 0;
}
One very, very important thing to learn about C is that it has two distinct types with the same name.
char*: This represents the beginning of a character string. You can do things like strstr or strlen.
(You should really use strnstr and strnlen rather than strstr and strlen, but that's a different problem.)
char*: This represents the beginning of a data blob (aka byte array, aka octet string); you can't meaningfully apply strlen and friends to it.
RSA_sign uses the latter. It returns "data", not "a message". So, in your snippet
printf("Signature length while signing... %d : %d : %d ",
strlen(signature), slen, strlen(data));
FILE * sig_bin = fopen("sig_bin", "w");
fprintf(sig_bin, "%s", signature);
fclose(sig_bin);
data came from a function called generate_hash(); it's probably non-textual, so strlen doesn't apply. signature definitely is data, so strlen doesn't apply. fprintf also doesn't apply, for the same reasons. These functions identify the end of the character string by the first occurrence of a zero-byte (0x00, '\0', etc). But 0x00 is perfectly legal to have in a signature, or a hash, or lots of "data".
The length of the output of RSA_sign is written into the address passed into the 5th parameter. You passed &slen (address-of slen), so once the function exits (successfully) slen is the length of the signature. Note that it will only very rarely match strlen(signature).
To write your signature as binary, you should use fwrite, such as fwrite(signature, sizeof(char), slen, sig_bin); (and open the file in "wb" mode so nothing gets translated). If you want it as text, you should Base64-encode your data.
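A minimal sketch of that round trip, assuming the signature/slen pair comes from RSA_sign as above; the helper names write_signature and verify_signature, and the digest/pubkey parameters, are just for illustration:

#include <stdio.h>
#include <openssl/rsa.h>
#include <openssl/objects.h>

/* Sketch: write slen raw signature bytes in binary mode (no strlen, no fprintf). */
static int write_signature(const char *path, const unsigned char *sig, unsigned int slen)
{
    FILE *f = fopen(path, "wb");
    if (!f) return 0;
    size_t n = fwrite(sig, 1, slen, f);
    fclose(f);
    return n == slen;
}

/* Sketch: read the signature back and verify it against the same SHA-256 digest. */
static int verify_signature(const char *path, const unsigned char *digest,
                            unsigned int digest_len, RSA *pubkey)
{
    unsigned char sigbuf[512];
    FILE *f = fopen(path, "rb");
    if (!f) return 0;
    size_t siglen = fread(sigbuf, 1, sizeof(sigbuf), f);
    fclose(f);
    return RSA_verify(NID_sha256, digest, digest_len,
                      sigbuf, (unsigned int)siglen, pubkey);
}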

How to get random salt from OpenSSL as std::string

I would like to generate a random string with OpenSSL and use this as a salt in a hashing function afterwards (will be Argon2). Currently I'm generating the random data this way:
if (length < CryptConfig::sMinSaltLen) {
    return 1;
}
if (!sInitialized) {
    RAND_poll();
    sInitialized = true;
}
unsigned char * buf = new unsigned char[length];
if (!sInitialized || !RAND_bytes(buf, length)) {
    return 1;
}
salt = std::string(reinterpret_cast<char*>(buf));
delete[] buf;
return 0;
But a std::cout of salt doesn't seem to be a proper string (contains control symbols and other stuff). This is most likely only my fault.
Am I using the wrong functions of OpenSSL to generate the random data?
Or is my conversion from buf to string faulty?
Random data is random data. That's what you're asking for and that's exactly what you're getting. There is one real bug in the conversion, though: constructing std::string from a char* without a length stops at the first zero byte (and buf isn't NUL-terminated), so pass the length explicitly, e.g. std::string(reinterpret_cast<char*>(buf), length). Even then, the salt is a perfectly valid string that happens to contain unprintable characters. If you want printable characters, one way of achieving that is Base64 encoding, but that will blow up its length. Another option is to discard non-printable bytes; there is no mechanism to force RAND_bytes to do this, so you would simply fetch random bytes in a loop until you have length printable characters (see the sketch below).
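A minimal sketch of that loop idea; the helper name printable_salt is just for illustration, and note that restricting the output to printable characters reduces the entropy per character:

#include <openssl/rand.h>
#include <cctype>
#include <string>

// Sketch: keep drawing random bytes until `length` printable ones are collected.
static bool printable_salt(std::string& salt, size_t length)
{
    salt.clear();
    unsigned char b = 0;
    while (salt.size() < length) {
        if (RAND_bytes(&b, 1) != 1)
            return false;                     // RNG failure
        if (std::isprint(b))
            salt.push_back(static_cast<char>(b));
    }
    return true;
}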
If encoding base64 is acceptable for you, here is an example of how to use the OpenSSL base64 encoder, extracted from Joe Linoff's Cipher library:
string Cipher::encode_base64(uchar* ciphertext,
                             uint ciphertext_len) const
{
    DBG_FCT("encode_base64");
    BIO* b64 = BIO_new(BIO_f_base64());
    BIO* bm = BIO_new(BIO_s_mem());
    b64 = BIO_push(b64, bm);
    if (BIO_write(b64, ciphertext, ciphertext_len) < 2) {
        throw runtime_error("BIO_write() failed");
    }
    if (BIO_flush(b64) < 1) {
        throw runtime_error("BIO_flush() failed");
    }
    BUF_MEM *bptr = 0;
    BIO_get_mem_ptr(b64, &bptr);
    uint len = bptr->length;
    char* mimetext = new char[len + 1];
    memcpy(mimetext, bptr->data, bptr->length - 1);
    mimetext[bptr->length - 1] = 0;
    BIO_free_all(b64);
    string ret = mimetext;
    delete [] mimetext;
    return ret;
}
To this code, I suggest adding BIO_set_flags(b64, BIO_FLAGS_BASE64_NO_NL), because otherwise you'll get a new line character inserted after every 64 characters. See OpenSSL's -A switch for details.
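If pulling in the whole BIO chain feels heavy, a minimal sketch using OpenSSL's EVP_EncodeBlock (which emits no newlines) might look like this; the function name random_salt_base64 is just for illustration:

#include <openssl/rand.h>
#include <openssl/evp.h>
#include <string>
#include <vector>

// Sketch: generate `length` random bytes and return them Base64-encoded.
static std::string random_salt_base64(size_t length)
{
    std::vector<unsigned char> buf(length);
    if (RAND_bytes(buf.data(), static_cast<int>(length)) != 1)
        return std::string();                         // RNG failure

    // Base64 output is 4/3 of the input size, rounded up, plus a NUL.
    std::vector<unsigned char> out(4 * ((length + 2) / 3) + 1);
    int written = EVP_EncodeBlock(out.data(), buf.data(),
                                  static_cast<int>(length));
    return std::string(reinterpret_cast<char*>(out.data()), written);
}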

CryptProtectData: C++ Code for calling function from Crypt32.dll for MAPI Profile

I am trying to use the CryptProtectData function so I can encrypt my password and use it inside my MAPI profile. I am using these two articles http://blogs.msdn.com/b/dvespa/archive/2013/05/21/how-to-mfcmapi-create-mapi-profile-exchange-2013.aspx
and http://blogs.msdn.com/b/dvespa/archive/2013/07/15/create-profile-connect-mfcmapi-to-office-365.aspx for connecting to my hosted Exchange (2013) account with MFCMAPI. When setting all my properties I am being prompted for my credentials, and there I ran into the problem that the field provided for the domain is too short for my domain. So I have to set these properties manually (how to do that is described in the second article).
Now I need to set the username and password in my MAPI profile, and it seems like I need to encrypt the password on my own (I have to build an application to do so). I am using "MAPI Download configuration guidance.docx" (can be downloaded from www .microsoft.com/en-us/download/details.aspx?id=39045; the piece of code I am using is at the end of the document) for building my own application to encrypt my password (I am using the smaller example for just encrypting the password, not for creating the whole profile). There I ran into a lot of problems: the application didn't run on 32-bit Windows, then the crypt32.lib was missing (I had to create it on my own), and so on. Now I have it running on a 64-bit machine, but I am not sure how to provide my data to the program.
I have the following code:
std::string stemp = "myPassword";
std::wstring stemp1 = std::wstring(stemp.begin(), stemp.end());
LPWSTR pwszPassword = (LPWSTR)stemp1.c_str();
HRESULT hr = S_OK;
DATA_BLOB dataBlobIn = {0};
DATA_BLOB dataBlobOut = {0};
SPropValue propValues[2] = {0};
// Validate parameters
// Encrypt password based on local user authentication
dataBlobIn.pbData = (LPBYTE)pwszPassword;
// Include NULL character
dataBlobIn.cbData = (::wcslen(pwszPassword) + 1) * sizeof(WCHAR);
CryptProtectData(
    &dataBlobIn,
    NULL,
    NULL,
    NULL,
    NULL,
    0,
    &dataBlobOut);
std::cout << "\n-- ";
std::wcout << (dataBlobOut.cbData);
std::cout << " --\n";
std::wcout << (dataBlobOut.pbData);
Now when outputting these two values, for dataBlobOut.cbData I mostly get "230" (I thought this might change when I change the size of the password, but it does not; it has the same value for passwords like "aaa", "bbbbb", "cc" ...), and for dataBlobOut.pbData I get a hexadecimal value (something like 0x2cde50). I think it is the address of the variable, since pbData is a pointer.
Since I am getting the exact same values for different passwords, I assume that my approach is not right. But what do I have to change to get my encrypted password so I can fill the property PR_PROFILE_AUTH_PASSWORD in my MAPI profile?
I have asked this question also on the Microsoft Exchange forum, but I think that forum is more oriented toward the technology itself than toward software development.
Kind regards
rimes
Sorry, it seems like I didn't post the message with my right account (and with this one I cannot write comments).
As I said, I am trying to output the encrypted data into a text file:
std::ofstream myfile;
myfile.open ("encrypted.txt");
myfile << (LPCWSTR)(dataBlobIn.pbData);
But when I open my .txt I just get something like "0xca7ee8" (or it is empty when I write it without the (LPCWSTR) cast). Can this be the "right" output at all? I mean, I was expecting a lot more chars, but since Igor said that the size of the encrypted password does not necessarily have to change when I change the size of the plain password, maybe this output is what I need? (Or is there any way I can verify that this is the encrypted password and not some random address from memory?)
Edit:
I think I got the solution. It seems like I need to output my data this way:
for (unsigned int i=0;i<dataBlobOut.cbData;i++){
myfile<<dataBlobOut.pbData+i;
}
So I get a lot of nonsense data in my text file, but I think that this is my encrypted password. I will check it and report back whether it worked for setting my (encrypted) password in my MAPI profile.
But I still have one concern: does the output always have to be the same for the same input, or can it also change? (When running my program I sometimes get different outputs for the same password.)
You can use something like this:
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#pragma comment(lib, "crypt32")

int main(int argc, char** argv) {
    if (argc != 2) {
        fprintf(stdout, "usage: cpd <string>\n");
        exit(1);
    }
    const char* u = argv[1];
    DATA_BLOB db_i;
    db_i.cbData = static_cast<DWORD>(strlen(u));
    db_i.pbData = (BYTE*)(u);
    DATA_BLOB db_o = {0, NULL};
    if (!::CryptProtectData(&db_i,
                            NULL, NULL, NULL, NULL,
                            CRYPTPROTECT_UI_FORBIDDEN,
                            &db_o)) {
        fprintf(stdout,
                "CryptProtectData failed with error %ld\n",
                GetLastError());
    }
    else {
        for (DWORD c = 0; c < db_o.cbData; c++) {
            if ((c % 40) == 0)
                fprintf(stdout, "\n");
            fprintf(stdout, "%2.2x", db_o.pbData[c]);
        }
        LocalFree(db_o.pbData);
    }
    return 0;
}
The string is written as a hex representation (40 bytes, i.e. 80 hex characters, per line). You can accumulate db_o.pbData, formatted the same way, into a std::string for serialization. There are multiple ways of doing it; I prefer to allocate a char* of db_o.cbData * 2 + 1 (2 chars for the hex representation of each byte, plus 1 for the NUL terminator), then write 2 chars sequentially for each BYTE from pbData.
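A minimal sketch of that accumulation (assumes windows.h for BYTE/DWORD; the helper name to_hex is just for illustration):

#include <windows.h>
#include <string>
#include <cstdio>

// Sketch: render an encrypted blob as a hex string, two chars per byte.
static std::string to_hex(const BYTE* data, DWORD len)
{
    std::string hex;
    hex.reserve(len * 2);
    char byteHex[3];
    for (DWORD i = 0; i < len; ++i) {
        sprintf_s(byteHex, "%02x", static_cast<unsigned>(data[i]));
        hex.append(byteHex, 2);
    }
    return hex;
}
// usage: std::string enc = to_hex(db_o.pbData, db_o.cbData);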
If crypt32.lib is not available, you can use LoadLibrary/GetProcAddress to load the function(s) dynamically.
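For example, a sketch of that dynamic-loading route; the typedef mirrors the documented CryptProtectData signature and error handling is kept minimal:

#include <windows.h>
#include <wincrypt.h>

typedef BOOL (WINAPI *CryptProtectDataFn)(
    DATA_BLOB*, LPCWSTR, DATA_BLOB*, PVOID,
    CRYPTPROTECT_PROMPTSTRUCT*, DWORD, DATA_BLOB*);

// Sketch: resolve CryptProtectData at run time instead of linking crypt32.lib.
static CryptProtectDataFn LoadCryptProtectData()
{
    HMODULE hCrypt32 = LoadLibraryW(L"crypt32.dll");
    if (!hCrypt32)
        return NULL;
    return (CryptProtectDataFn)GetProcAddress(hCrypt32, "CryptProtectData");
}
// usage: CryptProtectDataFn fn = LoadCryptProtectData();
//        if (fn) fn(&db_i, NULL, NULL, NULL, NULL,
//                   CRYPTPROTECT_UI_FORBIDDEN, &db_o);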

Trouble getting footage path after effects sdk

Having trouble with the After Effects SDK.
Basically I'm looping through all of the footage project items and trying to get the footage path from them. Here's the code I have inside of the loop.
AEGP_ItemType itemType = NULL;
ERR(suites.ItemSuite6()->AEGP_GetNextProjItem(projH, itemH, &itemH));
if (itemH == NULL) {
    break;
}
ERR(suites.ItemSuite6()->AEGP_GetItemType(itemH, &itemType));
if (itemType == AEGP_ItemType_FOOTAGE) {
    numFootage++;
    AEGP_FootageH footageH;
    ERR(suites.FootageSuite5()->AEGP_GetMainFootageFromItem(itemH, &footageH));
    A_char newItemName[AEGP_MAX_ITEM_NAME_SIZE] = {""};
    wchar_t footagePath[AEGP_MAX_PATH_SIZE];
    ERR(suites.ItemSuite6()->AEGP_GetItemName(itemH, newItemName));
    AEGP_MemHandle pathH = NULL;
    ERR(suites.FootageSuite5()->AEGP_GetFootagePath(footageH, 0, AEGP_FOOTAGE_MAIN_FILE_INDEX, &pathH));
    ERR(suites.MemorySuite1()->AEGP_LockMemHandle(pathH, reinterpret_cast<void**>(&footagePath)));
    std::wstring_convert<std::codecvt_utf8<wchar_t>> converter;
    const std::string utf8_string = converter.to_bytes(footagePath);
    std::ofstream tempFile;
    tempFile.open("C:\\temp\\log1.txt");
    tempFile << utf8_string;
    tempFile.close();
    ERR(suites.MemorySuite1()->AEGP_UnlockMemHandle(pathH));
    ERR(suites.MemorySuite1()->AEGP_FreeMemHandle(pathH));
}
I'm getting the footagePath
I then convert the UTF-16 (wchar_t) pointer to a UTF-8 string
Then I write that UTF-8 string to a temp file and it always outputs the following.
펐㛻
Can I please have some guidance on this? Thanks!
I was able to figure out the answer.
http://forums.adobe.com/message/5112560#5112560
This is what was wrong.
It was because the executing code was in a loop and I wasn't allocating strings with the new operator.
This was the line that needed a new on it.
wchar_t footagePath[AEGP_MAX_PATH_SIZE];
Another piece of information that would have been useful to know is that not ALL footage items have paths.
If they don't have a path, it will return an empty string.
This is the code I ended up with.
if (itemType == AEGP_ItemType_FOOTAGE) {
    A_char* newItemName = new A_char[AEGP_MAX_ITEM_NAME_SIZE];
    ERR(suites.ItemSuite6()->AEGP_GetItemName(newItemH, newItemName));
    AEGP_MemHandle nameH = NULL;
    AEGP_FootageH footageH = NULL;
    char* footagePathStr = new char[AEGP_MAX_PATH_SIZE];
    ERR(suites.FootageSuite5()->AEGP_GetMainFootageFromItem(newItemH, &footageH));
    if (footageH) {
        suites.FootageSuite5()->AEGP_GetFootagePath(footageH, 0, AEGP_FOOTAGE_MAIN_FILE_INDEX, &nameH);
        if (nameH) {
            tries++;
            AEGP_MemSize size = 0;
            A_UTF16Char *nameP = NULL;
            suites.MemorySuite1()->AEGP_GetMemHandleSize(nameH, &size);
            suites.MemorySuite1()->AEGP_LockMemHandle(nameH, (void **)&nameP);
            std::wstring test = L"HELLO";
            std::string output;
            int len = WideCharToMultiByte(CP_OEMCP, 0, (LPCWSTR)nameP, -1, NULL, 0, NULL, NULL);
            if (len > 1) {
                footagePathStr = new char[len];
                int len2 = WideCharToMultiByte(CP_OEMCP, 0, (LPCWSTR)nameP, -1, footagePathStr, len, NULL, NULL);
                ERR(suites.MemorySuite1()->AEGP_UnlockMemHandle(nameH));
                suites.MemorySuite1()->AEGP_FreeMemHandle(nameH);
            }
        }
    }
}
You already had the data as a wide string; why convert it to a byte array and then force it into a plain std::string? In general, you should avoid converting strings via byte arrays. My knowledge of the C++ standard library is a few years old now, but the problem may simply be that std::string has no real UTF-8 support.
Do you really need to store it as UTF-8? If it is just for logging, try using std::wofstream (the wide one), remove the conversion and the std::string completely, and just write the wstring directly to the stream instead. If you do need UTF-8, convert it explicitly, as sketched below.
Also, it is entirely possible that everything went correctly and it is just your file viewer that is misbehaving. Examine your log file with a hex editor and check whether the beginning of the file contains Unicode byte-order markers such as 0xFF 0xFE:
if it has some, and you wrote data in an encoding that does not match the markers, then that's the problem;
if it has none, then try adding the correct markers. Maybe your file viewer simply did not notice the file is Unicode of that type and misread it. Byte-order marks help readers decode the data properly.
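If you do end up wanting UTF-8, a sketch of an explicit conversion with WideCharToMultiByte(CP_UTF8, ...); the helper name utf16_to_utf8 is just for illustration, and footagePathW stands in for the locked UTF-16 path buffer:

#include <windows.h>
#include <string>

// Sketch: convert a NUL-terminated UTF-16 string to UTF-8 explicitly.
static std::string utf16_to_utf8(const wchar_t* footagePathW)
{
    std::string utf8;
    int len = WideCharToMultiByte(CP_UTF8, 0, footagePathW, -1,
                                  NULL, 0, NULL, NULL);
    if (len > 1) {
        utf8.resize(len);                    // includes room for the NUL
        WideCharToMultiByte(CP_UTF8, 0, footagePathW, -1,
                            &utf8[0], len, NULL, NULL);
        utf8.resize(len - 1);                // drop the trailing NUL
    }
    return utf8;
}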

Wincrypt: Unable to decrypt file which was encrypted in C#. NTE_BAD_DATA at CryptDecrypt

I am trying to decrypt a piece of a file with wincrypt and I cannot seem to make this function decrypt correctly. The bytes are encrypted with the RC2 implementation in C# and I am supplying the same password and IV to both the encryption and decryption process (encrypted in C#, decrypted in c++).
All of my functions along the way are returning true until the final "CryptDecrypt" function. Instead of me typing out any more, here is the function:
static char* DecryptMyFile(char *input, char *password, int size)
{
    HCRYPTPROV provider = NULL;
    if (CryptAcquireContext(&provider, NULL, MS_ENHANCED_PROV, PROV_RSA_FULL, 0))
        {printf("Context acquired.");}
    else
    {
        if (GetLastError() == NTE_BAD_KEYSET)
        {
            if (CryptAcquireContext(&provider, 0, NULL, PROV_RSA_FULL, CRYPT_NEWKEYSET))
                {printf("new key made.");}
            else
            {
                printf("Could not acquire context.");
            }
        }
        else
            {printf("Could not acquire context.");}
    }

    HCRYPTKEY key = NULL;
    HCRYPTHASH hash = NULL;
    if (CryptCreateHash(provider, CALG_MD5, 0, 0, &hash))
        {printf("empty hash created.");}
    else
        {printf("could not create hash.");}

    if (CryptHashData(hash, (BYTE *)password, strlen(password), 0))
        {printf("data buffer is added to hash.");}
    else
        {printf("error. could not add data buffer to hash.");}

    if (CryptDeriveKey(provider, CALG_RC2, hash, 0, &key))
        {printf("key derived.");}
    else
        {printf("Could not derive key.");}

    DWORD dwKeyLength = 128;
    if (CryptSetKeyParam(key, KP_EFFECTIVE_KEYLEN, reinterpret_cast<BYTE*>(&dwKeyLength), 0))
        {printf("success");}
    else
        {printf("failed.");}

    BYTE IV[8] = {0,0,0,0,0,0,0,0};
    if (CryptSetKeyParam(key, KP_IV, IV, 0))
        {printf("worked");}
    else
        {printf("faileD");}

    DWORD dwCount = size;
    BYTE *decrypted = new BYTE[dwCount + 1];
    memcpy(decrypted, input, dwCount);
    decrypted[dwCount] = 0;

    if (CryptDecrypt(key, 0, true, 0, decrypted, &dwCount))
        {printf("succeeded");}
    else
        {printf("failed");}

    return (char *)decrypted;
}
input is the data passed to the function, encrypted. password is the same password used to encrypt the data in C#. size is the size of the data while encrypted.
All of the above functions return true until CryptDecrypt, which I cannot seem to figure out why. At the same time, I'm not sure how the CryptDecrypt function would possibly edit my "decrypted" variable, since I am not passing a reference of it.
Any help or advice onto why this is not working would be greatly appreciated. This is my first endeavour with wincrypt and first time using C++ in years.
If it is of any more help, as well, this is my encryption (in C#):
public static byte[] EncryptString(byte[] input, string password)
{
    PasswordDeriveBytes pderiver = new PasswordDeriveBytes(password, null);
    byte[] ivZeros = new byte[8];
    byte[] pbeKey = pderiver.CryptDeriveKey("RC2", "MD5", 128, ivZeros);
    RC2CryptoServiceProvider RC2 = new RC2CryptoServiceProvider();
    // using an empty initialization vector for convenience
    byte[] IV = new byte[8];
    ICryptoTransform encryptor = RC2.CreateEncryptor(pbeKey, IV);
    MemoryStream msEncrypt = new MemoryStream();
    CryptoStream csEncrypt = new CryptoStream(msEncrypt, encryptor, CryptoStreamMode.Write);
    csEncrypt.Write(input, 0, input.Length);
    csEncrypt.FlushFinalBlock();
    return msEncrypt.ToArray();
}
I have confirmed that my hash value in C++ is identical to my key in C#, created by PasswordDeriveBytes.CryptDeriveKey
First, as in my comment, use GetLastError() so you know why it failed. I'll assume that you get NTE_BAD_DATA; all the other errors are much easier to deal with, since they basically mean you missed some step in the API call sequence.
The typical reason why CryptDecrypt fails with NTE_BAD_DATA is that you're decrypting the last block of a block cipher (as you are) and the decrypted padding bytes are incorrect. This can happen if the input is truncated (not all encrypted bytes were saved to the file) or if the key is incorrect.
I would suggest you take this methodically, since there are so many places where this can fail that will all manifest only at CryptDecrypt time:
Ensure that the file you encrypt in C# can be decrypted in C#. This eliminates any file-save truncation issues.
Try to encrypt and decrypt with a fixed, hard-coded key first (not password-derived); this will ensure that your key setup and IV initialization are correct (as well as the padding mode and cipher chaining mode). A sketch of importing a fixed key follows this list.
Ensure that the password derivation process arrives at the same hash. Things like ANSI vs. Unicode or a terminating 0 can wreak havoc on the MD5 hash and result in wildly different keys from apparently the same password.
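As a sketch of the second item, assuming the same provider handle as in your code; the 16 key bytes are arbitrary test values and ImportFixedRc2Key is just an illustrative name:

#include <windows.h>
#include <wincrypt.h>
#include <string.h>

// Sketch: import a fixed 128-bit RC2 key through a PLAINTEXTKEYBLOB so
// encryption and decryption can be tested without password derivation.
static HCRYPTKEY ImportFixedRc2Key(HCRYPTPROV provider)
{
    static const BYTE fixedKey[16] = {
        0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
        0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f };

    struct {
        BLOBHEADER hdr;
        DWORD      cbKey;
        BYTE       key[16];
    } blob;

    blob.hdr.bType    = PLAINTEXTKEYBLOB;
    blob.hdr.bVersion = CUR_BLOB_VERSION;
    blob.hdr.reserved = 0;
    blob.hdr.aiKeyAlg = CALG_RC2;
    blob.cbKey        = sizeof(fixedKey);
    memcpy(blob.key, fixedKey, sizeof(fixedKey));

    HCRYPTKEY key = 0;
    if (!CryptImportKey(provider, (BYTE*)&blob, sizeof(blob), 0, 0, &key))
        return 0;

    // Same effective key length and IV as the password-derived path.
    DWORD effLen = 128;
    BYTE iv[8] = {0};
    CryptSetKeyParam(key, KP_EFFECTIVE_KEYLEN, (BYTE*)&effLen, 0);
    CryptSetKeyParam(key, KP_IV, iv, 0);
    return key;
}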
Some people have discovered issues when moving between operating systems.
The CryptDeriveKey call uses a "default key length" based on the operating system and algorithm chosen. For RC2, the default generated key length is 40 bits on Windows 2000 and 128 bits on Windows 2003. This results in a "BAD DATA" return code when the generated key is used in a CryptDecrypt call.
Presumably this is related to "garbage" appearing at the end of the final buffer after trying to apply a 128 bit key to decrypt a 40 bit encrypted stream. The error code typically indicates bad padding bytes - but the root cause may be a key generation issue.
To generate a 40-bit encryption key, pass (40 << 16) in the flags field of the CryptDeriveKey call; the upper 16 bits of the flags specify the key size.
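As a sketch, assuming the provider and hash handles from the code above:

// Sketch: the upper 16 bits of dwFlags select the key size, so this
// requests a 40-bit RC2 key instead of the provider default.
HCRYPTKEY key40 = 0;
if (!CryptDeriveKey(provider, CALG_RC2, hash, (40 << 16), &key40))
    printf("CryptDeriveKey failed: 0x%08lx\n", GetLastError());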