I would like to generate a random string with OpenSSL and use this as a salt in a hashing function afterwards (will be Argon2). Currently I'm generating the random data this way:
if (length < CryptConfig::sMinSaltLen) {
    return 1;
}
if (!sInitialized) {
    RAND_poll();
    sInitialized = true;
}
unsigned char *buf = new unsigned char[length];
if (!sInitialized || !RAND_bytes(buf, length)) {
    return 1;
}
salt = std::string(reinterpret_cast<char*>(buf));
delete[] buf;
return 0;
But printing salt with std::cout doesn't look like a proper string (it contains control characters and other garbage). This is most likely my own fault.
Am I using the wrong functions of OpenSSL to generate the random data?
Or is my conversion from buf to string faulty?
Random data is random data. That's what you're asking for and that's exactly what you are getting. Your salt variable is a proper string that happens to contain unprintable characters. If you wish to have printable characters, one way of achieving that is using base64 encoding, but that will blow up its length. Another option is to somehow discard non-printable characters, but I don't see any mechanism to force RAND_bytes to do this. I guess you could simply fetch random bytes in a loop until you get length printable characters.
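If you go the filtering route, here is a minimal sketch of the idea (my own, not tested against your code): keep asking RAND_bytes for more data and copy out only the printable characters until the salt is long enough.
#include <openssl/rand.h>
#include <cctype>
#include <string>

// Keep requesting random bytes and retain only printable ones.
static int make_printable_salt(std::string &salt, std::size_t length)
{
    salt.clear();
    unsigned char chunk[64];
    while (salt.size() < length) {
        if (RAND_bytes(chunk, sizeof(chunk)) != 1)
            return 1;                                  // RNG failure
        for (unsigned char c : chunk) {
            if (salt.size() < length && std::isprint(c))
                salt.push_back(static_cast<char>(c));
        }
    }
    return 0;
}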
If base64 encoding is acceptable to you, here is an example of how to use the OpenSSL base64 encoder, extracted from Joe Linoff's Cipher library:
string Cipher::encode_base64(uchar* ciphertext,
                             uint ciphertext_len) const
{
    DBG_FCT("encode_base64");
    BIO* b64 = BIO_new(BIO_f_base64());
    BIO* bm  = BIO_new(BIO_s_mem());
    b64 = BIO_push(b64, bm);

    if (BIO_write(b64, ciphertext, ciphertext_len) < 2) {
        throw runtime_error("BIO_write() failed");
    }
    if (BIO_flush(b64) < 1) {
        throw runtime_error("BIO_flush() failed");
    }

    BUF_MEM *bptr = 0;
    BIO_get_mem_ptr(b64, &bptr);

    uint len = bptr->length;
    char* mimetext = new char[len + 1];
    memcpy(mimetext, bptr->data, bptr->length - 1);
    mimetext[bptr->length - 1] = 0;
    BIO_free_all(b64);

    string ret = mimetext;
    delete [] mimetext;
    return ret;
}
To this code, I suggest adding BIO_set_flags(b64, BIO_FLAGS_BASE64_NO_NL), because otherwise you'll get a newline character inserted after every 64 characters. See OpenSSL's -A switch for details. Note that with this flag set there is no trailing newline to strip, so the code above should copy the full bptr->length bytes instead of bptr->length - 1.
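Tying that back to the snippet in the question, a minimal sketch (my wording, not the original code) of keeping the raw salt with an explicit length and only base64-encoding it when a printable form is needed; EVP_EncodeBlock is mentioned purely as one possible encoder:
// Sketch only: keep the raw bytes with an explicit length so embedded
// zero bytes survive, then base64-encode them for display or storage.
unsigned char *buf = new unsigned char[length];
if (RAND_bytes(buf, length) != 1) {
    delete[] buf;
    return 1;
}
// The (pointer, length) constructor does not stop at the first 0x00 byte.
salt = std::string(reinterpret_cast<char*>(buf), length);
delete[] buf;

// For a printable representation, feed salt.data()/salt.size() to a
// base64 encoder such as encode_base64() above (or OpenSSL's EVP_EncodeBlock).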
I have been struggling with a weird problem with RSA_verify. I am trying to RSA_sign using C and RSA_verify using C++. I have generated the private key and certificate using OpenSSL commands.
message = "1.2.0:08:00:27:2c:88:77"
When I use the message above, generate a hash and use RSA_sign to sign the digest, I get a signature of length 256 (strlen(signature)) and also the length returned from RSA_sign is 256. I use this length to verify and verification succeeds.
But when I use a message = "1.2.0:08:00:27:2c:88:08", the signature length is 60 and RSA_sign returns 256. When I use this length 60 to verify it fails. It fails to verify with length 256 as well. Also for some messages (1.2.0:08:00:27:2c:88:12) the signature generated is zero.
I am using SHA256 to hash the message and NID_SHA256 to RSA_sign and RSA_verify this digest. I have used -sha256 while generating the keys using the OpenSSL command.
I am forming the message by parsing an XML file reading some of the tags using some string operation.
Kindly suggest.
Below is the code used to sign.
int main(void)
{
    int ret;
    RSA *prikey;
    char *data;
    unsigned char *signature;
    int slen = 0;
    FILE *fp_priv = NULL;
    char *privfilepath = "priv.pem";
    unsigned char *sign = NULL;

    ERR_load_crypto_strings();

    data = generate_hash();
    printf("Message after generate hash %s: %d\n", data, strlen(data));

    fp_priv = fopen(privfilepath, "r");
    if (fp_priv == NULL)
    {
        printf("Private key path not found..");
        return 1;
    }

    prikey = RSA_new();
    prikey = PEM_read_RSAPrivateKey(fp_priv, &prikey, NULL, NULL);
    if (prikey == NULL)
    {
        printf("Private key returned is NULL\n");
        return 1;
    }

    signature = (unsigned char*)malloc(RSA_size(prikey));
    if (signature == NULL)
        return 1;

    if (RSA_sign(NID_sha256, (unsigned char*)data, strlen(data),
                 signature, &slen, prikey) != 1) {
        ERR_print_errors_fp(stdout);
        return 1;
    }

    printf("Signature length while signing... %d : %d : %d ",
           strlen(signature), slen, strlen(data));

    FILE *sig_bin = fopen("sig_bin", "w");
    fprintf(sig_bin, "%s", signature);
    fclose(sig_bin);
    system("xxd -p -c256 sig_bin sig_hex");

    RSA_free(prikey);
    if (signature)
        free(signature);
    return 0;
}
One very, very important thing to learn about C is that it has two distinct types with the same name.
char*: This represents the beginning of a character string. You can do things like strstr or strlen on it.
(You should never use strstr or strlen, but rather strnstr and strnlen, but that's a different problem.)
char*: This represents the beginning of a data blob (aka byte array, aka octet string). You can't meaningfully apply strlen etc. to it.
RSA_sign uses the latter. It returns "data", not "a message". So, in your snippet
printf("Signature length while signing... %d : %d : %d ",
strlen(signature), slen, strlen(data));
FILE * sig_bin = fopen("sig_bin", "w");
fprintf(sig_bin, "%s", signature);
fclose(sig_bin);
data came from a function called generate_hash(); it's probably non-textual, so strlen doesn't apply. signature definitely is data, so strlen doesn't apply. fprintf also doesn't apply, for the same reasons. These functions identify the end of the character string by the first occurrence of a zero-byte (0x00, '\0', etc). But 0x00 is perfectly legal to have in a signature, or a hash, or lots of "data".
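To make that concrete, a tiny illustration (mine, not from the original post) of why strlen under-reports the size of binary data:
#include <string.h>

unsigned char blob[] = { 0xDE, 0xAD, 0x00, 0xBE, 0xEF };
size_t real_len = sizeof(blob);                // 5 bytes of data
size_t str_len  = strlen((const char *)blob);  // 2 -- stops at the 0x00 byte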
The length of the output of RSA_sign is written into the address passed into the 5th parameter. You passed &slen (address-of slen), so once the function exits (successfully) slen is the length of the signature. Note that it will only very rarely match strlen(signature).
To write your signature as binary, you should use fwrite, such as fwrite(signature, sizeof(char), slen, sig_bin);. If you want it as text, you should Base64-encode your data.
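As a hedged sketch (file name and variable names are taken from the snippets above; the verifying side assumes you already have the digest and the public key), writing and reading the signature in binary could look like this:
// Signing side: write exactly slen raw bytes, in binary mode.
FILE *sig_bin = fopen("sig_bin", "wb");        // "b": do not translate bytes
if (sig_bin) {
    fwrite(signature, 1, slen, sig_bin);       // slen from RSA_sign, not strlen()
    fclose(sig_bin);
}

// Verifying side: read the bytes back and pass the real count to RSA_verify.
FILE *in = fopen("sig_bin", "rb");
unsigned char sigbuf[256];                     // RSA_size(pubkey) bytes for a 2048-bit key
size_t siglen = in ? fread(sigbuf, 1, sizeof(sigbuf), in) : 0;
if (in) fclose(in);
// RSA_verify(NID_sha256, digest, digest_len, sigbuf, (unsigned int)siglen, pubkey);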
I have a native program that takes a password passed in on the command line. That password is showing up in server logs, so I want to obfuscate it by encrypting it before putting it on the command line. I would then decrypt it in the program and use it as before. The idea is that I use PowerShell to create a SecureString with the password, then make it a printable text string using ConvertFrom-SecureString. That string is then passed in on the command line to the native C++ program. From there, I decode it back to its binary encrypted form, and then decrypt it back to the original plain-text password. Easy, right?
From scant documentation, I think ConvertFrom-SecureString does a Base64 encoding to turn the binary SecureString into printable text. Can anyone confirm that?
I recover the binary bytes using ATL::Base64Decode(). This appears to work when comparing the first 20 bytes of the original and the decoded data.
After that I'm trying to decrypt the SecureString bytes. Again, some documentation appears to imply that the SecureString encryption is done using a Machine Key (or User Session Key). Based on this, I'm trying to decrypt using the DPAPI CryptUnprotectData method. Here, though, I get a decrypt failure with "(0x8007000d) The data is invalid". Does this sound like it will work? If so, any idea where I'm off course?
Here's the decrypt method...
// Decrypts an encoded and encrypted string with DPAPI and Machine Key, returns the decrypted string
static HRESULT Decrypt(CStringW wsEncryptedBase64, OUT CStringW& wsPlainText)
{
    HRESULT hr = S_OK;
    DATA_BLOB dbIn = { 0 };
    DATA_BLOB dbOut = { 0 };

    const wchar_t *pos = wsEncryptedBase64.GetBuffer(wsEncryptedBase64.GetLength());
    dbIn.cbData = wsEncryptedBase64.GetLength() / 2;
    dbIn.pbData = (byte*)malloc(dbIn.cbData * sizeof(byte));

    int num = 0;
    for (size_t i = 0; i < dbIn.cbData; i += 1)
    {
        swscanf_s(pos, L"%2hhx", &num);
        dbIn.pbData[i] = num;
        pos += sizeof(wchar_t);
    }

    if (::CryptUnprotectData(&dbIn, NULL, NULL, NULL, NULL,
                             CRYPTPROTECT_UI_FORBIDDEN, &dbOut))
    {
        wsPlainText = CStringW(reinterpret_cast<wchar_t const*>(dbOut.pbData), dbOut.cbData / 2);
    }
    else
    {
        hr = HRESULT_FROM_WIN32(::GetLastError());
        if (hr == S_OK)
        {
            hr = SEC_E_DECRYPT_FAILURE;
        }
    }
    return hr;
}
From what I can tell looking at the binary in dotPeek, ConvertFrom-SecureString is using SecureStringToCoTaskMemUnicode to convert the secure string payload to an array of bytes. That array of bytes is returned in hex form, e.g. byte.ToString("x2").
This assumes that you are using DPAPI as you say and not using the Key or SecureKey parameters on ConvertFrom-SecureString.
So in your C++ program do not use Base64Decode; just parse every two chars as a hex byte. Then call CryptUnprotectData on the resulting byte array (stuffed into the DATA_BLOB).
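A minimal sketch of that advice (variable names mirror the question's Decrypt method; this assumes the default DPAPI protection, i.e. no -Key parameter was used):
// Hex-decode the ConvertFrom-SecureString output, then let DPAPI decrypt it.
DATA_BLOB dbIn = { 0 }, dbOut = { 0 };
dbIn.cbData = wsEncryptedBase64.GetLength() / 2;        // really hex, not base64
dbIn.pbData = (BYTE*)malloc(dbIn.cbData);
for (DWORD i = 0; i < dbIn.cbData; ++i)
{
    unsigned int byteVal = 0;
    swscanf_s((LPCWSTR)wsEncryptedBase64.Mid(i * 2, 2), L"%2x", &byteVal);
    dbIn.pbData[i] = (BYTE)byteVal;
}
if (::CryptUnprotectData(&dbIn, NULL, NULL, NULL, NULL,
                         CRYPTPROTECT_UI_FORBIDDEN, &dbOut))
{
    // The decrypted blob is the UTF-16 characters of the original SecureString.
    wsPlainText = CStringW((const wchar_t*)dbOut.pbData,
                           dbOut.cbData / sizeof(wchar_t));
    ::LocalFree(dbOut.pbData);
}
free(dbIn.pbData);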
I have RSA encrypted my string but it's now an unsigned char *. How do I create a human-readable std::string that I can output for the user? I want to use it in an Amazon signed URL. Here are the meat and potatoes of the code from GitHub:
unsigned char* RSA_SHA1_Sign(std::string policy, RSA *privateKey) throw(std::runtime_error)
{
    // sha1 digest the data
    unsigned char hash[SHA_DIGEST_LENGTH] = {'0'};
    SHA1((const unsigned char *)policy.c_str(), policy.length(), hash);

    // Sign the data
    int rsaSize = RSA_size(privateKey);
    // std::unique_ptr<unsigned char[]> signedData(new unsigned char[size]); // if c++11 available
    unsigned char *signedData = (unsigned char *)malloc(sizeof(unsigned char) * rsaSize);
    unsigned int signedSize = 0;

    // use RSA_sign instead of RSA_private_encrypt
    if (!RSA_sign(NID_sha1, hash, SHA_DIGEST_LENGTH, signedData, &signedSize, privateKey)) {
        throw std::runtime_error("Failed to sign");
    }
    return signedData;
}
std::string base64Encode(unsigned char *signedData)
{
    // prepare
    BIO *b64 = BIO_new(BIO_f_base64());
    BIO *bmem = BIO_new(BIO_s_mem());
    BIO_set_flags(b64, BIO_FLAGS_BASE64_NO_NL);
    b64 = BIO_push(b64, bmem);

    // write
    BIO_write(b64, signedData, 256);
    BIO_flush(b64);

    // create string
    BUF_MEM *bptr;
    BIO_get_mem_ptr(b64, &bptr);
    std::string base64String(bptr->data);
    BIO_free_all(b64);
    return base64String;
}
int main(int argc, const char * argv[]) {
    RSA *privateKey = createRSAFromPrivateKeyFile("/path/to/privatekey");
    std::string sourceString = "testing";
    std::string signature = RSA_SHA1_Sign(sourceString, privateKey);
    std::string encodedSignature = base64Encode(signature);
    std::cout << "RESULT: " << encodedSignature << std::endl;
    return 0;
}
UPDATE: I was using the wrong sign function. Once updated, using base64 encode gave me the correct string.
RSA_PKCS1_PADDING
PKCS #1 v1.5 padding. This function does not handle the algorithmIdentifier specified in PKCS #1.
When generating or verifying PKCS #1 signatures, RSA_sign(3) and RSA_verify(3) should be used.
To save all the data, use this std::string constructor: std::string(const char *data, size_t size). The size matters because the output MIGHT contain a null character.
To send it to Amazon over a URL, consider using base64 encoding, again because the encrypted data might contain NULs and other shenanigans.
Firstly, to get it into an std::string object, which will probably be helpful in general:
std::string s{reinterpret_cast<char*>(signedData), size};
However, to then make that compatible with Amazon's scheme you'll need to pick out (or write your own) Base64 library and URL encoder to escape special URL chars. A cursory search of Google or StackOverflow will provide you with what you need in this respect and it's beyond the scope of this question to write out how to do Base64 encoding and URL escaping in C++.
Also, since you're using C++, consider std::unique_ptr<unsigned char[]> rather than straight-up malloc();
std::unique_ptr<unsigned char[]> signedData{new unsigned char[size]};
I'm having a lot of trouble dealing with some characters in a URL. Let's suppose that I have the following URL:
http://localhost/somewere/myLibrary.dll/rest/something?parameter=An%C3%A1lisis
which must be converted to:
http://localhost/somewere/myLibrary.dll/rest/something?parameter=Análisis
In order to deal with the decoding of diacritic letters, I've decided to use the InternetCanonicalizeUrl function, because the application that I'm working on is only going to run on Windows and I don't want to install additional libraries. The helper function I've used is the following:
String DecodeURL(const String &a_URL)
{
    String result;
    unsigned long size = a_URL.Length() * 2;
    wchar_t *buffer = new wchar_t[size];
    if (InternetCanonicalizeUrlW(a_URL.c_str(), buffer, &size, ICU_DECODE | ICU_NO_ENCODE))
    {
        result = buffer;
    }
    delete [] buffer;
    return result;
}
That works reasonably well for almost any URL passed through it, except for diacritic letters. My example URL is decoded as follows:
http://localhost/somewere/myLibrary.dll/rest/something?parameter=Análisis
The IDE I'm working with is CodeGear™ C++Builder® 2009 (that's why I'm forced to use String instead of std::string). I've also tried with an AnsiString and char buffer version, with the same results.
Any hint/alternative about how to deal with this error?
Thanks in advance.
InternetCanonicalizeUrl() is doing the right thing, you just have to take into account what it is actually doing.
URLs do not support Unicode (IRIs do), so Unicode data has to be charset-encoded into byte octets and then those octets are url-encoded using %HH sequences as needed. In this case, the data was encoded as UTF-8 (not uncommon in many URLs nowadays, but also not guaranteed), but InternetCanonicalizeUrl() has no way of knowing that, as URLs do not have a syntax for describing which charset is being used. All it can do is decode %HH sequences to the relevant byte octet values; it cannot charset-decode the octets for you. In the case of the Unicode version, InternetCanonicalizeUrlW() returns those byte values as-is as wchar_t elements. But either way, you have to charset-decode the octets yourself to recover the original Unicode data.
So what you can do in this case is copy the decoded data to a UTF8String and then assign/return that as a String so it gets decoded to UTF-16. That will only work for UTF-8 encoded URLs, of course. For example:
String DecodeURL(const String &a_URL)
{
    DWORD size = 0;
    if (!InternetCanonicalizeUrlW(a_URL.c_str(), NULL, &size, ICU_DECODE | ICU_NO_ENCODE))
    {
        if (GetLastError() == ERROR_INSUFFICIENT_BUFFER)
        {
            String buffer;
            buffer.SetLength(size-1);
            if (InternetCanonicalizeUrlW(a_URL.c_str(), buffer.c_str(), &size, ICU_DECODE | ICU_NO_ENCODE))
            {
                UTF8String utf8;
                utf8.SetLength(buffer.Length());
                for (int i = 1; i <= buffer.Length(); ++i)
                    utf8[i] = (char) buffer[i];
                return utf8;
            }
        }
    }
    return String();
}
Alternatively:
// encoded URLs are always ASCII, so it is safe
// to pass an encoded URL UnicodeString as an
// AnsiString...
String DecodeURL(const AnsiString &a_URL)
{
    DWORD size = 0;
    if (!InternetCanonicalizeUrlA(a_URL.c_str(), NULL, &size, ICU_DECODE | ICU_NO_ENCODE))
    {
        if (GetLastError() == ERROR_INSUFFICIENT_BUFFER)
        {
            UTF8String buffer;
            buffer.SetLength(size-1);
            if (InternetCanonicalizeUrlA(a_URL.c_str(), buffer.c_str(), &size, ICU_DECODE | ICU_NO_ENCODE))
            {
                return buffer;
            }
        }
    }
    return String();
}
FYI, C++Builder ships with Indy pre-installed. Indy has a TIdURI class, which can decode URLs and take charsets into account, e.g.:
#include <IdGlobal.hpp>
#include <IdURI.hpp>

String DecodeURL(const String &a_URL)
{
    return TIdURI::URLDecode(a_URL, enUTF8);
}
In any case, you have to know the charset used to encode the URL data. If you do not, all you can do is decode the raw octets and then use heuristic analysis to guess what the charset might be, but that is not 100% reliable for non-ASCII and non-UTF charsets.
I am using this simple function for decrypting an AES-encrypted string:
unsigned char *aes_decrypt(EVP_CIPHER_CTX *e, unsigned char *ciphertext, int *len)
{
    int p_len = *len, f_len = 0;
    unsigned char *plaintext = (unsigned char*)malloc(p_len + 128);
    memset(plaintext, 0, p_len);

    EVP_DecryptInit_ex(e, NULL, NULL, NULL, NULL);
    EVP_DecryptUpdate(e, plaintext, &p_len, ciphertext, *len);
    EVP_DecryptFinal_ex(e, plaintext + p_len, &f_len);

    *len = p_len + f_len;
    return plaintext;
}
The problem is that len returns a value that does not match the entire decoded string. What could be the problem?
When you say "string", I assume you mean a zero-terminated textual string. The encryption process depends on a cipher block size, and oftentimes padding. What's actually being encoded and decoded is up to the application: it's all binary data to the cipher. If your textual string is smaller than what's returned from the decrypt process, your application needs to determine the useful part. So, for example, if you KNOW the string inside the results is zero-terminated, you can get the length with a simple strlen. That's risky of course if you can't guarantee the input; you're probably better off searching the results for a null up to the decoded length.
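A small sketch of that idea (ctx, ciphertext and ciphertext_len are assumed to already exist): look for the terminator, but never past the decrypted length.
#include <cstdlib>
#include <cstring>
#include <string>

int len = ciphertext_len;
unsigned char *plain = aes_decrypt(ctx, ciphertext, &len);   // len = plaintext byte count

// If the plaintext is supposed to be a zero-terminated C string, find the
// terminator without reading past the decrypted length.
const void *nul = memchr(plain, '\0', len);
std::size_t text_len = nul
    ? static_cast<std::size_t>(static_cast<const unsigned char*>(nul) - plain)
    : static_cast<std::size_t>(len);
std::string text(reinterpret_cast<const char*>(plain), text_len);
free(plain);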
If you are using the cipher in ECB, CBC or some other chaining mode, the plain text must be padded to a length that is a multiple of the cipher block length; see the PKCS#5 standard for example. High-level functions like the ones in OpenSSL can perform the padding transparently for the programmer, so the encrypted text can be larger than the plain text by up to an additional cipher block.
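For illustration (assuming AES's 16-byte block and PKCS#5/#7-style padding), the size relationship looks like this:
// With PKCS#5/#7 padding the ciphertext length is always rounded up to the
// next multiple of the block size (a full extra block if already a multiple).
const size_t block = 16;                                     // AES block size
size_t plain_len  = 13;
size_t padded_len = plain_len + (block - plain_len % block); // 16
// plain_len = 32  ->  padded_len = 48 (16 bytes of padding, each 0x10)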