I am new to socket programming, so be kind :)
I am writing a client-server application in C++ using OpenSSL. So far I have generated the public/private key pairs for the client and server and exchanged the public keys over the network. Now comes the part where I want to encrypt my client's message with the server's public key, but my public_encrypt function returns gibberish. I know the methods I am using are deprecated and there are better ones, but the purpose here is just to get my hands dirty.
Below is the function that invokes the encryption API. (Ignore the if branch; it's for sending the client's public key.)
#define RSA_SIZE 256
void sendMessage(int clientFD, uint16_t type, char *data, serverState *server){
    uint16_t length = strlen(data);
    unsigned char message[MESSAGE_SIZE];
    if (server->state == 0)
    {
        memcpy(message, (char *)&length, sizeof(length));
        memcpy(message + 2, (char *)&type, sizeof(type));
        memcpy(message + 4, data, length);
        send(clientFD, message, 4 + length, 0);
        server->state = 1;
    }
    else
    {
        unsigned char encrypted[RSA_SIZE] = {0};
        length = public_encrypt(reinterpret_cast<unsigned char *>(data), length, server->key, encrypted);
        assert(length != -1);
        printf("%s\n", encrypted);
        memcpy(message, (char *)&length, sizeof(length));
        memcpy(message + 2, (char *)&type, sizeof(type));
        memcpy(message + 4, encrypted, length);
        send(clientFD, message, 4 + length, 0);
    }
}
This is the code for the encryption
int padding = RSA_PKCS1_OAEP_PADDING;

RSA *createRSA(unsigned char *key, int pub){
    RSA *rsa = NULL;
    BIO *keybio;
    keybio = BIO_new_mem_buf(key, -1);
    if (keybio == NULL)
    {
        printf("Failed to create key BIO");
        return 0;
    }
    if (pub)
    {
        rsa = PEM_read_bio_RSA_PUBKEY(keybio, &rsa, NULL, NULL);
    }
    else
    {
        rsa = PEM_read_bio_RSAPrivateKey(keybio, &rsa, NULL, NULL);
    }
    if (rsa == NULL)
    {
        printf("Failed to create RSA");
    }
    return rsa;
}

int public_encrypt(unsigned char *data, int data_len, unsigned char *key, unsigned char *encrypted){
    printf("Data:%s\n:", data);
    printf("Data Length:%d\n:", data_len);
    printf("Server's Key:\n%s\n:", key);
    RSA *rsa = createRSA(key, 1);
    int result = RSA_public_encrypt(data_len, data, encrypted, rsa, padding);
    return result;
}
Please check out the link https://i.stack.imgur.com/WJn7e.png to see my output.
PS: Sorry for such a long post.
The output of RSA is a random value between 0 and the RSA modulus, encoded as an unsigned big-endian octet string (octet string is just another name for byte array, a char[] in C/C++). It can contain bytes of any value, so it is certainly not ASCII. If you want ASCII, you have to base64 encode the ciphertext.
However, quite often ciphertext is "stringified" for no good reason at all, so only do this if it is necessary within your protocol/system. Python strings are made somewhat readable for you by the Python runtime; I'm not sure if that's a good thing or not - it's certainly not a good idea to copy such a string, as it is Python-proprietary.
C is not as forgiving: if you treat the binary array as text you'll run into trouble, as it can contain any value, including control characters and the NUL character (0x00), which can play merry hell with functions such as strlen and many others that expect a textual string instead of an array of bytes (both are usually based on char in C/C++).
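For instance, a minimal sketch of dumping the ciphertext as hex instead of passing it to printf("%s", ...) could look like this; encrypted and length are assumed to be the buffer and the return value of public_encrypt from the question:
#include <cstdio>

// Print the RSA ciphertext as hex rather than treating it as a C string.
void print_ciphertext_hex(const unsigned char *encrypted, int length) {
    for (int i = 0; i < length; ++i)
        printf("%02x", encrypted[i]);
    printf("\n");
}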
I'm trying to implement AES decryption in one of my C++ programs. The idea is to use the following OpenSSL command line to generate the ciphertext (but to use the C++ API to decipher it):
openssl enc -aes-256-cbc -in plaintext.txt -base64 -md sha512 -pbkdf2 -pass pass:<passwd>
As the official doc is a bit too complicated, I based my implementation of the decryption on this tutorial: https://eclipsesource.com/blogs/2017/01/17/tutorial-aes-encryption-and-decryption-with-openssl/
It works well, but it uses a deprecated key-derivation algorithm which I want to replace with PBKDF2.
As far as I understand, I should then use PKCS5_PBKDF2_HMAC() rather than the EVP_BytesToKey() suggested in the tutorial. My problem is that EVP_BytesToKey was able to derive both the key and the IV from the salt and password, whereas PKCS5_PBKDF2_HMAC only seems to derive one at a time.
I couldn't find any more information or tutorials on how to get both the key and the IV, and I tried several implementations, but I couldn't work out how the OpenSSL CLI generates the IV.
I'd really like to avoid writing the IV on either the CLI or in the payload; the implementation from the tutorial was really convenient in that respect.
Could someone help me?
Thanks, best regards
I realize the question is about a month old by now, but I came across it in my search for information on doing something similar. Given the lack of answers here, I went to the source for answers.
TL;DR (direct answer)
PKCS5_PBKDF2_HMAC() generates both the key and the IV at the same time, concatenated into one buffer. It's up to you to split that buffer into the parts you need.
const EVP_CIPHER *cipher = EVP_aes_256_cbc();
int iklen = EVP_CIPHER_key_length(cipher);
int ivlen = EVP_CIPHER_iv_length(cipher);
PKCS5_PBKDF2_HMAC(pass, -1, salt, 8, iter, EVP_sha512(), iklen + ivlen, keyivpair);
memcpy(key, keyivpair, iklen);
memcpy(iv, keyivpair + iklen, ivlen);
Detailed description
Before going into specifics, I should mention that I'm using C and not C++. I do, however, hope the information provided is helpful for C++ as well.
Before anything else, the string needs to be decoded from base64 in the application. After that we can move along to the key and IV generation.
The openssl tool indicates that a salt is being used by starting the encrypted output with the literal string 'Salted__' followed by 8 bytes of salt (at least for aes-256-cbc). In addition to the salt, we also need to know the length of both the key and the IV. Luckily there are API calls for this.
const EVP_CIPHER *cipher = EVP_aes_256_cbc();
int iklen = EVP_CIPHER_key_length(cipher);
int ivlen = EVP_CIPHER_iv_length(cipher);
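As a sanity check, the 'Salted__' header described above can also be verified before the salt is extracted. This is a minimal sketch, not part of the original PoC, and it assumes decodeddata and decodeddata_len already hold the base64-decoded bytes as in the PoC further down:
/* Layout produced by the openssl tool: 'Salted__' (8 bytes) + salt (8 bytes) + ciphertext */
if (decodeddata_len < 16 || memcmp(decodeddata, "Salted__", 8) != 0) {
    fprintf(stderr, "missing Salted__ header\n");
    exit(EXIT_FAILURE);
}
memcpy(salt, decodeddata + 8, 8);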
We also need to know the number of iterations (the default in OpenSSL 1.1.1 when using -pbkdf2 is 10000), as well as the message digest, which in this case is EVP_sha512() (as specified by the -md sha512 option).
When we have all of the above it's time to call PKCS5_PBKDF2_HMAC().
PKCS5_PBKDF2_HMAC(pass, -1, salt, 8, iter, EVP_sha512(), iklen + ivlen, keyivpair);
Short info on the arguments
pass is of type (const char *)
password length (int), if set to -1 the length will be determined by strlen(pass)
salt is of type (const unsigned char *)
salt length (int)
iteration count (int)
message digest (const EVP_MD *), in this case returned by EVP_sha512()
total length of key + iv (int)
keyivpair (unsigned char *), this is where the key and IV are stored
Now we need to split the key and IV apart and store them in separate variables.
unsigned char key[EVP_MAX_KEY_LENGTH];
unsigned char iv[EVP_MAX_IV_LENGTH];
memcpy(key, keyivpair, iklen);
memcpy(iv, keyivpair + iklen, ivlen);
And now we have a key and IV which can be used to decrypt data encrypted by the openssl tool.
PoC
To further clarify I wrote the following proof of concept (written on and for Linux).
/*
 * PoC written by zoke
 * Compiled with gcc decrypt-poc.c -o decrypt-poc -lcrypto -ggdb3 -Wall -Wextra
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <openssl/conf.h>
#include <openssl/evp.h>
#include <openssl/err.h>

void bail() {
    ERR_print_errors_fp(stderr);
    exit(EXIT_FAILURE);
}

int main(int argc, char *argv[]) {
    if(argc < 3)
        bail();

    unsigned char key[EVP_MAX_KEY_LENGTH];
    unsigned char iv[EVP_MAX_IV_LENGTH];
    unsigned char salt[8]; // openssl tool uses 8 bytes for salt
    unsigned char decodeddata[256];
    unsigned char ciphertext[256];
    unsigned char plaintext[256];
    const char *pass = argv[1]; // use first argument as password (PoC only)
    unsigned char *encodeddata = (unsigned char *)argv[2]; // use second argument
    int decodeddata_len, ciphertext_len, plaintext_len, len;

    // Decode base64 string provided as second option
    EVP_ENCODE_CTX *ctx;
    if(!(ctx = EVP_ENCODE_CTX_new()))
        bail();
    EVP_DecodeInit(ctx);
    EVP_DecodeUpdate(ctx, decodeddata, &len, encodeddata, strlen((const char*)encodeddata));
    decodeddata_len = len;
    if(!EVP_DecodeFinal(ctx, decodeddata, &len))
        bail();
    EVP_ENCODE_CTX_free(ctx);

    // openssl tool format seems to be 'Salted__' + salt + encrypted data
    // take it apart
    memcpy(salt, decodeddata + 8, 8); // 8 bytes starting at 8th byte
    memcpy(ciphertext, decodeddata + 16, decodeddata_len - 16); // all but the 16 first bytes
    ciphertext_len = decodeddata_len - 16;

    // Get some needed information
    const EVP_CIPHER *cipher = EVP_aes_256_cbc();
    int iklen = EVP_CIPHER_key_length(cipher);
    int ivlen = EVP_CIPHER_iv_length(cipher);
    int iter = 10000; // default in openssl 1.1.1
    unsigned char keyivpair[iklen + ivlen];

    // Generate the actual key IV pair
    if(!PKCS5_PBKDF2_HMAC(pass, -1, salt, 8, iter, EVP_sha512(), iklen + ivlen, keyivpair))
        bail();
    memcpy(key, keyivpair, iklen);
    memcpy(iv, keyivpair + iklen, ivlen);

    // Decrypt data
    EVP_CIPHER_CTX *cipherctx;
    if(!(cipherctx = EVP_CIPHER_CTX_new()))
        bail();
    if(!EVP_DecryptInit_ex(cipherctx, cipher, NULL, key, iv))
        bail();
    if(!EVP_DecryptUpdate(cipherctx, plaintext, &len, ciphertext, ciphertext_len))
        bail();
    plaintext_len = len;
    if(!EVP_DecryptFinal_ex(cipherctx, plaintext + len, &len))
        bail();
    plaintext_len += len;
    EVP_CIPHER_CTX_free(cipherctx);

    plaintext[plaintext_len] = '\0'; // add null termination
    printf("%s", plaintext);

    exit(EXIT_SUCCESS);
}
Application tested by running
$ openssl aes-256-cbc -e -a -md sha512 -pbkdf2 -pass pass:test321 <<< "Some secret data"
U2FsdGVkX19ZNjDQXX/aACg7d4OopxqvpjclkaSuybeAxOhVRIONXoCmCQaG/Vg9
$ ./decrypt-poc test321 U2FsdGVkX19ZNjDQXX/aACg7d4OopxqvpjclkaSuybeAxOhVRIONXoCmCQaG/Vg9
Some secret data
The key/IV generation used by the command-line tool is in apps/enc.c, which was very helpful when figuring this out.
I have a Ruby on Rails API which handles a simple API call and returns some encrypted data. The encryption is done in C++, using the Ruby native C API (reference here).
The native part works fine when compiled and linked as a standalone program, and also when used with Ruby in IRB.
However, when I use it from within the Rails API, I sometimes get a "Stack level too deep" error.
Whether the error occurs seems to depend on the size of the data being processed.
According to this answer, the stack 'level' is actually stack space, so it would make sense that the more data I process, the more data ends up on the stack, so it fills up quicker, etc.
I had initially kept all my variables on the stack, for simplicity and to avoid forgetting to free allocated memory. Seeing this error, I switched to a dynamic allocation approach. However, contrary to what I was expecting, the "stack level too deep" error now occurs for even smaller data sizes.
data_controller.rb
def load_data data_path, width
  authorize!
  encrypted = NativeDataProtector.encrypt(data_path, get_key(), get_iv())
  return [ encrypted, "application/octet-stream" ]
end
native_encryptor.cpp
VALUE encrypt_n(VALUE _self, VALUE data_path, VALUE key, VALUE salt){
    DataProtector protector;
    string *b64 = protector.encrypt(StringValueCStr(data_path),
                                    StringValueCStr(key),
                                    StringValueCStr(salt));
    VALUE ret = rb_str_new(b64->c_str(), b64->length());
    delete(b64);
    return ret;
}

extern "C" void Init_data_protector() {
    VALUE mod = rb_define_module("NativeDataProtector");
    rb_define_module_function(mod, "encrypt", (VALUE(*)(ANYARGS))encrypt_n, 3);
}
encrypt.h
#include <ruby.h>
#include "extconf.h"
#include <iostream>
#include <fstream>
#include <vector>
#include <list>
#include <openssl/conf.h>
#include <openssl/evp.h>
#include <openssl/err.h>
class DataProtector {
  private:
    int pad_cleartext(vector<unsigned char> *cleartext);
    vector<unsigned char> *read_data(string path);
    int aes_encrypt(vector<unsigned char> *plaintext, string key,
                    string iv, unsigned char *ciphertext);
    string to_b64(unsigned char* in);
    void handleErrors(void);
  public:
    string *encrypt(string data_path, string key, string salt);
};
encrypt.cpp
string *DataProtector::encrypt(string data_path, string key, string salt) {
    vector<unsigned char> *cleartext = read_data(data_path);
    int length = pad_cleartext(cleartext);
    unsigned char* output = new unsigned char[length + 16];
    int ciphertext_len;
    // encrypt
    string *encrypted = new string("");
    ciphertext_len = aes_encrypt(cleartext, key, salt, output);
    (*encrypted) += to_b64(output);
    delete(cleartext);
    delete[] output;  // output was allocated with new[]
    return encrypted;
}
int DataProtector::aes_encrypt(vector<unsigned char> *plaintext, string key,
                               string iv, unsigned char *ciphertext)
{
    EVP_CIPHER_CTX *ctx;
    int len;
    int ciphertext_len;

    /* Create and initialise the context */
    if(!(ctx = EVP_CIPHER_CTX_new())) handleErrors();

    /* Initialise the encryption operation. IMPORTANT - ensure you use a key
     * and IV size appropriate for your cipher.
     * In this example we are using 256 bit AES (i.e. a 256 bit key). The
     * IV size for *most* modes is the same as the block size. For AES this
     * is 128 bits. */
    if(1 != EVP_EncryptInit_ex(ctx, EVP_aes_128_cbc(), NULL, (const unsigned char *)key.c_str(), (const unsigned char *)iv.c_str()))
        handleErrors();

    /* Provide the message to be encrypted, and obtain the encrypted output.
     * EVP_EncryptUpdate can be called multiple times if necessary. */
    if(1 != EVP_EncryptUpdate(ctx, ciphertext, &len, reinterpret_cast<unsigned char*>(plaintext->data()), plaintext->size()))
        handleErrors();
    ciphertext_len = len;

    /* Finalise the encryption. Further ciphertext bytes may be written at
     * this stage. */
    if(1 != EVP_EncryptFinal_ex(ctx, ciphertext + len, &len)) handleErrors();
    ciphertext_len += len;

    /* Clean up */
    EVP_CIPHER_CTX_free(ctx);
    return ciphertext_len;
}
int DataProtector::pad_cleartext(vector<unsigned char> *in) {
    // pads to a length that is a multiple of 16
    int nb_blocks = in->size() / 16 + ((in->size()%16 == 0)? 1:1);
    int size = nb_blocks*16;
    for (unsigned int i=in->size(); i<size; i++) {
        unsigned char c = '0';
        in->push_back(c);
    }
    return size;
}
vector<unsigned char> *DataProtector::read_data(string path) {
    streampos size;
    ifstream file(path, ios::binary);
    file.seekg(0, ios::end);
    size = file.tellg();
    file.seekg(0, ios::beg);
    vector<unsigned char> *data = new vector<unsigned char>(size);
    file.read((char*) data->data(), size);  // read directly into the vector's storage
    return data;
}
void DataProtector::handleErrors(void) {
    ERR_print_errors_fp(stderr);
    abort();
}
(the actual encryption is from here)
The error stack trace I get:
SystemStackError (stack level too deep):
app/controllers/data_controller.rb:41:in `encrypt'
app/controllers/data_controller.rb:41:in `load_data'
app/controllers/data_controller.rb:15:in `show'
I believe that the reason for this error is too much data allocated on the stack, and not a recursion issue. However, I don't understand why switching to heap allocation did not improve anything.
I can imagine 2 solutions :
cutting up the data in ruby and calling the native method several times with less data.
increasing the ruby stack size.
However, both of these solutions are not ideal for my project, for performance/resource reasons.
Is there any other way I can reduce my program's stack usage?
I have been struggling with a weird problem with RSA_verify. I am trying to RSA_sign using C and RSA_verify using C++. I have generated the private key and certificate using OpenSSL commands.
message = "1.2.0:08:00:27:2c:88:77"
When I use the message above, generate a hash, and use RSA_sign to sign the digest, I get a signature length of 256 according to strlen(signature), and the length returned from RSA_sign is also 256. I use this length to verify, and verification succeeds.
But when I use message = "1.2.0:08:00:27:2c:88:08", the signature length (strlen) is 60 while RSA_sign returns 256. When I use the length 60 to verify, it fails. It fails to verify with length 256 as well. Also, for some messages (1.2.0:08:00:27:2c:88:12) the reported signature length is zero.
I am using SHA256 to hash the message and NID_sha256 with RSA_sign and RSA_verify on this digest. I used -sha256 while generating the keys with the OpenSSL command.
I am forming the message by parsing an XML file and reading some of the tags using string operations.
Kindly suggest.
Below is the code used to sign.
int main(void)
{
    int ret;
    RSA *prikey;
    char *data;
    unsigned char* signature;
    int slen = 0;
    FILE * fp_priv = NULL;
    char* privfilepath = "priv.pem";
    unsigned char* sign = NULL;

    ERR_load_crypto_strings();

    data = generate_hash();
    printf("Message after generate hash %s: %d\n", data, strlen(data));

    fp_priv = fopen(privfilepath, "r");
    if (fp_priv == NULL)
    {
        printf("Private key path not found..");
        return 1;
    }

    prikey = RSA_new();
    prikey = PEM_read_RSAPrivateKey(fp_priv, &prikey, NULL, NULL);
    if (prikey == NULL)
    {
        printf("Private key returned is NULL\n");
        return 1;
    }

    signature = (unsigned char*)malloc(RSA_size(prikey));
    if( signature == NULL )
        return 1;

    if(RSA_sign(NID_sha256, (unsigned char*)data, strlen(data),
                signature, &slen, prikey) != 1) {
        ERR_print_errors_fp(stdout);
        return 1;
    }

    printf("Signature length while signing... %d : %d : %d ",
           strlen(signature), slen, strlen(data));

    FILE * sig_bin = fopen("sig_bin", "w");
    fprintf(sig_bin, "%s", signature);
    fclose(sig_bin);
    system("xxd -p -c256 sig_bin sig_hex");

    RSA_free(prikey);
    if(signature)
        free(signature);

    return 0;
}
One very, very important thing to learn about C is that it has two distinct types with the same name:
char*: This represents the beginning of a character string. You can do things like strstr or strlen.
(You should never use strstr or strlen, but rather strnstr and strnlen, but that's a different problem.)
char*: This represents the beginning of a data blob (aka byte array, aka octet string). You can't meaningfully apply strlen etc. to it.
RSA_sign uses the latter. It returns "data", not "a message". So, in your snippet
printf("Signature length while signing... %d : %d : %d ",
strlen(signature), slen, strlen(data));
FILE * sig_bin = fopen("sig_bin", "w");
fprintf(sig_bin, "%s", signature);
fclose(sig_bin);
data came from a function called generate_hash(); it's probably non-textual, so strlen doesn't apply. signature definitely is data, so strlen doesn't apply. fprintf with "%s" doesn't apply either, for the same reason. These functions identify the end of a character string by the first occurrence of a zero byte (0x00, '\0', etc.). But 0x00 is perfectly legal inside a signature, or a hash, or lots of other "data".
The length of the output of RSA_sign is written into the address passed into the 5th parameter. You passed &slen (address-of slen), so once the function exits (successfully) slen is the length of the signature. Note that it will only very rarely match strlen(signature).
To write your signature as binary, you should use fwrite, for example fwrite(signature, 1, slen, sig_bin);. If you want it as text, you should Base64 encode your data.
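To make that concrete, here is a minimal sketch of the write-and-verify flow. The names digest, signature, slen and pubkey are assumptions for illustration: digest is the raw 32-byte SHA-256 output that was signed, signature/slen are the outputs of RSA_sign, and pubkey is an RSA* holding the matching public key loaded elsewhere.
#include <openssl/objects.h>
#include <openssl/rsa.h>
#include <openssl/sha.h>
#include <stdio.h>

/* Write the raw signature using the length reported by RSA_sign,
 * never strlen(), since the signature may contain 0x00 bytes. */
void write_signature(const unsigned char *signature, unsigned int slen) {
    FILE *sig_bin = fopen("sig_bin", "wb");
    fwrite(signature, 1, slen, sig_bin);
    fclose(sig_bin);
}

/* Verify with the same digest and the RSA_sign-reported length. */
int verify_signature(RSA *pubkey, const unsigned char *digest,
                     const unsigned char *signature, unsigned int slen) {
    return RSA_verify(NID_sha256, digest, SHA256_DIGEST_LENGTH,
                      signature, slen, pubkey);
}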
I would like to generate a random string with OpenSSL and use it as a salt in a hashing function afterwards (it will be Argon2). Currently I'm generating the random data this way:
if(length < CryptConfig::sMinSaltLen){
    return 1;
}
if (!sInitialized){
    RAND_poll();
    sInitialized = true;
}
unsigned char * buf = new unsigned char[length];
if (!sInitialized || !RAND_bytes(buf, length)) {
    return 1;
}
salt = std::string (reinterpret_cast<char*>(buf));
delete[] buf;  // buf was allocated with new[]
return 0;
But printing salt with std::cout doesn't seem to produce a proper string (it contains control characters and other stuff). This is most likely my own fault.
Am I using the wrong functions of OpenSSL to generate the random data?
Or is my conversion from buf to string faulty?
Random data is random data. That's what you're asking for, and that's exactly what you're getting. Your salt variable is a string that happens to contain unprintable characters. If you want printable characters, one way of achieving that is base64 encoding, but that will blow up its length. Another option is to somehow discard non-printable characters, but I don't see any mechanism to force RAND_bytes to do this. I guess you could simply fetch random bytes in a loop until you get length printable characters.
If encoding base64 is acceptable for you, here is an example of how to use the OpenSSL base64 encoder, extracted from Joe Linoff's Cipher library:
string Cipher::encode_base64(uchar* ciphertext,
                             uint ciphertext_len) const
{
    DBG_FCT("encode_base64");
    BIO* b64 = BIO_new(BIO_f_base64());
    BIO* bm = BIO_new(BIO_s_mem());
    b64 = BIO_push(b64,bm);
    if (BIO_write(b64,ciphertext,ciphertext_len)<2) {
        throw runtime_error("BIO_write() failed");
    }
    if (BIO_flush(b64)<1) {
        throw runtime_error("BIO_flush() failed");
    }
    BUF_MEM *bptr=0;
    BIO_get_mem_ptr(b64,&bptr);
    uint len=bptr->length;
    char* mimetext = new char[len+1];
    memcpy(mimetext, bptr->data, bptr->length-1);
    mimetext[bptr->length-1]=0;
    BIO_free_all(b64);
    string ret = mimetext;
    delete [] mimetext;
    return ret;
}
To this code, I suggest adding BIO_set_flags(b64, BIO_FLAGS_BASE64_NO_NL), because otherwise you'll get a new line character inserted after every 64 characters. See OpenSSL's -A switch for details.
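As an aside on the second question in the post (whether the conversion from buf to string is faulty): constructing a std::string from a bare char* stops at the first zero byte, which random data will sooner or later contain, so part of the salt can be silently dropped. A minimal sketch, assuming the same length requirement as in the question and using the length-aware assign, would be (the helper name make_salt is hypothetical):
#include <openssl/rand.h>
#include <string>

// Keep all 'length' random bytes, including any embedded zero bytes,
// by passing the length to std::string explicitly.
bool make_salt(std::string &salt, int length) {
    unsigned char *buf = new unsigned char[length];
    if (!RAND_bytes(buf, length)) {            // RAND_bytes returns 1 on success
        delete[] buf;
        return false;
    }
    salt.assign(reinterpret_cast<char*>(buf), length);  // explicit length
    delete[] buf;
    return true;
}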
I detoured the recv function and I am trying to decrypt the buffer, but the decrypt function changes the buffer size, and I think the decryption is invalid. Code:
int WINAPI OwnRecv(SOCKET s, char FAR *buff, int len, int flags)
{
    if(s == GameClientSocket)
    {
        int received = pTrampolineRecv(s, buff, len, flags);
        if(received <= 0)
        {
            return received;
        }
        // now strlen(buff) is 2!!
        char * plaintext;
        plaintext = (char *)aes_decrypt(&Decrypt_Context, (unsigned char*)buff, &received);
        (char *) buff = plaintext; // now strlen(buff) is 5!!
        return received;
    }
    return pTrampolineRecv(s, buff, len, flags);
}
What's wrong with my code?
Thanks!
You forgot to implement a protocol! Whatever protocol you use to encrypt and decrypt the data, you have to actually implement it. It has to define block sizes, padding, and so on. It won't just work by magic. (Note that using a stream cipher will make this much easier than using a block cipher.)
Also, don't call strlen on arbitrary binary data! The strlen function is only for C-style strings.
Also, this line of code doesn't do what you think it does:
(char *) buff = plaintext; // now strlen(buff) is 5!!
Changing the value of buff, the variable that holds a pointer to the buffer, has no effect on the contents of the buffer itself, and the contents are all the caller cares about.
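For illustration only, a minimal sketch of handing the decrypted bytes back to the caller would copy them into buff instead of reassigning the pointer. It keeps the names from the question (aes_decrypt, Decrypt_Context, pTrampolineRecv, GameClientSocket), assumes aes_decrypt returns a heap-allocated buffer and updates received to the plaintext length, and it still ignores the framing problem described above:
int WINAPI OwnRecv(SOCKET s, char FAR *buff, int len, int flags)
{
    if (s != GameClientSocket)
        return pTrampolineRecv(s, buff, len, flags);

    int received = pTrampolineRecv(s, buff, len, flags);
    if (received <= 0)
        return received;

    char *plaintext = (char *)aes_decrypt(&Decrypt_Context,
                                          (unsigned char *)buff, &received);
    if (plaintext == NULL || received > len)
        return -1;                        // decryption failed or result won't fit

    memcpy(buff, plaintext, received);    // copy into the caller's buffer
    free(plaintext);                      // assuming aes_decrypt allocated it
    return received;                      // return the plaintext length
}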