Greetings, this is my first post on Stack Overflow, and I'm sorry if it's a bit long.
I'm trying to build a handshake protocol for my own project and am having issues with the server converting the client's RSA public key to a bignum. It works in my client code, but the server segfaults when attempting to convert the hex value of the client's public RSA key to a bignum.
I have already checked that there is no garbage before or after the RSA data, and I have looked online, but I'm stuck.
header segment:
typedef struct KEYS {
    RSA *serv;
    char *serv_pub;
    int pub_size;
    RSA *clnt;
} KEYS;

KEYS keys;
Initializing function:
// Generates and validates the server's key
/* code for generating the server RSA left out; it's working */

// Set client exponent
keys.clnt = 0;
keys.clnt = RSA_new();
BN_dec2bn(&keys.clnt->e, RSA_E_S); // RSA_E_S contains the public exponent
Problem code (in Network::server_handshake):
// *Receive an encrypted message from the network and decrypt it into 'buffer' (1024 bytes long)*
cout << "Assigning clients RSA" << endl;
// I have verified that 'buffer' contains the proper key
if (BN_hex2bn(&keys.clnt->n, buffer) == 0) { // BN_hex2bn returns 0 on error, never a negative value
Error("ERROR reading server RSA");
}
cout << "clients RSA has been assigned" << endl;
The program segfaults at
BN_hex2bn(&keys.clnt->n, buffer)
with this error (valgrind output):
Invalid read of size 8
at 0x50DBF9F: BN_hex2bn (in /usr/lib/libcrypto.so.0.9.8)
by 0x40F23E: Network::server_handshake() (Network.cpp:177)
by 0x40EF42: Network::startNet() (Network.cpp:126)
by 0x403C38: main (server.cpp:51)
Address 0x20 is not stack'd, malloc'd or (recently) free'd
Process terminating with default action of signal 11 (SIGSEGV)
Access not within mapped region at address 0x20
at 0x50DBF9F: BN_hex2bn (in /usr/lib/libcrypto.so.0.9.8)
And I don't know why. I'm using the exact same code in the client program, and it works just fine. Any input is greatly appreciated!
RSA_new() only creates the RSA struct; it does not create any of the bignum objects inside that struct, such as the n and e fields. You must create these yourself using BN_new(), or more likely you need to find the right OpenSSL function to generate or read in your RSA key.
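For illustration, a minimal sketch of that allocation, assuming the OpenSSL 0.9.8-era API from the question where the RSA members are directly accessible (later versions make the struct opaque and provide RSA_set0_key() instead); Error() is the question's own helper:

keys.clnt = RSA_new();
if (keys.clnt == NULL)
    Error("RSA_new failed");

// RSA_new() leaves n and e as NULL; create them before use
keys.clnt->n = BN_new();
keys.clnt->e = BN_new();
if (keys.clnt->n == NULL || keys.clnt->e == NULL)
    Error("BN_new failed");

if (BN_dec2bn(&keys.clnt->e, RSA_E_S) == 0) // returns 0 on error
    Error("ERROR setting public exponent");
if (BN_hex2bn(&keys.clnt->n, buffer) == 0)  // returns 0 on error
    Error("ERROR reading client RSA modulus");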
AES has a block size of 128 bits, and key sizes of 128, 192 and 256 bits.
I have implemented the AES algorithm like so:
int main()
{
    unsigned char key[KEY_128] = "very strong key";
    unsigned char plaintext[16] = "this is a test";
    unsigned char ciphertext[16];
    unsigned char decptext[16];
    aes_ctx_t *ctx;

    virtualAES::Initialize();
    ctx = virtualAES::AllocateCTX(key, sizeof(key));

    virtualAES::Encrypt(ctx, plaintext, ciphertext);
    cout << "encrypted: " << ciphertext << endl;

    virtualAES::Decrypt(ctx, ciphertext, decptext); // the original called Encrypt twice; Decrypt is presumably intended
    cout << "decrypted: " << decptext << endl;
    return 0;
}
but I want to encrypt data larger than 128 bits, for example a string that's 512 bits long. I need some kind of loop that splits the string into 128-bit blocks and then encrypts and joins them again, but I'm having a hard time doing this. Could someone provide an example?
I am more familiar with C#, which has several modes of encryption exposed through the System.Security.Cryptography namespace. However I know how Cipher Block Chaining works. I'll explain it to you, but keep in mind it is really easy to mess up crypto, so this is informational only, and I hope you will find a library that does what you need done.
With cipher block chaining (CBC) here is what you do. Take your data and break it into block sizes. 128 bits is 16 bytes, so there you go. If you have less than 16 bytes in your last block, you must pad. The commonest way I know of is PKCS7 padding, which means for example if you need 3 bytes of padding at the end of your last block, you would add 0x03, 0x03, 0x03 to make it a full block.
So now you are ready to encrypt. You should have an initialization vector (IV) to start off with. Bitwise XOR that IV with your first block of plain text. Then encrypt the result the way you normally would encrypt a single block of data (ECB mode). The result is your first block of cipher text. But it is also equivalent to the IV for the next block you want to encrypt. Bitwise XOR it with the second block and encrypt. Take that encrypted block, record it, and also use it to XOR with the third block. And so on.
This process makes it so that the exact same text appearing, say, 5 times in a document will look totally different each time it appears. So it adds more security. Despite this, the IV does not need to be kept secret at all. Passwords and salts do; IVs do not.
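For illustration only, a minimal CBC sketch in C++. Here encrypt_block is a hypothetical stand-in for whatever single-block (ECB) primitive your library exposes, such as virtualAES::Encrypt; prefer a vetted library mode in real code:

#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical single-block (ECB) primitive provided elsewhere.
void encrypt_block(const uint8_t in[16], uint8_t out[16]);

std::vector<uint8_t> cbc_encrypt(const uint8_t *data, size_t len, const uint8_t iv[16])
{
    // PKCS7: always append 1..16 padding bytes, each holding the pad length.
    size_t pad = 16 - (len % 16);
    std::vector<uint8_t> buf(data, data + len);
    buf.insert(buf.end(), pad, static_cast<uint8_t>(pad));

    std::vector<uint8_t> out(buf.size());
    uint8_t chain[16];
    std::memcpy(chain, iv, 16); // the chain value starts as the IV

    for (size_t off = 0; off < buf.size(); off += 16) {
        uint8_t block[16];
        for (int i = 0; i < 16; ++i)
            block[i] = buf[off + i] ^ chain[i]; // XOR with IV / previous ciphertext
        encrypt_block(block, &out[off]);        // encrypt the XORed block
        std::memcpy(chain, &out[off], 16);      // ciphertext chains into the next block
    }
    return out;
}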
I am developing an encrypted version of a real-time communication application. The issue I have is that the encrypted data packets sent to the receiver are faulty. An example from the error log (hex-encoded data; the original data is raw bytes):
sent: 262C1688215232656B5235B691826A21C51D37A99413050BAEADB81D8892493FC0DB519250199F5BE73E18F2703946593C4F6CEA396A168B3313FA689DE84F380606ED3C322F2ADFC561B9F1571E29DF5870B59D2FCF497E01D9CD5DFCED743559C3EE5B00678966C8D73EA3A5CD810BB848309CDF0F955F949FDBA618C401DA70A10C36063261C5DBAB0FC0F1
received: 262C1688215232656B5235B691826A21C51D37A99413050BAEADB81D8892493FC0DB519250199F5BE73E18F2703946593C4F6CEA396A168B3313FA689DE84F380606ED3C322F2ADFC561B9F1571E29DF5870B59D2FCF497E01D9CD5DFCED743559C3EE5B00CDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCDCD
This is the call to the send method:
string encSendBuffer = sj->cipherAgent->encrypt(sj->dFC->sendBuffer, sj->dFC->sendBytes);
char* newSendBuffer = new char[encSendBuffer.length() + 1];
strcpy(newSendBuffer, encSendBuffer.c_str());

sj->dFC->s->async_send_to(boost::asio::buffer(newSendBuffer, encSendBuffer.length()),
    *sj->dFC->f,
    boost::bind(&sender::sendHandler, this,
        boost::asio::placeholders::error,
        boost::asio::placeholders::bytes_transferred)
);
sj->dFC->s is a UDP socket and sj->dFC->f is a UDP endpoint.
The error code of the sendHandler is always system: 0
This is how I do the encryption using the Crypto++ library (extract):
string cipherEngine::encrypt(char* input, int length)
{
string cipher = "";
CTR_Mode<AES>::Encryption e;
e.SetKeyWithIV(key, keyLength, iv);
ArraySource as((byte*)input, length, true,
new StreamTransformationFilter(e,
new StringSink(cipher)
)
);
return cipher;
}
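The decryption side is symmetric; a sketch, assuming the same key, keyLength and iv members (CTR mode uses the same keystream for encryption and decryption):

string cipherEngine::decrypt(char* input, int length)
{
    string plain;
    CTR_Mode<AES>::Decryption d;
    d.SetKeyWithIV(key, keyLength, iv);
    // ArraySource pumps the ciphertext through the cipher into the string sink
    ArraySource as((byte*)input, length, true,
        new StreamTransformationFilter(d,
            new StringSink(plain)
        )
    );
    return plain;
}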
UPDATE: Code of the receive function:
void receiver::receive()
{
    int maxLength = 4096;
    sj->dFC->s->async_receive_from(boost::asio::buffer(input, maxLength),
        senderEndpoint,
        boost::bind(&receiver::handleReceiveFrom, this,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
}
After the data is received, it is stored in the char buffer input and decrypted in the handleReceiveFrom function.
Without encryption everything is fine. The number of bytes sent is always correct, on the receiver side too. The length of the "CD" blocks is quite random. I have already checked the encryption, and the decrypted data is the same as the original plaintext.
Does anyone know where this behavior comes from?
The key here is that the erroneous data begins after the first null (0x00) byte in your encrypted data array. The following line:
strcpy(newSendBuffer, encSendBuffer.c_str());
...is only copying the data up to that null byte into newSendBuffer. The send function is sending the buffer contents just fine; the buffer just doesn't have the data you expect. You'll need to load newSendBuffer in a different way, not using strcpy(), so that null bytes are handled. Try std::memcpy().
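For example, a minimal sketch of that replacement (with <cstring> included; the +1 for a terminating null is no longer needed, since the buffer holds raw bytes rather than a C string):

char* newSendBuffer = new char[encSendBuffer.length()];
std::memcpy(newSendBuffer, encSendBuffer.data(), encSendBuffer.length());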
Thank you Joachim Pileborg and Jack O'Reilly! You are right indeed.
I changed my code from strcpy(newSendBuffer, encSendBuffer.c_str());
to
for (int i = 0; i < encSendBuffer.length(); i++)
{
    newSendBuffer[i] = encSendBuffer.at(i);
}
on sender and receiver side. It actually solved the problem. It is quite naive code but it does what it should.
std::memcpy() seems much more elegant, and I will try it out.
I'm trying to encrypt something with RSA.
But my RSA library doesn't seem to be able to use X.509 keys.
So I tried to convert the key to a DER key using OpenSSL,
but I don't really understand how it works. I spotted two functions that seemed OK, but I can't figure out how to use them.
The functions are:
- i2d_X509
- X509
I did find a piece of code, but I can't understand it:
int len;
unsigned char *buf, *p;

len = i2d_X509(x, NULL);   /* first pass: NULL output just returns the required length */
buf = OPENSSL_malloc(len);
if (buf == NULL)
    ; /* handle the allocation error */
p = buf;
i2d_X509(x, &p);           /* second pass: writes the DER bytes and advances p */
If you could help me out it would be great.
i2d_X509 means: convert an X509 object from its internal representation (the X509 structure) to its DER-encoded representation (which is copied into a buffer or a file).
So, in this code, the line
len = i2d_X509(x, NULL);
determines the number of bytes required to represent the given certificate in DER form.
Then you allocate that much memory, and the final statement
i2d_X509(x, &p);
copies the X509 * certificate into this buffer in DER format.
You can persist this buffer in a file, saving it as a certificate file (say .cer or .crt) that can be opened with any certificate tool.
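For example, a minimal sketch of persisting it with <stdio.h> (hypothetical file name):

FILE *fp = fopen("cert.cer", "wb");
if (fp != NULL) {
    fwrite(buf, 1, len, fp); /* write the raw DER bytes */
    fclose(fp);
}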
Coming back to your problem: you can use this buffer in your program, which accepts a DER certificate.
But you mentioned a key, didn't you?
If you need the RSA public key, then you can do the following.
You may need to extract the key first using X509_get_pubkey, which will give the key in an EVP_PKEY structure.
EVP_PKEY * pkey;
pkey = X509_get_pubkey(x);
RSA * rsa;
rsa = EVP_PKEY_get1_RSA(pkey);
Now, output this RSA structure into DER. It uses the same two-pass pattern as i2d_X509, so there is no need to guess at a sufficiently large buffer length:
int len;
unsigned char *buf, *p;

len = i2d_RSAPublicKey(rsa, NULL);   /* first pass: get the required length */
buf = OPENSSL_malloc(len);
p = buf;
i2d_RSAPublicKey(rsa, &p);           /* second pass: write the DER encoding */
I think this would help you.
SOLVED: I was dumb. First argument of encrypt should have been key.size() and first argument of decrypt should have been RSA_size(myKey).
ORIGINAL QUESTION
Hey guys, I'm having some trouble figuring out how to do this.
Basically I just want a client and server to be able to send each other encrypted messages.
This is going to be incredibly insecure because I'm just trying to figure this all out, so I might as well start at the ground floor.
So far I've got all the keys working, but encryption/decryption is giving me hell.
I'll start by saying I am using C++, but most of these functions require C strings, so whatever I'm doing there may be causing problems.
Note that on the client side I receive the following error with regard to decryption.
error:04065072:rsa routines:RSA_EAY_PRIVATE_DECRYPT:padding check failed
I don't really understand how padding works so I don't know how to fix it.
Anywho, here are the relevant variables on each side, followed by the code.
Client:
RSA *myKey; // Loaded with private key

// The below will hold the decrypted message
unsigned char* decrypted = (unsigned char*) malloc(RSA_size(myKey));

/* The below holds the encrypted string received over the network.
   Originally held in a C-string but C strings never work for me and scare me
   so I put it in a C++ string */
string encrypted;

// The reinterpret_cast line was to get rid of an error message.
// Maybe the cause of one of my problems?
if (RSA_private_decrypt(sizeof(encrypted.c_str()), reinterpret_cast<const unsigned char*>(encrypted.c_str()), decrypted, myKey, RSA_PKCS1_OAEP_PADDING) == -1)
{
    cout << "Private decryption failed" << endl;
    ERR_error_string(ERR_peek_last_error(), errBuf);
    printf("Error: %s\n", errBuf);
    free(decrypted);
    exit(1);
}
Server:
RSA *pkey;  // Holds the client's public key
string key; // Holds a session key I want to encrypt and send

// The below will hold the encrypted message
unsigned char *encrypted = (unsigned char*)malloc(RSA_size(pkey));

// The reinterpret_cast line was to get rid of an error message.
// Maybe the cause of one of my problems?
if (RSA_public_encrypt(sizeof(key.c_str()), reinterpret_cast<const unsigned char*>(key.c_str()), encrypted, pkey, RSA_PKCS1_OAEP_PADDING) == -1)
{
    cout << "Public encryption failed" << endl;
    ERR_error_string(ERR_peek_last_error(), errBuf);
    printf("Error: %s\n", errBuf);
    free(encrypted);
    exit(1);
}
Let me once again state, in case I didn't before, that I know my code sucks but I'm just trying to establish a framework for understanding this.
I'm sorry if this offends you veteran coders.
Thanks in advance for any help you guys can provide!
Maybe not the only problem, but: the first argument to the RSA_xxxcrypt functions is the number of bytes of the buffers. sizeof(key.c_str()) does not yield the number of bytes in key; it yields the size of key.c_str()'s result type, i.e. sizeof(const char*). You probably want to pass the number of chars in the string instead, which can be obtained with the size() member function.
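A minimal sketch of the corrected calls, matching the poster's own SOLVED note (variable names taken from the question):

// Server side: encrypt key.size() bytes, not sizeof(key.c_str()).
int encLen = RSA_public_encrypt((int)key.size(),
    reinterpret_cast<const unsigned char*>(key.data()),
    encrypted, pkey, RSA_PKCS1_OAEP_PADDING);

// Client side: an RSA ciphertext is always RSA_size(myKey) bytes long.
int decLen = RSA_private_decrypt(RSA_size(myKey),
    reinterpret_cast<const unsigned char*>(encrypted.data()),
    decrypted, myKey, RSA_PKCS1_OAEP_PADDING);

Both functions return -1 on failure, or the number of bytes written on success.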
I am trying to decrypt a piece of a file with WinCrypt, and I cannot seem to make this function decrypt correctly. The bytes were encrypted with the RC2 implementation in C#, and I am supplying the same password and IV to both the encryption and decryption process (encrypted in C#, decrypted in C++).
All of my functions along the way return true until the final CryptDecrypt call. Instead of me typing out any more, here is the function:
static char* DecryptMyFile(char *input, char *password, int size)
{
    HCRYPTPROV provider = NULL;
    if (CryptAcquireContext(&provider, NULL, MS_ENHANCED_PROV, PROV_RSA_FULL, 0))
        { printf("Context acquired."); }
    else
    {
        if (GetLastError() == NTE_BAD_KEYSET)
        {
            if (CryptAcquireContext(&provider, 0, NULL, PROV_RSA_FULL, CRYPT_NEWKEYSET))
                { printf("new key made."); }
            else
            {
                printf("Could not acquire context.");
            }
        }
        else
            { printf("Could not acquire context."); }
    }

    HCRYPTKEY key = NULL;
    HCRYPTHASH hash = NULL;
    if (CryptCreateHash(provider, CALG_MD5, 0, 0, &hash))
        { printf("empty hash created."); }
    else
        { printf("could not create hash."); }

    if (CryptHashData(hash, (BYTE *)password, strlen(password), 0))
        { printf("data buffer is added to hash."); }
    else
        { printf("error. could not add data buffer to hash."); }

    if (CryptDeriveKey(provider, CALG_RC2, hash, 0, &key))
        { printf("key derived."); }
    else
        { printf("Could not derive key."); }

    DWORD dwKeyLength = 128;
    if (CryptSetKeyParam(key, KP_EFFECTIVE_KEYLEN, reinterpret_cast<BYTE*>(&dwKeyLength), 0))
        { printf("success"); }
    else
        { printf("failed."); }

    BYTE IV[8] = {0,0,0,0,0,0,0,0};
    if (CryptSetKeyParam(key, KP_IV, IV, 0))
        { printf("worked"); }
    else
        { printf("faileD"); }

    DWORD dwCount = size;
    BYTE *decrypted = new BYTE[dwCount + 1];
    memcpy(decrypted, input, dwCount);
    decrypted[dwCount] = 0;

    if (CryptDecrypt(key, 0, true, 0, decrypted, &dwCount))
        { printf("succeeded"); }
    else
        { printf("failed"); }

    return (char *)decrypted;
}
input is the data passed to the function, encrypted. password is the same password used to encrypt the data in C#. size is the size of the data while encrypted.
All of the above functions return true until CryptDecrypt, and I cannot figure out why. At the same time, I'm not sure how the CryptDecrypt function could possibly edit my "decrypted" variable, since I am not passing a reference to it.
Any help or advice on why this is not working would be greatly appreciated. This is my first endeavour with WinCrypt and my first time using C++ in years.
If it is of any further help, this is my encryption (in C#):
public static byte[] EncryptString(byte[] input, string password)
{
    PasswordDeriveBytes pderiver = new PasswordDeriveBytes(password, null);
    byte[] ivZeros = new byte[8];
    byte[] pbeKey = pderiver.CryptDeriveKey("RC2", "MD5", 128, ivZeros);

    RC2CryptoServiceProvider RC2 = new RC2CryptoServiceProvider();
    // using an empty initialization vector for convenience.
    byte[] IV = new byte[8];
    ICryptoTransform encryptor = RC2.CreateEncryptor(pbeKey, IV);

    MemoryStream msEncrypt = new MemoryStream();
    CryptoStream csEncrypt = new CryptoStream(msEncrypt, encryptor, CryptoStreamMode.Write);
    csEncrypt.Write(input, 0, input.Length);
    csEncrypt.FlushFinalBlock();
    return msEncrypt.ToArray();
}
I have confirmed that my hash value in C++ is identical to my key in C#, created by PasswordDeriveBytes.CryptDeriveKey
First, as in my comment, use GetLastError() so you know why it failed. I'll assume that you get NTE_BAD_DATA; all the other errors are much easier to deal with, since they basically mean you missed some step in the API call sequence.
The typical reason why CryptDecrypt fails with NTE_BAD_DATA is that you're decrypting the last block of a block cypher (as you are) and the decrypted padding bytes are incorrect. This can happen if the input is truncated (not all encrypted bytes were saved to the file) or if the key is incorrect.
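For instance, a minimal check (a sketch using the question's variables):

if (!CryptDecrypt(key, 0, TRUE, 0, decrypted, &dwCount))
{
    // e.g. 0x80090005 is NTE_BAD_DATA: truncated input or a wrong key/padding
    printf("CryptDecrypt failed: 0x%08lx\n", GetLastError());
}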
I would suggest you take this methodically since there are so many places where this can fail that will all manifest only at CryptDecrypt time:
Ensure that the file you encrypt in C# can be decrypted in C#. This would eliminate any file save truncation issues.
Try to encrypt and decrypt with a fixed, hard-coded key first (no password derivation); this will ensure that your key setup and IV initialization are correct (as well as the padding mode and cypher chaining mode). A sketch follows this list.
Ensure that the password derivation process arrives at the same hash. Things like ANSI vs. Unicode or a terminating 0 can wreak havoc on the MD5 hash and result in wildly different keys from apparently the same password.
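For that hard-coded key test, the usual CryptoAPI route is importing a PLAINTEXTKEYBLOB; a sketch (hypothetical fixed key bytes, untested):

#pragma pack(push, 1)
struct Rc2KeyBlob {
    BLOBHEADER hdr;       // bType / bVersion / reserved / aiKeyAlg
    DWORD cbKeySize;      // length of the raw key bytes that follow
    BYTE rgbKeyData[16];
};
#pragma pack(pop)

Rc2KeyBlob blob = {};
blob.hdr.bType = PLAINTEXTKEYBLOB;
blob.hdr.bVersion = CUR_BLOB_VERSION;
blob.hdr.aiKeyAlg = CALG_RC2;
blob.cbKeySize = 16;
memcpy(blob.rgbKeyData, "0123456789abcdef", 16); // fixed test key

HCRYPTKEY fixedKey = NULL;
if (!CryptImportKey(provider, (BYTE*)&blob, sizeof(blob), 0, 0, &fixedKey))
    printf("CryptImportKey failed: 0x%08lx\n", GetLastError());
// KP_EFFECTIVE_KEYLEN and KP_IV still need to be set as in the question.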
Some people have discovered issues when moving between operating systems.
The CryptDeriveKey call uses a "default key length" based on the operating system and algorithm chosen. For RC2, the default generated key length is 40 bits on Windows 2000 and 128 bits on Windows 2003. This results in a "BAD DATA" return code when the generated key is used in a CryptDecrypt call.
Presumably this is related to "garbage" appearing at the end of the final buffer after trying to apply a 128-bit key to decrypt a 40-bit encrypted stream. The error code typically indicates bad padding bytes, but the root cause may be a key generation issue.
To generate a 40-bit encryption key, use (40 << 16) in the flags field of the CryptDeriveKey call.
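For example, a sketch based on the question's variable names (the upper 16 bits of the flags word carry the requested key length in bits):

// Request an explicit 40-bit RC2 key so both operating systems derive the same key.
if (!CryptDeriveKey(provider, CALG_RC2, hash, (40 << 16), &key))
    printf("CryptDeriveKey failed: 0x%08lx\n", GetLastError());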