Load/Export RandomNumber - c++

In the context of a home-made ECDHE application, both the client and the server have to send a randomly generated number (rng), in order to build the MasterSecret later during the handshake (TLS-like)...
With crypto++, it's easy to create these numbers, thanks to :
AutoSeededRandomPool rng;
My problem is 1) how to export them to a string or equivalent, and 2) how to load them back from a string.
I must put these numbers within a frame, and neither the class definition nor the examples explain how to do that.
On the web I haven't been able to find Save/Load examples (like the ones for RSA::PublicKey).
Apparently I'm the first to want this, as the examples generate the client and the server in the same program, and thus don't need to transmit the numbers.
And, as part of this handshake, I'm also trying to do the same with the curve IDs...

This question was a misunderstanding on my part, so I'll explain it in case anyone has the same questions. It's largely inspired by the Crypto++ wiki...
There are 2 distinct objects :
AutoSeededRandomPool prng;
prng.GenerateBlock( scratch, scratch.size() );
AutoSeededRandomPool prng; creates the generator of random numbers (which will be auto-seeded).
prng.GenerateBlock is the call that extracts bytes from this generator to fill the buffer scratch (a SecByteBlock in the wiki examples) to the desired length.
And since the raw bytes of scratch can be copied into a std::string, we can do what we want with them and use them anywhere... So please refer to standard string import/export.
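For example, here is a minimal sketch (mine, not from the wiki; the 32-byte length and the hex transport encoding are my own choices) that generates a random block and round-trips it through a std::string so it can be placed in a frame:
#include <cryptopp/osrng.h>
#include <cryptopp/secblock.h>
#include <cryptopp/hex.h>
#include <cryptopp/filters.h>
#include <iostream>
#include <string>

int main()
{
    using namespace CryptoPP;

    AutoSeededRandomPool prng;
    SecByteBlock scratch(32);                     // e.g. a TLS-like ClientRandom
    prng.GenerateBlock(scratch, scratch.size());

    // Export: hex-encode the raw bytes into a std::string for the frame
    std::string encoded;
    StringSource ss1(scratch, scratch.size(), true,
                     new HexEncoder(new StringSink(encoded)));

    // Import: on the receiving side, decode the string back to raw bytes
    std::string decoded;
    StringSource ss2(encoded, true,
                     new HexDecoder(new StringSink(decoded)));

    std::cout << encoded << " -> " << decoded.size() << " bytes\n";
    return 0;
}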

Related

How to generate a symmetric key in C or C++ the same way this script does?

I am implementing Azure DPS (device provisioning service) for my ESP32-based firmware.
The bash script I use so far is as follows (where KEY is the primary key of the DPS enrolment group and REG_ID is the registration device Id for the given ESP it runs on):
#!/bin/sh
KEY=KKKKKKKKK
REG_ID=RRRRRRRRRRR
keybytes=$(echo $KEY | base64 --decode | xxd -p -u -c 1000)
echo -n $REG_ID | openssl sha256 -mac HMAC -macopt hexkey:$keybytes -binary | base64
I use the Arduino platform in PlatformIO.
How can I translate the script into C/C++?
[UPDATE] The reason why I can't run OpenSSL: I need to generate the symmetric key from the actual device MAC address in order to obtain the credential from DPS and then be allowed to connect to IoT Hub - I run on an ESP32-based custom PCB. No shell. No OS.
I managed to do it by using the mbed TLS library (which is available on both the ESP32/Arduino platforms).
Here is my implementation for the Arduino platform:
#include <mbedtls/md.h>   // mbed TLS, used to compute the HMAC-SHA256
#include <base64.hpp>     // Densaugeo Base64, version 1.2.0 or 1.2.1

/// Returns the HMAC-SHA256 signature of [dataToSign] with the key [enrollmentPrimaryKey].
/// param[in] dataToSign The data to sign (for our purpose, the registration ID, or the device ID if it is different).
/// param[in] enrollmentPrimaryKey The group enrollment primary key (base64-encoded).
/// returns The base64-encoded HMAC-SHA256 signature to present to DPS.
/// Note: I use mbed TLS to compute the HMAC-SHA256.
String Sha256Sign(String dataToSign, String enrollmentPrimaryKey){
  /// Length of the dataToSign string
  const unsigned dataToSignLength = dataToSign.length();
  /// Buffer to hold dataToSign as a char[] buffer converted from String.
  char dataToSignChar[dataToSignLength + 1];
  /// String to C-style string (char[])
  dataToSign.toCharArray(dataToSignChar, dataToSignLength + 1);

  /// The binary decoded key (from its base64 representation)
  unsigned char decodedPSK[32];
  /// HMAC-SHA256 binary signature
  unsigned char encryptedSignature[32];
  /// Base64-encoded signature
  unsigned char encodedSignature[100];

  Serial.printf("Sha256Sign(): Registration Id to sign is: (%d bytes) %s\n", dataToSignLength, dataToSignChar);
  Serial.printf("Sha256Sign(): DPS group enrollment primary key is: (%d bytes) %s\n", enrollmentPrimaryKey.length(), enrollmentPrimaryKey.c_str());

  // Base64 decode the pre-shared key and keep its decoded length
  const unsigned base64DecodedDeviceLength = decode_base64((unsigned char*)enrollmentPrimaryKey.c_str(), decodedPSK);
  Serial.printf("Sha256Sign(): Decoded primary key is: (%d bytes) ", base64DecodedDeviceLength);
  for(int i = 0; i < base64DecodedDeviceLength; i++) {
    Serial.printf("%02x ", (int)decodedPSK[i]);
  }
  Serial.println();

  // Use mbed TLS to compute the HMAC
  mbedtls_md_type_t mdType = MBEDTLS_MD_SHA256;
  mbedtls_md_context_t hmacKeyContext;
  mbedtls_md_init(&hmacKeyContext);
  mbedtls_md_setup(&hmacKeyContext, mbedtls_md_info_from_type(mdType), 1);
  mbedtls_md_hmac_starts(&hmacKeyContext, (const unsigned char *) decodedPSK, base64DecodedDeviceLength);
  mbedtls_md_hmac_update(&hmacKeyContext, (const unsigned char *) dataToSignChar, dataToSignLength);
  mbedtls_md_hmac_finish(&hmacKeyContext, encryptedSignature);
  mbedtls_md_free(&hmacKeyContext);

  Serial.print("Sha256Sign(): Computed hash is: ");
  for(int i = 0; i < sizeof(encryptedSignature); i++) {
    Serial.printf("%02x ", (int)encryptedSignature[i]);
  }
  Serial.println();

  // Base64 encode the HMAC
  encode_base64(encryptedSignature, sizeof(encryptedSignature), encodedSignature);
  Serial.printf("Sha256Sign(): Computed hash as base64: %s\n", encodedSignature);

  // This is the signature used to build the actual SAS token
  return String((char*)encodedSignature);
}
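A hypothetical call, mirroring the placeholders from the bash script above (the literals are not a real key or registration ID):
String token = Sha256Sign("RRRRRRRRRRR" /* REG_ID */, "KKKKKKKKK" /* base64 KEY */);
Serial.println(token); // for real inputs, this should match the script's output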
You have a very interesting question from a mathematical/algorithmic point of view. So just for fun I decided to implement ALL of its sub-algorithms from scratch, with almost no dependency on the standard C++ library.
All my algorithms are based on Wikipedia and are described well in its articles on SHA-256, HMAC, Base64 (and StackOverflow), and Hex.
I wrote the whole of my code specifically from scratch and with almost no dependency on the standard C++ library. Only two headers are used right now: <cstdint> for implementing all the sized integers (u8, u16, u32, i32, u64, i64).
And <string> is used only to implement heap allocations. You can easily implement these heap allocations inside my HeapMem class instead, by removing using String = std::string; (and #include <string>) on the first lines of my code and using the built-in heap-allocated String of Arduino, if it has one.
The header <iostream> is used only in the last few lines of the code snippet, only to output the result to the console, so that StackOverflow visitors may run the program without external dependencies. This console output may be removed, of course.
Besides the main algorithms I had to implement my own classes Array, Vector, Str, Tuple, HeapMem to re-implement basic concepts of the standard C++ library. Standard library functions like MemSet(), MemCpy(), MemCmp(), StrLen(), Move() also had to be implemented.
You may notice too that I never used exceptions in the code, specifically in case you have disabled them or your platform doesn't support them. Instead of exceptions I implemented a special Result<T> template that resembles Result from the Rust language. This template is used to return/check correct and error results through the whole stack of functions.
All algorithms (Sha256, Hmac, Base64) are tested by simple test cases with reference vectors taken from the internet. The final SignSha256() function that you wanted is also tested by several test cases against your reference bash OpenSSL script.
Important! Don't use this code snippet directly in production code, because it is not very well tested and might contain some errors. Use it only for educational purposes, or test it thoroughly before using it.
The code snippet is very large, around 32 KiB, bigger than the limit on StackOverflow post size (which is 30,000 characters), so I'm sharing the code snippet through two external services - GodBolt (click the Try it online! link), where you can also test it online, and the GitHub Gist service, for download/view only.
SOURCE CODE HERE
Try it online on GodBolt!
GitHub Gist

Saving key and iv to file AES implementation Crypto++

So I am using the Crypto++ library to encrypt a file. I need to save the key and IV for future use. I am following this tutorial. Here is my function:
void AESUtil::encrypt(string filename, bool savekeys, string savefilename){
    AutoSeededRandomPool rnd;

    // Generate a random key
    byte key[AES::DEFAULT_KEYLENGTH];
    rnd.GenerateBlock(key, AES::DEFAULT_KEYLENGTH);

    // Generate a random IV
    byte iv[AES::BLOCKSIZE];
    rnd.GenerateBlock(iv, AES::BLOCKSIZE);

    Binary b;
    string plaintext = b.decoder(filename);
    unsigned char *ciphertext = new unsigned char[plaintext.size()+1];
    ciphertext[plaintext.size()] = '\0';

    if(savekeys){
        ofstream("key.bin", ios::binary).write((char*)key, sizeof(key));
    }

    CFB_Mode<AES>::Encryption cfbEncryption(key, AES::DEFAULT_KEYLENGTH, iv);
    cfbEncryption.ProcessData(ciphertext, reinterpret_cast<const unsigned char*>(plaintext.c_str()), plaintext.size()+1);

    // Open in binary mode, and write the actual number of processed bytes;
    // sizeof(ciphertext) would only give the size of the pointer.
    ofstream outfile(savefilename.c_str(), ios::binary);
    outfile.write((char*)ciphertext, plaintext.size()+1);
    delete[] ciphertext;
}
The files contain unreadable binary data. I want to know the best method to programmatically save the key and IV (which are byte arrays) to a file, and the ciphertext (which is an unsigned char*) to a separate file.
The key could be saved in a separate file. Normally the key is established between sender / receiver in advance, or it is encrypted using a public key of the receiver. Note that it doesn't make sense to save the key next to the ciphertext, as it would provide no protection whatsoever. The handling of keys is called key management and entire books have been written about it.
The IV is a different animal. The IV should be randomly generated. For CFB it should at least be unique, and it must be identical at both sides. Usually the IV is simply prefixed to the ciphertext; it doesn't have to be kept secret.
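For example, a minimal sketch (my own illustration, not the asker's code; the helper name is made up) of generating the IV, prefixing it to the ciphertext, and writing both to one file with Crypto++:
#include <cryptopp/aes.h>
#include <cryptopp/modes.h>
#include <cryptopp/osrng.h>
#include <fstream>
#include <string>

using namespace CryptoPP;

// Hypothetical helper: writes IV + ciphertext into a single file.
void encryptToFile(const std::string& plaintext, const byte* key, const std::string& path)
{
    AutoSeededRandomPool rnd;
    byte iv[AES::BLOCKSIZE];
    rnd.GenerateBlock(iv, sizeof(iv));

    std::string ciphertext(plaintext.size(), '\0');
    CFB_Mode<AES>::Encryption enc(key, AES::DEFAULT_KEYLENGTH, iv);
    enc.ProcessData(reinterpret_cast<byte*>(&ciphertext[0]),
                    reinterpret_cast<const byte*>(plaintext.data()), plaintext.size());

    std::ofstream out(path, std::ios::binary);
    out.write(reinterpret_cast<const char*>(iv), sizeof(iv));  // IV first: it is public
    out.write(ciphertext.data(), ciphertext.size());           // then the ciphertext
}
The receiver reads the first AES::BLOCKSIZE bytes back as the IV and decrypts the rest.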
Your key and iv variables are the key and IV used to encrypt the plain text.
You didn't fill either; you're actually using an array filled with 0 bytes as both the key and IV for your encryption.
The IV is public information. You don't need to hide it. Save it the way you want.
The KEY is what you must keep safe. To do that you may decide how much effort you want to put on it to hide it from the external factors.
I have some keys that I don't mind leaving as "plain text" in the binary code. (NO SECURITY - my mom can't figure out what to do with them, but a beginner in reverse engineering will laugh at it.)
For some keys I play with the bytes, like inverting parts, separating them, XORing with something. (Very unsafe, but better than nothing; a programmer with decent knowledge of reverse engineering will be able to spend some time and eventually break the security.)
In some other cases I use 3rd-party advanced obfuscation... If possible, depending on what you want, you may even replace your encryption engine with some "white-box" cryptography. Then you will have your keys very well protected. But this is usually hard/expensive, and it doesn't seem to be your case. (Yes, even a very advanced assembly guru will not be happy to start reverse engineering this case.)
Another solution, if you don't need the key in your binary, is to hand it to the system's password manager. On Windows it's called the "Data Protection API", and on Mac it's called "Keychain". Take a look at these, and then you will understand why this is considered secure: all the passwords there are encrypted with the user's password, so nothing is stored on disk in the clear. A turned-off device in this scenario is considered very secure.
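To illustrate the Windows half of that suggestion, a minimal sketch (my own, not part of the answer above; it assumes user-scope protection and linking against Crypt32.lib) of wrapping a key with the Data Protection API before it ever touches the disk:
#include <windows.h>
#include <wincrypt.h>   // CryptProtectData / CryptUnprotectData; link with Crypt32.lib

// Protects rawKey under a key derived from the logged-on user's credentials.
// The caller must LocalFree(outBlob->pbData) when finished with it.
bool protectKey(const BYTE* rawKey, DWORD rawKeyLen, DATA_BLOB* outBlob)
{
    DATA_BLOB in;
    in.cbData = rawKeyLen;
    in.pbData = const_cast<BYTE*>(rawKey);
    return CryptProtectData(&in, L"aes key", nullptr, nullptr, nullptr, 0, outBlob) != 0;
}
The resulting blob can be written to disk safely; CryptUnprotectData reverses it, but only for the same user on the same machine.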

Exporting Plaintext AES 128 Key to buffer/file Windows Crypto API c++

I'm having a lot of difficulty understanding and implementing the Windows Crypto API to Import and Export Keys in c++.
Despite reading through the MSDN documentation many many times I can't seem to get it to work in the way I want.
Below is a snippet of code from what I'm working on.
if(CryptAcquireContext(&CryptoHandle, NULL, provPointer, PROV_RSA_AES, 0xF0000000))
{
    HCRYPTKEY aesKey;
    // We now have a context on Enhanced AES
    if(CryptGenKey(CryptoHandle, CALG_AES_128, CRYPT_EXPORTABLE, &aesKey))
    {
        DWORD dwBlobLen;
        BYTE* pbKeyBlob;
        CryptExportKey(aesKey, 0, PLAINTEXTKEYBLOB, 0, NULL, &dwBlobLen);
        if(pbKeyBlob = new BYTE[dwBlobLen])
        {
            if(CryptExportKey(aesKey, NULL, PLAINTEXTKEYBLOB, 0, pbKeyBlob, &dwBlobLen))
            {
                // Blah blah
            }
        }
    }
}
(Where provPointer is a pointer to the Enhanced Crypto API provider string.)
As you might be able to tell from the snippet, I'm trying to export an AES-128 key as plaintext.
In the debugger it all executes fine (no visible errors), but I don't understand the outcome at all.
The first call to CryptExportKey fills dwBlobLen with '28' (what does this mean? why?).
After the second CryptExportKey I've tried writing pbKeyBlob (which I assume points to the key) to a file, but I just end up with a constant set of bytes (the same for every try) followed by a set of bytes that is different every time (I assume this is some of the key), adding up to 28 bytes total.
I'd really appreciate it if someone could identify where I've gone wrong. I'm pretty clueless with the whole crypto lingo (sessions, machine keys, blobs, etc.).
In the future I'd like to be able to generate an AES key, use it, and export it into a file in a form where I can import it again later.
Thanks in advance.
I'm not an expert on the Windows Cryptography API (or on cryptography in general) but I believe I can shed some light on what's going on here.
The first call to CryptExportKey puts 28 in dwBlobLen because that is the size of the blob that will be created when the key is exported. This is in the MSDN docs: http://msdn.microsoft.com/en-us/library/windows/desktop/aa379931%28v=vs.85%29.aspx
As far as what you're doing wrong: you aren't doing anything wrong. You are asking CryptExportKey to export a plaintext blob, which has the following layout:
typedef struct _PLAINTEXTKEYBLOB {
    BLOBHEADER hdr;
    DWORD dwKeySize;
    BYTE rgbKeyData[];
} PLAINTEXTKEYBLOB, *PPLAINTEXTKEYBLOB;
As you can see, the blob starts with a header and a key size (the constant set of bytes you have reported, which together should be 12 bytes long), followed by the key data (the data that changes every time, which should be 16 bytes long). Remember you are generating a 128-bit key (which is 16 bytes).
The BLOBHEADER has the following layout:
typedef struct _BLOBHEADER {
    BYTE bType;
    BYTE bVersion;
    WORD Reserved;
    ALG_ID aiKeyAlg;
} BLOBHEADER;
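Concretely, pulling the three pieces back out of the 28-byte buffer is just pointer arithmetic (a sketch using the variable name from the question):
BLOBHEADER* hdr = (BLOBHEADER*)pbKeyBlob;                        // first 8 bytes
DWORD keySize = *(DWORD*)(pbKeyBlob + sizeof(BLOBHEADER));       // next 4 bytes: 16 for AES-128
BYTE* keyData = pbKeyBlob + sizeof(BLOBHEADER) + sizeof(DWORD);  // the 16 raw key bytes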
By the way, from the doc on the CryptImportKey function, you can't import the PLAINTEXTKEYBLOB directly, because the BYTE array that you pass to CryptImportKey does not include the key size. You need to pass a buffer with the BLOBHEADER followed by the key data.
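For the goal of importing the key again later, a hedged sketch: the MSDN sample "Example C Program: Importing a Plaintext Key" passes a buffer with exactly this header + key-size + key-data layout to CryptImportKey, so the exported blob may well round-trip as-is; verify this against the docs in light of the caveat above.
HCRYPTKEY importedKey = 0;
// pbKeyBlob and dwBlobLen are the buffer and length filled in by CryptExportKey.
if(CryptImportKey(CryptoHandle, pbKeyBlob, dwBlobLen, 0, 0, &importedKey))
{
    // importedKey can now be used with CryptEncrypt / CryptDecrypt.
    CryptDestroyKey(importedKey);
}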

crypto++ saving key to a byte queue and back to a key

I want to license my software using RSA encryption. My software has several executables and I plan to have each check the signature of a common license file before they proceed to do what it is they do. My goal is not to make it impossible to circumvent the licensing protection, just make it very difficult. I know that no one can make it impossible.
The executables currently run in a Windows environment, but they will only be released to work in a Linux environment.
My current plan is to put the public key within each executable for signature verification. The programs already have a 'safe' encrypted area in which to put the key.
My question for this post is, does my implementation method make sense? Is there another alternative? The alternative to have the public key in a separate file would allow a hacker to replace that file and use their own license file and signature. That doesn't seem as safe.
Additionally I've been reading through the crypto++ documentation and running the example code to try and accomplish this task. I cannot get any code to work that puts the key into a non-file sink and back again. All the examples write and read to files. I need to be able to save and load from a string or a byte queue. A simple attempt to do this is below, but when it runs I get this error, when r2.BERDecodePrivateKey() executes:
Unhandled exception at 0x7630c41f in MyRSA.exe: Microsoft C++ exception: CryptoPP::BERDecodeErr at memory location 0x002ef32c..
#include <cstdio>
#include <osrng.h>
using CryptoPP::AutoSeededRandomPool;
#include "rsa.h"
using CryptoPP::RSA;
#include <queue.h>
using CryptoPP::ByteQueue;

int myCode_key2ByteQueueToKey(void)
{
    ////////////////////////////////////////////////////////////////////////
    // Generate the keys
    AutoSeededRandomPool rnd;
    CryptoPP::RSA::PrivateKey r1;
    r1.GenerateRandomWithKeySize(rnd, 2048);
    CryptoPP::RSA::PublicKey rsaPublic(r1);

    ////////////////////////////////////////////////////////////////////////
    // Put the 'inner' part of the key into a ByteQueue - whatever that is.
    ByteQueue queue;
    r1.DEREncodePublicKey(queue);

    ////////////////////////////////////////////////////////////////////////
    // Copy the byte-queued inner key into another key
    CryptoPP::RSA::PrivateKey r2;
    r2.BERDecodePrivateKey(queue, false /*optParams*/, queue.MaxRetrievable());

    ////////////////////////////////////////////////////////////////////////
    // Validate that the key made the trip in and out of a byte queue ok.
    if(!r1.Validate(rnd, 3)) printf("Validation of original key failed.\n");
    if(!r2.Validate(rnd, 3)) printf("Validation of reloaded key failed.\n");
    if(r1.GetModulus() != r2.GetModulus() ||
       r1.GetPublicExponent() != r2.GetPublicExponent() ||
       r1.GetPrivateExponent() != r2.GetPrivateExponent())
    {
        printf("Key didn't survive round trip in and out of a byte queue.");
    }
    return 0;
}
I do not have a fix for the above code. There's something I don't understand about the library, and as a result something's missing, but I've got to get a move on.
I thought I'd post an alternative I've found. It's an example on the Crypto++ wiki that puts the keys into strings (and not files) and back again. The round trip is shown to work.
http://www.cryptopp.com/wiki/BERDecode
Instead of
CryptoPP::RSA::PrivateKey
it uses
CryptoPP::RSAES_OAEP_SHA_Decryptor
and similarly for the public key. This allows the use of a member function AccessKey() which isn't available for the PrivateKey class.
If anyone has a fix for the original code, I urge you to please post it as it would help me better understand this library.
So basically you do this:
Generate a 2048-bit private key
Encode the public key in DER format, from the exponents of your private key, into your ByteQueue
Try to decode it as a private key from the ByteQueue <-- here is the error
Validate the key...
You cannot decode a public key into a private key; some encoding flags differ.
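So a sketch of the fix for the original code (assuming the intent was to round-trip the private key; DEREncodePrivateKey is the private-key counterpart the original code was missing):
ByteQueue queue;
r1.DEREncodePrivateKey(queue);   // encode the PRIVATE key, not the public part

CryptoPP::RSA::PrivateKey r2;
r2.BERDecodePrivateKey(queue, false /*optParams*/, queue.MaxRetrievable());
With matching encode/decode calls, the validation and modulus/exponent comparisons in the original function should pass.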
As for CryptoPP::RSAES_OAEP_SHA_Decryptor,
it's good practice to use it for key generation, as it is aware of safe primes.
It is also simpler to use for general decryption tasks, as it includes everything you need in one object.
See http://www.cryptopp.com/wiki/Keys_and_Formats#High_Level_Objects
Hope it helps, even if it's a late answer ;)

How to use openssl to base 64 decode?

I am developing in C++ using Boost.Asio. I want to be able to base64 decode data, and since Boost.Asio links to OpenSSL I want to use its functions to do so rather than add an extra dependency (e.g. Crypto++). I have found this code here that shows how to do it. (Change int finalLen = BIO_read(bmem, (void*)pOut, outLen); to inLen.)
I don't know if it works. I just pass it some test data that I verify with an online decoder found here (select decode safely as text and count the symbols). The test string I use is this one: "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=" (without the quotes). Both the online decoder and a quick Crypto++ implementation return 23 chars, but the code I mentioned above using OpenSSL returns 0. Am I doing something wrong? Is OpenSSL suitable for base64 decoding?
Please provide a solution (if one exists). Thanks for your time.
Pff, sorry, my mistake. I forgot to allocate memory for pOut. The code seems to work now.
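For later visitors, here is a minimal sketch of the BIO-based decode (my own arrangement of the standard BIO_f_base64 pattern; the up-front allocation of the output buffer is exactly the piece that was missing):
#include <openssl/bio.h>
#include <openssl/evp.h>
#include <string>

std::string base64Decode(const std::string& in)
{
    std::string out(in.size(), '\0');            // decoded output is always shorter than the input
    BIO* b64 = BIO_new(BIO_f_base64());
    BIO_set_flags(b64, BIO_FLAGS_BASE64_NO_NL);  // the input contains no newlines
    BIO* chain = BIO_push(b64, BIO_new_mem_buf((void*)in.data(), (int)in.size()));
    int len = BIO_read(chain, &out[0], (int)out.size());
    BIO_free_all(chain);
    out.resize(len > 0 ? len : 0);
    return out;
}
// base64Decode("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=") returns 23 bytes, matching the decoders above.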