How to generate a symmetric key in C or C++ the same way this script does?

I am implementing Azure DPS (device provisioning service) for my ESP32-based firmware.
The bash script I use so far is as follows (where KEY is the primary key of the DPS enrolment group and REG_ID is the registration device Id for the given ESP it runs on):
#!/bin/sh
KEY=KKKKKKKKK
REG_ID=RRRRRRRRRRR
keybytes=$(echo $KEY | base64 --decode | xxd -p -u -c 1000)
echo -n $REG_ID | openssl sha256 -mac HMAC -macopt hexkey:$keybytes -binary | base64
I use the Arduino platform in PlatformIO.
How do I translate this script to C/C++?
[UPDATE] The reason why I can't run OpenSSL: I need to generate the symmetric key from the actual device MAC address in order to obtain the credential from DPS and then be allowed to connect to IoT Hub. I run on an ESP32-based custom PCB. No shell. No OS.

I managed to do it by using the mbed TLS library (which is available on both ESP32/Arduino platforms).
Here is my implementation for the Arduino platform:
#include <mbedtls/md.h> // mbed tls lib used to sign SHA-256
#include <base64.hpp> // Densaugeo Base64 version 1.2.0 or 1.2.1
/// Returns the SHA-256 signature of [dataToSign] with the key [enrollmentPrimaryKey]
/// params[in]: dataToSign The data to sign (for our purpose, the registration ID, or the device ID if it is different)
/// params[in]: enrollmentPrimaryKey The group enrollment primary key.
/// returns The SHA-256 base-64 signature to present to DPS.
/// Note: I use mbed to SHA-256 sign.
String Sha256Sign(String dataToSign, String enrollmentPrimaryKey) {
  /// Length of the dataToSign string
  const unsigned dataToSignLength = dataToSign.length();
  /// Buffer to hold dataToSign as a C-style string (char[])
  char dataToSignChar[dataToSignLength + 1];
  dataToSign.toCharArray(dataToSignChar, dataToSignLength + 1);
  /// The binary key, decoded from its base-64 representation
  unsigned char decodedPSK[32];
  /// Binary HMAC-SHA256 signature
  unsigned char encryptedSignature[32];
  /// Base-64 encoded signature
  unsigned char encodedSignature[100];

  Serial.printf("Sha256Sign(): Registration Id to sign is: (%d bytes) %s\n", dataToSignLength, dataToSignChar);
  Serial.printf("Sha256Sign(): DPS group enrollment primary key is: (%d bytes) %s\n", enrollmentPrimaryKey.length(), enrollmentPrimaryKey.c_str());

  // Base64-decode the pre-shared key and record its decoded length
  const unsigned base64DecodedDeviceLength = decode_base64((unsigned char*)enrollmentPrimaryKey.c_str(), decodedPSK);
  Serial.printf("Sha256Sign(): Decoded primary key is: (%d bytes) ", base64DecodedDeviceLength);
  for (int i = 0; i < base64DecodedDeviceLength; i++) {
    Serial.printf("%02x ", (int)decodedPSK[i]);
  }
  Serial.println();

  // Use mbed TLS to compute the HMAC-SHA256
  mbedtls_md_type_t mdType = MBEDTLS_MD_SHA256;
  mbedtls_md_context_t hmacKeyContext;
  mbedtls_md_init(&hmacKeyContext);
  mbedtls_md_setup(&hmacKeyContext, mbedtls_md_info_from_type(mdType), 1);
  mbedtls_md_hmac_starts(&hmacKeyContext, (const unsigned char *)decodedPSK, base64DecodedDeviceLength);
  mbedtls_md_hmac_update(&hmacKeyContext, (const unsigned char *)dataToSignChar, dataToSignLength);
  mbedtls_md_hmac_finish(&hmacKeyContext, encryptedSignature);
  mbedtls_md_free(&hmacKeyContext);

  Serial.print("Sha256Sign(): Computed hash is: ");
  for (int i = 0; i < sizeof(encryptedSignature); i++) {
    Serial.printf("%02x ", (int)encryptedSignature[i]);
  }
  Serial.println();

  // Base64-encode the HMAC so it can be used to build the SAS token
  encode_base64(encryptedSignature, sizeof(encryptedSignature), encodedSignature);
  Serial.printf("Sha256Sign(): Computed hash as base64: %s\n", encodedSignature);
  return String((char*)encodedSignature);
}
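For reference, here is a minimal, hypothetical usage sketch of the function above on ESP32/Arduino. It assumes the registration ID is derived from the MAC address (read via WiFi.macAddress(), colons stripped and lower-cased) and uses a placeholder enrollment key; adapt the registration-ID convention to whatever your DPS enrollment expects.
#include <WiFi.h> // for WiFi.macAddress() on ESP32/Arduino

void setup() {
  Serial.begin(115200);

  // Placeholder for the DPS enrollment group primary key (base-64), not a real key.
  String enrollmentPrimaryKey = "KKKKKKKKK";

  // Use the MAC address, colons stripped and lower-cased, as the registration ID.
  String regId = WiFi.macAddress();
  regId.replace(":", "");
  regId.toLowerCase();

  // Sha256Sign() is the function defined above.
  String deviceKey = Sha256Sign(regId, enrollmentPrimaryKey);
  Serial.println("Derived device key: " + deviceKey);
}

void loop() {}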

You have a very interesting question from a mathematical/algorithmic point of view, so just for fun I decided to implement ALL of its sub-algorithms from scratch, with almost no dependency on the standard C++ library.
All of my algorithms are based on Wikipedia and are well described in its articles on SHA-256, HMAC, Base64, and hex encoding (and on StackOverflow).
The whole code is written from scratch, with almost no dependency on the standard C++ library. Only two headers are used right now: <cstdint>, for the sized integer types u8, u16, u32, i32, u64, i64,
and <string>, which is used only for heap allocations. You can easily move these heap allocations into my HeapMem class instead, or remove using String = std::string; (and #include <string>) in the first lines of my code and use Arduino's built-in heap-allocated String if it has one.
The header <iostream> is used only in the last few lines of the snippet, to print the result to the console so that StackOverflow visitors may run the program without external dependencies. This console output may of course be removed.
Besides the main algorithms I had to implement my own classes Array, Vector, Str, Tuple, and HeapMem to re-implement basic concepts of the standard C++ library. Standard library functions like MemSet(), MemCpy(), MemCmp(), StrLen(), and Move() also had to be implemented.
You may also notice that I never use exceptions in the code, in case you have them disabled or your toolchain doesn't support them. Instead of exceptions I implemented a special Result<T> template that resembles Result from the Rust language. This template is used to return and check correct and error results across the whole stack of functions.
All algorithms (SHA-256, HMAC, Base64) are tested with simple test cases using reference vectors taken from the internet. The final SignSha256() function that you want is also tested with several test cases against your reference bash/OpenSSL script.
Important! Don't use this code snippet directly in production code, because it is not very well tested and might contain errors. Use it only for educational purposes, or test it thoroughly before using it.
The code snippet is very large, around 32 KiB, bigger than the StackOverflow post size limit (30,000 characters), so I'm sharing it through two external services: GodBolt (click the Try it online! link), where you can also run it online, and a GitHub Gist for download/viewing.
SOURCE CODE HERE
Try it online on GodBolt!
GitHub Gist
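To illustrate the error-handling idea mentioned above, here is a minimal, hypothetical sketch of what such a Result<T> template could look like in C++ without exceptions. This is not the actual template from the linked snippet, just an illustration of the concept; ParseHexDigit() is an invented example.
#include <cstdint>

// Minimal Result<T>: holds either a value or an error message (a C string literal).
template <typename T>
class Result {
public:
    static Result Ok(T value)          { return Result(value, nullptr); }
    static Result Err(const char *msg) { return Result(T(), msg); }

    bool IsOk()  const { return err_ == nullptr; }
    bool IsErr() const { return err_ != nullptr; }

    const T &Value() const { return value_; }   // only meaningful if IsOk()
    const char *Error() const { return err_; }  // only meaningful if IsErr()

private:
    Result(T value, const char *err) : value_(value), err_(err) {}
    T value_;
    const char *err_;
};

// Example: a function in the call stack propagating errors without exceptions.
static Result<uint32_t> ParseHexDigit(char c) {
    if (c >= '0' && c <= '9') return Result<uint32_t>::Ok(uint32_t(c - '0'));
    if (c >= 'a' && c <= 'f') return Result<uint32_t>::Ok(uint32_t(c - 'a' + 10));
    if (c >= 'A' && c <= 'F') return Result<uint32_t>::Ok(uint32_t(c - 'A' + 10));
    return Result<uint32_t>::Err("not a hex digit");
}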

Related

Converting String from Delphi to std::string for C++

I am developing a device (ESP32) and there is an rxValue variable that is defined as below:
std::string rxValue = pCharacteristic->getValue(); <- ESP32 (C++) side
However, I need to send data (a string or a character) from the Delphi app via the SetValueAs... method, but when I use
Characteristics.SetValueAsString('b'); <- Delphi side (but problematic)
EDIT: I'm adding SetValueAsString procedure here
procedure TBluetoothGattCharacteristic.SetValueAsString(const AValue: string; IsUTF8: Boolean);
begin
if IsUTF8 then
Value := TEncoding.UTF8.GetBytes(AValue)
else
Value := TEncoding.Unicode.GetBytes(AValue);
end;
The Arduino serial monitor shows me a strange character (a square) and the command cannot be executed.
for (int i = 0; i < rxValue.length(); i++) { <- ESP32 (C++) side
Serial.print(rxValue[i]);
}
if (rxValue.find("a") != -1) {
digitalWrite(LED, HIGH);
}
As a result, what is the way to send a string from Delphi to C++ (an ESP32 board) as std::string? In addition (I didn't want to ask such a small question separately), what is the way to compare rxValue with a string exactly? I mean, if rxValue is exactly the same as abcdef, execute a command?
Thanks in advance.
The problem turned out to be more complex after diving in deeper. The main problem was the 8-bit / 16-bit string incompatibility, as @RemyLebeau pointed out, and @David Heffernan was correct that std::string cannot be used for interop across programming languages.
So I decided to convert my data to an 8-bit AnsiString and send that, but then I found that Delphi no longer supports AnsiChar and AnsiString in the mobile compilers (you can read here and here).
So I cannot use 8-bit strings in Delphi mobile (there are many people asking Embarcadero "why", because this means you cannot talk to devices that use 8-bit strings, which includes many IoT devices), but thank goodness some good people have solved this problem with a custom library (you can find the source here).
After adding this library to my app and executing the command as below, the problem was solved:
Characteristics.SetValueAsString(RawByteString('b'));
BUT the answer is: you can't use std::string this way; it's not suitable for interop.
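Regarding the exact-comparison part of the question, here is a minimal sketch of how the ESP32 side could compare the received value, assuming rxValue is a std::string as in the snippet above (handleRxValue is a hypothetical helper, not part of the original code).
#include <string>

// Hypothetical handler: act only when the received value matches exactly.
void handleRxValue(const std::string &rxValue) {
    // operator== compares length and every byte, so "abcdefg" or "abcdef\0" won't match.
    if (rxValue == "abcdef") {
        // digitalWrite(LED, HIGH);  // execute the command
    }

    // Note: rxValue.find("a") != std::string::npos only tells you the substring occurs
    // somewhere, which is why the original code also reacts to values that merely contain 'a'.
}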

Data Encryption from UNIVERSE/U2/PICK

I am extracting some data from a UNIVERSE system and want to encrypt it for transfer via email.
I am no UNIVERSE expert so am using bits and pieces we have found from around the internet and it "looks" like it is working BUT I just can't seem to decrypt the data.
Below is the script I have used based on code found on the web:
RESULT=''
ALGORITHM="rc2-cbc" ; * 128 bit rc2 algorithm in CBC mode
MYKEY="23232323" ; * HEX - Actual Key
IV= "12121212" ; * HEX - Initialization Vector
DATALOC=1 ; * Data in String
KEYLOC=1 ; * Key in String
ACTION=5 ; * Base64 encode after encryption
KEYACTION=1 ; * KEY_ACTUAL_OPENSSL
SALT='' ; * SALT not used
RESULTLOC=1 ; * Result in String RESULT
OPSTRING = ''
RETURN.CODE=ENCRYPT(ALGORITHM,ACTION,DATASTRING,DATALOC,MYKEY,KEYLOC,KEYACTION,SALT,IV,OPSTRING,RESULTLOC)
RETURN.CODE = OPSTRING
Below are a few data strings I have processed through this script and the resulting string:
INPUT 05KI
OUTPUT iaYoHzxYlmM=
INPUT 05FOAA
OUTPUT e0XB/jyE9ZM=
When I try to decode and decrypt the resulting OUTPUT with an online decrypter, I still get no results: https://www.tools4noobs.com/online_tools/decrypt/
I'm thinking it might be a character encoding issue or perhaps the encryption is not working but I have no idea how to resolve - we have been working on this for a few weeks and cannot get any data that is decryptable...
All setups and fields have been set based on this: https://www.dropbox.com/s/ban1zntdy0q27z3/Encrypt%20Function.pdf?dl=0
If I feed the base-64 encrypted string from your code back into the UniData DECRYPT function with the same parameters, it decrypts just fine.
I suspect something funny is happening with the key. This page mentions something like that: https://u2devzone.rocketsoftware.com/accelerate/articles/data-encryption/data-encryption.html "Generating a suitable key is one of the thornier problems associated with encryption. Keys should be generated as random binary strings, making them obviously difficult to remember. Accordingly, it is probably more common for applications to supply a pass phrase to the ENCRYPT function and have the function internally generate the actual encryption key."
One option to remove the UniVerse ENCRYPT function from the picture is to use OpenSSL directly. It looks like the ENCRYPT/DECRYPT functions are just thin wrappers around the OpenSSL library, so you can execute it to get the result. I'm having problems with the PHP page you're using for verification, but if I feed the base-64 encrypted string to an openssl decrypt command on a different machine, it decrypts fine.
MYKEY="A long secret key"
DATASTRING="data to be encrypted data here"
EXECUTE '!echo "':DATASTRING:'"| openssl enc -base64 -e -rc2-cbc -nosalt -k "':MYKEY:'"' CAPTURING RESULT
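For reference, openssl enc -k does not use the pass phrase as the key directly; it derives the key and IV with EVP_BytesToKey. Below is a minimal C++ sketch of that derivation for the rc2-cbc/-nosalt command above. Treat it as an assumption to verify against your OpenSSL version: openssl enc used MD5 for this derivation before 1.1.0 and SHA-256 from 1.1.0 onwards, and RC2 sits in the legacy provider in OpenSSL 3.x.
#include <cstdio>
#include <cstring>
#include <openssl/evp.h>

int main() {
    const char *passphrase = "A long secret key";  // same value as MYKEY above

    const EVP_CIPHER *cipher = EVP_rc2_cbc();      // 128-bit key, 8-byte IV by default
    const EVP_MD *digest = EVP_sha256();           // use EVP_md5() to match openssl < 1.1.0

    unsigned char key[EVP_MAX_KEY_LENGTH];
    unsigned char iv[EVP_MAX_IV_LENGTH];

    // -nosalt means no salt is mixed in, so pass NULL; `openssl enc` uses a count of 1.
    int keylen = EVP_BytesToKey(cipher, digest, nullptr,
                                (const unsigned char *)passphrase,
                                (int)strlen(passphrase), 1, key, iv);
    if (keylen <= 0) { fprintf(stderr, "key derivation failed\n"); return 1; }

    printf("key: ");
    for (int i = 0; i < keylen; i++) printf("%02x", key[i]);
    printf("\niv:  ");
    for (int i = 0; i < EVP_CIPHER_iv_length(cipher); i++) printf("%02x", iv[i]);
    printf("\n");
    return 0;
}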

What is the proper way to efficiently create digital signatures? Can I use DSA_sign_setup()?

I am working on an application whose performance is critical.
In this application I have a lot of messages (i.e. several thousand) that need to be signed (and verified, of course) separately with the same private/public key pair. I am using the OpenSSL library.
A naive approach with the DSA functions (see below) takes tens of seconds to sign, which is not nice. I tried to use the DSA_sign_setup() function to speed things up, but I can't figure out the correct way to use it.
I also tried ECDSA, but I got lost in finding the correct configuration.
What is the proper way to do this if I really care about efficiency?
#include <openssl/dsa.h>
#include <openssl/engine.h>
#include <stdio.h>
#include <openssl/evp.h>
int N=3000;
int main()
{
    DSA *set = DSA_new();
    int a;
    a = DSA_generate_parameters_ex(set, 1024, NULL, 1, NULL, NULL, NULL);
    printf("%d\n", a);
    a = DSA_generate_key(set);
    printf("%d\n", a);
    unsigned char msg[] = "I am watching you!I am watching you!";
    unsigned char sign[256];
    unsigned int size;
    for(int i = 0; i < N; i++)
        a = DSA_sign(1, msg, 32, sign, &size, set);
    printf("%d %d\n", a, size);
}
Using DSA_sign_setup() in the way proposed above is actually completely insecure, and luckily OpenSSL developers made the DSA structure opaque so that developers cannot try to force their way.
DSA_sign_setup() generates a new random nonce (that is sort of an ephemeral key per signature). It should never be reused under the same long term secret key. Never.
You could theoretically still be relatively safe reusing the same nonce for the same message, but as soon as the same combination of private key and nonce gets reused for two different messages you just reveal all the information that an attacker needs to retrieve your secret key (see Sony fail0verflow which is basically due to doing the same mistake of reusing the nonce with ECDSA).
Unfortunately DSA is slow, especially now that longer keys are required: to speed up your application you could try using ECDSA (e.g. with curve NISTP256, still no nonce reuse) or Ed25519 (deterministic nonce).
Proof of concept using the EVP_DigestSign API
Update: here is a proof of concept of how to programmatically generate signatures with OpenSSL.
The preferred way is to use the EVP_DigestSign API as it abstracts away which kind of asymmetric key is being used.
The following example expands the PoC in this OpenSSL wiki page: I tested that it works using a DSA or NIST P-256 private key, with OpenSSL 1.0.2, 1.1.0 and 1.1.1-pre6.
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <openssl/pem.h>
#include <openssl/err.h>
#include <openssl/evp.h>
#define KEYFILE "private_key.pem"
#define N 3000
#define BUFFSIZE 80
EVP_PKEY *read_secret_key_from_file(const char * fname)
{
    EVP_PKEY *key = NULL;
    FILE *fp = fopen(fname, "r");
    if(!fp) {
        perror(fname); return NULL;
    }
    key = PEM_read_PrivateKey(fp, NULL, NULL, NULL);
    fclose(fp);
    return key;
}

int do_sign(EVP_PKEY *key, const unsigned char *msg, const size_t mlen,
            unsigned char **sig, size_t *slen)
{
    EVP_MD_CTX *mdctx = NULL;
    int ret = 0;

    /* Create the Message Digest Context */
    if(!(mdctx = EVP_MD_CTX_create())) goto err;
    /* Initialise the DigestSign operation - SHA-256 has been selected
     * as the message digest function in this example */
    if(1 != EVP_DigestSignInit(mdctx, NULL, EVP_sha256(), NULL, key))
        goto err;
    /* Call update with the message */
    if(1 != EVP_DigestSignUpdate(mdctx, msg, mlen)) goto err;
    /* Finalise the DigestSign operation */
    /* First call EVP_DigestSignFinal with a NULL sig parameter to
     * obtain the length of the signature. Length is returned in slen */
    if(1 != EVP_DigestSignFinal(mdctx, NULL, slen)) goto err;
    /* Allocate memory for the signature based on size in slen */
    if(!(*sig = OPENSSL_malloc(*slen))) goto err;
    /* Obtain the signature */
    if(1 != EVP_DigestSignFinal(mdctx, *sig, slen)) goto err;

    /* Success */
    ret = 1;

err:
    if(ret != 1)
    {
        /* Do some error handling */
    }
    /* Clean up */
    if(*sig && !ret) OPENSSL_free(*sig);
    if(mdctx) EVP_MD_CTX_destroy(mdctx);
    return ret;
}

int main()
{
    int ret = EXIT_FAILURE;
    const char *str = "I am watching you!I am watching you!";
    unsigned char *sig = NULL;
    size_t slen = 0;
    unsigned char msg[BUFFSIZE];
    size_t mlen = 0;

    EVP_PKEY *key = read_secret_key_from_file(KEYFILE);
    if(!key) goto err;

    for(int i = 0; i < N; i++) {
        if ( snprintf((char *)msg, BUFFSIZE, "%s %d", str, i+1) < 0 )
            goto err;
        mlen = strlen((const char*)msg);
        if (!do_sign(key, msg, mlen, &sig, &slen)) goto err;
        OPENSSL_free(sig); sig = NULL;
        printf("\"%s\" -> siglen=%lu\n", msg, slen);
    }
    printf("DONE\n");
    ret = EXIT_SUCCESS;

err:
    if (ret != EXIT_SUCCESS) {
        ERR_print_errors_fp(stderr);
        fprintf(stderr, "Something broke!\n");
    }
    if (key)
        EVP_PKEY_free(key);
    exit(ret);
}
Generating a key:
# Generate a new NIST P-256 private key
openssl ecparam -genkey -name prime256v1 -noout -out private_key.pem
Performance/Randomness
I ran both your original example and my code on my (Intel Skylake) machine and on a Raspberry Pi 3. In both cases your original example does not take tens of seconds.
Given that apparently you see a huge performance improvement in using the insecure DSA_sign_setup() approach in OpenSSL 1.0.2 (which internally consumes new randomness, in addition to some somewhat expensive modular arithmetic), I suspect you might actually have a problem with the PRNG that is slowing down the generation of new random nonces and has a bigger impact than the modular arithmetic operations.
If that's the case you might definitely benefit from using Ed25519 as in that case the nonce is deterministic rather than random (it's generated using secure hash functions and combining the private key and the message).
Unfortunately that means that you will need to wait until OpenSSL 1.1.1 is released (hopefully during this summer).
On Ed25519
To use Ed25519 (which will be supported natively starting with OpenSSL 1.1.1) the above example needs to be modified, as in OpenSSL 1.1.1 there is no support for Ed25519ph and instead of using the Init/Update/Final streaming API you would need to call the one-shot EVP_DigestSign() interface (see documentation).
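To make that concrete, here is a sketch of what the one-shot call could look like with an Ed25519 key on OpenSSL 1.1.1+. do_sign_ed25519 is a hypothetical adaptation of the do_sign() above, not code from a released example, and the key would be generated with something like openssl genpkey -algorithm ed25519 -out private_key.pem.
/* Hypothetical Ed25519 variant of do_sign(): no digest is passed and the
 * streaming Update/Final calls are replaced by the one-shot EVP_DigestSign(). */
int do_sign_ed25519(EVP_PKEY *key, const unsigned char *msg, const size_t mlen,
                    unsigned char **sig, size_t *slen)
{
    EVP_MD_CTX *mdctx = NULL;
    int ret = 0;

    if(!(mdctx = EVP_MD_CTX_new())) goto err;
    /* For Ed25519 the digest argument must be NULL */
    if(1 != EVP_DigestSignInit(mdctx, NULL, NULL, NULL, key)) goto err;
    /* First call with a NULL signature buffer to obtain the required length */
    if(1 != EVP_DigestSign(mdctx, NULL, slen, msg, mlen)) goto err;
    if(!(*sig = (unsigned char *)OPENSSL_malloc(*slen))) goto err;
    /* One-shot sign: hashing and deterministic nonce derivation happen internally */
    if(1 != EVP_DigestSign(mdctx, *sig, slen, msg, mlen)) goto err;

    ret = 1;
err:
    if(*sig && !ret) OPENSSL_free(*sig);
    if(mdctx) EVP_MD_CTX_free(mdctx);
    return ret;
}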
Full disclaimer: the next paragraph is a shameless plug for my libsuola research project, as I could definitely benefit from testing for real-world applications from other users.
Alternatively, if you cannot wait, I am the developer of an OpenSSL ENGINE called libsuola that adds support for Ed25519 in OpenSSL 1.0.2, 1.1.0 (and also 1.1.1 using alternative implementations). It's still experimental, but it uses third-party implementations (libsodium, HACL*, donna) for the crypto part and so far my testing (for research purposes) has not yet revealed outstanding bugs.
Benchmarking comparison of OP original example and mine
To address some of the comments, I compiled and executed OP's original example, a slightly modified version fixing some bugs and memory leaks, and my example of how to use the EVP_DigestSign API, all compiled against OpenSSL 1.1.0h (compiled as a shared library to a custom prefix from the release tarball with default configuration parameters).
The full details can be found at this gist, which includes the exact versions I benchmarked, the Makefile containing all the details of how the examples were compiled and how the benchmark was run, and details about my machine (briefly, it's a quad-core i5-6500 @ 3.20GHz, with frequency scaling/Turbo Boost disabled from software and from the UEFI).
As can be seen from make_output.txt:
Running ./op_example
time ./op_example >/dev/null
0.32user 0.00system 0:00.32elapsed 100%CPU (0avgtext+0avgdata 3452maxresident)k
0inputs+0outputs (0major+153minor)pagefaults 0swaps
Running ./dsa_example
time ./dsa_example >/dev/null
0.42user 0.00system 0:00.42elapsed 100%CPU (0avgtext+0avgdata 3404maxresident)k
0inputs+0outputs (0major+153minor)pagefaults 0swaps
Running ./evp_example
time ./evp_example >/dev/null
0.12user 0.00system 0:00.12elapsed 99%CPU (0avgtext+0avgdata 3764maxresident)k
0inputs+0outputs (0major+157minor)pagefaults 0swaps
This shows that using ECDSA over NIST P-256 through the EVP_DigestSign API is 2.66x faster than the original OP's example and 3.5x faster than the corrected version.
As a late additional note, the code in this answer also computes the SHA256 digest of the input plaintext, while OP's original code and the "fixed" version skip it.
Therefore the speedup demonstrated by the ratios reported above is even more significant!
TL;DR: The proper way to efficiently use digital signatures in OpenSSL is through the EVP_DigestSign API: trying to use DSA_sign_setup() in the way proposed above is ineffective in OpenSSL 1.1.0 and 1.1.1, and is wrong (as in completely breaking the security of DSA and revealing the private key) in ≤1.0.2. I completely agree that the DSA API documentation is misleading and should be fixed; unfortunately the function DSA_sign_setup() cannot be completely removed as minor releases must retain binary compatibility, hence the symbol needs to stay there even for the upcoming 1.1.1 release (but is a good candidate for removal in the next major release).
I have decided to delete this answer because it compromises the efforts of the OpenSSL team to make their software safe.
The code I posted is still visible if you look at the edit but DO NOT USE IT, IT IS NOT SAFE. If you do, you risk exposing your private key.
Please don't say you haven't been warned. In fact, treat it as a warning if you are using DSA_sign_setup() in your own code, because you shouldn't be. Romen's answer above has more details about this. Thank you.
If the messages are large, it is customary to secure-hash them and sign the hash. This is a lot quicker. You need to transmit the message, the hash, and the signature, of course, and the checking process must include both re-hashing and checking that for equality, and digital signature verification.
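For the verification side mentioned above, the matching EVP_DigestVerify API can be used. Here is a minimal sketch: do_verify is a hypothetical counterpart to the do_sign() shown earlier, assuming OpenSSL 1.1.0+ and the signer's public key loaded into an EVP_PKEY.
/* Hypothetical counterpart to do_sign(): verify a signature over msg with a public key. */
int do_verify(EVP_PKEY *pubkey, const unsigned char *msg, const size_t mlen,
              const unsigned char *sig, const size_t slen)
{
    EVP_MD_CTX *mdctx = NULL;
    int ret = 0;

    if(!(mdctx = EVP_MD_CTX_create())) goto err;
    /* Same digest as the signing side (SHA-256 in the example above) */
    if(1 != EVP_DigestVerifyInit(mdctx, NULL, EVP_sha256(), NULL, pubkey)) goto err;
    if(1 != EVP_DigestVerifyUpdate(mdctx, msg, mlen)) goto err;
    /* EVP_DigestVerifyFinal returns 1 for a valid signature, 0 for an invalid one */
    ret = (1 == EVP_DigestVerifyFinal(mdctx, sig, slen));

err:
    if(mdctx) EVP_MD_CTX_destroy(mdctx);
    return ret;
}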

Exporting Plaintext AES 128 Key to buffer/file Windows Crypto API c++

I'm having a lot of difficulty understanding and implementing the Windows Crypto API to Import and Export Keys in c++.
Despite reading through the MSDN documentation many many times I can't seem to get it to work in the way I want.
Below is a snippet of code from what I'm working on.
if(CryptAcquireContext(&CryptoHandle, NULL, provPointer, PROV_RSA_AES, 0xF0000000))
{
    HCRYPTKEY aesKey;
    //We now have context on Enhanced AES
    if(CryptGenKey(CryptoHandle, CALG_AES_128, CRYPT_EXPORTABLE, &aesKey))
    {
        DWORD dwBlobLen;
        BYTE* pbKeyBlob;
        CryptExportKey(aesKey, 0, PLAINTEXTKEYBLOB, 0, NULL, &dwBlobLen);
        if(pbKeyBlob = new BYTE[dwBlobLen])
        {
            if(CryptExportKey(aesKey, NULL, PLAINTEXTKEYBLOB, 0, pbKeyBlob, &dwBlobLen))
            {
                //Blah Blah
            }
        }
    }
}
(Where provPointer is a pointer to the Enhanced crypto API provider string.)
As you might be able to tell from the snippet, I'm trying to export an AES-128 key as plaintext.
In the debugger it all executes fine (no visible errors) but I don't understand the outcome at all.
The first call to CryptExportKey fills dwBlobLen with '28' (what does this mean? why?).
After the second CryptExportKey I've tried writing pbKeyBlob (which I assume points to the key) to a file, but I just end up with a constant set of bytes (the same for every try) followed by a set of bytes that is different every time (I assume this is some of the key), adding up to 28 bytes total.
I'd really appreciate it if someone could identify where I've gone wrong. I'm pretty clueless with the whole crypto lingo (sessions, machine keys, blobs, etc.).
In the future I'd like to be able to generate an AES key, use it and export it into a file in a form where I can import it again later.
Thanks in advance.
I'm not an expert on the Windows Cryptography API (or on cryptography in general) but I believe I can shed some light on what's going on here.
The first call to CryptExportKey puts 28 in dwBlobLen because that is the size of the blob that it will created when the key is exported. This is in the MSDN docs: http://msdn.microsoft.com/en-us/library/windows/desktop/aa379931%28v=vs.85%29.aspx
As far as what you're doing wrong: you aren't doing anything wrong. You are asking CryptExportKey to export a plaintext blob, which has the following layout:
typedef struct _PLAINTEXTKEYBLOB {
    BLOBHEADER hdr;
    DWORD dwKeySize;
    BYTE rgbKeyData[];
} PLAINTEXTKEYBLOB, *PPLAINTEXTKEYBLOB;
As you can see, the blob starts with a header and a key size (which is the constant set of bytes which you have reported, and should be 12 bytes long), followed by the key data (which is the data that changes every time, and should be 16 bytes long). Remember you are generating a 128 bit key (which is 16 bytes).
The BLOBHEADER has the following layout:
typedef struct _BLOBHEADER {
    BYTE bType;
    BYTE bVersion;
    WORD Reserved;
    ALG_ID aiKeyAlg;
} BLOBHEADER;
By the way, from the doc on the CryptImportKey function: you can't import the raw key bytes directly. The BYTE array that you pass to CryptImportKey must be the full blob, i.e. the BLOBHEADER, followed by the key size, followed by the key data.
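To round this out, here is a minimal, hypothetical sketch of importing a raw 16-byte AES-128 key by building exactly that blob layout (header, key size, key bytes). ImportAes128Key is an illustrative helper, error handling is minimal, and note the total buffer size is 8 + 4 + 16 = 28 bytes, matching the dwBlobLen you observed.
#include <windows.h>
#include <wincrypt.h>
#include <cstring>

// Import a raw 128-bit AES key by wrapping it in a PLAINTEXTKEYBLOB-shaped buffer.
HCRYPTKEY ImportAes128Key(HCRYPTPROV hProv, const BYTE rawKey[16])
{
    struct {
        BLOBHEADER hdr;
        DWORD      dwKeySize;
        BYTE       rgbKeyData[16];
    } blob;

    blob.hdr.bType    = PLAINTEXTKEYBLOB;
    blob.hdr.bVersion = CUR_BLOB_VERSION;
    blob.hdr.reserved = 0;           // the field is spelled 'reserved' in wincrypt.h
    blob.hdr.aiKeyAlg = CALG_AES_128;
    blob.dwKeySize    = 16;          // key length in bytes
    memcpy(blob.rgbKeyData, rawKey, 16);

    HCRYPTKEY hKey = 0;
    if (!CryptImportKey(hProv, (const BYTE *)&blob, sizeof(blob), 0, 0, &hKey))
        return 0;                    // call GetLastError() for details
    return hKey;
}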

What are highest performance SQLite encryption codecs in Botan?

When using Botan encryption with botansqlite3, what are the optimal configuration settings for performance?
OR
How can I configure Botansqlite3 to use CAST5?
I am currently using AES and it is too slow. My use case is a game.
I am looking for weak or moderate encryption to protect my game's data (not end user data) so security is less of a consideration than performance.
Here is my current BotanSqlite3 codec.h
/*These constants can be used to tweak the codec behavior as follows */
//BLOCK_CIPHER_STR: Cipher and mode used for encrypting the database
//make sure to add "/NoPadding" for modes that use padding schemes
const string BLOCK_CIPHER_STR = "Twofish/XTS";
//PBKDF_STR: Key derivation function used to derive both the encryption
//and IV derivation keys from the given database passphrase
const string PBKDF_STR = "PBKDF2(SHA-160)";
//SALT_STR: Hard coded salt used to derive the key from the passphrase.
const string SALT_STR = "&g#nB'9]";
//SALT_SIZE: Size of the salt in bytes (as given in SALT_STR)
const int SALT_SIZE = 64/8; //64 bit, 8 byte salt
//MAC_STR: CMAC used to derive the IV that is used for db page
//encryption
const string MAC_STR = "CMAC(Twofish)";
//PBKDF_ITERATIONS: Number of hash iterations used in the key derivation
//process.
const int PBKDF_ITERATIONS = 10000;
//KEY_SIZE: Size of the encryption key. Note that XTS splits the key
//between two ciphers, so if you're using XTS, double the intended key
//size. (ie, "AES-128/XTS" should have a 256 bit KEY_SIZE)
const int KEY_SIZE = 512/8; //512 bit, 64 byte key. (256 bit XTS key)
//IV_DERIVATION_KEY_SIZE: Size of the key used with the CMAC (MAC_STR)
//above.
const int IV_DERIVATION_KEY_SIZE = 256/8; //256 bit, 32 byte key
//This is defined in sqlite.h and very unlikely to change
#define SQLITE_MAX_PAGE_SIZE 32768
I believe that I need to find replacements for BLOCK_CIPHER_STR, PBKDF_STR, MAC_STR, KEY_SIZE and IV_DERIVATION_KEY_SIZE to reconfigure BotanSqlite3 to use a different codec.
I found a extensive comparison test of Botan codec performance here:
http://panthema.net/2008/0714-cryptography-speedtest-comparison/crypto-speedtest-0.1/results/cpu-sidebyside-comparison-3x2.pdf#page=5
However, the testing was done with Botan directly, not botansqlite3 as I intend to use it. Looking at the charts, a good candidate appears to be CAST5 from a performance perspective.
The database in question is 300KB, mostly INTEGER fields with some text blobs.
I am configuring Botan as suggested by OlivierJG of botansqlite3 fame, using the amalgamation
'./configure.py --no-autoload --enable-modules=twofish,xts,pbkdf2,cmac,sha1 --gen-amalgamation --cc=msvc --os=win32 --cpu=x86 --disable-shared --disable-asm'
References:
http://github.com/OlivierJG/botansqlite3 - botansqlite3 is an encryption codec for SQLite3 that can use any algorithms in Botan for encryption
http://www.sqlite.org - sqlite3 is a cross-platform SQL database
http://botan.randombit.net/ - botan is a C++ encryption library with support for a number of codecs
You can get CAST-128 (or, as I was calling it, CAST5) to work; it is a block cipher.
The best bet is the configuration above, with the key size changed to match.
Twofish is pretty fast.
Thank you to 'Olivier JG' for all the excellent code.
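For anyone trying this, a hypothetical codec.h configuration for CAST-128 could look like the sketch below. These values are an untested guess based on the constants above: CAST-128 has a 64-bit block, so XTS is not usable here and CBC/NoPadding is assumed instead (SQLite pages are a multiple of 8 bytes), and the algorithm names should be verified against your Botan version.
//BLOCK_CIPHER_STR: CAST-128 has a 64-bit block, so XTS is out; plain CBC without
//padding works because SQLite pages are a multiple of the 8-byte block size.
//Algorithm names are assumptions: check them against your Botan build.
const string BLOCK_CIPHER_STR = "CAST-128/CBC/NoPadding";
//PBKDF_STR / SALT_STR / SALT_SIZE: key derivation can stay as before
const string PBKDF_STR = "PBKDF2(SHA-160)";
const string SALT_STR = "&g#nB'9]";
const int SALT_SIZE = 64/8;
//MAC_STR: CMAC over the same cipher for per-page IV derivation
const string MAC_STR = "CMAC(CAST-128)";
const int PBKDF_ITERATIONS = 10000;
//KEY_SIZE: CAST-128 takes at most a 128-bit key; no doubling since XTS is not used
const int KEY_SIZE = 128/8;
//IV_DERIVATION_KEY_SIZE: CAST-128 key size again, for the CMAC
const int IV_DERIVATION_KEY_SIZE = 128/8;
#define SQLITE_MAX_PAGE_SIZE 32768
The amalgamation would then presumably be configured with modules along the lines of --enable-modules=cast,cbc,pbkdf2,cmac,sha1 instead of twofish,xts; again, verify the module names against your Botan release.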