Methods DSA_do_verify and SHA1 (OpenSSL library for Windows) - C++

I am working on a program to authenticate an ENC signature file using OpenSSL for Windows, specifically the DSA_do_verify(...) function and the SHA1(...) hash algorithm, but I am having problems: the result from DSA_do_verify is always 0 (invalid).
I am using the signature file of test set 4B from the IHO S-63 Data Protection Scheme, and also the SA public key (downloadable from the IHO) for verification.
Below is an abstract of my program. Can anyone help me see where I have gone wrong? I have tried many ways but failed to get the verification to pass. Thanks.
The signature file from test set 4B
// Signature part R:
3F14 52CD AEC5 05B6 241A 02C7 614A D149 E7D6 C408.
// Signature part S:
44BB A3DB 8C46 8D11 B6DB 23BE 1A79 55E6 B083 7429.
// Signature part R:
93F5 EF86 1FF6 BA6F 1C2B B9BB 7F36 0C80 2F9B 2414.
// Signature part S:
4877 8130 12B4 50D8 3688 B52C 7A84 8E26 D442 8B6E.
// BIG p
C16C BAD3 4D47 5EC5 3966 95D6 94BC 8BC4 7E59 8E23 B5A9 D7C5 CEC8 2D65 B682 7D44 E953 7848 4730 C0BF F1F4 CB56 F47C 6E51 054B E892 00F3 0D43 DC4F EF96 24D4 665B.
// BIG q
B7B8 10B5 8C09 34F6 4287 8F36 0B96 D7CC 26B5 3E4D.
// BIG g
4C53 C726 BDBF BBA6 549D 7E73 1939 C6C9 3A86 9A27 C5DB 17BA 3CAC 589D 7B3E 003F A735 F290 CFD0 7A3E F10F 3515 5F1A 2EF7 0335 AF7B 6A52 11A1 1035 18FB A44E 9718.
// BIG y
15F8 A502 11C2 34BB DF19 B3CD 25D1 4413 F03D CF38 6FFC 7357 BCEE 59E4 EBFD B641 6726 5E5F 0682 47D4 B50B 3B86 7A85 FB4D 6E01 8329 A993 C36C FD9A BFB6 ED6D 29E0.
dataServer_pkeyfile.txt (extracted from above)
// BIG p
C16C BAD3 4D47 5EC5 3966 95D6 94BC 8BC4 7E59 8E23 B5A9 D7C5 CEC8 2D65 B682 7D44 E953 7848 4730 C0BF F1F4 CB56 F47C 6E51 054B E892 00F3 0D43 DC4F EF96 24D4 665B.
// BIG q
B7B8 10B5 8C09 34F6 4287 8F36 0B96 D7CC 26B5 3E4D.
// BIG g
4C53 C726 BDBF BBA6 549D 7E73 1939 C6C9 3A86 9A27 C5DB 17BA 3CAC 589D 7B3E 003F A735 F290 CFD0 7A3E F10F 3515 5F1A 2EF7 0335 AF7B 6A52 11A1 1035 18FB A44E 9718.
// BIG y
15F8 A502 11C2 34BB DF19 B3CD 25D1 4413 F03D CF38 6FFC 7357 BCEE 59E4 EBFD B641 6726 5E5F 0682 47D4 B50B 3B86 7A85 FB4D 6E01 8329 A993 C36C FD9A BFB6 ED6D 29E0.
Program abstract:
QByteArray pk_data;
QFile pk_file("./dataServer_pkeyfile.txt");
if (pk_file.open(QIODevice::Text | QIODevice::ReadOnly))
{
pk_data.append(pk_file.readAll());
}
pk_file.close();
unsigned char ptr_sha_hashed[20];
unsigned char *ptr_pk_data = (unsigned char *)pk_data.data();
// openssl SHA1 hashing algorithm
SHA1(ptr_pk_data, pk_data.length(), ptr_sha_hashed);
DSA_SIG *dsasig = DSA_SIG_new();
char ptr_r[] = "93F5EF861FF6BA6F1C2BB9BB7F360C802F9B2414"; //from test set 4B
char ptr_s[] = "4877813012B450D83688B52C7A848E26D4428B6E"; //from test set 4B
if (BN_hex2bn(&dsasig->r, ptr_r) == 0) return 0;
if (BN_hex2bn(&dsasig->s, ptr_s) == 0) return 0;
DSA *dsakeys = DSA_new();
//the following values are from the SA public key
char ptr_p[] = "FCA682CE8E12CABA26EFCCF7110E526DB078B05EDECBCD1EB4A208F3AE1617AE01F35B91A47E6DF63413C5E12ED0899BCD132ACD50D99151BDC43EE737592E17";
char ptr_q[] = "962EDDCC369CBA8EBB260EE6B6A126D9346E38C5";
char ptr_g[] = "678471B27A9CF44EE91A49C5147DB1A9AAF244F05A434D6486931D2D14271B9E35030B71FD73DA179069B32E2935630E1C2062354D0DA20A6C416E50BE794CA4";
char ptr_y[] = "963F14E32BA5372928F24F15B0730C49D31B28E5C7641002564DB95995B15CF8800ED54E354867B82BB9597B158269E079F0C4F4926B17761CC89EB77C9B7EF8";
if (BN_hex2bn(&dsakeys->p, ptr_p) == 0) return 0;
if (BN_hex2bn(&dsakeys->q, ptr_q) == 0) return 0;
if (BN_hex2bn(&dsakeys->g, ptr_g) == 0) return 0;
if (BN_hex2bn(&dsakeys->pub_key, ptr_y) == 0) return 0;
int result; //valid = 1, invalid = 0, error = -1
result = DSA_do_verify(ptr_sha_hashed, 20, dsasig, dsakeys);
//result is 0 (invalid)

Found the problem: the file should be opened as binary instead of text, as follows:
if (pk_file.open(QIODevice::ReadOnly))
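One further note, hedged: the direct struct member access in the snippet above (dsasig->r, dsakeys->p, and so on) only compiles against OpenSSL 1.0.x. From OpenSSL 1.1.0 on, DSA and DSA_SIG are opaque, so the values have to be supplied through the setter functions. A minimal sketch of the same verification for 1.1.0+, with error/cleanup handling trimmed:
#include <openssl/dsa.h>
#include <openssl/bn.h>
#include <openssl/sha.h>

// Returns 1 = valid, 0 = invalid, -1 = error. The hex strings are the same
// r/s signature parts and p/q/g/y key values used in the snippet above.
static int verify_enc_signature(const unsigned char *digest,        // 20-byte SHA-1 hash
                                const char *hex_r, const char *hex_s,
                                const char *hex_p, const char *hex_q,
                                const char *hex_g, const char *hex_y)
{
    int result = -1;
    DSA_SIG *sig = DSA_SIG_new();
    DSA *key = DSA_new();
    BIGNUM *r = NULL, *s = NULL, *p = NULL, *q = NULL, *g = NULL, *y = NULL;
    if (sig && key &&
        BN_hex2bn(&r, hex_r) && BN_hex2bn(&s, hex_s) &&
        BN_hex2bn(&p, hex_p) && BN_hex2bn(&q, hex_q) &&
        BN_hex2bn(&g, hex_g) && BN_hex2bn(&y, hex_y))
    {
        DSA_SIG_set0(sig, r, s);          // sig takes ownership of r and s
        DSA_set0_pqg(key, p, q, g);       // key takes ownership of p, q, g
        DSA_set0_key(key, y, NULL);       // public key only, no private key
        result = DSA_do_verify(digest, SHA_DIGEST_LENGTH, sig, key);
    }
    DSA_SIG_free(sig);
    DSA_free(key);
    return result;
}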

Related

RSA encryption breaks when M^e mod(N) = 0 (C++). How to fix?

Currently trying to implement RSA encryption in C++ and running into an issue with the encrypted message.
If we take p = 2, q = 7 then N = 14 and phi = 6. e is forced to be 5, and d can be a variety of numbers, but let's take the first applicable one from the list, d = 11. Now suppose the ASCII message to encrypt is "b".
"b" in ASCII is 98, and if we encrypt this with e = 5, N = 14, then 98^5 mod(14) = 0, so the resulting encrypted message is 0. Now if we try to decrypt with d = 11, or any d for that matter, 0^d mod(14) = 0 says the original message was 0 or NULL, which is not true.
How can this be solved so that any message can be encrypted and decrypted without loss of characters for which M^5 mod(14) is 0?
In the code we take the already encrypted data and turn it into a vector of ASCII values:
#define ull unsigned long long int
void dataToAscii() {
asciiVec = {};
for (wchar_t c: dataString) {
asciiVec.push_back((ull) c);
}
}
Then decrypt is called
void decrypt(){
std::vector<ull> decrypted;
for (int i=0;i<= this->asciiVec.size()-1;i++) {
decrypted.push_back(modPow((ull)this->asciiVec.at(i), (ull)privateKey.at(0), (ull)privateKey.at(1)));
}
deAsciiVec = decrypted;
deAscciiToData();
}
void deAscciiToData() {
deDataString = "";
for (ull el:deAsciiVec) {
deDataString.push_back((wchar_t) el);
}
}
But I don't think the issue lies with the code; rather, it is with the mathematics, as shown when M^e mod(N) = 0.
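A note on the mathematics (my own explanation, not from the thread): textbook RSA only recovers M when 0 <= M < N, because everything happens modulo N. Here the byte value 98 is larger than N = 14, so it is reduced to 98 mod 14 = 0 before encryption even starts, and no choice of d can bring it back. The fix is to pick p and q large enough that N exceeds the largest symbol you encrypt (N > 255 for single bytes), or to split the input into blocks smaller than N. A minimal sketch using the usual textbook values p = 61, q = 53, e = 17, d = 2753 (these numbers are illustrative assumptions, not from the question):
#include <iostream>

using ull = unsigned long long;

// Square-and-multiply modular exponentiation: (base^exp) mod mod.
static ull modPow(ull base, ull exp, ull mod) {
    ull result = 1;
    base %= mod;
    while (exp > 0) {
        if (exp & 1) result = (result * base) % mod;
        base = (base * base) % mod;
        exp >>= 1;
    }
    return result;
}

int main() {
    const ull N = 61ULL * 53ULL;    // 3233, comfortably larger than any byte value
    const ull e = 17, d = 2753;     // e*d == 1 (mod lcm(60, 52))
    const ull m = 98;               // ASCII 'b'
    const ull c = modPow(m, e, N);  // encrypt
    const ull m2 = modPow(c, d, N); // decrypt
    std::cout << m << " -> " << c << " -> " << m2 << "\n"; // recovers 98
    return 0;
}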

Pass Byte Array as std::vector<char> from Node.js to C++ Addon

I have some constraints where the addon is built with nan.h and v8 (not the new node-addon-api).
The end function is a part of a library. It accepts std::vector<char> that represents the bytes of an image.
I tried creating an image buffer from Node.js:
const img = fs.readFileSync('./myImage.png');
myAddonFunction(Buffer.from(img));
I am not really sure how to continue from here. I tried creating a new vector with a buffer, like so:
std::vector<char> buffer(data);
But it seems like I need to give it a size, which I am unsure how to get. Regardless, even when I use the initial buffer size (from Node.js), the image fails to go through.
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
[1] 16021 abort (core dumped)
However, when I read the image directly from C++, it all works fine:
std::ifstream ifs ("./myImage.png", std::ios::binary|std::ios::ate);
std::ifstream::pos_type pos = ifs.tellg();
std::vector<char> buffer(pos);
ifs.seekg(0, std::ios::beg);
ifs.read(&buffer[0], pos);
// further below, I pass "buffer" to the function and it works just fine.
But of course, I need the image to come from Node.js. Maybe Buffer is not what I am looking for?
Here is an example based on N-API; I would also encourage you to take a look at a similar implementation based on node-addon-api (an easy-to-use C++ wrapper on top of N-API):
https://github.com/nodejs/node-addon-examples/tree/master/array_buffer_to_native/node-addon-api
#include <assert.h>
#include "addon_api.h"
#include "stdio.h"
napi_value CArrayBuffSum(napi_env env, napi_callback_info info)
{
napi_status status;
const size_t MaxArgExpected = 1;
napi_value args[MaxArgExpected];
size_t argc = sizeof(args) / sizeof(napi_value);
status = napi_get_cb_info(env, info, &argc, args, nullptr, nullptr);
assert(status == napi_ok);
if (argc < 1)
napi_throw_error(env, "EINVAL", "Too few arguments");
napi_value buff = args[0];
napi_valuetype valuetype;
status = napi_typeof(env, buff, &valuetype);
assert(status == napi_ok);
if (valuetype == napi_object)
{
bool isArrayBuff = 0;
status = napi_is_arraybuffer(env, buff, &isArrayBuff);
assert(status == napi_ok);
if (isArrayBuff != true)
napi_throw_error(env, "EINVAL", "Expected an ArrayBuffer");
}
int32_t *buff_data = NULL;
size_t byte_length = 0;
int32_t sum = 0;
status = napi_get_arraybuffer_info(env, buff, (void **)&buff_data, &byte_length);
assert(status == napi_ok);
printf("\nC: Int32Array size = %d, (ie: bytes=%d)",
(int)(byte_length / sizeof(int32_t)), (int)byte_length);
for (int i = 0; i < byte_length / sizeof(int32_t); ++i)
{
sum += *(buff_data + i);
printf("\nC: Int32ArrayBuff[%d] = %d", i, *(buff_data + i));
}
napi_value rcValue;
napi_create_int32(env, sum, &rcValue);
return (rcValue);
}
The JavaScript code to call the addon
'use strict'
const myaddon = require('bindings')('mync1');
function test1() {
const array = new Int32Array(10);
for (let i = 0; i < 10; ++i)
array[i] = i * 5;
const sum = myaddon.ArrayBuffSum(array.buffer);
console.log();
console.log(`js: Sum of the array = ${sum}`);
}
test1();
The Output of the code execution:
C: Int32Array size = 10, (ie: bytes=40)
C: Int32ArrayBuff[0] = 0
C: Int32ArrayBuff[1] = 5
C: Int32ArrayBuff[2] = 10
C: Int32ArrayBuff[3] = 15
C: Int32ArrayBuff[4] = 20
C: Int32ArrayBuff[5] = 25
C: Int32ArrayBuff[6] = 30
C: Int32ArrayBuff[7] = 35
C: Int32ArrayBuff[8] = 40
C: Int32ArrayBuff[9] = 45
js: Sum of the array = 225
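Since the question's constraint was nan.h and v8 rather than N-API, here is a hedged sketch of the same idea using nan: node::Buffer::Data and node::Buffer::Length give the pointer and byte count of the Buffer sent from Node.js, and the std::vector range constructor copies exactly that many bytes. MyAddonFunction and libraryFunction are placeholder names, not anything from a real library.
#include <nan.h>
#include <vector>

// Sketch: expects a Node.js Buffer as the first argument.
NAN_METHOD(MyAddonFunction) {
    if (info.Length() < 1 || !node::Buffer::HasInstance(info[0])) {
        Nan::ThrowTypeError("Expected a Buffer");
        return;
    }
    char *data = node::Buffer::Data(info[0]);
    size_t length = node::Buffer::Length(info[0]);

    // Copy the raw bytes into the std::vector<char> the image library expects.
    std::vector<char> buffer(data, data + length);

    // libraryFunction(buffer);   // hypothetical call into the library
    info.GetReturnValue().Set(Nan::New<v8::Number>(static_cast<double>(buffer.size())));
}
On the JavaScript side this can be called exactly as in the question; the Buffer returned by fs.readFileSync('./myImage.png') can be passed directly, without the extra Buffer.from copy.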

NTE_BAD_DATA in CryptSetKeyParam while setting KP_P in wincrypt

I have the code below, where I am setting a prime for the Diffie-Hellman algorithm using a char *.
I am getting bad data after I set the prime. Where am I going wrong?
I followed the same example in this link.
https://msdn.microsoft.com/en-us/library/aa381969(VS.85).aspx#exchanging_diffie-hellman_keys
What is the correct way to set the prime in Diffie-Hellman using wincrypt?
#define DHKEYSIZE 1024
int fld_sz = 256;
BYTE* g_rgbPrime = new BYTE[DHKEYSIZE/8];
char * prime = "A1BD60EBD2D43C53FA78D938C1EF8C9AD231F9862FC402739302DEF1B6BEB01E5BE59848A04C48B0069A8FB56143688678F7CC1097B921EA3E13E1EF9B9EB5381BEFDE7BBF614C13827493A1CA31DA76B4083B62C5073451D6B1F06A2F1049C291464AC68CBB2F69474470BBAD374073392696B6447C82BF55F20B2D015EB97B";
string s_prime(prime, fld_sz);
vector<std::string> res;
// split the string two charactes for converting into hex format
for (size_t i = 0; i < fld_sz; i += 2)
res.push_back(s_prime.substr(i, 2));
for(int i = 0; i < res.size(); i++) {
BYTE b = static_cast<BYTE>(std::stoi(res[i], 0, 16));
g_rgbPrime[i] = b;
}
BYTE g_rgbGenerator[128] =
{
0x02
};
BOOL fReturn;
HCRYPTPROV hProvParty1 = NULL;
HCRYPTPROV hProvParty2 = NULL;
CRYPT_DATA_BLOB P;
CRYPT_DATA_BLOB G;
HCRYPTKEY hPrivateKey1 = NULL;
HCRYPTKEY hPrivateKey2 = NULL;
PBYTE pbKeyBlob1 = NULL;
PBYTE pbKeyBlob2 = NULL;
HCRYPTKEY hSessionKey1 = NULL;
HCRYPTKEY hSessionKey2 = NULL;
PBYTE pbData = NULL;
/************************
Construct data BLOBs for the prime and generator. The P and G
values, represented by the g_rgbPrime and g_rgbGenerator arrays
respectively, are shared values that have been agreed to by both
parties.
************************/
P.cbData = DHKEYSIZE / 8;
P.pbData = (BYTE*)(g_rgbPrime);
G.cbData = DHKEYSIZE / 8;
G.pbData = (BYTE*)(g_rgbGenerator);
/************************
Create the private Diffie-Hellman key for party 1.
************************/
// Acquire a provider handle for party 1.
fReturn = CryptAcquireContext(
&hProvParty1,
NULL,
MS_ENH_DSS_DH_PROV,
PROV_DSS_DH,
CRYPT_VERIFYCONTEXT);
if(!fReturn)
{
goto ErrorExit;
}
// Create an ephemeral private key for party 1.
fReturn = CryptGenKey(
hProvParty1,
CALG_DH_EPHEM,
DHKEYSIZE << 16 | CRYPT_EXPORTABLE | CRYPT_PREGEN,
&hPrivateKey1);
if(!fReturn)
{
goto ErrorExit;
}
// Set the prime for party 1's private key.
fReturn = CryptSetKeyParam(
hPrivateKey1,
KP_P,
(PBYTE)&P,
0);
if(!fReturn)
{
std::cout << GetLastError() << endl;
goto ErrorExit;
}
// Set the generator for party 1's private key.
fReturn = CryptSetKeyParam(
hPrivateKey1,
KP_G,
(PBYTE)&G,
0);
if(!fReturn)
{
std::cout << GetLastError() << endl;
goto ErrorExit;
}
Thanks in advance.
Update 1:
Thanks to @RbMm I was able to set the prime. The problem was with DHKEYSIZE. However, I am now getting an error while setting KP_X. I have updated the code above to reflect the new code.
Here I converted the string to a hex byte array.
The size of the prime KP_P (and the generator KP_G) and the DH key size are hard-linked: cbKey == 8 * cbP must hold. Look, for example, at the MSDN sample "Diffie-Hellman Client Code for Creating the Master Key":
cbP * 8 is used as the key size, where cbP is the size of the prime P. In your link, too, P.cbData = DHKEYSIZE/8;
Also, instead of hard-coding the size of P (and G) in the code, you can query it at runtime:
ULONG dwDataLen;
CryptGetKeyParam(hPrivateKey1, KP_P, 0, &(dwDataLen = 0), 0);
CryptGetKeyParam(hPrivateKey1, KP_G, 0, &(dwDataLen = 0), 0);
and confirm that dwDataLen == DHKEYSIZE / 8, where DHKEYSIZE is the key size in bits.
Because you use 512 as the key size, the data length for P and G must be 512/8 = 64 bytes, but you pass 256 (for P) and 1 (for G); hence the error.
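To illustrate that size rule, here is a hedged helper (the names are mine, not from the CryptoAPI samples): it converts the hex string to bytes and checks that the result is exactly DHKEYSIZE / 8 bytes before the blob goes anywhere near CryptSetKeyParam. With DHKEYSIZE = 1024, the 256-character prime string above yields exactly 128 bytes, and the 128-element g_rgbGenerator array is already padded to the same length.
#include <windows.h>
#include <wincrypt.h>
#include <stdexcept>
#include <string>
#include <vector>

// Convert a hex string (two characters per byte) into a byte buffer and
// verify that it is exactly the size the DH key length requires.
static std::vector<BYTE> HexToDhBlob(const std::string &hex, size_t dhKeyBits)
{
    const size_t expectedBytes = dhKeyBits / 8;   // e.g. 1024 / 8 = 128
    std::vector<BYTE> bytes;
    bytes.reserve(hex.size() / 2);
    for (size_t i = 0; i + 1 < hex.size(); i += 2)
        bytes.push_back(static_cast<BYTE>(std::stoul(hex.substr(i, 2), nullptr, 16)));
    if (bytes.size() != expectedBytes)
        throw std::runtime_error("blob length does not match DHKEYSIZE / 8");
    return bytes;
}

// Usage sketch:
//   std::vector<BYTE> primeBytes = HexToDhBlob(prime, DHKEYSIZE);
//   P.pbData = primeBytes.data();
//   P.cbData = static_cast<DWORD>(primeBytes.size());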

How to programmatically decrypt aes-256-cbc file which was encrypted using password? [duplicate]

For example, the command:
openssl enc -aes-256-cbc -a -in test.txt -k pinkrhino -nosalt -p -out openssl_output.txt
outputs something like:
key = 33D890D33F91D52FC9B405A0DDA65336C3C4B557A3D79FE69AB674BE82C5C3D2
iv = 677C95C475C0E057B739750748608A49
How is that key generated? (C code as an answer would be too awesome to ask for :) )
Also, how is the iv generated?
Looks like some kind of hex to me.
OpenSSL uses the function EVP_BytesToKey. You can find the call to it in apps/enc.c. The enc utility used to use the MD5 digest by default in its key derivation function (KDF) if you didn't specify a different digest with the -md argument. Now it uses SHA-256 by default. Here's a working example using MD5:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <openssl/evp.h>
int main(int argc, char *argv[])
{
const EVP_CIPHER *cipher;
const EVP_MD *dgst = NULL;
unsigned char key[EVP_MAX_KEY_LENGTH], iv[EVP_MAX_IV_LENGTH];
const char *password = "password";
const unsigned char *salt = NULL;
int i;
OpenSSL_add_all_algorithms();
cipher = EVP_get_cipherbyname("aes-256-cbc");
if(!cipher) { fprintf(stderr, "no such cipher\n"); return 1; }
dgst=EVP_get_digestbyname("md5");
if(!dgst) { fprintf(stderr, "no such digest\n"); return 1; }
if(!EVP_BytesToKey(cipher, dgst, salt,
(unsigned char *) password,
strlen(password), 1, key, iv))
{
fprintf(stderr, "EVP_BytesToKey failed\n");
return 1;
}
printf("Key: "); for(i=0; i<cipher->key_len; ++i) { printf("%02x", key[i]); } printf("\n");
printf("IV: "); for(i=0; i<cipher->iv_len; ++i) { printf("%02x", iv[i]); } printf("\n");
return 0;
}
Example usage:
gcc b2k.c -o b2k -lcrypto -g
./b2k
Key: 5f4dcc3b5aa765d61d8327deb882cf992b95990a9151374abd8ff8c5a7a0fe08
IV: b7b4372cdfbcb3d16a2631b59b509e94
Which generates the same key as this OpenSSL command line:
openssl enc -aes-256-cbc -k password -nosalt -p < /dev/null
key=5F4DCC3B5AA765D61D8327DEB882CF992B95990A9151374ABD8FF8C5A7A0FE08
iv =B7B4372CDFBCB3D16A2631B59B509E94
OpenSSL 1.1.0c changed the digest algorithm used in some internal components: formerly MD5 was used, and 1.1.0 switched to SHA-256. Take care that the change does not affect you, both in EVP_BytesToKey and in commands like openssl enc.
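For instance, data encrypted by an older OpenSSL with the default KDF digest can usually still be decrypted by a 1.1.0+ build by pinning the digest explicitly; an illustrative command matching the example at the top of this question would be:
openssl enc -aes-256-cbc -d -a -md md5 -k pinkrhino -nosalt -in openssl_output.txt -out decrypted.txt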
If anyone is looking to implement the same in Swift:
I converted EVP_BytesToKey to Swift.
/*
- parameter keyLen: keyLen
- parameter ivLen: ivLen
- parameter digest: digest e.g "md5" or "sha1"
- parameter salt: salt
- parameter data: data
- parameter count: count
- returns: key and IV respectively
*/
open static func evpBytesToKey(_ keyLen:Int, ivLen:Int, digest:String, salt:[UInt8], data:Data, count:Int)-> [[UInt8]] {
let saltData = Data(bytes: UnsafePointer<UInt8>(salt), count: Int(salt.count))
var both = [[UInt8]](repeating: [UInt8](), count: 2)
var key = [UInt8](repeating: 0,count: keyLen)
var key_ix = 0
var iv = [UInt8](repeating: 0,count: ivLen)
var iv_ix = 0
var nkey = keyLen;
var niv = ivLen;
var i = 0
var addmd = 0
var md:Data = Data()
var md_buf:[UInt8]
while true {
addmd = addmd + 1
md.append(data)
md.append(saltData)
if(digest=="md5"){
md = NSData(data:md.md5()) as Data
}else if (digest == "sha1"){
md = NSData(data:md.sha1()) as Data
}
for _ in 1...(count-1){
if(digest=="md5"){
md = NSData(data:md.md5()) as Data
}else if (digest == "sha1"){
md = NSData(data:md.sha1()) as Data
}
}
md_buf = Array (UnsafeBufferPointer(start: md.bytes, count: md.count))
// md_buf = Array(UnsafeBufferPointer(start: md.bytes.bindMemory(to: UInt8.self, capacity: md.count), count: md.length))
i = 0
if (nkey > 0) {
while(true) {
if (nkey == 0){
break
}
if (i == md.count){
break
}
key[key_ix] = md_buf[i];
key_ix = key_ix + 1
nkey = nkey - 1
i = i + 1
}
}
if (niv > 0 && i != md_buf.count) {
while(true) {
if (niv == 0){
break
}
if (i == md_buf.count){
break
}
iv[iv_ix] = md_buf[i]
iv_ix = iv_ix + 1
niv = niv - 1
i = i + 1
}
}
if (nkey == 0 && niv == 0) {
break
}
}
both[0] = key
both[1] = iv
return both
}
I use CryptoSwift for the hash.
This is a much cleaner way, as Apple does not recommend OpenSSL on iOS.
UPDATE : Swift 3
Here is a version for mbedTLS / PolarSSL - tested and working.
typedef int bool;
#define false 0
#define true (!false)
//------------------------------------------------------------------------------
static bool EVP_BytesToKey( const unsigned int nDesiredKeyLen, const unsigned char* salt,
const unsigned char* password, const unsigned int nPwdLen,
unsigned char* pOutKey, unsigned char* pOutIV )
{
// This is a re-implementation of openssl's password-to-key-and-IV routine for mbedtls.
// (See openssl apps/enc.c and /crypto/evp/evp_key.c.) It is not any kind of
// standard (e.g. PBKDF2), and it only uses an iteration count of 1, so it's
// pretty crappy. MD5 is used as the digest in OpenSSL 1.0.2; 1.1 and later
// use SHA256. Since this is for an embedded system, I figure you know what you've
// got, so I made it compile-time configurable.
//
// The signature has been re-jiggered to make it less general.
//
// See: https://wiki.openssl.org/index.php/Manual:EVP_BytesToKey(3)
// And: https://www.cryptopp.com/wiki/OPENSSL_EVP_BytesToKey
#define IV_BYTE_COUNT 16
#if BTK_USE_MD5
# define DIGEST_BYTE_COUNT 16 // MD5
#else
# define DIGEST_BYTE_COUNT 32 // SHA
#endif
bool bRet;
unsigned char md_buf[ DIGEST_BYTE_COUNT ];
mbedtls_md_context_t md_ctx;
bool bAddLastMD = false;
unsigned int nKeyToGo = nDesiredKeyLen; // 32, typical
unsigned int nIVToGo = IV_BYTE_COUNT;
mbedtls_md_init( &md_ctx );
#if BTK_USE_MD5
int rc = mbedtls_md_setup( &md_ctx, mbedtls_md_info_from_type( MBEDTLS_MD_MD5 ), 0 );
#else
int rc = mbedtls_md_setup( &md_ctx, mbedtls_md_info_from_type( MBEDTLS_MD_SHA256 ), 0 );
#endif
if (rc != 0 )
{
fprintf( stderr, "mbedtls_md_setup() failed -0x%04x\n", -rc );
bRet = false;
goto exit;
}
while( 1 )
{
mbedtls_md_starts( &md_ctx ); // start digest
if ( bAddLastMD == false ) // first time
{
bAddLastMD = true; // do it next time
}
else
{
mbedtls_md_update( &md_ctx, &md_buf[0], DIGEST_BYTE_COUNT );
}
mbedtls_md_update( &md_ctx, &password[0], nPwdLen );
mbedtls_md_update( &md_ctx, &salt[0], 8 );
mbedtls_md_finish( &md_ctx, &md_buf[0] );
//
// Iteration loop here in original removed as unused by "openssl enc"
//
// Following code treats the output key and iv as one long, concatenated buffer
// and smears as much digest across it as is available. If not enough, it takes the
// big, enclosing loop, makes more digest, and continues where it left off on
// the last iteration.
unsigned int ii = 0; // index into mb_buf
if ( nKeyToGo != 0 ) // still have key to fill in?
{
while( 1 )
{
if ( nKeyToGo == 0 ) // key part is full/done
break;
if ( ii == DIGEST_BYTE_COUNT ) // ran out of digest, so loop
break;
*pOutKey++ = md_buf[ ii ]; // stick byte in output key
nKeyToGo--;
ii++;
}
}
if ( nIVToGo != 0 // still have to fill up the IV
&& // and
ii != DIGEST_BYTE_COUNT // have some digest available
)
{
while( 1 )
{
if ( nIVToGo == 0 ) // iv is full/done
break;
if ( ii == DIGEST_BYTE_COUNT ) // ran out of digest, so loop
break;
*pOutIV++ = md_buf[ ii ]; // stick byte in output IV
nIVToGo--;
ii++;
}
}
if ( nKeyToGo == 0 && nIVToGo == 0 ) // output full, break main loop and exit
break;
} // outermost while loop
bRet = true;
exit:
mbedtls_md_free( &md_ctx );
return bRet;
}
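A hedged usage sketch for the port above, assuming aes-256-cbc sizes (32-byte key, 16-byte IV); note that the port always hashes 8 salt bytes, which would normally come from the "Salted__" header of the OpenSSL-encrypted file:
// Example call (sketch, not part of the original post).
static void example_derive_key_iv( void )
{
    unsigned char key[32];                 // aes-256 key
    unsigned char iv[16];                  // cbc IV
    unsigned char salt[8] = { 0 };         // replace with the real 8 salt bytes
    const unsigned char pwd[] = "password";

    if ( EVP_BytesToKey( sizeof(key), salt, pwd,
                         (unsigned int)( sizeof(pwd) - 1 ), key, iv ) )
    {
        // key/iv can now be fed to mbedtls_aes_setkey_dec() / mbedtls_aes_crypt_cbc()
    }
}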
If anyone passing through here is looking for a working, performant reference implementation in Haskell, here it is:
import Crypto.Hash
import qualified Data.ByteString as B
import Data.ByteArray (convert)
import Data.Monoid ((<>))
evpBytesToKey :: HashAlgorithm alg =>
Int -> Int -> alg -> Maybe B.ByteString -> B.ByteString -> (B.ByteString, B.ByteString)
evpBytesToKey keyLen ivLen alg mSalt password =
let bytes = B.concat . take required . iterate go $ hash' passAndSalt
(key, rest) = B.splitAt keyLen bytes
in (key, B.take ivLen rest)
where
hash' = convert . hashWith alg
required = 1 + ((keyLen + ivLen - 1) `div` hashDigestSize alg)
passAndSalt = maybe password (password <>) mSalt
go = hash' . (<> passAndSalt)
It uses hash algorithms provided by the cryptonite package. The arguments are desired key and IV size in bytes, the hash algorithm to use (like e.g. (undefined :: MD5)), optional salt and the password. The result is a tuple of key and IV.

std::list copy to std::vector skipping elements

I've run across a rather bizarre exception while running C++ code in my Objective-C application. I'm using libxml2 to read an XSD file. I then store the relevant tags as instances of the Tag class in a std::list. I then copy this list into a std::vector using an iterator on the list. However, every now and then some elements of the list aren't copied to the vector. Any help would be greatly appreciated.
printf("\n length list = %lu, length vector = %lu\n",XSDFile::tagsList.size(), XSDFile::tags.size() );
std::list<Tag>::iterator it = XSDFile::tagsList.begin();
//result: length list = 94, length vector = 0
/*
for(;it!=XSDFile::tagsList.end();++it)
{
XSDFile::tags.push_back(*it); //BAD_ACCESS code 1 . . very bizarre . . . . 25
}
*/
std::copy (XSDFile::tagsList.begin(), XSDFile::tagsList.end(), std::back_inserter (XSDFile::tags));
printf("\n Num tags in vector = %lu\n", XSDFile::tags.size());
if (XSDFile::tagsList.size() != XSDFile::tags.size())
{
printf("\n length list = %lu, length vector = %lu\n",XSDFile::tagsList.size(), XSDFile::tags.size() );
//result: length list = 94, length vector = 83
}
I've found the problem. The memory was corrupted, causing the std::list to become corrupted during the parsing of the XSD. I parse the XSD using the function start_element.
xmlSAXHandler handler = {0};
handler.startElement = start_element;
I used malloc guard in Xcode to locate the use of freed memory. It pointed to the line:
std::strcpy(message, (char*)name);
So I removed the malloc (it is commented out in the code below) and it worked. The std::vector now consistently copies all 94 entries of the list. If anyone has an explanation as to why this worked, that would be great.
static void start_element(void * ctx, const xmlChar *name, const xmlChar **atts)
{
// int len = strlen((char*)name);
// char *message = (char*)malloc(len*sizeof(char));
// std::strcpy(message, (char*)name);
if (atts != NULL)
{
// atts[0] = type
// atts[1] = value
// len = strlen((char*)atts[1]);
// char *firstAttr = (char*)malloc(len*sizeof(char));
// std::strcpy(firstAttr, (char*)atts[1]);
if(strcmp((char*)name, "xs:include")==0)
{
XSDFile xsd;
xsd.ReadXSDTypes((char*)atts[1]);
}
else if(strcmp((char*)name, "xs:element")==0)
{
doElement(atts);
}
else if(strcmp((char*)name, "xs:sequence")==0)
{
//set the default values
XSDFile::sequenceMin = XSDFile::sequenceMax = 1;
if (sizeof(atts) == 4)
{
if(strcmp((char*)atts[3],"unbounded")==0)
XSDFile::sequenceMax = -1;
int i = 0;
while(atts[i] != NULL)
{
//atts[i] = name
//atts[i+i] = value
std::string name((char*)atts[i]);
std::string value((char*)atts[i+1]);
if(name=="minOccurs")
XSDFile::sequenceMin = (atoi(value.c_str()));
else if(name=="maxOccurs")
XSDFile::sequenceMax = (atoi(value.c_str()));
i += 2;
}
}
}
}
//free(message);
}
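The most likely explanation, for what it's worth: malloc(len*sizeof(char)) allocates strlen(name) bytes, but std::strcpy writes strlen(name) + 1 bytes because it also copies the terminating '\0', so every call overflowed the heap block by one byte and corrupted neighbouring allocations, which is exactly the kind of damage that later shows up as a corrupted std::list. A hedged sketch of the two straightforward fixes:
#include <cstdlib>
#include <cstring>
#include <string>
#include <libxml/parser.h>

// Two ways to copy the element name without the one-byte heap overflow.
static void copy_element_name(const xmlChar *name)
{
    // Option 1: allocate strlen + 1 so the terminating '\0' fits.
    size_t len = std::strlen(reinterpret_cast<const char *>(name));
    char *message = static_cast<char *>(std::malloc(len + 1));
    if (message != nullptr)
    {
        std::strcpy(message, reinterpret_cast<const char *>(name));
        // ... use message ...
        std::free(message);
    }

    // Option 2 (simpler): let std::string own the copy.
    std::string message2(reinterpret_cast<const char *>(name));
    (void)message2;
}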