I'm writing a WebSocket server in C++ and I'm having a problem calculating the Sec-WebSocket-Accept key. I've tested my SHA-1 implementation and it generates the right value (I took the string from the Wikipedia example and got the same hash), and I also used an online Base64 converter to test the second part, which seems to work correctly. But as far as I can tell from other issues, my resulting string is too long.
std::string secKey = readHeader(buffer,n,"Sec-WebSocket-Key");
std::string magic = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11";
secKey.append(magic);
unsigned char hash[20];
char hexstring[41];
sha1::calc(secKey.c_str(),secKey.length(),hash);
sha1::toHexString(hash,hexstring);
std::string encoded = base64_encode(reinterpret_cast<const unsigned char*>(hexstring) ,strlen(hexstring));
For secKey = "4aRdFZG5uYrEUw8dsNLW6g==" I get encoded value "YTYwZWRlMjQ4NWFhNzJiYmRjZTQ5ODI4NjUwMWNjNjE1YTM0MzZkNg=="
I think you can skip the toHexString line and base64 encode the 20 byte hash instead.
sha1::calc(secKey.c_str(),secKey.length(),hash);
std::string encoded = base64_encode(reinterpret_cast<const unsigned char*>(hash), 20);
That's what my C++ server does and handshakes complete successfully.
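For reference, here is a minimal end-to-end sketch of the accept-key computation, reusing the sha1::calc and base64_encode helpers from the question (their declarations below are assumptions based on how they are called). With the RFC 6455 example key "dGhlIHNhbXBsZSBub25jZQ==" it should produce "s3pPLMBiTxaQ9kYGzzhZRbK+xOo=".
#include <string>

// Assumed declarations, matching how the helpers are used in the question.
namespace sha1 { void calc(const void* src, size_t bytelength, unsigned char* hash); }
std::string base64_encode(const unsigned char* bytes, unsigned int len);

std::string computeAcceptKey(const std::string& secKey)
{
    // Append the fixed GUID from RFC 6455 to the client's key.
    std::string input = secKey + "258EAFA5-E914-47DA-95CA-C5AB0DC85B11";

    // SHA-1 gives a raw 20-byte digest; do NOT hex-encode it.
    unsigned char hash[20];
    sha1::calc(input.c_str(), input.length(), hash);

    // Base64 of the raw digest is the Sec-WebSocket-Accept value.
    return base64_encode(hash, 20);
}

// computeAcceptKey("dGhlIHNhbXBsZSBub25jZQ==") == "s3pPLMBiTxaQ9kYGzzhZRbK+xOo="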
I am having a lot of trouble with gRPC when using a byte array. This is my .proto:
message myType {
    int32 format = 1;
    bytes data = 2;
}
I am using C++ for the server implementation and Java for the client. Using ByteString in Java is a breeze, but I cannot deserialize it properly in C++ (the byte[] is changed from what was sent from Java).
buffer is a byte[]: byte buffer[<large_size>]. I'm converting the byte array (it's an image) into a smaller byte array, and it crashes when trying to convert the byte[] received from gRPC. The conversion function in C++ is fine, since I used it on the same image before introducing gRPC.
This is the deserialization code in C++. Here "req" is a myType object, and buffer is a byte[]:
myFormat = req->format();
dataLen = req->data().length();
memcpy(buffer, req->data().c_str(), dataLen);
From what I understand, req->data() is a C++ std::string.
On the client side, you should pass both the parameter and its length.
parameter.set_framemat(mat, 12);
Do check that the length of the array on the server side is not zero. Note that bytes is a char array and gRPC demarshals it as a string, so if the array is filled with null characters the data length comes out as zero.
I was trying a similar use case
Proto file
message Parameters {
    bytes framemat = 16;
};
Client Snippet
const char mat[12] = {0};
parameter.set_framemat(mat);
stream->Write(parameter);
Server Snippet
std::thread reader([&]() {
    ::nokia::nas::Parameters request;
    while (stream->Read(&request))
    {
        // get the params
        grpc::string mat = request.framemat();
        logger->info("Length of bytes = {}", mat.length());
    }
});
Output was zero!
So I changed the client input to check with some char strings, and sure enough the data length came out as 4 on the server side, because this time it was a valid string and the string length matched.
const char mat[12] = "sdsd";
A better way is to specify in the proto both the data and the data length:
message StreamBytes {
    bytes data_bytes = 1;
    int32 data_length = 2;
};
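As a rough sketch of that idea, using the StreamBytes message above (the buffer declaration and message object here are just stand-ins): the client passes an explicit length when setting the bytes field, and the server uses the field's size() rather than treating it as a C string.
// Client side: pass pointer + length so embedded zero bytes survive.
const char mat[12] = {0};
StreamBytes msg;
msg.set_data_bytes(mat, sizeof(mat));
msg.set_data_length(sizeof(mat));

// Server side: a bytes field arrives as a std::string; use size(), not strlen().
unsigned char buffer[1024];                    // stands in for the receive buffer
const std::string& payload = msg.data_bytes();
size_t dataLen = payload.size();               // correct even if the data is all zeros
memcpy(buffer, payload.data(), dataLen);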
I am working on a Google Chrome NaCl extension that involves encryption and decryption of data using OpenSSL library functions. The encryption works perfectly, and the decryption also works fine so far, but to get there I had to put in a kind of hack, and I am not sure it is the correct way to handle it.
else if(action == "decryption")
{
pp::Var content = dict_message.Get("content");
//aes decryption starts here
pp::VarArrayBuffer buffer(content);
const char *password = "password";
unsigned char key[EVP_MAX_KEY_LENGTH], iv[EVP_MAX_IV_LENGTH];
int cipherlen = buffer.ByteLength();
int len = cipherlen + EVP_MAX_BLOCK_LENGTH;
unsigned char *plaintext = (unsigned char*)malloc(len*sizeof(unsigned char));
unsigned char* ciphertext = static_cast<unsigned char*>(buffer.Map());
aes_init(password, (int)strlen(password), key, iv);
len = decrypt(ciphertext, cipherlen, key, iv, plaintext);
buffer.Unmap();
plaintext[len]='\0'; //fix a bug of overshooting plaintext sent
//aes decryption ends here
pp::VarDictionary reply;
reply.Set("action","decryption");
reply.Set("plaintext", (char*)plaintext);
reply.Set("fileSize", len);
PostMessage(reply);
free(plaintext);
}
Now this code decrypts the data and sends it back to the JavaScript on the extension page. Notice the line plaintext[len]='\0';. If I don't put it in, I sometimes get garbage after the correctly decrypted text in plaintext, and that shows up as a null in my JavaScript. So is this the correct way to handle the bug?
As others have mentioned, you're using a C-string, which must be NULL-terminated. When you call pp::VarDictionary::Set, the parameters are pp::Vars, and you're taking advantage of an implicit conversion constructor from C-string to pp::Var.
If instead you make plaintext a std::string or use pp::VarArrayBuffer, this won't be necessary. PostMessage() is optimized to deal with large VarArrayBuffers, so you should probably prefer that anyway. So I'd suggest replacing this line:
unsigned char *plaintext = (unsigned char*)malloc(len*sizeof(unsigned char));
with something like:
pp::VarArrayBuffer plaintext(len);
...and change your decrypt line to something like:
len = decrypt(ciphertext, cipherlen, key, iv,
              static_cast<unsigned char*>(plaintext.Map()));
Note, this will change the type JavaScript receives from a string to an ArrayBuffer. If you want it to remain a string, you can use a std::string instead:
std::string plaintext(len, '\0');
and access the string's buffer using operator[]. So the call to decrypt looks like this (I'm assuming len is >0):
len = decrypt(ciphertext, cipherlen, key, iv, &plaintext[0]);
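Putting the std::string variant together (a sketch only, reusing decrypt, the key/iv setup, and the ciphertext/cipherlen variables from the question), the point is to size the buffer up front and then shrink it to the real plaintext length instead of relying on a terminator:
std::string plaintext(len, '\0');    // len = cipherlen + EVP_MAX_BLOCK_LENGTH, as before
int outLen = decrypt(ciphertext, cipherlen, key, iv,
                     reinterpret_cast<unsigned char*>(&plaintext[0]));
plaintext.resize(outLen);            // keep only the bytes decrypt actually produced

pp::VarDictionary reply;
reply.Set("action", "decryption");
reply.Set("plaintext", plaintext);   // pp::Var can be constructed from a std::string
reply.Set("fileSize", outLen);
PostMessage(reply);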
The '\0' is the terminator for all strings in C. Since your output is in a string, if the '\0' character is missing, your program won't know where the string ends and may read past it into memory beyond the string, which is the garbage you see near the end.
Whenever string literals are declared, a '\0' is placed at the end. However, you are first allocating the memory for the decrypted text and then writing into it; in this case you have to take care to add the string-terminating character at the end yourself.
The reason the string-terminating character exists is that strings are stored as character arrays in C, and the size of the array is not known from just a pointer to the beginning of the string.
If the \0 is not there at the end of a c-string, the code will not know when the string ends and will walk off into unknown parts of memory. The null terminator lets the program know "this is the end of my string". Without this, you are inviting undefined behavior into your program.
Edits and Updates
3/24/2013:
My output hash from Python now matches the hash from C++ after converting to UTF-16 and stopping before hitting any 'e' or 'm' bytes. However, the decrypted results do not match. I know that my SHA1 hash is 20 bytes = 160 bits, and RC4 keys can vary in length from 40 to 2048 bits, so perhaps there is some default salting going on in WinCrypt that I will need to mimic. CryptGetKeyParam KP_LENGTH or KP_SALT?
3/24/2013:
CryptGetKeyParam KP_LENGTH is telling me that my key length is 128 bits. I'm feeding it a 160-bit hash, so perhaps it's just discarding the last 32 bits, or 4 bytes. Testing now.
3/24/2013:
Yep, that was it. If I discard the last 4 bytes of my SHA1 hash in python...I get the same decryption results.
Quick Info:
I have a C++ program to decrypt a datablock. It uses the Windows Cryptographic Service Provider, so it only works on Windows. I would like it to work on other platforms.
Method Overview:
In Windows Crypto API
An ASCII-encoded password is converted to a wide-character representation and then hashed with SHA1 to make a key for an RC4 stream cipher.
In Python PyCrypto
An ASCII-encoded byte string is decoded to a Python string. It is truncated based on empirically observed bytes that cause mbstowcs to stop converting in C++. This truncated string is then encoded in UTF-16, effectively padding it with 0x00 bytes between the characters. This new truncated, padded byte string is passed to a SHA1 hash, and the first 128 bits of the digest are passed to a PyCrypto RC4 object.
Problem [SOLVED]
I can't seem to get the same results with Python 3.x w/ PyCrypto
C++ Code Skeleton:
HCRYPTPROV hProv = 0x00;
HCRYPTHASH hHash = 0x00;
HCRYPTKEY hKey = 0x00;
wchar_t sBuf[256] = {0};
CryptAcquireContextW(&hProv, L"FileContainer", L"Microsoft Enhanced RSA and AES Cryptographic Provider", 0x18u, 0);
CryptCreateHash(hProv, 0x8004u, 0, 0, &hHash);
//0x8004u is SHA1 flag
int len = mbstowcs(sBuf, iRec->desc, sizeof(sBuf));
//iRec is my "Record" class
//iRec->desc is 33 bytes within header of my encrypted file
//this will be used to create the hash key. (So this is the password)
CryptHashData(hHash, (const BYTE*)sBuf, len, 0);
CryptDeriveKey(hProv, 0x6801, hHash, 0, &hKey);
DWORD dataLen = iRec->compLen;
//iRec->compLen is the length of encrypted datablock
//it's also compressed that's why it's called compLen
CryptDecrypt(hKey, 0, 0, 0, (BYTE*)iRec->decrypt, &dataLen);
// iRec is my record that i'm decrypting
// iRec->decrypt is where I store the decrypted data
//&dataLen is how long the encrypted data block is.
//I get this from file header info
Python Code Skeleton:
from Crypto.Cipher import ARC4
from Crypto.Hash import SHA
#this is the Decipher method from my record class
def Decipher(self):
    #get string representation of 33byte password
    key_string = self.desc.decode('ASCII')
    #so far, these characters fail, possibly others but
    #for now I will make it a list
    stop_chars = ['e', 'm']
    #slice off anything beyond where mbstowcs will stop
    for char in stop_chars:
        wc_stop = key_string.find(char)
        if wc_stop != -1:
            #slice operation
            key_string = key_string[:wc_stop]
    #make "wide character"
    #this is equivalent to padding bytes with 0x00
    #Slice off the two byte "Byte Order Mark" 0xff 0xfe
    wc_byte_string = key_string.encode('utf-16')[2:]
    #slice off the trailing 0x00
    wc_byte_string = wc_byte_string[:len(wc_byte_string)-1]
    #hash the "wchar" byte string
    #this is the equivalent to sBuf in c++ code above
    #as determined by writing sBuf to file in tests
    my_key = SHA.new(wc_byte_string).digest()
    #create a PyCrypto cipher object
    RC4_Cipher = ARC4.new(my_key[:16])
    #store the decrypted data..these results NOW MATCH
    self.decrypt = RC4_Cipher.decrypt(self.datablock)
Suspected [EDIT: Confirmed] Causes
1. The mbstowcs conversion of the password meant that the "original data" being fed to the SHA1 hash was not the same in Python and C++. mbstowcs was stopping conversion at 0x65 ('e') and 0x6D ('m') bytes, so the original data ended up being a wide-char encoding of only part of the original 33-byte password.
2. RC4 can have variable-length keys. In the Enhanced WinCrypt service provider, the default length is 128 bits. Leaving the key length unspecified meant only the first 128 bits of the 160-bit SHA1 digest of the "original data" were used.
How I investigated
edit: based on my own experimenting and the suggestions of @RolandSmith, I now know that one of my problems was mbstowcs behaving in a way I wasn't expecting. It seems to stop writing to sBuf on "e" (0x65) and "m" (0x6d) (probably others). So the password "Monkey" in my description (ASCII-encoded bytes) would look like "M o n k" in sBuf, because mbstowcs stopped at the e and placed 0x00 between the bytes based on the 2-byte wchar_t typedef on my system. I found this by writing the results of the conversion to a text file.
BYTE pbHash[256];          //buffer we will store the hash digest in
DWORD dwHashLen;           //stores the length of the hash
DWORD dwCount;
dwCount = sizeof(DWORD);   //how big is a DWORD on this system?
//see above: "len" is the return value from mbstowcs that tells how
//many multibyte characters were converted from the original
//iRec->desc and placed into sBuf. In some cases it's 3, 7, 9
//and it always seems to stop on "e" or "m"
fstream outFile4("C:/desc_mbstowcs.txt", ios::out | ios::trunc | ios::binary);
outFile4.write((const CHAR*)sBuf, int(len));
outFile4.close();
//now get the hash size from CryptGetHashParam,
//get the actual hash value from the hash object hHash,
//and write it to a file.
if(CryptGetHashParam(hHash, HP_HASHSIZE, (BYTE *)&dwHashLen, &dwCount, 0)) {
    if(CryptGetHashParam(hHash, 0x0002, pbHash, &dwHashLen, 0)){   //0x0002 is HP_HASHVAL
        fstream outFile3("C:/test_hash.txt", ios::out | ios::trunc | ios::binary);
        outFile3.write((const CHAR*)pbHash, int(dwHashLen));
        outFile3.close();
    }
}
References:
wide characters cause problems depending on environment definition
Difference in Windows Cryptography Service between VC++ 6.0 and VS 2008
convert a utf-8 to utf-16 string
Python - converting wide-char strings from a binary file to Python unicode strings
PyCrypto RC4 example
https://www.dlitz.net/software/pycrypto/api/current/Crypto.Cipher.ARC4-module.html
Hashing a string with Sha256
http://msdn.microsoft.com/en-us/library/windows/desktop/aa379916(v=vs.85).aspx
http://msdn.microsoft.com/en-us/library/windows/desktop/aa375599(v=vs.85).aspx
You can test the size of wchar_t with a small test program (in C):
#include <stdio.h> /* for printf */
#include <stddef.h> /* for wchar_t */
int main(int argc, char *argv[]) {
printf("The size of wchar_t is %ld bytes.\n", sizeof(wchar_t));
return 0;
}
You could also use printf() calls in your C++ code to write e.g. iRec->desc and the contents of sBuf (the data that gets hashed) to the screen, if you can run the C++ program from a terminal. Otherwise use fprintf() to dump them to a file.
To better mimic the behavior of the C++ program, you could even use ctypes to call mbstowcs() in your Python code.
Edit: You wrote:
One problem is definitely with mbstowcs. It seems that it's transferring an unpredictable (to me) number of bytes into my buffer to be hashed.
Keep in mind that mbstowcs returns the number of wide characters converted. In other words, a 33-byte buffer in a multi-byte encoding can contain anything from 5 characters (6-byte UTF-8 sequences) up to 33 characters, depending on the encoding used.
Edit2: You are using 0 as the dwFlags parameter for CryptDeriveKey. According to its documentation, the upper 16 bits should contain the key length. You should check CryptDeriveKey's return value to see if the call succeeded.
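For illustration only (a sketch; the right length depends on what the original application actually used), the key length goes into the upper 16 bits of dwFlags, e.g. requesting the 128 bits the provider was defaulting to:
// 0x6801 is CALG_RC4; the upper 16 bits of dwFlags carry the key length in bits.
DWORD dwFlags = 128 << 16;    // explicitly request a 128-bit key
if (!CryptDeriveKey(hProv, 0x6801, hHash, dwFlags, &hKey)) {
    // the call can fail, e.g. if the provider does not support the requested length;
    // GetLastError() tells you why
}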
Edit3: You could test mbstowcs in Python (I'm using IPython here):
In [1]: from ctypes import *
In [2]: libc = CDLL('libc.so.7')
In [3]: monkey = c_char_p(u'Monkey')
In [4]: test = c_char_p(u'This is a test')
In [5]: wo = create_unicode_buffer(256)
In [6]: nref = c_size_t(250)
In [7]: libc.mbstowcs(wo, monkey, nref)
Out[7]: 6
In [8]: print wo.value
Monkey
In [9]: libc.mbstowcs(wo, test, nref)
Out[9]: 14
In [10]: print wo.value
This is a test
Note that in Windows you should probably use libc = cdll.msvcrt instead of libc = CDLL('libc.so.7').
I am using this simple function for decrypting an AES-encrypted string:
unsigned char *aes_decrypt(EVP_CIPHER_CTX *e, unsigned char *ciphertext, int *len)
{
    int p_len = *len, f_len = 0;
    unsigned char *plaintext = (unsigned char*)malloc(p_len + 128);   //room for a final padded block
    memset(plaintext, 0, p_len);

    EVP_DecryptInit_ex(e, NULL, NULL, NULL, NULL);
    EVP_DecryptUpdate(e, plaintext, &p_len, ciphertext, *len);   //p_len = bytes written so far
    EVP_DecryptFinal_ex(e, plaintext + p_len, &f_len);           //f_len = bytes from the final block
    *len = p_len + f_len;   //total plaintext length, with padding already stripped
    return plaintext;
}
The problem is that len comes back with a value that does not match the length of the entire decoded string. What could be the problem?
When you say "string", I assume you mean a zero-terminated textual string. The encryption process is dependent on a cipher block size, and oftentimes padding. What's actually being encoded and decoded is up to the application... it's all binary data to the cipher. If you're textual string is smaller than what's returned from the decrypt process, your application needs to determine the useful part. So for example if you KNOW your string inside the results is zero-terminated, you can get the length doing a simple strlen. That's risky of course if you can't guarantee the input... probably better off searching the results for a null up to the decoded length...
If you are using a cipher in ECB, CBC or some other chaining mode, the plain text must be padded to a length that is a multiple of the cipher block length; see the PKCS#5 standard for example. High-level functions like those in OpenSSL can perform the padding transparently for the programmer, so the encrypted text can be larger than the plain text by up to an additional cipher block.
How can I convert from a Base64 string to a hex string (I'm working on Ubuntu, in C++)? I would like my hex string to look like 0x0c... and so on. Can someone please give me an example? Thanks!
A quick note on approach: strtol won't work here, because its radix argument only goes up to 36, and a Base64 string isn't a number written in some radix anyway; it's an encoding of arbitrary bytes. So decode the Base64 string into raw bytes first, then print each byte in hex. (boost::lexical_cast doesn't handle Base64 either, so it won't shortcut the decode step.)
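A minimal sketch, assuming a base64_decode helper that is the counterpart of the base64_encode used earlier in this thread (returning the decoded bytes as a std::string):
#include <cstdio>
#include <string>

std::string base64_decode(const std::string& encoded);    // assumed helper

std::string toHexString(const std::string& bytes)
{
    std::string out;
    char buf[8];
    for (unsigned char c : bytes) {
        std::snprintf(buf, sizeof(buf), "0x%02x ", c);     // e.g. "0x0c "
        out += buf;
    }
    return out;
}

// usage: toHexString(base64_decode("DAECAw==")) -> "0x0c 0x01 0x02 0x03 "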