I'm currently writing a C++ application and would like to use GPGME for message signing, encryption and key management. I know I can encrypt data in this way:
err = gpgme_op_encrypt(mContext, recipients, ...);
if (err) {
    // ... error handling
}
result = gpgme_op_encrypt_result(mContext);
if (result->invalid_recipients) {
    // ... error handling
}
nbytes = gpgme_data_seek(encrypted_text, 0, SEEK_SET);
if (nbytes == -1) {
    // ... error handling
}
buffer = malloc(MAXLEN);
nbytes = gpgme_data_read(encrypted_text, buffer, MAXLEN);
But as one can see, I would have to use MAXLEN as the limit for reading the encrypted data into my buffer. Is there a way to determine in advance how long my encrypted result will be (given the plaintext)? Or will I have to accept the static limit?
I'm not familiar with this particular API, but the gpgme_data_seek and gpgme_data_read calls look like they may behave like read() and lseek() from the file I/O system.
(1) Simply allocate as much buffer as you can afford (let's say N bytes).
(2) Call n = gpgme_data_read(..., N) repeatedly until n != N.
(3) Check for error conditions (my guess is n < 0).
Proceed until you have processed all the data you are interested in, as in the sketch below.
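For instance, a minimal sketch of that loop might look like this (untested; the readAll name and the 4 KB chunk size are mine, and treating a short read as end-of-data is the heuristic from step (2)):
#include <gpgme.h>
#include <cstdio>
#include <stdexcept>
#include <vector>

// Read an entire gpgme_data_t object into memory, chunk by chunk.
std::vector<char> readAll(gpgme_data_t dh) {
    std::vector<char> out;
    char chunk[4096];                       // N: as much as you can afford
    // rewind to the beginning of the data object first
    if (gpgme_data_seek(dh, 0, SEEK_SET) == -1)
        throw std::runtime_error("gpgme_data_seek failed");
    for (;;) {
        ssize_t n = gpgme_data_read(dh, chunk, sizeof(chunk));
        if (n < 0)
            throw std::runtime_error("gpgme_data_read failed");
        out.insert(out.end(), chunk, chunk + n);
        if (static_cast<size_t>(n) < sizeof(chunk))
            break;                          // short read: take it as end-of-data
    }
    return out;
}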
I am trying to write a reverse proxy with nonblocking sockets and epoll. It seemed OK at first, but when I tried to open a big JPG file, I got stuck.
When I try to write to the client, the socket is sometimes not writable; how can I handle this properly?
Additional Notes:
this->getFd() = ProxyFd
this->clientHandler->getFd() = clientFd
I am using the EPOLLET flag for both the proxy and the client.
if ((flag & EPOLLIN)) {
    char buffer[1025] = {'\0'};
    int readSize;
    while ((readSize = read(this->getFd(), buffer, 1024)) > 0) {
        this->headerParse(buffer);
        this->readSize += readSize;
        int check = 0;
        do {
            check = write(this->clientHandler->getFd(), buffer, readSize);
        } while (check < 0);
    }
    if (this->headerEnd == 1 && this->readSize >= this->headerLenght) {
        close(this->clientHandler->getFd());
        close(this->getFd());
        delete this->clientHandler;
        delete this;
    }
}
Thanks for taking time to read.
Assuming your headerParse() method doesn't change buffer in a size-extending way (you'd need to update readSize, at least, not to mention the buffer-full scenario), it seems like your write() path is broken.
If the socket you're writing to is also in nonblocking mode, it's perfectly legal for write() to return -1 (and set errno to EAGAIN or EWOULDBLOCK or whatever your platform has) before all the data has been written.
In that case, you must store the remaining data (the remainder of buffer, minus whatever was written if one or more calls to write() succeeded), tell epoll to notify you when the clientHandler->getFd() descriptor becomes writable (if it isn't doing so already), and when you get the subsequent "write ready" event, write the data you stored. At that point, write() can again fail to flush all of your data, so you must cycle until all the data is sent.
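A minimal sketch of that logic, assuming a per-connection std::string named pending for buffered output and an epoll instance epfd (both names are mine, not from your code):
#include <cerrno>
#include <string>
#include <sys/epoll.h>
#include <unistd.h>

// Try to flush everything in `pending` to fd; keep whatever cannot be
// written yet and ask epoll for a "write ready" notification.
// Returns false on a real error.
bool flushPending(int epfd, int fd, std::string &pending) {
    while (!pending.empty()) {
        ssize_t n = write(fd, pending.data(), pending.size());
        if (n > 0) {
            pending.erase(0, static_cast<size_t>(n)); // drop what was sent
            continue;
        }
        if (n < 0 && errno == EINTR)
            continue;                                 // interrupted: retry
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            // Not writable right now: have epoll wake us up with EPOLLOUT,
            // then call flushPending() again from the event handler.
            epoll_event ev{};
            ev.events = EPOLLIN | EPOLLOUT | EPOLLET;
            ev.data.fd = fd;
            epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &ev);
            return true;
        }
        return false;                                 // real error
    }
    return true;                                      // fully flushed
}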
Up to this point, I used to decrypt files (located on a USB stick) with AES as follows:
FILE * fp = fopen(filePath, "r");
vector<char> encryptedChars;
if (fp == NULL) {
    // Could not open file
    continue;
}
while (true) {
    int nextEncryptedChar = fgetc(fp);
    if (nextEncryptedChar == EOF) {
        break;
    }
    encryptedChars.push_back(nextEncryptedChar);
}
fclose(fp);
char encryptedFileArray[encryptedChars.size()];
int encryptedByteCount = encryptedChars.size();
for (int x = 0; x < encryptedByteCount; x++) {
    encryptedFileArray[x] = encryptedChars[x];
}
encryptedChars.clear();
AES aes;
// Decrypt the message in-place
aes.setup(key, AES::KEY_128, AES::MODE_CBC, iv);
aes.decrypt(encryptedFileArray, sizeof(encryptedFileArray));
aes.clear();
This works perfectly for small files. At this point, I am opening a file from a USB stick, storing all of its characters in a vector, and copying the vector into an array. I know that &encryptedChars[0] could be used as the array pointer as well, which would save some memory.
Now I want to decrypt a file of 256 KB (as opposed to 1 KB). Copying the data into a source array would require at least 256 KB of RAM. However, I only have 100 KB at my disposal and therefore cannot create a source array containing the encrypted data.
So I tried to use the FILE * that fopen gives me as the source pointer, and created a new file on the same USB stick as the destination pointer. I was hoping that the decryption rounds would use the memory of the USB stick instead of the available heap memory.
FILE * fp = fopen(encryptedFilePath, "r");
FILE * fpDecrypt = fopen(decryptedFilePath, "w+");
if (fp == NULL || fpDecrypt == NULL) {
    // Could not open file!?
    return;
}
AES aes;
// Decrypt the message in-place
aes.setup(key, AES::KEY_128, AES::MODE_CBC, iv);
aes.decrypt((const char*)fp, fpDecrypt, firmwareSize);
aes.clear();
Unfortunately, the system locks up (no idea why).
Does anybody know if I can pass a FILE * to a function that expects a const char * as source and a void * as a destination?
I am using the following library: https://os.mbed.com/users/neilt6/code/AES/docs/tip/AES_8h_source.html
Thanks!
A lot of crypto libraries provide "incremental" APIs that allow a stream of data to be en/decrypted piece by piece, without having to load the stream into memory. Unfortunately, it appears that the library you're using doesn't (or, at least, does not explicitly document it).
However, if you know how CBC mode encryption works, it's possible to roll your own. Basically, all you need to do is take the last AES block (i.e. the last 16 bytes) of the previous chunk of ciphertext and use it as the IV when decrypting (or encrypting) the next chunk, something like this:
char buffer[1024]; // this needs to be a multiple of 16 bytes!
char ivTemp[16];
while (true) {
    int bytesRead = fread(buffer, 1, sizeof(buffer), inputFile);
    // save the last 16 bytes of ciphertext as the IV for the next chunk
    if (bytesRead == sizeof(buffer)) memcpy(ivTemp, buffer + bytesRead - 16, 16);
    // decrypt the message in-place
    AES aes;
    aes.setup(key, AES::KEY_128, AES::MODE_CBC, iv);
    aes.decrypt(buffer, bytesRead);
    aes.clear();
    // write out the decrypted data (todo: check for write errors!)
    fwrite(buffer, 1, bytesRead, outputFile);
    // use the saved last 16 bytes of ciphertext as the IV for the next chunk
    if (bytesRead == sizeof(buffer)) memcpy(iv, ivTemp, 16);
    if (bytesRead < sizeof(buffer)) break; // end of file (or read error)
}
Note that this code will overwrite the iv array. That should be OK, though, since you should never use the same IV twice anyway. (In fact, with CBC mode, the IV should be chosen by the encryptor at random, using a cryptographically secure RNG, and sent alongside the message. The usual way to do that is to simply prepend the IV to the message file.)
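For example, if the encryptor prepended the IV to the file, you could read it off before entering the loop above (a sketch, assuming a 16-byte IV at the very start of the file):
// read the 16-byte IV from the start of the encrypted file, before the loop
char iv[16];
if (fread(iv, 1, sizeof(iv), inputFile) != sizeof(iv)) {
    // file too short or read error: bail out here
}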
Also, the code above is somewhat less efficient than it needs to be, since it calls aes.setup() and thus re-runs the whole AES key expansion for each chunk. Unfortunately, I couldn't find any documented way to tell your crypto library to change the IV without re-running the setup.
However, looking at the implementation of your library, as linked by Sister Fister in the comments below, it looks like it's already replacing its internal copy of the IV with the last ciphertext block. Thus, all you should really need to do is call aes.decrypt() for each chunk without a setup call in between, something like this:
char buffer[1024]; // this needs to be a multiple of 16 bytes!
AES aes;
aes.setup(key, AES::KEY_128, AES::MODE_CBC, iv);
while (true) {
    int bytesRead = fread(buffer, 1, sizeof(buffer), inputFile);
    // decrypt the chunk of data in-place (continuing from the previous chunk)
    aes.decrypt(buffer, bytesRead);
    // write out the decrypted data (todo: check for write errors!)
    fwrite(buffer, 1, bytesRead, outputFile);
    if (bytesRead < sizeof(buffer)) break; // end of file (or read error)
}
aes.clear();
Note that this code is relying on a feature of the crypto library that does not seem to be explicitly documented, namely that calling aes.decrypt() multiple times will cause the decryptions to be chained correctly. (That's actually a pretty reasonable thing to do, for CBC mode, but you can never be sure without reading the code or finding explicit documentation saying so.) You should make sure to have a comprehensive test suite for this, and to re-run the tests whenever you upgrade the library.
Also note that I haven't tested either of these examples, so there obviously could be bugs or typos. Also, the docs for your crypto library are somewhat sparse, so it's possible that it might not work exactly like I'm assuming it does. Please test anything based on this code thoroughly before using it!
In general, if something doesn't fit in memory, you can resort to:
Random file access. Use fseek to find the position and read or write what you need. The memory requirement is minimal.
Processing in batches that fit in memory. The memory requirement is adjustable, but the algorithm must be suitable for this.
System virtual memory, which allows you to reserve blocks as big as your system can address, provided you have the free disk space and suitable system settings. This is usually transparent, depending on your system.
Other paged memory mechanisms.
Since AES encryption operates on blocks of 128 bits and you're short of memory, you should probably use random access on your file, as sketched below.
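For instance, fetching one 16-byte AES block at an arbitrary position could look like this (a sketch; the readAesBlock name is mine):
#include <cstdio>

// Read the blockIndex-th 16-byte AES block from an open file.
bool readAesBlock(FILE *fp, long blockIndex, unsigned char block[16]) {
    if (fseek(fp, blockIndex * 16L, SEEK_SET) != 0)
        return false;                     // seek failed
    return fread(block, 1, 16, fp) == 16; // a short read counts as failure
}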
This is more of a request for confirmation than a question, so I'll keep it brief. (I am away from my PC and so can't simply implement this solution to test).
I'm writing a program to send an image file taken via webcam (along with metadata) from a Raspberry Pi to my PC.
I've worked out that the image is roughly 130 KB, the packet header is 12 bytes, and the associated metadata another 24 bytes. Though I may increase the image size in the future, once I have a working prototype.
At the moment I am not able to retrieve this whole packet successfully: after sending it to the PC, I only ever get approximately 64 KB recv'd in the buffer.
I have assumed that this is because, for whatever reason, the default buffer size for a socket declared like:
SOCKET sock = socket(PF_INET, SOCK_STREAM, 0);
is 64 KB (please could someone clarify this if you're 'in the know').
So, to fix this problem, I intend to increase the socket buffer size to 1024 KB via the setsockopt(x..) command.
Please could someone confirm that my diagnosis of the problem and my proposed solution are correct?
I ask this question as I am away from my PC right now and am unable to try it until I get back home.
This most likely has nothing to do with the socket buffers, but with the fact that recv() and send() do not have to receive and send all the data you want. Check the return values of those function calls; they indicate how many bytes have actually been sent or received.
The best way to deal with "short" reads/writes is to put them in a loop, like so:
char *buf;        // pointer to your data
size_t len;       // length of your data
int fd;           // the socket file descriptor

size_t offset = 0;
ssize_t result;

while (offset < len) {
    result = send(fd, buf + offset, len - offset, 0);
    if (result < 0) {
        // Deal with errors here
    }
    offset += result;
}
Use a similar construction for receiving data, as sketched below. Note that one possible error condition is that the call was interrupted (errno == EINTR) or, on a non-blocking socket, that it would block (errno == EAGAIN or EWOULDBLOCK); in those cases you should retry the send, and in all other cases you should exit the loop.
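A sketch of the receiving counterpart, under the same assumptions (it expects exactly len bytes; the recvAll name is mine):
#include <cerrno>
#include <sys/socket.h>

// Keep calling recv() until `len` bytes have arrived, the peer closes,
// or a real error occurs. Returns the number of bytes received, or -1.
ssize_t recvAll(int fd, char *buf, size_t len) {
    size_t offset = 0;
    while (offset < len) {
        ssize_t n = recv(fd, buf + offset, len - offset, 0);
        if (n == 0)
            break;                        // peer closed the connection
        if (n < 0) {
            if (errno == EINTR)
                continue;                 // interrupted: retry
            return -1;                    // real error
        }
        offset += static_cast<size_t>(n);
    }
    return static_cast<ssize_t>(offset);
}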
Currently, I'm learning how to build a transparent HTTP proxy in C++. There are two issues on the proxy client side that I couldn't resolve for a long time. I hope someone can point out the root causes based on the following scenarios. Thanks a lot. :D
The HTTP proxy I've built right now only works partially. For example, I can access Google's main page through the proxy, but I don't get any search results after typing a keyword (Google Instant is also not working at all). On the other hand, YouTube works perfectly, including searching, loading videos, and commenting. What's more, some websites like Yahoo can't even display their main page after I key in the URL.
The reason why I said at the beginning that the issues are on the proxy client side is that I traced the data flow of my program. I found out that the written size returned by the socket write() function was smaller than the data size I passed to my write-back function. The weirdest observation for me was that the data loss issue is independent of the data size. The socket write() function works properly for YouTube video data, which is nearly 2 MB, while it loses data for a Google search request, which is just 20 KB.
Furthermore, there was also a situation where the browser displayed a blank page even when the data size I passed to my write-back function and the written size returned by the socket write() function were the same. I used Wireshark to trace the communication flow and compared mine with pure IP communication without a proxy involved. I found out that the browser didn't continue sending HTTP requests after it received certain HTTP responses, compared with the pure IP communication flow. I couldn't find out why the browser didn't send the rest of the HTTP requests.
Following is my code for the write-back function:
void Proxy::get_data(char* buffer, size_t length)
{
    cout << "Length:" << length << endl;
    int connfd;
    ssize_t ret;
    // get connfd from buffer
    memset(&connfd, 0, sizeof(int));
    memcpy(&connfd, buffer, sizeof(int));
    cout << "Get Connection FD:" << connfd << endl;
    // get received data size
    size_t rData_length = length - sizeof(int);
    cout << "Data Size:" << rData_length << endl;
    // create receive buffer
    char* rBuf = new char[rData_length];
    // zero the receive buffer
    memset(rBuf, 0, rData_length);
    // copy data to buffer
    memcpy(rBuf, buffer + sizeof(int), rData_length);
    ret = write(connfd, rBuf, rData_length);
    if (ret < 0)
    {
        cout << "received data failed" << endl;
        close(connfd);
        delete[] rBuf;
        exit(1);
    }
    else
    {
        printf("Write Data[%zd] to Socket\n", ret);
    }
    close(connfd);
    delete[] rBuf;
}
Maybe you could try this:
int curr = 0;
while (curr < rData_length) {
    ret = write(connfd, rBuf + curr, rData_length - curr);
    if (ret == -1) { /* ERROR */ }
    else
        curr += ret;
}
instead of
ret = write(connfd, rBuf, rData_length);
In general, the number of bytes written by write() can differ from what you asked it to write. You'd better read the manual, say http://linux.die.net/man/2/write
Copying bytes between an input socket and an output socket is much simpler than this. You don't need to dynamically allocate buffers sized according to how much data was read by the last read. You just need to read into a char[] array and write from that array to the target, taking due account of the length value returned by the read, as in the sketch below.
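A minimal sketch of that copy loop (blocking sockets assumed; the pump name is mine, and error handling is abbreviated):
#include <unistd.h>

// Copy everything from inFd to outFd using one fixed buffer.
bool pump(int inFd, int outFd) {
    char buf[4096];
    ssize_t n;
    while ((n = read(inFd, buf, sizeof buf)) > 0) {
        ssize_t off = 0;
        while (off < n) {                 // write() may be partial
            ssize_t w = write(outFd, buf + off, static_cast<size_t>(n - off));
            if (w < 0)
                return false;             // real error (EINTR handling elided)
            off += w;
        }
    }
    return n == 0;                        // 0 = clean EOF, negative = error
}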
I am using the read function to read data from a socket, but when the data is more than 4 KB, read just reads part of the data, for example less than 4 KB. Here is the key code:
mSockFD = socket(AF_INET, SOCK_STREAM, 0);
if (connect(mSockFD, (const sockaddr*)(&mSockAdd), sizeof(mSockAdd)) < 0)
{
    cerr << "Error connecting in Crawl" << endl;
    perror("");
    return false;
}
n = write(mSockFD, httpReq.c_str(), httpReq.length());
bzero(mBuffer, BUFSIZE);
n = read(mSockFD, mBuffer, BUFSIZE);
Note that BUFSIZE is much larger than 4 KB. When the data is just a few hundred bytes, the read function works as expected.
This is by design and to be expected.
The short answer to your question is that you should continue calling read until you get all the data you expect. That is:
int total_bytes = 0;
int expected = BUFSIZE;
int bytes_read;
char *buffer = (char *)malloc(BUFSIZE + 1); // +1 for the null at the end

while (total_bytes < expected)
{
    bytes_read = read(mSockFD, buffer + total_bytes, BUFSIZE - total_bytes);
    if (bytes_read <= 0)
        break;
    total_bytes += bytes_read;
}
buffer[total_bytes] = 0; // null-terminate - good for debugging as a string
From my experience, one of the biggest misconceptions (resulting in bugs) is that you'll receive as much data as you ask for. I've seen shipping code in real products written with the expectation that sockets work this way (and no one certain as to why it doesn't work reliably).
When the other side sends N bytes, you might get lucky and receive it all at once. But you should plan for receiving N bytes spread out across multiple recv calls. With the exception of a real network error, you'll eventually get all N bytes. Segmentation, fragmentation, TCP window size, MTU, and the socket layer's data chunking scheme are the reasons for all of this. When partial data is received, the TCP layer doesn't know how much more is yet to come. It just passes what it has up to the app. It's up to the app to decide if it got enough.
Likewise, "send" calls can get conglomerated together into the same packet.
There may be ioctls and such that will make a socket block until all the expected data is received, but I don't know of any offhand.
Also, don't use read and write for sockets. Use recv and send.
Read this book. It will change your life with regard to sockets and TCP: