Inconsistent encryption and decryption with OpenSSL RC4 in C++ - c++

First off, I understand that RC4 is not the safest encryption method and that it is outdated; this is just for a school project. Just thought I'd put that out there since people may ask.
I am working on using RC4 from OpenSSL to make a simple encryption and decryption program in C++. I noticed that the encryption and decryption are inconsistent. Here is what I have so far:
#include <fcntl.h>
#include <openssl/evp.h>
#include <openssl/rc4.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>
int main(int argc, char *argv[]) {
    int inputFile = open(argv[1], O_RDONLY);
    if (inputFile < 0) {
        printf("Error opening file\n");
        return 1;
    }
    unsigned char *keygen = reinterpret_cast<unsigned char*>(argv[2]);
    RC4_KEY key;
    size_t size = lseek(inputFile, 0, SEEK_END);
    lseek(inputFile, 0, SEEK_SET);
    unsigned char *fileIn = (unsigned char*) calloc(size, 1);
    if (pread(inputFile, fileIn, size, 0) == -1) {
        perror("Error opening read\n");
        return 1;
    }
    unsigned char *fileOut = (unsigned char*) calloc(size, 1);
    unsigned char *actualKey;
    EVP_BytesToKey(EVP_rc4(), EVP_sha256(), NULL, keygen, sizeof(keygen), 1, actualKey, NULL);
    RC4_set_key(&key, sizeof(actualKey), actualKey);
    RC4(&key, size, fileIn, fileOut);
    int outputFile = open(argv[3], O_WRONLY | O_TRUNC | O_CREAT, 0644);
    if (outputFile < 0) {
        perror("Error opening output file");
        return 1;
    }
    if (pwrite(outputFile, fileOut, size, 0) == -1) {
        perror("error writing file");
        return 1;
    }
    close(inputFile);
    close(outputFile);
    free(fileIn);
    free(fileOut);
    return 0;
}
The syntax for running this in Ubuntu is:
./myRC4 test.txt pass123 testEnc.txt
MOST of the time this works fine, and it encrypts and decrypts the file. However, occasionally I get a segmentation fault. If I do, I run the exact same command again and it encrypts or decrypts fine, at least for .txt files.
When I test on .jpg files, or any larger file, the issue seems to be more common and inconsistent. I notice that sometimes the images appear to have been decrypted (no segmentation fault) but in reality they have not, which I test by doing a diff between the original and the decrypted file.
Any ideas as to why I get these inconsistencies? Does it have to do with how I allocate memory for fileOut and fileIn?
Thank you in advance

actualKey needs to point to a buffer of appropriate size before you pass it to EVP_BytesToKey. As it is, you are passing in an uninitialised pointer, which would explain your inconsistent results.
The documentation for EVP_BytesToKey has this to say:
If data is NULL, then EVP_BytesToKey() returns the number of bytes needed to store the derived key.
So you can call EVP_BytesToKey once with the data parameter set to NULL to determine the length of actualKey, then allocate a suitable buffer and call it again with actualKey pointing to that buffer.
As others have noted, passing sizeof(keygen) to EVP_BytesToKey is also incorrect. You probably meant strlen(argv[2]).
Likewise, passing sizeof(actualKey) to RC4_set_key is also an error. Instead, you should pass the value returned by EVP_BytesToKey.
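Putting those fixes together, here is a minimal, untested sketch of the corrected key-derivation steps (error handling omitted):
// Untested sketch of the corrected key derivation described above.
// First call: data = NULL, so EVP_BytesToKey just reports the key length.
int keyLen = EVP_BytesToKey(EVP_rc4(), EVP_sha256(), NULL,
                            NULL, 0, 1, NULL, NULL);
// Second call: derive the key from the passphrase, using strlen(argv[2])
// for the passphrase length rather than sizeof(keygen).
unsigned char *actualKey = (unsigned char *) malloc(keyLen);
EVP_BytesToKey(EVP_rc4(), EVP_sha256(), NULL,
               keygen, strlen(argv[2]), 1, actualKey, NULL);
// Pass the derived key length, not sizeof(actualKey), to RC4_set_key.
RC4_set_key(&key, keyLen, actualKey);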

Related

Want to put binary data of images into RocksDB in C++

I'm trying to save the binary data of images in a key-value store.
1st, I read the data using fread. 2nd, I save it into RocksDB. 3rd, I get the data from RocksDB and restore it to an image file.
Now I don't know whether the problem is in the 2nd step or the 3rd step.
2nd step Put
#include <iostream>
#include <string.h>
#include "rocksdb/db.h"
DB* db;
Options options;
options.create_if_missing = true;
Status s = DB::Open(options, <DBPath>, &db);
assert(s.ok());
//read image
FILE* file_in;
int fopen_err = fopen_s(&file_in, <input_file_path>, "rb");
if (fopen_err != 0) {
    printf(input_file_path, "%s is not valid");
}
fseek(file_in, 0, SEEK_END);
long int file_size = ftell(file_in);
rewind(file_in);
//malloc buffer
char* buffer = (char*)malloc(file_size);
if (buffer == NULL) { printf("Memory Error!!"); }
fread(buffer, file_size, 1, file_in);
//main func
db->Put(WriteOptions(), file_key, buffer);
assert(s.ok());
fclose(file_in);
free(buffer);
buffer = NULL;
delete db;
3rd step Get
#include <iostream>
#include <string.h>
#include "rocksdb/db.h"
DB* db;
Options options;
options.create_if_missing = true;
Status s = DB::Open(options, <DBPath>, &db);
assert(s.ok());
//main func
std::string file_data;
s = db->Get(ReadOptions(), file_key, &file_data);
assert(s.ok());
//convert std::string to char*
char* buffer = (char*)malloc(file_data.size() + 1);
std::copy(file_data.begin(), file_data.end(), buffer);
//restore image
FILE* test;
fopen_s(&test, "test.jpg", "wb");
fwrite(buffer, file_data.size(), 1, test);
fclose(test);
free(buffer);
delete db;
The output image is not valid, and if I convert the jpg to txt, I only get "???".
I tried the same process with BerkeleyDB and succeeded in restoring the image (I think that's because of BerkeleyDB's Dbt class).
I don't know where the data gets corrupted. Did I miss some option or step?
char* buffer = ...
db->Put(WriteOptions(), file_key, buffer);
How is RocksDB supposed to know the length of the buffer? When passing in a char* here, it is assumed to be a nul-terminated C string using the Slice(char *) implicit conversion. Nul-terminated C strings cannot be used for binary data because the data will be cut off at the first zero byte.
Although some RocksDB APIs are not up to modern C++ standards (for API compatibility), it is written for use with C++. Nul-terminated char *, FILE, fseek etc. are from C and cause lots of difficulty when attempting to interact with C++. If buffer were std::string, this bug would be fixed because the Slice(std::string) implicit conversion is very safe.
Other bugs:
Failure to re-assign s for the db->Put
Failure to abort on error cases with printf
Better to call DB::Close(db) before delete to check status, as there could be a background error
Not checking for error in fread
Performance/clarity issue:
In the 3rd step, there is no need to create char *buffer and copy std::string file_data into it. file_data.data() and file_data.size() give you access to the underlying char buffer if needed (but using C++ APIs is better).
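As an untested sketch of the binary-safe approach: wrap the raw bytes in a Slice (which carries an explicit length), and write the returned std::string straight back out.
// Untested sketch: the explicit length lets zero bytes survive the round trip.
rocksdb::Slice value(buffer, file_size);
s = db->Put(rocksdb::WriteOptions(), file_key, value);
assert(s.ok());

std::string file_data;
s = db->Get(rocksdb::ReadOptions(), file_key, &file_data);
assert(s.ok());
// Write the bytes directly from the string; no intermediate copy needed.
fwrite(file_data.data(), 1, file_data.size(), test);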

C creating a file with given size

I am trying to create a file of a given size by using lseek() to seek past the end and writing a single byte there; however, the file it creates ends up 0 bytes in size.
Below is the code...any suggestions?
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#ifndef BUF_SIZE
#define BUF_SIZE 1024
#endif // BUF_SIZE
int main(int argc, char *argv[])
{
    int inputFd;
    int fileSize = 500000000;
    int openFlags;
    int result;
    mode_t filePerms;
    ssize_t numRead;
    char buf[BUF_SIZE];
    openFlags = O_WRONLY | O_CREAT | O_EXCL;
    filePerms = S_IRUSR | S_IWUSR | S_IRGRP | S_IWGRP | S_IROTH | S_IWOTH; /* rw-rw-rw- */
    inputFd = open(argv[1], openFlags, filePerms);
    if (inputFd == -1)
        printf("problem opening file %s ", argv[1]);
        return 1;
    printf("input FD: %d", inputFd);
    result = lseek(inputFd, fileSize-1, SEEK_SET);
    if (result == -1) {
        close(inputFd);
        printf("Error calling lseek() to stretch the file");
        return 1;
    }
    result = write(inputFd, "", 1);
    if (result < 0) {
        close(inputFd);
        printf("Error writing a byte at the end of file\n");
        return 1;
    }
    if (close(inputFd) == -1)
        printf("problem closing file %s \n", argv[1]);
    return 0;
}
You are missing some braces:
if (inputFd == -1)
printf("problem opening file %s ", argv[1]);
return 1;
You need to change this to:
if (inputFd == -1) {
printf("problem opening file %s ", argv[1]);
return 1;
}
Without the braces, the only statement controlled by the if statement is the printf, and the return 1; statement is always run no matter what the value of inputFd is.
It is good practice to always use braces around a controlled block, even if there is only one statement (such as for the close at the end of your program).
Do you have any example of writing a byte on every block of the file?
This code is from a slightly different context, but can be adapted to your case. The context was ensuring that the disk space for an Informix database was all allocated, so the wrapper code around this created the file (and required that it did not already exist, etc.). The entry point for actually writing was the second of these two functions; the fill-buffer function replicates the 8-byte word "informix" into a 64 KiB block.
/* Fill the given buffer with the string 'informix' repeatedly */
static void fill_buffer(char *buffer, size_t buflen)
{
    size_t filled = sizeof("informix") - 1;
    assert(buflen > filled);
    memmove(buffer, "informix", sizeof("informix") - 1);
    while (filled < buflen)
    {
        size_t ncopy = (filled > buflen - filled) ? buflen - filled : filled;
        memmove(&buffer[filled], buffer, ncopy);
        filled *= 2;
    }
}
/* Ensure the file is of the required size by writing to it */
static void write_file(int fd, size_t req_size)
{
    char buffer[64*1024];
    size_t nbytes = (req_size > sizeof(buffer)) ? sizeof(buffer) : req_size;
    size_t filesize = 0;
    fill_buffer(buffer, nbytes);
    while (filesize < req_size)
    {
        size_t to_write = nbytes;
        ssize_t written;
        if (to_write > req_size - filesize)
            to_write = req_size - filesize;
        if ((written = write(fd, buffer, to_write)) != (ssize_t)to_write)
            err_syserr("short write (%d vs %u requested)\n",
                       (int)written, (unsigned)to_write);
        filesize += to_write;
    }
}
As you can see, it writes in 64 KiB chunks. Frankly, there's going to be no difference between writing all bytes on a page and writing one byte per page. Indeed, if anything, writing the whole page will be faster because the new value can simply be written, whereas if you write just one byte per page, an old page has to be created/read, modified, and then written back.
In your context, I would extend the current file to a multiple of 4 KiB (8 KiB if you prefer), then go writing the main data blocks, and the final partial block if necessary. You would probably simply do memset(buffer, '\0', sizeof(buffer)); whereas the sample code was deliberately writing something other than blocks of zero bytes. AFAIK, even if the block you write is all zero bytes, the driver actually writes that block to the disk — the simple act of writing ensures the file is non-sparse.
The err_syserr() function is a bit like fprintf(stderr, …), but it adds the system error message from errno and strerror() and exits the program too. The code does assume 32-bit (or larger) int values. I never got to experiment with terabyte size files — the code was last updated in 2009.
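If you do go the zero-filling route, an untested sketch in the same style as write_file() might look like this:
/* Untested sketch: force a non-sparse file of req_size bytes by
   writing zero-filled 4 KiB blocks. */
static void write_zero_file(int fd, size_t req_size)
{
    char buffer[4096];
    size_t filesize = 0;
    memset(buffer, '\0', sizeof(buffer));
    while (filesize < req_size)
    {
        size_t to_write = req_size - filesize;
        if (to_write > sizeof(buffer))
            to_write = sizeof(buffer);
        if (write(fd, buffer, to_write) != (ssize_t)to_write)
            err_syserr("short write\n");
        filesize += to_write;
    }
}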

unread a file in C++

I am trying to read files that are simultaneously being written to disk. I need to read chunks of a specific size. If the size read is less than that, I'd like to unread the file (something like what ungetc does, but for a char[]) and try again. Appending to the bytes already read is not an option for me.
How is this possible?
I tried saving the current position through:
FILE *fd = fopen("test.txt","r+");
fpos_t position;
fgetpos (fd, &position);
and then reading the file and putting the pointer back to its before-fread position.
numberOfBytes = fread(buff, sizeof(unsigned char), desiredSize, fd);
if (numberOfBytes < desiredSize) {
    fsetpos (fd, &position);
}
But it doesn't seem to be working.
Replacing my previous suggestions with code I just checked (Ubuntu 12.04 LTS, 32-bit). GCC is 4.7, but I'm pretty sure this is a 100% standard solution.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#define desiredSize 10
#define desiredLimit 100
int main()
{
    FILE *fd = fopen("test.txt", "r+");
    if (fd == NULL)
    {
        perror("open");
        exit(1);
    }
    int total = 0;
    unsigned char buff[desiredSize];
    while (total < desiredLimit)
    {
        fpos_t position;
        fgetpos(fd, &position);
        int numberOfBytes = fread(buff, sizeof(unsigned char), desiredSize, fd);
        printf("Read try: %d\n", numberOfBytes);
        if (numberOfBytes < desiredSize)
        {
            fsetpos(fd, &position);
            printf("Return\n");
            sleep(10);
            continue;
        }
        total += numberOfBytes;
        printf("Total: %d\n", total);
    }
    return 0;
}
I was adding text to the file from another console, and yes, the read was progressing in 5-character blocks, in accordance with what I was adding.
fseek seems perfect for this:
FILE *fptr = fopen("test.txt", "r+");
numberOfBytes = fread(buff, 1, desiredSize, fptr);
if (numberOfBytes < desiredSize) {
    fseek(fptr, -numberOfBytes, SEEK_CUR);
}
Also note that a file descriptor is what open returns, not fopen.

Setting a buffer/pointer to null

I am trying to constantly read data into a buffer of type unsigned char* from different files. However, I can't seem to set the buffer to NULL prior to reading in the next file.
Here is only the relevant code:
#include <stdio.h>
#include <fstream>
int main(int argc, char** argv) {
    FILE* dataFile = fopen("C:\\File1.txt", "rb");
    unsigned char *buffer = NULL;
    buffer = (unsigned char*)malloc(1000);
    fread(buffer, 1, 1000, dataFile);
    fclose(dataFile);
    dataFile = fopen("C:\\File2.txt", "rb");
    buffer = NULL;
    fread(buffer, 1, 1000, dataFile);
    fclose(dataFile);
    system("pause");
    return 0;
}
The error I run into is at the second occurrence of this line: fread(buffer,1,1000,dataFile);
The error I get is:
Debug Assertion Failed!
Expression: (buffer != NULL)
It points me to Line 147 of fread.c which is basically:
/* validation */
_VALIDATE_RETURN((buffer != NULL), EINVAL, 0);
if (stream == NULL || num > (SIZE_MAX / elementSize))
{
    if (bufferSize != SIZE_MAX)
    {
        memset(buffer, _BUFFER_FILL_PATTERN, bufferSize);
    }
    _VALIDATE_RETURN((stream != NULL), EINVAL, 0);
    _VALIDATE_RETURN(num <= (SIZE_MAX / elementSize), EINVAL, 0);
}
I Googled for ways to set the buffer pointer to NULL and tried the various suggestions, but none seem to work. Can anyone clarify the right way to set it to NULL?
Your buffer is a pointer.
When you do this:
buffer = (unsigned char*)malloc(1000);
you allocate some space in memory, and assign its starting position to buffer. Remember, buffer holds the address of the beginning of the space, that's all. When you do this:
buffer = NULL;
you have thrown away that address.
EDIT:
C++ style, without dynamic memory:
#include <fstream>
#include <string>

using std::string;
using std::ifstream;

void readFromFile(string fname)
{
    char buffer[1000];
    ifstream fin(fname.c_str());
    fin.read(buffer, sizeof(buffer));
    // maybe do things with the data
}

int main()
{
    readFromFile("File1.txt");
    readFromFile("File2.txt");
    return 0;
}
There's no need to erase the contents of the buffer. If the cost of allocating and deallocating the buffer with each call is too much, just add static:
static char buffer[1000];
It will be overwritten each time.
You can't set buffer = NULL because fread will try to dereference it, and dereferencing NULL is completely illegal in C++. In effect, you're also losing what you got from malloc. Perhaps you're looking for memset and trying to zero the buffer:
memset(buffer, 0, 1000);
However, you don't need to do this before calling fread. There's simply no reason since fread will write the buffer anyway: it doesn't care if it's zeroed or not.
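If the goal is just to read the next file into the same storage, an untested sketch: keep the pointer you got from malloc and let each fread overwrite the previous contents.
// Untested sketch: one buffer, reused for both files.
unsigned char *buffer = (unsigned char *) malloc(1000);

FILE *dataFile = fopen("C:\\File1.txt", "rb");
fread(buffer, 1, 1000, dataFile);
fclose(dataFile);

dataFile = fopen("C:\\File2.txt", "rb");
fread(buffer, 1, 1000, dataFile);   // simply overwrites the old data
fclose(dataFile);

free(buffer);   // release the space once, at the end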
As a side note: you're writing very C-ish code in what I suspect is C++ (given your fstream header). There are better-suited I/O options for C++.

using lzo library in c++ application

I got the LZO library to use in our application. The version provided is 1.07.
They have given me a .lib along with some header files and some .c source files.
I have set up the test environment as per the specs. I am able to see the LZO routine functions in my application.
Here is my test application
#include "stdafx.h"
#include "lzoconf.h"
#include "lzo1z.h"
#include <stdlib.h>
int _tmain(int argc, _TCHAR* argv[])
{
    FILE *pFile;
    long lSize;
    unsigned char *i_buff;
    unsigned char *o_buff;
    int i_len, e = 0;
    unsigned int o_len;
    size_t result;
    // data.txt contains a single compressed packet
    pFile = fopen("data.txt", "rb");
    if (pFile == NULL)
        return -1;
    // obtain file size:
    fseek(pFile, 0, SEEK_END);
    lSize = ftell(pFile);
    rewind(pFile);
    // allocate memory to contain the whole file:
    i_buff = (unsigned char*) malloc(sizeof(char)*lSize);
    if (i_buff == NULL)
        return -1;
    // copy the file into the buffer:
    result = fread(i_buff, 1, lSize, pFile);
    if (result != lSize)
        return -1;
    i_len = lSize;
    o_len = 512;
    // allocate memory for output buffer
    o_buff = (unsigned char*) malloc(sizeof(char)*o_len);
    if (o_buff == NULL)
        return -1;
    lzo_memset(o_buff, 0, o_len);
    lzo1z_decompress(i_buff, i_len, o_buff, &o_len, NULL);
    return 0;
}
It gives access violation on last line.
lzo1z_decompress(i_buff,i_len,o_buff,&o_len,NULL);
In the provided library, the signature for the above function is
lzo1z_decompress ( const lzo_byte *src, lzo_uint src_len,
lzo_byte *dst, lzo_uint *dst_len,
lzo_voidp wrkmem /* NOT USED */ );
What is wrong?
Are you sure 512 bytes is big enough for the decompressed data? You shouldn't be using an arbitrary value, but rather you should have stowed away the original size somewhere as a header when your file was compressed:
LZO Decompression Buffer Size
You should probably make your data types match the interface spec (e.g. o_len should be an lzo_uint; you're passing an address, so the actual underlying type matters).
Beyond that, it's open source. So why don't you build lzo with debug info and step into it to see where the problem is?
http://www.oberhumer.com/opensource/lzo/
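Combining both suggestions, an untested sketch (orig_len is hypothetical; it stands for the uncompressed size you would have recorded at compression time):
// Untested sketch; orig_len is a hypothetical stored uncompressed size.
lzo_uint in_len  = (lzo_uint) lSize;
lzo_uint out_len = orig_len;
unsigned char *out = (unsigned char *) malloc(out_len);
int r = lzo1z_decompress(i_buff, in_len, out, &out_len, NULL);
if (r != LZO_E_OK)
    printf("decompress failed: %d\n", r);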
Thanks everyone for the suggestions and comments.
The problem was with the data. I have successfully decompressed it.