I got the LZO library to use in our application; the version provided is 1.07. They gave me a .lib along with some header files and some .c source files. I have set up the test environment per the specs, and I can see the LZO routines from my application.
Here is my test application:
#include "stdafx.h"
#include "lzoconf.h"
#include "lzo1z.h"
#include <stdlib.h>

int _tmain(int argc, _TCHAR* argv[])
{
    FILE *pFile;
    long lSize;
    unsigned char *i_buff;
    unsigned char *o_buff;
    int i_len, e = 0;
    unsigned int o_len;
    size_t result;

    // data.txt has a single compressed packet
    pFile = fopen("data.txt", "rb");
    if (pFile == NULL)
        return -1;

    // obtain file size:
    fseek(pFile, 0, SEEK_END);
    lSize = ftell(pFile);
    rewind(pFile);

    // allocate memory to contain the whole file:
    i_buff = (unsigned char*) malloc(sizeof(char) * lSize);
    if (i_buff == NULL)
        return -1;

    // copy the file into the buffer:
    result = fread(i_buff, 1, lSize, pFile);
    if (result != lSize)
        return -1;

    i_len = lSize;
    o_len = 512;

    // allocate memory for output buffer
    o_buff = (unsigned char*) malloc(sizeof(char) * o_len);
    if (o_buff == NULL)
        return -1;

    lzo_memset(o_buff, 0, o_len);
    lzo1z_decompress(i_buff, i_len, o_buff, &o_len, NULL);
    return 0;
}
It gives an access violation on the last line:

    lzo1z_decompress(i_buff, i_len, o_buff, &o_len, NULL);

In the provided library, the signature for the above function is:

    lzo1z_decompress ( const lzo_byte *src, lzo_uint src_len,
                       lzo_byte *dst, lzo_uint *dst_len,
                       lzo_voidp wrkmem /* NOT USED */ );

What is wrong?
Are you sure 512 bytes is big enough for the decompressed data? You shouldn't be using an arbitrary value, but rather you should have stowed away the original size somewhere as a header when your file was compressed:
LZO Decompression Buffer Size
You should probably make your data types match the interface spec (e.g. o_len should be a lzo_uint...you're passing an address so the actual underlying type matters).
Beyond that, it's open source. So why don't you build lzo with debug info and step into it to see where the problem is?
http://www.oberhumer.com/opensource/lzo/
Thanks everyone for the suggestions and comments.
The problem was with the data. I have successfully decompressed it.
Related
I'm trying to save the binary data of images in a key-value store.
First, I read the data using fread. Second, I save it into RocksDB. Third, I get the data back from RocksDB and restore it as an image.
Now I don't know whether the problem is in the 2nd step or the 3rd step.
2nd step Put
#include <iostream>
#include <string.h>
#include "rocksdb/db.h"

DB* db;
Options options;
options.create_if_missing = true;
Status s = DB::Open(options, <DBPath>, &db);
assert(s.ok());

// read image
FILE* file_in;
int fopen_err = fopen_s(&file_in, <input_file_path>, "rb");
if (fopen_err != 0) {
    printf("%s is not valid", input_file_path);
}
fseek(file_in, 0, SEEK_END);
long int file_size = ftell(file_in);
rewind(file_in);

// malloc buffer
char* buffer = (char*)malloc(file_size);
if (buffer == NULL) { printf("Memory Error!!"); }
fread(buffer, file_size, 1, file_in);

// main func
db->Put(WriteOptions(), file_key, buffer);
assert(s.ok());

fclose(file_in);
free(buffer);
buffer = NULL;
delete db;
3rd step Get
#include <iostream>
#include <string.h>
#include "rocksdb/db.h"

DB* db;
Options options;
options.create_if_missing = true;
Status s = DB::Open(options, <DBPath>, &db);
assert(s.ok());

// main func
std::string file_data;
s = db->Get(ReadOptions(), file_key, &file_data);
assert(s.ok());

// convert std::string to char*
char* buffer = (char*)malloc(file_data.size() + 1);
std::copy(file_data.begin(), file_data.end(), buffer);

// restore image
FILE* test;
fopen_s(&test, "test.jpg", "wb");
fwrite(buffer, file_data.size(), 1, test);
fclose(test);
free(buffer);
delete db;
The output image is not valid, and if I open the jpg as text, I only get "???".
I tried the same process with BerkeleyDB and succeeded in restoring the image (I think that's because of BerkeleyDB's Dbt class).
I don't know where the data gets corrupted. Did I miss some option or step?
char* buffer = ...
db->Put(WriteOptions(), file_key, buffer);
How is RocksDB supposed to know the length of the buffer? When passing in a char* here, it is assumed to be a nul-terminated C string using the Slice(char *) implicit conversion. Nul-terminated C strings cannot be used for binary data because the data will be cut off at the first zero byte.
Although some RocksDB APIs are not up to modern C++ standards (for API compatibility), it is written for use with C++. Nul-terminated char *, FILE, fseek etc. are from C and cause lots of difficulty when attempting to interact with C++. If buffer were std::string, this bug would be fixed because the Slice(std::string) implicit conversion is very safe.
Other bugs:
Failure to re-assign s for the db->Put
Failure to abort on error cases with printf
Better to call DB::Close(db) before delete to check status, as there could be a background error
Not checking for error in fread
Performance/clarity issue:
In 3rd step, no need to create char *buffer and copy in std::string file_data to it. file_data.data() and file_data.size() give you access to the underlying char buffer if needed (but using C++ APIs is better).
First off, I understand that RC4 is not the safest encryption method and that it is outdated; this is just for a school project. Just thought I'd put that out there since people may ask.
I am working on using RC4 from OpenSSL to make a simple encryption and decryption program in C++. I noticed that the encryption and decryption are inconsistent. Here is what I have so far:
#include <fcntl.h>
#include <openssl/evp.h>
#include <openssl/rc4.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>
int main(int argc, char *argv[]) {
    int inputFile = open(argv[1], O_RDONLY);
    if (inputFile < 0) {
        printf("Error opening file\n");
        return 1;
    }
    unsigned char *keygen = reinterpret_cast<unsigned char*>(argv[2]);
    RC4_KEY key;
    size_t size = lseek(inputFile, 0, SEEK_END);
    lseek(inputFile, 0, SEEK_SET);
    unsigned char *fileIn = (unsigned char*) calloc(size, 1);
    if (pread(inputFile, fileIn, size, 0) == -1) {
        perror("Error opening read\n");
        return 1;
    }
    unsigned char *fileOut = (unsigned char*) calloc(size, 1);
    unsigned char *actualKey;
    EVP_BytesToKey(EVP_rc4(), EVP_sha256(), NULL, keygen, sizeof(keygen), 1, actualKey, NULL);
    RC4_set_key(&key, sizeof(actualKey), actualKey);
    RC4(&key, size, fileIn, fileOut);
    int outputFile = open(argv[3], O_WRONLY | O_TRUNC | O_CREAT, 0644);
    if (outputFile < 0) {
        perror("Error opening output file");
        return 1;
    }
    if (pwrite(outputFile, fileOut, size, 0) == -1) {
        perror("error writing file");
        return 1;
    }
    close(inputFile);
    close(outputFile);
    free(fileIn);
    free(fileOut);
    return 0;
}
The syntax for running this in Ubuntu is:
./myRC4 test.txt pass123 testEnc.txt
MOST of the time this works fine and encrypts and decrypts the file. However, occasionally I get a segmentation fault; if I do, I run the same exact command again and it encrypts or decrypts fine, at least for .txt files.
When I test on .jpg files, or any larger files, the issue seems to be more common and inconsistent. I notice that sometimes the images appear to have been decrypted (no segmentation fault) but in reality have not, which I check by doing a diff between the original and the decrypted file.
Any ideas as to why I get these inconsistencies? Does it have to do with how I allocate memory for fileOut and fileIn?
Thank you in advance
actualKey needs to be pointing to a buffer of appropriate size before you pass it to EVP_BytesToKey. As it is you are passing in an uninitialised pointer which would explain your inconsistent results.
The documentation for EVP_BytesToKey has this to say:
If data is NULL, then EVP_BytesToKey() returns the number of bytes needed to store the derived key.
So you can call EVP_BytesToKey once with the data parameter set to NULL to determine the length of actualKey, then allocate a suitable buffer and call it again with actualKey pointing to that buffer.
As others have noted, passing sizeof(keygen) to EVP_BytesToKey is also incorrect; you probably meant strlen(argv[2]).
Likewise, passing sizeof(actualKey) to RC4_set_key is an error. Instead, you should pass the key length returned by EVP_BytesToKey.
Hello guys, I really need help with my C/C++ programming. I have to load a binary file, maybe a simple "Hello World", and execute it directly from a buffer. So I loaded the binary file into a buffer and tried to redirect execution into that buffer, but it doesn't work correctly. Could you please help me with useful suggestions?
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

int main(int argc, const char * argv[]) {
    FILE *fileptr;
    char *buffer;
    long filelen;

    fileptr = fopen("helloworld", "rb");  // open file in binary mode
    fseek(fileptr, 0, SEEK_END);          // jump to the end of the file
    filelen = ftell(fileptr);             // get the current byte offset in the file
    rewind(fileptr);                      // jump back to the beginning of the file

    buffer = (char *)malloc((filelen+1)*sizeof(char));
    fread(buffer, filelen, 1, fileptr);
    fclose(fileptr);

    int *ret;
    ret = (int *)&ret + 2;
    (*ret) = (int)*buffer;
}
The program instructions are placed in a read-only region of the process memory called the text (code) segment, and this is decided when the executable is generated by the compiler. The OS simply won't expect program instructions on the stack or the heap! Otherwise viruses would be very easy to make...
I would like to encrypt a text with Bruce Schneier's Blowfish class, as converted to C++ by Jim Conger. After the encryption I would like to decrypt the encrypted text. To try this out, I am using files. I've created a sample project, but the decrypted file doesn't contain the same text as the initial file. What could be the problem?
Here's the download link for the Blowfish class files.
I've created a command line tool project in Xcode and changed main.m to main.mm. Here are the contents of my main.mm file:
#import "blowfish.h"
#include <stdlib.h>
#include <stdio.h>

#define my_fopen(fileptr, filename, mode) \
    fileptr = fopen(filename, mode); \
    if (fileptr == NULL) { \
        fprintf(stderr, "Error: Couldn't open %s.\n", filename); \
        exit(1); \
    }

const char *input_file_name = "test.txt";
const char *encoded_file_name = "encoded.txt";
const char *decoded_file_name = "decoded.txt";
unsigned char key[] = "thisisthekey";

int main(void) {
    FILE *infile, *outfile;
    int result, filesize;
    const int n = 8; // make sure this is a multiple of 8
    const int size = 1;
    unsigned char input[n], output[n];
    CBlowFish bf;

    bf.Initialize(key, sizeof(key)-1); // subtract 1 to not count the null terminator

    my_fopen(infile, input_file_name, "rb")
    my_fopen(outfile, encoded_file_name, "wb")
    filesize = 0;
    while ((result = fread(input, size, n, infile))) {
        filesize += result;
        fwrite(output, size, bf.Encode(input, output, result), outfile);
    }
    fclose(outfile);
    fclose(infile);

    my_fopen(infile, encoded_file_name, "rb")
    my_fopen(outfile, decoded_file_name, "wb")
    while ((result = fread(input, size, n, infile))) {
        bf.Decode(input, output, result);
        fwrite(output, sizeof(output[0]), filesize < result ? filesize : result, outfile);
        filesize -= result;
    }
    fclose(outfile);
    fclose(infile);
    return 0;
}
You're using a block cipher with padding (look at the source code to CBlowFish::Encode) to encrypt a stream. You can't do that because the decryption operation will have no way to know what constitutes a padded chunk that it should decrypt.
For example, say you're encrypting "FOOBAR", but you read "FOO" the first time and this encrypts to "XYZZY". Then you encrypt "BAR" to "ABCDE". Your written file will contain "XYZZYABCDE". But is that "XY" "ZZYA" "BCDE"? Or one block, "XYZZYABCDE" or what?
If you want to encrypt a stream, use a stream cipher. Or if you want to cut it into arbitrary blocks, you have to preserve the output block boundaries so you can decrypt the blocks.
You MUST encode/decode corresponding blocks of data. fread() and fwrite() don't return the same number of bytes (result), so your plain-text data blocks and your cipher-text data blocks are not aligned.
Define a data block length (say 64 bytes) and stick to it when encoding and decoding.
Otherwise use a stream cipher, which uses "data blocks" of 1 byte ;)
I have this theory: I can grab the file size using fseek and ftell and build a dynamic array as a buffer, then use that buffer for fgets(). I currently can't come up with a way to do it.
My theory is based on not knowing the size of the first file in bytes, so I don't know how big a buffer to build. What if the file is over 2 GB? I want to build a buffer that adapts to the size of whatever file I put into SearchInFile().
Here is what I have so far:
int SearchInFile(char *fname, char *fname2)
{
    FILE *pFile, *pFile2;
    int szFile, szFile2;

    // Open first file
    if( (fopen_s(&pFile, fname, "r")) != NULL )
    {
        return(-1);
    }
    // Open second file
    if( (fopen_s(&pFile2, fname2, "r")) != NULL )
    {
        return(-1);
    }

    // Find file size
    fseek(pFile, 0L, SEEK_END);
    szFile = ftell(pFile);
    // Readjust file pointer
    fseek(pFile, 0L, SEEK_SET);

    std::vector<char> buff;
    //char buff[szFile];
    while(fgets(buff.push_back(), szFile, pFile))
    {
    }
Any thoughts or examples would be great. I've been searching the net for the last few hours.
Vector can grow, so you don't have to know the size beforehand. The following four lines do what you want.
std::vector<char> buff;
int ch;
while ((ch = fgetc(pFile)) != EOF)
buff.push_back(ch);
fgetc is a function to read a single char, simpler than using fgets.
If you do know the file size beforehand then you could call buff.reserve(szFile) before the loop. This will make the loop a little more efficient.