Writing the data of a void pointer to a file - c++

I am working on some software making use of an encryption library, the underlying mechanics of which I can't change. When the program runs it takes about 10 minutes to generate a public and private key, which is extremely frustrating when trying to debug other parts of the software. I would like to write the keys to a file and read them back to save time.
The keys are void pointers:
Enc_Key_T secKey = nullptr;
where Enc_Key_T is defined as typedef void* Enc_Key_T;
The code I have used to attempt to read and write keys is as follows (only attempted to write the secret key so far):
#ifdef WriteKey
generate_Keys(parameters, &sec_key, &prv_key);
FILE * pFile;
pFile = fopen("sk.bin", "wb");
fwrite(&sec_key, sizeof(Enc_Key_T), sizeof(&sec_key), pFile);
fclose(pFile);
#else
FILE * pFile;
long lSize;
char * buffer;
pFile = fopen("sk.bin", "rb");
if(pFile == NULL)
fileError();
fseek (pFile, 0 , SEEK_END);
lSize = ftell (pFile);
rewind (pFile);
buffer = (char *) malloc (sizeof(char)*lSize);
if(buffer == NULL)
memError();
sec_key = (void *) fread(buffer, 1, lSize, pFile);
fclose(pFile);
free(buffer);
#endif
When I write the file it comes out as a 64-byte file, but on reading it back the secret key pointer gets set to a low memory address, which makes me think I am doing something wrong. I can't find any examples of how this can be done. I'm not even sure it can be, since I won't be creating any of the underlying structures; I am just trying to allocate some memory at a location provided by a pointer.
Is there any way this can be done without having to touch the underlying library that generates the keys?

Short answer: in general, you cannot do that correctly.
Long answer:
The thing that you're missing is the structure: you have no guarantee that Enc_Key_T can be serialized by simply writing its memory contents to disk. Even if it is just raw random data, you don't know its length.
In addition, there is no guarantee that the library does not keep its own internal state bound to the generated keys.
Code issues:
When writing, you have no known length for the data. What actually gets written is the pointer itself, followed by whatever bogus memory happens to sit next to it.
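To spell that out: on a typical 64-bit build, fwrite(&sec_key, sizeof(Enc_Key_T), sizeof(&sec_key), pFile) writes sizeof(&sec_key) = 8 elements of sizeof(Enc_Key_T) = 8 bytes each, i.e. 64 bytes starting at the address of the pointer variable, which is exactly where your 64-byte file comes from. A correct write would need the actual key bytes and their length, which the library does not expose; the accessor in this sketch is purely hypothetical:
// Hypothetical: get_Key_Size does not exist in the real API, it only
// illustrates what information would be needed for a meaningful write.
size_t keyLen = get_Key_Size(sec_key);
fwrite(sec_key, 1, keyLen, pFile);   // write the bytes the key points at, not the pointer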
When reading, you don't need a secondary buffer; besides, fread returns the number of bytes read, not a pointer to the data. So instead of:
buffer = (char *) malloc (sizeof(char)*lSize);
if(buffer == NULL)
memError();
sec_key = (void *) fread(buffer, 1, lSize, pFile);
you can write:
sec_key = (void *) malloc (lSize);
if(sec_key == NULL)
memError();
if ( 0 == fread(sec_key, 1, lSize, pFile) ) {
// error
}

I will assume you are going to debug this application intensively, so it is worth investing an hour in building the wrappers I am suggesting.
As long as you do not know the underlying structure and the library is a black box, you have to do something like the following:
void generate_KeysEx(...){
#if DEBUG
// return dummy keys
#else
// call this API
#endif
}
void EncryptEx(...){
#if DEBUG
// return cipher same as plain text
#else
// call this library API
#endif
}
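A slightly more concrete sketch of those wrappers is below. The parameter types, the DEBUG macro, and every call other than generate_Keys (taken from the question) are assumptions, not the real API:
#include <cstddef>
#include <cstring>

typedef void* Enc_Key_T;   // as defined by the library (from the question)

void generate_KeysEx(void* parameters, Enc_Key_T* secKey, Enc_Key_T* prvKey)
{
#if DEBUG
    // Dummy keys: any non-null values will do, because in DEBUG builds the
    // EncryptEx wrapper below never hands them to the real library.
    static int dummySec = 1, dummyPrv = 2;
    (void)parameters;
    *secKey = &dummySec;
    *prvKey = &dummyPrv;
#else
    generate_Keys(parameters, secKey, prvKey);   // the slow (~10 minute) real call
#endif
}

void EncryptEx(Enc_Key_T key, const char* plain, char* cipher, size_t len)
{
#if DEBUG
    (void)key;
    memcpy(cipher, plain, len);        // "ciphertext" is just the plaintext in DEBUG
#else
    Encrypt(key, plain, cipher, len);  // hypothetical signature for the real call
#endif
}
This way the rest of the program is debugged against fast dummy keys, and a build with DEBUG undefined goes through the real library untouched.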

From the discussion in the comments it doesn't sound like this is possible without capturing the structure inside the library, which would require significant code changes.
A very hacky workaround I found was to change a few key-generation parameters, which drastically reduces the run time.

Related

AES decrypting files larger than available RAM

Up to this point, I used to decrypt files (located on a USB stick) with AES as follows:
FILE * fp = fopen(filePath, "r");
vector<char> encryptedChars;
if (fp == NULL) {
//Could not open file
continue;
}
while(true) {
int nextEncryptedChar = fgetc(fp);
if (nextEncryptedChar == EOF) {
break;
}
encryptedChars.push_back(nextEncryptedChar);
}
fclose(fp);
char encryptedFileArray[encryptedChars.size()];
int encryptedByteCount = encryptedChars.size();
for (int x = 0; x < encryptedByteCount; x++) {
encryptedFileArray[x] = encryptedChars[x];
}
encryptedChars.clear();
AES aes;
//Decrypt the message in-place
aes.setup(key, AES::KEY_128, AES::MODE_CBC, iv);
aes.decrypt(encryptedFileArray, sizeof(encryptedFileArray));
aes.clear();
This works perfectly for small files. At this point, I am opening a file from a USB stick and storing all characters into a vector and copying the vector to an array. I know that &encryptedChars[0] can be used as an array pointer as well and will save some memory.
Now I want to decrypt a file of 256Kb (as opposed to 1Kb). Copying the data into a source array will require at least 256Kb of RAM. I however only have 100Kb at my disposal and therefore, cannot create a source array containing the encrypted data.
So I tried to use the FILE * that fopen gives me as a FILE pointer, and created a new file on the same USB stick as a destination pointer. I was hoping that the decryption rounds would use the memory of the USB stick as opposed to available memory on the heap.
FILE * fp = fopen(encryptedFilePath, "r");
FILE * fpDecrypt = fopen(decryptedFilePath, "w+");
if (fp == NULL || fpDecrypt == NULL) {
//Could not open file!?
return;
}
AES aes;
//Decrypt the message in-place
aes.setup(key, AES::KEY_128, AES::MODE_CBC, iv);
aes.decrypt((const char*)fp, fpDecrypt, firmwareSize);
aes.clear();
Unfortunately, the system locks up (no idea why).
Does anybody know if I can pass a FILE * to a function that expects a const char * as source and a void * as a destination?
I am using the following library: https://os.mbed.com/users/neilt6/code/AES/docs/tip/AES_8h_source.html
Thanks!
A lot of crypto libraries provide "incremental" APIs that allow a stream of data to be en/decrypted piece by piece, without having to load the stream into memory. Unfortunately, it appears that the library you're using doesn't (or, at least, does not explicitly document it).
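For contrast, this is roughly what an incremental API looks like in a library that does provide one (OpenSSL's EVP interface is used here purely to illustrate the concept; it is not a drop-in replacement for the mbed AES class, and error checking is omitted):
#include <openssl/evp.h>
#include <cstdio>

void decrypt_stream(FILE* in, FILE* out, const unsigned char* key, const unsigned char* iv)
{
    EVP_CIPHER_CTX* ctx = EVP_CIPHER_CTX_new();
    EVP_DecryptInit_ex(ctx, EVP_aes_128_cbc(), NULL, key, iv);

    unsigned char inBuf[1024];
    unsigned char outBuf[1024 + 16];   // room for one extra block
    int outLen = 0;
    size_t n;
    while ((n = fread(inBuf, 1, sizeof(inBuf), in)) > 0) {
        EVP_DecryptUpdate(ctx, outBuf, &outLen, inBuf, (int)n);
        fwrite(outBuf, 1, outLen, out);
    }
    EVP_DecryptFinal_ex(ctx, outBuf, &outLen);   // handles the final padded block
    fwrite(outBuf, 1, outLen, out);
    EVP_CIPHER_CTX_free(ctx);
}
The memory use is bounded by the two buffers regardless of the file size, which is exactly the property you are after.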
However, if you know how CBC mode encryption works, it's possible to roll your own. Basically, all you need to do is take the last AES block (i.e. the last 16 bytes) of the previous chunk of ciphertext and use it as the IV when decrypting (or encrypting) the next block, something like this:
char buffer[1024]; // this needs to be a multiple of 16 bytes!
char ivTemp[16];
while(true) {
int bytesRead = fread(buffer, 1, sizeof(buffer), inputFile);
// save last 16 bytes of ciphertext as IV for next block
if (bytesRead == sizeof(buffer)) memcpy(ivTemp, buffer + bytesRead - 16, 16);
// decrypt the message in-place
AES aes;
aes.setup(key, AES::KEY_128, AES::MODE_CBC, iv);
aes.decrypt(buffer, bytesRead);
aes.clear();
// write out decrypted data (todo: check for write errors!)
fwrite(buffer, 1, bytesRead, outputFile);
// use the saved last 16 bytes of ciphertext as IV for next block
if (bytesRead == sizeof(buffer)) memcpy(iv, ivTemp, 16);
if (bytesRead < sizeof(buffer)) break; // end of file (or read error)
}
Note that this code will overwrite the iv array. That should be OK, though, since you should never use the same IV twice anyway. (In fact, with CBC mode, the IV should be chosen by the encryptor at random, using a cryptographically secure RNG, and sent alongside the message. The usual way to do that is to simply prepend the IV to the message file.)
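For example, if the encryptor prepended the 16-byte IV to the file (an assumption about your file format, not something your current code does), the reading side would pull it off the front before entering the loop above:
char iv[16];
// Read the prepended IV first; everything after it is ciphertext.
if (fread(iv, 1, sizeof(iv), inputFile) != sizeof(iv)) {
    // file too short or read error -- abort
}
// ...then run the chunked decryption loop shown above: the first chunk uses
// this iv, and later chunks chain from the previous ciphertext block.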
Also, the code above is somewhat less efficient than it needs to be, since it calls aes.setup() and thus re-runs the whole AES key expansion for each chunk. Unfortunately, I couldn't find any documented way to tell your crypto library to change the IV without re-running the setup.
However, looking at the implementation of your library, as linked by Sister Fister in the comments below, it looks like it's already replacing its internal copy of the IV with the last ciphertext block. Thus, it looks like all you really need to do is call aes.decrypt() for each block without a setup call in between, something like this:
char buffer[1024]; // this needs to be a multiple of 16 bytes!
AES aes;
aes.setup(key, AES::KEY_128, AES::MODE_CBC, iv);
while(true) {
int bytesRead = fread(buffer, 1, sizeof(buffer), inputFile);
// decrypt the chunk of data in-place (continuing from previous chunk)
aes.decrypt(buffer, bytesRead);
// write out decrypted data (todo: check for write errors!)
fwrite(buffer, 1, bytesRead, outputFile);
if (bytesRead < sizeof(buffer)) break; // end of file (or read error)
}
aes.clear();
Note that this code is relying on a feature of the crypto library that does not seem to be explicitly documented, namely that calling aes.decrypt() multiple times will cause the decryptions to be chained correctly. (That's actually a pretty reasonable thing to do, for CBC mode, but you can never be sure without reading the code or finding explicit documentation saying so.) You should make sure to have a comprehensive test suite for this, and to re-run the tests whenever you upgrade the library.
Also note that I haven't tested either of these examples, so there obviously could be bugs or typos. Also, the docs for your crypto library are somewhat sparse, so it's possible that it might not work exactly like I'm assuming it does. Please test anything based on this code thoroughly before using it!
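One way to pin the chaining behaviour down is a small self-check that decrypts the same ciphertext once in a single call and once split into chunks, then compares the results. This is only a sketch: it assumes the in-place two-argument decrypt used above, that key and iv are plain byte pointers, and that the test data fits in memory:
#include <cstddef>
#include <vector>

// Returns true if chunked decryption matches one-shot decryption.
bool chunked_decrypt_matches(const char* cipher, size_t len,
                             const char* key, const char* iv)
{
    // One-shot reference decryption.
    std::vector<char> oneShot(cipher, cipher + len);
    AES ref;
    ref.setup(key, AES::KEY_128, AES::MODE_CBC, iv);
    ref.decrypt(&oneShot[0], oneShot.size());
    ref.clear();

    // The same data decrypted in two chunks with a single setup() call.
    std::vector<char> chunked(cipher, cipher + len);
    size_t half = (len / 2) & ~static_cast<size_t>(15);  // split on a 16-byte boundary
    AES chained;
    chained.setup(key, AES::KEY_128, AES::MODE_CBC, iv);
    chained.decrypt(&chunked[0], half);
    chained.decrypt(&chunked[0] + half, len - half);
    chained.clear();

    return oneShot == chunked;
}
Re-running a check like this after every library upgrade catches the case where the undocumented chaining behaviour silently changes.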
In general, if something doesn't fit into memory, you can resort to:
Random access to the file. Use fseek to find the position and read or write just what you need. Memory requirement: minimal.
Processing in batches that will fit into memory. The memory requirement is adjustable, but the algorithm must be suitable for this.
System virtual memory, which lets you reserve blocks as big as your system can address, subject to free disk space and your system settings. This is usually transparent, depending on your system.
Other paged memory mechanisms.
Since AES encryption works in blocks of 128 bits and you're short of memory, you should probably use random access on your file, as sketched below.
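A minimal sketch of that random-access approach, using the same fopen/fseek/fread interface as elsewhere in this question (the function name is just illustrative):
#include <cstdio>

// Read the 16-byte ciphertext block with index blockIndex straight from the
// file, without loading the rest of the file into memory.
bool read_cipher_block(FILE* fp, long blockIndex, char out[16])
{
    if (fseek(fp, blockIndex * 16L, SEEK_SET) != 0)
        return false;
    return fread(out, 1, 16, fp) == 16;
}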

Copying binary jpg data into a buffer

void jpgToBuff(const char* srcfilename)
{
FILE* file = fopen(srcfilename, "rb");
fseek(file, 0, SEEK_END);
unsigned long fileLen = ftell(file);
fseek(file, 0, SEEK_SET);
char* file_data;
file_data = (char *)malloc((fileLen + 1) * sizeof(char));
fread(file_data, fileLen, 1, file);
fclose(file);
}
Am I doing this correctly? I want to eventually send this information through a socket and decode it on the other side. Any suggestions would be super helpful. Is it theoretically possible to send this through a socket and decode it into an image on the other side?
Well, you're not doing enough error checking. fopen, fseek, ftell, malloc, fread and fclose can all fail. Failures would likely result in a crash or other unexpected results.
fread might return fewer characters than you attempted to read, so you should probably check for that too.
You've allocated +1 byte, presumably so you could add a terminating '\0'? But you left that final byte uninitialized. A jpeg file could reasonably contain an embedded '\0', so adding a terminating '\0' is probably not going to get you the results you're hoping for.
Finally, sizeof(char) is defined by the standards to be 1, always. So, you might as well multiply by 1, or better yet, don't.
Other than that it looks basically right.
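Putting those points together, here is a sketch of the same function with the checks added. Returning the buffer and its length to the caller is my assumption about what you ultimately want; the original version simply discards (and leaks) the data:
#include <cstdio>
#include <cstdlib>

// Returns a malloc'd buffer holding the file contents (caller frees), or NULL
// on any failure; *outLen receives the number of bytes read.
char* jpgToBuff(const char* srcfilename, unsigned long* outLen)
{
    FILE* file = fopen(srcfilename, "rb");
    if (file == NULL) return NULL;                            // fopen can fail

    if (fseek(file, 0, SEEK_END) != 0) { fclose(file); return NULL; }
    long fileLen = ftell(file);
    if (fileLen < 0)                   { fclose(file); return NULL; }  // ftell can fail
    if (fseek(file, 0, SEEK_SET) != 0) { fclose(file); return NULL; }

    char* file_data = (char*)malloc((size_t)fileLen);         // binary data: no +1, no '\0'
    if (file_data == NULL)             { fclose(file); return NULL; }

    size_t got = fread(file_data, 1, (size_t)fileLen, file);  // fread may return less
    fclose(file);
    if (got != (size_t)fileLen)        { free(file_data); return NULL; }

    *outLen = (unsigned long)got;
    return file_data;
}
As for the socket question: the buffer is just bytes, so as long as you transmit exactly *outLen bytes and the receiver writes them out unchanged, the JPEG can be reconstructed on the other side.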

Size error on read file

RESOLVED
I'm trying to make a simple file loader.
I aim to get the text from a shader file (plain text file) into a char* that I will compile later.
I've tried this function:
char* load_shader(char* pURL)
{
FILE *shaderFile;
char* pShader;
// File opening
fopen_s( &shaderFile, pURL, "r" );
if ( shaderFile == NULL )
return "FILE_ER";
// File size
fseek (shaderFile , 0 , SEEK_END);
int lSize = ftell (shaderFile);
rewind (shaderFile);
// Allocating size to store the content
pShader = (char*) malloc (sizeof(char) * lSize);
if (pShader == NULL)
{
fputs ("Memory error", stderr);
return "MEM_ER";
}
// copy the file into the buffer:
int result = fread (pShader, sizeof(char), lSize, shaderFile);
if (result != lSize)
{
// size of file 106/113
cout << "size of file " << result << "/" << lSize << endl;
fputs ("Reading error", stderr);
return "READ_ER";
}
// Terminate
fclose (shaderFile);
return 0;
}
But as you can see in the code, I get a strange size difference at the end of the process, which makes my function crash.
I must say I'm quite a beginner in C, so I might have missed some subtleties regarding memory allocation, types, pointers...
How can I solve this size issue?
EDIT 1:
First, I shouldn't return 0 at the end but pShader; that seemed to be what crashed the program.
Then, I changed the type of result to size_t and added an end character to pShader, adding pShader[result] = '\0'; so I can display it correctly.
Finally, as #JamesKanze suggested, I turned fopen_s into fopen, as the former was not useful in my case.
First, for this sort of raw access, you're probably better off using the system level functions: CreateFile or open, ReadFile or read and CloseHandle or close, with GetFileSize or stat to get the size. Using FILE* or std::filebuf will only introduce an additional level of buffering and processing, for no gain in your case.
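As a sketch of what that looks like on a POSIX system (the Windows equivalents being CreateFile, GetFileSize, ReadFile and CloseHandle); the function name is just an example:
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstdlib>

// Read a whole file into a malloc'd, NUL-terminated buffer; returns NULL on
// failure. A fully robust version would loop until read() returns 0.
char* load_raw(const char* path, size_t* outLen)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) return NULL;

    struct stat st;
    if (fstat(fd, &st) != 0) { close(fd); return NULL; }

    char* buf = (char*)malloc((size_t)st.st_size + 1);
    if (buf == NULL) { close(fd); return NULL; }

    ssize_t got = read(fd, buf, (size_t)st.st_size);
    close(fd);
    if (got < 0) { free(buf); return NULL; }

    buf[got] = '\0';        // terminate, since the shader source is used as text
    *outLen = (size_t)got;
    return buf;
}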
As to what you are seeing: there is no guarantee that an ftell will return anything exploitable as a numeric value; it could very well be just a magic cookie. On most current systems it is a byte offset into the physical file, but on any non-Unix system the offset into the physical file will not map directly to the logical file you are reading unless you open the file in binary mode. If you use "rb" to open the file, you'll probably see the same values. (Theoretically, you could get extra 0's at the end of the file, but practically, the OS's where that happened are either extinct or only used on legacy mainframes.)
EDIT:
Since the answer stating this has been deleted: you should loop on the fread until it returns 0 (setting errno to 0 before each call, and checking it after the return to see whether the function returned because of an error or because it reached the end of file). Having said this: if you're on one of the usual Windows or Unix systems, and the file is local to the machine and not too big, fread will read it all in one go.
The difference in size you are seeing (given the numerical values you posted) is almost certainly due to the fact that the two-byte Windows line endings are being mapped to a single '\n' character. To avoid this, you must open in binary mode; alternatively, if you really are dealing with text (and want this mapping), you can just ignore the extra bytes in your buffer, setting the '\0' terminator after the last byte actually read.
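For completeness, a sketch of the original function with the fixes from this answer and the question's edit applied (binary mode, a null terminator, and returning NULL rather than string literals on error):
#include <cstdio>
#include <cstdlib>

char* load_shader(const char* pURL)
{
    FILE* shaderFile = fopen(pURL, "rb");   // "rb": ftell now matches what fread delivers
    if (shaderFile == NULL)
        return NULL;

    fseek(shaderFile, 0, SEEK_END);
    long lSize = ftell(shaderFile);
    rewind(shaderFile);
    if (lSize < 0) { fclose(shaderFile); return NULL; }

    char* pShader = (char*)malloc((size_t)lSize + 1);   // +1 for the terminator
    if (pShader == NULL) { fclose(shaderFile); return NULL; }

    size_t result = fread(pShader, 1, (size_t)lSize, shaderFile);
    fclose(shaderFile);

    pShader[result] = '\0';   // in binary mode, result should equal lSize
    return pShader;           // caller frees the buffer after compiling the shader
}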

fread gives BadPtr when done in a new process

I have written a C++ DLL which has two functions: one writes a binary file to disk and the other reads that file from disk and loads it into memory.
//extremely simplified code looks like this
bool Utilities::WriteToBinary(wstring const fileName)
{
//lot of code
DWORD size = //get size of data to write
LPBYTE * blob = new LPBYTE[size];
WriteDataToMemoryBlob(blob, & size);
FILE * pFile;
if(0 != _wfopen_s (&pFile , fileName.c_str() , L"wb" ))
{
//do something
return false;
}
fwrite (blob, 1, size , pFile );
fclose (pFile);
delete[] blob;
return true;
}
bool Utilities::ReadDataFromDisk(wstring const fileName)
{
long fileSize = GetFileSize(fileName);
FILE * filePointer;
if(0 != _wfopen_s (&filePointer, fileName.c_str() , L"rb" ))
return false;
//read from file
LPBYTE * blobRead = new LPBYTE[fileSize];
fread (blobRead, 1, fileSize , filePointer );
fclose (filePointer);
//rest of the code...
Problem
I have created another C++ project which calls these DLL methods for testing.
The problem which is driving me crazy is that when I call WriteToBinary and ReadDataFromDisk consecutively inside the same program, they work perfectly fine. But when I call WriteToBinary in one run, let the program exit, and then call ReadDataFromDisk in the next run, giving it the path of the file written earlier by WriteToBinary, I get a BadPtr in blobRead after the fread.
I have tried my best to make sure there are no shared or static data structures involved. Both methods are totally independent.
Any idea what might be causing this?
One mistake is the allocation of the array: LPBYTE is a BYTE*, so
LPBYTE * blobRead = new LPBYTE[fileSize];
is allocating an array of BYTE*, not an array of BYTE. Change it to:
BYTE* blobRead = new BYTE[fileSize];
To avoid manual memory management you could use a std::vector<BYTE> instead:
std::vector<BYTE> blobRead(fileSize);
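A short usage sketch of the vector variant inside ReadDataFromDisk, assuming fileSize and filePointer as in the question:
#include <vector>

std::vector<BYTE> blobRead(fileSize);
if (fread(&blobRead[0], 1, fileSize, filePointer) != (size_t)fileSize) {
    // short read or error -- handle it
}
fclose(filePointer);
// The vector owns the memory (no delete[] needed); &blobRead[0] gives the
// BYTE* to hand to the rest of the code.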

new and malloc allocate extra 16 bytes

I'm writing C++ in VS2010 on Windows 7. I am trying to read a file of size 64 bytes. Here's the code:
BYTE* MyReadFile(FILE *f)
{
size_t result;
BYTE *buffer;
long lSize;
if (f == NULL)
{
fputs ("File error", stderr);
exit (1);
}
fseek (f, 0, SEEK_END);
lSize = ftell (f);
rewind (f);
//buffer = (BYTE*) malloc (sizeof(char)*lSize);
buffer = new BYTE[lSize];
if (buffer == NULL)
{
fputs ("Memory error", stderr);
exit (2);
}
result = fread (buffer, 1, lSize, f);
if (result != lSize)
{
fputs ("Reading error",stderr);
exit (3);
}
fclose (f);
return buffer;
}
When I get the file size it is 64, but when I allocate memory for it with new BYTE[lSize] I get 80 bytes of space, and thus the strange sequence ээээ««««««««оюою is added to the end of the buffer. Can you please tell me how to handle this?
There is an important difference between the number of bytes you have allocated, and the number of bytes that you see.
If lSize is 64, you have indeed allocated yourself 64 bytes. This does not mean that behind the scenes the C++ runtime will have asked Windows for exactly 64 bytes. In practice memory managers ask for slightly more memory so they are able to do their own bookkeeping. Often these extra bytes are allocated BEFORE the pointer you get back from new/malloc, so you will never see them.
However, that is not your problem. The problem is that you read 64 bytes from the file using fread. fread has no way of knowing what kind of data you are reading. It could be a struct, a char buffer, a set of doubles, ... It just reads those bytes for you.
This means that if the file contains the characters "ABC" you will get exactly "ABC" back. BUT, in C, strings should be nul-terminated, so if you pass this buffer to printf, it will continue to scan memory until it finds a nul-character.
So, to solve your problem, allocate 1 byte more, and set the last byte to the nul character, like this:
buffer = new BYTE[lSize+1];
buffer[lSize] = '\0';
What sits before and after your allocation is called a sentinel. It is used to check that your code does not exceed the boundary of the allocated memory. When your program overwrites these values, the CRT library will report debug messages when you release your buffer.
Look here: http://msdn.microsoft.com/en-us/library/ms220938%28v=vs.80%29.aspx
Although this may look like a memory problem, it's actually a printing problem (as #Mystical pointed out). You need to add a null terminator if you are going to print anything out as a string; otherwise memory will be wildly read until one is encountered (which is UB).
Try this instead:
buffer = new BYTE[lSize + 1];
if (buffer == NULL)
{
fputs ("Memory error", stderr);
exit (2);
}
result = fread (buffer, 1, lSize, f);
if (result != lSize)
{
fputs ("Reading error",stderr);
exit (3);
}
buffer[lSize] = '\0';
It'll ensure there is a null terminator at the end of the returned buffer.
When memory is allocated, it is not on a per-byte basis. Instead it is allocated in aligned blocks of 8 or 16 bytes (possibly with a header at the start, before the pointer). This is usually not a problem unless you create lots (many millions) of little objects. This doesn't have to be a problem in C and isn't even a major problem in Java (which doesn't support arrays of objects or objects allocated on the stack).