I'm trying to make an exe program that can read any file into binary, and later use that binary to recreate the exact same file.
I figured out that I can use fopen(filePath, "rb") to open a file in binary mode, and that fwrite can write a block of data to a stream. The problem is that when I fwrite, it doesn't seem to copy everything.
For example, the text file I opened contains 31231232131. When I write it into another file, only 3123 (the first 4 bytes) gets copied.
I suspect I'm missing something very simple, but I don't know what.
#include <stdio.h>
#include <iostream>
using namespace std;

typedef unsigned char BYTE;

long getFileSize(FILE *file)
{
    long lCurPos, lEndPos;
    lCurPos = ftell(file);
    fseek(file, 0, 2);
    lEndPos = ftell(file);
    fseek(file, lCurPos, 0);
    return lEndPos;
}

int main()
{
    //const char *filePath = "C:\\Documents and Settings\\Digital10\\MyDocuments\\Downloads\\123123.txt";
    const char *filePath = "C:\\Program Files\\NPKI\\yessign\\User\\008104920100809181000405,OU=HNB,OU=personal4IB,O=yessign,C=kr\\SignCert.der";
    BYTE *fileBuf;
    FILE *file = NULL;

    if ((file = fopen(filePath, "rb")) == NULL)
        cout << "Could not open specified file" << endl;
    else
        cout << "File opened successfully" << endl;

    long fileSize = getFileSize(file);
    fileBuf = new BYTE[fileSize];
    fread(fileBuf, fileSize, 1, file);

    FILE *fi = fopen("C:\\Documents and Settings\\Digital10\\My Documents\\Downloads\\gcc.txt", "wb");
    fwrite(fileBuf, sizeof(fileBuf), 1, fi);

    cin.get();
    delete[] fileBuf;
    fclose(file);
    fclose(fi);
    return 0;
}
fwrite(fileBuf, fileSize, 1, fi);
You read fileSize bytes, but you are writing sizeof(fileBuf) bytes, which is the size of the pointer returned by new, not the size of the buffer.
A C++ way to do it (note the binary flags, so the copy is byte-for-byte even on Windows):
#include <fstream>

int main()
{
    std::ifstream in("Source.txt", std::ios::binary);
    std::ofstream out("Destination.txt", std::ios::binary);
    out << in.rdbuf();
}
You have swapped the arguments of fread and fwrite: the element size precedes the number of elements. It should be:
fread(fileBuf, 1, fileSize, file);
And
fwrite(fileBuf, 1, fileSize, fi);
Also address my comment from above:
Enclose the else clause in { and }; indentation does not determine blocks in C++. As written, the rest of the code still runs when the file fails to open, and your program will crash.
EDIT: and there is another problem: you have been writing sizeof(fileBuf) bytes, which is a constant (the size of a pointer). Instead, you should write exactly as many bytes as you read. Given the rest of your code, you can simply replace sizeof(fileBuf) with fileSize, as I did above.
fileBuf = new BYTE[fileSize];
fread(fileBuf, fileSize, 1, file);
FILE* fi = fopen("C:\\Documents and Settings\\[...]\gcc.txt","wb");
fwrite(fileBuf,sizeof(fileBuf),1,fi);
fileBuf is a pointer to BYTE. You declared it yourself: BYTE *fileBuf. So sizeof(fileBuf) is sizeof(BYTE *), which on a 32-bit build is 4, which is why only the first 4 bytes were copied.
Perhaps you wanted:
fwrite(fileBuf, fileSize, 1, fi);
which closely mirrors the earlier fread call.
I strongly recommend that you capture the return values of I/O functions and check them.
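Putting the fixes from both answers together, a corrected copy with the return values checked might look roughly like this (a sketch; the paths are hypothetical placeholders):
#include <cstdio>
#include <iostream>

int main()
{
    // Hypothetical paths for illustration; substitute your own.
    const char *srcPath = "source.bin";
    const char *dstPath = "copy.bin";

    FILE *src = fopen(srcPath, "rb");
    if (src == NULL) {                           // bail out instead of falling through
        std::cerr << "Could not open " << srcPath << '\n';
        return 1;
    }

    fseek(src, 0, SEEK_END);                     // use the named constants, not 2 and 0
    long fileSize = ftell(src);
    fseek(src, 0, SEEK_SET);

    unsigned char *buf = new unsigned char[fileSize];
    size_t got = fread(buf, 1, fileSize, src);   // element size 1, count fileSize
    if (got != (size_t)fileSize)
        std::cerr << "Short read: " << got << " of " << fileSize << " bytes\n";

    FILE *dst = fopen(dstPath, "wb");
    if (dst != NULL) {
        size_t put = fwrite(buf, 1, got, dst);   // write what was read, not sizeof(buf)
        if (put != got)
            std::cerr << "Short write\n";
        fclose(dst);
    }

    delete[] buf;
    fclose(src);
    return 0;
}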
Scenario: I have a binary file that is 8,203,685 bytes long, and I am using fread() to read it in.
Problem: Hexdumping the data after the fread() yields different results on Linux and Windows. Both hexdump files are the same size, but on Linux the dump matches the original input file, whereas on Windows everything from byte 8,200,193 onward is 0's.
Code:
#include <cstdio>
#include <cstdlib>
#include <fstream>

int main(void)
{
    FILE *fp = fopen("input.exe", "rb");
    unsigned char *data = NULL;
    long size = 0;

    if (fp)
    {
        fseek(fp, 0, SEEK_END);
        size = ftell(fp);
        fseek(fp, 0, SEEK_SET);

        data = (unsigned char *)malloc(size);
        size_t read_bytes = fread(data, 1, size, fp);
        // print out read_bytes, value is equal to size

        // Hex dump using ofstream. Hexdump file is different here on Windows vs
        // on Linux. Last ~3000 bytes are all 0's on Windows.
        std::ofstream out("hexdump.bin", std::ios::binary | std::ios::trunc);
        out.write(reinterpret_cast<char *>(data), size);
        out.close();

        FILE *out_file = fopen("hexdump_with_FILE.bin", "wb");
        fwrite(data, 1, size, out_file);
        fflush(out_file);
        fclose(out_file);
    }

    if (fp) fclose(fp);
    if (data) free(data);
    return 0;
}
Has anyone seen this behavior before, or have an idea what might be causing it?
P.S. Everything works as expected when using ifstream and its read function.
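That ifstream variant might look roughly like this (a sketch, assuming the same input file as above):
#include <fstream>
#include <vector>

int main()
{
    // The ifstream-based path the P.S. reports as reading all bytes correctly.
    std::ifstream in("input.exe", std::ios::binary);

    in.seekg(0, std::ios::end);
    std::streamsize size = in.tellg();
    in.seekg(0, std::ios::beg);

    std::vector<char> data(static_cast<size_t>(size));
    in.read(data.data(), size);              // ifstream::read instead of fread

    std::ofstream out("hexdump_ifstream.bin", std::ios::binary);
    out.write(data.data(), size);
    return 0;
}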
Thanks!
I want to read and remove the first line of a txt file, without copying (it's a huge file).
I've searched the net, but everybody just copies the desired content to a new file. I can't do that.
Below is a first attempt. This code gets stuck in a loop because no lines are ever removed. If it removed the first line of the file on each pass, it would eventually reach the end.
#include <iostream>
#include <string>
#include <fstream>
#include <boost/interprocess/sync/file_lock.hpp>

int main() {
    std::string line;
    std::fstream file;
    boost::interprocess::file_lock lock("test.lock");

    while (true) {
        std::cout << "locking\n";
        lock.lock();
        file.open("test.txt", std::fstream::in | std::fstream::out);
        if (!file.is_open()) {
            std::cout << "can't open file\n";
            file.close();
            lock.unlock();
            break;
        }
        else if (!std::getline(file, line)) {
            std::cout << "empty file\n";   //
            file.close();                  // never
            lock.unlock();                 // reached
            break;                         //
        }
        else {
            // remove first line
            file.close();
            lock.unlock();
            // do something with line
        }
    }
}
Here's a solution written in C for Windows.
It executes and finishes on a 700,000-line, 245 MB file in no time (0.14 seconds).
Basically, I memory-map the file so that I can access its contents using the functions normally used for raw memory access. Once the file has been mapped, I use strchr to find the first of the two symbols that denote an EOL on Windows (\r, followed by \n); this tells us how long the first line is in bytes.
From there, I copy everything from the first byte of the second line back to the start of the memory-mapped area (in effect, the first byte of the file); since the source and destination regions overlap, this calls for memmove rather than memcpy.
Once this is done, the file is unmapped and the handle to the memory-mapped file is closed; then the SetEndOfFile function is used to reduce the length of the file by the length of the first line. When the file is closed, it has shrunk by that amount and the first line is gone.
The file was already in memory because I had just created and written it, which obviously skews the timing somewhat, but the Windows caching mechanism is the 'culprit' here; it is the very same mechanism we're leveraging to make the operation complete so quickly.
The test data is the source of the program duplicated 100,000 times and saved as testInput2.txt (paste it 10 times; select all; copy; paste 10 times, replacing the original 10, for a total of 100 copies; repeat until the output is big enough. I stopped there because anything more seemed to make Notepad++ a 'bit' unhappy).
Error checking in this program is virtually non-existent, and the input is expected not to be Unicode, i.e. the input is 1 byte per character.
The EOL sequence is 0x0D, 0x0A (\r\n).
Code:
#include <stdio.h>
#include <string.h>
#include <windows.h>

void testFunc(const char inputFilename[])
{
    int lineLength = 0;
    HANDLE fileHandle = CreateFile(
        inputFilename,
        GENERIC_READ | GENERIC_WRITE,
        0,
        NULL,
        OPEN_EXISTING,
        FILE_ATTRIBUTE_NORMAL | FILE_FLAG_WRITE_THROUGH,
        NULL
    );
    if (fileHandle != INVALID_HANDLE_VALUE)
    {
        printf("File opened okay\n");

        DWORD fileSizeHi;
        DWORD fileSizeLo = GetFileSize(fileHandle, &fileSizeHi);

        HANDLE memMappedHandle = CreateFileMapping(
            fileHandle,
            NULL,
            PAGE_READWRITE | SEC_COMMIT,
            0,
            0,
            NULL
        );
        if (memMappedHandle)
        {
            printf("File mapping success\n");
            LPVOID memPtr = MapViewOfFile(
                memMappedHandle,
                FILE_MAP_ALL_ACCESS,
                0,
                0,
                0
            );
            if (memPtr != NULL)
            {
                printf("view of file successfully created\n");
                printf("File size is: 0x%04X%04X\n", fileSizeHi, fileSizeLo);

                char *eolPos = strchr((char *)memPtr, '\r'); // Windows EOL sequence is \r\n
                lineLength = (int)(eolPos - (char *)memPtr);
                printf("Length of first line is: %d\n", lineLength);

                // the source and destination ranges overlap, so memmove (not memcpy) is needed
                memmove(memPtr, eolPos + 2, fileSizeLo - (lineLength + 2));
                UnmapViewOfFile(memPtr);
            }
            CloseHandle(memMappedHandle);
        }
        SetFilePointer(fileHandle, -(lineLength + 2), NULL, FILE_END);
        SetEndOfFile(fileHandle);
        CloseHandle(fileHandle);
    }
}

int main()
{
    const char inputFilename[] = "testInput2.txt";
    testFunc(inputFilename);
    return 0;
}
What you want to do is, indeed, not easy.
If you open the same file for reading and writing without being careful, you will end up reading what you just wrote, and the result will not be what you want.
Modifying the file in place is doable: just open it, seek in it, modify, and close. However, you want to copy all the content of the file except the first K bytes. That means you will have to iteratively read and write the whole file in chunks of N bytes.
Once that is done, K bytes will remain at the end and need to be removed. I don't think there's a way to do that with streams. You can use the ftruncate or truncate functions from unistd.h, or Boost.Interprocess, for this.
Here is an example (without any error checking; I'll let you add it):
#include <iostream>
#include <fstream>
#include <string>
#include <vector>
#include <unistd.h>

int main()
{
    std::fstream file;
    file.open("test.txt", std::fstream::in | std::fstream::out);

    // First retrieve the size of the file
    file.seekg(0, file.end);
    std::streampos endPos = file.tellg();
    file.seekg(0, file.beg);

    // Then retrieve the size of the first line (a.k.a. bufferSize)
    std::string firstLine;
    std::getline(file, firstLine);

    // We need two streampos: the read one and the write one
    std::streampos readPos = firstLine.size() + 1;
    std::streampos writePos = 0;

    // Read the whole file starting at readPos in chunks of size bufferSize
    // (a std::vector rather than a variable-length array, which is not standard C++)
    std::size_t bufferSize = 256;
    std::vector<char> buffer(bufferSize);
    bool finished = false;
    while (!finished)
    {
        file.seekg(readPos);
        if (readPos + static_cast<std::streampos>(bufferSize) >= endPos)
        {
            bufferSize = endPos - readPos;
            finished = true;
        }
        file.read(buffer.data(), bufferSize);
        file.seekp(writePos);
        file.write(buffer.data(), bufferSize);
        readPos += bufferSize;
        writePos += bufferSize;
    }
    file.close();

    // No clean way to truncate streams; use the function from unistd.h
    truncate("test.txt", writePos);
    return 0;
}
I'd really like to be able to provide a cleaner solution for in-place modification of the file, but I'm not sure there's one.
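If C++17 is available, std::filesystem::resize_file offers a portable alternative to the unistd.h truncate call above. A minimal sketch:
#include <cstdint>
#include <filesystem>

// Shrinks "test.txt" to newSize bytes, replacing the unistd.h truncate() call.
void truncate_file(std::uintmax_t newSize)
{
    std::filesystem::resize_file("test.txt", newSize);
}
It could be called with the final writePos after file.close(), in place of the truncate() call.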
I have a C function that is decompressing a gzip file into another file:
bool gzip_uncompress(const std::string &compressed_file_path, std::string &uncompressed_file_path)
{
    char outbuffer[1024 * 16];
    gzFile infile = (gzFile)gzopen(compressed_file_path.c_str(), "rb");
    FILE *outfile = fopen(uncompressed_file_path.c_str(), "wb");

    gzrewind(infile);
    while (!gzeof(infile))
    {
        int len = gzread(infile, outbuffer, sizeof(outbuffer));
        fwrite(outbuffer, 1, len, outfile);
    }

    fclose(outfile);
    gzclose(infile);
    return true;
}
And this works well.
However, I would like to write the decompressed chunks to a new char[] instead of an output file, but I don't know how to determine the length of the full decompressed output in advance, so I can't declare a char[?] buffer large enough to hold it.
Is it possible to modify the above function to decompress a file into memory? I assumed I'd decompress it into a char[], but maybe vector<char> is better? Does it matter? Either using C or C++ works for me.
This is straightforward in C++:
std::vector<char> gzip_uncompress(const std::string &compressed_file_path)
{
    char outbuffer[1024 * 16];
    gzFile infile = (gzFile)gzopen(compressed_file_path.c_str(), "rb");
    std::vector<char> outfile;

    gzrewind(infile);
    while (!gzeof(infile))
    {
        int len = gzread(infile, outbuffer, sizeof(outbuffer));
        outfile.insert(outfile.end(), outbuffer, outbuffer + len);
    }

    gzclose(infile);
    return outfile;
}
You can also dispense with outbuffer entirely, and instead resize the vector before each read and read directly into the bytes added by the resizing, which would avoid the copying.
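That variant might look something like this (a sketch, not the answer's tested code):
#include <string>
#include <vector>
#include <zlib.h>

std::vector<char> gzip_uncompress(const std::string &compressed_file_path)
{
    const std::size_t chunk = 1024 * 16;
    gzFile infile = gzopen(compressed_file_path.c_str(), "rb");
    std::vector<char> out;

    while (!gzeof(infile))
    {
        std::size_t old_size = out.size();
        out.resize(old_size + chunk);                 // grow, then read into the new tail
        int len = gzread(infile, out.data() + old_size, static_cast<unsigned>(chunk));
        out.resize(old_size + (len > 0 ? len : 0));   // shrink to what was actually read
    }

    gzclose(infile);
    return out;
}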
The C version would need to use malloc and realloc.
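A rough sketch of that malloc/realloc approach (written in C style that also compiles as C++; error handling kept minimal):
#include <cstdlib>
#include <cstring>
#include <zlib.h>

// Returns a malloc'd buffer holding the decompressed bytes; *out_len receives
// its length. The caller must free() the result. Returns NULL on failure.
char *gzip_uncompress_c(const char *path, size_t *out_len)
{
    gzFile infile = gzopen(path, "rb");
    size_t cap = 1024 * 16;
    size_t len = 0;
    char *out = infile ? (char *)malloc(cap) : NULL;
    char chunk[1024 * 16];

    while (out && infile && !gzeof(infile))
    {
        int n = gzread(infile, chunk, sizeof(chunk));
        if (n < 0) { free(out); out = NULL; break; }
        if (len + (size_t)n > cap)
        {
            cap *= 2;                                  // grow geometrically, like vector does
            char *bigger = (char *)realloc(out, cap);
            if (!bigger) { free(out); out = NULL; break; }
            out = bigger;
        }
        memcpy(out + len, chunk, (size_t)n);
        len += (size_t)n;
    }

    if (infile) gzclose(infile);
    *out_len = out ? len : 0;
    return out;
}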
I have this requirement to address: the user inputs an encrypted zip file (only the zip file is encrypted, not the contents inside it) which contains a text file.
The function should decrypt the zip file using the provided password or key, then unzip the file into memory as an array of chars and return a pointer to it.
I went through all the suggestions provided, including Minizip, microzip, and zlib, but I am still not clear on which is the best fit for my requirement.
So far I have implemented decrypting the zip file with the password and converting the zip file to a string. I plan to use this string as input to a zip decompressor and extract it to memory. However, I am not sure my approach is right. If there are better ways to do it, please share your suggestions, along with your recommendation on which library to use in my C++ application.
https://code.google.com/p/microzip/source/browse/src/microzip/Unzipper.cpp?r=c18cac3b6126cfd1a08b3e4543801b21d80da08c
http://www.winimage.com/zLibDll/minizip.html
http://www.example-code.com/vcpp/zip.asp
http://zlib.net/
Many thanks.
I zeroed in on using zlib. The link below helped me do it; I'm sharing it here in case it helps someone. In my case I use the buffer directly instead of writing it to a file.
http://www.gamedev.net/reference/articles/article2279.asp
#include <zlib.h>
#include <stdio.h>
#include <stdlib.h>
#include <iostream>
#include <Windows.h>
using namespace std;

int main(int argc, char *argv[])
{
    if (argc != 2)
    {
        cout << "Usage program.exe zipfilename" << endl;
        return 0;
    }

    FILE *FileIn;
    FILE *FileOut;
    unsigned long FileInSize;
    void *RawDataBuff;

    //input and output files
    FileIn = fopen(argv[1], "rb");
    FileOut = fopen("FileOut.zip", "wb");   // binary mode: compressed data is binary

    //get the file size of the input file
    fseek(FileIn, 0, SEEK_END);
    FileInSize = ftell(FileIn);

    //buffers for the raw and compressed data
    RawDataBuff = malloc(FileInSize);
    void *CompDataBuff = NULL;

    //zlib states that the destination buffer must be at least 0.1% larger than the
    //source plus 12 bytes, to cope with the overhead of zlib data streams;
    //10% plus 12 bytes is comfortably more than that
    uLongf CompBuffSize = (uLongf)(FileInSize + (FileInSize * 0.1) + 12);
    CompDataBuff = malloc((size_t)CompBuffSize);

    //read in the contents of the file into the source buffer
    fseek(FileIn, 0, SEEK_SET);
    fread(RawDataBuff, FileInSize, 1, FileIn);

    //now compress the data; compress() requires DestBuffSize to hold the size of
    //the destination buffer on entry, and stores the compressed size in it on exit
    uLongf DestBuffSize = CompBuffSize;
    int returnValue;
    returnValue = compress((Bytef *)CompDataBuff, (uLongf *)&DestBuffSize,
                           (const Bytef *)RawDataBuff, (uLongf)FileInSize);
    cout << "Return value " << returnValue << "\n";

    //write the compressed data to disk
    fwrite(CompDataBuff, DestBuffSize, 1, FileOut);
    fclose(FileIn);
    fclose(FileOut);
    free(RawDataBuff);
    free(CompDataBuff);

    errno_t err;
    // Open for read (will fail if file "FileOut.zip" does not exist)
    if ((FileIn = fopen("FileOut.zip", "rb")) == NULL) {
        fprintf(stderr, "error: Unable to open file" "\n");
        exit(EXIT_FAILURE);
    }
    else
        printf("Successfully opened the file\n");
    cout << "Input file name " << argv[1] << "\n";

    // Open for write ("test.txt" is created if it does not exist)
    if ((err = fopen_s(&FileOut, "test.txt", "wb")) != 0)
    {
        printf("The file 'test.txt' was not opened\n");
        system("pause");
        exit(1);
    }
    else
        printf("The file 'test.txt' was opened\n");

    //get the file size of the input file
    fseek(FileIn, 0, SEEK_END);
    FileInSize = ftell(FileIn);

    //buffers for the raw and uncompressed data
    RawDataBuff = malloc(FileInSize);
    char *UnCompDataBuff = NULL;
    if (RawDataBuff == NULL)
    {
        fputs("Memory error", stderr);
        exit(2);
    }

    //read in the contents of the file into the source buffer
    fseek(FileIn, 0, SEEK_SET);
    fread(RawDataBuff, FileInSize, 1, FileIn);

    //allocate a buffer big enough to hold the uncompressed data; we can cheat here
    //because we know the file size of the original
    uLongf UnCompSize = 482000; //TODO : Revisit this
    int retValue;
    UnCompDataBuff = (char *)malloc(sizeof(char) * UnCompSize);
    if (UnCompDataBuff == NULL)
    {
        fputs("Memory error", stderr);
        exit(2);
    }

    //all the data we require is ready, so uncompress it into the destination
    //buffer; the exact size will be stored in UnCompSize
    retValue = uncompress((Bytef *)UnCompDataBuff, &UnCompSize, (const Bytef *)RawDataBuff, FileInSize);
    cout << "Return value of decompression " << retValue << "\n";

    //write the decompressed data to disk
    fwrite(UnCompDataBuff, UnCompSize, 1, FileOut);
    free(RawDataBuff);
    free(UnCompDataBuff);
    fclose(FileIn);
    fclose(FileOut);
    system("pause");
    exit(0);
}
Most, if not all, of the popular ZIP tools also support command-line usage. So, if I were you, I would just run a system command from the C++ program to decrypt and unzip the file using one of these tools. After the text file has been unzipped and decrypted, you can load it from disk into memory and process it from there. A simple but efficient solution.
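For illustration, such a call might look like this, assuming the Info-ZIP unzip tool is on the PATH (the archive name, password, and paths are placeholders to adapt):
#include <cstdlib>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

int main()
{
    // Hypothetical archive, password, and output directory; adapt to your setup.
    // Info-ZIP's unzip takes the password via -P (note: visible to other processes).
    std::string cmd = "unzip -P mypassword archive.zip -d extracted";
    if (std::system(cmd.c_str()) != 0)
        return 1;

    // Load the extracted text file back into memory as an array of chars.
    std::ifstream in("extracted/file.txt", std::ios::binary);
    std::vector<char> data((std::istreambuf_iterator<char>(in)),
                           std::istreambuf_iterator<char>());
    // ... process data.data() / data.size() ...
    return 0;
}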
I have a theory: I can grab the file size using fseek and ftell and build a dynamic array to use as a buffer, then use that buffer with fgets(). I currently cannot come up with a way to do it.
My theory stems from not knowing the size of the first file in bytes, so I do not know how big a buffer to build. What if the file is over 2 GB? I want a buffer that adapts to the size of whatever file I pass into SearchInFile().
Here is what I have so far below:
int SearchInFile(char *fname, char *fname2)
{
    FILE *pFile, *pFile2;
    int szFile, szFile2;

    // Open first file
    if ((fopen_s(&pFile, fname, "r")) != 0)
    {
        return (-1);
    }
    // Open second file
    if ((fopen_s(&pFile2, fname2, "r")) != 0)
    {
        return (-1);
    }

    // Find file size
    fseek(pFile, 0L, SEEK_END);
    szFile = ftell(pFile);
    // Readjust file pointer
    fseek(pFile, 0L, SEEK_SET);

    std::vector<char> buff;
    //char buff[szFile];
    while (fgets(buff.push_back(), szFile, pFile))   // doesn't compile; this is where I'm stuck
    {
    }
Any thoughts or examples would be great. I've been searching the net for the last few hours.
Vector can grow, so you don't have to know the size beforehand. The following four lines do what you want.
std::vector<char> buff;
int ch;
while ((ch = fgetc(pFile)) != EOF)
    buff.push_back(ch);
fgetc is a function that reads a single char; it's simpler than using fgets here.
If you do know the file size beforehand, you can call buff.reserve(szFile) before the loop, which will make it a little more efficient.
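Combined with the size-finding code from the question, that might look like this (a sketch reusing the question's names; error checking omitted):
#include <cstdio>
#include <vector>

// Read all of pFile into a vector, pre-sizing the allocation up front.
std::vector<char> ReadWholeFile(FILE *pFile)
{
    fseek(pFile, 0L, SEEK_END);
    long szFile = ftell(pFile);
    fseek(pFile, 0L, SEEK_SET);

    std::vector<char> buff;
    buff.reserve(szFile);                    // pre-allocate, as suggested above

    int ch;
    while ((ch = fgetc(pFile)) != EOF)
        buff.push_back(static_cast<char>(ch));

    return buff;
}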