I think I probably have to use an fstream object, but I'm not sure how. Essentially, I want to read a file into a byte buffer, modify it, then rewrite those bytes to a file. So I just need to know how to do byte I/O.
#include <fstream>
using namespace std;

ifstream fileBuffer("input file path", ios::in | ios::binary);
ofstream outputBuffer("output file path", ios::out | ios::binary);
char input[1024];
char output[1024];

if (fileBuffer.is_open())
{
    fileBuffer.seekg(0, ios::beg);
    fileBuffer.read(input, sizeof(input)); // read() rather than getline() for raw bytes
}

// Modify output here, based on input and fileBuffer.gcount().

outputBuffer.write(output, fileBuffer.gcount()); // write only the bytes actually read
outputBuffer.close();
fileBuffer.close();
From memory I think this is how it goes.
If you are dealing with a small file, I recommend reading the whole file; it's easier. Then work with the buffer and write the whole block out again. The lines below show how to read the block - assuming you fill in the opening and closing of the input/output files from the reply above.
// open the file stream
.....
// use seekg to find the length, then you can create a buffer of that size
input.seekg(0, ios::end);
int length = input.tellg();
input.seekg(0, ios::beg);

char* buffer = new char[length];
input.read(buffer, length);

// do something with the buffer here
............

// write it back out, assuming you now have allocated a new buffer
// of newLength bytes (don't use sizeof(newBuffer) - that is just the pointer size)
output.write(newBuffer, newLength);

delete [] buffer;
delete [] newBuffer;

// close the file
..........
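Putting those pieces together, a minimal self-contained sketch of the whole-file approach might look like this (the file names are placeholders and error checking is omitted):

#include <fstream>
using namespace std;

int main()
{
    ifstream input("input file path", ios::in | ios::binary);
    ofstream output("output file path", ios::out | ios::binary);

    // use seekg to find the length, then create a buffer of that size
    input.seekg(0, ios::end);
    streamsize length = input.tellg();
    input.seekg(0, ios::beg);

    char* buffer = new char[length];
    input.read(buffer, length);

    // ... modify buffer (or build a new one) here ...

    output.write(buffer, length); // write back whatever you ended up with

    delete [] buffer;
    // the streams close themselves when they go out of scope
}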
#include <iostream>
#include <fstream>

const static int BUF_SIZE = 4096;

using std::ios_base;

int main(int argc, char** argv) {
    std::ifstream in(argv[1],
                     ios_base::in | ios_base::binary);   // Use binary mode so we can
    std::ofstream out(argv[2],                           // handle all kinds of file
                      ios_base::out | ios_base::binary); // content.

    // Make sure the streams opened okay...

    char buf[BUF_SIZE];

    do {
        in.read(&buf[0], BUF_SIZE);      // Read at most n bytes into
        out.write(&buf[0], in.gcount()); // buf, then write the buf to
    } while (in.gcount() > 0);           // the output.

    // Check streams for problems...

    in.close();
    out.close();
}
While doing file I/O, you will have to read the file in a loop, checking for end-of-file and error conditions. You can use the above code like this, with the read call itself as the loop condition:

while (fileBufferHere.getline(m_content, 1024)) {
    /* Do your work */
}
Related
I want to read and remove the first line from a txt file (without copying; it's a huge file).
I've read around the net, but everybody just copies the desired content to a new file. I can't do that.
Below is a first attempt. This code gets stuck in a loop, as no lines are ever removed. If the code removed the first line of the file on each opening, it would reach the end.
#include <iostream>
#include <string>
#include <fstream>
#include <boost/interprocess/sync/file_lock.hpp>

int main() {
    std::string line;
    std::fstream file;
    boost::interprocess::file_lock lock("test.lock");

    while (true) {
        std::cout << "locking\n";
        lock.lock();
        file.open("test.txt", std::fstream::in|std::fstream::out);
        if (!file.is_open()) {
            std::cout << "can't open file\n";
            file.close();
            lock.unlock();
            break;
        }
        else if (!std::getline(file,line)) {
            std::cout << "empty file\n"; //
            file.close();                // never
            lock.unlock();               // reached
            break;                       //
        }
        else {
            // remove first line
            file.close();
            lock.unlock();
            // do something with line
        }
    }
}
Here's a solution written in C for Windows.
It will execute and finish on a 700,000 line, 245MB file in no time. (0.14 seconds)
Basically, I memory-map the file, so that I can access the contents using the functions used for raw memory access. Once the file has been mapped, I just use the strchr function to find the location of one of the pair of symbols used to denote an EOL in Windows (\r and \n) - this tells us how long, in bytes, the first line is.
From here, I just copy (with memmove, since the source and destination overlap) from the first byte of the second line back to the start of the memory-mapped area (basically, the first byte in the file).
Once this is done, the file is unmapped, the handle to the mem-mapped file is closed and we then use the SetEndOfFile function to reduce the length of the file by the length of the first line. When we close the file, it has shrunk by this length and the first line is gone.
Having the file already in memory, since I've just created and written it, obviously alters the execution time somewhat, but the Windows caching mechanism is the 'culprit' here - the very same mechanism we're leveraging to make the operation complete very quickly.
The test data is the source of the program duplicated 100,000 times and saved as testInput2.txt (paste it 10 times, select all, copy, paste 10 times - replacing the original 10, for a total of 100 times - then repeat until the output is big enough. I stopped here because more seemed to make Notepad++ a 'bit' unhappy).
Error-checking in this program is virtually non-existent, and the input is expected not to be UNICODE, i.e. the input is 1 byte per character.
The EOL sequence is 0x0D, 0x0A (\r, \n)
Code:
#include <stdio.h>
#include <string.h>
#include <windows.h>

void testFunc(const char inputFilename[])
{
    int lineLength;

    HANDLE fileHandle = CreateFile(
        inputFilename,
        GENERIC_READ | GENERIC_WRITE,
        0,
        NULL,
        OPEN_EXISTING,
        FILE_ATTRIBUTE_NORMAL | FILE_FLAG_WRITE_THROUGH,
        NULL
        );

    if (fileHandle != INVALID_HANDLE_VALUE)
    {
        printf("File opened okay\n");

        DWORD fileSizeHi, fileSizeLo = GetFileSize(fileHandle, &fileSizeHi);

        HANDLE memMappedHandle = CreateFileMapping(
            fileHandle,
            NULL,
            PAGE_READWRITE | SEC_COMMIT,
            0,
            0,
            NULL
            );

        if (memMappedHandle)
        {
            printf("File mapping success\n");

            LPVOID memPtr = MapViewOfFile(
                memMappedHandle,
                FILE_MAP_ALL_ACCESS,
                0,
                0,
                0
                );

            if (memPtr != NULL)
            {
                printf("View of file successfully created\n");
                printf("File size is: 0x%08lX%08lX\n", fileSizeHi, fileSizeLo);

                char *eolPos = strchr((char*)memPtr, '\r'); // Windows EOL sequence is \r\n
                lineLength = (int)(eolPos - (char*)memPtr);
                printf("Length of first line is: %d\n", lineLength);

                // memmove, not memcpy: the source and destination regions overlap
                memmove(memPtr, eolPos + 2, fileSizeLo - lineLength);

                UnmapViewOfFile(memPtr);
            }
            CloseHandle(memMappedHandle);
        }

        SetFilePointer(fileHandle, -(lineLength + 2), 0, FILE_END);
        SetEndOfFile(fileHandle);
        CloseHandle(fileHandle);
    }
}

int main()
{
    const char inputFilename[] = "testInput2.txt";
    testFunc(inputFilename);
    return 0;
}
What you want to do, indeed, is not easy.
If you open the same file for reading and writing without being careful, you will end up reading what you just wrote, and the result will not be what you want.
Modifying the file in place is doable: just open it, seek in it, modify and close. However, you want to copy all of the content of the file except K bytes at the beginning of it. That means you will have to iteratively read and write the whole file in chunks of N bytes.
Once that is done, K bytes will remain at the end that need to be removed. I don't think there's a way to do that with streams. You can use the ftruncate or truncate functions from unistd.h, or Boost.Interprocess truncate, for this.
Here is an example (without any error checking, I let you add it):
#include <iostream>
#include <fstream>
#include <string>
#include <unistd.h>

int main()
{
    std::fstream file;
    file.open("test.txt", std::fstream::in | std::fstream::out);

    // First retrieve the size of the file
    file.seekg(0, file.end);
    std::streampos endPos = file.tellg();
    file.seekg(0, file.beg);

    // Then retrieve the size of the first line (i.e. the offset to skip)
    std::string firstLine;
    std::getline(file, firstLine);

    // We need two streampos: the read one and the write one
    std::streampos readPos = firstLine.size() + 1;
    std::streampos writePos = 0;

    // Read the whole file starting at readPos in chunks of size bufferSize
    const std::size_t maxBufferSize = 256;
    char buffer[maxBufferSize];        // fixed-size array: not a VLA, which isn't standard C++
    std::size_t bufferSize = maxBufferSize;
    bool finished = false;
    while(!finished)
    {
        file.seekg(readPos);
        if(readPos + static_cast<std::streampos>(bufferSize) >= endPos)
        {
            bufferSize = endPos - readPos;
            finished = true;
        }
        file.read(buffer, bufferSize);
        file.seekp(writePos);          // seekp, since we are about to write
        file.write(buffer, bufferSize);
        readPos += bufferSize;
        writePos += bufferSize;
    }
    file.close();

    // No clean way to truncate streams, use a function from unistd.h
    truncate("test.txt", writePos);
    return 0;
}
I'd really like to be able to provide a cleaner solution for in-place modification of the file, but I'm not sure there's one.
I want to copy one image file to a new file. This is my method for doing it:
std::ofstream myOutpue;
std::ifstream mySource;
//int i = 0;
mySource.open(ofn.lpstrFile, std::ios::binary);
myOutpue.open("im4.jpg", std::ios::binary);
char buffer;
char bufferToSave[100];
if (mySource.is_open())
{
    //client->sendFilePacket(FileStates::START_SAVE, buffer, false,i);
    i++;
    while (!mySource.eof())
    {
        mySource >> std::noskipws >> buffer;
        myOutpue << buffer;
        //client->sendFilePacket(FileStates::CONTINUE_SAVE, buffer, false,i);
        i++;
    }
}
i++;
//client->sendFilePacket(FileStates::END_SAVE, buffer, true,i);
mySource.close();
//myOutpue.close();
This method works correctly, but my problem is that I want to copy chars/bytes and send them to another client. Doing this one char at a time doesn't work correctly, so I want to use a bigger buffer (for example char t[512]) or something like that, and copy it to the new file.
I tried to do it like this:
std::ofstream myOutpue;
std::ifstream mySource;
mySource.open(ofn.lpstrFile, std::ios::binary);
myOutpue.open("im4.jpg", std::ios::binary);
char buffer;
char bufferToSave[100];
if (mySource.is_open())
{
    //client->sendFilePacket(FileStates::START_SAVE, buffer, false,i);
    i++;
    while (!mySource.eof())
    {
        if (i == 100)
        {
            for (int i = 0; i < 100; i++) myOutpue << bufferToSave[i];
            i = 0;
        }
        mySource >> std::noskipws >> buffer;
        bufferToSave[i] = buffer;
        //myOutpue << buffer;
        //client->sendFilePacket(FileStates::CONTINUE_SAVE, buffer, false,i);
        i++;
    }
}
i++;
//client->sendFilePacket(FileStates::END_SAVE, buffer, true,i);
mySource.close();
myOutpue.close();
But I get an image that I can't open.
So my question is: how do I read more bytes from the file at a time and still end up with the same image as the original?
You have an error in your original file-copy algorithm: you should never loop using eof() as the end flag.
See: Why is iostream::eof inside a loop condition considered wrong?
Copying files can be as simple as this:
std::ofstream("output.jpg", std::ios::binary) << std::ifstream("input.jpg", std::ios::binary).rdbuf();
It uses a special overload of the output operator that takes a std::istream's buffer (obtained with rdbuf()). It copies the whole stream.
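If you prefer named streams and a basic check that the copy succeeded, the same rdbuf() copy can be written like this (just a sketch; the file names are placeholders):

#include <fstream>
#include <iostream>

int main()
{
    std::ifstream src("input.jpg", std::ios::binary);
    std::ofstream dst("output.jpg", std::ios::binary);

    dst << src.rdbuf(); // copies the whole stream in one call

    if (!src || !dst)   // note: an empty input file also sets failbit on dst
        std::cerr << "copy failed\n";
}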
When reading a whole buffer you should use std::istream::read:
std::ifstream ifs("input.jpg", std::ios::binary)
char buffer[1025]; // create a buffer
// keep going as long as the reading succeeds
while(ifs.read(buffer, sizeof(buffer)))
{
// ifs.gcount() is the number of chars read successfully
client->sendFilePacket(buffer, ifs.gcount()); // send all bytes
}
I know it's been a long time, but reading this topic I found the solution:
std::ifstream ifs(ofn.lpstrFile, std::ios::binary);
std::ofstream myOutpue;
char buffer[1024]; // create a buffer
myOutpue.open("output.jpg", std::ios::binary);
//client->sendFilePacket(FileStates::START_SAVE, buffer, false, i);
while (ifs.read(buffer, sizeof(buffer)))
{
    myOutpue.write(buffer, ifs.gcount());
}
// write out whatever the final, partial read left in the buffer
myOutpue.write(buffer, ifs.gcount());
myOutpue.close();
Note: My answer is similar to @dawcza94's, but to avoid the black screen you have to write out, after the loop, whatever is left from the last read, because inside the loop you only write what fills the buffer and the remainder is ignored. Sometimes that remainder is only a few bytes, so it looks like the images are the same size, but they aren't.
Note 2: I posted this here to help those who are still in trouble, as I was!!
C++ FAQ:
You probably want to use iostream’s read() and write() methods instead of its >> and << operators. read() and write() are better for binary mode; >> and << are better for text mode.
You can specify how much you want to read, and with gcount() you can ask how many characters were actually read. The same goes for write.
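As a minimal illustration of that advice (a sketch with placeholder file names), a read()/gcount()/write() copy loop looks like this:

#include <fstream>

int main()
{
    std::ifstream in("input.jpg", std::ios::binary);
    std::ofstream out("copy.jpg", std::ios::binary);

    char buf[4096];
    while (in.read(buf, sizeof(buf)) || in.gcount() > 0)
    {
        out.write(buf, in.gcount()); // write exactly as many bytes as were read
    }
}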
I tried with this code:
std::ifstream ifs(ofn.lpstrFile, std::ios::binary);
std::ofstream myOutpue;
char buffer[1024]; // create a buffer
myOutpue.open("output.jpg", std::ios::binary);
//client->sendFilePacket(FileStates::START_SAVE, buffer, false, i);
while (ifs.read(buffer, sizeof(buffer)))
{
    //client->sendFilePacket(FileStates::CONTINUE_SAVE, buffer, false, ifs.gcount());
    myOutpue.write(buffer, ifs.gcount());
}
//client->sendFilePacket(FileStates::END_SAVE, buffer, true, i);
myOutpue.close();
But when I do it like that, my copy of the image shows only half of the original image and the rest is black (the number of KB is the same as in the original file), so I don't know what the problem is.
Instead of using a "manual" copy, try using the ifstream::read method.
I have a C function that is decompressing a gzip file into another file:
bool gzip_uncompress(const std::string &compressed_file_path, std::string &uncompressed_file_path)
{
    char outbuffer[1024*16];
    gzFile infile = (gzFile)gzopen(compressed_file_path.c_str(), "rb");
    FILE *outfile = fopen(uncompressed_file_path.c_str(), "wb");

    gzrewind(infile);
    while(!gzeof(infile))
    {
        int len = gzread(infile, outbuffer, sizeof(outbuffer));
        fwrite(outbuffer, 1, len, outfile);
    }

    fclose(outfile);
    gzclose(infile);
    return true;
}
And this works well.
However, I would like to write the decompressed buffer chunks to a new char[] instead of an output file. But I don't know how to determine the length of the full decompressed file in order to declare a char[?] buffer to hold the full output.
Is it possible to modify the above function to decompress a file into memory? I assumed I'd decompress it into a char[], but maybe vector<char> is better? Does it matter? Either using C or C++ works for me.
This is straightforward in C++:
std::vector<char> gzip_uncompress(const std::string &compressed_file_path)
{
    char outbuffer[1024*16];
    gzFile infile = (gzFile)gzopen(compressed_file_path.c_str(), "rb");
    std::vector<char> outfile;

    gzrewind(infile);
    while(!gzeof(infile))
    {
        int len = gzread(infile, outbuffer, sizeof(outbuffer));
        outfile.insert(outfile.end(), outbuffer, outbuffer + len);
    }

    gzclose(infile);
    return outfile;
}
You can also dispense with outbuffer entirely, and instead resize the vector before each read and read directly into the bytes added by the resizing, which would avoid the copying.
The C version would need to use malloc and realloc.
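For illustration, here is a minimal sketch of the resize-and-read-directly variant mentioned above (C++, with the same lack of error checking as the code above):

std::vector<char> gzip_uncompress(const std::string &compressed_file_path)
{
    const unsigned chunk = 1024 * 16;
    gzFile infile = gzopen(compressed_file_path.c_str(), "rb");
    std::vector<char> out;

    while (!gzeof(infile))
    {
        std::size_t old_size = out.size();
        out.resize(old_size + chunk);               // make room for one more chunk
        int len = gzread(infile, out.data() + old_size, chunk);
        out.resize(old_size + (len > 0 ? len : 0)); // keep only what was actually read
    }

    gzclose(infile);
    return out;
}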
I'm trying to make an exe program that can read any file as binary and later use these bytes to recreate the exact same file.
So I figured out that I can use fopen(content, "rb") to read a file as binary,
and using fwrite I can write a block of data into a stream. But the problem is that when I fwrite, it doesn't seem to copy everything.
For example, the text file I opened contains 31231232131 in it. When I write it into another file, it only copies 3123 (the first 4 bytes).
I can see that it's a very simple thing that I'm missing, but I don't know what.
#include <stdio.h>
#include <iostream>
using namespace std;

typedef unsigned char BYTE;

long getFileSize(FILE *file)
{
    long lCurPos, lEndPos;
    lCurPos = ftell(file);
    fseek(file, 0, 2);
    lEndPos = ftell(file);
    fseek(file, lCurPos, 0);
    return lEndPos;
}

int main()
{
    //const char *filePath = "C:\\Documents and Settings\\Digital10\\MyDocuments\\Downloads\\123123.txt";
    const char *filePath = "C:\\Program Files\\NPKI\\yessign\\User\\008104920100809181000405,OU=HNB,OU=personal4IB,O=yessign,C=kr\\SignCert.der";
    BYTE *fileBuf;
    FILE *file = NULL;

    if ((file = fopen(filePath, "rb")) == NULL)
        cout << "Could not open specified file" << endl;
    else
        cout << "File opened successfully" << endl;

    long fileSize = getFileSize(file);
    fileBuf = new BYTE[fileSize];
    fread(fileBuf, fileSize, 1, file);

    FILE* fi = fopen("C:\\Documents and Settings\\Digital10\\My Documents\\Downloads\\gcc.txt","wb");
    fwrite(fileBuf,sizeof(fileBuf),1,fi);

    cin.get();
    delete[] fileBuf;
    fclose(file);
    fclose(fi);
    return 0;
}
fwrite(fileBuf, fileSize, 1, fi);
You did read fileSize bytes, but you are writing sizeof(...) bytes, which is the size of the pointer returned by new.
A C++ way to do it:

#include <fstream>

int main()
{
    std::ifstream in("Source.txt", std::ios::binary);        // binary mode, since the
    std::ofstream out("Destination.txt", std::ios::binary);  // input may not be plain text
    out << in.rdbuf();
}
You have swapped the arguments of fread and fwrite. Element size precedes the number of elements. Should be like so:
fread(fileBuf, 1, fileSize, file);
And
fwrite(fileBuf, 1, fileSize, fi);
Also address my comment from above:
Enclose the else clause in { and } and put the rest of the work inside it. Indentation does not determine blocks in C++. Otherwise your code will go on to crash when it fails to open the file.
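Applied to the code in the question, that braced else would look something like this (a sketch; the read/write work from the question goes inside the else, where file is known to be valid):

if ((file = fopen(filePath, "rb")) == NULL)
{
    cout << "Could not open specified file" << endl;
}
else
{
    cout << "File opened successfully" << endl;
    // ... getFileSize, fread, fwrite, etc. go here ...
}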
EDIT: and another problem - you have been writing sizeof(fileBuf) bytes, which is a constant. Instead you should write exactly the same number of bytes as you read. Given the rest of your code, you could simply replace sizeof(fileBuf) with fileSize, as I've done above.
fileBuf = new BYTE[fileSize];
fread(fileBuf, fileSize, 1, file);
FILE* fi = fopen("C:\\Documents and Settings\\[...]\\gcc.txt","wb");
fwrite(fileBuf,sizeof(fileBuf),1,fi);
fileBuf is a pointer to BYTE. You declared it yourself, look: BYTE *fileBuf. And so sizeof(fileBuf) is sizeof(BYTE *).
Perhaps you wanted:
fwrite(fileBuf, fileSize, 1, fi);
which closely mirrors the earlier fread call.
I strongly recommend that you capture the return values of I/O functions and check them.
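For example (a sketch applied to the calls in the question; the error handling is just a placeholder):

size_t got = fread(fileBuf, 1, fileSize, file);
if (got != (size_t)fileSize)
{
    // short read or read error: check ferror(file) / feof(file)
}

size_t put = fwrite(fileBuf, 1, got, fi);
if (put != got)
{
    // short write: the copy is incomplete
}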
Every time I read in a file with fstream I get 1 extra character at the end. How can I avoid this?
EDIT:
ifstream readfile(inputFile);
ofstream writefile(outputFile);
char c;
while(!readfile.eof()){
    readfile >> c;
    //c = shiftChar(c, RIGHT, shift);
    writefile << c;
}
readfile.close();
writefile.close();
This typically results from testing for the end of file incorrectly. You normally want to do something like:
while (infile>>variable) ...
or:
while (std::getline(infile, whatever)) ...
but NOT:
while (infile.good()) ...
or:
while (!infile.eof()) ...
The first two do a read, check whether it failed, and if so exit the loop. The latter two check the stream state, attempt a read, process whatever is now in the variable, and only exit the loop on the next iteration, after a read has already failed. On that last pass, the variable still holds whatever the previous successful read put there, so loops like the second two will typically appear to process the last item in the file twice.
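Applied to the loop in the question, that first pattern would look something like this (std::noskipws so whitespace bytes are copied too; the shiftChar call is left commented out, as in the original):

char c;
while (readfile >> std::noskipws >> c) {
    //c = shiftChar(c, RIGHT, shift);
    writefile << c;
}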
To copy one file to another easily, consider using something like this:
// open the files:
ifstream readfile(inputFile);
ofstream writefile(outputFile);
// do the copy:
writefile << readfile.rdbuf();
This works well for small files, but can slow down substantially for a larger file. In such a case, you typically want to use a loop, reading from one file and writing to the other. This has possibilities for subtle errors as well. One way that's been tested and generally works reasonably well looks like this:
std::ifstream input(in_filename, std::ios::binary);
std::ofstream output(out_filename, std::ios::binary);

const size_t buffer_size = 512 * 1024;
char buffer[buffer_size];

std::size_t read_size;
while (input.read(buffer, buffer_size), (read_size = input.gcount()) > 0)
    output.write(buffer, input.gcount());
Based on the code, it appears what you're trying to do is copy the contents of one file to another?
If so, I'd try something like this:
ifstream fin(inputFile, ios::binary);
fin.seekg(0, ios::end);
long fileSize = fin.tellg();
fin.seekg(0, ios::beg);

char *pBuff = new char[fileSize];
fin.read(pBuff, fileSize);
fin.close();

ofstream fout(outputFile, ios::binary);
fout.write(pBuff, fileSize);
fout.close();

delete [] pBuff;