How can writing memory to a filebuffer mutate it? - c++

For a while now, I have been experiencing an extremely odd problem when trying to write memory to a filebuffer in C++. The problem only occurs on MinGW. When I compile under gcc/linux, everything is fine.
Debugging Session displaying the problem
So basically, I'm writing data from a memory buffer to a filebuffer, and the binary representation in the file ends up being different from the memory I wrote. No, the file is not being modified at a later point; I made sure of this by using the debugger to exit the program right after closing the file. I have no idea how something like this is even possible. I even ran valgrind to check for memory allocation problems, but it found nothing.
I'll paste some of the related code.
/// a struct holding information about a data file
class ResourceFile {
public:
    string name;
    uint32 size;
    char* data;

    ResourceFile(string name, uint32 size);
};

ResourceFile::ResourceFile(string name, uint32 size)
    : name(name), size(size)
{
    // will be free'd in ResourceBuilder's destruction
    data = (char*) malloc(size * sizeof(char));
}
/// Build a data resource from a set of files
class ResourceBuilder {
public:
    ofstream out;               ///< File to put the resource into
    vector<ResourceFile> input; ///< List of input files

    /// Add a file from disk to the resource
    void add_file(string filename);
    /// Create a file that the resource will be written to
    void create_file(string filename);

    ~ResourceBuilder();
};

void ResourceBuilder::create_file(string filename) {
    // open the specified file for output
    out.open(filename.c_str());

    uint16 number_files = htons(input.size());
    out.write((char*) &number_files, sizeof(uint16));

    foreach(vector<ResourceFile>, input, i) {
        ResourceFile& df = *i;
        uint16 name_size = i->name.size();
        uint16 name_size_network = htons(name_size);
        out.write((char*) &name_size_network, sizeof(uint16));
        out.write(i->name.c_str(), name_size);
        uint32 size_network = htonl(i->size);
        out.write((char*) &size_network, sizeof(i->size));
        out.write(i->data, i->size);
    }

    out.close();
    /// \todo write the CRC
}
The following is how the memory is allocated in the first place. This is a possible source of error, because I copy-pasted it from somewhere else without bothering to understand it in detail, but I honestly don't see how the way the memory is allocated could cause the filebuffer output to differ from the memory I'm writing.
void ResourceBuilder::add_file(string filename) {
    // loads a file and copies its content into memory;
    // this is done by the ResourceFile class and there is a
    // small problem with this, namely that the memory is
    // allocated in the ResourceFile directly
    ifstream file;
    file.open(filename.c_str());

    filebuf* pbuf = file.rdbuf();
    int size = pbuf->pubseekoff(0, ios::end, ios::in);
    pbuf->pubseekpos(0, ios::in);

    ResourceFile df(filename, size);
    pbuf->sgetn(df.data, size);
    file.close();

    input.push_back(df);
}
I'm really out of ideas. It's also not a bug pertaining to my compiler setup, as other people compiling the code under MinGW get the same error. The only explanation I can think of at this point is a bug with MinGW's filebuffer library itself, but I honestly have no idea.

You need to open the file in binary mode. When you open it in text mode on Windows, line feeds (0x0A) will get converted to CR/LF pairs (0x0D, 0x0A). On Linux you don't see this because Linux uses a single LF (0x0A) as the line terminator in text files, so no conversion is done.
Pass the ios::binary flag to ofstream::open:
out.open(filename.c_str(), ios::out | ios::binary);
You should also use this flag when reading binary files, so that the opposite conversion isn't performed:
ifstream file;
file.open(filename.c_str(), ios::in | ios::binary);
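For illustration, here is a minimal sketch of the effect (not from the original answer; the file names are arbitrary): writing a buffer that contains a 0x0A byte in text mode produces a larger file than the same write in binary mode on Windows, while both files are identical on Linux.
#include <fstream>

int main() {
    const char data[] = { 0x41, 0x0A, 0x42 };   // 'A', LF, 'B'

    std::ofstream text("text_mode.bin");        // default: text mode
    text.write(data, sizeof(data));
    text.close();

    std::ofstream bin("binary_mode.bin", std::ios::out | std::ios::binary);
    bin.write(data, sizeof(data));
    bin.close();

    // On Windows, text_mode.bin ends up 4 bytes long (LF expanded to CR/LF),
    // while binary_mode.bin stays at 3 bytes, matching the memory buffer.
}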

The problem is that you open your file in text mode! In that case, 0x0A (line feed) is converted to 0x0D 0x0A (carriage return, line feed). This is why you see a difference between the file and memory.
Use out.open(filename.c_str(), ios::binary | ios::out);

Related

Is it possible that QFile::size() and QFile::readAll().size() differ for certain special files?

I'm seeing a weird crash due to a read access violation. Here is the minimal code:
struct MyFile : QFile
{
    ...
    string read()
    {
        QByteArray content;
        if (<something>)
            content = QFile::readAll();
        ...
        string buffer(QFile::size(), 0);
        if (content.isEmpty())
        {
            QFile::seek(offset);
            QFile::read(&buffer[0], buffer.size());
        }
        else
            ::memcpy(&buffer[0], content.data(), buffer.size());
            // ^^^^ 40034 ^^^^ 42690
        return buffer;
    }
};
Here it's trying to read a .png file. Somehow QFile::size() returns 42690, while the QByteArray returned by QFile::readAll() and stored in content has a size of 40034.
Unfortunately I don't have the file handy to verify its actual size. When I write test code against ordinary text or png files, everything gives the expected results.
How is that possible?
A debug frame was included for reference.
Your code does not take the current seek position of the file into account. If some read operations have already happened, QFile::readAll() will return only part of the file (from the current seek position to the end of the file), while the buffer is still sized from QFile::size(), so the memcpy reads past the end of content and crashes.
The other possibility is the use of QIODeviceBase::Text as mentioned in comments, but you're not using it, so it's not the case.
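As a sketch of one possible fix (keeping the member-function context and names from the question, so this is an assumption about the surrounding code, not the original author's version), you can rewind before readAll() and size the buffer from what was actually read, so the memcpy can never overrun content:
string read()
{
    QFile::seek(0);                              // rewind so readAll() sees the whole file
    const QByteArray content = QFile::readAll();

    // size the buffer from the data actually read, not from QFile::size()
    string buffer(content.size(), 0);
    if (!content.isEmpty())
        ::memcpy(&buffer[0], content.constData(), buffer.size());
    return buffer;
}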

C++ file handle

I am trying to implement a file handle class similar to the one in Bjarne Stroustrup's FAQ page. (Scroll to "Why doesn't C++ provide a 'finally' construct".) Unlike his example, however, I want to use C++ file streams instead of a FILE*.
Right now, I am considering creating a FileHandleBase class, or something similarly named, and two derived classes—one for input files and one for output files. Below is the implementation I wrote as a proof-of-concept; keep in mind that it is very simple and unfinished.
class FileHandle {
public:
    FileHandle(const char* fn, ios_base::openmode mode = ios_base::in | ios_base::out) {
        file.open(fn, mode);
        // Check to make sure the file is open
    }
    FileHandle(const string& fn, ios_base::openmode mode = ios_base::in | ios_base::out) {
        file.open(fn, mode);
        // Check to make sure the file is open
    }
    ~FileHandle() {
        file.close();
    }
private:
    fstream file;
};
I would like to know if this is a viable way of making a file handle, that is, whether my inheritance idea is good. I also want to know the best way to deal with the ios_base::openmode parameter because the C++ reference page for std::ifstream says this:
Note that even though ifstream is an input stream, its internal filebuf object may be set to also support output operations.
In what cases would an ifstream be used for output operations and, similarly, when would an ofstream be used for input operations? And should I restrict the options for the ios_base::openmode parameter of my file handle class(es), so that my input file handle only handles input operations and the output version only handles output operations?
In what cases would an ifstream be used for output operations, and, similarly, when would an ofstream be used for input operations
You would open an output file stream with an std::ios_base::in openmode (and vice-versa for an input file stream) if you still want to perform the associated operations through the internal std::filebuf object, which is accessible via rdbuf(). Note that std::ofstream and std::ifstream will still only perform output and input respectively through their own interfaces, even if they are opened with the opposite openmode.
#include <fstream>
#include <iostream>

// SIZE was left undefined in the original snippet; any small bound works here.
const std::streamsize SIZE = 16;

int main() {
    std::ofstream stream("test.txt");
    stream << "Hello" << std::flush;
    stream.close();

    // Re-open the output stream with an "in" openmode; ofstream::open adds
    // ios_base::out itself, so the filebuf ends up opened for both directions.
    stream.open("test.txt", std::ios_base::in);

    char buffer[SIZE] = {};
    stream.rdbuf()->sgetn(buffer, SIZE);
    std::cout << buffer << std::endl;
}
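If you do decide to restrict the modes, as the question suggests, a minimal sketch might look like the following (the class names InputFileHandle and OutputFileHandle are placeholders, not from the original post):
#include <fstream>
#include <stdexcept>
#include <string>

// The openmode is fixed by the class; the caller can only ask for binary mode.
class InputFileHandle {
public:
    explicit InputFileHandle(const std::string& fn, bool binary = false)
        : file(fn, binary ? std::ios_base::in | std::ios_base::binary
                          : std::ios_base::in)
    {
        if (!file.is_open())
            throw std::runtime_error("could not open " + fn);
    }
    std::ifstream& stream() { return file; }
private:
    std::ifstream file;
};

class OutputFileHandle {
public:
    explicit OutputFileHandle(const std::string& fn, bool binary = false)
        : file(fn, binary ? std::ios_base::out | std::ios_base::binary
                          : std::ios_base::out)
    {
        if (!file.is_open())
            throw std::runtime_error("could not open " + fn);
    }
    std::ofstream& stream() { return file; }
private:
    std::ofstream file;
};
The destructor of the contained stream closes the file, which gives the same RAII behaviour as the FileHandle class above.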

C++: reading a saved object from file.dat

How should I proceed so that the read function reads file.dat whenever the program starts?
I am writing an object to the file, and I need to read it back when the program starts.
Problem: whenever I start the program and try to read the data that was saved earlier, I get a segmentation fault.
void DataManip::DataManipWrite(DateAdress *writer) {
    ofstream ObjectWriter;
    ObjectWriter.open("dbaddress.dat", ios::binary);
    ObjectWriter.write((char *)&writer, sizeof(writer));
    ObjectWriter.close();
}

void DataManip::DataManipRead(DateAdress *reader) {
    ifstream ObjectReader;
    ObjectReader.open("dbaddress.dat", ios::binary);
    ObjectReader.read((char *)&reader, sizeof(reader));
    ObjectReader.close();
}
First, your sizeof operators return the size of the pointer instead of the class. Second, the class itself has to be a POD if you want to simply dump the memory to a file and read it later. Third, you're writing the value of the pointer itself, not the class data.
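For example, a minimal sketch of what the corrected functions could look like, assuming DateAdress is a trivially copyable struct (no std::string or pointer members); otherwise you need proper field-by-field serialization:
void DataManip::DataManipWrite(DateAdress *writer) {
    ofstream out("dbaddress.dat", ios::binary);
    // write the object the pointer refers to, not the pointer itself
    out.write(reinterpret_cast<const char*>(writer), sizeof(*writer));
}

void DataManip::DataManipRead(DateAdress *reader) {
    ifstream in("dbaddress.dat", ios::binary);
    // read back into the caller's object
    in.read(reinterpret_cast<char*>(reader), sizeof(*reader));
}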

Segmentation fault when calling fread() c++

I don't understand the mistake I am making.
I tried a lot, but I am unable to read my FILE.
Basically, I write a structure into a file named 0.txt / 1.txt / 2.txt ... based on the number of accounts.
I really searched for hours to fix my problem, but I don't understand how to fix it or why I get the error.
I have no problem compiling my code (with Dev-C++), but when I press the Load Accounts button I get a "Segmentation Fault" error (using Windows 7).
I noticed that the problem is at the fread() line in the function ladeAccount().
The name of my structure is "iAccount".
The variable infoma is of type iAccount, and the "number of accounts existing", passed as int anzahl to newAccount(), decides the file path.
iAccount looks like this:
struct iAccount
{
    string ID;
    string password;
    int level;
};
This is how I write my STRUCT into the FILE:
void Account::newAccount(int anzahl, string username, string pw, int lvl)
{
    iAccount neu;
    neu.ID = username;
    neu.password = pw;
    neu.level = lvl;

    ss.str("");
    ss << anzahl;
    s = ss.str();
    s = "Accounts/" + s + ".txt";

    f1 = fopen(s.c_str(), "w");
    fseek(f1, 0, SEEK_SET);
    fwrite(&infoma, sizeof(iAccount), 1, f1);
    fclose(f1);
}
This is how I read the file (the error appears when I call fread()):
void Account::ladeAccount(int nummer)
{
    stringstream sa;
    iAccount account_geladen;

    sa.str("");
    sa << nummer;
    s = sa.str();
    s = "Accounts/" + s + ".txt";

    f2 = fopen(s.c_str(), "r");
    fseek(f2, 0, SEEK_SET);
    fread(&infoma, sizeof(infoma), 1, f2);
    fclose(f2);
}
Thank you for your help. I have no clue where my problem is and, as I said, I have been searching for hours.
EDIT:
The file does get opened, I checked it (f2 is true!).
EDIT 2:
errno is 0!
See here:
ostringstream Str;
Str << errno;
infoma.ID = Str.str();
I just did this to see the value of errno in my wx text label.
Reason
You are most probably calling fread on a NULL file handle. So you have two problems here:
In your code (you don't check whether fopen succeeds or returns a NULL handle)
Your file can't be opened for some reason (this you should investigate...)
Explanation
fopen (see documentation) can return a NULL handle for different reasons. If you don't check the validity of the handle before calling fread, you will get a segmentation fault.
Tips
As you can read in the official documentation linked above, on most library implementations the errno variable gives you the system-specific error code on failure. This can help you debug why the file fails to open.
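For example, a sketch of that check (open_checked is a hypothetical helper name, not part of any library):
#include <cerrno>
#include <cstdio>
#include <cstring>

// Open a file and print the system error message if it fails; returns NULL on failure.
std::FILE* open_checked(const char* path, const char* mode)
{
    std::FILE* f = std::fopen(path, mode);
    if (!f)
        std::fprintf(stderr, "cannot open %s: %s\n", path, std::strerror(errno));
    return f;
}

// Possible usage inside ladeAccount():
//   f2 = open_checked(s.c_str(), "rb");
//   if (!f2) return;                        // never call fread on a NULL handle
//   fread(&infoma, sizeof(infoma), 1, f2);
//   fclose(f2);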
Side Issues
Once you solve this bug in your code, you will have other issues. As people (notably @Christophe) remarked in other answers, there is a structural problem in your code: you try to serialize/deserialize non-POD objects (your strings) to your file. Since std::string is a complex object, you can't serialize it by dumping its bytes directly.
Using an array of characters instead will work correctly, as simple types can be handled the way you coded.
For this reason, you can use the std::string c_str() method to obtain a null-terminated array of chars from your string and store that in the file.
The opposite operation is even more straightforward, as you can initialize a std::string simply by passing it the deserialized array of chars:
std::string str(the_array);
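Putting those two steps together, here is a possible sketch of a fixed-size, fwrite-friendly mirror of the iAccount struct from the question (the record type iAccountRecord, the helper names, and the field sizes are made up for illustration):
#include <cstring>
#include <string>

// Fixed-size mirror of iAccount that contains no std::string members,
// so its bytes can be written and read directly with fwrite/fread.
struct iAccountRecord
{
    char ID[64];
    char password[64];
    int  level;
};

iAccountRecord to_record(const iAccount& a)
{
    iAccountRecord r = {};
    std::strncpy(r.ID, a.ID.c_str(), sizeof(r.ID) - 1);
    std::strncpy(r.password, a.password.c_str(), sizeof(r.password) - 1);
    r.level = a.level;
    return r;
}

iAccount from_record(const iAccountRecord& r)
{
    iAccount a;
    a.ID = r.ID;            // a null-terminated char array initializes the string
    a.password = r.password;
    a.level = r.level;
    return a;
}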
You have a problem because you use fread() to load binary data. But this works only with plain old data (POD) objects.
It tends to give disastrous results with less trivial objects, especially when their internals manage dynamic memory allocation and/or pointers, as is the case here with strings.
By the way:
If you read/write binary data, you should really use "rb"/"wb" as the mode for fopen(). If you don't, you won't necessarily get a segmentation fault, but your data might be incorrect on some systems.
Edit:
Sorry, I didn't read carefully enough: if it happens right at fread(), the reason provided by Alex will certainly help. However, I'll leave this answer, because as soon as you've solved your fopen() issue you might get segmentation errors when you try to work with the object you've read. If you're not convinced, look at sizeof(iAccount) and compare it to the size of your string contents.
EDIT
if (f2) is true, so I was wrong and the file got opened successfully, right?
I found out that the file is not opened / fopen cannot handle the path, for example 0.txt.
Also, I tried entering the path directly instead of building it (without stringstream and so on). I still get the segmentation fault. I checked everything; the file exists in the folder Accounts. I have another file called "Accounts.txt" in the same folder, and there I have no problem reading the number of existing accounts (also using a struct). There I don't even check whether fopen succeeded, but it works anyway; I will add the code for the file-open check later.
The code for the reading/writing into Accounts/Accounts.txt is:
struct init {
    int anzahl_1;
};

init anzahl;
FILE* f;
static string ss = "Accounts/Accounts.txt";

int account_anzahl1()
{
    f = fopen(ss.c_str(), "r");
    fread(&anzahl, sizeof(init), 1, f);
    fseek(f, 0, SEEK_END);
    fclose(f);
    return anzahl.anzahl_1;
}

void account_anzahl_plus()
{
    anzahl.anzahl_1 = anzahl.anzahl_1 + 1;
    f = fopen(ss.c_str(), "w");
    fwrite(&anzahl, sizeof(init), 1, f);
    fclose(f);
}
There I have no problem!

How can the current file be overwritten?

For the following code:
fstream file("file.txt", ios::in);
//some code
//"file" changes here
file.close();
file.clear();
file.open("file.txt", ios::out | ios::trunc);
How can the last three lines be changed so that the current file is not closed, but "re-opened" with everything blanked out?
If I am understanding the question correctly, you'd like to clear all contents of the file without closing it (i.e. set the file size to 0 by moving the end-of-file position). From what I can find, the solution you have presented is the most appealing.
Your other option would be to use an OS-specific function to set the end of file, for example SetEndOfFile() on Windows or truncate() on POSIX.
If you're only looking to begin writing at the beginning of the file, Simon's solution works. Using it without setting the end of file may leave garbage data past the last position you wrote, though.
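For example, a minimal POSIX-only sketch using truncate() (it assumes the filename is still known; on Windows you would go through SetEndOfFile() instead):
#include <fstream>
#include <unistd.h>   // POSIX truncate()

int main()
{
    std::fstream file("file.txt", std::ios::in | std::ios::out);
    // ... read and write the file ...

    file.flush();                  // push buffered output to the OS first
    ::truncate("file.txt", 0);     // cut the file down to zero bytes
    file.clear();                  // reset any eof/fail flags from earlier reads
    file.seekp(0);                 // rewind the put pointer for new writes
}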
You can rewind the file: move the put pointer back to the beginning of the file, so the next time you write something it will overwrite the content of the file.
For this you can use seekp like this:
fstream file("file.txt", ios::in | ios::out); // Note that you now need
                                              // to open the file for writing
//some code
//"something" changes here
file.seekp(0); // file is now rewound
Note that this doesn't erase any content; existing data is only replaced as you overwrite it, so be careful.
I'm guessing you're trying to avoid passing around the "file.txt" parameter and are trying to implement something like
void rewrite( std::ofstream & f )
{
    f.close();
    f.clear();
    f.open(...); // Reopen the file, but we don't know its filename!
}
However, ofstream doesn't expose the filename of the underlying stream, and doesn't provide a way to clear the existing data, so you're kind of out of luck. (It does provide seekp, which will let you position the write cursor back at the beginning of the file, but that won't truncate existing content...)
I'd either just pass the filename to the functions that need it
void rewrite( std::ofstream & f, const std::string & filename )
{
    f.close();
    f.clear();
    f.open( filename.c_str(), ios::out );
}
Or package the filestream and filename into a class.
class ReopenableStream
{
public:
    std::string filename;
    std::ofstream f;

    void reopen()
    {
        f.close();
        f.clear();
        f.open( filename.c_str(), ios::out );
    }
    ...
};
If you're feeling overzealous, you could make ReopenableStream actually behave like a stream, so that you could write reopenable_stream << foo; rather than reopenable_stream.f << foo, but IMO that seems like overkill.