Trouble saving data written to a file when I kill the app - C++

My program continually writes data to a file, but if I close it before the program finishes on its own, nothing ends up written to the file. I would really like to be able to close it without letting it run to completion, so how can I make it save to the file continuously?
#include <fstream>
#include <iostream>

int main()
{
    std::ofstream outfile;
    outfile.open("text.txt", std::ios::app);

    bool done = false;
    int info;
    while (!done) {
        std::cin >> info;
        outfile << info;
        std::cout << info << "Choose different info";
        if (info == 100) {
            done = true;
        }
    }
    outfile.close();
}
This is obviously just an example, but it is very similar to my actual code.
Edit: When I say closing I mean killing it (hitting the red X at the top right of the console).

You likely need to flush your std::ofstream when you have done "enough" work.
"enough" work here is going to depend on your application.
Perhaps
...
outfile << info;
outfile.flush();
...
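For instance, if losing the last few records is acceptable, you could flush only every so often instead of after every write. A minimal sketch (the writeRecord helper and the flushEvery value are made up purely for illustration):
#include <fstream>

// Sketch: flush every N records instead of after every single write.
void writeRecord(std::ofstream& outfile, int info)
{
    static int sinceFlush = 0;
    const int flushEvery = 10;   // tune to how much data you can afford to lose

    outfile << info << '\n';
    if (++sinceFlush >= flushEvery) {
        outfile.flush();         // hand the buffered data over to the OS
        sinceFlush = 0;
    }
}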

Your writes don't go straight to the file. To save time, they are collected in a buffer and only written out later, either when the buffer is full or at a moment the library/operating system considers "good" for writing.
When you close the stream, whatever is left in the buffer is written to the file. You can force the data out yourself with the flush method: just flush the file after every write and you will be OK.
flush: http://www.cplusplus.com/reference/ostream/ostream/flush/
outfile << n;
outfile.flush();
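If you would rather not sprinkle flush() calls everywhere, another option (a small sketch, reusing the text.txt file from the question) is to set the stream's unitbuf flag so it flushes after every insertion:
#include <fstream>
#include <ios>   // std::unitbuf

int main()
{
    std::ofstream outfile("text.txt", std::ios::app);
    outfile << std::unitbuf;   // flush automatically after every insertion
    outfile << 42 << '\n';     // no explicit flush() call needed any more
}
Bear in mind this trades throughput for safety: every insertion now triggers a flush, just without having to remember it at every call site.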

Related

C++ ofstream not creating and writing to file on subsequent runs of the program

#include <fstream>
#include <iostream>

#define LOGFATAL(msg) log(0, msg)

std::ofstream *logst = NULL;

void log(int sev, const char *msg)
{
    if (logst == NULL) {
        logst = new std::ofstream();
        logst->open("filea.txt", std::ios::out | std::ios::app);
        *logst << "Logger started." << std::endl;
    }
    std::ofstream &_log = *logst;
    _log << msg << std::endl;
    _log.flush();
}

int main()
{
    LOGFATAL("Log msg1.");
    LOGFATAL("Log msg2.");
    LOGFATAL("Log msg3.");
    logst->close();
    delete logst;
}
I am opening a file for logging the very first time I log, and I keep it open until the end of the program.
Since I use the flush() operation after every log invocation, I expect to see my messages appear close to immediately. BUT THIS DOES NOT HAPPEN. WHY?
Currently, I kill my program using Ctrl+C before it finishes (don't ask me why). On subsequent runs of the program I don't even see the log file getting created, and even if it already exists I don't see any new logs being added. Since I never let close() execute, does the file descriptor get leaked, causing open() in future runs of the program to fail?
I am running on RHEL 7.2, and I assume most modern OSes handle this even if close() is accidentally never called. Given that Ctrl+C is currently the only way to stop my program, what can I do to make it log correctly every time it is started?
Is there a way from the system shell to check if there are any leaked file descriptors to my log file?
Inserting std::endl will itself flush the data. It works for me even if _log.flush(); is commented out.
"I expect to see my messages printed close to immediately": if you are viewing the file in vim, you need to close and reopen it to see new data. Use tail -F filea.txt instead to see the output immediately as it is written to the file.
Take the following steps to validate some assumptions:
1.) Open your file as an std::fstream instead. After writing the first log message, reset the read position to the beginning and read the contents of the file back (a sketch follows after these steps). You should get your log message from the open file. If not, you probably have memory corruption.
2.) After writing the first log message, open the file with an std::ifstream and read its contents. You should get your log message from the std::ifstream. If not, another process is probably interfering with your file.
Note: The OS takes some time to make file changes visible to other processes. Under high load I have observed delays of several hundred milliseconds before, e.g., the message shows up in tail -f.
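A minimal, self-contained sketch of step 1.), using a throwaway file name rather than your filea.txt:
#include <fstream>
#include <iostream>
#include <string>

int main()
{
    // in | out | app behaves like fopen's "a+": it creates the file if needed.
    std::fstream f("check.txt", std::ios::in | std::ios::out | std::ios::app);
    f << "Logger started." << std::endl;   // the first "log" message

    f.seekg(0);                            // move the read position back to the start
    std::string line;
    std::getline(f, line);
    std::cout << "read back from the open file: " << line << '\n';
}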

Reopening a closed file stream

Consider the following code,
auto fin = ifstream("address", ios::binary);
if (fin.is_open())
    fin.close();

for (auto i = 0; i < N; ++i) {
    fin.open();   // note: deliberately no path or mode here
    // ....
    // read (next) b bytes...
    // ....
    fin.close();
    // some delay
}
The code above can't be written in the C++ I know, but I'd like to know whether something like it is possible.
Here are my requirements:
When reopening the file, there should be no need to pass the parameters (path and mode) again.
When reopening the stream, it should continue from the position in the stream it was at when it was closed.
Clarification
The files I work with are big, and at some point other threads from third-party libraries may decide to (re)move them. An open stream would prevent such actions.
Continuously reading a big file will slow down the system.
The need
Indeed, a file can't be deleted by another process as long as a stream keeps it open.
I suppose you have already asked yourself these questions, but for the record I have to suggest that you think about them:
Can't the file be read into (virtual) memory and discarded when no longer needed?
Can't the file processing be pipelined asynchronously, reading it in one go and processing it without unnecessary delays?
What do you do if the file can no longer be opened because it was deleted by the other process? What do you do if the saved location can't be found because the file was modified (e.g. shortened)?
Even with a perfect solution to your issue, what would happen if the other process tried to delete the file while it is open (only for a short time, but nevertheless open and blocking the deletion)?
The solution
Unfortunately, you can't achieve the desired behavior with standard streams. You could emulate it by keeping track of the filename and of the position (and more generally of the state):
auto mypos = ifs.tellg();   // save the position
// Should the flags be saved as well? And what about gcount?
ifs.close();
...
if (!ifs.is_open()) {
    ifs.open(myfilename, myflags);   // open again!
    if (!ifs) {
        // ouch! file disappeared ==> process error
    }
    ifs.seekg(mypos);                // restore the position
    if (!ifs) {
        // ouch! position no longer reachable ==> process error
    }
}
Of course, you wouldn't want to repeat this code over and over. And it would not be nice to suddenly have a lot of global variables keeping track of the stream's state. But you could easily encapsulate it in a wrapper class that takes care of saving and restoring the stream's state using existing standard operations.
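For example, such a wrapper could look roughly like this (only a sketch; the class name and interface are made up, C++11 is assumed, and error handling is reduced to a bool):
#include <fstream>
#include <string>
#include <utility>

// Remembers the file name, open mode and read position, so the underlying
// file can be released between reads and picked up again later.
class ReopenableIfstream {
public:
    ReopenableIfstream(std::string path, std::ios::openmode mode = std::ios::binary)
        : path_(std::move(path)), mode_(mode), pos_(0) {}

    // Reopen the file and seek back to where we left off.
    bool resume() {
        ifs_.open(path_, mode_);
        if (!ifs_) return false;           // file disappeared ==> process error
        ifs_.seekg(pos_);
        return static_cast<bool>(ifs_);    // position unreachable ==> process error
    }

    // Remember the current position, then release the file.
    void suspend() {
        pos_ = ifs_.tellg();
        ifs_.close();
    }

    std::ifstream& stream() { return ifs_; }

private:
    std::string        path_;
    std::ios::openmode mode_;
    std::streampos     pos_;
    std::ifstream      ifs_;
};
Used with the loop from the question, each iteration would call resume(), read its next b bytes from stream(), and then suspend() during the delay, leaving the file closed while other processes may want to touch it.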

Qt5 QFile::close() very slow for writing

I am using QFile as a file reader and a file writer to copy files to USB from inside my application. I have been trying to figure out why my file copies to USB (with progress bar) take so long. I finally found out that when I close the QFile object used for writing, the close() operation can take well over the time taken by the actual write operations. These are very large files; I read/write blocks of 16384 bytes and then send a signal to the GUI to advance the progress bar viewed by the user.
I ended up adding a call to flush() after each write, since I assumed the delay was the result of the output stream not yet having been written to disk. That didn't make a difference: the close of the outgoing QFile object still takes much longer than what seems to be the write time. (I took timings before and after the copy and around each QFile::close() call; the timing code has been removed for ease of reading. I also stepped through in the debugger and saw it happening.) Of course, it doesn't help to simply not call close(), since the destruction of the QFile object calls it anyway.
My code is as follows (minus error checking, destination space checking, etc):
void FileCopy::run()
{
    QByteArray bytes;
    int totalBytesWritten = 0;
    int inListSize = inList.size();

    for (int i = 0; !canceled && i < inListSize; i++)
    {
        QString inPath = inList.at(i).inPath;
        QString outPath = inList.at(i).outPath;
        QFile inFile(inPath);
        QFile outFile(outPath);
        int filesize = inFile.size();
        int bytesWritten = 0;

        if (!inFile.open(QIODevice::ReadOnly))
        {
            return;
        }
        if (!outFile.open(QIODevice::WriteOnly))
        {
            inFile.close();
            return;
        }

        // copy the FCS file with progress
        while (!canceled && bytesWritten < filesize)
        {
            bytes = inFile.read(MAXBYTES);
            qint64 outsize = outFile.write(bytes);
            outFile.flush();
            if (outsize != bytes.size())
            {
                break;
            }
            bytesWritten += outsize;
            totalBytesWritten += outsize;
            Q_EMIT signalBytesCopied(totalBytesWritten, i + 1, inListSize);
            QThread::usleep(100); // allow time for detecting a cancel
        }

        inFile.close();
        outFile.close();
    }
    // Other error checking done here
}
Can anyone see a way to get past this? I would actually prefer the progress bar to move more slowly and accurately reflect the state of the copy, rather than read 100% in less than half the time it takes the copy and close to actually complete.
I have also tried using QSaveFile instead of QFile for the output, but QSaveFile::commit() has exactly the same problem, taking more time to commit than to finish the actual copy loop. I assume that this is because, underneath, it uses the same functionality as QFile, derived from QIODevice.
I have considered moving to using standard streams, but would like to keep some consistency in how file handling is done in this application. It is a possibility though, if QFile::close() is going to take this long to close. Or is it possible that the standard stream would have the same issue?
I am working on a Win7 32-bit box with VS2010 using Qt5.1.1 and the Qt 1.2.2 VS add-in. Thanks for any suggestions.
While you are writing, the OS probably just caches the writes in memory (fast). But when you close the file, it has to flush all of that data to disk (slow, especially if it hasn't actually written any of it yet). So closing the file has to wait for the OS to actually put the data onto the disk (USB), and at that point that may be all of the data at once.
The reason operating systems do this is of course to speed up writes: often they can get away with flushing the data to disk in the background when nothing else is going on, so you don't really notice the actual cost, since it is amortized over otherwise idle time. But if you just write everything and then close at once, you are going to notice.
Note: the alternative would be the write calls themselves being slower; you would still end up spending the same total time.
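If the real goal is to make the progress bar track what has actually reached the device, one option (an untested sketch; it assumes QFile::handle() returns a usable descriptor on your Qt/Windows combination) is to push the OS cache out to the device after each chunk, in addition to QFile::flush():
#include <QFile>

#ifdef Q_OS_WIN
#  include <io.h>        // _get_osfhandle
#  include <windows.h>   // FlushFileBuffers
#else
#  include <unistd.h>    // fsync
#endif

// Push both Qt's buffer and the OS write cache out to the device.
// Each chunk gets slower, but close() should then return almost immediately.
static bool syncToDevice(QFile &file)
{
    if (!file.flush())
        return false;
#ifdef Q_OS_WIN
    HANDLE h = reinterpret_cast<HANDLE>(_get_osfhandle(file.handle()));
    return h != INVALID_HANDLE_VALUE && FlushFileBuffers(h) != 0;
#else
    return fsync(file.handle()) == 0;
#endif
}
Calling syncToDevice(outFile) right after the write in the copy loop moves the cost out of close() and into the loop, which is exactly the slower-but-honest progress bar you said you would prefer.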

Delay in ofstream::open, possibly due to mixing with _iobuf?

I have a C++ program that creates an output file "A" with ofstream. This file is then read by some legacy C code that opens the file with _iobuf. The legacy code then creates its own output file "B" using _iobuf, and this file is then read by the C++ program using ifstream. This sequence is iterated many times, with the same file names for A and B in each iteration.
Occasionally, the C++ program cannot open the output file A for writing, and I must try several times before it succeeds. This occurs nondeterministically, and maybe once in a thousand iterations. Note that the C program never has to wait to open its input or output file, nor does the C++ program ever have to wait to open its input file. This informal observation is based on hundreds of thousands of iterations.
I'm wondering if this has something to do with mixing ofstream and _iobuf in the same program? Both the C++ code and the C code are linked into the same program. And the legacy C code is technically C++ code, but was written in a very C-like style. Is there anything I can do to eliminate this waiting to open the ofstream file? I do not want to change the legacy code if I can possibly avoid it.
Pseudo code (not compiled):
void someObject::someMethod()
{
    for (int count = 0; count < someLimit; ++count)
    {
        newerObject::firstMethod();
        olderObject::secondMethod();
        newerObject::thirdMethod();
    }
}

void newerObject::firstMethod()
{
    // do some processing first
    // then write the results of the processing to a file
    ofstream A;
    A.open("A", ofstream::out); // this sometimes must be tried multiple times
    // write data to file A
    A.close();
}

void olderObject::secondMethod()
{
    FILE* f;
    f = fopen("A", "rt"); // this always works the first time
    // read data from file A
    fclose(f);
    // do some processing
    f = fopen("B", "w");
    // write data to file B
    fclose(f);
}

void newerObject::thirdMethod()
{
    ifstream B;
    B.open("B"); // this always works the first time
    // read data from file B
    B.close();
    // do some processing
}
Currently, as a workaround, I put the ofstream::open in a do-while loop. I would love to get rid of this awkwardness. Thanks in advance for any advice you can give.
First off, the problem is almost certainly not the use of different methods to access the files: under the hood, the C and C++ I/O functions use the same system I/O facilities. You seem to be using Windows (on other systems a file can typically be opened multiple times simultaneously), and I don't know much about that system, but I would suspect that the file system hasn't yet been updated to reflect that the file is closed when you try to open it. This may have to do with the "t" open flag: I don't know what that is about.
On UNIXes you can force I/O operations to wait until the actual change has reached the disk. Something like that could help avoid the problem, but it has the significant cost of making operations hideously slow. On UNIXes another approach would be to blow away the file system entry the moment the file has been opened successfully (after all, at that point its name isn't used any more):
if (FILE* fp = fopen("file", "r")) {
    remove("file");
    // do processing
}
However, if I recall correctly, on Windows you can neither remove the open file nor rename it. Personally, in solving the problem I would proceed as follows:
Determine under which situations the file can't be opened, e.g. by keeping the file open and trying to open it again. This is mainly intended to create a setup where the problem is reproducible, so you can verify later that you have indeed found a solution.
Once you have a way to reproduce the problem, you will probably have a better idea of the actual root cause, and possibly googling will help. In any case, this is the point where researching the root cause comes in.
Once the cause is understood, it is hopefully easy to devise a solution. If not, retrying the open until it succeeds may very well be the right solution.
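If retrying does turn out to be the pragmatic answer, a bounded retry with a short pause (only a sketch; the attempt count and delay are arbitrary, and C++11's <thread> is assumed for the sleep) reads a little less awkwardly than an open-ended do-while:
#include <chrono>
#include <fstream>
#include <thread>

// Keep trying to open the file for writing, pausing briefly between attempts.
// Returns false if every attempt failed; the caller still checks the stream.
bool openWithRetry(std::ofstream& out, const char* name, int maxTries = 10)
{
    for (int attempt = 0; attempt < maxTries; ++attempt) {
        out.clear();                     // drop the failbit from the previous attempt
        out.open(name, std::ofstream::out);
        if (out.is_open())
            return true;
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
    return false;
}
firstMethod() would then become something like: std::ofstream A; if (openWithRetry(A, "A")) { /* write data to file A */ }.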

C++ file operations cause "crash" on embedded Linux

I'm working on an embedded LED measuring system project. It uses ARM & Linux, and has 64M of memory and 1G of storage. When measuring, it is supposed to write data to a .csv file. I did it this way:
Create/open a file before measurement begins
In the measuring loop, when data is ready, put it into the file, then go on to the next measurement
When the user stops the measurement, the file is closed
But when I add this feature, the program keeps running for several hours and then the machine stops responding to anything (measuring stops, the UI is still displayed but doesn't respond to any action, etc.). At that point the csv file is about 15MB.
Without this feature, the machine can work well all day.
I've thought about this; maybe the memory is used up. With such a small amount of memory, is it possible to keep writing to a file like this? Or should I close the file every time I finish writing data? (In that case I would have to open/close the file very frequently, which would make our system slow, which I'd rather avoid.)
Apologies for my poor English; I hope someone can understand it and give me some help.
God is lighting your path, thank you all!
PS: I do believe the file operations themselves are correct.
The code is like this:
std::ofstream out_put;
out_put.open(filePath, std::ofstream::out | std::ofstream::trunc);

while (!userStoped()) {
    doSomeMesuring();
    for (int itemIndex = 0; itemIndex < itemCount; ++itemIndex) {
        out_put << ',' << itemName.toStdString() << ','
                << data->mdata.item[itemIndex].mvalue << ','
                << data->mdata.item[itemIndex].judge << std::endl;
    }
}
out_put.close();
You write to out_put, the ofstream, but never check whether the stream is still valid.
You could change the loop condition to
while (out_put.good() && !userStoped()) {
To prove to yourself that it is the writing to a stream that is causing the problem, comment out all of the measuring code and just write lots of 'x' characters (or your choice of character!) to the stream, to see whether you get the same result.
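A rough sketch of that experiment (the file name and row format are made up); it also reports where the stream went bad instead of silently hanging:
#include <fstream>
#include <iostream>

int main()
{
    std::ofstream out_put("test.csv", std::ofstream::out | std::ofstream::trunc);
    long long rows = 0;

    // Stand-in for the measuring loop: write dummy rows until the stream fails.
    while (out_put.good()) {
        out_put << "x,x,x" << std::endl;   // std::endl flushes each row as well
        ++rows;
    }

    // If the stream goes bad (storage full, I/O error, ...), report how far we got.
    std::cerr << "stream went bad after " << rows << " rows\n";
    return 0;
}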