ofstream::write fails in the middle when writing large binary files - c++

During runtime my program creates and writes two large binary files simultaneously to the disk. File A is about 240GB, file B is about 480GB. The two files are maintained by two ofstream objects, and the write operations are performed with the member function write in a loop.
Now the problem is: the write operation fails every time the whole procedure reaches about 63~64% completion. The first time it failed on file A, and the second time it failed on file B.
While the program has been running these past few days, the power supply of my building has happened to be under upgrade. By a strange coincidence, every time the program failed, the electrician happened to be cutting and restoring the power supply of the central air-conditioner and some offices. So I really wonder whether the write failures were caused by an unstable power supply.
I'm sure that the failure is not caused by a file size limit, because I've written a single 700GB file using the same method without any problem.
Is there any way to find out the detailed reason? I feel that the flags (badbit, eofbit and failbit) of ofstream don't provide much information. Now I'm trying to use errno and strerror to get a detailed error message. However, I see that a possible value of errno is EIO, which means "I/O error", which again provides no useful information.
Has anyone encountered this situation before?
By the way, the program runs without error when the sizes of file A and file B are small.
PS: This time the program failed at 55%, and the errno value is EINVAL: "Invalid argument". Very strange.
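For reference, a minimal sketch of that kind of error reporting around the write loop is shown below; the file name, chunk size and chunk count are placeholders rather than the actual values from the program:

#include <cerrno>
#include <cstring>
#include <fstream>
#include <iostream>
#include <vector>

int main() {
    std::ofstream out("fileA.bin", std::ios::binary);
    std::vector<char> chunk(64 * 1024 * 1024); // placeholder: 64 MiB per write

    for (int i = 0; i < 4096; ++i) {           // placeholder chunk count
        errno = 0;
        out.write(chunk.data(), static_cast<std::streamsize>(chunk.size()));
        if (!out) {
            // The stream flags alone say little; errno, set by the underlying
            // CRT call, is often the only extra detail available.
            std::cerr << "write failed at chunk " << i
                      << ": bad=" << out.bad() << " fail=" << out.fail()
                      << ", errno=" << errno << " (" << std::strerror(errno) << ")\n";
            return 1;
        }
    }
    return 0;
}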

Confirmed: the cause is indeed a bug in NTFS - a heavily fragmented file in an NTFS volume may not grow beyond a certain size. This means that CreateFile and WriteFile cannot fundamentally solve the problem either.

All right, I've solved the problem with the Win32 API: CreateFile and WriteFile.
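For what it's worth, a minimal sketch of that Win32 path (error handling trimmed; the file name and chunk size are placeholders):

#include <windows.h>
#include <cstdio>
#include <vector>

int main() {
    HANDLE h = CreateFileA("fileB.bin", GENERIC_WRITE, 0, nullptr,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (h == INVALID_HANDLE_VALUE) {
        std::printf("CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    std::vector<char> chunk(64 * 1024 * 1024); // placeholder chunk size
    DWORD written = 0;
    if (!WriteFile(h, chunk.data(), static_cast<DWORD>(chunk.size()),
                   &written, nullptr)) {
        // GetLastError gives the raw Win32 error code, which is more
        // specific than the ofstream state flags.
        std::printf("WriteFile failed: %lu\n", GetLastError());
    }
    CloseHandle(h);
    return 0;
}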

Related

Concept to implement recovery of file after crash of application (eg. SIGSEGV)

I want to implement a feature where, when my application crashes, it saves the current data to a temporary file so it can be recovered on the next launch, like many applications do (e.g. Word).
So as far as I could find out this is typically done by just saving the file every few minutes and then loading that last saved file on startup if it exists.
However, I was wondering if it could also be done by catching all unhandled exceptions and then calling the save method when the application crashes.
The advantage would be that I don't have to write to the disk all the time, because SSDs don't like that, and the file would really be from the crash time and not 10 minutes old in the worst case.
I've tried this on linux with
signal(SIGSEGV, crashSave);
where crashSave() is the function that performs the save, and it seems to work. However, I'm not sure whether this will work on Windows as well.
And is there a general reason why I should not do this (except that the saved file might be corrupted in a few cases)? Or what is the advantage that makes other applications do timed autosave instead?
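For reference, a minimal sketch of the handler-registration part on Linux, assuming crashSave keeps to async-signal-safe calls (a SIGSEGV handler runs in a possibly corrupted process, so it should only dump pre-prepared data); on Windows the rough equivalent would be SetUnhandledExceptionFilter rather than signal:

#include <csignal>
#include <cstdlib>
#include <unistd.h>

// Hypothetical emergency-save handler: only async-signal-safe calls
// (write, _exit, ...) are safe here, so dump a pre-prepared buffer directly.
extern "C" void crashSave(int) {
    const char msg[] = "crash detected, attempting emergency save\n";
    write(STDERR_FILENO, msg, sizeof msg - 1);
    // ... write(fd, recoveryBuffer, recoverySize) for data kept ready to dump ...
    _exit(EXIT_FAILURE);
}

int main() {
    std::signal(SIGSEGV, crashSave); // install the crash handler (POSIX)
    // ... normal application code ...
    return 0;
}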

QFile / QTextStream don't show an error when the file being written to is deleted

I am writing to a QFile using a QTextStream, and all works great. I'm trying to create some error detection, so I tried deleting the output file between writes.
Strangely, QTextStream's status continues to show 0 (no error), and QFile's error method returns 0, yet the file is gone and the text being written is being lost.
What's going on? How can I detect the failure to write? Am I looking at the wrong methods?
Not sure about Windows, but on Linux and most Unix-type systems the scenario you describe is simply not an error at all from the OS's point of view: it's perfectly legal to continue writing to a file that has been deleted, and it "works" - data is still shuffled to/from the filesystem, because the file still exists in the filesystem until the last handle to it is closed.
(I believe that on Windows you'll get an error if you try to delete the file while it's in use, at least if it was open with the default open mode - not 100% sure though.)
If you need to check for "file deleted", you'll need to write those checks yourself.
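For example, a minimal sketch of such a manual check, assuming it is acceptable to stat the path before each write (the writeLine helper is made up for illustration, not Qt API):

#include <QFile>
#include <QFileInfo>
#include <QTextStream>
#include <QDebug>

// The OS keeps the open handle usable even after the path is unlinked,
// so check the directory entry explicitly before each write.
bool writeLine(QFile &file, QTextStream &out, const QString &line) {
    if (!QFileInfo::exists(file.fileName())) {
        qWarning() << "output file was deleted:" << file.fileName();
        return false;
    }
    out << line << '\n';
    out.flush();
    return out.status() == QTextStream::Ok;
}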

fopen: is it a good idea to leave files open, or use a buffer?

So I have many log files that I need to write to. They are created when the program begins, and they are saved to disk when the program closes.
I was wondering if it is better to do:
fopen() at the start of the program, then close the files when the program ends - I would just write to the files when needed. Will anything (such as other file I/O) be slowed down by these files still being "open"?
OR
I save what needs to be written into a buffer, and then open the file, write from the buffer, and close the file when the program ends. I imagine this would be faster?
Well, fopen(3) + fwrite(3) + fclose(3) is a buffered I/O package, so another layer of buffering on top of it might just slow things down.
In any case, go for a simple and correct program. If it seems to run slowly, profile it, and then optimize based on evidence and not guesses.
Short answer:
A large number of open files shouldn't slow anything down
Writing to a file will be buffered anyway
So you can leave those files open, but do not forget to check the limit on open files in your OS.
Part of the point of log files is being able to figure out what happened when/if your program runs into a problem. Quite a few people also do log file analysis in (near) real-time. Your second scenario doesn't work for either of these.
I'd start with the first approach, but with a high-enough level interface that you could switch to the second if you really needed to. I wouldn't view that switch as a major benefit of the high-level interface though -- the real benefit would normally be keeping the rest of the code a bit cleaner.
There is no good reason to buffer log messages in your program and write them out on exit. Simply write them as they're generated using fprintf. The stdio system will take care of the buffering for you. Of course this means opening the file (with fopen) from the beginning and keeping it open.
For log files, you will probably want a functional interface that flushes the data to disk after each complete message, so that if the program crashes (it has been known to happen), the log information is safe. Leaving stuff in standard I/O buffers means excavating the data from a core dump - which is less satisfactory than having the information on disk safely.
Other I/O really won't be affected by holding one - or even a few - log files open. You lose a few file descriptors, perhaps, but that is not often a serious problem. When it is a problem, you use one file descriptor for one log file - and you keep it open so you can log information. You might elect to map stderr to the log file, leaving that as the file descriptor that's in use.
It's been mentioned that the FILE* returned by fopen is already buffered. For logging, you should probably also look into using the setbuf() or setvbuf() functions to change the buffering behavior of the FILE*.
In particular, you might want to set the buffering mode to line-at-a-time, so the log file is flushed automatically after each line is written. You can also specify the size of the buffer to use.
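A small sketch of that, assuming a plain FILE*-based logger (the file name is a placeholder); note that on some platforms _IOLBF falls back to full buffering, in which case an explicit fflush per message does the same job:

#include <cstdio>

int main() {
    std::FILE *log = std::fopen("app.log", "a");
    if (!log)
        return 1;

    // Line buffering: each complete line is handed to the OS automatically,
    // so a crash loses at most the line currently being built.
    std::setvbuf(log, nullptr, _IOLBF, 4096);

    std::fprintf(log, "program started\n");
    // ... fprintf(log, ...) as events happen ...

    std::fclose(log);
    return 0;
}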

What causes WriteFile to return ERROR_ACCESS_DENIED?

We currently face the problem of a call to WriteFile (or rather CFile::Write - but that just calls WriteFile internally) causing the Win32 error 5, ERROR_ACCESS_DENIED.
(EDIT: Note that we can't repro the behavior. All we have at the moment is a logfile indicating the source line where the CFile::Write was, recording ERROR_ACCESS_DENIED as the error!)
(EDIT: The file is on a local drive and it is in fact a file and not a directory.)
Now, WriteFile's documentation doesn't really help, and experimenting with a simple test app yields the following results:
WriteFile will cause ERROR_ACCESS_DENIED if it is called for a file handle that is not opened for writing (i.e. is opened for reading only).
It will not cause ERROR_ACCESS_DENIED if
The handle is not valid or the file isn't open at all
The access rights, or the write protected flag for the file are modified after the file has been opened by the process. (If these are modified before the file is opened, then we never get to WriteFile because opening the file will fail.)
The file is somehow locked by another process/handle (this will at best result in error 32, ERROR_SHARING_VIOLATION).
That leaves us with the situation that apparently the only possibility for this call to fail is if the file was actually opened with the read flag instead of the write flag. However, looking at our code, this seems extremely unlikely. (Due to our tracing, we can be sure that WriteFile failed and that the error is ERROR_ACCESS_DENIED; we cannot be 100.1% sure of the opening flags, because these are not traced out.)
Are there any other known circumstances where WriteFile (CFile::Write) would cause an ERROR_ACCESS_DENIED?
Note: To additionally clarify the context of this question:
The file was open, therefore it can't be a directory or somesuch
All tests I performed indicate that while the file is open it cannot be deleted, so the file should still have been there on the call to WriteFile
The file is located on a local drive and not on a network drive.
I should add that we're running on Windows XP SP3 and the app is compiled with Visual Studio 2005.
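For future incidents it may help to trace both the open flags and the raw error text at the point of failure; a rough sketch of such tracing around a raw WriteFile call (the helper names are made up for illustration):

#include <windows.h>
#include <cstdio>

// Log the last Win32 error together with its system message text.
void logLastError(const char *where) {
    DWORD err = GetLastError();
    char msg[256] = {};
    FormatMessageA(FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS,
                   nullptr, err, 0, msg, sizeof msg, nullptr);
    std::printf("%s failed: error %lu (%s)\n", where, err, msg);
}

bool writeBlock(HANDLE h, const void *data, DWORD size) {
    DWORD written = 0;
    if (!WriteFile(h, data, size, &written, nullptr)) {
        logLastError("WriteFile");        // e.g. error 5 = ERROR_ACCESS_DENIED
        return false;
    }
    if (!FlushFileBuffers(h)) {           // can surface the same error codes
        logLastError("FlushFileBuffers");
        return false;
    }
    return true;
}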
The question was "What causes WriteFile to return ERROR_ACCESS_DENIED?" and I stated in the question that WriteFile will cause ERROR_ACCESS_DENIED if it is called for a file handle that is not opened for writing (i.e. is opened for reading only).
After adding further logging for the open flags and another incident, it turns out this was correct. The logging for the open flags shows that at the point of error, the file object was opened with CFile::modeRead and therefore we got ERROR_ACCESS_DENIED.
Haven't found out yet which weird code path leads to this, but this just goes to show: Never trust your own code. :-)
(Oh, and by the way: it wasn't ::WriteFile that failed, but the ::FlushFileBuffers API, though apparently that returns the same error.)
There are about a dozen different situations that might result in ERROR_ACCESS_DENIED. Internally, all WriteFile does is call NtWriteFile and map its (somewhat meaningful) NTSTATUS error code into a less meaningful Win32 error code.
Among other things, ERROR_ACCESS_DENIED could indicate that the file is on a network volume and something went wrong with write permissions, or that the file is really not a file but a directory.
If you can debug it, you should. It could be a million things:
MSDN is wrong (it happens a lot)
some app (a virus?) is hooking WriteFile and causing different behavior
a filesystem problem?
something wrong in your logging or observations

File corruption detection and error handling

I'm a newbie C++ developer and I'm working on an application which needs to write out a log file every so often, and we've noticed that the log file has been corrupted a few times when running the app. The main scenarios seem to be when the program is shutting down or crashes, but I'm concerned that this isn't the only time something may go wrong, as the application was born out of a fairly "quick and dirty" project.
It's not critical to have the absolute most up-to-date data saved, so one idea someone mentioned was to alternately write to two log files, so that if the program crashes at least one will still have proper integrity. But this doesn't smell right to me, as I haven't really seen any other application use this method.
Are there any "best practises" or standard "patterns" or frameworks to deal with this problem?
At the moment I'm thinking of doing something like this -
Write data to a temp file
Check the data was written correctly with a hash
Rename the original file, and put the temp file in place.
Delete the original
Then if anything fails I can roll back by simply deleting the temp file, and the original will be untouched.
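A minimal sketch of that write-then-swap sequence using std::filesystem (C++17), with the hash check omitted and a direct rename-over instead of the separate rename/delete steps; the paths and function name are placeholders:

#include <filesystem>
#include <fstream>
#include <string>

namespace fs = std::filesystem;

// Write the new contents to a temporary file first, then replace the
// original, so a crash mid-write leaves the old file intact.
bool saveLog(const fs::path &target, const std::string &contents) {
    fs::path tmp = target;
    tmp += ".tmp";

    {
        std::ofstream out(tmp, std::ios::binary | std::ios::trunc);
        out << contents;
        out.flush();
        if (!out) {
            std::error_code ec;
            fs::remove(tmp, ec);   // roll back: discard the partial temp file
            return false;
        }
    }
    // ... optionally re-read tmp here and verify a hash ...

    std::error_code ec;
    fs::rename(tmp, target, ec);   // on most platforms this replaces the target atomically
    return !ec;
}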
You must find the reason why the file gets corrupted. If the app crashes unexpectedly, it can't corrupt the file. The only thing that can happen is that the file is truncated (i.e. the last log messages are missing). But the app can't really jump around in the file and modify something elsewhere (unless you call seek in the logging code which would surprise me).
My guess is that the app is multi-threaded and the logging code is being called from several threads, which can easily lead to the data being corrupted before it is written to the log.
You probably forgot to call fsync() every so often, or the data comes in from different threads without proper synchronization among them. Hard to tell without more information (platform, form of corruption you see).
A workaround would be to use logfile rollover, i.e. starting a new file every so often.
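A tiny sketch of the rollover idea, assuming time-stamped file names are acceptable (the prefix and function name are placeholders):

#include <cstdio>
#include <ctime>
#include <string>

// Close the current log (if any) and open a fresh one named with the current
// time, e.g. "app-1700000000.log". Rolling over every so often bounds how much
// a single truncated or corrupted file can cost you.
std::FILE *rolloverLog(std::FILE *current, const std::string &prefix) {
    if (current)
        std::fclose(current);
    std::string name = prefix + "-" + std::to_string(std::time(nullptr)) + ".log";
    return std::fopen(name.c_str(), "a");
}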
I really think that you (and others) are wasting your time when you start adding complexity to log files. The whole point of a log is that it should be simple to use and implement, and should work most of the time. To that end, just write the log to an unbuffered stream (like cerr in a C++ program) and live with any - very occasional, in my experience - snafus.
OTOH, if you really need an audit trail of everything your app does, for legal reasons, then you should be using some form of transactional storage such as a SQL database.
Not sure if your app is multi-threaded - if so, consider using the Active Object Pattern (PDF) to put a queue in front of the log and make all writes happen within a single thread. That thread can commit the log in the background. All log writes will be asynchronous and in order, but not necessarily written immediately.
The active object can also batch writes.
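A rough sketch of that queue-in-front-of-the-log idea with standard threads (a simplified take, not a full Active Object implementation; the class name is made up):

#include <condition_variable>
#include <fstream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// All writes funnel through one background thread, so log lines from
// different producer threads can never interleave mid-line.
class AsyncLogger {
public:
    explicit AsyncLogger(const std::string &path)
        : out_(path, std::ios::app), worker_([this] { run(); }) {}

    ~AsyncLogger() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            done_ = true;
        }
        cv_.notify_one();
        worker_.join();
    }

    void log(std::string line) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(line));
        }
        cv_.notify_one();
    }

private:
    void run() {
        std::unique_lock<std::mutex> lock(mutex_);
        while (!done_ || !queue_.empty()) {
            cv_.wait(lock, [this] { return done_ || !queue_.empty(); });
            while (!queue_.empty()) {
                std::string line = std::move(queue_.front());
                queue_.pop();
                lock.unlock();          // write to disk outside the lock
                out_ << line << '\n';
                out_.flush();
                lock.lock();
            }
        }
    }

    std::ofstream out_;
    std::mutex mutex_;
    std::condition_variable cv_;
    std::queue<std::string> queue_;
    bool done_ = false;
    std::thread worker_;
};

Producer threads then just call log() and never touch the file directly, which is what keeps the output ordered and uncorrupted.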