How to check if a file is still being written? - c++

How can I check if a file is still being written? I need to wait for a file to be created, written and closed again by another process, so I can go on and open it again in my process.

In general, this is a difficult problem to solve. You can ask whether a file is open, under certain circumstances; however, if the other process is a script, it might well open and close the file multiple times. I would strongly recommend you use an advisory lock, or some other explicit method for the other process to communicate when it's done with the file.
That said, if that's not an option, there is another way. If you look in the /proc/<pid>/fd directories, where <pid> is the numeric process ID of some running process, you'll see a bunch of symlinks to the files that process has open. The permissions on the symlink reflect the mode the file was opened in - write permission on the symlink means the file was opened for writing.
So, if you want to know if a file is open, just scan over every process's /proc entry, and every file descriptor in it, looking for a writable symlink to your file. If you know the PID of the other process, you can directly look at its proc entry, as well.
This has some major downsides, of course. First, you can only see open files for your own processes, unless you're root. It's also relatively slow, and only works on Linux. And again, if the other process opens and closes the file several times, you're stuck - you might end up seeing it during the closed period, and there's no easy way of knowing if it'll open it again.
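A minimal sketch of that scan, assuming Linux and C++17's <filesystem> (the function name and minimal error handling are illustrative, not a production implementation):

#include <filesystem>
#include <string>
#include <system_error>
#include <sys/stat.h>

namespace fs = std::filesystem;

// Returns true if any process we are allowed to inspect has 'target' open for writing.
bool is_open_for_writing(const fs::path& target) {
    const fs::path canon = fs::canonical(target);       // resolve the file we care about
    for (const auto& proc : fs::directory_iterator("/proc")) {
        const std::string pid = proc.path().filename();
        if (pid.find_first_not_of("0123456789") != std::string::npos)
            continue;                                    // not a process directory
        std::error_code ec;
        for (const auto& fd : fs::directory_iterator(proc.path() / "fd", ec)) {
            const fs::path dest = fs::read_symlink(fd.path(), ec);
            if (ec || dest != canon)
                continue;                                // descriptor points elsewhere
            // On /proc the symlink's own mode bits mirror the open mode.
            struct stat st {};
            if (lstat(fd.path().c_str(), &st) == 0 && (st.st_mode & S_IWUSR))
                return true;                             // someone has it open for writing
        }
    }
    return false;
}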

You could let the writing process write a sentinel file (say "sentinel.ok") after it is finished writing the data file your reading process is interested in. In the reading process you can check for the existence of the sentinel before reading the data file, to ensure that the data file is completely written.
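A sketch of the reading side, assuming the writer creates a file called sentinel.ok only after data.dat is completely written (both names are placeholders):

#include <chrono>
#include <filesystem>
#include <fstream>
#include <thread>

int main() {
    namespace fs = std::filesystem;
    // Poll for the sentinel; the writer only creates it once the data file is complete.
    while (!fs::exists("sentinel.ok"))
        std::this_thread::sleep_for(std::chrono::milliseconds(100));

    std::ifstream data("data.dat", std::ios::binary);    // now safe to read
    // ... process the data file ...
}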

#blu3bird's idea of using a sentinel file isn't bad, but it requires modifying the program that's writing the file.
Here's another possibility that also requires modifying the writer, but it may be more robust:
Write to a temporary file, say "foo.dat.part". When writing is complete, rename "foo.dat.part" to "foo.dat". That way a reader either won't see "foo.dat" at all, or will see a complete version of it.
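A sketch of what that looks like on the writer's side (file names are illustrative; rename() is atomic on POSIX when both paths are on the same filesystem, and note that std::rename may fail on Windows if the destination already exists):

#include <cstdio>
#include <fstream>

void write_data() {
    {
        std::ofstream out("foo.dat.part", std::ios::binary);
        out << "... complete contents ...";
    }                                              // scope exit flushes and closes the file
    std::rename("foo.dat.part", "foo.dat");        // readers never see a partial foo.dat
}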

You can try using inotify
http://en.wikipedia.org/wiki/Inotify
If you know that the file will be opened once, written and then closed, it would be possible for your app to wait for the IN_CLOSE_WRITE event.
However, if the behaviour of the other application writing the file is more like open, write, close, open, write, close..., then you'll need some other mechanism to determine when the other app has truly finished with the file.
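For the simple open-write-close case, a rough sketch of waiting for IN_CLOSE_WRITE might look like this (Linux only; the directory path is a placeholder and error handling is minimal):

#include <climits>
#include <cstdio>
#include <sys/inotify.h>
#include <unistd.h>

int main() {
    int fd = inotify_init();
    if (fd < 0) { perror("inotify_init"); return 1; }

    // IN_CLOSE_WRITE fires when a writer closes a file it had open for writing.
    int wd = inotify_add_watch(fd, "/path/to/watched/dir", IN_CLOSE_WRITE);
    if (wd < 0) { perror("inotify_add_watch"); return 1; }

    alignas(struct inotify_event) char buf[sizeof(struct inotify_event) + NAME_MAX + 1];
    ssize_t len = read(fd, buf, sizeof(buf));      // blocks until an event arrives
    if (len > 0) {
        const struct inotify_event* ev = reinterpret_cast<const struct inotify_event*>(buf);
        std::printf("writer finished with: %s\n", ev->len ? ev->name : "(watched path)");
    }
    close(fd);
    return 0;
}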

Related

Autosaving files with multiple instances

I'm writing a Qt/C++ program that does long-running simulations, and to guard against data loss, I wrote some simple autosave behaviour. The program periodically saves to the user's temp directory (using QDir::temp()), and if the program closes gracefully, this file is deleted. If the program starts up and sees the file in that directory, it assumes a previous instance crashed or was forcibly ended, and it prompts the user about loading it.
Now here is the complication - I'd like this functionality to work properly even if multiple instances of the program are used at once. So when the program loads, it can't just look for the presence of an autosave file. If it finds one, it needs to determine whether that file was created by a running instance (in which case there's nothing wrong and nothing to be done) or whether it has been left over by an instance that crashed or was forcibly ended (in which case it should prompt the user about loading it).
My program is for Windows/Mac/Linux, so what would be the best way to implement this using Qt or otherwise in a cross-platform fashion?
Edit:
The comments suggested the use of the process identifier, which I can get using QCoreApplication::applicationPid(). I like this idea, but when the program loads and sees a file with a certain PID in the name, how can it look at the other running instances (if any) to see if there is a match?
You can simply use QSaveFile which, as the documentation states:
The QSaveFile class provides an interface for safely writing to files.
QSaveFile is an I/O device for writing text and binary files, without losing existing data if the writing operation fails.
While writing, the contents will be written to a temporary file, and if no error happened, commit() will move it to the final file. This ensures that no data at the final file is lost in case an error happens while writing, and no partially-written file is ever present at the final location. Always use QSaveFile when saving entire documents to disk.
As for multiple instances, you just need to reflect that in the filename.
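A small sketch of that pattern, with the instance's PID folded into the file name as suggested (the file name is a placeholder):

#include <QByteArray>
#include <QCoreApplication>
#include <QIODevice>
#include <QSaveFile>
#include <QString>

void autosave(const QByteArray& snapshot) {
    const QString name = QStringLiteral("autosave_%1.dat")
                             .arg(QCoreApplication::applicationPid());
    QSaveFile file(name);
    if (!file.open(QIODevice::WriteOnly))
        return;                          // could not even create the temporary file
    file.write(snapshot);
    // commit() atomically moves the temporary file into place; until then no
    // partially written autosave file is ever visible at the final path.
    if (!file.commit()) {
        // writing or renaming failed; report/log it
    }
}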

How can I ensure my process never locks another process out of a file?

I have a Windows process that runs in the background and periodically backs up files. The backup is done by uploading the file to a server.
During the backup, I don't want to lock any other application out of writing to or reading from the file; if another application wants to change the file, I should stop the upload and close the file.
Share mode is useless here; even though I'm sharing all access to the file being read, if the other process attempts to open it for writing without sharing read, it will be locked out of the file.
Is it possible to accomplish this on Windows, without writing a driver?
You may be interested in Volume Shadow Copy.
You certainly could copy the file and then check that the original and copy are identical (thus representing a consistent snapshot) prior to uploading to the server.
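A minimal sketch of that copy-then-verify idea, assuming the file fits comfortably in memory (the names and the helper are illustrative, and on Windows you would likely prefer CopyFile or VSS for the copy step):

#include <filesystem>
#include <fstream>
#include <iterator>
#include <string>

namespace fs = std::filesystem;

static std::string slurp(const fs::path& p) {
    std::ifstream in(p, std::ios::binary);
    return {std::istreambuf_iterator<char>(in), std::istreambuf_iterator<char>()};
}

// Copy the original, then confirm the original did not change while copying.
bool snapshot_for_upload(const fs::path& original, const fs::path& snapshot) {
    fs::copy_file(original, snapshot, fs::copy_options::overwrite_existing);
    // If the two still match, the snapshot is consistent and can be uploaded
    // without keeping the original open during the (slow) network transfer.
    return slurp(original) == slurp(snapshot);
}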
According to this MSDN page, if using NTFS, you should be able to lock the file inside a transaction of yours, while uploading the file to a server. This will ensure your view of the file does not change, even if the file has been changed externally.

How to determine when files are done copying for further processing?

Alright, so to start: this is strictly for Windows, and I'd prefer to use C++ over .NET. I'm not opposed to boost::filesystem, although if it can be avoided in favor of the straight Windows API I'd prefer that.
Now, the scenario: an application on another machine that I can't change is going to create files in a particular directory on this machine, and I need to make backups of those files and do some extra processing. Currently I've made a little application which sits and listens for change notifications in a target directory using the FindFirstChangeNotification and FindNextChangeNotification Windows APIs.
The problem is that while I can get notified when files in the directory are created, modified, change size, etc., it only notifies once and does not tell me specifically which files changed. I've looked at ReadDirectoryChangesW as well, but it's the same story there, except that I can get slightly more specific information.
Now, I can scan the directory and try to acquire locks or open the files to determine what specifically changed since the last notification and whether the files are available for further use. But in the case of copying a large file I've found this isn't good enough: the file won't be ready to be manipulated, I won't get any other notifications after the first, and so there is no way to tell when it's actually done copying unless, after the first notification, I continually try to acquire a lock until it succeeds.
The only other thing I can think of that would be less hackish would be some kind of end-token file, but since I don't have control over the application creating the files in the first place, I don't see how I'd go about doing that, and it's still not ideal.
Any suggestions?
This is a fairly common problem and one that doesn't have an easy answer. Acquiring locks is one of the best options when you cannot change the thing at the remote end. Another I have seen is to watch the file at intervals until the size doesn't change for an interval or two.
Other strategies include writing a no-byte file as a trigger when the main file is complete and writing to a temp directory then moving the complete file to the real destination. But to be reliable, it must be the sender who controls this. As the receiver, you are constrained to watching the directory and waiting for the file to settle.
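A sketch of that size-settling heuristic (the interval and the number of stable checks are arbitrary tuning knobs, and the file is assumed to already exist):

#include <chrono>
#include <cstdint>
#include <filesystem>
#include <thread>

// Block until the file's size has stayed the same for several consecutive checks.
void wait_until_stable(const std::filesystem::path& p,
                       std::chrono::milliseconds interval = std::chrono::milliseconds(500),
                       int required_stable_checks = 3) {
    namespace fs = std::filesystem;
    std::uintmax_t last = fs::file_size(p);
    int stable = 0;
    while (stable < required_stable_checks) {
        std::this_thread::sleep_for(interval);
        const std::uintmax_t now = fs::file_size(p);
        stable = (now == last) ? stable + 1 : 0;
        last = now;
    }
    // Size unchanged for several intervals; the sender has probably finished.
}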
It looks like ReadDirectoryChangesW is going to be your best bet. For each file copy operation, you should be receiving FILE_ACTION_ADDED followed by a bunch of FILE_ACTION_MODIFIED notifications. On the last FILE_ACTION_MODIFIED notification, the file should no longer be locked by the copying process. So, if you try to acquire a lock after each FILE_ACTION_MODIFIED of the copy, it should fail until the copy completes. It's not a particularly elegant solution, but there doesn't seem to be any notifications available for when a file copy completes.
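A rough sketch of that approach (Windows only; path handling and error checking are pared down to the essentials):

#include <windows.h>
#include <string>

// An exclusive open fails with a sharing violation while the copier still has the file open.
bool file_is_free(const std::wstring& path) {
    HANDLE h = CreateFileW(path.c_str(), GENERIC_READ, 0, nullptr,
                           OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (h == INVALID_HANDLE_VALUE) return false;
    CloseHandle(h);
    return true;
}

void watch_directory(const std::wstring& dir) {
    HANDLE hDir = CreateFileW(dir.c_str(), FILE_LIST_DIRECTORY,
                              FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                              nullptr, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, nullptr);
    alignas(DWORD) BYTE buf[64 * 1024];
    DWORD bytes = 0;
    while (ReadDirectoryChangesW(hDir, buf, sizeof(buf), FALSE,
                                 FILE_NOTIFY_CHANGE_FILE_NAME |
                                 FILE_NOTIFY_CHANGE_SIZE |
                                 FILE_NOTIFY_CHANGE_LAST_WRITE,
                                 &bytes, nullptr, nullptr)) {
        for (BYTE* p = buf;;) {
            const FILE_NOTIFY_INFORMATION* info =
                reinterpret_cast<const FILE_NOTIFY_INFORMATION*>(p);
            std::wstring name(info->FileName, info->FileNameLength / sizeof(WCHAR));
            std::wstring full = dir + L"\\" + name;
            if (info->Action == FILE_ACTION_MODIFIED && file_is_free(full)) {
                // The copy has (very likely) completed; back up / process 'full' here.
            }
            if (info->NextEntryOffset == 0) break;
            p += info->NextEntryOffset;
        }
    }
    CloseHandle(hDir);
}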
You can process the data once the file is closed, right? So the task is to track when the file is closed. This can be done using a file system filter driver. You can write your own or you can use our CallbackFilter product.

fopen: is it good idea to leave open, or use buffer?

So I have many log files that I need to write to. They are created when the program begins and saved to file when the program closes.
I was wondering if it is better to do:
fopen() at the start of the program, then close the files when the program ends - I would just write to the files when needed. Will anything (such as other file I/O) be slowed down by these files still being "open"?
OR
I save what needs to be written into a buffer, then open the file, write from the buffer, and close the file when the program ends. I imagine this would be faster?
Well, fopen(3) + fwrite(3) + fclose(3) is a buffered I/O package, so another layer of buffering on top of it might just slow things down.
In any case, go for a simple and correct program. If it seems to run slowly, profile it, and then optimize based on evidence and not guesses.
Short answer:
A large number of open files shouldn't slow anything down
Writing to the files will be buffered anyway
So you can leave those files open, but do not forget to check the limit on open files in your OS.
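If you want to check or raise that limit programmatically, here is a small POSIX sketch (on Windows, the C runtime's _setmaxstdio() serves a similar purpose for stdio streams):

#include <cstdio>
#include <sys/resource.h>

int main() {
    rlimit lim{};
    if (getrlimit(RLIMIT_NOFILE, &lim) == 0)
        std::printf("open-file limit: soft=%llu hard=%llu\n",
                    static_cast<unsigned long long>(lim.rlim_cur),
                    static_cast<unsigned long long>(lim.rlim_max));
    // To raise the soft limit (up to the hard limit):
    // lim.rlim_cur = desired_value; setrlimit(RLIMIT_NOFILE, &lim);
    return 0;
}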
Part of the point of log files is being able to figure out what happened when/if your program runs into a problem. Quite a few people also do log file analysis in (near) real-time. Your second scenario doesn't work for either of these.
I'd start with the first approach, but with a high-enough level interface that you could switch to the second if you really needed to. I wouldn't view that switch as a major benefit of the high-level interface though -- the real benefit would normally be keeping the rest of the code a bit cleaner.
There is no good reason to buffer log messages in your program and write them out on exit. Simply write them as they're generated using fprintf. The stdio system will take care of the buffering for you. Of course this means opening the file (with fopen) from the beginning and keeping it open.
For log files, you will probably want a functional interface that flushes the data to disk after each complete message, so that if the program crashes (it has been known to happen), the log information is safe. Leaving stuff in standard I/O buffers means excavating the data from a core dump - which is less satisfactory than having the information on disk safely.
Other I/O really won't be affected by holding one - or even a few - log files open. You lose a few file descriptors, perhaps, but that is not often a serious problem. When it is a problem, you use one file descriptor for one log file - and you keep it open so you can log information. You might elect to map stderr to the log file, leaving that as the file descriptor that's in use.
It's been mentioned that the FILE* returned by fopen is already buffered. For logging, you should probably also look into using the setbuf() or setvbuf() functions to change the buffering behavior of the FILE*.
In particular, you might want to set the buffering mode to line-at-a-time, so the log file is flushed automatically after each line is written. You can also specify the size of the buffer to use.
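A short sketch of that, assuming a plain stdio log file (the file name is a placeholder; note that setvbuf must be called before the first I/O on the stream, and that some C runtimes treat line buffering on files as full buffering):

#include <cstdio>

int main() {
    std::FILE* log = std::fopen("app.log", "a");
    if (!log) return 1;

    // _IOLBF: flush the stdio buffer at every newline; 4096 is the buffer size.
    std::setvbuf(log, nullptr, _IOLBF, 4096);

    std::fprintf(log, "program started\n");   // reaches the OS once the newline is written
    // ... log more messages as they happen ...
    std::fclose(log);
    return 0;
}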

How to make sure not to read a file before the write to it has finished

When trying to monitor a directory using inotify on Linux, as we know, we get notified as soon as the file gets created (before the other process finishes writing to it).
Is there an effective way to make sure that the file is not read before writing to it is complete by the other process?
We could potentially add a delayed read, but as we all know, that approach is flawed.
For a little more clarity on the scenario: the two processes run as different users, and the expected load is a few hundred files created per second.
Based on your question, it sounds like you're currently monitoring the directory with the IN_CREATE (and maybe IN_OPEN) flag. Why not also use the IN_CLOSE flag so that you get notified when the file is closed? From there, it should be easy to keep track of whether something has the file open and you'll know that you don't want to try reading it yet.
Create it somewhere else, write to it, close it, then rename it - or am I missing something obvious?
You can check /proc/<pid>/fd to see if the file is still open. If it is not listed there, you can be sure that the process is no longer using it.
Maybe the lsof command can help. It lists all open files.
$ man lsof