How to write to file in C++ without locking it? - c++

C++, on Windows 7.
When writing to my log file, I sometimes set a breakpoint, or the program gets stuck on something. When I then try to peek into my logfile from another program, it says "The file cannot be opened because it is in use by another process". Well, that's true, but I've worked with other programs that still allow reading from a logfile while they are writing to it, so I know it should be possible. I tried _fsopen and unlocking the file, but without success.
FILE* logFile;
//fopen_s(&logFile, "log.log", "w");
logFile = _fsopen("log.log", "w", _SH_DENYNO);
if (!logFile)
    throw "fopen";
_unlock_file(logFile);

If you have the log file open with full sharing, other programs are still prevented from opening it for exclusive access or with deny-write sharing.
It seems the second program wants more access than is compatible with your open handle.
Also, you presumably only want to append to the log, so use mode "a" instead of "w".
Last, do not call _unlock_file unless you previously called _lock_file on the same file.
There is a way to do what you want, though:
Open your file without any access, and then use opportunistic locks.
Raymond Chen's blog The Old New Thing also has a nice example: https://devblogs.microsoft.com/oldnewthing/20130415-00/?p=4663
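For the first two fixes, here is a minimal corrected sketch (assuming the same log.log name as the snippet above; the oplock approach is more involved and is covered in the linked post):

#include <share.h>  // _SH_DENYNO
#include <cstdio>

// Sketch: append mode instead of "w", full sharing, and no _unlock_file.
FILE* openLog()
{
    FILE* logFile = _fsopen("log.log", "a", _SH_DENYNO);
    if (!logFile)
        throw "fopen";
    return logFile;  // other processes can read while we append
}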

Related

How to allow other programs to read file, while writing to it using fopen and fwrite?

I'm opening a file for a video I'm creating and writing it to disk with fopen in C++, and writing works fine. But when I try to read the file while I'm still writing it, readers throw errors saying they don't have permission to read it. As soon as I close the file or stop the program, I can suddenly read from it.
It's not an issue of the write not finishing, because if I crash the program I can still read the file. Also, VLC's log tells me it's a permission issue.
Any idea how to change that permission?
In response to William asking for code snippets, and whether the open happened before the file existed:
Thanks William, here's what I've got. I waited a few minutes and could see the file in Windows Explorer by that point, and I waited until after I'd flushed and data was there. I still couldn't open it with VLC, Notepad++, Notepad, or Windows Media Player.
Notepad says it cannot access the file because it is being used by another process; the others report much the same.
Here is the VLC log while it tries to open this:
http://snippi.com/s/g4cbu23
Here is where I create the file with fopen:
http://snippi.com/s/cyajw4h
At the very end is where I write to the file using fwrite and flush:
http://snippi.com/s/oz27m0g
You need to use _fsopen with _SH_DENYNO if you want the file to be shareable.
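A hypothetical writer loop along those lines (the file name and frame buffer are placeholders): because the writer itself opens with _SH_DENYNO, a reader such as VLC can open the file while frames are still being appended.

#include <share.h>  // _SH_DENYNO
#include <cstdio>

void writeFrames(const void* frame, size_t frameSize, int count)
{
    FILE* out = _fsopen("video.avi", "wb", _SH_DENYNO);  // shareable open
    if (!out)
        return;
    for (int i = 0; i < count; ++i) {
        fwrite(frame, 1, frameSize, out);
        fflush(out);  // push buffered data so readers see it promptly
    }
    fclose(out);
}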

Writing to a file with QFile fails, without error code, when already opened in Excel

I'm trying to write to an existing file with QFile, which normally works as expected. The problem, however, is that if the file is open in Excel, writing to it from my program fails.
I tested the permissions with QFileInfo and have full read and write permissions on the file. The test
bool opened = file->open(QIODevice::WriteOnly);
returns true.
The same problem does not occur when the file is open in Notepad++.
How can I check if the file is locked and can't be written to?
Excel locks the files it has open for exclusive use. You can't write to such a file, move it, or delete it. There is no way to bypass this lock.
See also: Write to locked file regardless of lock status
When you use the QFile::write function, it returns the number of bytes written, or -1 if an error occurred.
If you check the return code from the write function, you can use it to determine that the file is locked by another process.
QFile::open returns without error because you can still get a valid handle to the file, even though another process has locked it, preventing you from writing to it at the same time.
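A minimal sketch of that check (the helper name is made up):

#include <QFile>
#include <QDebug>

// Sketch: detect the lock from the -1 return of QFile::write,
// since open() alone reports success.
bool tryWrite(const QString& path, const QByteArray& data)
{
    QFile file(path);
    if (!file.open(QIODevice::WriteOnly))
        return false;                      // open can fail for other reasons
    if (file.write(data) == -1) {          // -1 is the locked-file case here
        qWarning() << "write failed:" << file.errorString();
        return false;
    }
    return file.flush();                   // flush surfaces deferred errors
}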

QFile / QTextStream don't show error on delete file being written to

I am writing to a QFile through a QTextStream, and everything works great. To build in some error detection, I tried deleting the output file between writes.
Strangely, QTextStream's status continues to show 0 (no error), and QFile's error method also returns 0, yet the file is gone and the text being written is lost.
What's going on? How can I detect the failure to write? Am I looking at the wrong methods?
Not sure about Windows, but on Linux and most Unix-type systems the scenario you describe is simply not an error at all from the OS's point of view: it's perfectly legal to continue writing to a file that has been deleted, and it "works" in the sense that data is still shuffled to and from the filesystem. The file is still there in the filesystem until the last handle to it is closed.
(I believe that on Windows you'll get an error if you try to delete the file while it's in use, at least if it was open with the default open mode - not 100% sure though.)
If you need to check for "file deleted", you'll need to write those checks yourself.
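A sketch of such a check in Qt (hypothetical helper; note that on Windows the delete itself would usually fail while the file is open):

#include <QFile>
#include <QTextStream>

// Sketch: since the OS reports no error, re-check between writes
// that the output path still exists.
bool writeLine(QFile& file, QTextStream& out, const QString& line)
{
    if (!QFile::exists(file.fileName()))
        return false;                      // the file was deleted under us
    out << line << '\n';
    out.flush();
    return out.status() == QTextStream::Ok;
}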

How to check if a file is still being written?

How can I check if a file is still being written? I need to wait for a file to be created, written and closed again by another process, so I can go on and open it again in my process.
In general, this is a difficult problem to solve. You can ask whether a file is open, under certain circumstances; however, if the other process is a script, it might well open and close the file multiple times. I would strongly recommend you use an advisory lock, or some other explicit method for the other process to communicate when it's done with the file.
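A sketch of the advisory-lock handshake on the reader's side (Linux/Unix, flock-based; assumes the writer cooperates by holding LOCK_EX while it writes):

#include <sys/file.h>  // flock
#include <fcntl.h>     // open
#include <unistd.h>    // close

void readWhenComplete(const char* path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return;
    flock(fd, LOCK_SH);   // blocks until the writer releases its LOCK_EX
    /* ... safe to read the complete file through fd here ... */
    flock(fd, LOCK_UN);
    close(fd);
}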
That said, if that's not an option, there is another way. If you look in the /proc/<pid>/fd directories, where <pid> is the numeric process ID of some running process, you'll see a bunch of symlinks to the files that process has open. The permissions on the symlink reflect the mode the file was opened with: write permission means it was opened for writing.
So, if you want to know if a file is open, just scan over every process's /proc entry, and every file descriptor in it, looking for a writable symlink to your file. If you know the PID of the other process, you can directly look at its proc entry, as well.
This has some major downsides, of course. First, you can only see open files for your own processes, unless you're root. It's also relatively slow, and only works on Linux. And again, if the other process opens and closes the file several times, you're stuck - you might end up seeing it during the closed period, and there's no easy way of knowing if it'll open it again.
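A rough sketch of that scan (Linux-only, C++17; takes the target path as an argument):

#include <filesystem>
#include <iostream>
#include <string>

namespace fs = std::filesystem;

// Sketch: print the PID of every process that currently has the target
// file open, by resolving the /proc/<pid>/fd symlinks we're allowed to read.
int main(int argc, char** argv)
{
    if (argc < 2)
        return 1;
    const fs::path target = fs::canonical(argv[1]);
    for (const auto& proc : fs::directory_iterator("/proc")) {
        const std::string pid = proc.path().filename().string();
        if (pid.find_first_not_of("0123456789") != std::string::npos)
            continue;                      // not a PID directory
        std::error_code ec;                // skip unreadable fd directories
        for (const auto& fd : fs::directory_iterator(proc.path() / "fd", ec))
            if (fs::read_symlink(fd.path(), ec) == target)
                std::cout << "open in pid " << pid << '\n';
    }
}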
You could let the writing process write a sentinel file (say "sentinel.ok") after it is finished writing the data file your reading process is interested in. In the reading process you can check for the existence of the sentinel before reading the data file, to ensure that the data file is completely written.
@blu3bird's idea of using a sentinel file isn't bad, but it requires modifying the program that's writing the file.
Here's another possibility that also requires modifying the writer, but it may be more robust:
Write to a temporary file, say "foo.dat.part". When writing is complete, rename "foo.dat.part" to "foo.dat". That way a reader either won't see "foo.dat" at all, or will see a complete version of it.
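A sketch of that pattern (C++17; "foo.dat" taken from the example above):

#include <filesystem>
#include <fstream>

namespace fs = std::filesystem;

// Sketch: a rename within one filesystem is atomic, so readers see
// either no "foo.dat" at all or a complete one.
void publish()
{
    {
        std::ofstream out("foo.dat.part", std::ios::binary);
        out << "all of the data\n";  // write everything under the temp name
    }                                // close before renaming
    fs::rename("foo.dat.part", "foo.dat");
}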
You can try using inotify
http://en.wikipedia.org/wiki/Inotify
If you know that the file will be opened once, written and then closed, it would be possible for your app to wait for the IN_CLOSE_WRITE event.
However, if the behaviour of the other application doing the writing is more like open, write, close, open, write, close, ... then you'll need some other mechanism for determining when the other app has truly finished with the file.
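A minimal sketch of the IN_CLOSE_WRITE wait (Linux-only; the directory and file name are placeholders):

#include <sys/inotify.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

// Sketch: block until some writer closes a file it had open for writing
// in the watched directory, then check whether it was the file we want.
int main()
{
    int fd = inotify_init();
    int wd = inotify_add_watch(fd, "/tmp/incoming", IN_CLOSE_WRITE);
    if (fd < 0 || wd < 0) { perror("inotify"); return 1; }

    alignas(inotify_event) char buf[4096];
    ssize_t len = read(fd, buf, sizeof buf);  // blocks until an event arrives
    for (char* p = buf; p < buf + len; ) {
        auto* ev = reinterpret_cast<inotify_event*>(p);
        if (ev->len > 0 && std::strcmp(ev->name, "foo.dat") == 0)
            std::puts("foo.dat was written and closed");
        p += sizeof(inotify_event) + ev->len;
    }
    inotify_rm_watch(fd, wd);
    close(fd);
}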

Opening the same file twice with different flags?

Can I open the same file twice (with CreateFileA), using different flags (in this case, one with FILE_FLAG_NO_BUFFERING, and one without)?
In detail, this is the case: During startup, I create a temporary file (with FILE_FLAG_DELETE_ON_CLOSE). I fill it up sequentially, and I don't want to worry about doing unbuffered IO in this part. Then, while the process is running, I want to access that file using unbuffered IO, because I have my own caching logic. Thus, I'm thinking of opening the same file again, this time with FILE_FLAG_NO_BUFFERING, and then closing the old handle. I want to do this in this overlapped way for two reasons:
Concurrency. If I close the old handle before I open the new one, someone else might mess with my file in the meantime.
FILE_FLAG_DELETE_ON_CLOSE would delete my file when I close the first handle without having another one open. This is a minor annoyance that I could work around.
Just remember to include FILE_SHARE_DELETE in share mode. I think FILE_FLAG_DELETE_ON_CLOSE is the only flag that affects more than just "your" handle.
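A sketch of that handle handoff (the file name and the read/write phases are placeholders):

#include <windows.h>

// Sketch: both opens share read/write/delete, so the second open succeeds
// even though the first one used FILE_FLAG_DELETE_ON_CLOSE.
int main()
{
    const DWORD share = FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE;

    HANDLE h1 = CreateFileA("temp.dat", GENERIC_READ | GENERIC_WRITE, share,
                            nullptr, CREATE_ALWAYS,
                            FILE_FLAG_DELETE_ON_CLOSE, nullptr);
    /* ... fill the file sequentially through h1, buffered ... */

    HANDLE h2 = CreateFileA("temp.dat", GENERIC_READ, share, nullptr,
                            OPEN_EXISTING, FILE_FLAG_NO_BUFFERING, nullptr);
    CloseHandle(h1);  // the file survives: h2 still holds it open

    if (h2 != INVALID_HANDLE_VALUE) {
        /* ... unbuffered, sector-aligned reads through h2 ... */
        CloseHandle(h2);  // last handle closes, and the file is deleted
    }
    return 0;
}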
Isn't this contradictory?
You want to open the file twice because, if you open it only after closing the previous handle, someone else might mess with your file in between. But in reality, you are the one trying to mess with your own file.
If you cannot guarantee exclusive access to the file, how can you prevent someone else from doing something to it? And if you do open it exclusively, how can you reopen it?
AFAIK, once a file is opened exclusively, no further opens are allowed, even from the same process.