Check directory's sharing mode in Windows - C++

My question seems simple, but Google is silent. Maybe I'm banned? :)
So the question is: before deleting a directory, can I check whether any file in it (or in its subdirectories) is blocked from being deleted? Is there a simple way to do this?

No, there isn't.
And even if there was, it wouldn't work. Consider this sequence of events:
1. You perform the check and it succeeds (there are no blocked files).
2. Another process receives a CPU quantum and opens a file without the FILE_SHARE_DELETE flag.
3. Your process gets the CPU back and proceeds to delete the directory -- only to discover that it can't, because now there is a blocked file.
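Since there is no race-free check, the robust pattern is the opposite: just attempt the deletion and handle the failure. A minimal sketch of that idea (the retry count and delay are purely illustrative choices, not part of any API):

#include <windows.h>

// Sketch only: instead of checking first, attempt the delete and handle
// the failure. Retry policy is arbitrary and for illustration only.
bool DeleteFileWithRetry(const wchar_t* path, int attempts = 3)
{
    for (int i = 0; i < attempts; ++i) {
        if (DeleteFileW(path))
            return true;
        DWORD err = GetLastError();
        if (err != ERROR_SHARING_VIOLATION && err != ERROR_ACCESS_DENIED)
            return false;   // some failure other than "file is in use"
        Sleep(100);         // somebody has it open; back off and try again
    }
    return false;
}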

Related

File modification time gets overwritten by background cache flushing

I have code that performs the following steps (sketched in code right after the list):
1. open file
2. write data
3. set file timestamps (via SetFileInformationByHandle(FileBasicInfo))
4. close file
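For concreteness, a minimal sketch of those four steps (error handling omitted; the path, data and mtime parameters are placeholders):

#include <windows.h>

// Minimal sketch of steps 1-4 above; error handling omitted.
void WriteWithTimestamp(const wchar_t* path, const void* data,
                        DWORD size, LONGLONG mtime)
{
    HANDLE h = CreateFileW(path, GENERIC_WRITE, 0, nullptr,      // step 1
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);

    DWORD written = 0;
    WriteFile(h, data, size, &written, nullptr);                 // step 2

    FILE_BASIC_INFO fbi = {};                                    // step 3
    fbi.LastWriteTime.QuadPart = mtime;  // zeroed members stay unchanged
    SetFileInformationByHandle(h, FileBasicInfo, &fbi, sizeof fbi);

    CloseHandle(h);  // step 4 -- on the NAS share this triggers the
                     // cached Write that clobbers the timestamp
}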
When the file is stored on certain NAS devices (and accessed via a share), its modification time ends up being set to the current time.
According to Process Monitor, the Close() in step 4 results in a Write (the local cache gets flushed/pushed to the NAS device) that (seemingly) updates the file's mtime on the server.
If I add FlushFileBuffers() (or sleep for a few seconds) between steps 2 and 3, everything is fine.
Is this a bug in the SMB implementation of this NAS device (Dell EMC Isilon), or did SetFileInformationByHandle() never promise anything in the first place?
What is the best way to deal with this situation? I would really like to avoid having to call FlushFileBuffers()...
Edit: Great... :-/ It looks like for executables (and only executables) the atime (last access time) gets screwed up too (in the same way). Only this is harder to reproduce -- I need to run this logic a few times. Could be some antivirus... I am still investigating.
Edit 2: According to procmon, the access time gets updated by EXPLORER.EXE -- when it sees an executable, it can't resist opening it and reading portions of it (probably to extract the icon).
You can't really do anything -- I guess Isilon's SMB implementation doesn't support certain things (that would've preserved timestamps).
I simply added FlushFileBuffers() before SetFileInformationByHandle() and made sure there are no related race conditions in my code.
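Continuing the sketch from the question (with h and mtime as defined there), the workaround amounts to:

// Workaround: force the cached data out before setting the timestamps,
// so the server-side Write can no longer clobber them.
FlushFileBuffers(h);                 // inserted between steps 2 and 3

FILE_BASIC_INFO fbi = {};
fbi.LastWriteTime.QuadPart = mtime;
SetFileInformationByHandle(h, FileBasicInfo, &fbi, sizeof fbi);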

Is it safe enough to store a file in the TEMP directory

Is it safe enough to store a file in the %TEMP% directory (obtained via GetTempPath and created with CreateFile) for more than two hours? Are there any guarantees that this file won't be deleted earlier?
Thanks in advance.
A file you create in the TEMP directory must be created with CreateFile's FILE_FLAG_DELETE_ON_CLOSE option. This ensures that the file is always cleaned up and you cannot spray garbage files around, even if your program crashes before it has a chance to delete the file itself.
This option also inevitably forces you to do the Right Thing: keep the file open while you are using it. Which in turn prevents anybody from deleting the file, even if they use a sledgehammer.
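A minimal sketch of that pattern (the "app" prefix is arbitrary and error handling is omitted):

#include <windows.h>

// Sketch: a temp file that the system deletes automatically when the
// last handle is closed, even if the process crashes.
HANDLE CreateSelfDeletingTempFile()
{
    wchar_t dir[MAX_PATH], path[MAX_PATH];
    GetTempPathW(MAX_PATH, dir);
    GetTempFileNameW(dir, L"app", 0, path);
    return CreateFileW(path, GENERIC_READ | GENERIC_WRITE,
                       FILE_SHARE_READ,               // no FILE_SHARE_DELETE
                       nullptr, CREATE_ALWAYS,
                       FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE,
                       nullptr);
}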
Lots of programs don't follow this advice, and a user's TEMP directory tends to be a big old mess that forces the user to clean it up manually once in a while, or with Windows' built-in "Disk Cleanup" applet. That is exactly the kind of scenario in which you will lose the file if you don't follow this advice. Best to use %AppData% instead.
There are no guarantees. This folder is usually not cleared unless the user starts a cleanup.
But anyone can delete files there, and it is wise to do so on a regular basis.
To prevent the file from being deleted, you can keep a handle open (assuming the application is running the whole time) and not specify FILE_SHARE_DELETE (and, if applicable, not FILE_SHARE_WRITE either).
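A short sketch of that guard (the path is a placeholder):

#include <windows.h>

// Sketch: hold the file open without FILE_SHARE_DELETE (and without
// FILE_SHARE_WRITE) so nobody can delete or modify it while this
// handle exists. Readers are still allowed via FILE_SHARE_READ.
HANDLE hGuard = CreateFileW(L"C:\\Temp\\important.tmp",
                            GENERIC_READ | GENERIC_WRITE,
                            FILE_SHARE_READ,
                            nullptr, OPEN_EXISTING,
                            FILE_ATTRIBUTE_NORMAL, nullptr);
// ... keep hGuard open for as long as the file must survive ...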
Alternative:
Use a path under %APPDATA% or %PROGRAMDATA% that you clear yourself regularly, or let the user specify a path.
In addition, you could register a scheduled task to clean the folder regularly.
If you do not want another process to be able to delete your files, just keep them open with a share mode of FILE_SHARE_READ | FILE_SHARE_WRITE. Any attempt to delete them will then fail, while any other process can still read or write them.
BTW: this is not specific to files living in the %TEMP% folder.
If you cannot have a process keep them open all the time, you must rely on the other processes (and other users) on your system not doing anything ...

How to check if a file is still being written?

How can I check if a file is still being written? I need to wait for a file to be created, written and closed again by another process, so I can go on and open it again in my process.
In general, this is a difficult problem to solve. You can ask whether a file is open, under certain circumstances; however, if the other process is a script, it might well open and close the file multiple times. I would strongly recommend you use an advisory lock, or some other explicit method for the other process to communicate when it's done with the file.
That said, if that's not an option, there is another way. If you look in the /proc/<pid>/fd directories, where <pid> is the numeric process ID of some running process, you'll see a bunch of symlinks to the files that process has open. The permissions on the symlink reflect the mode the file was opened for - write permission means it was opened for write mode.
So, if you want to know if a file is open, just scan over every process's /proc entry, and every file descriptor in it, looking for a writable symlink to your file. If you know the PID of the other process, you can directly look at its proc entry, as well.
This has some major downsides, of course. First, you can only see open files for your own processes, unless you're root. It's also relatively slow, and only works on Linux. And again, if the other process opens and closes the file several times, you're stuck - you might end up seeing it during the closed period, and there's no easy way of knowing if it'll open it again.
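A rough sketch of such a scan, assuming C++17's std::filesystem and the Linux /proc layout described above (the function name is made up here):

#include <filesystem>

namespace fs = std::filesystem;

// Rough sketch: return true if some visible process has `target` open
// for writing, judging by the permission bits on its /proc/<pid>/fd
// symlinks. Linux-only; sees other users' processes only as root.
bool is_open_for_write(const fs::path& target)
{
    std::error_code ec;
    for (const auto& proc : fs::directory_iterator("/proc", ec)) {
        for (const auto& fd :
             fs::directory_iterator(proc.path() / "fd", ec)) {
            auto perms = fs::symlink_status(fd.path(), ec).permissions();
            if (ec || (perms & fs::perms::owner_write) == fs::perms::none)
                continue;   // not open for writing (or not readable by us)
            if (fs::read_symlink(fd.path(), ec) == target)
                return true;
        }
    }
    return false;
}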
You could let the writing process write a sentinel file (say "sentinel.ok") after it is finished writing the data file your reading process is interested in. In the reading process you can check for the existence of the sentinel before reading the data file, to ensure that the data file is completely written.
@blu3bird's idea of using a sentinel file isn't bad, but it requires modifying the program that's writing the file.
Here's another possibility that also requires modifying the writer, but it may be more robust:
Write to a temporary file, say "foo.dat.part". When writing is complete, rename "foo.dat.part" to "foo.dat". That way a reader either won't see "foo.dat" at all, or will see a complete version of it.
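A tiny sketch of that pattern, using the filenames from above:

#include <cstdio>
#include <fstream>

// Write to "foo.dat.part", then atomically rename it to "foo.dat" once
// the data is complete, so readers never see a partial file.
void write_atomically(const char* data, std::streamsize len)
{
    {
        std::ofstream out("foo.dat.part", std::ios::binary);
        out.write(data, len);
    }   // stream closed here, so the .part file is complete
    std::rename("foo.dat.part", "foo.dat");   // atomic on POSIX filesystems
}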
You can try using inotify
http://en.wikipedia.org/wiki/Inotify
If you know that the file will be opened once, written and then closed, it would be possible for your app to wait for the IN_CLOSE_WRITE event.
However, if the behaviour of the other application doing the writing is more like open, write, close, open, write, close, ... then you'll need some other mechanism to determine when the other app has truly finished with the file.
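A minimal sketch of waiting for IN_CLOSE_WRITE (the directory and file name are placeholders, and this only helps if the writer opens and closes the file exactly once):

#include <sys/inotify.h>
#include <unistd.h>
#include <cstring>
#include <cstdio>

// Block until "target.dat" in "/some/dir" is closed after being
// opened for writing.
int main()
{
    int fd = inotify_init();
    int wd = inotify_add_watch(fd, "/some/dir", IN_CLOSE_WRITE);
    if (fd < 0 || wd < 0) { std::perror("inotify"); return 1; }

    char buf[4096];
    for (;;) {
        ssize_t len = read(fd, buf, sizeof buf);
        if (len <= 0) break;
        for (char* p = buf; p < buf + len;) {
            auto* ev = reinterpret_cast<inotify_event*>(p);
            if (ev->len && std::strcmp(ev->name, "target.dat") == 0) {
                std::puts("writer closed target.dat");
                close(fd);
                return 0;
            }
            p += sizeof(inotify_event) + ev->len;
        }
    }
    return 1;
}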

How to determine when files are done copying for further processing?

Alright, so to start: this is strictly for Windows, and I'd prefer C++ over .NET. I'm not opposed to boost::filesystem, although if it can be avoided in favor of the straight Windows API, I'd prefer that.
The scenario: an application on another machine, which I can't change, creates files in a particular directory on the machine where I need to make backups and do some extra processing. Currently I've made a little application which sits and listens for change notifications in a target directory, using the FindFirstChangeNotification and FindNextChangeNotification Windows APIs.
The problem is that while I can get notified when files in the directory are created, modified, change size, etc., it only notifies once and does not tell me specifically which files changed. I've looked at ReadDirectoryChangesW as well, but it's the same story there, except that I get slightly more specific information.
Now, I can scan the directory and try to acquire locks or open the files to determine what specifically changed since the last notification and whether the files are available for further use. But in the case of a large file being copied, I've found this isn't good enough: the file won't be ready to be manipulated, and I won't get any further notifications after the first one, so there is no way to tell when the copy has actually finished unless, after the first notification, I continually try to acquire locks until one succeeds.
The only other thing I can think of that would be less hackish would be some kind of end-token file, but since I don't have control over the application creating the files in the first place, I don't see how I'd go about doing that, and it's still not ideal.
Any suggestions?
This is a fairly common problem and one that doesn't have an easy answer. Acquiring locks is one of the best options when you cannot change the thing at the remote end. Another I have seen is to watch the file at intervals until the size doesn't change for an interval or two.
Other strategies include writing a zero-byte file as a trigger when the main file is complete, and writing to a temp directory and then moving the complete file to the real destination. But to be reliable, it must be the sender who controls this. As the receiver, you are constrained to watching the directory and waiting for the file to settle.
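A hedged sketch of the size-settling approach (the polling interval and the number of stable checks are arbitrary choices):

#include <windows.h>

// Poll the file size until it stops changing for a couple of intervals.
bool WaitUntilSizeSettles(const wchar_t* path, int neededStable = 2,
                          DWORD intervalMs = 1000)
{
    LONGLONG lastSize = -1;
    int stable = 0;
    while (stable < neededStable) {
        WIN32_FILE_ATTRIBUTE_DATA fad;
        if (!GetFileAttributesExW(path, GetFileExInfoStandard, &fad))
            return false;   // file vanished or is inaccessible
        LONGLONG size =
            (LONGLONG(fad.nFileSizeHigh) << 32) | fad.nFileSizeLow;
        stable = (size == lastSize) ? stable + 1 : 0;
        lastSize = size;
        Sleep(intervalMs);
    }
    return true;
}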
It looks like ReadDirectoryChangesW is going to be your best bet. For each file copy operation, you should receive FILE_ACTION_ADDED followed by a bunch of FILE_ACTION_MODIFIED notifications. On the last FILE_ACTION_MODIFIED notification, the file should no longer be locked by the copying process. So, if you try to acquire a lock after each FILE_ACTION_MODIFIED of the copy, it should fail until the copy completes. It's not a particularly elegant solution, but there don't seem to be any notifications available for when a file copy completes.
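A sketch of that probe (the function name is made up; a failed open is taken to mean the copy is still in progress):

#include <windows.h>

// After a FILE_ACTION_MODIFIED notification, try to open the file with
// no sharing at all. A sharing violation means it is still being copied.
bool IsCopyFinished(const wchar_t* path)
{
    HANDLE h = CreateFileW(path, GENERIC_READ,
                           0,   // exclusive access: no sharing
                           nullptr, OPEN_EXISTING,
                           FILE_ATTRIBUTE_NORMAL, nullptr);
    if (h == INVALID_HANDLE_VALUE)
        return false;   // typically ERROR_SHARING_VIOLATION while copying
    CloseHandle(h);
    return true;
}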
You can process the data once the file is closed, right? So the task is to track when the file is closed. This can be done using a file system filter driver. You can write your own, or you can use our CallbackFilter product.

Ensuring a file is flushed when file created in external process (Win32)

Windows Win32 C++ question about flushing file activity to disk.
I have an external application (run using CreateProcess) which does some file creation, i.e., when it returns it will have created a file with some content.
How can I ensure that the file the process created is really flushed to disk before I proceed?
By this I mean not the C++ buffers but really flushing to disk (e.g. with FlushFileBuffers).
Remember that I don't have access to any file HANDLE - this is all of course hidden inside the external process.
I guess I could open up a handle of my own to the file and then use FlushFileBuffers, but it's not clear this would work (since my handle doesn't actually contain anything which needs flushing).
Finally, I want this to run in non-admin userspace so I cannot use FlushFileBuffers on a whole volume.
Any ideas?
UPDATE: Why do I think this is a problem?
I'm working on a data backup application. Essentially it has to create some files as described. It then has to update its internal DB (using the SQLite embedded DB).
I recently had a data corruption issue which occurred during a bluescreen (the cause of which was unrelated to my app).
What I'm concerned about is application integrity during a system crash. And yes, I do care about this because this app is a data backup app.
The use case I'm concerned about is this:
1. A small data file is created using the external process. This write is waiting in the OS cache to be written to disk.
2. I update the DB and commit. This is a disk activity. This write is also waiting in the OS cache.
3. A system failure occurs.
As I see it, we're now in a potential race condition. If "1" gets flushed and "2" doesn't, then we're fine (as the DB transaction wasn't committed). If neither gets flushed, or both get flushed, we're also OK.
As I understand it, the writes will be non-deterministic. i.e., I'm not aware that the OS will guarantee to write "1" before "2". (Am I wrong?)
So, if "2" gets flushed, but "1" doesn't then we have a problem.
What I observed was that the DB was correctly updated, but that the file had garbage in it: the last two thirds of the data were binary zeroes. Now, I don't know what a file that was only partly flushed at the time of a bluescreen looks like, but I wouldn't be surprised if it looked like that.
Can I guarantee this is the cause? No I cannot guarantee this. I'm just speculating. It could just be that the file was "naturally" corrupted due to disk failure or as a result of the blue screen.
With regards to performance, this is something I believe I can deal with.
For example, the default behaviour of SQLite is to do a full file flush (using FlushFileBuffers) every time you commit a transaction. They are quite clear that if you don't do this then at the time of system crash, you might have a corrupted DB.
Also, I believe I can mitigate the performance hit by only flushing at "checkpoints". For example, writing 50 files, flushing the lot and then writing to the DB.
How likely is all this to be a problem? Beats me. But my app might well be archiving at or around the time of a system failure, so it might be more likely than you think.
Hope that explains why I want to do this.
Why would you want this? The OS will make sure that the data is flushed to disk in due time. If you access it, you will get the data either from the cache or from disk, so this is transparent for you.
If you need some safety in case of disaster, then you must call FlushFileBuffers, for example by creating a process with admin rights after running the external process. But that can severely impact the performance of the whole machine.
Your only other option is to modify the source of the other process.
[EDIT] The simplest solution is probably to copy the file in your process and then flush the copy (since you have the handle). Save the copy under a name which says "not committed to the database".
Then update the database. Write into the database, "updated from file ...". If this entry already exists next time, don't update the database and skip this step.
Flush the database to disk.
Rename the file to "file has been processed into database". A rename is an atomic operation (so it either happens or it doesn't).
If you can't think of a good filename for the different states, then use subfolders and move the file between them.
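A rough sketch of that copy/flush/rename workflow (the parameter names and two-stage naming are illustrative, not from the original answer):

#include <windows.h>

// Copy the externally created file, flush the copy (we own its handle),
// update/flush the database, then atomically rename to mark completion.
bool CommitExternalFile(const wchar_t* src, const wchar_t* pendingName,
                        const wchar_t* finalName)
{
    if (!CopyFileW(src, pendingName, FALSE))
        return false;

    HANDLE h = CreateFileW(pendingName, GENERIC_WRITE, 0, nullptr,
                           OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (h == INVALID_HANDLE_VALUE)
        return false;
    FlushFileBuffers(h);   // the copy is now really on disk
    CloseHandle(h);

    // ... update the database and flush it here ...

    // The rename marks the file as fully processed into the database.
    return MoveFileExW(pendingName, finalName, MOVEFILE_WRITE_THROUGH) != 0;
}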
Well, there are no attractive options here. There is no documented way to retrieve the file handle you need from the process. Although there are undocumented ones, go there (via DuplicateHandle) only with careful consideration.
Yes, calling FlushFileBuffers on a volume handle is the documented way. You can avoid the privilege problem by letting a service make the call. Talk to it from your app with one of the standard process interop mechanisms. A named pipe whose name is prefixed with Global\ is probably the easiest way to get that going.
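For reference, a minimal sketch of the volume flush such a service would perform ("C:" is a placeholder volume; the call requires administrative rights):

#include <windows.h>

// Flush all cached data for an entire volume. Needs elevation.
HANDLE hVol = CreateFileW(L"\\\\.\\C:", GENERIC_WRITE,
                          FILE_SHARE_READ | FILE_SHARE_WRITE,
                          nullptr, OPEN_EXISTING, 0, nullptr);
if (hVol != INVALID_HANDLE_VALUE) {
    FlushFileBuffers(hVol);
    CloseHandle(hVol);
}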
After your update I think http://sqlite.org/atomiccommit.html gives you the answers you need.
The way SQLite ensures that everything is flushed to disk works. So it will work for you as well -- take a look at the source.