inotify does not raise DELETE_SELF if an open file-fd exists - c++

I am trying to monitor a directory using inotify, registering for ALL the events. My project requires me to track any MOVE_SELF operation performed on the monitored directory, so that I can detect which new location it has been moved to. To achieve this I store an open file descriptor (int fd) for the monitored directory, and when I get a MOVE_SELF I try to get the new path using:
//code to store a file descriptor referring to the monitored directory
fd = open(watchPath.c_str(), O_RDONLY);
//code to learn the new location of the moved directory
char fdpath[4096];
char path[4096];
snprintf(fdpath, sizeof(fdpath), "/proc/self/fd/%d", fd);
ssize_t sz = readlink(fdpath, path, sizeof(path) - 1);
if (sz >= 0)
    path[sz] = '\0'; // readlink does not null-terminate; path now holds the new location after the move
The side effect of this is that if I delete the directory, I do not get the DELETE_SELF event, because I am still holding an open file descriptor to it. Could anyone suggest how to get around this issue?
Thanks,
-Sandeep

In case someone stumbles into this issue: this is definitely expected behavior. Inotify does not monitor "files", it monitors "file objects" (i.e. inodes). An inode is not removed by the kernel until all open file descriptors pointing to it are closed.
This is also why IN_DELETE/IN_DELETE_SELF is not triggered if you remove one of several hard links to a file (hard links share the same inode).
You can partially work around the hard link issue by subscribing to the IN_ATTRIB event: it is triggered when the inode's reference count changes (e.g. when one of the hard links is deleted), so you can use it to check whether the file still exists at the old path.
As for "open descriptors" issue — I am not aware of any workarounds. Personally, I just don't care. So what if your program temporarily de-synchronizes with disk contents? Even if inotify were completely flawless, you would still need occasional re-sync due to queue overruns and event races.
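To make the IN_ATTRIB workaround above concrete, here is a minimal sketch, assuming IN_ATTRIB is delivered for the watched object as described and that fd is the descriptor you already hold for the readlink() trick (the handler name is illustrative): when the event arrives, fstat() the descriptor and check the link count.
#include <sys/stat.h>
#include <cstdio>

// Called when inotify reports IN_ATTRIB for the watched object.
void on_attrib_event(int fd, const char* watchPath)
{
    struct stat st;
    if (fstat(fd, &st) == 0 && st.st_nlink == 0)
    {
        // The last link is gone: treat the object as deleted, even though
        // our own open descriptor still keeps the inode alive.
        std::printf("%s: last link removed, treating as deleted\n", watchPath);
    }
}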

Related

Could DropBox interfere with DeleteFile()/rename()

I had the following code which got executed every two minutes all day long:
int successfully_deleted = DeleteFile(dest_filename);
if (!successfully_deleted)
{
    // this never happens
}
rename(source_filename, dest_filename);
Once every several hours the rename() would fail with errno=13 (EACCES). The files involved were all sitting on a DropBox directory and I had a hunch that DropBox could be the cause. I figured that it might just be possible that the DeleteFile() function may return with a non-zero successfully_deleted but actually DropBox could still be busy doing some stuff in relation to the deletion that prevented rename() from succeeding. What I did next was to change rename() to my_rename() which would attempt a rename() and upon any failure would Sleep() for one second and try a second time. Sure enough that has worked perfectly ever since. What's more, I get a diagnostic message displaying first-attempt-failures every several hours. It has never failed on the second attempt.
So you could say that the problem is entirely solved... but I would like to understand what might be going on so as to better defend myself against any related DropBox issues in the future...
Really I would like to have a new super_delete() function which does not return until the file is properly deleted and finished with in all respects.
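For reference, a minimal sketch of the my_rename() wrapper described above, assuming Win32 and the one-second, single-retry behavior from the question (everything else is illustrative):
#include <windows.h>
#include <cstdio>
#include <cerrno>

int my_rename(const char* source_filename, const char* dest_filename)
{
    if (rename(source_filename, dest_filename) == 0)
        return 0;
    // Diagnostic for first-attempt failures, then give Dropbox/AV a moment to let go.
    std::fprintf(stderr, "rename failed (errno=%d), retrying in 1 second\n", errno);
    Sleep(1000);
    return rename(source_filename, dest_filename);
}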
Under Windows, a request to delete a file never really deletes it right away. It marks the file's FCB (File Control Block) with a special flag (FCB_STATE_DELETE_ON_CLOSE). The real deletion happens only when the last file handle is closed.
The DeleteFile function marks a file for deletion on close. Therefore, the file deletion does not occur until the last handle to the file is closed. Subsequent calls to CreateFile to open the file fail with ERROR_ACCESS_DENIED.
Also, if a section (memory-mapped file) is open on the file, the file cannot even be marked for deletion; the API call fails with STATUS_CANNOT_DELETE. So in general it is impossible to always delete a file.
If other open handles to the file exist (but no section!), starting with Windows 10 RS1 there is new delete functionality: FileDispositionInformationEx with FILE_DISPOSITION_POSIX_SEMANTICS. In this case:
Normally a file marked for deletion is not actually deleted until all open handles for the file have been closed and the link count for the file is zero. When marking a file for deletion using FILE_DISPOSITION_POSIX_SEMANTICS, the link gets removed from the visible namespace as soon as the POSIX delete handle has been closed, but the file's data streams remain accessible by other existing handles until the last handle has been closed.
ULONG DeletePosix(PCWSTR lpFileName)
{
    // FILE_SHARE_VALID_FLAGS == FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE
    HANDLE hFile = CreateFileW(lpFileName, DELETE, FILE_SHARE_VALID_FLAGS, 0, OPEN_EXISTING,
                               FILE_FLAG_BACKUP_SEMANTICS | FILE_FLAG_OPEN_REPARSE_POINT, 0);
    if (hFile == INVALID_HANDLE_VALUE)
    {
        return GetLastError();
    }
    static FILE_DISPOSITION_INFO_EX fdi = { FILE_DISPOSITION_DELETE | FILE_DISPOSITION_POSIX_SEMANTICS };
    ULONG dwError = SetFileInformationByHandle(hFile, FileDispositionInfoEx, &fdi, sizeof(fdi))
        ? NOERROR : GetLastError();
    // win10 rs1: file removed from parent folder here
    CloseHandle(hFile);
    return dwError;
}
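Usage inside whatever function needs it would then be a couple of lines; the path below is only a placeholder:
ULONG err = DeletePosix(L"C:\\temp\\locked.log");
if (err != NOERROR)
    printf("DeletePosix failed: %lu\n", err);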
Update
Sorry, I didn't understand the question correctly the first time. I thought DeleteFile returned error 13.
Now I understand that DeleteFile succeeds but rename fails immediately after.
It could just be a sync issue with the filesystem. After calling DeleteFile, the file will be deleted when the OS commits the changes to the filesystem. That may not happen immediately.
If you need to perform multiple operations on the same path, you could have a look at transactions: https://learn.microsoft.com/it-it/windows/desktop/api/winbase/nf-winbase-deletefiletransacteda.
-- OLD ANSWER --
That is correct. If another application holds handles to that file, DeleteFile will fail.
Citing MSDN docs https://learn.microsoft.com/en-us/windows/desktop/api/winbase/nf-winbase-deletefile :
The DeleteFile function fails if an application attempts to delete a file that has other handles open for normal I/O or as a memory-mapped file (FILE_SHARE_DELETE must have been specified when other handles were opened).
This applies to Dropbox, the antivirus, or in general any other application that may open those files.
Dropbox may open the file at any moment to compute its hash (to look for changes). The same goes for the antivirus.
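If you control the code that opens these files yourself, the sharing-mode requirement from the quoted docs can be satisfied explicitly. A minimal sketch (the function name is illustrative):
#include <windows.h>

// Open a file for reading while still allowing other processes to delete or rename it.
HANDLE OpenSharedForRead(const wchar_t* path)
{
    return CreateFileW(path, GENERIC_READ,
                       FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                       NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
}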

Close all tasks that are using a file [duplicate]

PROBLEM HISTORY:
Now I use Windows Media Player SDK 9 to play AVI files in my desktop application. It works well on Windows XP, but when I run it on Windows 7 I hit a problem: I cannot remove the AVI file immediately after playback because open file handles still exist. On Windows XP I have 2 open file handles while the file is playing and they are closed after the playback window is closed, but on Windows 7 I have 4 open handles while the file is playing and 2 of them remain after the playback window is closed. They become free only when the application exits.
QUESTION:
How can I solve this problem? How can I remove a file that still has open handles? Is there something like "forced deletion"?
The problem is that you're not the only one getting handles to your file. Other processes and services are also able to open the file. So deleting it isn't possible until they release their handles. You can rename the file while those handles are open. You can copy the file while those handles are open. Not sure if you can move the file to another container, however?
Other processes and services, especially antivirus, indexing, etc.
Here's a function I wrote to accomplish "Immediate Delete" under Windows:
#include <windows.h>
#include <wchar.h>
#include <shlwapi.h>   // PathFileExistsW, PathRemoveFileSpecW; link with Shlwapi.lib

bool DeleteFileNow(const wchar_t * filename)
{
    // don't do anything if the file doesn't exist!
    if (!PathFileExistsW(filename))
        return false;
    // determine the path in which to store the temp filename
    wchar_t path[MAX_PATH];
    wcscpy_s(path, filename);
    PathRemoveFileSpecW(path);
    // generate a guaranteed to be unique temporary filename to house the pending delete
    wchar_t tempname[MAX_PATH];
    if (!GetTempFileNameW(path, L".xX", 0, tempname))
        return false;
    // move the real file to the dummy filename
    if (!MoveFileExW(filename, tempname, MOVEFILE_REPLACE_EXISTING))
    {
        // clean up the temp file
        DeleteFileW(tempname);
        return false;
    }
    // queue the deletion (the OS will delete it when all handles (ours or other processes) close)
    return DeleteFileW(tempname) != FALSE;
}
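A hedged usage example (the path is made up): because the file is first renamed away to a temp name, the original name becomes free immediately, even while other handles remain open.
if (!DeleteFileNow(L"C:\\videos\\clip.avi"))
    wprintf(L"could not queue deletion of clip.avi\n");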
Technically you can delete a locked file by using MoveFileEx and passing in MOVEFILE_DELAY_UNTIL_REBOOT. When the lpNewFileName parameter is NULL, the Move turns into a delete and can delete a locked file. However, this is intended for installers and, among other issues, requires administrator privileges.
Have you checked which application is still using the AVI file?
You can do this with handle.exe. You can then try deleting/moving the file after closing the process(es) that are using it.
The alternative solution would be to use the Unlocker application (it's free).
One of the above two methods should fix your problem.
Have you already tried to ask WMP to release the handles instead? (IWMPCore::close seems to do that)

Windows CE/Embedded C++ non-volatile files created by my app being deleted on reboot

I am developing an application with Windows Embedded Compact 7 using C++. The problem I have just recently come across is that a .ini settings file and a .txt logfile that I create with the application and place on the \Mounted Volume (which is a non-volatile partition) are being deleted on a reboot.
The application used to open the .ini file, edit the values, save the file, and the next time I booted up it would be there with the settings I updated. Only recently, after a major software update, did I start having problems, but the specific functions that deal with opening and closing the files were not touched during the update.
It does seem to be related to my application and the way I open/edit/save/close the file, because if I open the .ini file with WordPad, edit values manually and save it, the settings are preserved across a reboot. I also have appropriate error handling in all the functions and no errors are occurring.
I have read on MSDN about possibly needing to "flush" open buffers. Possibly I need to do this? I was really hoping someone has dealt with Windows Embedded/CE and has run across a similar issue of a non-volatile file partition acting more like volatile memory.
Thanks for any help! Here is the code that I am using to write to a logfile which is essentially the same code as writing to the .ini file:
int writeLogFile(const char* szString)
{
    FILE* pFile;
    if ((pFile = fopen("\\Mounted Volume\\logFile.txt", "a+")) == NULL)
    {
        debugMessage("Function: writeLogFile - Error! Could not open logFile.txt\n\r");
        return 0;   // bail out: writing through a NULL FILE* would crash
    }
    else
        debugMessage("Function: writeLogFile - Notice. Opened logFile.txt\n\r");
    if (fprintf(pFile, "%s\r", szString) < 0)
        debugMessage("Function: writeLogFile - Error! There was a problem writing the alarm string to logFile.txt.\n\r");
    if (fclose(pFile))
        debugMessage("Function: writeLogFile - Error! Could not close logFile.txt\n\r");
    else
        debugMessage("Function: writeLogFile - Notice. Closed logFile.txt\n\r");
    return 1;
}
Could you try to add an fflush call before you close the file?
That should force an actual write. If you don't force it explicitly, the file system may cache the writes.
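Applied to the writeLogFile() function above, that would be a small addition just before the fclose() call (a sketch only; the error message mirrors the existing ones):
// ... after the fprintf() call, before fclose():
if (fflush(pFile) != 0)
    debugMessage("Function: writeLogFile - Error! Could not flush logFile.txt\n\r");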

How can I delete a file upon its close in C++ on Linux?

I wish for a file to be deleted from disk only when it is closed. Up until that point, other processes should be able to see the file on disk and read its contents, but eventually after the close of the file, it should be deleted from disk and no longer visible on disk to other processes.
Open the file, then delete it while it's open. Other processes will be able to use the file, but as soon as all handles to the file are closed, it will be deleted.
Edit: based on the comments WilliamKF added later, this won't accomplish what he wants -- it'll keep the file itself around until all handles to it are closed, but the directory entry for the file name will disappear as soon as you call unlink/remove.
Open files in Unix are reference-counted. Every open(2) increments the counter, every close(2) decrements it. The counter is shared by all processes on the system.
Then there's a link count for a disk file. A brand-new file gets a count of one. The count is incremented by the link(2) system call, and unlink(2) decrements it. The file is removed from the file system when this count drops to zero.
The only way to accomplish what you ask is to open the file in one process, then unlink(2) it. Other processes will be able to open(2) or stat(2) it between open(2) and unlink(2). Assuming the file had only one link, it'll be removed when all processes that have it open close it.
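A minimal sketch of that open-then-unlink sequence, with an illustrative file name:
#include <fcntl.h>
#include <unistd.h>

int main()
{
    int fd = open("/tmp/scratch.dat", O_RDWR | O_CREAT, 0600);
    if (fd < 0)
        return 1;
    // Other processes can open(2)/stat(2) the file at this point.
    unlink("/tmp/scratch.dat");        // the name disappears from the directory now
    write(fd, "still usable", 12);     // the data remains reachable through fd
    close(fd);                         // inode is freed here if no one else has it open
    return 0;
}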
Use unlink
#include <unistd.h>
int unlink(const char *pathname);
unlink() deletes a name from the filesystem. If that name was the last link to a file and no processes have the file open the file is deleted and the space it was using is made available for reuse.
If the name was the last link to a file but any processes still have the file open the file will remain in existence until the last file descriptor referring to it is closed.
If the name referred to a symbolic link the link is removed.
If the name referred to a socket, fifo or device the name for it is removed but processes which have the object open may continue to use it.
Not sure, but you could try remove(), though it looks more C-style.
Maybe boost::filesystem::remove?
bool remove( const path & ph );
Precondition: !ph.empty()
Returns: The value of exists( ph ) prior to the establishment of the postcondition.
Postcondition: !exists( ph )
Throws: if ph.empty() || (exists(ph) && is_directory(ph) && !is_empty(ph)). See empty path rationale.
Note: Symbolic links are themselves deleted, rather than what they point to being deleted.
Rationale: Does not throw when !exists( ph ) because not throwing: Works correctly if ph is a dangling symbolic link. Is slightly easier-to-use for many common use cases. Is slightly higher-level because it implies use of postcondition semantics rather than effects semantics, which would be specified in the somewhat lower-level terms of interactions with the operating system. There is, however, a slight decrease in safety because some errors will slip by which otherwise would have been detected. For example, a misspelled path name could go undetected for a long time.
The initial version of the library threw an exception when the path did not exist; it was changed to reflect user complaints.
You could create a wrapper class that counts references, using one of the above methods to delete the file.
#include <string>
#include <unistd.h>

class MyFileClass{
    static unsigned _count;
    std::string _path;
public:
    MyFileClass(const std::string& path) : _path(path){
        //open file with path here
        ++_count;
    }
    //other methods
    ~MyFileClass(){
        if (! (--_count)){
            unlink(_path.c_str()); //delete file once the last reference goes away
        }
    }
};
unsigned MyFileClass::_count = 0; //elsewhere
I think you need to extend your notion of “closing the file” beyond fclose or std::fstream::close to whatever you intend to do. That might be as simple as
#include <fstream>
#include <string>
#include <unistd.h>

class MyFile : public std::fstream {
    std::string filename;
public:
    MyFile(const std::string &fname) : std::fstream(fname), filename(fname) {}
    ~MyFile() { unlink(filename.c_str()); }
};
or it may be something much more elaborate. For all I know, it may even be much simpler – if you close files only at one or two places in your code, the best thing to do may be to simply unlink the file there (or use boost::filesystem::remove, as Tom suggests).
OTOH, if all you want to achieve is that processes started from your process can use the file, you may not need to keep it lying around on disk at all. Forked processes inherit open file descriptors. Note that inherited descriptors share the file offset with the parent (dup'ing them does not change that), so if the child needs an independent position it should re-open the file before the unlink, or use pread/pwrite.

How to see if a subfile of a directory has changed

In Windows, is there an easy way to tell if a folder has a subfile that has changed?
I verified, and the last modified date on the folder does not get updated when a subfile changes.
Is there a registry entry I can set that will modify this behavior?
If it matters, I am using an NTFS volume.
I would ultimately like to have this ability from a C++ program.
Scanning an entire directory recursively will not work for me because the folder is much too large.
Update: I really need a way to do this without a process running while the change occurs. So installing a file system watcher is not optimal for me.
Update2: The archive bit will also not work because it has the same problem as the last modification date. The file's archive bit will be set, but the folders will not.
This article should help. Basically, you create one or more notification objects such as:
HANDLE dwChangeHandles[2];
dwChangeHandles[0] = FindFirstChangeNotification(
    lpDir,                          // directory to watch
    FALSE,                          // do not watch subtree
    FILE_NOTIFY_CHANGE_FILE_NAME);  // watch file name changes
if (dwChangeHandles[0] == INVALID_HANDLE_VALUE)
{
    printf("\n ERROR: FindFirstChangeNotification function failed.\n");
    ExitProcess(GetLastError());
}
// Watch the subtree for directory creation and deletion.
dwChangeHandles[1] = FindFirstChangeNotification(
    lpDrive,                        // directory to watch
    TRUE,                           // watch the subtree
    FILE_NOTIFY_CHANGE_DIR_NAME);   // watch dir name changes
if (dwChangeHandles[1] == INVALID_HANDLE_VALUE)
{
    printf("\n ERROR: FindFirstChangeNotification function failed.\n");
    ExitProcess(GetLastError());
}
and then you wait for a notification:
while (TRUE)
{
    // Wait for notification.
    printf("\nWaiting for notification...\n");
    DWORD dwWaitStatus = WaitForMultipleObjects(2, dwChangeHandles,
                                                FALSE, INFINITE);
    switch (dwWaitStatus)
    {
    case WAIT_OBJECT_0:
        // A file was created, renamed, or deleted in the directory.
        // Restart the notification.
        if (FindNextChangeNotification(dwChangeHandles[0]) == FALSE)
        {
            printf("\n ERROR: FindNextChangeNotification function failed.\n");
            ExitProcess(GetLastError());
        }
        break;
    case WAIT_OBJECT_0 + 1:
        // Restart the notification.
        if (FindNextChangeNotification(dwChangeHandles[1]) == FALSE)
        {
            printf("\n ERROR: FindNextChangeNotification function failed.\n");
            ExitProcess(GetLastError());
        }
        break;
    case WAIT_TIMEOUT:
        // A time-out occurred. This would happen if some value other
        // than INFINITE is used in the Wait call and no changes occur.
        // In a single-threaded environment, you might not want an
        // INFINITE wait.
        printf("\nNo changes in the time-out period.\n");
        break;
    default:
        printf("\n ERROR: Unhandled dwWaitStatus.\n");
        ExitProcess(GetLastError());
        break;
    }
}
This is perhaps overkill, but the IFS kit from MS or the FDDK from OSR might be an alternative. Create your own filesystem filter driver with simple monitoring of all changes to the filesystem.
ReadDirectoryChangesW
Some excellent sample code in this CodeProject article
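For reference, here is a bare-bones synchronous sketch of ReadDirectoryChangesW; the directory path is illustrative and there is no error handling beyond the essentials:
#include <windows.h>
#include <cstdio>

int main()
{
    HANDLE hDir = CreateFileW(L"C:\\watched", FILE_LIST_DIRECTORY,
                              FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                              NULL, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL);
    if (hDir == INVALID_HANDLE_VALUE)
        return 1;

    // DWORD-aligned buffer, as required by FILE_NOTIFY_INFORMATION.
    DWORD buffer[16 * 1024];
    DWORD bytes = 0;
    while (ReadDirectoryChangesW(hDir, buffer, sizeof(buffer),
                                 TRUE,   // watch the whole subtree
                                 FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_LAST_WRITE,
                                 &bytes, NULL, NULL))   // NULL OVERLAPPED -> synchronous call
    {
        FILE_NOTIFY_INFORMATION* fni = (FILE_NOTIFY_INFORMATION*)buffer;
        for (;;)
        {
            wprintf(L"action %lu: %.*s\n", fni->Action,
                    (int)(fni->FileNameLength / sizeof(WCHAR)), fni->FileName);
            if (fni->NextEntryOffset == 0)
                break;
            fni = (FILE_NOTIFY_INFORMATION*)((BYTE*)fni + fni->NextEntryOffset);
        }
    }
    CloseHandle(hDir);
    return 0;
}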
If you can't run a process when the change occurs, then there's not much you can do except scan the filesystem, and check the modification date/time. This requires you to store each file's last date/time, though, and compare.
You can speed this up by using the archive bit (though it may mess up your backup software, so proceed carefully).
An archive bit is a file attribute present in many computer file systems, notably FAT, FAT32, and NTFS. The purpose of an archive bit is to track incremental changes to files for the purpose of backup, also called archiving.
As the archive bit is a binary bit, it is either 1 or 0, or in this case more frequently called set (1) and clear (0). The operating system sets the archive bit any time a file is created, moved, renamed, or otherwise modified in any way. The archive bit therefore represents one of two states: "changed" and "not changed" since the last backup.
Archive bits are not affected by simply reading a file. When a file is copied, the original file's archive bit is unaffected, however the copy's archive bit will be set at the time the copy is made.
So the process would be:
1. Clear the archive bit on all the files.
2. Let the file system change over time.
3. Scan all the files; any with the archive bit set have changed.
This will eliminate the need for your program to keep state, and since you're only going over the directory entries (where the bit is stored) and they are clustered, it should be very, very fast.
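Steps 1 and 3 map onto the Win32 file-attribute APIs roughly as follows (the helper names are illustrative):
#include <windows.h>

// Step 1: clear the archive bit on a file after you have processed it.
BOOL ClearArchiveBit(const wchar_t* path)
{
    DWORD attrs = GetFileAttributesW(path);
    if (attrs == INVALID_FILE_ATTRIBUTES)
        return FALSE;
    return SetFileAttributesW(path, attrs & ~FILE_ATTRIBUTE_ARCHIVE);
}

// Step 3: during the later scan, a set archive bit means the file changed since step 1.
BOOL HasChangedSinceClear(const wchar_t* path)
{
    DWORD attrs = GetFileAttributesW(path);
    return attrs != INVALID_FILE_ATTRIBUTES && (attrs & FILE_ATTRIBUTE_ARCHIVE) != 0;
}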
If you can run a process during the changes, however, then you'll want to look at the FileSystemWatcher class. Here's an example of how you might use it.
It also exists in .NET (for future searchers of this type of problem)
Perhaps you can leave a process running on the machine watching for changes and creating a file for you to read later.
-Adam
Perhaps you can use the NTFS 5 Change Journal with DeviceIoControl as explained here
If you are not opposed to using .NET the FileSystemWatcher class will handle this for you fairly easily.
From the double post someone mentioned: WMI Event Sink
Still looking for a better answer though.
Nothing easy: if you have a running app you can use the Win32 file change notification APIs (FindFirstChangeNotification) as suggested in the other answers. Warning: circa 2000, the Trend Micro real-time virus scanner would group the changes together, making it necessary to use really large buffers when requesting the file system change lists.
If you don't have a running app, you can turn on NTFS journaling and scan the journal for changes http://msdn.microsoft.com/en-us/library/aa363798(VS.85).aspx but this can be slower than scanning the whole directory when the number of changes is larger than the number of files.