Deleted file still reported as existing (Windows only) - C++

(Note that this is not primarily a Qt question)
It seems to me that the return value of QFile::exists() is sometimes incorrect.
Consider the following two unit-test-like snippets (each of which I have executed a few thousand times in a loop):
// create file
QFile file("test.tmp");
QVERIFY(file.open(QIODevice::WriteOnly));
QVERIFY(file.write("some data") != -1);
file.close();
// delete file
QVERIFY(file.remove());
// assert file is gone
QVERIFY(!file.exists()); // <-- 5..10 % chance of failure
and
// create file
QFile file("test.tmp");
QVERIFY(file.open(QIODevice::WriteOnly));
QVERIFY(file.write("some data") != -1);
file.close();
// delete file
QVERIFY(file.remove());
// retry until file is gone (or until timeout)
for (auto i = 0; i < 10; i++)
{
    if (!file.exists()) // <-- note that only the check is retried, not the actual delete
        return;
    QThread::yieldCurrentThread();
}
QFAIL("file is still reported as existing"); // <-- never reached in my tests
The first unit test fails about 8 out of 100 times, always on the last line of code (indicating that the file still exists). The second unit test never fails.
This behaviour was observed on a Windows 10 system using NTFS (with Qt 5.2.1). It could not be reproduced on Ubuntu 16.04 LTS using ext4 in a VM (with Qt 5.8.0).
Not sure if this helps:
Process Monitor (when it succeeds)
Process Monitor (when it fails)
So my questions are:
What is happening?
What are the implications that I might be interested in?
update:
For clarification: I am hoping for an answer like "this is caused by the NTFS feature 'bills-fancy-caching-magic'". From there I would like to find out whether Qt overlooks this feature intentionally.

According to the Windows API documentation, it is defined behaviour:
The DeleteFile function marks a file for deletion on close. Therefore, the file deletion does not occur until the last handle to the file is closed. Subsequent calls to CreateFile to open the file fail with ERROR_ACCESS_DENIED.
It seems to be a property of the Windows kernel and therefore not to be limited to NTFS.
The behaviour seems to be unpredictable, as other services (think virus scanners) might open the file in question.
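If the first pattern needs to be reliable, retrying the check (as in the second snippet above) appears to be the practical workaround. A minimal sketch of that retry as a helper, untested, with the name waitUntilGone and the retry count being my own choices:

#include <QFile>
#include <QThread>

// Retry only the existence check, not the delete itself: the file may
// sit in the delete-pending state until the last open handle (e.g. a
// virus scanner's) is closed.
bool waitUntilGone(const QString &path, int maxRetries = 10)
{
    for (int i = 0; i < maxRetries; ++i) {
        if (!QFile::exists(path))
            return true;               // delete-pending window has closed
        QThread::yieldCurrentThread(); // let the other handle owner run
    }
    return false;                      // still reported as existing
}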

Related

reading (not writing) last_write_time gives a file creation error

I'm stumped... it took me a while to track this down, because I don't have the Visual C++ IDE installed on the problematic system (it is Windows Server 2019). My code works fine with VS 2022 on my laptop (W11 22H2). Anyhow, I get this exception:
Cannot create a file when that file already exists
I tracked it down to this code:
const auto fileTime = fs::last_write_time(p);
Apparently this function can also write to the file to modify the time, but I'm just trying to read it (I didn't pass the arguments necessary to write).
does anyone have any idea why this error might be happening?
https://en.cppreference.com/w/cpp/filesystem/last_write_time
Please note that it is highly likely that sometimes the file is actually being written to when I call this code (this code is called in a loop, and so is the other program's code that writes the file).
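One avenue worth trying (a sketch, under the assumption that the error stems from the file being open in the writer at that moment): std::filesystem::last_write_time has a non-throwing overload that reports failure through std::error_code, which at least keeps a polling loop alive and shows the underlying error:

#include <filesystem>
#include <iostream>
#include <system_error>

namespace fs = std::filesystem;

void printLastWrite(const fs::path &p)
{
    std::error_code ec;
    const auto fileTime = fs::last_write_time(p, ec); // noexcept overload
    if (ec) {
        // e.g. a sharing violation while the other program holds the file
        std::cerr << p << ": " << ec.message() << '\n';
        return; // skip this pass; retry on the next loop iteration
    }
    // ... use fileTime ...
}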

VS2015 removing a file on Linux from a Windows program

We have a server on Windows, but it has a network drive which is actually on a Linux server. The program has to delete a file at the same location with the same name (the files act as signals). It works fine when those files are on a local drive, but when running against the network drive it will sometimes not delete the file, and even worse, the functions report that everything went OK (meaning the file was deleted). I tried remove, _unlink and DeleteFileA; the problem persists. Sometimes, completely at random, the file won't be deleted and will stay like this.
The code is really simple:
bool File::Delete()
{
    if (isFile() && exist())
    {
        return DeleteFileA(filename.c_str()) != 0 ? true : false;
    }
    else
        return false;
}
This always returns true even if the file is not removed. If, for example, it lacked permission, it should fail (and fail each time, not at random). Could someone give me an idea? I ran out of options :(
Edit:
Thanks to @ExcessPhase, it seems that MoveFile actually detects an error, so renaming before deleting can surface the problem as ERROR_FILE_NOT_FOUND.
Other things: this random problem only happens when the files are created from the Linux server. If I create them from Windows, they are always deleted. Even more: if I have a file that the program cannot delete, and I create another file next to it from Windows, the program will detect and delete the one it could not delete before.
Edit 2: Closer to an answer: the filenames test and TEST are different on Linux, while on Windows they are the same. The problem seems to appear at random when the case doesn't match. But I'm not sure, since it's so random.
I believe the problem is with the Samba service on Linux, which implements the SMB protocol for Windows. The DeleteFile function just asks the SMB server (the Server service on Windows) to delete a file; the success status is whatever Samba returns.
Maybe you should try something higher-level like Boost.Filesystem or std::experimental::filesystem::remove.
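Building on the edit above, a sketch of the rename-before-delete idea (untested; the temporary-name scheme is my own invention). The point is that MoveFileA fails visibly when the name does not resolve to a real file on the Samba side, whereas DeleteFileA reports success:

#include <windows.h>
#include <string>

bool DeleteWithCheck(const std::string &filename)
{
    const std::string tmp = filename + ".deleting";
    // Renaming first surfaces ERROR_FILE_NOT_FOUND if the server
    // cannot actually resolve this exact (case-sensitive) name.
    if (!MoveFileA(filename.c_str(), tmp.c_str()))
        return false; // GetLastError() tells you why
    return DeleteFileA(tmp.c_str()) != 0;
}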

Virtual machine and memory access issue

I'm trying to watch a directory and be notified of file changes (creation). I'm using a virtual machine (VirtualBox) on Windows 7 with Ubuntu 12.04 as the guest. When a file is added to my watched folder, I test for the existence of this file to get its size and manipulate it later. The problem is: sometimes it returns 0 as the size although the file exists and has a non-zero size. Here is my code:
void Watcher::OnFileChanged(char* FileBaseName)
{
    QDir watchDir(RootToWatch);
    QString fileN = "";
    fileN = QString::fromLocal8Bit(FileBaseName);
    QFileInfo file(RootToWatch + "/" + fileN);
    qDebug() << "Watcher::OnFileChanged " << fileN;
    try
    {
        if (file.exists())
        {
            qDebug() << "FileWatcher::OnFileChanged " << fileN << "exists";
            qDebug() << "OnFileChanged:fileName " << file.fileName()
                     << "\nOnFileChanged File().size" << QFile(fileN).size()
                     << "\nOnFileChanged QFileInfo Size:" << file.size();
            ....
        }
    }
}
Should I wait for some lapse of time to get the real size (if yes, how long, and how do I test whether the file is ready to be read), or did I miss something?
Any help will be appreciated.
This does sound like a typical case of "watcher goes off before the file is finally written", which would be perfectly expected behaviour really: all you can say is "tell me if this file has changed", and getting size zero is definitely a change. Whether the file previously existed or is just coming into existence, it's a change from the previous state.
My approach in this case would be to set the watch again, and see if the file has "grown" next time around.
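As a sketch of that approach (untested; the poll interval, retry count, and helper name are my own choices, and a stable size is only a heuristic, not a guarantee that the writer has finished):

#include <QFileInfo>
#include <QThread>

// Treat the file as "ready" once its size stops changing between
// two consecutive polls.
bool waitForStableSize(const QString &path, int maxPolls = 20)
{
    qint64 lastSize = -1;
    for (int i = 0; i < maxPolls; ++i) {
        QFileInfo info(path);
        info.refresh();              // discard cached attributes
        const qint64 size = info.size();
        if (size > 0 && size == lastSize)
            return true;             // size unchanged: likely complete
        lastSize = size;
        QThread::msleep(100);        // wait before the next poll
    }
    return false;                    // writer may still be active
}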

What are the possible causes of "BUG: scheduling while atomic?"

There is another process continuously creating files that need processing by this code.
This code constantly scans the file-system for new files that need processing by comparing the contents of the file-system against a sqlite database that contains the processing results - one record for each file. This process is running at nice -n 19 so as not to interfere with the creation of new files by the other process.
It all works perfectly for a large number (>1k) of files, but then blows up with BUG: scheduling while atomic.
According to this
"Scheduling while atomic" indicates that you've tried to sleep
somewhere that you shouldn't
But the only sleep in the code is like this:
void doFiles(void) {
    for (...) { // for each file in the file-system
        ...     // check database - do processing if needed
    }
    sleep(1);
}

int main(int argc, char *argv[], char *envp[]) {
    while (true) doFiles();
    return -1;
}
The code will hit this sleep after it has checked every file in the file-system against the database. The process needs to be repeated since new files will be added from time to time. There is no multi-threading in this code. Are there other possible causes for "BUG: scheduling while atomic" besides a misplaced sleep?
Edit: additional error output:
note: mirlin[1083] exited with preempt_count 1
BUG: scheduling while atomic: mirlin/1083/0x40000002
Modules linked in: g_cdc_ms musb_hdrc nop_usb_xceiv irqk edmak dm365mmap cmemk
Backtrace:
[<c002a5a0>] (dump_backtrace+0x0/0x110) from [<c028e56c>] (dump_stack+0x18/0x1c)
r6:c1099460 r5:c04ea000 r4:00000000 r3:20000013
[<c028e554>] (dump_stack+0x0/0x1c) from [<c00337b8>] (__schedule_bug+0x58/0x64)
[<c0033760>] (__schedule_bug+0x0/0x64) from [<c028e864>] (schedule+0x84/0x378)
r4:c10992c0 r3:00000000
[<c028e7e0>] (schedule+0x0/0x378) from [<c0033a80>] (__cond_resched+0x28/0x38)
[<c0033a58>] (__cond_resched+0x0/0x38) from [<c028ec6c>] (_cond_resched+0x34/0x44)
r4:00013000 r3:00000001
[<c028ec38>] (_cond_resched+0x0/0x44) from [<c0082f64>] (unmap_vmas+0x570/0x620)
[<c00829f4>] (unmap_vmas+0x0/0x620) from [<c0085c10>] (exit_mmap+0xc0/0x1ec)
[<c0085b50>] (exit_mmap+0x0/0x1ec) from [<c0037610>] (mmput+0x40/0xfc)
r9:00000001 r8:80000005 r6:c04ea000 r5:00000000 r4:c0427300
[<c00375d0>] (mmput+0x0/0xfc) from [<c003b5e4>] (exit_mm+0x150/0x158)
r5:c10992c0 r4:c0427300
[<c003b494>] (exit_mm+0x0/0x158) from [<c003cd44>] (do_exit+0x198/0x67c)
r7:c03120d1 r6:c10992c0 r5:0000000b r4:c10992c0
...
As others have said, you can sleep() anytime you want to in user code.
Looks like a problem with a driver on your platform. The driver may not actually call sleep() or schedule(), but often it will call a kernel function which will, in turn, call one of these.
This also looks like it is using memory mapped file I/O on an embedded TI ARM processor.
This error was caused by a bad build.
A clean build by itself did not help.
A fresh checkout and build was required to resolve this issue.

could stl set::insert cause BSOD?

I have written a 'filemon' utility which basically logs files being opened during a period of time. Now I have exactly these two functions in my 'filemon' utility:
set<wstring> wstrSet;

// callback - get notified by callback filter driver on any file open operation
void CbFltOpenFileN(
    CallbackFilter* Sender,
    LPWSTR FileName,
    ACCESS_MASK DesiredAccess,
    WORD FileAttributes,
    WORD ShareMode,
    DWORD CreateOptions,
    WORD CreateDisposition)
{
    // don't log for directories
    if (FileAttributes & FILE_ATTRIBUTE_DIRECTORY) {
        return;
    }
    wstring wstr = FileName;
    wstr.append(L"\n");
    //wstrSet.insert(wstr); // as soon as I un-comment this line I start getting BSODs in multiple executions of this utility
}

// Read all paths stored in the set and write them into the log file
void WritePathsToLog() {
    typedef set<wstring>::const_iterator CI;
    printf("\nNo. of files touched ===> %d\n\n", wstrSet.size());
    for (CI iter = wstrSet.begin(); iter != wstrSet.end(); iter++) {
        fputws((*iter).c_str(), logFile);
    }
}
Basically, the 'filemon' utility interacts with the callback filter driver, and whenever a file is touched by any process the driver reports the respective filename to the 'filemon' utility via the CbFltOpenFileN function.
Now the issue is that this 'filemon' utility runs fine on a Win7 x64 machine with 4 GB of RAM, but as soon as I uncomment the last line in the CbFltOpenFileN function, i.e. start inserting the reported filenames into the set, I start getting BSODs, mostly with BugCheck 0xCC and sometimes with BugCheck 0x50, which basically indicates that the system is trying to access already-freed memory.
P.S. On Win7 x64 with 8 GB of RAM I am not seeing any issue at all, irrespective of whether the last line in the CbFltOpenFileN function is commented out or not.
Currently 'wstrSet' is using the default allocator, so do I need to use a specific allocator while declaring the set? If yes, can someone tell me how and why?
Let me share some more information here:
I am using VS2010.
My utility is a 32-bit application targeted at the x86 platform.
The filesystem filter driver I am using is the callback filter driver provided by EldoS Corp.
I am testing this utility with a simulator which generates lots of files, then starts the 'filemon' utility, touches all those files, and at the end stops the 'filemon' utility. The simulator repeats this process 25 times.
For the case where the last line is commented out, I get an empty log file 25 times (as I don't insert anything into the set), but the 'filemon' utility also doesn't cause any BSOD.
For the case where the last line is NOT commented out, I get a log file with path entries each time (as now I am inserting paths into the set), but during the first few iterations, say the 2nd, 3rd or 6th, the 'filemon' utility hits this BSOD scenario.
It's hard for me to repro this issue in debug mode, as my simulator takes care of the start/stop of the 'filemon' utility and I need to run it multiple times to repro the issue.
Hope this added info helps!
Thanks and Regards,
Sachin
If CbFltOpenFileN is a callback function, does that mean it can be called asynchronously? In that case, you might want to protect the global variable (wstrSet) with a mutex...
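A minimal sketch of that suggestion, under the assumption that the driver may invoke the callback from multiple threads concurrently. Since VS2010 predates std::mutex, this uses a Win32 critical section; initialization and cleanup placement are simplified:

#include <windows.h>
#include <set>
#include <string>

std::set<std::wstring> wstrSet;
CRITICAL_SECTION wstrSetLock; // InitializeCriticalSection(&wstrSetLock) at startup,
                              // DeleteCriticalSection(&wstrSetLock) at shutdown

void RecordFileName(LPWSTR FileName) // call this from CbFltOpenFileN
{
    std::wstring wstr = FileName;
    wstr.append(L"\n");
    EnterCriticalSection(&wstrSetLock);
    wstrSet.insert(wstr); // now safe against concurrent callbacks
    LeaveCriticalSection(&wstrSetLock);
}

Note that the iteration in WritePathsToLog would need to take the same lock, otherwise a concurrent insert could still invalidate the iterators while the log is being written.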