Virtual machine and memory access issue - C++

I'm trying to watch a directory and be notified when files change (are created). I'm using a virtual machine (VirtualBox) on Windows 7, with Ubuntu 12.04 as the guest. When a file is added to my watched folder, I test for the existence of that file to get its size and manipulate it later. The problem is that it sometimes returns 0 as the size, although the file exists and has a non-zero size. Here is my code:
void Watcher::OnFileChanged(char* FileBaseName)
{
    QDir watchDir(RootToWatch);
    QString fileN = "";
    fileN = QString::fromLocal8Bit(FileBaseName);
    QFileInfo file(RootToWatch + "/" + fileN);
    qDebug() << "Watcher::OnFileChanged " << fileN;
    try
    {
        if (file.exists())
        {
            qDebug() << "FileWatcher::OnFileChanged " << fileN << "exists";
            qDebug() << "OnFileChanged:fileName " << file.fileName()
                     << "\nOnFileChanged File().size" << QFile(fileN).size()
                     << "\nOnFileChanged QFileInfo Size:" << file.size();
            ....
        }
    }
}
Should I wait for a lapse of time to get the real size (if yes, how much, and how do I test whether the file is ready to be read), or did I miss something?
Any help will be appreciated.

This does sound like a typical case of "the watcher fires before the file is completely written", which would be perfectly expected behaviour really: all you can ask is "tell me if this file has changed", and getting size zero is definitely a change. Whether the file previously existed or is just coming into existence, it's a change from the previous state.
My approach in this case would be to set the watch again, and see if the file has "grown" the next time around.
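A minimal Qt sketch of that idea: poll the file until two consecutive non-zero size readings match, then treat it as ready. StableFileChecker, the 500 ms interval and the fileReady() signal are illustrative names, not an existing Qt API, and the two-equal-readings test is only a heuristic (a file written in slow bursts could still fool it).

#include <QFileInfo>
#include <QObject>
#include <QString>
#include <QTimer>

class StableFileChecker : public QObject
{
    Q_OBJECT
public:
    explicit StableFileChecker(const QString &path, QObject *parent = 0)
        : QObject(parent), m_path(path), m_lastSize(-1)
    {
        connect(&m_timer, SIGNAL(timeout()), this, SLOT(check()));
        m_timer.start(500); // re-check every 500 ms (tune for your workload)
    }

signals:
    void fileReady(const QString &path); // emitted once the size stops changing

private slots:
    void check()
    {
        const qint64 size = QFileInfo(m_path).size();
        if (size > 0 && size == m_lastSize) {
            m_timer.stop();
            emit fileReady(m_path); // two identical non-zero readings in a row
        }
        m_lastSize = size;
    }

private:
    QString m_path;
    QTimer m_timer;
    qint64 m_lastSize;
};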

Deleted file still reported as existing (Windows only)

(Note that this is not primarily a Qt question)
It seems to me that the return value of QFile::exists() is sometimes incorrect.
Consider the following two unit-test-like snippets (each of which I have executed a few thousand times in a loop):
// create file
QFile file("test.tmp");
QVERIFY(file.open(QIODevice::WriteOnly));
QVERIFY(file.write("some data") != -1);
file.close();
// delete file
QVERIFY(file.remove());
// assert file is gone
QVERIFY(!file.exists()); // <-- 5..10 % chance of failure
and
// create file
QFile file("test.tmp");
QVERIFY(file.open(QIODevice::WriteOnly));
QVERIFY(file.write("some data") != -1);
file.close();
// delete file
QVERIFY(file.remove());
// retry until file is gone (or until timeout)
for (auto i = 0; i < 10; i++)
{
    if (!file.exists()) // <-- note that only the check is retried, not the actual delete
        return;
    QThread::yieldCurrentThread();
}
QFAIL("file is still reported as existing"); // <-- never reached in my tests
The first unit test fails about 8 times out of 100, always on the last line of code (indicating that the file still exists). The second unit test never fails.
This behavior was observed on a Windows 10 system using NTFS (with Qt 5.2.1). It could not be reproduced on Ubuntu 16.04 LTS using ext4 in a VM (with Qt 5.8.0).
Not sure if this helps:
Process Monitor (when it succeeds)
Process Monitor (when it fails)
So my questions are:
What is happening?
What are the implications that I might be interested in?
Update:
For clarification: I am hoping for an answer like "this is caused by the NTFS feature 'bills-fancy-caching-magic'". From there I would like to find out whether Qt ignores this feature intentionally.
According to the Windows API documentation, it is defined behaviour:
The DeleteFile function marks a file for deletion on close. Therefore, the file deletion does not occur until the last handle to the file is closed. Subsequent calls to CreateFile to open the file fail with ERROR_ACCESS_DENIED.
It seems to be a property of the Windows kernel and therefore not to be limited to NTFS.
The behaviour seems to be unpredictable, as other services (think virus scanners) might open the file in question.
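A minimal Win32 sketch (with a hypothetical file name) of the delete-pending state described above: while any handle is open, DeleteFile only marks the file, and reopening it fails with ERROR_ACCESS_DENIED.

#include <windows.h>
#include <cstdio>

int main()
{
    // Keep a handle open; FILE_SHARE_DELETE allows DeleteFile to succeed.
    HANDLE h = CreateFileA("test.tmp", GENERIC_WRITE,
                           FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                           NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return 1;

    // Marks the file for deletion; it is not removed yet.
    if (!DeleteFileA("test.tmp"))
        std::printf("DeleteFile failed: %lu\n", GetLastError());

    // The file is now delete-pending: it still has a directory entry,
    // but opening it again is refused with ERROR_ACCESS_DENIED (5).
    HANDLE h2 = CreateFileA("test.tmp", GENERIC_READ, 0, NULL,
                            OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h2 == INVALID_HANDLE_VALUE)
        std::printf("reopen failed with error %lu\n", GetLastError());

    CloseHandle(h); // the deletion actually happens here
    return 0;
}

In the failing test, the "last handle" is presumably held briefly by another process (an indexer or virus scanner), which is why retrying the exists() check eventually succeeds.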

VS2015 removing a file on linux from windows program

We have a server on Windows, but it has a network drive which is actually on a Linux server. The program has to delete a file at the same location with the same name (signals). It works fine when those files are on a local drive, but when running on the network drive it will sometimes not delete the file, and even worse, the functions will report that everything went OK (meaning the file was deleted). I tried remove, _unlink and DeleteFileA; the problem persists: sometimes, completely at random, the file won't be deleted and will stay there.
The code is really simple:
bool File::Delete()
{
    if (isFile() && exist())
        return DeleteFileA(filename.c_str()) != 0;
    else
        return false;
}
This will always return true even if the file is not removed. If, for example, it did not have permission, it should fail (and fail each time, not at random). Could someone give me an idea? I have run out of options :(
Edit:
Thanks to @ExcessPhase, it seems that MoveFile actually detects an error, so renaming before deleting can detect the problem (ERROR_FILE_NOT_FOUND), as sketched below.
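For illustration, a hedged sketch of that rename-before-delete check (the ".deleting" suffix and DeleteWithRename name are hypothetical): MoveFileA fails if the server has silently dropped the file, so the error becomes visible instead of being swallowed.

#include <windows.h>
#include <string>

bool DeleteWithRename(const std::string &filename)
{
    const std::string tmp = filename + ".deleting"; // hypothetical temp name
    if (!MoveFileA(filename.c_str(), tmp.c_str()))
        return false; // GetLastError() may report ERROR_FILE_NOT_FOUND here
    return DeleteFileA(tmp.c_str()) != 0;
}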
Other things: this random problem only happens when the files are created from the Linux server. If I create them from Windows, they are always deleted. Even more: if I have a file that the program cannot delete, and I create another file next to it from Windows, the program will detect and delete the one it could not delete before.
Edit 2: Closer to the answer: on Linux the filenames test and TEST are different, while on Windows they are the same. The problem seems to appear at random when the case doesn't match, but I'm not sure, since it's so random.
I believe the problem is with the Samba service on Linux, which implements the SMB protocol for Windows. The DeleteFile function just asks the SMB server (the Server service on Windows) to delete a file; the success result is whatever Samba returns.
Maybe you should try something higher level, like Boost.Filesystem or std::experimental::filesystem::remove.
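A sketch of that suggestion using std::experimental::filesystem as shipped with VS2015; the error_code overload of remove() makes failures visible, and re-checking exists() guards against a share that falsely reports success. The DeleteWithCheck name and the double-check are illustrative, not a guaranteed fix.

#include <experimental/filesystem>
#include <iostream>
#include <system_error>

namespace fs = std::experimental::filesystem;

bool DeleteWithCheck(const fs::path &p)
{
    std::error_code ec;
    const bool removed = fs::remove(p, ec);
    if (ec)
        std::cerr << "remove(" << p << ") failed: " << ec.message() << '\n';
    // The share may claim success, so verify the entry is really gone.
    return removed && !fs::exists(p);
}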

What are the possible causes of "BUG: scheduling while atomic?"

There is another process continuously creating files that need processing by this code.
This code constantly scans the file-system for new files by comparing the contents of the file-system against a sqlite database that contains the processing results, one record for each file. This process runs at nice -n 19 so as not to interfere with the creation of new files by the other process.
It all works perfectly for a large number (>1k) of files, but then blows up with BUG: scheduling while atomic.
According to this
"Scheduling while atomic" indicates that you've tried to sleep
somewhere that you shouldn't
But the only sleep in the code is like this
void doFiles(void)
{
    for (...) { // for each file in the file-system
        ...     // check database - do processing if needed
    }
    sleep(1);
}

int main(int argc, char *argv[], char *envp[])
{
    while (true) doFiles();
    return -1;
}
The code will hit this sleep after it has checked every file in the file-system against the database. The process needs to be repeated since new files will be added from time to time. There is no multi-threading in this code. Are there other possible causes for "BUG: scheduling while atomic" besides a misplaced sleep?
Edit: additional error output:
note: mirlin[1083] exited with preempt_count 1
BUG: scheduling while atomic: mirlin/1083/0x40000002
Modules linked in: g_cdc_ms musb_hdrc nop_usb_xceiv irqk edmak dm365mmap cmemk
Backtrace:
[<c002a5a0>] (dump_backtrace+0x0/0x110) from [<c028e56c>] (dump_stack+0x18/0x1c)
r6:c1099460 r5:c04ea000 r4:00000000 r3:20000013
[<c028e554>] (dump_stack+0x0/0x1c) from [<c00337b8>] (__schedule_bug+0x58/0x64)
[<c0033760>] (__schedule_bug+0x0/0x64) from [<c028e864>] (schedule+0x84/0x378)
r4:c10992c0 r3:00000000
[<c028e7e0>] (schedule+0x0/0x378) from [<c0033a80>] (__cond_resched+0x28/0x38)
[<c0033a58>] (__cond_resched+0x0/0x38) from [<c028ec6c>] (_cond_resched+0x34/0x44)
r4:00013000 r3:00000001
[<c028ec38>] (_cond_resched+0x0/0x44) from [<c0082f64>] (unmap_vmas+0x570/0x620)
[<c00829f4>] (unmap_vmas+0x0/0x620) from [<c0085c10>] (exit_mmap+0xc0/0x1ec)
[<c0085b50>] (exit_mmap+0x0/0x1ec) from [<c0037610>] (mmput+0x40/0xfc)
r9:00000001 r8:80000005 r6:c04ea000 r5:00000000 r4:c0427300
[<c00375d0>] (mmput+0x0/0xfc) from [<c003b5e4>] (exit_mm+0x150/0x158)
r5:c10992c0 r4:c0427300
[<c003b494>] (exit_mm+0x0/0x158) from [<c003cd44>] (do_exit+0x198/0x67c)
r7:c03120d1 r6:c10992c0 r5:0000000b r4:c10992c0
...
As others have said, you can sleep() anytime you want to in user code.
This looks like a problem with a driver on your platform. The driver may not actually call sleep() or schedule(), but often it will make a call to a kernel function which will, in turn, call one of these.
This also looks like it is using memory-mapped file I/O on an embedded TI ARM processor.
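To illustrate what "atomic context" means here, a deliberately broken kernel-module sketch (the names are hypothetical): holding a spinlock disables preemption, so any function that sleeps inside that region, directly or via a nested call as described above, produces exactly this BUG.

#include <linux/module.h>
#include <linux/spinlock.h>
#include <linux/delay.h>

static DEFINE_SPINLOCK(demo_lock);

static int __init demo_init(void)
{
    spin_lock(&demo_lock); /* enter atomic context: preemption disabled */
    msleep(10);            /* sleeping here triggers "BUG: scheduling while atomic" */
    spin_unlock(&demo_lock);
    return 0;
}

static void __exit demo_exit(void)
{
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");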
This error was caused by a bad build.
A clean build by itself did not help; a fresh checkout and build was required to resolve the issue.

QSettings - Sync issue between two processes

I am using QSettings in non-GUI products to store their settings in XML files. This is written as a library which gets used in C and C++ programs. There will be one XML file for each product. Each product might have more than one sub-product, and they are written into the XML grouped by sub-product as follows:
File: "product1.xml"
<product1>
<subproduct1>
<settings1>..</settings1>
....
<settingsn>..</settingsn>
</subproduct1>
...
<subproductn>
<settings1>..</settings1>
....
<settingsn>..</settingsn>
</subproductn>
</product1>
File: "productn.xml"
<productn>
<subproduct1>
<settings1>..</settings1>
....
<settingsn>..</settingsn>
</subproduct1>
...
<subproductn>
<settings1>..</settings1>
....
<settingsn>..</settingsn>
</subproductn>
</productn>
The code in one process does the following:

settings = new QSettings("product1.xml", XmlFormat);
settings->setValue("settings1", <value>);
sleep(20);
settings->setValue("settings2", <value2>);
settings->sync();
When the first process goes to sleep, I start another process which does the following:

settings = new QSettings("product1.xml", XmlFormat);
settings->remove("settings1");
settings->setValue("settings3", <value3>);
settings->sync();
I would expect settings1 to be gone from the product1.xml file, but it still persists in product1.xml at the end of the two processes above. I am not using QCoreApplication(..) in my settings library. Please point out any issues in the above design.
This is kind of an odd thing that you're doing, but one thing to note is that the sync() call is what actually writes the file to disk. If you want your second process to see the changes you've made, you'll need to call sync() before the second process accesses the file, in order to guarantee that it actually sees your modifications. I would therefore try putting a settings->sync() call right before your sleep(20), as sketched below.
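A minimal sketch of that ordering (XmlFormat is assumed to be a custom format registered with QSettings::registerFormat, as in the question; the <value> placeholders are kept from the original):

QSettings *settings = new QSettings("product1.xml", XmlFormat);
settings->setValue("settings1", <value>);
settings->sync();  // flush to disk *before* the other process opens the file
sleep(20);         // process 2 runs here, sees settings1, and removes it
settings->setValue("settings2", <value2>);
settings->sync();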
Maybe you have to do delete settings; after the sync() to make sure the file is no longer open, and then do the writing in the other process?
Does this compile? What implementation of XmlFormat are you using, and on which OS? There must be some special code in your project for storing/reading to and from XML, and there must be something in that code which works differently from what you expect.

Media file cannot be switched and played

I am trying to play a music file on S60 5th edition with the following code:
_LIT(KMusicFilename, "C:\\Data\\Music.mp3");

TApaTaskList iTaskList(CCoeEnv::Static()->WsSession());
TBool iExists;
TApaTask iApaTask = iTaskList.FindApp(TUid::Uid(0x102072C3));
iExists = iApaTask.Exists();
if (iExists)
{
    // Music player already running
    iApaTask.SwitchOpenFile(KMusicFilename);
    iApaTask.BringToForeground();
}
else
{
    // Music player is not running and needs to be launched
    RApaLsSession iAplsSession;
    User::LeaveIfError(iAplsSession.Connect());
    TThreadId thread;
    iAplsSession.StartDocument(KMusicFilename, thread,
                               RApaLsSession::ESwitchFiles);
    iAplsSession.Close();
}
The problem is that this code sample does not work if the music player is already running. The media file that was already playing keeps playing; the SwitchOpenFile call has no effect on it.
Is there a workaround for this?
Thank you.
I'm not sure why it doesn't work, but one thing I notice about your code: this call:

iApaTask.SwitchOpenFile(KMusicFilename);

does not check the error code. See whether you get a non-zero error code; that may help determine what the problem is. (The same applies to the iAplsSession.StartDocument(...) call.)
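For example, a small sketch of that check (TApaTask::SwitchOpenFile returns a TInt error code; the RDebug logging is just one way to surface it):

TInt err = iApaTask.SwitchOpenFile(KMusicFilename);
if (err != KErrNone)
{
    // e.g. KErrNotSupported would suggest the player refuses
    // file-switch requests altogether
    RDebug::Print(_L("SwitchOpenFile failed: %d"), err);
}
iApaTask.BringToForeground();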