I have an app that uses NSPersistentDocument (without autosaving) on OS X and UIDocument (also without autosaving) on iOS. The file representation is a binary Core Data store. This app has been working fine since iOS 7 and OS X 10.10.
If I open a document on OS X 10.13, and another device (macOS 10.13 or iOS 11) opens the same file, on the next save I get the warning "This document's file has been changed by another application since you opened or saved it." The warning is spurious, because only an open has occurred on the other device, not a save.
In looking for a possible reason for this notification, I noticed that when an iCloud file is opened on one device, an extended attribute named com.apple.lastuseddate#PS is updated. I have confirmed this extended attribute is updated on both iOS 11 and macOS 10.13. It doesn't appear to have been used in prior versions of iOS or macOS. I wonder if this updating of file metadata is triggering the spurious warning.
(I suspect this attribute may be related to NSFileProvider on iOS 11, as there is a new method setLastUsedDate:forItemIdentifier:completionHandler:, and to FinderSync on macOS 10.13, as setLastUsedDate:forItemWithURL:completion: is also new.)
My question is: do others see this new behavior? Is it causing others such annoying side effects?
I have studied this problem further. I have determined what seems to be going on, and also a workaround. NOTE: this only applies to NSPersistentDocument without autosaving.
Firstly, an important note on file timestamps and file system type: HFS+ timestamps have a resolution of one second, while APFS timestamps have a resolution of one nanosecond.
My problems only started to manifest themselves when the OS X app's iCloud container was migrated to APFS.
Here is a typical sequence (I've used OS X and iOS as example devices, but the same sequence happens regardless of the OS type of the 'other' iCloud-connected device):
Open a file on OS X in the iCloud app container.
Make a change and save it to the file; at this point the file modification date on APFS will have a fractional-seconds component.
Open the same file on another device, say iOS (after allowing a moment for iCloud to propagate the changes). The modification date on opening the file on iOS has the fractional-seconds component truncated.
Soon afterwards, back on OS X, the modification date of the recently saved file will have its fractional-seconds component truncated (by iCloud processes beyond my control; presentedItemDidChange is called at this time).
If another change is saved to the OS X file, the warning "This document's file has been changed by another application since you opened or saved it." will appear. This is because of a mismatch between NSDocument's self.fileModificationDate from the last save (which contains a fractional-seconds component) and the file's modification date (which has had the fractional-seconds component truncated).
The implications are:
when iCloud exchanges timestamp metadata between hosts, the resolution is in seconds.
when a file is opened and the timestamp metadata is conveyed to other devices, it is in truncated form. It is also written back to the file on the device that last wrote the file.
The workaround I have employed (which is only necessary because I am using NSPersistentDocument without autosaving) is to set self.fileModificationDate to the fractional-seconds-truncated version (yes, it is read-write), if and only if self.fileModificationDate matches the file modification date when fractional seconds are ignored. If this is done, the warning doesn't appear, and no harm is done (as the file had not been modified):
// Saving file changes. ss_isEqualToDateInSeconds / ss_dateWithDateInSeconds are
// custom NSDate category helpers that compare/truncate at whole-second resolution.
NSDate *modDate = nil;
[self.fileURL getResourceValue:&modDate forKey:NSURLContentModificationDateKey error:NULL];
if (modDate && self.fileModificationDate) {
    NSTimeInterval delta = [modDate timeIntervalSinceDate:self.fileModificationDate];
    // The dates differ, but only in the (truncated) fractional-seconds component.
    if (fabs(delta) > 0.0 && [modDate ss_isEqualToDateInSeconds:self.fileModificationDate]) {
        self.fileModificationDate = modDate.ss_dateWithDateInSeconds;
    }
}
[self saveDocumentWithDelegate:self
               didSaveSelector:@selector(documentSaved:didSave:contextInfo:)
                   contextInfo:nil];
As I said at the outset, surprising behaviour. It would be nice if it were documented. Because my use of NSDocument is non-standard, I don't know whether others may have this problem.
Related
I'm stumped... it took me a while to track this down, because I don't have the Visual C++ IDE installed on the problematic system (it is Windows Server 2019). My code works fine with VS 2022 on my laptop (Windows 11 22H2). Anyhow, I get this exception:
Cannot create a file when that file already exists
I tracked it down to this code:
const auto fileTime = fs::last_write_time(p);
Apparently this function also has an overload that can write to the file to modify the time, but I'm just trying to read it (I didn't pass the arguments that would write).
Does anyone have any idea why this error might be happening?
https://en.cppreference.com/w/cpp/filesystem/last_write_time
Please note that it is highly likely the file is sometimes actually being written to when I call this code (the code is called in a loop, and so is the file, which is being written by another program).
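For reference, the non-throwing overload of fs::last_write_time takes a std::error_code, which lets you inspect the failure instead of catching an exception. A minimal sketch, with a hypothetical path:

#include <filesystem>
#include <iostream>
#include <system_error>

namespace fs = std::filesystem;

int main() {
    const fs::path p{"some_file.log"};  // hypothetical path
    std::error_code ec;
    const auto fileTime = fs::last_write_time(p, ec);  // non-throwing overload
    if (ec) {
        // e.g. the underlying Windows error while another process writes the file
        std::cerr << "last_write_time failed: " << ec.message() << '\n';
        return 1;
    }
    (void)fileTime;  // use the timestamp as before
    return 0;
}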
C++, Visual Studio 2019, Windows 10, SDK 10.0.22621.0
As part of my app's log file, I have collected a few bits of information about the user's computer.
I would start the query with:
static PDH_HQUERY cpuQuery;
static PDH_HCOUNTER cpuTotal;
PDH_STATUS status;
status = PdhOpenQuery(NULL, NULL, &cpuQuery);
and then get the bits of information starting with:
status = PdhAddCounter(cpuQuery, L"\\Processor(_Total)\\% Processor Time", NULL, &cpuTotal);
if (status != ERROR_SUCCESS) {
    csData += _T("GetCPURAMStatsinThread - status Add Counter Processor Time Error 2 and return *********\n");
    log_write(csData);
    return -1;
}
I just noticed that I am now getting this error from PdhAddCounter:
0xC0000BB8 (PDH_CSTATUS_NO_OBJECT) The specified object is not found on the system.
The only thing that I can think of that has changed since this used to work was that I updated to SDK 10.0.22621.0. I believe that it worked with 10.0.17763.0.
I had not been paying attention to these lines in the log file, but when a customer had a problem that had to do with how many cores his CPU had and how many virtual processors it had, that is when I realized that these lines had been erroring out.
I have a laptop that had Windows 7, but I upgraded it to Windows 10 and ran the app on that, and it did not error out. So does this point to an issue with the Windows 10 update, or with the SDK update?
Per my comments above with @Tony Lee, I used the MS sample code to browse the counters on my local computer. There was a Processor Information selection, versus my original Processor, under which there was a Processor Time selection. In the choice box below that there was an all instances choice and a _Total choice. When I selected the _Total choice, the buffer in the sample code stayed NULL, but if I selected all instances the buffer filled with:
L"\Processor Information(*)\% Processor Time"
Plugging that string into PdhAddEnglishCounter() worked...
Edit: it also worked with PdhAddCounter().
Using Processor instead of Processor Information, and (_Total) instead of (*), used to work in Windows 10. No telling why things have changed, at least on some computers.
Ed
EDIT Important note: first, the new code above also works on the laptop on which the original code worked. Second, I just realized that the desktop on which the original code failed is Windows 10 Home, whereas the laptop on which the original code worked is Windows 10 Pro. That may be the difference. Regardless, the new code works on both Home and Pro.
EDIT 2 The new code also ran fine on Windows 11 Home. I also see that my customer, in whose log file I noticed the error line, was on Windows 11 Home. That suggests the Pro version still works with the legacy (see next comment by Tony Lee) Processor object, while the Home versions do not work with the legacy Processor, only with the new Processor Information.
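For completeness, here is a minimal sketch of the sequence that ended up working, assuming pdh.lib is linked (PdhAddEnglishCounter needs Vista or later). With the wildcard (*) instance, PdhGetFormattedCounterArrayW reads the individual instances, _Total among them:

#include <windows.h>
#include <pdh.h>
#include <pdhmsg.h>
#include <cstdio>
#include <vector>

#pragma comment(lib, "pdh.lib")

int main() {
    PDH_HQUERY cpuQuery = NULL;
    PDH_HCOUNTER cpuTotal = NULL;
    if (PdhOpenQuery(NULL, 0, &cpuQuery) != ERROR_SUCCESS) return 1;

    // Locale-independent path; works where the legacy "Processor" object is gone.
    if (PdhAddEnglishCounterW(cpuQuery,
            L"\\Processor Information(*)\\% Processor Time",
            0, &cpuTotal) != ERROR_SUCCESS) return 1;

    // "% Processor Time" is a rate counter, so two samples are needed.
    PdhCollectQueryData(cpuQuery);
    Sleep(1000);
    PdhCollectQueryData(cpuQuery);

    // First call reports the required buffer size via PDH_MORE_DATA.
    DWORD bytes = 0, count = 0;
    PdhGetFormattedCounterArrayW(cpuTotal, PDH_FMT_DOUBLE, &bytes, &count, NULL);
    std::vector<BYTE> buffer(bytes);
    PPDH_FMT_COUNTERVALUE_ITEM_W items = (PPDH_FMT_COUNTERVALUE_ITEM_W)buffer.data();
    if (PdhGetFormattedCounterArrayW(cpuTotal, PDH_FMT_DOUBLE, &bytes, &count,
                                     items) == ERROR_SUCCESS) {
        for (DWORD i = 0; i < count; ++i)   // "_Total" is one of the instances
            wprintf(L"%s: %.1f%%\n", items[i].szName, items[i].FmtValue.doubleValue);
    }
    PdhCloseQuery(cpuQuery);
    return 0;
}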
(Note that this is not primarily a Qt question)
It seems to me that the return value of QFile::exists() is sometimes incorrect.
Consider the following two unit-test-like snippets (each of which I have executed a few thousand times in a loop):
// create file
QFile file("test.tmp");
QVERIFY(file.open(QIODevice::WriteOnly));
QVERIFY(file.write("some data") != -1);
file.close();
// delete file
QVERIFY(file.remove());
// assert file is gone
QVERIFY(!file.exists()); // <-- 5..10 % chance of failure
and
// create file
QFile file("test.tmp");
QVERIFY(file.open(QIODevice::WriteOnly));
QVERIFY(file.write("some data") != -1);
file.close();
// delete file
QVERIFY(file.remove());
// retry until file is gone (or until timeout)
for (auto i = 0; i < 10; i++)
{
    if (!file.exists()) // <-- note that only the check is retried, not the actual delete
        return;
    QThread::yieldCurrentThread();
}
QFAIL("file is still reported as existing"); // <-- never reached in my tests
The first unit test fails about 8 out of 100 times, always on the last line of code (indicating that the file still exists). The second unit test never fails.
This behavior was observed on a Windows 10 system using NTFS (with Qt 5.2.1). It could not be reproduced on Ubuntu 16.04 LTS using ext4 in a VM (with Qt 5.8.0).
Not sure if this helps:
Process Monitor (when it succeeds)
Process Monitor (when it fails)
So my questions are:
what is happening?
what are the implications that I might be interested in?
update:
For clarification: I am hoping for an answer like "this is caused by the NTFS feature 'bills-fancy-caching-magic'". From there I would like to find out whether Qt intentionally overlooks this feature.
According to the Windows API documentation, it is defined behaviour:
The DeleteFile function marks a file for deletion on close. Therefore, the file deletion does not occur until the last handle to the file is closed. Subsequent calls to CreateFile to open the file fail with ERROR_ACCESS_DENIED.
It seems to be a property of the Windows kernel and therefore not limited to NTFS.
The behaviour seems to be unpredictable, as other services (think virus scanners) might open the file in question.
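A small Win32 sketch of the quoted behaviour (the file name is illustrative): while a second handle keeps the file open, DeleteFile only marks it delete-pending, and re-opening it fails with ERROR_ACCESS_DENIED until that handle is closed:

#include <windows.h>
#include <cstdio>

int main() {
    // Keep an extra handle open (with FILE_SHARE_DELETE so DeleteFile can
    // still succeed), as an indexer or virus scanner might.
    HANDLE keeper = CreateFileW(L"test.tmp", GENERIC_WRITE,
                                FILE_SHARE_READ | FILE_SHARE_DELETE,
                                nullptr, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL,
                                nullptr);
    if (keeper == INVALID_HANDLE_VALUE) return 1;

    if (!DeleteFileW(L"test.tmp")) return 1;  // only marks it delete-pending

    // Documented: subsequent CreateFile calls fail with ERROR_ACCESS_DENIED.
    HANDLE reopen = CreateFileW(L"test.tmp", GENERIC_READ, 0, nullptr,
                                OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
    printf("re-open while delete-pending: %s (GetLastError=%lu)\n",
           reopen == INVALID_HANDLE_VALUE ? "failed" : "succeeded",
           GetLastError());

    CloseHandle(keeper);  // last handle closes; only now does the file vanish
    return 0;
}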
I have an application that queries a specific folder for its contents at a quick interval. Up to now I was using FindFirstFile, but even with a search pattern applied I feel there will be performance problems in the future, since the folder can get pretty big; in fact it's not in my hands to restrict it at all.
Then I decided to give FindFirstFileEx a chance, in combination with some tips from this question.
My exact call is the following:
const char* search_path = "somepath/*.*";
WIN32_FIND_DATA fd;
HANDLE hFind = ::FindFirstFileEx(search_path, FindExInfoBasic, &fd,
                                 FindExSearchNameMatch, NULL,
                                 FIND_FIRST_EX_LARGE_FETCH);
Now I get pretty good performance, but what about compatibility? My application requires Windows Vista+, but the documentation says the following regarding FIND_FIRST_EX_LARGE_FETCH:
This value is not supported until Windows Server 2008 R2 and Windows 7.
I can compile it just fine on my Windows 7, but what happens if someone runs this on a Vista machine? Does the function fall back to 0 (the default) in that case? Is it safe not to test against the operating system?
UPDATE:
I said above that I feel the performance is not good enough. In fact my numbers are the following, on a fixed set of files (about 100 of them):
FindFirstFile -> 22 ms
FindFirstFile -> 4 ms (using a specific pattern; however, all files may be wanted)
FindFirstFileEx -> 1 ms (no matter patterns or full list)
What I worry about is what happens if the folder grows to, say, 50k files. That's about 500x bigger and still not unrealistic. At that scale the 22 ms above extrapolates to roughly 11 seconds, for an application querying at 25 fps (it's graphical).
Just tested under WinXP (compiled under Win7): you'll get 0x57 (The parameter is incorrect) when ::FindFirstFileEx() is called with FIND_FIRST_EX_LARGE_FETCH. You should check the Windows version and choose the value of the additional parameter dynamically.
Also, FindExInfoBasic is not supported before Windows Server 2008 R2 and Windows 7; you'll get a run-time 0x57 error from that value too. It must be changed to an alternative if the binary is run on an older Windows version.
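A sketch of such a check (the helper name is mine; IsWindows7OrGreater comes from versionhelpers.h in the Windows 8.1 SDK and later, older toolchains would use GetVersionEx or VerifyVersionInfo instead):

#include <windows.h>
#include <versionhelpers.h>

HANDLE FindFirstCompat(const wchar_t* pattern, WIN32_FIND_DATAW* fd) {
    const bool win7plus = IsWindows7OrGreater();
    return ::FindFirstFileExW(
        pattern,
        win7plus ? FindExInfoBasic : FindExInfoStandard,  // Basic needs Win7+
        fd,
        FindExSearchNameMatch,
        NULL,
        win7plus ? FIND_FIRST_EX_LARGE_FETCH : 0);        // flag needs Win7+
}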
First of all, periodically querying a specific folder for its contents at a quick interval is not the best solution, I think.
You should call ReadDirectoryChangesW instead; as a result you will not need to do periodic queries, but will get notifications when files in the directory change. The best way is to bind the directory handle with BindIoCompletionCallback or CreateThreadpoolIo and make the first call to ReadDirectoryChangesW directly. Then, when changes occur, your callback will be called automatically; after you process the data, call ReadDirectoryChangesW again from the callback, until you get STATUS_NOTIFY_CLEANUP (with BindIoCompletionCallback) or ERROR_NOTIFY_CLEANUP (with CreateThreadpoolIo) in the callback (which means you closed the directory handle to stop the notifications), or some error.
After this first call to ReadDirectoryChangesW, run a FindFirstFileEx/FindNextFile loop, but only once, and handle all the files it returns as FILE_ACTION_ADDED notifications.
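A rough sketch of that scheme, using BindIoCompletionCallback (the directory path is illustrative and most error handling is trimmed):

#include <windows.h>
#include <cstdio>

alignas(DWORD) static BYTE g_buf[64 * 1024];
static OVERLAPPED g_ov;
static HANDLE g_dir;

static void Rearm();

static VOID CALLBACK OnChange(DWORD err, DWORD bytes, LPOVERLAPPED) {
    if (err != ERROR_SUCCESS) return;  // cleanup (handle closed) or failure: stop
    if (bytes) {
        auto* fni = reinterpret_cast<FILE_NOTIFY_INFORMATION*>(g_buf);
        for (;;) {
            wprintf(L"action=%lu name=%.*s\n", fni->Action,
                    (int)(fni->FileNameLength / sizeof(WCHAR)), fni->FileName);
            if (!fni->NextEntryOffset) break;
            fni = reinterpret_cast<FILE_NOTIFY_INFORMATION*>(
                reinterpret_cast<BYTE*>(fni) + fni->NextEntryOffset);
        }
    }
    Rearm();  // issue the next ReadDirectoryChangesW from the callback
}

static void Rearm() {
    ReadDirectoryChangesW(g_dir, g_buf, sizeof(g_buf), FALSE,
                          FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_LAST_WRITE,
                          nullptr, &g_ov, nullptr);
}

int main() {
    g_dir = CreateFileW(L"C:\\somepath", FILE_LIST_DIRECTORY,
                        FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                        nullptr, OPEN_EXISTING,
                        FILE_FLAG_BACKUP_SEMANTICS | FILE_FLAG_OVERLAPPED, nullptr);
    if (g_dir == INVALID_HANDLE_VALUE) return 1;
    BindIoCompletionCallback(g_dir, OnChange, 0);
    Rearm();
    // ...one initial FindFirstFileEx/FindNextFile pass goes here, treating
    // every file it returns as a FILE_ACTION_ADDED notification...
    Sleep(INFINITE);
    return 0;
}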
And about performance and compatibility: all of this is for information only; I'm not recommending for or against using it.
If you need this, look at ZwQueryDirectoryFile; it gives you a very big performance win. You only need to open the file handle once, not on every query as with FindFirstFileEx.
But the main thing is the ReturnSingleEntry parameter; this is the key point. You need to set it to FALSE and pass a large enough buffer for FileInformation. If ReturnSingleEntry is TRUE, the function returns only one file per call, so if the folder contains N files you will need to call ZwQueryDirectoryFile N times. With ReturnSingleEntry == FALSE you can get all the files in a single call, if the buffer is large enough. In either case you seriously reduce the number of round trips to the kernel, which are very costly operations: one query returning N files is much faster than N queries. FIND_FIRST_EX_LARGE_FETCH is supposed to do exactly this (set ReturnSingleEntry to FALSE), but in the current implementation (I checked this on the latest Windows 10) the system does so only in the FindNextFile calls; the first call, from FindFirstFileEx, for some unknown reason still uses ReturnSingleEntry == TRUE. So there will be at least two calls to ZwQueryDirectoryFile where a single call would have been possible (if the buffer is large enough, of course). If you use ZwQueryDirectoryFile directly, you control the buffer size: you can allocate, say, 1 MB once and then reuse it for the periodic queries, without reallocation. You cannot control how large a buffer FindFirstFileEx uses with FIND_FIRST_EX_LARGE_FETCH (in the current implementation it is 64 KB, a quite reasonable value).
You also get a much richer choice of FileInformationClass: a less informative info class means less data and a faster call.
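A hedged sketch of the direct call from user mode, resolving NtQueryDirectoryFile from ntdll at run time (so no ntdll.lib is needed); the struct below mirrors FILE_DIRECTORY_INFORMATION from ntifs.h and the path is illustrative:

#include <windows.h>
#include <winternl.h>
#include <cstdio>
#include <vector>

typedef struct _DIR_INFO {  // layout of FILE_DIRECTORY_INFORMATION (ntifs.h)
    ULONG NextEntryOffset;
    ULONG FileIndex;
    LARGE_INTEGER CreationTime;
    LARGE_INTEGER LastAccessTime;
    LARGE_INTEGER LastWriteTime;
    LARGE_INTEGER ChangeTime;
    LARGE_INTEGER EndOfFile;
    LARGE_INTEGER AllocationSize;
    ULONG FileAttributes;
    ULONG FileNameLength;
    WCHAR FileName[1];
} DIR_INFO;

typedef NTSTATUS(NTAPI* NtQueryDirectoryFile_t)(
    HANDLE, HANDLE, PVOID, PVOID, PIO_STATUS_BLOCK, PVOID, ULONG,
    FILE_INFORMATION_CLASS, BOOLEAN, PUNICODE_STRING, BOOLEAN);

int main() {
    auto NtQueryDirectoryFile = (NtQueryDirectoryFile_t)GetProcAddress(
        GetModuleHandleW(L"ntdll.dll"), "NtQueryDirectoryFile");
    if (!NtQueryDirectoryFile) return 1;

    // Open the directory handle once; reuse it for every periodic query.
    HANDLE dir = CreateFileW(L"C:\\somepath", FILE_LIST_DIRECTORY,
                             FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                             nullptr, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS,
                             nullptr);
    if (dir == INVALID_HANDLE_VALUE) return 1;

    std::vector<BYTE> buf(1024 * 1024);  // one big buffer, allocated once
    IO_STATUS_BLOCK iosb;
    BOOLEAN restart = TRUE;              // TRUE on the first call of each scan

    // ReturnSingleEntry == FALSE: many entries per kernel round trip.
    NTSTATUS st;
    while ((st = NtQueryDirectoryFile(dir, nullptr, nullptr, nullptr, &iosb,
                                      buf.data(), (ULONG)buf.size(),
                                      FileDirectoryInformation, FALSE,
                                      nullptr, restart)) == 0) {
        restart = FALSE;
        auto* e = reinterpret_cast<DIR_INFO*>(buf.data());
        for (;;) {
            wprintf(L"%.*s\n", (int)(e->FileNameLength / sizeof(WCHAR)), e->FileName);
            if (!e->NextEntryOffset) break;
            e = reinterpret_cast<DIR_INFO*>(
                reinterpret_cast<BYTE*>(e) + e->NextEntryOffset);
        }
    }
    // The loop ends with STATUS_NO_MORE_FILES (0x80000006) when the scan is done.
    CloseHandle(dir);
    return 0;
}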
And about compatibility: this exists and works, with full functionality, from at least Windows 2000 to the latest Windows 10. (The documentation says "Available starting with Windows XP", but in ntifs.h it is declared under #if (NTDDI_VERSION >= NTDDI_WIN2K), and it really was already present in Windows 2000. No matter: XP support is more than enough now.)
But this is undocumented, unsupported, only for kernel mode, with no .lib file... right?
It is documented, and as you can see it is for both user and kernel mode. How do you think FindFirstFile[Ex]/FindNextFile work? They call ZwQueryDirectoryFile; there is no other way, since all calls into the kernel go through ntdll.dll. That is fundamental. (Yes, it is still possible that ntdll.dll could stop exporting by name and export by ordinal only, to show what is really unsupported.) A .lib file exists, two of them in fact, ntdll.lib and ntdllp.lib (the latter covers more APIs than the former), in any WDK. Where is it declared? #include <ntifs.h>. That header conflicts with #include <windows.h>; yes, it conflicts, but if you include ntifs.h inside a namespace, with some tricks, the conflicts can be avoided.
I am using MAPISendMail() in an MFC application, and am having a problem where webmail clients sometimes receive a winmail.dat attachment instead of the "real" attachments.
I have researched a lot, and have found that others are experiencing this problem too, but have not found a solution.
I believe that the problem may be in my MapiFileDesc structure, in which I leave the lpFileType member pointing to NULL, in order to have the mail program (In my case Outlook 2010) determine the file type automatically.
lpFileType points to a MapiFileTagExt structure, and the documentation says this:
A value of NULL indicates an unknown file type or a file type determined by the operating system.
So I believe this should work for common types, such as JPEG or GIF and such.
I have read that the winmail.dat attachment is caused by Outlook sending the mail with the MS-TNEF encoding, which is proprietary to Microsoft. However, when sending the email, Outlook shows "HTML" highlighted, not RTF.
Has anyone encountered this problem and properly solved it?
Sending via SMTP and such is not an option, because the user should have a copy of the message in their Sent Items folder.
Using the Outlook object model is not an option, because that would require the user has Outlook installed, and not any MAPI compatible client.
I was having a similar issue.
I found a KB article with interesting information in its "One-Off Addressing" section, saying that when the address is provided in the format [SMTP:SMTP Address], the e-mail is always sent in rich text format.
For me the fix was not to set the Address property of the MapiRecipDesc object at all. Instead I put the address in the Name property. The opening dialog then does not resolve the address at first, but it resolves it right before sending, and then the mail is not sent in RTF!
I even got it working with the recipient's name together with the address:
MapiRecipDesc.Name = "Firstname Lastname <mail@address.com>";
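For anyone doing this against Simple MAPI directly from C or C++, here is a hedged sketch of the same workaround (all names, paths, and addresses are illustrative; mapi32.dll is loaded at run time, so no import library is required):

#include <windows.h>
#include <mapi.h>

typedef ULONG (WINAPI *MAPISendMail_t)(LHANDLE, ULONG_PTR, lpMapiMessage, FLAGS, ULONG);

int SendWithWorkaround(void) {
    HMODULE mapiDll = LoadLibraryW(L"mapi32.dll");
    if (!mapiDll) return -1;
    MAPISendMail_t sendMail = (MAPISendMail_t)GetProcAddress(mapiDll, "MAPISendMail");
    if (!sendMail) return -1;

    MapiRecipDesc recip = {0};
    recip.ulRecipClass = MAPI_TO;
    // The address is carried in the name field so the client resolves it
    // late and does not force RTF/TNEF:
    recip.lpszName = (LPSTR)"Firstname Lastname <mail@address.com>";
    recip.lpszAddress = (LPSTR)"";   // empty string, NOT NULL

    MapiFileDesc file = {0};
    file.nPosition = (ULONG)-1;      // let the client place the attachment
    file.lpszPathName = (LPSTR)"C:\\path\\to\\attachment.pdf";  // hypothetical
    file.lpszFileName = (LPSTR)"attachment.pdf";
    file.lpFileType = NULL;          // file type determined automatically

    MapiMessage msg = {0};
    msg.lpszSubject = (LPSTR)"Subject";
    msg.lpszNoteText = (LPSTR)"Body text";
    msg.nRecipCount = 1;
    msg.lpRecips = &recip;
    msg.nFileCount = 1;
    msg.lpFiles = &file;

    // MAPI_DIALOG shows the compose window, so the mail lands in Sent Items.
    return (int)sendMail(0, 0, &msg, MAPI_DIALOG | MAPI_LOGON_UI, 0);
}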
I, too, was getting all attachments as winmail.dat files with the jclMapi.JclEmail InternalSendOrSave routine, which is called by jclEmail.Send.
What I did was essentially follow jtmnt's answer and changed:
RealAddresses[I] := FAddress; //do not add the Recipients.AddressesType + AddressTypeDelimiter
and I changed:
lpszName := PAnsiChar('"' + AnsiString(RealNames[I]) + '" <' +
  AnsiString(RealAddresses[I]) + '>');
lpszAddress := '';
This worked so that I was no longer sending winmail.dat files as attachments; instead the intended PDFs and MP3s were being sent.
What I really want to report is that I was using an OLE routine that worked fine in Windows 7 and stopped working in Windows 8. That is why I started looking at the MAPI solutions, and found this problem with winmail.dat files being attached. I could not find any mention of this issue of OLE (with Outlook) not working properly in Windows 8.
(Both:
OutlookApp := GetActiveOleObject('Outlook.Application') and
OutlookApp := CreateOleObject('Outlook.Application')
were no longer working in Windows 8, but continued to work fine in Windows 7.)
Thanks for the solution. Thought you might want to know how to apply it to the jclMapi code and this issue with OLE in Win8.
A curious aspect of Outlook's behaviour is that the length of the recipient's domain name matters! If the e-mail address domain is 12 characters or more (I don't know exactly where the limit is), we get the problematic TNEF encoding.
So: a@hutsfluts.nl goes wrong, while abacadabraandmore@hf.nl results in plain-text encoding.
I guess this is not by design….
The solution mentioned above (put the recipient's e-mail address in MapiRecipDesc's lpszName and let lpszAddress point to an empty string, NOT null!) solves the problem.
Don’t ask me why, for I have no clue why this would influence the encoding.