I have a file open on the iPhone (opened using "_open") whose data I am sending across the network. However, I also have the ability to delete files from the iPhone's interface. This is done using NSFileManager's removeItemAtPath.
The odd thing is that removeItemAtPath is succeeding even though the file is currently open.
The file transfers perfectly across the network, and removeItemAtPath succeeds before the transfer is complete. So does removeItemAtPath do a lazy delete? That is, does it queue the deletion for later if the file is in use? If so, then there's no problem.
If not ... does anyone know how I can get NSFileManager to actually report the fact that it didn't do the delete?
Thanks!
According to the documentation at
http://developer.apple.com/library/ios/documentation/cocoa/reference/foundation/Classes/NSFileManager_Class/Reference/Reference.html#//apple_ref/occ/instm/NSObject/fileManager:shouldRemoveItemAtPath:
shouldRemoveItemAtPath returns YES if the operation should proceed, not that the item has already been successfully deleted. It's also interesting that the documentation states:
Discussion: Returning NO from this method causes NSFileManager to stop deleting the item. If the item is a directory, no children of that item are deleted either.
Reading that leads me to believe this is an asynchronous operation and that the return value of this method should not be used to determine whether the file was successfully deleted. My guess is that it queues the item for deletion, and the item is deleted once the file is no longer in use.
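I can't say for certain that this is what NSFileManager does on iOS, but the behavior you describe is at least consistent with ordinary POSIX unlink semantics: deleting a file removes its directory entry immediately, while the data stays readable through any descriptor that is still open and is only reclaimed once the last descriptor is closed. A minimal sketch with plain POSIX calls (the path is just a placeholder):

    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>

    int main()
    {
        // Create a file and keep a descriptor open to it.
        int fd = open("/tmp/demo.bin", O_CREAT | O_RDWR | O_TRUNC, 0644);
        write(fd, "hello", 5);

        // Delete it while it is still open: unlink() succeeds immediately,
        // but the data remains accessible through the open descriptor.
        if (unlink("/tmp/demo.bin") == 0)
            std::printf("unlink succeeded while the file was open\n");

        char buf[5];
        lseek(fd, 0, SEEK_SET);
        ssize_t n = read(fd, buf, sizeof(buf));
        std::printf("still read %zd bytes after the delete\n", n);

        // The storage is reclaimed only once the last open descriptor is closed.
        close(fd);
        return 0;
    }

If that is what is happening, the transfer completing successfully after the delete is exactly what you would expect.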
I have a class UserInterface containing a list of Items (whatever they represent). The content of these Items is too expensive to keep in memory due to its possibly large size, so each Item only stores some metadata (description, preview...) for the related data; the data itself is stored on disk in binary files. There is another class, FileHandler, that is responsible for managing these files. Each Item has an id_ associated with the related File. UserInterface holds a member reference to FileHandler so it can get the list of Files from it and represent them as Items on the screen, but FileHandler does not even know about the existence of UserInterface. When the user selects an Item in the UI, UserInterface asks FileHandler to find the File with the same id_ as the selected Item, and that File can then be read from disk for further use of its content.
But this design has a drawback:
Let's say the user wants to delete some Items. They should be deleted from both UI and disk storage:
    void UserInterface::deleteItems(/* list of ids */)
    {
        fileHandler_.deleteFiles(/* list of ids */);
        // What if a power failure or unexpected crash happens here?
        // Delete items from UI...
    }
If something bad happens during the execution of this method, the next time the user runs the program they will get a broken state: the UI will contain corrupted Items linked to files that have already been deleted. I can still check whether all files listed in the UI exist, and if some of them don't, mark the corrupted Items as invalid/broken, but I would prefer to avoid such situations entirely.
Are there any good design patterns/techniques aiming to solve such problems?
Your program can stop at any time when the power is lost or the operating system crashes, and there is nothing you can do to avoid that.
However, it can also stop due to a controlled operating system shutdown or when your own code crashes. You can design your software so that it can close gracefully in some of these cases, by handling the shutdown message from the operating system, catching unhandled exceptions, handling std::terminate, handling close signals, etc.
If you use a database such as SQLite and use transactions to write data, it can handle many of these situations for you. However, even an SQLite database can get corrupted in some cases; see the SQLite documentation for more information.
Because you cannot control the situation when the operating system crashes, you also need to fix any problems during your startup sequence. You should design the program so that if it cannot fix corrupted database files (or other problems), it can at least detect them. In SQLite, you can execute PRAGMA integrity_check to see whether the database is corrupted. If your program must be able to recover automatically, you could take backups and restore the most recent backup, or you could restore the default settings.
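As a hedged sketch of the transactional approach (assuming you moved the Item metadata into an SQLite table; the "items" table name and schema are made up for illustration), the delete could look roughly like this with the SQLite C API:

    #include <sqlite3.h>
    #include <vector>

    // Delete the metadata rows for the given ids inside a single transaction,
    // so a crash leaves the database either fully before or fully after the delete.
    bool deleteItemsTransactionally(sqlite3* db, const std::vector<long long>& ids)
    {
        if (sqlite3_exec(db, "BEGIN IMMEDIATE;", nullptr, nullptr, nullptr) != SQLITE_OK)
            return false;

        sqlite3_stmt* stmt = nullptr;
        if (sqlite3_prepare_v2(db, "DELETE FROM items WHERE id = ?;", -1, &stmt, nullptr) != SQLITE_OK) {
            sqlite3_exec(db, "ROLLBACK;", nullptr, nullptr, nullptr);
            return false;
        }

        bool ok = true;
        for (long long id : ids) {
            sqlite3_bind_int64(stmt, 1, id);
            if (sqlite3_step(stmt) != SQLITE_DONE) { ok = false; break; }
            sqlite3_reset(stmt);
        }
        sqlite3_finalize(stmt);

        if (ok)
            ok = (sqlite3_exec(db, "COMMIT;", nullptr, nullptr, nullptr) == SQLITE_OK);
        if (!ok)
            sqlite3_exec(db, "ROLLBACK;", nullptr, nullptr, nullptr);
        return ok;
    }

The binary files themselves could then be removed only after the COMMIT succeeds; at startup, any file whose row no longer exists can be treated as an orphan and cleaned up. PRAGMA integrity_check can be executed the same way, via sqlite3_exec, during that startup check.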
I am using boost::last_write_time as one of my checks to see if a file has been modified or not.
The call works fine if the file I am using is a local file.
But if I request the same information for a file on a network drive, I get a wrong result.
The call I make is: boost::filesystem::last_write_time( file_path )
I am connected to the drive through VPN. The result is wrong only for the first time I make a request after modifying the file. The next call returns the right time.
The wrong time I get is always the old modification time (the one prior to the new change).
It doesn't matter if I wait a while before making the request. The first time is always wrong and the second time I get the correct one.
I am working on a Mac, and I see that internally the method makes use of the stat function.
I tried passing the error_code struct to see if there was any error, but it held 0 after the call.
Is there any limitation related to getting the status of files over a network using stat?
Is there any function I could call to ensure that last_write_time always returns the right time (other than calling the method twice)?
ADDITIONAL INFORMATION:
Found some additional info on: IBM Knowledge Center
In usage notes, bullet 6 on "Network File System Differences" says:
Local access to remote files through the Network File System may produce unexpected results due to conditions at the server...
...The local Network File System also impacts operations that retrieve file attributes. Recent changes at the server may not be available at your client yet, and old values may be returned from operations. (Several options on the Add Mounted File System (ADDMFS) command determine the time between refresh operations of local data.)
But I still don't understand why the call works correctly the second time, even when it is made immediately after the first one.
And why doesn't it work if I wait for some time before making the first call?
We're getting an intermittent error on an ImqQueue::get( ImqMsg &, ImqGetMessageOptions & ) call with reason code 2042, which Should Not Happen™ based on the WebSphere documentation; we should only get that reason code on an open.
Would this error indicate that the server could not open a queue on its side, or does it indicate that there's a problem in our client? What is the best way to handle this error? Right now we just log that it occurs, but it's happening a lot. Unfortunately I'm not well-versed in WebSphere MQ; I'm kind of picking this up as I go, so I don't have all the terminology correct.
Our client is written in C++ linking against libmq 6.0.2.4 and running on SLES-10. I don't know the particulars for the server other than it's running version 7.1. We're requesting an upgrade to bring our side up-to-date. We have multiple instances of the client running concurrently; all are using the same request queue, but each is creating its own dynamic reply queue with MQOO_INPUT_EXCLUSIVE + MQOO_INPUT_FAIL_IF_QUIESCING.
If the queue is not already open, the ImqQueue::get method will implicitly open the queue for you. This ends up using the MQOO_INPUT_AS_Q_DEF option, which therefore falls back to the DEFSOPT(EXCL|SHARED) attribute on the queue. You should also double-check that the queue is defined SHARE rather than NOSHARE, but I suspect that is already correctly set.
You mention that you have multiple instances of the application running concurrently, so if one of them has the queue opened implicitly as MQOO_INPUT_AS_Q_DEF, resulting in MQOO_INPUT_EXCLUSIVE from DEFSOPT(EXCL), then it will get 2042 (MQRC_OBJECT_IN_USE) if others already have the queue open. If nothing else had it open at the time, the implicit open will work, and later instances will instead get the 2042.
If it is intermittent, then I suggest there is a path through your application where the ImqQueue::open method is not invoked. While you look for that, changing the queue definition to DEFSOPT(SHARED) should get rid of the 2042s.
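As a rough, illustrative sketch only (the queue name, options, and wait interval are placeholders, not taken from your code), opening the reply queue explicitly before the get would look something like this with the ImqQueue classes:

    #include <imqi.hpp>  // IBM WebSphere MQ C++ classes

    // Open the queue explicitly with the intended options so that the later
    // get() never has to perform an implicit MQOO_INPUT_AS_Q_DEF open.
    bool getOneMessage(ImqQueueManager& mgr, ImqMessage& msg)
    {
        ImqQueue queue;
        queue.setConnectionReference(mgr);
        queue.setName("MY.REPLY.QUEUE");
        queue.setOpenOptions(MQOO_INPUT_EXCLUSIVE | MQOO_FAIL_IF_QUIESCING);

        if (!queue.open()) {
            // queue.reasonCode() would report e.g. 2042 (MQRC_OBJECT_IN_USE) here
            return false;
        }

        ImqGetMessageOptions gmo;
        gmo.setOptions(MQGMO_WAIT | MQGMO_FAIL_IF_QUIESCING);
        gmo.setWaitInterval(5000);  // milliseconds

        bool ok = (queue.get(msg, gmo) != 0);
        queue.close();
        return ok;
    }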
I'm watching the config files of my NodeJS server on Ubuntu using:
    for (var index in cfgFiles) {
        fs.watch(cfgFiles[index], function(event, fileName) {
            logger.info("======> EVENT: " + event);
            updateConfigData(fileName);
        });
    }
So whenever I save a config file, the "change" event is received at least twice by the handler function for the same file name, causing updateConfigData() to be executed multiple times. I experienced the same behavior when watching config files using C++/inotify.
Does anyone have a clue what causes this behavior?
Short Answer: it is not Node; the file really is changed twice.
Long Answer
I have a very similar approach that I use for my development setup. My manager process watches all js source files if it is a development machine and restarts the children in the cluster.
I had not paid any attention to this since it was just a development setup, but after reading your question I gave it a look and realized that I have the same behavior.
I edit files on my local computer and my editor uploads them over sftp whenever I save. At every save, the change event on the file is triggered twice.
I checked listeners('change') on the FSWatcher object returned by the fs.watch call, but it shows my event handler only once.
Then I did the test I should have done first: running "touch file.js" on the server, which triggered the event only once. So, for me, it was not Node; the file really was changed twice. When a file is opened for writing (instead of appending), that probably triggers a change since it empties the content; then, when the new content is written, the event is triggered a second time.
This does not cause any problem for me, but if you want to prevent it, you can add an odd/even check in your event handler by keeping a call count for each file and doing the actual work only on every second call.
See my response to a similar question which explains that the problem is being caused by your editor making multiple edits to the file on save.
I'm writing a program that, among other things, needs to download a file given its URL. I'm too lazy to implement the HTTP/HTTPS protocols manually, so I need some library/object/function that will do the job.
Critical requirement: the download must be asynchronous. That is, the thread that issued the download must be able to do something else "while" downloading the file, and it must be possible to abort the download at any time without any barbaric side effects (such as an internal call to TerminateThread).
Nice-to-have requirements:
Should be able to download the file "into memory". Meaning: read the contents of the file as they arrive, without necessarily saving them to a file on disk.
It'd be nice to have some convenient Win32 progress notification mechanism (waitable event, semaphore, completion port, etc.), rather than just periodically polling the download status.
I've chosen the XmlHttpRequest COM object to do the work. It seemed to work well enough, plus it supports asynchronous mode.
However, I noticed that after some time it just stops working.
That is, after several successful file downloads it stops downloading anything.
I periodically poll it to get its status, it reports "in-progress", but nothing actually happens, and there's no network activity. Moreover, when the same process creates another instance of XmlHttpRequest object to perform new downloads - the effect is the same. The object reports "in progress", whereas it doesn't even try to connect to the server (according to network sniffers and system TCP state).
The only way to get this object working again is to restart the process. This makes me suspect that there's a sort of bug (sorry, I meant undocumented feature) in the object. Also, it's not a bug at the level of an individual object, since the problem persists when the object is destroyed and another one is created. It's probably some global state of the DLL that implements this object.
Does anyone know something about this? Is this a known bug?
I'm pretty sure there's no chance that another bug in my code merely makes it look like the bug is in XmlHttpRequest. I've done enough tests and spent enough time with the debugger to conclude beyond reasonable doubt that the object just stops working.
BTW, while the object is supposed to be working, I do all the waiting via MsgWaitXXXX API calls, so if this object needs a message loop to work properly (for instance, it may create a hidden notification window and bind it to a socket via WSAAsyncSelect), I give it the opportunity.
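To make that concrete, here is a rough sketch of the pattern I'm describing; it's illustrative only, not my production code (the ProgID, timeout, and error handling are simplified, and it assumes COM is already initialized on the calling thread):

    #include <windows.h>
    #include <msxml6.h>

    // Start an asynchronous GET with the XmlHttpRequest COM object and wait for
    // completion in a MsgWait-based loop that keeps pumping window messages.
    bool downloadAsync(const wchar_t* url)
    {
        CLSID clsid;
        if (FAILED(CLSIDFromProgID(L"Msxml2.XMLHTTP.6.0", &clsid)))
            return false;

        IXMLHTTPRequest* req = nullptr;
        if (FAILED(CoCreateInstance(clsid, nullptr, CLSCTX_INPROC_SERVER,
                                    __uuidof(IXMLHTTPRequest),
                                    reinterpret_cast<void**>(&req))))
            return false;

        VARIANT asyncFlag; VariantInit(&asyncFlag);
        asyncFlag.vt = VT_BOOL; asyncFlag.boolVal = VARIANT_TRUE;
        VARIANT empty; VariantInit(&empty);  // no user, password or body

        BSTR method = SysAllocString(L"GET");
        BSTR bstrUrl = SysAllocString(url);
        bool ok = SUCCEEDED(req->open(method, bstrUrl, asyncFlag, empty, empty)) &&
                  SUCCEEDED(req->send(empty));
        SysFreeString(method);
        SysFreeString(bstrUrl);

        long state = 0;
        while (ok && state != 4) {  // readyState 4 == completed
            // Wait up to 100 ms for messages, then pump them so any hidden
            // notification window the object may have created keeps working.
            MsgWaitForMultipleObjects(0, nullptr, FALSE, 100, QS_ALLINPUT);
            MSG msg;
            while (PeekMessage(&msg, nullptr, 0, 0, PM_REMOVE)) {
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
            req->get_readyState(&state);
        }

        long status = 0;
        if (ok)
            req->get_status(&status);
        req->Release();
        return ok && status == 200;
    }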
I know from my own experience that the Microsoft implementation of XmlHttpRequest falls short of full compliance with the draft standard. In particular, the standard mandates that streamed data should be extractable in ready state '3' (Receiving), which IE deliberately ignores.
Unfortunately I have not seen what you are describing despite using XmlHttpRequest objects extensively for long polling purposes.