cfapi: CfDehydratePlaceholder seems to be stuck - C++

My goal is for files to be hydrated or dehydrated on user request via the Explorer "Free up space" or "Always keep on this device" context menu entries. If I create a new placeholder file that is dehydrated from the start, everything works and I can hydrate it via the callback mechanics. But the other direction does not work for me. In Explorer the file gets marked as unpinned and shown as syncing, but my application never receives a CF_CALLBACK_TYPE_NOTIFY_DEHYDRATE or CF_CALLBACK_TYPE_NOTIFY_DEHYDRATE_COMPLETION callback. I then tried to do it manually with CfDehydratePlaceholder, but the behaviour is exactly the same: nothing happens and the file remains in the syncing state. Even after using CfSetInSyncState to set the state to CF_IN_SYNC_STATE_IN_SYNC, it stays in the syncing state.
Then I wanted to implement a minimal example with the help of the Cloud Mirror sample, but I realized it shows the same behaviour: when I try to dehydrate a file, exactly the same thing happens there as well. It feels to me like cfapi expects an acknowledgement from the cloud service that it never gets.
In OneDrive, however, everything works as expected. What am I missing? Do I have to set some specific settings?

I had misunderstood the whole API; here is how I understand it now, to help other people who are struggling with it.

You have to register your sync root and connect your app to it. When connecting, you receive a CF_CONNECTION_KEY, which is needed to communicate with the virtual filesystem. Then you can add extended attributes to the files inside your sync root. The most important ones are custom attributes you choose yourself, so your app can identify the file object when needed, plus the PinState and the SyncState.

The SyncState mostly does not have to be changed by the app, apart from marking a file as in sync after the app has processed it (you can do that at the moment you update your custom attributes), because when a file changes, the SyncState is updated automatically. The PinState declares which final state a file should have: UNPINNED means the file should be dehydrated, PINNED the opposite. It does not mean the file already has that state. My misunderstanding was that I thought unpinning a file would dehydrate it automatically, or that pinning a placeholder would send my app a request via the callback function I mentioned in my question. That is not the case. Your app needs to find out via a file watcher (I can recommend my own FileWatcher project: https://github.com/neXenio/panoptes) that the attributes of specific files have changed, and then it has to perform every step itself.

As already mentioned, to dehydrate a file the app calls CfDehydratePlaceholder. To hydrate, you open a transfer session via CfGetTransferKey and then hydrate (send the data into the empty file) via CfExecute, which needs the connection key and the transfer key. Those are the basics. There is much more to tell, but with this starting point everybody should be able to figure out the rest.
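To make the dehydration step concrete, here is a rough sketch (not production code) of how the manual CfDehydratePlaceholder call can look once your watcher reports that a placeholder was unpinned. The helper name and the "whole file" range are just for illustration, and error handling is trimmed.

```cpp
#include <windows.h>
#include <cfapi.h>

#pragma comment(lib, "CldApi.lib")

// Dehydrate an entire placeholder file identified by its path.
HRESULT DehydrateWholeFile(PCWSTR path)
{
    HANDLE handle = nullptr;
    // Open the placeholder through the Cloud Files API so the filter knows about the handle.
    HRESULT hr = CfOpenFileWithOplock(path, CF_OPEN_FILE_FLAG_EXCLUSIVE, &handle);
    if (FAILED(hr))
        return hr;

    LARGE_INTEGER offset = {};   // start of file
    LARGE_INTEGER length = {};
    length.QuadPart = -1;        // -1 means "to end of file"

    // Synchronous call (no OVERLAPPED): releases the local data, leaving a dehydrated placeholder.
    hr = CfDehydratePlaceholder(handle, offset, length, CF_DEHYDRATE_FLAG_NONE, nullptr);

    CfCloseHandle(handle);
    return hr;
}
```

Hydration is the reverse direction: get a transfer key for the file handle with CfGetTransferKey and stream the data in with CfExecute (operation type CF_OPERATION_TYPE_TRANSFER_DATA), passing both the connection key and the transfer key in the CF_OPERATION_INFO structure.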

Related

Qt: How to create a file using a path on any Windows machine?

I am not sure the title captures exactly what I want to ask, so here it is:
I just made my first desktop application, which uses an XML file to store data. The XML file is stored somewhere under C:/Users/myname/Documents/adjsklf/asdjfklasd/.... on my own machine.
How do I go about creating this file in a specific location such as C:/Users/theirname/Documents/myAppName/data.xml? More specifically, how do I get the "theirname" part of the path? On machines that have multiple users, how do I get the folder that belongs to the user who is actually using the app?
Also, when I first started this application, I wasn't really thinking about deploying it, so I made an explicit constructor for a class, dataManipulation, that manipulates all the data in my XML file. When my program runs, it executes MainWindow, which constructs my dataManipulation object with the path I want. However, since a few friends now want to try out this app, I need to be able to detect whether the file exists first, using the path I mentioned earlier. What's the best way to achieve that?
Thank you so much!
Use QStandardPaths::writableLocation.
There are many ways to perform file-opening checks. If you want to do it the Qt way, take a look at QFile and do something similar: the constructor only sets the path, the file is actually opened in QFile::open, and its return value indicates whether it was opened successfully. You can create an init() method in your data manipulation class, check its return value, and show a message to the user if needed.
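A rough sketch of how those two pieces can fit together (the "myAppName" folder and "data.xml" name are just placeholders taken from the question):

```cpp
#include <QDir>
#include <QFile>
#include <QStandardPaths>

// Resolve the per-user data file path, creating the folder on first run.
QString dataFilePath()
{
    const QString docs =
        QStandardPaths::writableLocation(QStandardPaths::DocumentsLocation);
    QDir dir(docs + "/myAppName");
    if (!dir.exists())
        dir.mkpath(".");
    return dir.filePath("data.xml");
}

// Returns true if the file could be opened; show a message to the user otherwise.
bool openDataFile(QFile &file)
{
    file.setFileName(dataFilePath());
    return file.open(QIODevice::ReadWrite);
}
```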

Creating a file accessible to only my application in C++?

I am developing an application for a small office to maintain their monetary accounts.
My application can help create a file which can store all the information.
But it should not be accessible to the user except through my application.
Why? Because somebody may delete the file and all the records will vanish.
The environment is a Windows PC with a single account that has admin privileges.
I am developing the application in C++ using the MinGW compiler.
I am sort of blank right now, as to how I can create such a file.
Any suggestions please?
If your application can modify it, then the user under whose credentials it runs can modify it, period. Also, if he has administrator privileges then you can't stop him from deleting stuff, even if your application runs under different credentials and the file is protected by ACLs.
Now, since the problem seems to be not one of security, but of protecting the user from himself, I would just store the file in a location that is "out of sight" enough and be happy with it; write your data to %APPDATA%\yourappname, since that directory is specifically meant for user-specific application data that is not intended to be touched directly by the user.
If you want to be paranoid you can enable every security setting you can find (hide the directory, protect it with a restrictive ACL when the app is not running, open it for exclusive access, ...), but if you ask me it's just wasted time:
the average user (our target AFAICT) doesn't mess in appdata, since it's a hidden folder to begin with;
the "power user" who messes around, if sufficiently determined to shoot himself in the foot (or voluntarily do damage), will find a way, since the security settings are easily circumventable in your situation (an admin can take ownership of any file and change its ACLs, and use applications like Unlocker to circumvent file locking);
the technician that has legitimate reasons to access the file (e.g. he must take/restore a backup of it) will be frustrated by all these useless precautions.
You can get the actual %APPDATA% path by expanding the corresponding environment variable or via SHGetFolderPath/SHGetKnownFolderPath (or whatever replacement they invented for it in new Windows versions).
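For example, roughly (the subfolder name is just a placeholder):

```cpp
#include <windows.h>
#include <shlobj.h>
#include <string>

#pragma comment(lib, "shell32.lib")
#pragma comment(lib, "ole32.lib")

// Resolve %APPDATA% via SHGetKnownFolderPath and make sure the app's subfolder exists.
std::wstring appDataDir()
{
    PWSTR raw = nullptr;
    std::wstring dir;
    if (SUCCEEDED(SHGetKnownFolderPath(FOLDERID_RoamingAppData, 0, nullptr, &raw)))
    {
        dir = raw;
        dir += L"\\yourappname";
        CreateDirectoryW(dir.c_str(), nullptr);   // return value ignored; harmless if it already exists
    }
    CoTaskMemFree(raw);   // safe to call with nullptr
    return dir;
}
```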
Make sure your application loads at Windows boot and opens the file with dwShareMode set to 0.
Here is an MSDN example.
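Illustratively, something along these lines (not a complete program):

```cpp
#include <windows.h>

// Open (or create) the data file with dwShareMode = 0, so no other process
// can open or delete it while this handle stays open.
HANDLE OpenExclusively(LPCWSTR path)
{
    return CreateFileW(path,
                       GENERIC_READ | GENERIC_WRITE,
                       0,                       // dwShareMode = 0: deny all sharing
                       nullptr,
                       OPEN_ALWAYS,             // create if missing, open otherwise
                       FILE_ATTRIBUTE_NORMAL,
                       nullptr);
}
```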
You would need to give these files their own file extension and perhaps add other security measures (e.g. password-protecting the files). If you want Windows to associate these files with your application, you will have to do some work in the registry.
Here's a good source since you're concerned with Windows only:
http://msdn.microsoft.com/en-us/library/windows/desktop/ff513920(v=vs.85).aspx
As for keeping the data from being deleted: redundancy, my friend, redundancy. Talk to a network administrator about how they keep their data safe. I'd bet money on them naming lots of backups as one of their methods.
But it should not be accessible to the user except through my application.
You cannot do that.
Everything that exists on a machine the user has physical access to can be deleted if the user is sufficiently determined.
You can protect your file from being deleted while your program is running: on Windows, you can't delete open files. Keep the file open and people won't be able to delete it while your program is running. Instead, they will kill your program via Task Manager and delete the file anyway.
Either that, or you could upload it somewhere. Data that is not located on a physically accessible device cannot easily be deleted by the user. However, somebody will have to run the server (and deal with security, and possibly write the server software). In your case it might not be worth it.
I'd suggest documenting the location of the user data in a help file, and you should probably put a "!do not delete this.txt" or something similar into the folder with this file.

How to make an application not run a new instance when opening a new file in Qt?

For example, we have a TextEditor application, like Notepad++. It has tabs in which file content is displayed.
The default text editor in the OS is set to the TextEditor application. When we open a new file, the application should add a tab and put the content into it.
How can I make the application not start a new instance when opening a new file in Qt?
What do you think is the best way?
The problem is really how to make a single-instance application. When you open a file, the operating system starts the associated application and gives it the file as a command-line argument. You cannot simply delegate an 'open file' command to an already running application through an OS mechanism; you have to implement it yourself.
At the AppWhirr project we used QLocalServer/QLocalSocket to communicate between instances: when the AppWhirr app is executed, it checks whether a QLocalServer with a fixed ID is already registered. If not, this instance is the first/only running instance. If the ID is already taken, another instance is already running, so the second instance only does two things: it sends its input arguments to the other instance through Qt's local client/server communication, and once that communication has finished successfully it quits.
That's one solution to the problem; it requires quite a bit of coding and I would not recommend it if you don't want to use local client/server communication for anything else, but it's viable.
Another solution would be for the first instance of the application to create a text file at a fixed location and write out the instance's ID. The second instance can then read the text file and send a message to the specified ID. Of course the first instance has to remove the text file when it quits, and you probably have to implement some fail-safe code to remove it in case the first instance crashes. This solution uses fewer resources than the first one but requires fail-safe cleanup code.
Or, as a third option, you can use a third-party solution as @Matteo Italia suggested.
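If you go with the QLocalServer/QLocalSocket route described above, the skeleton could look roughly like this; the server name "MyTextEditorInstance" is an arbitrary example and the actual tab handling is left out:

```cpp
#include <QApplication>
#include <QLocalServer>
#include <QLocalSocket>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    const QString serverName = QStringLiteral("MyTextEditorInstance");

    // Try to reach an already running instance first.
    QLocalSocket socket;
    socket.connectToServer(serverName);
    if (socket.waitForConnected(200)) {
        // Forward the file to open (if any) and exit this second instance.
        if (argc > 1)
            socket.write(QByteArray(argv[1]));
        socket.waitForBytesWritten(500);
        return 0;
    }

    // No instance yet: become the server and accept "open file" requests.
    QLocalServer server;
    QLocalServer::removeServer(serverName);   // clean up a stale name after a crash
    server.listen(serverName);
    QObject::connect(&server, &QLocalServer::newConnection, [&server]() {
        QLocalSocket *client = server.nextPendingConnection();
        QObject::connect(client, &QLocalSocket::readyRead, [client]() {
            const QString path = QString::fromLocal8Bit(client->readAll());
            // ... open `path` in a new tab of the existing main window ...
            client->deleteLater();
        });
    });

    // ... create the main window, open argv[1] if given ...
    return app.exec();
}
```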

How to determine when files are done copying for further processing?

Alright, so to start: this is strictly for Windows, and I'd prefer to use C++ over .NET. I'm not opposed to boost::filesystem, although if it can be avoided in favor of the straight Windows API, I'd prefer that.
The scenario: an application on another machine, which I can't change, creates files in a particular directory on my machine, and I need to back those files up and do some extra processing on them. Currently I've made a little application which sits and listens for change notifications in a target directory using the FindFirstChangeNotification and FindNextChangeNotification Windows APIs.
The problem is that while I can get notified when files in the directory are created, modified, change size, etc., it only notifies once and does not tell me which files specifically. I've looked at ReadDirectoryChangesW as well, but it's the same story there, except that I get slightly more specific information.
I can scan the directory and try to acquire locks or open the files to determine what specifically changed since the last notification and whether the files are available for further use. But when a large file is being copied, I've found this isn't good enough: the file won't be ready to be manipulated, and I won't get any further notifications after the first one, so there is no way to tell when the copy has actually finished unless, after the first notification, I keep trying to acquire a lock until it succeeds.
The only other thing I can think of that would be less hackish would be some kind of end-token file, but since I don't control the application creating the files in the first place, I don't see how I'd go about doing that, and it's still not ideal.
Any suggestions?
This is a fairly common problem and one that doesn't have an easy answer. Acquiring locks is one of the best options when you cannot change the thing at the remote end. Another I have seen is to watch the file at intervals until the size doesn't change for an interval or two.
Other strategies include writing a no-byte file as a trigger when the main file is complete and writing to a temp directory then moving the complete file to the real destination. But to be reliable, it must be the sender who controls this. As the receiver, you are constrained to watching the directory and waiting for the file to settle.
It looks like ReadDirectoryChangesW is going to be your best bet. For each file copy operation, you should receive FILE_ACTION_ADDED followed by a bunch of FILE_ACTION_MODIFIED notifications. On the last FILE_ACTION_MODIFIED notification, the file should no longer be locked by the copying process. So, if you try to acquire a lock after each FILE_ACTION_MODIFIED of the copy, it should fail until the copy completes. It's not a particularly elegant solution, but there doesn't seem to be any notification available for when a file copy completes.
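For instance, a small check along these lines can be run after each FILE_ACTION_MODIFIED notification (illustrative only):

```cpp
#include <windows.h>

// Try to open the file with no sharing; while the copy is still in progress
// this typically fails with ERROR_SHARING_VIOLATION.
bool IsFileReady(LPCWSTR path)
{
    HANDLE h = CreateFileW(path,
                           GENERIC_READ,
                           0,                    // deny sharing: fails while the copier holds the file
                           nullptr,
                           OPEN_EXISTING,
                           FILE_ATTRIBUTE_NORMAL,
                           nullptr);
    if (h == INVALID_HANDLE_VALUE)
        return false;                            // still being copied (or otherwise locked)
    CloseHandle(h);
    return true;
}
```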
You can process the data once the file is closed, right? So the task is to track when the file is closed. This can be done using a file system filter driver. You can write your own, or you can use our CallbackFilter product.

Win32 C++ ReadDirectoryChangesW: how to detect the difference between "creation" and "modification" of a file?

Here is the problem: I monitor a directory using the Win32 API function ReadDirectoryChangesW, and I need to distinguish between newly created files and modified files. But there are problems... as always :(
Cases:
I monitor the directory for new/modified files (FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_SIZE). Problem: after a file is created, a new-file event plus a modify event is triggered, but I need only one. How can I avoid that? When a file is modified, I get what I want :).
I monitor the directory only for new files (FILE_NOTIFY_CHANGE_FILE_NAME) - no problem.
I monitor the directory only for modified files (FILE_NOTIFY_CHANGE_SIZE). Problem: when a new file is created, a modify action is fired along with the file creation event. How can I avoid that?
Of course, I implemented some workarounds, but I want to know if there is any elegant way of handling the problems I described.
You should be catching FILE_NOTIFY_CHANGE_LAST_WRITE, not FILE_NOTIFY_CHANGE_SIZE, for a modified file. Files may be modified without the size changing.
You should also keep a queue of changes and the times they happened, and only process the queue after there have been no changes for the past 1-2 seconds. Some applications do very strange things when creating or modifying files, and you'll most likely want to special-case popular applications if you plan on using this code in the wild.
ReadDirectoryChanges isn't one of the friendliest winapi functions. You probably can't get around receiving two events on file creation; I'm not completely sure whether you'll get an extra modify for FILE_NOTIFY_CHANGE_LAST_WRITE on creation, but I think you probably will. Using the queue approach will allow you to easily throw out the extra event if it has the same time stamp as the creation event.
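A rough sketch of such a queue, using a simple "quiet for ~2 seconds" rule (the time window is an arbitrary example):

```cpp
#include <chrono>
#include <string>
#include <unordered_map>
#include <vector>

class ChangeDebouncer
{
public:
    // Called from the ReadDirectoryChangesW handler for every notification on a path.
    void onNotification(const std::wstring &path)
    {
        pending_[path] = std::chrono::steady_clock::now();
    }

    // Called periodically (e.g. once a second) from the watcher loop; returns
    // the paths that have had no new notifications for at least two seconds.
    std::vector<std::wstring> takeSettled()
    {
        std::vector<std::wstring> ready;
        const auto now = std::chrono::steady_clock::now();
        for (auto it = pending_.begin(); it != pending_.end(); )
        {
            if (now - it->second > std::chrono::seconds(2)) {
                ready.push_back(it->first);
                it = pending_.erase(it);
            } else {
                ++it;
            }
        }
        return ready;
    }

private:
    std::unordered_map<std::wstring, std::chrono::steady_clock::time_point> pending_;
};
```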