Notify my application when a folder is modified in Windows? - c++

I want the ownership of folders created by my application to remain only with my application.
This is because I am linking my application's data to the folders path.
So either of these two solutions would be fine with me:
1. Do not allow anybody to modify the folder created by my application. Only my application can delete/rename the folder; modifying it through Windows Explorer should require admin rights.
2. If the above is not possible, my application should at least be notified of the change so that my application's links can be updated.
The question is whether it is possible to do this in Windows.
I feel language does not play a role, but still if required, I am using Qt in C++ for developing my application.
EDIT: Now there are 2 cases of being notified:
a. When my application is running and the folder is modified.
b. When my application is NOT running and the folder is modified. (This may be achievable if Windows maintains a log of changes to a folder; my application could read this log and reconstruct the changes.)
Actually, I meant to ask specifically for case b, but now reading the answers makes me feel that it may not be possible to get notified for case b.

Security in Windows is account-based, not application-based. A folder isn't "created by your application"; it's created by the user running your application.
As for being notified, just keep an open handle. That will prevent the folder from being changed while you're running. Obviously, when you're not running, you can't be notified.
[edit]
When your app is not running, you need NTFS change journals.
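A minimal sketch of reading the change journal, assuming an NTFS C: volume; this requires administrator rights and reasonably recent SDK headers (for the _V0 struct names), and error handling is abbreviated:

#include <windows.h>
#include <winioctl.h>
#include <cstdio>

int main() {
    // Open the volume itself, not a directory on it.
    HANDLE hVol = CreateFileW(L"\\\\.\\C:", GENERIC_READ,
                              FILE_SHARE_READ | FILE_SHARE_WRITE,
                              nullptr, OPEN_EXISTING, 0, nullptr);
    if (hVol == INVALID_HANDLE_VALUE) return 1;

    USN_JOURNAL_DATA_V0 journal = {};
    DWORD bytes = 0;
    if (!DeviceIoControl(hVol, FSCTL_QUERY_USN_JOURNAL, nullptr, 0,
                         &journal, sizeof(journal), &bytes, nullptr))
        return 1;

    READ_USN_JOURNAL_DATA_V0 read = {};
    read.StartUsn = journal.FirstUsn;   // replay from the oldest available record
    read.ReasonMask = USN_REASON_RENAME_NEW_NAME | USN_REASON_FILE_DELETE;
    read.UsnJournalID = journal.UsnJournalID;

    BYTE buffer[8192];
    if (DeviceIoControl(hVol, FSCTL_READ_USN_JOURNAL, &read, sizeof(read),
                        buffer, sizeof(buffer), &bytes, nullptr)) {
        // The buffer starts with the next USN to read from, followed by records.
        BYTE *p = buffer + sizeof(USN);
        while (p < buffer + bytes) {
            USN_RECORD *rec = reinterpret_cast<USN_RECORD *>(p);
            wprintf(L"%.*s reason=0x%08lx\n",
                    (int)(rec->FileNameLength / sizeof(WCHAR)),
                    reinterpret_cast<WCHAR *>(
                        reinterpret_cast<BYTE *>(rec) + rec->FileNameOffset),
                    rec->Reason);
            p += rec->RecordLength;
        }
    }
    CloseHandle(hVol);
    return 0;
}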

You can use the QFileSystemWatcher class to monitor files and directories for modifications.
#include <QFileSystemWatcher>
#include <QDebug>

// Watch a directory and log its path whenever its contents change.
QFileSystemWatcher *watcher = new QFileSystemWatcher();
watcher->addPath(QStringLiteral("C:\\Folder"));
QObject::connect(watcher, &QFileSystemWatcher::directoryChanged,
                 [](const QString &folder) {
    qDebug() << "Directory changed:" << folder;
});

You can certainly be notified upon directory changes - there's a specific API for it:
https://msdn.microsoft.com/en-us/library/windows/desktop/aa365465%28v=vs.85%29.aspx
It's common to run such monitoring calls in a thread of their own in order to reduce the chance of missing bursts of notifications; how you handle that, and any subsequent buffering/signaling to other threads, is a bit broad for SO.
Asynchronous overlapped operation seems to be supported, but I have not tried/tested it.
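A minimal synchronous sketch of that API (ReadDirectoryChangesW), assuming the folder C:\Folder; real code would run this loop in its own thread as noted above:

#include <windows.h>
#include <cstdio>

int main() {
    // FILE_FLAG_BACKUP_SEMANTICS is required to open a directory handle.
    HANDLE hDir = CreateFileW(L"C:\\Folder", FILE_LIST_DIRECTORY,
                              FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                              nullptr, OPEN_EXISTING,
                              FILE_FLAG_BACKUP_SEMANTICS, nullptr);
    if (hDir == INVALID_HANDLE_VALUE) return 1;

    alignas(DWORD) BYTE buffer[4096];
    DWORD bytes = 0;
    while (ReadDirectoryChangesW(hDir, buffer, sizeof(buffer), TRUE,
                                 FILE_NOTIFY_CHANGE_FILE_NAME |
                                 FILE_NOTIFY_CHANGE_DIR_NAME |
                                 FILE_NOTIFY_CHANGE_LAST_WRITE,
                                 &bytes, nullptr, nullptr)) {
        const FILE_NOTIFY_INFORMATION *info =
            reinterpret_cast<const FILE_NOTIFY_INFORMATION *>(buffer);
        for (;;) {
            // FileNameLength is in bytes and the name is not null-terminated.
            wprintf(L"action=%lu name=%.*s\n", info->Action,
                    (int)(info->FileNameLength / sizeof(WCHAR)), info->FileName);
            if (info->NextEntryOffset == 0) break;
            info = reinterpret_cast<const FILE_NOTIFY_INFORMATION *>(
                reinterpret_cast<const BYTE *>(info) + info->NextEntryOffset);
        }
    }
    CloseHandle(hDir);
    return 0;
}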

Related

Redirect stdout from Executable Custom Action to MSI log

I have a Custom Action that runs an executable within an MSI installer package. The exe is compiled as a console application and writes the necessary info to stdout.
1. I want that output redirected to the MSI log file.
2. I don't want the console to be shown during the installation.
For number 2 I suppose I can build the exe with the Windows subsystem, which will not open a console at all. But then no output will be shown even if I run the exe from a terminal (PowerShell/CMD).
For number 1 I thought of running the executable as a subprocess called within a Custom Action DLL, but that is not possible since the exe is stored in the Binary table and won't be generated when I need it. Moreover, it will have a random name.
The Custom Action's logic MUST be run as a separate process.
EDIT: Some colleagues wrote a free guide on installation testing. Maybe it will be useful in the future, to avoid such costly mistakes.
I don't think you can do it if you want to run the custom action as a separate process. I might be wrong, but I never tried this and it doesn't seem possible.
Basically, the MSIEXEC process will own the handle of the log file created by the installation and I don't think you can share it with a separate process.
Why do you need to use a separate custom action process?
As a test - you could try to create an additional DLL custom action, that runs asynchronously. The purpose of this custom action is simply to communicate with your EXE process and write inside the log file any information you want to pass from the EXE custom action. I never tried this approach, but if you have time to kill and really need the main logic to remain in the EXE custom action, you could give it a try.
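A rough sketch of the DLL side of that idea, with a hypothetical entry point LogFromDll: a DLL custom action receives the install handle and can write to the MSI log via MsiProcessMessage, which is exactly what a separate EXE process lacks.

#include <windows.h>
#include <msi.h>
#include <msiquery.h>

// Hypothetical DLL custom action entry point; the name LogFromDll and the
// message text are illustrations, not part of any existing project.
extern "C" UINT __stdcall LogFromDll(MSIHANDLE hInstall) {
    PMSIHANDLE hRecord = MsiCreateRecord(1);
    // Field 0 is the format template; [1] is replaced by field 1.
    MsiRecordSetStringW(hRecord, 0, L"MyExeAction: [1]");
    MsiRecordSetStringW(hRecord, 1, L"output captured from the EXE process");
    MsiProcessMessage(hInstall, INSTALLMESSAGE_INFO, hRecord);
    return ERROR_SUCCESS;
}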

Possible to make QML application "offline capable" using caches?

I'm trying to make one of my QML apps "offline capable" - that means I want users to be able to use the application when not connected to the internet.
The main problem I'm seeing is the fact that I'm pretty much pulling a QML file with the UI from one of my HTTP servers, allowing me to keep the bulk of the code within reach and easily updatable.
My "main QML file" obviously has external dependencies, such as fonts (using FontLoader), images (using Image) and other QML components (using Loader).
AFAIK all those resources are loaded through the Qt networking stack, so I'm wondering what I'll have to do to make all resources available when offline without having to download them all manually to the device.
Is it possible to do this by tweaking existing/implementing my own cache at Qt/C++ level or am I totally on the wrong track?
Thanks!
A simple solution is to invert the approach: include baseline files within your application's executable/bundle. Upon first startup, copy them to the application's data directory. Then, whenever you have access to your server, you can update the data directory.
All modifications of the data directory should be atomic - they must either completely succeed, or completely fail, without leaving the data directory in an unusable state.
Typically, you'd create a new, temporary data folder, copy/hardlink the existing files there, download what's needed, and only once everything checks out swap the old data directory with the new one.
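A minimal sketch of the first-run copy, assuming Qt 5.4+ (for QStandardPaths::AppDataLocation), a hypothetical resource prefix ":/baseline" for the bundled files, and main.qml as the marker of a completed deployment:

#include <QStandardPaths>
#include <QDirIterator>
#include <QFileInfo>
#include <QDir>
#include <QFile>

static void deployBaseline() {
    const QString dataDir =
        QStandardPaths::writableLocation(QStandardPaths::AppDataLocation);
    if (QFile::exists(dataDir + "/main.qml"))
        return;  // already deployed; later updates go through the atomic swap
    QDirIterator it(QStringLiteral(":/baseline"), QDirIterator::Subdirectories);
    while (it.hasNext()) {
        const QString src = it.next();
        if (!QFileInfo(src).isFile())
            continue;
        const QString rel = src.mid(QStringLiteral(":/baseline/").size());
        const QString dst = dataDir + "/" + rel;
        QDir().mkpath(QFileInfo(dst).absolutePath());
        QFile::copy(src, dst);  // copies of resources are read-only by default
    }
}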
Letting your application access QML and similar resources directly online is pretty much impossible to get right, unless you insist on explicitly versioning all the resources and having the version numbers in the url.
Suppose your application was started and has loaded some resources. There is no guarantee that the user has gone to all the QML screens - thus only some resources will be loaded. QML also makes no guarantees as to how often and when the resources will be reloaded: it maintains its own caches, after all. Suppose you then update the contents on the server. The user proceeds to explore more of the application after you've made the changes, but now the application he experiences is a Frankenstein of older and newer pieces, with no guarantees that these pieces are still meant to work together. It's a bad idea.

Creating a file accessible to only my application in C++?

I am developing an application for a small office to maintain their monetary accounts.
My application can help create a file which can store all the information.
But it should not be accessible to the user other than in my application.
Why? Because somebody may delete the file & all the records will vanish.
The environment is a Windows PC with a single account having admin privileges.
I am developing the application in C++ using the MinGW compiler.
I am sort of blank right now, as to how I can create such a file.
Any suggestions please?
If your application can modify it, then the user under whose credentials it runs can modify it, period. Also, if he has administrator privileges then you can't stop him from deleting stuff, even if your application runs under different credentials and the file is protected by ACLs.
Now, since the problem seems to be not of security, but of protecting the user from himself, I would just store the file in a location that is "out of sight" enough and be happy with it; write your data in %APPDATA%\yourappname; such a directory is specifically for user-specific application data that is not intended to be touched directly by the user.
If you want to be paranoid you can enable every security setting you can find (hide the directory, protect it with a restrictive ACL when the app is not running, open it for exclusive access, ...), but if you ask me it's just wasted time:
the average user (our target AFAICT) doesn't mess in appdata, since it's a hidden folder to begin with;
the "power user" who messes around, if sufficiently determined to shoot himself in the foot (or voluntarily do damage), will find a way, since the security settings are easily circumventable in your situation (an admin can take ownership of any file and change its ACLs, and use applications like Unlocker to circumvent file locking);
the technician that has legitimate reasons to access the file (e.g. he must take/restore a backup of it) will be frustrated by all these useless precautions.
You can get the actual %APPDATA% path by expanding the corresponding environment variable or via SHGetFolderPath/SHGetKnownFolderPath (or whatever replacement they invented for it in new Windows versions).
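A minimal sketch using SHGetFolderPath (the older of the two APIs, but the one most likely available under MinGW); "yourappname" is a placeholder:

#include <windows.h>
#include <shlobj.h>
#include <cstdio>

int main() {
    WCHAR path[MAX_PATH];
    // CSIDL_APPDATA resolves to the per-user roaming %APPDATA% directory.
    if (SUCCEEDED(SHGetFolderPathW(nullptr, CSIDL_APPDATA, nullptr,
                                   SHGFP_TYPE_CURRENT, path))) {
        wprintf(L"%s\\yourappname\n", path);  // placeholder app folder name
    }
    return 0;
}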
Make sure your application loads on Windows boot and opens the file with dwShareMode 0.
Here is an MSDN Example
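A sketch of that exclusive open, with an assumed example path; dwShareMode = 0 means no other process can open the file while this handle stays open:

#include <windows.h>

int main() {
    // Example path - adjust to wherever your application stores its data.
    HANDLE h = CreateFileW(L"C:\\ProgramData\\MyApp\\accounts.dat",
                           GENERIC_READ | GENERIC_WRITE,
                           0,            // dwShareMode = 0: exclusive access
                           nullptr, OPEN_ALWAYS,
                           FILE_ATTRIBUTE_NORMAL, nullptr);
    if (h == INVALID_HANDLE_VALUE) return 1;
    // ... keep h open for the lifetime of the process ...
    CloseHandle(h);
    return 0;
}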
You would need to give these files their own file extension and perhaps other security measures (i.e. passwords for the files). If you want these files to be suggested by Windows then you will have to do some work with the registry.
Here's a good source since you're concerned with Windows only:
http://msdn.microsoft.com/en-us/library/windows/desktop/ff513920(v=vs.85).aspx
As far as keeping the data from being deleted: redundancy, my friend, redundancy. Talk to a network administrator about how they keep their data safe. I'd bet money on them naming lots of backups as one of their reasons.
But it should not be accessible to the user other than in my application.
You cannot do that.
Everything that exists on a machine the user has physical access to can be deleted if the user has sufficient determination.
You can protect your file from being deleted while your program is running - on Windows, you can't delete open files. Keep the file open and people won't delete it while your program is running. Instead, they will kill your program via Task Manager and delete the file anyway.
Either that, or you could upload it somewhere. Data that is not located on a physically accessible device cannot be easily deleted by the user. However, somebody will have to run the server (and deal with security + possibly write server software). In your case it might not be worth it.
I'd suggest documenting the location of user data in the help file, and you should probably put "!do not delete this.txt" or something similar into the folder with this file.

Why is build time of local application affected by network?

Building an XPages application containing several JARs, Java sources and ~50 XP/CC elements takes about a minute on a server via WAN. I have replicated the application locally, and build time dropped to ~10 s.
Since a few days ago, building the local application is extremely slow, about 2-5 minutes. After some experiments there is a workaround: disabling the TCP port in the location document drops build times to just a few seconds. Even though it works, it does not help much - testing requires the user to be authenticated, so I need to replicate design changes to a remote or local server - and that means changing the location (online/offline) every time.
UPDATE 2013-04-04: I have duplicated my current location document and removed the home and directory servers. To my surprise, with this location build times went back to a few seconds - with the TCP port enabled, so replication is possible. A bigger surprise was the fact that returning the home/directory servers to the new location did not reproduce the problem - in fact they do not affect performance. I know this because I renamed the current location document and everything went back to normal. From my understanding, "something" in the client configuration was connected to the location name. Thanks to Simon's tips I will investigate further.
The question is still open: I am looking for some (Eclipse) preference controlling this behavior - unintended communication with the server during build of a local application.
Solution:
Teamstudio CIAO hooks into Designer and checks every update of a design element. This seems like a lack of code optimization to me: it checks whether the currently built design element (every single one, one by one) should be controlled in the CIAO config database.
This explains why the problem was solved by renaming the location document. I was disappointed yesterday, when the performance problems started again. Fortunately, I recalled setting up CIAO for that location document around that time. CIAO uses the teamstudio.ini file in the DATA directory to configure which CIAO configuration database is used for every location document. Look for the entry:
CIAOConfigDb[location name]=server name;CIAO\CIAOConfig.nsf
For development on local replicas with connection to server (for replication or local server), use location document with CIAO disabled.
This works only with property ForceConfigLocation=0.
Not a solution (yet!), but may help in the investigation. I'll update further if you post results later.
Debug instructions.
Add the following to the shortcut that launches the Designer client.
-RPARAMS -console -debug -separateSysLogFiles -consoleLog
Start the designer client. This will also open up the OSGi console.
Reproduce the issue. While it is still in progress, type the following in the OSGi console:
dump threads
Do this three times, with a small amount of time between completion of each dump. Once done open the three heap dumps (in the IBM_TECHNICAL_SUPPORT folder) in the Heap Dump Analyser.
It will show you what threads are consistent through all three dumps. Take a look at those and look for package names/calls which may appear to be a functional area. Once you have that then you can try adding the debug for the related class.
For example: Let's say you notice "com.ibm.designer.domino.ui.commons." in the thread, then you would edit the rcpinstall.properties file. It will be in:
<Notes Install>\Data\workspace\.config\rcpinstall.properties
and you would add (start with FINE, then FINEST if nothing):
com.ibm.designer.domino.ui.commons.level=FINE
Now when you restart the designer client it will generate debug output in the workspace\logs folder for that package. You need to then go through the trace logs looking for the time when the delay occurred and see if it makes any references to related design elements.
Other open applications may get built at the same time (which looks like a bug to me). Be sure to close all other applications and the server-based replica. Open applications have their icon showing in the application list and they stay open even if you close and reopen Designer. In Designer 9, right-click the application and select "Close Application". In 8.5 you need to use Package Explorer for closing.
Another good way is to use Working Sets. Only applications in open Working Set will be built (AFAIK). Have a Working Set with this one app only (and the app only in this Working Set).
update 1
If these don't help I would delete/rename bookmark.nsf, Cache.NDK and desktop8.ndk. Then open just this one app and see what happens.
update 2
Check that there are no referenced projects. Right-click the application and select "Project Properties". From there go to "Project References" and make sure no checkboxes are checked.
update 3
Based on your update I would check the item names starting with $ in location document. Sometimes there are saved IP addresses etc. which could cause this problem. All those items can be removed.
If possible (and if you are not using it yet), try to use version 9 of Domino Designer (you do not have to use Domino 9 to do that - it works fine with Domino 8.5.3).
For our projects, build times went down from a few minutes to only a few seconds. I guess they finally noticed at IBM that the build process used to rely heavily on a connection to the server, and did something about it.
With the new Designer you don't even have to replicate to local. You can work directly on your local server.

watchdog in vc++ application

I have written a simple VC++ background application. What I am trying to build is a watchdog service that can monitor whether the application is running or not. If the application crashes, the service should start the application again.
For creating a setup through Windows Installer I am using only app.exe and app.dll.
Is it possible to create this watchdog service in the exe itself?
Unfortunately I have no idea how to write such a program. Does anyone have some example code that would demonstrate this technique please?
If so, then how do I package the default exe and the watchdog exe so they install as a single application?
Your best route would be to create a separate service to act as the watchdog. Technically, it's possible to have the service and the "real application" in the same executable. You can differentiate between the two depending on how the exe has been started, but it will make maintenance quite difficult.
This article might be of interest.
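A minimal sketch of that separate-service route; the service name L"MyAppWatchdog" is a placeholder, and the monitoring loop itself would live in ServiceMain:

#include <windows.h>

static SERVICE_STATUS_HANDLE g_statusHandle;
static SERVICE_STATUS g_status = { SERVICE_WIN32_OWN_PROCESS, SERVICE_START_PENDING };

static void WINAPI CtrlHandler(DWORD ctrl) {
    if (ctrl == SERVICE_CONTROL_STOP) {
        g_status.dwCurrentState = SERVICE_STOP_PENDING;
        SetServiceStatus(g_statusHandle, &g_status);
        // signal the monitoring loop to exit here
    }
}

static void WINAPI ServiceMain(DWORD, LPWSTR *) {
    g_statusHandle = RegisterServiceCtrlHandlerW(L"MyAppWatchdog", CtrlHandler);
    g_status.dwCurrentState = SERVICE_RUNNING;
    g_status.dwControlsAccepted = SERVICE_ACCEPT_STOP;
    SetServiceStatus(g_statusHandle, &g_status);
    // ... run the watchdog/monitoring loop here ...
}

int wmain() {
    SERVICE_TABLE_ENTRYW table[] = {
        { const_cast<LPWSTR>(L"MyAppWatchdog"), ServiceMain },
        { nullptr, nullptr }
    };
    // Blocks until the service stops; fails if run as a normal console app.
    StartServiceCtrlDispatcherW(table);
    return 0;
}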
Here - http://yadi.sk/d/EtzBRSMi3FqVH - is my implementation of a WatchDog app, working in the systray. Do not mind that it's written with Qt - the main functionality uses WinAPI.
This app watches the process list for several processes and restarts them if it can't find them. The second feature is that it monitors all windows in the system for suspicious window titles (for example "'My Great App' caused a system error and will be closed. Send a message to the developers?") and, if it finds one, restarts the corresponding process too.
P.S. I didn't i18n it, but I think there will be no trouble )
Update (according to @CodyGray's comment):
Here are the Pastebin links: WatchDog.cpp and WatchDog.h
Such a watchdog can be set up so that, for example, the monitored application writes to a file every minute (or whatever). If the file hasn't been updated in two or more minutes, there is most likely a deadlock in the application and it has to be restarted.
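A minimal sketch of that heartbeat check; the paths and the 2-minute threshold are illustrative assumptions:

#include <windows.h>

int main() {
    const wchar_t *heartbeat = L"C:\\ProgramData\\MyApp\\heartbeat.txt";  // example path
    const wchar_t *appPath   = L"C:\\Program Files\\MyApp\\app.exe";      // example path
    for (;;) {
        Sleep(60 * 1000);  // check once a minute
        WIN32_FILE_ATTRIBUTE_DATA fad;
        FILETIME now;
        GetSystemTimeAsFileTime(&now);
        bool stale = true;
        if (GetFileAttributesExW(heartbeat, GetFileExInfoStandard, &fad)) {
            ULONGLONG nowTicks  = (ULONGLONG(now.dwHighDateTime) << 32) | now.dwLowDateTime;
            ULONGLONG lastTicks = (ULONGLONG(fad.ftLastWriteTime.dwHighDateTime) << 32)
                                  | fad.ftLastWriteTime.dwLowDateTime;
            // FILETIME counts 100 ns units; two minutes = 1,200,000,000 ticks.
            stale = (nowTicks - lastTicks) > 2ULL * 60 * 10000000ULL;
        }
        if (stale) {
            STARTUPINFOW si = { sizeof(si) };
            PROCESS_INFORMATION pi = {};
            if (CreateProcessW(appPath, nullptr, nullptr, nullptr, FALSE,
                               0, nullptr, nullptr, &si, &pi)) {
                CloseHandle(pi.hThread);
                CloseHandle(pi.hProcess);
            }
        }
    }
}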