I'm trying to make one of my QML apps "offline capable" - that means I want users to be able to use the application when not connected to the internet.
The main complication is that I'm pulling the QML file containing the UI from one of my HTTP servers, which lets me keep the bulk of the code within reach and easily updatable.
My "main QML file" obviously has external dependencies, such as fonts (using FontLoader), images (using Image) and other QML components (using Loader).
AFAIK all those resources are loaded through the Qt networking stack, so I'm wondering what I'll have to do to make all resources available when offline without having to download them all manually to the device.
Is it possible to do this by tweaking existing/implementing my own cache at Qt/C++ level or am I totally on the wrong track?
Thanks!
A simple solution is to invert the approach: include baseline files within your application's executable/bundle. Upon first startup, copy them to the application's data directory. Then, whenever you have access to your server, you can update the data directory.
All modifications of the data directory should be atomic - they must either completely succeed, or completely fail, without leaving the data directory in an unusable state.
Typically, you'd create a new temporary data folder, copy or hardlink the existing files into it, download whatever has changed, and only once everything checks out swap the old data directory for the new one.
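A minimal sketch of such a swap in Qt/C++ (the directory names and the update step are my assumptions, not a prescribed layout):

#include <QDir>
#include <QString>

// Build the new tree in "data.new", then swap directories only on success.
// Assumes "data" already exists (first startup copied the baseline files there).
bool updateDataDir(const QString &root)
{
    QDir base(root);
    QDir(base.filePath("data.new")).removeRecursively(); // clear leftovers from a failed run
    if (!base.mkpath("data.new"))
        return false;

    // ... copy/hardlink the baseline files and download updated files
    //     into "data.new" here, returning false if anything fails ...

    QDir(base.filePath("data.old")).removeRecursively();
    if (!base.rename("data", "data.old"))        // step 1 of the swap
        return false;
    if (!base.rename("data.new", "data")) {      // step 2 of the swap
        base.rename("data.old", "data");         // roll back step 1
        return false;
    }
    QDir(base.filePath("data.old")).removeRecursively();
    return true;
}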
Letting your application access QML and similar resources directly online is pretty much impossible to get right, unless you insist on explicitly versioning all the resources and putting the version numbers in the URL.
Suppose your application has started and loaded some resources. There is no guarantee the user has visited every QML screen - thus only some resources will have been loaded. QML also makes no guarantees about how often and when resources will be reloaded: it maintains its own caches, after all. Now suppose you update the contents on the server. The user proceeds to explore more of the application after you've made the changes, but now the application they experience is a Frankenstein of older and newer pieces, with no guarantee that those pieces are still meant to work together. It's a bad idea.
Related
The MSI installation would call my (native/C++) custom action functions. Since the DLL is freshly loaded, and the MSIEXEC.EXE process is launched separately for each function (the callable actions, as specified in the MSI/WiX script), I cannot use any global data in my C/C++ program.
How (or Where) can I store some information about the installation going on?
I cannot use named objects (like shared memory), as the "process" that loads the DLL to call the "action" function exits afterwards, and the OS will not keep the named object alive.
I could use an external file to store the information, but then how would I know (in the DLL's function):
When to delete the external file.
When to determine that this function call is the first call (an action/function call scheduled Before="LaunchConditions" may help, but I'm not sure).
If I cannot delete the file, I cannot know whether the "information" is current or stale (i.e. belonging to an earlier failed/succeeded MSI run).
I have heard of "temporary MSI tables", but I am not sure how to utilize them.
Preserve Settings: I am a little confused about what your custom actions do, to be honest. However, it sounds like they preserve settings from an older application and setup version and put them back in place if the MSI fails to install properly?
Migration Suggestion (please seriously consider this option): Could you install your new MSI package and delete all shortcuts and access to the old application whilst leaving it installed instead? Your new application version installs to a new path and a new registry hive, and then you migrate all settings on first launch of the new application and then kick off the uninstall of the old application - somehow - or just leave it installed if that is acceptable? Are there COM servers in your old install? Other things that have global registration?
Custom Action Abstinence: The above is just a suggestion to avoid custom actions. There are many reasons to avoid custom actions (propaganda piece against custom actions). If you migrate settings on application launch you avoid all the sequencing, conditioning, and impersonation issues, along with the technical issues you have already faced (there are many more) associated with custom action use. And crucially you are in a familiar debugging context (application launch code) as opposed to the unfamiliar world of setups and their poor debuggability.
Preserving Settings & Data: With regards to saving data and settings in a running MSI instance, the built in mechanism is basically to set properties using Session.Property (COM / VBScript) or MsiSetProperty (Win32) calls. This allows you to preserve strings inside the MSI's Session object. Sort of global data.
Note that properties can only be set in immediate mode (custom actions that don't change the system), and sending the data to deferred mode custom actions (which can make system changes) is quite involved, centering around the CustomActionData concept (more on deferred mode & CustomActionData).
Essentially you send a string to the deferred mode custom action by means of a SetProperty custom action in immediate mode. Typically this is a "home grown" delimited string that you construct in immediate mode and chew up into information pieces when receiving it in deferred mode. You could also try JSON strings or similar to make the transfer easier and more reliable by serializing and de-serializing objects.
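A rough sketch in C++ of how the two sides could look (the action names, DLL entry points and payload string are all made up for illustration; the property set in immediate mode must have the same name as the deferred action; link against msi.lib):

#include <windows.h>
#include <msi.h>
#include <msiquery.h>

// Immediate mode: pack data into the property whose name equals the name
// of the deferred custom action ("MyDeferredCA" is an assumed name).
extern "C" UINT __stdcall ScheduleData(MSIHANDLE hInstall)
{
    // A home-grown delimited string; split it again on the deferred side.
    MsiSetPropertyW(hInstall, L"MyDeferredCA", L"INSTALLDIR=C:\\App;MODE=upgrade");
    return ERROR_SUCCESS;
}

// Deferred mode: the packed string arrives under the fixed name "CustomActionData".
extern "C" UINT __stdcall ApplyData(MSIHANDLE hInstall)
{
    WCHAR data[1024];
    DWORD len = ARRAYSIZE(data);
    if (MsiGetPropertyW(hInstall, L"CustomActionData", data, &len) == ERROR_SUCCESS) {
        // ... parse "INSTALLDIR=...;MODE=..." and make the system changes ...
    }
    return ERROR_SUCCESS;
}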
Alternatives?: This set property approach is involved. Some people write to and from the registry during installation, or to a temp file (in the temp folder) and then they clean up during the commit phase of MSI, but I don't like this approach for several reasons. For one thing commit custom actions might not run based on policies on target systems (when rollback is disabled, no commit script is created - see "Commit Execution" section), and it isn't best practice. Adding temporary rows is an interesting option that I have never spent much time on. I doubt you would be able to easily use this to achieve what you need, although I don't really know what you need in detail. I haven't used it properly. Quick sample. This RemoveFile example from WiX might be better.
I want the ownership of folders created by my application to remain only with my application.
This is because I am linking my application's data to the folder paths.
So either of these two solutions is fine with me:
Do not allow anybody to modify the folder created by my application.
Only my application can delete/rename the folder. Modifying it through Windows Explorer should require admin rights.
If above solution is not possible, at least my application should be notified of the change so that my application's links are updated.
The question is whether it is possible to do this in Windows.
I feel the language does not play a role, but if it is required: I am using Qt in C++ to develop my application.
EDIT: Now there are 2 cases of being notified:
a. When my application is running and the folder is modified.
b. When my application is NOT running and the folder is modified (this may be achievable if Windows maintains a log of changes to a folder; my application could read this log and understand the changes).
Actually, I meant to ask specifically about case b, but reading the answers now makes me feel that it may not be possible to get notified for case b.
Security in Windows is account-based, not application-based. A folder isn't "created by your application"; it's created by the user running your application.
As for being notified, just keep an open handle. That will prevent the folder from being changed while you're running. Obviously, when you're not running, you can't be notified at all.
[edit]
When your app is not running, you need the NTFS change journal (the USN journal).
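For completeness, a bare-bones sketch of reading the change journal with the Win32 API (error handling trimmed; opening the raw volume requires administrator rights, struct names vary slightly between SDK versions, and a real program would persist the last USN it processed instead of starting at 0):

#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int main()
{
    // Change journals are per-volume, not per-folder.
    HANDLE hVol = CreateFileW(L"\\\\.\\C:", GENERIC_READ,
        FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, 0, NULL);
    if (hVol == INVALID_HANDLE_VALUE)
        return 1;

    USN_JOURNAL_DATA_V0 journal = {};   // older SDKs: USN_JOURNAL_DATA
    DWORD bytes = 0;
    if (!DeviceIoControl(hVol, FSCTL_QUERY_USN_JOURNAL, NULL, 0,
                         &journal, sizeof(journal), &bytes, NULL)) {
        CloseHandle(hVol);
        return 1;
    }

    READ_USN_JOURNAL_DATA_V0 readData = {};  // older SDKs: READ_USN_JOURNAL_DATA
    readData.StartUsn = 0;                   // real code: resume from the last USN you saved
    readData.ReasonMask = USN_REASON_FILE_DELETE | USN_REASON_RENAME_NEW_NAME;
    readData.UsnJournalID = journal.UsnJournalID;

    char buffer[4096];
    if (DeviceIoControl(hVol, FSCTL_READ_USN_JOURNAL, &readData, sizeof(readData),
                        buffer, sizeof(buffer), &bytes, NULL)) {
        // The output starts with the next USN (8 bytes); USN_RECORDs follow.
        USN_RECORD *rec = (USN_RECORD *)(buffer + sizeof(USN));
        while ((char *)rec < buffer + bytes) {
            wprintf(L"%.*s (reason 0x%08lx)\n",
                    (int)(rec->FileNameLength / sizeof(WCHAR)),
                    (WCHAR *)((char *)rec + rec->FileNameOffset),
                    rec->Reason);
            rec = (USN_RECORD *)((char *)rec + rec->RecordLength);
        }
    }
    CloseHandle(hVol);
    return 0;
}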
You can use the QFileSystemWatcher class to monitor files and directories for modifications.
#include <QFileSystemWatcher>
#include <QDebug>

// directoryChanged fires when an entry in the watched folder is created, deleted or renamed.
QFileSystemWatcher *watcher = new QFileSystemWatcher();
watcher->addPath(QStringLiteral("C:\\Folder"));
QObject::connect(watcher, &QFileSystemWatcher::directoryChanged,
                 [](const QString &folder) {
    qDebug() << folder;
});
You can certainly be notified upon directory changes - there's a specific API for it:
https://msdn.microsoft.com/en-us/library/windows/desktop/aa365465%28v=vs.85%29.aspx
It's common to run such monitoring calls in a thread of their own in order to reduce the chance of missing bursts of notifications; how you handle that, and any subsequent buffering/signaling to other threads, is a bit broad for SO.
Asynchronous overlapped operation seems to be supported, but I have not tried/tested it.
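A minimal sketch of the blocking (synchronous) form, to show the shape of the API (the path and the notification filters are illustrative):

#include <windows.h>
#include <stdio.h>

int main()
{
    // FILE_FLAG_BACKUP_SEMANTICS is required to open a directory handle.
    HANDLE hDir = CreateFileW(L"C:\\Folder", FILE_LIST_DIRECTORY,
        FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
        NULL, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL);
    if (hDir == INVALID_HANDLE_VALUE)
        return 1;

    DWORD buffer[2048];   // DWORD-aligned, as FILE_NOTIFY_INFORMATION requires
    DWORD bytes = 0;
    // Blocks until something changes; loop to process each batch of notifications.
    while (ReadDirectoryChangesW(hDir, buffer, sizeof(buffer), TRUE /* recursive */,
            FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_DIR_NAME |
            FILE_NOTIFY_CHANGE_LAST_WRITE,
            &bytes, NULL, NULL)) {
        FILE_NOTIFY_INFORMATION *info = (FILE_NOTIFY_INFORMATION *)buffer;
        for (;;) {
            wprintf(L"action %lu: %.*s\n", info->Action,
                    (int)(info->FileNameLength / sizeof(WCHAR)), info->FileName);
            if (info->NextEntryOffset == 0)
                break;
            info = (FILE_NOTIFY_INFORMATION *)((char *)info + info->NextEntryOffset);
        }
    }
    CloseHandle(hDir);
    return 0;
}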
I am developing an application for a small office to maintain their monetary accounts.
My application can help create a file which can store all the information.
But it should not be accessible to the user other than in my application.
Why? Because somebody may delete the file and all the records will vanish.
The environment is a Windows PC with a single account having admin privileges.
I am developing the application in C++ using the MinGW compiler.
I am drawing a blank right now as to how I can create such a file.
Any suggestions please?
If your application can modify it, then the user under whose credentials it runs can modify it, period. Also, if he has administrator privileges then you can't stop him from deleting stuff, even if your application runs under different credentials and the file is protected by ACLs.
Now, since the problem seems to be not one of security, but of protecting the user from himself, I would just store the file in a location that is "out of sight" enough and be happy with it; write your data in %APPDATA%\yourappname - such a directory is specifically for user-specific application data that is not intended to be touched directly by the user.
If you want to be paranoid you can enable every security setting you can find (hide the directory, protect it with a restrictive ACL when the app is not running, open it for exclusive access, ...), but if you ask me it's just wasted time:
the average user (our target AFAICT) doesn't mess in appdata, since it's a hidden folder to begin with;
the "power user" who messes around, if sufficiently determined to shoot himself in the foot (or voluntarily do damage), will find a way, since the security settings are easily circumventable in your situation (an admin can take ownership of any file and change its ACLs, and use applications like Unlocker to circumvent file locking);
the technician that has legitimate reasons to access the file (e.g. he must take/restore a backup of it) will be frustrated by all these useless precautions.
You can get the actual %APPDATA% path by expanding the corresponding environment variable or via SHGetFolderPath/SHGetKnownFolderPath (or whatever replacement they invented for it in new Windows versions).
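For example, a small sketch with SHGetKnownFolderPath (Vista and later; "YourAppName" is a placeholder):

#include <windows.h>
#include <shlobj.h>
#include <stdio.h>

int main()
{
    PWSTR path = NULL;
    // FOLDERID_RoamingAppData resolves to the current user's %APPDATA%.
    if (SUCCEEDED(SHGetKnownFolderPath(FOLDERID_RoamingAppData, 0, NULL, &path))) {
        wprintf(L"%s\\YourAppName\n", path);  // "YourAppName" is a placeholder
        CoTaskMemFree(path);
    }
    return 0;
}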
Make sure your application loads at Windows boot and opens the file with dwShareMode 0.
Here is an MSDN Example
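The gist of it, as a sketch (the path is hypothetical and error handling is omitted):

#include <windows.h>

int main()
{
    // dwShareMode = 0: while this handle is open, no other process can open
    // the file at all - not for reading, writing or deleting.
    HANDLE h = CreateFileW(L"C:\\ProgramData\\YourApp\\accounts.dat", // hypothetical path
                           GENERIC_READ | GENERIC_WRITE,
                           0,            // the dwShareMode 0 mentioned above
                           NULL, OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return 1;
    // ... keep the handle open for the lifetime of the process ...
    CloseHandle(h);
    return 0;
}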
You would need to give these files their own file extension and perhaps add other security measures (i.e. password-protect the files). If you want these files to be suggested by Windows then you will have to do some work with the registry.
Here's a good source since you're concerned with Windows only:
http://msdn.microsoft.com/en-us/library/windows/desktop/ff513920(v=vs.85).aspx
As far as keeping the data from being deleted: redundancy, my friend, redundancy. Talk to a network administrator about how they keep their data safe. I'd bet money on them naming lots of backups as one of their reasons.
But it should not be accessible to the user other than in my application.
You cannot do that.
Everything that exists on a machine the user has physical access to can be deleted if the user has sufficient determination.
You can protect your file from being deleted while your program is running - on Windows, you can't delete open files. Keep the file open and people won't delete it while your program is running. Instead, they will kill your program via Task Manager and delete the file anyway.
Either that, or you could upload it somewhere. Data that is not located on a physically accessible device cannot be easily deleted by the user. However, somebody will have to run the server (and deal with security + possibly write the server software). In your case it might not be worth it.
I'd suggest documenting the location of the user data in the help file, and you should probably put a "!do not delete this.txt" or something similar into the folder with this file.
I use AVG and it recently detected a virus. It has before ;) but this was the first time I noticed this.
When I went into the folder containing the virus, AVG immediately and automatically detected the virus without me even clicking on the application. So I wondered how it could know a virus was there when I did not even (single-)click on it.
The only possible answer is that it continuously checks which folders are open in all windows and scans the files in them. But how does it see what folder I am viewing?
Please explain (if possible) with a C program that does what ever AVG did.
Also : I use Windows if that helps.
When you open a folder, a bunch of file system operations are executed (you can use tools like FileMon or ProcMon to take a look at this). Your AV software monitors file access.
There are multiple ways to do this monitoring, e.g. Filter Drivers - you can find a great sample at http://www.codeproject.com/Articles/43586/File-System-Filter-Driver-Tutorial
So when you opened the folder, AV software noticed that you opened a directory, consulted its own data, and informed you about the virus.
I say 'consulted its own data', as AV tools usually don't scan files on access - they do it when the files are written to, as it doesn't make sense to scan files which were marked as clean if they haven't changed since the last scan.
Most virus scanners operate on the principle of API hooks/filters. Whenever Windows needs to process a command - like opening a folder, clicking a window, or executing a file - it generates an API call along with some information, like the window coordinates clicked or a string representing a file. Other programs can request a hook into one or more of these functions, which basically says 'instead of executing this function, send it to me first, then I might send it back'. This is how many viruses work (preventing you from deleting them, or copying your keystrokes, for example), how many games/apps work (keyboard, joysticks, drag-and-drop), and how malware detectors and firewalls work.
The latter group hooks the commands, checks any incoming ones to see if they're on the level, then either allows them to resume or blocks them. In this example, opening the folder likely triggered a syscall to parse a directory, and the scanner parsed it too (e.g. 'realtime protection'). To view all of your hookable functions as well as what is using them, google for a free program called 'sanity check' (previously called 'rootkit hook analyzer'). Most of the red entries will be from either Windows Firewall or AVG, so don't worry too much about what you find.
I have a client application (C++, Windows) that opens sockets, connects to a server, makes requests, and receives responses and notifications. It does logging and saves preferences locally. What problems could there be if I try to run multiple instances of this application, which is currently prevented?
Is there a particular problem you are seeing? I.e., is the application crashing when you execute a second instance?
From your description, the second instance could fail to start if it:
Tries to open the same socket the first instance opened
Tries to open the same file the first instance opened
Outside of that, more detail is needed.
Sounds a little bit like a Web browser ;)
And like a typical Web browser, if your application is implemented correctly, you'll be able to run multiple instances fine.
Unfortunately, there are ways to botch the implementation, for example:
Exclusively lock log or configuration files for prolonged periods, thus "stalling" other instances.
Just plain ignore the concurrent access to files, leading to all sorts of possible corruptions.
Act not just as a client but as a server as well, and listen to a hard-coded port (so the second instance will fail while attempting to open the same port).
Incorrectly declare a mutex as "public" (and therefore shared between processes) instead of "private", leading to slow-downs and possibly deadlocks (see the sketch after this list).
There is a limit on the number of GDI handles per session. If your application uses excessive handles, multiple instances taken together might reach that limit, even when each of them individually observes the 10,000-handles-per-process limit.
Be a CPU hog (e.g. through busy waiting). One CPU hog on a modern multicore CPU might pass unnoticed, but once the number of instances exceeds the number of CPU cores that's another story!
Be a memory hog.
Mismanage UI:
Use UI tricks such as "always on top" windows - multiple such windows on the screen at the same time is no fun!
Mismanage the taskbar notification area (e.g. display a tray icon for each instance). It will technically "work", but having an excessive number of tray icons is not pleasant, especially if the application does not also have a "regular" taskbar button.
Etc etc... Essentially whenever there is a shared resource (be it a filesystem, network, CPU, memory, screen or whatever), care must be taken when concurrently using it.
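To illustrate the mutex point from the list above: on Windows the difference is simply whether the mutex object has a name (the name below is made up):

#include <windows.h>

int main()
{
    // A named mutex is visible to other processes: every instance that passes
    // the same name shares one kernel object and serializes against the others.
    HANDLE shared = CreateMutexW(NULL, FALSE, L"MyApp.GlobalLock"); // assumed name

    // An unnamed mutex is private to this process; other instances get their own.
    HANDLE priv = CreateMutexW(NULL, FALSE, NULL);

    // If the named variant was only ever meant to guard in-process data, the
    // accidental cross-process sharing causes the slow-downs described above.
    CloseHandle(priv);
    CloseHandle(shared);
    return 0;
}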
If your application opens a port for listening, only one instance can use that particular port. If the application is connecting to a remote host, the OS will pick the next available local port, so multiple instances can run in parallel in this case.
If all instances share the same log and/or configuration file, parallel writes might corrupt those files, so write operations should be protected by some synchronisation object (e.g. a mutex).
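One way to do that in Qt - QLockFile is my suggestion here, not the only option; a named mutex works just as well:

#include <QLockFile>
#include <QFile>
#include <QTextStream>

// Serialize log writes across instances with a lock file next to the log.
void appendLog(const QString &logPath, const QString &line)
{
    QLockFile lock(logPath + QStringLiteral(".lock"));
    if (!lock.tryLock(1000))   // wait up to 1 s for other instances
        return;                // or queue/drop the message
    QFile f(logPath);
    if (f.open(QIODevice::Append | QIODevice::Text))
        QTextStream(&f) << line << '\n';
}   // file and lock are released on scope exit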
By problems I presume you mean that multiple instances do not each create their own workspace for logging and preferences, which would result in one instance overwriting and accessing data made by the other, with undesired and unpredictable results.
If you have access to the source code of the application, I would suggest extending the application to create a folder whose name contains a timestamp plus a random number to hold the session data - i.e. the logs and the preferences. This way, multiple instances can operate without interfering with one another.
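A sketch of that idea in Qt (the folder naming and location are illustrative; QRandomGenerator requires Qt 5.10 or later):

#include <QDir>
#include <QDateTime>
#include <QRandomGenerator>
#include <QStandardPaths>

// Create a per-instance session directory named with a timestamp plus a
// random number, as suggested above.
QString createSessionDir()
{
    const QString base =
        QStandardPaths::writableLocation(QStandardPaths::AppDataLocation);
    const QString name = QStringLiteral("session-%1-%2")
        .arg(QDateTime::currentDateTime().toString(QStringLiteral("yyyyMMdd-hhmmss")))
        .arg(QRandomGenerator::global()->bounded(100000));
    QDir dir(base);
    return dir.mkpath(name) ? dir.filePath(name) : QString();
}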
However bear in mind that some preferences may be best made global - to save you having to set the preferences each time you load a new instance. It depends on your application and what it is doing as to what these global preferences may be.
If you don't have access to the source, then the other option for multiple instances would be via virtualisation: multiple OSs on the same machine, each running one instance of the app.