I have an application where one or more tabs pass "display requests" (i.e. requests to display data relevant to some data query) to a dedicated tab. That tab is launched using a known window name, and its initial HTML is written using win.document.write calls. The request data is passed to it via localStorage, and the dedicated tab listens for the associated storage events.
This all works fine, even when running under the local-file protocol (file:///)... except in IE11. I have a commitment to support IE11 (I don't care about older versions), and the local-file scenario is used by my customers before they deploy their working configurations to their websites.
My question is whether there's a reliable alternative mechanism that I can fall back to with IE11 when using file:///.
My library code has provision for a fall-back mechanism if localStorage is undefined, but with IE11 under file:/// it seems that everything fails. I have tried cookies (obviously using a different kind of "nudge" than storage events) and postMessage, but all have fallen foul of poorly documented IE limitations.
My data requests are not of a fixed size but I could happily justify a limitation to about 1K if necessary. Any suggestions for how to pass such textual data would be very welcome.
[UPDATE: I have tried directly manipulating data in the other window object, but that really needs a user-defined (or innocuous) event for synchronisation. Most people would recommend storage or message events for that, but then I'd be back to the same problem. I have also tried using URL fragments, which at least have their own hashchange event, but there are size limits that depend on the user's browser and are undocumented in most cases.]
The MSI installation calls my (native/C++) custom action functions. Since the DLL is freshly loaded, and the MSIEXEC.EXE process is launched separately for each function (the callable actions, as specified in the MSI/WiX script), I cannot use any global data in the C/C++ program.
How (or where) can I store some information about the installation in progress?
I cannot use named objects (like shared memory), as the "process" that launches the DLL to call the "action" function would exit, and the OS will not keep the named object alive.
I may use an external file to store the information, but then how would I know (in the DLL's function):
When to delete the external file.
How to tell that this function call is the first call (an action scheduled Before="LaunchConditions" may help; I'm not very sure).
If I cannot delete the file, I cannot know whether the "information" is current or stale (i.e. belonging to an earlier failed/succeeded MSI run).
"Temporary MSI tables" I have heard of, but not sure how to utilize it.
Preserve Settings: I am a little confused about what your custom actions do, to be honest. However, it sounds like they preserve settings from an older application and setup version, and put them back in place if the MSI fails to install properly?
Migration Suggestion (please seriously consider this option): Could you install your new MSI package and delete all shortcuts and access to the old application whilst leaving it installed instead? Your new application version installs to a new path and a new registry hive, and then you migrate all settings on first launch of the new application and then kick off the uninstall of the old application - somehow - or just leave it installed if that is acceptable? Are there COM servers in your old install? Other things that have global registration?
Custom Action Abstinence: The above is just a suggestion to avoid custom actions. There are many reasons to avoid custom actions (propaganda piece against custom actions). If you migrate settings on application launch you avoid all sequencing, conditioning, and impersonation issues, along with the technical issues you have already faced (there are many more) associated with custom action use. And crucially you are in a familiar debugging context (application launch code) as opposed to the unfamiliar world of setups and their poor debuggability.
Preserving Settings & Data: With regards to saving data and settings in a running MSI instance, the built in mechanism is basically to set properties using Session.Property (COM / VBScript) or MsiSetProperty (Win32) calls. This allows you to preserve strings inside the MSI's Session object. Sort of global data.
Note that properties can only be set in immediate mode (custom actions that don't change the system), and sending the data to deferred-mode custom actions (which can make system changes) is quite involved, centering on the CustomActionData concept (more on deferred mode & CustomActionData).
Essentially you send a string to the deferred-mode custom action by means of a SetProperty custom action in immediate mode - typically a "home grown" delimited string that you construct in immediate mode and split into information pieces when receiving it in deferred mode. You could also use JSON strings or similar to make the transfer easier and more reliable by serializing and de-serializing objects.
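A minimal sketch of that round trip in native C++ (the action name MyDeferredCA, the property payload, and the delimiter are my own illustrations, not from the original answer):

    #include <windows.h>
    #include <msi.h>
    #include <msiquery.h>
    // Link against msi.lib. Both functions are exported custom actions.

    // Immediate mode: stage data for the deferred action. The property
    // name must match the deferred custom action's name exactly.
    extern "C" UINT __stdcall StageData(MSIHANDLE hInstall)
    {
        // "Home grown" delimited string; pick a delimiter that cannot
        // appear in the data itself.
        UINT rc = MsiSetPropertyW(hInstall, L"MyDeferredCA",
                                  L"Server=example;Port=8080;Mode=full");
        return (rc == ERROR_SUCCESS) ? ERROR_SUCCESS : ERROR_INSTALL_FAILURE;
    }

    // Deferred mode: the staged string arrives in the reserved
    // property "CustomActionData" - nothing else is readable here.
    extern "C" UINT __stdcall MyDeferredCA(MSIHANDLE hInstall)
    {
        WCHAR buf[1024];
        DWORD len = 1024; // size in characters, including the terminator
        if (MsiGetPropertyW(hInstall, L"CustomActionData", buf, &len)
                != ERROR_SUCCESS)
            return ERROR_INSTALL_FAILURE;
        // ... split buf on ';' and act on the pieces ...
        return ERROR_SUCCESS;
    }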
Alternatives?: This set property approach is involved. Some people write to and from the registry during installation, or to a temp file (in the temp folder) and then they clean up during the commit phase of MSI, but I don't like this approach for several reasons. For one thing commit custom actions might not run based on policies on target systems (when rollback is disabled, no commit script is created - see "Commit Execution" section), and it isn't best practice. Adding temporary rows is an interesting option that I have never spent much time on. I doubt you would be able to easily use this to achieve what you need, although I don't really know what you need in detail. I haven't used it properly. Quick sample. This RemoveFile example from WiX might be better.
I'm trying to make one of my QML apps "offline capable" - that means I want users to be able to use the application when not connected to the internet.
The main problem I'm seeing is the fact that I'm pretty much pulling a QML file with the UI from one of my HTTP servers, allowing me to keep the bulk of the code within reach and easily updatable.
My "main QML file" obviously has external dependencies, such as fonts (using FontLoader), images (using Image) and other QML components (using Loader).
AFAIK all those resources are loaded through the Qt networking stack, so I'm wondering what I'll have to do to make all resources available when offline without having to download them all manually to the device.
Is it possible to do this by tweaking existing/implementing my own cache at Qt/C++ level or am I totally on the wrong track?
Thanks!
A simple solution is to invert the approach: include baseline files within your application's executable/bundle. Upon first startup, copy them to the application's data directory. Then, whenever you have access to your server, you can update the data directory.
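A minimal sketch of that first-run seeding in Qt/C++ (the :/baseline resource prefix and the main.qml marker file are assumptions for illustration):

    #include <QDir>
    #include <QFile>
    #include <QStandardPaths>

    // Copy the baseline files bundled into the binary's resources to
    // the writable data directory, once, on first startup.
    static void seedDataDirectory()
    {
        const QString dataPath =
            QStandardPaths::writableLocation(QStandardPaths::AppDataLocation);
        QDir dataDir(dataPath);
        if (dataDir.exists("main.qml"))  // already seeded on a previous run
            return;
        dataDir.mkpath(".");
        const QDir baseline(":/baseline");  // resources compiled into the app
        for (const QString &name : baseline.entryList(QDir::Files)) {
            QFile::copy(baseline.filePath(name), dataDir.filePath(name));
            // Files copied out of resources are read-only; make them
            // writable so later server updates can replace them.
            QFile::setPermissions(dataDir.filePath(name),
                                  QFile::ReadOwner | QFile::WriteOwner);
        }
    }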
All modifications of the data directory should be atomic - they must either completely succeed, or completely fail, without leaving the data directory in an unusable state.
Typically, you'd create a new, temporary data folder, copy/hardlink the existing files there, download what's needed, and only once everything checks out swap the old data directory for the new one.
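A rough sketch of that swap with QDir (directory names are illustrative; two renames are not truly atomic, but this is as close as a plain filesystem gets, and it never leaves the live directory half-updated):

    #include <QDir>

    // "data.new" has been fully prepared and verified beforehand.
    // Either both renames succeed and "data" is the new tree, or we
    // roll back and "data" is left untouched.
    bool swapDataDir(const QString &root)
    {
        QDir d(root);
        QDir(root + "/data.old").removeRecursively(); // clear stale leftovers
        if (!d.rename("data", "data.old"))
            return false;
        if (!d.rename("data.new", "data")) {
            d.rename("data.old", "data");  // roll back
            return false;
        }
        QDir(root + "/data.old").removeRecursively();
        return true;
    }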
Letting your application access QML and similar resources directly online is pretty much impossible to get right, unless you insist on explicitly versioning all the resources and having the version numbers in the url.
Suppose your application was started and has loaded some resources. There are no guarantees that the user has gone to all the QML screens - thus only some resources will be loaded. QML also makes no guarantees as to how often and when the resources will be reloaded: it maintains its own caches, after all. Then, at some point, you update the contents on the server. The user proceeds to explore more of the application after you've made the changes, but now the application they experience is a frankenstein of older and newer pieces, with no guarantees that these pieces are still meant to work together. It's a bad idea.
I've tried the Win32_DesktopMonitor and checked the "Availability", but the value returned is always 3 (powered on), even when the monitor is physically turned off.
Is the data cached and is there a "force refresh" command in WMI, or in this particular case is "Availability" just not reliable?
I think there is caching going on somewhere. I've observed it recently.
I wrote code that was polling for updates to Win32_PnPSignedDriver via SelectQuery / ManagementObjectSearcher and the results appear to be cached because it never realizes that a new device/driver has been added. Running the query from a separate app instantly sees that it was updated.
You may have a look at your driver. According to the documentation, starting with Windows Vista, hardware that is not compatible with the Windows Display Driver Model (WDDM) returns inaccurate property values for instances of this class. To me, that's another way of saying it's not reliable.
I'm quite confused as to what should and should not be done in QApplication::commitData. The name implies that I should just store the state, and the docs say it should not close the application. However, the default implementation indeed closes all windows, thereby closing the application. Also, if this is not the way to detect Windows shutdown, I don't see any other way to tell that Windows is indeed being shut down.
There is also the related saveState. The function name means about the same and the documentation is also quite similar.
How am I supposed to properly detect when the system is being shutdown and both save my state and close my application? Is commitData indeed the correct way and just suffering from a very poor name and bad documentation?
In my practice, to detect an application shutdown I usually connect to the signal void QCoreApplication::aboutToQuit(). As the documentation says:
The signal is particularly useful if your application has to do some last-second cleanup. Note that no user interaction is possible in this state.
So far, so good - this has worked properly for me.
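A minimal sketch of that pattern (saveSettings() is a hypothetical stand-in for whatever cleanup you need):

    #include <QApplication>
    #include <QObject>

    void saveSettings(); // hypothetical cleanup helper, defined elsewhere

    int main(int argc, char *argv[])
    {
        QApplication app(argc, argv);

        // aboutToQuit is emitted just before the event loop exits;
        // no user interaction is possible at that point.
        QObject::connect(&app, &QCoreApplication::aboutToQuit, []() {
            saveSettings(); // flush settings, close log files, etc.
        });

        // ... create and show your widgets ...
        return app.exec();
    }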
commitData() and saveState() may seem redundant. But the documentation says:
Furthermore, most session managers will very likely request a saved state immediately after the application has been started. This permits the session manager to learn about the application's restart policy.
Maybe that explains why the notions of 'data' and 'state' are separated. During that initial call, it would not be user-friendly to interact with the user.
The default response of shutting down the application seems like good design: if you don't reimplement commitData(), the safest thing to do is to close the app (as if the user had chosen the Quit action), which should also save the user's data.
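In Qt 4, commitData() is a virtual function you can reimplement (Qt 5 replaces it with the QGuiApplication::commitDataRequest signal). A minimal sketch, where saveAllDocuments() is a hypothetical stand-in for your own persistence code:

    #include <QApplication>
    #include <QSessionManager>

    class MyApplication : public QApplication
    {
    public:
        MyApplication(int &argc, char **argv) : QApplication(argc, argv) {}

        // Reimplemented from QApplication (virtual in Qt 4).
        void commitData(QSessionManager &manager)
        {
            if (manager.allowsInteraction()) {
                // We could prompt the user here ("Save changes?") and
                // must release the session manager afterwards.
                manager.release();
            }
            saveAllDocuments(); // hypothetical: persist the user's data
            // Deliberately not closing any windows here; the default
            // implementation closes all top-level widgets, which is
            // what effectively quits the application.
        }

    private:
        void saveAllDocuments() { /* ... */ }
    };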
Is the OS shutting down, or only the session? As far as your app should be concerned, it is only the session (since technically, the user could be logging off while the OS continues to run). And the user might consider the app to be not 'shut down', just 'paused with data saved'.
Also consider mobile platforms like iOS, where an application seemingly runs forever.
I have a client application (C++, Windows) that opens sockets, connects to a server, makes requests, and receives responses and notifications. It does logging and saves preferences locally. What problems could there be if I try to run multiple instances of this application, which is presently prevented?
Are you having a particular problem that you are seeing? I.e., is the application crashing when you execute a second instance?
From your description, the second instance could fail if it:
Tries to open the same socket the first instance opened
Tries to open the same file the first instance opened
Outside of that, more detail is needed.
Sounds a little bit like a Web browser ;)
And like a typical Web browser, if your application is implemented correctly, you'll be able to run multiple instances fine.
Unfortunately, there are ways to botch the implementation, for example:
Exclusively lock log or configuration files for prolonged periods, thus "stalling" other instances.
Just plain ignore the concurrent access to files, leading to all sorts of possible corruptions.
Act not just as a client but as a server as well, and listen to a hard-coded port (so the second instance will fail while attempting to open the same port).
Incorrectly declare a mutex as "public" (and therefore shared between processes) instead of "private", leading to slow-downs and possibly deadlocks (see the sketch at the end of this answer).
There is a limit on the number of GDI handles per session. If your application uses excessive handles, multiple instances taken together might reach that limit, even when each of them individually observes the 10,000 handles-per-process limit.
Be a CPU hog (e.g. through busy waiting). One CPU hog on a modern multicore CPU might pass unnoticed, but once the number of instances exceeds the number of CPU cores that's another story!
Be a memory hog.
Mismanage UI:
Use UI tricks such as "always on top" windows - multiple such windows on the screen at the same time are no fun!
Mismanage the taskbar notification area (e.g. display a tray icon for each instance). It will technically "work", but having an excessive number of tray icons is not pleasant, especially if the application does not also have a "regular" taskbar button.
Etc etc... Essentially whenever there is a shared resource (be it a filesystem, network, CPU, memory, screen or whatever), care must be taken when concurrently using it.
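To illustrate the mutex item above: on Windows, a mutex becomes visible to other processes the moment you give it a name, so an accidentally "public" mutex makes unrelated instances contend with each other (the names here are illustrative):

    #include <windows.h>

    int main()
    {
        // WRONG for a process-private lock: the name makes this mutex
        // machine-wide, so every instance of the app contends for the
        // same kernel object and can stall or deadlock the others.
        HANDLE sharedByAccident = CreateMutexW(nullptr, FALSE, L"MyApp.WorkerLock");

        // Process-private: no name, so each instance gets its own mutex.
        // (For purely in-process locking, std::mutex is the better choice.)
        HANDLE trulyPrivate = CreateMutexW(nullptr, FALSE, nullptr);

        // ... use the handles, then clean up ...
        CloseHandle(trulyPrivate);
        CloseHandle(sharedByAccident);
        return 0;
    }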
If your application opens a port for listening, only one instance can use that particular port. If the application connects to a remote host, the OS will pick the next available local port, so multiple instances can run in parallel in this case.
If all instances share the same log and/or configuration file, parallel writes might corrupt those files, so writing operations should be protected by some synchronisation object (e.g. a mutex).
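A minimal sketch of that protection on Windows (the mutex and log file names are illustrative; here the mutex name is deliberately shared across instances):

    #include <windows.h>
    #include <string>

    // Serialize log writes across all instances with a named,
    // machine-wide mutex.
    void writeLogLine(const std::string &line)
    {
        HANDLE m = CreateMutexW(nullptr, FALSE, L"MyApp.LogMutex");
        if (!m) return;
        WaitForSingleObject(m, INFINITE);

        HANDLE f = CreateFileW(L"app.log", FILE_APPEND_DATA,
                               FILE_SHARE_READ | FILE_SHARE_WRITE, nullptr,
                               OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
        if (f != INVALID_HANDLE_VALUE) {
            DWORD written = 0;
            WriteFile(f, line.data(), static_cast<DWORD>(line.size()),
                      &written, nullptr);
            CloseHandle(f);
        }

        ReleaseMutex(m);
        CloseHandle(m);
    }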
By problems I presume you mean that the multiple instances do not each create their own workspace for logging and preferences, which would result in one instance overwriting and accessing data made by the other, with undesired and unpredictable results.
If you have access to the source code of the application, I would suggest extending the application to create a folder whose name contains a timestamp plus a random number to hold the session data - i.e. the logs and the preferences. This way, multiple instances can operate without interfering with one another.
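A rough sketch of that folder creation in C++17 (the naming scheme is just one possibility):

    #include <chrono>
    #include <filesystem>
    #include <random>
    #include <string>

    // Create a session directory whose name combines a timestamp and a
    // random number, so concurrent instances never collide.
    std::filesystem::path makeSessionDir(const std::filesystem::path &base)
    {
        using namespace std::chrono;
        const auto stamp = duration_cast<seconds>(
            system_clock::now().time_since_epoch()).count();

        std::random_device rd;
        std::uniform_int_distribution<int> dist(0, 999999);

        const auto dir = base / ("session_" + std::to_string(stamp)
                                 + "_" + std::to_string(dist(rd)));
        std::filesystem::create_directories(dir); // logs and prefs live here
        return dir;
    }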
However, bear in mind that some preferences may be best made global - to save you having to set the preferences each time you load a new instance. Which preferences should be global depends on your application and what it does.
If you don't have access to the source, then the other option for multiple instances would be virtualisation: multiple OSs on the same machine, each OS running one instance of the app.