Problems with running multiple instances of an application? - C++

I have a client application (C++, Windows) that opens sockets, connects to a server, makes requests, and receives responses and notifications. It does logging and saves preferences locally. What problems might arise if I try to run multiple instances of this application, which is currently prevented?

Is there a particular problem you are seeing? I.e., is the application crashing when you execute a second instance?
From your description, the second instance could fail if it:
Tries to open the same socket the first instance opens
Tries to open the same file the first instance opens
Outside of that, more detail is needed.

Sounds a little bit like a Web browser ;)
And like a typical Web browser, if your application is implemented correctly, you'll be able to run multiple instances fine.
Unfortunately, there are ways to botch the implementation, for example:
Exclusively lock log or configuration files for prolonged periods, thus "stalling" other instances.
Just plain ignore the concurrent access to files, leading to all sorts of possible corruptions.
Act not just as a client but as a server as well, and listen to a hard-coded port (so the second instance will fail while attempting to open the same port).
Incorrectly declare a mutex as "public" (and therefore shared between processes) instead of "private", leading to slow-downs and possibly deadlocks.
There is a limit on the number of GDI handles per session. If your application uses an excessive number of handles, multiple instances taken together might reach that limit, even when each of them individually observes the 10,000 handles-per-process limit.
Be a CPU hog (e.g. through busy waiting). One CPU hog on a modern multicore CPU might pass unnoticed, but once the number of instances exceeds the number of CPU cores that's another story!
Be a memory hog.
Mismanage UI:
Use UI tricks such as "always on top" windows - multiple such windows on the screen at the same time is no fun!
Mismanage the taskbar notification area (e.g. display a tray icon for each instance). This will technically "work", but an excessive number of tray icons is not pleasant, especially if the application does not also have a "regular" taskbar button.
Etc etc... Essentially whenever there is a shared resource (be it a filesystem, network, CPU, memory, screen or whatever), care must be taken when concurrently using it.
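As a minimal illustration of guarding one of those shared resources, here is a sketch of an exclusive lock file that a second instance will fail to create. The file name is an assumption, and the "x" mode flag is C11; on Windows, a named mutex created with CreateMutex is the more idiomatic way to get the same single-instance effect.

```cpp
#include <cstdio>

// Hypothetical lock-file name; a real application would place this in a
// per-user data directory rather than the working directory.
static const char* kLockPath = "myapp.lock";

// Create the lock file exclusively. The "x" mode flag (C11) makes fopen
// fail if the file already exists, so only the first instance succeeds.
bool acquire_instance_lock() {
    std::FILE* f = std::fopen(kLockPath, "wx");
    if (!f) return false;          // another instance holds the lock
    std::fclose(f);
    return true;
}

// Remove the lock on clean shutdown. Note that this naive scheme leaves
// a stale lock behind if the process crashes, which a named mutex avoids.
void release_instance_lock() {
    std::remove(kLockPath);
}
```

The same pattern applies to any exclusively held resource: acquire it once, detect the failure gracefully in the second instance, and release it on exit.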

If your application opens a port for listening, only one instance can use that particular port. If the application connects to a remote host, the OS will pick the next available ephemeral port for the outgoing connection, so multiple instances can run in parallel in this case.
If all instances share the same log and/or configuration file, parallel writes might corrupt those files, so write operations should be protected by a synchronisation object (e.g. a mutex).
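A lighter-weight alternative to a mutex, sketched here under the assumption that each log record is one short line: open the log in append mode and emit the whole line in a single write. In append mode the OS positions every write at end-of-file, so complete short lines from concurrent processes do not end up interleaved mid-line on mainstream platforms; for hard guarantees you would still take a named mutex or file lock around the write.

```cpp
#include <cstdio>
#include <string>

// Sketch: append one complete line per call, in one buffered write.
void log_line(const char* path, const std::string& msg) {
    std::FILE* f = std::fopen(path, "a");   // append mode
    if (!f) return;
    std::fprintf(f, "%s\n", msg.c_str());   // one whole line...
    std::fclose(f);                         // ...flushed on close
}
```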

By problems I presume you mean that the instances do not each create their own workspace for logging and preferences, so one instance overwrites and accesses data written by another, with undesired and unpredictable results.
If you have access to the source code of the application, I would suggest extending it to create a folder whose name contains a timestamp plus a random number to hold the session data - i.e. the logs and the preferences. This way, multiple instances can operate without interfering with one another.
However, bear in mind that some preferences may be best made global - to save you having to set the preferences each time you load a new instance. It depends on your application and what it is doing as to what these global preferences may be.
If you don't have access to the source, then the other option for multiple instances would be virtualisation: multiple OSs on the same machine, each OS running one instance of the app.
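The timestamp-plus-random-number session folder suggested above might be sketched like this; the "sessions" parent directory is a placeholder for illustration.

```cpp
#include <chrono>
#include <cstdio>
#include <filesystem>
#include <random>

namespace fs = std::filesystem;

// Build a per-session directory name from a millisecond timestamp plus a
// random suffix, so concurrent instances never share logs/preferences.
fs::path make_session_dir(const fs::path& parent = "sessions") {
    using namespace std::chrono;
    long long ms = duration_cast<milliseconds>(
        system_clock::now().time_since_epoch()).count();

    std::random_device rd;                              // random part guards
    std::uniform_int_distribution<int> pick(0, 99999);  // against clock ties

    char name[64];
    std::snprintf(name, sizeof name, "session-%lld-%05d", ms, pick(rd));

    fs::path dir = parent / name;
    fs::create_directories(dir);        // logs and preferences go here
    return dir;
}
```

Each instance calls this once at startup and keeps all of its session data under the returned path.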

Related

COM Surrogate Server Timeout

I have a Win32/MFC application that depends on two separate STA COM DLL servers that I created many years ago using C++/ATL. These are large DLL servers with multiple interfaces and are also successfully used in other contexts and client programs. Several years ago, I had to create 64-bit versions of these 32-bit servers, and my 32-bit MFC app needed to be able to use either the 32-bit or 64-bit version of the DLL COM server (chosen with a checkbox).
Because a 32-bit process can't load a 64-bit COM server DLL in-process, I worked around this by having the MFC app create the 64-bit servers in the system surrogate (DLLHOST.EXE) by replacing
CoCreateInstance(..., CLSCTX_INPROC_SERVER, ...)
with
CoCreateInstance(..., CLSCTX_LOCAL_SERVER | CLSCTX_ACTIVATE_64_BIT_SERVER, ...)
Some updates were required, like adding an interface to copy environment variables into the server process and set the server/surrogate's working directory (the surrogate starts in SYSTEM32), but the other interfaces were all remoteable. This all seems to work perfectly and I can now use the 32-bit and 64-bit servers interchangeably from the 32-bit app by flipping a switch.
There is, however, one problem that I haven't been able to solve: making the surrogate quickly terminate when the client releases the last interface. The surrogate hangs around for 3-5 seconds after all remote interfaces are released by the MFC client -- presumably an optimization, hoping the client will come back. If the MFC app re-launches the server with CoCreateInstance() during that 3-5 seconds, it reconnects to the same "dirty" surrogate. The server code is not serially re-usable (it packages up many thousands of lines of legacy ANSI "C" code with lots of static variables) so reconnecting to the same instance is just not possible.
I worked around this several years ago by having the startup interface return a COM error code indicating the server is waiting to be recycled (better than a crash). However, the servers are launched when the end user presses a toolbar button in the MFC app, so this means the user gets a message like "wait a few seconds and try again". That works, but the bad part is that every fresh launch attempt resets the 3-5 second counter that keeps the surrogate from exiting. And impatient users are complaining. I'll add this all works perfectly in-process, with CoFreeUnusedLibraries() working as expected.
I tried a number of things already -- everything short of coding an ExitProcess() in the server, which seems inappropriate. There seems to be no way to tell the surrogate that the application is complete and should not wait for more connections. The MS documentation claims omitting the RunAs attribute in the AppID might help (I had it set to "Interactive User") but it didn't. It also mentions REGCLS_SINGLEUSE, but then says "Do not set REGCLS_SINGLEUSE or REGCLS_MULTIPLEUSE when you register a surrogate for DLL servers" and "REGCLS_SINGLEUSE and REGCLS_MULTIPLEUSE should not be used for DLL servers loaded into surrogates", and I don't have control over the surrogate's class factory as far as I know.
It looks like COM+ might provide some control over recycling, as it seems to have a RecycleActivationLimit option that I might be able to set to 0, but I have no idea what it would take to convert this into a COM+ server.
The other possibility is to write a custom surrogate.
If there's no easy answer, I might just resort to greying out the button until the server vanishes -- but since I can't probe the server without extending its lifetime, I guess I could add a shared mutex and wait for it to vanish. Ugh.
Is RecycleActivationLimit somehow available to regular COM applications? Any other suggestions are most welcome.

Possible to make QML application "offline capable" using caches?

I'm trying to make one of my QML apps "offline capable" - that means I want users to be able to use the application when not connected to the internet.
The main problem I'm seeing is the fact that I'm pretty much pulling a QML file with the UI from one of my HTTP servers, allowing me to keep the bulk of the code within reach and easily updatable.
My "main QML file" obviously has external dependencies, such as fonts (using FontLoader), images (using Image) and other QML components (using Loader).
AFAIK all those resources are loaded through the Qt networking stack, so I'm wondering what I'll have to do to make all resources available when offline without having to download them all manually to the device.
Is it possible to do this by tweaking existing/implementing my own cache at Qt/C++ level or am I totally on the wrong track?
Thanks!
A simple solution is to invert the approach: include baseline files within your application's executable/bundle. Upon first startup, copy them to the application's data directory. Then, whenever you have access to your server, you can update the data directory.
All modifications of the data directory should be atomic - they must either completely succeed, or completely fail, without leaving the data directory in an unusable state.
Typically, you'd create a new, temporary data folder, and copy/hardlink the files there, and download what's needed, and only once everything checks out you'd swap the old data directory with the new one.
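The staged-swap step described above could be sketched with std::filesystem along these lines. The directory names are placeholders, and note that on Windows the renames will fail if files inside the live tree are still held open.

```cpp
#include <filesystem>

namespace fs = std::filesystem;

// Everything is downloaded/copied into "data.new" first; only when the
// staged tree is complete are directories renamed, so the application
// never observes a half-updated "data". rename() is atomic on the same
// filesystem, which is what makes the swap all-or-nothing.
void commit_update(const fs::path& live   = "data",
                   const fs::path& staged = "data.new",
                   const fs::path& backup = "data.old") {
    fs::remove_all(backup);              // drop any previous backup
    if (fs::exists(live))
        fs::rename(live, backup);        // keep the old tree as fallback
    fs::rename(staged, live);            // staged tree becomes live
    fs::remove_all(backup);              // success: discard the fallback
}
```

If anything fails while filling the staged directory, you simply delete it and keep running from the untouched live tree.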
Letting your application access QML and similar resources directly online is pretty much impossible to get right, unless you insist on explicitly versioning all the resources and having the version numbers in the URL.
Suppose your application was started and has loaded some resources. There is no guarantee that the user has gone to all the QML screens - thus only some resources will be loaded. QML also makes no guarantees as to how often and when the resources will be reloaded: it maintains its own caches, after all. Then at some point you update the contents on the server. The user proceeds to explore more of the application after you've made the changes, but now the application he experiences is a frankenstein of older and newer pieces, with no guarantees that these pieces are still meant to work together. It's a bad idea.

Creating a file accessible to only my application in C++?

I am developing an application for a small office to maintain their monetary accounts.
My application can help create a file which can store all the information.
But it should not be accessible to the user other than in my application.
Why? Because somebody may delete the file & all the records will vanish.
The environment is a Windows PC with a single account having admin privileges.
I am developing the application in C++ using the MinGW compiler.
I am sort of blank right now, as to how I can create such a file.
Any suggestions please?
If your application can modify it, then the user under whose credentials it runs can modify it, period. Also, if he has administrator privileges then you can't stop him from deleting stuff, even if your application runs under different credentials and the file is protected by ACLs.
Now, since the problem seems to be not of security, but of protecting the user from himself, I would just store the file in a location that is "out of sight" enough and be happy with it; write your data in %APPDATA%\yourappname, since such a directory is specifically for user-specific application data that is not intended to be touched directly by the user.
If you want to be paranoid you can enable every security setting you can find (hide the directory, protect it with a restrictive ACL when the app is not running, open it for exclusive access, ...), but if you ask me it's just wasted time:
the average user (our target AFAICT) doesn't mess in appdata, since it's a hidden folder to begin with;
the "power user" who messes around, if sufficiently determined to shoot himself in the foot (or voluntarily do damage), will find a way, since the security settings are easily circumventable in your situation (an admin can take ownership of any file and change its ACLs, and use applications like Unlocker to circumvent file locking);
the technician that has legitimate reasons to access the file (e.g. he must take/restore a backup of it) will be frustrated by all these useless precautions.
You can get the actual %APPDATA% path by expanding the corresponding environment variable or via SHGetFolderPath/SHGetKnownFolderPath (or whatever replacement they invented for it in new Windows versions).
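A minimal sketch of resolving that directory from the environment; the HOME fallback and the "yourappname" placeholder are assumptions so the sketch is not Windows-only. On Windows proper, SHGetKnownFolderPath(FOLDERID_RoamingAppData, ...) is the more robust route than reading the environment variable.

```cpp
#include <cstdlib>
#include <string>

// Resolve the per-user application-data directory from the environment.
std::string app_data_dir(const char* app = "yourappname") {
    const char* base = std::getenv("APPDATA");   // set on Windows
    if (!base) base = std::getenv("HOME");       // POSIX fallback (assumed)
    if (!base) base = ".";                       // last resort
    return std::string(base) + "/" + app;
}
```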
Make sure your application loads on Windows boot and opens the file with the dwShareMode 0 option.
Here is an MSDN example
You would need to give these files their own file extension, and perhaps other security measures (i.e. passwords for the files). If you want these files to be recognised by Windows then you will have to do some work with the registry.
Here's a good source since you're concerned with Windows only:
http://msdn.microsoft.com/en-us/library/windows/desktop/ff513920(v=vs.85).aspx
As far as keeping the data from being deleted: redundancy, my friend, redundancy. Talk to a network administrator about how they keep their data safe. I'd bet money on them naming lots of backups as one of their methods.
But it should not be accessible to the user other than in my application.
You cannot do that.
Anything that exists on a machine the user has physical access to can be deleted if the user has sufficient determination.
You can protect your file from being deleted while your program is running - on Windows, you can't delete open files. Keep the file open and people won't delete it while your program is running. Instead, they will kill your program via Task Manager and delete the file anyway.
Either that, or you could upload it somewhere. Data that is not located on physically accessible device cannot be easily deleted by user. However, somebody will have to run the server (and deal with security + possibly write server software). In your case it might not be worth it.
I'd suggest documenting the location of the user data in the help file, and you should probably put a "!do not delete this.txt" file or something similar into the folder with this file.

Redirect APPCRASH Dumps (Or turn them off)

I have an application (I didn't write it) that is producing APPCRASH dumps in C:\Windows\SysWOW64. The application, while dumping, is crippled but operating at bare minimum capacity so as not to lose data. The issue is that these dumps are so large that the system is spending most of its time writing them, and the application is falling far behind in processing and will start losing data soon.
The plan is to either entirely disable it, or mount it to a RAM drive and purge them as soon as they hit the RAM drive.
Now I've looked into using this key:
http://msdn.microsoft.com/en-us/library/windows/desktop/bb787181%28v=vs.85%29.aspx
But all it does is generate a second dump now instead of redirect the original.
The dump is named:
dump-2013_03_31-15_23_55_772.dmp
This is generally the realm of developers on Windows (with stuff like C/C++), so I'd like to hit them up; I don't think ServerFault could get me any answers on this.
Additionally: it's not cycling dump files (they'll fill the 20 GB left on the hard drive), so I'm not sure if this is Windows behavior or custom code in the app (if it is... ick!).
To write a dump file, an app has to call the function MiniDumpWriteDump, so this is not a behavior of the system or something you can control; it is application-driven. If it dumps on crashes, it uses SetUnhandledExceptionFilter to install its own handling routine before(!) the OS takes over. Unfortunately I didn't find a way to overwrite this handler from another process, so the only hope left is that there is a registry entry for the app that switches the behavior or changes the path (as my applications have, for exactly the reason you describe).

Multiple instance of a dll in c++

I have 10,000 devices and I want to control them from one C++ application. The devices are servers and I can control them only through a DLL. The DLL was written for MFC and it wasn't written by me, so I can't change anything in it.
The DLL establishes the TCP/IP communication between the devices and my application. Every device has different variables. I need to open a new thread for each incoming connection and load an instance of my DLL. I couldn't load different instances of the DLL for each thread; every time it uses the same DLL and the same data.
How can I load multiple instances of a DLL?
Is there any way to do it with C++?
Thanks in advance
If the data are static, it is not possible to have more than one instance in the same process. You have to modify the DLL to have some sort of per-context data (usually a class instance would do). As a general suggestion anyway, never start up 10,000 threads in one process; that will kill performance. Write a thread pool and let the clients be served by that pool.
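A minimal thread-pool sketch of the kind suggested above: a fixed set of workers pulls connection-handling jobs from a queue instead of one thread per device. Representing a job as std::function is an illustrative choice, not part of the original DLL's API.

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class ThreadPool {
public:
    explicit ThreadPool(unsigned n) {
        for (unsigned i = 0; i < n; ++i)
            workers_.emplace_back([this] { run(); });
    }
    ~ThreadPool() {                       // drain queue, then stop workers
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_all();
        for (auto& w : workers_) w.join();
    }
    void submit(std::function<void()> job) {
        { std::lock_guard<std::mutex> lk(m_); jobs_.push(std::move(job)); }
        cv_.notify_one();
    }
private:
    void run() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return done_ || !jobs_.empty(); });
                if (done_ && jobs_.empty()) return;
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            job();   // e.g. serve one device's request here
        }
    }
    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> jobs_;
    std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
};
```

With a pool sized to the number of cores (or a small multiple of it), 10,000 devices share a handful of threads instead of 10,000.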
Your situation does not sound hopeful.
Windows will not load more than one instance of a DLL within a given process, ever. If the DLL itself doesn't have the functionality to connect to multiple servers, you would have to create a separate process for each server you need to connect to. In practice, this would be a Bad Idea.
You COULD use LoadLibrary() and FreeLibrary() to "restart" the DLL multiple times and switch frantically between the different servers that way. Sort of a LoadLibrary()... mess with server... FreeLibrary()... do it again and again and again situation. It would be painful and slow, but might work for you.
The only (ugly) way to load a DLL multiple times is, for every new load, to make a copy of the original DLL with a unique name in a location that you control.
Load the copy with LoadLibrary and set up appropriate function pointers (GetProcAddress(...)) to the functions in the newly loaded DLL for use in your program.
After you're done with it, unload the copy with FreeLibrary and remove the copy from disk.
I don't see an easy solution to this; as previously covered, you can't load multiple instances of a DLL within one app.
There may be a horrible solution, which is to write a lightweight proxy that listens for inbound requests, spawns a new instance of the real app on each request, and forwards traffic to it - each process loads its own copy of the DLL (technically you'll be re-opening the same DLL file, but each process gets a separate data space).
However, with 10k devices, performance will be horrible. It sounds like the best solution is to re-implement the protocol (either from a published spec, or by reverse-engineering it).