What are some ways to design an application such that the configuration can be changed without requiring an application restart?
One way is to keep a flat file of config values that the application reads from whenever it needs a particular value, never storing any config values in memory.
Another option is to let the application load the config file once and store the values in memory, but then periodically reload the config file in case something has changed.
It just so happens that I recently updated one of my free software packages to do exactly that. The approach I took was slightly different.
1) My application loads its configuration, parses it, and stores it in memory. I do not read the configuration settings every time the application needs the value of some configuration setting.
2) But, along with the configuration settings, I also store the timestamp of the configuration file itself.
3) When the application wakes up in response to an event, and it has something to do, it checks the configuration file's timestamp. If it has not changed, no further action is taken. The stat(2) system call is lightweight, cheap, and fast, and adds very little overhead.
4) If stat(2) tells me that the configuration file's timestamp has changed, the application reads the configuration file again.
The configuration file, as part of its format, includes an explicit "end of configuration" marker. If my application doesn't see it, I should go out and play the lottery, because I managed to hit an extremely rare race condition: my application read a new configuration file while it was still in the middle of being saved by the editor I was using to edit it!
If the code doesn't see the "end of configuration" marker, no further action is taken until the next time the application wakes up and checks the configuration file's timestamp.
5) After the new configuration file is read and parsed, I validate the new configuration settings with some internal sanity checks. If the sanity checks fail, the error is reported to the system logs and no further action is taken.
6) Only after the sanity checks pass do the previously-stored configuration settings and values get replaced by the updated values read from the new configuration file, together with the new configuration file's newer timestamp. Until the next time we meet again.
P.S. The saved configuration settings are protected by a mutex. The application holds the mutex whenever it needs to check the value of a particular configuration setting. Step 6 also acquires the mutex just long enough to replace the current configuration settings with the newly-validated updated ones.
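A minimal C++ sketch of the approach above (C++17). Config, parse_config(), and validate() are placeholders for the application's own settings type, parser, and sanity checks; error reporting to the system logs is elided.

#include <sys/stat.h>
#include <ctime>
#include <mutex>
#include <optional>
#include <string>

struct Config { /* parsed settings go here */ };

// Stand-ins for the application's own code. parse_config() returns
// std::nullopt if the "end of configuration" marker is missing (the
// file was caught mid-save).
std::optional<Config> parse_config(const std::string& path);
bool validate(const Config& cfg);

class ConfigHolder {
public:
    explicit ConfigHolder(std::string path) : path_(std::move(path)) {}

    // Called whenever the application wakes up with work to do (step 3).
    void maybe_reload() {
        struct stat st;
        if (stat(path_.c_str(), &st) != 0 || st.st_mtime == mtime_)
            return;                        // timestamp unchanged: done

        auto fresh = parse_config(path_);  // step 4: re-read the file
        if (!fresh || !validate(*fresh))
            return;                        // step 5: keep the old settings

        std::lock_guard<std::mutex> lock(mutex_);
        config_ = std::move(*fresh);       // step 6: swap under the mutex
        mtime_ = st.st_mtime;
    }

    // Copies out the current settings under the mutex.
    Config current() const {
        std::lock_guard<std::mutex> lock(mutex_);
        return config_;
    }

private:
    std::string path_;
    time_t mtime_ = 0;
    mutable std::mutex mutex_;
    Config config_;
};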
To avoid polling, consider using a notification from the operating system to find out when your config file has been modified. Most operating systems provide APIs that'll do this:
Linux: inotify
Windows: ReadDirectoryChangesW
Mac: FSEvents
There are a number of cross-platform wrappers out there that can simplify things.
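For example, here is a minimal Linux sketch using inotify; the config path is an assumption, and a real implementation should also handle editors that replace the file via rename (which invalidates the watch):

#include <sys/inotify.h>
#include <unistd.h>
#include <climits>
#include <cstdio>

int main() {
    int fd = inotify_init();
    if (fd < 0) { perror("inotify_init"); return 1; }

    // Watch for the file being closed after a write.
    int wd = inotify_add_watch(fd, "/etc/myapp/app.conf", IN_CLOSE_WRITE);
    if (wd < 0) { perror("inotify_add_watch"); return 1; }

    char buf[sizeof(struct inotify_event) + NAME_MAX + 1];
    while (read(fd, buf, sizeof buf) > 0) {
        // The file was written and closed: re-read and re-validate
        // the configuration here.
        std::printf("configuration changed; reloading\n");
    }
    close(fd);
    return 0;
}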
The answer by Sam Varshavchik contains lots of good advice. However, there is another point worth stating.
The public API of your configuration class will provide one or more lookup()-style methods that are used to retrieve configuration values. To ensure thread safety, you must ensure that these lookup() methods return a deep copy of (rather than a pointer/reference to) the underlying configuration value. For example, if returning a string then the return type should be std::string rather than const std::string & or const char *.
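A minimal sketch of what that looks like (the class shape and the map-based storage are illustrative, not prescriptive):

#include <map>
#include <mutex>
#include <string>

class Configuration {
public:
    // Returns the value by copy, never a reference into the map, so the
    // caller holds a valid string even if a reload replaces settings_.
    std::string lookup(const std::string& key) const {
        std::lock_guard<std::mutex> lock(mutex_);
        auto it = settings_.find(key);
        return it != settings_.end() ? it->second : std::string();
    }

private:
    mutable std::mutex mutex_;
    std::map<std::string, std::string> settings_;
};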
Related
My goal is that files can be hydrated or dehydrated on user request via the Explorer "free up space" or "Always keep on Device" context menu entries. If I create a new placeholder file that is dehydrated from the beginning, everything works and I can hydrate it via the callback mechanics. But the other way around does not work for me. Inside Explorer the file is marked as unpinned and shown as syncing, but my application does not receive any callback from CF_CALLBACK_TYPE_NOTIFY_DEHYDRATE or CF_CALLBACK_TYPE_NOTIFY_DEHYDRATE_COMPLETION. I then tried to do it manually with CfDehydratePlaceholder, but the behaviour is exactly the same: nothing happens and the file remains in the syncing state. Even if I use CfSetInSyncState to set the state to CF_IN_SYNC_STATE_IN_SYNC, it remains in the syncing state.
I then tried to implement a minimal example with the help of the Cloud Mirror example, but I realized it has the same behaviour: when I try to dehydrate a file, exactly the same thing happens there as well. From my perspective, it feels like cfapi expects an acknowledgement from the cloud service that it never gets.
But in OneDrive everything works as expected. What am I missing? Do I have to set some specific settings?
I had a misunderstanding of the whole API. Here is how I understand it now, to help other people who are struggling with it.
You have to register your sync root and connect your app to it. When connecting, you receive a CF_CONNECTION_KEY, which is needed to communicate with the virtual filesystem. You can then add extended attributes to all files inside your sync root. The most important are custom attributes that you choose yourself, so your app can identify the file object if needed, plus the PinState and the SyncState.

Mostly the SyncState does not have to be changed by the app, apart from marking a file as synced after the app has processed it (you can do that at the moment you update your custom attributes), because when a file changes, the SyncState is changed automatically. The PinState declares which final state a file should have. For example, UNPINNED means the file should be dehydrated, and PINNED the opposite. It does not mean that the file already has this state. My misunderstanding was that I thought that if I unpinned a file, it would automatically be dehydrated, or that if I pinned a placeholder, I would receive a request via the callback function I mentioned in my question. But this is not the case.

Your app needs to find out via a FileWatcher (I can recommend my own FileWatcher project: https://github.com/neXenio/panoptes) that the file attributes of specific files have changed. Then your app has to process every step itself. As already mentioned, to dehydrate, the app needs to call CfDehydratePlaceholder. To hydrate, you open a transfer session via CfGetTransferKey and then hydrate (send the data to the empty file) via CfExecute, which needs the connection key and the transfer key. Those are the basics. There is much more to tell about it, but I guess with this start everybody can figure the rest out for themselves.
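For illustration, a hedged sketch of the manual dehydration step only (error handling trimmed; the offset/length convention for "the whole file" should be verified against the current CfDehydratePlaceholder documentation):

#include <windows.h>
#include <cfapi.h>
#pragma comment(lib, "cldapi.lib")

// Hypothetical helper, called after the FileWatcher reports that a file
// was unpinned. Offset 0 with length -1 is intended to mean "dehydrate
// the whole file"; check the docs for your SDK version.
HRESULT DehydrateWholeFile(LPCWSTR path) {
    HANDLE h = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, nullptr,
                           OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (h == INVALID_HANDLE_VALUE)
        return HRESULT_FROM_WIN32(GetLastError());

    LARGE_INTEGER offset = {};  // start of file
    LARGE_INTEGER length;
    length.QuadPart = -1;       // to end of file
    HRESULT hr = CfDehydratePlaceholder(h, offset, length,
                                        CF_DEHYDRATE_FLAG_NONE, nullptr);
    CloseHandle(h);
    return hr;
}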
The MSI installation calls my (native/C++) custom action functions. Since the DLL is freshly loaded, and the MSIEXEC.EXE process is launched separately for each function (the callable actions, as specified in the MSI/WiX script), I cannot use any global data in the C/C++ program.
How (or Where) can I store some information about the installation going on?
I cannot use named objects (like shared memory), as the process that loads the DLL to call the action function exits afterwards, and the OS will not keep the named object alive.
I could use an external file for storage, but then how would I know (in the DLL's function):
When to delete the external file.
How to find out that this function call is the first one (an action/function call with Before="LaunchConditions" may help, but I am not sure).
If I cannot delete the file, I cannot know whether the "information" is current or stale (i.e. belonging to an earlier failed/succeeded MSI run).
"Temporary MSI tables" I have heard of, but not sure how to utilize it.
Preserve Settings: I am a little confused about what your custom actions do, to be honest. However, it sounds like they preserve settings from an older application and setup version and put them back in place if the MSI fails to install properly?
Migration Suggestion (please seriously consider this option): Could you install your new MSI package and delete all shortcuts and access to the old application whilst leaving it installed instead? Your new application version installs to a new path and a new registry hive, and then you migrate all settings on first launch of the new application and then kick off the uninstall of the old application - somehow - or just leave it installed if that is acceptable? Are there COM servers in your old install? Other things that have global registration?
Custom Action Abstinence: The above is just a suggestion to avoid custom actions. There are many reasons to avoid custom actions (propaganda piece against custom actions). If you migrate settings on application launch, you avoid all the sequencing, conditioning, and impersonation issues, along with the technical issues you have already faced (there are many more) associated with custom action use. And crucially, you are in a familiar debugging context (application launch code) as opposed to the unfamiliar world of setups and their poor debuggability.
Preserving Settings & Data: With regards to saving data and settings in a running MSI instance, the built in mechanism is basically to set properties using Session.Property (COM / VBScript) or MsiSetProperty (Win32) calls. This allows you to preserve strings inside the MSI's Session object. Sort of global data.
Note that properties can only be set in immediate mode (custom actions that don't change the system), and sending the data to deferred mode custom actions (which can make system changes) is quite involved, centered on the CustomActionData concept (more on deferred mode & CustomActionData).
Essentially you send a string to the deferred mode custom action by means of a SetProperty custom action in immediate mode - typically a "home grown" delimited string that you construct in immediate mode and split into pieces of information when receiving it in deferred mode. You could also use JSON strings or similar to make the transfer easier and more reliable by serializing and de-serializing objects.
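A hedged native sketch of that pattern; the action and property names are invented, and the deferred action reads its staged string back through the reserved CustomActionData property:

#include <windows.h>
#include <msi.h>
#include <msiquery.h>
#pragma comment(lib, "msi.lib")

// Immediate mode: stage a delimited string for the deferred action.
// The property must carry the name of the deferred action ("MyDeferredCA"
// is an invented name here).
extern "C" UINT __stdcall StageData(MSIHANDLE hInstall) {
    return MsiSetPropertyW(hInstall, L"MyDeferredCA",
                           L"installdir=C:\\MyApp;migrate=1");
}

// Deferred mode: the staged string arrives as CustomActionData.
extern "C" UINT __stdcall MyDeferredCA(MSIHANDLE hInstall) {
    WCHAR data[1024];
    DWORD len = 1024;  // in/out: buffer size in characters
    if (MsiGetPropertyW(hInstall, L"CustomActionData", data, &len)
            != ERROR_SUCCESS)
        return ERROR_INSTALL_FAILURE;
    // ... split 'data' on the delimiter and act on the pieces ...
    return ERROR_SUCCESS;
}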
Alternatives?: This set-property approach is involved. Some people write to and from the registry during installation, or to a temp file (in the temp folder) and then clean up during the commit phase of the MSI, but I don't like this approach for several reasons. For one thing, commit custom actions might not run, based on policies on target systems (when rollback is disabled, no commit script is created - see the "Commit Execution" section), and it isn't best practice. Adding temporary rows is an interesting option that I have never spent much time on. I doubt you would be able to easily use this to achieve what you need, although I don't really know what you need in detail. I haven't used it properly. Quick sample. This RemoveFile example from WiX might be better.
Is there any configuration that helps log4cplus pick up dynamic changes? I am changing log4cplus properties at runtime and want log4cplus to pick up those changes dynamically.
There is the ConfigureAndWatchThread class, which you can instantiate. It will spawn a thread that watches for modification time changes on a given configuration file. When it notices that the modification time has moved past the last recorded one, it removes all the previously instantiated loggers, appenders, etc., and reconfigures everything.
However, it is not very sophisticated, and there is no defence against catching the configuration file mid-write while it is still being saved by your editor. If this risk is not important to you, use it. Otherwise, I would suggest you build some sort of manual trigger into your software that makes it re-read the logging configuration only on that trigger.
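A minimal usage sketch, assuming log4cplus 2.x; the file name and the five-second check interval are placeholders:

#include <log4cplus/configurator.h>
#include <log4cplus/initializer.h>
#include <log4cplus/logger.h>
#include <log4cplus/loggingmacros.h>

int main() {
    log4cplus::Initializer initializer;

    // Watches log4cplus.properties and reconfigures all loggers and
    // appenders whenever its modification time moves forward,
    // checking every 5000 ms.
    log4cplus::ConfigureAndWatchThread watcher(
        LOG4CPLUS_TEXT("log4cplus.properties"), 5000);

    log4cplus::Logger root = log4cplus::Logger::getRoot();
    LOG4CPLUS_INFO(root,
        LOG4CPLUS_TEXT("running; edit the properties file to reconfigure"));
    // ... application work; the watcher reconfigures in the background ...
    return 0;  // the watcher's destructor stops the watch thread
}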
An XPages application containing several JARs, Java sources, and ~50 XP/CC elements takes about a minute to build on a server over a WAN. I replicated the application locally, and the build time dropped to ~10s.
A few days ago, builds of the local application became extremely slow, about 2-5 minutes. After some experiments there is a workaround: disabling the TCP port in the location document drops build times to just a few seconds. Even though it works, it does not help much - testing requires the user to be authenticated, so I need to replicate design changes to a remote or local server, and that means changing the location (online/offline) every time.
UPDATE 2013-04-04: I duplicated my current location document and removed the home and directory servers. To my surprise, with this location, build times went back to a few seconds - with the TCP port enabled, so replication is possible. A bigger surprise was that returning the home/directory servers to the new location did not reproduce the problem - in fact, they do not affect performance at all. I know this because I renamed the current location document and everything went back to normal. From my understanding, "something" in the client configuration was tied to the location name. Thanks to Simon's tips I will investigate further.
The question is still open: I am looking for some (Eclipse) preference controlling this behavior - unintended communication with the server during builds of a local application.
Solution:
Teamstudio CIAO hooks into Designer and checks every update of a design element. It seems like a lack of code optimization to me: it checks whether each design element currently being built (every single one, one by one) should be controlled in the CIAO config database.
This explains why the problem was solved by renaming the location document. I was disappointed yesterday when the performance problems started again. Fortunately, I recalled setting up CIAO for that location document around that time. CIAO uses the teamstudio.ini file in the DATA directory to configure which CIAO configuration database is used for each location document. Look for this entry:
CIAOConfigDb[location name]=server name;CIAO\CIAOConfig.nsf
For development on local replicas with a connection to the server (for replication or a local server), use a location document with CIAO disabled.
This works only with property ForceConfigLocation=0.
Not a solution (yet!), but may help in the investigation. I'll update further if you post results later.
Debug instructions.
Add the following to the shortcut that launches the Designer client.
-RPARAMS -console -debug -separateSysLogFiles -consoleLog
Start the designer client. This will also open up the OSGi console.
Reproduce the issue. While it is still in progress, type the following in the OSGi console:
dump threads
Do this three times, with a small amount of time between the completion of each dump. Once done, open the three dumps (in the IBM_TECHNICAL_SUPPORT folder) in the Heap Dump Analyser.
It will show you which threads are consistent through all three dumps. Take a look at those and look for package names/calls that appear to belong to a functional area. Once you have that, you can try adding debug output for the related class.
For example: Let's say you notice "com.ibm.designer.domino.ui.commons." in the thread, then you would edit the rcpinstall.properties file. It will be in:
<Notes Install>\Data\workspace\.config\rcpinstall.properties
and you would add (start with FINE, then FINEST if nothing shows up):
com.ibm.designer.domino.ui.commons.level=FINE
Now when you restart the Designer client, it will generate debug output in the workspace\logs folder for that package. You then need to go through the trace logs looking for the time when the delay occurred and see if there are any references to related design elements.
Other open applications may get built at the same time (which looks like a bug to me). Be sure to close all other applications and the server-based replica. Open applications have their icon showing in the application list, and they stay open even if you close and reopen Designer. In Designer 9, right-click the application and select "Close Application". In 8.5 you need to use Package Explorer to close it.
Another good way is to use Working Sets. Only applications in the open Working Set will be built (AFAIK). Have a Working Set with this one app only (and the app only in this Working Set).
update 1
If these don't help, I would delete/rename bookmark.nsf, Cache.NDK, and desktop8.ndk. Then open just this one app and see what happens.
update 2
Check that there are no referenced projects. Right-click the application and select "Project Properties". From there, open "Project References" and make sure no check boxes are checked.
update 3
Based on your update, I would check the item names starting with $ in the location document. Sometimes there are saved IP addresses etc. that could cause this problem. All those items can be removed.
If possible (and if you are not using it yet), try version 9 of the Domino Designer (you do not have to use Domino 9 to do that - it works fine with Domino 8.5.3).
For our projects, build times went down from a few minutes to only a few seconds. I guess they finally noticed at IBM that the build process used to rely heavily on the connection to the server, and did something about it.
With the new Designer you don't even have to replicate to local. You can work directly on your local server.
I have a multi-threaded server application that I'm writing in C++ and I need to implement a good and fairly efficient logging system. By efficient I mean that whatever amount of logging is configured, the application should never come to a grinding halt. So preferably there is some thread that is dedicated to writing the log files.
I want to log each request that the server component handles in its own file, with a rotation system that removes files older than some threshold. A request is handled by two threads: one that does some conversion work, and a worker thread that is part of a thread pool (Boost threadpool) and does all the other actions (database gets, calculations, etc.). So the logging needs to be thread-safe, I have to be able to configure it for levels, and each Logger class instance (my own logger that wraps some library) must accept a new file name, so that a new Logger instance is created for each specific request.
My ultimate question is: which logging library allows me to have a new log file for each request and allows me to configure log levels (i.e. error, warning, critical, etc.)?
Or should I implement everything myself? (Having no logging is not an option.)
I have looked at Boost.Logging v2, and since the main logger object that holds all state (levels, files) is global, I cannot change the files for each request.
I have looked at templog.org, and that one I can't even get to compile. No matter what I include or which references I set, it can never find the templog namespace or any of its classes.
Have a look at Apache log4cxx. It's a great logging library!
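For the per-request-file requirement, here is a hedged sketch of how that could look with log4cxx; it assumes a default char-based build, the naming scheme and pattern are made up, and the logs/ directory must already exist:

#include <log4cxx/fileappender.h>
#include <log4cxx/level.h>
#include <log4cxx/logger.h>
#include <log4cxx/patternlayout.h>
#include <string>

log4cxx::LoggerPtr makeRequestLogger(const std::string& requestId) {
    // One named logger per request, writing to its own file.
    log4cxx::LoggerPtr logger =
        log4cxx::Logger::getLogger("request." + requestId);

    log4cxx::LayoutPtr layout(
        new log4cxx::PatternLayout("%d [%t] %-5p %m%n"));
    log4cxx::AppenderPtr appender(new log4cxx::FileAppender(
        layout, "logs/request-" + requestId + ".log", /*append=*/false));

    logger->addAppender(appender);
    logger->setAdditivity(false);                 // keep it out of root's output
    logger->setLevel(log4cxx::Level::getWarn());  // per-request level
    return logger;
}

Rotation and removal of files older than your threshold would still be your own cleanup code; log4cxx's rolling appenders rotate a single file rather than pruning a directory of per-request files.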