Pop & push automatic build configurations in reaction to priority release builds

Within our company we use TeamCity both for automatically triggered build configurations, which run quite regularly, and for manual build configurations that run far less often.
Examples of automatic builds are: up-to-date light builds, tests, etc. These are usually triggered because new code/data is available.
Currently, there is a big chained process for a release of our product on all platforms. This is started manually by running the final composing build configuration, which triggers all the build configurations it needs. None of them are ever automatically triggered, and all are unique to the release build chain.
My question is: since the number of agents we have available is quite limited, is it possible to give the release process priority in such a way that it would:
Pop any automatic builds and add them again to the queue (just cancelling would be fine, but less desirable) as long as the release chain is ongoing?
Delay any automatically triggered build until the release build chain is finished?
I would understand if there is no existing solution for this; the only thing I'm using for now is priority classes. Even though they work nicely for deciding what in the queue gets executed first, they don't affect any build that is already running.
Do you know of a good solution for this, or have an idea of how I could tackle it myself, for example by implementing something with the REST API?

TeamCity has a custom build queue priorities feature. You can configure your release-process build configurations with a higher priority so that they run before any non-prioritized build; there is no need to remove builds from the queue and re-add them.
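If you do want to manipulate the queue yourself, the REST API you mention exposes the build queue directly. The following is only a rough, untested sketch, assuming libcurl, a TeamCity access token, and the /app/rest/buildQueue endpoints (DELETE on a queued build removes it, POST with a <build> element queues one); the server URL, token, build id, and build configuration id are all placeholders to adapt and verify against your TeamCity version.

```cpp
#include <curl/curl.h>

#include <iostream>
#include <string>

// Send one REST call to the TeamCity server; returns true when curl reports success.
static bool sendRequest(const std::string& url, const std::string& method,
                        const std::string& body, const std::string& token)
{
    CURL* curl = curl_easy_init();
    if (!curl)
        return false;

    curl_slist* headers = nullptr;
    headers = curl_slist_append(headers, ("Authorization: Bearer " + token).c_str());
    headers = curl_slist_append(headers, "Content-Type: application/xml");

    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, method.c_str());
    if (!body.empty())
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());

    const CURLcode rc = curl_easy_perform(curl);
    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    return rc == CURLE_OK;
}

int main()
{
    const std::string server = "https://teamcity.example.com";   // placeholder server
    const std::string token  = "<access-token>";                 // placeholder token

    // While the release chain runs: drop a queued automatic build (id 123 is a placeholder).
    if (!sendRequest(server + "/app/rest/buildQueue/id:123", "DELETE", "", token))
        std::cerr << "failed to remove queued build\n";

    // When the release chain is done: queue the build configuration again (id is a placeholder).
    if (!sendRequest(server + "/app/rest/buildQueue", "POST",
                     "<build><buildType id=\"MyProject_AutomaticBuild\"/></build>", token))
        std::cerr << "failed to re-queue build\n";
}
```

You would still need something (a script on the release trigger, or a separate watcher) to decide when to drain and refill the queue, which is why the priority feature above is usually the simpler route.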

Related

Any way to manually trigger a Test Discovery pass in VS2019 from a VSPackage?

We're currently building an internal apparatus to run unit tests on a large C++ codebase, using Catch2 for the framework and our in-house VS test adapter (using ITestDiscoverer and ITestExecutor) to adapt them to our code practices. However, we've encountered issues with unit tests not always being discovered after a build.
There are a couple of things we're doing out of the norm that may be contributing. While we're using VS2019 for coding, we use FASTBuild and Sharpmake to build our solutions (which can contain countless projects). When we realised that VS would try to build the tests again using MSBuild before running them (even after a full rebuild), we disabled that behaviour in the VS options. Everything else seems to be running as expected, except that sometimes tests aren't picked up.
After doing some digging (namely outputting a verification message to VS's Tests Output the moment our TestDiscoverer is entered), it seems a test discovery pass isn't always being invoked when we would expect it, sometimes even after a full solution rebuild. Beyond the usual expectation that building a project with new changes (or rebuilding outright) would cause a pass to start, the methodology VS uses to determine when to invoke all installed test adapters seems to be fairly black-box in terms of what exact parameters/conditions trigger it.
An alternative seems to be to allow the user to manually execute a test discovery pass via some means that could be wrapped in a VSPackage. However, an initial look through the VSSDK API for anything that would do the job has come up short.
Using the VSSDK, is there any way to invoke a Test Discovery pass independently of VS's normal means of detecting whether a pass is required?
You would want to use the ITestContainerDiscoverer.TestContainersUpdated event. The platform should then call into your container discoverer to get the latest set of containers (ITestContainerDiscoverer.TestContainers). As long as the containers returned from the discoverer are different (based on ITestContainer.CompareTo()), the platform should trigger a discovery for the changed containers. This blog post has been quite helpful in the past: https://matthewmanela.com/blog/anatomy-of-the-chutzpah-test-adapter-for-vs-2012-rc/

Start a new release within an existing release pipeline

At the end of our release pipeline we deploy to "PROD", like most people. However, sometimes we have to hold the release until our customer is ready for it. While we are waiting, we want to move forward and begin a new release so that testing can begin on the next release. What is the best way to handle this, since only one release can be active at a time for a given pipeline?
I think I can solve my problem by putting a queuing policy on the PROD stage. Here is a link for more info: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/stages?view=azure-devops&tabs=classic#queuing-policies

Persistence of data for MSI installation

The MSI installation calls my (native/C++) custom action functions. Since the DLL is freshly loaded, and the MSIEXEC.EXE process is launched separately for each function (for each callable action, as specified in the MSI/WiX script), I cannot use any global data in the C/C++ program.
How (or Where) can I store some information about the installation going on?
I cannot use named objects (like shared memory), because the "process" that loads the DLL to call the "action" function exits, and the OS will not keep the named object alive.
I could use an external file to store the information, but then how would I know (in the DLL's function):
When to delete the external file.
How to tell that this function call is the first call (an action/function call scheduled Before="LaunchConditions" may help, but I'm not sure).
If I cannot delete the file, I cannot know whether the "information" is current or stale (i.e. belonging to an earlier failed/successful MSI run).
"Temporary MSI tables" I have heard of, but not sure how to utilize it.
Preserve Settings: I am a little confused about what your custom actions do, to be honest. However, it sounds like they preserve settings from an older application and setup version and put them back in place if the MSI fails to install properly?
Migration Suggestion (please seriously consider this option): Could you install your new MSI package and delete all shortcuts and access to the old application whilst leaving it installed instead? Your new application version installs to a new path and a new registry hive, and then you migrate all settings on first launch of the new application and then kick off the uninstall of the old application - somehow - or just leave it installed if that is acceptable? Are there COM servers in your old install? Other things that have global registration?
Custom Action Abstinence: The above is just a suggestion to avoid custom actions. There are many reasons to avoid custom actions (propaganda piece against custom actions). If you migrate settings on application launch, you avoid all the sequencing, conditioning, and impersonation issues, along with the technical issues you have already faced (there are many more) associated with custom action use. And crucially, you are in a familiar debugging context (application launch code) as opposed to the unfamiliar world of setups and their poor debuggability.
Preserving Settings & Data: With regard to saving data and settings in a running MSI instance, the built-in mechanism is basically to set properties using Session.Property (COM / VBScript) or MsiSetProperty (Win32) calls. This allows you to preserve strings inside the MSI's Session object, a sort of global data.
Note that properties can only be set in immediate mode (custom actions that don't change the system), and sending the data to deferred-mode custom actions (which can make system changes) is quite involved, centering on the CustomActionData concept (more on deferred mode & CustomActionData).
Essentially you send a string to the deferred-mode custom action by means of a SetProperty custom action in immediate mode - typically a "home grown" delimited string that you construct in immediate mode and split back into its pieces of information when you receive it in deferred mode. You could use JSON or similar to make the transfer easier and more reliable by serializing and deserializing objects as strings.
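To make the hand-off concrete, here is a minimal, hypothetical sketch of it in a native C++ custom action DLL: an immediate-mode action stores a delimited string under the name of the deferred action, and the deferred action reads it back through the CustomActionData property. The action names, payload, and delimiter are placeholders; only MsiSetProperty/MsiGetProperty and the CustomActionData convention come from the Windows Installer API.

```cpp
#include <windows.h>
#include <msi.h>
#include <msiquery.h>
#include <string>

#pragma comment(lib, "msi.lib")

// Immediate mode: the full Session is available, so properties can be set.
extern "C" UINT __stdcall PrepareData(MSIHANDLE hInstall)
{
    // The property name must equal the deferred action's name ("ApplyData") so that
    // Windows Installer hands the value to that action as CustomActionData.
    MsiSetPropertyW(hInstall, L"ApplyData",
                    L"TARGETDIR=C:\\Example;MODE=upgrade");   // hypothetical delimited payload
    return ERROR_SUCCESS;
}

// Deferred mode: the Session is gone; only the CustomActionData property is visible.
extern "C" UINT __stdcall ApplyData(MSIHANDLE hInstall)
{
    wchar_t empty[1] = L"";
    DWORD size = 0;
    // First call with a zero-sized buffer to learn the required length (excludes the null).
    MsiGetPropertyW(hInstall, L"CustomActionData", empty, &size);

    std::wstring data(size + 1, L'\0');
    DWORD bufSize = size + 1;
    MsiGetPropertyW(hInstall, L"CustomActionData", &data[0], &bufSize);
    data.resize(bufSize);   // bufSize now holds the character count without the null

    // Parse the home-grown delimited string and make the system changes here.
    return ERROR_SUCCESS;
}
```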
Alternatives?: This set-property approach is involved. Some people write to and from the registry during installation, or to a temp file (in the temp folder) and then clean up during the commit phase of the MSI, but I don't like this approach for several reasons. For one thing, commit custom actions might not run, depending on policies on target systems (when rollback is disabled, no commit script is created - see the "Commit Execution" section), and it isn't best practice. Adding temporary rows is an interesting option that I have never spent much time on. I doubt you would be able to use it easily to achieve what you need, although I don't really know what you need in detail. I haven't used it properly. Quick sample. This RemoveFile example from WiX might be better.

log4cplus properties file changes are not being read at runtime

Is there any configuration that helps log4cplus pick up dynamic changes? I am changing the log4cplus properties at runtime and want log4cplus to pick up those changes dynamically.
There is the ConfigureAndWatchThread class, which you can instantiate. It will spawn a thread that watches for modification-time changes on the given configuration file. When it notices that the modification time has moved past the last recorded modification time, it will remove all the previously instantiated loggers, appenders, etc., and will reconfigure everything.
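A minimal sketch of that approach (the file name and polling interval are placeholders, and the log4cplus::Initializer line applies to log4cplus 2.x):

```cpp
#include <chrono>
#include <thread>

#include <log4cplus/configurator.h>   // ConfigureAndWatchThread
#include <log4cplus/initializer.h>    // log4cplus 2.x only
#include <log4cplus/logger.h>
#include <log4cplus/loggingmacros.h>

int main()
{
    log4cplus::Initializer initializer;   // sets up and tears down log4cplus (2.x)

    // Spawns a background thread that re-reads the file whenever its
    // modification time advances, checking roughly every 5 seconds.
    log4cplus::ConfigureAndWatchThread watcher(
        LOG4CPLUS_TEXT("log4cplus.properties"), 5000);

    log4cplus::Logger logger = log4cplus::Logger::getRoot();
    for (;;)
    {
        LOG4CPLUS_INFO(logger, LOG4CPLUS_TEXT("still running"));
        std::this_thread::sleep_for(std::chrono::seconds(1));
        // Edits to log4cplus.properties are picked up without restarting the process.
    }
}
```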
However, it is not very sophisticated, and there is no defence against catching the configuration file mid-change while it is still being written by your editor. If this danger is not important to you, use it. Otherwise, I would suggest you build some sort of manual trigger into your software that makes it re-read the logging configuration only on that trigger.
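If you go the manual-trigger route, a sketch of the reload itself could look like the following; when to call it is entirely up to whatever signalling mechanism (console command, IPC message, flag) your application exposes:

```cpp
#include <log4cplus/configurator.h>
#include <log4cplus/hierarchy.h>
#include <log4cplus/logger.h>

// Re-read the logging configuration on demand.
void reloadLoggingConfig(const log4cplus::tstring& configFile)
{
    // Drop all existing loggers/appenders so stale settings do not linger...
    log4cplus::Logger::getDefaultHierarchy().resetConfiguration();
    // ...then parse the properties file again from scratch.
    log4cplus::PropertyConfigurator::doConfigure(configFile);
}
```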

Jenkins - trigger a job if one of several other projects was built by an SCM trigger

Is there a way to trigger a job only if one of a couple of other jobs was built by an SCM trigger?
For example:
1. Projects A, B, and C are built by an SCM trigger.
2. Project D will be built only if A, B, or C was built. It should build only once, even if all of the upstream projects (A, B, and C) were built.
For job 'D', under Advanced project options, add a quiet period (experiment with how long works well). Also make the build parameterized, and add a parameter for the SCM version. When triggering the build from the other builds, use the Parameterized Trigger plugin and pass the SCM version as the parameter.
The idea here is that when there are two identical builds queued, Jenkins will combine them and build D just once.
This assumes that Jenkins's version-control support actually sets an environment variable indicating the version (in the A, B, and C jobs); I'm not 100% sure of this.
If you get it working otherwise but still get multiple builds, experiment with the "allow concurrent builds" checkbox on job D; I think it had some effect on this, one way or another.