GitHub Actions CI: is it possible to configure related or dependent concurrency groups?

We have four workflows for the project and its extension.
Workflow 1 is run for project A.
Workflow 2 is run for project A.
Workflow 3 is run for extension B.
Workflow 4 is run for extension B.
We want only one of these workflows to run at any given point in time (i.e. make them mutually exclusive), but jobs should only be cancelled within the same project.
For example, if all workflows were set up with the same concurrency group and were run simultaneously in the order 1->3->4->2: workflow 1 starts, workflows 3 and 4 are cancelled, and workflow 2 is pending. We would like both workflow 4 and workflow 2 to be pending, since they belong to different projects.
Is it possible to set this up in GitHub Actions CI?
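GitHub Actions only supports a single concurrency group per workflow run, so fully dependent or related groups are not directly expressible. The closest native approximation (a sketch, with hypothetical group names) is one group per project, which serializes runs within each project but lets the two projects run concurrently:

```yaml
# In workflows 1 and 2 (project A):
concurrency:
  group: project-a
  cancel-in-progress: false

# In workflows 3 and 4 (extension B), use a different group:
# concurrency:
#   group: extension-b
#   cancel-in-progress: false
```

Note that a concurrency group holds at most one pending run: with cancel-in-progress set to false, a newer run in the same group supersedes the previously pending run rather than the running one.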

Related

How to call Cloud Workflows (GCP) sequentially?

I don't want to start a workflow while another execution of the same workflow is still processing.
You have a couple of options:
Create a primary top-level workflow that calls all the other workflows as steps, using the googleapis.workflowexecutions.v1.projects.locations.workflows.executions.create action.
Literally, this means you have one main workflow with many steps, each triggering the next workflow using the above call statement. Steps are executed sequentially.
Leverage the Firestore API to write a flag to a collection that controls whether a workflow is in progress; if another workflow starts, it checks the flag and stops.
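The first option can be sketched as a parent workflow whose steps create one execution after another via the connector. The workflow names, project ID, and location below are placeholders, not from the original question:

```yaml
# Parent workflow (sketch): workflow-a and workflow-b are hypothetical child workflows.
main:
  steps:
    - runWorkflowA:
        call: googleapis.workflowexecutions.v1.projects.locations.workflows.executions.create
        args:
          parent: projects/PROJECT_ID/locations/us-central1/workflows/workflow-a
        result: execA
    - runWorkflowB:
        call: googleapis.workflowexecutions.v1.projects.locations.workflows.executions.create
        args:
          parent: projects/PROJECT_ID/locations/us-central1/workflows/workflow-b
        result: execB
```

One caveat: executions.create returns once the child execution has been created, not necessarily once it has finished. If each child must complete before the next begins, you would need to poll executions.get on the returned execution until its state is SUCCEEDED.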

Gitlab Runner under GCP Load Balancer

At the moment I have a load balancer which runs a Compute Engine Instance Group which has a minimum of 1 server and a maximum of 5 servers.
This is running auto-scaling and uses a pre-built Ubuntu template with all the base stuff needed.
When an instance boots up it registers a runner with the GitLab project, then triggers a job to update the instance to the latest copy of the code.
This is fine and works well.
The issue comes when I make a change to the git branch and push: the job only seems to be picked up by one of the 5 instances, apparently at random.
I was under the impression that GitLab would push jobs out to all the registered runners, but this doesn't seem to be the case.
I have seen answers on here that cover multiple runners on a single server, but I haven't come across my particular situation.
Has anyone come across this before? I would assume that this is a pretty normal situation, and weird that it doesn't just work.
For each job that runs in GitLab, only 1 runner receives the job. The mechanism is PULL based -- the runners constantly ask GitLab if there's any jobs available to run. GitLab never initiates communication with the runners.
Therefore, your load balancer rules do nothing to affect which runner receives a job, and there is no "fairness" in distributing jobs across servers. Runners will keep asking for jobs every few seconds as long as they are able to take them (according to the concurrency settings in their config.toml), and GitLab will hand jobs out on a first-come, first-served basis.
If you set the concurrency to 1 and start multiple jobs, you should see multiple servers pick up the jobs.
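For reference, the per-runner concurrency lives in the runner's config.toml. The values below are illustrative, not taken from the question:

```toml
# Top-level setting: how many jobs this runner process may run at once.
concurrent = 1

[[runners]]
  name = "autoscaled-instance"        # hypothetical name
  url = "https://gitlab.example.com/"
  token = "RUNNER_TOKEN"
  executor = "shell"
  # With concurrent = 1, this instance takes one job at a time,
  # so additional queued jobs get picked up by the other instances.
```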

How to queue aws codepipeline change sets

In one pipeline we have 3 projects being deployed. In the first stage we retrieve all the sources, and in subsequent stages we deploy and run tests on each, for 4 stages in total: 1 for getting the sources and 1 per project for deployment, tests and other actions. Our change releases are triggered by any commit to any of the projects in the pipeline.
Normally this works fine, but apparently CodePipeline doesn't queue release changes: if a commit lands while a release is running, it can trigger another one, so two releases run in parallel on the same EC2 instance and generate errors. Is there a way to configure a queue for AWS CodePipeline release changes, discarding the option of manual approvals?
Thanks in advance for the help.
Based on your description it sounds like you have three projects in one pipeline with a stage for each project and one EC2 instance.
Why not create an independent pipeline for each project? Otherwise, it sounds like you need mutual exclusion across the project stages. You could combine the three deploy stages into one and let CodePipeline enforce its rule that only one pipeline execution can occupy a stage at a time.
I should probably mention based on your question that CodePipeline is intended for continuous delivery and it's desirable to have multiple changes moving through the pipeline at the same time. This is more obvious with deep pipelines (i.e. if it takes 3 days to fully release a change, you probably don't want to wait 3 days before a new change can start traversing the pipeline).
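If you do need a coarse manual gate, one option not mentioned in the answer above (a sketch, assuming the AWS CLI is configured; the pipeline and stage names are placeholders) is to close a stage's inbound transition while a release is in flight, which makes newer executions wait at the stage boundary:

```shell
# Hold new changes at the boundary of a hypothetical stage:
aws codepipeline disable-stage-transition \
  --pipeline-name my-pipeline \
  --stage-name DeployProject1 \
  --transition-type Inbound \
  --reason "Hold new changes while a release is running"

# Re-open the transition once the current release has finished:
aws codepipeline enable-stage-transition \
  --pipeline-name my-pipeline \
  --stage-name DeployProject1 \
  --transition-type Inbound
```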

Schedule task in DSS 3.5 for DSS project boxcar

I created a Data Services project and enabled boxcarring to run 5 queries sequentially.
After deploying the service, I need a scheduled task to run it every 5 minutes. In the scheduled task I selected the request_box operation (which was created by DSS boxcarring), but it doesn't work. How can I use a scheduled task with boxcarring?
Thank you
When a task is scheduled, the operation should be a parameter-less operation. As request_box consists of several other operations, this scenario will not work the way a normal operation does. I have added a JIRA issue to report this scenario, and you can track the progress there.

How can I modify the Load Balancing behavior Jenkins uses to control slaves?

We use Jenkins for our CI build system. We also use 'concurrent builds' so that Jenkins will build each change independently. This means we often have 5 or 6 builds of the same job running simultaneously. To accommodate this, we have 4 slaves each with 12 executors.
The problem is that Jenkins doesn't really 'load balance' among its slaves. It tries to build a job on the same slave that it previously built on (presumably to reduce the time syncing from source control). This is a problem because Jenkins will build all 6 instances of our build on the same slave (or more likely between 2 slaves). One build machine gets bogged down and runs very slowly while the rest of them sit idle.
How do I configure the load balancing behavior of Jenkins, and how it controls its slaves?
We were facing a similar issue. So I've put together a plugin that changes the Load Balancer in Jenkins to select a node that currently has the least load - https://plugins.jenkins.io/leastload/
Any feedback is appreciated.
If you do not find a plugin that does it automatically, here's an idea of what you can do:
Install Node Label Parameter plugin
Add SLAVE parameter to your jobs
Restrict jobs to run on ${SLAVE}
Add a trigger job that will do the following:
Analyze load distribution via a System Groovy Script and decide which node should run the next build.
Dispatch the build on that node with the Parameterized Trigger plugin by assigning the appropriate value to the SLAVE parameter.
In order to analyze load distribution you need to install the Groovy plugin and familiarize yourself with the Jenkins Main Module API.
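A minimal System Groovy sketch of the load-analysis step, using method names from the Jenkins core Computer API; treat it as a starting point rather than a drop-in solution, since it must run inside Jenkins:

```groovy
import jenkins.model.Jenkins

// Pick the online node whose executors are least busy.
def candidates = Jenkins.instance.computers.findAll {
    it.online && it.numExecutors > 0
}
def leastLoaded = candidates.min { it.countBusy() }
println "Next build should go to: ${leastLoaded?.name} " +
        "(${leastLoaded?.countBusy()}/${leastLoaded?.numExecutors} executors busy)"
```

The trigger job would then pass leastLoaded.name as the SLAVE parameter via the Parameterized Trigger plugin.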
If your build machines cannot comfortably handle more than 1 build, why configure them with 12 executors? If that is indeed the case, you should reduce the number of executors to 1. My Jenkins has 30 slaves, each with 1 executor.
You may also use the Throttle Concurrent Builds plugin to restrict how many instances of a job can run in parallel on the same node.
I have two labels -- one for small tasks and one for big tasks. I have one executor for the big task and 4 for the small tasks. This does balance things a little.