ArgoCD - When deploying one app in a monorepo with multiple applications, all apps re-sync

Hi all! :)
I'm using a mono-repo, multi-application architecture:
- foo
  - dev
  - alp
  - prd
- bar
  - dev
  - alp
  - prd
- argocd
  - dev
    - foo-application (Argo CD app, target revision: master, destination cluster: dev, path: foo/dev)
    - bar-application (Argo CD app, target revision: master, destination cluster: dev, path: bar/dev)
  - alp
    - foo-application (Argo CD app, target revision: master, destination cluster: alp, path: foo/alp)
    - bar-application (Argo CD app, target revision: master, destination cluster: alp, path: bar/alp)
  - ...
Recently I found out that merging to the master branch triggers a sync of the other applications as well, despite there being no change in their target path directories.
So whenever one application is modified and merged into master, multiple applications repeatedly go Out-Of-Sync -> Syncing -> Synced. :(
I expected that if there is no code change in an application's target path, it would stay Synced even if the git SHA of the branch changes.
But it doesn't: when the git SHA of the target branch changes, Argo CD is unconditionally triggered because the cache key changes.
Creating a separate manifest repository for each application just to solve this seems wasteful.
While looking for a solution, I came across this feature:
webhook-and-manifest-paths-annotation
However, according to the documentation, this seems to work only when used with a GitHub webhook.
We currently have Argo CD polling the repository every 3 minutes. Does this annotation not work in that case?
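For reference, this is roughly how I would attach that annotation to one of the apps; the repo URL and destination server below are placeholders, not our real values:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: foo-application
  namespace: argocd
  annotations:
    # only treat commits that touch this app's source path as relevant
    argocd.argoproj.io/manifest-generation-paths: .
spec:
  project: default
  source:
    repoURL: https://example.com/org/mono-repo.git   # placeholder
    targetRevision: master
    path: foo/dev
  destination:
    server: https://kubernetes.default.svc           # placeholder for the dev cluster
    namespace: foo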

Related

Google Cloud Run correctly running continuous deployment from GitHub, but not updating when deployed

I've set up a Google Cloud Run service with continuous deployment from a GitHub repo, and it redeploys every time there's a push to main (which is what I want), but when I go to check the site, it hasn't picked up the HTML changes I've been testing with. I've tested on my local machine, and the code updates when I run the Django server, so I'm guessing it's something in my cloudbuild.yml? There was another post I tried to mimic, but it didn't take.
Any advice would be very helpful! Thank you!
cloudbuild.yml:
steps:
# Build the container image
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/${PROJECT_ID}/exeplore', './ExePlore']
# Push the image to Container Registry
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/${PROJECT_ID}/exeplore']
# Deploy image to Cloud Run
- name: 'gcr.io/cloud-builders/gcloud'
  args:
    - 'run'
    - 'deploy'
    - 'exeplore'
    - '--image'
    - 'gcr.io/${PROJECT_ID}/exeplore'
    - '--region'
    - 'europe-west2'
    - '--platform'
    - 'managed'
images:
  - gcr.io/${PROJECT_ID}/exeplore
Here are the variables for GCR
Edit 1: I've now updated my cloudbuild config, so the SHORT_SHA is all gone, but now Google Cloud Run is saying it can't find my manage.py at /Exeplore/manage.py. I might have to trial-and-error it, as running the container locally is fine, and the same goes for running the server locally. I have yet to try what Ezekias suggested, as I rolled back to when it was correctly running the server and it didn't like that.
Edit 2: I've checked the services, it is at 100% Latest
Check your Cloud Run service, either on the Cloud Console or by running gcloud run services describe. It may be set to serve traffic to a specific revision instead of having 100% of traffic serving LATEST.
If that's the case, it won't automatically move traffic to the new revision when you deploy. If you want it to automatically switch to the new update, you can run gcloud run services update-traffic --to-latest or use the "Manage Traffic" button on the revisions tab of the Cloud Console to set 100% of traffic to the latest healthy revision.
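For what it's worth, the same change can also be made declaratively: the traffic block below (a sketch assuming the service is named exeplore) is what you would put in the service YAML and apply with gcloud run services replace:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: exeplore                              # assumed service name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/PROJECT_ID/exeplore   # placeholder image
  traffic:
    # route all traffic to the newest healthy revision
    - latestRevision: true
      percent: 100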
It looks like you're building gcr.io/${PROJECT_ID}/exeplore:$SHORT_SHA, but pushing and deploying gcr.io/${PROJECT_ID}/exeplore. These are essentially different images.
Update any image variables to include the SHORT_SHA to ensure all references are the same.
To avoid duplication, you may also want to use dynamic substitution variables.
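As an illustration only (not necessarily your final file), here is roughly what the config looks like once every reference carries the same ${SHORT_SHA} tag; note that SHORT_SHA is only populated for builds started by a trigger:

steps:
# Build the container image, tagged with the short commit SHA
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/${PROJECT_ID}/exeplore:${SHORT_SHA}', './ExePlore']
# Push the tagged image to Container Registry
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/${PROJECT_ID}/exeplore:${SHORT_SHA}']
# Deploy exactly the image that was just pushed
- name: 'gcr.io/cloud-builders/gcloud'
  args:
    - 'run'
    - 'deploy'
    - 'exeplore'
    - '--image'
    - 'gcr.io/${PROJECT_ID}/exeplore:${SHORT_SHA}'
    - '--region'
    - 'europe-west2'
    - '--platform'
    - 'managed'
images:
  - 'gcr.io/${PROJECT_ID}/exeplore:${SHORT_SHA}'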

AWS CodePipeline with Bitbucket and how to pass the branch name to appspec.yaml

I've created a CodePipeline for a PHP Laravel-based project with Bitbucket as the source. Parameters are passed to appspec.yml using AWS SSM. Everything works fine with the development branch. Now I need to fetch different parameters from AWS SSM based on the branch name in the appspec.yml file.
FOR DEV
Branch name: develop
parameter value: BRANCH_NAME_VALUE (develop_value)
FOR QA
Branch name: qa
parameter value: BRANCH_NAME_VALUE (qa_value)
appspec.yaml file
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html/
    overwrite: true
hooks:
  BeforeInstall:
    - location: scripts/before_install.sh
      timeout: 300
      runas: root
  AfterInstall:
    - location: scripts/after_install.sh
      timeout: 300
      runas: root
How can I get the BRANCH_NAME in order to update after_install.sh?
Not sure what you want to do, but you can't pass arbitrary environment variables to CodeDeploy. The only supported ones are:
LIFECYCLE_EVENT: This variable contains the name of the lifecycle event associated with the script.
DEPLOYMENT_ID: This variable contains the deployment ID of the current deployment.
APPLICATION_NAME: This variable contains the name of the application being deployed. This is the name the user sets in the console or AWS CLI.
DEPLOYMENT_GROUP_NAME: This variable contains the name of the deployment group. A deployment group is a set of instances associated with an application that you target for a deployment.
DEPLOYMENT_GROUP_ID: This variable contains the ID of the deployment group in AWS CodeDeploy that corresponds to the current deployment.
Thus, in your case you could have two deployment groups called develop and qa. Then, in the CodeDeploy scripts, you could check DEPLOYMENT_GROUP_NAME and fetch the respective SSM parameters.
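For example, a rough sketch of scripts/after_install.sh; the SSM parameter names here are made up for illustration:

#!/bin/bash
# Choose the SSM parameter based on the deployment group CodeDeploy exposes
case "$DEPLOYMENT_GROUP_NAME" in
  develop) PARAM_NAME="/myapp/develop/BRANCH_NAME_VALUE" ;;  # hypothetical parameter path
  qa)      PARAM_NAME="/myapp/qa/BRANCH_NAME_VALUE" ;;       # hypothetical parameter path
  *)       echo "Unknown deployment group: $DEPLOYMENT_GROUP_NAME" >&2; exit 1 ;;
esac

# Fetch the value from SSM Parameter Store and export it for later steps
BRANCH_NAME_VALUE=$(aws ssm get-parameter --name "$PARAM_NAME" \
  --with-decryption --query 'Parameter.Value' --output text)
export BRANCH_NAME_VALUE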
It seems that you are trying to merge branches and are facing issues where specific files or directories differ per branch. I faced a similar issue; you can try creating a .gitattributes per branch. The destination branch will have this file so that, once the merge happens, the specific files in the source branch won't overwrite those in the destination branch.
References:
- https://git-scm.com/book/en/v2/Customizing-Git-Git-Attributes#_merge_strategies
- Git - Ignore files during merge
Example:
Two branches: master (for the production environment) and Stage (for the development environment).
git config --global merge.ours.driver true
git checkout master
echo "appspec.yml merge=ours" >> .gitattributes
echo "scripts/before-install.sh merge=ours" >> .gitattributes
git merge stage
$ cat .gitattributes
appspec.yml merge=ours
scripts/before-install.sh merge=ours
Summary:
The idea is to keep appspec.yml clean and environment-free and handle this at the git level itself. Unfortunately, appspec.yml still does not support variables to accommodate per-branch values.
Additionally, I would also add the above paths to a per-branch .gitignore to avoid them being altered during commits. The above is just an example; in a production setup you could disable direct commits to the master branch by default and only use pull requests with manual approval at the AWS CodePipeline level, with SNS topics for approval emails, and use a feature branch and merge to Stage first.

Azure DevOps YAML self-hosted agent pipeline build is stuck at locating the self-hosted agent

Action: I tried to configure and run a simple C++ Azure pipeline on a self-hosted Windows computer. I'm pretty new to all this. I ran the script below.
Expected: to see the build task, display task and clean task, and to see "hello world".
Result: error, the script can't find my build agent.
##[warning]An image label with the label Weltgeist does not exist.
##[error]The remote provider was unable to process the request.
Pool: Azure Pipelines
Image: Weltgeist
Started: Today at 10:16 p.m.
Duration: 14m 23s
Info & Tests:
- My self-hosted agent's name is Weltgeist and it's part of the default agent pool. It's a Windows computer, with g++, MinGW and other related tools on it.
- I tried my build task locally with no problem.
- I tried my build task using the Azure-hosted 'ubuntu-latest' agent with no problem.
- I created the self-hosted agent following this specification: https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/v2-windows?view=azure-devops
- I'm the owner of the Azure repo.
How do I correctly configure the pool YAML parameter for a self-hosted agent?
Do I have additional steps to do server-side, or in the Azure repo configuration?
Any other idea of what went wrong?
# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml
trigger:
- master

pool:
  vmImage: 'Weltgeist' #Testing with self-hosted agent

steps:
- script: |
    mkdir ./build
    g++ -g ./src/hello-world.cpp -o ./build/hello-world.exe
  displayName: 'Run a build script'
- script: |
    ./build/hello-world.exe
  displayName: 'Run Display task'
- script: |
    rm -r build
  displayName: 'Clean task'
(UPDATE)
Solution:
Thanks! After updating it as stated in an answer below and reading a bit more of the pool YAML definition, it works. Note, I modified a couple of other lines to make it work in my environment.
trigger:
- master

pool:
  name: Default
  demands:
  - agent.name -equals Weltgeist

steps:
- script: |
    mkdir build
    g++ -o ./build/hello-world.exe ./src/hello-world.cpp
  displayName: 'Run a build script'
- script: |
    cd build
    hello-world.exe
    cd ..
  displayName: 'Run Display task'
- script: |
    rm -r build
  displayName: 'Clean task'
I was confused by the Default because there was already a pipeline named Default in the organization.
Expanding on the answers provided here.
pool:
  name: NameOfYourPool
  demands:
  - agent.name -equals NameOfYourAgent
Here is the screen where you'll find that information in DevOps.
Since you are using the self-hosted agent, you could use the following format:
pool:
  name: Default
  demands:
  - agent.name -equals Weltgeist
Then it should work as expected.
You could refer to the doc about the pool definition in YAML.
I had faced the same issue, and replacing vmImage under pool with name worked for me. Please find below:
trigger:
- master

pool:
  name: 'Weltgeist' #Testing with self-hosted agent
Also be aware that if your agent only appears in the "Azure Pipelines" pool and not in any of the other pools, then the agent may have been configured as an "Environment" resource, and it can't be used as part of the build step.
I spent ages trying to use a self-hosted VM for a build step, thinking that the correct way to reference the VM was by creating a VM resource from the Pipelines > Environments area:
The agent would be properly created and visible in the "Azure Pipelines" pool, but wouldn't be available in any of the other pools, which meant it couldn't be referenced in the YAML used for setting the server used for builds.
I was able to resolve the issue by de-registering the agent on my self-hosted VM with .\config.cmd remove and re-running the configuration without the --environment --environmentname "<name>" arguments that were provided within the registration script mentioned above (shown in the "Add resource" screenshot).
Oddly, the registration script is a much quicker way to register an agent than the "New agent" form shown in Agent Pools:
The necessary files are pulled to the server (without having to download them first) and a PAT with a 3-hour lifetime is auto-generated.

How to access a GCP Cloud Source Repository from another project?

I have project A and project B.
I use a GCP Cloud Source Repository on project A as my 'origin' remote.
I use Cloud Build with a trigger on changes to the 'develop' branch of the repo to trigger builds. As part of the build I deploy some stuff with the gcloud builder, to project A.
Now, I want to run the same build on project B. Maybe the same branch, maybe a different branch (e.g. 'release-*'). In the end I want to deploy some stuff with the gcloud builder to project B.
The problem is, when I'm on project B (in Google Cloud Console), I can't even see the repo in project A. It asks me to "connect repository", but I can only select GitHub or Bitbucket repos for mirroring. The option "Cloud Source Repositories" is greyed out, telling me that they "are already connected". Just evidently not one from another project.
I could set up a new repo on project B, and push to both repos, but that seems inefficient (and likely not sustainable long term). The curious thing is, that such a setup could easily be achieved using an external Bitbucket/GitHub repo as origin and mirrored in both projects.
Is anything like this at all possible in Google Cloud Platform without external dependencies?
I also tried running all my builds in project A and have a separate trigger that deploys to project B (I use substitutions to manage that), but it fails with permission issues. Cloud Builds seem to always run with a Cloud Build service account, of which you can manage the roles, but I can't see how I could give it access to another project. Also in this case both builds would appear indistinguishable in a single build history, which is not ideal.
I faced a similar problem and I solved it by having multiple Cloud Build files.
One Cloud Build file (which got triggered when code was pushed to a certain branch) was dedicated to copying all of my source code into the other project's source repo, which in turn has its own Cloud Build file for deployment to that project.
Here is a sample of the Cloud Build file that copies sources to another project:
steps:
- name: gcr.io/cloud-builders/git
  args: ['checkout', '--orphan', 'temp']
- name: gcr.io/cloud-builders/git
  args: ['add', '-A']
- name: gcr.io/cloud-builders/git
  args: ['config', '--global', 'user.name', 'Your Name']
- name: gcr.io/cloud-builders/git
  args: ['config', '--global', 'user.email', 'Your Email']
- name: gcr.io/cloud-builders/git
  args: ['commit', '-am', 'latest production commit']
- name: gcr.io/cloud-builders/git
  args: ['branch', '-D', 'master']
- name: gcr.io/cloud-builders/git
  args: ['branch', '-m', 'master']
- name: gcr.io/cloud-builders/git
  args: ['push', '-f', 'https://source.developers.google.com/p/project-prod/r/project-repo', 'master']
This pushed all of the source code into the other project.
Note: you need to give your Cloud Build service account permission to push source code into the other project's source repositories.
As you have already said, you can host your repos externally on Bitbucket/GitHub and sync them to each project, but you need to pay extra for each build.
Otherwise, you could use third-party services to build your repos externally and deploy the result wherever you want; for example, look into CircleCI or a similar service.
You could give the build permissions so that it can refer to resources from another project, but I would keep the projects separated to minimize complexity.
My solution:
From project A, create a new Cloud Build trigger on branch release-* whose build configuration uses $_PROJECT_ID as project B's ID.
In the GCP Cloud Build trigger definition, add a new substitution variable named _PROJECT_ID set to project B's ID.
NOTE: Remember to grant permissions on project B to project A's Cloud Build service account ([PROJECT_NUMBER]@cloudbuild.gserviceaccount.com).
cloudbuild.yaml
steps:
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - '--no-cache'
      - '-t'
      - '$_GCR_HOSTNAME/$_PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
      - .
      - '-f'
      - Dockerfile
    id: Build
  - name: gcr.io/cloud-builders/docker
    args:
      - push
      - '$_GCR_HOSTNAME/$_PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
    id: Push
  - name: gcr.io/cloud-builders/gcloud
    args:
      - beta
      - run
      - deploy
      - $_SERVICE_NAME
      - '--platform=managed'
      - '--image=$_GCR_HOSTNAME/$_PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
      - >-
        --labels=managed-by=gcp-cloud-build-deploy-cloud-run,commit-sha=$COMMIT_SHA,gcb-build-id=$BUILD_ID,gcb-trigger-id=$_TRIGGER_ID,$_LABELS
      - '--region=$_DEPLOY_REGION'
      - '--quiet'
      - '--project=$_PROJECT_ID'
    id: Deploy
    entrypoint: gcloud
images:
  - '$_GCR_HOSTNAME/$_PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
options:
  substitutionOption: ALLOW_LOOSE
timeout: '20m'
tags:
  - gcp-cloud-build-deploy-cloud-run
  - gcp-cloud-build-deploy-cloud-run-managed
  - driveit-hp-agreement-mngt-api
Unfortunately Google doesn't seem to provide that functionality within Source Repositories (would rock if you could).
An alternative option you could consider (though involves external dependencies) is to mirror your Source Repositories first to GitHub or Bitbucket, then mirror back again into Source Repositories. That way, any changes made to any mirror of the repository will sync. (i.e. a change pushed in Project B will sync with Bitbucket, and likewise in Project A)
EDIT
To illustrate my alternative solution, here is a simple diagram

How to serve a Java application as Docker container and .war file?

Currently our company is creating individual software for B2B customers.
Some applications can be used for multiple customers.
Usually we can host the application in the cloud and deploy everything with Docker.
Running a GitLab pipeline and deploying etc. is fine for that.
Now we got some customers who rely on an external installation.
Since some of them still use Windows Server (2008, though), I cannot install a proper Docker environment there, so we need to install an Apache Tomcat and run the application inside it.
Question: How to deal with that? I would need a pipeline to create a docker image and a war file.
Simply create two completely independent pipelines?
Handle everything in a single pipeline?
Our current gitlab-ci.yml file for the .war
image: maven:latest

variables:
  MAVEN_CLI_OPTS: "-s settings.xml -q -B"
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"

cache:
  paths:
    - .m2/repository/
    - target/

stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - mvn $MAVEN_CLI_OPTS compile

test:
  stage: test
  script:
    - mvn $MAVEN_CLI_OPTS test

install:
  stage: deploy
  script:
    - mvn $MAVEN_CLI_OPTS install
  artifacts:
    name: "datahub-$CI_COMMIT_REF_SLUG"
    paths:
      - target/*.war
Using two separate delivery pipelines is preferable: you are dealing with two very different installation processes, and you need to be sure which one is running for a given client.
Having two separate GitLab pipelines allows said client to choose the right one.
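For illustration only, here is a sketch of how the Docker delivery could sit next to the existing Maven jobs in the same .gitlab-ci.yml before being split into its own pipeline via rules: or a child pipeline; the job name, the image tag, and the assumption that a Dockerfile sits at the repository root are all mine:

build-docker-image:
  stage: deploy
  image: docker:latest
  services:
    - docker:dind            # assumes the runner allows Docker-in-Docker
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'   # assumed condition; adjust per client/delivery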