ArgoCD apps of apps with local repo - argocd

I use the apps of apps approach with ArgoCD, and my child apps look like this:
apiVersion: argoproj.io/v1alpha1
kind: Application
...
spec:
  ...
  source:
    path: helm-charts/chart/version
    repoURL: 'git#path/to/git/repo.git'
    targetRevision: HEAD
There are roughly 80 child apps below the root app, and each child repeats the repoURL git#path/to/git/repo.git. Can I replace this with a path relative to the parent application, which is committed to the same Git repo?


How to run python django migration command on all pods in kubernetes cluster from a temporary pod

I have a Kubernetes (EKS) cluster with multiple pods running. I want to run Django migrations on each pod. Whenever new code is deployed, I want migrations to run automatically.
I have figured out there are two ways to do it:
1. Through a Job
2. Running a script in the container spec
I want to do it through a Job. Can someone guide me on how to achieve this using Jobs in Kubernetes?
I have seen some articles, but with a Job, do I have to specify new images for all pods every time I deploy the latest changes?
When you run a container you can override its main command. You can use this in the context of a Job to run migrations instead of the Django server.
apiVersion: batch/v1
kind: Job
metadata: {...}
spec:
  template:
    spec:
      containers:
        - name: migrate
          image: ...
          command: [./manage.py, migrate]
This Job will run only once, and you will have to delete and recreate it if you want it to run again. The Job's image: does need to match the build of the main application, since the migrations will presumably be embedded in the image.
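As an aside (not part of the original answer): batch/v1 Jobs support a `ttlSecondsAfterFinished` field that removes the need to delete the finished Job by hand; the image name below is a placeholder. A minimal sketch:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: migrate
spec:
  # Automatically garbage-collect the Job two minutes after it finishes,
  # so the next deploy can recreate it without a manual delete.
  ttlSecondsAfterFinished: 120
  template:
    spec:
      restartPolicy: Never          # Jobs require Never or OnFailure
      containers:
        - name: migrate
          image: registry.example.com/myapp:latest   # placeholder image
          command: ["./manage.py", "migrate"]
```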
You tagged this question with the Helm tool. If you're using Helm, you can run this Job as a hook, which causes it to be re-run whenever you run helm upgrade. This also simplifies the task of keeping the per-build image: in sync.
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "mychart.fullname" . }}-migrate
  labels:
    {{- include "mychart.labels" . | nindent 4 }}
  annotations:
    helm.sh/hook: post-install,post-upgrade
spec:
  template:
    spec:
      containers:
        - name: migrate
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          command: [./manage.py, migrate]
I've copied some of the boilerplate from the standard chart template. In particular, note the helm.sh/hook annotation, which causes the Job to be run as a Helm hook, and the templating in image:, which lets you run helm upgrade mychart . --set-string image.tag=20230126 to provide the image tag at deploy time.
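One practical addition, an assumption on my part rather than part of the original answer: a hook Job keeps its name across releases, so a re-run can collide with the completed Job left over from the previous upgrade. Helm's `helm.sh/hook-delete-policy` annotation handles this:

```yaml
annotations:
  helm.sh/hook: post-install,post-upgrade
  # Delete the previous hook Job before creating the new one,
  # so repeated upgrades don't fail on an existing Job name.
  helm.sh/hook-delete-policy: before-hook-creation
```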

ArgoCD - When deploying one app in monorepo with multiple application, all app re-sync triggers

All ! :)
I’m using a mono-repo with a multi-application architecture.
- foo
  - dev
  - alp
  - prd
- bar
  - dev
  - alp
  - prd
- argocd
  - dev
    - foo-application (Argo CD app, target revision: master, destination cluster: dev, path: foo/dev)
    - bar-application (Argo CD app, target revision: master, destination cluster: dev, path: bar/dev)
  - alp
    - foo-application (Argo CD app, target revision: master, destination cluster: alp, path: foo/alp)
    - bar-application (Argo CD app, target revision: master, destination cluster: alp, path: bar/alp)
  - ...
Recently I found out that merging to the master branch triggers a sync on other applications as well, despite there being no change in their target path directories.
So whenever one application is modified and merged into master, multiple applications repeatedly go Out-Of-Sync -> Syncing -> Synced. :(
In my opinion, if there is no code change under the target path, the app should stay Synced even if the git SHA of the branch changes.
But it doesn't work that way: when the git SHA of the target branch changes, ArgoCD's cache key changes and a refresh is unconditionally triggered.
Creating a separate manifest repository for each application to work around this seems wasteful.
While looking for a solution, I came across this feature.
webhook-and-manifest-paths-annotation
However, according to the documentation, this seems to work only when used with the GitHub webhook.
Currently ArgoCD polls the repository every 3 minutes. Does this annotation not work in that case?
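For reference, the annotation the linked section describes is set on the Application resource itself; a minimal sketch (the application name is taken from the tree above):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: foo-application
  annotations:
    # Only changes under the listed paths (here ".", i.e. the app's own
    # spec.source.path) should mark the app as needing a refresh.
    argocd.argoproj.io/manifest-generation-paths: .
```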

In Netlify CMS's config.yml, how to specify that the site is in a subdirectory of the repo?

I have a repo at github.com/some_user/some_repo, and it's set to deploy via GitHub pages on the ./docs subfolder of the master branch, to example.com.
./docs/admin/config.yml (and example.com/admin/config.yml) has the following code:
backend:
  name: github
  repo: some_user/some_repo
  branch: master
  base_url: https://my-authentication-server.example.com
How do I tell Netlify CMS that the code is in the ./docs subfolder and not the root of the codebase? Is there something like this?
backend:
  ...
  site_directory: docs
  ...
As far as I can tell there is no such directive. However, each individual file or folder can be set to reside in the subdirectory, for example:
...
folder: docs/_posts/
...
file: docs/_data/my_custom_data.yml
...
etc. Then it works.
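Spelled out as a full collection entry (the collection name and fields here are illustrative, not from the original config):

```yaml
collections:
  - name: posts
    label: Posts
    folder: docs/_posts/      # path is relative to the repo root, not ./docs
    create: true
    fields:
      - { label: Title, name: title, widget: string }
      - { label: Body, name: body, widget: markdown }
```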

PCF Staticfile_buildpack not considering Staticfile

I'm trying to deploy an Angular 6 application to PCF using the "cf push -f manifest.yml" command.
Deployment works fine, except it's not considering the options set in "Staticfiles".
To be specific, I have the below values in Staticfile, to force HTTPS and to include the ".well-known" folder, which is excluded by default due to the "." prefix.
force_https: true
host_dot_files: true
location_include: .well-known/assetlinks.json
I also have a manifest.yml file with below values,
applications:
  - name: MyApp
    instances: 1
    memory: 256M
    random-route: false
    buildpack: https://github.com/cloudfoundry/staticfile-buildpack.git
    path: ./dist
    routes:
      - route: myapp.example.com
Is there an alternate option to set these params in manifest.yml, or how do I achieve this?
First...
buildpack: https://github.com/cloudfoundry/staticfile-buildpack.git
Don't do this. You don't want to point at the master branch of a buildpack; it could be unstable. Point to a release, like https://github.com/cloudfoundry/staticfile-buildpack.git#v1.4.31, or use the system-provided staticfile buildpack, staticfile_buildpack.
To be specific, I've below values in Staticfiles
It's not Staticfiles, plural. It's Staticfile singular. Make sure you have the correct file name & that it's in the root of your project (i.e. same directory that you're pushing), which is ./dist based on path: in your manifest.
Update: for Angular, "Staticfile" should be under the "src" folder, which will put it under dist when building.
https://docs.cloudfoundry.org/buildpacks/staticfile/index.html#config-process
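Putting the two corrections together, a manifest along these lines (app name and route reused from the question; newer cf manifests use the plural `buildpacks` key) pins to the system buildpack and pushes ./dist, which is where Staticfile must end up:

```yaml
applications:
  - name: MyApp
    instances: 1
    memory: 256M
    # System-provided buildpack instead of the git master branch.
    buildpacks:
      - staticfile_buildpack
    # Staticfile must sit at the top of this directory.
    path: ./dist
    routes:
      - route: myapp.example.com
```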

Using Google endpoints in different modules of the same app

I'm quite new to development with Google App engine and other Google services of the Cloud platform and I'd like to create an app with different modules (so they can have their own lifecycle) which use endpoints.
I'm struggling with API paths because I don't know how to route requests to the right module.
My directory tree looks like this:
/myApp
  /module1
    __init__.py
    main.py
  /module2
    __init__.py
    main.py
  module1.yaml
  module2.yaml
  dispatch.yaml
module1.yaml
application: myapp
runtime: python27
threadsafe: true
module: module1
version: 0
api_version: 1

handlers:
# The endpoints handler must be mapped to /_ah/spi.
# Apps send requests to /_ah/api, but the endpoints service handles mapping
# those requests to /_ah/spi.
- url: /_ah/spi/.*
  script: module1.main.api

libraries:
- name: pycrypto
  version: 2.6
- name: endpoints
  version: 1.0
module2.yaml
application: myapp
runtime: python27
threadsafe: true
module: module2
version: 0
api_version: 1

handlers:
# The endpoints handler must be mapped to /_ah/spi.
# Apps send requests to /_ah/api, but the endpoints service handles mapping
# those requests to /_ah/spi.
- url: /_ah/spi/.*
  script: module2.main.api

libraries:
- name: pycrypto
  version: 2.6
- name: endpoints
  version: 1.0
dispatch.yaml
dispatch:
- url: "*/_ah/spi/*"
  module: module1
- url: "*/_ah/spi/.*"
  module: module2
So I'd like my endpoints to be called with the name of the corresponding module somewhere ('_ah/api/module1' or 'module1/_ah/api'). I don't know what to put in the different .yaml files, and I don't even know whether what I'm doing is right, or even possible.
Thanks for your answers.
You can host different endpoints on different modules (now called services); the way to correctly address them is as follows:
https://<service-name>-dot-<your-project-id>.appspot.com/_ah/api
Now, let's say you have—as per your description—module1 and module2, each one hosting different endpoints. You will call module1 APIs by hitting:
https://module1-dot-<your-project-id>.appspot.com/_ah/api
And in a similar fashion, module2 APIs:
https://module2-dot-<your-project-id>.appspot.com/_ah/api
If you want to dig deeper into how this URL schema works (including versions, which are another important part of the equation here), go read Addressing microservices and the immediately following section, Using API versions.
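The -dot- naming scheme above is mechanical, so it can be captured in a small helper; this function is purely illustrative and not part of any App Engine API:

```python
def service_url(service: str, project_id: str) -> str:
    """Build the URL used to address an Endpoints API hosted on a
    specific App Engine service (module), following the
    https://<service>-dot-<project>.appspot.com/_ah/api schema."""
    return f"https://{service}-dot-{project_id}.appspot.com/_ah/api"

print(service_url("module1", "myapp"))
# → https://module1-dot-myapp.appspot.com/_ah/api
```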