Assign list to a key within a chart - templates

Deployment.yaml
...
env: {{ .Values.env}}
...
Values.yaml:
env:
- name: "DELFI_DB_USER"
value: "yyy"
- name: "DELFI_DB_PASSWORD"
value: "xxx"
- name: "DELFI_DB_CLASS"
value: "com.mysql.jdbc.Driver"
- name: "DELFI_DB_URL"
value: "jdbc:sqlserver://dockersqlserver:1433;databaseName=ddbeta;sendStringParametersAsUnicode=false"
Feels like I'm missing something obvious.
linter says: ok
template says:
env: [map[name:DELFI_DB_USER value:yyy] map[name:DELFI_DB_PASSWORD
value:xxx] map[name:DELFI_DB_CLASS value:com.mysql.jdbc.Driver]
map[value:jdbc:mysql://dockersqlserver.{{ .Release.Namespace
}}.svc.cluster.local:3306/ddbeta\?\&amp\;useSSL=true\&amp\;requireSSL=false
name:DELFI_DB_URL]]
upgrade says:
Error: UPGRADE FAILED: YAML parse error on
xxx/templates/deployment.yaml: error converting YAML to JSON: yaml:
line 35: found unexpected ':'
solution:
env:
{{- range .Values.env }}
- name: {{ .name | quote }}
value: {{ .value | quote }}
{{- end }}

The current Go template expansion will give output which is not YAML:
env: {{ .Values.env}}
becomes:
env: [Some Go type stuff that isn't YAML]...
The Helm Go template needs to loop over the keys of the source YAML dictionary.
This is described in the Helm docs.
The correct Deployment.yaml is:
...
env:
{{- range .Values.env }}
- name: {{ .name | quote }}
value: {{ .value | quote }}
{{- end }}
...
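For reference, with the values shown above, helm template should then expand that block into plain YAML roughly like this (first two entries shown; the rest follow the same pattern):
env:
  - name: "DELFI_DB_USER"
    value: "yyy"
  - name: "DELFI_DB_PASSWORD"
    value: "xxx"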

Helm includes undocumented toYaml and toJson template functions; either will work here (because valid JSON is valid YAML). A shorter path could be
env: {{- .Values.env | toYaml | nindent 2 }}
Note that you need to be a little careful with the indentation, particularly if you're setting any additional environment variables that aren't in that list. In this example I've asked Helm to indent the YAML list two steps further, so any additional environment values need to follow that too:
env: {{- .Values.env | toYaml | nindent 2 }}
  - name: OTHER_SERVICE_URL
    value: "http://other-service.default.svc.cluster.local"
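Rendered with the same values, and with env: sitting at the top level of this snippet, that should come out roughly as:
env:
  - name: DELFI_DB_USER
    value: yyy
  - name: DELFI_DB_PASSWORD
    value: xxx
  # ...remaining DELFI_* entries...
  - name: OTHER_SERVICE_URL
    value: "http://other-service.default.svc.cluster.local"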

Is there a way to template a variable in Ansible?

applications:
appA:
someDB:
enable: true
datadir: '/var/lib/postgresql/11/appA'
appB:
someDB:
enable: true
datadir: '/var/lib/postgresql/11/appB'
- ansible.builtin.set_fact:
clusters: "{{ clusters + [ item ] }}"
when: applications.{{ item }}.someDB.enable
loop:
- appA
- appB
- ansible.builtin.shell: |
pg_createcluster \
-d {{ aplications.item.someDB.datadir }}
when: item == 'appA'
loop: "{{ clusters }}"
Is there a simple way to make Ansible do substitutions inside a var? Something like a dereference operator.
In the loop, item == 'appA',
so {{ applications.item.someDB.datadir }} should behave like {{ applications.appA.someDB.datadir }}, and its content would automatically be '/var/lib/postgresql/11/appA', and so on.
Probably I'm using the wrong approach, but it seems reasonable to me.
Thanks for any help.
The applications variable can be flattened, which makes it easier to set the conditions for the task:
applications:
- name: appA
someDB:
enable: true
datadir: '/var/lib/postgresql/11/appA'
- name: appB
someDB:
enable: true
datadir: '/var/lib/postgresql/11/appB'
- name: Create PG cluster
ansible.builtin.shell: >
pg_createcluster
-d {{ item.someDB.datadir }}
when:
- item.name == 'appA'
- item.someDB.enable
loop: "{{ applications }}"
Some comments:
Adding a unique name to the task will make it easier to debug your playbooks, especially when you have a long playbook or role.
Using > in shell takes care of collapsing the instructions into a single line, so you won't need the backslash.
Usually, to create a PostgreSQL database you would prefer the postgresql_db Ansible module; the -d <directory> is better placed in the configuration file, so you wouldn't need it as an argument.
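As a rough illustration of that last point, a minimal task could look like this (a sketch only: the module community.postgresql.postgresql_db is real, but the database name appA_db is a made-up placeholder):
- name: Create database for appA
  community.postgresql.postgresql_db:
    name: appA_db        # hypothetical database name
    state: present
  become: true
  become_user: postgres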
Create the list of enabled clusters
clusters: "{{ applications|dict2items|
json_query('[?value.someDB.enable].key') }}"
gives
clusters:
- appA
- appB
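If the jmespath library that json_query depends on isn't available on the controller, a plain-Jinja equivalent should produce the same list (a sketch under that assumption):
clusters: "{{ applications | dict2items |
              selectattr('value.someDB.enable') |
              map(attribute='key') | list }}"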
Then, the task below creates the run-strings you want
- debug:
msg: "pg_createcluster -d {{ applications[item].someDB.datadir }}"
loop: "{{ clusters }}"
gives (abridged)
msg: pg_createcluster -d /var/lib/postgresql/11/appA
msg: pg_createcluster -d /var/lib/postgresql/11/appB
Example of a complete playbook for testing
- hosts: localhost
vars:
applications:
appA:
someDB:
enable: true
datadir: '/var/lib/postgresql/11/appA'
appB:
someDB:
enable: true
datadir: '/var/lib/postgresql/11/appB'
clusters: "{{ applications|dict2items|
json_query('[?value.someDB.enable].key') }}"
tasks:
- debug:
var: clusters
- debug:
msg: "pg_createcluster -d {{ applications[item].someDB.datadir }}"
loop: "{{ clusters }}"
Notes:
You can get both the key and the value by substitution
- debug:
msg: "{{ _key }}: {{ _val }}"
loop:
- appA
- appB
when: applications[item].someDB.enable
vars:
_key: "applications.{{ item }}.someDB.datadir"
_val: "{{ applications[item].someDB.datadir }}"
gives (abridged)
msg: 'applications.appA.someDB.datadir: /var/lib/postgresql/11/appA'
msg: 'applications.appB.someDB.datadir: /var/lib/postgresql/11/appB'
Create a dictionary to simplify the searching
app_datadir: "{{ dict(applications|dict2items|
json_query('[].[key, value.someDB.datadir]')) }}"
gives
app_datadir:
appA: /var/lib/postgresql/11/appA
appB: /var/lib/postgresql/11/appB
Use this dictionary "to make Ansible do substitutions". For example,
- debug:
msg: "applications.{{ item }}.someDB.datadir: {{ app_datadir[item] }}"
loop:
- appA
- appB
gives (abridged)
msg: 'applications.appA.someDB.datadir: /var/lib/postgresql/11/appA'
msg: 'applications.appB.someDB.datadir: /var/lib/postgresql/11/appB'
Example of a complete playbook for testing
- hosts: localhost
vars:
applications:
appA:
someDB:
enable: true
datadir: '/var/lib/postgresql/11/appA'
appB:
someDB:
enable: true
datadir: '/var/lib/postgresql/11/appB'
app_datadir: "{{ dict(applications|dict2items|
json_query('[].[key, value.someDB.datadir]')) }}"
tasks:
- debug:
var: app_datadir
- debug:
msg: "applications.{{ item }}.someDB.datadir: {{ app_datadir[item] }}"
loop:
- appA
- appB
You can also create the dictionary app_enable
app_enable: "{{ dict(applications|dict2items|
json_query('[].[key, value.someDB.enable]')) }}"
gives
app_enable:
appA: true
appB: true
and use it in the condition. For example,
- debug:
msg: "applications.{{ item }}.someDB.datadir: {{ app_datadir[item] }}"
loop:
- appA
- appB
when: app_enable[item]

Helm values string to boolean

New to Helm and templating. I have a Helm values file which needs to refer to a value from another common values file.
enabled: "{{ .Values.common.service.enabled }}"
and the output I expect is
enabled: false
but I always get it with the quotes:
enabled: "false"
The common values file
service:
enabled : "false"
I have tried toYaml, but it's still the same:
enabled: "{{ .Values.common.service.enabled | toYaml }}"
You're receiving the quotes because you wrote the quotes in the declaration of the component. Instead, the component definition YAML should be:
enabled: {{ .Values.common.service.enabled }}
and the values.yaml:
common:
service:
enabled: false
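Rendered, that gives a bare enabled: false. If the flag is then meant to gate a whole manifest, a conditional along these lines is one way to consume it (a sketch; the Service shown here is only an illustration, not taken from the question):
{{- if .Values.common.service.enabled }}
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}
spec:
  ports:
    - port: 80
{{- end }}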

Helm Templating in Configmap for values.yaml

I"m looking for help to create a generic configmap.yaml that can support several services.
values.yaml (THIS WORKS)
value1: val1
genericConfigMapProperties:
application.properties: |-
prop1=prop1value
prop2=prop2value
configmap.yaml
apiVersion: v1
kind: ConfigMap
...
...
data:
{{ (toYaml .Values.genericConfigMapProperties) . | indent 4 }}
The template {{ (toYaml .Values.genericConfigMapProperties) . | indent 4 }} is almost perfect. It renders application.properties correctly:
data:
application.properties: |-
prop1=prop1value
prop2=prop2value
values.yaml (THIS DOES NOT WORK)
value1: val1
genericConfigMapProperties:
cmValue1: {{ .Values.value1 | default "default val1" | quote }}
application.properties: |-
prop1=prop1value
prop2=prop2value
It gets errors rendering cmValue1. I am expecting this output:
data:
cmValue1: val1
application.properties: |-
prop1=prop1value
prop2=prop2value
Errors:
Error: failed to parse values.yaml: error converting YAML to JSON: yaml: invalid map key: map[interface {}]interface {}{".Values.value1 | default \"default val1\" | quote":interface {}(nil)}
helm.go:88: [debug] error converting YAML to JSON: yaml: invalid map key: map[interface {}]interface {}{".Values.value1 | default \"default val1\" | quote":interface {}(nil)}
failed to parse values.yaml
What additional helm template code do I need to support cmValue1 rendering?
Thank you.
Thank you for responding.
I've found this awesome solution from bitnami common templates. It works just about anywhere.
https://github.com/bitnami/charts/blob/master/bitnami/common/templates/_tplvalues.tpl
Copy this template file:
{{/* vim: set filetype=mustache: */}}
{{/*
Renders a value that contains template.
Usage:
{{ include "common.tplvalues.render" ( dict "value" .Values.path.to.the.Value "context" $) }}
*/}}
{{- define "common.tplvalues.render" -}}
{{- if typeIs "string" .value }}
{{- tpl .value .context }}
{{- else }}
{{- tpl (.value | toYaml) .context }}
{{- end }}
{{- end -}}
Use it to template any values in configmap.yaml or deployment.yaml or anywhere else...
values.yaml:
configMapProperties:
cmValue1: "val1"
application.properties: |-
prop1=prop1value
prop2=prop2value
configmap.yaml
data:
{{- if .Values.configMapProperties }}
{{- include "common.tplvalues.render" ( dict "value" .Values.configMapProperties "context" $ ) | nindent 2 }}
{{- end }}
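With the values above, the rendered ConfigMap data should come out roughly like this (key order may differ; cmValue1 stays a literal here because it contains no template actions, while anything written as {{ ... }} in the values would be expanded by tpl):
data:
  application.properties: |-
    prop1=prop1value
    prop2=prop2value
  cmValue1: val1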
Helm does not support a second round of template rendering inside values.yaml, but you can use a YAML anchor to achieve this, or do it indirectly with a named template.
Plan A: anchor
values.yaml
value1: &value1 val1
genericConfigMapProperties:
cmValue1: *value1
application.properties: |-
prop1=prop1value
prop2=prop2value
templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: test
data:
config.yaml: |
{{- toYaml $.Values.genericConfigMapProperties | nindent 4 }}
output
apiVersion: v1
kind: ConfigMap
metadata:
name: test
data:
config.yaml: |
application.properties: |-
prop1=prop1value
prop2=prop2value
cmValue1: val1
Plan B: Named Template
values.yaml
value1: val1
templates/_helpers.tpl
{{/*
cmValue template
*/}}
{{- define "genericConfigMapProperties" -}}
cmValue1: {{ .Values.value1 | default "default val1" | quote }}
application.properties: |-
prop1=prop1value
prop2=prop2value
{{- end -}}
templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: test
data:
config.yaml: |
{{- include "genericConfigMapProperties" . | nindent 4 }}
output
apiVersion: v1
kind: ConfigMap
metadata:
name: test
data:
config.yaml: |
cmValue1: "val1"
application.properties: |-
prop1=prop1value
prop2=prop2value

Kubernetes - force restarting on specific memory usage

Our server runs on Kubernetes with auto-scaling, and we use New Relic for observability, but we face some issues:
1- We need to restart pods when memory usage reaches 1G; it only restarts automatically when it reaches 1.2G, and by then everything runs slowly.
2- Terminate pods when there are no requests to the server.
my configuration
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Release.Name }}
labels:
app: {{ .Release.Name }}
spec:
revisionHistoryLimit: 2
replicas: {{ .Values.replicas }}
selector:
matchLabels:
app: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ .Release.Name }}
spec:
containers:
- name: {{ .Release.Name }}
image: "{{ .Values.imageRepository }}:{{ .Values.tag }}"
env:
{{- include "api.env" . | nindent 12 }}
resources:
limits:
memory: {{ .Values.memoryLimit }}
cpu: {{ .Values.cpuLimit }}
requests:
memory: {{ .Values.memoryRequest }}
cpu: {{ .Values.cpuRequest }}
imagePullSecrets:
- name: {{ .Values.imagePullSecret }}
{{- if .Values.tolerations }}
tolerations:
{{ toYaml .Values.tolerations | indent 8 }}
{{- end }}
{{- if .Values.nodeSelector }}
nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
{{- end }}
my values file
memoryLimit: "2Gi"
cpuLimit: "1.0"
memoryRequest: "1.0Gi"
cpuRequest: "0.75"
That's what I am trying to achieve.
If you want to be sure your pod/deployment won't consume more than 1.0Gi of memory, then setting that memory limit will do the job just fine.
Once you set that limit and your container exceeds it, it becomes a potential candidate for termination. If it continues to consume memory beyond its limit, the container will be terminated. If a terminated container can be restarted, the kubelet restarts it, as with any other type of runtime container failure.
For more reading, please visit the section Exceeding a container's memory limit.
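So, with the values file from the question, capping the container at roughly 1G is just a matter of lowering the limit (a sketch; the request figure below is my own choice, not something from your chart):
memoryLimit: "1Gi"      # container becomes an OOM-kill candidate once it exceeds ~1Gi
cpuLimit: "1.0"
memoryRequest: "750Mi"  # illustrative; keep the request at or below the limit
cpuRequest: "0.75"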
Moving on, if you wish to scale your deployment based on requests, you would need custom metrics provided by an external adapter such as Prometheus. The Horizontal Pod Autoscaler natively provides scaling based only on CPU and memory (using the metrics from the metrics server).
The adapter documentation provides a walkthrough of how to configure it with the Kubernetes API and the HPA. The list of other adapters can be found here.
Then you can scale your deployment based on the http_requests metric as shown here, or on requests-per-second as described here.
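A sketch of what such an HPA could look like once a Prometheus adapter exposes an http_requests metric (the metric name, target value, and Deployment name api are assumptions, not taken from your chart):
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api                 # assumed Deployment name
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metric:
          name: http_requests
        target:
          type: AverageValue
          averageValue: 500m  # scale up above ~0.5 requests per second per pod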

Ansible jinja2 templating

In Ansible, I currently have vars set out like this
container_a_version: 1
container_b_version: 6
container_c_version: 3
container_d_version: 2
...
containers:
- container_a
- container_b
- container_c
- container_d
...
Each container has its own template file which has a line like this
image: registry.sportpesa.com/sportpesa/global/container-a:{{ container_a_version }}
My playbook
- name: Copy templates
template:
src: templates/docker-compose-{{ item }}.yml
dest: /srv/docker-compose-{{ item }}.yml
loop: "{{ containers }}"
If I want to deploy only containers a and c, I change my variable to this:
containers:
- container_a
# - container_b
- container_c
# - container_d
What I want to do is condense my variables down to a single var like this:
containers:
- { name: a, version: 1 }
- { name: b, version: 6 }
- { name: c, version: 3 }
- { name: d, version: 2 }
However I'm not sure how I can call the container version in my template. Is this possible?
For example, the playbook
shell> cat pb.yml
- hosts: localhost
vars:
containers:
- {name: a, version: 1}
- {name: b, version: 2}
tasks:
- template:
src: containers.txt.j2
dest: containers.txt
and the template
shell> cat containers.txt.j2
{% for item in containers %}
name: {{ item.name }} version {{ item.version }}
{% endfor %}
give
shell> cat containers.txt
name: a version 1
name: b version 2
It's possible to iterate the list
- debug:
msg: "image: registry.example.com/container-{{ item.name }}:container_{{ item.version }}_version"
loop: "{{ containers }}"
gives
msg: 'image: registry.example.com/container-a:container_1_version'
msg: 'image: registry.example.com/container-b:container_2_version'
Since you want two different templates, each with its respective container name and version, without looping over the containers variable, you can use a playbook that renders two templates. In my example I will call the templates:
container-a
container-b
The template container-a.j2:
image: registry.example.com/container-{{ containers[0]['name'] }}:{{ containers[0]['version'] }}
The template container-b.j2:
image: registry.example.com/container-{{ containers[1]['name'] }}:{{ containers[1]['version'] }}
Then my playbook will have below tasks:
vars:
containers:
- { name: a, version: 1 }
- { name: b, version: 2 }
tasks:
- template:
src: "container-{{ containers[0]['name'] }}.j2"
dest: "/tmp/container-{{ containers[0]['name'] }}"
- template:
src: "container-{{ containers[1]['name'] }}.j2"
dest: "/tmp/container-{{ containers[1]['name'] }}"
When this playbook is run, it renders /tmp/container-a:
image: registry.example.com/container-a:1
and /tmp/container-b:
image: registry.example.com/container-b:2
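If you'd rather keep the single looping task from the question, a variant along these lines should also work (a sketch; it assumes the template files keep their docker-compose-container_<name>.yml names and that each one refers to item.version internally):
- name: Copy templates
  template:
    src: "templates/docker-compose-container_{{ item.name }}.yml"
    dest: "/srv/docker-compose-container_{{ item.name }}.yml"
  loop: "{{ containers }}"
with the image line inside each template written as:
image: registry.sportpesa.com/sportpesa/global/container-{{ item.name }}:{{ item.version }}
Deploying only a subset then still works the same way: comment entries out of the containers list.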