Helm Templating in Configmap for values.yaml - templates

I'm looking for help creating a generic configmap.yaml that can support several services.
values.yaml (THIS WORKS)
value1: val1
genericConfigMapProperties:
  application.properties: |-
    prop1=prop1value
    prop2=prop2value
configmap.yaml
apiVersion: v1
kind: ConfigMap
...
...
data:
{{ toYaml .Values.genericConfigMapProperties | indent 2 }}
The template {{ toYaml .Values.genericConfigMapProperties | indent 2 }} is almost perfect. It renders application.properties correctly:
data:
  application.properties: |-
    prop1=prop1value
    prop2=prop2value
values.yaml (THIS DOES NOT WORK)
value1: val1
genericConfigMapProperties:
  cmValue1: {{ .Values.value1 | default "default val1" | quote }}
  application.properties: |-
    prop1=prop1value
    prop2=prop2value
Helm errors when rendering cmValue1. I am expecting this output:
data:
  cmValue1: val1
  application.properties: |-
    prop1=prop1value
    prop2=prop2value
Errors:
Error: failed to parse values.yaml: error converting YAML to JSON: yaml: invalid map key: map[interface {}]interface {}{".Values.value1 | default \"default val1\" | quote":interface {}(nil)}
helm.go:88: [debug] error converting YAML to JSON: yaml: invalid map key: map[interface {}]interface {}{".Values.value1 | default \"default val1\" | quote":interface {}(nil)}
failed to parse values.yaml
What additional helm template code do I need to support cmValue1 rendering?
Thank you.

Thank you for responding.
I've found this awesome solution from bitnami common templates. It works just about anywhere.
https://github.com/bitnami/charts/blob/master/bitnami/common/templates/_tplvalues.tpl
Copy this template file:
{{/* vim: set filetype=mustache: */}}
{{/*
Renders a value that contains template.
Usage:
{{ include "common.tplvalues.render" ( dict "value" .Values.path.to.the.Value "context" $ ) }}
*/}}
{{- define "common.tplvalues.render" -}}
    {{- if typeIs "string" .value }}
        {{- tpl .value .context }}
    {{- else }}
        {{- tpl (.value | toYaml) .context }}
    {{- end }}
{{- end -}}
Use it to template any values in configmap.yaml or deployment.yaml or anywhere else...
values.yaml:
configMapProperties:
  cmValue1: "val1"
  application.properties: |-
    prop1=prop1value
    prop2=prop2value
configmap.yaml
data:
{{- if .Values.configMapProperties }}
{{- include "common.tplvalues.render" ( dict "value" .Values.configMapProperties "context" $ ) | nindent 2 }}
{{- end }}
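Because common.tplvalues.render passes everything through tpl, the values themselves can now contain template expressions. A minimal sketch of that (the default value is carried over from the original question):
# values.yaml -- cmValue1 holds a template string, expanded at render time
value1: val1
configMapProperties:
  cmValue1: '{{ .Values.value1 | default "default val1" }}'
  application.properties: |-
    prop1=prop1value
    prop2=prop2value
This renders to cmValue1: val1, which is exactly the output the question was after.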

Helm does not support secondary rendering of values.yaml, but you can use a YAML anchor to achieve this, or do it indirectly with a named template.
Plan A : anchor
values.yaml
value1: &value1 val1
genericConfigMapProperties:
  cmValue1: *value1
  application.properties: |-
    prop1=prop1value
    prop2=prop2value
templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test
data:
  config.yaml: |
    {{- toYaml $.Values.genericConfigMapProperties | nindent 4 }}
output
apiVersion: v1
kind: ConfigMap
metadata:
  name: test
data:
  config.yaml: |
    application.properties: |-
      prop1=prop1value
      prop2=prop2value
    cmValue1: val1
Plan B: Named Template
values.yaml
value1: val1
templates/_helpers.tpl
{{/*
cmValue template
*/}}
{{- define "genericConfigMapProperties" -}}
cmValue1: {{ .Values.value1 | default "default val1" | quote }}
application.properties: |-
  prop1=prop1value
  prop2=prop2value
{{- end -}}
templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test
data:
  config.yaml: |
    {{- include "genericConfigMapProperties" . | nindent 4 }}
output
apiVersion: v1
kind: ConfigMap
metadata:
  name: test
data:
  config.yaml: |
    cmValue1: "val1"
    application.properties: |-
      prop1=prop1value
      prop2=prop2value
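A third option, closest to the original template, is to pipe the values through Helm's built-in tpl function so that any template strings inside them are expanded. A sketch, assuming values.yaml stores cmValue1 as a quoted template string:
# values.yaml
value1: val1
genericConfigMapProperties:
  cmValue1: '{{ .Values.value1 | default "default val1" }}'
  application.properties: |-
    prop1=prop1value
    prop2=prop2value
# templates/configmap.yaml
data:
  {{- tpl (toYaml .Values.genericConfigMapProperties) . | nindent 2 }}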

Related

Helm values string to boolean

New to helm and templating. I have a helm values file which needs to refer to a value from another common values file.
enabled: "{{ .Values.common.service.enabled }}"
and the output I expect is
enabled: false
but I always get it with the quotes:
enabled: "false"
The common values file:
service:
  enabled: "false"
I have tried toYaml but it is still the same:
enabled: "{{ .Values.common.service.enabled | toYaml }}"
You're receiving the quotes because you wrote quotes around the expression in the template. Instead, the component definition YAML should be:
enabled: {{ .Values.common.service.enabled }}
and the values.yaml:
common:
  service:
    enabled: false
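In short: quotes in the template always force a string, and a quoted value in values.yaml stays a string even when the template is unquoted, so both need fixing. A minimal sketch of the two fixes together:
# values.yaml -- a boolean, not the string "false"
common:
  service:
    enabled: false
# template -- no surrounding quotes
enabled: {{ .Values.common.service.enabled }}
# rendered output
enabled: false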

k8s not executing commands after deployment in pod using configmap

A bit complicated, but I will try to explain with as much clarity as I can; any help much appreciated.
I use Azure DevOps to do deployments into EKS using helm. Everything works fine, and I now have a requirement to add a certificate to the pod.
For this I have a .der file, which I should copy to the pods (replicas: 3) and run keytool to import the certificate and put it in an appropriate location before my application starts.
My setup is: I have a Dockerfile, I call a shell script inside the Dockerfile, and I do helm install using a deployment.yml file.
I tried using a configmap to mount the .der file, which is used to import the cert, and then I execute some Unix commands to import the certificate. The Unix commands are not working; can someone help here?
docker file
FROM amazonlinux:2.0.20181114
RUN yum install -y java-1.8.0-openjdk-headless
ARG JAR_FILE='**/*.jar'
ADD ${JAR_FILE} car_service.jar
ADD entrypoint.sh .
RUN chmod +x /entrypoint.sh
# split the ENTRYPOINT wrapper from the main CMD
ENTRYPOINT ["/entrypoint.sh"]
CMD ["java", "-jar", "/car_service.jar"]
entrypoint.sh
#!/bin/sh
# entrypoint.sh
# Check: $env_name must be set
if [ -z "$env_name" ]; then
  echo '$env_name is not set; stopping' >&2
  exit 1
fi
# Install aws client
yum -y install curl
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
yum -y install unzip
unzip awscliv2.zip
./aws/install
# Retrieve secrets from Secrets Manager
export KEYCLOAKURL=`aws secretsmanager get-secret-value --secret-id myathlon/$env_name/KEYCLOAKURL --query SecretString --output text`
# Import the certificate into a keystore and copy it into place
cd /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.312.b07-1.amzn2.0.2.x86_64/jre/bin
keytool -noprompt -importcert -alias msfot-$(date +%Y%m%d-%H%M) -file /tmp/msfot.der -keystore msfot.jks -storepass msfotooling
mkdir /data/keycloak/
cp /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.312.b07-1.amzn2.0.2.x86_64/jre/bin/msfot.jks /data/keycloak/
cd /
# Run the main container CMD
exec "$@"
my configmap
kubectl create configmap msfot1 --from-file=msfot.der
my deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "helm-chart.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "helm-chart.name" . }}
    helm.sh/chart: {{ include "helm-chart.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "helm-chart.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "helm-chart.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
        date: "{{ now | unixEpoch }}"
    spec:
      volumes:
        - name: msfot1
          configMap:
            name: msfot1
            items:
              - key: msfot.der
                path: msfot.der
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          volumeMounts:
            - name: msfot1
              mountPath: /tmp/msfot.der
              subPath: msfot.der
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          env:
            - name: env_name
              value: {{ .Values.environmentName }}
            - name: SPRING_PROFILES_ACTIVE
              value: "{{ .Values.profile }}"
my values.yml file is
replicaCount: 3
# pass repository and targetPort values during runtime
image:
  repository:
  tag: "latest"
  pullPolicy: Always
service:
  type: ClusterIP
  port: 80
  targetPort:
profile: "aws"
environmentName: dev
I have 2 questions here:
1. In my entrypoint.sh file the keytool, mkdir, cp and cd commands are not getting executed (so the certificate is not getting added to the keystore).
2. As you know, this setup works for all the environments since I use the same deployment.yml file, though I have a different values.yml file for each environment. I want this certificate generation to happen only in acc and prod, not in dev and test. Is there any other easy method of doing this rather than configmap/deployment.yml?
Please advise.
Thanks
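For question 2, one way to gate the certificate steps per environment is a values flag (the flag name here is an assumption) that only the acc and prod values.yml files set to true; the chart then only mounts the configmap where the flag is enabled, and the same value can be exposed as an env var for entrypoint.sh to test before running keytool. A sketch:
# values.yml for acc/prod (dev and test leave this false)
importCertificate: true
# deployment.yml -- gate the volume and expose the flag to the script
{{- if .Values.importCertificate }}
volumes:
  - name: msfot1
    configMap:
      name: msfot1
{{- end }}
...
env:
  - name: IMPORT_CERT
    value: "{{ .Values.importCertificate }}"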

Kubernetes - force restarting on specific memory usage

Our server runs on Kubernetes with auto-scaling, and we use New Relic for observability,
but we face some issues:
1. We need to restart pods when memory usage reaches 1G; it automatically restarts when it reaches 1.2G, but everything goes slowly.
2. We want to terminate pods when there are no requests to the server.
my configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
  labels:
    app: {{ .Release.Name }}
spec:
  revisionHistoryLimit: 2
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Release.Name }}
          image: "{{ .Values.imageRepository }}:{{ .Values.tag }}"
          env:
            {{- include "api.env" . | nindent 12 }}
          resources:
            limits:
              memory: {{ .Values.memoryLimit }}
              cpu: {{ .Values.cpuLimit }}
            requests:
              memory: {{ .Values.memoryRequest }}
              cpu: {{ .Values.cpuRequest }}
      imagePullSecrets:
        - name: {{ .Values.imagePullSecret }}
      {{- if .Values.tolerations }}
      tolerations:
{{ toYaml .Values.tolerations | indent 8 }}
      {{- end }}
      {{- if .Values.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
      {{- end }}
my values file
memoryLimit: "2Gi"
cpuLimit: "1.0"
memoryRequest: "1.0Gi"
cpuRequest: "0.75"
That is what I am trying to achieve.
If you want to be sure your pod/deployment won't consume more than 1.0Gi of memory, then setting that memory limit will do the job just fine.
Once you set that limit and your container exceeds it, it becomes a potential candidate for termination. If it continues to consume memory beyond its limit, the container will be terminated. If a terminated container can be restarted, the kubelet restarts it, as with any other type of runtime container failure.
For more reading, please visit the section exceeding a container's memory limit.
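In this chart that is a one-line change to the values file:
memoryLimit: "1Gi"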
Moving on, if you wish to scale your deployment based on requests, you would need custom metrics provided by an external adapter such as Prometheus. The horizontal pod autoscaler natively provides scaling based only on CPU and memory (using the metrics from the metrics server).
The adapter documentation provides a walkthrough of how to configure it with the Kubernetes API and HPA. The list of other adapters can be found here.
Then you can scale your deployment based on the http_requests metric as shown here, or requests-per-second as described here.
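Since memory is one of the natively supported resource metrics, a plain HPA can already scale out before the limit is hit. A minimal sketch, with the object names and thresholds as assumptions:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80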

ansible identifying values in a list

The code below attempts to find all the keys associated with the value HWIC-8A. I have tried a few different variations and I can't get the key to print without doing something really long-winded. As I'll be repeating this code with different values, I don't want to search each key/value pair individually within that list.
MODULES:
  Slot_0_SubSlot_0: HWIC-8A
  Slot_0_SubSlot_1: EHWIC-VA-DSL-M
  Slot_0_SubSlot_3: HWIC-8A

- name: Apply HWIC-8A Build
  debug:
    msg: "{{ item.key }}"
  with_items: "{{ MODULES }}"
  when: "{{ item.value }} == HWIC-8A"
Maybe that's something for you:
---
- hosts: localhost
  vars:
    MODULES:
      Slot_0_SubSlot_0: HWIC-8A
      Slot_0_SubSlot_1: EHWIC-VA-DSL-M
      Slot_0_SubSlot_3: HWIC-8A
  tasks:
    - debug: var=MODULES
    - debug: msg="{{ MODULES | dict2items }}"
    - debug: msg="{{ MODULES | dict2items | selectattr('value', 'match', 'HWIC-8A') | map(attribute='key') | list }}"
Then if you would like to have multiple matches, you could solve it with a MATCH list:
---
- hosts: localhost
  vars:
    MODULES:
      Slot_0_SubSlot_0: HWIC-8A
      Slot_0_SubSlot_1: EHWIC-VA-DSL-M
      Slot_0_SubSlot_3: HWIC-8A
      Slot_1_SubSlot_3: HWIC-8C
      Slot_1_SubSlot_2: HWIC-8C
    MATCH:
      - HWIC-8A
      - HWIC-8C
  tasks:
    - debug:
        msg: "{{ MODULES | dict2items | selectattr('value', 'match', item) | map(attribute='key') | list }}"
      with_items: "{{ MATCH }}"
Output:
TASK [debug] ***********************************************************************************************************************************************************************************
Thursday 27 August 2020 15:08:10 +0200 (0:00:00.042) 0:00:02.037 *******
ok: [localhost] => (item=HWIC-8A) => {
    "msg": [
        "Slot_0_SubSlot_0",
        "Slot_0_SubSlot_3"
    ]
}
ok: [localhost] => (item=HWIC-8C) => {
    "msg": [
        "Slot_1_SubSlot_3",
        "Slot_1_SubSlot_2"
    ]
}
I would use Jinja templates to do it. Something like this:
- name: Apply HWIC-8A Build
  debug:
    msg: '{% for m in MODULES %}{% if MODULES[m] == "HWIC-8A" %}{{ m }} {% endif %}{% endfor %}'
Which will give you this:
ok: [localhost] => {
    "msg": "Slot_0_SubSlot_0 Slot_0_SubSlot_3 "
}
There is probably a fancy way using filters as well.
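For completeness, the original task can also be repaired directly: iterate over MODULES | dict2items and drop the braces from when, since a when clause is already a Jinja expression. A sketch:
- name: Apply HWIC-8A Build
  debug:
    msg: "{{ item.key }}"
  loop: "{{ MODULES | dict2items }}"
  when: item.value == 'HWIC-8A'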

Assign list to a key within a chart

Deployment.yaml
...
env: {{ .Values.env}}
...
Values.yaml:
env:
  - name: "DELFI_DB_USER"
    value: "yyy"
  - name: "DELFI_DB_PASSWORD"
    value: "xxx"
  - name: "DELFI_DB_CLASS"
    value: "com.mysql.jdbc.Driver"
  - name: "DELFI_DB_URL"
    value: "jdbc:sqlserver://dockersqlserver:1433;databaseName=ddbeta;sendStringParametersAsUnicode=false"
Feels like I'm missing something obvious.
linter says: ok
template says:
env: [map[name:DELFI_DB_USER value:yyy] map[name:DELFI_DB_PASSWORD
value:xxx] map[name:DELFI_DB_CLASS value:com.mysql.jdbc.Driver]
map[value:jdbc:mysql://dockersqlserver.{{ .Release.Namespace
}}.svc.cluster.local:3306/ddbeta\?\&amp\;useSSL=true\&amp\;requireSSL=false
name:DELFI_DB_URL]]
upgrade says:
Error: UPGRADE FAILED: YAML parse error on
xxx/templates/deployment.yaml: error converting YAML to JSON: yaml:
line 35: found unexpected ':'
solution:
env:
{{- range .Values.env }}
  - name: {{ .name | quote }}
    value: {{ .value | quote }}
{{- end }}
The current Go template expansion will give output which is not YAML:
env: {{ .Values.env }}
becomes:
env: [Some Go type stuff that isn't YAML]...
The Helm Go template needs to loop over the entries of the source YAML list.
This is described in the Helm docs.
The correct Deployment.yaml is:
...
env:
{{- range .Values.env }}
  - name: {{ .name | quote }}
    value: {{ .value | quote }}
{{- end }}
...
Helm includes toYaml and toJson template functions; either will work here (because valid JSON is valid YAML). A shorter path could be
env: {{- .Values.env | toYaml | nindent 2 }}
Note that you need to be a little careful with the indentation, particularly if you're setting any additional environment variables that aren't in that list. In this example I've asked Helm to indent the YAML list two steps more, so additional environment values need to follow that too:
env: {{- .Values.env | toYaml | nindent 2 }}
  - name: OTHER_SERVICE_URL
    value: "http://other-service.default.svc.cluster.local"