range can't iterate over list [x y]

I have this template in my helm chart:
{{ $appEnvs := list "dev" "uat" }}
{{- range $env := $appEnvs }}
...
{{- end }}
I am getting this error:
<$appEnvs>: range can't iterate over [dev uat]
I spent a long time trying many things, like:
{{- range $env := toYaml $appEnvs }} and {{- range $env := tuple $appEnvs }} and others, but with no luck.
However, when I put the list directly, without a variable, it works. I mean {{- range $env := list "dev" "uat" }} works!
How do I iterate over a VARIABLE created by the sprig list function?

apiVersion: v1
kind: ConfigMap
metadata:
  name: test
data:
  test: |-
    {{- $appEnvs := list "dev" "uat" }}
    {{- range $idx, $env := $appEnvs }}
    {{ $env }}: {{ $env }}
    {{- end }}
Output:
apiVersion: v1
kind: ConfigMap
metadata:
  name: test
data:
  test: |-
    dev: dev
    uat: uat
Or with a plain range (no index variable):
...
{{- $appEnvs := list "dev" "uat" }}
{{- range $appEnvs }}
{{ . }}: {{ . }}
{{- end }}
...
Output:
...
dev: dev
uat: uat
...
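The same pattern also works when the list comes from values.yaml instead of being built inline with the sprig list function; a minimal sketch (the appEnvs values key below is made up for illustration):
values.yaml
appEnvs:
  - dev
  - uat
template
{{- range $env := .Values.appEnvs }}
{{ $env }}: {{ $env }}
{{- end }}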

How to update attribute in array with helm?

values.yaml
env:
  isTest: 'true'
  hostData:
    - isActive: true
      name: a
      url: testA
    - isActive: true
      name: A
      url: testB
configmap.yaml
test:
  {{- with .Values.env }}
  hostData: {{ .hostData | toJson }}
  isTest: {{ .isTest }}
  {{- end }}
Now I want to update the url of each hostData entry.
I tried adding:
test:
  {{- with .Values.env }}
  hostData: {{ .hostData | toJson }}
  isTest: {{ .isTest }}
  {{- end }}
  {{- range .Values.env.hostData }}
  url: https://{{ .name }}//newName
  {{- end }}
But it adds the url fields at the level of the test structure:
test:
  hostData: [{"isActive":true,"name":"A","url":"testA"},{"isActive":true,"name":"B","url":"testB"}]
  url: AnewName
  url: BnewName
and didn't update hostData -> url.
This is the result I want:
test:
  hostData: [{"isActive":true,"name":"AnewName","url":"testA"},{"isActive":true,"name":"BnewName","url":"testB"}]
I also tried to create a tpl file and add my logic there, but the problem is that I didn't succeed in returning YAML from the tpl:
{{/*
Create hostData
*/}}
{{- define "get-hostData" -}}
{{- range .Values.env.hostData }}
hostData:
  - isActive: {{ .isActive }}
    name: {{ .name }}
    url: {{ newUrlFromValues }}
{{- end }}
{{- end -}}
The problem is that in the configmap it doesn't return YAML but a string:
{{- $test1 := include "get-hostData" . }}
Maybe I need to return it as a JSON array?
It's not entirely clear to me what you want to achieve. I assume your goal is to construct the url field from the name field (although in the provided example you appear to be altering the name instead of the url?). You can do it by updating hostData before converting it to JSON:
test:
  {{- with .Values.env }}
  {{- range .hostData }}
  {{- $url := print "https://" .name "/newName" }}
  {{- $_ := set . "url" $url }}
  {{- end }}
  hostData: {{ .hostData | toJson }}
  isTest: {{ .isTest }}
  {{- end }}
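Assuming the values.yaml from the question, this should render roughly as follows (set mutates each item in place, so the rewritten url values show up in the toJson output):
test:
  hostData: [{"isActive":true,"name":"a","url":"https://a/newName"},{"isActive":true,"name":"A","url":"https://A/newName"}]
  isTest: true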

Helm Override common template values

First day helm user here. Trying to understand how to build common templates for k8s resources.
Let's say I have 10 cron jobs within a single chart, all of them differing only by args and names. Today 10 full job manifests exist and 95% of the manifest content is identical. I want to move the common part into a template and create 10 manifests that only provide specific values for args and names.
So I defined the template _cron-job.yaml:
{{- define "common.cron-job"}}
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: {{ include "costing-report.name" . }}-bom-{{ .Values.env }}
  labels:
{{ include "costing-report.labels" . | indent 4 }}
spec:
  schedule: "{{ .Values.cronjob.scheduleBom }}"
  suspend: {{ .Values.cronjob.suspendBom }}
  {{- with .Values.cronjob.concurrencyPolicy }}
  concurrencyPolicy: {{ . }}
  {{- end }}
  {{- with .Values.cronjob.failedJobsHistoryLimit }}
  failedJobsHistoryLimit: {{ . }}
  {{- end }}
  {{- with .Values.cronjob.successfulJobsHistoryLimit }}
  successfulJobsHistoryLimit: {{ . }}
  {{- end }}
  jobTemplate:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "costing-report.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      template:
        spec:
          containers:
            - name: {{ .Chart.Name }}
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
              imagePullPolicy: {{ .Values.image.pullPolicy }}
              args: ["--report=Bom","--email={{ .Values.configmap.service.email_bom }}"]
              env:
                - name: spring_profiles_active
                  value: "{{ .Values.env }}"
              envFrom:
                - configMapRef:
                    name: {{ include "costing-report.fullname" . }}
                - secretRef:
                    name: {{ .Values.secrets.properties }}
          restartPolicy: Never
          {{- with .Values.imagePullSecrets }}
          imagePullSecrets:
            {{- toYaml . | nindent 8 }}
          {{- end }}
{{- end -}}
and now I need to create a job manifest that overrides name and args, job1.yaml:
{{- template "common.cron-job" . -}}
??? override ???
  name: {{ include "cost-report.name" . }}-job1-{{ .Values.env }}
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            args: ["--report=Bom","--email={{ .Values.configmap.service.email_bom }}"]
Is there any way to do this? I didn't find it in the helm docs. I did find https://github.com/helm/charts/tree/master/incubator/common but it didn't work either and gave me an error.
Thanks.
Solution found
Option 1
Use the example from the helm GitHub repo: https://github.com/helm/charts/tree/master/incubator/common
This solution is based on YAML merging and values override. It is pretty flexible and allows you to define common templates and then use them to compose the final k8s manifest.
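Independent of that chart's exact helpers, the general idea can be sketched with plain Helm functions, assuming a Helm version that provides fromYaml and mergeOverwrite (the template name common.cron-job.base and the override dict below are made up for illustration):
{{- /* job1.yaml: render the shared template, parse it back into a dict, merge per-job overrides */ -}}
{{- $base := include "common.cron-job.base" . | fromYaml -}}
{{- $override := dict "metadata" (dict "name" "job1") -}}
{{ mergeOverwrite $base $override | toYaml }}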
Option 2
Define a common template and pass parameters with the desired values.
In my case it looks something like this.
_common.cronjob.yaml
{{- define "common.cronjob" -}}
{{- $root := .root -}}
{{- $name := .name -}}
{{- $schedule := .schedule -}}
{{- $suspend := .suspend -}}
{{- $args := .args -}}
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: {{ $name }}
  labels:
{{ include "costing-report.labels" $root | indent 4 }}
spec:
  schedule: {{ $schedule }}
  suspend: {{ $suspend }}
  {{- with $root.Values.cronjob.concurrencyPolicy }}
  concurrencyPolicy: {{ . }}
  {{- end }}
  {{- with $root.Values.cronjob.failedJobsHistoryLimit }}
  failedJobsHistoryLimit: {{ . }}
  {{- end }}
  {{- with $root.Values.cronjob.successfulJobsHistoryLimit }}
  successfulJobsHistoryLimit: {{ . }}
  {{- end }}
  jobTemplate:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "costing-report.name" $root }}
        app.kubernetes.io/instance: {{ $root.Release.Name }}
    spec:
      template:
        spec:
          containers:
            - name: {{ $root.Chart.Name }}
              image: "{{ $root.Values.image.repository }}:{{ $root.Values.image.tag }}"
              imagePullPolicy: {{ $root.Values.image.pullPolicy }}
              args: {{ $args }}
              env:
                - name: spring_profiles_active
                  value: "{{ $root.Values.env }}"
              envFrom:
                - configMapRef:
                    name: {{ include "costing-report.fullname" $root }}
                - secretRef:
                    name: {{ $root.Values.secrets.properties }}
          restartPolicy: Never
          {{- with $root.Values.imagePullSecrets }}
          imagePullSecrets:
            {{- toYaml . | nindent 8 }}
          {{- end }}
{{- end -}}
Then create the job manifest(s) and define the values to pass to the common template.
bom-cronjob.yaml
{{ $bucket := (printf "%s%s%s" "\"--bucket=ll-costing-report-" .Values.env "\"," )}}
{{ $email := (printf "%s%s%s" "\"--email=" .Values.configmap.service.email_bom "\"") }}
{{ $args := (list "\"--report=Bom\"," "\"--reportType=load\"," "\"--source=bamboorose\"," $bucket "\"--table=COSTING_BOM\"," "\"--ignoreLines=1\"," "\"--truncate=true\"," $email )}}
{{ $name := (printf "%s%s" "costing-report.name-bom-" .Values.env )}}
{{- template "common.cronjob" (dict "root" . "name" $name "schedule" .Values.cronjob.scheduleBom "suspend" .Values.cronjob.suspendBom "args" $args) -}}
The last line does the trick. The trick is that you can pass only a single argument to a template; in my case it's a dictionary with all the values I need on the template side. You can also omit defining template variables and use the dict values right away (see the sketch below). Please note that I pass the root context (scope) as "root" and prefix . with "root" in the template.
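For example, instead of copying every dict entry into a local variable at the top of the define, the same values can be read directly off the passed dict; a minimal sketch of that variant:
{{- define "common.cronjob" -}}
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: {{ .name }}
  labels:
{{ include "costing-report.labels" .root | indent 4 }}
spec:
  schedule: {{ .schedule }}
  suspend: {{ .suspend }}
  ...
{{- end -}}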

How to add code to a go template dynamically

I have a Go template as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "mychart.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "mychart.name" . }}
    helm.sh/chart: {{ include "mychart.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "mychart.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "mychart.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          <-------------------------- Here --------------------------------->
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
I want to add the following piece of code into it, just below the imagePullPolicy line. Any ideas?
env:
  - name: NODE_ENV
    value: "{{ .Values.node_env }}"
Background:
The code snippet above is a Helm-generated deployment.yaml file, which is used to deploy apps to Kubernetes.
Basically what I am trying to achieve is a script that can set all such stuff in a Helm chart, so that things like adding an environment variable can be done in one go.
Here's a simplified example. The defines and ends are on the same line as the template content to avoid extra blank lines.
main.yaml:
{{define "main"}}apiVersion: apps/v1
spec:
template:
spec:
containers:
- name: foo
{{template "env" .Values}}
ports:
- name: http
containerPort: 80
protocol: TCP
{{end}}
env.yaml:
{{define "env"}} env:
- name: NODE_ENV
value: "{{ .node_env }}"{{end}}
main.go:
package main

import (
    "log"
    "os"
    "text/template"
)

type Deployment struct {
    Values map[string]string
}

func main() {
    dep := Deployment{Values: map[string]string{
        "node_env": "PROD",
    }}
    tmpl, err := template.ParseFiles("main.yaml", "env.yaml")
    if err != nil {
        log.Fatal(err)
    }
    // capture the error returned by ExecuteTemplate so the check below actually sees it
    err = tmpl.ExecuteTemplate(os.Stdout, "main", dep)
    if err != nil {
        log.Fatal(err)
    }
}
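If the goal is to do this inside Helm itself rather than with a separate Go program, a named template spliced in with include and nindent is a common pattern; a minimal sketch (the helper name mychart.extraEnv is made up here):
templates/_helpers.tpl
{{- define "mychart.extraEnv" -}}
env:
  - name: NODE_ENV
    value: {{ .Values.node_env | quote }}
{{- end }}
templates/deployment.yaml, just below the imagePullPolicy line:
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          {{- include "mychart.extraEnv" . | nindent 10 }}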

Helm persistent volume type selection between EBS and NFS?

I need help with "if" statements in my Prometheus Helm chart. What I am trying to achieve is a Prometheus chart with persistent volumes on EBS or NFS. It works fine for EBS, but it doesn't work for NFS. I think the problem is with my if statement logic.
When I set storageClass: "nfs" in values.yaml, I am getting this error:
Error: release prometheus failed: PersistentVolume "prometheus-alertmanager" is invalid: spec: Required value: must specify a volume type
In my values.yaml file I have:
persistentVolume:
  enabled: true
  accessModes:
    - ReadWriteOnce
  annotations: {}
  existingClaim: ""
  mountPath: /data
  subPath: "alertmanager/"
  size: 4Gi
  ReclaimPolicy: "Recycle"
  storageClass: "aws"
  volumeID: "vol-xxx"
  fs_mounts:
    path: /data/alertmanager
    server: 127.0.0.1
The difference for Prometheus server is in subPath and in path under fs_mounts.
In my alertmanager-pv I have:
{{- if not .Values.alertmanager.statefulSet.enabled -}}
{{- if and .Values.alertmanager.enabled .Values.alertmanager.persistentVolume.enabled -}}
apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus-alertmanager
spec:
  capacity:
    storage: {{ .Values.alertmanager.persistentVolume.size }}
  accessModes:
{{ toYaml .Values.alertmanager.persistentVolume.accessModes | indent 4 }}
  persistentVolumeReclaimPolicy: "{{ .Values.alertmanager.persistentVolume.ReclaimPolicy }}"
{{- if (eq "aws" .Values.alertmanager.persistentVolume.storageClass) }}
  storageClassName: "gp2"
  awsElasticBlockStore:
    fsType: "ext4"
    volumeID: {{ .Values.alertmanager.persistentVolume.volumeID }}
{{- if (eq "nfs" .Values.alertmanager.persistentVolume.storageClass) }}
  StorageClassName: "nfs"
  mountOptions:
    - hard
    - nfsvers=4.1
    - timeo=600
    - retrans=2
  nfs:
    server: {{ .Values.alertmanager.persistentVolume.fs_mounts.server }}
    path: {{ .Values.alertmanager.persistentVolume.fs_mounts.path }}
{{- end }}
{{- end }}
{{- end -}}
{{- end -}}
and in alertmanager-pvc:
{{- if not .Values.alertmanager.statefulSet.enabled -}}
{{- if and .Values.alertmanager.enabled .Values.alertmanager.persistentVolume.enabled -}}
{{- if not .Values.alertmanager.persistentVolume.existingClaim -}}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
  name: prometheus-alertmanager
spec:
  accessModes:
{{ toYaml .Values.alertmanager.persistentVolume.accessModes | indent 4 }}
{{- if (eq "aws" .Values.alertmanager.persistentVolume.storageClass) }}
  storageClassName: "gp2"
  resources:
    requests:
      storage: {{ .Values.alertmanager.persistentVolume.size }}
{{- if (eq "nfs" .Values.alertmanager.persistentVolume.storageClass) }}
  storageClassName: "nfs"
  resources:
    requests:
      storage: {{ .Values.alertmanager.persistentVolume.size }}
{{- end }}
{{- end }}
{{- end -}}
{{- end -}}
{{- end -}}
The problem was with the wrong indentation in the nfs if statement; a corrected sketch is shown below.
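A minimal sketch of a layout where the two cases cannot shadow each other (assuming aws and nfs are meant to be mutually exclusive):
{{- if eq "aws" .Values.alertmanager.persistentVolume.storageClass }}
  storageClassName: "gp2"
  awsElasticBlockStore:
    fsType: "ext4"
    volumeID: {{ .Values.alertmanager.persistentVolume.volumeID }}
{{- else if eq "nfs" .Values.alertmanager.persistentVolume.storageClass }}
  storageClassName: "nfs"
  mountOptions:
    - hard
    - nfsvers=4.1
    - timeo=600
    - retrans=2
  nfs:
    server: {{ .Values.alertmanager.persistentVolume.fs_mounts.server }}
    path: {{ .Values.alertmanager.persistentVolume.fs_mounts.path }}
{{- end }}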

Golang template (helm) iterating over a list of maps

I'm using Helm to generate Kubernetes YAMLs.
My values.yaml looks like this:
...
jobs:
  - name: job1
    command: [sh, -c, "/app/deployment/start.sh job1"]
    activeDeadlineSeconds: 600
  - name: job2
    command: [sh, -c, "/app/deployment/start.sh job2"]
    activeDeadlineSeconds: 600
...
templates/jobs.yaml
{{ range $i, $job := .Values.jobs -}}
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ template "name" . }}-{{ $job.name }}
  labels:
    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
  activeDeadlineSeconds: {{ $job.activeDeadlineSeconds }}
  template:
    metadata:
      labels:
        app: {{ template "name" . }}-{{ $job.name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command: {{ $job.command }}
          env:
{{ toYaml .Values.service.env | indent 10 }}
          ports:
            - containerPort: {{ .Values.service.internalPort }}
{{- end }}
Helm is failing with this error:
Error: UPGRADE FAILED: render error in "app1/templates/jobs.yaml": template: app1/templates/_helpers.tpl:6:18: executing "name" at <.Chart.Name>: can't evaluate field Name in type interface {}
When I look at _helpers.tpl:
{{- define "name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
If I remove the range loop and references to $job in my jobs.yaml, the _helpers.tpl name template works fine. When I add in the loop, it fails.
It seems like within the loop, the whole dot . pipeline, which contains the scope for .Chart and .Values, is reassigned to something else.
What am I doing wrong?
Inside the loop the value of . is set to the current element, and you have to use $.Chart.Name to access your data (see the sketch below).
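For example, the same jobs.yaml can keep the loop and reach the root scope through the built-in $ variable, without saving it yourself; a minimal sketch:
{{ range $i, $job := .Values.jobs -}}
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ template "name" $ }}-{{ $job.name }}
  labels:
    chart: "{{ $.Chart.Name }}-{{ $.Chart.Version | replace "+" "_" }}"
spec:
  activeDeadlineSeconds: {{ $job.activeDeadlineSeconds }}
  ...
{{- end }}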
I asked a similar question and I think the answer https://stackoverflow.com/a/44734585/8131948 will answer your question too.
I ended up saving the global context and then updating all of my references like this:
{{ $global := . }}
{{ range $i, $job := .Values.jobs -}}
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ template "name" $global }}-{{ $job.name }}
  labels:
    chart: "{{ $global.Chart.Name }}-{{ $global.Chart.Version | replace "+" "_" }}"
spec:
  activeDeadlineSeconds: {{ $job.activeDeadlineSeconds }}
  template:
    metadata:
      labels:
        app: {{ template "name" $global }}-{{ $job.name }}
    spec:
      containers:
        - name: {{ $global.Chart.Name }}
          image: "{{ $global.Values.image.repository }}:{{ $global.Values.image.tag }}"
          imagePullPolicy: {{ $global.Values.image.pullPolicy }}
          command: {{ $job.command }}
          env:
{{ toYaml $global.Values.service.env | indent 10 }}
          ports:
            - containerPort: {{ $global.Values.service.internalPort }}
{{- end }}