First-day Helm user here, trying to understand how to build common templates for Kubernetes resources.
Say I have 10 CronJobs in a single chart that differ only by args and names. Today there are 10 full job manifests, and 95% of the manifest content is identical. I want to move the common part into a template and create 10 manifests that provide only the specific values for args and names.
So I defined a template, _cron-job.yaml:
{{- define "common.cron-job" }}
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: {{ include "costing-report.name" . }}-bom-{{ .Values.env }}
  labels:
{{ include "costing-report.labels" . | indent 4 }}
spec:
  schedule: "{{ .Values.cronjob.scheduleBom }}"
  suspend: {{ .Values.cronjob.suspendBom }}
  {{- with .Values.cronjob.concurrencyPolicy }}
  concurrencyPolicy: {{ . }}
  {{- end }}
  {{- with .Values.cronjob.failedJobsHistoryLimit }}
  failedJobsHistoryLimit: {{ . }}
  {{- end }}
  {{- with .Values.cronjob.successfulJobsHistoryLimit }}
  successfulJobsHistoryLimit: {{ . }}
  {{- end }}
  jobTemplate:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "costing-report.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      template:
        spec:
          containers:
            - name: {{ .Chart.Name }}
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
              imagePullPolicy: {{ .Values.image.pullPolicy }}
              args: ["--report=Bom", "--email={{ .Values.configmap.service.email_bom }}"]
              env:
                - name: spring_profiles_active
                  value: "{{ .Values.env }}"
              envFrom:
                - configMapRef:
                    name: {{ include "costing-report.fullname" . }}
                - secretRef:
                    name: {{ .Values.secrets.properties }}
          restartPolicy: Never
          {{- with .Values.imagePullSecrets }}
          imagePullSecrets:
            {{- toYaml . | nindent 8 }}
          {{- end }}
{{- end -}}
Now I need to create a job manifest, job1.yaml, that overrides the name and args:
{{- template "common.cron-job" . -}}
??? override ???
name: {{ include "costing-report.name" . }}-job1-{{ .Values.env }}
jobTemplate:
  spec:
    template:
      spec:
        containers:
          args: ["--report=Job1", "--email={{ .Values.configmap.service.email_job1 }}"]
Is there any way to do this? I didn't find it in the Helm docs. I did find https://github.com/helm/charts/tree/master/incubator/common, but it didn't work either and gave me an error.
Thanks.
Solution found
Option 1
Use the example from the Helm GitHub repo: https://github.com/helm/charts/tree/master/incubator/common
The solution is based on YAML merging and values overriding. It is pretty flexible and lets you define common templates and then use them to compose the final Kubernetes manifest.
Option 2
Define a common template and pass parameters with the desired values.
In my case it looks something like this.
_common.cronjob.yaml
{{- define "common.cronjob" -}}
{{- $root := .root -}}
{{- $name := .name -}}
{{- $schedule := .schedule -}}
{{- $suspend := .suspend -}}
{{- $args := .args -}}
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: {{ $name }}
  labels:
{{ include "costing-report.labels" $root | indent 4 }}
spec:
  schedule: {{ $schedule }}
  suspend: {{ $suspend }}
  {{- with $root.Values.cronjob.concurrencyPolicy }}
  concurrencyPolicy: {{ . }}
  {{- end }}
  {{- with $root.Values.cronjob.failedJobsHistoryLimit }}
  failedJobsHistoryLimit: {{ . }}
  {{- end }}
  {{- with $root.Values.cronjob.successfulJobsHistoryLimit }}
  successfulJobsHistoryLimit: {{ . }}
  {{- end }}
  jobTemplate:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "costing-report.name" $root }}
        app.kubernetes.io/instance: {{ $root.Release.Name }}
    spec:
      template:
        spec:
          containers:
            - name: {{ $root.Chart.Name }}
              image: "{{ $root.Values.image.repository }}:{{ $root.Values.image.tag }}"
              imagePullPolicy: {{ $root.Values.image.pullPolicy }}
              args: {{ $args }}
              env:
                - name: spring_profiles_active
                  value: "{{ $root.Values.env }}"
              envFrom:
                - configMapRef:
                    name: {{ include "costing-report.fullname" $root }}
                - secretRef:
                    name: {{ $root.Values.secrets.properties }}
          restartPolicy: Never
          {{- with $root.Values.imagePullSecrets }}
          imagePullSecrets:
            {{- toYaml . | nindent 8 }}
          {{- end }}
{{- end -}}
Then create the job manifest(s) and define the values to pass to the common template:
bom-cronjob.yaml
{{ $bucket := (printf "%s%s%s" "\"--bucket=ll-costing-report-" .Values.env "\"," )}}
{{ $email := (printf "%s%s%s" "\"--email=" .Values.configmap.service.email_bom "\"") }}
{{ $args := (list "\"--report=Bom\"," "\"--reportType=load\"," "\"--source=bamboorose\"," $bucket "\"--table=COSTING_BOM\"," "\"--ignoreLines=1\"," "\"--truncate=true\"," $email )}}
{{ $name := (printf "%s%s" "costing-report.name-bom-" .Values.env )}}
{{- template "common.cronjob" (dict "root" . "name" $name "schedule" .Values.cronjob.scheduleBom "suspend" .Values.cronjob.suspendBom "args" $args) -}}
The last line does the trick. The trick is that you can pass only a single argument to a template; in my case it's a dictionary with all the values I need on the template side. You can also omit defining template variables and use the dict values directly. Please note that I pass the root context (scope) as "root" and prefix . with $root in the template.
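The single-argument rule isn't Helm-specific: Go's text/template, which Helm builds on, also accepts exactly one data value per template invocation, so bundling parameters into one map is the natural workaround. A minimal sketch in plain Go (the renderCron helper and its field names are illustrative, not taken from the chart):

```go
package main

import (
	"bytes"
	"fmt"
	"log"
	"text/template"
)

// renderCron renders a named template with a single map argument,
// mirroring Helm's (dict "root" . "name" ... "args" ...) trick:
// only one data value can be passed, so everything is bundled into it.
func renderCron(root map[string]string, name string, args []string) string {
	const tpl = `{{define "cronjob"}}name: {{ .name }}
image: {{ .root.image }}
args: {{ .args }}{{end}}`
	t := template.Must(template.New("").Parse(tpl))
	params := map[string]interface{}{
		"root": root, // the caller's context, like Helm's $root
		"name": name,
		"args": args,
	}
	var buf bytes.Buffer
	if err := t.ExecuteTemplate(&buf, "cronjob", params); err != nil {
		log.Fatal(err)
	}
	return buf.String()
}

func main() {
	fmt.Println(renderCron(map[string]string{"image": "repo/app:1.0"}, "bom-job", []string{"--report=Bom"}))
}
```

Inside the template, the bundled values are reached as `.root`, `.name`, and `.args`, exactly as the chart above does with `$root := .root` and friends.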
Related
I have this template in my helm chart:
{{ $appEnvs := list "dev" "uat" }}
{{- range $env := $appEnvs }}
...
{{- end }}
I am getting this error:
<$appEnvs>: range can't iterate over [dev uat]
I spent a long time trying many things like:
{{- range $env := toYaml $appEnvs }} and {{- range $env := tuple $appEnvs }} and others, but no luck.
However, when I put the list directly, without a variable, it works. I mean, {{- range $env := list "dev" "uat" }} works!?
How do I iterate over a VARIABLE created by the sprig function list?
apiVersion: v1
kind: ConfigMap
metadata:
  name: test
data:
  test: |-
{{- $appEnvs := list "dev" "uat" }}
{{- range $idx, $env := $appEnvs }}
    {{ $env }}: {{ $env }}
{{- end }}
output
apiVersion: v1
kind: ConfigMap
metadata:
  name: test
data:
  test: |-
    dev: dev
    uat: uat
range
...
{{- $appEnvs := list "dev" "uat" }}
{{- range $appEnvs }}
    {{ . }}: {{ . }}
{{- end }}
...
output
...
dev: dev
uat: uat
...
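Ranging over a slice stored in a variable works in plain Go text/template too, which is what Helm sits on top of. Sprig's list helper isn't available outside Helm, so in this sketch (the renderEnvs function and the Envs field are mine) the slice comes in through the data value instead:

```go
package main

import (
	"bytes"
	"fmt"
	"log"
	"text/template"
)

// renderEnvs assigns a slice to a template variable and ranges over it,
// the same pattern as {{- $appEnvs := list "dev" "uat" }} in Helm.
func renderEnvs() string {
	const tpl = `{{range $env := .Envs}}{{$env}}: {{$env}}
{{end}}`
	t := template.Must(template.New("envs").Parse(tpl))
	var buf bytes.Buffer
	if err := t.Execute(&buf, map[string][]string{"Envs": {"dev", "uat"}}); err != nil {
		log.Fatal(err)
	}
	return buf.String()
}

func main() {
	fmt.Print(renderEnvs())
}
```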
I'm currently trying to run
helm upgrade --install --dry-run --debug django-test ./helm/django-website
but when I do, I'm met with this error line, and I can't seem to fix the issue with anything I try:
Error: UPGRADE FAILED: YAML parse error on django-website/templates/deployment.yaml: error converting YAML to JSON: yaml: line 38: mapping values are not allowed in this context
helm.go:84: [debug] error converting YAML to JSON: yaml: line 38: mapping values are not allowed in this context
YAML parse error on django-website/templates/deployment.yaml
helm.sh/helm/v3/pkg/releaseutil.(*manifestFile).sort
helm.sh/helm/v3/pkg/releaseutil/manifest_sorter.go:146
helm.sh/helm/v3/pkg/releaseutil.SortManifests
helm.sh/helm/v3/pkg/releaseutil/manifest_sorter.go:106
helm.sh/helm/v3/pkg/action.(*Configuration).renderResources
helm.sh/helm/v3/pkg/action/action.go:165
helm.sh/helm/v3/pkg/action.(*Upgrade).prepareUpgrade
helm.sh/helm/v3/pkg/action/upgrade.go:234
helm.sh/helm/v3/pkg/action.(*Upgrade).RunWithContext
helm.sh/helm/v3/pkg/action/upgrade.go:143
main.newUpgradeCmd.func2
helm.sh/helm/v3/cmd/helm/upgrade.go:197
github.com/spf13/cobra.(*Command).execute
github.com/spf13/cobra#v1.3.0/command.go:856
github.com/spf13/cobra.(*Command).ExecuteC
github.com/spf13/cobra#v1.3.0/command.go:974
github.com/spf13/cobra.(*Command).Execute
github.com/spf13/cobra#v1.3.0/command.go:902
main.main
helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
runtime/proc.go:255
runtime.goexit
runtime/asm_arm64.s:1133
UPGRADE FAILED
main.newUpgradeCmd.func2
helm.sh/helm/v3/cmd/helm/upgrade.go:199
github.com/spf13/cobra.(*Command).execute
github.com/spf13/cobra#v1.3.0/command.go:856
github.com/spf13/cobra.(*Command).ExecuteC
github.com/spf13/cobra#v1.3.0/command.go:974
github.com/spf13/cobra.(*Command).Execute
github.com/spf13/cobra#v1.3.0/command.go:902
main.main
helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
runtime/proc.go:255
runtime.goexit
runtime/asm_arm64.s:1133
Line 38 is the include line under env:.
I've read up on YAML indentation and tried YAML scanners, but I can't seem to fix it, and when I do, it causes something else to break.
The code I'm using is generated, so I don't understand why it won't work. Does anyone know how to fix it?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "django-website.fullname" . }}
  labels:
    {{ include "django-website.labels" . | nindent 4 }}
spec:
  {{ if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "django-website.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "django-website.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "django-website.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command: ["make", "start-server"]
          env:
            {{- include "django-website.db.env" . | nindent 10 -}}
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
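A likely culprit is the trailing -}} on that include line: in Go templates (which Helm uses), -}} trims all whitespace that follows the action, including the newline, so ports: gets glued onto the end of the rendered env line and the YAML parser reports "mapping values are not allowed in this context" there. A minimal sketch of the effect in plain Go text/template (the render helper and the dbEnv value are illustrative, not from the chart):

```go
package main

import (
	"bytes"
	"fmt"
	"log"
	"text/template"
)

// render executes a template string against a fixed data map,
// standing in for Helm rendering an include.
func render(tpl string) string {
	t := template.Must(template.New("t").Parse(tpl))
	var buf bytes.Buffer
	if err := t.Execute(&buf, map[string]string{"dbEnv": "- name: DB_HOST"}); err != nil {
		log.Fatal(err)
	}
	return buf.String()
}

func main() {
	// Trailing "-}}" chomps the newline that follows the action,
	// so the next YAML key lands on the same line -- invalid YAML.
	fmt.Println(render("env:\n  {{ .dbEnv -}}\nports:"))
	// Without the trailing dash the newline survives and the YAML stays valid.
	fmt.Println(render("env:\n  {{ .dbEnv }}\nports:"))
}
```

If that is what's happening here, dropping the trailing dash (and checking that the nindent width matches the surrounding indentation) would be the first thing to try.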
values.yaml
env:
  isTest: 'true'
  hostData:
    - isActive: true
      name: a
      url: testA
    - isActive: true
      name: A
      url: testB
configmap.yaml
test:
{{- with .Values.env }}
  hostData: {{ .hostData | toJson }}
  isTest: {{ .isTest }}
{{- end }}
Now I want to update the url of hostData. I tried to add:
test:
{{- with .Values.env }}
  hostData: {{ .hostData | toJson }}
  isTest: {{ .isTest }}
{{- end }}
{{- range .Values.env.hostData }}
url: https://{{ .name }}//newName
{{- end }}
But it adds the url keys to the top level of test:
test:
  hostData: [{"isActive":true,"name":"A","url":"testA"},{"isActive":true,"name":"B","url":"testB"}]
  url: AnewName
  url: BnewName
and it didn't update hostData -> url. This is the result I want:
test:
  hostData: [{"isActive":true,"name":"AnewName","url":"testA"},{"isActive":true,"name":"BnewName","url":"testB"}]
I also tried to create a tpl file and add my logic there, but the problem is that I didn't succeed in returning YAML from the tpl:
{{/*
Create hostData
*/}}
{{- define "get-hostData" -}}
{{- range .Values.env.hostData }}
hostData:
  - isActive: {{ .isActive }}
    name: {{ .name }}
    url: {{ newUrlFromValues }}
{{- end }}
{{- end -}}
The problem is that in the configmap it doesn't return YAML but a string:
{{- $test1 := include "get-hostData" . }}
Maybe I need to return it as a JSON array?
It's not entirely clear to me what you want to achieve. I assume your goal is to construct the url field from the name field (although in the provided example you are altering the name instead of the url (?)). You can do it by updating the hostData before converting it to JSON:
test:
{{- with .Values.env }}
  {{- range .hostData }}
  {{- $url := print "https://" .name "/newName" }}
  {{- $_ := set . "url" $url }}
  {{- end }}
  hostData: {{ .hostData | toJson }}
  isTest: {{ .isTest }}
{{- end }}
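The mutate-then-serialize idea can be modeled outside Helm as well: sprig's set simply writes a key into the map before the map is rendered. Plain text/template has no set, so this sketch (renderHostData and the sample data are illustrative) does the mutation in Go and then renders the JSON:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"text/template"
)

// renderHostData rewrites each entry's url from its name -- the
// equivalent of {{ $_ := set . "url" (print "https://" .name "/newName") }}
// -- then emits the updated list as JSON, like {{ .hostData | toJson }}.
func renderHostData(hostData []map[string]interface{}) string {
	for _, h := range hostData {
		h["url"] = "https://" + h["name"].(string) + "/newName"
	}
	raw, err := json.Marshal(hostData)
	if err != nil {
		log.Fatal(err)
	}
	t := template.Must(template.New("cm").Parse("hostData: {{ .json }}"))
	var buf bytes.Buffer
	if err := t.Execute(&buf, map[string]string{"json": string(raw)}); err != nil {
		log.Fatal(err)
	}
	return buf.String()
}

func main() {
	fmt.Println(renderHostData([]map[string]interface{}{
		{"isActive": true, "name": "a", "url": "testA"},
	}))
}
```

One caveat that applies to the Helm version too: set mutates the values in place, so any later template that reads .Values.env.hostData in the same render will see the rewritten urls.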
I have a Go template as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "mychart.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "mychart.name" . }}
    helm.sh/chart: {{ include "mychart.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "mychart.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "mychart.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          <-------------------------- Here --------------------------------->
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
I want to add the following piece of code to it, just below the imagePullPolicy line. Any ideas?
env:
  - name: NODE_ENV
    value: "{{ .Values.node_env }}"
Background:
The code snippet above is a Helm-generated deployment.yaml file, which is used to deploy apps to Kubernetes.
Basically, what I am trying to achieve is a script that can set all such things in a Helm chart, so that something like adding an environment variable can be done in one go.
Here's a simplified example. The defines and ends are on the same line as the template content to avoid extra blank lines.
main.yaml:
{{define "main"}}apiVersion: apps/v1
spec:
  template:
    spec:
      containers:
        - name: foo
{{template "env" .Values}}
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
{{end}}
env.yaml:
{{define "env"}}          env:
            - name: NODE_ENV
              value: "{{ .node_env }}"{{end}}
main.go:
package main

import (
	"log"
	"os"
	"text/template"
)

type Deployment struct {
	Values map[string]string
}

func main() {
	dep := Deployment{Values: map[string]string{
		"node_env": "PROD",
	}}
	tmpl, err := template.ParseFiles("main.yaml", "env.yaml")
	if err != nil {
		log.Fatal(err)
	}
	err = tmpl.ExecuteTemplate(os.Stdout, "main", dep)
	if err != nil {
		log.Fatal(err)
	}
}
I'm using Helm to generate Kubernetes YAML.
My values.yaml looks like this:
...
jobs:
  - name: job1
    command: [sh, -c, "/app/deployment/start.sh job1"]
    activeDeadlineSeconds: 600
  - name: job2
    command: [sh, -c, "/app/deployment/start.sh job2"]
    activeDeadlineSeconds: 600
...
templates/jobs.yaml
{{ range $i, $job := .Values.jobs -}}
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ template "name" . }}-{{ $job.name }}
  labels:
    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
  activeDeadlineSeconds: {{ $job.activeDeadlineSeconds }}
  template:
    metadata:
      labels:
        app: {{ template "name" . }}-{{ $job.name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command: {{ $job.command }}
          env:
{{ toYaml .Values.service.env | indent 10 }}
          ports:
            - containerPort: {{ .Values.service.internalPort }}
{{- end }}
Helm is failing with this error:
Error: UPGRADE FAILED: render error in "app1/templates/jobs.yaml": template: app1/templates/_helpers.tpl:6:18: executing "name" at <.Chart.Name>: can't evaluate field Name in type interface {}
When I look at _helpers.tpl:
{{- define "name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
If I remove the range loop and the references to $job in jobs.yaml, the name template in _helpers.tpl works fine. When I add the loop back, it fails.
It seems that within the loop, the dot (.) pipeline, which contains the scope for .Chart and .Values, is reassigned to something else.
What am I doing wrong?
Inside the loop, the value of . is set to the current element, so you have to use $.Chart.Name to access your data.
I asked a similar question, and I think this answer will answer your question too: https://stackoverflow.com/a/44734585/8131948
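The $ shortcut comes straight from Go's text/template, where $ is always bound to the value passed to Execute no matter how deeply nested the range is, so no context-saving variable is needed. A small standalone sketch (the renderJobs function and the data shape are mine, not the chart's):

```go
package main

import (
	"bytes"
	"fmt"
	"log"
	"text/template"
)

// renderJobs ranges over .Jobs; inside the loop "." is the current job
// name, while "$" still refers to the top-level data, so $.Chart works
// just like $.Chart.Name would in a Helm range block.
func renderJobs() string {
	const tpl = `{{range .Jobs}}{{$.Chart}}-{{.}}
{{end}}`
	t := template.Must(template.New("jobs").Parse(tpl))
	data := map[string]interface{}{
		"Chart": "app1",
		"Jobs":  []string{"job1", "job2"},
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, data); err != nil {
		log.Fatal(err)
	}
	return buf.String()
}

func main() {
	fmt.Print(renderJobs())
}
```

Saving the context in a variable, as in the accepted workaround below with $global, is equivalent; $ just does it for free.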
I ended up saving the global context and then updating all of my references like this:
{{ $global := . }}
{{ range $i, $job := .Values.jobs -}}
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ template "name" $global }}-{{ $job.name }}
  labels:
    chart: "{{ $global.Chart.Name }}-{{ $global.Chart.Version | replace "+" "_" }}"
spec:
  activeDeadlineSeconds: {{ $job.activeDeadlineSeconds }}
  template:
    metadata:
      labels:
        app: {{ template "name" $global }}-{{ $job.name }}
    spec:
      containers:
        - name: {{ $global.Chart.Name }}
          image: "{{ $global.Values.image.repository }}:{{ $global.Values.image.tag }}"
          imagePullPolicy: {{ $global.Values.image.pullPolicy }}
          command: {{ $job.command }}
          env:
{{ toYaml $global.Values.service.env | indent 10 }}
          ports:
            - containerPort: {{ $global.Values.service.internalPort }}
{{- end }}