I'm using Helm to generate Kubernetes YAML manifests.
My values.yaml looks like this:
...
jobs:
  - name: job1
    command: [sh, -c, "/app/deployment/start.sh job1"]
    activeDeadlineSeconds: 600
  - name: job2
    command: [sh, -c, "/app/deployment/start.sh job2"]
    activeDeadlineSeconds: 600
...
templates/jobs.yaml
{{ range $i, $job := .Values.jobs -}}
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ template "name" . }}-{{ $job.name }}
  labels:
    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
  activeDeadlineSeconds: {{ $job.activeDeadlineSeconds }}
  template:
    metadata:
      labels:
        app: {{ template "name" . }}-{{ $job.name }}
    spec:
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        command: {{ $job.command }}
        env:
{{ toYaml .Values.service.env | indent 10 }}
        ports:
        - containerPort: {{ .Values.service.internalPort }}
{{- end }}
Helm is failing with this error:
Error: UPGRADE FAILED: render error in "app1/templates/jobs.yaml": template: app1/templates/_helpers.tpl:6:18: executing "name" at <.Chart.Name>: can't evaluate field Name in type interface {}
When I look at _helpers.tpl:
{{- define "name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
If I remove the range loop and references to $job in my jobs.yaml, the _helpers.tpl name template works fine. When I add in the loop, it fails.
It seems that within the loop, the dot (.) pipeline, which normally holds the scope for .Chart and .Values, is reassigned to something else.
What am I doing wrong?
Inside the loop, . is set to the current element, so you have to use $.Chart.Name (the $ variable always refers to the root context) to access your data.
I asked a similar question, and I think this answer https://stackoverflow.com/a/44734585/8131948 will answer your question too.
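For illustration, here is a minimal sketch of the same loop using the built-in $ variable instead of a saved variable (only the fields shown here are assumed; the rest of the template stays unchanged):
{{ range $i, $job := .Values.jobs -}}
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ template "name" $ }}-{{ $job.name }}
  labels:
    chart: "{{ $.Chart.Name }}-{{ $.Chart.Version | replace "+" "_" }}"
spec:
  activeDeadlineSeconds: {{ $job.activeDeadlineSeconds }}
{{- end }}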
I ended up saving the global context and then updating all of my references like this:
{{ $global := . }}
{{ range $i, $job := .Values.jobs -}}
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ template "name" $global }}-{{ $job.name }}
  labels:
    chart: "{{ $global.Chart.Name }}-{{ $global.Chart.Version | replace "+" "_" }}"
spec:
  activeDeadlineSeconds: {{ $job.activeDeadlineSeconds }}
  template:
    metadata:
      labels:
        app: {{ template "name" $global }}-{{ $job.name }}
    spec:
      containers:
      - name: {{ $global.Chart.Name }}
        image: "{{ $global.Values.image.repository }}:{{ $global.Values.image.tag }}"
        imagePullPolicy: {{ $global.Values.image.pullPolicy }}
        command: {{ $job.command }}
        env:
{{ toYaml $global.Values.service.env | indent 10 }}
        ports:
        - containerPort: {{ $global.Values.service.internalPort }}
{{- end }}
Related
I'm currently trying to run
helm upgrade --install --dry-run --debug django-test ./helm/django-website
but when I do, I'm met with this error, and I can't seem to fix the issue with anything I try:
Error: UPGRADE FAILED: YAML parse error on django-website/templates/deployment.yaml: error converting YAML to JSON: yaml: line 38: mapping values are not allowed in this context
helm.go:84: [debug] error converting YAML to JSON: yaml: line 38: mapping values are not allowed in this context
YAML parse error on django-website/templates/deployment.yaml
helm.sh/helm/v3/pkg/releaseutil.(*manifestFile).sort
helm.sh/helm/v3/pkg/releaseutil/manifest_sorter.go:146
helm.sh/helm/v3/pkg/releaseutil.SortManifests
helm.sh/helm/v3/pkg/releaseutil/manifest_sorter.go:106
helm.sh/helm/v3/pkg/action.(*Configuration).renderResources
helm.sh/helm/v3/pkg/action/action.go:165
helm.sh/helm/v3/pkg/action.(*Upgrade).prepareUpgrade
helm.sh/helm/v3/pkg/action/upgrade.go:234
helm.sh/helm/v3/pkg/action.(*Upgrade).RunWithContext
helm.sh/helm/v3/pkg/action/upgrade.go:143
main.newUpgradeCmd.func2
helm.sh/helm/v3/cmd/helm/upgrade.go:197
github.com/spf13/cobra.(*Command).execute
github.com/spf13/cobra#v1.3.0/command.go:856
github.com/spf13/cobra.(*Command).ExecuteC
github.com/spf13/cobra#v1.3.0/command.go:974
github.com/spf13/cobra.(*Command).Execute
github.com/spf13/cobra#v1.3.0/command.go:902
main.main
helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
runtime/proc.go:255
runtime.goexit
runtime/asm_arm64.s:1133
UPGRADE FAILED
main.newUpgradeCmd.func2
helm.sh/helm/v3/cmd/helm/upgrade.go:199
github.com/spf13/cobra.(*Command).execute
github.com/spf13/cobra#v1.3.0/command.go:856
github.com/spf13/cobra.(*Command).ExecuteC
github.com/spf13/cobra#v1.3.0/command.go:974
github.com/spf13/cobra.(*Command).Execute
github.com/spf13/cobra#v1.3.0/command.go:902
main.main
helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
runtime/proc.go:255
runtime.goexit
runtime/asm_arm64.s:1133
Line 38 is the include line under env:.
I've read up on YAML indentation and tried YAML scanners,
but I can't seem to fix it, and when I do, it causes something else to break.
The code I'm using is generated boilerplate for Kubernetes, so I don't understand why it won't work. Does anyone know how to fix it?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "django-website.fullname" . }}
  labels:
    {{ include "django-website.labels" . | nindent 4 }}
spec:
  {{ if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "django-website.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "django-website.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "django-website.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command: ["make", "start-server"]
          env:
            {{- include "django-website.db.env" . | nindent 10 -}}
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
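A minimal sketch of one likely culprit, offered as an assumption rather than a confirmed fix: the trailing - in -}} on the include line trims the newline that follows it, so in the rendered YAML ports: can end up glued onto the last env entry, which is exactly what produces "mapping values are not allowed in this context". Dropping that dash keeps the newline:
          env:
            {{- include "django-website.db.env" . | nindent 10 }}
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
It may also help to run helm template on the chart and look at what line 38 of the rendered deployment.yaml actually contains.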
I am deploying a StatefulSet with Helm and the pods are complaining about volumes.
What is the proper way of doing this with AWS EBS, given the Helm templates below?
Warning FailedScheduling 30s (x112 over 116m) default-scheduler 0/9 nodes are available: 9 pod has unbound immediate PersistentVolumeClaims.
deployment.yaml
volumeClaimTemplates:
  - metadata:
      name: {{ .Values.storage.name }}
      labels:
        app: {{ template "etcd.name" . }}
        chart: {{ .Chart.Name }}-{{ .Chart.Version }}
        release: {{ .Release.Name }}
        heritage: {{ .Release.Service }}
    spec:
      storageClassName: {{ .Values.storage.class | default .Values.global.storage.class }}
      accessModes:
        - {{ .Values.storage.accessMode }}
      resources:
        requests:
          storage: {{ .Values.storage.size }}
values.yaml
storage:
  name: etcd-data
  mountPath: /somepath/etcd
  class: "default"
  size: 1Gi
  accessMode: ReadWriteOnce
Try changing the storage class name to the default class name on EKS:
...
spec:
  storageClassName: {{ .Values.storage.class | default "gp2" | quote }}
  accessModes:
    - ...
storage:
  ...
  class: "gp2"
  ...
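As a quick check (not part of the original answer), you can run kubectl get storageclass against the cluster to confirm what the default storage class is actually called; on EKS the pre-provisioned EBS class is typically named gp2 and is marked (default) in that output, but if the name differs, use whatever the command reports instead of hard-coding gp2.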
First-day Helm user here, trying to understand how to build common templates for k8s resources.
Let's say I have 10 cron jobs within a single chart, all of them differing only by args and names. Today, 10 full job manifests exist and 95% of the manifest content is identical. I want to move the common part into a template and create 10 manifests where I only provide the specific values for args and names.
So I defined a template, _cron-job.yaml:
{{- define "common.cron-job"}}
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: {{ include "costing-report.name" . }}-bom-{{ .Values.env }}
labels:
{{ include "costing-report.labels" . | indent 4 }}
spec:
schedule: "{{ .Values.cronjob.scheduleBom }}"
suspend: {{ .Values.cronjob.suspendBom }}
{{- with .Values.cronjob.concurrencyPolicy }}
concurrencyPolicy: {{ . }}
{{- end }}
{{- with .Values.cronjob.failedJobsHistoryLimit }}
failedJobsHistoryLimit: {{ . }}
{{- end }}
{{- with .Values.cronjob.successfulJobsHistoryLimit }}
successfulJobsHistoryLimit: {{ . }}
{{- end }}
jobTemplate:
metadata:
labels:
app.kubernetes.io/name: {{ include "costing-report.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
template:
spec:
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
args: ["--report=Bom","--email={{ .Values.configmap.service.email_bom }}"]
env:
- name: spring_profiles_active
value: "{{ .Values.env }}"
envFrom:
- configMapRef:
name: {{ include "costing-report.fullname" . }}
- secretRef:
name: {{ .Values.secrets.properties }}
restartPolicy: Never
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end -}}
Now I need to create a job manifest, job1.yaml, that overrides the name and args:
{{- template "common.cron-job" . -}}
??? override ???
name: {{ include "cost-report.name" . }}-job1-{{ .Values.env }}
jobTemplate:
  spec:
    template:
      spec:
        containers:
          args: ["--report=Bom","--email={{ .Values.configmap.service.email_bom }}"]
Is there any way to do this? I didn't find it in the Helm docs. I did find https://github.com/helm/charts/tree/master/incubator/common, but it didn't work either and gave me an error.
Thanks.
Solution found.
Option 1
Use the example from the Helm GitHub repo: https://github.com/helm/charts/tree/master/incubator/common
This solution is based on YAML merging and values overrides. It's quite flexible and lets you define common templates and then use them to compose the final k8s manifests.
Option 2
Define a common template and pass parameters with the desired values.
In my case it looks something like this.
_common.cronjob.yaml
{{- define "common.cronjob" -}}
{{- $root := .root -}}
{{- $name := .name -}}
{{- $schedule := .schedule -}}
{{- $suspend := .suspend -}}
{{- $args := .args -}}
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: {{ $name }}
labels:
{{ include "costing-report.labels" $root | indent 4 }}
spec:
schedule: {{ $schedule }}
suspend: {{ $suspend }}
{{- with $root.Values.cronjob.concurrencyPolicy }}
concurrencyPolicy: {{ . }}
{{- end }}
{{- with $root.Values.cronjob.failedJobsHistoryLimit }}
failedJobsHistoryLimit: {{ . }}
{{- end }}
{{- with $root.Values.cronjob.successfulJobsHistoryLimit }}
successfulJobsHistoryLimit: {{ . }}
{{- end }}
jobTemplate:
metadata:
labels:
app.kubernetes.io/name: {{ include "costing-report.name" $root }}
app.kubernetes.io/instance: {{ $root.Release.Name }}
spec:
template:
spec:
containers:
- name: {{ $root.Chart.Name }}
image: "{{ $root.Values.image.repository }}:{{ $root.Values.image.tag }}"
imagePullPolicy: {{ $root.Values.image.pullPolicy }}
args: {{ $args }}
env:
- name: spring_profiles_active
value: "{{ $root.Values.env }}"
envFrom:
- configMapRef:
name: {{ include "costing-report.fullname" $root }}
- secretRef:
name: {{ $root.Values.secrets.properties }}
restartPolicy: Never
{{- with $root.Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end -}}
Then create the job manifest(s) and define the values to pass to the common template.
bom-cronjob.yaml
{{ $bucket := (printf "%s%s%s" "\"--bucket=ll-costing-report-" .Values.env "\"," )}}
{{ $email := (printf "%s%s%s" "\"--email=" .Values.configmap.service.email_bom "\"") }}
{{ $args := (list "\"--report=Bom\"," "\"--reportType=load\"," "\"--source=bamboorose\"," $bucket "\"--table=COSTING_BOM\"," "\"--ignoreLines=1\"," "\"--truncate=true\"," $email )}}
{{ $name := (printf "%s%s" "costing-report.name-bom-" .Values.env )}}
{{- template "common.cronjob" (dict "root" . "name" $name "schedule" .Values.cronjob.scheduleBom "suspend" .Values.cronjob.suspendBom "args" $args) -}}
The last line does the trick. The trick is that you can pass only a single argument to a template, so in my case it's a dictionary with all the values I need on the template side. You can also omit defining template variables and use the dict values directly. Please note that I pass the root context (scope) as "root" and prefix . with "root" in the template.
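To illustrate how this scales to many jobs, here is a hedged sketch of a second manifest reusing the same template; the email_inventory, scheduleInventory and suspendInventory values keys are hypothetical and only stand in for whatever the real chart defines:
inventory-cronjob.yaml
{{ $email := (printf "%s%s%s" "\"--email=" .Values.configmap.service.email_inventory "\"") }}
{{ $args := (list "\"--report=Inventory\"," $email) }}
{{ $name := (printf "%s%s" "costing-report.name-inventory-" .Values.env )}}
{{- template "common.cronjob" (dict "root" . "name" $name "schedule" .Values.cronjob.scheduleInventory "suspend" .Values.cronjob.suspendInventory "args" $args) -}}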
I have a Go template as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "mychart.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "mychart.name" . }}
    helm.sh/chart: {{ include "mychart.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "mychart.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "mychart.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          <-------------------------- Here --------------------------------->
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
I want to add the following piece of code to it, just below the imagePullPolicy line. Any ideas?
          env:
            - name: NODE_ENV
              value: "{{ .Values.node_env }}"
Background:
The above code snippet is a Helm-generated deployment.yaml file, which is used to deploy apps to Kubernetes.
Basically, what I am trying to achieve is a script that can set all such things in a Helm chart, so that tasks like adding an environment variable can be done in one go.
Here's a simplified example. The defines and ends are on the same line as the template content to avoid extra blank lines.
main.yaml:
{{define "main"}}apiVersion: apps/v1
spec:
template:
spec:
containers:
- name: foo
{{template "env" .Values}}
ports:
- name: http
containerPort: 80
protocol: TCP
{{end}}
env.yaml:
{{define "env"}} env:
- name: NODE_ENV
value: "{{ .node_env }}"{{end}}
main.go:
package main

import (
	"log"
	"os"
	"text/template"
)

// Deployment mimics the data Helm passes to its templates: a map of values.
type Deployment struct {
	Values map[string]string
}

func main() {
	dep := Deployment{Values: map[string]string{
		"node_env": "PROD",
	}}
	// Parse both template files so "main" can reference "env".
	tmpl, err := template.ParseFiles("main.yaml", "env.yaml")
	if err != nil {
		log.Fatal(err)
	}
	// Render the "main" template with the deployment data.
	err = tmpl.ExecuteTemplate(os.Stdout, "main", dep)
	if err != nil {
		log.Fatal(err)
	}
}
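For reference, with the indentation shown above, go run main.go should print roughly the following (the exact spacing depends on how the template files are indented):
apiVersion: apps/v1
spec:
  template:
    spec:
      containers:
      - name: foo
        env:
        - name: NODE_ENV
          value: "PROD"
        ports:
        - name: http
          containerPort: 80
          protocol: TCP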
I need help with "if" statements in my Prometheus Helm chart. What I am trying to achieve is a Prometheus chart with persistent volumes on EBS or NFS. It works fine for EBS, but it doesn't work for NFS. I think the problem is with my if-statement logic.
When I set storageClass: "nfs" in values.yaml, I get this error:
Error: release prometheus failed: PersistentVolume "prometheus-alertmanager" is invalid: spec: Required value: must specify a volume type
In my values.yaml file I have:
persistentVolume:
  enabled: true
  accessModes:
    - ReadWriteOnce
  annotations: {}
  existingClaim: ""
  mountPath: /data
  subPath: "alertmanager/"
  size: 4Gi
  ReclaimPolicy: "Recycle"
  storageClass: "aws"
  volumeID: "vol-xxx"
  fs_mounts:
    path: /data/alertmanager
    server: 127.0.0.1
The only differences for the Prometheus server are the subPath and the path under fs_mounts.
In my alertmanager-pv I have:
{{- if not .Values.alertmanager.statefulSet.enabled -}}
{{- if and .Values.alertmanager.enabled .Values.alertmanager.persistentVolume.enabled -}}
apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus-alertmanager
spec:
  capacity:
    storage: {{ .Values.alertmanager.persistentVolume.size }}
  accessModes:
{{ toYaml .Values.alertmanager.persistentVolume.accessModes | indent 4 }}
  persistentVolumeReclaimPolicy: "{{ .Values.alertmanager.persistentVolume.ReclaimPolicy }}"
{{- if (eq "aws" .Values.alertmanager.persistentVolume.storageClass) }}
  storageClassName: "gp2"
  awsElasticBlockStore:
    fsType: "ext4"
    volumeID: {{ .Values.alertmanager.persistentVolume.volumeID }}
{{- if (eq "nfs" .Values.alertmanager.persistentVolume.storageClass) }}
  StorageClassName: "nfs"
  mountOptions:
    - hard
    - nfsvers=4.1
    - timeo=600
    - retrans=2
  nfs:
    server: {{ .Values.alertmanager.persistentVolume.fs_mounts.server }}
    path: {{ .Values.alertmanager.persistentVolume.fs_mounts.path }}
{{- end }}
{{- end }}
{{- end -}}
{{- end -}}
and in alertmanager-pvc:
{{- if not .Values.alertmanager.statefulSet.enabled -}}
{{- if and .Values.alertmanager.enabled .Values.alertmanager.persistentVolume.enabled -}}
{{- if not .Values.alertmanager.persistentVolume.existingClaim -}}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
  name: prometheus-alertmanager
spec:
  accessModes:
{{ toYaml .Values.alertmanager.persistentVolume.accessModes | indent 4 }}
{{- if (eq "aws" .Values.alertmanager.persistentVolume.storageClass) }}
  storageClassName: "gp2"
  resources:
    requests:
      storage: {{ .Values.alertmanager.persistentVolume.size }}
{{- if (eq "nfs" .Values.alertmanager.persistentVolume.storageClass) }}
  storageClassName: "nfs"
  resources:
    requests:
      storage: {{ .Values.alertmanager.persistentVolume.size }}
{{- end }}
{{- end }}
{{- end -}}
{{- end -}}
{{- end -}}
The problem was with wrong indentation of the nfs if statement: it was nested inside the aws block instead of being a separate branch, so when storageClass was "nfs" no volume source was rendered at all.
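A minimal sketch of the corrected branching, assuming the intent is one branch per storage class (using else if so the aws block is closed before the nfs one starts):
{{- if (eq "aws" .Values.alertmanager.persistentVolume.storageClass) }}
  storageClassName: "gp2"
  awsElasticBlockStore:
    fsType: "ext4"
    volumeID: {{ .Values.alertmanager.persistentVolume.volumeID }}
{{- else if (eq "nfs" .Values.alertmanager.persistentVolume.storageClass) }}
  storageClassName: "nfs"
  mountOptions:
    - hard
    - nfsvers=4.1
    - timeo=600
    - retrans=2
  nfs:
    server: {{ .Values.alertmanager.persistentVolume.fs_mounts.server }}
    path: {{ .Values.alertmanager.persistentVolume.fs_mounts.path }}
{{- end }}
The same restructuring applies to the PVC template, and the PersistentVolume field name should be storageClassName (lower-case s) rather than StorageClassName.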