I am deploying a StatefulSet with Helm and the pods are complaining about volumes.
What is the proper way to do this with AWS EBS, given the Helm templates below?
Warning FailedScheduling 30s (x112 over 116m) default-scheduler 0/9 nodes are available: 9 pod has unbound immediate PersistentVolumeClaims.
deployment.yaml
volumeClaimTemplates:
  - metadata:
      name: {{ .Values.storage.name }}
      labels:
        app: {{ template "etcd.name" . }}
        chart: {{ .Chart.Name }}-{{ .Chart.Version }}
        release: {{ .Release.Name }}
        heritage: {{ .Release.Service }}
    spec:
      storageClassName: {{ .Values.storage.class | default .Values.global.storage.class }}
      accessModes:
        - {{ .Values.storage.accessMode }}
      resources:
        requests:
          storage: {{ .Values.storage.size }}
values.yaml
storage:
  name: etcd-data
  mountPath: /somepath/etcd
  class: "default"
  size: 1Gi
  accessMode: ReadWriteOnce
Try changing the storage class name to the default one on EKS:
...
spec:
  storageClassName: {{ .Values.storage.class | default "gp2" | quote }}
  accessModes:
    - ...
and in values.yaml:
storage:
  ...
  class: "gp2"
  ...
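For reference, the gp2 StorageClass that EKS provisions by default looks roughly like the sketch below; verify it against your own cluster, since details such as volumeBindingMode differ between EKS versions:
# Approximation of the EKS-provided "gp2" StorageClass (check your cluster before relying on it)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
As long as .Values.storage.class resolves to a StorageClass that actually exists in the cluster, the PVCs created from volumeClaimTemplates can bind and the FailedScheduling warning goes away.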
I have two services that I am trying to deploy through a Helm chart:
Frontend Service (accessible through the host, using NodePort)
Backend Service (only accessible inside the cluster, using ClusterIP)
I am facing an issue with the Ingress of the deployment: I am using an AWS ALB, and it throws a 404 Not Found error when accessing the Frontend Service.
ingress.yaml:
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "metaflow-ui.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- $fullNameStatic := include "metaflow-ui.fullname-static" . -}}
{{- $svcPortStatic := .Values.serviceStatic.port -}}
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: metaflow-ui
  name: {{ $fullName }}
  labels:
    {{- include "metaflow-ui.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
    alb.ingress.kubernetes.io/healthcheck-path: "/api"
    alb.ingress.kubernetes.io/success-codes: "200"
  {{- end }}
spec:
  rules:
    - host: {{ .Values.externalDNS }}
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              serviceName: {{ $fullName }}
              servicePort: {{ $svcPort }}
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: metaflow-ui
  name: {{ $fullName }}
  labels:
    {{- include "metaflow-ui.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
    alb.ingress.kubernetes.io/healthcheck-path: "/"
    alb.ingress.kubernetes.io/success-codes: "200"
  {{- end }}
spec:
  rules:
    - host: {{ .Values.externalDNS }}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              serviceName: {{ $fullNameStatic }}
              servicePort: {{ $svcPortStatic }}
---
{{ end }}
These are the annotations for Ingress under values.yaml:
ingress:
  enabled: true
  className: ""
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/group.name: metaflow-ui
    alb.ingress.kubernetes.io/security-groups: # removed
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/certificate-arn: # removed
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
I read that attaching a group.name was the fix to enable a single AWS ALB to be shared across multiple Ingresses, but it didn't fix the issue. If I remove the second Ingress, the entire site is deployed (but without the backend service).
EDIT:
I found an article that covers this exact use case, "How do I achieve path-based routing on an Application Load Balancer?", and will try it out.
I managed to get it to work using the Ingress setup below. Instead of having one Ingress per service, I ended up using a single Ingress, but kept the group.name.
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "metaflow-ui.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- $fullNameStatic := include "metaflow-ui.fullname-static" . -}}
{{- $svcPortStatic := .Values.serviceStatic.port -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    {{- include "metaflow-ui.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
    alb.ingress.kubernetes.io/healthcheck-path: "/"
    alb.ingress.kubernetes.io/success-codes: "200"
  {{- end }}
spec:
  rules:
    - host: {{ .Values.externalDNS }}
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: {{ $fullName }}
                port:
                  number: {{ $svcPort }}
          - path: /
            pathType: Prefix
            backend:
              service:
                name: {{ $fullNameStatic }}
                port:
                  number: {{ $svcPortStatic }}
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ssl-redirect
                port:
                  name: use-annotation
          - path: /static
            pathType: Prefix
            backend:
              service:
                name: {{ $fullNameStatic }}
                port:
                  number: {{ $svcPortStatic }}
{{ end }}
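If you ever do need to keep separate Ingress resources behind the one shared ALB, the AWS Load Balancer Controller also supports ordering rules across an IngressGroup with the alb.ingress.kubernetes.io/group.order annotation. A hedged sketch of the extra annotation (the order values are illustrative):
ingress:
  annotations:
    alb.ingress.kubernetes.io/group.name: metaflow-ui
    # Lower numbers are evaluated first, so the more specific /api Ingress
    # should get a lower order than the catch-all / Ingress.
    alb.ingress.kubernetes.io/group.order: "10"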
I'd like to define a Kubernetes ServiceAccount bound to a Google ServiceAccount in a Helm chart as the first step, and later use that service account in the specification of Kubernetes pods.
Here's what I've tried: I define the Kubernetes service account, then the Google service account, and finally try to bind the two:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ .Release.Name }}
  namespace: {{ .Release.Namespace }}
  annotations:
    iam.gke.io/gcp-service-account: {{ printf "%s@%s.iam.gserviceaccount.com" .Release.Name .Values.gcp.project }}
---
apiVersion: iam.cnrm.cloud.google.com/v1beta1
kind: IAMServiceAccount
metadata:
  name: {{ .Release.Name }}
  namespace: {{ .Release.Namespace }}
---
# https://cloud.google.com/config-connector/docs/reference/resource-docs/iam/iampolicymember
apiVersion: iam.cnrm.cloud.google.com/v1beta1
kind: IAMPolicyMember
metadata:
  name: {{ .Release.Name }}
  namespace: {{ .Release.Namespace }}
spec:
  member: {{ printf "serviceAccount:%s@%s.iam.gserviceaccount.com" .Release.Name .Values.gcp.project }}
  role: roles/iam.workloadIdentityUser
  resourceRef:
    apiVersion: v1
    kind: ServiceAccount
    name: {{ .Release.Name }}
    namespace: {{ .Release.Namespace }}
The Helm chart, deployed to a GKE cluster with Workload Identity enabled, returns the following error:
Error: UPGRADE FAILED: failed to create resource: admission webhook "iam-validation.cnrm.cloud.google.com" denied the request: resource reference for kind 'ServiceAccount' must include API group
Basically what I'm trying to do is the ConfigConnector equivalent of
gcloud iam service-accounts add-iam-policy-binding \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:<YOUR-GCP-PROJECT>.svc.id.goog[<YOUR-K8S-NAMESPACE>/<YOUR-KSA-NAME>]" \
  <YOUR-GSA-NAME>@<YOUR-GCP-PROJECT>.iam.gserviceaccount.com
which I got from https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine#gsa
Here is a way to bind a Kubernetes service account to a Google Service account with Config Connector:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ .Release.Name }}
  namespace: {{ .Release.Namespace }}
  annotations:
    iam.gke.io/gcp-service-account: {{ printf "%s@%s.iam.gserviceaccount.com" .Release.Name .Values.gcp.project }}
---
apiVersion: iam.cnrm.cloud.google.com/v1beta1
kind: IAMServiceAccount
metadata:
  name: {{ .Release.Name }}
  namespace: {{ .Release.Namespace }}
spec:
  displayName: {{ .Release.Name }}
---
apiVersion: iam.cnrm.cloud.google.com/v1beta1
kind: IAMPolicy
metadata:
  name: {{ .Release.Name }}
  namespace: {{ .Release.Namespace }}
spec:
  resourceRef:
    apiVersion: iam.cnrm.cloud.google.com/v1beta1
    kind: IAMServiceAccount
    name: {{ .Release.Name }}
  bindings:
    - role: roles/iam.workloadIdentityUser
      members:
        - {{ printf "serviceAccount:%s.svc.id.goog[%s/%s]" .Values.gcp.project .Release.Namespace .Release.Name }}
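To then run pods as that Kubernetes service account (and therefore as the bound Google service account), reference it via serviceAccountName in the pod spec. A minimal sketch; the Deployment shape, labels, and image are illustrative, the relevant line is serviceAccountName:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
  namespace: {{ .Release.Namespace }}
spec:
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      # Run as the KSA annotated with iam.gke.io/gcp-service-account,
      # so Workload Identity maps the pod to the Google service account.
      serviceAccountName: {{ .Release.Name }}
      containers:
        - name: app
          image: gcr.io/my-project/my-app:latest  # placeholder image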
I have a Go template as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "mychart.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "mychart.name" . }}
    helm.sh/chart: {{ include "mychart.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "mychart.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "mychart.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          <-------------------------- Here --------------------------------->
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
I want to add the following piece of code to it, just below the imagePullPolicy line. Any ideas?
env:
  - name: NODE_ENV
    value: "{{ .Values.node_env }}"
Background:
The code snippet above is a Helm-generated deployment.yaml file, used to deploy apps to Kubernetes.
Basically, what I am trying to achieve is a script that can patch this kind of Helm chart, so that changes like adding an environment variable can be done in one go.
Here's a simplified example. The define and end actions are on the same line as the template content to avoid extra blank lines.
main.yaml:
{{define "main"}}apiVersion: apps/v1
spec:
template:
spec:
containers:
- name: foo
{{template "env" .Values}}
ports:
- name: http
containerPort: 80
protocol: TCP
{{end}}
env.yaml:
{{define "env"}} env:
- name: NODE_ENV
value: "{{ .node_env }}"{{end}}
main.go:
package main

import (
    "log"
    "os"
    "text/template"
)

// Deployment carries the values referenced by the templates.
type Deployment struct {
    Values map[string]string
}

func main() {
    dep := Deployment{Values: map[string]string{
        "node_env": "PROD",
    }}
    tmpl, err := template.ParseFiles("main.yaml", "env.yaml")
    if err != nil {
        log.Fatal(err)
    }
    // Assign the error so the check below actually sees it.
    err = tmpl.ExecuteTemplate(os.Stdout, "main", dep)
    if err != nil {
        log.Fatal(err)
    }
}
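In a Helm chart itself, the same composition is usually expressed with a named template plus include. A minimal sketch, assuming a helper called mychart.env (the helper name is illustrative) in _helpers.tpl:
{{/* _helpers.tpl: env block shared across templates (illustrative helper name) */}}
{{- define "mychart.env" -}}
env:
  - name: NODE_ENV
    value: {{ .Values.node_env | quote }}
{{- end }}
and then, just below the imagePullPolicy line in deployment.yaml:
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          {{- include "mychart.env" . | nindent 10 }}
The nindent 10 matches the indentation of the other container fields, so the rendered env block lines up with ports and resources.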
I need help with "if" statements in my Prometheus Helm chart. What I am trying to achieve is Prometheus chart with persistent volumes in EBS or NFS. It works fine for EBS, but it doesn't work for NFS. I think the problem is with my if statement logic.
When I set storageClass: "nfs" values.yaml, then I am gettting error:
Error: release prometheus failed: PersistentVolume "prometheus-alertmanager" is invalid: spec: Required value: must specify a volume type
In my values.yaml file I have:
persistentVolume:
  enabled: true
  accessModes:
    - ReadWriteOnce
  annotations: {}
  existingClaim: ""
  mountPath: /data
  subPath: "alertmanager/"
  size: 4Gi
  ReclaimPolicy: "Recycle"
  storageClass: "aws"
  volumeID: "vol-xxx"
  fs_mounts:
    path: /data/alertmanager
    server: 127.0.0.1
For the Prometheus server, the only differences are the subPath and the path under fs_mounts.
In my alertmanager-pv I have:
{{- if not .Values.alertmanager.statefulSet.enabled -}}
{{- if and .Values.alertmanager.enabled .Values.alertmanager.persistentVolume.enabled -}}
apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus-alertmanager
spec:
  capacity:
    storage: {{ .Values.alertmanager.persistentVolume.size }}
  accessModes:
{{ toYaml .Values.alertmanager.persistentVolume.accessModes | indent 4 }}
  persistentVolumeReclaimPolicy: "{{ .Values.alertmanager.persistentVolume.ReclaimPolicy }}"
{{- if (eq "aws" .Values.alertmanager.persistentVolume.storageClass) }}
  storageClassName: "gp2"
  awsElasticBlockStore:
    fsType: "ext4"
    volumeID: {{ .Values.alertmanager.persistentVolume.volumeID }}
{{- if (eq "nfs" .Values.alertmanager.persistentVolume.storageClass) }}
  StorageClassName: "nfs"
  mountOptions:
    - hard
    - nfsvers=4.1
    - timeo=600
    - retrans=2
  nfs:
    server: {{ .Values.alertmanager.persistentVolume.fs_mounts.server }}
    path: {{ .Values.alertmanager.persistentVolume.fs_mounts.path }}
{{- end }}
{{- end }}
{{- end -}}
{{- end -}}
and in alertmanager-pvc:
{{- if not .Values.alertmanager.statefulSet.enabled -}}
{{- if and .Values.alertmanager.enabled .Values.alertmanager.persistentVolume.enabled -}}
{{- if not .Values.alertmanager.persistentVolume.existingClaim -}}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
  name: prometheus-alertmanager
spec:
  accessModes:
{{ toYaml .Values.alertmanager.persistentVolume.accessModes | indent 4 }}
{{- if (eq "aws" .Values.alertmanager.persistentVolume.storageClass) }}
  storageClassName: "gp2"
  resources:
    requests:
      storage: {{ .Values.alertmanager.persistentVolume.size }}
{{- if (eq "nfs" .Values.alertmanager.persistentVolume.storageClass) }}
  storageClassName: "nfs"
  resources:
    requests:
      storage: {{ .Values.alertmanager.persistentVolume.size }}
{{- end }}
{{- end }}
{{- end -}}
{{- end -}}
{{- end -}}
The problem was with how the nfs if statement was nested: the nfs block sits inside the aws block, so it can never render when storageClass is "nfs", which is why the PersistentVolume ends up with no volume type. The aws block needs to be closed with {{- end }} before the nfs condition starts (and StorageClassName should be lowercase storageClassName).
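A sketch of how the storage-type section of alertmanager-pv could look once the aws block is closed before the nfs one (the rest of the file stays as above):
{{- if eq "aws" .Values.alertmanager.persistentVolume.storageClass }}
  storageClassName: "gp2"
  awsElasticBlockStore:
    fsType: "ext4"
    volumeID: {{ .Values.alertmanager.persistentVolume.volumeID }}
{{- end }}
{{- if eq "nfs" .Values.alertmanager.persistentVolume.storageClass }}
  storageClassName: "nfs"
  mountOptions:
    - hard
    - nfsvers=4.1
    - timeo=600
    - retrans=2
  nfs:
    server: {{ .Values.alertmanager.persistentVolume.fs_mounts.server }}
    path: {{ .Values.alertmanager.persistentVolume.fs_mounts.path }}
{{- end }}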
I'm using Helm to generate Kubernetes YAML manifests.
My values.yaml looks like this:
...
jobs:
  - name: job1
    command: [sh, -c, "/app/deployment/start.sh job1"]
    activeDeadlineSeconds: 600
  - name: job2
    command: [sh, -c, "/app/deployment/start.sh job2"]
    activeDeadlineSeconds: 600
...
templates/jobs.yaml
{{ range $i, $job := .Values.jobs -}}
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ template "name" . }}-{{ $job.name }}
  labels:
    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
  activeDeadlineSeconds: {{ $job.activeDeadlineSeconds }}
  template:
    metadata:
      labels:
        app: {{ template "name" . }}-{{ $job.name }}
    spec:
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        command: {{ $job.command }}
        env:
{{ toYaml .Values.service.env | indent 10 }}
        ports:
        - containerPort: {{ .Values.service.internalPort }}
{{- end }}
Helm is failing with this error:
Error: UPGRADE FAILED: render error in "app1/templates/jobs.yaml": template: app1/templates/_helpers.tpl:6:18: executing "name" at <.Chart.Name>: can't evaluate field Name in type interface {}
When I look at _helpers.tpl:
{{- define "name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
If I remove the range loop and the references to $job in my jobs.yaml, the "name" template in _helpers.tpl works fine. When I add the loop back in, it fails.
It seems that within the loop, the dot (.) pipeline, which normally holds the scope for .Chart and .Values, is reassigned to something else.
What am I doing wrong?
Inside the loop, . is set to the current element, so you have to use $.Chart.Name (the root context) to access your data.
I asked a similar question and I think the answer https://stackoverflow.com/a/44734585/8131948 will answer your question too.
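For example, inside the range you can reference the root context directly with $ instead of saving it in a variable first; a short sketch of just the metadata block:
{{ range $i, $job := .Values.jobs -}}
apiVersion: batch/v1
kind: Job
metadata:
  # $ is the root context, so $.Chart and $.Values keep working inside the loop.
  name: {{ template "name" $ }}-{{ $job.name }}
  labels:
    chart: "{{ $.Chart.Name }}-{{ $.Chart.Version | replace "+" "_" }}"
{{- end }}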
I ended up saving the global context and then updating all of my references like this:
{{ $global := . }}
{{ range $i, $job := .Values.jobs -}}
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ template "name" $global }}-{{ $job.name }}
  labels:
    chart: "{{ $global.Chart.Name }}-{{ $global.Chart.Version | replace "+" "_" }}"
spec:
  activeDeadlineSeconds: {{ $job.activeDeadlineSeconds }}
  template:
    metadata:
      labels:
        app: {{ template "name" $global }}-{{ $job.name }}
    spec:
      containers:
      - name: {{ $global.Chart.Name }}
        image: "{{ $global.Values.image.repository }}:{{ $global.Values.image.tag }}"
        imagePullPolicy: {{ $global.Values.image.pullPolicy }}
        command: {{ $job.command }}
        env:
{{ toYaml $global.Values.service.env | indent 10 }}
        ports:
        - containerPort: {{ $global.Values.service.internalPort }}
{{- end }}
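As a hedged aside, not part of the original answer: if jobs contains more than one entry, each iteration also needs to start a new YAML document, otherwise the rendered Job manifests run together. A sketch of the separator placement:
{{ range $i, $job := $.Values.jobs }}
---
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ template "name" $ }}-{{ $job.name }}
{{- end }}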