Helm range YAML template for Kafka topics

I am new to Helm and I am trying to generate the different Kafka topics with a range function, so that I don't need a separate YAML file for each topic.
I have several topics (topic1, topic2, topic3, ...) and the only differences between them are the name and the retention in ms: some topics use 3600000 and the others 540000. This is my values file:
KafkaTopics:
  shortRetentionTopics:
    name:
      - topic1
      - topic2
      - topic3
      - topic4
    spec:
      config:
        retention.ms: 540000
    topicName:
      - topic1logs
      - topic2logs
      - topic3logs
      - topic4logs
  longRetentionTopics:
    name:
      - topic34
      - topic35
    spec:
      config:
        retention.ms: 3600000
    topicName:
      - topic34logs
      - topic35logs
And I would like to set the name, topicName and retention.ms in this template by looping over the values file:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  labels:
    strimzi.io/cluster: kafka
  name: (here the name of the topic)
  namespace: default
spec:
  config:
    retention.ms: (here the retention of the topic)
  partitions: 12
  replicas: 1
  topicName: (here the topicName of the topic)
If you have any suggestion for restructuring the values file to make it easier to pass the values to the template, I'm interested in that as well.

In the end I ended up doing this:
{{- range $topics := .Values.kafkaTopicList }}
{{ $spec := default dict $topics.spec }}
{{ $config := default dict $spec.config }}
{{ $retention := default dict $config.retentionMs }}
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  labels:
    strimzi.io/cluster: kafka
  name: {{ $topics.name }}
  namespace: default
spec:
  config:
    retention.ms: {{ $retention | default "540000" }}
  partitions: 12
  replicas: 1
  topicName: {{ $topics.name | replace "-" "." }}
{{- end }}
Values file:
kafkaTopicList:
  topic1:
    name: events-1
  topic2:
    name: events-2
  topic3:
    name: events-3
  topic4:
    name: events-4
  topic5:
    name: events-5
  topic6:
    name: events-6
  topic7:
    name: events-7
    spec:
      config:
        retentionMs: 3600000
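For reference, with the template above an entry that sets spec.config.retentionMs (like topic7) should render roughly like this; the exact whitespace depends on the template's trim markers:
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  labels:
    strimzi.io/cluster: kafka
  name: events-7
  namespace: default
spec:
  config:
    retention.ms: 3600000
  partitions: 12
  replicas: 1
  topicName: events.7
Entries without a spec fall back to retention.ms 540000 via the default chain.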

Here's an example
$ cat values.yaml
KafkaTopics:
  shortRetentionTopics:
    name:
      - topic1
      - topic2
      - topic3
      - topic4
    spec:
      config:
        retention.ms: 540000
  longRetentionTopics:
    name:
      - topic34
      - topic35
    spec:
      config:
        retention.ms: 3600000
$ cat templates/topics.yml
{{- with .Values.KafkaTopics.shortRetentionTopics }}{{- range .name }}
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  labels:
    strimzi.io/cluster: kafka
  name: {{ $.Release.Name }}-{{ . }}
  namespace: default
spec:
  {{- toYaml $.Values.KafkaTopics.shortRetentionTopics.spec | nindent 2 }}
  partitions: 12
  replicas: 1
  topicName: {{ . }}logs
{{- end }}{{- end }}
Repeat for the long retention topics, or use separate template files.
Sample debug output from helm template topics ./topic-example --debug:
---
# Source: topic-example/templates/topics.yml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  labels:
    strimzi.io/cluster: kafka
  name: topics-topic1
  namespace: default
spec:
  config:
    retention.ms: 540000
  partitions: 12
  replicas: 1
  topicName: topic1logs
---
# Source: topic-example/templates/topics.yml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  labels:
    strimzi.io/cluster: kafka
  name: topics-topic2
  namespace: default
spec:
  config:
    retention.ms: 540000
  partitions: 12
  replicas: 1
  topicName: topic2logs
---
# Source: topic-example/templates/topics.yml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  labels:
    strimzi.io/cluster: kafka
  name: topics-topic3
  namespace: default
spec:
  config:
    retention.ms: 540000
  partitions: 12
  replicas: 1
  topicName: topic3logs
---
# Source: topic-example/templates/topics.yml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  labels:
    strimzi.io/cluster: kafka
  name: topics-topic4
  namespace: default
spec:
  config:
    retention.ms: 540000
  partitions: 12
  replicas: 1
  topicName: topic4logs
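If you would rather not repeat the block for the long-retention topics, a single template can range over both groups. This is an untested sketch that assumes the KafkaTopics layout from the values file above:
{{- /* one KafkaTopic per name in each retention group; retention comes from that group's spec.config */}}
{{- range $group, $topics := .Values.KafkaTopics }}
{{- range $topics.name }}
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  labels:
    strimzi.io/cluster: kafka
  name: {{ $.Release.Name }}-{{ . }}
  namespace: default
spec:
  {{- toYaml $topics.spec | nindent 2 }}
  partitions: 12
  replicas: 1
  topicName: {{ . }}logs
{{- end }}
{{- end }}
The two-variable form of range iterates the map keys (shortRetentionTopics, longRetentionTopics) in sorted order, so adding a new retention group only requires a new entry in values.yaml.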

Related

gRPC AWS Ingress Kubernetes

I am trying to set up an Ingress manifest that allows traffic through an ALB to connect to a gRPC pod. I am currently getting the error:
{
  "error": "14 UNAVAILABLE: Trying to connect an http1.x server"
}
I'm not really sure what this means, as I am new to gRPC. I am currently testing this with BloomRPC by hitting dns-of-alb:50051.
Kubernetes Manifests:
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.name }}-svc
spec:
  ports:
    - port: 50051
      targetPort: 50051
      protocol: TCP
  type: NodePort
  selector:
    app: {{ .Values.name }}
ingress.yaml:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: {{ .Values.name }}-ingress
  namespace: {{ .Values.namespace }}
  annotations:
    alb.ingress.kubernetes.io/backend-protocol-version: GRPC
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 50051}]'
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: instance
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/healthcheck-path: /
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/certificate-arn: {{ .Values.loadBalancerCertificate }}
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: {{ .Values.name }}-svc
              servicePort: 50051
  backend:
    serviceName: {{ .Values.name }}-svc
    servicePort: 50051
deployment.yaml:
{{- if .Values.env.config }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.name }}
data:
{{ .Values.env.config | toYaml | indent 2 }}
---
{{- end }}
{{- if .Values.env.secrets }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Values.name }}
stringData:
{{ .Values.env.secrets | toYaml | indent 2 }}
---
{{- end }}
{{- if .Values.dockercfg }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Values.name }}-dockercfg
  annotations:
    harness.io/skip-versioning: true
data:
  .dockercfg: {{ .Values.dockercfg }}
type: kubernetes.io/dockercfg
---
{{- end }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.name }}-deployment
spec:
  replicas: {{ int .Values.replicas }}
  selector:
    matchLabels:
      app: {{ .Values.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.name }}
        tags.datadoghq.com/env: {{ .Values.environment_name }}
        tags.datadoghq.com/service: {{ .Values.name }}
        tags.datadoghq.com/version: "{{ .Values.version }}"
    spec:
      {{- if .Values.dockercfg }}
      imagePullSecrets:
        - name: {{ .Values.name }}-dockercfg
      {{- end }}
      serviceAccountName: {{ .Values.name }}-service-account
      containers:
        - name: {{ .Values.name }}
          image: {{ .Values.image }}
          env:
            - name: DD_AGENT_HOST
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: DD_ENV
              valueFrom:
                fieldRef:
                  fieldPath: metadata.labels['tags.datadoghq.com/env']
            - name: DD_SERVICE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.labels['tags.datadoghq.com/service']
            - name: DD_VERSION
              valueFrom:
                fieldRef:
                  fieldPath: metadata.labels['tags.datadoghq.com/version']
          {{- if or .Values.env.config .Values.env.secrets }}
          envFrom:
            {{- if .Values.env.config }}
            - configMapRef:
                name: {{ .Values.name }}
            {{- end }}
            {{- if .Values.env.secrets }}
            - secretRef:
                name: {{ .Values.name }}
            {{- end }}
          {{- end }}

Creating sidecar Metricbeat with AWS EKS Fargate

I'm trying to create a deployment on AWS EKS with my application and Metricbeat as a sidecar, so I have the following YAML:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-modules
  namespace: testframework
  labels:
    k8s-app: metricbeat
data:
  kubernetes.yml: |-
    - module: kubernetes
      metricsets:
        - node
        - system
        - pod
        - container
        - volume
      period: 10s
      host: ${NODE_NAME}
      hosts: [ "https://${NODE_IP}:10250" ]
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      ssl.verification_mode: "none"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-config
  namespace: testframework
  labels:
    k8s-app: metricbeat
data:
  metricbeat.yml: |-
    processors:
      - add_cloud_metadata:
      - add_tags:
          tags: ["EKSCORP_DEV"]
          target: "cluster_test"
    metricbeat.config.modules:
      path: ${path.config}/modules.d/*.yml
      reload.enabled: false
    output.elasticsearch:
      index: "metricbeat-k8s-%{[agent.version]}-%{+yyyy.MM.dd}"
    setup.template.name: "metricbeat-k8s"
    setup.template.pattern: "metricbeat-k8s-*"
    setup.ilm.enabled: false
    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testframework-initializr-deploy
  namespace: testframework
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testframework-initializr
  template:
    metadata:
      labels:
        app: testframework-initializr
      annotations:
        co.elastic.logs/enabled: 'true'
        co.elastic.logs/json.keys_under_root: 'true'
        co.elastic.logs/json.add_error_key: 'true'
        co.elastic.logs/json.message_key: 'message'
    spec:
      containers:
        - name: testframework-initializr
          image: XXXXX.dkr.ecr.us-east-1.amazonaws.com/testframework-initializr
          ports:
            - containerPort: 8080
          livenessProbe:
            httpGet:
              path: /health/liveness
              port: 8080
            initialDelaySeconds: 300
            periodSeconds: 10
            timeoutSeconds: 60
            failureThreshold: 5
          readinessProbe:
            httpGet:
              port: 8080
              path: /health
            initialDelaySeconds: 300
            periodSeconds: 10
            timeoutSeconds: 10
            failureThreshold: 3
        - name: metricbeat-sidecar
          image: docker.elastic.co/beats/metricbeat:7.12.0
          args: [
            "-c", "/etc/metricbeat.yml",
            "-e",
            "-system.hostfs=/hostfs"
          ]
          env:
            - name: ELASTIC_CLOUD_ID
              value: xxxx
            - name: ELASTIC_CLOUD_AUTH
              value: xxxx
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: NODE_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          securityContext:
            runAsUser: 0
          volumeMounts:
            - name: config
              mountPath: /etc/metricbeat.yml
              readOnly: true
              subPath: metricbeat.yml
            - name: modules
              mountPath: /usr/share/metricbeat/modules.d
              readOnly: true
      volumes:
        - name: config
          configMap:
            defaultMode: 0640
            name: metricbeat-config
        - name: modules
          configMap:
            defaultMode: 0640
            name: metricbeat-modules
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: prom-admin
rules:
  - apiGroups: [""]
    resources: ["pods", "nodes"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: prom-rbac
subjects:
  - kind: ServiceAccount
    name: default
    namespace: testframework
roleRef:
  kind: ClusterRole
  name: prom-admin
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Service
metadata:
  name: testframework-initializr-service
  namespace: testframework
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: testframework-initializr
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: testframework-initializr-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
    - host: dev-initializr.test.net
      http:
        paths:
          - backend:
              serviceName: testframework-initializr-service
              servicePort: 80
Well, after the pod starts up in AWS EKS, I get the following error in the Metricbeat container:
INFO module/wrapper.go:259 Error fetching data for metricset kubernetes.system: error doing HTTP request to fetch 'system' Metricset data: error making http request: Get "https://IP_FROM_FARGATE_HERE:10250/stats/summary": dial tcp IP_FROM_FARGATE_HERE:10250: connect: connection refused
I tried to use "NODE_NAME" instead of "NODE_IP", but then I got "no such host". Any idea how I can fix this?

Disable Istio sidecar injection for the Job pod

How do I disable Istio sidecar injection for a Kubernetes Job?
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: pod-restart
spec:
  concurrencyPolicy: Forbid
  schedule: '0 8 * * *'
  jobTemplate:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"
    spec:
      backoffLimit: 2
      activeDeadlineSeconds: 600
      template:
        spec:
          serviceAccountName: pod-restart
          restartPolicy: Never
          containers:
            - name: kubectl
              image: bitnami/kubectl
              command: ['kubectl', 'rollout', 'restart', 'deployment/myapp']
Sidecar still gets injected.
The annotation is in the wrong place. You have to put it on the pod template:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
spec:
  jobTemplate:
    spec:
      template:
        metadata:
          annotations:
            sidecar.istio.io/inject: "false"
Here is a working CronJob example with Istio injection disabled:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          annotations:
            sidecar.istio.io/inject: "false"
        spec:
          containers:
            - name: hello
              image: busybox
              args:
                - /bin/sh
                - -c
                - date; echo "Hello, World!"
          restartPolicy: OnFailure
There is also a related GitHub issue about this.
The annotation has since been deprecated, as per the docs (https://istio.io/latest/docs/reference/config/annotations/), so it is best to use a label instead:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: jobs-cleanup
spec:
  schedule: "*/4 * * * *"
  successfulJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            sidecar.istio.io/inject: "false"
        spec:
          serviceAccountName: cleaner
          containers:
            - name: kubectl-container
              image: bitnami/kubectl:latest
              command: ["sh", "/tmp/clean.sh"]
              volumeMounts:
                - name: cleaner-script
                  mountPath: /tmp/
          restartPolicy: Never
          volumes:
            - name: cleaner-script
              configMap:
                name: cleaner-script

Can't get access to service

I have the following problem: a Cloud SQL proxy pod is wrapped in a service and must provide access to the database.
I also have a job which must create a new database for every branch.
But when this job runs, the error below appears: I can't reach cloudsql-proxy-service.
I can't understand why this happens. Thanks.
E psql: could not connect to server: Connection timed out
E     Is the server running on host "cloudsql-proxy-service" (10.43.254.123) and accepting
E     TCP/IP connections on port 5432?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudsql-proxy
  labels:
    type: backend
    name: app
  annotations:
    "helm.sh/created": {{ .Release.Time.Seconds | quote }}
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "-20"
spec:
  replicas: 1
  selector:
    matchLabels:
      name: cloudsql-proxy
  template:
    metadata:
      labels:
        name: cloudsql-proxy
    spec:
      containers:
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command:
            - "/cloud_sql_proxy"
            - "-instances={{ .Values.testDatabaseInstanceConnectionName }}=tcp:5432"
            - "-credential_file=/secrets/cloudsql/credentials.json"
          securityContext:
            runAsUser: 2  # non-root user
            allowPrivilegeEscalation: false
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
          ports:
            - containerPort: 5432
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            secretName: {{ .Values.cloudSqlProxySecretName }}
---
apiVersion: v1
kind: Service
metadata:
  name: cloudsql-proxy-service
  labels:
    type: backend
    name: app
  annotations:
    "helm.sh/created": {{ .Release.Time.Seconds | quote }}
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "-20"
spec:
  selector:
    name: cloudsql-proxy
  ports:
    - port: 5432
---
apiVersion: batch/v1
kind: Job
metadata:
  name: create-test-database
  labels:
    type: backend
    name: app
  annotations:
    "helm.sh/created": {{ .Release.Time.Seconds | quote }}
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "-10"
spec:
  template:
    metadata:
      name: create-test-database
    spec:
      containers:
        - name: postgres-client
          image: kalumkalac/postgresql-client
          env:
            - name: PGUSER
              value: {{ .Values.testDatabaseCredentials.username }}
            - name: PGPASSWORD
              value: {{ .Values.testDatabaseCredentials.password }}
            - name: PGDATABASE
              value: {{ .Values.testDatabaseCredentials.defaultDatabaseName }}
            - name: PGHOST
              value: cloudsql-proxy-service
          command:
            - psql
            - -q
            - -c CREATE DATABASE {{ .Values.testDatabaseCredentials.name | quote }}
      restartPolicy: Never
  backoffLimit: 0  # deny retrying the job

Helm charts nested loops

I'm trying to generate Deployments for my Helm chart by using this template
{{- range .Values.services }}
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-{{ . }}
spec:
  replicas: {{ .replicaCount }}
  template:
    metadata:
      labels:
        app: myapp-{{ . }}
        chart: myapp-{{ $.Values.cluster }}-{{ $.Values.environment }}
    spec:
      containers:
        - name: myapp-{{ . }}
          image: {{ $.Values.containerRegistry }}/myapp-{{ . }}:latest
          ports:
            - containerPort: {{ .targetPort }}
          env:
            {{- with .environmentVariables }}
            {{ indent 10 }}
            {{- end }}
      imagePullSecrets:
        - name: myregistry
{{- end }}
for two of my services. In values.yaml I have:
environment: dev
cluster: sandbox
ingress:
  enabled: true
containerRegistry: myapp.io
services:
  - backend:
      port: 80
      targetPort: 8080
      replicaCount: 1
      environmentVariables:
        - name: SOME_VAR
          value: "hello"
  - web:
      port: 80
      targetPort: 8080
      replicaCount: 1
      environmentVariables:
        - name: SOME_VAR
          value: "hello"
... but the output is not being properly formatted
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-map[backend:map[replicaCount:1 targetPort:8080 environmentVariables:[map[name:SOME_VAR value:hello] port:80]]
instead of
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-web
(...)
and another config
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-backend
(...)
What functions can I use, or should I use a different data structure? None of the references (e.g. .environmentVariables) are resolving correctly.
I think you should reconsider the way the data is structured, this would work better:
services:
  - name: backend
    settings:
      port: 80
      targetPort: 8080
      replicaCount: 1
      environmentVariables:
        - name: SOME_VAR
          value: "hello"
  - name: web
    settings:
      port: 80
      targetPort: 8080
      replicaCount: 1
      environmentVariables:
        - name: SOME_VAR
          value: "hello"
And your Deployment template would look like this:
{{- range .Values.services }}
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-{{ .name }}
spec:
  replicas: {{ .settings.replicaCount }}
  template:
    metadata:
      labels:
        app: myapp-{{ .name }}
    spec:
      containers:
        - name: myapp-{{ .name }}
          image: {{ $.Values.containerRegistry }}/myapp-{{ .name }}:latest
          ports:
            - containerPort: {{ .settings.targetPort }}
          env:
            {{- with .settings.environmentVariables }}
            {{ toYaml . | trim | indent 6 }}
            {{- end }}
      imagePullSecrets:
        - name: myregistry
{{- end }}
This would actually create two Deployments, thanks to the added --- separator.
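Another option worth considering (a rough, untested sketch using the same chart values, not the answer above) is to key the services by name in a map and use the two-variable form of range; the service name then comes from the map key instead of a name field:
values.yaml (hypothetical alternative layout):
services:
  backend:
    port: 80
    targetPort: 8080
    replicaCount: 1
  web:
    port: 80
    targetPort: 8080
    replicaCount: 1
templates/deployment.yaml:
{{- /* one Deployment per entry in .Values.services; $name is the map key */}}
{{- range $name, $svc := .Values.services }}
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-{{ $name }}
spec:
  replicas: {{ $svc.replicaCount }}
  template:
    metadata:
      labels:
        app: myapp-{{ $name }}
    spec:
      containers:
        - name: myapp-{{ $name }}
          image: {{ $.Values.containerRegistry }}/myapp-{{ $name }}:latest
          ports:
            - containerPort: {{ $svc.targetPort }}
{{- end }}
Note that apps/v1beta1 tolerates omitting spec.selector, but on apps/v1 you would also need spec.selector.matchLabels matching the template labels.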