Helm charts nested loops - templates

I'm trying to generate Deployments for two of my services in my Helm chart using this template:
{{- range .Values.services }}
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-{{ . }}
spec:
  replicas: {{ .replicaCount }}
  template:
    metadata:
      labels:
        app: myapp-{{ . }}
        chart: myapp-{{ $.Values.cluster }}-{{ $.Values.environment }}
    spec:
      containers:
        - name: myapp-{{ . }}
          image: {{ $.Values.containerRegistry }}/myapp-{{ . }}:latest
          ports:
            - containerPort: {{ .targetPort }}
          env:
            {{- with .environmentVariables }}
            {{ indent 10 }}
            {{- end }}
      imagePullSecrets:
        - name: myregistry
{{- end }}
In values.yaml I have:
environment: dev
cluster: sandbox
ingress:
  enabled: true
containerRegistry: myapp.io
services:
  - backend:
      port: 80
      targetPort: 8080
      replicaCount: 1
      environmentVariables:
        - name: SOME_VAR
          value: "hello"
  - web:
      port: 80
      targetPort: 8080
      replicaCount: 1
      environmentVariables:
        - name: SOME_VAR
          value: "hello"
... but the output is not being properly formatted
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-map[backend:map[replicaCount:1 targetPort:8080 environmentVariables:[map[name:SOME_VAR value:hello] port:80]]
instead of
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-web
(...)
and another config
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-backend
(...)
What functions can I use, or should I structure the data differently? None of the references (e.g. .environmentVariables) resolve correctly.

I think you should reconsider the way the data is structured; this would work better:
services:
  - name: backend
    settings:
      port: 80
      targetPort: 8080
      replicaCount: 1
      environmentVariables:
        - name: SOME_VAR
          value: "hello"
  - name: web
    settings:
      port: 80
      targetPort: 8080
      replicaCount: 1
      environmentVariables:
        - name: SOME_VAR
          value: "hello"
And your Deployment template would look like this:
{{- range .Values.services }}
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-{{ .name }}
spec:
  replicas: {{ .settings.replicaCount }}
  template:
    metadata:
      labels:
        app: myapp-{{ .name }}
    spec:
      containers:
        - name: myapp-{{ .name }}
          image: {{ $.Values.containerRegistry }}/myapp-{{ .name }}:latest
          ports:
            - containerPort: {{ .settings.targetPort }}
          env:
{{- with .settings.environmentVariables }}
{{ toYaml . | trim | indent 12 }}
{{- end }}
      imagePullSecrets:
        - name: myregistry
{{- end }}
This would actually create two Deployments, since the --- separator starts a new YAML document on each iteration.
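A further option, not from the original answer: if the services don't need to be an ordered list, you can keep them as a map keyed by the service name and range over key/value pairs, which removes the extra name field. A minimal sketch (the values layout here is an assumption, and it uses apps/v1, which also requires an explicit selector):
services:
  backend:
    targetPort: 8080
    replicaCount: 1
  web:
    targetPort: 8080
    replicaCount: 1
and the template:
{{- range $name, $svc := .Values.services }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-{{ $name }}
spec:
  replicas: {{ $svc.replicaCount }}
  selector:
    matchLabels:
      app: myapp-{{ $name }}
  template:
    metadata:
      labels:
        app: myapp-{{ $name }}
    spec:
      containers:
        # $name is the map key (backend, web); $svc holds that service's settings
        - name: myapp-{{ $name }}
          image: {{ $.Values.containerRegistry }}/myapp-{{ $name }}:latest
          ports:
            - containerPort: {{ $svc.targetPort }}
{{- end }}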

Related

RabbitMQ cluster in k8s slower than EC2

Right now we are running RabbitMQ on 5 EC2 instances (AWS) and we are trying to migrate to a k8s cluster.
We deployed a cluster with EKS that works fine up to 45K users.
The 5 separate instances can handle 75K users.
We discovered that latency is higher in the k8s cluster than with the EC2 instances.
We used this tool: https://www.rabbitmq.com/rabbitmq-diagnostics.8.html and didn't find a problem. The file descriptors, memory, CPU, etc. all look fine.
We deployed with https://github.com/rabbitmq/cluster-operator
values.yaml
serviceName: rabbitmq
namespace: berlin
regionCode: use1
env: dev
resourcesConfig:
  replicas: 9
  nodeGroupName: r-large
  storageType: gp2
  storageSize: 100Gi
  resources:
    limits:
      cpu: 8
      memory: 60Gi
    requests:
      cpu: 7
      memory: 60Gi
definitionsConf:
  vhosts:
    - name: /
  exchanges:
    - name: test
      vhost: /
      type: direct
      durable: true
      auto_delete: false
      internal: false
      arguments: {}
  policies:
    - vhost: /
      name: Test Policy
      pattern: test.*.*.*
      apply-to: queues
      definition:
        federation-upstream-set: all
      priority: 0
additionalPlugins:
  - rabbitmq_event_exchange
  - rabbitmq_auth_backend_cache
  - rabbitmq_auth_backend_http
  - rabbitmq_prometheus
  - rabbitmq_shovel
rabbitmqConf:
  load_definitions: /etc/rabbitmq/definitions.json
  # definitions.skip_if_unchanged: 'true'
  cluster_partition_handling: pause_minority
  auth_backends.1: cache
  auth_cache.cached_backend: http
  auth_cache.cache_ttl: '10000'
  auth_http.http_method: post
  auth_http.user_path: http://XXXX:3000/authentication/users
  auth_http.vhost_path: http://XXX:3000/authentication/vhosts
  auth_http.resource_path: http://XXX:3000/authentication/resources
  auth_http.topic_path: http://XXX:3000/authentication/topics
  prometheus.path: /metrics
  prometheus.tcp.port: '15692'
  log.console: 'true'
  log.console.level: error
  log.console.formatter: json
  log.default.level: error
  tcp_listen_options.backlog: '4096'
  tcp_listen_options.nodelay: 'true'
  tcp_listen_options.sndbuf: '32768'
  tcp_listen_options.recbuf: '32768'
  tcp_listen_options.keepalive: 'true'
  tcp_listen_options.linger.on: 'true'
  tcp_listen_options.linger.timeout: '0'
  disk_free_limit.relative: '1.0'
  num_acceptors.tcp: '40'
  hipe_compile: 'true'
  collect_statistics_interval: '30000'
  mnesia_table_loading_retry_timeout: '60000'
  heartbeat: '60'
  vm_memory_high_watermark.relative: '0.9'
  management_agent.disable_metrics_collector: 'true'
  management.disable_stats: 'true'
metricsConfig:
  metricsPath: /metrics
  metricsPort: '15692'
Chart.yaml
apiVersion: v2
name: rabbitmq
description: RabbitMQ Cluster
type: application
version: 0.0.1
charts/templates/configmap.yaml
{{- $varNamespace := .Values.namespace }}
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: {{ .Values.namespace }}
  name: {{ .Values.serviceName }}-definitions-conf
data:
  definitions.json: |
    {{ .Values.definitionsConf | toJson | replace "NAMESPACE" $varNamespace }}
{{- $varNamespace := .Values.namespace }}
{{- $varRegionCode := .Values.regionCode }}
{{- $varEnv := .Values.env }}
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: {{ .Values.serviceName }}
  namespace: {{ .Values.namespace }}
spec:
  replicas: {{ .Values.resourcesConfig.replicas }}
  rabbitmq:
    envConfig: |
      ERL_MAX_PORTS=10000000
      RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+S 4:4 +P 2000000"
    advancedConfig: |
      [
        {kernel, [
          {inet_default_connect_options, [{nodelay, true}]},
          {inet_default_listen_options, [{nodelay, true}]}
        ]}
      ].
    additionalPlugins: {{ .Values.additionalPlugins | toJson | indent 4 }}
    additionalConfig: |
      {{- range $key, $val := .Values.rabbitmqConf }}
      {{ $key }} = {{ $val | replace "NAMESPACE" $varNamespace | replace "REGION_CODE" $varRegionCode | replace "ENV" $varEnv }}
      {{- end }}
  resources:
    requests:
      cpu: {{ .Values.resourcesConfig.resources.requests.cpu }}
      memory: {{ .Values.resourcesConfig.resources.requests.memory }}
    limits:
      cpu: {{ .Values.resourcesConfig.resources.limits.cpu }}
      memory: {{ .Values.resourcesConfig.resources.limits.memory }}
  persistence:
    storageClassName: {{ .Values.resourcesConfig.storageType }}
    storage: {{ .Values.resourcesConfig.storageSize }}
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app.kubernetes.io/name
                operator: In
                values:
                  - {{ .Values.serviceName }}
          topologyKey: kubernetes.io/hostname
  service:
    type: LoadBalancer
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
      service.beta.kubernetes.io/aws-load-balancer-name: {{ .Values.serviceName }}
      service.beta.kubernetes.io/load-balancer-source-ranges: {{ .Values.service.allowedVpcCidrRange }}
      service.beta.kubernetes.io/aws-load-balancer-internal: 'true'
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
      service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: Name={{ .Values.serviceName }}
      external-dns.alpha.kubernetes.io/hostname: {{ .Values.serviceName }}.{{ .Values.service.hostedZone }}
  override:
    statefulSet:
      spec:
        template:
          metadata:
            annotations:
              platform.vonage.com/logging: enabled
              telegraf.influxdata.com/class: influxdb
              telegraf.influxdata.com/inputs: |+
                [[inputs.prometheus]]
                  urls = ["http://127.0.0.1:{{ .Values.metricsConfig.metricsPort }}{{ .Values.metricsConfig.metricsPath }}"]
                  metric_version = 1
                  tagexclude = ["url"]
              telegraf.influxdata.com/env-literal-NAMESPACE: {{ $.Values.namespace }}
              telegraf.influxdata.com/env-literal-SERVICENAME: {{ $.Values.serviceName }}
          spec:
            nodeSelector:
              node-group-label: {{ .Values.resourcesConfig.nodeGroupName }}
            containers:
              - name: rabbitmq
                volumeMounts:
                  - name: definitions
                    mountPath: {{ .Values.rabbitmqConf.load_definitions }}
                    subPath: definitions.json
            volumes:
              - name: definitions
                configMap:
                  name: {{ .Values.serviceName }}-definitions-conf
Can someone give us advice on what we can check or how we can solve this issue?
Thanks.
We're trying to replace the RabbitMQ EC2 instances with a RabbitMQ k8s cluster, and we want the same results as (or better than) the separate instances.

Helm range yaml template kafka topics

I am new to Helm and I am trying to generate different Kafka topics with a range function, so that I don't need a YAML file for each topic.
I have different topics (topic1, topic2, topic3, ...) and the only differences between them are the name and the retention in ms: some topics have 3600000 and the others 540000. This is my values file:
KafkaTopics:
  shortRetentionTopics:
    name:
      - topic1
      - topic2
      - topic3
      - topic4
    spec:
      config:
        retention.ms: 540000
    topicName:
      - topic1logs
      - topic2logs
      - topic3logs
      - topic4logs
  longRetentionTopics:
    name:
      - topic34
      - topic35
    spec:
      config:
        retention.ms: 3600000
    topicName:
      - topic34logs
      - topic34logs
And I would like to set the name, topicName and retention.ms in this template with a loop over the values file:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  labels:
    strimzi.io/cluster: kafka
  name: (here the name of the topic)
  namespace: default
spec:
  config:
    retention.ms: (here the retention of the topic)
  partitions: 12
  replicas: 1
  topicName: (here the topicName of the topic)
Or, if you have any suggestion for changing the structure of the values file to make it easier to pass the values to the template, I'm interested as well.
In the end I ended up doing this:
{{- range $topics := .Values.kafkaTopicList }}
{{ $spec := default dict $topics.spec }}
{{ $config := default dict $spec.config }}
{{ $retention := default dict $config.retentionMs }}
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  labels:
    strimzi.io/cluster: kafka
  name: {{ $topics.name }}
  namespace: default
spec:
  config:
    retention.ms: {{ $retention | default "540000" }}
  partitions: 12
  replicas: 1
  topicName: {{ $topics.name | replace "-" "." }}
{{- end}}
Values file:
kafkaTopicList:
  topic1:
    name: events-1
  topic2:
    name: events-2
  topic3:
    name: events-3
  topic4:
    name: events-4
  topic5:
    name: events-5
  topic6:
    name: events-6
  topic7:
    name: events-7
    spec:
      config:
        retentionMs: 3600000
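For reference, this is roughly what one rendered document looks like for topic1 (topic7 is the only entry whose spec overrides the default, so it gets retention.ms: 3600000; the dashes in each name become dots in topicName):
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  labels:
    strimzi.io/cluster: kafka
  name: events-1
  namespace: default
spec:
  config:
    retention.ms: 540000
  partitions: 12
  replicas: 1
  topicName: events.1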
Here's an example
$ cat values.yaml
KafkaTopics:
  shortRetentionTopics:
    name:
      - topic1
      - topic2
      - topic3
      - topic4
    spec:
      config:
        retention.ms: 540000
  longRetentionTopics:
    name:
      - topic34
      - topic35
    spec:
      config:
        retention.ms: 3600000
$ cat templates/topics.yml
{{- with .Values.KafkaTopics.shortRetentionTopics }}{{- range .name }}
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  labels:
    strimzi.io/cluster: kafka
  name: {{ $.Release.Name }}-{{ . }}
  namespace: default
spec:
  {{- toYaml $.Values.KafkaTopics.shortRetentionTopics.spec | nindent 2 }}
  partitions: 12
  replicas: 1
  topicName: {{.}}logs
{{- end}}{{- end}}
Repeat for the long retention topics, or use separate template files.
Sample debug output - helm template topics ./topic-example --debug
---
# Source: topic-example/templates/topics.yml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  labels:
    strimzi.io/cluster: kafka
  name: topics-topic1
  namespace: default
spec:
  config:
    retention.ms: 540000
  partitions: 12
  replicas: 1
  topicName: topic1logs
---
# Source: topic-example/templates/topics.yml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  labels:
    strimzi.io/cluster: kafka
  name: topics-topic2
  namespace: default
spec:
  config:
    retention.ms: 540000
  partitions: 12
  replicas: 1
  topicName: topic2logs
---
# Source: topic-example/templates/topics.yml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  labels:
    strimzi.io/cluster: kafka
  name: topics-topic3
  namespace: default
spec:
  config:
    retention.ms: 540000
  partitions: 12
  replicas: 1
  topicName: topic3logs
---
# Source: topic-example/templates/topics.yml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  labels:
    strimzi.io/cluster: kafka
  name: topics-topic4
  namespace: default
spec:
  config:
    retention.ms: 540000
  partitions: 12
  replicas: 1
  topicName: topic4logs
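If you would rather not repeat that block for the long retention topics, here is a sketch (not part of the original answer) that ranges over both groups of the same KafkaTopics map and pulls each group's spec with toYaml:
{{- range $group, $topics := .Values.KafkaTopics }}
{{- range $topics.name }}
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  labels:
    strimzi.io/cluster: kafka
  name: {{ $.Release.Name }}-{{ . }}
  namespace: default
spec:
  # $topics.spec carries this group's config block (retention.ms)
  {{- toYaml $topics.spec | nindent 2 }}
  partitions: 12
  replicas: 1
  topicName: {{ . }}logs
{{- end }}
{{- end }}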

GCP GKE Ingress Health Checks

I have a deployment and service running in GKE using Deployment Manager. Everything about my service works correctly except that the ingress I am creating reports the service in a perpetually unhealthy state.
To be clear, everything about the deployment works except the healthcheck (and as a consequence, the ingress). This was working previously (circa late 2019), and apparently about a year ago GKE added some additional requirements for healthchecks on ingress target services and I have been unable to make sense of them.
I have put an explicit health check on the service, and it reports healthy, but the ingress does not recognize it. The service is using a NodePort but also has containerPort 80 open on the deployment, and it does respond with HTTP 200 to requests on :80 locally, but clearly that is not helping in the deployed service.
The cluster itself is an almost identical copy of the Deployment Manager example.
Here is the deployment:
- name: {{ DEPLOYMENT }}
  type: {{ CLUSTER_TYPE }}:{{ DEPLOYMENT_COLLECTION }}
  metadata:
    dependsOn:
      - {{ properties['clusterType'] }}
  properties:
    apiVersion: apps/v1
    kind: Deployment
    namespace: {{ properties['namespace'] | default('default') }}
    metadata:
      name: {{ DEPLOYMENT }}
      labels:
        app: {{ APP }}
        tier: resters
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: {{ APP }}
          tier: resters
      template:
        metadata:
          labels:
            app: {{ APP }}
            tier: resters
        spec:
          containers:
            - name: rester
              image: {{ IMAGE }}
              resources:
                requests:
                  cpu: 100m
                  memory: 250Mi
              ports:
                - containerPort: 80
              env:
                - name: GCP_PROJECT
                  value: {{ PROJECT }}
                - name: SERVICE_NAME
                  value: {{ APP }}
                - name: MODE
                  value: rest
                - name: REDIS_ADDR
                  value: {{ properties['memorystoreAddr'] }}
... the service:
- name: {{ SERVICE }}
  type: {{ CLUSTER_TYPE }}:{{ SERVICE_COLLECTION }}
  metadata:
    dependsOn:
      - {{ properties['clusterType'] }}
      - {{ APP }}-cluster-nodeport-firewall-rule
      - {{ DEPLOYMENT }}
  properties:
    apiVersion: v1
    kind: Service
    namespace: {{ properties['namespace'] | default('default') }}
    metadata:
      name: {{ SERVICE }}
      labels:
        app: {{ APP }}
        tier: resters
    spec:
      type: NodePort
      ports:
        - nodePort: {{ NODE_PORT }}
          port: {{ CONTAINER_PORT }}
          targetPort: {{ CONTAINER_PORT }}
          protocol: TCP
      selector:
        app: {{ APP }}
        tier: resters
... the explicit healthcheck:
- name: {{ SERVICE }}-healthcheck
  type: compute.v1.healthCheck
  metadata:
    dependsOn:
      - {{ SERVICE }}
  properties:
    name: {{ SERVICE }}-healthcheck
    type: HTTP
    httpHealthCheck:
      port: {{ NODE_PORT }}
      requestPath: /healthz
      proxyHeader: NONE
    checkIntervalSec: 10
    healthyThreshold: 2
    unhealthyThreshold: 3
    timeoutSec: 5
... the firewall rules:
- name: {{ CLUSTER_NAME }}-nodeport-firewall-rule
  type: compute.v1.firewall
  properties:
    name: {{ CLUSTER_NAME }}-nodeport-firewall-rule
    network: projects/{{ PROJECT }}/global/networks/default
    sourceRanges:
      - 130.211.0.0/22
      - 35.191.0.0/16
    targetTags:
      - {{ CLUSTER_NAME }}-node
    allowed:
      - IPProtocol: TCP
        ports:
          - 30000-32767
          - 80
You could try to define a readinessProbe on your container in your Deployment.
The ingress also uses this to create its health checks (note that these health check probes come from outside of GKE).
In my experience, readiness probes work pretty well for getting the ingress health checks to pass.
To do this, you create something like the following. This is a TCP probe; I have seen better performance with TCP probes.
readinessProbe:
  tcpSocket:
    port: 80
  initialDelaySeconds: 10
  periodSeconds: 10
This probe will check port 80, which is the one the pod in this service uses, and it will also help configure the ingress health check for a better result.
Here is some helpful documentation on creating the TCP readiness probes that the ingress health check can be based on.
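Since the explicit compute health check in the question targets /healthz, an HTTP readiness probe is another option the ingress can derive its health check from; a minimal sketch, assuming the app really does serve /healthz on port 80:
readinessProbe:
  httpGet:
    path: /healthz   # assumed: same path as the compute.v1.healthCheck above
    port: 80
  initialDelaySeconds: 10
  periodSeconds: 10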

Helm template looping over map

I'm trying to create a Helm template for NetworkPolicy resources and am having some issues iterating over the maps.
This is what I have in my values file (example):
extraPolicies:
  - name: dashboard
    policyType:
      - Ingress
      - Egress
    ingress:
      from:
        - ipBlock:
            cidr: 172.17.0.0/16
            except:
              - 172.17.1.0/24
        - namespaceSelector:
            matchLabels:
              project: myproject
      ports:
        - protocol: TCP
          port: 6379
        - protocol: TCP
          port: 8080
    egress:
      to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 5978
  - name: dashurboard-integ
    policyType:
      - Ingress
      - Egress
    ingress:
      from:
        - ipBlock:
            cidr: 172.17.0.0/16
            except:
              - 172.17.1.0/24
        - namespaceSelector:
            matchLabels:
              project: myproject
      ports:
        - protocol: TCP
          port: 6379
        - protocol: TCP
          port: 8080
    egress:
      to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 5978
and this is what I have up to now in my template:
{{- if .Values.extraPolicies -}}
{{- $fullName := include "network-policies.fullname" . -}}
{{- $namespace := .Values.deployNamespace }}
{{- range $i, $policy := .Values.extraPolicies }}
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: {{ $policy.name }}
  namespace: {{ $namespace }}
spec:
  policyTypes:
  {{- range $i2, $type := $policy.policyType }}
    - {{ $type -}}
  {{- end }}
  ingress:
    - from: |-
      {{- range $i3, $ingress := $policy.ingress }}
        - {{ $ingress }}
      {{- end }}
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 5978
{{- end }}
{{- end }}
The 'from' block with the |- shows where I'm dealing with maps, but I can't figure out how to iterate over them and get the output formatted like in the values.yaml.
Any help is greatly appreciated.
I found out I took the wrong approach from the beginning with how I structured my data. It might not be the best solution, and I welcome any and all improvements and/or suggestions, but I'm not blocked anymore.
I got this to work for what I need.
values.yml
extraPolicies:
  - name: dashboard
    policyType:
      - Ingress
    ingress:
      - name: podSelector
        settings:
          all: {}
      - name: ipBlock
        settings:
          cidr: "172.17.0.0/16"
      - name: namespaceSelector
        settings:
          matchLabels:
            project: test
            namespace: mynamespace
    ingressPorts:
      - protocol: TCP
        port: 6379
      - protocol: TCP
        port: 8080
  - name: dasboard-integ
    policyType:
      - Ingress
    ingress:
      - name: podSelector
        settings:
          all: {}
      - name: ipBlock
        settings:
          cidr: "172.17.0.0/16"
    ingressPorts:
      - protocol: TCP
        port: 3000
      - protocol: TCP
        port: 8000
      - protocol: TCP
        port: 443
      - protocol: TCP
        port: 80
and the template:
{{- if .Values.extraPolicies -}}
{{- $fullName := include "network-policies.fullname" . -}}
{{- $namespace := .Values.deployNamespace }}
{{- range .Values.extraPolicies }}
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: {{ .name }}
  namespace: {{ $namespace }}
spec:
  policyTypes:
  {{- range $i, $type := .policyType }}
    - {{ $type }}
  {{- end }}
  {{- if .ingress }}
  ingress:
    - from:
      {{- range $i, $ingress := .ingress }}
        - {{ .name -}}: {{ if eq .name "podSelector" }}{}{{ end -}}
          {{- if eq .name "ipBlock" }}
          {{- range $k, $v := .settings }}
            cidr: {{ $v -}}
          {{ end -}}
          {{ end -}}
          {{- if eq .name "namespaceSelector" }}
          {{- range $k, $v := .settings }}
            matchLabels:
            {{- range $k, $v := . }}
              {{ $k }}: {{ $v }}
            {{- end -}}
          {{ end -}}
          {{ end -}}
      {{- end }}
      ports:
      {{ range $i, $port := .ingressPorts }}
      {{- range $k, $v := . -}}
      {{- if eq $k "port" -}}
        - {{ $k }}: {{ $v }}
      {{- end -}}
      {{ if eq $k "protocol" }}
          {{ $k }}: {{ $v }}
      {{ end -}}
      {{ end -}}
      {{- end }}
  {{- end }}
  {{- if .egress }}
  egress:
    - to:
      ports:
  {{- end }}
{{- end }}
{{- end }}
which gives me the result:
---
# Source: network-policies/templates/extra-policies.yml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: dashur
  namespace: default
spec:
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}
        - ipBlock:
            cidr: 172.17.0.0/16
        - namespaceSelector:
            matchLabels:
              namespace: mynamespace
              project: test
      ports:
        - port: 6379
          protocol: TCP
        - port: 8080
          protocol: TCP
---
# Source: network-policies/templates/extra-policies.yml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: dashur-integ
  namespace: default
spec:
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}
        - ipBlock:
            cidr: 172.17.0.0/16
      ports:
        - port: 3000
          protocol: TCP
        - port: 8000
          protocol: TCP
        - port: 443
          protocol: TCP
        - port: 80
          protocol: TCP
Hope it helps someone who faces the same problem I had :-)
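A simpler alternative sketch (not from the original post): if values.yaml keeps the native NetworkPolicy shape, i.e. ingress and egress are lists of rule objects exactly as the Kubernetes API expects them, the template can dump them with toYaml and nindent instead of handling each selector type by hand. With values shaped like this (an assumed layout, not the one above):
extraPolicies:
  - name: dashboard
    policyType:
      - Ingress
    ingress:
      - from:
          - ipBlock:
              cidr: 172.17.0.0/16
        ports:
          - protocol: TCP
            port: 6379
the template reduces to:
{{- range .Values.extraPolicies }}
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: {{ .name }}
  namespace: {{ $.Values.deployNamespace }}
spec:
  policyTypes:
    {{- toYaml .policyType | nindent 4 }}
  {{- with .ingress }}
  ingress:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  {{- with .egress }}
  egress:
    {{- toYaml . | nindent 4 }}
  {{- end }}
{{- end }}
The trade-off is that values.yaml then mirrors the API structure one-to-one rather than using the flattened name/settings form shown above.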

Can't get access to service

I have the following problem: a Cloud SQL proxy pod is wrapped in a service and must provide access to the database.
I also have a job which must create a new database for every branch.
But when this job runs, the error below appears: I can't get access to cloudsql-proxy-service.
I can't understand why this happens. Thanks.
E psql: could not connect to server: Connection timed out
E       Is the server running on host "cloudsql-proxy-service" (10.43.254.123) and accepting
E       TCP/IP connections on port 5432?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudsql-proxy
  labels:
    type: backend
    name: app
  annotations:
    "helm.sh/created": {{ .Release.Time.Seconds | quote }}
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "-20"
spec:
  replicas: 1
  selector:
    matchLabels:
      name: cloudsql-proxy
  template:
    metadata:
      labels:
        name: cloudsql-proxy
    spec:
      containers:
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command:
            - "/cloud_sql_proxy"
            - "-instances={{ .Values.testDatabaseInstanceConnectionName }}=tcp:5432"
            - "-credential_file=/secrets/cloudsql/credentials.json"
          securityContext:
            runAsUser: 2 # non-root user
            allowPrivilegeEscalation: false
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
          ports:
            - containerPort: 5432
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            secretName: {{ .Values.cloudSqlProxySecretName }}
---
apiVersion: v1
kind: Service
metadata:
  name: cloudsql-proxy-service
  labels:
    type: backend
    name: app
  annotations:
    "helm.sh/created": {{ .Release.Time.Seconds | quote }}
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "-20"
spec:
  selector:
    name: cloudsql-proxy
  ports:
    - port: 5432
---
apiVersion: batch/v1
kind: Job
metadata:
  name: create-test-database
  labels:
    type: backend
    name: app
  annotations:
    "helm.sh/created": {{ .Release.Time.Seconds | quote }}
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "-10"
spec:
  template:
    metadata:
      name: create-test-database
    spec:
      containers:
        - name: postgres-client
          image: kalumkalac/postgresql-client
          env:
            - name: PGUSER
              value: {{ .Values.testDatabaseCredentials.username }}
            - name: PGPASSWORD
              value: {{ .Values.testDatabaseCredentials.password }}
            - name: PGDATABASE
              value: {{ .Values.testDatabaseCredentials.defaultDatabaseName }}
            - name: PGHOST
              value: cloudsql-proxy-service
          command:
            - psql
            - -q
            - -c CREATE DATABASE {{ .Values.testDatabaseCredentials.name | quote }}
      restartPolicy: Never
  backoffLimit: 0 # Deny retry job