I cannot find a way to iterate over a range in Helm templating.
I have the following definition in my values.yaml:
ingress:
  app1:
    port: 80
    hosts:
      - example.com
  app2:
    port: 80
    hosts:
      - demo.example.com
      - test.example.com
      - stage.example.com
  app3:
    port: 80
    hosts:
      - app3.example.com
And I want to generate the same nginx ingress rule for each listed host with:
spec:
  rules:
  {{- range $key, $value := .Values.global.ingress }}
    - host: {{ $value.hosts }}
      http:
        paths:
          - path: /qapi
            backend:
              serviceName: api-server
              servicePort: 80
  {{- end }}
But it generates the wrong hosts:
- host: [example.com]
- host: [test.example.com demo.example.com test.example.com]
Thanks for the help!
I've finally got it working using:
spec:
  rules:
  {{- range $key, $value := .Values.global.ingress }}
  {{- range $value.hosts }}
    - host: {{ . }}
      http:
        paths:
          - path: /qapi
            backend:
              serviceName: api-server
              servicePort: 80
  {{- end }}
  {{- end }}
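For reference, with the values above the nested range renders one rule per host, roughly like this (abridged):
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /qapi
            backend:
              serviceName: api-server
              servicePort: 80
    - host: demo.example.com
      http:
        paths:
          - path: /qapi
            backend:
              serviceName: api-server
              servicePort: 80
    # ...and likewise for test.example.com, stage.example.com and app3.example.com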
Related
I've done quite a bit of searching and cannot seem to find anyone that shows a resolution to this problem.
I'm getting intermittent 111 Connection refused errors on my kubernetes clusters. It seems that about 90% of my requests succeed and the other 10% fail. If you "refresh" the page, a previously failed request will then succeed. I have 2 different Kubernetes clusters with the same exact setup both showing the errors.
This looks to be very close to what I am experiencing. I did install my setup onto a new cluster, but the same problem persisted:
Kubernetes ClusterIP intermittent 502 connection refused
Setup
Kubernetes Cluster Version: 1.18.12-gke.1206
Django Version: 3.1.4
Helm to manage Kubernetes charts
Cluster Setup
Kubernetes nginx ingress controller that serves web traffic into the cluster:
https://kubernetes.github.io/ingress-nginx/deploy/#gce-gke
From there I have 2 Ingresses defined that route traffic based on the referrer URL.
Stage Ingress
Prod Ingress
Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: potr-tms-ingress-{{ .Values.environment }}
  namespace: {{ .Values.environment }}
  labels:
    app: potr-tms-{{ .Values.environment }}
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
    # the line below doesn't seem to have an effect
    # nginx.ingress.kubernetes.io/service-upstream: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "100M"
    cert-manager.io/cluster-issuer: "letsencrypt-{{ .Values.environment }}"
spec:
  rules:
    - host: {{ .Values.ingress_host }}
      http:
        paths:
          - path: /
            backend:
              serviceName: potr-tms-service-{{ .Values.environment }}
              servicePort: 8000
  tls:
    - hosts:
        - {{ .Values.ingress_host }}
        - www.{{ .Values.ingress_host }}
      secretName: potr-tms-{{ .Values.environment }}-tls
These ingresses route to 2 services that I have defined for prod and stage:
Service
apiVersion: v1
kind: Service
metadata:
  name: potr-tms-service-{{ .Values.environment }}
  namespace: {{ .Values.environment }}
  labels:
    app: potr-tms-{{ .Values.environment }}
spec:
  type: ClusterIP
  ports:
    - name: potr-tms-service-{{ .Values.environment }}
      port: 8000
      protocol: TCP
      targetPort: 8000
  selector:
    app: potr-tms-{{ .Values.environment }}
These 2 services route to deployments that I have for both prod and stage:
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: potr-tms-deployment-{{ .Values.environment }}
  namespace: {{ .Values.environment }}
  labels:
    app: potr-tms-{{ .Values.environment }}
spec:
  replicas: {{ .Values.deployment_replicas }}
  selector:
    matchLabels:
      app: potr-tms-{{ .Values.environment }}
  strategy:
    type: RollingUpdate
  template:
    metadata:
      annotations:
        rollme: {{ randAlphaNum 5 | quote }}
      labels:
        app: potr-tms-{{ .Values.environment }}
    spec:
      containers:
        - command: ["gunicorn", "--bind", ":8000", "config.wsgi"]
          # - command: ["python", "manage.py", "runserver", "0.0.0.0:8000"]
          envFrom:
            - secretRef:
                name: potr-tms-secrets-{{ .Values.environment }}
          image: gcr.io/potrtms/potr-tms-{{ .Values.environment }}:latest
          name: potr-tms-{{ .Values.environment }}
          ports:
            - containerPort: 8000
          resources:
            requests:
              cpu: 200m
              memory: 512Mi
      restartPolicy: Always
      serviceAccountName: "potr-tms-service-account-{{ .Values.environment }}"
status: {}
Error
This is the error that I'm seeing inside of my ingress controller logs:
This seems pretty clear: if my deployment pods were failing or showing errors, they would be "unavailable" and the service would not be able to route traffic to them. To try to debug this, I increased my deployment resources and replica counts. The amount of web traffic to this app is pretty low, though, ~10 users.
What I've Tried
I tried using a completely different ingress controller https://github.com/kubernetes/ingress-nginx
Increasing deployment resources / replica counts (seems to have no effect)
Installing my whole setup on a brand new cluster (same results)
Restarting the ingress controller / deleting and reinstalling it
It sounds like this could potentially be a Gunicorn problem. To test, I tried starting my pods with python manage.py runserver; the problem remained.
Update
Raising the pod counts seems to have helped a little bit.
deployment replicas: 15
cpu request: 200m
memory request: 512Mi
Some requests still fail, though.
Did you find a solution to this? I am seeing something very similar on a minikube setup.
In my case, I believe I also see the nginx controller restarting after the 502. The 502 is intermittent; frequently the first access fails, then a reload works.
The best idea I've found so far is to increase the Nginx timeout parameter, but I have not tried that yet. Still trying to search out all options.
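For what it's worth, the timeouts mentioned above can be set per Ingress via ingress-nginx annotations; a minimal sketch (the values are in seconds and purely illustrative, not tested against this setup):
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "120"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120"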
I was not able to figure out why these connection errors happen, but I did find a workaround that seems to solve the problem for our users.
Inside your ingress config, add the annotation
nginx.ingress.kubernetes.io/proxy-next-upstream-tries: "10"
I set it to 10 just to make sure it retried as I was fairly confident our services were working. You could probably get away with 2 or 3.
Here's my full ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: potr-tms-ingress-{{ .Values.environment }}
  namespace: {{ .Values.environment }}
  labels:
    app: potr-tms-{{ .Values.environment }}
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
    # nginx.ingress.kubernetes.io/service-upstream: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "100M"
    nginx.ingress.kubernetes.io/client-body-buffer-size: "100m"
    nginx.ingress.kubernetes.io/proxy-max-temp-file-size: "1024m"
    nginx.ingress.kubernetes.io/proxy-next-upstream-tries: "10"
    cert-manager.io/cluster-issuer: "letsencrypt-{{ .Values.environment }}"
spec:
  rules:
    - host: {{ .Values.ingress_host }}
      http:
        paths:
          - path: /
            backend:
              serviceName: potr-tms-service-{{ .Values.environment }}
              servicePort: 8000
  tls:
    - hosts:
        - {{ .Values.ingress_host }}
        - www.{{ .Values.ingress_host }}
      secretName: potr-tms-{{ .Values.environment }}-tls
Is it possible to modify (add/remove) existing hosts from the command line or by a script?
I want to add new applications dynamically when they are deployed for the first time. For now I have ended up with this script:
#!/bin/bash
APP_NAME=$1
if [[ -z $(kubectl get ingress ingress-gw -o yaml | grep "serviceName: $APP_NAME-service") ]]
then echo "$(kubectl get ingress ingress-gw -o yaml | sed '/^status:$/Q')
  - host: $APP_NAME.example.com
    http:
      paths:
        - path: \"/*\"
          backend:
            serviceName: $APP_NAME-service
            servicePort: 80
$(kubectl get ingress ingress-gw -o yaml | sed -n -e '/^status:$/,$p')" | kubectl apply -f -
fi
In a nutshell, it downloads the existing ingress configuration, checks whether the app is already defined, and if not, injects it at the end of the file (just before the status: entry) and re-applies the config.
It is more of a hack than a nice solution.
I'm wondering if I can either configure the ingress to load the hosts and paths dynamically, based on some annotations on the services in the same project, or at least call some command to add or remove a host.
You can download the configuration in JSON format and update the objects using the kubectl patch command, so you can put all of that in a script to update an ingress dynamically. For more information, please follow the above-mentioned link.
Example: kubectl get ing mying -o json
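For example, a JSON patch that appends a rule for a new app could look roughly like this (a sketch only; the ingress name ingress-gw and the myapp.example.com / myapp-service values are assumptions borrowed from the script above, using the extensions/v1beta1 backend layout):
# append a new host rule to the existing ingress object
kubectl patch ingress ingress-gw --type=json \
  -p='[{"op": "add", "path": "/spec/rules/-", "value": {"host": "myapp.example.com", "http": {"paths": [{"path": "/*", "backend": {"serviceName": "myapp-service", "servicePort": 80}}]}}]'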
I am not sure what your requirements are, but I guess usually such stuff would be done with Helm. You can define an ingress with templates and specify a values.yaml that provides the values necessary to generate that file. A slight adaptation of a Helm chart generated with helm create:
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "ingresstest.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    {{- include "ingresstest.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range $host := .Values.ingress.hosts }}
    - host: {{ $host.host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ . }}
            backend:
              serviceName: {{ $host.service }}
              servicePort: {{ $svcPort }}
          {{- end }}
    {{- end }}
{{- end }}
and values.yaml (snippet)
ingress:
  enabled: true
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: foo.example.com
      service: foo-service
      paths:
        - "/*"
    - host: bar.example.com
      service: bar-service
      paths:
        - "/*"
gives a similar result:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: RELEASE-NAME-ingresstest
  labels:
    helm.sh/chart: ingresstest-0.1.0
    app.kubernetes.io/name: ingresstest
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  rules:
    - host: "foo.example.com"
      http:
        paths:
          - path: /*
            backend:
              serviceName: foo-service
              servicePort: 80
    - host: "bar.example.com"
      http:
        paths:
          - path: /*
            backend:
              serviceName: bar-service
              servicePort: 80
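You can inspect what the chart renders without installing it, for example (assuming Helm 3 and a chart directory called ingresstest; Helm 2 uses helm template --name instead):
helm template my-release ./ingresstest -f values.yaml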
I'm trying to create a Helm template to create NetworkPolicies and am facing some issues iterating over the maps.
This is what I have in my values file (example):
extraPolicies:
  - name: dashboard
    policyType:
      - Ingress
      - Egress
    ingress:
      from:
        - ipBlock:
            cidr: 172.17.0.0/16
            except:
              - 172.17.1.0/24
        - namespaceSelector:
            matchLabels:
              project: myproject
      ports:
        - protocol: TCP
          port: 6379
        - protocol: TCP
          port: 8080
    egress:
      to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 5978
  - name: dashurboard-integ
    policyType:
      - Ingress
      - Egress
    ingress:
      from:
        - ipBlock:
            cidr: 172.17.0.0/16
            except:
              - 172.17.1.0/24
        - namespaceSelector:
            matchLabels:
              project: myproject
      ports:
        - protocol: TCP
          port: 6379
        - protocol: TCP
          port: 8080
    egress:
      to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 5978
and this is what I have up to now in my template:
{{- if .Values.extraPolicies -}}
{{- $fullName := include "network-policies.fullname" . -}}
{{- $namespace := .Values.deployNamespace }}
{{- range $i, $policy := .Values.extraPolicies }}
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: {{ $policy.name }}
  namespace: {{ $namespace }}
spec:
  policyTypes:
    {{- range $i2, $type := $policy.policyType }}
    - {{ $type -}}
    {{- end }}
  ingress:
    - from: |-
      {{- range $i3, $ingress := $policy.ingress }}
      - {{ $ingress }}
      {{- end }}
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 5978
{{- end }}
{{- end }}
The 'from' block with the |- shows that I'm dealing with maps, but I can't figure out how to iterate over them and get the output formatted like in the values.yml.
Any help is greatly appreciated.
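(Side note: if the goal is simply to reproduce the nested structures from values.yaml verbatim, Helm's toYaml together with nindent can dump them without iterating by hand. A rough sketch against the extraPolicies layout above, untested here:)
spec:
  policyTypes:
    {{- toYaml $policy.policyType | nindent 4 }}
  ingress:
    - from:
        {{- toYaml $policy.ingress.from | nindent 8 }}
      ports:
        {{- toYaml $policy.ingress.ports | nindent 8 }}
  {{- if $policy.egress }}
  egress:
    - to:
        {{- toYaml $policy.egress.to | nindent 8 }}
      ports:
        {{- toYaml $policy.egress.ports | nindent 8 }}
  {{- end }}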
I found out I took the wrong approach from the beginning with how I structured my data. It might not be the best solution, and I welcome any and all improvements and/or suggestions, but I'm not blocked anymore.
I got this to work for what I need.
values.yml
extraPolicies:
  - name: dashboard
    policyType:
      - Ingress
    ingress:
      - name: podSelector
        settings:
          all: {}
      - name: ipBlock
        settings:
          cidr: "172.17.0.0/16"
      - name: namespaceSelector
        settings:
          matchLabels:
            project: test
            namespace: mynamespace
    ingressPorts:
      - protocol: TCP
        port: 6379
      - protocol: TCP
        port: 8080
  - name: dasboard-integ
    policyType:
      - Ingress
    ingress:
      - name: podSelector
        settings:
          all: {}
      - name: ipBlock
        settings:
          cidr: "172.17.0.0/16"
    ingressPorts:
      - protocol: TCP
        port: 3000
      - protocol: TCP
        port: 8000
      - protocol: TCP
        port: 443
      - protocol: TCP
        port: 80
and the template:
{{- if .Values.extraPolicies -}}
{{- $fullName := include "network-policies.fullname" . -}}
{{- $namespace := .Values.deployNamespace }}
{{- range .Values.extraPolicies }}
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: {{ .name }}
  namespace: {{ $namespace }}
spec:
  policyTypes:
  {{- range $i, $type := .policyType }}
  - {{ $type }}
  {{- end }}
  {{- if .ingress }}
  ingress:
  - from:
    {{- range $i, $ingress := .ingress }}
    - {{ .name -}}: {{ if eq .name "podSelector" }}{}{{ end -}}
      {{- if eq .name "ipBlock" }}
      {{- range $k, $v := .settings }}
        cidr: {{ $v -}}
      {{ end -}}
      {{ end -}}
      {{- if eq .name "namespaceSelector" }}
      {{- range $k, $v := .settings }}
        matchLabels:
        {{- range $k, $v := . }}
          {{ $k }}: {{ $v }}
        {{- end -}}
      {{ end -}}
      {{ end -}}
    {{- end }}
    ports:
    {{ range $i, $port := .ingressPorts }}
    {{- range $k, $v := . -}}
    {{- if eq $k "port" -}}
    - {{ $k }}: {{ $v }}
    {{- end -}}
    {{ if eq $k "protocol" }}
      {{ $k }}: {{ $v }}
    {{ end -}}
    {{ end -}}
    {{- end }}
  {{- end }}
  {{- if .egress }}
  egress:
  - to:
    ports:
  {{- end }}
{{- end }}
{{- end }}
which gives me the result:
---
# Source: network-policies/templates/extra-policies.yml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: dashur
  namespace: default
spec:
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}
        - ipBlock:
            cidr: 172.17.0.0/16
        - namespaceSelector:
            matchLabels:
              namespace: mynamespace
              project: test
      ports:
        - port: 6379
          protocol: TCP
        - port: 8080
          protocol: TCP
---
# Source: network-policies/templates/extra-policies.yml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: dashur-integ
  namespace: default
spec:
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}
        - ipBlock:
            cidr: 172.17.0.0/16
      ports:
        - port: 3000
          protocol: TCP
        - port: 8000
          protocol: TCP
        - port: 443
          protocol: TCP
        - port: 80
          protocol: TCP
Hope it helps someone who faces the same problem I had :-)
My current ingress looks something like
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
    - host: web1.dev.cloud
      http:
        paths:
          - path: /
            backend:
              serviceName: web1
              servicePort: 8080
Meaning that the first part of the host will always match the serviceName.
So for every web pod I would need to repeat the above like:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
    - host: web1.dev.cloud
      http:
        paths:
          - path: /
            backend:
              serviceName: web1
              servicePort: 8080
    - host: web2.dev.cloud
      http:
        paths:
          - path: /
            backend:
              serviceName: web2
              servicePort: 8080
I was just wondering if there is some support for doing the following:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
    - host: $1.dev.cloud
      http:
        paths:
          - path: /
            backend:
              serviceName: $1
              servicePort: 8080
This is not possible if you use kubectl to deploy your Kubernetes manifests. However, if you write a Helm chart for your application, it is possible. Helm uses a packaging format called charts. A chart is a collection of files that describe a related set of Kubernetes resources in the form of templates.
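For instance, a chart skeleton (including an ingress template you can adapt) can be scaffolded with helm create; the chart name here is just an example:
helm create mychart   # scaffolds mychart/values.yaml, mychart/templates/ingress.yaml, etc.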
There, in the ingress.yaml template, you can write such config using a range block and put the variable values in values.yaml.
In your case it will look something like this:
spec:
  rules:
  {{- range .Values.ingress.hosts }}
    - host: {{ .name }}.dev.cloud
      http:
        paths:
          - path: {{ default "/" .path | quote }}
            backend:
              serviceName: {{ .name }}
              servicePort: 8080
  {{- end }}
and the values.yaml will have
ingress:
  hosts:
    - name: abc
    - name: xyz
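With those values, the rendered rules come out roughly as:
spec:
  rules:
    - host: abc.dev.cloud
      http:
        paths:
          - path: "/"
            backend:
              serviceName: abc
              servicePort: 8080
    - host: xyz.dev.cloud
      http:
        paths:
          - path: "/"
            backend:
              serviceName: xyz
              servicePort: 8080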
Thanks to RAMNEEK GUPTA's post, you have a solution for how it can be automated.
According to the documentation:
Regular expressions and wild cards are not supported in the spec.rules.host field. Full hostnames must be used.
So please try one of the following for your example:
1. Requests based on the HTTP URI being requested ("simple fanout")
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fanout
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: dev.com
      http:
        paths:
          - path: /web1
            backend:
              serviceName: web1
              servicePort: 8080
          - path: /web2
            backend:
              serviceName: web2
              servicePort: 8080
2. Requests based on the Host header ("name-based virtual hosting")
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: named
spec:
  rules:
    - host: web1.dev.com
      http:
        paths:
          - backend:
              serviceName: web1
              servicePort: 8080
    - host: web2.dev.com
      http:
        paths:
          - backend:
              serviceName: web2
              servicePort: 8080
I'm trying to generate Deployments for my Helm charts using this template
{{- range .Values.services }}
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-{{ . }}
spec:
  replicas: {{ .replicaCount }}
  template:
    metadata:
      labels:
        app: myapp-{{ . }}
        chart: myapp-{{ $.Values.cluster }}-{{ $.Values.environment }}
    spec:
      containers:
        - name: myapp-{{ . }}
          image: {{ $.Values.containerRegistry }}/myapp-{{ . }}:latest
          ports:
            - containerPort: {{ .targetPort }}
          env:
            {{- with .environmentVariables }}
            {{ indent 10 }}
            {{- end }}
      imagePullSecrets:
        - name: myregistry
{{- end }}
for 2 of my services. In values.yaml I have:
environment: dev
cluster: sandbox
ingress:
  enabled: true
containerRegistry: myapp.io
services:
  - backend:
      port: 80
      targetPort: 8080
      replicaCount: 1
      environmentVariables:
        - name: SOME_VAR
          value: "hello"
  - web:
      port: 80
      targetPort: 8080
      replicaCount: 1
      environmentVariables:
        - name: SOME_VAR
          value: "hello"
... but the output is not being properly formatted
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-map[backend:map[replicaCount:1 targetPort:8080 environmentVariables:[map[name:SOME_VAR value:hello] port:80]]
instead of
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-web
(...)
and another config
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-backend
(...)
What functions can I use, or should I use a different data structure? None of the references (e.g. .environmentVariables) are working correctly.
I think you should reconsider the way the data is structured; this would work better:
services:
  - name: backend
    settings:
      port: 80
      targetPort: 8080
      replicaCount: 1
      environmentVariables:
        - name: SOME_VAR
          value: "hello"
  - name: web
    settings:
      port: 80
      targetPort: 8080
      replicaCount: 1
      environmentVariables:
        - name: SOME_VAR
          value: "hello"
And your Deployment template would look like this:
{{- range .Values.services }}
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-{{ .name }}
spec:
  replicas: {{ .settings.replicaCount }}
  template:
    metadata:
      labels:
        app: myapp-{{ .name }}
    spec:
      containers:
        - name: myapp-{{ .name }}
          image: {{ $.Values.containerRegistry }}/myapp-{{ .name }}:latest
          ports:
            - containerPort: {{ .settings.targetPort }}
          env:
            {{- with .settings.environmentVariables }}
            {{ toYaml . | trim | indent 6 }}
            {{- end }}
      imagePullSecrets:
        - name: myregistry
{{- end }}
This would actually create two Deployments, thanks to the --- separator added between the iterations.
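With the restructured values, the loop then emits two separate documents, roughly:
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-backend
# (...)
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-web
# (...)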