I'm trying to create a Helm template that generates NetworkPolicy resources and I'm running into issues iterating over the maps.
This is what I have in my values file (example):
extraPolicies:
- name: dashboard
  policyType:
  - Ingress
  - Egress
  ingress:
    from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    ports:
    - protocol: TCP
      port: 6379
    - protocol: TCP
      port: 8080
  egress:
    to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
- name: dashurboard-integ
  policyType:
  - Ingress
  - Egress
  ingress:
    from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    ports:
    - protocol: TCP
      port: 6379
    - protocol: TCP
      port: 8080
  egress:
    to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
and this is what I have so far in my template:
{{- if .Values.extraPolicies -}}
{{- $fullName := include "network-policies.fullname" . -}}
{{- $namespace := .Values.deployNamespace }}
{{- range $i, $policy := .Values.extraPolicies }}
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: {{ $policy.name }}
  namespace: {{ $namespace }}
spec:
  policyTypes:
  {{- range $i2, $type := $policy.policyType }}
  - {{ $type -}}
  {{- end }}
  ingress:
  - from: |-
    {{- range $i3, $ingress := $policy.ingress }}
    - {{ $ingress }}
    {{- end }}
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
{{- end }}
{{- end }}
The 'from' block with the |- shows where I'm stuck: I'm dealing with maps, but I can't figure out how to iterate over them and get the output formatted like in the values.yml.
Any help is greatly appreciated.
It turned out I took the wrong approach from the beginning with how I structured my data. The solution below might not be the best one, and I welcome any and all improvements and/or suggestions, but I'm not blocked anymore.
I got this to work for what I need.
values.yml
extraPolicies:
- name: dashboard
  policyType:
  - Ingress
  ingress:
  - name: podSelector
    settings:
      all: {}
  - name: ipBlock
    settings:
      cidr: "172.17.0.0/16"
  - name: namespaceSelector
    settings:
      matchLabels:
        project: test
        namespace: mynamespace
  ingressPorts:
  - protocol: TCP
    port: 6379
  - protocol: TCP
    port: 8080
- name: dasboard-integ
  policyType:
  - Ingress
  ingress:
  - name: podSelector
    settings:
      all: {}
  - name: ipBlock
    settings:
      cidr: "172.17.0.0/16"
  ingressPorts:
  - protocol: TCP
    port: 3000
  - protocol: TCP
    port: 8000
  - protocol: TCP
    port: 443
  - protocol: TCP
    port: 80
and the template:
{{- if .Values.extraPolicies -}}
{{- $fullName := include "network-policies.fullname" . -}}
{{- $namespace := .Values.deployNamespace }}
{{- range .Values.extraPolicies }}
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: {{ .name }}
  namespace: {{ $namespace }}
spec:
  policyTypes:
  {{- range $i, $type := .policyType }}
  - {{ $type }}
  {{- end }}
  {{- if .ingress }}
  ingress:
  - from:
    {{- range $i, $ingress := .ingress }}
    - {{ .name -}}: {{ if eq .name "podSelector" }}{}{{ end -}}
      {{- if eq .name "ipBlock" }}
      {{- range $k, $v := .settings }}
        cidr: {{ $v -}}
      {{ end -}}
      {{ end -}}
      {{- if eq .name "namespaceSelector" }}
      {{- range $k, $v := .settings }}
        matchLabels:
        {{- range $k, $v := . }}
          {{ $k }}: {{ $v }}
        {{- end -}}
        {{ end -}}
      {{ end -}}
    {{- end }}
    ports:
    {{ range $i, $port := .ingressPorts }}
    {{- range $k, $v := . -}}
    {{- if eq $k "port" -}}
    - {{ $k }}: {{ $v }}
    {{- end -}}
    {{ if eq $k "protocol" }}
      {{ $k }}: {{ $v }}
    {{ end -}}
    {{ end -}}
    {{- end }}
  {{- end }}
  {{- if .egress }}
  egress:
  - to:
    ports:
  {{- end }}
{{- end }}
{{- end }}
which gives me the result:
---
# Source: network-policies/templates/extra-policies.yml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: dashur
  namespace: default
spec:
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}
    - ipBlock:
        cidr: 172.17.0.0/16
    - namespaceSelector:
        matchLabels:
          namespace: mynamespace
          project: test
    ports:
    - port: 6379
      protocol: TCP
    - port: 8080
      protocol: TCP
---
# Source: network-policies/templates/extra-policies.yml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: dashur-integ
  namespace: default
spec:
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}
    - ipBlock:
        cidr: 172.17.0.0/16
    ports:
    - port: 3000
      protocol: TCP
    - port: 8000
      protocol: TCP
    - port: 443
      protocol: TCP
    - port: 80
      protocol: TCP
Hope it helps someone who faces the same problem I had :-)
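As a possible simplification (not part of the original answer, just an untested sketch): with the original values layout from the top of the question, the nested maps can be emitted verbatim with toYaml and nindent instead of being rebuilt key by key, which avoids restructuring the data at all:

# Untested sketch: assumes the from/ports/to layout under ingress/egress
# from the original values.yml above, and deployNamespace as in the question.
{{- if .Values.extraPolicies -}}
{{- $namespace := .Values.deployNamespace }}
{{- range $policy := .Values.extraPolicies }}
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: {{ $policy.name }}
  namespace: {{ $namespace }}
spec:
  policyTypes:
    {{- toYaml $policy.policyType | nindent 4 }}
  {{- with $policy.ingress }}
  ingress:
  - from:
      {{- toYaml .from | nindent 6 }}
    ports:
      {{- toYaml .ports | nindent 6 }}
  {{- end }}
  {{- with $policy.egress }}
  egress:
  - to:
      {{- toYaml .to | nindent 6 }}
    ports:
      {{- toYaml .ports | nindent 6 }}
  {{- end }}
{{- end }}
{{- end }}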
Related
Right now we are running RabbitMQ on 5 EC2 instances (AWS) and we are trying to migrate to a k8s cluster.
We deployed a cluster with EKS that works fine up to 45K users.
The 5 separate instances can handle 75K users.
We discovered that the latency is higher in the k8s cluster than with the EC2 instances.
We used this tool: https://www.rabbitmq.com/rabbitmq-diagnostics.8.html and didn't find a problem. The file descriptors look fine, as do memory, CPU, etc.
We deployed with https://github.com/rabbitmq/cluster-operator.
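For reference, the same diagnostics can be run inside the operator-created pods, for example (assuming the cluster-operator's default <name>-server-N pod naming and the berlin namespace from the values below):

kubectl -n berlin exec rabbitmq-server-0 -- rabbitmq-diagnostics status
kubectl -n berlin exec rabbitmq-server-0 -- rabbitmq-diagnostics listeners
kubectl -n berlin exec rabbitmq-server-0 -- rabbitmq-diagnostics runtime_thread_stats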
values.yaml
serviceName: rabbitmq
namespace: berlin
regionCode: use1
env: dev
resourcesConfig:
  replicas: 9
  nodeGroupName: r-large
  storageType: gp2
  storageSize: 100Gi
  resources:
    limits:
      cpu: 8
      memory: 60Gi
    requests:
      cpu: 7
      memory: 60Gi
definitionsConf:
  vhosts:
  - name: /
  exchanges:
  - name: test
    vhost: /
    type: direct
    durable: true
    auto_delete: false
    internal: false
    arguments: {}
  policies:
  - vhost: /
    name: Test Policy
    pattern: test.*.*.*
    apply-to: queues
    definition:
      federation-upstream-set: all
    priority: 0
additionalPlugins:
- rabbitmq_event_exchange
- rabbitmq_auth_backend_cache
- rabbitmq_auth_backend_http
- rabbitmq_prometheus
- rabbitmq_shovel
rabbitmqConf:
  load_definitions: /etc/rabbitmq/definitions.json
  # definitions.skip_if_unchanged: 'true'
  cluster_partition_handling: pause_minority
  auth_backends.1: cache
  auth_cache.cached_backend: http
  auth_cache.cache_ttl: '10000'
  auth_http.http_method: post
  auth_http.user_path: http://XXXX:3000/authentication/users
  auth_http.vhost_path: http://XXX:3000/authentication/vhosts
  auth_http.resource_path: http://XXX:3000/authentication/resources
  auth_http.topic_path: http://XXX:3000/authentication/topics
  prometheus.path: /metrics
  prometheus.tcp.port: '15692'
  log.console: 'true'
  log.console.level: error
  log.console.formatter: json
  log.default.level: error
  tcp_listen_options.backlog: '4096'
  tcp_listen_options.nodelay: 'true'
  tcp_listen_options.sndbuf: '32768'
  tcp_listen_options.recbuf: '32768'
  tcp_listen_options.keepalive: 'true'
  tcp_listen_options.linger.on: 'true'
  tcp_listen_options.linger.timeout: '0'
  disk_free_limit.relative: '1.0'
  num_acceptors.tcp: '40'
  hipe_compile: 'true'
  collect_statistics_interval: '30000'
  mnesia_table_loading_retry_timeout: '60000'
  heartbeat: '60'
  vm_memory_high_watermark.relative: '0.9'
  management_agent.disable_metrics_collector: 'true'
  management.disable_stats: 'true'
metricsConfig:
  metricsPath: /metrics
  metricsPort: '15692'
Chart.yaml
apiVersion: v2
name: rabbitmq
description: RabbitMQ Cluster
type: application
version: 0.0.1
charts/templates/configmap.yaml
{{- $varNamespace := .Values.namespace }}
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: {{ .Values.namespace }}
  name: {{ .Values.serviceName }}-definitions-conf
data:
  definitions.json: |
    {{ .Values.definitionsConf | toJson | replace "NAMESPACE" $varNamespace }}
{{- $varNamespace := .Values.namespace }}
{{- $varRegionCode := .Values.regionCode }}
{{- $varEnv := .Values.env }}
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: {{ .Values.serviceName }}
  namespace: {{ .Values.namespace }}
spec:
  replicas: {{ .Values.resourcesConfig.replicas }}
  rabbitmq:
    envConfig: |
      ERL_MAX_PORTS=10000000
      RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+S 4:4 +P 2000000"
    advancedConfig: |
      [
        {kernel, [
          {inet_default_connect_options, [{nodelay, true}]},
          {inet_default_listen_options, [{nodelay, true}]}
        ]}
      ].
    additionalPlugins: {{ .Values.additionalPlugins | toJson | indent 4 }}
    additionalConfig: |
      {{- range $key, $val := .Values.rabbitmqConf }}
      {{ $key }} = {{ $val | replace "NAMESPACE" $varNamespace | replace "REGION_CODE" $varRegionCode | replace "ENV" $varEnv }}
      {{- end }}
  resources:
    requests:
      cpu: {{ .Values.resourcesConfig.resources.requests.cpu }}
      memory: {{ .Values.resourcesConfig.resources.requests.memory }}
    limits:
      cpu: {{ .Values.resourcesConfig.resources.limits.cpu }}
      memory: {{ .Values.resourcesConfig.resources.limits.memory }}
  persistence:
    storageClassName: {{ .Values.resourcesConfig.storageType }}
    storage: {{ .Values.resourcesConfig.storageSize }}
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app.kubernetes.io/name
            operator: In
            values:
            - {{ .Values.serviceName }}
        topologyKey: kubernetes.io/hostname
  service:
    type: LoadBalancer
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
      service.beta.kubernetes.io/aws-load-balancer-name: {{ .Values.serviceName }}
      service.beta.kubernetes.io/load-balancer-source-ranges: {{ .Values.service.allowedVpcCidrRange }}
      service.beta.kubernetes.io/aws-load-balancer-internal: 'true'
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
      service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: Name={{ .Values.serviceName }}
      external-dns.alpha.kubernetes.io/hostname: {{ .Values.serviceName }}.{{ .Values.service.hostedZone }}
  override:
    statefulSet:
      spec:
        template:
          metadata:
            annotations:
              platform.vonage.com/logging: enabled
              telegraf.influxdata.com/class: influxdb
              telegraf.influxdata.com/inputs: |+
                [[inputs.prometheus]]
                  urls = ["http://127.0.0.1:{{ .Values.metricsConfig.metricsPort }}{{ .Values.metricsConfig.metricsPath }}"]
                  metric_version = 1
                  tagexclude = ["url"]
              telegraf.influxdata.com/env-literal-NAMESPACE: {{ $.Values.namespace }}
              telegraf.influxdata.com/env-literal-SERVICENAME: {{ $.Values.serviceName }}
          spec:
            nodeSelector:
              node-group-label: {{ .Values.resourcesConfig.nodeGroupName }}
            containers:
            - name: rabbitmq
              volumeMounts:
              - name: definitions
                mountPath: {{ .Values.rabbitmqConf.load_definitions }}
                subPath: definitions.json
            volumes:
            - name: definitions
              configMap:
                name: {{ .Values.serviceName }}-definitions-conf
Can someone give us advice on what we can check or how we can solve our issue?
Thanks.
I'm trying to replace the RabbitMQ instances with a RabbitMQ k8s cluster. We want the same results as (or better than) with the separate instances.
I have a deployment and service running in GKE using Deployment Manager. Everything about my service works correctly except that the ingress I am creating reports the service in a perpetually unhealthy state.
To be clear, everything about the deployment works except the healthcheck (and as a consequence, the ingress). This was working previously (circa late 2019), and apparently about a year ago GKE added some additional requirements for healthchecks on ingress target services and I have been unable to make sense of them.
I have put an explicit health check on the service, and it reports healthy, but the ingress does not recognize it. The service is using a NodePort but also has containerPort 80 open on the deployment, and it does respond with HTTP 200 to requests on :80 locally, but clearly that is not helping in the deployed service.
The cluster itself is an almost identical copy of the Deployment Manager example.
Here is the deployment:
- name: {{ DEPLOYMENT }}
  type: {{ CLUSTER_TYPE }}:{{ DEPLOYMENT_COLLECTION }}
  metadata:
    dependsOn:
    - {{ properties['clusterType'] }}
  properties:
    apiVersion: apps/v1
    kind: Deployment
    namespace: {{ properties['namespace'] | default('default') }}
    metadata:
      name: {{ DEPLOYMENT }}
      labels:
        app: {{ APP }}
        tier: resters
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: {{ APP }}
          tier: resters
      template:
        metadata:
          labels:
            app: {{ APP }}
            tier: resters
        spec:
          containers:
          - name: rester
            image: {{ IMAGE }}
            resources:
              requests:
                cpu: 100m
                memory: 250Mi
            ports:
            - containerPort: 80
            env:
            - name: GCP_PROJECT
              value: {{ PROJECT }}
            - name: SERVICE_NAME
              value: {{ APP }}
            - name: MODE
              value: rest
            - name: REDIS_ADDR
              value: {{ properties['memorystoreAddr'] }}
... the service:
- name: {{ SERVICE }}
  type: {{ CLUSTER_TYPE }}:{{ SERVICE_COLLECTION }}
  metadata:
    dependsOn:
    - {{ properties['clusterType'] }}
    - {{ APP }}-cluster-nodeport-firewall-rule
    - {{ DEPLOYMENT }}
  properties:
    apiVersion: v1
    kind: Service
    namespace: {{ properties['namespace'] | default('default') }}
    metadata:
      name: {{ SERVICE }}
      labels:
        app: {{ APP }}
        tier: resters
    spec:
      type: NodePort
      ports:
      - nodePort: {{ NODE_PORT }}
        port: {{ CONTAINER_PORT }}
        targetPort: {{ CONTAINER_PORT }}
        protocol: TCP
      selector:
        app: {{ APP }}
        tier: resters
... the explicit healthcheck:
- name: {{ SERVICE }}-healthcheck
  type: compute.v1.healthCheck
  metadata:
    dependsOn:
    - {{ SERVICE }}
  properties:
    name: {{ SERVICE }}-healthcheck
    type: HTTP
    httpHealthCheck:
      port: {{ NODE_PORT }}
      requestPath: /healthz
      proxyHeader: NONE
    checkIntervalSec: 10
    healthyThreshold: 2
    unhealthyThreshold: 3
    timeoutSec: 5
... the firewall rules:
- name: {{ CLUSTER_NAME }}-nodeport-firewall-rule
  type: compute.v1.firewall
  properties:
    name: {{ CLUSTER_NAME }}-nodeport-firewall-rule
    network: projects/{{ PROJECT }}/global/networks/default
    sourceRanges:
    - 130.211.0.0/22
    - 35.191.0.0/16
    targetTags:
    - {{ CLUSTER_NAME }}-node
    allowed:
    - IPProtocol: TCP
      ports:
      - 30000-32767
      - 80
You could try to define a readinessProbe on your container in your Deployment.
The ingress also uses this to create its health checks (note that these health check probes come from outside of GKE).
In my experience, readiness probes work pretty well for getting the ingress health checks to pass.
To do this, you create something like the following. This is a TCP probe; I have seen better performance with TCP probes.
readinessProbe:
  tcpSocket:
    port: 80
  initialDelaySeconds: 10
  periodSeconds: 10
So this probe will check port 80, which is the one I see used by the pod behind this service, and it will also help configure the ingress health check for a better result.
There is helpful documentation on how to create TCP readiness probes, which the ingress health check can be based on.
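If you would rather keep the /healthz endpoint that the explicit health check above targets, an HTTP readiness probe is another option the ingress health check can be derived from; a minimal sketch (path and port taken from the question, the timing values are arbitrary):

readinessProbe:
  httpGet:
    path: /healthz
    port: 80
  initialDelaySeconds: 10
  periodSeconds: 10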
Is it possible to modify (add/remove) existing hosts from the command line or by a script?
I want to add new applications dynamically when they are deployed for the first time. I currently ended up with this script:
#!/bin/bash
APP_NAME=$1
if [[ -z $(kubectl get ingress ingress-gw -o yaml | grep "serviceName: $APP_NAME-service") ]]
then echo "$(kubectl get ingress ingress-gw -o yaml | sed '/^status:$/Q')
  - host: $APP_NAME.example.com
    http:
      paths:
      - path: "/*"
        backend:
          serviceName: $APP_NAME-service
          servicePort: 80
$(kubectl get ingress ingress -o yaml | sed -n -e '/^status:$/,$p')" | kubectl apply -f -
fi
In a nutshell, it downloads the existing ingress configuration, checks whether the app is defined, and if not, injects it at the end of the file, just before the status: entry, then re-applies the config.
It is more of a hack than a nice solution.
I'm wondering whether I can either configure the ingress to load the hosts and paths dynamically, based on some annotations on the services in the same project, or at least call some command to add or remove a host.
You can download the configuration in JSON format and update the object with the kubectl patch command, so you can put all of that in a script to update an ingress dynamically. See the kubectl patch documentation for more information.
Example: kubectl get ing mying -o json
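A JSON patch that appends a new host rule to the ingress from the question could then look like this (untested sketch; "myapp" stands in for $APP_NAME, and the serviceName/servicePort backend format matches the older API version used in the script above):

# "myapp" is a placeholder for the application being added
kubectl patch ingress ingress-gw --type=json -p '[
  {
    "op": "add",
    "path": "/spec/rules/-",
    "value": {
      "host": "myapp.example.com",
      "http": {
        "paths": [
          {
            "path": "/*",
            "backend": {"serviceName": "myapp-service", "servicePort": 80}
          }
        ]
      }
    }
  }
]'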
I am not sure what your requirements are, but usually this kind of thing would be done with Helm. You can define an ingress template and a values.yaml that provides the values needed to generate the final file. Here is a slight adaptation of the chart generated by helm create:
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "ingresstest.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    {{- include "ingresstest.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range $host := .Values.ingress.hosts }}
    - host: {{ $host.host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ . }}
            backend:
              serviceName: {{ $host.service }}
              servicePort: {{ $svcPort }}
          {{- end }}
    {{- end }}
{{- end }}
and values.yaml (snippet)
ingress:
  enabled: true
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: foo.example.com
      service: foo-service
      paths:
      - "/*"
    - host: bar.example.com
      service: bar-service
      paths:
      - "/*"
gives a similar result:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: RELEASE-NAME-ingresstest
  labels:
    helm.sh/chart: ingresstest-0.1.0
    app.kubernetes.io/name: ingresstest
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  rules:
    - host: "foo.example.com"
      http:
        paths:
          - path: /*
            backend:
              serviceName: foo-service
              servicePort: 80
    - host: "bar.example.com"
      http:
        paths:
          - path: /*
            backend:
              serviceName: bar-service
              servicePort: 80
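To check the rendered output locally before installing, the chart can be rendered with helm template (the release name and chart path here are placeholders):

helm template my-release ./ingresstest -f values.yaml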
I cannot find a way to iterate over a range in Helm templating.
I have the following definition in my values.yaml:
ingress:
  app1:
    port: 80
    hosts:
      - example.com
  app2:
    port: 80
    hosts:
      - demo.example.com
      - test.example.com
      - stage.example.com
  app3:
    port: 80
    hosts:
      - app3.example.com
And I want to generate the same nginx ingress rule for each listed host with:
spec:
  rules:
  {{- range $key, $value := .Values.global.ingress }}
  - host: {{ $value.hosts }}
    http:
      paths:
      - path: /qapi
        backend:
          serviceName: api-server
          servicePort: 80
  {{- end }}
But it generates the wrong hosts:
- host: [example.com]
- host: [test.example.com demo.example.com test.example.com]
Thanks for the help!
I've finally got it working using:
spec:
  rules:
  {{- range $key, $value := .Values.global.ingress }}
  {{- range $value.hosts }}
  - host: {{ . }}
    http:
      paths:
      - path: /qapi
        backend:
          serviceName: api-server
          servicePort: 80
  {{- end }}
  {{- end }}
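If the per-app port from the values (rather than the hard-coded 80) is needed inside the inner loop, keep in mind that . is rebound by the inner range; capturing the outer value in a variable keeps it reachable. A sketch based on the same values:

spec:
  rules:
  {{- range $app, $cfg := .Values.global.ingress }}
  {{- range $host := $cfg.hosts }}
  - host: {{ $host }}
    http:
      paths:
      - path: /qapi
        backend:
          serviceName: api-server
          servicePort: {{ $cfg.port }}
  {{- end }}
  {{- end }}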
I'm trying to generate Deployments for my Helm chart using this template
{{- range .Values.services }}
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-{{ . }}
spec:
  replicas: {{ .replicaCount }}
  template:
    metadata:
      labels:
        app: myapp-{{ . }}
        chart: myapp-{{ $.Values.cluster }}-{{ $.Values.environment }}
    spec:
      containers:
      - name: myapp-{{ . }}
        image: {{ $.Values.containerRegistry }}/myapp-{{ . }}:latest
        ports:
        - containerPort: {{ .targetPort }}
        env:
        {{- with .environmentVariables }}
        {{ indent 10 }}
        {{- end }}
      imagePullSecrets:
      - name: myregistry
{{- end }}
for two of my services. In values.yaml I have:
environment: dev
cluster: sandbox
ingress:
  enabled: true
containerRegistry: myapp.io
services:
  - backend:
      port: 80
      targetPort: 8080
      replicaCount: 1
      environmentVariables:
        - name: SOME_VAR
          value: "hello"
  - web:
      port: 80
      targetPort: 8080
      replicaCount: 1
      environmentVariables:
        - name: SOME_VAR
          value: "hello"
... but the output is not properly formatted:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-map[backend:map[replicaCount:1 targetPort:8080 environmentVariables:[map[name:SOME_VAR value:hello] port:80]]
instead of
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-web
(...)
and another config
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-backend
(...)
What functions can I use, or should I structure the data differently? None of the references (e.g. .environmentVariables) work correctly.
I think you should reconsider the way the data is structured; this would work better:
services:
  - name: backend
    settings:
      port: 80
      targetPort: 8080
      replicaCount: 1
      environmentVariables:
        - name: SOME_VAR
          value: "hello"
  - name: web
    settings:
      port: 80
      targetPort: 8080
      replicaCount: 1
      environmentVariables:
        - name: SOME_VAR
          value: "hello"
And your Deployment template would look like this:
{{- range .Values.services }}
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-{{ .name }}
spec:
  replicas: {{ .settings.replicaCount }}
  template:
    metadata:
      labels:
        app: myapp-{{ .name }}
    spec:
      containers:
      - name: myapp-{{ .name }}
        image: {{ $.Values.containerRegistry }}/myapp-{{ .name }}:latest
        ports:
        - containerPort: {{ .settings.targetPort }}
        env:
        {{- with .settings.environmentVariables }}
        {{ toYaml . | trim | indent 6 }}
        {{- end }}
      imagePullSecrets:
      - name: myregistry
{{- end }}
This would actually create two Deployments, thanks to the added --- separator.
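Another option, not from the original answer but a common pattern, is to drop the name/settings wrapper entirely and key the services by name, iterating with a two-variable range (a sketch; environment variables omitted for brevity):

services:
  backend:
    port: 80
    targetPort: 8080
    replicaCount: 1
  web:
    port: 80
    targetPort: 8080
    replicaCount: 1

and the template:

{{- range $name, $svc := .Values.services }}
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-{{ $name }}
spec:
  replicas: {{ $svc.replicaCount }}
  template:
    metadata:
      labels:
        app: myapp-{{ $name }}
    spec:
      containers:
      - name: myapp-{{ $name }}
        image: {{ $.Values.containerRegistry }}/myapp-{{ $name }}:latest
        ports:
        - containerPort: {{ $svc.targetPort }}
{{- end }}

Note that range over a map visits the keys in alphabetical order, and the --- separator is still needed so each iteration becomes its own document.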