I'm trying to unit test my Helm charts using Terratest, but running into a strange error:
Here is my unit test:
package grafana
import (
"fmt"
"testing"
corev1 "k8s.io/api/core/v1"
"github.com/gruntwork-io/terratest/modules/helm"
)
func TestGrafanaHelmChartTemplate(t *testing.T) {
// Path to the helm chart we will test
helmChartGrafanaPath := "../../../open-electrons-monitoring"
// Setup the args. For this test, we will set the following input values:
// - image=grafana:latest
options := &helm.Options{
SetValues: map[string]string{"image": "grafana:latest"},
}
// Run RenderTemplate to render the template and capture the output.
output := helm.RenderTemplate(t, options, helmChartGrafanaPath, "pod", []string{"templates/grafana/grafana-deployment.yml"})
// Now we use kubernetes/client-go library to render the template output into the Pod struct. This will
// ensure the Pod resource is rendered correctly.
var pod corev1.Pod
helm.UnmarshalK8SYaml(t, output, &pod)
// Finally, we verify the pod spec is set to the expected container image value
expectedContainerImage := "grafana:latest"
podContainers := pod.Spec.Containers
fmt.Print(pod.Spec)
fmt.Print("*********************************************************")
if podContainers[0].Image != expectedContainerImage {
t.Fatalf("Rendered container image (%s) is not expected (%s)", podContainers[0].Image, expectedContainerImage)
}
}
Here is what the rendered deployment template looks like:
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: apiVersion: apps/v1
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: kind: Deployment
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: metadata:
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: name: grafana-open-electrons-monitoring
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: namespace: open-electrons-monitoring-ns
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: labels:
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: app.kubernetes.io/name: open-electrons-grafana
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: app.kubernetes.io/component: monitoring
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: app.kubernetes.io/part-of: open-electrons-grafana
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: app.kubernetes.io/managed-by: helm
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: app.kubernetes.io/instance: open-electrons-grafana
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: app.kubernetes.io/version: refs/tags/v0.0.11 # TODO: Better use the Grafana version
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: spec:
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: replicas: 1
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: selector:
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: matchLabels:
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: app: open-electrons-grafana
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: strategy:
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: rollingUpdate:
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: maxSurge: 1
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: maxUnavailable: 1
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: type: RollingUpdate
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: template:
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: metadata:
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: creationTimestamp: null
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: labels:
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: name: open-electrons-grafana
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: spec:
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: securityContext:
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: runAsUser: 1000
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: runAsGroup: 3000
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: fsGroup: 2000
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: runAsNonRoot: true
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: containers:
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: - image: grafana/grafana:latest
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: imagePullPolicy: IfNotPresent
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: name: open-electrons-grafana
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: ports:
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: - containerPort: 3000
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: protocol: TCP
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: resources:
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: limits:
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: memory: "1Gi"
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: cpu: "1000m"
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: requests:
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: memory: 500M
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: cpu: "500m"
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: volumeMounts:
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: - mountPath: /var/lib/grafana
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: name: grafana-storage
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: - mountPath: /etc/grafana/provisioning/datasources
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: name: grafana-datasources
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: readOnly: false
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: restartPolicy: Always
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: terminationGracePeriodSeconds: 30
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: volumes:
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: - name: grafana-storage
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: emptyDir: {}
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: - name: grafana-datasources
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: configMap:
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: defaultMode: 420
TestGrafanaHelmChartTemplate 2023-02-12T18:59:01+01:00 logger.go:66: name: grafana-datasources
{[] [] [] [] <nil> <nil> map[] <nil> false false false <nil> nil [] nil [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}*********************************************************
--- FAIL: TestGrafanaHelmChartTemplate (0.06s)
And here is the resulting panic:
panic: runtime error: index out of range [0] with length 0 [recovered]
panic: runtime error: index out of range [0] with length 0
goroutine 5 [running]:
testing.tRunner.func1.2({0x1440620, 0xc0002a85b8})
/usr/local/go/src/testing/testing.go:1526 +0x24e
testing.tRunner.func1()
/usr/local/go/src/testing/testing.go:1529 +0x39f
panic({0x1440620, 0xc0002a85b8})
/usr/local/go/src/runtime/panic.go:884 +0x213
Why does this fail? What am I missing here?
I managed to fix it. The template renders a Deployment, not a Pod, so the import should be:
appsv1 "k8s.io/api/apps/v1"
and the rendered output has to be unmarshalled into a Deployment object:
var deployment appsv1.Deployment
instead of into a Pod object. When unmarshalled into a Pod, pod.Spec.Containers stays empty (as the spec dump above shows), which is why indexing [0] panics.
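Putting the pieces together, a minimal sketch of the corrected test (same chart path, release name and template file as above; the expected image is taken from the rendered output shown earlier, so adjust it if your chart renders something else):

package grafana

import (
	"testing"

	"github.com/gruntwork-io/terratest/modules/helm"
	appsv1 "k8s.io/api/apps/v1"
)

func TestGrafanaHelmChartTemplate(t *testing.T) {
	// Path to the helm chart we will test
	helmChartGrafanaPath := "../../../open-electrons-monitoring"

	options := &helm.Options{
		SetValues: map[string]string{"image": "grafana:latest"},
	}

	// Render only the Grafana deployment template.
	output := helm.RenderTemplate(t, options, helmChartGrafanaPath, "pod", []string{"templates/grafana/grafana-deployment.yml"})

	// Unmarshal into a Deployment, because that is the kind the template renders.
	var deployment appsv1.Deployment
	helm.UnmarshalK8SYaml(t, output, &deployment)

	containers := deployment.Spec.Template.Spec.Containers
	if len(containers) == 0 {
		t.Fatal("rendered deployment has no containers")
	}

	// The rendered output above shows grafana/grafana:latest, so assert against that.
	expectedContainerImage := "grafana/grafana:latest"
	if containers[0].Image != expectedContainerImage {
		t.Fatalf("Rendered container image (%s) is not expected (%s)", containers[0].Image, expectedContainerImage)
	}
}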
Related
So, I have deployed my service as a NodePort, but connection requests from my terminal never reach the application unless I use port-forward. Here are my specs:
$ kubectl get svc rhs-servicebase -o yaml
apiVersion: v1
kind: Service
metadata:
annotations:
getambassador.io/config: |
apiVersion: ambassador/v1
kind: Mapping
name: http_referral-health-signal_mapping
grpc: false
prefix: /referral-health-signal/
rewrite: /
timeout_ms: 0
service: rhs-servicebase:9000
cors:
origins: "*"
headers: X-User-Agent, X-Grpc-Web, Content-Type, Authorization
max_age: "1728000"
creationTimestamp: "2022-08-31T22:32:28Z"
labels:
app.kubernetes.io/name: servicebase
name: rhs-servicebase
namespace: default
resourceVersion: "93013"
uid: 84aba835-6399-49f4-be4f-4e6454d1bd7d
spec:
clusterIP: 10.103.51.237
clusterIPs:
- 10.103.51.237
externalTrafficPolicy: Cluster
internalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- nodePort: 30001
port: 9000
protocol: TCP
targetPort: 9000
selector:
app.kubernetes.io/instance: rhs
app.kubernetes.io/name: servicebase
sessionAffinity: None
type: NodePort
status:
loadBalancer: {}
$ kubectl get deployment rhs-servicebase -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: "2022-08-31T22:32:28Z"
generation: 1
labels:
app.kubernetes.io/instance: rhs
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: servicebase
helm.sh/chart: servicebase-0.1.5
name: rhs-servicebase
namespace: default
resourceVersion: "93040"
uid: 04af37db-94e0-42b3-91e1-56272791c70a
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app.kubernetes.io/instance: rhs
app.kubernetes.io/name: servicebase
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/instance: rhs
app.kubernetes.io/name: servicebase
spec:
containers:
- env:
- name: TZ
value: UTC
- name: LOG_FILE
value: application.log
- name: S3_BUCKET
value: livongo-int-healthsignal
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
key: aws-access-key-id
name: referral-health-signal
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
key: aws-secret-access-key
name: referral-health-signal
image: localhost:5000/referral-health-signal:latest
imagePullPolicy: Always
name: servicebase
ports:
- containerPort: 9000
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- name: secret-referral-health-signal
secret:
defaultMode: 420
items:
- key: aws-access-key-id
path: AWS_ACCESS_KEY_ID
- key: aws-secret-access-key
path: AWS_SECRET_ACCESS_KEY
secretName: referral-health-signal
status:
availableReplicas: 1
conditions:
- lastTransitionTime: "2022-08-31T22:32:30Z"
lastUpdateTime: "2022-08-31T22:32:30Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: "2022-08-31T22:32:28Z"
lastUpdateTime: "2022-08-31T22:32:30Z"
message: ReplicaSet "rhs-servicebase-6f676c458c" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 1
readyReplicas: 1
replicas: 1
updatedReplicas: 1
$ kubectl get pod rhs-servicebase-6f676c458c-f2rw6 -o yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2022-08-31T22:32:28Z"
generateName: rhs-servicebase-6f676c458c-
labels:
app.kubernetes.io/instance: rhs
app.kubernetes.io/name: servicebase
pod-template-hash: 6f676c458c
name: rhs-servicebase-6f676c458c-f2rw6
namespace: default
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: rhs-servicebase-6f676c458c
uid: 983b450e-4fe2-40fb-a332-a959d9b569bc
resourceVersion: "93036"
uid: 3dff4f66-8369-4855-a371-0fc2f37209a4
spec:
containers:
- env:
- name: TZ
value: UTC
- name: LOG_FILE
value: application.log
- name: S3_BUCKET
value: livongo-int-healthsignal
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
key: aws-access-key-id
name: referral-health-signal
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
key: aws-secret-access-key
name: referral-health-signal
image: localhost:5000/referral-health-signal:latest
imagePullPolicy: Always
name: servicebase
ports:
- containerPort: 9000
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-c984r
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: docker-desktop
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: secret-referral-health-signal
secret:
defaultMode: 420
items:
- key: aws-access-key-id
path: AWS_ACCESS_KEY_ID
- key: aws-secret-access-key
path: AWS_SECRET_ACCESS_KEY
secretName: referral-health-signal
- name: kube-api-access-c984r
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2022-08-31T22:32:28Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2022-08-31T22:32:30Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2022-08-31T22:32:30Z"
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2022-08-31T22:32:28Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://09d33da961a079adb4f6629eebd595dd6338b56f1f3ec779878503e9de04524f
image: localhost:5000/referral-health-signal:latest
imageID: docker-pullable://localhost:5000/referral-health-signal#sha256:d4dfeb70caa8145babcb025c287ec361bb1e920bf556cdec166d1d54f2136d1a
lastState: {}
name: servicebase
ready: true
restartCount: 0
started: true
state:
running:
startedAt: "2022-08-31T22:32:29Z"
hostIP: 192.168.65.4
phase: Running
podIP: 10.1.0.64
podIPs:
- ip: 10.1.0.64
qosClass: BestEffort
startTime: "2022-08-31T22:32:28Z"
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5d6h
rhs-servicebase NodePort 10.103.51.237 <none> 9000:30001/TCP 40m
$ kubectl get endpoints
NAME ENDPOINTS AGE
kubernetes 192.168.65.4:6443 5d6h
rhs-servicebase 10.1.0.64:9000 40m
I can't understand what I'm missing in my config. Even when I exec into the pod and run $ curl -i http://localhost:9000/ I don't get a response until I've turned on port forwarding - this is weird, right? At the very least, the container should be able to reach itself?
Please help!
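For anyone hitting the same thing, a few generic checks (not part of the original post; the node IP below is a placeholder) that help narrow down where the traffic stops:

$ kubectl get endpoints rhs-servicebase            # should list 10.1.0.64:9000, as shown above
$ kubectl get nodes -o wide                        # note the node's INTERNAL-IP
$ curl -i http://<node-internal-ip>:30001/         # NodePort path: terminal -> node -> Service -> pod
$ kubectl exec -it rhs-servicebase-6f676c458c-f2rw6 -- sh -c 'netstat -tlnp || ss -tlnp'
# the last command assumes netstat or ss exists in the image; the app should be listening on 0.0.0.0:9000, not 127.0.0.1

On Docker Desktop (the nodeName above), the node's internal IP is usually not reachable from the host directly, so the NodePort is typically tested as http://localhost:30001/ instead.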
I have an AWS NLB ingress controller and an ingress rule which routes traffic between an API and an SPA. The ingress controller works perfectly on HTTP, but on HTTPS I'm getting a 400 Bad Request - "plain HTTP request sent to HTTPS port".
If I understand it correctly, after TLS has been terminated the request is being forwarded to an HTTPS port rather than an HTTP one, but I'm struggling to find where:
ingress controller.yaml
apiVersion: v1
kind: Namespace
metadata:
name: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx
namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx
data:
---
# Source: ingress-nginx/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
name: ingress-nginx
namespace: ingress-nginx
rules:
- apiGroups:
- ''
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ''
resources:
- nodes
verbs:
- get
- apiGroups:
- ''
resources:
- services
verbs:
- get
- list
- update
- watch
- apiGroups:
- extensions
- networking.k8s.io # k8s 1.14+
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ''
resources:
- events
verbs:
- create
- patch
- apiGroups:
- extensions
- networking.k8s.io # k8s 1.14+
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- networking.k8s.io # k8s 1.14+
resources:
- ingressclasses
verbs:
- get
- list
- watch
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
name: ingress-nginx
namespace: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress-nginx
subjects:
- kind: ServiceAccount
name: ingress-nginx
namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx
namespace: ingress-nginx
rules:
- apiGroups:
- ''
resources:
- namespaces
verbs:
- get
- apiGroups:
- ''
resources:
- configmaps
- pods
- secrets
- endpoints
verbs:
- get
- list
- watch
- apiGroups:
- ''
resources:
- services
verbs:
- get
- list
- update
- watch
- apiGroups:
- extensions
- networking.k8s.io # k8s 1.14+
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- extensions
- networking.k8s.io # k8s 1.14+
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- networking.k8s.io # k8s 1.14+
resources:
- ingressclasses
verbs:
- get
- list
- watch
- apiGroups:
- ''
resources:
- configmaps
resourceNames:
- ingress-controller-leader-nginx
verbs:
- get
- update
- apiGroups:
- ''
resources:
- configmaps
verbs:
- create
- apiGroups:
- ''
resources:
- endpoints
verbs:
- create
- get
- update
- apiGroups:
- ''
resources:
- events
verbs:
- create
- patch
---
# Source: ingress-nginx/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx
namespace: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-nginx
subjects:
- kind: ServiceAccount
name: ingress-nginx
namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller-admission
namespace: ingress-nginx
spec:
type: ClusterIP
ports:
- name: https-webhook
port: 443
targetPort: webhook
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '60'
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
service.beta.kubernetes.io/aws-load-balancer-type: nlb
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-2:XXX:certificate/XXXXX
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
type: LoadBalancer
externalTrafficPolicy: Local
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
revisionHistoryLimit: 10
minReadySeconds: 0
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
spec:
dnsPolicy: ClusterFirst
containers:
- name: controller
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
args:
- /nginx-ingress-controller
- --publish-service=ingress-nginx/ingress-nginx-controller
- --election-id=ingress-controller-leader
- --ingress-class=nginx
- --configmap=ingress-nginx/ingress-nginx-controller
- --validating-webhook=:8443
- --validating-webhook-certificate=/usr/local/certificates/cert
- --validating-webhook-key=/usr/local/certificates/key
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
runAsUser: 101
allowPrivilegeEscalation: true
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
readinessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
- name: webhook
containerPort: 8443
protocol: TCP
volumeMounts:
- name: webhook-cert
mountPath: /usr/local/certificates/
readOnly: true
resources:
requests:
cpu: 100m
memory: 90Mi
serviceAccountName: ingress-nginx
terminationGracePeriodSeconds: 300
volumes:
- name: webhook-cert
secret:
secretName: ingress-nginx-admission
---
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
name: ingress-nginx-admission
namespace: ingress-nginx
webhooks:
- name: validate.nginx.ingress.kubernetes.io
rules:
- apiGroups:
- extensions
- networking.k8s.io
apiVersions:
- v1beta1
operations:
- CREATE
- UPDATE
resources:
- ingresses
failurePolicy: Fail
clientConfig:
service:
namespace: ingress-nginx
name: ingress-nginx-controller-admission
path: /extensions/v1beta1/ingresses
sideEffects: None
admissionReviewVersions: ["v1", "v1beta1"]
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: ingress-nginx-admission
annotations:
helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
namespace: ingress-nginx
rules:
- apiGroups:
- admissionregistration.k8s.io
resources:
- validatingwebhookconfigurations
verbs:
- get
- update
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: ingress-nginx-admission
annotations:
helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
namespace: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
name: ingress-nginx-admission
namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: ingress-nginx-admission-create
annotations:
helm.sh/hook: pre-install,pre-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
namespace: ingress-nginx
spec:
template:
metadata:
name: ingress-nginx-admission-create
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
spec:
containers:
- name: create
image: jettech/kube-webhook-certgen:v1.2.0
imagePullPolicy: IfNotPresent
args:
- create
- --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.ingress-nginx.svc
- --namespace=ingress-nginx
- --secret-name=ingress-nginx-admission
restartPolicy: OnFailure
serviceAccountName: ingress-nginx-admission
securityContext:
runAsNonRoot: true
runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: ingress-nginx-admission-patch
annotations:
helm.sh/hook: post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
namespace: ingress-nginx
spec:
template:
metadata:
name: ingress-nginx-admission-patch
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
spec:
containers:
- name: patch
image: jettech/kube-webhook-certgen:v1.2.0
imagePullPolicy:
args:
- patch
- --webhook-name=ingress-nginx-admission
- --namespace=ingress-nginx
- --patch-mutating=false
- --secret-name=ingress-nginx-admission
- --patch-failure-policy=Fail
restartPolicy: OnFailure
serviceAccountName: ingress-nginx-admission
securityContext:
runAsNonRoot: true
runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: ingress-nginx-admission
annotations:
helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
namespace: ingress-nginx
rules:
- apiGroups:
- ''
resources:
- secrets
verbs:
- get
- create
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: ingress-nginx-admission
annotations:
helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
namespace: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
name: ingress-nginx-admission
namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: ingress-nginx-admission
annotations:
helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
namespace: ingress-nginx
ingress-rules.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-rules
namespace: ingress-nginx
annotations:
nginx.ingress.kubernetes.io/rewrite-target: "/$1"
spec:
# tls:
# - hosts:
# - mysite.com
# secretName: secret-name
rules:
# - host: mysite.com
- http:
paths:
- path: /(.*)
pathType: Prefix
backend:
service:
name: frontend
port:
number: 80
- path: /(api/v0(?:/|$).*)
pathType: Prefix
backend:
service:
name: backend
port:
number: 80
frontend-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: frontend
namespace: ingress-nginx
# annotations:
# service.beta.kubernetes.io/aws-load-balancer-type: 'nlb'
spec:
selector:
app: my-performance
tier: frontend
ports:
- protocol: TCP
port: 80
targetPort: 80
type: LoadBalancer
backend-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: backend
namespace: ingress-nginx
# annotations:
# service.beta.kubernetes.io/aws-load-balancer-type: 'nlb'
spec:
selector:
app: my-performance
tier: backend
ports:
- protocol: TCP
name: "http"
port: 80
targetPort: 8080
type: LoadBalancer
I do have deployments behind these, but since the services themselves are working fine independently and in conjunction on http, I've ruled them out as the problem here.
Any advice is greatly appreciated!
I just lost an entire day troubleshooting this and it turned out to be the port configuration in the Service created by Helm: the targetPort of the https port needs to be 80 instead of the named port "https", because the NLB already terminates TLS (aws-load-balancer-ssl-ports: "https" with aws-load-balancer-backend-protocol: http) and forwards plain HTTP to the controller.
Before:
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: https
port: 443
protocol: TCP
targetPort: https
After:
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: https
port: 443
protocol: TCP
targetPort: 80
Here's how your service would look:
apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '60'
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
service.beta.kubernetes.io/aws-load-balancer-type: nlb
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-2:XXX:certificate/XXXXX
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
type: LoadBalancer
externalTrafficPolicy: Local
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: https
port: 443
protocol: TCP
targetPort: 80
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
Add this to the Helm values:
values={
"controller": {
...
"service": {
...
"targetPorts": {"https": "80"}
}
}
}
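The same setting expressed as a plain values.yaml mirrors the snippet above (the release name and chart reference in the helm command are placeholders):

controller:
  service:
    targetPorts:
      https: 80

or directly on the command line:

$ helm upgrade <release> ingress-nginx/ingress-nginx -n ingress-nginx --set controller.service.targetPorts.https=80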
Getting an error for the following deployment.yaml:
admin@ip-172-20-58-79:~/kubernetes-prometheus/kube-state-metrics-configs$ kubectl apply -f deployment.yaml
error: error converting YAML to JSON: yaml: line 21: found a tab character that violate indentation
admin@ip-172-20-58-79:~/kubernetes-prometheus/kube-state-metrics-configs$ cat deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
labels:
app.kubernetes.io/name: kube-state-metrics
app.kubernetes.io/version: v1.8.0
name: kube-state-metrics
namespace: kube-system
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: kube-state-metrics
template:
metadata:
labels:
app.kubernetes.io/name: kube-state-metrics
app.kubernetes.io/version: v1.8.0
spec:
containers:
- image: quay.io/coreos/kube-state-metrics:v1.8.0
imagePullPolicy: Always
name: kube-state-metrics
args:
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
livenessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 5
timeoutSeconds: 5
name: kube-state-metrics
ports:
- containerPort: 8080
name: http-metrics
- containerPort: 8081
name: telemetry
readinessProbe:
httpGet:
path: /
port: 8081
initialDelaySeconds: 5
timeoutSeconds: 5
nodeSelector:
kubernetes.io/os: linux
serviceAccountName: kube-state-metrics
Update 1:
admin@ip-172-20-58-79:~$ kubectl describe nodes
https://pastebin.com/bectdNes
Update 2: As suggested by Arghya Sadhu, I added a nodeSelector:
admin@ip-172-20-58-79:~/kubernetes-prometheus/kube-state-metrics-configs$ kubectl edit deploy kube-state-metrics -n kube-system
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "6"
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app.kubernetes.io/name":"kube-state-metrics","app.kubernetes.io/version":"v1.8.0"},"name":"kube-state-metrics","namespace":"kube-system"},"spec":{"replicas":1,"selector":{"matchLabels":{"app.kubernetes.io/name":"kube-state-metrics"}},"template":{"metadata":{"labels":{"app.kubernetes.io/name":"kube-state-metrics","app.kubernetes.io/version":"v1.8.0"}},"spec":{"containers":[{"args":["--kubelet-insecure-tls","--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname"],"image":"quay.io/coreos/kube-state-metrics:v1.8.0","imagePullPolicy":"Always","livenessProbe":{"httpGet":{"path":"/healthz","port":8080},"initialDelaySeconds":5,"timeoutSeconds":5},"name":"kube-state-metrics","ports":[{"containerPort":8080,"name":"http-metrics"},{"containerPort":8081,"name":"telemetry"}],"readinessProbe":{"httpGet":{"path":"/","port":8081},"initialDelaySeconds":5,"timeoutSeconds":5}}],"nodeSelector":{"kubernetes.io/os":"linux"},"hostNetwork":true,"nodeSelector":{"node-role.kubernetes.io/master":""},"tolerations":[{"effect":"NoSchedule","key":"node-role.kubernetes.io/master","serviceAccountName":"kube-state-metrics"}}}}
creationTimestamp: 2020-01-10T05:33:13Z
generation: 12
labels:
app.kubernetes.io/name: kube-state-metrics
app.kubernetes.io/version: v1.9.2
name: kube-state-metrics
namespace: kube-system
resourceVersion: "178997153"
selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/kube-state-metrics
uid: b20aa645-336a-11ea-9618-0607d7cb72ed
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 2
selector:
matchLabels:
app.kubernetes.io/name: kube-state-metrics
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/name: kube-state-metrics
app.kubernetes.io/version: v1.8.0
spec:
containers:
- args:
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP
image: quay.io/coreos/kube-state-metrics:v1.8.0
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
name: kube-state-metrics
ports:
- containerPort: 8080
hostPort: 8080
name: http-metrics
protocol: TCP
- containerPort: 8081
hostPort: 8081
name: telemetry
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /
port: 8081
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
hostNetwork: true
nodeName: ip-172-20-58-72.us-west-1.compute.internal
nodeSelector:
node-role.kubernetes.io/master: ""
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: kube-state-metrics
serviceAccountName: kube-state-metrics
terminationGracePeriodSeconds: 30
status:
conditions:
- lastTransitionTime: 2020-01-16T03:07:35Z
lastUpdateTime: 2020-01-16T03:07:35Z
message: Deployment does not have minimum availability.
reason: MinimumReplicasUnavailable
status: "False"
type: Available
- lastTransitionTime: 2020-01-15T07:41:48Z
lastUpdateTime: 2020-01-16T04:15:14Z
message: ReplicaSet "kube-state-metrics-5fdf7fb4fc" is progressing.
reason: ReplicaSetUpdated
status: "True"
type: Progressing
observedGeneration: 12
replicas: 2
unavailableReplicas: 2
updatedReplicas: 1
I'm getting the following error:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning PodFitsHostPorts 21s kubelet, ip-172-20-58-72.us-west-1.compute.internal Predicate PodFitsHostPorts failed
After changing the ports, I'm now getting:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning MatchNodeSelector 11s kubelet, ip-172-20-58-72.us-west-1.compute.internal Predicate MatchNodeSelector failed
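For reference, a generic check (not from the original post) for whether any node actually carries the label that the nodeSelector above asks for:

$ kubectl get nodes -l node-role.kubernetes.io/master --show-labels

If this returns nothing, the MatchNodeSelector predicate will keep failing; note that the pod is also pinned to ip-172-20-58-72.us-west-1.compute.internal via nodeName, so that specific node would need the label.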
I tried the yaml and it gave me this error:
error: unable to recognize "kube-state-metrics.yaml": no matches for kind "Deployment" in version "apps/v1beta1"
I changed apps/v1beta1 to apps/v1 to solve that, but I did not get the tab-character error that you reported.
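(Side note, not part of the original answer: you can list the API versions a cluster still serves with
$ kubectl api-versions | grep apps
On Kubernetes 1.16 and later this typically shows only apps/v1, which is why apps/v1beta1 Deployments are rejected.)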
Here is the yaml I used
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.kubernetes.io/name: kube-state-metrics
app.kubernetes.io/version: v1.8.0
name: kube-state-metrics
namespace: kube-system
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: kube-state-metrics
template:
metadata:
labels:
app.kubernetes.io/name: kube-state-metrics
app.kubernetes.io/version: v1.8.0
spec:
containers:
- image: quay.io/coreos/kube-state-metrics:v1.8.0
imagePullPolicy: Always
name: kube-state-metrics
args:
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
livenessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 5
timeoutSeconds: 5
name: kube-state-metrics
ports:
- containerPort: 8080
name: http-metrics
- containerPort: 8081
name: telemetry
readinessProbe:
httpGet:
path: /
port: 8081
initialDelaySeconds: 5
timeoutSeconds: 5
nodeSelector:
kubernetes.io/os: linux
serviceAccountName: kube-state-metrics
I cannot see an issue with the yaml given; I checked it and it deployed successfully. However, double-check that no tab character is used for indentation at line 21 of your file, then re-deploy the same yaml.
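A quick, generic way to find stray tab characters in a manifest (not part of the original answer; the -P flag assumes GNU grep):
$ grep -nP '\t' deployment.yaml
$ cat -A deployment.yaml | grep -n '\^I'
Either command prints the offending line numbers; YAML only allows spaces for indentation.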
Also try deploying the yaml for kube-state-metrics v1.9.2:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.kubernetes.io/name: kube-state-metrics
app.kubernetes.io/version: v1.9.2
name: kube-state-metrics
namespace: kube-system
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: kube-state-metrics
template:
metadata:
labels:
app.kubernetes.io/name: kube-state-metrics
app.kubernetes.io/version: v1.9.2
spec:
containers:
- image: quay.io/coreos/kube-state-metrics:v1.9.2
livenessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 5
timeoutSeconds: 5
name: kube-state-metrics
ports:
- containerPort: 8080
name: http-metrics
- containerPort: 8081
name: telemetry
readinessProbe:
httpGet:
path: /
port: 8081
initialDelaySeconds: 5
timeoutSeconds: 5
nodeSelector:
kubernetes.io/os: linux
serviceAccountName: kube-state-metrics
My app runs inside a Docker container and works perfectly fine on localhost, but when I run this Docker image in a Kubernetes cluster it gives me this error:
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
django.db.utils.OperationalError: could not translate host name "db" to address: Name or service not known
Here are my database settings from settings.py:
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': 'postgres',
'USER': 'postgres',
'HOST': 'db',
'PORT': 5432,
}
}
I deployed the image into the Kubernetes cluster with the frontend.yml manifest file; it looks like this:
frontend.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: dockersample-app
labels:
app: polls
spec:
replicas: 3
template:
metadata:
labels:
app: dcokersample-app
spec:
containers:
- name: dcokersample
image: mahesh61437/dockersample:v6
imagePullPolicy: Always
ports:
- containerPort: 8000
---
service.yml
apiVersion: v1
kind: Service
metadata:
name: dockersample-app
labels:
app: dockersample-app
spec:
type: LoadBalancer
ports:
- port: 8000
targetPort: 8000
selector:
app: dockersample-app
Here is my Dockerfile:
FROM python:3
RUN apt-get update
EXPOSE 8000
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
CMD python manage.py runserver
kubectl get pod,svc,deployment,pvc,pv -o yaml
apiVersion: v1
items:
- apiVersion: v1
kind: Pod
metadata:
annotations:
cilium.io/identity: "63547"
creationTimestamp: "2019-02-14T09:49:39Z"
generateName: dockersample-app-557878d964-
labels:
app: dcokersample-app
pod-template-hash: 557878d964
name: dockersample-app-557878d964-fm94j
namespace: default
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: dockersample-app-557878d964
uid: d8b8a828-303d-11e9-94cc-9252dc3b5955
resourceVersion: "271350"
selfLink: /api/v1/namespaces/default/pods/dockersample-app-557878d964-fm94j
uid: d8bc708b-303d-11e9-94cc-9252dc3b5955
spec:
containers:
- image: mahesh61437/dockersample:v6
imagePullPolicy: Always
name: dcokersample
ports:
- containerPort: 8000
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-svb6z
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: vibrant-ramanujan-8zmn
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: default-token-svb6z
secret:
defaultMode: 420
secretName: default-token-svb6z
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2019-02-14T09:49:39Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2019-02-14T09:49:49Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2019-02-14T09:49:49Z"
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2019-02-14T09:49:39Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://d82ec6f089cc76e64e7ba68d56ba5c1263343c08929d648c9fef005d4a08488c
image: mahesh61437/dockersample:v6
imageID: docker-pullable://mahesh61437/dockersample#sha256:54aa303cc5534609a1b579718f192323fad9dd57bd92a2897cd64f110438c965
lastState: {}
name: dcokersample
ready: true
restartCount: 0
state:
running:
startedAt: "2019-02-14T09:49:49Z"
hostIP: 10.139.16.196
phase: Running
podIP: 10.244.1.64
qosClass: BestEffort
startTime: "2019-02-14T09:49:39Z"
- apiVersion: v1
kind: Pod
metadata:
annotations:
cilium.io/identity: "63547"
creationTimestamp: "2019-02-14T09:49:39Z"
generateName: dockersample-app-557878d964-
labels:
app: dcokersample-app
pod-template-hash: 557878d964
name: dockersample-app-557878d964-ftngl
namespace: default
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: dockersample-app-557878d964
uid: d8b8a828-303d-11e9-94cc-9252dc3b5955
resourceVersion: "271354"
selfLink: /api/v1/namespaces/default/pods/dockersample-app-557878d964-ftngl
uid: d8bdda66-303d-11e9-94cc-9252dc3b5955
spec:
containers:
- image: mahesh61437/dockersample:v6
imagePullPolicy: Always
name: dcokersample
ports:
- containerPort: 8000
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-svb6z
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: vibrant-ramanujan-8zm3
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: default-token-svb6z
secret:
defaultMode: 420
secretName: default-token-svb6z
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2019-02-14T09:49:39Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2019-02-14T09:49:49Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2019-02-14T09:49:49Z"
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2019-02-14T09:49:39Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://ef71c722fbcc70ceb96d929e983e22263cbc40a54fd666cf73cc0dd73c437cae
image: mahesh61437/dockersample:v6
imageID: docker-pullable://mahesh61437/dockersample#sha256:54aa303cc5534609a1b579718f192323fad9dd57bd92a2897cd64f110438c965
lastState: {}
name: dcokersample
ready: true
restartCount: 0
state:
running:
startedAt: "2019-02-14T09:49:48Z"
hostIP: 10.139.120.24
phase: Running
podIP: 10.244.2.187
qosClass: BestEffort
startTime: "2019-02-14T09:49:39Z"
- apiVersion: v1
kind: Pod
metadata:
annotations:
cilium.io/identity: "63547"
creationTimestamp: "2019-02-14T09:49:39Z"
generateName: dockersample-app-557878d964-
labels:
app: dcokersample-app
pod-template-hash: 557878d964
name: dockersample-app-557878d964-lq78m
namespace: default
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: dockersample-app-557878d964
uid: d8b8a828-303d-11e9-94cc-9252dc3b5955
resourceVersion: "271358"
selfLink: /api/v1/namespaces/default/pods/dockersample-app-557878d964-lq78m
uid: d8be0705-303d-11e9-94cc-9252dc3b5955
spec:
containers:
- image: mahesh61437/dockersample:v6
imagePullPolicy: Always
name: dcokersample
ports:
- containerPort: 8000
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-svb6z
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: vibrant-ramanujan-8z79
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: default-token-svb6z
secret:
defaultMode: 420
secretName: default-token-svb6z
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2019-02-14T09:49:39Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2019-02-14T09:49:50Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2019-02-14T09:49:50Z"
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2019-02-14T09:49:39Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://fa3c8f25b260b0e3c032907ff796b5e22bf0479646d457914de518c3c6180be0
image: mahesh61437/dockersample:v6
imageID: docker-pullable://mahesh61437/dockersample#sha256:54aa303cc5534609a1b579718f192323fad9dd57bd92a2897cd64f110438c965
lastState: {}
name: dcokersample
ready: true
restartCount: 0
state:
running:
startedAt: "2019-02-14T09:49:49Z"
hostIP: 10.139.16.250
phase: Running
podIP: 10.244.0.168
qosClass: BestEffort
startTime: "2019-02-14T09:49:39Z"
- apiVersion: v1
kind: Service
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"dockersample-app"},"name":"dockersample-app","namespace":"default"},"spec":{"ports":[{"port":8000,"targetPort":8000}],"selector":{"app":"dockersample-app"},"type":"LoadBalancer"}}
creationTimestamp: "2019-02-14T09:49:39Z"
labels:
app: dockersample-app
name: dockersample-app
namespace: default
resourceVersion: "271514"
selfLink: /api/v1/namespaces/default/services/dockersample-app
uid: d8c78f7a-303d-11e9-94cc-9252dc3b5955
spec:
clusterIP: 10.245.57.250
externalTrafficPolicy: Cluster
ports:
- nodePort: 32204
port: 8000
protocol: TCP
targetPort: 8000
selector:
app: dockersample-app
sessionAffinity: None
type: LoadBalancer
status:
loadBalancer:
ingress:
- ip: 174.138.123.199
- apiVersion: v1
kind: Service
metadata:
creationTimestamp: "2019-02-12T09:31:19Z"
labels:
component: apiserver
provider: kubernetes
name: kubernetes
namespace: default
resourceVersion: "6"
selfLink: /api/v1/namespaces/default/services/kubernetes
uid: f3f45187-2ea8-11e9-94cc-9252dc3b5955
spec:
clusterIP: 10.245.0.1
ports:
- name: https
port: 443
protocol: TCP
targetPort: 443
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
- apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"extensions/v1beta1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"polls"},"name":"dockersample-app","namespace":"default"},"spec":{"replicas":3,"template":{"metadata":{"labels":{"app":"dcokersample-app"}},"spec":{"containers":[{"image":"mahesh61437/dockersample:v6","imagePullPolicy":"Always","name":"dcokersample","ports":[{"containerPort":8000}]}]}}}}
creationTimestamp: "2019-02-14T09:49:39Z"
generation: 1
labels:
app: polls
name: dockersample-app
namespace: default
resourceVersion: "271360"
selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/dockersample-app
uid: d8b79710-303d-11e9-94cc-9252dc3b5955
spec:
progressDeadlineSeconds: 2147483647
replicas: 3
revisionHistoryLimit: 2147483647
selector:
matchLabels:
app: dcokersample-app
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: dcokersample-app
spec:
containers:
- image: mahesh61437/dockersample:v6
imagePullPolicy: Always
name: dcokersample
ports:
- containerPort: 8000
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 3
conditions:
- lastTransitionTime: "2019-02-14T09:49:49Z"
lastUpdateTime: "2019-02-14T09:49:49Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
observedGeneration: 1
readyReplicas: 3
replicas: 3
updatedReplicas: 3
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"labels":{"app":"postgres"},"name":"postgres-pv-claim","namespace":"default"},"spec":{"accessModes":["ReadWriteMany"],"resources":{"requests":{"storage":"5Gi"}},"storageClassName":"manual"}}
pv.kubernetes.io/bind-completed: "yes"
pv.kubernetes.io/bound-by-controller: "yes"
creationTimestamp: "2019-02-14T10:11:47Z"
finalizers:
- kubernetes.io/pvc-protection
labels:
app: postgres
name: postgres-pv-claim
namespace: default
resourceVersion: "273451"
selfLink: /api/v1/namespaces/default/persistentvolumeclaims/postgres-pv-claim
uid: f02728ee-3040-11e9-94cc-9252dc3b5955
spec:
accessModes:
- ReadWriteMany
dataSource: null
resources:
requests:
storage: 5Gi
storageClassName: manual
volumeMode: Filesystem
volumeName: postgres-pv-volume
status:
accessModes:
- ReadWriteMany
capacity:
storage: 5Gi
phase: Bound
- apiVersion: v1
kind: PersistentVolume
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"labels":{"app":"postgres","type":"local"},"name":"postgres-pv-volume"},"spec":{"accessModes":["ReadWriteMany"],"capacity":{"storage":"5Gi"},"hostPath":{"path":"/mnt/data"},"storageClassName":"manual"}}
pv.kubernetes.io/bound-by-controller: "yes"
creationTimestamp: "2019-02-14T10:11:47Z"
finalizers:
- kubernetes.io/pv-protection
labels:
app: postgres
type: local
name: postgres-pv-volume
resourceVersion: "273449"
selfLink: /api/v1/persistentvolumes/postgres-pv-volume
uid: f01f5beb-3040-11e9-94cc-9252dc3b5955
spec:
accessModes:
- ReadWriteMany
capacity:
storage: 5Gi
claimRef:
apiVersion: v1
kind: PersistentVolumeClaim
name: postgres-pv-claim
namespace: default
resourceVersion: "273446"
uid: f02728ee-3040-11e9-94cc-9252dc3b5955
hostPath:
path: /mnt/data
type: ""
persistentVolumeReclaimPolicy: Retain
storageClassName: manual
volumeMode: Filesystem
status:
phase: Bound
kind: List
metadata:
resourceVersion: ""
selfLink: ""
I can't figure out what I must do now. If you can suggest something better in my code, please leave a comment.
You're missing the database Deployment and Service (plus an optional, but highly recommended, PersistentVolumeClaim). A Service named db is what makes the hostname db from your Django settings resolvable inside the cluster:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: db-deployment
labels:
app: db-deployment
spec:
replicas: 1
template:
metadata:
labels:
app: db
spec:
containers:
- image: postgres:9.4
name: db
ports:
- containerPort: 5432
volumeMounts:
- name: postgres-db-data
mountPath: /var/lib/postgresql
volumes:
- name: postgres-db-data
persistentVolumeClaim:
claimName: db-data
---
apiVersion: v1
kind: Service
metadata:
name: db
labels:
name: db
spec:
ports:
- name: db
port: 5432
selector:
app: db # must match the pod template label above, otherwise the Service selects no pods
---
apiVersion: "v1"
kind: "PersistentVolumeClaim"
metadata:
name: "db-data"
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
storageClassName: "your storage class"
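Once the db Service exists, a quick generic check (not part of the original answer) that the hostname resolves from inside the cluster:

$ kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup db

This should return the ClusterIP of the db Service; Django's HOST: 'db' setting then connects to it on port 5432.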
I have deployed a kops Kubernetes cluster in AWS; everything is in the same namespace.
The nginx ingress controller routes traffic to HTTPS backends (WordPress apps).
I'm able to reach the website, but unfortunately only about 1 out of every 10 calls gets an HTTP 200; the other 9 get a 404 Not Found from nginx.
I've searched everywhere but had no luck :(
My configuration:
DNS -> AWS NLB -> 2 Nodes
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-nginx
namespace: example-ns
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
nginx.ingress.kubernetes.io/ssl-passthrough: "True"
nginx.org/ssl-services: test-service
nginx.ingress.kubernetes.io/affinity: "cookie"
spec:
rules:
- host: "test.example.com"
http:
paths:
- path: /
backend:
serviceName: test-service
servicePort: 8443
nginx-service.yaml:
kind: Service
apiVersion: v1
metadata:
name: ingress-nginx
namespace: example-ns
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: nlb
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
spec:
externalTrafficPolicy: Local
type: LoadBalancer
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
ports:
- name: http
port: 80
targetPort: http
- name: https
port: 443
targetPort: https
nginx-daemonset.yaml:
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
name: nginx-ingress-controller
namespace: example-ns
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
annotations:
prometheus.io/port: "10254"
prometheus.io/scrape: "true"
spec:
serviceAccountName: nginx-ingress-serviceaccount
imagePullSecrets:
- name: private-repo
containers:
- name: nginx-ingress-controller
image: private_repo/private_image
args:
- /nginx-ingress-controller
- --configmap=$(POD_NAMESPACE)/nginx-configuration
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
- --publish-service=$(POD_NAMESPACE)/ingress-nginx
- --annotations-prefix=nginx.ingress.kubernetes.io
- --default-ssl-certificate=$(POD_NAMESPACE)/tls-cert
securityContext:
allowPrivilegeEscalation: true
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
runAsUser: 33
resources:
limits:
cpu: 500m
memory: 300Mi
requests:
cpu: 400m
memory: 200Mi
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- name: https
containerPort: 443
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
wordpress.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: test-example
namespace: example-ns
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
strategy:
type: RollingUpdate
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
restartPolicy: Always
volumes:
- name: volume
persistentVolumeClaim:
claimName: volume-claim
imagePullSecrets:
- name: private-repo
containers:
- name: test-example-httpd
image: private_repo/private_image
imagePullPolicy: Always
ports:
- containerPort: 8443
name: https
- name: test-example-php-fpm
image: private_repo/private_image
imagePullPolicy: Always
securityContext:
runAsUser: 82
securityContext:
allowPrivilegeEscalation: false
---
apiVersion: v1
kind: Service
metadata:
name: test-service
namespace: example-ns
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
ports:
- name: https-web
targetPort: 8443
port: 8443
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---UPDATE---
kubectl get endpoints,services -n example-ns
NAME ENDPOINTS AGE
endpoints/ingress-nginx 100.101.0.1:8443,100.100.0.4:443,100.101.0.2:443 1d
endpoints/test-service 100.100.0.1:8443,100.101.0.1:8443,100.101.0.2:8443 4h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ingress-nginx LoadBalancer SOME-IP sometext.elb.us-west-3.amazonaws.com 80:31541/TCP,443:31017/TCP 1d
service/test-service ClusterIP SOME-IP <none> 8443/TCP 4h
Thanks!
Apparently changing the annotation nginx.ingress.kubernetes.io/ssl-passthrough from "True" to "False" solved it.
It probably has something to do with TLS being terminated in NGINX rather than in the Apache backend.
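For reference, the change the answer describes, applied to the Ingress shown in the question, is just the annotation value (a sketch; the other annotations stay as they are):

nginx.ingress.kubernetes.io/ssl-passthrough: "false"

Note that in ingress-nginx the ssl-passthrough annotation only takes effect when the controller is started with --enable-ssl-passthrough, which does not appear in the DaemonSet args above; with passthrough off, NGINX terminates TLS itself (using the --default-ssl-certificate from the args) and re-encrypts to the backend, since backend-protocol is set to "HTTPS".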