I am seeing the zonal network endpoint group reported as unhealthy after configuring an Ingress with a managed certificate on GCP via:
# kubernetes/backstage.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: backstage
namespace: backstage
spec:
replicas: 1
selector:
matchLabels:
app: backstage
template:
metadata:
labels:
app: backstage
spec:
containers:
- name: backstage
image: australia-southeast1-docker.pkg.dev/acme-dev-tooling/acme-docker/backstage:prd-v.0.35
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 7007
envFrom:
- secretRef:
name: postgres-secrets
- secretRef:
name: backstage-secrets
---
apiVersion: v1
kind: Service
metadata:
name: backstage
namespace: backstage
annotations:
cloud.google.com/backend-config: '{"default": "backstage-ingress-backendconfig"}'
spec:
selector:
app: backstage
ports:
- name: http
protocol: TCP
port: 80
type: NodePort
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
name: backstage-ingress-backendconfig
spec:
healthCheck:
checkIntervalSec: 15
type: HTTP
requestPath: /
---
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
name: tools-managed-cert-backstage
namespace: backstage
spec:
domains:
- tools.backstage.acme-uat.com
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: backstage-ingress
namespace: backstage
annotations:
kubernetes.io/ingress.global-static-ip-name: "tools-backstage-external-ip"
networking.gke.io/managed-certificates: tools-managed-cert-backstage
kubernetes.io/ingress.class: "gce"
spec:
defaultBackend:
service:
name: backstage
port:
number: 80
---
apiVersion: v1
kind: Namespace
metadata:
name: backstage
GCP provisions an L7 HTTPS load balancer, but it cannot reach the GKE cluster because the zonal network endpoint group health checks fail.
The ingress reads:
All Backends are in an UNHEALTHY state.
Is there something I am missing? Does the GKE ingress configure the firewall? I've looked at the rules; there are rules for 130.211.0.0/22 and 35.191.0.0/16, which are the health check source ranges.
logs/compute.googleapis.com%2Fhealthchecks yields no probe results, despite logging being enabled.
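For reference, this is roughly how I have been checking things from the CLI (BACKEND_SERVICE_NAME is a placeholder for the auto-generated backend service, so adjust it to your project):
# Firewall rules that allow the GCLB health check source ranges
gcloud compute firewall-rules list --format=yaml | grep -B 10 "130.211.0.0/22"
# What the load balancer itself reports for backend health
gcloud compute backend-services list
gcloud compute backend-services get-health BACKEND_SERVICE_NAME --global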
Any help would be much appreciated.
UPDATE: the above is fixed per the comments; the following still isn't working.
apiVersion: v1
kind: Service
metadata:
name: argocd-server
namespace: argocd
labels:
app.kubernetes.io/component: server
app.kubernetes.io/instance: argocd
app.kubernetes.io/managed-by: pulumi
app.kubernetes.io/name: argocd-server
app.kubernetes.io/part-of: argocd
argocd.argoproj.io/instance: argocd
helm.sh/chart: argo-cd-3.29.5
annotations:
cloud.google.com/backend-config: '{"default": "argocd-ingress-backendconfig"}'
cloud.google.com/neg: '{"ingress": true}'
cloud.google.com/neg-status: >-
{"network_endpoint_groups":{"80":"k8s1-20a3d3ad-argocd-argocd-server-80-c2ec22fa"},"zones":["australia-southeast1-a"]}
kubectl.kubernetes.io/last-applied-configuration: >
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/backend-config":"{\"default\":
\"argocd-ingress-backendconfig\"}","cloud.google.com/neg":"{\"ingress\":
true}"},"labels":{"app.kubernetes.io/component":"server","app.kubernetes.io/instance":"argocd","app.kubernetes.io/managed-by":"pulumi","app.kubernetes.io/name":"argocd-server","app.kubernetes.io/part-of":"argocd","argocd.argoproj.io/instance":"argocd","helm.sh/chart":"argo-cd-3.29.5"},"name":"argocd-server","namespace":"argocd"},"spec":{"ports":[{"name":"http","port":80,"protocol":"TCP","targetPort":8080},{"name":"https","port":443,"protocol":"TCP","targetPort":"http"}],"selector":{"app.kubernetes.io/instance":"argocd","app.kubernetes.io/name":"argocd-server"},"type":"ClusterIP"}}
status:
loadBalancer: {}
spec:
ports:
- name: http
protocol: TCP
port: 80
targetPort: 8080
- name: https
protocol: TCP
port: 443
targetPort: http
selector:
app.kubernetes.io/instance: argocd
app.kubernetes.io/name: argocd-server
clusterIP: 10.184.10.20
clusterIPs:
- 10.184.10.20
type: ClusterIP
sessionAffinity: None
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: argocd-server
namespace: argocd
uid: fee5f91c-b431-4b8c-ab10-64daa02ec729
resourceVersion: '108355'
generation: 3
creationTimestamp: '2022-01-20T00:06:05Z'
labels:
app.kubernetes.io/component: server
app.kubernetes.io/instance: argocd
app.kubernetes.io/managed-by: pulumi
app.kubernetes.io/name: argocd-server
app.kubernetes.io/part-of: argocd
app.kubernetes.io/version: v2.2.2
helm.sh/chart: argo-cd-3.30.1
annotations:
deployment.kubernetes.io/revision: '3'
kubectl.kubernetes.io/last-applied-configuration: >
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"labels":{"app.kubernetes.io/component":"server","app.kubernetes.io/instance":"argocd","app.kubernetes.io/managed-by":"pulumi","app.kubernetes.io/name":"argocd-server","app.kubernetes.io/part-of":"argocd","app.kubernetes.io/version":"v2.2.2","helm.sh/chart":"argo-cd-3.30.1"},"name":"argocd-server","namespace":"argocd"},"spec":{"replicas":1,"revisionHistoryLimit":5,"selector":{"matchLabels":{"app.kubernetes.io/instance":"argocd","app.kubernetes.io/name":"argocd-server"}},"template":{"metadata":{"labels":{"app.kubernetes.io/component":"server","app.kubernetes.io/instance":"argocd","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"argocd-server","app.kubernetes.io/part-of":"argocd","app.kubernetes.io/version":"v2.2.2","helm.sh/chart":"argo-cd-3.30.1"}},"spec":{"containers":[{"command":["argocd-server","--staticassets","/shared/app","--repo-server","argocd-repo-server:8081","--dex-server","http://argocd-dex-server:5556","--logformat","text","--loglevel","info","--redis","argocd-redis:6379"],"image":"quay.io/argoproj/argocd:v2.2.2","imagePullPolicy":"IfNotPresent","livenessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz","port":8080},"initialDelaySeconds":10,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"name":"server","ports":[{"containerPort":8080,"name":"server","protocol":"TCP"}],"readinessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz","port":8080},"initialDelaySeconds":10,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"resources":{},"volumeMounts":[{"mountPath":"/app/config/ssh","name":"ssh-known-hosts"},{"mountPath":"/app/config/server/tls","name":"argocd-repo-server-tls"},{"mountPath":"/home/argocd","name":"plugins-home"},{"mountPath":"/tmp","name":"tmp-dir"}]}],"serviceAccountName":"argocd-server","volumes":[{"emptyDir":{},"name":"static-files"},{"emptyDir":{},"name":"tmp-dir"},{"configMap":{"name":"argocd-ssh-known-hosts-cm"},"name":"ssh-known-hosts"},{"name":"argocd-repo-server-tls","secret":{"items":[{"key":"tls.crt","path":"tls.crt"},{"key":"tls.key","path":"tls.key"},{"key":"ca.crt","path":"ca.crt"}],"optional":true,"secretName":"argocd-repo-server-tls"}},{"emptyDir":{},"name":"plugins-home"}]}}}}
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/instance: argocd
app.kubernetes.io/name: argocd-server
template:
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/component: server
app.kubernetes.io/instance: argocd
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: argocd-server
app.kubernetes.io/part-of: argocd
app.kubernetes.io/version: v2.2.2
helm.sh/chart: argo-cd-3.30.1
annotations:
kubectl.kubernetes.io/restartedAt: '2022-01-20T15:44:27+11:00'
spec:
volumes:
- name: static-files
emptyDir: {}
- name: tmp-dir
emptyDir: {}
- name: ssh-known-hosts
configMap:
name: argocd-ssh-known-hosts-cm
defaultMode: 420
- name: argocd-repo-server-tls
secret:
secretName: argocd-repo-server-tls
items:
- key: tls.crt
path: tls.crt
- key: tls.key
path: tls.key
- key: ca.crt
path: ca.crt
defaultMode: 420
optional: true
- name: plugins-home
emptyDir: {}
containers:
- name: server
image: quay.io/argoproj/argocd:v2.2.2
command:
- argocd-server
- '--staticassets'
- /shared/app
- '--repo-server'
- argocd-repo-server:8081
- '--dex-server'
- http://argocd-dex-server:5556
- '--logformat'
- text
- '--loglevel'
- info
- '--redis'
- argocd-redis:6379
ports:
- name: server
containerPort: 8080
protocol: TCP
resources: {}
volumeMounts:
- name: ssh-known-hosts
mountPath: /app/config/ssh
- name: argocd-repo-server-tls
mountPath: /app/config/server/tls
- name: plugins-home
mountPath: /home/argocd
- name: tmp-dir
mountPath: /tmp
livenessProbe:
httpGet:
path: /
port: 8080
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
readinessProbe:
httpGet:
path: /
port: 8080
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
serviceAccountName: argocd-server
serviceAccount: argocd-server
securityContext: {}
schedulerName: default-scheduler
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 25%
maxSurge: 25%
revisionHistoryLimit: 5
progressDeadlineSeconds: 600
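For reference, here is how I have been checking that the NEG actually contains the pod endpoint (the NEG name and zone below are taken from the neg-status annotation on the Service above):
# Endpoints registered in the zonal NEG
gcloud compute network-endpoint-groups list-network-endpoints \
  k8s1-20a3d3ad-argocd-argocd-server-80-c2ec22fa --zone australia-southeast1-a
# Ready endpoints on the cluster side
kubectl -n argocd get endpoints argocd-server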
Cheers
# Here is a workaround for Google Cloud with ArgoCD v2.5.2
# cloudflare-key.yaml
---
apiVersion: v1
kind: Secret
metadata:
name: cloudflare-key
namespace: cert-manager
type: Opaque
stringData:
key: xxxxxxxxxxxxxxxx
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-staging
spec:
acme:
email: zia#mydomain.com
server: https://acme-staging-v02.api.letsencrypt.org/directory
privateKeySecretRef:
name: letsencrypt-staging
solvers:
- selector: {}
dns01:
cloudflare:
email: zia#mydomain.com
apiKeySecretRef:
name: cloudflare-key
key: key
---
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/name: argocd-server
app.kubernetes.io/part-of: argocd
app.kubernetes.io/component: server
annotations:
cloud.google.com/neg: '{"ingress": true, "exposed_ports": {"8080":{}}}'
beta.cloud.google.com/backend-config: '{"default": "argocd-backend-config"}'
name: argocd-server
spec:
ports:
- name: http8080
protocol: TCP
port: 8080
targetPort: 8080
- name: http
protocol: TCP
port: 80
targetPort: 8080
- name: https
protocol: TCP
port: 443
targetPort: 8080
selector:
app.kubernetes.io/name: argocd-server
---
# backendconfig.yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
name: argocd-backend-config
namespace: argocd
spec:
healthCheck:
checkIntervalSec: 30
timeoutSec: 10
healthyThreshold: 1
unhealthyThreshold: 5
type: HTTP
requestPath: /healthz
port: 8080
---
# FrontendConfig.yaml
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
name: argocd-frontend-config
namespace: argocd
spec:
redirectToHttps:
enabled: true
---
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: argocd-server-ingress
namespace: argocd
annotations:
kubernetes.io/ingress.class: gce
cert-manager.io/cluster-issuer: letsencrypt-staging
kubernetes.io/tls-acme: "true"
kubernetes.io/ingress.global-static-ip-name: "argocd-dev"
networking.gke.io/v1beta1.FrontendConfig: argocd-frontend-config
spec:
rules:
- host: argocd-dev.mydomain.com
http:
paths:
- backend:
service:
name: argocd-server
port:
name: http
path: "/"
pathType: Prefix
tls:
- hosts:
- argocd-dev.mydomain.com
secretName: argocd-secret #don't change, this is provided by ArgoCD
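After applying the above, you can verify the pieces roughly like this (resource names match the manifests in this answer):
# Issuer and certificate status from cert-manager
kubectl get clusterissuer letsencrypt-staging
kubectl -n argocd get certificate,certificaterequest
# Ingress wiring, backends and events
kubectl -n argocd describe ingress argocd-server-ingress
One assumption worth calling out: the BackendConfig health check above is plain HTTP on port 8080, which works when argocd-server runs with TLS disabled (for example server.insecure: "true" in the argocd-cmd-params-cm ConfigMap). If your argocd-server still terminates TLS itself, adjust the health check accordingly.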
I'm facing this issue: upstream connect error or disconnect/reset before headers. reset reason: connection failure. Here are my deployment and service files:
apiVersion: v1
kind: Service
metadata:
name: project
labels:
app: project
service: project
spec:
ports:
- port: 9080
name: http
selector:
app: project
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: project-svc
labels:
account: project
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: project-v1
labels:
app: project
version: v1
spec:
replicas: 1
selector:
matchLabels:
app: project
version: v1
template:
metadata:
labels:
app: project
version: v1
spec:
serviceAccountName: project-svc
containers:
- name: project
image: segullshairbutt/website:admin_project_a_01_cl1_wn1_pod1_c4
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9080
and here are the Gateway and VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: project-gateway
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: projectinfo
spec:
hosts:
- "*"
gateways:
- project-gateway
http:
- match:
- uri:
exact: /productpage
route:
- destination:
host: project
port:
number: 9080
When I visit using the minikube IP and the istio-ingressgateway port I get this error, but when I just change the image from mine to the Bookinfo productpage image, the error does not appear. I don't know why this happens or where it comes from.
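Here is roughly what I have checked so far (assuming everything is in the default namespace and using the resource names above), and nothing jumped out at me:
# The pod should show 2/2 containers if the Istio sidecar is injected
kubectl get pods -l app=project
# Let Istio report any configuration problems it can detect
istioctl analyze
# Confirm the ingress gateway and sidecars have synced their config
istioctl proxy-status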
Please help me; I'll be very thankful to you!
Using this config I am getting no healthy upstream and HTTP 503. If I just remove the subsets, everything works perfectly fine.
# Source: ccgf-helm-umbrella-chart/charts/ccgf-cdlg-app/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
labels:
app: ccgf-cdlg-app
service: ccgf-cdlg-app
name: ccgf-cdlg-app
namespace: cdlg-edc-devci
spec:
selector:
app: ccgf-cdlg-app
ports:
- name: http
protocol: TCP
port: 80
targetPort: 80
---
# Source: ccgf-helm-umbrella-chart/charts/ccgf-cdlg-app/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: ccgf-cdlg-app-production
namespace: cdlg-edc-devci
labels:
app: ccgf-cdlg-app
version: production
spec:
replicas: 1
selector:
matchLabels:
app: ccgf-cdlg-app
version: production
template:
metadata:
labels:
app: ccgf-cdlg-app
version: production
spec:
containers:
- image: edc-ccgf-ui-app:1.37
imagePullPolicy: Always
name: ccgf-cdlg-app
ports:
- name: ccgf-cdlg-app
containerPort: 80
readinessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 20
periodSeconds: 20
imagePullSecrets:
- name: spinnakerrepoaccess
---
# Source: ccgf-helm-umbrella-chart/charts/ccgf-cdlg-app/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: ccgf-cdlg-app-canary
namespace: cdlg-edc-devci
labels:
app: ccgf-cdlg-app
version: canary
spec:
replicas: 1
selector:
matchLabels:
app: ccgf-cdlg-app
version: canary
template:
metadata:
labels:
app: ccgf-cdlg-app
version: canary
spec:
containers:
- image: edc-ccgf-ui-app:1.38
imagePullPolicy: Always
name: ccgf-cdlg-app
ports:
- name: ccgf-cdlg-app
containerPort: 80
readinessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 20
periodSeconds: 20
imagePullSecrets:
- name: spinnakerrepoaccess
---
# VirtualService
kind: VirtualService
apiVersion: networking.istio.io/v1alpha3
metadata:
name: ccgf-cdlg-app
namespace: cdlg-edc-devci
spec:
hosts:
- '*'
gateways:
- ccgf-gateway
http:
- match:
- uri:
prefix: /cdlg-edc-devci/frontend
rewrite:
uri: /
route:
- destination:
host: ccgf-cdlg-app.cdlg-edc-devci.svc.cluster.local
subset: production
retries:
attempts: 3
perTryTimeout: 2s
retryOn: 'gateway-error,connect-failure,refused-stream'
weight: 50
- destination:
host: ccgf-cdlg-app.cdlg-edc-devci.svc.cluster.local
subset: canary
retries:
attempts: 3
perTryTimeout: 2s
retryOn: 'gateway-error,connect-failure,refused-stream'
weight: 50
- match:
- uri:
prefix: /static
rewrite:
uri: /static
route:
- destination:
host: ccgf-cdlg-app.cdlg-edc-devci.svc.cluster.local
retries:
attempts: 3
perTryTimeout: 2s
retryOn: 'gateway-error,connect-failure,refused-stream'
---
# DestinationRule
kind: DestinationRule
apiVersion: networking.istio.io/v1alpha3
metadata:
name: ccgf-cdlg-app
namespace: cdlg-edc-devci
spec:
host: ccgf-cdlg-app
subsets:
- labels:
version: canary
name: canary
- labels:
version: production
name: production
---
# Source: ccgf-helm-umbrella-chart/charts/ccgf-gateway/templates/gateway.yaml
kind: Gateway
apiVersion: networking.istio.io/v1alpha3
metadata:
name: ccgf-gateway
namespace: namespace
spec:
servers:
- hosts:
- '*'
port:
name: http
number: 80
protocol: HTTP
selector:
release: istio-custom-ingress-gateways
I made a reproduction based on your YAMLs and everything works just fine; the only difference is that I use the basic Istio ingress gateway instead of the custom one.
To start, could you please change the host in the DestinationRule and check whether it works then?
It should be
ccgf-cdlg-app.cdlg-edc-devci.svc.cluster.local instead of ccgf-cdlg-app
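For clarity, that is this change (same subsets as in your DestinationRule):
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: ccgf-cdlg-app
  namespace: cdlg-edc-devci
spec:
  host: ccgf-cdlg-app.cdlg-edc-devci.svc.cluster.local
  subsets:
  - name: canary
    labels:
      version: canary
  - name: production
    labels:
      version: production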
Did you enable istio injection in your cdlg-edc-devci namespace?
You can check it with kubectl get namespace -L istio-injection
It should be
NAME STATUS AGE ISTIO-INJECTION
cdlg-edc-devci Active 37m enabled
And here are the reproduction YAMLs.
kubectl create namespace cdlg-edc-devci
kubectl label namespace cdlg-edc-devci istio-injection=enabled
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx1
namespace: cdlg-edc-devci
spec:
selector:
matchLabels:
app: ccgf-cdlg-app
version: production
replicas: 1
template:
metadata:
labels:
app: ccgf-cdlg-app
version: production
spec:
containers:
- name: nginx1
image: nginx
ports:
- name: http-dep1
containerPort: 80
lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", "echo Hello nginx1 > /usr/share/nginx/html/index.html"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx2
namespace: cdlg-edc-devci
spec:
selector:
matchLabels:
app: ccgf-cdlg-app
version: canary
replicas: 1
template:
metadata:
labels:
app: ccgf-cdlg-app
version: canary
spec:
containers:
- name: nginx2
image: nginx
ports:
- name: http-dep2
containerPort: 80
lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", "echo Hello nginx2 > /usr/share/nginx/html/index.html"]
---
apiVersion: v1
kind: Service
metadata:
name: nginx
namespace: cdlg-edc-devci
labels:
app: ccgf-cdlg-app
spec:
ports:
- name: http-svc
port: 80
protocol: TCP
selector:
app: ccgf-cdlg-app
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: nginxvirt
namespace: cdlg-edc-devci
spec:
gateways:
- ccgf-gateway
hosts:
- '*'
http:
- name: production
match:
- uri:
prefix: /cdlg-edc-devci/frontend
rewrite:
uri: /
route:
- destination:
host: nginx.cdlg-edc-devci.svc.cluster.local
port:
number: 80
subset: can
weight: 50
- destination:
host: nginx.cdlg-edc-devci.svc.cluster.local
port:
number: 80
subset: prod
weight: 50
- name: canary
match:
- uri:
prefix: /s
rewrite:
uri: /
route:
- destination:
host: nginx.cdlg-edc-devci.svc.cluster.local
port:
number: 80
subset: can
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: nginxdest
namespace: cdlg-edc-devci
spec:
host: nginx.cdlg-edc-devci.svc.cluster.local
subsets:
- name: prod
labels:
version: production
- name: can
labels:
version: canary
---
kind: Gateway
apiVersion: networking.istio.io/v1alpha3
metadata:
name: ccgf-gateway
namespace: namespace
spec:
servers:
- hosts:
- '*'
port:
name: http
number: 80
protocol: HTTP
---
apiVersion: v1
kind: Pod
metadata:
name: ubu
spec:
containers:
- name: ubu
image: ubuntu
command: ["/bin/sh"]
args: ["-c", "apt-get update && apt-get install curl -y && sleep 3000"]
Some results from the ubuntu pod:
curl -v external_istio-ingress_gateway_ip/cdlg-edc-devci/frontend
HTTP/1.1 200 OK
Hello nginx2
HTTP/1.1 200 OK
Hello nginx1
curl -v external_istio-ingress_gateway_ip/s
HTTP/1.1 200 OK
Hello nginx2
I hope this answers your question. Let me know if you have any more questions.
I have set up a k8s cluster on AWS EC2 instances, with 1 master and 2 worker nodes, using kops.
I deployed 2 services with the LoadBalancer service type and they are reachable in the browser.
Now I have installed the NGINX ingress controller, but through its LB IP I am not able to hit my service; it gives a 504 GATEWAY_TIME_OUT error. I googled it with no success. Where am I going wrong? Here is my sample code... [AWS FREE ACCOUNT]
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: $APP_NAME
name: $APP_NAME
namespace: $NAMESPACE
spec:
replicas: 1
selector:
matchLabels:
app: $APP_NAME
template:
metadata:
labels:
app: $APP_NAME
spec:
imagePullSecrets:
- name: $IMG_PULL_SECRET
containers:
- image: $IMAGE_REG/$APP_NAME:$IMAGE_TAG
name: $APP_NAME
imagePullPolicy: Always
ports:
- containerPort: ${CONTAINER_PORT}
protocol: TCP
env:
- name: spring.cloud.config.uri
value: 'http://config-server-service'
service.yml
apiVersion: v1
kind: Service
metadata:
labels:
app: $APP_NAME
name: $APP_NAME
namespace: $NAMESPACE
spec:
type: $SERVICE_TYPE
#type: $SERVICE_TYPE
ports:
- port: 80
targetPort: ${CONTAINER_PORT}
protocol: TCP
selector:
app: $APP_NAME
ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ${APP_NAME}
namespace: $NAMESPACE
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/affinity: cookie
nginx.ingress.kubernetes.io/session-cookie-name: JSESSIONID
nginx.ingress.kubernetes.io/ssl-passthrough: "false"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
kubernetes.io/ingress.class: "nginx"
kubernetes.io/ingress.allow-http: "true"
# kubernetes.io/ingress.global-static-ip-name: "my-gateway"
spec:
rules:
- http:
paths:
- path: /${APP_NAME}
backend:
serviceName: ${APP_NAME}
servicePort: 80
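Here is roughly how I have been checking it so far (the $-variables are the same templated placeholders as above; the controller deployment name is an assumption based on a default ingress-nginx install):
# Does the service actually have pod endpoints behind it?
kubectl -n $NAMESPACE get endpoints $APP_NAME
# What the ingress object resolved to
kubectl -n $NAMESPACE describe ingress $APP_NAME
# Controller logs around the time of the 504s
kubectl -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50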
My service is not directing traffic to the pod. I have shelled into the pod and the server is working properly, but the service times out.
Deployment File:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: venues
spec:
replicas: 1
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
minReadySeconds: 5
template:
metadata:
labels:
app: venues
version: v0.3
spec:
containers:
- name: venues
image: some-image
imagePullPolicy: Always
ports:
- containerPort: 3000
name: http-server
Service File:
apiVersion: v1
kind: Service
metadata:
name: venues
labels:
name: venues
spec:
type: LoadBalancer
ports:
- port: 3000
targetPort: 3000
protocol: TCP
selector:
name: venues
Your selector in the service is wrong: you need to select a label from the deployment's pod template, not the container name. So
selector:
app: venues
should work. Optionally, you could also add version: v0.3 if needed.
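For reference, a minimal corrected Service (same ports as in the question):
apiVersion: v1
kind: Service
metadata:
  name: venues
  labels:
    name: venues
spec:
  type: LoadBalancer
  ports:
  - port: 3000
    targetPort: 3000
    protocol: TCP
  selector:
    app: venues   # matches the pod template label, not the container name
Once applied, kubectl get endpoints venues should list the pod IP; if it stays empty, the selector still does not match the pod labels.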