GKE Ingress with NEGs: backend health check doesn't pass

I have created GKE Ingress as follows:
apiVersion: cloud.google.com/v1beta1 # tried cloud.google.com/v1 as well
kind: BackendConfig
metadata:
  name: backend-config
  namespace: prod
spec:
  healthCheck:
    checkIntervalSec: 30
    port: 8080
    type: HTTP # case-sensitive
    requestPath: /healthcheck
  connectionDraining:
    drainingTimeoutSec: 60
---
apiVersion: v1
kind: Service
metadata:
  name: web-engine-service
  namespace: prod
  annotations:
    cloud.google.com/neg: '{"ingress": true}' # Creates a NEG after an Ingress is created.
    cloud.google.com/backend-config: '{"ports": {"web-engine-port":"backend-config"}}' # https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#associating_backendconfig_with_your_ingress
spec:
  selector:
    app: web-engine-pod
  ports:
  - name: web-engine-port
    protocol: TCP
    port: 8080
    targetPort: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  labels:
    app: web-engine-deployment
    environment: prod
  name: web-engine-deployment
  namespace: prod
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: web-engine-pod
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      name: web-engine-pod
      labels:
        app: web-engine-pod
        environment: prod
    spec:
      containers:
      - image: my-image:my-tag
        imagePullPolicy: Always
        name: web-engine-1
        resources: {}
        ports:
        - name: flask-port
          containerPort: 5000
          protocol: TCP
        readinessProbe:
          httpGet:
            path: /healthcheck
            port: 5000
          initialDelaySeconds: 30
          periodSeconds: 100
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
---
apiVersion: networking.gke.io/v1beta2
kind: ManagedCertificate
metadata:
  name: my-certificate
  namespace: prod
spec:
  domains:
  - api.mydomain.com # https://cloud.google.com/load-balancing/docs/ssl-certificates/google-managed-certs#renewal
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: prod-ingress
  namespace: prod
  annotations:
    kubernetes.io/ingress.allow-http: "false"
    kubernetes.io/ingress.global-static-ip-name: load-balancer-ip
    networking.gke.io/managed-certificates: my-certificate
spec:
  rules:
  - http:
      paths:
      - path: /model
        backend:
          serviceName: web-engine-service
          servicePort: 8080
I don't know what I'm doing wrong: my health checks are not passing, and based on the request logging I added at the app's perimeter, nothing is even trying to hit that pod.
I've tried the BackendConfig health check on both port 8080 and port 5000.
By the way, it's not 100% clear from the docs whether the load balancer should be configured against the targetPorts of the corresponding Pods or against the Service ports.
The health check is registered with the HTTP load balancer and visible in Compute Engine:
It seems that something is not right with the Backend service IP.
The corresponding backend service configuration:
$ gcloud compute backend-services describe k8s1-85ef2f9a-prod-web-engine-service-8080-b938a707
...
affinityCookieTtlSec: 0
backends:
- balancingMode: RATE
  capacityScaler: 1.0
  group: https://www.googleapis.com/compute/v1/projects/wnd/zones/europe-west3-a/networkEndpointGroups/k8s1-85ef2f9a-prod-web-engine-service-8080-b938a707
  maxRatePerEndpoint: 1.0
connectionDraining:
  drainingTimeoutSec: 60
creationTimestamp: '2020-08-01T11:14:06.096-07:00'
description: '{"kubernetes.io/service-name":"prod/web-engine-service","kubernetes.io/service-port":"8080","x-features":["NEG"]}'
enableCDN: false
fingerprint: 5Vkqvg9lcRg=
healthChecks:
- https://www.googleapis.com/compute/v1/projects/wnd/global/healthChecks/k8s1-85ef2f9a-prod-web-engine-service-8080-b938a707
id: '2233674285070159361'
kind: compute#backendService
loadBalancingScheme: EXTERNAL
logConfig:
  enable: true
  sampleRate: 1.0
name: k8s1-85ef2f9a-prod-web-engine-service-8080-b938a707
port: 80
portName: port0
protocol: HTTP
selfLink: https://www.googleapis.com/compute/v1/projects/wnd/global/backendServices/k8s1-85ef2f9a-prod-web-engine-service-8080-b938a707
sessionAffinity: NONE
timeoutSec: 30
(Port 80 looks really suspicious, but I thought maybe it's just left there as a default and isn't used when NEGs are configured.)
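For what it's worth, the generated health check can also be inspected directly; the name below is the one listed under healthChecks in the backend service output above:
$ gcloud compute health-checks describe k8s1-85ef2f9a-prod-web-engine-service-8080-b938a707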

Figured it out. By default, even the latest GKE clusters are created without IP alias support, also called VPC-native networking. I didn't even bother to check that initially because:
NEGs are supported out of the box, and what's more, they seem to be the default on the GKE version I had (1.17.8-gke.17), with no explicit annotation needed. It doesn't make sense not to enable IP aliases by default in that case, because it basically means the cluster is in a non-functional state by default.
I didn't check for VPC-native support initially because the name of the feature is simply misleading. I have extensive prior experience with AWS, and my faulty assumption was that VPC-native is like EC2-VPC, as opposed to the legacy EC2-Classic.
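As a quick sketch (the cluster name my-cluster is a placeholder; the zone matches the NEG above), this is how to check whether a cluster is VPC-native, and how to create one with IP aliasing enabled, since the setting is fixed at cluster creation time:
$ gcloud container clusters describe my-cluster --zone europe-west3-a --format="value(ipAllocationPolicy.useIpAliases)"
$ gcloud container clusters create my-cluster --zone europe-west3-a --enable-ip-alias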

Related

Paths not working in ALB ingress in EKS cluster

I am using AWS as the cloud provider. I want the ALB to work on a single host, a domain that I own, and to open different services based on the paths and service names that I provide in the Ingress file, but it will not open the service at the expected page.
The services should open at http://mydomain/attacker1 and http://mydomain/attacker2; attacker1 and attacker2 are two different services with their respective Service and Deployment files.
But I get a "page cannot be found" when I hit either page.
I have deployed and configured the ALB ingress controller using the official docs: https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html
I have created hosted zones and records in Route 53 to propagate my domain.
I am adding my service file, deployment file and ingress file for reference
Please help me with any insights or anything I have missed in the code
Ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: attacker-ingress
  namespace: development
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: instance
spec:
  # ingressClassName: alb
  rules:
  - host: www.mydomain.com
    http:
      paths:
      - path: /attacker1
        pathType: Prefix
        backend:
          service:
            name: attacker1
            port:
              number: 30002
      - path: /attacker2
        pathType: Prefix
        backend:
          service:
            name: attacker2
            port:
              number: 30003
Deployment.yaml
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "attacker"
  namespace: "development"
spec:
  selector:
    matchLabels:
      app: "attacker1"
  replicas: 1
  strategy:
    type: "RollingUpdate"
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: "attacker"
    spec:
      containers:
      - name: "attacker"
        image: "qabcr/abc"
        imagePullPolicy: "Always"
        env:
        - name: "NODE_ENV"
          value: "development"
        ports:
        - containerPort: 30002
      imagePullSecrets:
      - name: "secrets-development"
---
apiVersion: "v1"
kind: "Service"
metadata:
  name: "attack1"
  namespace: "development"
  labels:
    app: "attacker"
spec:
  type: NodePort
  ports:
  - nodePort: 31400
    port: 30002
    targetPort: 30002
  selector:
    app: "attacker"

GKE Ingress health check fails on Ingress but succeeds on LoadBalancer

On GKE, I have a Deployment that is working fine: status Running and health checks passing.
here it is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: erp-app
  labels:
    app: erp-app
    switch: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: erp-app
  template:
    metadata:
      labels:
        app: erp-app
    spec:
      containers:
      - name: erp-container
        # Extract this from Google Container Registry
        image: gcr.io/project/project:latest
        imagePullPolicy: Always
        env:
        ports:
        - containerPort: 8080
        livenessProbe:
          failureThreshold: 10
          httpGet:
            path: /
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 150
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 30
        readinessProbe:
          failureThreshold: 10
          httpGet:
            path: /
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 150
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 20
Then I created a Service to map port 8080 to 80:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: erp-app
  name: erp-loadbalancer
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: erp-app
  sessionAffinity: None
  type: NodePort
And then the GKE Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    networking.gke.io/managed-certificates: managed-cert
    kubernetes.io/ingress.class: "gce"
spec:
  defaultBackend:
    service:
      name: erp-loadbalancer
      port:
        number: 80
Thing is, the Ingress does not want to work because the backend health check does not pass. If I check the health check in gcloud (https://console.cloud.google.com/compute/healthChecks), it has been created for HTTP on port 80 at / (on this path, the app is serving a 200).
If I force it to be TCP, then the health check passes. But Google automatically switches it back to HTTP, which leads to a 404.
My question here would be: what's wrong in my configuration that makes my server available behind an external LoadBalancer but not when using an Ingress (backend in an unhealthy state)?
ANSWER:
My page / was sending a 301 redirect to /login.jsp
301 is not a valid status code for GCP Health checks
So I changed it, and my readinessProbe, to /login.jsp, and that made the whole Ingress work fine.
Cross sharing in case someone has the same issue:
https://stackoverflow.com/a/74564439/4547221
TL;DR
GCP will automatically create the health check based on the readinessProbe; otherwise, the root path / is used.
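As a minimal sketch of that takeaway (names and port borrowed from the question above; the point is that the probed path must return a plain 200, not a redirect), the relevant fragment of the Deployment's pod spec would look like:
      containers:
      - name: erp-container
        image: gcr.io/project/project:latest
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /login.jsp   # must answer 200; a 301 makes the derived GCLB health check fail
            port: 8080
            scheme: HTTP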

AWS EKS Fargate Ingress Has No Address

Updated
So, I followed the AWS docs on how to set up an EKS cluster with Fargate using the eksctl tool. That all went smoothly, but when I got to the part where I deploy my actual app, I got no endpoints, and the ingress controller has no address associated with it. As seen here:
NAME              HOSTS   ADDRESS   PORTS   AGE
testapp-ingress   *                 80      129m
So, I can't hit it externally. But the test app (the 2048 game) had an address from the ELB associated with its ingress. I thought it might be the subnet tags, as suggested here, and my subnets weren't tagged the right way, so I tagged them the way suggested in that article. Still no luck.
This is the initial article I followed to get set up. I've performed all the steps and only hit a wall with the alb: https://docs.aws.amazon.com/eks/latest/userguide/fargate-getting-started.html#fargate-gs-next-steps
This is the alb article I've followed: https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html
I followed the steps to deploy the sample app 2048 and that works just fine. I've made my configs very similar and it should work. I've followed all of the steps. Here are my old configs, new config below:
deployment yaml>>>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "testapp-deployment"
  namespace: "testapp-qa"
spec:
  selector:
    matchLabels:
      app: "testapp"
  replicas: 5
  template:
    metadata:
      labels:
        app: "testapp"
    spec:
      containers:
      - image: xxxxxxxxxxxxxxxxxxxxxxxxtestapp:latest
        imagePullPolicy: Always
        name: "testapp"
        ports:
        - containerPort: 80
---
service yaml>>>
apiVersion: v1
kind: Service
metadata:
  name: "testapp-service"
  namespace: "testapp-qa"
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  type: NodePort
  selector:
    app: "testapp"
---
ingress yaml >>>
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "testapp-ingress"
  namespace: "testapp-qa"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
  labels:
    app: testapp-ingress
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: "testapp-service"
          servicePort: 80
---
namespace yaml>>>
apiVersion: v1
kind: Namespace
metadata:
  name: "testapp-qa"
Here are some of the logs from the ingress controller>>
E0316 22:32:39.776535 1 controller.go:217] kubebuilder/controller "msg"="Reconciler error" "error"="failed to reconcile targetGroups due to failed to reconcile targetGroup targets due to Unable to DescribeInstanceStatus on fargate-ip-xxxxxxxxxxxx.ec2.internal: InvalidInstanceID.Malformed: Invalid id: \"fargate-ip-xxxxxxxxxxxx.ec2.internal\"\n\tstatus code: 400, request id: xxxxxxxxxxxx" "controller"="alb-ingress-controller" "request"={"Namespace":"testapp-qa","Name":"testapp-ingress"}
E0316 22:36:28.222391 1 controller.go:217] kubebuilder/controller "msg"="Reconciler error" "error"="failed to reconcile targetGroups due to failed to reconcile targetGroup targets due to Unable to DescribeInstanceStatus on fargate-ip-xxxxxxxxxxxx.ec2.internal: InvalidInstanceID.Malformed: Invalid id: \"fargate-ip-xxxxxxxxxxxx.ec2.internal\"\n\tstatus code: 400, request id: xxxxxxxxxxxx" "controller"="alb-ingress-controller" "request"={"Namespace":"testapp-qa","Name":"testapp-ingress"}
Per the suggestion in the comments from @Michael Hausenblas, I've added an annotation to my service for the ALB ingress.
Now that my ingress controller is using the correct ELB, I checked the logs because I still can't hit my app's /healthcheck. The logs:
E0317 16:00:45.643937 1 controller.go:217] kubebuilder/controller "msg"="Reconciler error" "error"="failed to reconcile targetGroups due to failed to reconcile targetGroup targets due to Unable to DescribeInstanceStatus on fargate-ip-xxxxxxxxxxx.ec2.internal: InvalidInstanceID.Malformed: Invalid id: \"fargate-ip-xxxxxxxxxxx.ec2.internal\"\n\tstatus code: 400, request id: xxxxxxxxxxx-3a7d-4794-95fb-a18835abe0d3" "controller"="alb-ingress-controller" "request"={"Namespace":"testapp-qa","Name":"testapp"}
I0317 16:00:47.868939 1 rules.go:82] testapp-qa/testapp-ingress: modifying rule 1 on arn:aws:elasticloadbalancing:us-east-1:xxxxxxxxxxx:listener/app/xxxxxxxxxxx-testappqa-testappin-b879/xxxxxxxxxxx/6b41c0d3ce97ae6b
I0317 16:00:47.890674 1 rules.go:98] testapp-qa/testapp-ingress: rule 1 modified with conditions [{ Field: "path-pattern", Values: ["/*"] }]
Update
I've updated my config. I don't have any more errors, but I'm still unable to hit my endpoints to test whether my app is accepting traffic. It might be something to do with Fargate, or something on the AWS side I'm not seeing. Here's my updated config:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "testapp"
  namespace: "testapp-qa"
spec:
  selector:
    matchLabels:
      app: "testapp"
  replicas: 5
  template:
    metadata:
      labels:
        app: "testapp"
    spec:
      containers:
      - image: 673312057223.dkr.ecr.us-east-1.amazonaws.com/wood-testapp:latest
        imagePullPolicy: Always
        name: "testapp"
        ports:
        - containerPort: 9898
---
apiVersion: v1
kind: Service
metadata:
  name: "testapp"
  namespace: "testapp-qa"
  annotations:
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ports:
  - port: 80
    targetPort: 9898
    protocol: TCP
    name: http
  type: NodePort
  selector:
    app: "testapp"
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "testapp-ingress"
  namespace: "testapp-qa"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/healthcheck-path: /healthcheck
  labels:
    app: testapp
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: "testapp"
          servicePort: 80
---
apiVersion: v1
kind: Namespace
metadata:
  name: "testapp-qa"
In your service, try adding the following annotation:
annotations:
  alb.ingress.kubernetes.io/target-type: ip
And also you'd need to explicitly tell the Ingress resource via the alb.ingress.kubernetes.io/healthcheck-path annotation where/how to perform the health checks for the target group. See the ALB Ingress controller docs for the annotation semantics.
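As a rough way to confirm what the controller actually registered (a sketch; the target group name and ARN placeholders stand in for whatever the ALB ingress controller created for this Ingress), the target health can be checked with the AWS CLI:
$ aws elbv2 describe-target-groups --names <target-group-name>
$ aws elbv2 describe-target-health --target-group-arn <target-group-arn>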

Istio - using VirtualService and Gateway instead of LoadBalancer not working

I have the following application, which I'm able to run in K8s successfully using a Service of type LoadBalancer. It's a very simple app with two routes:
/ - you should see 'hello application'
/api/books - should provide a list of books in JSON format
This is the service
apiVersion: v1
kind: Service
metadata:
  name: go-ms
  labels:
    app: go-ms
    tier: service
spec:
  type: LoadBalancer
  ports:
  - port: 8080
  selector:
    app: go-ms
This is the deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: go-ms
  labels:
    app: go-ms
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: go-ms
        tier: service
    spec:
      containers:
      - name: go-ms
        image: rayndockder/http:0.0.2
        ports:
        - containerPort: 8080
        env:
        - name: PORT
          value: "8080"
        resources:
          requests:
            memory: "64Mi"
            cpu: "125m"
          limits:
            memory: "128Mi"
            cpu: "250m"
After applying both YAMLs and calling the URL
http://b0751-1302075110.eu-central-1.elb.amazonaws.com/api/books
I was able to see the data in the browser as expected, and also the root app using just the external IP.
Now I want to use Istio, so I followed the guide and installed it successfully via Helm (https://istio.io/docs/setup/kubernetes/install/helm/), and verified that all 53 CRDs are there and that the istio-system components (istio-ingressgateway, istio-pilot, etc.; all 8 deployments) are up and running.
I've changed the service above from LoadBalancer to NodePort and created the following Istio config according to the Istio docs:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 8080
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: virtualservice
spec:
  hosts:
  - "*"
  gateways:
  - http-gateway
  http:
  - match:
    - uri:
        prefix: "/"
    - uri:
        exact: "/api/books"
    route:
    - destination:
        port:
          number: 8080
        host: go-ms
In addition, I've run
kubectl label namespace books istio-injection=enabled
on the namespace where the application is deployed.
Now, to get the external IP, I've used the command
kubectl get svc -n istio-system -l istio=ingressgateway
and got this as the EXTERNAL-IP:
b0751-1302075110.eu-central-1.elb.amazonaws.com
When trying to access the URL
http://b0751-1302075110.eu-central-1.elb.amazonaws.com/api/books
I get an error:
This site can't be reached
ERR_CONNECTION_TIMED_OUT
If I run the Docker image rayndockder/http:0.0.2 via
docker run -it -p 8080:8080 httpv2
the paths work correctly!
Any idea/hint what could be the issue?
Is there a way to trace the Istio configs to see whether something is missing, or whether we have some collision with a port or a network policy, maybe?
By the way, the deployment and service can run on any cluster for testing, if someone could help...
If I change everything to port 80 (in all the YAML files, the application, and the Docker image), I am able to get the data for the root path, but not for /api/books.
I tried your config, with the gateway port changed from 8080 to 80, in my local Minikube setup of Kubernetes and Istio. This is the command I used:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: go-ms
  labels:
    app: go-ms
    tier: service
spec:
  ports:
  - port: 8080
  selector:
    app: go-ms
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: go-ms
  labels:
    app: go-ms
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: go-ms
        tier: service
    spec:
      containers:
      - name: go-ms
        image: rayndockder/http:0.0.2
        ports:
        - containerPort: 8080
        env:
        - name: PORT
          value: "8080"
        resources:
          requests:
            memory: "64Mi"
            cpu: "125m"
          limits:
            memory: "128Mi"
            cpu: "250m"
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: go-ms-virtualservice
spec:
  hosts:
  - "*"
  gateways:
  - http-gateway
  http:
  - match:
    - uri:
        prefix: /
    - uri:
        exact: /api/books
    route:
    - destination:
        port:
          number: 8080
        host: go-ms
EOF
The reason I changed the gateway port to 80 is that the Istio ingress gateway by default opens up a few ports such as 80, 443, and a few others. In my case, as Minikube doesn't have an external load balancer, I used node ports, which is 31380 in my setup.
I was able to access the app at http://$(minikube ip):31380.
There is no point in changing the ports of the services and deployments, since those are application-specific.
Maybe this question specifies the ports opened by the Istio ingress gateway.
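For reference, a quick way to see which ports and node ports the gateway actually exposes (a sketch; istio-ingressgateway is the default service name from the Helm install, and the port 80 entry is typically named http2):
$ kubectl -n istio-system get svc istio-ingressgateway
$ kubectl -n istio-system get svc istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'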

Kubernetes 1.4 SSL Termination on AWS

I have 6 HTTP micro-services. Currently they run in a crazy bash/custom deploy tools setup (dokku, mup).
I dockerized them and moved to Kubernetes on AWS (set up with kops). The last piece is converting my nginx config.
I'd like
All 6 to have SSL termination (not in the docker image)
4 need websockets and client IP session affinity (Meteor, Socket.io)
5 need http->https forwarding
1 serves the same content on http and https
I did #1, SSL termination, by setting the service type to LoadBalancer and using AWS-specific annotations. This created AWS load balancers, but it seems like a dead end for the other requirements.
I looked at Ingress, but don't see how to do it on AWS. Will this Ingress Controller work on AWS?
Do I need an nginx controller in each pod? This looked interesting, but I'm not sure how recent/relevant it is.
I'm not sure what direction to start in. What will work?
Mike
You should be able to use the nginx ingress controller to accomplish this.
SSL termination
Websocket support
http->https
Turn off the http->https redirect, as described in the link above
The README walks you through how to set it up, and there are plenty of examples.
The basic pieces you need to make this work are:
A default backend that will respond with 404 when there is no matching Ingress rule
The nginx ingress controller which will monitor your ingress rules and rewrite/reload nginx.conf whenever they change.
One or more ingress rules that describe how traffic should be routed to your services.
The end result is that you will have a single ELB that corresponds to your nginx ingress controller service, which in turn is responsible for routing to your individual services according to the ingress rules specified.
There may be a better way to do this. I wrote this answer because I asked the question. It's the best I could come up with from Pixel Elephant's doc links above.
The default-http-backend is very useful for debugging. +1
Ingress
this creates an endpoint on the node's IP address, which can change depending on where the Ingress Container is running
note the configmap at the bottom. Configured per environment.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
  name: all-ingress
spec:
  tls:
  - hosts:
    - admin-stage.example.io
    secretName: tls-secret
  rules:
  - host: admin-stage.example.io
    http:
      paths:
      - backend:
          serviceName: admin
          servicePort: http-port
        path: /
---
apiVersion: v1
data:
  enable-sticky-sessions: "true"
  proxy-read-timeout: "7200"
  proxy-send-timeout: "7200"
kind: ConfigMap
metadata:
  name: nginx-load-balancer-conf
App Service and Deployment
the service port needs to be named, or you may get "upstream default-admin-80 does not have any active endpoints. Using default backend"
apiVersion: v1
kind: Service
metadata:
  name: admin
spec:
  ports:
  - name: http-port
    port: 80
    protocol: TCP
    targetPort: http-port
  selector:
    app: admin
  sessionAffinity: ClientIP
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: admin
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: admin
      name: admin
    spec:
      containers:
      - image: example/admin:latest
        name: admin
        ports:
        - containerPort: 80
          name: http-port
        resources:
          requests:
            cpu: 500m
            memory: 1000Mi
        volumeMounts:
        - mountPath: /etc/env-volume
          name: config
          readOnly: true
      imagePullSecrets:
      - name: cloud.docker.com-pull
      volumes:
      - name: config
        secret:
          defaultMode: 420
          items:
          - key: admin.sh
            mode: 256
            path: env.sh
          - key: settings.json
            mode: 256
            path: settings.json
          secretName: env-secret
Ingress Nginx Docker Image
note default-ssl-certificate at bottom
logging is great -v below
note the Service will create an ELB on AWS which can be used to configure DNS.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-service
spec:
  ports:
  - name: http-port
    port: 80
    protocol: TCP
    targetPort: http-port
  - name: https-port
    port: 443
    protocol: TCP
    targetPort: https-port
  selector:
    app: nginx-ingress-service
  sessionAffinity: None
  type: LoadBalancer
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-controller
  labels:
    k8s-app: nginx-ingress-lb
spec:
  replicas: 1
  selector:
    k8s-app: nginx-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-lb
      name: nginx-ingress-lb
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
        name: nginx-ingress-lb
        imagePullPolicy: Always
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
        # use downward API
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: http-port
          containerPort: 80
          hostPort: 80
        - name: https-port
          containerPort: 443
          hostPort: 443
        # we expose 18080 to access nginx stats in url /nginx-status
        # this is optional
        - containerPort: 18080
          hostPort: 18080
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --default-ssl-certificate=default/tls-secret
        - --nginx-configmap=$(POD_NAMESPACE)/nginx-load-balancer-conf
        - --v=2
Default Backend (this is copy/paste from .yaml file)
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  labels:
    k8s-app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    k8s-app: default-http-backend
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: default-http-backend
spec:
  replicas: 1
  selector:
    k8s-app: default-http-backend
  template:
    metadata:
      labels:
        k8s-app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: gcr.io/google_containers/defaultbackend:1.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
This config uses three secrets:
tls-secret - 3 files: tls.key, tls.crt, dhparam.pem
env-secret - 2 files: admin.sh and settings.json. The container has a start script to set up the environment.
cloud.docker.com-pull
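As a sketch, the first two secrets could be created roughly like this (file names as listed above; the registry pull secret depends on your own Docker Hub credentials, and the placeholders are hypothetical):
$ kubectl create secret generic tls-secret --from-file=tls.key --from-file=tls.crt --from-file=dhparam.pem
$ kubectl create secret generic env-secret --from-file=admin.sh --from-file=settings.json
$ kubectl create secret docker-registry cloud.docker.com-pull --docker-server=https://index.docker.io/v1/ --docker-username=<user> --docker-password=<password> --docker-email=<email>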