I am trying to deploy a frontend application to Amazon EKS. The concept is that there will be two deployments as well as two services (frontend-service and stg-frontend-service), one for production and one for staging.
On top of that, there will be an ingress ALB which will route traffic based on the hostname: if the hostname is www.project.io, traffic will be routed to frontend-service, and if the hostname is stg.project.io, traffic will be routed to stg-frontend-service.
Here are my deployment and ingress configs:
stg-frontend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stg-frontend-deployment
  namespace: project
spec:
  replicas: 3
  selector:
    matchLabels:
      app: stg-frontend
  template:
    metadata:
      labels:
        app: stg-frontend
    spec:
      containers:
        - name: stg-frontend
          image: STAGING_IMAGE
          imagePullPolicy: Always
          ports:
            - name: web
              containerPort: 3000
      imagePullSecrets:
        - name: project-ecr
---
apiVersion: v1
kind: Service
metadata:
  name: stg-frontend-service
  namespace: project
spec:
  selector:
    app: stg-frontend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
frontend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
  namespace: project
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: PRODUCTION_IMAGE
          imagePullPolicy: Always
          ports:
            - name: web
              containerPort: 3000
      imagePullSecrets:
        - name: project-ecr
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
  namespace: project
spec:
  selector:
    app: frontend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: project-ingress
  namespace: project
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
    - host: www.project.io
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-service
                port:
                  number: 80
    - host: stg.project.io
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: stg-frontend-service
                port:
                  number: 80
Later, I used Route 53 to route traffic from both domains to the ALB.
+----------------+------+---------+-----------------------------------------------------+
| Record Name | Type | Routing | Value/Route traffic to |
+----------------+------+---------+-----------------------------------------------------+
| www.project.io | A | Simple | dualstack.k8s-********.us-west-1.elb.amazonaws.com. |
| stg.project.io | A | Simple | dualstack.k8s-********.us-west-1.elb.amazonaws.com. |
+----------------+------+---------+-----------------------------------------------------+
The problem is that the ALB ingress always routes traffic to the first rule in spec.rules. In the config above, the first rule is for host www.project.io, which refers to frontend-service. Whenever I try to access www.project.io or stg.project.io, I get a response from frontend-service.
Later, I switched the rules and put the staging rule first, and then it showed the staging service on both domains.
I even created a dummy record like junk.project.io and pointed it to the load balancer; it still worked and showed the same response, even though junk.project.io is not included in my ingress config at all.
It seems to me that the Ingress config is completely ignoring the host name and always returning the response from the first rule.
Your host and http values are defined as separate items in the rules list; try removing the - (hyphen) in front of the http node:
- host: www.project.io
  http: # I removed the hyphen here
    paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-service
            port:
              number: 80
- host: stg.project.io
  http: # I removed the hyphen here
    paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: stg-frontend-service
            port:
              number: 80
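If it helps, once the hyphens are removed you can sanity-check that both hosts were picked up, using the Ingress name and namespace from the question (just a quick verification, not part of the fix):
kubectl describe ingress project-ingress -n project   # both www.project.io and stg.project.io should appear under Rules
kubectl get ingress project-ingress -n project -o yaml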
I'm new to using Kubernetes and AWS so there are a lot of concepts I may not understand. I hope you can help me with this problem I am having.
I have 3 services, frontend, backend and auth, each with its corresponding NodePort service, and an ingress that maps one host to each service. Everything is running on EKS, and for the ingress deployment I am using the AWS ingress controller. Once everything is deployed and the node group is registered in the target groups, the frontend and auth services work correctly, but backend stays in an unhealthy state. I thought it could be a port problem, but if you look at auth and backend they have almost the same deployment defined, and both are APIs created with dotnet core. One thing to note is that I can do kubectl port-forward <backend-pod> 80:80 and it runs without problems. And when I run the kubectl describe ingresses command I get this:
Name:             ingress
Labels:           app.kubernetes.io/managed-by=Helm
Namespace:        default
Address:          xxxxxxxxxxxxxxxxxxxxxxxxxxx.xxxxx.elb.amazonaws.com
Ingress Class:    <none>
Default backend:  <default>
Rules:
  Host             Path  Backends
  ----             ----  --------
  domain.com
                   /     front-service:default-port (10.0.1.183:80,10.0.2.98:80)
  back.domain.com
                   /     backend-service:default-port (<none>)
  auth.domain.com
                   /     auth-service:default-port (10.0.1.30:80,10.0.1.33:80)
Annotations:       alb.ingress.kubernetes.io/listen-ports: [{"HTTPS":443}, {"HTTP":80}]
                   alb.ingress.kubernetes.io/scheme: internet-facing
                   alb.ingress.kubernetes.io/ssl-redirect: 443
                   kubernetes.io/ingress.class: alb
Events:
  Type    Reason                  Age                   From     Message
  ----    ------                  ---                   ----     -------
  Normal  SuccessfullyReconciled  8m20s (x15 over 41h)  ingress  Successfully reconciled
Frontend
apiVersion: apps/v1
kind: Deployment
metadata:
  name: front
  labels:
    name: front
spec:
  replicas: 2
  selector:
    matchLabels:
      name: front
  template:
    metadata:
      labels:
        name: front
    spec:
      containers:
        - name: frontend
          image: {{ .Values.image }}
          imagePullPolicy: Always
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: wrfront-{{ .Values.namespace }}-service
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      name: default-port
      protocol: TCP
  selector:
    name: front
---
Auth
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-wrauth-keys
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi
---
apiVersion: "v1"
kind: "ConfigMap"
metadata:
  name: "auth-config-ocpm"
  labels:
    app: "auth"
data:
  ASPNETCORE_URL: "http://+:80"
  ASPNETCORE_ENVIRONMENT: "Development"
  ASPNETCORE_LOGGINGCONSOLEDISABLECOLORS: "true"
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "auth"
  labels:
    app: "auth"
spec:
  replicas: 2
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: "auth"
  template:
    metadata:
      labels:
        app: "auth"
    spec:
      volumes:
        - name: auth-keys-storage
          persistentVolumeClaim:
            claimName: pvc-wrauth-keys
      containers:
        - name: "api-auth"
          image: {{ .Values.image }}
          imagePullPolicy: Always
          ports:
            - containerPort: 80
          volumeMounts:
            - name: auth-keys-storage
              mountPath: "/app/auth-keys"
          env:
            - name: "ASPNETCORE_URL"
              valueFrom:
                configMapKeyRef:
                  key: "ASPNETCORE_URL"
                  name: "auth-config-ocpm"
            - name: "ASPNETCORE_ENVIRONMENT"
              valueFrom:
                configMapKeyRef:
                  key: "ASPNETCORE_ENVIRONMENT"
                  name: "auth-config-ocpm"
            - name: "ASPNETCORE_LOGGINGCONSOLEDISABLECOLORS"
              valueFrom:
                configMapKeyRef:
                  key: "ASPNETCORE_LOGGINGCONSOLEDISABLECOLORS"
                  name: "auth-config-ocpm"
---
apiVersion: v1
kind: Service
metadata:
  name: auth-service
spec:
  type: NodePort
  selector:
    app: auth
  ports:
    - name: default-port
      protocol: TCP
      port: 80
      targetPort: 80
Backend (Service with problem)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  labels:
    app: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: {{ .Values.image }}
          imagePullPolicy: Always
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  type: NodePort
  selector:
    name: backend
  ports:
    - name: default-port
      protocol: TCP
      port: 80
      targetPort: 80
Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    kubernetes.io/ingress.class: alb
    # SSL Settings
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/certificate-arn: {{ .Values.certificate }}
spec:
  rules:
    - host: {{ .Values.host }}
      http:
        paths:
          - path: /
            backend:
              service:
                name: front-service
                port:
                  name: default-port
            pathType: Prefix
    - host: back.{{ .Values.host }}
      http:
        paths:
          - path: /
            backend:
              service:
                name: backend-service
                port:
                  name: default-port
            pathType: Prefix
    - host: auth.{{ .Values.host }}
      http:
        paths:
          - path: /
            backend:
              service:
                name: auth-service
                port:
                  name: default-port
            pathType: Prefix
I've tried deploying other services and they work correctly. I've also tried running only the backend, or only another service, but the same thing always happens, and always with the backend.
What could be happening? Is it a configuration problem? Some error in Ingress or Deployment? Or is it just the backend service?
I would be very grateful for any help.
domain.com
      /   front-service:default-port (10.0.1.183:80,10.0.2.98:80)
back.domain.com
      /   backend-service:default-port (<none>)
auth.domain.com
      /   auth-service:default-port (10.0.1.30:80,10.0.1.33:80)
This is saying that your backend service is not registered with the Ingress.
One thing to remember is that the Ingress registers Services by the pods' IPs (like "10.0.1.30:80" in your Ingress output), not by NodePort. And according to the docs, I don't know why you can have multiple NodePort services with the same port. But when you do port-forward, you actually open that port on your instances (I assume you have 2 instances), and then your ALB health checks that port and returns healthy.
But I think your issue is that your Ingress cannot locate your backend service.
My suggestions are:
Try only the backend-service with its port changed, and maybe without the auth and frontend services. The default NodePort range is 30000-32767.
Go inside that pod (or create a new pod) and make a request to the service using its URL to check what it returns. By default, the ALB health check only accepts a 200 status from the homepage. A couple of commands to start with are sketched below.
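For example, assuming the Service and labels from the manifests above, something like this can confirm whether backend-service has any endpoints and whether its selector actually matches the pod labels (adjust the namespace as needed):
kubectl get endpoints backend-service            # <none> here means the selector matches no pods
kubectl get pods --show-labels | grep backend    # compare these labels ...
kubectl describe service backend-service         # ... with the Selector shown here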
I am using AWS as a cloud provider. I want the ALB to work on a single host, which is a domain that I own, and to open different services based on the paths and service names that I provide in the ingress file, but it will not open the service at the expected page.
The services should open at http://mydomain/attacker1 and http://mydomain/attacker2; attacker1 and attacker2 are 2 different services with their respective service and deployment files.
But I get a "page cannot be found" when I hit the page.
I have deployed and configured the ALB ingress controller using the official docs: https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html
I have created hosted zones and records in Route 53 to propagate my domain.
I am adding my service file, deployment file and ingress file for reference.
Please help me with any insights or anything I have missed in the code.
Ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: attacker-ingress
  namespace: development
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: instance
spec:
  # ingressClassName: alb
  rules:
    - host: www.mydomain.com
      http:
        paths:
          - path: /attacker1
            pathType: Prefix
            backend:
              service:
                name: attacker1
                port:
                  number: 30002
          - path: /attacker2
            pathType: Prefix
            backend:
              service:
                name: attacker2
                port:
                  number: 30003
Deployment.yaml
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "attacker"
  namespace: "development"
spec:
  selector:
    matchLabels:
      app: "attacker1"
  replicas: 1
  strategy:
    type: "RollingUpdate"
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: "attacker"
    spec:
      containers:
        - name: "attacker"
          image: "qabcr/abc"
          imagePullPolicy: "Always"
          env:
            - name: "NODE_ENV"
              value: "development"
          ports:
            - containerPort: 30002
      imagePullSecrets:
        - name: "secrets-development"
---
apiVersion: "v1"
kind: "Service"
metadata:
  name: "attack1"
  namespace: "development"
  labels:
    app: "attacker"
spec:
  type: NodePort
  ports:
    - nodePort: 31400
      port: 30002
      targetPort: 30002
  selector:
    app: "attacker"
I have a GKE cluster, a static IP, and a container/pod which exposes 2 ports: 8081 (UI HTTPS) and 8082 (WSS HTTPS). I must connect to the "UI HTTPS" and "WSS HTTPS" on the same IP. The "WSS HTTPS" service does not have a health check endpoint.
Do I need to use Istio, Consul, NGINX ingress, or some service mesh to allow these connections on the same IP with different ports?
Is this even possible?
Things I have tried:
GCP global LB with 2 independent ingress services. The YAML for the second service never works correctly, but I can add another backend service via the UI. The ingress always reverts to the default health check for the "WSS HTTPS" service and it is always unhealthy.
Changed the Service type from NodePort to LoadBalancer with a static IP. This does not allow me to change the readiness check and always reverts back.
GCP global LB with 1 ingress and 2 backends gives me the same health check failure as above.
TCP proxy - does not allow me to set the same instance group.
Below are my ingress, service, and deployment.
Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  namespace: myappnamespace
  annotations:
    kubernetes.io/ingress.global-static-ip-name: global-static-ip-name
  labels:
    app: appname
spec:
  backend:
    serviceName: ui-service
    servicePort: 8081
  tls:
    - hosts:
        - my-host-name.com
      secretName: my-secret
  rules:
    - host: my-host-name.com
      http:
        paths:
          - backend:
              serviceName: ui-service
              servicePort: 8081
    - host: my-host-name.com
      http:
        paths:
          - backend:
              serviceName: app-service
              servicePort: 8082
Services
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: ui-service
  name: ui-service
  namespace: myappnamespace
  annotations:
    cloud.google.com/app-protocols: '{"ui-https":"HTTPS"}'
    beta.cloud.google.com/backend-config: '{"ports":{"8081":"cloud-armor"}}'
spec:
  selector:
    app: appname
  ports:
    - name: ui-https
      port: 8081
      targetPort: "ui"
      protocol: "TCP"
  selector:
    name: appname
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: app-service
  name: app-service
  namespace: myappnamespace
  annotations:
    cloud.google.com/app-protocols: '{"serviceport-https":"HTTPS"}'
    beta.cloud.google.com/backend-config: '{"ports":{"8082":"cloud-armor"}}'
spec:
  selector:
    app: appname
  ports:
    - name: serviceport-https
      port: 8082
      targetPort: "service-port"
      protocol: "TCP"
  selector:
    name: appname
  type: NodePort
---
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: appname
  namespace: myappnamespace
  labels:
    name: appname
spec:
  replicas: 1
  selector:
    matchLabels:
      name: appname
  strategy:
    type: Recreate
  template:
    metadata:
      name: appname
      namespace: appnamespace
      labels:
        name: appname
    spec:
      restartPolicy: Always
      serviceAccountName: myserviceaccount
      containers:
        - name: my-container
          image: image
          ports:
            - name: service-port
              containerPort: 8082
            - name: ui
              containerPort: 8081
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /api/health
              port: 8081
              scheme: HTTPS
          livenessProbe:
            exec:
              command:
                - cat
                - /version.txt
[......]
A Service exposed through an Ingress must respond to health checks from the load balancer.
The External HTTP(S) Load Balancer that GKE Ingress creates only supports port 443 for HTTPS traffic.
In that case you may want to:
Use two separate Ingress resources to route traffic for two different host names on the same IP address and port (a rough sketch follows this list):
ui-https.my-host-name.com
wss-https.my-host-name.com
Opt to use Ambassador or an Istio VirtualService.
Try Multi-Port Services.
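A rough sketch of the two-Ingress option, reusing the ui-service/app-service names and ports from the question — the Ingress names and hostnames here are made up, and TLS and static-IP annotations are omitted:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ui-ingress                      # hypothetical name
  namespace: myappnamespace
spec:
  rules:
    - host: ui-https.my-host-name.com
      http:
        paths:
          - backend:
              serviceName: ui-service
              servicePort: 8081
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wss-ingress                     # hypothetical name
  namespace: myappnamespace
spec:
  rules:
    - host: wss-https.my-host-name.com
      http:
        paths:
          - backend:
              serviceName: app-service
              servicePort: 8082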
Please let me know if that helped.
I was looking for how to use cookie affinity in GKE, using Ingress for that.
I've found the following link to do it: https://cloud.google.com/kubernetes-engine/docs/how-to/configure-backend-service
I've created a yaml with the following:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-bsc-deployment
spec:
  selector:
    matchLabels:
      purpose: bsc-config-demo
  replicas: 3
  template:
    metadata:
      labels:
        purpose: bsc-config-demo
    spec:
      containers:
        - name: hello-app-container
          image: gcr.io/google-samples/hello-app:1.0
---
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: my-bsc-backendconfig
spec:
  timeoutSec: 40
  connectionDraining:
    drainingTimeoutSec: 60
  sessionAffinity:
    affinityType: "GENERATED_COOKIE"
    affinityCookieTtlSec: 50
---
apiVersion: v1
kind: Service
metadata:
  name: my-bsc-service
  labels:
    purpose: bsc-config-demo
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": {"80":"my-bsc-backendconfig"}}'
spec:
  type: NodePort
  selector:
    purpose: bsc-config-demo
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-bsc-ingress
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: my-bsc-service
              servicePort: 80
---
Everything seems to go well. When I inspect the created Ingress I see 2 backend services. One of them has the cookie configured, but the other doesn't.
If I create the deployment, and from GCP's console, create the Service and Ingress, only one backend service appears.
Somebody knows why using a yaml I get 2, but doing it from console I only get one?
Thanks in advance
Oscar
Your definition is good.
The reason you have two backends is that your Ingress does not define a default backend. GCE LB requires a default backend, so during LB creation a second backend is added to the LB to act as the default (this backend does nothing but serve 404 responses). The default backend does not use the BackendConfig.
This shouldn't be a problem, but if you want to ensure only your backend is used, define a default backend value in your Ingress definition by adding spec.backend:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-bsc-ingress
spec:
  backend:
    serviceName: my-bsc-service
    servicePort: 80
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: my-bsc-service
              servicePort: 80
But, like I said, you don't NEED to define this; the additional backend won't really come into play, and no session affinity is required for it (it is only a single pod anyway). If you are curious, the default backend pod in question is called l7-default-backend-[replicaSet_hash]-[pod_hash] in the kube-system namespace.
You can enable cookie-based affinity on the ingress like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-sticky
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
    - host: ingress.example.com
      http:
        paths:
          - backend:
              serviceName: http-svc
              servicePort: 80
            path: /
You can create the Service like this:
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
  sessionAffinity: ClientIP
If you are using the Traefik ingress instead of the nginx or default GKE ingress, you can write the Service like this:
apiVersion: v1
kind: Service
metadata:
  name: session-affinity
  labels:
    app: session-affinity
  annotations:
    traefik.ingress.kubernetes.io/affinity: "true"
    traefik.ingress.kubernetes.io/session-cookie-name: "sticky"
spec:
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
      name: http
  selector:
    app: session-affinity-demo
I have 6 HTTP micro-services. Currently they run in a crazy bash/custom deploy tools setup (dokku, mup).
I dockerized them and moved to Kubernetes on AWS (set up with kops). The last piece is converting my nginx config.
I'd like
All 6 to have SSL termination (not in the docker image)
4 need websockets and client IP session affinity (Meteor, Socket.io)
5 need http->https forwarding
1 serves the same content on http and https
I did #1 (SSL termination) by setting the service type to LoadBalancer and using AWS-specific annotations (roughly like the sketch below). This created AWS load balancers, but this seems like a dead end for the other requirements.
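For reference, the "AWS-specific annotations" part typically looks something like this — the service name, selector and certificate ARN below are placeholders, not my actual config:
apiVersion: v1
kind: Service
metadata:
  name: my-web-service                  # placeholder name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:..."   # placeholder ARN
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  type: LoadBalancer
  selector:
    app: my-web-app                     # placeholder selector
  ports:
    - name: https
      port: 443
      targetPort: 80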
I looked at Ingress, but don't see how to do it on AWS. Will this Ingress Controller work on AWS?
Do I need an nginx controller in each pod? This looked interesting, but I'm not sure how recent/relevant it is.
I'm not sure what direction to start in. What will work?
Mike
You should be able to use the nginx ingress controller to accomplish this.
SSL termination
Websocket support
http->https
Turn off the http->https redirect where it's not wanted, as described in the link above (a sketch of the annotation is at the end of this answer)
The README walks you through how to set it up, and there are plenty of examples.
The basic pieces you need to make this work are:
A default backend that will respond with 404 when there is no matching Ingress rule
The nginx ingress controller which will monitor your ingress rules and rewrite/reload nginx.conf whenever they change.
One or more ingress rules that describe how traffic should be routed to your services.
The end result is that you will have a single ELB that corresponds to your nginx ingress controller service, which in turn is responsible for routing to your individual services according to the ingress rules specified.
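For the one service that serves the same content on http and https, the redirect can usually be disabled per Ingress with an annotation; the exact key depends on the controller version (older controllers use ingress.kubernetes.io/ssl-redirect, newer ones nginx.ingress.kubernetes.io/ssl-redirect), so treat this as a sketch with placeholder names:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dual-protocol-ingress            # hypothetical name
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/ssl-redirect: "false"   # newer controllers: nginx.ingress.kubernetes.io/ssl-redirect
spec:
  rules:
    - host: www.example.io               # placeholder host
      http:
        paths:
          - backend:
              serviceName: my-service    # placeholder service
              servicePort: 80
            path: /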
There may be a better way to do this. I wrote this answer because I asked the question; it's the best I could come up with from Pixel Elephant's doc links above.
The default-http-backend is very useful for debugging. +1
Ingress
this creates an endpoint on the node's IP address, which can change depending on where the Ingress Container is running
note the configmap at the bottom. Configured per environment.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
  name: all-ingress
spec:
  tls:
    - hosts:
        - admin-stage.example.io
      secretName: tls-secret
  rules:
    - host: admin-stage.example.io
      http:
        paths:
          - backend:
              serviceName: admin
              servicePort: http-port
            path: /
---
apiVersion: v1
data:
  enable-sticky-sessions: "true"
  proxy-read-timeout: "7200"
  proxy-send-timeout: "7200"
kind: ConfigMap
metadata:
  name: nginx-load-balancer-conf
App Service and Deployment
the service port needs to be named, or you may get "upstream default-admin-80 does not have any active endpoints. Using default backend"
apiVersion: v1
kind: Service
metadata:
  name: admin
spec:
  ports:
    - name: http-port
      port: 80
      protocol: TCP
      targetPort: http-port
  selector:
    app: admin
  sessionAffinity: ClientIP
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: admin
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: admin
      name: admin
    spec:
      containers:
        - image: example/admin:latest
          name: admin
          ports:
            - containerPort: 80
              name: http-port
          resources:
            requests:
              cpu: 500m
              memory: 1000Mi
          volumeMounts:
            - mountPath: /etc/env-volume
              name: config
              readOnly: true
      imagePullSecrets:
        - name: cloud.docker.com-pull
      volumes:
        - name: config
          secret:
            defaultMode: 420
            items:
              - key: admin.sh
                mode: 256
                path: env.sh
              - key: settings.json
                mode: 256
                path: settings.json
            secretName: env-secret
Ingress Nginx Docker Image
note default-ssl-certificate at bottom
logging is great; see the --v flag below
note the Service will create an ELB on AWS which can be used to configure DNS.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-service
spec:
  ports:
    - name: http-port
      port: 80
      protocol: TCP
      targetPort: http-port
    - name: https-port
      port: 443
      protocol: TCP
      targetPort: https-port
  selector:
    app: nginx-ingress-service
  sessionAffinity: None
  type: LoadBalancer
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-controller
  labels:
    k8s-app: nginx-ingress-lb
spec:
  replicas: 1
  selector:
    k8s-app: nginx-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-lb
      name: nginx-ingress-lb
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
          name: nginx-ingress-lb
          imagePullPolicy: Always
          readinessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
          livenessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            timeoutSeconds: 1
          # use downward API
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http-port
              containerPort: 80
              hostPort: 80
            - name: https-port
              containerPort: 443
              hostPort: 443
            # we expose 18080 to access nginx stats in url /nginx-status
            # this is optional
            - containerPort: 18080
              hostPort: 18080
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --default-ssl-certificate=default/tls-secret
            - --nginx-configmap=$(POD_NAMESPACE)/nginx-load-balancer-conf
            - --v=2
Default Backend (this is copy/paste from .yaml file)
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  labels:
    k8s-app: default-http-backend
spec:
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
      name: http
  selector:
    k8s-app: default-http-backend
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: default-http-backend
spec:
  replicas: 1
  selector:
    k8s-app: default-http-backend
  template:
    metadata:
      labels:
        k8s-app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: default-http-backend
          # Any image is permissible as long as:
          # 1. It serves a 404 page at /
          # 2. It serves 200 on a /healthz endpoint
          image: gcr.io/google_containers/defaultbackend:1.0
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
            requests:
              cpu: 10m
              memory: 20Mi
This config uses three secrets (one way to create them is sketched after the list):
tls-secret - 3 files: tls.key, tls.crt, dhparam.pem
env-secret - 2 files: admin.sh and settings.json. Container has start script to setup environment.
cloud.docker.com-pull
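For reference, these could be created with kubectl roughly like this — the docker registry credentials are placeholders, and the file names come from the list above:
kubectl create secret generic tls-secret \
  --from-file=tls.key --from-file=tls.crt --from-file=dhparam.pem
kubectl create secret generic env-secret \
  --from-file=admin.sh --from-file=settings.json
kubectl create secret docker-registry cloud.docker.com-pull \
  --docker-server=cloud.docker.com --docker-username=<user> --docker-password=<password>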