istio: ingress with grpc and http

I have a service listening on two ports; one is HTTP, the other is gRPC.
I would like to set up an ingress that can route to both of these ports, with the same host.
The load balancer would route to the HTTP port if HTTP/1.1 is used, and to the gRPC port if h2 is used.
Is there a way to do that with Istio?
I made a hello-world example demonstrating what I am trying to achieve:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
  namespace: dev
spec:
  replicas: 1
  template:
    metadata:
      annotations:
        alpha.istio.io/sidecar: injected
        pod.beta.kubernetes.io/init-containers: '[{"args":["-p","15001","-u","1337","-i","172.20.0.0/16"],"image":"docker.io/istio/init:0.1","imagePullPolicy":"Always","name":"init","securityContext":{"capabilities":{"add":["NET_ADMIN"]}}}]'
      labels:
        app: hello-world
    spec:
      containers:
      - name: grpc-server
        image: aguilbau/hello-world-grpc:latest
        ports:
        - name: grpc
          containerPort: 50051
      - name: http-server
        image: nginx:1.7.9
        ports:
        - name: http
          containerPort: 80
      - name: istio-proxy
        args:
        - proxy
        - sidecar
        - -v
        - "2"
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        image: docker.io/istio/proxy:0.1
        imagePullPolicy: Always
        resources: {}
        securityContext:
          runAsUser: 1337
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  namespace: dev
spec:
  ports:
  - name: grpc
    port: 50051
  - name: http
    port: 80
  selector:
    app: hello-world
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world-http
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: "istio"
spec:
  rules:
  - host: hello-world
    http:
      paths:
      - backend:
          serviceName: hello-world
          servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world-grpc
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: "istio"
spec:
  rules:
  - host: hello-world
    http:
      paths:
      - backend:
          serviceName: hello-world
          servicePort: 50051
---

I'm a bit late to the party, but for those of you stumbling on this post, I think you can do this with very little difficulty. I'm going to assume you have istio installed on a kubernetes cluster and are happy using the default istio-ingressgateway:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
  namespace: dev
spec:
  replicas: 1
  template:
    metadata:
      annotations:
        alpha.istio.io/sidecar: injected
        pod.beta.kubernetes.io/init-containers: '[{"args":["-p","15001","-u","1337","-i","172.20.0.0/16"],"image":"docker.io/istio/init:0.1","imagePullPolicy":"Always","name":"init","securityContext":{"capabilities":{"add":["NET_ADMIN"]}}}]'
      labels:
        app: hello-world
    spec:
      containers:
      - name: grpc-server
        image: aguilbau/hello-world-grpc:latest
        ports:
        - name: grpc
          containerPort: 50051
      - name: http-server
        image: nginx:1.7.9
        ports:
        - name: http
          containerPort: 80
      - name: istio-proxy
        args:
        - proxy
        - sidecar
        - -v
        - "2"
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        image: docker.io/istio/proxy:0.1
        imagePullPolicy: Always
        resources: {}
        securityContext:
          runAsUser: 1337
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  namespace: dev
spec:
  ports:
  - name: grpc
    port: 50051
  - name: http
    port: 80
  selector:
    app: hello-world
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: hello-world-istio-gate
  namespace: dev
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
  - port:
      number: 50051
      name: grpc
      protocol: GRPC
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hello-world-istio-vsvc
  namespace: dev
spec:
  hosts:
  - "*"
  gateways:
  - hello-world-istio-gate
  http:
  - match:
    - port: 80
    route:
    - destination:
        host: hello-world
        port:
          number: 80
  tcp:
  - match:
    - port: 50051
    route:
    - destination:
        host: hello-world
        port:
          number: 50051
The above configuration omits your two Ingresses, and instead includes:
Your deployment
Your service
An istio gateway
An istio virtualservice
There is an important extra piece not shown, and I alluded to it earlier when talking about using the default ingressgateway. The following line found in "hello-world-istio-gate" gives a clue:
istio: ingressgateway
This refers to a pod in the 'istio-system' namespace that is usually installed by default called 'istio-ingressgateway' - and this pod is exposed by a service also called 'istio-ingressgateway.' You will need to open up ports on the 'istio-ingressgateway' service.
As an example, I edited my (default) ingressgateway and added a port opening for HTTP and GRPC. The result is the following (edited for length) yaml code:
dampersand#kubetest1:~/k8s$ kubectl get service istio-ingressgateway -n istio-system -o yaml
apiVersion: v1
kind: Service
metadata:
  <omitted for length>
  labels:
    app: istio-ingressgateway
    chart: gateways-1.0.3
    heritage: Tiller
    istio: ingressgateway
    release: istio
  name: istio-ingressgateway
  namespace: istio-system
spec:
  <omitted for length>
  ports:
  - name: http2
    nodePort: 31380
    port: 80
    protocol: TCP
    targetPort: 80
  <omitted for length>
  - name: grpc
    nodePort: 30000
    port: 50051
    protocol: TCP
    targetPort: 50051
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  type: NodePort
The easiest way to make the above change (for testing purposes) is to use:
kubectl edit svc -n istio-system istio-ingressgateway
For production purposes, it's probably better to edit your helm chart or your istio.yaml file or whatever you initially used to set up the ingressgateway.
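If you installed Istio with Helm, the port list is driven by the chart values. Purely as a sketch of what that override could look like for an Istio 1.0-era chart (key names can differ between chart versions, so verify against your own values.yaml before relying on it):
gateways:
  istio-ingressgateway:
    type: NodePort
    ports:
    # existing default HTTP port
    - port: 80
      targetPort: 80
      name: http2
      nodePort: 31380
    # extra gRPC port opened for this example
    - port: 50051
      targetPort: 50051
      name: grpc
      nodePort: 30000
You would then re-apply the chart with something like helm upgrade istio install/kubernetes/helm/istio -f your-overrides.yaml (paths and release name are assumptions here).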
As a quick aside, note that my test cluster has istio-ingressgateway set up as a NodePort, so what the above yaml file says is that my cluster is port forwarding 31380 -> 80 and 30000 -> 50051. You may (probably) have istio-ingressgateway set up as a LoadBalancer, which will be different... so plan accordingly.
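With the NodePort setup above, a quick sanity check of both protocols from outside the cluster could look like this (node IP and ports are from my example; grpcurl is a separate tool, and it assumes the gRPC server supports reflection):
# plain HTTP through the gateway's http port
curl -v http://<node-ip>:31380/
# gRPC through the gateway's grpc port
grpcurl -plaintext <node-ip>:30000 list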
Finally, the following blog post is some REALLY excellent background reading for the tools I've outlined in this post! https://blog.jayway.com/2018/10/22/understanding-istio-ingress-gateway-in-kubernetes/

You may be able to do something like that if you move the grpc-server and http-server containers into different pods with unique labels (i.e., different versions of the service, so to speak) and then use Istio route rules, behind the Ingress, to split the traffic. A rule matching on the header Upgrade: h2 could send traffic to the grpc version, and a default rule would send the rest of the traffic to the http one.
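This answer predates the v1alpha3 APIs, but the same idea can be sketched with a DestinationRule and a VirtualService that matches on the upgrade header and falls back to the HTTP subset. The subset names and version labels below are assumptions made purely for illustration:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: hello-world
spec:
  host: hello-world
  subsets:
  - name: grpc
    labels:
      version: grpc   # hypothetical label on the grpc pods
  - name: http
    labels:
      version: http   # hypothetical label on the http pods
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hello-world
spec:
  hosts:
  - hello-world
  http:
  # requests carrying Upgrade: h2 go to the grpc subset
  - match:
    - headers:
        upgrade:
          exact: h2
    route:
    - destination:
        host: hello-world
        subset: grpc
  # everything else goes to the http subset
  - route:
    - destination:
        host: hello-world
        subset: http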

Related

Service can't be registered in Target groups

I'm new to using Kubernetes and AWS, so there are a lot of concepts I may not understand. I hope you can help me with this problem I am having.
I have 3 services, frontend, backend and auth, each with its corresponding NodePort, and an ingress that maps one host to each service. Everything is running on EKS, and for the ingress deployment I am using the AWS ingress controller. Once everything is deployed and the node group is registered in the target groups, the frontend and auth services work correctly, but backend stays in an unhealthy state. I thought it could be a port problem, but if you look at auth and backend they have almost the same deployment defined, and both are APIs created with dotnet core. One thing to note is that I can do kubectl port-forward <backend-pod> 80:80 and it runs without problems. And when I run the kubectl describe ingresses command I get this:
Name:             ingress
Labels:           app.kubernetes.io/managed-by=Helm
Namespace:        default
Address:          xxxxxxxxxxxxxxxxxxxxxxxxxxx.xxxxx.elb.amazonaws.com
Ingress Class:    <none>
Default backend:  <default>
Rules:
  Host             Path  Backends
  ----             ----  --------
  domain.com
                   /   front-service:default-port (10.0.1.183:80,10.0.2.98:80)
  back.domain.com
                   /   backend-service:default-port (<none>)
  auth.domain.com
                   /   auth-service:default-port (10.0.1.30:80,10.0.1.33:80)
Annotations:       alb.ingress.kubernetes.io/listen-ports: [{"HTTPS":443}, {"HTTP":80}]
                   alb.ingress.kubernetes.io/scheme: internet-facing
                   alb.ingress.kubernetes.io/ssl-redirect: 443
                   kubernetes.io/ingress.class: alb
Events:
  Type    Reason                  Age                   From     Message
  ----    ------                  ---                   ----     -------
  Normal  SuccessfullyReconciled  8m20s (x15 over 41h)  ingress  Successfully reconciled
Frontend
apiVersion: apps/v1
kind: Deployment
metadata:
  name: front
  labels:
    name: front
spec:
  replicas: 2
  selector:
    matchLabels:
      name: front
  template:
    metadata:
      labels:
        name: front
    spec:
      containers:
      - name: frontend
        image: {{ .Values.image }}
        imagePullPolicy: Always
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: wrfront-{{ .Values.namespace }}-service
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    name: default-port
    protocol: TCP
  selector:
    name: front
---
Auth
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-wrauth-keys
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi
---
apiVersion: "v1"
kind: "ConfigMap"
metadata:
  name: "auth-config-ocpm"
  labels:
    app: "auth"
data:
  ASPNETCORE_URL: "http://+:80"
  ASPNETCORE_ENVIRONMENT: "Development"
  ASPNETCORE_LOGGINGCONSOLEDISABLECOLORS: "true"
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "auth"
  labels:
    app: "auth"
spec:
  replicas: 2
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: "auth"
  template:
    metadata:
      labels:
        app: "auth"
    spec:
      volumes:
      - name: auth-keys-storage
        persistentVolumeClaim:
          claimName: pvc-wrauth-keys
      containers:
      - name: "api-auth"
        image: {{ .Values.image }}
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        volumeMounts:
        - name: auth-keys-storage
          mountPath: "/app/auth-keys"
        env:
        - name: "ASPNETCORE_URL"
          valueFrom:
            configMapKeyRef:
              key: "ASPNETCORE_URL"
              name: "auth-config-ocpm"
        - name: "ASPNETCORE_ENVIRONMENT"
          valueFrom:
            configMapKeyRef:
              key: "ASPNETCORE_ENVIRONMENT"
              name: "auth-config-ocpm"
        - name: "ASPNETCORE_LOGGINGCONSOLEDISABLECOLORS"
          valueFrom:
            configMapKeyRef:
              key: "ASPNETCORE_LOGGINGCONSOLEDISABLECOLORS"
              name: "auth-config-ocpm"
---
apiVersion: v1
kind: Service
metadata:
  name: auth-service
spec:
  type: NodePort
  selector:
    app: auth
  ports:
  - name: default-port
    protocol: TCP
    port: 80
    targetPort: 80
Backend (Service with problem)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  labels:
    app: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: {{ .Values.image }}
        imagePullPolicy: Always
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  type: NodePort
  selector:
    name: backend
  ports:
  - name: default-port
    protocol: TCP
    port: 80
    targetPort: 80
Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    kubernetes.io/ingress.class: alb
    # SSL Settings
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/certificate-arn: {{ .Values.certificate }}
spec:
  rules:
  - host: {{ .Values.host }}
    http:
      paths:
      - path: /
        backend:
          service:
            name: front-service
            port:
              name: default-port
        pathType: Prefix
  - host: back.{{ .Values.host }}
    http:
      paths:
      - path: /
        backend:
          service:
            name: backend-service
            port:
              name: default-port
        pathType: Prefix
  - host: auth.{{ .Values.host }}
    http:
      paths:
      - path: /
        backend:
          service:
            name: auth-service
            port:
              name: default-port
        pathType: Prefix
I've tried deploying other services and they work correctly, and also running only the backend, or only another service, but the same thing always happens, and always with the backend.
What could be happening? Is it a configuration problem? Some error in the Ingress or Deployment? Or is it just the backend service?
I would be very grateful for any help.
domain.com
                 /   front-service:default-port (10.0.1.183:80,10.0.2.98:80)
back.domain.com
                 /   backend-service:default-port (<none>)
auth.domain.com
                 /   auth-service:default-port (10.0.1.30:80,10.0.1.33:80)
This is saying that your backend service is not registered to the Ingress.
One thing to remember is that the Ingress registers Services by the pods' IPs, like "10.0.1.30:80" in your Ingress output, not by NodePort. And according to the docs, I don't know why you can have multiple NodePort services with the same port. But when you do port-forward, you actually open that port on all your instances (I assume you have 2 instances), and then your ALB health-checks that port and reports healthy.
But I think your issue is that your Ingress cannot locate your backend service.
My suggestions are:
Try with only the backend-service, with its port changed, and maybe without the auth and frontend services. The default NodePort range is 30000-32767.
Go inside that pod, or create a new pod, and make a request to that service using its URL to check what it returns (see the example below). By default, the ALB only accepts status 200 from its homepage.
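For the second suggestion, a throwaway curl pod is enough. A minimal sketch, assuming the service lives in the default namespace:
kubectl run curl-debug --rm -it --restart=Never --image=curlimages/curl --command -- \
  curl -sv http://backend-service.default.svc.cluster.local/
# the ALB health check expects an HTTP 200 from this path by default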

Istio traffic routing rules take no effect

I am trying to configure request routing using Istio and Ingress-nginx, but I'm not able to route the requests properly. Basically I have two deployments, each as a different subset, and a weighted VirtualService.
In the Kiali dashboard it shows the request being routed from the ingress-controller to PassthroughCluster, even though I can see the correct route mapping using the istioctl proxy-config routes command.
Here is the complete configuration:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dummy-app
  namespace: my-namespace
---
apiVersion: v1
kind: Service
metadata:
  name: dummy-app
  namespace: my-namespace
  labels:
    app: dummy-app
    service: dummy-app
spec:
  ports:
  - port: 8080
    targetPort: 8080
    name: http-web
  selector:
    app: dummy-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dummy-app-1
  namespace: my-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dummy-app
      version: v1
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "true"
      labels:
        app: dummy-app
        version: v1
    spec:
      serviceAccountName: dummy-app
      containers:
      - image: my-img
        imagePullPolicy: IfNotPresent
        name: dummy-app
        env:
        - name: X_HTTP_ENV
          value: dummy-app-1
        ports:
        - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dummy-app-2
  namespace: my-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dummy-app
      version: v2
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "true"
      labels:
        app: dummy-app
        version: v2
    spec:
      serviceAccountName: dummy-app
      containers:
      - image: my-img
        imagePullPolicy: IfNotPresent
        name: dummy-app
        env:
        - name: X_HTTP_ENV
          value: dummy-app-2
        ports:
        - containerPort: 8080
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: dummy-app
  namespace: my-namespace
spec:
  host: dummy-app
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: dummy-app
  namespace: my-namespace
spec:
  hosts:
  - dummy-app.my-namespace.svc.cluster.local
  http:
  - match:
    - uri:
        prefix: "/my-route"
    route:
    - destination:
        host: dummy-app.my-namespace.svc.cluster.local
        subset: v1
      weight: 0
    - destination:
        host: dummy-app.my-namespace.svc.cluster.local
        subset: v2
      weight: 100
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "my-ingress-class"
    nginx.ingress.kubernetes.io/service-upstream: "true"
    nginx.ingress.kubernetes.io/upstream-vhost: dummy-app.my-namespace.svc.cluster.local
  name: dummy-ingress
  namespace: my-namespace
spec:
  rules:
  - host: myapp.com
    http:
      paths:
      - backend:
          service:
            name: dummy-app
            port:
              number: 8080
        path: /my-route(.*)
        pathType: ImplementationSpecific
The weird thing is that I have other applications working in the same namespace and using the same ingress-controller, so I don't think there is a problem with the ingress-nginx setup.
Istio version:
client version: 1.11.4
control plane version: 1.11.4
data plane version: 1.11.4 (13 proxies), 1.12-dev (15 proxies)
Any ideas on what the configuration problem is, or how I can better debug these kinds of issues in Istio?
The main issue seems to be with the ingress-nginx resource. Based on the above ingress definition, you are trying to bypass the istio ingress gateway (where all the proxying rules have been implemented, like the gateway, virtual-service and destination rules) and directly pushing the traffic from the ingress to the application service. For the istio proxy rules to work, you should let traffic pass through istio-ingressgateway (a service under the istio-system namespace). So the following changes should be made to your ingress resource:
rules:
- host: myapp.com
  http:
    paths:
    - backend:
        service:
          name: istio-ingressgateway.istio-system
          port:
            number: 80
      path: /my-route(.*)
      pathType: ImplementationSpecific
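Note that for the istio-ingressgateway to know what to do with that traffic, the VirtualService typically also has to be bound to an Istio Gateway and to the external host. A minimal sketch of such a Gateway (the name dummy-app-gateway is made up; your VirtualService's hosts and gateways fields would then need to reference myapp.com and this gateway):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: dummy-app-gateway   # hypothetical name
  namespace: my-namespace
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "myapp.com"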

Wrong Base URL Keycloak Deploy on AWS EKS

I have a problem when deploying the Keycloak server on AWS EKS.
Here is my configuration:
Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-keycloak
  template:
    metadata:
      labels:
        app: my-keycloak
    spec:
      containers:
      - name: my-keycloak
        image: jboss/keycloak
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
          name: http
        - containerPort: 8443
          name: https
        env:
        - name: PROXY_ADDRESS_FORWARDING
          value: "true"
Service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-keycloak
spec:
  selector:
    app: my-keycloak
  ports:
  - port: 8080
    targetPort: 8080
    name: http
Ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: my-keycloak
  name: my-keycloak-ingress
spec:
  rules:
  - host: mykecloak.com
    http:
      paths:
      - backend:
          serviceName: my-keycloak
          servicePort: 8080
but the base URL is always set to this, which is wrong and does not work. What I want is for the base URL to be https://mykeycloak.com/* (with https and without the port number)
(screenshot: current-deployment)
Many people said that the solution is to set PROXY_ADDRESS_FORWARDING to TRUE, but it does not work for me. Is there something I missed?
Thanks for your help
You need to add the path /*
- host: mykeycloak.com
  http:
    paths:
    - path: /*
      backend:
        serviceName: my-keycloak
        servicePort: 8080
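Putting that change into the Ingress from the question, the complete resource would look roughly like this (host spelled as in the answer; adjust to your real domain):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: my-keycloak
  name: my-keycloak-ingress
spec:
  rules:
  - host: mykeycloak.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: my-keycloak
          servicePort: 8080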

istio - using VirtualService and Gateway instead of LoadBalancer not working

I have the following application, which I'm able to run in K8S successfully using a Service of type LoadBalancer. It's a very simple app with two routes:
/ - you should see 'hello application'
/api/books - should provide a list of books in JSON format
This is the service
apiVersion: v1
kind: Service
metadata:
  name: go-ms
  labels:
    app: go-ms
    tier: service
spec:
  type: LoadBalancer
  ports:
  - port: 8080
  selector:
    app: go-ms
This is the deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: go-ms
  labels:
    app: go-ms
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: go-ms
        tier: service
    spec:
      containers:
      - name: go-ms
        image: rayndockder/http:0.0.2
        ports:
        - containerPort: 8080
        env:
        - name: PORT
          value: "8080"
        resources:
          requests:
            memory: "64Mi"
            cpu: "125m"
          limits:
            memory: "128Mi"
            cpu: "250m"
After applying both YAMLs and calling the URL:
http://b0751-1302075110.eu-central-1.elb.amazonaws.com/api/books
I was able to see the data in the browser as expected, and also for the root app using just the external IP.
Now I want to use Istio, so I followed the guide and installed it successfully via Helm
using https://istio.io/docs/setup/kubernetes/install/helm/ and verified that all 53 CRDs are there, and that the istio-system
components (such as istio-ingressgateway,
istio-pilot etc., all 8 deployments) are up and running.
I changed the service above from LoadBalancer to NodePort
and created the following Istio config according to the Istio docs:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 8080
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: virtualservice
spec:
  hosts:
  - "*"
  gateways:
  - http-gateway
  http:
  - match:
    - uri:
        prefix: "/"
    - uri:
        exact: "/api/books"
    route:
    - destination:
        port:
          number: 8080
        host: go-ms
In addition I added the following:
kubectl label namespace books istio-injection=enabled where the application is deployed.
Now to get the external IP I used the command
kubectl get svc -n istio-system -l istio=ingressgateway
and got this as the external-ip:
b0751-1302075110.eu-central-1.elb.amazonaws.com
When trying to access the URL
http://b0751-1302075110.eu-central-1.elb.amazonaws.com/api/books
I got the error:
This site can’t be reached
ERR_CONNECTION_TIMED_OUT
If I run the docker image rayndockder/http:0.0.2 via
docker run -it -p 8080:8080 httpv2
the paths work correctly!
Any idea/hint what could be the issue?
Is there a way to trace the Istio configs to see whether something is missing, or whether we have a collision with a port or network policy maybe?
BTW, the deployment and service can run on any cluster for testing, if someone could help...
If I change everything to port 80 (in all yaml files, the application and the docker image) I am able to get the data for the root path, but not for "/api/books".
I tried your config, with the gateway port changed to 80 from 8080, in my local minikube setup of Kubernetes and Istio. This is the command I used:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: go-ms
  labels:
    app: go-ms
    tier: service
spec:
  ports:
  - port: 8080
  selector:
    app: go-ms
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: go-ms
  labels:
    app: go-ms
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: go-ms
        tier: service
    spec:
      containers:
      - name: go-ms
        image: rayndockder/http:0.0.2
        ports:
        - containerPort: 8080
        env:
        - name: PORT
          value: "8080"
        resources:
          requests:
            memory: "64Mi"
            cpu: "125m"
          limits:
            memory: "128Mi"
            cpu: "250m"
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: go-ms-virtualservice
spec:
  hosts:
  - "*"
  gateways:
  - http-gateway
  http:
  - match:
    - uri:
        prefix: /
    - uri:
        exact: /api/books
    route:
    - destination:
        port:
          number: 8080
        host: go-ms
EOF
The reason I changed the gateway port to 80 is that the istio ingress gateway by default opens up a few ports such as 80, 443 and a few others. In my case, as minikube doesn't have an external load balancer, I used node ports, which is 31380 in my case.
I was able to access the app with the URL http://$(minikube ip):31380.
There is no point in changing the ports of the services and deployments, since those are application specific.
Maybe this question specifies the ports opened by the istio ingress gateway.
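If you want to check which node port your own cluster mapped to port 80 of the default gateway, something along these lines should work (the port name http2 is what the default install uses, so verify it in your setup):
kubectl -n istio-system get svc istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'
# then, on minikube:
curl http://$(minikube ip):31380/api/books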

Kubernetes 1.4 SSL Termination on AWS

I have 6 HTTP micro-services. Currently they run in a crazy bash/custom deploy tools setup (dokku, mup).
I dockerized them and moved to Kubernetes on AWS (set up with kops). The last piece is converting my nginx config.
I'd like
All 6 to have SSL termination (not in the docker image)
4 need websockets and client IP session affinity (Meteor, Socket.io)
5 need http->https forwarding
1 serves the same content on http and https
I did the first one, SSL termination, by setting the service type to LoadBalancer and using AWS-specific annotations. This created AWS load balancers, but this seems like a dead end for the other requirements.
I looked at Ingress, but don't see how to do it on AWS. Will this Ingress Controller work on AWS?
Do I need an nginx controller in each pod? This looked interesting, but I'm not sure how recent/relevant it is.
I'm not sure what direction to start in. What will work?
Mike
You should be able to use the nginx ingress controller to accomplish this.
SSL termination
Websocket support
http->https
Turn off the http->https redirect, as described in the link above (a sketch follows this list)
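For that last point, the redirect is switched off per Ingress with an annotation. A sketch for the one service that serves the same content on http and https; the hostname and service name here are hypothetical, and the annotation prefix varies between nginx ingress controller versions (older releases like the 0.8.x used below take ingress.kubernetes.io/ssl-redirect, newer ones nginx.ingress.kubernetes.io/ssl-redirect):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: both-protocols-ingress   # hypothetical name
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: www.example.io          # hypothetical host
    http:
      paths:
      - path: /
        backend:
          serviceName: public-site   # hypothetical service
          servicePort: http-port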
The README walks you through how to set it up, and there are plenty of examples.
The basic pieces you need to make this work are:
A default backend that will respond with 404 when there is no matching Ingress rule
The nginx ingress controller which will monitor your ingress rules and rewrite/reload nginx.conf whenever they change.
One or more ingress rules that describe how traffic should be routed to your services.
The end result is that you will have a single ELB that corresponds to your nginx ingress controller service, which in turn is responsible for routing to your individual services according to the ingress rules specified.
There may be a better way to do this. I wrote this answer because I asked the question. It's the best I could come up with based on Pixel Elephant's doc links above.
The default-http-backend is very useful for debugging. +1
Ingress
this creates an endpoint on the node's IP address, which can change depending on where the Ingress Container is running
note the configmap at the bottom. Configured per environment.
(markdown placeholder because no ```)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
  name: all-ingress
spec:
  tls:
  - hosts:
    - admin-stage.example.io
    secretName: tls-secret
  rules:
  - host: admin-stage.example.io
    http:
      paths:
      - backend:
          serviceName: admin
          servicePort: http-port
        path: /
---
apiVersion: v1
data:
  enable-sticky-sessions: "true"
  proxy-read-timeout: "7200"
  proxy-send-timeout: "7200"
kind: ConfigMap
metadata:
  name: nginx-load-balancer-conf
App Service and Deployment
the service port needs to be named, or you may get "upstream default-admin-80 does not have any active endpoints. Using default backend"
(markdown placeholder because no ```)
apiVersion: v1
kind: Service
metadata:
  name: admin
spec:
  ports:
  - name: http-port
    port: 80
    protocol: TCP
    targetPort: http-port
  selector:
    app: admin
  sessionAffinity: ClientIP
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: admin
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: admin
      name: admin
    spec:
      containers:
      - image: example/admin:latest
        name: admin
        ports:
        - containerPort: 80
          name: http-port
        resources:
          requests:
            cpu: 500m
            memory: 1000Mi
        volumeMounts:
        - mountPath: /etc/env-volume
          name: config
          readOnly: true
      imagePullSecrets:
      - name: cloud.docker.com-pull
      volumes:
      - name: config
        secret:
          defaultMode: 420
          items:
          - key: admin.sh
            mode: 256
            path: env.sh
          - key: settings.json
            mode: 256
            path: settings.json
          secretName: env-secret
Ingress Nginx Docker Image
note default-ssl-certificate at bottom
logging is great -v below
note the Service will create an ELB on AWS which can be used to configure DNS.
(markdown placeholder because no ```)
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-service
spec:
  ports:
  - name: http-port
    port: 80
    protocol: TCP
    targetPort: http-port
  - name: https-port
    port: 443
    protocol: TCP
    targetPort: https-port
  selector:
    app: nginx-ingress-service
  sessionAffinity: None
  type: LoadBalancer
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-controller
  labels:
    k8s-app: nginx-ingress-lb
spec:
  replicas: 1
  selector:
    k8s-app: nginx-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-lb
      name: nginx-ingress-lb
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
        name: nginx-ingress-lb
        imagePullPolicy: Always
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
        # use downward API
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: http-port
          containerPort: 80
          hostPort: 80
        - name: https-port
          containerPort: 443
          hostPort: 443
        # we expose 18080 to access nginx stats in url /nginx-status
        # this is optional
        - containerPort: 18080
          hostPort: 18080
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --default-ssl-certificate=default/tls-secret
        - --nginx-configmap=$(POD_NAMESPACE)/nginx-load-balancer-conf
        - --v=2
Default Backend (this is copy/paste from .yaml file)
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  labels:
    k8s-app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    k8s-app: default-http-backend
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: default-http-backend
spec:
  replicas: 1
  selector:
    k8s-app: default-http-backend
  template:
    metadata:
      labels:
        k8s-app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissable as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: gcr.io/google_containers/defaultbackend:1.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
This config uses three secrets (creation commands are sketched below the list):
tls-secret - 3 files: tls.key, tls.crt, dhparam.pem
env-secret - 2 files: admin.sh and settings.json. Container has start script to setup environment.
cloud.docker.com-pull
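For reference, these secrets could be created along these lines (file names taken from the list above; the docker registry credentials are placeholders):
kubectl create secret generic tls-secret \
  --from-file=tls.key --from-file=tls.crt --from-file=dhparam.pem
kubectl create secret generic env-secret \
  --from-file=admin.sh --from-file=settings.json
kubectl create secret docker-registry cloud.docker.com-pull \
  --docker-server=cloud.docker.com --docker-username=<user> \
  --docker-password=<password> --docker-email=<email>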