Problems configuring Ingress with cookie affinity

I was looking into how to use cookie affinity in GKE, using an Ingress for that.
I found the following guide: https://cloud.google.com/kubernetes-engine/docs/how-to/configure-backend-service
I created a YAML file with the following:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-bsc-deployment
spec:
  selector:
    matchLabels:
      purpose: bsc-config-demo
  replicas: 3
  template:
    metadata:
      labels:
        purpose: bsc-config-demo
    spec:
      containers:
      - name: hello-app-container
        image: gcr.io/google-samples/hello-app:1.0
---
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: my-bsc-backendconfig
spec:
  timeoutSec: 40
  connectionDraining:
    drainingTimeoutSec: 60
  sessionAffinity:
    affinityType: "GENERATED_COOKIE"
    affinityCookieTtlSec: 50
---
apiVersion: v1
kind: Service
metadata:
  name: my-bsc-service
  labels:
    purpose: bsc-config-demo
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": {"80":"my-bsc-backendconfig"}}'
spec:
  type: NodePort
  selector:
    purpose: bsc-config-demo
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-bsc-ingress
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: my-bsc-service
          servicePort: 80
---
Everything seems to go well. When I inspect the created Ingress I see two backend services. One of them has the cookie configured, but the other doesn't.
If I create the Deployment and then, from GCP's console, create the Service and Ingress, only one backend service appears.
Does anybody know why I get two backend services when applying the YAML, but only one when doing it from the console?
Thanks in advance
Oscar

Your definition is good.
The reason you have two backends is that your Ingress does not define a default backend. A GCE load balancer requires a default backend, so during LB creation a second backend is added to act as the default (it does nothing but serve 404 responses). The default backend does not use the BackendConfig.
This shouldn't be a problem, but if you want to ensure only your backend is used, define a default backend in your Ingress definition by adding spec.backend:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-bsc-ingress
spec:
  backend:
    serviceName: my-bsc-service
    servicePort: 80
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: my-bsc-service
          servicePort: 80
But, like I said, you don't NEED to define this; the additional backend won't really come into play, and no session affinity is required for it (it is backed by a single pod anyway). If you are curious, the default backend pod in question is called l7-default-backend-[replicaSet_hash]-[pod_hash] in the kube-system namespace.
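If you want to see it for yourself, a quick check (assuming you have kubectl access to the cluster) is to look for that pod in kube-system:
# the GKE-managed default backend that serves the 404 responses
kubectl get pods -n kube-system | grep l7-default-backend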

If you are using the NGINX ingress controller, you can enable cookie affinity on the Ingress like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-sticky
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
  - host: ingress.example.com
    http:
      paths:
      - backend:
          serviceName: http-svc
          servicePort: 80
        path: /
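To verify the affinity cookie is actually issued, a quick check against the host above (assuming ingress.example.com resolves to your ingress controller) could be:
# expect a "Set-Cookie: route=..." header in the response
curl -sI http://ingress.example.com/ | grep -i '^set-cookie'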
You can create the Service like this (note that sessionAffinity: ClientIP gives IP-based stickiness at the Service level rather than a cookie):
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  sessionAffinity: ClientIP
If you are using the Traefik ingress instead of the NGINX or default GKE ingress, you can write the Service like this:
apiVersion: v1
kind: Service
metadata:
  name: session-affinity
  labels:
    app: session-affinity
  annotations:
    traefik.ingress.kubernetes.io/affinity: "true"
    traefik.ingress.kubernetes.io/session-cookie-name: "sticky"
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: session-affinity-demo

Related

Ingress Controller produces 502 Bad Gateway on every other request

I have a Kubernetes ingress controller terminating my SSL, with an Ingress resource handling two routes: the first my frontend SPA app, the second my backend API. Currently, when I hit the frontend and backend services directly they perform flawlessly, but when I call them through the ingress controller both alternate between producing the correct result and a 502 Bad Gateway.
To me it smells like my Ingress resource has some sort of path conflict that I'm not sure how to debug.
Reddit suggested it could be a label and selector mismatch between my Services and Deployments, which I believe I checked thoroughly. They also mentioned an "api layer deployment and a worker layer deployment [that] both share a common app label and your PDB selects that app label with a 50% availability for example", which I haven't run down because I don't quite understand it.
I also realize SSL could play a role in gateway issues; however, my certificates appear to be working when I hit the https:// port of the ingress controller.
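As a sanity check for the label/selector mismatch theory, it may help to confirm that each Service actually has endpoints (hypothetical commands, assuming kubectl access to the cluster; an empty ENDPOINTS column would point to a mismatch):
kubectl get endpoints frontend backend -n ingress-nginx
kubectl describe service frontend -n ingress-nginx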
frontend-deploy:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: my-performance-front
      tier: frontend
  replicas: 1
  template:
    metadata:
      labels:
        app: my-performance-front
        tier: frontend
    spec:
      containers:
      - name: my-performance-frontend
        image: "<my current image and location>"
        lifecycle:
          preStop:
            exec:
              command: ["/usr/sbin/nginx", "-s", "quit"]
      imagePullSecrets:
      - name: regcred
frontend-svc:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: ingress-nginx
spec:
  selector:
    app: my-performance-front
    tier: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
backend-deploy:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: my-performance-back
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: my-performance-back
        tier: backend
    spec:
      containers:
      - name: my-performance-backend
        image: "<my current image and location>"
        lifecycle:
          preStop:
            exec:
              command: ["/usr/sbin/nginx", "-s", "quit"]
      imagePullSecrets:
      - name: regcred
backend-svc:
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: ingress-nginx
spec:
  selector:
    app: my-performance-back
    tier: backend
  ports:
  - protocol: TCP
    name: "http"
    port: 80
    targetPort: 8080
  type: LoadBalancer
ingress-rules:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-rules
  namespace: ingress-nginx
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: "/$1"
    # nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  rules:
  - http:
      paths:
      - path: /(api/v0(?:/|$).*)
        pathType: Prefix
        backend:
          service:
            name: backend
            port:
              number: 80
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 80
Any ideas, critiques, or experiences are welcome and appreciated!

AWS ingress controller setup

I am trying to expose my microservice to the internet with AWS EC2, using the Deployment and Service YAML files below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  strategy: {}
  template:
    metadata:
      labels:
        app: my-app
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      containers:
      - name: my-app
        image: XXX
        ports:
        - name: my-app
          containerPort: 3000
        resources: {}
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
  - name: my-app
    nodePort: 32000
    port: 3000
    targetPort: 3000
  type: NodePort
I also created an Ingress resource:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: example.myApp.com
    http:
      paths:
      - path: /my-app
        backend:
          serviceName: my-app
          servicePort: 80
As the last step, I opened port 80 in the AWS dashboard. How should I choose an ingress controller to realize my intent?
servicePort should be 3000, the same as port in your Service object (see the snippet below).
Note, however, that setting up a cluster with kubeadm on AWS is not the best way to go: EKS provides you with optimized, well-configured clusters with external load balancers and ingress controllers.
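For reference, this is the same Ingress from the question with only the servicePort corrected:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: example.myApp.com
    http:
      paths:
      - path: /my-app
        backend:
          serviceName: my-app
          servicePort: 3000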

Use K8s Ingress with Istio gateway?

In the Helm values file there is a setting global.k8sIngressSelector with the description:
Gateway used for legacy k8s Ingress resources. By default it is
using 'istio:ingress', to match 0.8 config. It requires that
ingress.enabled is set to true. You can also set it
to ingressgateway, or any other gateway you define in the 'gateway'
section.
My interpretation of this is that the Istio ingress should pick up normal Ingress configurations instead of my having to create a VirtualService. Is this correct? I tried it and it is not working for me.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: mendhak/http-https-echo
        ports:
        - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: echo
spec:
  type: ClusterIP
  selector:
    app: echo
  ports:
  - port: 80
    name: http
This works:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - '*.dev.example.com'
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: echo
spec:
  hosts:
  - echo.dev.example.com
  gateways:
  - gateway
  http:
  - route:
    - destination:
        host: echo
This doesn't:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: echo
spec:
  rules:
  - host: echo.dev.example.com
    http:
      paths:
      - backend:
          serviceName: echo
          servicePort: 80
Your Ingress needs to have the annotation kubernetes.io/ingress.class: istio (see the example below).
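For example, the Ingress from your question with that annotation added would look like this:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: echo
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  rules:
  - host: echo.dev.example.com
    http:
      paths:
      - backend:
          serviceName: echo
          servicePort: 80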
Depending on what version of Istio you are using, it may not be working anyway. There is currently an open issue about Ingress not working in the latest releases, and it sounds like it may have been broken for a while.
https://github.com/istio/istio/issues/10500

Traefik ingress is not working behind an AWS load balancer

After I created a Traefik DaemonSet, I created a Service of type LoadBalancer on port 80, which is the Traefik proxy port, and the node got automatically registered to it. If I hit the ELB I get the proxy 404, which is OK because no service is registered yet.
I then created a NodePort Service for the web UI, targeting port 8080 inside the pod and port 80 on the cluster IP. I can curl the Traefik UI from inside the cluster and it returns the Traefik UI.
I then created an Ingress so that when I hit elb/ui it gets me to the backend web UI service of Traefik, and it fails. I also have the right annotations in my Ingress, but the ELB does not route the path to the Traefik UI in the backend, which is running properly.
What am I doing wrong here? I can post all my YAML files if required.
Update:
My yaml files:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: traefik
  labels:
    app: traefik
spec:
  template:
    metadata:
      labels:
        app: traefik
    spec:
      containers:
      - image: traefik
        name: traefik
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
        - --web
        ports:
        - containerPort: 8080
          name: traefikweb
        - containerPort: 80
          name: traefikproxy
apiVersion: v1
kind: Service
metadata:
  name: traefik-proxy
spec:
  selector:
    app: traefik
  ports:
  - port: 80
    targetPort: traefikproxy
  type: LoadBalancer
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
spec:
  selector:
    app: traefik
  ports:
  - name: http
    targetPort: 8080
    nodePort: 30001
    port: 80
  type: NodePort
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: default
  name: traefik-ing
  annotations:
    kubernetes.io/ingress.class: traefik
    #traefik.ingress.kubernetes.io/rule-type: PathPrefixStrip:/ui
spec:
  rules:
  - http:
      paths:
      - path: /ui
        backend:
          serviceName: traefik-web-ui
          servicePort: 80
If you are on private subnets, use:
apiVersion: v1
kind: Service
metadata:
  name: traefik-proxy
  annotations:
    "service.beta.kubernetes.io/aws-load-balancer-internal": "0.0.0.0/0"
spec:
  selector:
    app: traefik
  ports:
  - port: 80
    targetPort: traefikproxy
  type: LoadBalancer
"I then created an ingress so that when i hit elb/ui it gets me to the backend web-ui service of traefik and it fails."
How did it fail? Did you get error 404, error 500, or something else?
Also, for the traefik-web-ui service, you don't need to set type: NodePort; it should be type: ClusterIP.
When you configure backends for your Ingress, the only requirement is that they are reachable from inside the cluster, so the ClusterIP type is more than enough for that.
Your service should look like this:
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
spec:
  selector:
    app: traefik
  ports:
  - name: http
    targetPort: 8080
    port: 80
The PathPrefixStrip option can be useful, because without it requests will be forwarded to the UI with the /ui prefix, which you definitely don't want (see the sketch below).
All other configs look good.
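A minimal sketch of that, based on the annotation already commented out in your Ingress and assuming the Traefik 1.x annotation syntax, could look like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: default
  name: traefik-ing
  annotations:
    kubernetes.io/ingress.class: traefik
    # strip the /ui prefix before forwarding to the backend (Traefik 1.x)
    traefik.ingress.kubernetes.io/rule-type: PathPrefixStrip
spec:
  rules:
  - http:
      paths:
      - path: /ui
        backend:
          serviceName: traefik-web-ui
          servicePort: 80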

Kubernetes on AWS with NGINX ingress controller and SSL termination

I'm having issues configuring SSL termination in my Kubernetes cluster and am attempting to figure out the best place for it to happen.
I have been able to get it working by following the instructions listed here and then updating the ingress controller Service to include the SSL certificate details using the service.beta.kubernetes.io/aws-load-balancer-ssl-cert annotation:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:...
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - name: https
    port: 443
    targetPort: 80
I then have Ingress rules and Services set up similar to:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app1
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: app1.foo.bar
    http:
      paths:
      - backend:
          serviceName: app1
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: app1
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80
  selector:
    app: app1
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app1
spec:
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
When going to app1.foo.bar I can see that:
- http requests are redirected to https
- the SSL certificate is correctly applied
Originally I was trying to apply the certificate to my individual app Services. I could see the certificate being applied to the ELB in AWS, but it wasn't being passed through. So my question is:
Is this the correct configuration or is there a better solution?
Thank you :)
I would suggest terminating SSL on the NGINX ingress controller exposed with an ELB, and using kube-lego for automated SSL certificate management.
https://github.com/kubernetes/ingress-nginx &
https://github.com/jetstack/kube-lego