Kubernetes nginx ingress proxy pass to websocket - amazon-web-services

We are running a Rails application with Unicorn and websockets.
We are using an AWS ELB as the entry point.
SSL terminates on the ELB, which forwards traffic to the application.
The nginx ingress routes traffic to the web app running Unicorn/Puma on port 8080.
The app works, but our websocket endpoint responds with 200 instead of 101. We have enabled CORS and used the required annotations on the ingress.
These are the annotations used on the ingress controller Service:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:iam::xxx:server-certificate/staging
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
When we set the AWS load balancer protocol to tcp and the load balancer port to 443, it fails with an infinite redirect loop.
Following are the annotations used in the ingress:
nginx.ingress.kubernetes.io/service-upstream: "true"
nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
nginx.ingress.kubernetes.io/cors-allow-headers: "DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type"
nginx.ingress.kubernetes.io/cors-allow-origin: "*"
nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
ingress.kubernetes.io/force-ssl-redirect: "true"
The sample nginx configuration we used earlier, without the ingress, is here.
How do we get websockets working with the nginx ingress controller behind an AWS ELB?

Is it possible to try without CORS?
As part of the handshake, the client must send at least these headers:
Sec-WebSocket-Key
Sec-WebSocket-Version
and possibly others. See https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API/Writing_WebSocket_servers#The_WebSocket_Handshake
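To check whether the upgrade headers survive the hop through the ELB and the ingress, you can hand-roll the handshake with curl; the URL and path below are placeholders for your websocket endpoint:

curl -i -N \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: SGVsbG8sIHdvcmxkIQ==" \
  https://your-app.example.com/cable

A healthy endpoint answers HTTP/1.1 101 Switching Protocols; getting a 200 back, as described above, usually means the Connection/Upgrade headers were dropped somewhere along the chain.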

Related

Nginx ingress controller of type internal nlb giving 400 "The plain HTTP request was sent to HTTPS port" error

I have installed the nginx ingress controller inside an EKS cluster, fronted by an internal NLB.
The ingress controller created a Network Load Balancer with listeners on ports 80 and 443.
With a TCP listener on port 443 we can't attach an SSL cert; only when I use a listener of type TLS can I add an SSL cert from AWS ACM.
Now the issue: I am trying to expose a frontend application through this NLB-backed nginx ingress controller.
When the NLB listener on 443 is TCP, I can access the application, but the browser complains about the SSL cert (the fake Kubernetes cert). When I change the listener from TCP to TLS, it throws a 400 "The plain HTTP request was sent to HTTPS port" error.
Like many solutions out there that suggest changing the targetPort from https: https to https: http, I tried that too, but then I get "The page isn't working, ERR_TOO_MANY_REDIRECTS".
Could anyone help me resolve this issue?
Any ideas or suggestions would be highly appreciated.
To resolve the SSL certificate issue and the 400 "The plain HTTP request was sent to HTTPS port" error, you may need to modify your ingress configuration so that the ingress listens for HTTPS traffic on port 443. This can be done by adding the following annotations to your ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/secure-backends: "true"
  name: example
  namespace: example
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example
            port:
              name: https
  tls:
  - hosts:
    - example.com
    secretName: example-tls
In the example above, nginx.ingress.kubernetes.io/ssl-redirect tells the ingress to redirect HTTP traffic to HTTPS, and nginx.ingress.kubernetes.io/secure-backends tells the ingress to encrypt the traffic between the ingress and the backend services. secretName references the Kubernetes Secret that holds the TLS certificate and key for the host.
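One version note beyond the original answer: on recent ingress-nginx releases the secure-backends annotation has been removed in favour of backend-protocol, so depending on your controller version the equivalent is:

nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"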

After adding an AWS ACM cert, the EKS ELB is not opening on HTTPS

I have my app running on EKS, using the istio-ingressgateway service as the load balancer, together with Knative Serving. I have added an ACM certificate to my ELB, but after patching the service with
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:xx-xxxx-1:1234567890:certificate/xxxxxx-xxx-dddd-xxxx-xxxxxxxx"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
my domain no longer opens over HTTPS, though it works fine over HTTP, giving this error on HTTPS:
< HTTP/1.1 408 REQUEST_TIMEOUT
< Content-Length: 0
< Connection: Close
Make sure your load balancer forwards traffic from port 443 to the backend target port (3190 in the case of Istio). Check your Istio Gateway file to see whether you have port 443 mapped to the targets.
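For reference, a minimal sketch of what such a Gateway could look like, assuming TLS terminates at the ELB (backend-protocol: http) so the gateway serves plain HTTP on the forwarded port; the name and host below are placeholders:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: my-gateway          # placeholder name
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: http-443
      protocol: HTTP        # plain HTTP, since TLS was already terminated at the ELB
    hosts:
    - "example.com"         # placeholder host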

AWS Global Accelerator in front of ALB managed with EKS alb ingress health checks fail

I've got an EKS cluster with the ALB ingress controller and ExternalDNS connected to Route 53. Now some clients want static IPs, or an IP range, for connecting to our servers so they can whitelist those IPs in their firewall.
I tried the new AWS Global Accelerator, following this tutorial: https://docs.aws.amazon.com/global-accelerator/latest/dg/getting-started.html, but it fails with:
Listeners in this accelerator have an unhealthy status. To make sure that Global Accelerator can run health checks successfully, ensure that a service is responding on the protocol and port that you specified in the health check configuration.
Reading further, I understood that the health checks will be the same as those configured on the ALB, and that it might be failing because the Route 53 health check IPs are not whitelisted; but all inbound traffic is open on ports 80 and 443, so I'm not sure how to debug this further, or whether there is another way to get an IP range or a static IP for the ALB.
You need to add a health check rule like this one to the ingress:
- http:
    paths:
    - path: /global-accelerator-healthcheck
      backend:
        serviceName: global-accelerator-healthcheck
        servicePort: use-annotation
Then an annotation:
alb.ingress.kubernetes.io/actions.global-accelerator-healthcheck: '{"Type": "fixed-response", "FixedResponseConfig": {"ContentType": "text/plain", "StatusCode": "200", "MessageBody": "healthy" }}'
Then configure Global Accelerator's health checks to use that endpoint.
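If you are on the newer networking.k8s.io/v1 Ingress API, the same rule is expressed with use-annotation as a port name; this is a sketch assuming the AWS Load Balancer Controller v2.x:

- http:
    paths:
    - path: /global-accelerator-healthcheck
      pathType: Exact
      backend:
        service:
          name: global-accelerator-healthcheck
          port:
            name: use-annotation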
When it comes to the AWS ALB Ingress controller, always try to think of it as if you were working with an AWS ALB and its Target Groups.
You can even identify the ALB and its target groups by logging in to AWS console UI.
To answer your question, try adding the following details to your ingress:
annotations:
  alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
  alb.ingress.kubernetes.io/healthcheck-port: "8161"
  alb.ingress.kubernetes.io/healthcheck-path: /admin
  alb.ingress.kubernetes.io/success-codes: '401'
  alb.ingress.kubernetes.io/backend-protocol: HTTP
Note: If you have different health check settings for different services, remove this block from the K8s Ingress and add a block per K8s Service (see the sketch below).
If more information is required, please refer to: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.1/guide/ingress/annotations/
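As a sketch of the per-Service variant mentioned in the note, assuming a hypothetical service named my-service; the AWS Load Balancer Controller also reads these annotations from Service objects:

apiVersion: v1
kind: Service
metadata:
  name: my-service          # hypothetical name
  annotations:
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-path: /healthz   # placeholder path
    alb.ingress.kubernetes.io/success-codes: "200"
spec:
  selector:
    app: my-service
  ports:
  - port: 80
    targetPort: 8080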

Istio-ingress behind Google cloud Layer 7 load balancer

The problem I am facing is that my istio-ingressgateway works perfectly behind a network-layer load balancer (an L4 or TCP load balancer), but when I connect the istio-ingressgateway to a Layer 7 load balancer by attaching its NodePort to the backend service, HTTP-to-HTTPS redirection stops working properly: it always returns a 301 response, even when I make the request over HTTPS.
I successfully configured the same architecture. Here are the steps to reproduce:
Deploy a GKE cluster, either with Istio built in or with Istio installed afterward.
Get the istio-ingressgateway NodePort for HTTP:
kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'
Create a Global Load Balancer:
Create a backend service and select your cluster's InstanceGroup.
Set the istio-ingressgateway NodePort as the port.
Create a health check on the same NodePort value, in TCP mode.
Configure your URL path.
Validate, then wait 5-10 minutes to give the health check time to validate your configuration and start routing traffic.
Now you can reach your K8s cluster through the Istio NodePort via the global load balancer. Deploy a service on Istio and you can reach it through the Global Load Balancer.
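The NodePort-dependent steps above can be scripted; the health-check name here is a placeholder:

# Grab the NodePort the global load balancer should target
NODEPORT=$(kubectl -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')

# TCP-mode health check on the same NodePort, as described in the steps above
gcloud compute health-checks create tcp istio-ingress-hc --port "$NODEPORT"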
There is a related issue on GitHub; please check: https://github.com/istio/istio/issues/17980

HTTPS redirection with kubernetes l4 load balancer

My application does not need to respond to HTTP requests, except to redirect them to HTTPS, but I'm having trouble configuring that. I have Django, behind Gunicorn, behind a Google Cloud layer 4 load balancer (set up through Kubernetes). I'm not using nginx, because the static files are served through Google Cloud Storage buckets, so it seemed to add unnecessary complexity (is there a reason this is wrong?).
When I configure Gunicorn for HTTPS, it doesn't respond to HTTP requests (OK). The first idea I had was to forward ports 80 and 443 through the load balancer and then let Django/Gunicorn take care of the redirection, but I can't get Gunicorn to serve both HTTP and HTTPS at the same time, even when I tried exposing two ports:
gunicorn --threads 2 -b :8000 --keyfile=key.txt --certfile=cert.txt myapp.wsgi
The load balancer config is:
apiVersion: v1
kind: Service
metadata:
  name: xxx
  labels:
    name: xxx
spec:
  ports:
  - port: 443
    name: https
    targetPort: 8000
  selector:
    name: app
  type: LoadBalancer
  loadBalancerIP: xx.xx.xx.xx
Is it possible to change this so that Gunicorn will also answer HTTP requests? (The Django config is set up not to redirect HTTP requests.)
Or have I gone about this completely wrong? Should I be performing the redirection at the load balancer itself?
The advice from Kubernetes is to use an ingress controller with, behind it (in this case), an nginx that redirects HTTP to HTTPS.
What this does is:
- nginx looks at the X-Forwarded-Proto header and redirects to HTTPS if it is not set to https.
- The ingress actually terminates HTTPS for you, so your applications do not have to.
Look at https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx for detailed examples.
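For a current setup, here is a minimal sketch of that pattern using the networking.k8s.io/v1 Ingress API with the ingress-nginx controller; every name and host below is a placeholder:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"   # redirect HTTP to HTTPS
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - example.com
    secretName: myapp-tls        # TLS is terminated at the ingress
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 8000       # plain-HTTP Gunicorn behind the ingress

With this in place, Gunicorn can drop the --keyfile/--certfile flags and serve plain HTTP on port 8000.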