HTTPS redirection with Kubernetes L4 load balancer - Django

My application does not need to respond to HTTP requests, except to redirect them to HTTPS, but I'm having trouble configuring it for that. I have Django, behind Gunicorn, behind a Google Cloud level 4 load balancer (set up through Kubernetes). I'm not using nginx, because the static files are served from Google Cloud Storage buckets, so it seemed to add unnecessary complexity (is there a reason this is wrong?).
When I configure Gunicorn for HTTPS, it doesn't respond to HTTP requests (OK). The first idea I had was to forward ports 80 and 443 through the load balancer and then let Django/Gunicorn take care of the redirection, but I can't get Gunicorn to serve both HTTP and HTTPS at the same time, even when I tried exposing two ports:
gunicorn --threads 2 -b :8000 --keyfile=key.txt --certfile=cert.txt myapp.wsgi
The load balancer config is:
apiVersion: v1
kind: Service
metadata:
  name: xxx
  labels:
    name: xxx
spec:
  ports:
    - port: 443
      name: https
      targetPort: 8000
  selector:
    name: app
  type: LoadBalancer
  loadBalancerIP: xx.xx.xx.xx
Is it possible to change this so that Gunicorn will answer both HTTP and HTTPS requests? (The Django config is set up not to redirect HTTP requests.)
Or have I gone about this completely wrong? Should I be trying to perform the redirection at the load balancer itself?

The advice from Kubernetes is to use an Ingress with, behind it, (in this case) an nginx ingress controller to redirect HTTP to HTTPS.
What this does is:
- The nginx looks at the forwarded headers (X-Forwarded-Proto) and redirects to HTTPS if the original request did not come in over HTTPS.
- The ingress will actually terminate HTTPS for you, so that your applications do not have to.
Look at https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx for detailed examples.
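As a rough sketch (assuming an nginx ingress controller is already deployed in the cluster; the host, secret and service names below are placeholders), an Ingress that terminates HTTPS and redirects plain HTTP could look like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # the controller answers plain HTTP with a 301 redirect to HTTPS
    ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
    - hosts:
        - myapp.example.com
      secretName: myapp-tls        # TLS certificate and key stored as a Secret
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: myapp-service   # plain-HTTP Service in front of gunicorn
              servicePort: 8000
With this, Gunicorn only needs to serve plain HTTP on port 8000; TLS termination and the redirect happen at the ingress layer.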

Related

GKE ingress path mapping can't handle url params

I am running an ingress in GKE. I am routing most of my traffic to one backend but I wish some calls to be routed to another backend. The ingress looks something like this:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
spec:
  rules:
    - http:
        paths:
          - backend:
              service:
                name: zone-search
                port:
                  name: external
            path: /api/v2/zones/location-search
            pathType: Prefix
    - http:
        paths:
          - backend:
              service:
                name: api-service
                port:
                  name: external
            path: /*
            pathType: ImplementationSpecific
If I do a request like GET /api/v2/zones/location-search, it works fine.
However, if I do GET /api/v2/zones/location-search?foo=bar my request ends up in the api-service backend and not the zone-search as I expected.
I have tried using pathType: ImplementationSpecific and had both path: /api/v2/zones/location-search and path: /api/v2/zones/location-search/*, but still no progress. Google requires the wildcard to follow a slash, but location-search is the endpoint itself and has no slash after it.
I also tried using a default backend, with the same result. The problem still seems to be that the URL including ?foo=bar doesn't match the path I specified.
I can't do path: /api/v2/zones/* since there are other endpoints in the API that would then be routed to the zone-search backend when they shouldn't be.
Update
I tried using double quotes, plus removing the second
- http:
    paths:
and started getting failed_to_pick_backend errors. It ended up solved by changing the health check for the backend service.
I don't know if the health check problem meant that the api-service was selected as a backup when the zone-search service was unhealthy or if one of my two changes solved my initial problem.
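For reference, the combined form with both paths under a single rule (a sketch only, reusing the service names from above) looked roughly like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
spec:
  rules:
    - http:
        paths:
          - path: /api/v2/zones/location-search
            pathType: Prefix
            backend:
              service:
                name: zone-search
                port:
                  name: external
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: api-service
                port:
                  name: external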
Name-based virtual hosts support routing HTTP traffic to multiple host names at the same IP address. You can use Ingress to reuse the load balancer for multiple domain names, subdomains, and to expose multiple services on a single IP address and load balancer. Check out the simple fanout and name-based virtual hosting examples to learn how to configure Ingress for these tasks.
Note: Always modify the properties of the Load Balancer via the Ingress object. Making changes directly on the load balancing resources might get lost or overridden by the GKE Ingress controller.
On the other hand:
Each external HTTP(S) load balancer or internal HTTP(S) load balancer uses a single URL map, which references one or more backend services. One backend service corresponds to each Service referenced by the Ingress.
Additionally, you can create an Ingress that specifies rules for routing requests depending on the URL path in the request. When you create the Ingress, the GKE Ingress controller creates and configures an external HTTP(S) load balancer; see the official documentation.
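For illustration, a name-based virtual hosting Ingress routes on the Host header rather than the path (the hostnames and service names here are made up):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
spec:
  rules:
    - host: foo.example.com        # requests with Host: foo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-foo
                port:
                  number: 80
    - host: bar.example.com        # requests with Host: bar.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-bar
                port:
                  number: 80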

Exposing a service on EKS using NGINX ingress and issues with load balancer

I am trying to set up a service and expose it externally on EKS. I have already done it on GKE pretty easily but now AWS is giving me a hard time.
My NGINX ingress yaml looks something like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
    - hosts:
        - app.mydomain.com
      secretName: myapp-tls
  rules:
    - host: app.mydomain.com
      http:
        paths:
          - path: /
            backend:
              serviceName: myapp-service
              servicePort: 80
And then I have my domain app.mydomain.com on Google Domains pointing at the ingress external address. There is also a cert-manager service running in order to support HTTPS.
However, while basically the same setup worked completely out of the box on GKE, EKS gives me a hard time.
From what I understand, it has something to do with the EKS default LoadBalancer being layer 4, in comparison to Google's layer 7 (which explains HTTPS not working), but there are also issues with redirection of the domain: it just resolves to the ingress address instead of my desired address, and thus my app doesn't show up.
The domain is registered with Google Domains and I'm creating Synthetic Records (for my subdomain) that point to my ingress external address on EKS. The same scheme works perfectly fine on GKE, but here it resolves to the ingress address instead of my domain, which results in a 404 on the ingress side.
I was wondering if someone could please point me to how to properly set this up? Should I give up on nginx ingress on EKS and move on to ALB? And how do I properly associate the domain?
Thank you very much in advance!
Edit:
output of kubectl describe ingress myapp-ingress:
Name:             myapp-ingress
Namespace:        default
Address:          ********************************-****************.elb.eu-west-1.amazonaws.com
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
  myapp-tls terminates app.mydomain.com
Rules:
  Host              Path  Backends
  ----              ----  --------
  app.mydomain.com
                    /     myapp-service:80 (172.31.2.238:8000)
Annotations:        cert-manager.io/cluster-issuer: myapp-letsencrypt-prod
                    kubernetes.io/ingress.class: nginx
Events:             <none>
Should I give up on nginx ingress on EKS and move onto ALB
No. NGINX ingress controllers work perfectly well on EKS. It is possible to configure them as either layer 4 or layer 7; we use ours in layer 7 mode.
Can you update your question with the output of
kubectl get ingress myapp-ingress
I think your ingress path is also incorrect. Unless I'm mistaken, that's just routing the root of your app, not all URIs. We use this scheme:
spec:
  rules:
    - host: service.d.tld
      http:
        paths:
          - path: /?(.*) # <---
            backend:
              serviceName: my-service
              servicePort: http
Are you seeing errors in the nginx ingress controller's logs? Those, plus kubectl events, are both useful for debugging purposes.
I'd disable TLS everywhere and get your service working over HTTP, then work stepwise on getting TLS enabled on the ingress controller.
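For example (a sketch; the namespace and pod name depend on how the controller was installed):
kubectl -n ingress-nginx get pods                        # find the controller pod
kubectl -n ingress-nginx logs -f <controller-pod-name>   # watch requests and errors as they arrive
kubectl get events --sort-by=.metadata.creationTimestamp
kubectl describe ingress myapp-ingress                   # shows backends and recent events for the ingress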
Edit: Based on your response above,
curl -H "Host: app.mydomain.com" http://<elb-address>:80
SHOULD call through to your service behind the ingress.
How is app.mydomain.com defined? Is it a CNAME to the dns entry?
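One way to check (a sketch using dig; nslookup works just as well) is to compare what the subdomain resolves to against the ELB hostname shown on the ingress:
dig +short app.mydomain.com CNAME   # should return the xxxx.elb.eu-west-1.amazonaws.com name
dig +short app.mydomain.com A       # should resolve to the same IPs as the ELB hostname
dig +short <elb-address> A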

Why does GCE Load Balancer behave differently through the domain name and the IP address?

A backend service happens to be returning Status 404 on the health check path of the Load Balancer. When I browse to the Load Balancer's domain name, I get "Error: Server Error/ The server encountered a temporary error", and the logs show
"type.googleapis.com/google.cloud.loadbalancing.type.LoadBalancerLogEntry"
statusDetails: "failed_to_pick_backend", which makes sense.
When I browse to the Load Balancer's Static IP, my browser shows the 404 error message which the underlying Kubernetes Pod returned. In other words, the Load Balancer passed on the request despite the failed health check.
Why these two different behaviors?
[Edit]
Here is the yaml for the Ingress that created the Load Balancer:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress1
spec:
  rules:
    - host: example.com
      http:
        paths:
          - backend:
              serviceName: myservice
              servicePort: 80
I did a "deep dive" into that and managed to reproduce the situation on my GKE cluster, so now I can tell that there are a few things combined here.
A backend service happens to be returning Status 404 on the health check path of the Load Balancer.
There could be 2 options (it is not clear from the description you have provided which one you are hitting). The first is something like:
"Error: Server Error
The server encountered a temporary error and could not complete your request.
Please try again in 30 seconds."
You are getting this one from the LoadBalancer when the HealthCheck fails for the pod. The official documentation on the GKE Ingress object says that
a Service exposed through an Ingress must respond to health checks from the load balancer.
Any container that is the final destination of load-balanced traffic must do one of the following to indicate that it is healthy:
Serve a response with an HTTP 200 status to GET requests on the / path.
Configure an HTTP readiness probe. Serve a response with an HTTP 200 status to GET requests on the path specified by the readiness probe. The Service exposed through an Ingress must point to the same container port on which the readiness probe is enabled.
You need to fix the HealthCheck handling, for example with a readiness probe as sketched below. You can check the Load Balancer details by visiting GCP console - Network Services - Load Balancing.
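A minimal readiness probe sketch (the image, path and port are placeholders and must match what the container actually answers with HTTP 200):
apiVersion: v1
kind: Pod
metadata:
  name: myservice-pod
spec:
  containers:
    - name: web
      image: nginx:1.17.6
      ports:
        - containerPort: 80
      readinessProbe:
        httpGet:
          path: /    # must return HTTP 200; GKE derives the load balancer health check from the probe path
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10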
"404 Not Found -- nginx/1.17.6"
This one is clear. That is the response returned by the endpoint that myservice is sending the request to. It looks like something is misconfigured there. My guess is that the pod simply can't serve that request properly. It could be an nginx web-server issue, etc. Please check the configuration to find out why the pod can't serve the request.
While playing with the setup I found an image that allows you to check whether a request has reached the pod, and the request's headers.
So it is possible to create a pod like:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    run: fake-web
  name: fake-default-knp
  # namespace: kube-system
spec:
  containers:
    - image: mendhak/http-https-echo
      imagePullPolicy: IfNotPresent
      name: fake-web
      ports:
        - containerPort: 8080
          protocol: TCP
to be able to see all the headers that were in the incoming requests (kubectl logs -f fake-default-knp).
When I browse to the Load Balancer's Static IP, my browser shows the 404 Error Message which the underlying Kubernetes Pod returned.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress1
spec:
  rules:
    - host: example.com
      http:
        paths:
          - backend:
              serviceName: myservice
              servicePort: 80
Upon creation of such an Ingress object, there will be at least 2 backends in the GKE cluster:
- the backend you specified upon Ingress creation (the myservice one)
- the default one (created upon cluster creation).
kubectl get pods -n kube-system -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP
l7-default-backend-xyz   1/1     Running   0          20d   10.52.0.7
Please note that myservice serves only requests that have the Host header set to example.com. The rest of the requests are sent to the "default backend". That is the reason why you are receiving the "default backend - 404" error message upon browsing to the LoadBalancer's IP address.
Technically there is a default-http-backend service that has l7-default-backend-xyz as an EndPoint.
kubectl get svc -n kube-system -o wide
NAME                   TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE   SELECTOR
default-http-backend   NodePort   10.0.6.134   <none>        80:31806/TCP   20d   k8s-app=glbc

kubectl get ep -n kube-system
NAME                   ENDPOINTS        AGE
default-http-backend   10.52.0.7:8080   20d
Again, that's the "object" that returns the "default backend - 404" error for requests whose "Host" header does not equal the one you specified in the Ingress.
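You can see the difference from outside the cluster with curl (the IP below is a placeholder for the LoadBalancer's address):
curl -i http://<load-balancer-ip>/                          # no matching Host header -> "default backend - 404"
curl -i -H "Host: example.com" http://<load-balancer-ip>/   # matches the Ingress rule -> served by myservice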
Hope that it sheds a light on the issue :)
EDIT:
"myservice serves only requests that have Host header set to example.com." So you are saying that requests go to the LB only when there is a Host header?
Not exactly. The LB receives all the requests and passes them on according to the "Host" header value. Requests with the example.com Host header are going to be served by the myservice backend.
To put it simply, the logic is the following:
- a request arrives;
- the system checks the Host header (to determine the user's backend);
- the request is served if there is a suitable user's backend (according to the Ingress config) and that backend is healthy; otherwise "Error: Server Error The server encountered a temporary error and could not complete your request. Please try again in 30 seconds." is returned if the backend is in a non-healthy state;
- if the request's Host header doesn't match any host in the Ingress spec, the request is sent to the l7-default-backend-xyz backend (not the one mentioned in the Ingress config). That backend replies with the "default backend - 404" error.
Hope that makes it clear.

How to redirect http to https in Kubernetes service manifest

I have created a service of type LoadBalancer and I also configured an SSL certificate for it. Everything works fine, but it's not redirecting my HTTP calls to HTTPS unless I manually put https before my domain.
Here is my svc.yml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "True"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    dns.alpha.kubernetes.io/external: test.example.com
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: xxxxxxxxxxxxxx
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
  labels:
    app: nginx
spec:
  type: LoadBalancer
  loadBalancerIP:
  ports:
    - port: 80
      name: http
      targetPort: 80
    - port: 443
      name: https
      targetPort: 80
  selector:
    app: nginx
I believe the k8s Service object does not have redirection functionality; it is designed to provide a stable IP (the clusterIP) in front of pods, which have ephemeral IPs. It enables service discovery for pods in the cluster.
A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them - sometimes called a micro-service. The set of Pods targeted by a Service is (usually) determined by a Label Selector (see below for why you might want a Service without a selector).
As an example, consider an image-processing backend which is running with 3 replicas. Those replicas are fungible - frontends do not care which backend they use. While the actual Pods that compose the backend set may change, the frontend clients should not need to be aware of that or keep track of the list of backends themselves. The Service abstraction enables this decoupling.
k8s service
Redirection should happen at the Ingress level (L7) or at the cloud provider's load balancer (L4).
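For example, with an nginx ingress controller deployed in the cluster (a sketch; the host, secret and service names are placeholders), the redirect can be expressed on an Ingress instead of on the Service:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # answer plain HTTP with a 301 redirect to HTTPS
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
    - hosts:
        - test.example.com
      secretName: test-example-tls
  rules:
    - host: test.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-svc
                port:
                  number: 80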

Kubernetes nginx ingress proxy pass to websocket

We are running a Rails application with Unicorn and WebSockets.
We are using an AWS ELB as the ingress.
SSL terminates on the ELB, which forwards traffic to the application.
The nginx ingress routes traffic to the web app running Unicorn/Puma on port 8080.
The app works, but our WebSocket endpoint responds with 200 instead of 101. We have enabled CORS and used the required annotations in the ingress.
These are the annotations used for the ingress controller service:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:iam::xxx:server-certificate/staging
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
When we set the AWS load balancer protocol to tcp and the load balancer port to 443, it fails with an infinite redirect loop.
The following annotations are used in the ingress:
nginx.ingress.kubernetes.io/service-upstream: true
nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
nginx.ingress.kubernetes.io/cors-allow-headers: "DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type"
nginx.ingress.kubernetes.io/cors-allow-origin: "*"
nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
ingress.kubernetes.io/force-ssl-redirect: "true"
Our sample nginx configuration, which we used earlier without an ingress, is here.
How do we get WebSockets working with the nginx ingress controller behind an AWS ELB?
Is it possible to try without CORS?
Part of the handshake is that the client must send at least these headers:
Sec-WebSocket-Key
Sec-WebSocket-Version
And maybe something else. Look at https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API/Writing_WebSocket_servers#The_WebSocket_Handshake
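A quick way to test the handshake from the command line (a sketch; the host and path are placeholders) is to send those headers with curl and check whether the response is 101 Switching Protocols or a plain 200:
curl -i -N \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" \
  -H "Sec-WebSocket-Version: 13" \
  https://app.example.com/cable
If the ELB or the ingress strips the Upgrade/Connection headers, the backend will treat it as a normal HTTP request and answer 200 instead of 101.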