Exposing an HTTP/2 service (over TLS) from Kubernetes on AWS

I have an HTTP/2 service deployed on EKS (AWS Kubernetes), and I am trying to expose it to the internet.
If I expose it without TLS (with the manifest below), everything works fine; I can access it.
apiVersion: v1
kind: Service
metadata:
  name: demoapp
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 5000
  selector:
    name: demoapp
If I add TLS, I get HTTP 502 (Bad Gateway).
apiVersion: v1
kind: Service
metadata:
  name: demoapp
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: somearn
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
spec:
  type: LoadBalancer
  ports:
    - name: https
      port: 443
      targetPort: 5000
  selector:
    name: demoapp
My guess (which could be wrong) is that service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http for some reason assumes HTTP/1.1 (vs. HTTP/2) and fails when one of the sides starts sending binary (rather than textual) data.
Additional note: I am not using any Ingress controller.
One more thought: I could potentially move TLS termination into the app (instead of doing it on the load balancer) and switch to an NLB, for example. However, that adds a lot of complexity to the solution, and I would rather let the load balancer handle it.

Based on the annotations in your question: TLS should terminate at the CLB, and you should remove service.beta.kubernetes.io/aws-load-balancer-backend-protocol (it defaults to tcp). With a tcp backend the ELB forwards the decrypted bytes without assuming HTTP/1.1, so the HTTP/2 framing reaches your pod intact.
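A minimal sketch of the adjusted manifest under that advice, keeping the names and the placeholder ARN from the question:
apiVersion: v1
kind: Service
metadata:
  name: demoapp
  annotations:
    # No backend-protocol annotation: the listener becomes SSL -> TCP,
    # and the decrypted HTTP/2 stream is forwarded to the pod untouched.
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: somearn
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
spec:
  type: LoadBalancer
  ports:
    - name: https
      port: 443
      targetPort: 5000
  selector:
    name: demoapp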

Related

Kubernetes - load balance multiple services using a single load balancer

Is it possible to load balance multiple services using a single AWS load balancer? If that's not possible, I guess I could just use a Node.js proxy to forward from the httpd pod to the tomcat pod and hope it doesn't lag...
Either way, which load balancer is recommended for multi-port services? A CLB doesn't support multiple ports, and an ALB doesn't support multiple ports for a single / path. So I guess an NLB is the right thing to implement?
I'm trying to cut costs and move to k8s, but I need to know if I'm choosing the right service. Tomcat and httpd are both part of a single prod website but can't do path-based routing.
Httpd pod service:
apiVersion: v1
kind: Service
metadata:
  name: httpd-service
  labels:
    app: httpd-service
  namespace: test1-web-dev
spec:
  selector:
    app: httpd
  ports:
    # Port names must be lowercase alphanumerics and '-' (underscores are rejected).
    - name: port-80
      protocol: TCP
      port: 80
      targetPort: 80
    - name: port-443
      protocol: TCP
      port: 443
      targetPort: 443
    - name: port-1860
      protocol: TCP
      port: 1860
      targetPort: 1860
Tomcat pod service:
apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
  labels:
    app: tomcat-service
  namespace: test1-web-dev
spec:
  selector:
    app: tomcat
  ports:
    - name: port-8080
      protocol: TCP
      port: 8080
      targetPort: 8080
    - name: port-1234
      protocol: TCP
      port: 1234
      targetPort: 1234
    - name: port-8222
      protocol: TCP
      port: 8222
      targetPort: 8222
It's done like this: install an Ingress controller (e.g. ingress-nginx) in your cluster; it will be your load balancer facing the outside world.
Then configure Ingress resources to drive traffic to your services (as many as you want). That gives you a single Ingress controller (and therefore a single load balancer) per cluster.
https://kubernetes.io/docs/concepts/services-networking/ingress/
You can do this using an Ingress controller backed by a load balancer: keep the single / path and have the Ingress tell the backing load balancer to route requests based on the Host header.
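A minimal sketch of such an Ingress, assuming the httpd-service and tomcat-service from the question and two hypothetical hostnames (httpd.example.com and tomcat.example.com):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  namespace: test1-web-dev
spec:
  rules:
    # Both rules keep the single "/" path; routing happens on the Host header.
    - host: httpd.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: httpd-service
                port:
                  number: 80
    - host: tomcat.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: tomcat-service
                port:
                  number: 8080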

Kubectl: Add healthcheck using TCP targeting https, do not terminate SSL on ELB for AWS

I am working on a Kubernetes application in which we are running an nginx server. There are two issues we are currently facing.
The first is related to health checks. I would like to add health checks that check the container on port 443 over TCP, but Kubernetes is somehow doing this over SSL, causing AWS to mark the containers as out of service.
Secondly, SSL traffic is being terminated on the ELB while it still talks to the container on port 443. I have already added a self-signed certificate to nginx inside the container. We redirect from http to https internally, so anything on port 80 is of no use to us. What am I doing wrong?
service.yaml :
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: service-name
    app.kubernetes.io/instance: service-name-instance
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/component: backend
    app.kubernetes.io/managed-by: kubectl
  annotations:
    # Note that the backend talks over HTTP.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https
    # TODO: Fill in with the ARN of your certificate.
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: OUR ARN
    # Only run SSL on the port named "https" below.
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "TCP"
    service.beta.kubernetes.io/do-loadbalancer-redirect-http-to-https: "true"
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
  name: service-name
spec:
  selector:
    app: service-name
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
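Two things stand out in the annotations above: the do-loadbalancer-* keys are DigitalOcean annotations that the AWS cloud provider ignores, and aws-load-balancer-ssl-ports expects port names or numbers (e.g. "https" or "443"), not a protocol. As a hedged sketch, worth verifying against the in-tree AWS cloud provider documentation, the annotation block for plain TCP passthrough (TLS terminating in the nginx container, health checks staying plain TCP) could look like:
metadata:
  name: service-name
  annotations:
    # With a tcp backend protocol and no ACM certificate attached, the
    # ELB forwards bytes unchanged, so the self-signed certificate in
    # nginx terminates TLS and the health check is a plain TCP connect.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp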

Can't access Grafana (for Prometheus) via AWS ELB on Kubernetes

We have a cluster of several nodes, so I can't use a NodePort and just go to my node IP (which is what I've done for testing Prometheus).
I did a helm install stable/prometheus and stable/grafana in the "monitoring" namespace.
Everything looks okay so far.
Then I'm trying to create an LB service to access Grafana. It gets created, and I can see the CNAME pointing to the A record for the ELB at AWS, but when accessing the Grafana URL, nothing happens: no HTTP error, no problem page, nothing.
Here's the service-elb.yaml:
apiVersion: v1
kind: Service
metadata:
  name: grafana-lb
  namespace: monitoring
  labels:
    app: grafana
  annotations:
    dns.alpha.kubernetes.io/external: grafana-testing.country.ourdomain
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:xxxxxx
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: '443'
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
spec:
  selector:
    app: grafana
    tier: frontend
  type: LoadBalancer
  ports:
    - name: https
      port: 443
      targetPort: 80
    - name: http
      port: 80
      targetPort: 3000
  loadBalancerSourceRanges:
    - somerange
    - someotherrange
    - etc etc
BTW, I got a permissions error regarding the serviceaccount if I didn't create the chart with --set rbac.create=false.
I recently used an nginx proxy_pass for Kibana and also used an LB service similar to this one with no issue. But I'm missing something here and can't find out what it is yet.
Any help will be much appreciated. I'll update if I make it work.
Solved: I had to remove the "tier" selector and just use a spec like this:
spec:
  selector:
    app: grafana
  type: LoadBalancer
  ports:
    - name: http
      port: 3000

Service (LoadBalancer) port not working on AWS

I have a LoadBalancer service on a k8s deployment on AWS (made via kops).
The service definition is as follows:
apiVersion: v1
kind: Service
metadata:
  name: ui
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <certificate_id>
spec:
  ports:
    - name: http
      port: 80
      targetPort: ui-port
      protocol: TCP
    - name: https
      port: 443
      targetPort: ui-port
      protocol: TCP
  selector:
    els-pod: ui
  type: LoadBalancer
Here is the respective deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ui-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        els-pod: ui
    spec:
      containers:
        - image: <my_ecr_registry>/<my_image>:latest
          name: ui
          ports:
            - name: ui-port
              containerPort: 80
      restartPolicy: Always
I know that <my_image> exposes port 80.
I have also assigned an alias to the ELB that gets deployed, say my-k8s.mydomain.org.
Here is the issue:
https://my-k8s.mydomain.org works just fine.
http://my-k8s.mydomain.org returns an empty page (when accessing it behind a squid proxy, I get a zero-sized reply error message).
Why am I unable to access the service via port 80?
Edit: what I have just found is that the certificate annotation on the service also assigns the certificate to port 80 on the ELB.
Could that be the issue? Is there a way around this?
I just needed to add the following annotation to the service definition:
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
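For context, a minimal sketch of the resulting metadata block (certificate placeholder kept from the question); limiting ssl-ports to 443 means the ELB attaches the ACM certificate only to the HTTPS listener and leaves port 80 as plain TCP:
apiVersion: v1
kind: Service
metadata:
  name: ui
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <certificate_id>
    # Attach the certificate only to port 443; port 80 stays unencrypted.
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"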

Multiple IPs for a single container

I'm not sure if this is the preferred/correct way of setting up Kubernetes, but I have two websites, "x.com" and "y.com", each with its own separate IP. Currently, they are running off separate EC2 instances, but I'm in the process of moving our architecture to Docker/Kubernetes on AWS. What I'd like to do is have a single nginx container that hands off the requests to the appropriate backend services. However, I'm currently stuck trying to figure out how to point two IPs at the same container.
My k8s setup is like so:
Replication controller/pod for x.com
Replication controller/pod for y.com
Service for x.com
Service for y.com
Replication controller for nginx, specifying a single replica
Service for nginx specifying that I need ports 80 and 443.
Is there a way for me to specify that I want two IPs pointing to the single nginx container, or is there a preferred k8s way of solving this problem?
You could use Ingress routing for this:
http://kubernetes.io/docs/user-guide/ingress/
http://blog.kubernetes.io/2016/03/Kubernetes-1.2-and-simplifying-advanced-networking-with-Ingress.html
https://github.com/nginxinc/kubernetes-ingress
Alternatively, without Ingress, you could set up services of type LoadBalancer and then create CNAMEs to the ELBs matching the domains you want to route.
I ended up using two separate services/load balancers that point to the same nginx pod but use different ports.
apiVersion: v1
kind: Service
metadata:
  name: nginx-service1
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 880
      protocol: TCP
    - name: https
      port: 443
      targetPort: 8443
      protocol: TCP
  selector:
    app: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service2
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 980
      protocol: TCP
    - name: https
      port: 443
      targetPort: 9443
      protocol: TCP
  selector:
    app: nginx