Multiple IPs for a single container - amazon-web-services

I'm not sure if this is the preferred/correct way of setting up Kubernetes, but I have two websites, "x.com" and "y.com", each with its own separate IP. Currently they are running off separate EC2 instances, but I'm in the process of moving our architecture to Docker/Kubernetes on AWS. What I'd like to do is have a single nginx container that hands off requests to the appropriate backend services. However, I'm currently stuck trying to figure out how to point two IPs at the same container.
My k8s setup is like so:
Replication controller/pod for x.com
Replication controller/pod for y.com
Service for x.com
Service for y.com
Replication controller for nginx, specifying a single replica
Service for nginx specifying that I need ports 80 and 443.
Is there a way for me to specify that I want two IPs pointing to the single nginx container, or is there a preferred k8s way of solving this problem?

You could use Ingress routing for this:
http://kubernetes.io/docs/user-guide/ingress/
http://blog.kubernetes.io/2016/03/Kubernetes-1.2-and-simplifying-advanced-networking-with-Ingress.html
https://github.com/nginxinc/kubernetes-ingress
Alternatively, if you don't want to use Ingress, you could set up Services of type LoadBalancer and then create CNAMEs pointing the domains you want to route at the corresponding ELBs.
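For reference, here is a minimal sketch of what host-based Ingress routing for the two sites could look like with the current networking.k8s.io/v1 API (the Service names x-service and y-service are placeholders for whatever your x.com and y.com Services are actually called):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: two-sites            # illustrative name
spec:
  rules:
  - host: x.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: x-service  # placeholder for the x.com Service
            port:
              number: 80
  - host: y.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: y-service  # placeholder for the y.com Service
            port:
              number: 80

The Ingress controller itself is exposed through a single load balancer, and requests are routed to the right Service based on the Host header.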

I ended up using two separate services/load balancers that point to the same nginx pod but use different target ports.
apiVersion: v1
kind: Service
metadata:
  name: nginx-service1
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 880
    protocol: TCP
  - name: https
    port: 443
    targetPort: 8443
    protocol: TCP
  selector:
    app: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service2
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 980
    protocol: TCP
  - name: https
    port: 443
    targetPort: 9443
    protocol: TCP
  selector:
    app: nginx
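Note that this only works because nginx itself tells the two sites apart by listen port. A rough sketch of the matching nginx configuration, shipped as a ConfigMap (the ConfigMap name and the backend upstreams are illustrative, and the HTTPS server blocks on 8443/9443 are omitted):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf                     # illustrative name
data:
  default.conf: |
    # x.com traffic arrives via nginx-service1 on targetPort 880
    server {
      listen 880;
      server_name x.com;
      location / {
        proxy_pass http://x-backend;   # illustrative name of the x.com backend Service
      }
    }
    # y.com traffic arrives via nginx-service2 on targetPort 980
    server {
      listen 980;
      server_name y.com;
      location / {
        proxy_pass http://y-backend;   # illustrative name of the y.com backend Service
      }
    }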

Related

Exposing HTTP2 service (over TLS) from Kubernetes on AWS

I have an HTTP/2 service deployed on EKS (AWS Kubernetes), and I am trying to expose it to the internet.
If I expose it without TLS (with the code below), everything works fine and I can access it.
apiVersion: v1
kind: Service
metadata:
  name: demoapp
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 5000
  selector:
    name: demoapp
If I add TLS, I get HTTP 502 (Bad Gateway).
apiVersion: v1
kind: Service
metadata:
  name: demoapp
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: somearn
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
spec:
  type: LoadBalancer
  ports:
  - name: https
    port: 443
    targetPort: 5000
  selector:
    name: demoapp
My guess (which could be wrong) is that service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http for some reason assumes HTTP/1.1 (vs HTTP/2) and barks when one of the sides starts sending binary (vs textual) data.
Additional note: I am not using any Ingress controller.
One more thought: I could potentially move TLS termination into the app (instead of doing it on the load balancer) and switch, for example, to an NLB. However, that adds a lot of complexity to the solution, and I would rather keep using the load balancer for it.
Based on the annotations in your question, TLS should terminate at the CLB, and you should remove service.beta.kubernetes.io/aws-load-balancer-backend-protocol (it defaults to tcp).
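In other words, something like the sketch below should terminate TLS at the CLB and pass plain TCP through to the pod, which keeps the HTTP/2 framing intact (somearn still stands in for your real certificate ARN):

apiVersion: v1
kind: Service
metadata:
  name: demoapp
  annotations:
    # No backend-protocol annotation: the CLB defaults to tcp towards the pod,
    # so HTTP/2 frames pass through untouched.
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: somearn
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
spec:
  type: LoadBalancer
  ports:
  - name: https
    port: 443
    targetPort: 5000
  selector:
    name: demoapp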

Kubernetes - load balance multiple services using a single load balancer

Is it possible to load balance multiple services using a single AWS load balancer? If that's not possible, I guess I could just use a Node.js proxy to forward from the httpd pod to the tomcat pod and hope it doesn't lag...
Either way, which load balancer is recommended for multi-port services? CLB doesn't support multiple ports, and ALB doesn't support multiple ports for a single / path. So I guess NLB is the right thing to implement?
I'm trying to cut costs and move to k8s, but I need to know if I'm choosing the right service. Tomcat and httpd are both part of a single prod website but can't do path-based routing.
Httpd pod service:
apiVersion: v1
kind: Service
metadata:
  name: httpd-service
  labels:
    app: httpd-service
  namespace: test1-web-dev
spec:
  selector:
    app: httpd
  ports:
  - name: port_80
    protocol: TCP
    port: 80
    targetPort: 80
  - name: port_443
    protocol: TCP
    port: 443
    targetPort: 443
  - name: port_1860
    protocol: TCP
    port: 1860
    targetPort: 1860
Tomcat pod service:
apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
  labels:
    app: tomcat-service
  namespace: test1-web-dev
spec:
  selector:
    app: tomcat
  ports:
  - name: port_8080
    protocol: TCP
    port: 8080
    targetPort: 8080
  - name: port_1234
    protocol: TCP
    port: 1234
    targetPort: 1234
  - name: port_8222
    protocol: TCP
    port: 8222
    targetPort: 8222
It's done like this: install an Ingress controller (e.g. ingress-nginx) in your cluster; it will be your load balancer facing the outside world.
Then configure Ingress resource(s) to drive traffic to your services (as many as you want). That way you have a single Ingress controller (which means a single load balancer) per cluster.
https://kubernetes.io/docs/concepts/services-networking/ingress/
You can do this using an Ingress controller backed by a load balancer. With a single / path, you can have the Ingress tell the backing load balancer to route requests based on the Host header, as sketched below.
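A rough sketch of such a host-based Ingress, assuming ingress-nginx is installed and using the two Services above (the hostnames are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress                  # illustrative name
  namespace: test1-web-dev
spec:
  ingressClassName: nginx            # assumes ingress-nginx is the installed controller
  rules:
  - host: www.example.com            # placeholder host for the httpd front end
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: httpd-service
            port:
              number: 80
  - host: app.example.com            # placeholder host for the tomcat application
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tomcat-service
            port:
              number: 8080

Only the Ingress controller's own Service is exposed through a cloud load balancer; httpd-service and tomcat-service can stay as ClusterIP services behind it.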

Kubectl: Add healthcheck using TCP targeting https, do not terminate SSL on ELB for AWS

I am working on a Kubernetes application in which we are running an nginx server. There are two issues we are currently facing.
The first is related to health checks. I would like to add health checks that probe the container on port 443 over TCP, but Kubernetes is somehow doing that over SSL, causing AWS to mark the containers as out of service.
Secondly, SSL traffic is getting terminated on the ELB while it still talks to the container on port 443. I have already added a self-signed certificate to nginx inside the container. We redirect from HTTP to HTTPS internally, so anything on port 80 is of no use to us. What am I doing wrong?
service.yaml:
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: service-name
    app.kubernetes.io/instance: service-name-instance
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/component: backend
    app.kubernetes.io/managed-by: kubectl
  annotations:
    # Note that the backend talks over HTTP.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https
    # TODO: Fill in with the ARN of your certificate.
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: OUR ARN
    # Only run SSL on the port named "https" below.
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "TCP"
    service.beta.kubernetes.io/do-loadbalancer-redirect-http-to-https: "true"
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
  name: service-name
spec:
  selector:
    app: service-name
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443

AWS Kubernetes: Selecting SSL Certificate on AWS Load Balancer

I'm trying to configure SSL for an AWS load balancer for my AWS EKS cluster. The load balancer is proxying to a Traefik instance running on my cluster. This works fine over HTTP.
Then I created my certificate in AWS Certificate Manager, copied the ARN and followed this part of the documentation: Services - Kubernetes
But the certificate is not linked to the listeners in the AWS load balancer. I can't find further documentation or a working example on the web. Can anyone point me to one?
The LoadBalancer configuration looks like this:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"traefik-ingress-service","namespace":"kube-system"},"spec":{"ports":[{"name":"web","port":80,"targetPort":80},{"name":"admin","port":8080,"targetPort":8080},{"name":"secure","port":443,"targetPort":443}],"selector":{"k8s-app":"traefik-ingress-lb"},"type":"LoadBalancer"}}
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-north-1:000000000:certificate/e386a77d-26d9-4608-826b-b2b3a5d1ec47
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
  creationTimestamp: 2019-01-14T14:33:17Z
  name: traefik-ingress-service
  namespace: kube-system
  resourceVersion: "10172130"
  selfLink: /api/v1/namespaces/kube-system/services/traefik-ingress-service
  uid: e386a77d-26d9-4608-826b-b2b3a5d1ec47
spec:
  clusterIP: 10.100.115.166
  externalTrafficPolicy: Cluster
  ports:
  - name: web
    port: 80
    protocol: TCP
    targetPort: 80
  - name: admin
    port: 8080
    protocol: TCP
    targetPort: 8080
  - name: secure
    port: 443
    protocol: TCP
    targetPort: 80
  selector:
    k8s-app: traefik-ingress-lb
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - hostname: e386a77d-26d9-4608-826b-b2b3a5d1ec47.eu-north-1.elb.amazonaws.com
Kind Regards and looking forward to your answers.
I had a similar issue since I was using EKS v1.14 (with nginx-ingress-controller) and a Network Load Balancer; according to Kubernetes, this has been possible since Kubernetes v1.15 (GitHub issue), and since 10 March 2020 Amazon EKS supports Kubernetes version 1.15.
So if it's still relevant, read more about it here: How do I terminate HTTPS traffic on Amazon EKS workloads with ACM?
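For anyone looking for the general shape of that setup, here is a hedged sketch of the Service annotations for terminating TLS on an NLB with an ACM certificate; the ARN, names, and ports are placeholders, and the linked AWS article is the authoritative reference:

apiVersion: v1
kind: Service
metadata:
  name: my-tls-service                  # illustrative name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:region:account:certificate/id   # placeholder ARN
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
spec:
  type: LoadBalancer
  ports:
  - name: https
    port: 443
    targetPort: 80                      # plain-HTTP port of the backend pods
  selector:
    app: my-app                         # illustrative selector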
I ran into the same problem and discovered that the issue was that the certificate type that I chose (ECDSA 384-bit) wasn't compatible with the Classic Load Balancer (but was supported by the new Application Load Balancer). When I switched to an RSA certificate it worked correctly.

Expose internal IP so it can be accessed from internet

I just deployed nginx on a K8s node in a cluster; the master and worker communicate using internal IP addresses.
I can curl http://worker_ip:8080 (nginx) from the internal network, but how can I make it accessible from the external/internet network?
Or should I use a public IP as my node host?
Update the Service type to NodePort and grab the nodePort that is assigned to the Service.
You should then be able to access nginx at host:nodeport.
See below for reference:
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    protocol: TCP
    name: https
  selector:
    run: my-nginx
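If you would rather not look up the randomly assigned port each time, you can also pin the node port explicitly. A minimal sketch, assuming the same my-nginx labels; 30080 is an arbitrary value in the default 30000-32767 NodePort range:

apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 80
    nodePort: 30080   # fixed port opened on every node (default allowed range 30000-32767)
    protocol: TCP
    name: http
  selector:
    run: my-nginx

With this in place, nginx is reachable at http://<node_ip>:30080 from anywhere that can reach the node, provided the node's security group allows inbound traffic on that port.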