Kubernetes: How to set up TLS termination in the Load Balancer on AWS using the Nginx Ingress Controller

In the documentation of Nginx Ingress for AWS it says:
By default, TLS is terminated in the ingress controller. But it is also possible to terminate TLS in the Load Balancer.
Link: https://kubernetes.github.io/ingress-nginx/deploy/#tls-termination-in-aws-load-balancer-nlb
So I follow the instructions: set the AWS ACM certificate, set the VPC CIDR, and deploy.
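For reference, the pieces to edit in deploy-tls-termination.yaml look roughly like this (a sketch, not the full manifest; the certificate ARN and CIDR below are placeholders, and the linked docs have the authoritative annotation names):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    # ACM certificate used by the NLB to terminate TLS (placeholder ARN)
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:XXXXXXXXXXXX:certificate/XXXXXXXX
    # Terminate TLS only on the listener named "https"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    # The NLB talks plain HTTP to the controller pods
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
plus, in the ingress-nginx-controller ConfigMap, proxy-real-ip-cidr set to the VPC CIDR and use-forwarded-headers set to "true".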
Then I check the ingress-nginx Services:
kubectl get service --namespace ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.100.124.56 adba41948av49484z55137c374e1e17d-09af54e014df8676.elb.us-east-1.amazonaws.com 80:31985/TCP,443:32330/TCP 17h
ingress-nginx-controller-admission ClusterIP 10.100.175.52 <none> 443/TCP 17h
In the AWS console, the Load Balancer has the necessary certificate and all seems fine.
Next, I create an Ingress rule and a Service of type ClusterIP.
Service:
apiVersion: v1
kind: Service
metadata:
  name: test-app-service
spec:
  selector:
    name: test-app-pod
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  type: ClusterIP
Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-app-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: foobar.com # forwards to LB
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: test-app-service
                port:
                  number: 80
Check the Ingress:
kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
test-app-ingress nginx foobar.com adba41948av49484z55137c374e1e17d-09af54e014df8676.elb.us-east-1.amazonaws.com 80 29m
So I am just stuck here.
When I go to http://foobar.com -- it works perfectly fine.
But when I go to https://foobar.com - it says 'Could not resolve host: foobar.com'
And I would expect that when I go to https://foobar.com, TLS is terminated on the LB and the traffic is sent on to the service.
I have also found an article that describes the same setup, and the comments there ask the same questions I have, so I am not the only one :D : https://habeeb-umo.medium.com/using-nginx-ingress-in-eks-with-tls-termination-in-a-network-load-balancer-1783fc92935 (I followed those instructions as well, with no luck either).
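Since foobar.com is only a placeholder with no real DNS record, one way to test the chain without DNS (my own workaround, not something from the docs) is to point curl at the NLB hostname directly while keeping foobar.com as the Host header and SNI:
# Send the requests to the NLB hostname, but keep foobar.com as Host/SNI
curl -v --connect-to foobar.com:80:adba41948av49484z55137c374e1e17d-09af54e014df8676.elb.us-east-1.amazonaws.com:80 http://foobar.com/
curl -v --connect-to foobar.com:443:adba41948av49484z55137c374e1e17d-09af54e014df8676.elb.us-east-1.amazonaws.com:443 https://foobar.com/
If the second command completes the TLS handshake with the ACM certificate and returns a response from the backend, TLS termination on the NLB itself is working.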
UPDATE:
as per @mdaniel:
When I do curl -v http://foobar.com and curl -v https://foobar.com, both say Could not resolve host: foobar.com:
http:
* Could not resolve host: foobar.com
* Closing connection 0
curl: (6) Could not resolve host: foobar.com
https:
* Could not resolve host: foobar.com
* Closing connection 0
curl: (6) Could not resolve host: foobar.com
And in the browser when I go to http://foobar.com - it opens the page OK, BUT when I refresh the page it shows 'This site can’t be reached'.
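The 'Could not resolve host' error itself is plain DNS: foobar.com would need a record (for example a CNAME) pointing at the NLB hostname shown by kubectl get ingress. A quick check of what, if anything, the names resolve to:
# Does foobar.com have any DNS record at all?
dig +short foobar.com
# For comparison, the NLB hostname resolves to the load balancer's IPs
dig +short adba41948av49484z55137c374e1e17d-09af54e014df8676.elb.us-east-1.amazonaws.com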
UPDATE2:
I think I have found the issue.
I used an httpd container inside the pod and exposed port 8080:
spec:
  containers:
    - name: some-app
      image: httpd:2.4.54
      ports:
        - containerPort: 8080
So when I do port-forward
kubectl port-forward test-app-deployment-f59984d85-qckr9 8081:8080
The first GET request to http://127.0.0.1:8081 is fine, but the next one fails:
Forwarding from 127.0.0.1:8081 -> 8080
Forwarding from [::1]:8081 -> 8080
Handling connection for 8081
E1124 11:48:43.466682 94768 portforward.go:406] an error occurred forwarding 8081 -> 8080: error forwarding port 8080 to pod d79172ed802e00f93a834aab7b89a0da053dba00ad327d71fff85f582da9819e, uid : exit status 1: 2022/11/24 10:48:43 socat[15820] E connect(5, AF=2 127.0.0.1:8080, 16): Connection refused
So I changed containerPort to 80 and it helped:
spec:
  containers:
    - name: some-app
      image: httpd:2.4.54
      ports:
        - containerPort: 80 # changed port to 80
Run port forwarding: kubectl port-forward test-app-deployment-f59984d85-qckr9 8081:80
Make 3 GET requests http://127.0.0.1:8081
Forwarding from 127.0.0.1:8081 -> 80
Forwarding from [::1]:8081 -> 80
Handling connection for 8081
Handling connection for 8081
Handling connection for 8081
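With the container now listening on 80 (httpd's default Listen port), the Service's targetPort has to match as well; a sketch of the adjusted ports section, assuming the Service above is otherwise unchanged:
spec:
  selector:
    name: test-app-pod
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80 # must match the port httpd actually listens on
  type: ClusterIP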

Related

How to set up a Firewall for GKE

I can't use the external IP of the GKE service. I deployed it successfully via Jenkins, and below is the output when I run kubectl get service:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello LoadBalancer 10.92.14.31 34.170.30.56 8080:31110/TCP 2d21h
I checked my deployment.yaml and I think there is no problem with it; below is the file:
spec:
  containers:
    - name: hello
      image: azmassage/hello:latest
      imagePullPolicy: Always
      ports:
        - containerPort: 8080
          name: hello
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  ports:
    - protocol: TCP
      port: 8080
      nodePort: 31110
  selector:
    app: hello
    tier: hello
  type: LoadBalancer
I think the problem is with firewalls. Even after I create a firewall rule, I can't connect and use it; below is my test:
admin_@cloudshell:~$ curl http://34.170.30.56:8080
curl: (7) Failed to connect to 34.170.30.56 port 8080: Connection refused
I'm not sure, but maybe it makes sense to open port 31110 on the firewall and check it with curl?
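If you want to try that, a sketch with gcloud (the rule name and network are assumptions, and <NODE_EXTERNAL_IP> is a placeholder for one of your nodes' external IPs):
# Allow the NodePort through the VPC firewall
gcloud compute firewall-rules create allow-nodeport-31110 \
    --network=default \
    --allow=tcp:31110 \
    --source-ranges=0.0.0.0/0
# Then test against a node's external IP rather than the LoadBalancer IP
curl http://<NODE_EXTERNAL_IP>:31110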

Upgrade classic load balancer to network load balancer

I am having trouble upgrading our CLB to an NLB. I did a manual upgrade via the wizard in the console, but the connectivity wouldn't work. This upgrade is needed so we can use static IPs in the load balancer. I think it needs to be upgraded through Kubernetes, but my attempts failed.
What I (think I) understand about this setup is that this load balancer was set up using Helm. What I also understand is that the ingress (controller) is responsible for redirecting HTTP requests to HTTPS, and that this LB is working on layer 4.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.30.0
    component: controller
    heritage: Tiller
    release: nginx-ingress-external
  name: nginx-ingress-external-controller
  namespace: kube-system
  selfLink: /api/v1/namespaces/kube-system/services/nginx-ingress-external-controller
spec:
  clusterIP: 172.20.41.16
  externalTrafficPolicy: Cluster
  ports:
    - name: http
      nodePort: 30854
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      nodePort: 30621
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app: nginx-ingress
    component: controller
    release: nginx-ingress-external
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
      - hostname: xxx.region.elb.amazonaws.com
How would I be able to perform the upgrade by modifying this configuration file?
As @Jonas pointed out in the comments section, creating a new LoadBalancer Service with the same selector as the existing one is probably the fastest and easiest method. As a result we will have two LoadBalancer Services using the same ingress-controller.
You can see in the following snippet that I have two Services (ingress-nginx-1-controller and ingress-nginx-2-controller) with exactly the same endpoint:
$ kubectl get pod -o wide ingress-nginx-1-controller-5856bddb98-hb865
NAME READY STATUS RESTARTS AGE IP
ingress-nginx-1-controller-5856bddb98-hb865 1/1 Running 0 55m 10.36.2.8
$ kubectl get svc ingress-nginx-1-controller ingress-nginx-2-controller
NAME TYPE CLUSTER-IP EXTERNAL-IP
ingress-nginx-1-controller LoadBalancer 10.40.15.230 <PUBLIC_IP>
ingress-nginx-2-controller LoadBalancer 10.40.11.221 <PUBLIC_IP>
$ kubectl get endpoints ingress-nginx-1-controller ingress-nginx-2-controller
NAME ENDPOINTS AGE
ingress-nginx-1-controller 10.36.2.8:443,10.36.2.8:80 39m
ingress-nginx-2-controller 10.36.2.8:443,10.36.2.8:80 11m
Additionally, to avoid downtime, we can first change the DNS records to point at the new LoadBalancer, and after the propagation time we can safely delete the old LoadBalancer Service.
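A sketch of what the second Service could look like on AWS, reusing the selector from the manifest above and requesting an NLB via the aws-load-balancer-type annotation (the Service name here is made up; adjust fields to match your Helm values):
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-external-controller-nlb # hypothetical name
  namespace: kube-system
  annotations:
    # Ask the AWS cloud provider for a Network Load Balancer instead of a Classic ELB
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  selector:
    app: nginx-ingress
    component: controller
    release: nginx-ingress-external
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https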

How to deploy a Kubernetes service using NodePort on Amazon AWS?

I have created a cluster on AWS EC2 using kops consisting of a master node and two worker nodes, all with public IPv4 assigned.
Now, I want to create a deployment with a service using NodePort to expose the application to the public.
After having created the service, I retrieve the following information, showing that it correctly identified my three pods:
nlykkei:~/projects/k8s-examples$ kubectl describe svc hello-svc
Name: hello-svc
Namespace: default
Labels: app=hello
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"hello"},"name":"hello-svc","namespace":"default"},"spec"...
Selector: app=hello-world
Type: NodePort
IP: 100.69.62.27
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30001/TCP
Endpoints: 100.96.1.5:8080,100.96.2.3:8080,100.96.2.4:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
However, when I try to visit any of my public IPv4 addresses on port 30001, I get no response from the server. I have already created a Security Group allowing all ingress traffic to port 30001 for all of the instances.
Everything works with Docker Desktop for Mac, and here I notice the following service field not present in the output above:
LoadBalancer Ingress: localhost
I've already studied https://kubernetes.io/docs/concepts/services-networking/service/, and think that NodePort should serve my needs?
Any help is appreciated!
So you want to have a service that can be accessed from the public internet. In order to achieve this I would recommend creating a ClusterIP Service and then an Ingress for that Service. So, assuming you have the deployment hello-world serving at 8081, you will then have the following two objects:
Service:
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  labels:
    app: hello-world
spec:
  ports:
    - name: service
      port: 8081 # or whatever you want
      protocol: TCP
      targetPort: 8080 # here goes the opened port in your pods
  selector:
    app: hello-world
  type: ClusterIP
Ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  labels:
    app: hello-world
  name: hello-world
spec:
  rules:
    - host: hello-world.mycutedomainname.com
      http:
        paths:
          - backend:
              serviceName: hello-world
              servicePort: 8081 # or whatever you have set for the service port
            path: /
Note: the name field in the service's port is optional.
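For reference, the networking.k8s.io/v1beta1 Ingress API was removed in Kubernetes 1.22, so on newer clusters roughly the same object would be written like this (pathType and the nested service/port backend are required in v1):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  labels:
    app: hello-world
  name: hello-world
spec:
  rules:
    - host: hello-world.mycutedomainname.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-world
                port:
                  number: 8081 # the service port from above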

Kubectl: Add healthcheck using TCP targeting https, do not terminate SSL on ELB for AWS

I am working on a Kubernetes application in which we are running an nginx server. There are 2 issues we are currently facing.
One is related to healthchecks. I would like to add healthchecks that check the container on port 443 over TCP, but Kubernetes is somehow doing that over SSL, causing the containers to be marked out of service by AWS.
Secondly, SSL traffic is getting terminated on the ELB, while still talking to the container on port 443. I have already added a self-signed certificate inside nginx on the container. We redirect from http to https internally, so anything on port 80 is of no use to us. What am I doing wrong?
service.yaml :
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: service-name
    app.kubernetes.io/instance: service-name-instance
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/component: backend
    app.kubernetes.io/managed-by: kubectl
  annotations:
    # Note that the backend talks over HTTP.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https
    # TODO: Fill in with the ARN of your certificate.
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: OUR ARN
    # Only run SSL on the port named "https" below.
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "TCP"
    service.beta.kubernetes.io/do-loadbalancer-redirect-http-to-https: "true"
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
  name: service-name
spec:
  selector:
    app: service-name
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
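Two things stand out in this manifest (my reading, not a confirmed answer): the do-loadbalancer-* annotations belong to DigitalOcean's cloud controller and are ignored on AWS, and setting aws-load-balancer-ssl-cert together with aws-load-balancer-ssl-ports is exactly what makes the ELB terminate TLS. For plain TCP passthrough, so that nginx inside the container keeps terminating TLS on 443 and the ELB health check stays at the TCP level, the annotations would look roughly like this:
annotations:
  # No ssl-cert / ssl-ports annotations here: the ELB forwards raw TCP on 443
  # and the container's nginx terminates TLS itself.
  service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"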

AWS EKS: Service(LoadBalancer) running but not responding to requests

I am using AWS EKS.
I have launched my Django app with the help of gunicorn in a Kubernetes cluster.
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: api
  labels:
    app: api
    type: web
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: api
        type: web
    spec:
      containers:
        - name: vogofleet
          image: xx.xx.com/api:image2
          imagePullPolicy: Always
          env:
            - name: DATABASE_HOST
              value: "test-db-2.xx.xx.xx.xx.com"
            - name: DATABASE_PASSWORD
              value: "xxxyyyxxx"
            - name: DATABASE_USER
              value: "admin"
            - name: DATABASE_PORT
              value: "5432"
            - name: DATABASE_NAME
              value: "test"
          ports:
            - containerPort: 9000
I have applied these changes and I can see my pod running in kubectl get pods
Now, I am trying to expose it via a Service object. Here is my Service object:
# service
---
apiVersion: v1
kind: Service
metadata:
  name: api
  labels:
    app: api
spec:
  ports:
    - port: 9000
      protocol: TCP
      targetPort: 9000
  selector:
    app: api
    type: web
  type: LoadBalancer
The service is also up and running. It has given me an external IP to access the service, which is the address of the load balancer. I can see that it has launched a new load balancer in the AWS console. But I am not able to access it from the browser; it says that the address didn't return any data. The ELB is showing the healthcheck on the instances as OutOfService.
There are other pods also running in the cluster. When I run printenv in those pods, here is the result,
root@consumer-9444cf7cd-4dr5z:/consumer# printenv | grep API
API_PORT_9000_TCP_ADDR=172.20.140.213
API_SERVICE_HOST=172.20.140.213
API_PORT_9000_TCP_PORT=9000
API_PORT=tcp://172.20.140.213:9000
API_PORT_9000_TCP=tcp://172.20.140.213:9000
API_PORT_9000_TCP_PROTO=tcp
API_SERVICE_PORT=9000
And I tried to check the connection to my api pod:
root@consumer-9444cf7cd-4dr5z:/consumer# telnet $API_PORT_9000_TCP_ADDR $API_PORT_9000_TCP_PORT
Trying 172.20.140.213...
telnet: Unable to connect to remote host: Connection refused
But when I do a port-forward to my localhost, I can access it there:
$ kubectl port-forward api-6d94dcb65d-br6px 9000
and check the connection,
$ nc -vz localhost 9000
found 0 associations
found 1 connections:
1: flags=82<CONNECTED,PREFERRED>
outif lo0
src ::1 port 53299
dst ::1 port 9000
rank info not available
TCP aux info available
Connection to localhost port 9000 [tcp/cslistener] succeeded!
Why am I not able to access it from other containers and from the public internet? The security groups are correct.
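One thing worth checking here (a guess, not something confirmed in the question): kubectl port-forward talks to the pod over the loopback interface, so an app bound only to 127.0.0.1 works through port-forward but refuses connections on the pod or service IP. A sketch of starting gunicorn bound to all interfaces, where the module path mysite.wsgi is a placeholder for the actual Django project:
# Hypothetical start command; replace mysite.wsgi with your actual WSGI module
gunicorn --bind 0.0.0.0:9000 mysite.wsgi:application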
I have the same problem. Here's the output of the kubectl describe service command.
kubectl describe services nginx-elb
Name: nginx-elb
Namespace: default
Labels: deploy=slido
Annotations: service.beta.kubernetes.io/aws-load-balancer-internal: true
Selector: deploy=slido
Type: LoadBalancer
IP: 10.100.29.66
LoadBalancer Ingress: internal-a2d259057e6f94965bfc1f08cf86d4ce-884461987.us-west-2.elb.amazonaws.com
Port: http 80/TCP
TargetPort: 3000/TCP
NodePort: http 32582/TCP
Endpoints: 192.168.60.119:3000
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 119s service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 117s service-controller Ensured load balancer
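Worth noting about the output above: this Service carries the annotation service.beta.kubernetes.io/aws-load-balancer-internal: true, so the resulting ELB (hostname starting with internal-) is only reachable from inside the VPC. A sketch of the metadata, assuming an internet-facing load balancer is what's actually wanted:
metadata:
  annotations:
    # Drop this annotation (and recreate the Service) to get an
    # internet-facing ELB instead of an internal one.
    # service.beta.kubernetes.io/aws-load-balancer-internal: "true"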