Kubernetes Cluster on AWS with Kops - NodePort Service Unavailable

I am having difficulties accessing a NodePort service on my Kubernetes cluster.
Goal
Set up the ALB Ingress Controller so that I can use WebSockets and HTTP/2.
Set up a NodePort service, as required by that controller.
Steps taken
Previously, a Kops (version 1.6.2) cluster was created on AWS eu-west-1. The Kops addon for nginx ingress was added, as well as Kube-lego. The ELB ingress is working fine.
Set up the ALB Ingress Controller with custom AWS keys, using the IAM profile specified by that project.
Changed the service type from LoadBalancer to NodePort using kubectl replace --force.
> kubectl describe svc my-nodeport-service
Name: my-node-port-service
Namespace: default
Labels: <none>
Selector: service=my-selector
Type: NodePort
IP: 100.71.211.249
Port: <unset> 80/TCP
NodePort: <unset> 30176/TCP
Endpoints: 100.96.2.11:3000
Session Affinity: None
Events: <none>
> kubectl describe pods my-nodeport-pod
Name: my-nodeport-pod
Node: <ip>.eu-west-1.compute.internal/<ip>
Labels: service=my-selector
Status: Running
IP: 100.96.2.11
Containers:
update-center:
Port: 3000/TCP
Ready: True
Restart Count: 0
(ssh into node)
$ sudo netstat -nap | grep 30176
tcp6 0 0 :::30176 :::* LISTEN 2093/kube-proxy
Results
Curl from ALB hangs
Curl from <public ip address of all nodes>:<node port for service> hangs
Expected
Curl from both the ALB and directly to node:node-port should return 200 "OK" (the service's HTTP response to the root).
Update:
Issues created on GitHub referencing the above, with further details in some cases:
https://github.com/kubernetes/kubernetes/issues/50261
https://github.com/coreos/alb-ingress-controller/issues/169
https://github.com/kubernetes/kops/issues/3146

By default Kops does not configure the EC2 instances to allow NodePort traffic from outside.
In order for traffic from outside the cluster to reach the NodePort, you must edit the configuration of the EC2 instances that serve as your Kubernetes nodes in the EC2 Console on AWS.
Once in the EC2 console, click "Security Groups." Kops should have named the security groups it created for your cluster nodes.<your cluster name> and master.<your cluster name>.
We need to modify these security groups to forward traffic on the default NodePort range to the instances.
Click on the security group, click on its inbound rules, and add the following rule.
Port range to open on the nodes and master: 30000-32767
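If you prefer the CLI, the same rule can be added with the AWS CLI (a sketch; the security group ID is a placeholder for your nodes.<your cluster name> group):
# Allow the default NodePort range from anywhere on the nodes' security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 30000-32767 \
  --cidr 0.0.0.0/0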
This will allow anyone on the internet to access a NodePort on your cluster, so make sure you want these exposed.
Alternatively, instead of allowing it from any origin, you can allow it only from the security group created for the ALB by the alb-ingress-controller. However, since that group can be re-created, it will likely be necessary to modify the rule whenever the kubernetes service is modified. I suggest specifying the NodePort explicitly so it is a predetermined, known NodePort rather than a randomly assigned one.
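For example, a NodePort Service with a fixed, predetermined NodePort might look like this (a minimal sketch; the names and ports are taken from the question above):
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
spec:
  type: NodePort
  selector:
    service: my-selector
  ports:
  - port: 80          # service port
    targetPort: 3000  # container port
    nodePort: 30176   # fixed NodePort, must fall within 30000-32767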

The master's SG does not need the NodePort range opened for this to work.
Only the workers' SG needs the port range opened.

Related

Kubernetes Load Balancer on EC2 (Not EKS) [duplicate]

I've created a Kubernetes cluster with AWS EC2 instances using kubeadm, but when I try to create a service of type LoadBalancer I get an EXTERNAL-IP pending status:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 123m
nginx LoadBalancer 10.107.199.170 <pending> 8080:31579/TCP 45m52s
My create command is
kubectl expose deployment nginx --port 8080 --target-port 80 --type=LoadBalancer
I'm not sure what I'm doing wrong.
What I expect to see is an EXTERNAL-IP address given for the load balancer.
Has anyone had this and successfully solved it, please?
Thanks.
You need to set up the interface between k8s and AWS, which is the AWS cloud provider controller.
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: aws
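This InitConfiguration is passed to kubeadm when the control plane is created, for example (a sketch, assuming the snippet above is saved as init-config.yaml):
kubeadm init --config init-config.yaml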
More details can be found:
https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/
https://blog.heptio.com/setting-up-the-kubernetes-aws-cloud-provider-6f0349b512bd
https://blog.scottlowe.org/2019/02/18/kubernetes-kubeadm-and-the-aws-cloud-provider/
https://itnext.io/kubernetes-part-2-a-cluster-set-up-on-aws-with-aws-cloud-provider-and-aws-loadbalancer-f02c3509f2c2
Once you finish this setup, you will not only get an AWS load balancer created for each k8s service of type LoadBalancer, but you will also be able to control many of its settings using annotations.
apiVersion: v1
kind: Service
metadata:
  name: example
  namespace: kube-system
  labels:
    run: example
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:xx-xxxx-x:xxxxxxxxx:xxxxxxx/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxx # replace this value
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  type: LoadBalancer
  ports:
  - port: 443
    targetPort: 5556
    protocol: TCP
  selector:
    app: example
Different settings can be applied to a load balancer service in AWS using annotations.
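For instance, depending on your cloud provider integration version, you can request a Network Load Balancer instead of the default Classic ELB with an annotation (a sketch):
metadata:
  annotations:
    # ask the AWS cloud provider for an NLB instead of a Classic ELB
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"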
To create a k8s cluster on AWS using EC2, you need to take care of some configuration to make it work as expected,
and that is why your service is not exposed with an external IP.
You need to get the public IP of the EC2 instance on which your cluster deployed the nginx pod, and then edit the nginx service to add an external IP:
kubectl edit service nginx
In the editor that opens, add the external IP:
type: LoadBalancer
externalIPs:
- 1.2.3.4
where 1.2.3.4 is the public IP of the EC2 instance.
Then make sure your security group allows inbound traffic on your port (31579).
Now you are ready to use the k8s service; from any browser, open 1.2.3.4:31579.
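The same externalIPs change can also be applied without opening an editor, using kubectl patch (a sketch; 1.2.3.4 again stands in for the instance's public IP):
kubectl patch service nginx -p '{"spec":{"externalIPs":["1.2.3.4"]}}'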

How does GCP internal load balancer + GKE service work? (It works, but I do not know why)

E.g., an Istio service:
istio-ingressgateway LoadBalancer 10.103.19.83 10.160.32.41 15021:30943/TCP,80:32609/TCP,443:30341/TCP,3306:30682/TCP,15443:30302/TCP
This resulted in a TCP internal load balancer. The front end is ports 15021, 80, 443, 3306, and 15443.
The backend is basically the instance group of the cluster.
How does the load balancer know that 443 at the front end will forward to 30341 at the backend? As far as I know, a TCP load balancer just does port forwarding. How/where does the magic happen?
The LoadBalancer Service type is an extension of the NodePort type, which is an extension of the ClusterIP type. A nodePort just opens up a port in the range 30000-32767 on each worker node and uses a label selector to identify which Pods to send the traffic to.
This means that internal clients call the Service by using the internal IP address of a node along with the TCP port specified by nodePort. The request is forwarded to one of the member Pods on the TCP port specified by the targetPort field.
Here’s an example
When a Service is created in kubernetes, a corresponding Endpoints object is created along with it. This also applies to the LoadBalancer service type.
If you create a simple nginx deployment e.g. by running:
kubectl apply -f https://k8s.io/examples/application/deployment.yaml
and then expose it as a LoadBalancer service:
kubectl expose deployment nginx-deployment --type=LoadBalancer --name=lb-nginx --port=80
apart from the service itself, you will also see the lb-nginx Endpoints object. You can inspect its details:
kubectl get ep lb-nginx -o yaml
As you can see, it keeps track of all exposed pods (being part of a Deployment in this case) so that the corresponding iptables rules, which are responsible for forwarding the traffic to a particular pod, can be kept up to date all the time, even if their number or their IPs change.
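If you are curious, you can inspect those rules yourself on a worker node (a sketch, assuming kube-proxy runs in the default iptables mode):
# run on a worker node; KUBE-NODEPORTS handles NodePort traffic, KUBE-SERVICES the ClusterIPs
sudo iptables -t nat -L KUBE-NODEPORTS -n
sudo iptables -t nat -L KUBE-SERVICES -n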
You can e.g. scale your deployment to 5 replicas:
kubectl scale deployment nginx-deployment --replicas=5
and inspect the Endpoints object again:
kubectl get ep lb-nginx -o yaml
and you will see that right after your 5 pods are up and running it immediately gets updated as well.
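If you want to observe this happening in real time, the -w flag keeps watching the object as the new replicas become ready (a sketch):
kubectl get ep lb-nginx -w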
As you can see in the subsets section of the yaml:
subsets:
- addresses:
  - ip: 10.12.0.3
    nodeName: gke-gke-default-pool-75259266-oauz
    targetRef:
      kind: Pod
      name: nginx-deployment-66b6c48dd5-dw9mt
      namespace: default
      resourceVersion: "22394113"
      uid: 8d7e1d3e-64e2-4891-b567-61ee48f61ed1
apart from the IP address of the Pod, it maintains information about the node on which it is running.
Let's go back for a moment to the Service:
kubectl get svc lb-nginx -o yaml
As you can see, the LoadBalancer service, apart from its external IP address, has its ClusterIP just like every other Service (well, almost every one, as headless Services don't have a ClusterIP):
spec:
  clusterIP: 10.16.6.236
  clusterIPs:
  - 10.16.6.236
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 31935
    port: 80
    protocol: TCP
    targetPort: 80
So, as you can imagine, this external IP is somehow mapped to the ClusterIP so that the traffic is routed further to the respective endpoints in the cluster. How exactly this mapping is done doesn't really matter, as it is handled by the cloud provider and such implementation details are not part of publicly shared knowledge. The only thing you need to know is that when your cloud provider provisions an external load balancer to satisfy the request defined in a Service of type LoadBalancer, apart from creating the external load balancer it takes care of the mapping between this external IP (and the port assigned to it) and the kubernetes Service, which has all the information needed to route the traffic further to the respective pods. In case you wonder how exactly this is done on the GCP side, i.e. the mapping/binding between the external (or internal) load balancer and the kubernetes LoadBalancer service, I'm afraid such implementation details are not publicly revealed.

How can I set up an Ingress to connect to a ClusterIP service?

Objective
I have deployed Apache Airflow on AWS' Elastic Kubernetes Service using Airflow's stable Helm chart. My goal is to create an Ingress to allow others to access the airflow webserver UI via their browser. It's worth mentioning that I am deploying on EKS using AWS Fargate. My experience with Kubernetes is somewhat limited, and I have not set up an Ingress myself before.
What I have tried to do
I am currently able to connect to the airflow webserver pod via port-forwarding (like kubectl port-forward airflow-web-pod 8080:8080). I have tried setting up the Ingress through the Helm chart (documented here). After which:
Running kubectl get ingress -n dp-airflow I got:
NAME CLASS HOSTS ADDRESS PORTS AGE
airflow-flower <none> foo.bar.com 80 3m46s
airflow-web <none> foo.bar.com 80 3m46s
Then running kubectl describe ingress airflow-web -n dp-airflow I get:
Name: airflow-web
Namespace: dp-airflow
Address:
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
foo.bar.com
/airflow airflow-web:web (<redacted_ip>:8080)
Annotations: meta.helm.sh/release-name: airflow
meta.helm.sh/release-namespace: dp-airflow
I am not sure what I need to put into the browser, so I have tried using http://foo.bar.com/airflow as well as the cluster endpoint/IP, without success.
This is what the airflow webserver service looks like:
Running kubectl get services -n dp-airflow, I get:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
airflow-web ClusterIP <redacted_ip> <none> 8080/TCP 28m
Other things I have tried
I have tried creating an Ingress without the Helm chart (I am using Terraform), like:
resource "kubernetes_ingress" "airflow_ingress" {
metadata {
name = "ingress"
}
spec {
backend {
service_name = "airflow-web"
service_port = 8080
}
rule {
http {
path {
backend {
service_name = "airflow-web"
service_port = 8080
}
path = "/airflow"
}
}
}
}
}
However, I was still not able to connect to the web UI. What are the steps that I need to take to set up an Ingress? Which address do I need to use in my browser to connect to the web UI?
I am happy to provide further details if needed.
It sounds like you have created Ingress resources. That is a good step. But for those Ingress resources to have any effect, you also need an Ingress Controller that can realize your Ingress as an actual load balancer.
In an AWS environment, you should look at the AWS Load Balancer Controller, which creates an AWS Application Load Balancer configured according to your Ingress resources.
Ingress to connect to a ClusterIP service?
First, the default load balancer is classic load balancer, but you probably want to use the newer Application Load Balancer to be used for your Ingress resources, so on your Ingress resources add this annotation:
annotations:
  kubernetes.io/ingress.class: alb
By default, your services should be of type NodePort, but as you request, it is possible to use ClusterIP services as well if, on your Ingress resource, you also add this annotation (for traffic mode):
alb.ingress.kubernetes.io/target-type: ip
See the ALB Ingress documentation for more on this.
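Putting the two annotations together, an Ingress for the airflow-web ClusterIP service from the question might look roughly like this (a sketch, not a verified manifest; it assumes the AWS Load Balancer Controller is installed, and the internet-facing scheme annotation is an extra assumption):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: airflow-web
  namespace: dp-airflow
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip   # needed for ClusterIP backends / Fargate
spec:
  rules:
  - http:
      paths:
      - path: /airflow
        pathType: Prefix
        backend:
          service:
            name: airflow-web
            port:
              number: 8080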

Connecting from K8S pod in GKE to a VM internal IP on Google Cloud Platform

We have a requirement to connect from a pod in GKE to a service running on a VM on its internal IP address.
The K8s cluster and the VM are on different networks, so we set up VPC peering between these networks.
To point to an external IP, we applied a Service without a selector, as discussed here:
https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors
The POD should connect to the internal IP of the VM through this service, the service and endpoint description is:
kubectl describe svc vm-proxy
Name: vm-proxy
Namespace: test-environment
Labels: <none>
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP: 10.59.251.146
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
Endpoints: 10.164.0.10:8080
Session Affinity: None
Events: <none>
Here the Endpoint is the internal IP of the VM, and the Service IP is allocated by K8s.
The pod simply sets up an HTTP connection to the IP of the Service, but the connection is refused (eventually a connection timeout).
The use case is pretty straightforward and documented in the k8s documentation, which gives the example of connecting to a DB running on a VM. However, it doesn't work in our case, and we are not sure whether our setup is wrong or this is simply not possible using the internal IP of a VM.
I reproduced your issue and it worked fine for me. This is what I did:
Created 2 networks (one of them (demo) on 172.16.0.0/16; the other is my default network, set on 10.132.0.0/20).
Set up VPC peering.
Created a VM in the demo network. It got assigned 172.16.0.2.
Created the service as you described (with the endpoint pointing to 172.16.0.2); see the manifest sketch below.
Curled from the pod to the service IP and got 200!
If the steps are right, but your configuration is not working, I'd like to know your network IP ranges. Both of them.
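For reference, the selector-less Service and the manually created Endpoints described above might be written roughly like this (a sketch using the names, port, and IP from the question):
apiVersion: v1
kind: Service
metadata:
  name: vm-proxy
  namespace: test-environment
spec:
  ports:
  - port: 8080
    targetPort: 8080
---
apiVersion: v1
kind: Endpoints
metadata:
  name: vm-proxy           # must match the Service name
  namespace: test-environment
subsets:
- addresses:
  - ip: 10.164.0.10        # internal IP of the VM
  ports:
  - port: 8080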