Accessing a Kubernetes service from the local host - kubectl

I created a single-node cluster. There is a NodePort service:
kubectl get all --namespace default
service/backend-org-1-substra-backend-server NodePort 10.43.81.5 <none> 8000:30068/TCP 4d23h
The node IP is:
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k3d-k3s-default-server-0 Ready control-plane,master 5d v1.24.4+k3s1 172.18.0.2 <none> K3s dev 5.15.0-1028-aws containerd://1.6.6-k3s1
From the same host, but not from inside the cluster, I can ping the 172.18.0.2 IP. Since backend-org-1-substra-backend-server is a NodePort service, shouldn't I be able to access it with curl 172.18.0.2:30068? Instead I get:
curl: (7) Failed to connect to 172.18.0.2 port 30068 after 0 ms: Connection refused
Additional information:
$ kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
$ kubectl get nodes -o yaml
...
addresses:
- address: 172.24.0.2
  type: InternalIP
- address: k3d-k3s-default-server-0
  type: Hostname
allocatable:
$ kubectl describe svc backend-org-1-substra-backend-server
Name: backend-org-1-substra-backend-server
Namespace: org-1
Labels: app.kubernetes.io/instance=backend-org-1
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=substra-backend-server
app.kubernetes.io/part-of=substra-backend
app.kubernetes.io/version=0.34.1
helm.sh/chart=substra-backend-22.3.1
skaffold.dev/run-id=394a8d19-bbc8-4a3b-b04e-08e0fff40681
Annotations: meta.helm.sh/release-name: backend-org-1
meta.helm.sh/release-namespace: org-1
Selector: app.kubernetes.io/instance=backend-org-1,app.kubernetes.io/name=substra-backend-server
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.43.68.217
IPs: 10.43.68.217
Port: http 8000/TCP
TargetPort: http/TCP
NodePort: http 31960/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Here I noticed that Endpoints shows <none>, which worries me.
I followed the doc at https://docs.substra.org/en/stable/contributing/getting-started.html, although it's a lot to ask someone to replicate the whole setup.
My point is: as far as I know, a NodePort service allows callers from outside the cluster to reach pods inside the cluster, yet neither the cluster IP nor the node IP lets me curl that service.
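An empty Endpoints list means the service's selector matches no ready pods, so the NodePort has nothing to forward to. A quick way to confirm that, reusing the namespace and selector from the describe output above (a sketch, not part of the original post):
kubectl get endpoints backend-org-1-substra-backend-server -n org-1
kubectl get pods -n org-1 -l app.kubernetes.io/instance=backend-org-1,app.kubernetes.io/name=substra-backend-server -o wide
If the second command returns no pods, or pods that are not Ready, the service cannot answer on its NodePort no matter which IP you try.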

I found that it was due to a faulty installation. Now a wget to the load balancer IP and port does get a connection.
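For anyone hitting the same symptom on k3d, where the node's InternalIP is an address on a Docker network, kubectl port-forward is a simple way to reach the service from the host without relying on the NodePort at all (a sketch using the service port from the describe output above):
kubectl port-forward -n org-1 svc/backend-org-1-substra-backend-server 8000:8000
curl localhost:8000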

Related

Istio Multicluster between Microk8s (on GCE instance) and GKE cluster

I'm trying to set up Istio 1.7 multicluster between Microk8s 1.18/stable, installed on an Ubuntu 18.04 instance in Google Compute Engine, and a GKE cluster.
Everything is OK on the GKE side, but I have a question regarding istio-ingressgateway on Microk8s.
When I inspect the services in the "istio-system" namespace of my Microk8s single-node cluster, I see that the EXTERNAL-IP of "istio-ingressgateway" is stuck in the "pending" state.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/grafana ClusterIP 10.152.183.215 <none> 3000/TCP 10m
service/istio-egressgateway ClusterIP 10.152.183.180 <none> 80/TCP,443/TCP,15443/TCP 10m
service/istio-ingressgateway LoadBalancer 10.152.183.233 <pending> 15021:32648/TCP,80:30384/TCP,443:31362/TCP,15443:30810/TCP 10m
service/istiocoredns ClusterIP 10.152.183.70 <none> 53/UDP,53/TCP 10m
service/istiod ClusterIP 10.152.183.20 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP,853/TCP 10m
service/jaeger-agent ClusterIP None <none> 5775/UDP,6831/UDP,6832/UDP 10m
service/jaeger-collector ClusterIP 10.152.183.50 <none> 14267/TCP,14268/TCP,14250/TCP 10m
service/jaeger-collector-headless ClusterIP None <none> 14250/TCP 10m
service/jaeger-query ClusterIP 10.152.183.142 <none> 16686/TCP 10m
service/kiali ClusterIP 10.152.183.135 <none> 20001/TCP 10m
service/prometheus ClusterIP 10.152.183.23 <none> 9090/TCP 10m
service/tracing ClusterIP 10.152.183.73 <none> 80/TCP 10m
service/zipkin ClusterIP 10.152.183.163 <none> 9411/TCP 10m
OK, I know that microk8s doesn't know it is installed on a VM running inside GCP, and thus cannot create a network load balancer in GCP the way it happens automatically for a Service of type LoadBalancer in GKE.
So I created an LB manually (made it similar to the LB that GKE creates) and tried to attach it to the existing "istio-ingressgateway" service.
I ran:
kubectl edit svc -n istio-system istio-ingressgateway
and tried to put the IP of this LB in, using the same syntax that I see for istio-ingressgateway in GKE:
...
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
    release: istio
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 11.22.33.44
It doesn't work:
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
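For reference, and not part of the original post: the status.loadBalancer block is owned by a controller, so manual edits to it are simply discarded. On clusters without a cloud load-balancer integration, one common alternative is MetalLB, which microk8s ships as an addon and which hands out EXTERNAL-IPs from a pool you provide; a sketch, assuming a free address range in your network:
microk8s enable metallb:10.0.0.100-10.0.0.110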
So, my questions are:
Is there a way to make Microk8s aware that it is running on a VM located in GCP, and give it the ability to create TCP LBs under "Network Services > Load Balancing"? Maybe some annotation that can be added to the YAML of the Service of type LoadBalancer?
I found some info saying that if the cloud infrastructure doesn't support automated LB creation, we can use the host IP and the NodePort of the istio-ingressgateway:
If the EXTERNAL-IP value is set, your environment has an external load balancer that you can use for the ingress gateway. If the EXTERNAL-IP value is <none> (or perpetually <pending>), your environment does not provide an external load balancer for the ingress gateway. In this case, you can access the gateway using the service's node port.
But that was not written for a multicluster setup, and for multicluster they suggest using L4 LBs:
The IP address of the istio-ingressgateway service in each cluster must be accessible from every other cluster, ideally using L4 network load balancers (NLB). Not all cloud providers support NLBs and some require special annotations to use them, so please consult your cloud provider's documentation for enabling NLBs for service object type load balancers. When deploying on platforms without NLB support, it may be necessary to modify the health checks for the load balancer to register the ingress gateway.
So: is there a way to use NodePort for an Istio multicluster setup between Microk8s (a VM in GCE) and a GKE cluster?
Thanks a lot!
Pavel
Resolved!
There was no problem using the Microk8s host IP and the NodePort value of the "tls" port from istio-ingressgateway (31732):
- name: tls
  nodePort: 31732
  port: 15443
  protocol: TCP
  targetPort: 15443
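One practical addition (an assumption on my part, not from the original answer): for the GKE cluster to reach that host IP and NodePort, the GCE firewall must allow the port on the Microk8s instance. A hypothetical rule, with the network and target tag as placeholders:
gcloud compute firewall-rules create allow-istio-tls-nodeport --network=default --allow=tcp:31732 --source-ranges=0.0.0.0/0 --target-tags=microk8s-vm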

K8s expose LoadBalancer service giving external-ip pending

I've created a Kubernetes cluster with AWS EC2 instances using kubeadm, but when I try to create a service of type LoadBalancer I get a pending EXTERNAL-IP status:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 123m
nginx LoadBalancer 10.107.199.170 <pending> 8080:31579/TCP 45m52s
My create command is
kubectl expose deployment nginx --port 8080 --target-port 80 --type=LoadBalancer
I'm not sure what I'm doing wrong.
What I expect to see is an EXTERNAL-IP address given for the load balancer.
Has anyone had this and successfully solved it, please?
Thanks.
You need to set up the interface between k8s and AWS, which is the AWS cloud provider controller.
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: aws
More details can be found:
https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/
https://blog.heptio.com/setting-up-the-kubernetes-aws-cloud-provider-6f0349b512bd
https://blog.scottlowe.org/2019/02/18/kubernetes-kubeadm-and-the-aws-cloud-provider/
https://itnext.io/kubernetes-part-2-a-cluster-set-up-on-aws-with-aws-cloud-provider-and-aws-loadbalancer-f02c3509f2c2
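Worker nodes usually need the same kubelet flag when they join; a minimal sketch, assuming the same kubeadm API version as above:
apiVersion: kubeadm.k8s.io/v1beta1
kind: JoinConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: aws
The AWS cloud provider also expects the EC2 instances and subnets to carry the kubernetes.io/cluster/<cluster-name> tag so it can discover them.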
Once you finish this setup, you will not only get an AWS LB created for each k8s Service of type LoadBalancer, but you will also be able to control many of its settings using annotations.
apiVersion: v1
kind: Service
metadata:
  name: example
  namespace: kube-system
  labels:
    run: example
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:xx-xxxx-x:xxxxxxxxx:xxxxxxx/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxx # replace this value
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  type: LoadBalancer
  ports:
  - port: 443
    targetPort: 5556
    protocol: TCP
  selector:
    app: example
Different settings can be applied to a load balancer service in AWS using annotations.
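Two commonly used examples (check the cloud provider documentation for your Kubernetes version before relying on them):
service.beta.kubernetes.io/aws-load-balancer-type: nlb          # request a network load balancer instead of a classic ELB
service.beta.kubernetes.io/aws-load-balancer-internal: "true"   # create an internal, non-internet-facing load balancer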
To create a k8s cluster on AWS using plain EC2, you need some extra configuration to make it work as expected; that's why your service is not getting an external IP.
You can take the public IP of the EC2 instance on which your cluster deployed the nginx pod and then edit the nginx service to add it as an external IP:
kubectl edit service nginx
which opens the service manifest in an editor where you can add the external IP:
type: LoadBalancer
externalIPs:
- 1.2.3.4
where 1.2.3.4 is the public IP of the EC2 instance.
Then make sure your security group allows inbound traffic on your NodePort (31579).
Now you are ready to use the k8s service from any browser: open 1.2.3.4:31579.
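If you prefer the CLI over the console, a sketch of the security group change (the group ID is a placeholder, and you should narrow the CIDR to your own address range rather than opening the port to the world):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 31579 --cidr 0.0.0.0/0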

Kubernetes Cluster-IP service not working as expected

OK, so currently I've got a Kubernetes master up and running on an AWS EC2 instance, and a single worker running on my laptop:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 34d v1.9.2
worker Ready <none> 20d v1.9.2
I have created a Deployment using the following configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hostnames
  labels:
    app: hostnames-deployment
spec:
  selector:
    matchLabels:
      app: hostnames
  replicas: 1
  template:
    metadata:
      labels:
        app: hostnames
    spec:
      containers:
      - name: hostnames
        image: k8s.gcr.io/serve_hostname
        ports:
        - containerPort: 9376
          protocol: TCP
The deployment is running:
$ kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
hostnames 1 1 1 1 1m
A single pod has been created on the worker node:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hostnames-86b6bcdfbc-v8s8l 1/1 Running 0 2m
From the worker node, I can curl the pod and get the information:
$ curl 10.244.8.5:9376
hostnames-86b6bcdfbc-v8s8l
I have created a service using the following configuration:
kind: Service
apiVersion: v1
metadata:
  name: hostnames-service
spec:
  selector:
    app: hostnames
  ports:
  - port: 80
    targetPort: 9376
The service is up and running:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hostnames-service ClusterIP 10.97.21.18 <none> 80/TCP 1m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 34d
As I understand it, the service should expose the pod cluster-wide, and I should be able to use the service IP to get the information the pod is serving from any node in the cluster.
If I curl the service from the worker node it works just as expected:
$ curl 10.97.21.18:80
hostnames-86b6bcdfbc-v8s8l
But if I try to curl the service from the master node located on the AWS EC2 instance, the request hangs and gets timed out eventually:
$ curl -v 10.97.21.18:80
* Rebuilt URL to: 10.97.21.18:80/
* Trying 10.97.21.18...
* connect to 10.97.21.18 port 80 failed: Connection timed out
* Failed to connect to 10.97.21.18 port 80: Connection timed out
* Closing connection 0
curl: (7) Failed to connect to 10.97.21.18 port 80: Connection timed out
Why can't a request from the master node reach the pod on the worker node via the ClusterIP service?
I have read quite a few articles on Kubernetes networking, as well as the official Kubernetes Services documentation, and couldn't find a solution.
The details depend on which proxy mode you are using, but conceptually it works the same way.
You are trying to connect to two different kinds of addresses: the pod IP address, which is accessible from the node, and the virtual service IP address, which is accessible from pods inside the Kubernetes cluster.
The IP address of the service is not an address on any pod or interface; it is a virtual address that is mapped to pod IP addresses based on the rules you define in the Service, and it is managed by the kube-proxy daemon, which is part of Kubernetes.
That address is intended for communication inside the cluster, so that you can reach the pods behind a service without caring about how many replicas there are or where they are actually running, because the service IP is static, unlike a pod's IP.
So the service IP address is meant to be reachable from other pods, not from the nodes.
You can read how Service virtual IPs work in the official documentation.
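To confirm the service itself is healthy, one quick test is to call it from a throwaway pod inside the cluster (a sketch, not from the original answer):
kubectl run tmp --rm -it --image=busybox --restart=Never -- wget -qO- http://hostnames-service:80
If this prints the pod's hostname, the service and its endpoints are fine, and the problem is purely about where you are curling from.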
kube-proxy is responsible for setting up the iptables rules (by default) that route cluster IPs. The Service's cluster IP should be routable from anywhere kube-proxy is running. My first guess would be that kube-proxy is not running on the master.
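A quick way to check is to list the kube-proxy pods and confirm one is scheduled on every node; the k8s-app=kube-proxy label below is what kubeadm's DaemonSet uses, so adjust it if your setup differs:
kubectl get pods -n kube-system -l k8s-app=kube-proxy -o wide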

Kubernetes Cluster on AWS with Kops - NodePort Service Unavailable

I am having difficulties accessing a NodePort service on my Kubernetes cluster.
Goal
set up the ALB Ingress Controller so that I can use WebSockets and HTTP/2
set up a NodePort service, as required by that controller
Steps taken
Previously, a Kops (version 1.6.2) cluster was created on AWS eu-west-1. The kops addon for nginx ingress was added, as well as kube-lego. ELB ingress is working fine.
Set up the ALB Ingress Controller with custom AWS keys, using the IAM profile specified by that project.
Changed the service type from LoadBalancer to NodePort using kubectl replace --force
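(As an aside, not from the original post: the same type change can usually be made in place with a patch, which avoids deleting and recreating the service.)
kubectl patch svc my-nodeport-service -p '{"spec":{"type":"NodePort"}}'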
> kubectl describe svc my-nodeport-service
Name: my-node-port-service
Namespace: default
Labels: <none>
Selector: service=my-selector
Type: NodePort
IP: 100.71.211.249
Port: <unset> 80/TCP
NodePort: <unset> 30176/TCP
Endpoints: 100.96.2.11:3000
Session Affinity: None
Events: <none>
> kubectl describe pods my-nodeport-pod
Name: my-nodeport-pod
Node: <ip>.eu-west-1.compute.internal/<ip>
Labels: service=my-selector
Status: Running
IP: 100.96.2.11
Containers:
update-center:
Port: 3000/TCP
Ready: True
Restart Count: 0
(ssh into node)
$ sudo netstat -nap | grep 30176
tcp6 0 0 :::30176 :::* LISTEN 2093/kube-proxy
Results
Curl from ALB hangs
Curl from <public ip address of all nodes>:<node port for service> hangs
Expected
Curl from both ALB and directly to the node:node-port should return 200 "Ok" (the service's http response to the root)
Update:
Issues created on github referencing above with some further details in some cases:
https://github.com/kubernetes/kubernetes/issues/50261
https://github.com/coreos/alb-ingress-controller/issues/169
https://github.com/kubernetes/kops/issues/3146
By default, Kops does not configure the EC2 instances to allow NodePort traffic from outside.
In order for traffic from outside the cluster to reach the NodePort, you must edit the security configuration of the EC2 instances that are your Kubernetes nodes, in the EC2 console on AWS.
Once in the EC2 console, click "Security Groups". Kops should have named the security groups it created for your cluster nodes.<your cluster name> and master.<your cluster name>.
We need to modify these security groups so that traffic on the default NodePort range is allowed to reach the instances.
Click on the security group, open its inbound rules, and add the following rule.
Port range to open on the nodes and master: 30000-32767
This will allow anyone on the internet to access a NodePort on your cluster, so make sure you want these exposed.
Alternatively, instead of allowing traffic from any origin, you can allow it only from the security group created for the ALB by the alb-ingress-controller. However, since that group can be re-created, you will likely have to update the rule whenever the Kubernetes service is modified. I suggest specifying the NodePort explicitly so it is a predetermined, known NodePort rather than a randomly assigned one.
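A minimal sketch of pinning the NodePort in the Service spec, reusing the port numbers from the question (the value just has to fall inside the cluster's NodePort range, 30000-32767 by default):
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 3000
    nodePort: 30176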
The master's SG does not need the NodePort range opened in order to make node-IP:node-port access work.
Only the workers' SG needs the port range opened.