How to figure out my control plane instance in EKS - amazon-web-services

From a Grafana dashboard, I can see that one of the two kube-apiservers in my EKS cluster has high API latency. The dashboard identifies the instance by its endpoint IP.
root#k8scluster:~ $ kubectl describe service/kubernetes
Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP:                172.50.0.1
Port:              https  443/TCP
TargetPort:        443/TCP
Endpoints:         10.0.99.157:443,10.0.10.188:443
Session Affinity:  None
Events:            <none>
Now, this endpoint (10.0.99.157) is the one showing high latency in Grafana. When I log in to my AWS console, I have access to the EC2 instances page, but I don't have access to the nodes in the EKS page.
From the EC2 console, I can find the two instances that are running my kube-apiserver. However, I can't figure out which one has the high latency (i.e. the instance with the endpoint IP 10.0.99.157). Is there any way to work this out from the EC2 console or using EKS commands?
Update:
I compared it with the IP addresses / secondary IP addresses of both kube-apiserver EC2 instances, but none of them match the endpoint IP 10.0.99.157.

Please note that the EKS Kubernetes control plane is managed by AWS and therefore outside of your control, so you will not be able to access the respective EC2 instances at all.
Official AWS documentation can be found here.
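If it helps to confirm where those endpoint IPs live: they belong to the cross-account elastic network interfaces that EKS places in your VPC subnets for the managed control plane, not to any EC2 instance in your account. As a sketch (assuming the AWS CLI is configured for the cluster's account and region; the --query fields are only illustrative), you can look up the ENI by its private IP:

# Find the ENI that owns the endpoint IP reported by Grafana
aws ec2 describe-network-interfaces \
  --filters Name=addresses.private-ip-address,Values=10.0.99.157 \
  --query 'NetworkInterfaces[].{Description:Description,Subnet:SubnetId,AttachedInstanceOwner:Attachment.InstanceOwnerId}'

The description typically reads something like "Amazon EKS <cluster-name>", and the attached instance is owned by an AWS-managed account rather than yours, which is why the IP never matches the primary or secondary IPs of your own instances.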

Related

AWS NLB Ingress for SMTP service in Kubernetes (EKS)

I'm trying to deploy an SMTP service into Kubernetes (EKS), and I'm having trouble with ingress. I'd prefer not to have to deploy SMTP at all, but I don't have that option at the moment. Our Kubernetes cluster uses the ingress-nginx controller, and the docs point to a way to expose TCP connections. I have TCP exposed on the controller via a ConfigMap like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-tcp
  namespace: ingress-nginx
data:
  '25': some-namespace/smtp:25
The receiving service is listening on port 25. I can verify that the k8s part is working: I've used port forwarding to forward it locally and verified with telnet that it works. I can also reach the SMTP service with telnet from a host in the VPC. I just cannot access it through the NLB. I've tried two different setups:
the ingress-nginx controller's NLB.
provisioning a separate NLB that points to the endpoint IP of the service. The target groups are healthy, and I can access the service from a host in the same VPC that's not in the cluster.
I've verified at least a few dozen times that the security groups are open to all traffic on port 25.
Does anyone have any insight into how to expose the service through the NLB?
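A detail that is easy to miss with the tcp-services approach, and which isn't shown in the question (so this is only a guess at the missing piece): the controller has to be started with the --tcp-services-configmap flag pointing at that ConfigMap, and port 25 also has to be listed on the controller's LoadBalancer Service, otherwise the NLB never gets a listener for it. A sketch, assuming the default ingress-nginx-controller Service name from the official manifests:

# controller container argument (value is namespace/name of the ConfigMap above)
--tcp-services-configmap=ingress-nginx/ingress-nginx-tcp

# extra port entry on the controller's LoadBalancer Service
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  ports:
    - name: smtp
      port: 25
      targetPort: 25
      protocol: TCP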

istio-ingressgateway LoadBalancer showing "Pending" in AWS EKS

I have installed a private EKS cluster whose attached subnets are private. My requirement is a "private EKS with Istio installation": create multiple microservices and expose them within the VPC.
To expose them within the VPC, I expected that 'istio-ingressgateway' would create an internal ALB, but it is showing "Pending":
istio-ingressgateway LoadBalancer 1xx.x0x.xx.2xx <pending>
My need is to install multiple microservices on different ports using "NodePort" and expose them via a Gateway.
Any help or views on this would be appreciated.
Thanks!
You have two options. You can use the ALB Ingress Controller, create an internal Ingress object, and add the annotation:
alb.ingress.kubernetes.io/scheme: "internal"
Or you can create a LoadBalancer Service, which will create an ELB. Add these annotations to the Service:
service.beta.kubernetes.io/aws-load-balancer-internal: Used on the service to indicate that we want an internal ELB.
service.beta.kubernetes.io/aws-load-balancer-security-groups: Used to specify the security groups to be added to ELB created. This replaces all other security groups previously assigned to the ELB.
For example,
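here is a minimal sketch for the istio-ingressgateway case (the Service name, namespace, selector, ports, and security group ID are placeholders, not values taken from the question):

kind: Service
apiVersion: v1
metadata:
  name: istio-ingressgateway-internal            # placeholder name
  namespace: istio-system
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/aws-load-balancer-security-groups: sg-0123456789abcdef0   # placeholder
spec:
  type: LoadBalancer
  selector:
    istio: ingressgateway                        # default label on the ingress gateway pods
  ports:
    - name: http2
      port: 80
      targetPort: 8080
    - name: https
      port: 443
      targetPort: 8443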
Also, you need these tags in the VPC subnets:
Key: kubernetes.io/role/internal-elb
Value: 1
For your case, take option 2.

EKS using NLB ingress and multiple services deployed in node group

So, I am very new to using EKS with NLB ingress and managing my own worker nodes with a node group (ASG).
If I create an NLB ingress for the cluster and deploy multiple services inside the node group, how does the NLB know that it has to load balance across the services separately?
Generally, when I have not used EKS and created my own k8s cluster, I have spun up one NLB per service. I'm not sure how it would work in the case of EKS with one NLB ingress for the whole cluster and multiple services inside.
Or, do I need to create multiple NLBs somehow?
Any help would be highly appreciated
when I have not used EKS and created my own k8s cluster, I have spun up one NLB per service
AWS EKS is no different on this point. For a Network Load Balancer (NLB), i.e. on the TCP/UDP level, you use a Kubernetes Service of type: LoadBalancer. But there are options, configured by annotations on the Service. The most recent feature is IP mode. See the EKS Network Load Balancing docs for more configuration alternatives.
Example:
kind: Service
apiVersion: v1
metadata:
  name: nlb-ip-svc
  annotations:
    # route traffic directly to pod IPs
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb-ip"
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: LoadBalancer
  selector:
    app: nginx
If I create an NLB ingress for the cluster and deploy multiple services inside the node group, how does the NLB know that it has to load balance across the services separately?
The load balancer targets the pods that are matched by the selector: in your Service.
The alternative is to use an Application Load Balancer (ALB), which works on the HTTP/HTTPS level using Kubernetes Ingress resources. An ALB requires an Ingress controller installed in the cluster; the controller for the ALB was recently updated, see the AWS Load Balancer Controller.
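For completeness, a rough sketch of what such an Ingress for the AWS Load Balancer Controller could look like, routing two services behind one ALB (the names, paths, and scheme below are illustrative, not from the question):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress                          # placeholder
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /service-a
            pathType: Prefix
            backend:
              service:
                name: service-a                  # placeholder Service
                port:
                  number: 80
          - path: /service-b
            pathType: Prefix
            backend:
              service:
                name: service-b                  # placeholder Service
                port:
                  number: 80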

Expose a Hazelcast cluster on AWS EKS with a load balancer

We have a Hazelcast 3.12 cluster running inside an AWS EKS kubernetes cluster.
Do you know how to expose a Hazelcast cluster with more than one pod, running inside an AWS EKS Kubernetes cluster, to clients outside the Kubernetes cluster?
The Hazelcast cluster has 6 pods and is exposed outside of the kubernetes cluster with a kubernetes "Service" of type LoadBalancer (AWS classic load balancer).
When I run a Hazelcast client from outside of the kubernetes cluster, I am able to connect to the Hazelcast cluster using the AWS load balancer. However, when I try to get some value from a Hazelcast map, the client fails with this error:
java.io.IOException: No available connection to address [172.17.251.81]:5701 at com.hazelcast.client.spi.impl.SmartClientInvocationService.getOrTriggerConnect(SmartClientInvocationService.java:75
The error mentions the IP address 172.17.251.81. This is an internal kubernetes IP for a Hazelcast pod that I cannot connect to from outside the kubernetes cluster. I don't know why the client is trying to connect to this IP address instead of the Load Balancer public IP address.
On the other hand, when I scale the hazelcast cluster from 6 to 1 pod, I am able to connect and get the map value without any problem.
In case you want to review the Kubernetes LoadBalancer Service configuration:
kind: Service
apiVersion: v1
metadata:
  name: hazelcast-elb
  labels:
    app: hazelcast
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
spec:
  ports:
    - name: tcp-hazelcast-elb
      port: 443
      targetPort: 5701
  selector:
    app: hazelcast
  type: LoadBalancer
If you expose all Pods with one LoadBalancer service, then you need to use Hazelcast Unisocket Client.
hazelcast-client:
  smart-routing: false
If you want to use the default Smart Client (which means better performance), then you need to expose each Pod with a separate service, because each Pod needs to be accessible from outside the Kubernetes cluster.
Read more in the blog post: How to Set Up Your Own On-Premises Hazelcast on Kubernetes.
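As a rough sketch of the per-pod approach (assuming Hazelcast is deployed as a StatefulSet named hazelcast; you would create one such Service per member, i.e. hazelcast-0, hazelcast-1, and so on):

kind: Service
apiVersion: v1
metadata:
  name: hazelcast-0                              # one Service per member
spec:
  type: LoadBalancer
  selector:
    # selects exactly one pod of the assumed "hazelcast" StatefulSet
    statefulset.kubernetes.io/pod-name: hazelcast-0
  ports:
    - name: hazelcast
      port: 5701
      targetPort: 5701

Note that each member also needs to advertise an externally reachable address (Hazelcast's public address configuration) for the smart client to connect to it directly.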

AWS cross-cluster communication with Kubernetes pods

I have a question about cross-cluster communication between pods on AWS.
I am using Kubernetes to deploy clusters on AWS. Both clusters are in the same region and AZ. Each cluster is deployed in its own VPC with non-overlapping subnets. I've successfully created VPC peering to establish communication between the two VPCs, and minions (instances) in each VPC can ping each other through their private IPs.
The question is: Kubernetes pods from one cluster (VPC) cannot ping pods in the other cluster through their internal IPs. I see traffic leaving the pod and the minion, but I don't see it on the other VPC.
Here is IP info:
Cluster 1 (VPC 1) - subnet 172.21.0.0/16
Minion(Instance)in VPC 1 - internal IP - 172.21.0.232
Pod on Minion 1 - IP - 10.240.1.54
Cluster 2 (VPC 2) - subnet 172.20.0.0/16
Minion(instance) in VPC 2 - internal IP - 172.20.0.19
Pod on Minion 1 - IP - 10.241.2.36
I've configured VPC peering between the two VPCs, and the minion in VPC 1 (172.21.0.232) can ping the minion in VPC 2 (172.20.0.19).
But when I try to ping the pod in VPC 1 on Minion 1 (IP 10.240.1.54) from the pod in VPC 2 (10.241.2.36), the ping fails.
Is this a supported use case in AWS? How can I achieve it? I have also configured the security groups on both instances to allow all traffic from source 10.0.0.0/8, but it did not help.
Really appreciate your help!
Direct communication with the pods from outside the cluster is not supposed to work. Pods can be exposed to the outside through Services.
There is a wide range of options, but a basic Service with a definition like the following could expose a pod through a predefined port to the other cluster:
---
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  # nodePort is only honoured for NodePort (or LoadBalancer) Services
  type: NodePort
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
      nodePort: 34567
With that, you could access your pod through port 34567, which is mapped on any of the Kubernetes nodes.
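As a quick check (using the minion IP from the question above and assuming the peering route tables and security groups allow TCP port 34567), you could then reach the pod from the other cluster with something like:

# from a minion or pod in VPC 2, hit the NodePort on the VPC 1 minion
curl http://172.21.0.232:34567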
Besides that, you should also consider checking out Ingress configurations.
A very good summary, besides the official documentation, is the Kubernetes Services and Ingress Under X-ray blog post.