AWS cross-cluster communication with Kubernetes pods - amazon-web-services

I have a question about cross-cluster communication between pods on AWS.
I am using Kubernetes to deploy clusters on AWS. Both clusters are in the same region and AZ. Each cluster is deployed in its own VPC with non-overlapping subnets. I have successfully created VPC Peering to establish communication between the two VPCs, and the Minions (instances) in each VPC can ping each other through their private IPs.
The question is: a Kubernetes pod in one cluster (VPC) cannot ping a pod in the other cluster through its internal IP. I see the traffic leaving the pod and the minion, but I don't see it in the other VPC.
Here is the IP info:
Cluster 1 (VPC 1) - subnet 172.21.0.0/16
Minion (instance) in VPC 1 - internal IP - 172.21.0.232
Pod on the Minion in VPC 1 - IP - 10.240.1.54
Cluster 2 (VPC 2) - subnet 172.20.0.0/16
Minion (instance) in VPC 2 - internal IP - 172.20.0.19
Pod on the Minion in VPC 2 - IP - 10.241.2.36
I have configured VPC Peering between the two VPCs, and I can ping from the Minion in VPC 1 (172.21.0.232) to the Minion in VPC 2 (172.20.0.19).
But when I try to ping the pod on Minion 1 in VPC 1 (IP 10.240.1.54) from the pod on the Minion in VPC 2 (10.241.2.36), the ping fails.
Is this a supported use case in AWS? How can I achieve it? I have also configured the security groups on both instances to allow all traffic from source 10.0.0.0/8, but it did not help.
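(For reference, the peering route and the security-group rule described above amount to something like the following AWS CLI sketch; the route table, peering connection and security group IDs are placeholders.)

# Placeholder IDs: route VPC 1's route table towards VPC 2's CIDR via the peering connection
aws ec2 create-route --route-table-id rtb-11111111 \
    --destination-cidr-block 172.20.0.0/16 \
    --vpc-peering-connection-id pcx-aaaabbbb

# Allow all traffic from 10.0.0.0/8 on the minions' security group (placeholder group ID)
aws ec2 authorize-security-group-ingress --group-id sg-22222222 \
    --protocol -1 --cidr 10.0.0.0/8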
Really appreciate your help!

Direct communication with the pods from outside the cluster is not supposed to work. Pods can be exposed to the outside through Services.
There is a wide range of options, but a basic Service with a definition like the following could expose a pod through a predefined port to the other cluster:
---
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
      nodePort: 34567
With that you could access your pod through port 34567, which is mapped on every Kubernetes node.
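For example, from the other cluster you could then reach the pod via any node's private IP in the first cluster, as in the sketch below using the IPs from the question. Note that 34567 lies outside the default NodePort range (30000-32767), so it only works if the apiserver's service-node-port-range allows it, and the node's security group must permit that port from the peered CIDR.

curl http://172.21.0.232:34567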
Besides that, you should also consider checking out Ingress configurations.
A very good summary, besides the official documentation, is the Kubernetes Services and Ingress Under X-ray blog post.

Related

How to figure out my control plane instance in EKS

From the Grafana dashboard, I can see that one of the 2 kube-apiservers in my EKS cluster has high API latency. The Grafana dashboard identifies the instance using the endpoint IP.
root@k8scluster:~ $ kubectl describe service/kubernetes
Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP:                172.50.0.1
Port:              https  443/TCP
TargetPort:        443/TCP
Endpoints:         10.0.99.157:443,10.0.10.188:443
Session Affinity:  None
Events:            <none>
Now, this endpoint (10.0.99.157) is the one which has high latency when I check from Grafana. When I log in to my AWS console, I have access to the EC2 instances page, but I don't have access to see the nodes on the AWS EKS page.
From the EC2 console, I can figure out the 2 instances which are running my kube-apiserver. However, I can't seem to figure out which one has the high latency (i.e. the instance which has the endpoint IP 10.0.99.157). Is there any way I can figure that out from the EC2 console or using EKS commands?
Update:
I did compare it with the IP addresses / secondary IP addresses of both kube-apiserver EC2 instances, but none of them match the endpoint IP 10.0.99.157.
Please note that the EKS K8s Control Plane is managed by AWS and therefore outside of your management. So, you will not be able to access the respective EC2 instances at all.
Official AWS documentation can be found here.
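If you want to confirm that such an endpoint IP belongs to an AWS-managed network interface rather than to one of your own instances, one option is to look up the ENI that holds that private IP (a sketch using the IP from the question; for an EKS-owned control-plane ENI the description typically reads something like "Amazon EKS <cluster-name>"):

aws ec2 describe-network-interfaces \
    --filters "Name=addresses.private-ip-address,Values=10.0.99.157" \
    --query "NetworkInterfaces[].{Description:Description,RequesterManaged:RequesterManaged}"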

EKS using NLB ingress and multiple services deployed in node group

So, I am very new to using EKS with NLB ingress and managing my own worker nodes using a node group (ASG).
If I create an NLB ingress for the cluster and deploy multiple services inside the node group, how does the NLB know that it has to load balance across the services separately?
Generally, when I have not used EKS and created my own k8s cluster, I have spun up one NLB per service. I am not sure how it would work in the case of EKS, with one NLB ingress for the whole cluster and multiple services inside.
Or do I need to create multiple NLBs somehow?
Any help would be highly appreciated.
when I have not used EKS and created my own k8s cluster, I have spun up one NLB per service
AWS EKS is no different on this point. For a Network Load Balancer (NLB), i.e. on the TCP/UDP level, you use a Kubernetes Service of type: LoadBalancer. But there are options, configured through annotations on the Service. The most recent feature is IP mode. See the EKS Network Load Balancing doc for more configuration alternatives.
Example:
kind: Service
apiVersion: v1
metadata:
  name: nlb-ip-svc
  annotations:
    # route traffic directly to pod IPs
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb-ip"
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: LoadBalancer
  selector:
    app: nginx
If I create an NLB ingress for the cluster and deploy multiple services inside the node group, how does the NLB know that it has to load balance across the services separately?
The load balancer routes to the target pods that are matched by the selector: in your Service.
The alternative is to use an Application Load Balancer (ALB), which works on the HTTP/HTTPS level using Kubernetes Ingress resources. The ALB requires an Ingress controller installed in the cluster, and the controller for the ALB was recently updated; see the AWS Load Balancer Controller.
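A minimal Ingress for that controller could look roughly like the sketch below; the backend service name and port are placeholders, and the scheme/target-type annotations are the ones documented for the AWS Load Balancer Controller:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    # provision an internet-facing ALB and register pod IPs as targets
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service   # placeholder backend Service
                port:
                  number: 80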

Connecting to Kubernetes cluster on AWS internal network

I have two Kubernetes clusters in AWS, each in its own VPC.
Cluster1 in VPC1
Cluster2 in VPC2
I want to make http(s) requests from Cluster1 into Cluster2 through a VPC peering. The VPC peering is set up, and I can currently ping hosts in Cluster2 from hosts in Cluster1.
How can I create a service in Cluster2 that I can connect to from Cluster1? I have experience setting up services using external ELBs and the like, but not for internal traffic as in the scenario above.
You can create an internal LoadBalancer.
All you need to do is to create a regular service of type LoadBalancer and annotate it with the following annotation:
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
Use an internal load balancer.
apiVersion: v1
kind: Service
metadata:
  name: cluster2-service
  namespace: test
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
That will instruct the AWS cloud provider integration to allocate the ELB on a private subnet, which should make the services behind it in the cluster reachable from the other VPC.
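A fuller sketch of such an internal Service (the port and selector values below are hypothetical and would need to match your workload):

apiVersion: v1
kind: Service
metadata:
  name: cluster2-service
  namespace: test
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080   # hypothetical container port
      protocol: TCP
  selector:
    app: my-app          # hypothetical label on the backing pods

Cluster1 can then connect to the DNS name of the internal ELB that gets provisioned for this Service.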

Connect to different VPC inside of Kubernetes pod

I have two VPCs, VPC A and VPC B. I have one service running in VPC B, and my Kubernetes cluster is in VPC A. I am using kops in AWS, and VPC peering is enabled between the two VPCs. I can connect to the service running in VPC B from the Kubernetes deployment server host in VPC A, but I cannot connect to it from inside a Kubernetes pod; the connection times out. I searched the internet and found that iptables rules could work.
I have gone through this article:
https://ben.straub.cc/2015/08/19/kubernetes-aws-vpc-peering/
But it is not practical to manually SSH into the Kubernetes nodes and set the iptables rules; I want to add them as part of the deployment (one possible approach is sketched after the service definition below).
This is what my service looks like:
apiVersion: v1
kind: Service
metadata:
  name: test-microservice
  namespace: development
spec:
  # type: LoadBalancer
  type: NodePort
  # clusterIP: None
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    run: test-microservice
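One way to apply such iptables rules as part of the deployment, rather than SSHing into each node, would be a privileged DaemonSet that runs on every node. The following is only a sketch; the MASQUERADE rule and the peered CIDR are placeholders for whatever rules the linked article prescribes for your setup:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: vpc-peering-iptables
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: vpc-peering-iptables
  template:
    metadata:
      labels:
        app: vpc-peering-iptables
    spec:
      hostNetwork: true
      containers:
        - name: iptables
          image: alpine:3.18
          securityContext:
            privileged: true
          command:
            - /bin/sh
            - -c
            - |
              # placeholder rule: masquerade pod traffic destined for the peered VPC's CIDR
              apk add --no-cache iptables
              iptables -t nat -A POSTROUTING -d 172.16.0.0/16 -j MASQUERADE
              sleep infinity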

Kubernetes: Have no access from EKS pod to RDS Postgres

I'm trying to set up Kubernetes on AWS. For this I created an EKS cluster with 3 nodes (t2.small) according to the official AWS tutorial. Then I want to run a pod with an app which communicates with Postgres (RDS in a different VPC).
But unfortunately the app doesn't connect to the database.
What I have:
EKS cluster with its own VPC (CIDR: 192.168.0.0/16)
RDS (Postgres) with its own VPC (CIDR: 172.30.0.0/16)
Peering connection initiated from the RDS VPC to the EKS VPC
Route table for the 3 public subnets of the EKS cluster is updated: a route with destination 172.30.0.0/16 and the peering connection from step #3 as the target is added.
Route table for the RDS is updated: a route with destination 192.168.0.0/16 and the peering connection from step #3 as the target is added.
The RDS security group is updated: a new inbound rule is added that allows all traffic from 192.168.0.0/16.
After all these steps I execute a kubectl command:
kubectl exec -it my-pod-app-6vkgm nslookup rds-vpc.unique_id.us-east-1.rds.amazonaws.com
nslookup: can't resolve '(null)': Name does not resolve
Name: rds-vpc.unique_id.us-east-1.rds.amazonaws.com
Address 1: 52.0.109.113 ec2-52-0-109-113.compute-1.amazonaws.com
Then I connect to one of the 3 nodes and execute a command:
getent hosts rds-vpc.unique_id.us-east-1.rds.amazonaws.com
52.0.109.113 ec2-52-0-109-113.compute-1.amazonaws.com rds-vpc.unique_id.us-east-1.rds.amazonaws.com
What did I miss in the EKS setup in order to have access from the pods to RDS?
UPDATE:
I tried to fix the problem with a Service:
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  type: ExternalName
  externalName: rds-vpc.unique_id.us-east-1.rds.amazonaws.com
So I created this Service in EKS and then tried to refer to postgres-service as the DB URL instead of the direct RDS host address.
This fix does not work :(
Have you tried to enable "DNS propagation" on the peering connection? It looks like you are not getting the internally routable DNS name. You can enable it by going into the settings for the peering connection and checking the box for DNS propagation. I generally do this with all of the peering connections that I control.
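In the AWS console this option appears on the peering connection as DNS resolution (under "Edit DNS settings"); with the CLI it could be enabled roughly as follows (the peering connection ID is a placeholder):

aws ec2 modify-vpc-peering-connection-options \
    --vpc-peering-connection-id pcx-0123456789abcdef0 \
    --requester-peering-connection-options AllowDnsResolutionFromRemoteVpc=true \
    --accepter-peering-connection-options AllowDnsResolutionFromRemoteVpc=true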
The answer I provided here may actually apply to your case, too.
It is about using Services without selectors. Look also into ExternalName Services.
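A sketch of that selector-less Service pattern, pointing at the RDS instance's private IP (the IP and port below are placeholders; the Endpoints object must have the same name as the Service so it is picked up as its backend):

apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  ports:
    - port: 5432
      targetPort: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: postgres-service
subsets:
  - addresses:
      - ip: 172.30.0.10   # placeholder: the RDS instance's private IP
    ports:
      - port: 5432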