Connecting to Kubernetes cluster on AWS internal network - amazon-web-services

I have two Kubernetes clusters in AWS, each in its own VPC.
Cluster1 in VPC1
Cluster2 in VPC2
I want to do http(s) requests from Cluster1 into Cluster2 through a VPC peering. The VPC peering is set up, and I can currently ping hosts in Cluster2 from hosts in Cluster1.
How can I create a service in Cluster2 that I can connect to from Cluster1? I have experience setting up services using external ELBs and the like, but not for internal traffic in the scenario above.

You can create an internal LoadBalancer.
All you need to do is to create a regular service of type LoadBalancer and annotate it with the following annotation:
service.beta.kubernetes.io/aws-load-balancer-internal: "true"

Use an internal load balancer:

apiVersion: v1
kind: Service
metadata:
  name: cluster2-service
  namespace: test
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
    - port: 80            # adjust to your application's port
      targetPort: 8080    # adjust to your container port
  selector:
    app: my-app           # adjust to your Pods' labels
That will instruct the AWS cloud provider integration to provision the ELB in a private subnet, which makes the services behind it in the cluster reachable from the peered VPC.
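If you also want a stable in-cluster DNS name for it on the Cluster1 side, one option is an ExternalName Service that points at the internal ELB's DNS name. A minimal sketch, where the service name, namespace, and the externalName hostname are placeholders for whatever Cluster2's internal ELB actually gets:

apiVersion: v1
kind: Service
metadata:
  name: cluster2-service        # placeholder name, created in Cluster1
  namespace: test
spec:
  type: ExternalName
  # Placeholder: use the DNS name of the internal ELB created for Cluster2's service
  externalName: internal-xxxxxxxxxxxx.eu-west-1.elb.amazonaws.com

Pods in Cluster1 can then reach the Cluster2 service via cluster2-service.test.svc.cluster.local, which resolves as a CNAME to the internal ELB.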

Related

istio-ingressgateway LoadBalancer showing "Pending" in AWS EKS

I have installed a private EKS cluster where the attached subnets are all private. My requirement is "Private EKS with Istio installation": create multiple microservices and expose them within the VPC.
To expose them within the VPC, I expected the 'istio-ingressgateway' to create an internal ALB, but it is showing "Pending":
istio-ingressgateway LoadBalancer 1xx.x0x.xx.2xx <pending>
My need is to run multiple microservices on different ports using "NodePort" and expose them via a Gateway.
Any help or views on this would be appreciated.
Thanks!
You have two options. You can use the ALB Ingress Controller and create an internal Ingress object, adding the annotation:
alb.ingress.kubernetes.io/scheme: "internal"
Or create a LoadBalancer service, which will create an ELB. Add these annotations to the service:
service.beta.kubernetes.io/aws-load-balancer-internal: Used on the service to indicate that we want an internal ELB.
service.beta.kubernetes.io/aws-load-balancer-security-groups: Used to specify the security groups to be added to ELB created. This replaces all other security groups previously assigned to the ELB.
For example, here is a minimal sketch of option 2 (the service name, selector, security group ID, and ports are placeholders):
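apiVersion: v1
kind: Service
metadata:
  name: my-internal-service                 # placeholder name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/aws-load-balancer-security-groups: sg-0123456789abcdef0   # placeholder SG ID
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080                      # adjust to your microservice's port
      protocol: TCP
  selector:
    app: my-microservice                    # placeholder label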
Also, you need this tag on the (private) VPC subnets:
Key: kubernetes.io/role/internal-elb
Value: 1
For your case, take option 2.

EKS using NLB ingress and multiple services deployed in node group

So, I am very new to using EKS with NLB ingress and managing my own worker nodes using nodegroup (ASG).
If I create an NLB ingress for the cluster and deploy multiple services inside the node group, how does the NLB know that it has to load balance each service separately?
Generally, when I have not used EKS and created my own k8s cluster, I have spun up one NLB per service. I am not sure how it would work in the case of EKS, with one NLB ingress for the whole cluster and multiple services inside.
Or, do I need to create multiple NLBs somehow?
Any help would be highly appreciated
when I have not used EKS and created my own k8s cluster, I have spun up one NLB per service
AWS EKS is no different on this point. For a Network Load Balancer (NLB), i.e. one operating at the TCP/UDP level, you use a Kubernetes Service of type: LoadBalancer. But there are options, configured by annotations on the Service. The most recent feature is IP mode. See the EKS Network Load Balancing docs for more configuration alternatives.
Example:
kind: Service
apiVersion: v1
metadata:
  name: nlb-ip-svc
  annotations:
    # route traffic directly to pod IPs
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb-ip"
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: LoadBalancer
  selector:
    app: nginx
If I create an NLB ingress for the cluster and deploy multiple services inside the node group, how does the NLB know that it has to load balance each service separately?
The load balancer forwards traffic to the target Pods matched by the selector: in your Service, and each Service of type LoadBalancer gets its own NLB.
The alternative is to use an Application Load Balancer (ALB), which works at the HTTP/HTTPS level using the Kubernetes Ingress resources. An ALB requires an Ingress controller installed in the cluster; the controller for the ALB was recently updated, see the AWS Load Balancer Controller.
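To make the Ingress/ALB alternative concrete, here is a rough sketch of a single Ingress (and hence a single ALB) fanning out to two Services by path; the Ingress name and the Services service-a and service-b are placeholders:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-alb-ingress                       # placeholder name
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
    - http:
        paths:
          - path: /service-a
            pathType: Prefix
            backend:
              service:
                name: service-a              # placeholder Service
                port:
                  number: 80
          - path: /service-b
            pathType: Prefix
            backend:
              service:
                name: service-b              # placeholder Service
                port:
                  number: 80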

How to expose my app outside the cluster or VPC from an internal load balancer in a private EKS cluster

I have a doubt about AWS EKS.
I have an EKS cluster (private subnets) with managed worker nodes (private subnets),
and I deployed an nginx Deployment with three replicas and exposed it with an internal LoadBalancer service.
I can curl it and get the expected output.
Problem: how do I expose my app outside the cluster or VPC?
Thanks
You can have your EKS nodes in the private subnets of the VPC, but you also need public subnets for exposing your pods/containers.
So ideally you need to create an LB service for your nginx deployment.
The blog below helped me during my initial EKS setup; I hope it helps you too:
Nginx ingress controller with NLB
You can add an AWS Application Load Balancer to your EKS cluster and have an Ingress targeting your service.
https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html
Deploy the ALB Controller in your cluster.
Add a new Ingress pointing to your service (see the sketch below).
Remember to set alb.ingress.kubernetes.io/scheme: internet-facing, as you want to expose your service to the public.
You can get the DNS name of the new Ingress in the AWS Console (EC2 > Load Balancers) or by describing the Ingress using kubectl.
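As a rough sketch of the Ingress step, assuming your nginx Deployment is fronted by a Service named nginx-service listening on port 80 (both names are assumptions):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress                        # placeholder name
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service          # placeholder; point at your nginx Service
                port:
                  number: 80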

Expose a Hazelcast cluster on AWS EKS with a load balancer

We have a Hazelcast 3.12 cluster running inside an AWS EKS kubernetes cluster.
Do you know how to expose a Hazelcast cluster with more than 1 pod that is running inside an AWS EKS kubernetes cluster to outside the kubernetes cluster?
The Hazelcast cluster has 6 pods and is exposed outside of the kubernetes cluster with a kubernetes "Service" of type LoadBalancer (AWS classic load balancer).
When I run a Hazelcast client from outside of the kubernetes cluster, I am able to connect to the Hazelcast cluster using the AWS load balancer. However, when I try to get some value from a Hazelcast map, the client fails with this error:
java.io.IOException: No available connection to address [172.17.251.81]:5701 at com.hazelcast.client.spi.impl.SmartClientInvocationService.getOrTriggerConnect(SmartClientInvocationService.java:75
The error mentions the IP address 172.17.251.81. This is an internal kubernetes IP for a Hazelcast pod that I cannot connect to from outside the kubernetes cluster. I don't know why the client is trying to connect to this IP address instead of the Load Balancer public IP address.
On the other hand, when I scale the hazelcast cluster from 6 to 1 pod, I am able to connect and get the map value without any problem.
In case you want to review the Kubernetes LoadBalancer Service configuration:
kind: Service
apiVersion: v1
metadata:
  name: hazelcast-elb
  labels:
    app: hazelcast
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
spec:
  ports:
    - name: tcp-hazelcast-elb
      port: 443
      targetPort: 5701
  selector:
    app: hazelcast
  type: LoadBalancer
If you expose all Pods with one LoadBalancer service, then you need to use Hazelcast Unisocket Client.
hazelcast-client:
  smart-routing: false
If you want to use the default Smart Client (which means better performance), then you need to expose each Pod with a separate service, because each Pod needs to be accessible from outside the Kubernetes cluster.
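For the Smart Client route, here is a rough sketch of one such per-Pod Service, assuming Hazelcast runs as a StatefulSet (so each Pod carries the statefulset.kubernetes.io/pod-name label); the names are placeholders and you would create one Service per Pod (hazelcast-0, hazelcast-1, ...):

kind: Service
apiVersion: v1
metadata:
  name: hazelcast-0-elb                      # placeholder; one Service per Pod
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
spec:
  type: LoadBalancer
  ports:
    - name: tcp-hazelcast
      port: 5701
      targetPort: 5701
  selector:
    statefulset.kubernetes.io/pod-name: hazelcast-0   # selects exactly one Pod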
Read more in the blog post: How to Set Up Your Own On-Premises Hazelcast on Kubernetes.

AWS cross-cluster comunication with Kubernetes pods

I have a question about cross-cluster communication between pods on AWS.
I am using Kubernetes to deploy clusters on AWS. Both clusters are in the same region and AZ, and each is deployed in its own VPC with non-overlapping subnets. I have successfully created VPC peering to establish communication between the two VPCs, and minions (instances) in each VPC can ping each other through their private IPs.
The question is: Kubernetes pods from one cluster (VPC) cannot ping a pod in the other cluster through its internal IP. I see traffic leaving the pod and the minion, but I don't see it in the other VPC.
Here is IP info:
Cluster 1 (VPC 1) - subnet 172.21.0.0/16
Minion(Instance)in VPC 1 - internal IP - 172.21.0.232
Pod on Minion 1 - IP - 10.240.1.54
Cluster 2 (VPC 2) - subnet 172.20.0.0/16
Minion(instance) in VPC 2 - internal IP - 172.20.0.19
Pod on Minion 1 - IP - 10.241.2.36
I have configured VPC peering between the two VPCs, and I can ping from the minion in VPC 1 (172.21.0.232) to the minion in VPC 2 at 172.20.0.19.
But when I try to ping the pod in VPC 1 on Minion 1 (IP 10.240.1.54) from the pod in VPC 2 (10.241.2.36), it does not work.
Is this a supported use case in AWS? How can I achieve it? I have also configured the security groups on both instances to allow all traffic from source 10.0.0.0/8, but it did not help.
Really appreciate your help!
Direct communication with the pods from outside the cluster is not supposed to work. Pods can be exposed to the outside through Services.
There is a wide range of options, but a basic Service with a definition like the following could expose a pod on a predefined port to the other cluster:
---
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  type: NodePort        # required for the nodePort field below to take effect
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
      nodePort: 34567
With that, you can access your pod through port 34567, which is mapped on every Kubernetes node (note the type: NodePort, which is required for the nodePort field to take effect).
Besides that, you should also consider checking out Ingress configurations.
A very good summary, besides the official documentation, is the Kubernetes Services and Ingress Under X-ray blog post.