We have a Hazelcast 3.12 cluster running inside an AWS EKS kubernetes cluster.
Does anyone know how to expose a Hazelcast cluster with more than one pod, running inside an AWS EKS kubernetes cluster, to clients outside the kubernetes cluster?
The Hazelcast cluster has 6 pods and is exposed outside of the kubernetes cluster with a kubernetes "Service" of type LoadBalancer (AWS classic load balancer).
When I run a Hazelcast client from outside of the kubernetes cluster, I am able to connect to the Hazelcast cluster using the AWS load balancer. However, when I try to get some value from a Hazelcast map, the client fails with this error:
java.io.IOException: No available connection to address [172.17.251.81]:5701 at com.hazelcast.client.spi.impl.SmartClientInvocationService.getOrTriggerConnect(SmartClientInvocationService.java:75
The error mentions the IP address 172.17.251.81. This is an internal kubernetes IP for a Hazelcast pod that I cannot connect to from outside the kubernetes cluster. I don't know why the client is trying to connect to this IP address instead of the Load Balancer public IP address.
On the other hand, when I scale the hazelcast cluster from 6 to 1 pod, I am able to connect and get the map value without any problem.
In case you want to review the kubernetes LoadBalancer Service configuration:
kind: Service
apiVersion: v1
metadata:
  name: hazelcast-elb
  labels:
    app: hazelcast
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
spec:
  ports:
    - name: tcp-hazelcast-elb
      port: 443
      targetPort: 5701
  selector:
    app: hazelcast
  type: LoadBalancer
If you expose all Pods with one LoadBalancer service, then you need to use the Hazelcast Unisocket Client.
hazelcast-client:
  network:
    smart-routing: false
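Putting it together, a unisocket client configuration pointed at the load balancer could look roughly like this (the ELB DNS name is a placeholder; the port matches the Service's port: 443 above):
hazelcast-client:
  network:
    smart-routing: false
    cluster-members:
      # placeholder: DNS name of the AWS load balancer in front of the cluster
      - my-hazelcast-elb-1234567890.eu-west-1.elb.amazonaws.com:443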
If you want to use the default Smart Client (which means better performance), then you need to expose each Pod with a separate service, because each Pod needs to be accessible from outside the Kubernetes cluster.
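A sketch of the "one Service per Pod" approach, assuming the members run as a StatefulSet named hazelcast (the statefulset.kubernetes.io/pod-name label used in the selector is added automatically by the StatefulSet controller). You would create one such Service per member (hazelcast-0, hazelcast-1, and so on), and each member would also have to advertise its external address (public-address) so the smart client can reach it:
kind: Service
apiVersion: v1
metadata:
  name: hazelcast-0-elb
  labels:
    app: hazelcast
spec:
  type: LoadBalancer
  ports:
    - name: hazelcast
      port: 5701
      targetPort: 5701
  selector:
    # selects exactly one member Pod
    statefulset.kubernetes.io/pod-name: hazelcast-0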
Read more in the blog post: How to Set Up Your Own On-Premises Hazelcast on Kubernetes.
I'm trying to deploy an SMTP service into Kubernetes (EKS), and I'm having trouble with ingress. I'd rather not have to deploy SMTP, but I don't have that option at the moment. Our Kubernetes cluster uses the ingress-nginx controller, and the docs point to a way to expose TCP connections. I have TCP exposed on the controller via a ConfigMap like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-tcp
  namespace: ingress-nginx
data:
  '25': some-namespace/smtp:25
The receiving service is listening on port 25. I can verify that the k8s part is working. I've used port forwarding to forward it locally and verified with telnet that it's working. I can also access the SMTP service with telnet from a host in the VPC. I just cannot access it through the NLB. I've tried 2 different setups:
the ingress-nginx controller's NLB.
provisioning a separate NLB that points to the endpoint IPs of the service. The target groups are healthy, and I can access the service from a host in the same VPC that is not in the cluster.
I've verified at least a few dozen times that the security groups are open to all traffic on port 25.
Does anyone have any insight into how to expose the service through the NLB?
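One thing worth double-checking in a setup like this (not confirmed from the question): with the stock ingress-nginx deployment, the ConfigMap above only takes effect if the controller is started with --tcp-services-configmap=ingress-nginx/ingress-nginx-tcp, and the controller's NLB-backed Service must also expose port 25. A rough sketch of that Service, with the name and selector assumed from a standard install:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # name assumed from a standard install
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx   # selector assumed; match your install's labels
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
    - name: smtp   # the extra TCP port published via the ConfigMap
      port: 25
      targetPort: 25
      protocol: TCP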
I want to expose a TCP-only service from my Fargate cluster to the public internet on port 80. To achieve this I want to use an AWS Network Load Balancer.
This is the configuration of my service:
apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app: myapp
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "30"
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
Using the service from inside the cluster with CLUSTER-IP works. When I apply my config with kubectl the following happens:
Service is created in K8s
NLB is created in AWS
NLB gets Status 'active'
VPC and other values for the NLB look correct
Target Group is created in AWS
There are 0 targets registered
I can't register targets because the target group expects instances, which I do not have
EXTERNAL_IP is <pending>
Listener is not created automatically
Then I create a listener for port 80 and TCP. After some wait, an EXTERNAL_IP is assigned to the service in AWS.
My problem: it does not work. The service is not reachable using the DNS name of the NLB on port 80.
The in-tree Kubernetes Service LoadBalancer for AWS cannot be used with AWS Fargate.
You can use NLB instance targets with pods deployed to nodes, but not to Fargate.
But you can now install the AWS Load Balancer Controller and use IP mode on your Service of type LoadBalancer; this also works for AWS Fargate.
kind: Service
apiVersion: v1
metadata:
  name: nlb-ip-svc
  annotations:
    # route traffic directly to pod IPs
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb-ip"
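Applied to the Service from the question, this roughly means swapping the annotations on the existing manifest (a sketch; everything except the annotation is taken from the question as posted):
apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app: myapp
  annotations:
    # requires the AWS Load Balancer Controller; registers Pod IPs as NLB targets
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb-ip"
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80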
See Introducing AWS Load Balancer Controller and EKS Network Load Balancer - IP Targets
So, I am very new to using EKS with NLB ingress and managing my own worker nodes using nodegroup (ASG).
If I create an NLB ingress for the cluster and deploy multiple services inside the node group, how does the NLB know that it has to load-balance each service separately?
Generally, when I have not used EKS and created my own k8s cluster, I have spun up one NLB per service. I'm not sure how it would work in the case of EKS, with one NLB ingress for the whole cluster and multiple services inside.
Or, do I need to create multiple NLBs somehow?
Any help would be highly appreciated
when I have not used EKS and created my own k8s cluster, I have spun up one NLB per service
AWS EKS is no different on this point. For a Network Load Balancer (NLB), i.e. at the TCP/UDP level, you use a Kubernetes Service of type: LoadBalancer. But there are options, configured by annotations on the Service. The most recent feature is IP mode. See the EKS Network Load Balancing docs for more configuration alternatives.
Example:
kind: Service
apiVersion: v1
metadata:
  name: nlb-ip-svc
  annotations:
    # route traffic directly to pod IPs
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb-ip"
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: LoadBalancer
  selector:
    app: nginx
If I create an NLB ingress for the cluster and deploy multiple services inside the node group, how does the NLB know that it has to load-balance each service separately?
The load balancer routes traffic to the target Pods that are matched by the selector: in your Service.
The alternative is to use an Application Load Balancer (ALB), which works at the HTTP/HTTPS level using Kubernetes Ingress resources. An ALB requires an Ingress controller installed in the cluster, and the controller for the ALB was recently updated; see AWS Load Balancer Controller.
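For comparison, a minimal Ingress for the ALB route could look like this (the host-less rule, service name, and annotations are illustrative and assume the AWS Load Balancer Controller is installed):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip   # route to Pod IPs, like the NLB IP mode above
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx   # placeholder: the Service to expose
                port:
                  number: 80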
I have two Kubernetes clusters in AWS, each in its own VPC.
Cluster1 in VPC1
Cluster2 in VPC2
I want to make http(s) requests from Cluster1 into Cluster2 through a VPC peering. The VPC peering is set up, and I can currently ping hosts in Cluster2 from hosts in Cluster1.
How can I create a service in Cluster2 that I can connect to from Cluster1? I have experience setting up services using external ELBs and the like, but not for internal traffic as in the scenario above.
You can create an internal LoadBalancer.
All you need to do is to create a regular service of type LoadBalancer and annotate it with the following annotation:
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
Use an internal loadbalancer.
apiVersion: v1
kind: Service
metadata:
  name: cluster2-service
  namespace: test
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
That will instruct the cloud provider to allocate the ELB on a private subnet, which should make services behind it in the cluster reachable from the other VPC.
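Filled out with a placeholder selector and port, the full manifest might look roughly like this; Cluster1 can then reach the service over the peering via the load balancer's private DNS name:
apiVersion: v1
kind: Service
metadata:
  name: cluster2-service
  namespace: test
  annotations:
    # provision the load balancer on private subnets of VPC2
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: my-cluster2-app   # placeholder: match the target Pods
  ports:
    - name: http
      port: 80
      targetPort: 8080   # placeholder container port
      protocol: TCP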
I have two VPCs, VPC A and VPC B. I have one service running in VPC B. The Kubernetes cluster is in VPC A. I am using kops on AWS, with VPC peering enabled between the two VPCs. I can connect to the service running in VPC B from the Kubernetes deployment server host in VPC A. But I cannot connect to that service from inside a Kubernetes pod; the connection times out. I searched on the internet and found that iptables rules could work.
I have gone through this article,
https://ben.straub.cc/2015/08/19/kubernetes-aws-vpc-peering/
But it is not feasible to manually SSH into the Kubernetes nodes and set the iptables rules; I want to add them as part of the deployment.
This is what my service looks like:
apiVersion: v1
kind: Service
metadata:
  name: test-microservice
  namespace: development
spec:
  # type: LoadBalancer
  type: NodePort
  # clusterIP: None
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    run: test-microservice
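One way to ship node-level iptables rules "as part of the deployment" instead of SSH-ing into nodes is a privileged, host-network DaemonSet that applies them on every node. This is only a rough sketch: VPC B's CIDR (10.1.0.0/16) is a placeholder, and whether a MASQUERADE rule like the one in the linked article is the right fix still depends on the cluster's networking setup:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: vpc-peering-iptables
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: vpc-peering-iptables
  template:
    metadata:
      labels:
        app: vpc-peering-iptables
    spec:
      hostNetwork: true
      containers:
        - name: iptables
          image: alpine:3.18
          securityContext:
            privileged: true   # required to modify the node's iptables
          command:
            - /bin/sh
            - -c
            - |
              apk add --no-cache iptables
              # SNAT pod traffic destined for the peered VPC (placeholder CIDR) to the node IP,
              # so return traffic is routable back over the peering connection
              iptables -t nat -C POSTROUTING -d 10.1.0.0/16 -j MASQUERADE 2>/dev/null \
                || iptables -t nat -A POSTROUTING -d 10.1.0.0/16 -j MASQUERADE
              # keep the container running so the DaemonSet stays healthy
              sleep 2147483647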