AWS NLB Ingress for SMTP service in Kubernetes (EKS)

I'm trying to deploy an SMTP service into Kubernetes (EKS), and I'm having trouble with ingress. I'd prefer not to have to deploy SMTP at all, but I don't have that option at the moment. Our Kubernetes cluster uses the ingress-nginx controller, and its docs describe a way to expose TCP connections. I have TCP exposed on the controller via a ConfigMap like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-tcp
  namespace: ingress-nginx
data:
  '25': some-namespace/smtp:25
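(For completeness, the nginx docs' TCP approach also assumes the controller runs with the --tcp-services-configmap flag pointing at this ConfigMap, and that the port is opened on the controller's own Service so the cloud load balancer gets a listener for it. Roughly like this excerpt, where the Service name is the default from the official manifests:)
# excerpt from the ingress-nginx controller Service (type LoadBalancer)
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  ports:
    - name: smtp
      port: 25
      targetPort: 25
      protocol: TCP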
The receiving service is listening on port 25. I can verify that the k8s part is working: I've used port forwarding to forward it locally and verified with telnet that it's working. I can also access the SMTP service with telnet from a host in the VPC. I just cannot access it through the NLB. I've tried two different setups:
the ingress-nginx controller's NLB;
provisioning a separate NLB that points to the endpoint IPs of the service. The target groups are healthy, and I can access the service from a host in the same VPC that's not in the cluster.
I've verified at least a few dozen times that the security groups are open to all traffic on port 25.
Does anyone have any insight on how to expose the service through the NLB?

Related

Dedicated IP address to LoadBalancer mapping

We're serving our product on AWS EKS, where the service is created with type LoadBalancer. The ELB address is assigned by AWS, and this is what we share with our clients.
However, when we re-deploy the service to ship changes or improvements, the ELB IP changes. Since this forces us to frequently send mails to all the clients, we need a dedicated IP mapped to the load balancer that will not change when the service is re-deployed.
Any existing AWS solution or a nice pointer to solve this situation would be helpful.
You can use an Elastic IP, as described here: How to provide elastic ip to aws eks for external service with type loadbalancer?, and here: https://docs.aws.amazon.com/es_es/eks/latest/userguide/network-load-balancing.html. Just add the annotation service.beta.kubernetes.io/aws-load-balancer-eip-allocations: eipalloc-xxxxxxxxxxxxxxxxx,eipalloc-yyyyyyyyyyyyyyyyy to the NLB:
service.beta.kubernetes.io/aws-load-balancer-eip-allocations: eipalloc-05666791973f6a240
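For context, a minimal sketch of a full Service using that annotation (the Service name, selector, and port are placeholders; this applies to internet-facing NLBs, with one EIP allocation per subnet/AZ):
apiVersion: v1
kind: Service
metadata:
  name: my-service                # placeholder name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: eipalloc-05666791973f6a240
spec:
  type: LoadBalancer
  selector:
    app: my-app                   # placeholder selector
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP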
Another way is to use a domain name (my way). Then use the annotations from https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/aws.md to link your Service or Ingress with a DNS name, and configure external-dns to use your DNS provider, such as Route53.
For example:
---
apiVersion: v1
kind: Service
metadata:
  name: ambassador
  namespace: ambassador
  annotations:
    external-dns.alpha.kubernetes.io/hostname: 'myserver.mydomain.com'
Every time your LoadBalancer's address changes, the DNS record will be updated with the correct value.
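For reference, a rough sketch of the external-dns flags for Route53 (the image tag, domain, and owner id are placeholders; the linked tutorial covers the full Deployment and IAM policy):
# excerpt from the external-dns Deployment
containers:
  - name: external-dns
    image: registry.k8s.io/external-dns/external-dns:v0.13.5  # example version
    args:
      - --source=service
      - --source=ingress
      - --provider=aws
      - --domain-filter=mydomain.com    # only manage records in this hosted zone
      - --policy=upsert-only            # create/update records but never delete them
      - --registry=txt
      - --txt-owner-id=my-cluster       # placeholder identifier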
In order to have better control over exposed resources, you can use an Ingress controller such as the AWS Load Balancer Controller: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/
With it, you'll be able to re-use the same ALB for multiple Kubernetes services using the alb.ingress.kubernetes.io/group.name annotation. It will create multiple listener rules based on the Ingress configuration, as sketched below.
(This is applicable if you're not restricted by hard-coded firewall rules or similar configurations that would require static IPs, which is not recommended these days.)
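For illustration, a rough sketch of the grouping (hostname, group name, and backend Service are made up; assumes the controller's alb IngressClass is installed):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-a
  annotations:
    # every Ingress carrying this group.name is merged into the same ALB
    alb.ingress.kubernetes.io/group.name: shared-alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - host: a.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-a
                port:
                  number: 80
A second Ingress with the same group.name value simply adds its listener rules to that ALB instead of provisioning another load balancer.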

How to figure out my control plane instance in EKS

From a Grafana dashboard, I can see that one of the 2 kube-apiservers in my EKS cluster is showing high API latency. The Grafana dashboard identifies the instance by its endpoint IP.
root@k8scluster:~$ kubectl describe service/kubernetes
Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP:                172.50.0.1
Port:              https  443/TCP
TargetPort:        443/TCP
Endpoints:         10.0.99.157:443,10.0.10.188:443
Session Affinity:  None
Events:            <none>
Now, this endpoint (10.0.99.157) is the one with high latency when I check in Grafana. When I log in to my AWS console, I have access to the EC2 instances page, but I don't have access to see the nodes on the AWS EKS page.
From the EC2 console, I can find the 2 instances which are running my kube-apiserver. However, I can't figure out which one has the high latency (i.e. the instance with the endpoint IP 10.0.99.157). Is there any way I can figure this out from the EC2 console or using EKS commands?
Update:
I did compare it with the IP addresses / secondary IP addresses of both kube-apiserver EC2 instances, but none of them matches the endpoint IP 10.0.99.157.
Please note that the EKS Kubernetes control plane is managed by AWS and is therefore outside of your control, so you will not be able to access the respective EC2 instances at all.
Official AWS documentation can be found here.
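Although the control-plane instances themselves are hidden, the endpoint IPs of the kubernetes Service (such as 10.0.99.157) belong to EKS-managed network interfaces placed in your VPC, which is why they don't match any of your EC2 instances. A rough sketch with the AWS CLI to confirm which interface owns that IP (assumes credentials for the cluster's account and region):
aws ec2 describe-network-interfaces \
  --filters Name=addresses.private-ip-address,Values=10.0.99.157 \
  --query 'NetworkInterfaces[].[NetworkInterfaceId,Description,Status]' \
  --output table
For the EKS control plane, the Description of these interfaces typically reads 'Amazon EKS <cluster-name>', and there is no instance in your account behind them to inspect.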

AWS Network Load Balancer and TCP traffic with AWS Fargate

I want to expose a TCP-only service from my Fargate cluster to the public internet on port 80. To achieve this I want to use an AWS Network Load Balancer.
This is the configuration of my service:
apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app: myapp
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "30"
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
Using the service from inside the cluster via the CLUSTER-IP works. When I apply my config with kubectl, the following happens:
Service is created in K8s
NLB is created in AWS
NLB gets Status 'active'
VPC and other values for the NLB look correct
Target Group is created in AWS
There are 0 targets registered
I can't register targets manually because the target group expects instances, which I do not have
EXTERNAL-IP is <pending>
Listener is not created automatically
Then I create a listener for port 80 and TCP myself. After some waiting, an EXTERNAL-IP is assigned to the service in AWS.
My problem: it does not work. The service is not reachable using the DNS name of the NLB on port 80.
The in-tree Kubernetes Service LoadBalancer integration for AWS cannot be used with AWS Fargate.
You can use NLB instance targets with pods deployed to nodes, but not with Fargate.
However, you can now install the AWS Load Balancer Controller and use IP mode on your LoadBalancer Service; this also works with AWS Fargate:
kind: Service
apiVersion: v1
metadata:
  name: nlb-ip-svc
  annotations:
    # route traffic directly to pod IPs
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb-ip"
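  # (the rest of the manifest was omitted in the answer; below is a minimal
  #  sketch assuming the port-80 app from the question)
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP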
See Introducing AWS Load Balancer Controller and EKS Network Load Balancer - IP Targets

EKS using NLB ingress and multiple services deployed in node group

So, I am very new to using EKS with an NLB ingress and managing my own worker nodes via a node group (ASG).
If I create an NLB ingress for the cluster and deploy multiple services inside the node group, how does the NLB know that it has to load-balance each service separately?
Generally, when I have not used EKS and created my own k8s cluster, I have spun up one NLB per service. I'm not sure how it would work in the case of EKS with one NLB ingress for the whole cluster and multiple services inside.
Or do I need to create multiple NLBs somehow?
Any help would be highly appreciated.
when I have not used EKS and created my own k8s cluster, I have spun up one NLB per service
AWS EKS is no different on this point. For a Network Load Balancer (NLB), i.e. working at the TCP/UDP level, you use a Kubernetes Service of type: LoadBalancer. But there are options, configured by annotations on the Service. The most recent feature is IP mode. See the EKS Network Load Balancing doc for more configuration alternatives.
Example:
kind: Service
apiVersion: v1
metadata:
  name: nlb-ip-svc
  annotations:
    # route traffic directly to pod IPs
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb-ip"
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: LoadBalancer
  selector:
    app: nginx
If I create an NLB ingress for the cluster and deploy multiple services inside the node group, how does the NLB know that it has to load-balance each service separately?
The load balancer routes to the target Pods matched by the selector: in your Service.
The alternative is to use an Application Load Balancer (ALB), which works at the HTTP/HTTPS level using Kubernetes Ingress resources. The ALB requires an Ingress controller installed in the cluster, and the controller for ALBs was recently updated; see the AWS Load Balancer Controller.
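For illustration, a rough sketch of one Ingress routing to two Services behind a single ALB (paths and Service names are made up; assumes the AWS Load Balancer Controller and its alb IngressClass are installed):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shared-alb
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip    # register pod IPs directly
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: app1-svc
                port:
                  number: 80
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: app2-svc
                port:
                  number: 80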

Expose a Hazelcast cluster on AWS EKS with a load balancer

We have a Hazelcast 3.12 cluster running inside an AWS EKS kubernetes cluster.
Do you know how to expose a Hazelcast cluster with more than 1 pod that is running inside an AWS EKS kubernetes cluster to outside the kubernetes cluster?
The Hazelcast cluster has 6 pods and is exposed outside of the kubernetes cluster with a kubernetes "Service" of type LoadBalancer (AWS classic load balancer).
When I run a Hazelcast client from outside of the kubernetes cluster, I am able to connect to the Hazelcast cluster using the AWS load balancer. However, when I try to get some value from a Hazelcast map, the client fails with this error:
java.io.IOException: No available connection to address [172.17.251.81]:5701 at com.hazelcast.client.spi.impl.SmartClientInvocationService.getOrTriggerConnect(SmartClientInvocationService.java:75
The error mentions the IP address 172.17.251.81. This is an internal kubernetes IP for a Hazelcast pod that I cannot connect to from outside the kubernetes cluster. I don't know why the client is trying to connect to this IP address instead of the Load Balancer public IP address.
On the other hand, when I scale the hazelcast cluster from 6 to 1 pod, I am able to connect and get the map value without any problem.
In case you want to review the Kubernetes LoadBalancer Service configuration:
kind: Service
apiVersion: v1
metadata:
  name: hazelcast-elb
  labels:
    app: hazelcast
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
spec:
  ports:
    - name: tcp-hazelcast-elb
      port: 443
      targetPort: 5701
  selector:
    app: hazelcast
  type: LoadBalancer
If you expose all Pods with one LoadBalancer service, then you need to use Hazelcast Unisocket Client.
hazelcast-client:
  smart-routing: false
If you want to use the default Smart Client (which means better performance), then you need to expose each Pod with a separate service, because each Pod needs to be accessible from outside the Kubernetes cluster.
Read more in the blog post: How to Set Up Your Own On-Premises Hazelcast on Kubernetes.
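For illustration, a rough sketch of the per-Pod exposure, assuming Hazelcast runs as a StatefulSet named hazelcast (you create one such Service per member, hazelcast-0, hazelcast-1, and so on, selecting each Pod by the statefulset.kubernetes.io/pod-name label that StatefulSets set automatically):
kind: Service
apiVersion: v1
metadata:
  name: hazelcast-0-elb
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    statefulset.kubernetes.io/pod-name: hazelcast-0   # pins this load balancer to one member
  ports:
    - port: 5701
      targetPort: 5701
      protocol: TCP
The members also need to advertise these external addresses (public address configuration) so the smart client can reach each one directly.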