How can I assign a static IP to my EKS service?

I have an EKS cluster.
I created my service and exposed it using ingress-nginx.
The ingress-nginx external IP appears as a DNS name, not as an IP.
How can I point my private domain to my EKS service?
I know that there is an annotation for using an AWS Elastic IP with Kubernetes,
but it is only available starting from Kubernetes 1.16, and EKS supports only up to 1.14.
So what are my options for assigning a static IP to my service and configuring my DNS to point to this IP?

When creating a LoadBalancer Service (which provisions an actual load balancer), you can now specify preallocated Elastic IPs by allocation ID via annotations.
Example:
apiVersion: v1
kind: Service
metadata:
  name: some-name
  annotations:
    # only the Network Load Balancer supports static IPs
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    # comma-separated list of Elastic IP allocation ids;
    # the length of the list must be equal to the number of subnets
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: eipalloc-abcd0000,eipalloc-abcd0001,eipalloc-abcd0002
  ...
spec:
  type: LoadBalancer
  ...
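A related knob, if the Service is handled by the AWS Load Balancer Controller rather than the in-tree provider (an assumption; the controller's annotation reference is linked in a later answer), is to pin the subnets explicitly so the Elastic IP list lines up with them. A minimal sketch with placeholder subnet and allocation IDs:
apiVersion: v1
kind: Service
metadata:
  name: some-name
  annotations:
    # handled by the AWS Load Balancer Controller rather than the in-tree provider
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    # one subnet per Availability Zone; the EIP list must have the same length
    service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-aaaa0000,subnet-bbbb0001,subnet-cccc0002
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: eipalloc-abcd0000,eipalloc-abcd0001,eipalloc-abcd0002
spec:
  type: LoadBalancer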

Assigning Static IP Address to AWS Load Balancer
The answer to this post still rings true in this case.
Amazon's load balancers scale network interfaces up and down as needed to handle the request load. This is why you are given a domain name instead of an IP address: the load balancer can have multiple physical interfaces, and their IP addresses change frequently.
If all you are trying to do is create a DNS name for your load balancer, this can be done with any DNS provider by creating a CNAME record pointing to the DNS name of the load balancer provisioned by AWS. If you are using Route 53, it is even simpler, since you can create an A record with an alias to that DNS name.
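For the Route 53 case, a minimal CloudFormation sketch of such an alias record might look like the following; the hosted zone IDs, record name, and load balancer DNS name are placeholders, not values from this thread:
Resources:
  ServiceAliasRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: Z0HOSTEDZONEEXAMPLE   # the zone that holds your private domain (placeholder)
      Name: myservice.example.internal.
      Type: A
      AliasTarget:
        # DNS name reported by the AWS-provisioned load balancer (placeholder)
        DNSName: abc123.elb.eu-west-1.amazonaws.com
        # canonical hosted zone ID of the load balancer itself (placeholder)
        HostedZoneId: Z0LBZONEEXAMPLE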
I hope this helps. FWIW, it is not possible to get a single static IP address for your load balancer unless you are only deploying it in one Availability Zone.

You can now assign Elastic IPs or private IPs to an EKS LoadBalancer service.
An example for private IPs:
apiVersion: v1
kind: Service
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: foobar.internal.k8s.test.
    external-dns.alpha.kubernetes.io/ttl: '30'
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '600'
    service.beta.kubernetes.io/aws-load-balancer-internal: 'true'
    service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses: 10.10.1.15,10.20.1.15,10.30.1.30,10.40.1.30,10.50.1.45,10.50.1.45
  labels:
    app.kubernetes.io/instance: foobar
  name: foobar
  namespace: test-foobar
spec:
  ports:
    - name: foobar
      port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    internaldeployment: foobar
  type: LoadBalancer
  loadBalancerClass: service.k8s.aws/nlb
Source of documentation:
https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/service/annotations/#private-ipv4-addresses
For Elastic IPs, the documentation is here:
https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/service/annotations/#eip-allocations

You can provision the Elastic IP on AWS and configure the service with that IP.
Example:
spec:
  type: LoadBalancer
  loadBalancerIP: xxxxx

Related

Create static domain address for Network Load Balancer in multizone EKS cluster

I'm fairly new to the AWS EKS service, and I'm trying to deploy a UDP network load balancer.
I have an EKS cluster inside a VPC with two subnets in two availability zones, and I want a fixed address assigned to the NLB. Currently, I have this in my service YAML:
apiVersion: v1
kind: Service
metadata:
  name: udpserver-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: "eipalloc-XXXXX,eipalloc-YYYYY"
spec:
  selector:
    app: udpserver
  type: LoadBalancer
  ports:
    - protocol: UDP
      port: 5002
      targetPort: 5002
  externalTrafficPolicy: Local
This is the closest case that I found on Stack Overflow, but the accepted solution only works when you have just one availability zone, because each Elastic IP defined in the service.beta.kubernetes.io/aws-load-balancer-eip-allocations annotation is assigned to the subnet in a different availability zone.
So, with this approach, I have two static IP addresses pointing to the two subnets in the two availability zones, instead of one single domain name pointing to the "global" load balancer.
The problem, however, is the same: every time I deploy the service, a new NLB is created with a different domain name.
How can I make this load balancer's DNS fixed? Am I missing or misunderstanding something?
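One pattern shown in an earlier answer in this thread is to let external-dns manage a stable record that follows the NLB's generated DNS name, so the record stays fixed even if the NLB is recreated. This is only a sketch, assuming the external-dns controller is deployed in the cluster; the hostname below is a placeholder:
apiVersion: v1
kind: Service
metadata:
  name: udpserver-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: "eipalloc-XXXXX,eipalloc-YYYYY"
    # external-dns re-points this record whenever the NLB (and its DNS name) is recreated
    external-dns.alpha.kubernetes.io/hostname: udpserver.example.internal.
    external-dns.alpha.kubernetes.io/ttl: '30'
spec:
  selector:
    app: udpserver
  type: LoadBalancer
  ports:
    - protocol: UDP
      port: 5002
      targetPort: 5002
  externalTrafficPolicy: Local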

AWS Network Load Balancer and TCP traffic with AWS Fargate

I want to expose a TCP-only service from my Fargate cluster to the public internet on port 80. To achieve this, I want to use an AWS Network Load Balancer.
This is the configuration of my service:
apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app: myapp
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "30"
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
Using the service from inside the cluster via its CLUSTER-IP works. When I apply my config with kubectl, the following happens:
- The Service is created in K8s.
- An NLB is created in AWS and gets the status 'active'; the VPC and other values for the NLB look correct.
- A target group is created in AWS, but 0 targets are registered, and I can't register targets because the group expects instances, which I do not have.
- The EXTERNAL-IP stays pending.
- No listener is created automatically.
Then I create a listener for port 80 and TCP myself. After some wait, an EXTERNAL-IP is assigned to the service in AWS.
My problem: it does not work. The service is not reachable using the DNS name of the NLB on port 80.
The in-tree Kubernetes Service LoadBalancer for AWS cannot be used with AWS Fargate.
You can use NLB instance targets with pods deployed to nodes, but not with Fargate.
However, you can now install the AWS Load Balancer Controller and use IP mode on your LoadBalancer Service; this also works with AWS Fargate.
kind: Service
apiVersion: v1
metadata:
  name: nlb-ip-svc
  annotations:
    # route traffic directly to pod IPs
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb-ip"
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: myapp
See Introducing AWS Load Balancer Controller and EKS Network Load Balancer - IP Targets

EKS using NLB ingress and multiple services deployed in node group

So, I am very new to using EKS with an NLB ingress and managing my own worker nodes with a node group (ASG).
If I create an NLB ingress for the cluster and deploy multiple services inside the node group, how does the NLB know that it has to load balance each service separately?
Generally, when I have not used EKS and have created my own k8s cluster, I have spun up one NLB per service. I am not sure how it would work in the case of EKS, with one NLB ingress for the whole cluster and multiple services inside.
Or do I need to create multiple NLBs somehow?
Any help would be highly appreciated.
when I have not used EKS and have created my own k8s cluster, I have spun up one NLB per service
AWS EKS is no different on this point. For a Network Load Balancer (NLB), i.e. load balancing at the TCP/UDP level, you use a Kubernetes Service of type: LoadBalancer. There are options, though, configured by annotations on the Service; the most recent feature is IP mode. See the EKS Network Load Balancing documentation for more configuration alternatives.
Example:
kind: Service
apiVersion: v1
metadata:
  name: nlb-ip-svc
  annotations:
    # route traffic directly to pod IPs
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb-ip"
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: LoadBalancer
  selector:
    app: nginx
If I create an NLB ingress for the cluster and deploy multiple services inside the node group, how does the NLB know that it has to load balance each service separately?
The load balancer routes traffic to the pods matched by the selector: in your Service.
The alternative is to use an Application Load Balancer (ALB), which works at the HTTP/HTTPS level using Kubernetes Ingress resources. The ALB requires an Ingress controller installed in the cluster, and the controller for the ALB has recently been updated; see AWS Load Balancer Controller.
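As a rough sketch of that ALB route, assuming the AWS Load Balancer Controller is installed and provides an IngressClass named alb (the Ingress name, service name, and port are placeholders, not values from this thread):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    # provision an internet-facing ALB that targets pod IPs directly
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-service
                port:
                  number: 80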

How to create AWS Security Group that restricts inbound traffic only from Kubernetes on Google Cloud?

AWS RDS Security Groups are great for restricting to specific IP addresses.
My Google Cloud deployment is an Ingress at a static IP. The Ingress points to one or several nodes. Those nodes have non-static IP addresses.
How do I restrict AWS RDS to only those nodes?
(Restricting to the Ingress IP would not, and does not, work.)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: xxxxx
  annotations:
    kubernetes.io/ingress.global-static-ip-name: xxxxx-ip
  labels:
    app: xxxxx
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: xxxxx
              servicePort: 3000
Google Kubernetes nodes are ephemeral and, with autoscaling enabled, are created and destroyed as needed. This means that you cannot rely upon the IP address of a node or a collection of nodes. At this time, Google does not support assigning a static pool of addresses to a GKE cluster.
There is an open-source project, KubeIP, which can help you solve this. I have not used this project on GKE, so do your own research on its viability for your project.
KubeIP
Don't forget that you will be charged for allocated static IP addresses that are not assigned to a Google service (Load Balancer, Compute Engine, etc).

Connect to different VPC inside of Kubernetes pod

I have two VPCs, VPC A and VPC B. I have one service running in VPC B, and the Kubernetes cluster is in VPC A. I am using kops on AWS, and VPC peering is enabled between the two VPCs. I can connect to the service running in VPC B from the Kubernetes deployment server host in VPC A, but I cannot connect to it from inside a Kubernetes pod; the connection times out. I searched the internet and found that iptables rules could work.
I have gone through this article:
https://ben.straub.cc/2015/08/19/kubernetes-aws-vpc-peering/
But it is not feasible to manually SSH into the Kubernetes node servers and set the iptables rules; I want to add them as part of the deployment.
This is what my service looks like:
apiVersion: v1
kind: Service
metadata:
  name: test-microservice
  namespace: development
spec:
  # type: LoadBalancer
  type: NodePort
  # clusterIP: None
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    run: test-microservice