How to expose an nginx Ingress at a public URL in EKS - amazon-web-services

I have ingress in my AWS EKS:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service-example
          servicePort: 80
The Ingress routes all traffic for the domain example.com to my-service-example.
The problem is that the Ingress has no public URL. How can I expose this Ingress to the public internet?

You need an ingress controller in your cluster to make a connection to your Kubernetes service (nginx, Istio, or the AWS Load Balancer Controller).
You don't say which ingress controller is installed in your cluster, so I recommend using AWS's official controller for EKS. To install it, follow this doc; afterwards, change your Ingress object by adding these lines to its annotations block:
annotations:
  kubernetes.io/ingress.class: alb
  alb.ingress.kubernetes.io/scheme: internet-facing
  alb.ingress.kubernetes.io/target-type: ip
With that, the AWS Load Balancer Controller talks to the AWS API and creates an Application Load Balancer in your public subnets, because the scheme annotation is internet-facing.
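Putting it together, the Ingress from the question with the ALB annotations might look like this sketch (assuming the AWS Load Balancer Controller is installed, and reusing the service name from the question):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service-example
          servicePort: 80
```

Once applied, kubectl get ingress should show the ALB's public DNS name in the ADDRESS column; you can then point example.com at it with a CNAME or Route 53 alias record.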

Related

AWS Network Load Balancer and TCP traffic with AWS Fargate

I want to expose a TCP-only service from my Fargate cluster to the public internet on port 80. To achieve this I want to use an AWS Network Load Balancer.
This is the configuration of my service:
apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app: myapp
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "30"
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Using the service from inside the cluster via its CLUSTER-IP works. When I apply my config with kubectl, the following happens:
- The Service is created in K8s
- An NLB is created in AWS
- The NLB gets status 'active'
- The VPC and other values for the NLB look correct
- A target group is created in AWS
- There are 0 targets registered
- I can't register targets because the group expects instances, which I do not have
- EXTERNAL_IP is
- A listener is not created automatically
Then I manually create a listener for port 80 and TCP. After some waiting, an EXTERNAL_IP is assigned to the service in AWS.
My problem: it does not work. The service is not reachable via the NLB's DNS name on port 80.
The in-tree Kubernetes Service LoadBalancer for AWS cannot be used with AWS Fargate.
You can use NLB instance targets with pods deployed to nodes, but not with Fargate.
However, you can now install the AWS Load Balancer Controller and use IP mode on your Service LoadBalancer, which also works on AWS Fargate.
kind: Service
apiVersion: v1
metadata:
  name: nlb-ip-svc
  annotations:
    # route traffic directly to pod IPs
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb-ip"
See Introducing AWS Load Balancer Controller and EKS Network Load Balancer - IP Targets
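Combining the asker's Service with the controller's IP mode might look like this sketch (the nlb-ip annotation is taken from the linked blog post; the name, selector, and ports are from the question):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  annotations:
    # NLB with IP targets registers pod IPs directly, so it works on Fargate
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb-ip"
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```

With IP targets the controller registers the pod IPs in the target group and creates the listener itself, so the manual target and listener setup described above is no longer needed.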

How can I assign a static IP to my EKS service?

I have an EKS cluster.
I created my service and exposed it using ingress-nginx.
The ingress-nginx external IP appears as a DNS name, not an IP.
How can I make my private domain point to my EKS service?
I know there is an annotation for using an AWS Elastic IP with Kubernetes,
but it's only available starting from Kubernetes 1.16, and EKS supports only up to 1.14.
So what are my options for assigning a static IP to my service and configuring my DNS to point at it?
When creating a LoadBalancer Service (which creates an actual load balancer), you can now specify preallocated Elastic IPs by ID via annotations.
Example:
apiVersion: v1
kind: Service
metadata:
  name: some-name
  annotations:
    # only a Network Load Balancer supports static IPs
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    # comma-separated list of Elastic IP allocation ids;
    # the length of the list must equal the number of subnets
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: eipalloc-abcd0000,eipalloc-abcd0001,eipalloc-abcd0002
  ...
spec:
  type: LoadBalancer
  ...
Assigning Static IP Address to AWS Load Balancer
The answer to this post still rings true in this case.
Amazon's load balancers scale network interfaces up and down as needed to handle the request load. This is why you are assigned a domain name instead of an IP address: your load balancer can have multiple physical interfaces, and their IP addresses change frequently.
If all you are trying to do is create a DNS name for your load balancer, any DNS provider can do this with a CNAME record pointing at the DNS name of the load balancer provisioned by AWS. If you are using Route 53, it is even simpler: create an A record with an alias to that DNS name.
I hope this helps. FWIW, it is not possible to get a single static IP address for your load balancer unless you deploy it in only one Availability Zone.
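If you are using Route 53, the alias record mentioned above corresponds to a change batch like this sketch (the record name, DNS name, and hosted zone ID are placeholders; note the AliasTarget HostedZoneId must be the load balancer's canonical hosted zone, not your own):

```json
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "app.example.com",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "ZLBEXAMPLE",
        "DNSName": "my-lb-1234567890.us-east-1.elb.amazonaws.com",
        "EvaluateTargetHealth": false
      }
    }
  }]
}
```

You would submit this with aws route53 change-resource-record-sets against your own hosted zone.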
You can now assign Elastic IPs or private IPs to an EKS Service load balancer.
An example for private IPs:
apiVersion: v1
kind: Service
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: foobar.internal.k8s.test.
    external-dns.alpha.kubernetes.io/ttl: '30'
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '600'
    service.beta.kubernetes.io/aws-load-balancer-internal: 'true'
    service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses: 10.10.1.15,10.20.1.15,10.30.1.30,10.40.1.30,10.50.1.45,10.50.1.45
  labels:
    app.kubernetes.io/instance: foobar
  name: foobar
  namespace: test-foobar
spec:
  ports:
  - name: foobar
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    internaldeployment: foobar
  type: LoadBalancer
  loadBalancerClass: service.k8s.aws/nlb
Documentation source:
https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/service/annotations/#private-ipv4-addresses
For Elastic IPs, the documentation is here:
https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/service/annotations/#eip-allocations
You can provision an Elastic IP on AWS and configure the Service with that IP.
Ex:
type: LoadBalancer
loadBalancerIP: xxxxx

How to create AWS Security Group that restricts inbound traffic only from Kubernetes on Google Cloud?

AWS RDS Security Groups are great for restricting to specific IP addresses.
My Google Cloud deployment is an Ingress at a static IP. The Ingress points to one or several nodes. Those nodes have non-static IP addresses.
How do I restrict AWS RDS to only those nodes?
(Restricting to the Ingress IP would not, and does not, work.)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: xxxxx
  annotations:
    kubernetes.io/ingress.global-static-ip-name: xxxxx-ip
  labels:
    app: xxxxx
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: xxxxx
          servicePort: 3000
Google Kubernetes Engine nodes are ephemeral and, if autoscaling is enabled, come and go. This means you cannot rely on the IP address of a node or collection of nodes. At this time, Google does not support assigning a static pool of addresses to a GKE cluster.
There is an open-source project, KubeIP, which can help you solve this. I have not used it on GKE; do your own research on its viability for your project.
KubeIP
Don't forget that you will be charged for allocated static IP addresses that are not assigned to a Google service (load balancer, Compute Engine, etc.).

Kubernetes pod level restricted access to other EC2 instances from AWS EKS nodes

I had an Elasticsearch DB running on an EC2 instance. The backend services that connect to it run on AWS EKS nodes.
To let the backend Kubernetes pods access the Elasticsearch DB, I added the allowed security groups to the EKS nodes, and it works fine.
But my question is: all other pods (not just the backend ones) running on the same node also have access to the Elasticsearch DB because of the underlying node's security groups. Is there a better, more secure way to handle this?
In this situation you could additionally use Kubernetes Network Policies to define rules specifying which traffic is allowed to the Elasticsearch DB from selected pods.
For instance, start by creating a default policy in the namespace that denies all egress traffic for all pods, like this:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Egress
and then allow outgoing traffic from specific pods (labeled role: db) to CIDR 10.0.0.0/24 on TCP port 5978:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
Please consult the official documentation for more information on NetworkPolicies.

Connect to different VPC inside of Kubernetes pod

I have two VPCs, VPC A and VPC B. One service runs in VPC B; the Kubernetes cluster is in VPC A. I am using kops in the AWS cloud, with VPC peering enabled between the two VPCs. I can connect to the service in VPC B from the Kubernetes deployment server host in VPC A, but I cannot connect to it from inside a Kubernetes pod; the connection times out. I searched the internet and found that iptables rules could work.
I have gone through this article:
https://ben.straub.cc/2015/08/19/kubernetes-aws-vpc-peering/
But it is not practical to manually SSH into the Kubernetes node servers and set iptables rules. I want this to be part of the deployment.
This is what my service looks like:
apiVersion: v1
kind: Service
metadata:
  name: test-microservice
  namespace: development
spec:
  # type: LoadBalancer
  type: NodePort
  # clusterIP: None
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: test-microservice