AWS private EKS: how to expose a service to the public

I created a private EKS cluster with Terraform, following the guide at this page: https://tf-eks-workshop.workshop.aws/500_eks-terraform-workshop.html. The network architecture is shown in the diagram below.
I then deployed my web application in the EKS cluster. The application only accesses AWS resources, so it works as expected. The problem is that the cluster sits in a private VPC and an internet-facing ALB can't be attached to it, so I'm not able to access my application from a public web browser. Is there any idea how to implement this?
I've set up VPC peering between the EKS VPC and the CI/CD VPC.
For the application deployment part, I created a Service of type NodePort.
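Roughly like the following (names and ports are placeholders, not my real ones):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-app              # placeholder name
spec:
  type: NodePort
  selector:
    app: web-app             # must match the Deployment's pod labels
  ports:
    - port: 80               # Service port inside the cluster
      targetPort: 8080       # container port the app listens on
      nodePort: 30080        # optional; must fall in the NodePort range (default 30000-32767)
```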

Unfortunately the guide link doesn't open for me. From the VPC diagram it seems you have a private subnet within your VPC for EKS, so you can do the following:
Create a private (internal) Ingress resource using the ALB Ingress Controller (for routing traffic based on hostname to the services within the cluster): https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html
Then route traffic from Route 53 to the internal load balancer. If it still isn't reachable, attach a load balancer in the default VPC and pass it on to the internal ingress controller. Logically that should work.
EC2 instances in the same VPC can talk to each other if their security groups allow it, so by that logic forwarding traffic from the public subnet to the private one shouldn't be an issue.
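A rough sketch of such an internal Ingress, assuming the ALB Ingress / AWS Load Balancer Controller from the link above is installed; the service name and host are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-app
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internal        # internal ALB, reachable only from inside the VPC (or peered VPCs)
    alb.ingress.kubernetes.io/target-type: instance   # registers the worker nodes on the Service's NodePort
spec:
  rules:
    - host: web-app.internal.example.com              # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-app
                port:
                  number: 80
```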

Related

AWS EC2 Internet access from behind Load Balancer

Using Terraform to set up a VPC with two EC2s in private subnets. The setup needs SSH access to the EC2s to install package updates from the Internet and install the application software. To do this there is an IGW and a NAT-GW in a public subnet. Both EC2s can access the Internet at this point, as both private subnets route to the NAT-GW. Terraform and SSH access to the private subnets are done via Client VPN.
One of the EC2s is going to host a web service, so a Classic mode Load Balancer is added and configured to target the web server EC2. I'm using Classic mode because I can't find a way to make Terraform build Application mode LBs. The Load Balancer requires the instance to be in a subnet that routes to the IGW, so its subnet is changed from routing to the NAT-GW to routing to the IGW. At this point the Load Balancer comes online with the EC2 responding, and the public Internet can access the web service using the DNS end point supplied for the LB.
But now the web server EC2 can no longer access the Internet itself. I can't curl google.com or get package updates.
I would like to find a way to let the EC2 access the Internet from behind the LB and not use CloudFront at this time.
I would like to keep the EC2 in a private subnet because a public subnet causes the EC2 to have a public IP address, and I don't want that.
Looking for a way to make the LB work without switching subnets, as that would make the EC2 web service unavailable when doing updates.
Not wanting any iptables or firewalld tricks. I would really like an AWS solution that is distro agnostic.
A few points/clarifications about the problems you're facing:
Instances on a public subnet do not need a NAT Gateway. They can initiate outbound requests to the internet via IGW. NGW is for allowing outbound IPv4 connections from instances in private subnets.
The load balancer itself needs to be on a public subnet. The instances that the LB will route to do not. They can be in the same subnet or different subnets, public or private, as long as traffic is allowed through security groups.
You can create instances without a public IP, on a public subnet. However, they won't be able to receive or send traffic to the internet.
Terraform supports ALBs. The resource is aws_lb with load_balancer_type set to "application" (this is the default option).
That said, the public-private configuration you want is entirely possible.
1. Your ALB and NAT Gateway need to be on the public subnet, and EC2 instances on the private subnet.
2. The private subnet's route table needs a route to the NGW, to facilitate outbound connections.
3. EC2 instances' security group needs to allow traffic from the ALB's security group.
It sounds like you got steps 1 and 2 working, so the connection from the ALB to the EC2 instances is what you have to work on. See the documentation page here as well: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario2.html
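A minimal Terraform sketch of those three steps might look like this; all resource names and references are placeholders, not your actual configuration:

```hcl
# Step 1: ALB in the public subnets (the NAT Gateway also lives there).
resource "aws_lb" "web" {
  name               = "web-alb"
  load_balancer_type = "application"
  subnets            = [aws_subnet.public_a.id, aws_subnet.public_b.id]
  security_groups    = [aws_security_group.alb.id]
}

# Step 2: default route of the private subnet goes through the NAT Gateway.
resource "aws_route" "private_egress" {
  route_table_id         = aws_route_table.private.id
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.public.id
}

# Step 3: instances accept HTTP only from the ALB's security group.
resource "aws_security_group_rule" "from_alb" {
  type                     = "ingress"
  from_port                = 80
  to_port                  = 80
  protocol                 = "tcp"
  security_group_id        = aws_security_group.web.id
  source_security_group_id = aws_security_group.alb.id
}
```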

How is eks cluster accessible when deployed in a private subnet?

When deploying an EKS cluster, the best practice is to deploy the managed control plane in private subnets. In terms of accessibility, the default option is a public cluster endpoint, meaning that I can access it locally with the kubectl tool and an updated kubeconfig.
How am I able to access the cluster if it is deployed in private subnets with no inbound traffic? As per the documentation, AWS creates a managed endpoint that can access the cluster from within the AWS network.
What is the architecture behind it, how does it internally work? Is there some kind of a proxy (agent) being deployed (found aws-node)?
So far I have deployed my own EKS cluster, read the documentation, and tried to scrape for additional info.
The type of EKS networking you're setting up is configured to restrict access to the API server with a private endpoint that's only accessible from within the VPC. So any Kubernetes API requests (kubectl commands) have to originate from within the VPC (public or private subnets). If you are doing this as a personal project, then you can do the following:
Create a bastion host in the public subnet of your VPC with a key pair. Launch this host with user data that installs kubectl and any other CLI tools you need.
Access the bastion host via SSH from your workstation to ensure it works as expected.
Check that the security group attached to your EKS control plane allows inbound traffic on port 443 from the public subnet. You can create a rule for this if one doesn't exist. This will enable communication between the bastion host in the public subnet and the cluster in the private subnets.
Access the bastion host and then use it to communicate with the cluster just as you would with your personal machine. For example, run aws eks --region <region> update-kubeconfig --name <name-of-your-cluster> to update your kubeconfig and then proceed to run kubectl commands.
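For illustration, the commands involved might look like this; the security group rule is added from a machine with AWS CLI credentials, the rest runs on the bastion host, and the IDs, CIDR, region, and cluster name are placeholders:

```sh
# Allow the public subnet (where the bastion lives) to reach the EKS API server on 443
aws ec2 authorize-security-group-ingress \
  --group-id <eks-cluster-sg-id> \
  --protocol tcp --port 443 \
  --cidr <public-subnet-cidr>

# On the bastion host: point kubectl at the cluster and verify access
aws eks --region <region> update-kubeconfig --name <name-of-your-cluster>
kubectl get nodes
```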
Sidenote:
If this is for an enterprise project, you can also look into using AWS VPN or DirectConnect to access the VPC.
Other helpful resources:
https://aws.amazon.com/blogs/containers/de-mystifying-cluster-networking-for-amazon-eks-worker-nodes/
https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html#private-access

How to have ECS Fargate scheduled job access API with a ip whitelist policy?

I'm trying to set up a scheduled task with ECS Fargate. The task is dockerized and will be run through AWS ECS with Fargate. Unfortunately, the service I want to run needs to access a partner's API where the IP needs to be whitelisted. I see that for each execution of the task with Fargate, a new ENI with a different IP is assigned.
How is it possible to assign a static IP to an AWS ECS Fargate task?
In order to assign a static IP to your AWS Fargate task, you will have to create a static IP address (AWS calls this an Elastic IP) that will serve as the origin address of traffic leaving your VPC, from the point of view of hosts outside the network. To implement this:
You need the following:
A VPC
1x Private Subnet
1x Public Subnet
1x Internet Gateway attached to the VPC (routed to from the public subnet)
An Elastic IP (will serve as the static IP for all resources inside the private subnets)
1x NAT Gateway in the public subnet, using the Elastic IP
A route table attached to the private subnet with route 0.0.0.0/0 pointing to the NAT Gateway
A route table attached to the public subnet with route 0.0.0.0/0 pointing to the Internet Gateway
You will then need to make sure that:
Your ECS Fargate task uses the VPC mentioned above
The private subnet(s) mentioned above are selected for the service/task placement
If my explanation is still confusing, you could try giving this guide a read.
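If you happen to be managing this with Terraform, a rough sketch of the NAT pieces could look like the following; resource names are placeholders:

```hcl
# The Elastic IP that partners will see as your static origin address.
resource "aws_eip" "nat" {
  domain = "vpc"   # older provider versions use `vpc = true` instead
}

# The NAT Gateway must sit in the public subnet and use the Elastic IP.
resource "aws_nat_gateway" "this" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public.id
}

# The private subnet routes all outbound traffic through the NAT Gateway.
resource "aws_route" "private_to_nat" {
  route_table_id         = aws_route_table.private.id
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.this.id
}
```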

How to expose my app outside the cluster or VPC with my internal load balancer in a private EKS cluster

I have a question about AWS EKS.
I have an EKS cluster (private subnets) with managed worker nodes (private subnets),
and I deployed an nginx Deployment with three replicas and created an internal LoadBalancer Service for it.
I can curl the service
and get the expected output.
Problem: how do I expose my app outside the cluster or VPC?
Thanks
You can have your EKS nodes in private subnets of the VPC, but you also need public subnets for exposing your pods/containers.
So ideally you need to create a LoadBalancer Service for your nginx deployment (a minimal sketch follows the link below).
The blog below helped me during my initial EKS setup; hope it helps you too:
Nginx ingress controller with NLB
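For reference, a minimal public-facing LoadBalancer Service for the nginx deployment might look like this; the annotation assumes you want an NLB, and the names are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb   # provision an NLB instead of a Classic ELB
spec:
  type: LoadBalancer
  selector:
    app: nginx        # must match the nginx Deployment's pod labels
  ports:
    - port: 80
      targetPort: 80
```

Note that the public subnets typically need the kubernetes.io/role/elb tag so that EKS knows where to place the public load balancer.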
You can add an AWS Application Load Balancer to your EKS cluster and have an Ingress targeting your service.
https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html
Deploy the AWS Load Balancer (ALB Ingress) Controller in your cluster.
Add a new Ingress pointing at your service.
Remember to set alb.ingress.kubernetes.io/scheme: internet-facing, as you want to expose your service to the public.
You can get the DNS name of the new Ingress in the AWS console (EC2 / Load Balancers) or by describing the Ingress using kubectl.
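A minimal internet-facing Ingress along those lines might look like this; the service name is a placeholder:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing   # public ALB placed in the tagged public subnets
    alb.ingress.kubernetes.io/target-type: ip           # routes directly to pod IPs via the VPC CNI
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 80
```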

Migrate Pods from different EC2 hosts

I am new to AWS, Kubernetes, EKS, and App Mesh, but have done DevOps in previous roles.
I am taking over a K8s cluster that uses EKS and found that we set up a NAT gateway so that egress traffic goes out through a single IP (we need that for whitelisting, as a third-party external service requires it). Pods hosted in a private subnet work fine.
But I found that pods hosted on a public subnet skip the NAT gateway; they use the node's public (IPv4) address for outbound calls, which doesn't work for us because it bypasses the single NAT gateway IP.
So I have a few questions:
How do we migrate pods from public-subnet hosts to private-subnet hosts?
Should we use nodeSelector or node affinity? Does labeling the nodes work?
I am not sure why we have nodes in a public subnet, but we followed this guide: https://docs.aws.amazon.com/eks/latest/userguide/create-public-private-vpc.html
If we do choose to be on fully private subnets, can we make an exception so that some pods expose HTTP endpoints for ingress traffic while still running on private subnets?
What do you recommend when a pod/container needs to use the NAT gateway for egress traffic but also exposes HTTP endpoints for ingress traffic?
Note that currently our EKS cluster endpoint is set to all public; should we move to public-and-private mode?
Thanks for all the answers in advance!
How do we migrate pods from public-subnet hosts to private-subnet hosts? Should we use nodeSelector or node affinity? Does labeling the nodes work?
Yes. Use node affinity, which works like a more expressive nodeSelector. You can do a rolling change by updating whatever resource you are using to manage your pods (i.e., Deployment, StatefulSet, DaemonSet, etc.). If you configure it correctly, the next time your pods start they will be scheduled onto the private-subnet hosts.
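One hypothetical way to do that: there is no built-in "private subnet" node label, so label the private-subnet nodes yourself and then require that label in the pod template via node affinity. The label key/value and names below are made up for illustration:

```yaml
# First, label the nodes that sit in private subnets, e.g.:
#   kubectl label nodes <private-node-name> network=private
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: network          # the custom label added above
                    operator: In
                    values: ["private"]
      containers:
        - name: my-app
          image: my-app:latest            # placeholder image
```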
I am not sure why we have Nodes in a public subnet, but we followed this guide:
https://docs.aws.amazon.com/eks/latest/userguide/create-public-private-vpc.html
The guide says public subnet so it makes sense that there is one.
If we do choose to be on fully private subnets, can we make an exception so that some pods expose HTTP endpoints for ingress traffic while still running on private subnets?
Yes! You can create an externally facing load balancer (ALB, NLB, or ELB). These can also be managed by Kubernetes if you use the Service type LoadBalancer. You'll need the appropriate annotations in your Service definitions to get what you want.
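For example, with the AWS Load Balancer Controller installed, annotations along these lines make the Service's load balancer internet-facing; the values are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip   # register pod IPs directly
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```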
What do you recommend when a pod/container needs to use the NAT gateway for egress traffic but also exposes HTTP endpoints for ingress traffic?
Use an externally facing load balancer (the Kubernetes Service type LoadBalancer) that forwards incoming traffic to the pods in your private subnets, and use AWS NAT gateways for outgoing internet traffic.
Disclaimer: This is just a recommendation, there are other combinations and alternatives.