How to connect to an EKS Fargate instance from a local machine

I have an AWS EKS cluster with a Fargate profile.
The cluster is in private subnets, in a VPC that is not publicly accessible.
I can use the AWS CLI from my local machine, and I'd like to connect to the Fargate pods from my local machine.

Related

How is an EKS cluster accessible when deployed in a private subnet?

When deploying an EKS cluster, the best practice is to deploy the managed control plane in private subnets. In terms of accessibility, the default option is a public cluster endpoint, meaning that I can access it locally with the kubectl tool and an updated kubeconfig.
How am I able to access the cluster if it is deployed in private subnets with no inbound traffic? As per the documentation, AWS creates a managed endpoint through which the cluster can be accessed from within the AWS network.
What is the architecture behind it, and how does it work internally? Is there some kind of proxy (agent) being deployed (I found aws-node)?
What I have tried: deploying my own EKS cluster, reading the documentation, and scraping for additional info.
The type of EKS networking you're setting up restricts access to the API server to a private endpoint that is reachable only from within the VPC. So any Kubernetes API requests (kubectl commands) have to originate from within the VPC (public or private subnets). If you are doing this as a personal project, you can do the following (a consolidated sketch of the commands follows these steps):
1. Create a bastion host in the public subnet of your VPC with a key pair. Launch this host with user data that installs kubectl and any other CLI tools you need.
2. Access the bastion host via SSH from your workstation to ensure it works as expected.
3. Check that the security group attached to your EKS control plane accepts traffic on port 443 from the public subnet. You can create a rule for this if one doesn't exist. This enables communication between the bastion host in the public subnet and the cluster in the private subnets.
4. Access the bastion host and use it to communicate with the cluster just as you would from your personal machine. For example, run aws eks --region <region> update-kubeconfig --name <name-of-your-cluster> to update your kubeconfig, then proceed to run kubectl commands.
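A minimal sketch of those steps as commands, assuming an Amazon Linux 2 bastion and hypothetical security group IDs:

#!/bin/bash
# Bastion user data: install kubectl (the AWS CLI ships with Amazon Linux 2).
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
install -m 0755 kubectl /usr/local/bin/kubectl

# From your workstation: allow 443 from the bastion's security group to the
# cluster security group (both IDs are hypothetical).
aws ec2 authorize-security-group-ingress \
  --group-id sg-0clusterXXXXXXXXX \
  --protocol tcp --port 443 \
  --source-group sg-0bastionXXXXXXXXX

# On the bastion: update the kubeconfig and verify access.
aws eks --region <region> update-kubeconfig --name <name-of-your-cluster>
kubectl get pods -A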
Sidenote:
If this is for an enterprise project, you can also look into using an AWS VPN or AWS Direct Connect to access the VPC.
Other helpful resources:
https://aws.amazon.com/blogs/containers/de-mystifying-cluster-networking-for-amazon-eks-worker-nodes/
https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html#private-access

API Gateway is not able to connect to an EKS cluster running in a private subnet

I have a use case where I need to deploy the EKS cluster in private subnets and access it through API Gateway.
Currently, if I deploy the EKS cluster in a public subnet, access works fine. However, it does not work when the EKS cluster is deployed into private subnets.
API Gateway is configured with a VPC link to access the EKS cluster securely.
A Network Load Balancer is configured to connect to the EKS cluster nodes.
Please let me know if there is anything that I am missing here.
Thanks,
Avinash
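For reference, the usual pattern for this setup is an internal NLB fronting the pods, which the VPC link then targets; an internet-facing NLB that worked for the public-subnet deployment won't behave the same once the nodes move to private subnets. A minimal sketch of an internal NLB-backed Service, with hypothetical names and using the in-tree cloud-provider annotations:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-api                 # hypothetical service name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: my-api                # hypothetical pod label
  ports:
    - port: 80
      targetPort: 8080
EOF

The VPC link would then be created against the internal NLB that this Service provisions.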

EC2 Instance not available in ECS

I have created an EC2 instance via Terraform with the following configuration:
The EC2 instance is using the latest Amazon ECS-Optimized Amazon Linux 2 AMI.
The instance is sitting in a private subnet, with a route to a NAT gateway; internet connectivity tested fine.
SG rules are configured correctly.
The EC2 instance profile is using AmazonEC2ContainerServiceforEC2Role.
EC2 user data is configured (with my cluster name) with:
echo ECS_CLUSTER=my-cluster-name >> /etc/ecs/ecs.config
When I go to my ECS cluster, no instances show in the ECS Instances section of the console.
Is there anything else I'm missing as to why this instance can't register with the cluster?
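One thing to double-check: AmazonEC2ContainerServiceforEC2Role is a managed policy, so the instance profile must wrap an IAM role that has that policy attached. Beyond that, a few checks to run on the instance itself (the paths and the introspection port are standard on the ECS-optimized AMI):

# Confirm the agent picked up the cluster name.
cat /etc/ecs/ecs.config

# Confirm the instance role is actually delivering credentials (IMDSv1 shown).
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/

# Ask the agent what cluster and container instance it thinks it registered as.
curl -s http://localhost:51678/v1/metadata

# Check the agent's service status and logs for registration errors.
sudo systemctl status ecs
tail -n 50 /var/log/ecs/ecs-agent.log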

AWS ECS docker container RDS integration

I have two VPCs in the same account: VPC-A (which has RDS installed) and VPC-B (which has services deployed through the ECS EC2 launch type).
VPC-B has multiple subnets. Services deployed through the ECS EC2 launch type can't reach RDS; they keep getting the following error message ("Is the server running on host "....").
Whereas telnet to the RDS database port from an EC2 instance (E1) in a VPC-B subnet can connect to the database.
But the server fails to start when the same services are launched through ECS. When the container is started manually, it works (it is able to connect to the database).
I also set up a peering connection between the two VPCs, but the connection problem exists only when the container is started through the ECS EC2 deployment.
The dropdown for "Auto-assign public IP" shows "Disabled" and no other options. The subnets are public subnets.
Any help/thoughts will be highly appreciated.
As per the AWS docs, "awsvpc" mode launches the task with a private IP, and to interact with external services a NAT gateway needs to be attached to the subnet:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html#task-networking-considerations
"The awsvpc network mode does not provide task ENIs with public IP addresses for tasks that use the EC2 launch type. To access the internet, tasks that use the EC2 launch type should be launched in a private subnet that is configured to use a NAT gateway."
The "Auto-assign public IP" mode is "Enabled" with "bridge" networking mode on the ECS EC2 launch.

Unable to connect to my AWS ElastiCache cluster

I am not able to connect to my AWS ElastiCache cluster from my local machine.
Is it possible to connect to the cluster from my local machine?
You cannot connect to an ElastiCache cluster directly from your local machine; it is only reachable from within its VPC, for example from EC2 instances.
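That said, for development you can reach the cluster by tunnelling through an EC2 instance in the same VPC. A sketch with hypothetical hostnames, assuming a Redis cluster:

# Forward local port 6379 through a bastion in the ElastiCache VPC.
ssh -f -N -L 6379:my-redis.abc123.0001.use1.cache.amazonaws.com:6379 \
    ec2-user@bastion.example.com

# Then connect locally as if the cluster were running on localhost.
redis-cli -h 127.0.0.1 -p 6379 ping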