Access control plane from another EC2 instance - amazon-web-services

I am trying to access the Kubernetes API server (via kubectl) of the master node that is running on an EC2 instance. I want to do this from another EC2 instance running in a different VPC. What steps should I take to make this possible?
I already have the kubeconfig file on my second machine, but running kubectl gives me a connection error.
Edit: Both VPCs are private and have the same CIDR block.

If your two EC2 instances are in different VPCs, you can set up VPC peering between them.
If you want to expose your master and Kubernetes setup, you can use the public IP of the EC2 instance directly (if it has one) and kubectl will connect to the cluster over the internet.
Since your VPCs have the same CIDR range, you can also check out peering multiple VPCs with the same CIDR range if you are looking for that approach: Multiple VPC and Subnet with same CIDR blocks
Or: https://docs.aws.amazon.com/vpc/latest/peering/peering-configurations-partial-access.html#two-vpcs-peered-specific-cidr
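As a rough sketch of the peering setup with the AWS CLI (assuming non-overlapping CIDRs after following the articles above; the VPC IDs, route table IDs, peering connection ID, and CIDR blocks below are placeholders):

# Request a peering connection from the first VPC to the second
aws ec2 create-vpc-peering-connection --vpc-id vpc-0aaa111122223333a --peer-vpc-id vpc-0bbb444455556666b

# Accept it on the other side
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-0123456789abcdef0

# Route each VPC's traffic for the other VPC's CIDR through the peering connection
aws ec2 create-route --route-table-id rtb-0aaa111122223333a --destination-cidr-block 10.1.0.0/16 --vpc-peering-connection-id pcx-0123456789abcdef0
aws ec2 create-route --route-table-id rtb-0bbb444455556666b --destination-cidr-block 10.0.0.0/16 --vpc-peering-connection-id pcx-0123456789abcdef0

You would still need a security group rule on the master allowing the API server port (typically 6443, or 443 for EKS) from the other VPC's CIDR.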

If your EKS API server endpoint is private, create a peering connection between the VPCs and allow your second EC2 instance's private IP in the cluster's security group.
If your EKS API server endpoint is public, you can allow your second EC2 instance's public IP from the AWS console, in the EKS cluster's networking section.
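The public-endpoint restriction can also be applied from the CLI; a hedged sketch (the region, cluster name, and IP below are placeholders):

# Limit the public EKS endpoint to the second instance's public IP
aws eks update-cluster-config \
  --region us-east-1 \
  --name my-cluster \
  --resources-vpc-config endpointPublicAccess=true,publicAccessCidrs="203.0.113.5/32"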

Related

How is an EKS cluster accessible when deployed in a private subnet?

When deploying an EKS cluster, the best practice is to deploy the managed control plane in private subnets. In terms of accessibility, the default option is a public cluster endpoint, meaning that I can access it locally with the kubectl tool and an updated kubeconfig.
How am I able to access the cluster if it is deployed in private subnets with no inbound traffic? As per the documentation, AWS creates a managed endpoint that can access the cluster from within the AWS network.
What is the architecture behind it, how does it internally work? Is there some kind of a proxy (agent) being deployed (found aws-node)?
I have deployed my own EKS cluster, read the documentation, and tried to scrape for additional info.
The EKS networking setup you're describing restricts access to the API server to a private endpoint that's only accessible from within the VPC. So any Kubernetes API requests (kubectl commands) have to originate from within the VPC (public or private subnets). If you are doing this as a personal project, then you can do the following:
Create a bastion host in the public subnet of your VPC with a key pair. Launch this host with user data that installs kubectl and any other CLI tools you need.
Access the bastion host via SSH from your workstation to ensure it works as expected.
Check that the security group attached to your EKS control plane allows inbound traffic on port 443 from the public subnet. You can create a rule for this if one doesn't exist (see the sketch after these steps). This will enable communication between the bastion host in the public subnet and the cluster in the private subnets.
Access the bastion host and then use it to communicate with the cluster just as you would with your personal machine. For example, run aws eks --region <region> update-kubeconfig --name <name-of-your-cluster> to update your kubeconfig and then proceed to run kubectl commands.
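A minimal sketch of the security-group rule and the kubeconfig update with the AWS CLI (the security group ID, subnet CIDR, region, and cluster name are placeholders):

# Allow HTTPS from the public subnet to the control-plane security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 443 \
  --cidr 10.0.1.0/24

# On the bastion host, point kubectl at the cluster and test
aws eks update-kubeconfig --region us-east-1 --name my-cluster
kubectl get nodes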
Sidenote:
If this is for an enterprise project, you can also look into using AWS VPN or Direct Connect to access the VPC.
Other helpful resources:
https://aws.amazon.com/blogs/containers/de-mystifying-cluster-networking-for-amazon-eks-worker-nodes/
https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html#private-access

Is it possible to make AWS EKS nodes (EC2 instances) automatically get an IP from a specific subnet (on the same VPC or another VPC)?

We have an EKS cluster running in a VPC and we are thinking of extending this VPC or creating another VPC with a different subnet IP range. EKS nodes (EC2 instances) are running with multiple ENIs, that is, with multiple private IPs.
We wonder if it is possible to make these EC2 instances which serve as EKS nodes automatically get an IP from this new subnet, either within the current VPC or in the other VPC, when they are launched. If the subnet is in another VPC, should we have a VPC peering connection between the two VPCs? Can it be done with CloudFormation templates on EKS? What is the best practice here? Thanks.
The way to extend a VPC in EKS is to add a secondary CIDR block and configure the VPC CNI plugin to use the subnets created in that secondary CIDR block. The CNI is ultimately responsible for assigning the IP addresses available in the subnet CIDR to the pods.
To pick a valid CIDR range for the VPC extension and to configure the CNI, see the article below:
https://aws.amazon.com/premiumsupport/knowledge-center/eks-multiple-cidr-ranges/
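As a rough sketch of the custom-networking configuration that article describes (the ENIConfig name matches an availability zone; the subnet and security group IDs below are placeholders):

# Tell the VPC CNI (aws-node) to use ENIConfig-based custom networking
kubectl set env daemonset aws-node -n kube-system AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true
kubectl set env daemonset aws-node -n kube-system ENI_CONFIG_LABEL_DEF=topology.kubernetes.io/zone

# One ENIConfig per availability zone, pointing at a subnet in the secondary CIDR block
cat <<EOF | kubectl apply -f -
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: us-east-1a
spec:
  subnet: subnet-0123456789abcdef0
  securityGroups:
    - sg-0123456789abcdef0
EOF

With this in place, pods on nodes in that availability zone get their IPs from the secondary-CIDR subnet, while the nodes themselves keep their primary IPs from the original subnets; existing nodes typically need to be replaced for the change to take effect.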

Can a publicly accessible RDS instance be connected to privately via VPC peering?

I have a publicly accessible RDS instance that I want to connect to from an EKS cluster in a different VPC. I set up VPC peering, added cross routes for the VPC CIDRs, and added the EKS VPC CIDR to the RDS security group; however, there is no DB connection unless I add the NAT IP address of the EKS cluster (my worker nodes are in private subnets) to the inbound rules of the RDS security group.
It looks like, because the RDS instance was created as publicly accessible, its hostname always resolves to the public IP, so the connection from EKS goes from the public NAT EIP to the public RDS EIP. Is this how it has to be, or can it be changed? Does it mean there's no point in VPC peering because the connection will never be private? Ideally I want the traffic between EKS and RDS to be private and never leave the VPCs, or does AWS already route the traffic internally even though the connection happens through EIPs?
I just needed to enable the DNS settings on the VPC peering connection to allow resolution to the private IP: https://stackoverflow.com/a/44896732/1826109
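For reference, a hedged sketch of that setting via the AWS CLI (the peering connection ID is a placeholder; depending on which VPC resolves the RDS hostname, you may only need one of the two option blocks):

# Allow DNS queries across the peering connection to resolve to private IPs
aws ec2 modify-vpc-peering-connection-options \
  --vpc-peering-connection-id pcx-0123456789abcdef0 \
  --requester-peering-connection-options AllowDnsResolutionFromRemoteVpc=true \
  --accepter-peering-connection-options AllowDnsResolutionFromRemoteVpc=true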

AWS ECS docker container RDS integration

I have two VPCs in the same account: VPC-A has RDS installed, and VPC-B has services deployed through ECS with the EC2 launch type.
VPC-B has multiple subnets. Services deployed through ECS on EC2 can't connect to RDS; they keep getting the following error message ("Is the server running on host "....").
Whereas telnet to the RDS database port from an EC2 instance (E1) in a VPC-B subnet can connect to the database.
But the service fails to start when the same containers are deployed through ECS. When I start the container manually, it works (it is able to connect to the database).
I also set up a peering connection between the two VPCs, but the connection problem exists only when the container is started through the ECS EC2 deployment.
The dropdown for public IP shows "Disabled" and no other options. The subnets are public subnets.
Any help/thoughts would be much appreciated.
As per the AWS docs, tasks using the "awsvpc" network mode launch with a private IP only, and to interact with external services a NAT gateway needs to be attached to the subnet.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html#task-networking-considerations
The awsvpc network mode does not provide task ENIs with public IP addresses for tasks that use the EC2 launch type. To access the internet, tasks that use the EC2 launch type should be launched in a private subnet that is configured to use a NAT gateway.
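If you stay with awsvpc mode, a rough sketch of giving those private task ENIs a path out (the subnet, Elastic IP allocation, route table, and NAT gateway IDs below are placeholders):

# NAT gateway lives in a public subnet and uses an Elastic IP
aws ec2 create-nat-gateway \
  --subnet-id subnet-0123456789abcdef0 \
  --allocation-id eipalloc-0123456789abcdef0

# Default route from the private subnet's route table through the NAT gateway
aws ec2 create-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 0.0.0.0/0 \
  --nat-gateway-id nat-0123456789abcdef0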
"Auto assign public IP" mode is "Enabled" with "bridge" netowrking mode on on ECS EC2 launch.

AWS VPC and IP address of docker container

I have a docker container running application-1 on an EC2 machine in a private subnet of my AWS VPC. I have another EC2 machine in the public subnet of my VPC running application-2.
Application-1 sends RPC commands to application-2, and I have to whitelist the IP address of application-1 in application-2. The problem is that I don't know which IP address to whitelist.
Thanks
Consider changing your design to support private IP addresses within your VPC or subnet.
If that is not possible, then you need a discovery service. Look into Amazon ECS Service Discovery. This should provide what you need.
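A hedged sketch of what wiring that up with Cloud Map could look like (the namespace, service, cluster, and task definition names and all IDs/ARNs are placeholders; A records like this assume the task uses the awsvpc network mode, with bridge networking you would register SRV records instead):

# Create a private DNS namespace in the VPC (e.g. "local")
aws servicediscovery create-private-dns-namespace \
  --name local \
  --vpc vpc-0123456789abcdef0

# Register a discovery service; ECS keeps its DNS records in sync with task IPs
aws servicediscovery create-service \
  --name application-1 \
  --dns-config "NamespaceId=ns-0123456789abcdef0,DnsRecords=[{Type=A,TTL=60}]"

# Point the ECS service at that registry so application-2 can reach it
# at application-1.local instead of a whitelisted IP
aws ecs create-service \
  --cluster my-cluster \
  --service-name application-1 \
  --task-definition application-1:1 \
  --desired-count 1 \
  --service-registries "registryArn=arn:aws:servicediscovery:us-east-1:123456789012:service/srv-0123456789abcdef0"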