AWS ECS Docker container RDS integration

I have two VPCs in the same account: VPC-A (has RDS installed) and VPC-B (has services deployed through ECS on EC2).
VPC-B has multiple subnets. Services deployed through the ECS EC2 launch type cannot connect to RDS; they keep failing with the following error message ("Is the server running on host "....").
Meanwhile, telnet to the RDS database port from an EC2 instance (E1) in a VPC-B subnet connects to the database fine.
The server also fails to start when the same services are deployed through ECS, yet when I start the container manually it works (it is able to connect to the database).
I also set up a peering connection between the two VPCs, but the connection problem exists only when the container is started through the ECS EC2 deployment.
The dropdown for public IP shows "Disabled" with no other options. The subnets are public subnets.
Any help/thoughts will be highly appreciated.
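One way to narrow this down is to run the same reachability test from the container instance and from inside the ECS-launched container (a sketch; the RDS endpoint, port, and container ID are placeholders, and it assumes nc is available in the image):

    # From the EC2 container instance (E1) in VPC-B: raw TCP check against RDS.
    nc -zv mydb.xxxxxxxx.us-east-1.rds.amazonaws.com 5432

    # The same check from inside the ECS-launched container; if this fails
    # while the host succeeds, the task's network mode is the likely culprit.
    docker exec <container-id> nc -zv mydb.xxxxxxxx.us-east-1.rds.amazonaws.com 5432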

As per the AWS docs, the "awsvpc" network mode launches the task with a private IP, and a NAT gateway must be attached to the subnet for the task to interact with external services.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html#task-networking-considerations
"The awsvpc network mode does not provide task ENIs with public IP addresses for tasks that use the EC2 launch type. To access the internet, tasks that use the EC2 launch type should be launched in a private subnet that is configured to use a NAT gateway."
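A minimal sketch of wiring up that NAT gateway with the AWS CLI (the subnet, route table, and allocation IDs are placeholders for your own):

    # Allocate an Elastic IP and create a NAT gateway in a public subnet.
    aws ec2 allocate-address --domain vpc
    aws ec2 create-nat-gateway --subnet-id subnet-PUBLIC --allocation-id eipalloc-EXAMPLE

    # Send the private task subnet's outbound traffic through the NAT gateway.
    aws ec2 create-route --route-table-id rtb-TASKSUBNET \
        --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-EXAMPLE

Note that RDS in a peered VPC is not "the internet"; for that path the peering route and security groups matter, and the NAT gateway only covers genuinely external endpoints.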
"Auto assign public IP" mode is "Enabled" with "bridge" netowrking mode on on ECS EC2 launch.

Related

Connect MySQL DB on EC2 from Fargate Container application

I have a container application running on ECS Fargate (awsvpc network mode) and tried to connect to a MySQL database set up on an EC2 instance, but it is not working.
I can connect to the same database (on EC2) from my local machine with the same containerized application.
I have been trying hard to solve this issue; if you know the answer, please help.
Other things I have tried:
Security group inbound rule referencing the ECS service security group (I also tried opening all traffic to the EC2 instance); a sketch of this rule follows this list.
Running the ECS tasks in a private subnet and in a public subnet (the EC2 and Fargate apps are all in the same VPC).
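A minimal sketch of the security group rule described above, using the AWS CLI (both group IDs are placeholders; sg-EC2DB sits on the MySQL instance, sg-FARGATE is attached to the Fargate service):

    # Allow MySQL (3306) into the EC2 instance's security group,
    # sourced from the Fargate service's security group.
    aws ec2 authorize-security-group-ingress \
        --group-id sg-EC2DB \
        --protocol tcp --port 3306 \
        --source-group sg-FARGATE

With awsvpc tasks, the task ENI carries the service's security group, so referencing that group is more robust than whitelisting individual task IPs.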

How is an EKS cluster accessible when deployed in a private subnet?

When deploying an EKS cluster, the best practice is to deploy the managed control plane in private subnets. In terms of accessibility, the default option is a public cluster, meaning that I can access it locally with the kubectl tool and an updated kubeconfig.
How am I able to access the cluster if it is deployed in private subnets with no inbound traffic? As per the documentation, AWS creates a managed endpoint that can access the cluster from within the AWS network.
What is the architecture behind it, and how does it work internally? Is there some kind of a proxy (agent) being deployed (I found aws-node)?
What I have tried:
deployed my own EKS cluster
read the documentation
searched for additional info
The type of EKS networking you're setting up restricts access to the API server to a private endpoint that's only reachable from within the VPC. So any Kubernetes API requests (kubectl commands) have to originate from within the VPC (from public or private subnets). If you are doing this as a personal project, then you can do the following:
Create a bastion host in the public subnet of your VPC with a key pair. Launch this host with user data that installs kubectl and any other CLI tools you need.
Access the bastion host via SSH from your workstation to ensure it works as expected.
Check that the security group attached to your EKS control plane can receive 443 traffic from the public subnet. You can create a rule for this if one doesn't exist. This will enable communication between the bastion host in the public subnet and the cluster in the private subnets.
Access the bastion host and then use it to communicate with the cluster just as you would with your personal machine. For example, run aws eks --region <region> update-kubeconfig --name <name-of-your-cluster> to update your kubeconfig and then proceed to run kubectl commands.
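A condensed sketch of the security group and kubeconfig steps above with the AWS CLI (the security group IDs, region, and cluster name are placeholders):

    # Allow the bastion's security group to reach the cluster endpoint on 443.
    aws ec2 authorize-security-group-ingress \
        --group-id sg-CLUSTER \
        --protocol tcp --port 443 \
        --source-group sg-BASTION

    # On the bastion: point kubectl at the private endpoint and test.
    aws eks --region us-east-1 update-kubeconfig --name my-cluster
    kubectl get nodes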
Sidenote:
If this is for an enterprise project, you can also look into using AWS VPN or AWS Direct Connect to access the VPC.
Other helpful resources:
https://aws.amazon.com/blogs/containers/de-mystifying-cluster-networking-for-amazon-eks-worker-nodes/
https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html#private-access

Access control plane from another EC2 instance

I am trying to access kubectl on the master node that is running on an EC2 instance. I want to do this from another EC2 instance running in a different VPC. What steps should I take to make this possible?
I already have the kubeconfig file on my second machine, but running kubectl gives me a connection error.
Edit: Both VPCs are private and have similar CIDRs.
If your two EC2 instances are in different VPCs, you can set up VPC peering.
If you want to expose your master and K8s setup, you can directly use the public IP of the EC2 instance (if one exists), and kubectl will connect to the K8s cluster over the internet.
You can also check out peering multiple VPCs with the same CIDR range if that is the route you want: Multiple VPC and Subnet with same CIDR blocks
Or: https://docs.aws.amazon.com/vpc/latest/peering/peering-configurations-partial-access.html#two-vpcs-peered-specific-cidr
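A minimal sketch of that peering setup with the AWS CLI (the VPC IDs, route table IDs, and CIDR blocks are placeholders):

    # Request and accept the peering connection between the two VPCs.
    aws ec2 create-vpc-peering-connection --vpc-id vpc-AAA --peer-vpc-id vpc-BBB
    aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-EXAMPLE

    # Add a route in each VPC's route table pointing at the other VPC's CIDR.
    aws ec2 create-route --route-table-id rtb-AAA \
        --destination-cidr-block 10.1.0.0/16 --vpc-peering-connection-id pcx-EXAMPLE
    aws ec2 create-route --route-table-id rtb-BBB \
        --destination-cidr-block 10.0.0.0/16 --vpc-peering-connection-id pcx-EXAMPLE

Keep in mind that peering cannot be established between VPCs with overlapping CIDR blocks, which is why the same-CIDR links above are relevant to your edit.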
If your EKS API server is private, create a peering connection between the VPCs and allow your second EC2 server's private IP.
If your EKS API server is public, you can allow your second EC2 instance's public IP from the AWS console, in the EKS security or networking section.
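For the public-endpoint case, the allowlist can also be set from the CLI rather than the console (the region, cluster name, and IP are placeholders):

    # Limit the public EKS endpoint to the second instance's public IP.
    aws eks update-cluster-config --region us-east-1 --name my-cluster \
        --resources-vpc-config endpointPublicAccess=true,publicAccessCidrs="203.0.113.10/32"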

Auto-configure private DNS resolution in a VPC network for EC2 instances launched from an AMI

I have an AMI template server in EC2 on AWS which runs my server.
Naturally, it runs in a single VPC network.
I want to be able to connect to any of my servers over SSH using hostname DNS resolution once the server is running.
For example, I have gateway, server-01, and server-02 in my EC2 instance list.
Once I launch one more server from my AMI (server-03), I need to connect to it from the gateway server using ssh server-03.
How can I do this?
I would suggest using Terraform to manage your EC2 instances. This will let you automate many things you would normally do manually.
You can have a private or public hosted zone attached to your VPC (public would require a bit more work).
Then in Terraform, you can have the following (a CLI sketch of the resulting record follows this list):
Your EC2 instance creation.
A tfvars file containing the variables for all your EC2 instances.
Your hosted zone attaching the EC2 private IP to a DNS name.
An output afterwards to print your new EC2 instance with the private DNS name you can SSH to.
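Outside of Terraform, a minimal sketch of the record such a setup would manage, using the AWS CLI (the hosted zone ID, instance ID, and domain are placeholders):

    # Look up the new instance's private IP.
    IP=$(aws ec2 describe-instances --instance-ids i-EXAMPLE \
        --query 'Reservations[0].Instances[0].PrivateIpAddress' --output text)

    # Upsert an A record for it in the VPC's private hosted zone.
    aws route53 change-resource-record-sets --hosted-zone-id ZEXAMPLE \
        --change-batch "{\"Changes\":[{\"Action\":\"UPSERT\",\"ResourceRecordSet\":
          {\"Name\":\"server-03.internal.example.\",\"Type\":\"A\",\"TTL\":300,
           \"ResourceRecords\":[{\"Value\":\"$IP\"}]}}]}"

With the private hosted zone associated with the VPC (and DNS resolution enabled on the VPC), ssh server-03.internal.example from the gateway host will then resolve to the new instance.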

AWS VPC and IP address of docker container

I have a Docker container running application-1 on an EC2 machine in a private subnet of my AWS VPC. I have another EC2 machine in the public subnet of my VPC running application-2.
Application-1 sends RPC commands to application-2, and I have to whitelist the IP address of application-1 in application-2. The problem is that I don't know which IP address to whitelist.
Thanks
Consider changing your design to whitelist private IP addresses within your VPC or subnet.
If that is not possible, then you need a discovery service. Look into Amazon ECS Service Discovery. This should provide what you need.
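A minimal sketch of the service discovery setup with the AWS CLI (the namespace, VPC, cluster, subnet, and ARNs are placeholders, and it assumes the task uses the awsvpc network mode so A records can be used):

    # Create a private DNS namespace for the VPC.
    aws servicediscovery create-private-dns-namespace \
        --name internal.local --vpc vpc-EXAMPLE

    # Create a discovery service that registers task IPs as A records.
    aws servicediscovery create-service --name application-1 \
        --dns-config 'NamespaceId=ns-EXAMPLE,DnsRecords=[{Type=A,TTL=60}]' \
        --health-check-custom-config FailureThreshold=1

    # Attach the registry to the ECS service; each task's IP is then kept
    # registered automatically, and application-2 can resolve
    # application-1.internal.local instead of whitelisting a fixed IP.
    aws ecs create-service --cluster my-cluster --service-name application-1 \
        --task-definition app1:1 --desired-count 1 \
        --network-configuration 'awsvpcConfiguration={subnets=[subnet-PRIVATE],securityGroups=[sg-APP1]}' \
        --service-registries registryArn=arn:aws:servicediscovery:us-east-1:123456789012:service/srv-EXAMPLE

Application-2 can then resolve the service name at request time instead of pinning a container IP.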