AWS VPC and IP address of docker container

I have a docker container running application-1 on an EC2 machine in a private subnet of my AWS VPC. I have another EC2 machine in the public subnet of my VPC running application-2.
Application-1 sends an RPC command to application-2, and I have to whitelist the IP address of application-1 in application-2. The problem is that I don't know what IP address to whitelist.
Thanks

Consider changing your design to support private IP addresses within your VPC or subnet.
If that is not possible, then you need a discovery service. Look into Amazon ECS Service Discovery. This should provide what you need.
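For example, rather than whitelisting a single address, you could allow traffic from application-1's security group into application-2's security group, so the rule keeps working even if the container host's private IP changes (the security group IDs and port below are placeholders):

    # Allow application-1's security group to reach application-2 on its RPC port (port 8080 is an assumption)
    aws ec2 authorize-security-group-ingress \
        --group-id sg-app2-placeholder \
        --protocol tcp \
        --port 8080 \
        --source-group sg-app1-placeholder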

Related

How is an EKS cluster accessible when deployed in a private subnet?

When deploying an EKS cluster, the best practice is to deploy the managed control plane in private subnets. In terms of accessibility, the default option is a public cluster endpoint, meaning that I can access it locally with the kubectl tool and an updated kubeconfig.
How am I able to access the cluster if it is deployed in private subnets with no inbound traffic? As per the documentation, AWS creates a managed endpoint that can access the cluster from within the AWS network.
What is the architecture behind it, and how does it work internally? Is there some kind of proxy (agent) being deployed (I found aws-node)?
deployed my own EKS cluster
read the documentation
tried to scrape for additional info
The type of EKS networking you're setting up is configured to restrict access to the API server with a private endpoint that's only accessible from within the VPC. So any Kubernetes API requests (kubectl commands) have to originate from within the VPC (public or private subnets). If you are doing this as a personal project, then you can do the following:
Create a bastion host in the public subnet of your VPC with a key pair. Launch this host with user data that installs kubectl and any other CLI tools you need.
Access the bastion host via SSH from your workstation to ensure it works as expected.
Check that the security group attached to your EKS control plane can receive traffic on port 443 from the public subnet. You can create a rule for this if one doesn't exist. This will enable communication between the bastion host in the public subnet and the cluster in the private subnets.
Access the bastion host and then use it to communicate with the cluster just as you would with your personal machine. For example, run aws eks --region <region> update-kubeconfig --name <name-of-your-cluster> to update your kubeconfig and then proceed to run kubectl commands.
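A minimal sketch of those last two steps (the security group ID, CIDR, region, and cluster name below are placeholders):

    # Allow HTTPS from the public subnet's CIDR to the EKS control plane security group (ID and CIDR are assumptions)
    aws ec2 authorize-security-group-ingress \
        --group-id sg-eks-control-plane \
        --protocol tcp \
        --port 443 \
        --cidr 10.0.0.0/24

    # From the bastion host: point kubectl at the cluster's private endpoint and verify access
    aws eks --region us-east-1 update-kubeconfig --name my-cluster
    kubectl get nodes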
Sidenote:
If this is for an enterprise project, you can also look into using AWS VPN or Direct Connect to access the VPC.
Other helpful resources:
https://aws.amazon.com/blogs/containers/de-mystifying-cluster-networking-for-amazon-eks-worker-nodes/
https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html#private-access

Access control plane from another EC2 instance

I am trying to use kubectl against the master node that is running on an EC2 instance. I want to do this from another EC2 instance running in a different VPC. What steps should I take to make this possible?
I already have the kubeconfig file on my second machine, but running kubectl gives me a connection error.
Edit: Both VPCs are private and have the same CIDR.
If your two EC2 instances are in different VPCs, you can set up VPC peering.
If you want to expose your master and Kubernetes setup, you can directly use the EC2 instance's public IP (if one exists) and kubectl will connect to the cluster over the internet.
If you are looking to peer multiple VPCs that share the same CIDR range, see: Multiple VPC and Subnet with same CIDR blocks
Or: https://docs.aws.amazon.com/vpc/latest/peering/peering-configurations-partial-access.html#two-vpcs-peered-specific-cidr
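For the basic peering path, a minimal CLI sketch (the VPC, route table, and peering connection IDs are placeholders; note that peering requires non-overlapping CIDRs, which is why the links above matter for the same-CIDR case):

    # Request and accept a peering connection between the two VPCs (IDs are placeholders)
    aws ec2 create-vpc-peering-connection --vpc-id vpc-aaaa1111 --peer-vpc-id vpc-bbbb2222
    aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-1234abcd

    # Add a route for the other VPC's CIDR through the peering connection (repeat in both VPCs)
    aws ec2 create-route --route-table-id rtb-aaaa1111 --destination-cidr-block 10.1.0.0/16 --vpc-peering-connection-id pcx-1234abcd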
If your EKS API server endpoint is private, create peering between the VPCs and allow your second EC2 instance's private IP.
If your EKS API server endpoint is public, you can allow your second EC2 instance's public IP from the AWS console, in the EKS cluster's networking section.
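For the public case, the same console step can also be done from the CLI by restricting which CIDRs may reach the public endpoint (the cluster name and IP below are placeholders):

    # Allow only the second instance's public IP to reach the public EKS API endpoint
    aws eks update-cluster-config \
        --name my-cluster \
        --resources-vpc-config publicAccessCidrs="203.0.113.10/32"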

How to have an ECS Fargate scheduled job access an API with an IP whitelist policy?

I'm trying to set up a scheduled task with ECS Fargate. The task is dockerized and will be run through AWS ECS with Fargate. Unfortunately, the service I want to run needs to access a partner's API where the calling IP must be whitelisted. I see that for each execution of the task with Fargate, a new ENI with a different IP is assigned.
How is it possible to assign a static IP to an AWS ECS Fargate task?
To assign a static IP to your AWS Fargate task, you will have to create a static IP address (AWS calls this an Elastic IP) that will serve as the source address of traffic leaving your VPC, from an outside network's point of view. To implement this:
You need the following:
A VPC
1x Private Subnet
1x Public Subnet
1x Internet Gateway attached to the VPC
An Elastic IP (will serve as the static IP of all resources inside the private subnets)
1x NAT Gateway in the public subnet, associated with the Elastic IP
A route table attached to the private subnet with route 0.0.0.0/0 pointing to the NAT Gateway
A route table attached to the public subnet with route 0.0.0.0/0 pointing to the Internet Gateway
You will then need to make sure that:
Your ECS Fargate task is using the VPC mentioned above
And that the private subnet(s) mentioned above are selected for task placement
If my explanation is still confusing, you could try giving this guide a read.
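As a rough CLI sketch of the Elastic IP and NAT Gateway pieces from the list above (the subnet and route table IDs are placeholders):

    # Allocate an Elastic IP and create a NAT gateway in the public subnet (IDs are placeholders)
    aws ec2 allocate-address --domain vpc
    aws ec2 create-nat-gateway --subnet-id subnet-public1234 --allocation-id eipalloc-1234abcd

    # Send all outbound traffic from the private subnet through the NAT gateway
    aws ec2 create-route --route-table-id rtb-private1234 --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-1234abcd

The Elastic IP associated with the NAT gateway is the address your partner would then whitelist.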

AWS ECS docker container RDS integration

I have two VPCs in the same account. VPC-A has RDS installed; VPC-B has services installed through an ECS EC2 deployment.
VPC-B has multiple subnets. Services deployed through ECS on EC2 can't integrate with RDS. They keep getting the following error message ("Is the server running on host "....")
Whereas telnet to the RDS database port from an EC2 instance (E1) in a VPC-B subnet can connect to the database.
But the server couldn't start when the same services are installed through ECS. When manually trying to start the container, it works (it is able to connect to the database).
I also set up a peering connection between the two VPCs, but the connection problem exists only when the container is started through the ECS EC2 deployment.
The dropdown for assigning a public IP shows "Disabled" and no other options. The subnets are public subnets.
Any help/thoughts would be appreciated.
As per the AWS docs, the "awsvpc" network mode launches the task with a private IP, and to interact with external services a NAT gateway needs to be attached to the subnet.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html#task-networking-considerations
The awsvpc network mode does not provide task ENIs with public IP addresses for tasks that use the EC2 launch type. To access the internet, tasks that use the EC2 launch type should be launched in a private subnet that is configured to use a NAT gateway.
"Auto assign public IP" mode is "Enabled" with "bridge" netowrking mode on on ECS EC2 launch.

AWS Elastic Beanstalk: using the same IP for outbound traffic

I currently have an Elastic Beanstalk setup in AWS. Whenever I make API calls, they come from the EC2 instance's external IP. Is there a way to have all servers in that group use the same IP?
Put your EC2 instances in private subnets and direct all outbound traffic through a NAT. This way, all outbound connections appear to come from the NAT's IP address.
See the following for more information. It's a different problem, but the NAT solution is the same.
How do you allocate STATIC addresses to an EBS (beanstalk) within a VPC?
Note: for security, you should follow this architecture anyway. When using an ELB, don't put your EC2 instances in a public subnet.
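Once the instances are routed through a NAT, a quick way to confirm the shared egress address (the checkip endpoint is just one example service):

    # Run from any instance in the private subnets; every instance should report the NAT's Elastic IP
    curl https://checkip.amazonaws.com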