AWS VPC peering connections were created between VPCs in a single region, following the AWS docs. The diagram below illustrates the setup.
Both VPC peering connections are active, and the route tables have been adjusted for the relevant subnets.
But HTTP connections to an instance in VPC-A fail when attempted from the other two VPCs (which run Kubernetes).
VPC-B and VPC-C run a microservices-based application deployed on Kubernetes (Docker), so there is no guarantee that a given microservice pod will run on a specific instance. On re-deployment, a microservice can land on any available instance in its VPC.
HTTP requests to the VPC-A instance only work from a specific instance in VPC-B or VPC-C after that instance's public IP is added to the security group of the VPC-A instance. This can't be a permanent solution, because instances (and hence their IPs) expire, and because of the nature of the application.
The expectation was that this setup would make the service running on the instance in VPC-A reachable from both of the other VPCs. Please point out what is missing or misconfigured.
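Since traffic over a peering connection flows between private addresses, a security-group rule that references the peer VPC's CIDR, rather than individual public IPs, is the kind of rule that survives re-deployments. A minimal sketch; the group ID and CIDR below are placeholders:

# Allow HTTP from the whole peer VPC instead of one instance's public IP.
# sg-0aaaa1111bbbb2222 is the VPC-A instance's security group and
# 10.1.0.0/16 is VPC-B's CIDR; both are assumptions.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0aaaa1111bbbb2222 \
    --protocol tcp --port 80 \
    --cidr 10.1.0.0/16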
My VPC is in eu-west-2. I have two subnets for an RDS instance, split across two different availability zones for reasons of high availability: eu-west-2a and eu-west-2b. I also have a Redshift cluster in its own subnet in eu-west-2c.
With this configuration, I have successfully configured an AWS Client VPN endpoint so that I can access RDS and Redshift from my local machine when connected to a VPN client with the appropriate configuration.
While following the same principles of using subnets for specific services, I would like my EC2 instances to live in private subnets that are also only accessible over a VPN connection. However, one of the limitations of the Client VPN service is:
You cannot associate multiple subnets from the same Availability Zone with a Client VPN endpoint.
This implies that I would need to create a separate endpoint for connecting to my private EC2 subnet—which feels like complete overkill for my modest networking architecture!
Is there a workaround?
By default, every subnet in a VPC can reach the other subnets in the same VPC, so this should work out of the box with no extra configuration. If it doesn't, check the route tables and verify that there is a route from your VPN subnet to your private subnet.
When you associate the first subnet with the Client VPN endpoint, the following happens:
The state of the Client VPN endpoint changes to available. Clients can now establish a VPN connection, but they cannot access any resources in the VPC until you add the authorization rules.
The local route of the VPC is automatically added to the Client VPN endpoint route table. (This local route allows you to communicate with every subnet within the VPC that the subnet is in.)
The VPC's default security group is automatically applied for the Client VPN endpoint.
See the documentation for details.
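If clients can connect but still cannot reach the private subnet, the endpoint's route table and authorization rules are worth checking from the CLI. A minimal sketch; the endpoint ID is a placeholder and the VPC CIDR is assumed to be 10.0.0.0/16:

# List the routes the Client VPN endpoint knows about.
aws ec2 describe-client-vpn-routes \
    --client-vpn-endpoint-id cvpn-endpoint-0123456789abcdef0

# Authorize VPN clients to reach the whole VPC CIDR.
aws ec2 authorize-client-vpn-ingress \
    --client-vpn-endpoint-id cvpn-endpoint-0123456789abcdef0 \
    --target-network-cidr 10.0.0.0/16 \
    --authorize-all-groups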
I have an Amazon RDS Postgres instance that resides in the default VPC.
To connect to it, I am using different EC2 instances (Java Spring Boot and Node.js) running in Elastic Beanstalk. These instances also reside in the default VPC.
Do these EC2 instances connect to/query the RDS instance over the internet, or do the calls never leave the AWS network?
If they leave the AWS network and the calls go through the internet, is creating a VPC endpoint the right solution? Or is my whole understanding incorrect?
Thanks a lot for your help.
Do these EC2 instances connect to/query the RDS instance over the internet, or do the calls never leave the AWS network?
The RDS endpoint's DNS name resolves to a private IP address when queried from within the VPC, so the communication is private, even if you use public subnets or mark your RDS instance as publicly accessible. From outside AWS, however, the endpoint resolves to a public IP address if the DB instance is publicly accessible.
If they leave the AWS network and the calls go through the internet, is creating a VPC endpoint the right solution?
There is no VPC endpoint for RDS client connections, only for management actions (creating a DB instance, terminating it, and so on). Aurora Serverless, in contrast, has the Data API with a corresponding VPC endpoint.
To secure your DB instance's communications, make sure of at least the following:
locate your RDS instance in a private subnet (one whose route table contains no default route to an internet gateway).
the RDS security group accepts inbound traffic only from your instances' security group(s), on the PostgreSQL TCP port, which is usually 5432.
In this case, traffic to RDS stays local to your VPC. VPC endpoints are only useful for reaching RDS API operations privately, which is not your case: you just need to connect your app to the DB using its connection string. A quick way to confirm the traffic stays private is sketched below.
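Resolving the endpoint name from an EC2 instance inside the VPC should return a private address. A quick check; the hostname is a placeholder:

# Run from an EC2 instance in the same VPC. A private answer
# (10.x.x.x, 172.16-31.x.x, or 192.168.x.x) means traffic to RDS
# never leaves the VPC.
nslookup mydb.abcdefghijkl.us-east-1.rds.amazonaws.com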
I have production stacks inside a Production account and development stacks inside a Development account. The stacks are identical and are setup as follows:
Each stack has its own VPC.
Within the VPC are two public subnets spanning two AZs and two private subnets spanning two AZs.
The private subnets contain the RDS instance.
The public subnets contain a Bastion EC2 instance which can access the RDS instance.
To access the RDS instance, I either have to SSH into the Bastion machine and access it from there, or create an SSH tunnel via the Bastion to access it through a database client application such as pgAdmin.
Current DMS setup:
I would like to be able to use DMS (Database Migration Service) to replicate an RDS instance from Production into Development. So far I have tried the following but cannot get it to work:
Create a VPC peering connection between Development VPC and Production VPC
Create a replication instance in the private subnet of the Development VPC
Update the private subnet route tables in the development VPC to route traffic to the CIDR of the production VPC through the VPC peering connection
Ensure the Security group for the replication instance can access both RDS instances.
Main Problem:
When creating the source endpoint in DMS, the wizard only shows RDS instances from the same account and the same region, and it only allows endpoints to be configured using server names and ports. However, the RDS instances in my stacks can only be accessed via the Bastion machines using tunnelling, so the endpoint connection test always fails.
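For reference, the configuration the wizard asks for corresponds to a plain DMS endpoint like the one below; a sketch with placeholder identifier, server name, and credentials, assuming the instance were directly reachable on its endpoint and port:

# Source endpoint pointing straight at the production RDS endpoint.
# All names and credentials here are placeholders.
aws dms create-endpoint \
    --endpoint-identifier prod-postgres-source \
    --endpoint-type source \
    --engine-name postgres \
    --server-name proddb.abcdefghijkl.us-east-1.rds.amazonaws.com \
    --port 5432 \
    --username dms_user \
    --password '<secret>' \
    --database-name mydb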
Any ideas of how to achieve this cross account replication?
Are there any good step-by-step blogs that detail how to do this? I have found a few, but none of them have RDS instances sitting behind bastion machines, so they all assume the endpoint configuration wizard can be populated using server names and ports.
Many thanks.
Securing the RDS instances via the Bastion host is sound security practice, of course, for developer/operational access.
For the DMS migration service, however, you should expect to open the security groups of both the Target and Source RDS database instances so that the migration instance has access to both.
From Network Security for AWS Database Migration Service:
The replication instance must have access to the source and target endpoints. The security group for the replication instance must have network ACLs or rules that allow egress from the instance out on the database port to the database endpoints.
Database endpoints must include network ACLs and security group rules that allow incoming access from the replication instance. You can achieve this using the replication instance's security group, the private IP address, the public IP address, or the NAT gateway’s public address, depending on your configuration.
See https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.Network.html
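Concretely, that usually comes down to an ingress rule on each database's security group that references the replication instance's security group. A hedged sketch with placeholder group and account IDs; since the source database lives in another account, its rule names the owner of the replication instance's group:

# On the development (target) DB's security group, same account:
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 5432 \
    --source-group sg-0fedcba9876543210

# On the production (source) DB's security group, referencing the
# replication instance's group across accounts (IDs are placeholders):
aws ec2 authorize-security-group-ingress \
    --group-id sg-0aaaabbbbccccdddd \
    --ip-permissions 'IpProtocol=tcp,FromPort=5432,ToPort=5432,UserIdGroupPairs=[{GroupId=sg-0fedcba9876543210,UserId=111122223333}]'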
For network addressing and to open the RDS private subnet, you'll need a NAT on both source and target. They can be added easily, and then terminated after the migration.
You can now use Network Address Translation (NAT) Gateway, a highly available AWS managed service that makes it easy to connect to the Internet from instances within a private subnet in an AWS Virtual Private Cloud (VPC).
See https://aws.amazon.com/about-aws/whats-new/2015/12/introducing-amazon-vpc-nat-gateway-a-managed-nat-service/
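Adding one comes down to two calls; a rough sketch with placeholder IDs, assuming the public subnet and an Elastic IP allocation already exist:

# NAT gateway in a public subnet.
aws ec2 create-nat-gateway \
    --subnet-id subnet-0123456789abcdef0 \
    --allocation-id eipalloc-0123456789abcdef0

# Default route from the private subnet's route table through the NAT gateway.
aws ec2 create-route \
    --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 0.0.0.0/0 \
    --nat-gateway-id nat-0123456789abcdef0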
Background: I have a Kubernetes cluster set up in one AWS account that needs to access data in an RDS MySQL instance in a different account, and I can't seem to get the settings right to allow traffic to flow.
What I've tried so far:
Set up a peering connection between the two VPCs. They are in the same region, us-east-1.
Created route table entries in each account to point traffic on the corresponding subnets to the peering connection.
Created a security group in the RDS VPC to allow traffic from the Kubernetes subnets to access MySQL.
Made sure DNS resolution is enabled on both VPCs (one way to verify this is sketched below).
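The last point can be checked, and fixed, on the peering connection itself; a sketch with a placeholder pcx- ID:

# Inspect the peering connection, including its DNS-resolution options.
aws ec2 describe-vpc-peering-connections \
    --vpc-peering-connection-ids pcx-0123456789abcdef0

# From the account that owns the requester VPC:
aws ec2 modify-vpc-peering-connection-options \
    --vpc-peering-connection-id pcx-0123456789abcdef0 \
    --requester-peering-connection-options AllowDnsResolutionFromRemoteVpc=true

# From the account that owns the accepter VPC:
aws ec2 modify-vpc-peering-connection-options \
    --vpc-peering-connection-id pcx-0123456789abcdef0 \
    --accepter-peering-connection-options AllowDnsResolutionFromRemoteVpc=true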
Kubernetes VPC details (Requester)
This contains 3 EC2 instances (it looks like each has its own subnet) that house my Kubernetes cluster. I used EKS to set this up.
The route table rules I set up have the 3 subnets associated, and point the RDS VPC CIDR block at the peering connection.
RDS VPC details (Accepter)
This VPC contains the MySQL RDS instance, as well as some other resources. The RDS instance has quite a few VPC security groups assigned to it for access from our office IPs, etc. It has Public Accessibility set to true.
I repeated the route table setup (in reverse) and pointed back to the K8s VPC subnet / peering connection.
Testing
To test the connection, I've tried two different ways. The application that needs to access MySQL is written in Node, so I wrote a test connector with an example query, and it times out.
I also tried netcat from a terminal in the pod running in the kubernetes cluster.
nc -v {{myclustername}}.us-east-1.rds.amazonaws.com 3306
This also times out. It does seem to be trying to hit the correct MySQL instance IP, though, so I'm not sure whether that means my routing rules are working from the k8s VPC side:
DNS fwd/rev mismatch: ec2-XXX.compute-1.amazonaws.com != ip-{{IP OF MY MYSQL}}.ec2.internal
I'm not sure what steps to take next. Any direction would be greatly appreciated.
Side note: I've read through this: Kubernetes container connection to RDS instance in separate VPC.
I think I understand what's going on there. My CIDR blocks do not conflict with the default K8s IPs (10.0...), so my problem seems to be different.
I know this was asked a long time ago, but I just ran into this problem as well.
It turns out I was editing the wrong AWS route table! When I ran kops to create my cluster, it created a new VPC with its own route table, but there was also another route table. I needed to add the peering route to the cluster's route table instead of the VPC's Main route table.
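To see every route table in the cluster VPC, and to add the peering route to the right one, something like this works; the IDs and the RDS VPC's CIDR are placeholders:

# List all route tables in the cluster VPC, not just the Main one.
aws ec2 describe-route-tables \
    --filters Name=vpc-id,Values=vpc-0123456789abcdef0 \
    --query 'RouteTables[].{Id:RouteTableId,Main:Associations[0].Main}'

# Add the peering route to the cluster's route table.
aws ec2 create-route \
    --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 172.30.0.0/16 \
    --vpc-peering-connection-id pcx-0123456789abcdef0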
As the title suggests, I'm struggling to connect to my ElastiCache instance from my EC2 instance. The ORM that connects to Redis from my EC2 instance was failing in my logs, so I SSHed into the EC2 instance to try to connect to the Redis instance manually and got a timeout:
Could not connect to Redis at <redis uri>: Connection timed out
They're in different VPCs (the ElastiCache instance and the EC2 instance), but in my ElastiCache instance's security group I have a custom TCP inbound rule on port 6379 from any source.
Halp.
You set up the security rule, but did you set up the VPC peering properly?
A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IP addresses. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account within a single region.
http://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/Welcome.html
After you create the VPC peering connection, you also need to modify the routing tables. Keep in mind that you need to modify BOTH routing tables, and that each table needs a route for the other VPC's CIDR. It can be confusing which VPC is the "local" one and which is the "target".
In my case, the local VPC contained the EC2 instances that needed the Redis database in the other VPC. After creating the peering connection in this arrangement, I needed to do two things:
edit the routing tables of both the local and the target VPC.
edit the security group of the Redis database to accept connections from the local VPC.
If set up accordingly, you should be able to connect from an EC2 instance in the local VPC to the Redis database in the target VPC; a quick check is sketched below.
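Once the routes and the security group are in place, a simple probe from the EC2 instance should succeed; the endpoint below is a placeholder:

# From the EC2 instance in the local VPC; a "PONG" reply means routing
# and the security group are correct.
redis-cli -h mycache.abc123.0001.use1.cache.amazonaws.com -p 6379 ping

# Or, without redis-cli, just test the TCP path:
nc -zv mycache.abc123.0001.use1.cache.amazonaws.com 6379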
Here is documentation from AWS that is relatively easy to follow:
http://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/vpc-pg.pdf
Your scenario can be found on page 16.