So I have a primary RDS instance in us-east-1 and a replica in us-west-1. Both are inside VPCs in their respective regions. I want one of my EC2 instances in us-east-1 to connect to the replica instance.
A simple solution is to enable public access on the RDS replica and add the EC2 instance's IP to its security group; that works.
But instead of allowing a single static IP, I would like to allow access from the entire CIDR range of my us-east-1 VPC, and I also don't want my instances to be publicly accessible.
To do this, I've set up a VPC peering connection between the two regions and added entries to the route tables of both VPCs to forward traffic destined for each other's CIDR ranges to the peering connection.
The CIDR range of the EC2 instance's VPC is 172.31.0.0/16, and I have added this to the security group of the RDS replica in us-west-1. But for some reason the RDS instance is not reachable from my EC2 instance.
Have I missed anything else? Thanks!
To summarize my setup:
US EAST:
VPC CIDR: 172.31.0.0/16
Route Table entry: Destination 10.0.0.0/16 routes to the peering connection of us-west-1 VPC.
EC2 IP: 172.31.5.234
US WEST:
VPC CIDR: 10.0.0.0/16
Route Table entry: Destination 172.31.0.0/16 routes to the peering connection of us-east-1 VPC.
RDS:
Publicly Accessible: Yes
Security Group: Allow connections from 172.31.0.0/16
To reproduce your situation, I did the following:
In us-east-1:
Created a VPC in us-east-1 with a CIDR of 172.31.0.0/16 using the "VPC with Public and Private Subnets" VPC Wizard
Launched an Amazon EC2 Linux instance in the public subnet
In us-west-1:
Created a VPC in us-west-1 with a CIDR of 10.0.0.0/16 using the "VPC with Public and Private Subnets" VPC Wizard
Added an additional private subnet to allow creation of an Amazon RDS Subnet Group that uses multiple AZs
Created an RDS Subnet Group across the two private subnets
Launched an Amazon RDS MySQL database in the private subnet with Publicly accessible = No
Setup peering:
In us-east-1, created a Peering Connection Request to the VPC in us-west-1
In us-west-1, accepted the Peering Request
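For reference, a rough AWS CLI equivalent of these two steps would be the following (all IDs are placeholders, not values from the actual setup):

    # In us-east-1: request peering with the us-west-1 VPC
    aws ec2 create-vpc-peering-connection \
        --region us-east-1 \
        --vpc-id vpc-1111aaaa \
        --peer-vpc-id vpc-2222bbbb \
        --peer-region us-west-1

    # In us-west-1: accept the request
    aws ec2 accept-vpc-peering-connection \
        --region us-west-1 \
        --vpc-peering-connection-id pcx-XXXXXXXXXXXXXXXXX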
Configure routing:
In us-east-1, configured the Public Route Table (used by the EC2 instance) to route 10.0.0.0/16 traffic to the peered VPC
In us-west-1, configured the Private Route Table (used by the RDS instance) to route 172.31.0.0/16 traffic to the peered VPC
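If you prefer the CLI, the equivalent route entries look roughly like this (route table IDs are placeholders):

    # us-east-1 public route table: send us-west-1-bound traffic to the peering connection
    aws ec2 create-route \
        --region us-east-1 \
        --route-table-id rtb-1111aaaa \
        --destination-cidr-block 10.0.0.0/16 \
        --vpc-peering-connection-id pcx-XXXXXXXXXXXXXXXXX

    # us-west-1 private route table: send us-east-1-bound traffic back the same way
    aws ec2 create-route \
        --region us-west-1 \
        --route-table-id rtb-2222bbbb \
        --destination-cidr-block 172.31.0.0/16 \
        --vpc-peering-connection-id pcx-XXXXXXXXXXXXXXXXX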
Security Groups:
In us-east-1, created a security group (App-SG) that allows inbound port 22 connections from 0.0.0.0/0. Associated it with the EC2 instance.
In us-west-1, created a security group (RDS-SG) that allows inbound port 3306 connections from 172.31.0.0/16 (the CIDR on the other side of the peering connection). Associated it with the RDS instance.
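A sketch of the RDS-SG rule via the CLI (the group ID is a placeholder):

    # Allow MySQL (3306) from the peered VPC's CIDR
    aws ec2 authorize-security-group-ingress \
        --region us-west-1 \
        --group-id sg-0000rds0 \
        --protocol tcp \
        --port 3306 \
        --cidr 172.31.0.0/16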
Test:
Used ssh to connect to the EC2 instance in us-east-1
Installed mysql client (sudo yum install mysql)
Connected to mysql with:
mysql -u master -p -h xxx.yyy.us-west-1.rds.amazonaws.com
This successfully connected to the RDS database across the peering connection.
FYI, the DNS name of the database resolved to 10.0.2.40 (which is in the CIDR range of the us-west-1 VPC). This DNS resolution worked from both VPCs.
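If you want to verify the same behaviour yourself, a quick check from the EC2 instance is (endpoint placeholder as above):

    # Should return a private address in the us-west-1 VPC (e.g. 10.0.2.40), not a public IP
    nslookup xxx.yyy.us-west-1.rds.amazonaws.com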
In summary, the important bits are:
Establish a 2-way peering connection
Configure the security group on the RDS instance to permit inbound connections from the CIDR of the peered VPC
No need to make the database publicly accessible
Related
I have an RDS database in VPC A, that I'd like to share with an EC2 instance in VPC B.
How do I do so while giving access specifically ONLY to the database (especially given that RDS doesn't expose a static IP, but rather a DNS endpoint)?
Assuming your VPCs are peered using VPC peering or a transit gateway, you can allow the EC2 instance's security group in the security group that is attached to your RDS instance.
That is, add an inbound rule to the RDS security group that allows access on port 3306 (MySQL) or 5432 (PostgreSQL) from the security group ID attached to the EC2 instance, as in the sketch below.
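A minimal CLI sketch, assuming placeholder group IDs (5432 shown for Postgres; use 3306 for MySQL):

    # RDS security group: allow Postgres from the EC2 instance's security group
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0000dddd \
        --protocol tcp \
        --port 5432 \
        --source-group sg-0000eeee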
I have a publicly accessible RDS instance that I want to connect to from an EKS cluster in a different VPC. I set up VPC peering, added cross routes for the VPC CIDRs, and added the EKS VPC CIDR to the RDS security group; however, there is no DB connection unless I add a NAT IP address from the EKS cluster (my worker nodes are in private subnets) to the inbound rules of the RDS security group. It looks like, because the RDS instance was created as publicly accessible, its hostname always resolves to the public IP, so the connection from EKS happens from a public NAT EIP to a public RDS EIP. Is this how it should be, and can it not be changed? Does it mean there's no point in VPC peering because the connection will never be private? Ideally I want the traffic between EKS and RDS to be private and never leave the VPCs. Or does AWS already route the traffic internally despite the connection happening through EIPs?
I just needed to enable the DNS settings of the VPC peering connection to allow resolution to the private IP: https://stackoverflow.com/a/44896732/1826109
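For reference, those DNS settings can also be turned on with the AWS CLI (the peering connection ID is a placeholder; for a cross-region peering, each side's option may need to be set from its own region):

    aws ec2 modify-vpc-peering-connection-options \
        --vpc-peering-connection-id pcx-XXXXXXXXXXXXXXXXX \
        --requester-peering-connection-options AllowDnsResolutionFromRemoteVpc=true \
        --accepter-peering-connection-options AllowDnsResolutionFromRemoteVpc=true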
I have two AWS accounts (A and B). Each of them has a VPC, with no overlapping CIDR blocks, and both are in the same region. I have successfully created a VPC peering connection between them (which is active). The requester and accepter both allow remote VPC DNS resolution.
I have specified, in each VPC's route tables, the other VPC's CIDR block as a destination with the peering connection as the target.
I have an EC2 instance running in a public subnet inside VPCA of AccountA, attached to a security group SecurityGroupA. SecurityGroupA allows inbound traffic from all sources in the default security group of VPCA, as well as inbound traffic from AccountBId/SecurityGroupB, and all outbound traffic.
I have an RDS Postgres instance running in VPCB of AccountB, attached to a security group SecurityGroupB. SecurityGroupB allows inbound TCP on port 5432 (the default Postgres port) from AccountAId/SecurityGroupAId.
When running aws ec2 describe-security-group-references --group-id SecurityGroupAId, I get:
{
    "SecurityGroupReferenceSet": [
        {
            "GroupId": "SecurityGroupAId",
            "ReferencingVpcId": "VPCBId",
            "VpcPeeringConnectionId": "pcx-XXXXXXXXXXXXXXXXX"
        }
    ]
}
This seems to indicate that the security group is correctly referenced. But when trying to connect from the EC2 instance to the RDS instance, I'm getting a connection timed out error.
What security group rules should I set for my db instance and my EC2 instance for accessing DB instance from my EC2 instance?
Both are in different VPCs and I used VPC Peering between them.
I did following configuration:
I created two VPCs
One with a public subnet and the other with a private subnet
Launched an EC2 web instance in the VPC with the public subnet and a MySQL DB instance in the one with the private subnet
Set up VPC peering between them; each has its own security group
Created a NAT Gateway in the public subnet
So, how should I set both security groups' rules to establish a connection between them?
You should configure:
A security group on the Amazon EC2 instance (App-SG) that permits access to the instance/application as desired
A security group on the Amazon RDS DB instance (DB-SG) that permits inbound access on port 3306 from App-SG
That is, DB-SG should specifically refer to App-SG in the inbound rules.
When connecting from the EC2 to the database, make sure you are using the DNS Name of the RDS database. This should resolve to a private IP address.
The NAT Gateway is not required for the above connection.
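As a sketch, the DB-SG rule described above would look like this in the AWS CLI (both group IDs are placeholders):

    # DB-SG: allow MySQL only from instances that carry App-SG
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0000db00 \
        --protocol tcp \
        --port 3306 \
        --source-group sg-0000app0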
I am new to AWS and not a network admin, merely a developer, and need your help.
I am unable to connect to my AWS RDS (MySQL) instance from my Lightsail Ubuntu instance. When trying to connect, it just waits for a minute and then fails.
I am unable to ping my RDS instance either.
Here is the setup:
The Lightsail instance has VPC peering enabled and is in lon-zone-A.
I have created a MySQL RDS instance in AWS and used the default VPC peering. MySQL is restricted to the VPC and uses the default security group, which has an inbound rule allowing all traffic with the default security group as the source.
The default VPC has two subnets, with CIDRs 172.31.16.0/20 and 172.31.0.0/16, for availability zones A and B.
In the route table of the subnet, I have
172.26.0.0/16 as the destination with the VPC peering connection as the target, which further has
Requester VPC CIDR: 172.26.0.0/16
Accepter VPC CIDR: 172.31.0.0/16
My Lightsail instance has private IP 172.26.15.xxx and is in lon-zone-A.
When I ping my MySQL instance, I get the IP 172.31.10.9.
The command used to connect: mysql -h xxxxxx.xxxxx.eu-west-2.rds.amazonaws.com -P 3306 -u db_master_username -p
To enable access from AWS Lightsail to AWS RDS, you can accomplish this in two separate ways:
Method 1.
Make RDS publicly accessible.
In RDS, pick your instance and click 'Modify'. In the 'Network & Security' section, set 'Publicly accessible' to Yes. Apply the settings and wait until they take effect. Your RDS instance now has a public IP.
Add your Lightsail public IP to the RDS security group's inbound rules.
Use the CIDR x.x.x.x/32, where x.x.x.x is your Lightsail instance's public IP.
Method 2 (better: RDS with no public IP).
Make sure your Lightsail instance is in the same region as the RDS instance.
Set up VPC peering between the Lightsail VPC and the Amazon VPC.
Add your Lightsail private IP to the RDS security group's inbound rules.
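A rough CLI version of Method 2 (the security group ID is a placeholder; 172.26.0.0/16 is the Lightsail VPC range from the question):

    # Peer the Lightsail VPC with the default VPC in this region
    aws lightsail peer-vpc --region eu-west-2

    # Allow MySQL from the Lightsail VPC range into the RDS security group
    aws ec2 authorize-security-group-ingress \
        --region eu-west-2 \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp \
        --port 3306 \
        --cidr 172.26.0.0/16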
I managed to solve it.
I had to add my Lightsail instance's IP CIDR to the RDS inbound rules as allowed MySQL/Aurora TCP traffic.
:-)