I have set up a DocumentDB cluster in us-east-1. I am attempting to connect from an EC2 instance in us-west-1. I have set up VPC peering, with the VPC in us-west-1 having a CIDR of 172.31.0.0/16 and the VPC in us-east-1 having a CIDR of 172.32.0.0/16. The peering connection is established and active. When I attempt to
connect to the DocumentDB cluster from the mongo shell on the EC2 instance, I get this exception:
connecting to: mongodb://cluster-name.cluster-uniquecode.us-east-1.docdb.amazonaws.com:27017/?gssapiServiceName=mongodb
2020-07-15T00:50:16.004+0000 W NETWORK [js] Failed to connect to 172.32.83.229:27017 after 5000ms milliseconds, giving up.
2020-07-15T00:50:16.004+0000 E QUERY [js] Error: couldn't connect to server cluster-name.cluster-uniquecode.us-east-1.docdb.amazonaws.com:27017, connection attempt failed :
connect@src/mongo/shell/mongo.js:263:13
@(connect):1:6
exception: connect failed
The security group attached to the us-east-1 VPC is set to allow all IP addresses and all ports, so that doesn't seem to be the issue.
So... why the failure to connect? Anything I missed?
VPC peering does not implicitly handle reverse-path routes for return traffic, so you need to add routes to both VPCs.
You need a route in the route tables of VPC A sending b.b.b.b/x over the peering connection, and a route in the route tables of VPC B sending a.a.a.a/y over the peering connection, regardless of which end originates the traffic.
The owner of the peer VPC must also complete these steps to add a route to direct traffic back to your VPC through the VPC peering connection.
https://docs.aws.amazon.com/vpc/latest/peering/vpc-peering-routing.html
I would take a look at the route tables of the VPC in us-west-1. Make sure there is a route that sends 172.32.0.0/16 through the VPC peering connection.
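If you want to verify this programmatically rather than in the console, a minimal boto3 sketch along these lines checks both directions (the VPC IDs and the peering connection ID are placeholders; substitute your own):

```python
import boto3

ec2_west = boto3.client("ec2", region_name="us-west-1")
ec2_east = boto3.client("ec2", region_name="us-east-1")

def has_peering_route(client, vpc_id, dest_cidr, pcx_id):
    """True if any route table in vpc_id sends dest_cidr over the peering connection."""
    tables = client.describe_route_tables(
        Filters=[{"Name": "vpc-id", "Values": [vpc_id]}]
    )["RouteTables"]
    return any(
        route.get("DestinationCidrBlock") == dest_cidr
        and route.get("VpcPeeringConnectionId") == pcx_id
        for table in tables
        for route in table["Routes"]
    )

# The EC2 side must send the DocumentDB VPC's CIDR over the peering connection...
print(has_peering_route(ec2_west, "vpc-aaaa1111", "172.32.0.0/16", "pcx-0123456789abcdef0"))
# ...and the DocumentDB side must send the EC2 VPC's CIDR back the same way.
print(has_peering_route(ec2_east, "vpc-bbbb2222", "172.31.0.0/16", "pcx-0123456789abcdef0"))
```

Both calls should print True; a False on either side points at the missing route (also check that the table with the route is actually associated with the subnets involved).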
I've got a main AWS account where I have a VPC (VPC-A) and a Client VPN endpoint configured.
I have another account where I have a Dev environment and a VPC (VPC-B) configured.
I have set up VPC peering between VPC-A and VPC-B, and it's working as intended.
The VPC-A CIDR is 172.43.0.0/16
The VPC-B CIDR is 10.2.20.0/23
I've set up the Client VPN endpoint with two associated subnets, one in availability zone A and the other in F; they both use the same route table (which has a peering route to VPC-B). I have also authorized the CIDR of VPC-B on the VPN.
The VPN Client CIDR is 7.0.0.0/16
When I connect to the VPN and get an IP like 7.0.0.131, I can ping an instance I have in VPC-B just fine.
When I connect and get an IP like 7.0.1.162, I get timeouts; I can't reach the instance in VPC-B at all.
The instance on VPC-B lives on availability zone C.
What am I missing here? Why does the connection work fine from IPs like 7.0.0.x but not from IPs like 7.0.1.x?
I found the issue with my implementation.
I mentioned that my Client VPN endpoint has two subnet associations. Looking at the endpoint's Route Table, I realized I had created the route for the first subnet in AZ-A but had forgotten to create the route for the second subnet in AZ-F.
Creating a route for the VPC-B CIDR (10.2.20.0/23) for the second subnet as well solved the issue.
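For reference, a quick boto3 check along these lines would have surfaced the gap, since Client VPN routes are defined per target subnet (the endpoint and subnet IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Every subnet associated with the endpoint needs its own route entry for the
# peered CIDR; a missing entry breaks clients whose traffic egresses that subnet.
routes = ec2.describe_client_vpn_routes(
    ClientVpnEndpointId="cvpn-endpoint-0123456789abcdef0"
)["Routes"]

for subnet in ("subnet-aaaa1111", "subnet-ffff2222"):  # the AZ-A and AZ-F associations
    covered = any(
        r["TargetSubnet"] == subnet and r["DestinationCidr"] == "10.2.20.0/23"
        for r in routes
    )
    print(f"{subnet}: route to VPC-B {'present' if covered else 'MISSING'}")
```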
I'm trying to mount an AWS EFS file system on an EC2 instance that is in another account. I followed the steps below:
Account A:
VPC-A: 172.31.0.0/16
Created EFS in the VPC
Security group-A: allows all inbound traffic from VPC-B (10.210.0.0/16) in Account B and all outbound traffic to the internet. This security group is attached to the EFS file system.
Accepted the VPC peering connection request from VPC-B (10.210.0.0/16)
Route table-A: contains a route to VPC-B (10.210.0.0/16) via the peering connection
Account B:
VPC-B: 10.210.0.0/16
Launched an EC2 instance (10.210.0.165) in a private subnet in VPC-B
Security group-B: allows both inbound and outbound traffic from/to VPC-A (172.31.0.0/16)
Created a VPC Peering connection with VPC-A
Route table-B: contains a route to VPC-A (172.31.0.0/16) via the peering connection
Note: I made sure that the region and availability zones of the EFS in Account A and the EC2 instance in Account B are the same. I am also connecting to the EFS endpoint in the correct AZ, using the mount-by-IP option.
Still, I'm getting a "mount.nfs4: Connection timed out" error.
Please help!
Edit:
Just to test the setup and connectivity, I launched an EC2 instance in Account A, and ping to it from the EC2 instance in Account B worked.
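Since ping only proves ICMP works, I also want to rule out the NFS port specifically. A rough boto3 sketch like this lists the security groups actually attached to the mount targets and whether they allow TCP 2049 (the file system ID is a placeholder; run with Account A credentials):

```python
import boto3

efs = boto3.client("efs")  # Account A credentials, same region as the file system
ec2 = boto3.client("ec2")

# NFS needs TCP 2049 open from VPC-B (10.210.0.0/16); ping succeeding
# says nothing about port 2049.
for mt in efs.describe_mount_targets(FileSystemId="fs-0123456789abcdef0")["MountTargets"]:
    sg_ids = efs.describe_mount_target_security_groups(
        MountTargetId=mt["MountTargetId"]
    )["SecurityGroups"]
    print(mt["MountTargetId"], mt["IpAddress"], sg_ids)
    for sg in ec2.describe_security_groups(GroupIds=sg_ids)["SecurityGroups"]:
        for perm in sg["IpPermissions"]:
            nfs_ok = perm.get("IpProtocol") == "-1" or (
                perm.get("IpProtocol") == "tcp"
                and perm.get("FromPort", 0) <= 2049 <= perm.get("ToPort", 0)
            )
            print("  ", sg["GroupId"], nfs_ok, [r["CidrIp"] for r in perm.get("IpRanges", [])])
```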
My AWS Lambda function times out when it tries to connect to an RDS instance in another VPC. The VPCs are peered.
Things I have checked:
Lambda is inside the correct VPC
RDS is inside the other VPC
RDS exists in subnets that are peered
VPC Peering is "accepted"
Lambda security group has ingress permission on correct port (5432) to RDS security group
Lambda security group has egress permission to anywhere on any port
Route table entries exist from Lambda VPC subnets to peering
Route table entries exist from RDS VPC subnets to peering
What else can I check / leverage to fix this connectivity issue?
Update
DNS hostnames and DNS resolution are enabled for both VPCs
Update
I tried the following:
Create EC2 instance on same subnet as Lambda
Assign lambda SG to the EC2
SSH connect to EC2
telnet to RDS:
telnet rds.xxxxxxxxxx.eu-west-2.rds.amazonaws.com 5432
Trying 10.11.65.225...
Connected to rds.xxxxxxxxxx.eu-west-2.rds.amazonaws.com.
Escape character is '^]'.
^CConnection closed by foreign host.
So the EC2 can connect. Therefore the issue must be with the lambda.
What can I try next?
The issue in my case (maybe yours too?) was that the query was timing out, not the connection attempt. You can test this by changing the query to `SELECT 1 AS x` or similar. The solution is to optimize the query so that it runs in reasonable time.
The trick of launching an EC2 with similar settings to the Lambda and connecting via SSH is a good one.
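To separate the two failure modes in code, a sketch like this (psycopg2; the host and credentials are placeholders) fails fast either way: connect_timeout catches a broken network path, while statement_timeout catches a slow query:

```python
import psycopg2

# A short connect timeout isolates network problems (routes, security groups);
# if this raises OperationalError after ~5s, the path to RDS is broken.
conn = psycopg2.connect(
    host="rds.xxxxxxxxxx.eu-west-2.rds.amazonaws.com",  # placeholder endpoint
    port=5432,
    dbname="postgres",
    user="app",
    password="...",
    connect_timeout=5,
)

with conn.cursor() as cur:
    cur.execute("SET statement_timeout = '5s'")  # abort any query slower than 5s
    cur.execute("SELECT 1 AS x")                 # trivially cheap test query
    print(cur.fetchone())                        # (1,) if connectivity is fine
```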
I have two existing VPCs. One is shared services, and the other holds the actual application servers. I have created a peering connection between the two VPCs and added routes in each VPC, but I still cannot SSH from the bastion to the app server from the shared services VPC.
Details:
shared services VPC CIDR (172.31.0.0/16)
bastion server IP (172.31.5.84)
route added to main route table (10.2.0.0/16 -> vpc-peer-id)
app server VPC CIDR (10.2.0.0/16)
EC2 instance IP (10.2.60.4)
route added to main route table (172.17.0.0/16 -> vpc-peer-id)
security group allows (22 TCP 172.31.0.0/16)
I also added the same route to the app server subnet but no change.
I am completely stumped at the moment as to how to set this up, or even how to work out where it is blocking. Any help would be appreciated.
To assist you, I did the following:
Started with an existing VPC-A with CIDR of 172.31.0.0/16
Created a new VPC-B with CIDR of 10.0.0.0/16
Created a subnet in VPC-B with CIDR of 10.0.0.0/24
Launched an Amazon Linux EC2 instance in the new subnet in VPC-B
Inbound Security Group: Allow SSH from 172.31.0.0/16
Created Peering connection:
Requester VPC: VPC-A
Acceptor VPC: VPC-B
Accepted peering connection (Did you do this on yours?)
Configured Route Tables:
The public Route Table in VPC-A: Route 10.0.0.0/16 to VPC-B
The private Route Table in VPC-B: Route 172.31.0.0/16 to VPC-A
Opened an SSH connection to an existing instance in VPC-A
From that instance, opened an SSH connection to the private IP address of the new instance (10.0.0.121)
Result: Instantly got a Permission denied (publickey) error because I didn't supply the private key. Getting an instant error message proved network connectivity (as opposed to hanging, which normally indicates a lack of network connectivity).
I then supplied the correct private key and tried to SSH again.
Result: Connected!
The complete flow is:
My laptop -> Instance in public subnet of `VPC-A` -> Instance in `VPC-B`
This had to use the peering connection because VPC-B has no Internet Gateway and I connected via the private IP address of the instance.
So, I recommend that you double-check that you have done each of the above steps to find where your configuration might differ (accepting the peering connection, configuring the security group, etc).
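If it helps, a short boto3 sketch like this (the peering connection and security group IDs are placeholders) double-checks the two steps that most often differ: the peering connection being active rather than pending-acceptance, and SSH being allowed from the peer VPC's CIDR:

```python
import boto3

ec2 = boto3.client("ec2")

# The peering connection must be "active"; "pending-acceptance" means the
# accepter side never accepted it, so no traffic will flow.
pcx = ec2.describe_vpc_peering_connections(
    VpcPeeringConnectionIds=["pcx-0123456789abcdef0"]
)["VpcPeeringConnections"][0]
print("peering status:", pcx["Status"]["Code"])

# The app server's security group must allow TCP 22 from the bastion VPC's CIDR.
sg = ec2.describe_security_groups(GroupIds=["sg-0123456789abcdef0"])["SecurityGroups"][0]
for perm in sg["IpPermissions"]:
    if perm.get("FromPort") == 22:
        print("SSH allowed from:", [r["CidrIp"] for r in perm.get("IpRanges", [])])
```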
As the title suggests, I'm struggling to connect to my ElastiCache instance from my EC2 instance. I have an ORM on my EC2 instance that connects to Redis, and it was just failing in my logs, so I SSHed into the EC2 instance to try to connect to the Redis instance manually, and got a timeout:
Could not connect to Redis at <redis uri>: Connection timed out
They're in different VPCs (the ElastiCache instance and the EC2 instance), but in my ElastiCache instance's security group I have a custom TCP inbound rule on port 6379 from any source.
Halp.
You set up the security rule, but did you set up the VPC peering properly?
A VPC peering connection is a networking connection between two VPCs
that enables you to route traffic between them using private IP
addresses. Instances in either VPC can communicate with each other as
if they are within the same network. You can create a VPC peering
connection between your own VPCs, or with a VPC in another AWS account
within a single region.
http://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/Welcome.html
After you create the VPC peering connection, you also need to modify the routing tables.
Keep in mind that you need to modify BOTH of the routing tables.
You also need to add the CIDR of the local VPC.
It can be confusing which VPC is "local" and which is the "target".
In my case, the local VPC contained the EC2 instances that needed the Redis database in the other VPC. After creating the peering connection in that direction, I needed to do two things:
edit the routing tables for both the local and target VPCs.
edit the security group of the Redis database to accept connections from the local VPC.
If everything is set accordingly, you should be able to connect from an EC2 instance in the local VPC to the Redis database in the target VPC.
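As a quick end-to-end test, a redis-py snippet like this (the endpoint is a placeholder) times out fast if a route or security group is still wrong, and a successful PING means the peered path works:

```python
import redis

r = redis.Redis(
    host="my-cluster.xxxxxx.0001.use1.cache.amazonaws.com",  # placeholder endpoint
    port=6379,
    socket_connect_timeout=3,  # fail fast instead of hanging on a bad route/SG
)
print(r.ping())  # True once peering, routes, and the security group line up
```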
Here is documentation from AWS that is relatively easy to follow:
http://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/vpc-pg.pdf
Your scenario can be found on page 16.