Weird behavior on AWS Client VPN endpoint access through Peered VPC

I've got a main AWS account with a VPC (VPC-A) and a Client VPN endpoint configured.
I have another account with a Dev environment and a VPC (VPC-B) configured over there.
I have set up VPC peering between VPC-A and VPC-B, and it's working as intended.
The VPC-A CIDR is 172.43.0.0/16
The VPC-B CIDR is 10.2.20.0/23
I've set up the Client VPN endpoint with two associated subnets, one in availability zone A and the other in AZ F. They both use the same route table (which has a route to the peering connection to VPC-B). I have also authorized the CIDR of VPC-B on the VPN endpoint.
The VPN Client CIDR is 7.0.0.0/16
When I connect to the VPN and get an IP like 7.0.0.131, I can ping an instance I have on VPC-B just fine.
When I connect to the VPN and get an IP like 7.0.1.162, I get timeouts and can't reach the instance on VPC-B at all.
The instance on VPC-B lives in availability zone C.
What am I missing here? Why does the connection work fine from IPs like 7.0.0.x but not from IPs like 7.0.1.x?

I found the issue with my implementation.
As mentioned, my Client VPN endpoint has two subnet associations. Under the endpoint's Route Table, I realized I had created the route for the first subnet (AZ A) but had forgotten to create the route for the second subnet (AZ F). Each associated subnet gets its own route entries in the endpoint route table, which is why clients whose sessions went through the second subnet timed out.
Creating a route for the VPC-B CIDR (10.2.20.0/23) targeting the second subnet as well solved the issue.
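For anyone scripting this, the missing route can be added with the AWS CLI. This is a minimal sketch; the endpoint and subnet IDs below are hypothetical placeholders:

```
# Add a route to VPC-B's CIDR for the second associated subnet (AZ F).
# Both IDs are placeholders; substitute your own endpoint and subnet IDs.
aws ec2 create-client-vpn-route \
  --client-vpn-endpoint-id cvpn-endpoint-0123456789abcdef0 \
  --destination-cidr-block 10.2.20.0/23 \
  --target-vpc-subnet-id subnet-0f00000000000000f \
  --description "VPC-B via the AZ-F associated subnet"
```

Each associated subnet needs its own route entry for any destination outside the VPC, including peered VPCs.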

Related

Can I use an AWS Client VPN endpoint to access more than three subnets in the same region?

My VPC is in eu-west-2. I have two subnets for an RDS instance, split across two different availability zones for reasons of high availability: eu-west-2a and eu-west-2b. I also have a Redshift cluster in its own subnet in eu-west-2c.
With this configuration, I have successfully configured an AWS Client VPN endpoint so that I can access RDS and Redshift from my local machine when connected to a VPN client with the appropriate configuration.
While following the same principles of using subnets for specific services, I would like my EC2 instances to live in private subnets that are also only accessible over a VPN connection. However, one of the limitations of the Client VPN service is:
You cannot associate multiple subnets from the same Availability Zone with a Client VPN endpoint.
This implies that I would need to create a separate endpoint just to connect to my private EC2 subnet, which feels like complete overkill for my modest networking architecture!
Is there a workaround?
By default, all subnets within a VPC can reach each other via the VPC's local route.
This means you shouldn't need to do anything; it should work out of the box. If it doesn't, check the route tables and confirm there is a route from your VPN-associated subnet to your private subnet.
When you associate the first subnet with the Client VPN endpoint, the following happens:
The state of the Client VPN endpoint changes to available. Clients can now establish a VPN connection, but they cannot access any resources in the VPC until you add the authorization rules.
The local route of the VPC is automatically added to the Client VPN endpoint route table. (This local route allows communication with every subnet within that VPC.)
The VPC's default security group is automatically applied to the Client VPN endpoint.
See the documentation for details.
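If you want to verify this from the command line, something like the following works; this is a sketch, and the endpoint ID and private-subnet CIDR are hypothetical placeholders:

```
# List the routes the Client VPN endpoint already knows about.
aws ec2 describe-client-vpn-routes \
  --client-vpn-endpoint-id cvpn-endpoint-0123456789abcdef0

# Authorize clients to reach the private EC2 subnet (placeholder CIDR).
aws ec2 authorize-client-vpn-ingress \
  --client-vpn-endpoint-id cvpn-endpoint-0123456789abcdef0 \
  --target-network-cidr 10.0.3.0/24 \
  --authorize-all-groups
```

Remember that an authorization rule is needed for each destination CIDR even when the corresponding route already exists.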

AWS Site-to-Site VPN Configuration doesn't allow inbound traffic

I've been following the instructions here: https://aws.amazon.com/blogs/networking-and-content-delivery/simulating-site-to-site-vpn-customer-gateways-strongswan/
I can get the VPN up and running, but I can't ping internal IP addresses from behind the VPN.
Here's my setup:
"On-prem" is simulated using a VPC with CIDR 172.19.0.0/16. The VPN is deployed on an EC2 instance in the subnet 172.19.16.0/20. This subnet has the following route table:
| Destination   | Target        |
|---------------|---------------|
| 172.19.0.0/16 | local         |
| 172.21.0.0/16 | eni-XXXXXXXXX |
| 0.0.0.0/0     | igw-XXXXXXXXX |
Where eni-XXXXXXXXX is the network interface of the EC2 instance that has the VPN deployed on it.
My cloud VPC has the CIDR range: 172.21.0.0/16. I have an EC2 instance deployed in the 172.21.32.0/20 subnet which has the following route table:
| Destination   | Target        |
|---------------|---------------|
| 172.21.0.0/16 | local         |
| 172.19.0.0/16 | vgw-XXXXXXXXX |
| 0.0.0.0/0     | igw-XXXXXXXXX |
Where vgw-XXXXXXXXX is the virtual private gateway associated with my VPN.
I can send traffic from my "on-prem" VPC into my cloud VPC successfully, but no traffic comes back out. I tested this by SSHing into an EC2 instance in my "on-prem" VPC and pinging the private IP address of an EC2 instance in my cloud VPC. I can see the pings arrive at the instance in the cloud VPC, but my "on-prem" instance never receives the response.
I have checked my security groups and NACLs and they are not preventing this type of traffic.
Is there something misconfigured here?
This is not an entirely satisfying answer, but I moved from using a Virtual Private Gateway to using a Transit Gateway and I was able to get it to work.
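For anyone taking the same path, the Transit Gateway setup can be sketched with the AWS CLI roughly as follows. All IDs below are hypothetical placeholders, and this assumes the customer gateway for the strongSwan instance already exists:

```
# Create the Transit Gateway and attach the cloud VPC to it.
aws ec2 create-transit-gateway --description "S2S VPN hub"
aws ec2 create-transit-gateway-vpc-attachment \
  --transit-gateway-id tgw-0123456789abcdef0 \
  --vpc-id vpc-0123456789abcdef0 \
  --subnet-ids subnet-0123456789abcdef0

# Terminate the site-to-site VPN on the Transit Gateway instead of a VGW.
aws ec2 create-vpn-connection \
  --type ipsec.1 \
  --customer-gateway-id cgw-0123456789abcdef0 \
  --transit-gateway-id tgw-0123456789abcdef0

# Point the cloud VPC's route for the on-prem range at the Transit Gateway.
aws ec2 create-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 172.19.0.0/16 \
  --transit-gateway-id tgw-0123456789abcdef0
```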

Unable to access another instance in different VPC

So I have 2 different VPCs in the same account.
In the first VPC (A), I have an instance in a private subnet, with outbound traffic routed through a NAT gateway (this was working under the previous configuration).
Currently I am trying to reach an instance (telnet/ping/anything) in the other VPC (B) from this instance.
I set up VPC peering and changed the main route tables of both VPCs to target the peering connection. (Did not work.)
Then I tried changing the route table of the private subnet to route directly to the peering connection. (Did not work.)
There are many security groups in play. However, when I changed the SG on the instance in B to accept all connections, I was able to connect from my local PC but still not from the instance in A, so I don't think the SG is the issue. I thought it might be the route tables, but I was unable to find the cause.
When I traceroute from the instance in A, it goes to the NAT gateway private IP, and then to some AWS instance (OWNED BY AWS, NOT ME) and then gets lost.
Where can the connection be possibly wrong?
It is difficult to debug the situation from what you have described.
So, instead, I have tried to recreate your situation and have documented all the steps I took. Please follow these steps to create two more VPCs so that you feel comfortable with the fact that it actually can work.
Then, once you have it working, you can compare this configuration with your existing configuration to figure out what might be wrong with your current VPC configuration.
Here's what I did. Follow along!
Created VPC-A with the VPC Wizard ("VPC with a Single Public Subnet") and a CIDR of 10.0.0.0/16 and a public subnet of 10.0.0.0/24
Manually created VPC-B with a CIDR of 10.5.0.0/16 and a private subnet 10.5.0.0/24
Launched EC2 Instance-A in VPC-A (publicly accessible, with a Security Group permitting SSH access from 0.0.0.0/0)
Launched EC2 Instance-B in VPC-B (in the private subnet, with a Security Group permitting SSH access from 0.0.0.0/0)
Created a VPC Peering connection from VPC-A to VPC-B
Accepted the Peering connection
Added a route to the main Route Table for VPC-A with destination of 10.5.0.0/16 (the range of VPC-B), pointing to the peering connection
Added a route to the main Route Table for VPC-B with destination of 10.0.0.0/16 (the range of VPC-A), pointing to the peering connection
Logged into Instance-A via SSH
From Instance-A, connected via SSH to the private IP address of Instance-B
I had to first paste my private key into a PEM file on Instance-A, use chmod to set permissions, then use:
ssh -i keypair.pem ec2-user@10.5.0.15
I used the private IP address of Instance-B (10.5.0.15). This was randomly assigned, so it would be slightly different when you try this yourself.
And the result was... I successfully connected via SSH from Instance-A in VPC-A to Instance-B in VPC-B via the Peering connection (as proven by the fact that I connected via a Private IP address and the fact that VPC-B has no Internet Gateway).
So, if you follow along with the above steps and get it working, you'll then be able to compare your existing setup and figure out what's different!
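For reference, the peering and routing steps above can also be scripted with the AWS CLI; the IDs below are hypothetical placeholders:

```
# Request and accept the peering connection between the two VPCs.
aws ec2 create-vpc-peering-connection \
  --vpc-id vpc-0aaaaaaaaaaaaaaaa \
  --peer-vpc-id vpc-0bbbbbbbbbbbbbbbb
aws ec2 accept-vpc-peering-connection \
  --vpc-peering-connection-id pcx-0123456789abcdef0

# Route each VPC's traffic for the other VPC's CIDR over the peering link.
aws ec2 create-route --route-table-id rtb-0aaaaaaaaaaaaaaaa \
  --destination-cidr-block 10.5.0.0/16 \
  --vpc-peering-connection-id pcx-0123456789abcdef0
aws ec2 create-route --route-table-id rtb-0bbbbbbbbbbbbbbbb \
  --destination-cidr-block 10.0.0.0/16 \
  --vpc-peering-connection-id pcx-0123456789abcdef0
```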
I followed all the steps mentioned above and went through the AWS documentation. When I added a route in the subnet route table (of VPC A) with the CIDR of the peered VPC (VPC B) pointing to the peering connection, it worked.
In my case VPC B (the peered VPC) has a CIDR of 172.31.0.0/16.

VPC peering issues between bastion and app server EC2

I have two existing VPCs. One is for shared services, and the other hosts the actual application servers. I have created a peering connection between the two VPCs and added routes in each VPC, but I still cannot SSH from the bastion in the shared services VPC to the app server.
Details:
shared services vpc cidr(172.31.0.0/16)
bastion server IP (172.31.5.84)
route added to main route table (10.2.0.0/16 -> vpc-peer-id)
app server vpc cidr (10.2.0.0/16)
ec2 subnet instance ip (10.2.60.4)
route added to main route table (172.17.0.0/16 -> vpc-peer-id)
added sg allow (22 tcp 172.31.0.0/16)
I also added the same route to the app server subnet's route table, but no change.
I am completely stumped at the moment about how to set this up, or even how to work out where it is blocking. Any help would be appreciated.
To assist you, I did the following:
Started with an existing VPC-A with CIDR of 172.31.0.0/16
Created a new VPC-B with CIDR of 10.0.0.0/16
Created a subnet in VPC-B with CIDR of 10.0.0.0/24
Launched an Amazon Linux EC2 instance in the new subnet in VPC-B
Inbound Security Group: Allow SSH from 172.31.0.0/16
Created Peering connection:
Requester VPC: VPC-A
Acceptor VPC: VPC-B
Accepted peering connection (Did you do this on yours?)
Configured Route Tables:
The public Route Table in VPC-A: Route 10.0.0.0/16 to VPC-B
The private Route Table in VPC-B: Route 172.31.0.0/16 to VPC-A
Opened an SSH connection to an existing instance in VPC-A
From that instance, opened an SSH connection to the private IP address of the new instance (10.0.0.121)
Result: Instantly got a Permission denied (publickey) error because I didn't supply the private key. Getting an instant error message proved network connectivity (as opposed to hanging, which normally indicates a lack of network connectivity).
I then supplied the correct private key and tried to SSH again.
Result: Connected!
The complete flow is:
My laptop -> Instance in public subnet of `VPC-A` -> Instance in `VPC-B`
This had to use the peering connection because VPC-B has no Internet Gateway and I connected via the private IP address of the instance.
So, I recommend that you double-check that you have done each of the above steps to find where your configuration might differ (accepting the peering connection, configuring the security group, etc).
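To make that double-checking concrete, here is one way to inspect things from the command line; this is a sketch, and the VPC ID is a placeholder:

```
# Confirm the peering connection exists and was actually accepted
# (Status.Code should be "active", not "pending-acceptance").
aws ec2 describe-vpc-peering-connections \
  --query 'VpcPeeringConnections[].{Id:VpcPeeringConnectionId,Status:Status.Code}'

# Inspect the routes in each VPC to confirm the pcx routes are present.
# Run once per VPC, substituting your own VPC ID.
aws ec2 describe-route-tables \
  --filters Name=vpc-id,Values=vpc-0123456789abcdef0 \
  --query 'RouteTables[].Routes[].{Dest:DestinationCidrBlock,Target:VpcPeeringConnectionId}'
```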

How to get two AWS accounts with respective VPCs on the same subnet range to talk to each other

We have two AWS accounts:
Account A has a VPC with a 172.31.21.0/16 subnet.
Account B has 3 VPCs:
VPC 1 : 172.31.0.0/16 Default
VPC 2 : 172.32.0.0/16
VPC 3 : 172.30.0.0/16
We have an EC2 instance in Account A's VPC that needs to talk to RDS (MySQL) in Account B's VPC 2, but I cannot connect to the RDS instance from the EC2 instance in Account A.
Is the problem caused by Account B's VPC 1, which uses the same subnet range as Account A's VPC?
If so, how can we resolve the issue?
Do you have 172.31.21.0/16 or 172.31.21.0/24? The first is not a valid CIDR block as written (the enclosing /16 network is 172.31.0.0/16, which overlaps Account B's default VPC), so it is effectively useless. Did you set up the VPC peering connection and try to add routes? I believe you will have a problem with the overlapping network ranges. Also, a VPC peering connection works when both accounts use the same region (inter-region peering is now supported as well).
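To confirm what you actually have, you can list each account's VPC CIDRs (a sketch; run with each account's credentials):

```
# Show every VPC's ID and CIDR block in the current account/region.
aws ec2 describe-vpcs \
  --query 'Vpcs[].{Id:VpcId,Cidr:CidrBlock}'
```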
Presumably you already have a peering connection (a pcx) for A -> B.
So either:
1) alter the addressing on Account B VPC 1 so that it doesn't overlap with Account A's VPC, or
2) add an explicit route to Account B VPC 2's route table sending 172.31.21.0/16 to the pcx. But in this case, routing from VPC 2 to Account B VPC 1 will be broken for some addresses.
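Option 2 might look like this with the AWS CLI. The route table and pcx IDs are placeholders, and note that the API expects a canonical network address, so the /16 would have to be written as 172.31.0.0/16 (which is exactly the overlap problem described above):

```
# Send Account A's range to the peering connection from VPC 2's route table.
# IDs are placeholders; the CIDR must be the canonical 172.31.0.0/16.
aws ec2 create-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 172.31.0.0/16 \
  --vpc-peering-connection-id pcx-0123456789abcdef0
```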
If only one server connection is required, you can set up an EC2 instance and attach an EIP to it, then use that EC2 instance as an SSH tunnel to reach the RDS instance. The other VPC can then connect through that secure tunnel.
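As an illustration of the tunnel approach (the key name, RDS endpoint, and Elastic IP are hypothetical):

```
# Forward local port 3306 through the EIP-attached EC2 instance to RDS.
ssh -i tunnel-key.pem -N \
  -L 3306:mydb.abcdefgh1234.us-east-1.rds.amazonaws.com:3306 \
  ec2-user@<elastic-ip>

# In another shell, connect to RDS through the tunnel.
mysql -h 127.0.0.1 -P 3306 -u admin -p
```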
(Background info)
VPCs are virtually isolated, even within the same AWS account.
Connecting VPC A to VPC B is NOT POSSIBLE unless you:
i. set up AWS VPC peering, or
ii. assign an EIP to the resource you want to connect to, so that everyone connects through the public IP, or
iii. create some sort of VPN routing.
However, in cases i and iii, because both Account A and Account B use 172.31.x.x/16, VPC peering will NOT work, and even a VPN setup will fail because the same IP subnet is used on both sides.
Nevertheless, you may use NAT to share particular resources over a VPN, but it will be a "limited VPN".
In addition, you cannot use the AWS NAT Gateway service for this kind of NAT, because that service is only meant for NAT connections from a VPC's private network to the internet.
You can check the AWS documentation for examples of multiple-VPC peering configurations.