VPC peering issues between bastion and app server EC2

I have two existing VPCs. One is shared services, and the other hosts the actual application servers. I have created a peering connection between the two VPCs and added routes in each VPC, but I still cannot SSH from the bastion in the shared services VPC to the app server.
Details:
shared services VPC CIDR (172.31.0.0/16)
bastion server IP (172.31.5.84)
route added to main route table (10.2.0.0/16 -> vpc-peer-id)
app server VPC CIDR (10.2.0.0/16)
EC2 instance IP in subnet (10.2.60.4)
route added to main route table (172.17.0.0/16 -> vpc-peer-id)
added SG rule allowing (TCP 22 from 172.31.0.0/16)
I also added the same route to the app server subnet's route table, but there was no change.
I am completely stumped at the moment as to how to set this up, or even how to work out where it is blocking. Any help would be appreciated.
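In case it is useful, one way to see exactly what each main route table contains is the AWS CLI; this is only a sketch, and the VPC IDs below are placeholders:

# List the routes in each VPC's route tables
aws ec2 describe-route-tables --filters Name=vpc-id,Values=vpc-11111111 --query 'RouteTables[].Routes'
aws ec2 describe-route-tables --filters Name=vpc-id,Values=vpc-22222222 --query 'RouteTables[].Routes'

Each side should show a route whose DestinationCidrBlock is the full CIDR of the other VPC, targeting the pcx- peering connection ID.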

To assist you, I did the following:
Started with an existing VPC-A with CIDR of 172.31.0.0/16
Created a new VPC-B with CIDR of 10.0.0.0/16
Created a subnet in VPC-B with CIDR of 10.0.0.0/24
Launched an Amazon Linux EC2 instance in the new subnet in VPC-B
Inbound Security Group: Allow SSH from 172.31.0.0/16
Created Peering connection:
Requester VPC: VPC-A
Acceptor VPC: VPC-B
Accepted peering connection (Did you do this on yours?)
Configured Route Tables:
The public Route Table in VPC-A: Route 10.0.0.0/16 to VPC-B
The private Route Table in VPC-B: Route 172.31.0.0/16 to VPC-A
Opened an SSH connection to an existing instance in VPC-A
From that instance, opened an SSH connection to the private IP address of the new instance (10.0.0.121)
Result: Instantly got a Permission denied (publickey) error because I didn't supply the private key. Getting an instant error message proved network connectivity (as opposed to hanging, which normally indicates a lack of network connectivity).
I then supplied the correct private key and tried to SSH again.
Result: Connected!
The complete flow is:
My laptop -> Instance in public subnet of `VPC-A` -> Instance in `VPC-B`
This had to use the peering connection because VPC-B has no Internet Gateway and I connected via the private IP address of the instance.
So, I recommend that you double-check that you have done each of the above steps to find where your configuration might differ (accepting the peering connection, configuring the security group, etc).
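If it helps with the comparison, the same steps can be scripted with the AWS CLI. This is only a sketch, and every ID and CIDR below is a placeholder for my test setup rather than a value from your account:

# Request and accept the peering connection between the two VPCs
aws ec2 create-vpc-peering-connection --vpc-id vpc-11111111 --peer-vpc-id vpc-22222222
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-33333333

# Each VPC routes the other VPC's CIDR through the peering connection
aws ec2 create-route --route-table-id rtb-11111111 --destination-cidr-block 10.0.0.0/16 --vpc-peering-connection-id pcx-33333333
aws ec2 create-route --route-table-id rtb-22222222 --destination-cidr-block 172.31.0.0/16 --vpc-peering-connection-id pcx-33333333

# Allow SSH into the VPC-B instance from VPC-A's CIDR
aws ec2 authorize-security-group-ingress --group-id sg-22222222 --protocol tcp --port 22 --cidr 172.31.0.0/16

Both directions need a route, and the peering connection must actually be accepted (Active) before either route will carry traffic.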

Related

Weird behavior on AWS Client VPN endpoint access through Peered VPC

I've got a main AWS account where I have a VPC (VPC-A) and a Client VPN endpoint configured.
I have another account where I have a Dev environment and a VPC (VPC-B) configured.
I have set up VPC peering between VPC-A and VPC-B and it's working as intended.
The VPC-A CIDR is 172.43.0.0/16
The VPC-B CIDR is 10.2.20.0/23
I've set up the Client VPN endpoint with two associated subnets, one in Availability Zone A and the other in AZ F; they both use the same route table (which has a route to VPC-B over the peering connection). I have authorized the CIDR of VPC-B on the VPN as well.
The VPN Client CIDR is 7.0.0.0/16
When I connect to the VPN and get an IP like 7.0.0.131, I can ping an instance I have in VPC-B just fine.
When I connect to the VPN and get an IP like 7.0.1.162, I get timeouts and can't reach the instance in VPC-B at all.
The instance in VPC-B lives in Availability Zone C.
What am I missing here? Why does the connection work fine from IPs like 7.0.0.x but not from IPs like 7.0.1.x?
I found the issue with my implementation.
I mentioned that my Client VPN endpoint has two subnet associations. Under the endpoint's Route Table, I realized I had created the route for the first subnet (in AZ A) but had forgotten to create the route for the second subnet (in AZ F).
Creating a route to the VPC-B CIDR (10.2.20.0/23) for the second subnet as well solved the issue.
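For anyone scripting this, the missing route can also be added from the AWS CLI; a rough sketch, with the endpoint and subnet IDs as placeholders:

# Add a route to the peered VPC's CIDR for the second associated subnet (AZ F)
aws ec2 create-client-vpn-route \
    --client-vpn-endpoint-id cvpn-endpoint-0123456789abcdef0 \
    --destination-cidr-block 10.2.20.0/23 \
    --target-vpc-subnet-id subnet-0f00000000000000f

Each associated subnet needs its own route entry for destinations outside the VPC, which matches why only some client sessions could reach VPC-B.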

AWS Site-to-Site VPN Configuration doesn't allow inbound traffic

I've been following the instructions here: https://aws.amazon.com/blogs/networking-and-content-delivery/simulating-site-to-site-vpn-customer-gateways-strongswan/
I can successfully get the VPN up and running, but I can't successfully ping internal IP addresses from behind the VPN.
Here's my setup:
"On-prem" is simulated using a VPC with IP address: 172.19.0.0/16. The VPN is deployed on an EC2 instance in the subnet 172.19.16.0/20. This subnet has the following route table:
Destination      Target
172.19.0.0/16    local
172.21.0.0/16    eni-XXXXXXXXX
0.0.0.0/0        igw-XXXXXXXXX
Where eni-XXXXXXXXX is the network interface of the EC2 instance that has the VPN deployed on it.
My cloud VPC has the CIDR range: 172.21.0.0/16. I have an EC2 instance deployed in the 172.21.32.0/20 subnet which has the following route table:
Destination      Target
172.21.0.0/16    local
172.19.0.0/16    vgw-XXXXXXXXX
0.0.0.0/0        igw-XXXXXXXXX
Where vgw-XXXXXXXXX is the virtual private gateway associated with my VPN.
I can send traffic from my "on-prem" VPC into my cloud VPC successfully, but no traffic comes back out. I've tested this by SSHing into an EC2 instance in my "on-prem" VPC and pinging the private IP address of an EC2 instance in my cloud VPC: the pings are received by the instance in the cloud VPC, but my "on-prem" instance never receives the response.
I have checked my security groups and NACLs and they are not preventing this type of traffic.
Is there something misconfigured here?
This is not an entirely satisfying answer, but I moved from using a Virtual Private Gateway to using a Transit Gateway and I was able to get it to work.
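For anyone following the same route, the switch described above looks roughly like this with the AWS CLI. It is only a sketch with placeholder IDs, and the Site-to-Site VPN connection itself also has to be created against (or migrated to) the Transit Gateway instead of the virtual private gateway:

# Create a Transit Gateway and attach the cloud VPC to it
aws ec2 create-transit-gateway --description "site-to-site hub"
aws ec2 create-transit-gateway-vpc-attachment \
    --transit-gateway-id tgw-0123456789abcdef0 \
    --vpc-id vpc-0cloud0000000000 \
    --subnet-ids subnet-0cloud0000000000

# Send the cloud VPC's return traffic for the on-prem range to the Transit Gateway
aws ec2 create-route --route-table-id rtb-0cloud0000000000 \
    --destination-cidr-block 172.19.0.0/16 --transit-gateway-id tgw-0123456789abcdef0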

Connect to EC2 instance with a private IP address via a VPN

I've created a VPC with IPv4 CIDR 172.16.0.0/16; next I created three subnets:
subnet_1 172.16.0.0/20
subnet_2 172.16.16.0/20
subnet_3 172.16.32.0/20
Next I created an Internet Gateway attached to the VPC.
At this point I created an EC2 instance and attached an Elastic IP to it. On this instance I installed OpenVPN Access Server.
I then created a second EC2 instance that only has a private IP address. In my mind I thought that once connected via the VPN I should be able to SSH into this second EC2 instance using its private IP, but I'm not able to connect. What might I have done wrong?
EDIT: I've updated the post with some additional information.
This is how I configured my VPC
My subnets attached to the VPC
The internet gateway attached to VPC
This is my EC2 instance running OpenVPN Access Server, with its Elastic IP so that I can access it from my browser
Inbound rules for security group of vpn instance
And the outbound rules
The second and private instance (the instance to which I want to connect via VPN)
Inbound rules
And outbound rules
In OpenVPN Access Server I applied this configuration
And when I connect to the VPN I receive an address such as 172.16.128.2 (for example)
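One thing worth double-checking, based purely on the rules listed above (the security group ID is a placeholder): the private instance's security group has to allow SSH from wherever the VPN traffic arrives from, which, depending on how OpenVPN Access Server is configured, can be the VPN client range or the VPN server's own private address. A minimal sketch that allows SSH from anywhere inside the VPC CIDR:

# Allow SSH to the private instance from any address inside the VPC
# (this covers both the VPN server's private IP and the 172.16.128.x client range)
aws ec2 authorize-security-group-ingress \
    --group-id sg-0private000000000 \
    --protocol tcp --port 22 --cidr 172.16.0.0/16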

Unable to access another instance in different VPC

So I have 2 different VPCs in the same account.
In the first VPC (A), I have an instance in a private subnet, and all outbound data is routed through a NAT gateway (this was working from previous configurations).
Currently I am trying to access an instance (telnet/ping/anything) in the other VPC (B) from this instance.
I set up VPC peering and changed the main route tables of both VPCs to target the peering connection. (Did not work.)
Then I tried changing the route table of the private subnet to route directly to the peering connection. (Did not work.)
There are many security groups in play; however, when I changed the SG on the instance in B to accept all connections, I was able to connect from my local PC but still not from the instance in A. So I don't think the SG is the issue. I thought it might be the route tables but was unable to find the cause.
When I traceroute from the instance in A, it goes to the NAT gateway's private IP, then to some AWS instance (owned by AWS, not me), and then gets lost.
Where can the connection be possibly wrong?
It is difficult to debug the situation from what you have described.
So, instead, I have tried to recreate your situation and have documented all the steps I took. Please follow these steps to create two more VPCs so that you feel comfortable with the fact that it actually can work.
Then, once you have it working, you can compare this configuration with your existing configuration to figure out what might be wrong with your current VPC configuration.
Here's what I did. Follow along!
Created VPC-A with the VPC Wizard ("VPC with a Single Public Subnet") and a CIDR of 10.0.0.0/16 and a public subnet of 10.0.0.0/24
Manually created VPC-B with a CIDR of 10.5.0.0/16 and a private subnet 10.5.0.0/24
Launched EC2 Instance-A in VPC-A (publicly accessible, with a Security Group permitting SSH access from 0.0.0.0/0)
Launched EC2 Instance-B in VPC-B (in the private subnet, with a Security Group permitting SSH access from 0.0.0.0/0)
Created a VPC Peering connection from VPC-A to VPC-B
Accepted the Peering connection
Added a route to the main Route Table for VPC-A with destination of 10.5.0.0/16 (the range of VPC-B), pointing to the peering connection
Added a route to the main Route Table for VPC-B with destination of 10.0.0.0/16 (the range of VPC-A), pointing to the peering connection
Logged into Instance-A via SSH
From Instance-A, connected via SSH to the private IP address of Instance-B
I had to first paste my private key into a PEM file on Instance-A, use chmod to set permissions, then use:
ssh -i keypair.pem ec2-user@10.5.0.15
I used the private IP address of Instance-B (10.5.0.15). This was randomly assigned, so it would be slightly different when you try this yourself.
And the result was... I successfully connected via SSH from Instance-A in VPC-A to Instance-B in VPC-B via the Peering connection (as proven by the fact that I connected via a Private IP address and the fact that VPC-B has no Internet Gateway).
So, if you follow along with the above steps and get it working, you'll then be able to compare your existing setup and figure out what's different!
I followed all the steps mentioned above and went through all of the AWS documentation. When I added a route in the subnet route table (of VPC A) with the CIDR of the peered VPC (VPC B) pointing to the peering connection, it worked.
In my case VPC B (the peered VPC) has a CIDR of 172.31.0.0/16
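In other words, the route has to be in the route table actually associated with the instance's subnet, not just the VPC's main route table. Roughly, with the AWS CLI (all IDs below are placeholders):

# Find the route table associated with the instance's subnet in VPC A
aws ec2 describe-route-tables \
    --filters Name=association.subnet-id,Values=subnet-0aaaa000000000000

# Add a route for the peered VPC's CIDR (VPC B) to that route table
aws ec2 create-route --route-table-id rtb-0aaaa000000000000 \
    --destination-cidr-block 172.31.0.0/16 \
    --vpc-peering-connection-id pcx-0123456789abcdef0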

Nginx instances launching in a public subnet with a public IP in a VPC but unable to connect to the network

Our AWS instances are created in a public subnet in a single Availability Zone and are not able to connect to the internet or accept SSH. So all the resources are created in that one public subnet and one AZ.
I have developed a CloudFormation nginx template with a single VPC and two public subnets, but instances in the second public subnet are unable to reach the network or accept SSH, and even when I put the instance's public IP into the browser it does not work.
The main issue is that instances launching in the second public subnet are unable to reach the internet; the system logs show yum failing to reach its repositories:
Contact the upstream for the repository and get them to fix the problem
Reconfigure the base URL/etc.
Disable the repository, so yum won't use it by default
Looking at the scenario generally, in order to enable access to or from the Internet for instances in a VPC subnet, you must do the following:
Attach an Internet gateway to your VPC.
Ensure that your subnet's route table points to the Internet gateway.
Ensure that instances in your subnet have a globally unique IP address (public IPv4 address, Elastic IP address, or IPv6 address).
Ensure that your network access control and security group rules allow the relevant traffic to flow to and from your instance.
To use an Internet gateway, your subnet's route table must contain a route that directs Internet-bound traffic to the Internet gateway. You can scope the route to all destinations not explicitly known to the route table (0.0.0.0/0 for IPv4 or ::/0 for IPv6).
Kindly refer to this AWS documentation and see what you are missing, as you have likely skipped one of the above-mentioned steps.
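As a rough AWS CLI sketch of those steps (all IDs are placeholders; in a CloudFormation template the equivalents are resources such as AWS::EC2::InternetGateway, AWS::EC2::VPCGatewayAttachment, and AWS::EC2::Route):

# Create an Internet gateway and attach it to the VPC
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-0123456789abcdef0 --vpc-id vpc-0123456789abcdef0

# Route internet-bound traffic from the second public subnet's route table to the IGW
aws ec2 create-route --route-table-id rtb-0second000000000 \
    --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0123456789abcdef0

# Make sure instances launched in that subnet receive a public IPv4 address
aws ec2 modify-subnet-attribute --subnet-id subnet-0second000000000 --map-public-ip-on-launch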