I have successfully set up an IPsec VPN between two VPCs in two different regions via strongSwan, and the two gateways are able to connect.
The problem is that the other instances in each VPC/subnet are not able to ping the other VPC/subnet:
VPC A / gateway can talk to VPC B / gateway.
VPC A / instance can talk to VPC A / gateway.
The same applies for VPC B. But:
VPC A / instance can NOT talk to VPC B / gateway or VPC B / instances (and the same applies from VPC B to VPC A).
I have checked and tried to play with the routes in table 220 and also with ICMP redirects, with no luck.
Can anyone assist, please?
Regards.
There is too little information to provide an exact answer; the topology and addressing plan, the relevant security groups and EC2 configuration, and the strongSwan and relevant Linux kernel configuration would all be needed.
Still, let me offer a few hints on what to do in order to allow routing among subnets connected via the VPN:
IP forwarding must be enabled in the Linux kernel, assuming strongSwan runs on a Linux EC2 instance. It can be done with the following command, run as root:
echo 1 > /proc/sys/net/ipv4/ip_forward
Please note that this setting does not persist across a reboot. How to make it persistent depends on the Linux distribution.
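On most modern distributions a drop-in file under /etc/sysctl.d makes it persistent; a minimal sketch (the file name is just an example):
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ipsec-gateway.conf
sudo sysctl --system   # reload all sysctl configuration files, including the new one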
The EC2 source/destination check must be disabled on the strongSwan instances (via the EC2 console or the CLI, as sketched below).
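If you prefer the CLI over the console, something like the following should work (the instance ID is a placeholder):
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --no-source-dest-check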
The VPC route tables must be set to route traffic destined for the subnet in the other region via the strongSwan EC2 node, instead of via the default gateway.
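For example, in the route table used by VPC A's subnets, a route for VPC B's CIDR would point at the local strongSwan instance (route table ID, CIDR and instance ID below are placeholders); the mirror route is needed on the VPC B side:
aws ec2 create-route --route-table-id rtb-0aaa1111 --destination-cidr-block 10.1.0.0/16 --instance-id i-0123456789abcdef0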
Traffic selectors (leftsubnet and rightsubnet) in ipsec.conf must be set accordingly.
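As a rough illustration only (the CIDRs are placeholders and the key/IKE settings are omitted), the relevant part of a conn section on the VPC A gateway could look like:
conn vpc-a-to-vpc-b
    leftsubnet=10.0.0.0/16    # VPC A CIDR (placeholder)
    rightsubnet=10.1.0.0/16   # VPC B CIDR (placeholder)
    auto=start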
I am trying to connect to an RDS Instance from my local machine through a VPC Peering connection. In my AWS Account I have two VPCs: VPC1 is connected to my local network via DirectConnect, VPC2 isn't. VPC2 contains all of my infrastructure and the idea is that if I want to connect to that infrastructure from my local machine I need to work through VPC1.
I have configured a route over the peering connection to forward requests for a given address range to VPC2. This doesn't really help me for RDS, though, because I don't know the IP address of the RDS instance, only its endpoint. I am guessing that there is some combination of DNS/routing/networking/peering that will solve this problem, but I haven't found any documentation that describes how to solve it.
Has anyone solved this issue before, or know of any documentation that describes what needs to be done?
Update:
The exact problem is that I can't connect to the RDS instance from my local machine. For example, if I use the RDS endpoint as the server for my connection, the SQL client I am using simply times out. My suspicion is that traffic is not being routed to VPC2 correctly, but I don't know how to prove that.
As far as DNS goes, I am not sure how on-prem is set up; however, I have 4 hosted zones in Route53 with a variety of URLs, and I am able to resolve the records I set up in Route53 by host name from my local machine.
Likewise, I am not sure how the network has been configured with DirectConnect (full VPN tunnel or otherwise).
As far as DNS and the network connections to AWS go, though, that stuff works. I am able to resolve pieces of infrastructure in VPC1 fine; I just (seemingly) can't get traffic to move across the peering connection in the way that I would expect.
I think the problem is that you expect to access VPC2 resources from on-prem just because you have Direct Connect to VPC1. What VPC peering gives you is access from VPC1 to VPC2 via private IP addresses. In your case you want VPC1 to act like a router and transit your requests from on-prem to VPC2, and it does not work that way.
What are your options:
You could have a host in VPC1 access VPC2 (a bastion host) and SSH into that one first (see the sketch after the documentation links below).
If possible, you could create a VPN connection from on-prem to VPC2.
And there are more complex solutions via Transit Gateway.
The doc here talks about VPC peering limitations; it basically explains that transitive connections like the one you want won't work: https://docs.aws.amazon.com/vpc/latest/peering/vpc-peering-basics.html
The AWS scenario documentation on reaching a DB mentions option 1 here: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.Scenarios.html
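For option 1, one common pattern (not the only one) is an SSH tunnel through a bastion in VPC1 that can reach VPC2 over the peering; all values below are placeholders:
# forward a local port through a bastion in VPC1 to the RDS endpoint in VPC2
ssh -N -L 15432:<rds-endpoint>:<db-port> ec2-user@<bastion-public-ip>
# then point the SQL client at 127.0.0.1:15432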
Sorry for the Japanese material.
I think your on-premises network and VPC2 cannot communicate through VPC1 even if you configure routing. And as long as that communication is impossible, configuring DNS will not accomplish the goal, I guess.
AWS Solutions Architect ブログ: VPC Peeringの使いどころとTips等々 (AWS Solutions Architect blog: where to use VPC Peering, plus tips, etc.)
VPC Peering provides peering, not routing, between multiple VPCs. So if you peer three or more VPCs, or connect to locations outside of AWS via VPN or Direct Connect, there is no IP-layer routing to networks more than two hops away, even if you set each routing table appropriately. Workarounds such as proxies or stepping-stone hosts are required, as before.
Translated with www.DeepL.com/Translator (free version)
Could PrivateLink help you achieve your goal?
AWS-40_AWS_Summit_Online_2020_NET01.pdf
Following the example on page 42:
local network --> Direct Connect --> VPC Endpoint (in VPC1) --> NLB (in VPC2) --> RDS (in VPC2)
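If you go that route, the rough shape of the setup (all IDs and ARNs below are placeholders) is an endpoint service in front of the NLB in VPC2 plus an interface endpoint in VPC1:
# expose the NLB in VPC2 as an endpoint service
aws ec2 create-vpc-endpoint-service-configuration --network-load-balancer-arns <nlb-arn>
# create an interface endpoint for that service inside VPC1
aws ec2 create-vpc-endpoint --vpc-id <vpc1-id> --vpc-endpoint-type Interface \
    --service-name <service-name-returned-above> --subnet-ids <vpc1-subnet-id> --security-group-ids <sg-id>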
Is there an alternative to AWS's security groups in the Google Cloud Platform?
The following is the situation I have:
A basic Node.js server running in Cloud Run as a Docker image.
A PostgreSQL database at GCP.
A Redis instance at GCP.
What I want is a 'security group' of sorts, so that my PostgreSQL DB and Redis instance can only be accessed from my Node.js server and nowhere else. I don't want them to be publicly accessible via an IP.
In AWS, only services that are part of the same security group can access each other.
I'm not very sure but I guess in GCP I need to make use of Firewall rules (not sure at all).
If I'm correct, could someone please guide me on how to go about this? And if I'm wrong, could someone suggest the correct method?
GCP has firewall rules for its VPC that work similarly to AWS security groups; more details are in the VPC firewall rules documentation. You can place your PostgreSQL database, Redis instance and Node.js server inside a GCP VPC.
Make the Node.js server available to the public via DNS.
Set a default-allow-internal rule, so that only the services present in the VPC can access each other (blocking public access to the DB and Redis).
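A sketch of such a rule with gcloud (the network name, source range and ports are placeholders, and this assumes the clients live inside the VPC, e.g. on VMs or behind a Serverless VPC Access connector):
gcloud compute firewall-rules create allow-internal \
    --network=my-vpc --direction=INGRESS \
    --source-ranges=10.128.0.0/20 \
    --allow=tcp:5432,tcp:6379   # PostgreSQL and Redis ports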
As an alternative approach, you may also keep all three servers public and only allow the Node.js server's IP address to access the DB and Redis servers, but the above solution is recommended.
Security groups in AWS are instance-attached, firewall-like components. So, for example, you can have an SG at the instance level, similar to configuring iptables on regular Linux.
On the other hand, Google firewall rules operate more at the network level. For that instance-level granularity, your alternatives on GCP are to use one of the following on the instance itself:
firewalld
nftables
iptables
Note that in AWS the subnet-level counterpart is actually the network ACL rather than the security group. Subnet-level controls like that are closer to Google's firewall rules, though security groups still give a bit more granularity, since you can attach different security groups to individual instances, whereas in GCP firewall rules are defined per network. At this level, protection should come from firewalls at the network/subnet level.
Thanks @amsh for the solution to the problem. But there were a few more things that needed to be done, so I'll list them out here in case anyone needs them in the future:
Create a VPC network and add a subnet for a particular region (e.g. us-central1).
Create a VPC connector from the Serverless VPC Access section for the created VPC network in the same region.
In Cloud Run add the created VPC connector in the Connection section.
Create the PostgreSQL and Redis instances in the same region as the created VPC network.
In the Private IP section of these instances, select the created VPC network. This will create a Private IP for the respective instances in the region of the created VPC network.
Use this Private IP in the Node.js server to connect to the instance and it'll be good to go.
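A rough gcloud sketch of steps 2 and 3 (the project, names, region and IP range are placeholders):
# Serverless VPC Access connector in the same region as the VPC subnet
gcloud compute networks vpc-access connectors create my-connector \
    --region=us-central1 --network=my-vpc --range=10.8.0.0/28
# attach the connector to the Cloud Run service
gcloud run deploy my-service --image=gcr.io/my-project/my-image \
    --region=us-central1 --vpc-connector=my-connector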
Common Problems you might face:
Error while creating the VPC Connector: Ensure the IP range of the VPC connector and the VPC network do not overlap.
Different regions: Ensure all instances are in the same region of the VPC network, else they won't connect via the Private IP.
Avoid changing the firewall rules: The firewall rules must not be changed unless you need them to perform differently than they normally do.
Instances in different regions: If the instances are spread across different regions, use VPC network peering to establish a connection between them.
Background: I have a Kubernetes cluster set up in one AWS account that needs to access data in an RDS MySQL instance in a different account, and I can't seem to get the settings right to allow traffic to flow.
What I've tried so far:
Set up a peering connection between the two VPCs. They are in the same region, us-east-1.
Created route table entries in each account to point traffic for the corresponding subnet at the peering connection.
Created a security group in the RDS VPC to allow traffic from the Kubernetes subnets to access MySQL.
Made sure DNS resolution is enabled on both VPCs.
Kubernetes VPC details (Requester)
This contains 3 EC2 instances (it looks like each has its own subnet) that house my Kubernetes cluster. I used EKS to set this up.
The route table rules I set up have the 3 subnets associated, and point the RDS VPC CIDR block at the peering connection.
RDS VPC details (Accepter)
This VPC contains the MySQL RDS instance, as well as some other resources. The RDS instance has quite a few VPC security groups assigned to it, for access from our office IPs etc. It has Public Accessibility set to true.
I repeated the route table setup (in reverse) and pointed back to the K8s VPC subnet / peering connection.
Testing
To test the connection, I've tried 2 different ways. The application that needs to access mysql is written in node, so I just wrote a test connector and example query and it times out.
I also tried netcat from a terminal in the pod running in the kubernetes cluster.
nc -v {{myclustername}}.us-east-1.rds.amazonaws.com 3306
This also times out. It seems to be resolving to the correct MySQL instance IP, though, so I'm not sure whether that means my routing rules are working right from the K8s VPC side. nc reports:
DNS fwd/rev mismatch: ec2-XXX.compute-1.amazonaws.com != ip-{{IP OF MY MYSQL}}.ec2.internal
I'm not sure what steps to take next. Any direction would be greatly appreciated.
Side note: I've read through this question: Kubernetes container connection to RDS instance in separate VPC.
I think I understand what's going on there. My CIDR blocks do not conflict with the default K8s IPs (10.0...), so my problem seems to be different.
I know this was asked a long time ago, but I just ran into this problem as well.
It turns out I was editing the wrong AWS route table! When I ran kops to create my cluster, it created a new VPC with its own route table, but also another route table! I needed to add the peering connection route to the cluster's route table instead of the VPC's main route table.
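A sketch of checking which table the worker-node subnets actually use and adding the route there (the IDs and CIDR are placeholders):
# find the route table actually associated with the worker-node subnets
aws ec2 describe-route-tables --filters Name=association.subnet-id,Values=<k8s-subnet-id>
# add the peering route to that table, not to the VPC's main route table
aws ec2 create-route --route-table-id <cluster-route-table-id> \
    --destination-cidr-block <rds-vpc-cidr> --vpc-peering-connection-id <pcx-id>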
We are evaluating AWS for our cloud usage.
However, our corporate proxy is blocking access to instances via SSH/RDP.
I checked with the Ops team and they said they will allow the SSH and RDP ports, but I have to give one source subnet and only one destination subnet/IP.
I am in the ap-southeast-2 region, and there are nearly 27 subnets as per the reference below, including global ranges (for S3).
AWS Subnets - Regionwise
Question
1) Is there a way I can force AWS to create instances within a particular IP range? (I am thinking that if I can get instances in one of the 27 subnets, then I can give that subnet as the destination to our Ops team.)
2) Can I use an RDP jumpbox? Meaning, can I create one EC2 instance, give the IP of that machine to Ops to allow access, and then use that machine to RDP/SSH to the other instances?
Please let me know the other options and your suggestions.
Thanks in advance.
There is no way to launch instances within a particular public IP CIDR, and a CIDR large enough to cover what you might get may include non-AWS IPs. One option is to allocate a bunch of Elastic IPs (you are limited to only 10 EIPs by default), keep the ones that land in the same /24 or /21, and release the rest. Even then there is no guarantee of getting EIPs in the same small CIDR.
SSH via a jump host is easy and used by many. RDP via a jump host may be possible, but I am not sure how it can be done.
I have found a way with the help of my colleague.
Luckily we had a guest network and we used that to achieve this.
Steps
1) I connected to both the LAN network and the guest network.
2) We referenced the "Add Static Route" Microsoft documentation and added a route:
route add "destination" mask "subnetmask" "gateway" metric "costmetric" if interface
eg - route add ec2InstanceIP mask 255.255.255.255 AlternateNetworkgateway METRIC 10
Now I am able to SSH, but RDP is not possible. I will explore and update if I find a way.
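One possibility to explore is tunnelling RDP over the SSH connection that already works, assuming the reachable Linux instance can reach the Windows instance on port 3389; the addresses below are placeholders:
# forward a local port through the reachable Linux EC2 instance to a Windows instance
ssh -N -L 13389:<windows-private-ip>:3389 ec2-user@<linux-ec2-ip>
# then point the RDP client at localhost:13389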
I've set up an Application Load Balancer in my primary VPC, where most of my instances are. I have some instances in another VPC hosting Docker services, and I want to set up rules to access these at http://domain.com/services/. I have peering enabled between the two VPCs and I've created a target group, but the ALB only lists target groups within its own VPC. Is there any way to use the target group in the peered VPC, or am I out of luck? I've been unable to find any leads on Google so far. I've made sure the subnets in the ALB have routing through the VPC peering, but that hasn't helped.
You can load balance with an ALB to internal IP addresses in the peered VPC. You do this by selecting ip as the target type when setting up the target group.
Amazon has a great write up on this exact problem and solution: https://aws.amazon.com/blogs/aws/new-application-load-balancing-via-ip-address-to-aws-on-premises-resources/
Since you are going VPC to VPC, substitute their "on premises" wording with "my other VPC". I just set this up using host-header routing to cross two VPCs with a single ALB.
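As a CLI sketch (names, IDs and IPs are placeholders), the target group is created in the ALB's own VPC with target-type ip, and the private IPs of the instances in the peered VPC are registered into it:
aws elbv2 create-target-group --name docker-services --protocol HTTP --port 80 \
    --vpc-id <alb-vpc-id> --target-type ip
aws elbv2 register-targets --target-group-arn <target-group-arn> \
    --targets Id=10.1.2.10,Port=80 Id=10.1.2.11,Port=80
Note that IP targets must be private (RFC 1918 / RFC 6598) addresses reachable from the ALB's VPC, which is exactly what the peering provides.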
You could also try a Route 53 routing policy; it can balance across instances beyond the region as well.