Implementing VPC peering in GCP - google-cloud-platform

I was trying to demonstrate VPC peering in GCP. I followed the steps below.
Setup 1:
I logged into the GCP admin user account and created a VPC in custom mode with a subnet in the us-central1 region under one project. Then I set a firewall rule to allow SSH and TCP traffic. Then I created a VM instance in the same us-central1 region, selecting this custom VPC and subnet in the networking options. Then I SSHed into that VM and pinged it from Cloud Shell. Both worked fine.
Setup 2:
I logged into a GCP user account that the admin (the account used above) had added as a service account user. With that account I created a VPC in custom mode with a subnet in the asia-east1 region under another project. Then I created a VM instance in the same asia-east1 region, selecting this custom VPC and subnet in the networking options. Then I set a firewall rule to allow SSH and TCP traffic. Then I SSHed into that VM and pinged it from Cloud Shell. Both worked fine.
Both VPCs have the dynamic routing mode set to Regional.
Then I tried to ping the us-central1 machine from the asia-east1 machine, and the asia-east1 machine from the us-central1 machine.
My expectation was that this would not work, since the machines are in two different VPCs with subnets in two different regions, and that I could then set up VPC peering to make it possible. But unfortunately the ping already works. I just wanted to demonstrate the VPC peering concept.
Can anyone suggest what I missed?
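For reference, each setup was roughly equivalent to the following gcloud commands (project IDs, resource names and IP ranges are placeholders, not the exact values used):
# Setup 1 (admin project), run from Cloud Shell
gcloud compute networks create my-vpc --project=project1 --subnet-mode=custom
gcloud compute networks subnets create my-subnet --project=project1 --network=my-vpc --region=us-central1 --range=10.10.0.0/24
gcloud compute firewall-rules create allow-ssh-icmp --project=project1 --network=my-vpc --allow=tcp:22,icmp
gcloud compute instances create vm-us --project=project1 --zone=us-central1-a --network=my-vpc --subnet=my-subnet
# Setup 2 is the same in the second project, with asia-east1 / asia-east1-a and its own VPC and subnet.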
===============================================================
UPDATE
gcloud compute networks describe vpc1
autoCreateSubnetworks: false
creationTimestamp: '2021-09-03T04:08:24.491-07:00'
description: ''
id: '8530504402595724487'
kind: compute#network
mtu: 1460
name: my-vpc
routingConfig:
routingMode: REGIONAL
selfLink: https://www.googleapis.com/compute/v1/projects/project-name/global/networks/my-vpc
subnetworks:
- https://www.googleapis.com/compute/v1/projects/project-name/regions/us-central1/subnetworks/my-subnet
x_gcloud_bgp_routing_mode: REGIONAL
x_gcloud_subnet_mode: CUSTOM
gcloud compute networks describe vpc2
autoCreateSubnetworks: false
creationTimestamp: '2021-09-03T04:56:02.154-07:00'
description: ''
id: '8965341541776208829'
kind: compute#network
mtu: 1460
name: my-project2-vpc
routingConfig:
routingMode: REGIONAL
selfLink: https://www.googleapis.com/compute/v1/projects/project-name/global/networks/my-project2-vpc
subnetworks:
- https://www.googleapis.com/compute/v1/projects/project-name/regions/asia-east1/subnetworks/asia-subnet
x_gcloud_bgp_routing_mode: REGIONAL
x_gcloud_subnet_mode: CUSTOM

I finally got my head around it. I couldn't reproduce your issue at first because I was missing firewall rules (I assumed GCP would create them, but when you create a custom network no rules are created by default). I had to manually allow SSH (TCP port 22) and ICMP traffic, and then everything started working as you described.
Communication between the networks is possible because the VMs get public IPs by default and are therefore reachable from any machine connected to the internet. You didn't provide that information, so I assumed you didn't change any networking settings while creating the test VMs and thus created VMs with public IPs.
But if you create the VMs with only internal IPs, they won't be able to communicate with VMs in a different VPC without VPC peering. Any such communication between the networks (regardless of the regions they are in) is impossible in that case.
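As a rough sketch (names and project IDs are placeholders), you can create the test VMs without external IPs and then configure the peering from both sides:
# VM with an internal IP only
gcloud compute instances create vm-us --project=project1 --zone=us-central1-a --network=my-vpc --subnet=my-subnet --no-address
# Peering has to be created in both directions, each side pointing at the other project's network
gcloud compute networks peerings create peer-1-to-2 --project=project1 --network=my-vpc --peer-project=project2 --peer-network=my-project2-vpc
gcloud compute networks peerings create peer-2-to-1 --project=project2 --network=my-project2-vpc --peer-project=project1 --peer-network=my-vpc
Once both sides of the peering are active, the internal IPs become reachable across the two VPCs, provided the firewall rules allow ICMP/SSH from the peer subnet ranges.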

Related

Network GKE cluster between VPC subnets

In this question, the author says that the GKE cluster is not reachable from other subnets in the VPC.
But that is exactly what I need to do. I've added details below; all suggestions are welcome.
I created a VPC in Google Cloud with custom subnets. I have a subnet in us-east1 and another in us-east4. Then I created a VPC-native private GKE cluster in the same VPC, in the us-east4 subnet.
[added details]
GKE in us-east4
endpoint 10.181.15.2
control plane 10.181.15.0/28
pod address range 10.16.0.0/16
service address range 10.17.0.0/22
VPC subnet in us-east4
10.181.11.0/24
VPC subnet in us-east1
10.171.1.0/24
I added 10.171.1.0/24 as a Control Plane authorized network, and I added 10.171.1.0/24 to the automatically created firewall rule.
But I still can't use kubectl from the instance in the 10.171.1.0/24 subnet.
What I see when trying to use kubectl from a VM in us-east4 (10.181.11.7):
On this VM, I set the context with kubectl config use-context <correct gke context> and I have gcloud configured correctly. Then,
kubectl get pods correctly gives a list of pods in the GKE cluster.
From a VM in the us-east1 10.171.1.0/24 subnet, which is set up in the same way, kubectl get pods times out with an error that it's unable to reach the endpoint. The message is:
kubectl get pods
Unable to connect to the server: dial tcp 10.181.15.2:443: i/o timeout
This seems like a firewall problem, but I've been unable to find a solution despite the abundance of GKE documentation out there. It could be a routing problem, but I thought a VPC-native GKE cluster would take care of the routes automatically?
By default, the private endpoint for the control plane is accessible from clients in the same region as the cluster. If you want clients in the same VPC but located in different regions to access the control plane, you'll need to enable global access using the --enable-master-global-access option. You can do this when you create the cluster or you can update an existing cluster.
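For example, on an existing cluster (cluster name is a placeholder):
gcloud container clusters update my-cluster --region us-east4 --enable-master-global-access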

How can I diagnose why exactly my application in Elastic Beanstalk is not reachable?

I have deployed an application to AWS (Elastic Beanstalk) and am having trouble reaching it from the browser. The application is a Spring Boot web application.
According to the dashboard, the application is running.
Here are the configuration details (All Applications -> myapp -> myapp-env):
Category: Software
Lifecycle: Keep logs after terminating environment
Environment properties: GRADLE_HOME, JAVA_HOME, M2, M2_HOME, SERVER_PORT
Rotate logs: disabled
Retention: 1 days
Log streaming: enabled
X-Ray daemon: disabled
Category: Instances
AMI ID: XXXXXXXXXXXXXXXXXXXXXXXXX
Instance type: t3.micro
Monitoring interval: 5 minute
IOPS: container default
Size: container default
Root volume type: container default
EC2 security groups: XXXXXXXXXXXXXXXXXXXXXXXXX
Category: Security
IAM instance profile: XXXXXXXXXXXXXXXXXXXXXXXXX
EC2 key pair: XXXXXXXXXXXXXXXXXXXXXXXXX
Service role: XXXXXXXXXXXXXXXXXXXXXXXXX
Category: Network
Public IP address: disabled
Instance subnets: mysubnet-XXXXXX
Visibility: public
VPC: vpc-XXXXXXX
How can I find out what settings I need to change in order to make my application available from within my company (inside the virtual private cloud) but not accessible from the outside?
At the moment I cannot access it either from within the company network or from the outside.
Update 1:
I looked at the settings of the security group associated with my application to find out whether any ports are blocked.
There are no blocked ports.
Update 2: I just figured out that I cannot connect to the instance via SSH either (it times out).
According to the information provided, no public IP is assigned to the instance. So I presume that the instance is deployed in a private subnet and that you are trying to access your application through an Elastic Load Balancer. You cannot SSH into an instance directly if it is launched in a private subnet.
Please make sure that you:
Set up a NAT gateway/NAT instance in a public subnet.
Update your VPC route table to send all internet-bound traffic through the NAT gateway:
0.0.0.0/0 --> NAT
Check that the ELB health checks are green.
Connect to your application through the ELB DNS name.
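A rough aws CLI sketch of the NAT gateway and route steps (all IDs are placeholders):
# NAT gateway in a public subnet, using an already allocated Elastic IP
aws ec2 create-nat-gateway --subnet-id subnet-aaaa1111 --allocation-id eipalloc-bbbb2222
# Default route of the private subnet's route table pointing at the NAT gateway
aws ec2 create-route --route-table-id rtb-cccc3333 --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-dddd4444
Note that the NAT gateway only covers outbound traffic from the private subnet; inbound access to the application still goes through the ELB DNS name mentioned above.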

Kubernetes: Have no access from EKS pod to RDS Postgres

I'm trying to set up Kubernetes on AWS. For this I created an EKS cluster with 3 nodes (t2.small) according to the official AWS tutorial. Then I want to run a pod with an app which communicates with Postgres (RDS in a different VPC).
But unfortunately the app doesn't connect to the database.
What I have:
EKS cluster with its own VPC (CIDR: 192.168.0.0/16)
RDS (Postgres) with its own VPC (CIDR: 172.30.0.0/16)
Peering connection initiated from the RDS VPC to the EKS VPC
The route table for the 3 public subnets of the EKS cluster is updated: a route with destination 172.30.0.0/16 and the peering connection from step #3 as target is added.
The route table for the RDS VPC is updated: a route with destination 192.168.0.0/16 and the peering connection from step #3 as target is added.
The RDS security group is updated with a new inbound rule: all traffic from 192.168.0.0/16 is allowed. (Rough aws CLI equivalents of these steps are sketched below.)
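These updates correspond roughly to the following commands (all IDs are placeholders; the security group rule is narrowed here to the Postgres port):
# Route in the EKS subnets' route table towards the RDS VPC via the peering connection
aws ec2 create-route --route-table-id rtb-eks11111 --destination-cidr-block 172.30.0.0/16 --vpc-peering-connection-id pcx-abcd1234
# Route back from the RDS VPC towards the EKS VPC
aws ec2 create-route --route-table-id rtb-rds22222 --destination-cidr-block 192.168.0.0/16 --vpc-peering-connection-id pcx-abcd1234
# Inbound rule on the RDS security group (Postgres port only, instead of all traffic)
aws ec2 authorize-security-group-ingress --group-id sg-rds33333 --protocol tcp --port 5432 --cidr 192.168.0.0/16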
After all these steps I execute the following kubectl command:
kubectl exec -it my-pod-app-6vkgm nslookup rds-vpc.unique_id.us-east-1.rds.amazonaws.com
nslookup: can't resolve '(null)': Name does not resolve
Name: rds-vpc.unique_id.us-east-1.rds.amazonaws.com
Address 1: 52.0.109.113 ec2-52-0-109-113.compute-1.amazonaws.com
Then I connect to one of the 3 nodes and execute a command:
getent hosts rds-vpc.unique_id.us-east-1.rds.amazonaws.com
52.0.109.113 ec2-52-0-109-113.compute-1.amazonaws.com rds-vpc.unique_id.us-east-1.rds.amazonaws.com
What did I miss in the EKS setup in order to have access from the pods to RDS?
UPDATE:
I tried to fix the problem with a Service:
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  type: ExternalName
  externalName: rds-vpc.unique_id.us-east-1.rds.amazonaws.com
So I created this Service in EKS and then tried to refer to postgres-service as the DB URL instead of the direct RDS host address.
This fix does not work :(
Have you tried enabling DNS resolution in the peering connection? It looks like you are not getting the internally routable DNS name. You can enable it by going into the settings for the peering connection and checking the DNS resolution box. I generally do this with all of the peering connections that I control.
The answer I provided here may actually apply to your case, too.
It is about using Services without selectors. Look also into ExternalName Services.
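A minimal sketch of enabling DNS resolution over the peering from the CLI (the peering connection ID is a placeholder):
aws ec2 modify-vpc-peering-connection-options --vpc-peering-connection-id pcx-abcd1234 \
    --requester-peering-connection-options AllowDnsResolutionFromRemoteVpc=true \
    --accepter-peering-connection-options AllowDnsResolutionFromRemoteVpc=true
With that enabled (and DNS support/hostnames enabled on both VPCs), the RDS endpoint should resolve to its private IP from inside the EKS VPC rather than to the public 52.x address shown in the nslookup above.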

How to configure OpenVPN for AWS VPC Peering with a single private subnet in the 1st and a single subnet in the 2nd VPC?

I've just installed OpenVPN from an AWS Marketplace AMI in my account and connected it via LDAP to AWS Simple AD. To start with, here are the details:
Bastion Host VPC
Name: Bastion-VPC ---> Has single public subnet
VPC ID: vpc-01000000000000000
CIDR: 10.236.76.192/26
Private Host VPC
Name: Private-Environment-VPC ---> Has single private subnet
VPC ID: vpc-02000000000000000
CIDR: 192.168.96.0/20
I've established VPC peering between both VPCs. Whenever I log on to any machine in Bastion-VPC, I can RDP to any machine in Private-Environment-VPC.
I've installed OpenVPN in Bastion-VPC and, over the VPN, can RDP to any machine inside Bastion-VPC, but I can't RDP/connect to any machine in Private-Environment-VPC.
I'd like to resolve the above problem: establish a VPN connection to Bastion-VPC and RDP to machines in Private-Environment-VPC using OpenVPN.
I tried to follow the steps noted at https://forums.aws.amazon.com/thread.jspa?messageID=570840 and https://openvpn.net/index.php/open-source/documentation/howto.html#redirect, but they were of no help.
Thanks in advance.
After trying a number of the available solutions, here is what the problem was:
1 - My OpenVPN server was joined to AWS Simple AD
2 - There was no obvious way to allow all authenticated users to connect to the private subnet hosted in the other VPC
Solution
Add permissions for each user in the "Allow To" section of the user profile to allow access to the private subnet hosted in the other VPC.
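If you prefer scripting it over the Admin UI, something along these lines with Access Server's sacli tool should be equivalent (the path, property key and value format are assumptions that may differ between Access Server versions, so verify against your version's documentation):
# Per-user "Allow Access To" entry for the peered private subnet (username is a placeholder)
/usr/local/openvpn_as/scripts/sacli --user jdoe --key "access_to.0" --value "+NAT:192.168.96.0/20" UserPropPut
# Reload the configuration so the change takes effect
/usr/local/openvpn_as/scripts/sacli start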

AWS - Strongswan :: How to make clients from subnets communicate?

I have successfully set up an IPsec VPN between 2 VPCs in 2 different regions via strongSwan, and the 2 gateways are able to connect.
The problem is that the other instances in each VPC/subnet are not able to ping the other VPC/subnet:
VPC A/gateway can talk to VPC B/gateway...
VPC A/Instance can talk to VPC A/Gateway
Same applies for VPC B... But
VPC A/Instance can NOT talk to VPC B/Gateway or VPC B/Instances (the same applies from VPC B to VPC A).
I have checked and tried to play with the routes in table 220 and also with ICMP redirects, with no luck.
Can anyone assist, please?
Regards.
There is way too little information to provide an exact answer; topology and addressing plan, relevant security groups and EC2 configuration, StrongSwan and relevant Linux kernel configuration would be needed.
Still, let me offer a few hints on what to do in order to allow routing among subnets connected via the VPN:
IP forwarding must be enabled in the Linux kernel, assuming strongSwan runs on a Linux EC2 instance. It can be done with the following command, run as root:
echo 1 > /proc/sys/net/ipv4/ip_forward
Please note that the setting will not persist across a reboot. How to make it persistent depends on the Linux distribution.
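For example, on distributions that read sysctl configuration files you can typically persist it like this (the file name is arbitrary):
echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/99-ip-forwarding.conf
sysctl --system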
The EC2 source/destination check must be disabled on the strongSwan instance.
The VPC route tables must be set to route traffic destined for the other subnet in the other region via the strongSwan EC2 instance, instead of via the default gateway.
The traffic selectors (leftsubnet and rightsubnet) in ipsec.conf must be set accordingly.
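A minimal sketch of such a conn section in ipsec.conf, with placeholder CIDRs standing in for the two VPC ranges:
conn vpc-a-to-vpc-b
    # local (left) VPC CIDR, not just the gateway's own address
    leftsubnet=10.0.0.0/16
    # remote (right) VPC CIDR in the other region
    rightsubnet=10.1.0.0/16
    auto=start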