GCP GCE Establishing Cross Project VM SSH connections without a gcloud command

I have two private-IP VMs in two different projects. Each project has its own VPC with Private Google Access enabled, and there is no VPC peering or VPN tunnel between the two.
I am attempting to establish an SSH connection between the two VMs and cannot use gcloud commands since I am in a chroot; I can only use ssh commands. Since these VMs aren't in the same VPC, SSHing to the IP address of the other instance does not work.
Could I somehow set up DNS records in Project A pointing to the VM hostnames/private IPs in Project B?

The easiest way to do this is by creating a Shared VPC network or a Cloud VPN. VPC peering won't work in this use-case because transitive peering is not supported.
https://cloud.google.com/vpc/docs/shared-vpc
https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview
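If you go the Shared VPC route, a rough gcloud sketch (run by an admin outside the chroot; project-a and project-b are placeholder project IDs, with project-a acting as the host) might look like this:

gcloud compute shared-vpc enable project-a
gcloud compute shared-vpc associated-projects add project-b \
    --host-project project-a

Once both VMs sit in subnets of the same shared network and the firewall rules allow it, a plain ssh USER@PRIVATE_IP from inside the chroot should reach the other VM.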

Related

Access Cloud SQL instance in separate project

I'm looking for a solution (maybe this isn't the best way) to get an app running on one of our GKE clusters in Project-A to access a Cloud SQL instance in Project-B over its internal IP, ideally via the Cloud SQL Proxy. Some more info:
We have VPC peering between Project-A and Project-B; traffic between the two VPCs definitely flows fine
We have the Cloud SQL Proxy running in the GKE cluster in Project-A, with the SQL instance in Project-B defined
The Cloud SQL instance only has an internal private IP
Pods from the GKE cluster in Project-B can access Cloud SQL in the same project (Project-B), so I know the internal connectivity is definitely there
Only when we briefly add a public IP to the Cloud SQL instance in Project-B does the connection work from Project-A via the Cloud SQL Proxy
When we try from Project-A to Project-B, we get connection timeouts.
I understand that when creating a Cloud SQL instance with an internal IP, a separate VPC peering connection to servicenetworking.googleapis.com is created from the VPC in that same project.
My thought here, coming from a networking background, is that there is no IP route in Project-A telling pod traffic to go over the VPC peering connection between the two projects when it wants to reach the private IP of the Cloud SQL instance.
But I wondered if anyone else has tried the same thing.
I've found in the documentation that transitive peering is not supported. I haven't tried it myself, but it seems the recommended way is to use a Shared VPC for accessing Cloud SQL from multiple projects.
In this section:
https://cloud.google.com/sql/docs/mysql/private-ip#quick-reference
Transitive peering
Only directly peered networks can communicate. Transitive peering is not supported. In other words, if VPC network N1 is peered with N2 and N3, but N2 and N3 are not directly connected, VPC network N2 cannot communicate with VPC network N3 over VPC Network Peering.
Clients in one project can connect to Cloud SQL instances in multiple projects using Shared VPC networks.
You can use the following guide to set up a Shared VPC between your projects; a rough gcloud sketch of these steps follows the list. In summary, it involves the following steps:
Set the project that hosts your Cloud SQL instance as the host project, since it is the one sharing the resources, which in this case include your Cloud SQL instance.
Select the subnets you would like to share with other projects
Set the project where your GKE cluster is hosted as the service project. This service project can then be attached to the host project set up previously.
Attach the service project to the host project and set up the appropriate VPC administrator roles so that users from the service project are allowed access to the shared resources.
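As a rough gcloud sketch of these steps (all names here are placeholders: sql-host-project hosts the Cloud SQL instance, gke-service-project hosts the GKE cluster, and gke-subnet is the subnet being shared):

gcloud compute shared-vpc enable sql-host-project
gcloud compute shared-vpc associated-projects add gke-service-project \
    --host-project sql-host-project
gcloud compute networks subnets add-iam-policy-binding gke-subnet \
    --region REGION \
    --project sql-host-project \
    --member "serviceAccount:GKE_SERVICE_ACCOUNT" \
    --role "roles/compute.networkUser"

The last command grants the service project's GKE service account permission to use the shared subnet; the exact roles and members you need depend on your setup.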
As mentioned in another answer, VPC peering is not transitive. So even though there's a VPC peering between Project A and Project B, that does not mean Project A can communicate with the private-IP Cloud SQL instance (which is deployed inside another VPC that is itself peered with Project B).
As a workaround, you can deploy a SOCKS5 proxy in Project B and have Project A connect to it using the Cloud SQL Proxy.
ALL_PROXY=socks5://localhost:8000 cloud_sql_proxy \
-instances=$INSTANCE_CONNECTION_NAME=tcp:5432
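The SOCKS5 proxy itself can be provided in several ways; one minimal sketch (assuming you have some VM in Project B, reachable from Project A and able to reach the Cloud SQL private IP, here referred to as PROXY_VM) is an SSH dynamic port forward:

ssh -f -N -D 8000 USER@PROXY_VM

and then run the cloud_sql_proxy command above so it tunnels its traffic through socks5://localhost:8000.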

NAT Gateways - how do you go about SSH'ing into the private EC2?

When you set up an EC2 instance in a private subnet to access the internet through a NAT gateway (with all the necessary routing and association through route table), how do you go about SSH'ing into the private EC2?
For example, by placing an EC2 instance in the NAT gateway's public subnet and making a connection through that public EC2 instance to the private one?
A NAT gateway is for outgoing traffic only. If you have to access the private EC2 instance, then you need a bastion host in a public subnet in the same VPC, or a VPN connection, or AWS Systems Manager.
There are three options that are commonly used:
Use a bastion host in a public subnet. First you SSH to the bastion, and then SSH from the bastion to the private EC2 instance. This usually requires copying the private SSH key to the bastion so that you can use it there to SSH into the private subnet.
Use SSM Session Manager. This would probably be the easiest option to set up, since you are already using NAT; it just requires an appropriate instance role (see the example after this list).
Use a VPN. Probably the most complex solution, but it is used nevertheless.
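For the Session Manager option, once the instance runs the SSM Agent and has an instance profile with the AmazonSSMManagedInstanceCore policy (or equivalent) attached, starting a shell from your workstation looks roughly like this (the instance ID is a placeholder, and the Session Manager plugin for the AWS CLI must be installed locally):

aws ssm start-session --target i-0123456789abcdef0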
As the instance is in a private subnet you will need to use a method to connect to this privately. There are many options to choose from, they will vary in cost and complexity so ensure you read each one first.
Site-to-site VPN - Using this method a managed VPN is added to your VPC and connected to your on-premises network via a hardware VPN configuration. Your security groups will need to allow your on-premises CIDR range(s) for the connection.
Client VPN - Using either the AWS solution or a third party from the marketplace (such as OpenVPN), you can establish a connection using either a local program or HTTPS in your browser.
SSM Session Manager - Access your EC2 instance via the AWS console or the CLI, presented as a bash interface, without using SSH to authenticate; IAM is used instead to control permissions and access.
Bastion host - A public instance that you can connect to as an intermediary, either SSHing to it before accessing your host or using it as a proxy for your commands (see the sketch after this list).
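For the bastion host option, a minimal sketch of jumping through the bastion in a single command (ec2-user, the bastion's public IP and the private instance's IP are placeholders; -J requires a reasonably recent OpenSSH client):

ssh -J ec2-user@BASTION_PUBLIC_IP ec2-user@PRIVATE_INSTANCE_IP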

Compute Engine in VPC can't connect to Internet & Cloud Storage after establishing Cloud VPN

Assume I have a custom VPC with the IP range 10.148.0.0/20.
This custom VPC has an allow-internal firewall rule so the services inside that IP range can communicate with each other.
As the system grew I needed to connect to an on-premises network using Classic Cloud VPN. The Cloud VPN has already been created (the on-premises side was configured by someone else) and the VPN tunnel is established (with green checkmarks).
I can also ping an on-premises IP right now (say 10.xxx.xxx.xxx, which is not a GCP internal/private IP but an on-premises private IP) from a Compute Engine instance created in the custom VPC network.
The problem is that all Compute Engine instances spawned in the custom VPC network can no longer reach the internet (for example, sudo apt update fails) or Google Cloud Storage (using gsutil), although they can still communicate over private IPs.
I also can't spawn a Dataproc cluster on that custom VPC (I guess because it can't connect to GCS, since Dataproc needs GCS for staging buckets).
Since I do not know much about networking and am relatively new to GCP, how can the instances I created inside the custom VPC connect to the internet?
After checking my custom VPC and Cloud VPN more in depth, I realized there was a misconfiguration when I established the Cloud VPN: I had chosen route-based as the routing option and entered 0.0.0.0/0 in Remote network IP ranges. I guess this routed all traffic to the VPN, as @John Hanley said.
I solved it by using policy-based routing and only adding the specific on-premises IP ranges in Remote network IP ranges.
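For reference, a rough sketch of what a policy-based Classic VPN tunnel with explicit traffic selectors can look like in gcloud (all names, addresses and the secret are placeholders; only the VPC range 10.148.0.0/20 is taken from the question):

gcloud compute vpn-tunnels create my-tunnel \
    --region REGION \
    --target-vpn-gateway my-vpn-gateway \
    --peer-address ON_PREM_GATEWAY_IP \
    --shared-secret MY_SECRET \
    --local-traffic-selector 10.148.0.0/20 \
    --remote-traffic-selector ON_PREM_RANGE

With the traffic selectors restricted like this (plus a route to the on-premises range via the tunnel), only that range goes through the VPN and everything else keeps using the default internet route.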
Thank you @John Hanley and @guillaume blaquiere for pointing this out.

Cannot connect to Cloud SQL from Cloud Run after enabling private IP and turning off public IP

I have a PostgreSQL Cloud SQL instance which I am connecting to via a Unix socket and the instance connection name from a Cloud Run container, as per the documentation. With a public IP, this connection works fine. I was looking to turn off the public IP and only have a private IP, so I would not be charged for the public IP going forward.
When I first created the Cloud SQL instance, I only enabled the public IP. A couple of days later I enabled the private IP. For the associated network for the private IP, I accepted the default, as the Cloud Run service is in the same project.
When I turn off the public IP, my application can no longer connect to the Cloud SQL instance. I get a connection refused error:
sqlalchemy.exc.InterfaceError: (pg8000.core.InterfaceError) ('communication error', ConnectionRefusedError(111, 'Connection refused'))
As stated above, I did follow the instructions on the Connecting to Cloud SQL from Cloud Run page:
https://cloud.google.com/sql/docs/postgres/connect-run
I even ran the gcloud command to update the existing deployed revision after turning off the public IP and leaving only the private IP available, but it made no difference.
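For context, the gcloud command referred to here is presumably along the lines of the following (the service name and connection name are placeholders); note that this only wires up the /cloudsql Unix socket and does not by itself provide private-IP connectivity:

gcloud run services update my-service \
    --add-cloudsql-instances PROJECT_ID:REGION:INSTANCE_NAME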
Is a public IP required for a connection from Cloud Run to Cloud SQL? I do not see that in the connection documentation page. Or is there something else I missed when trying to switch over to only having a private IP? Or do I need to create a new Cloud SQL instance without a public IP and go through the instructions for connecting Cloud Run via an instance name again?
Is a public IP required for a connection from Cloud Run to Cloud SQL? I do not see that in the connection documentation page.
On the Connecting to Cloud SQL from Cloud Run page, it says "Note: These instructions require your Cloud SQL instance to have a public IP address configured."
Private IP access is access from a Virtual Private Cloud (VPC). In order to access your instance through a VPC, the resource you are connecting from needs to be part of the VPC. Cloud Run doesn't currently support VPC access, so you'll need to have a public IP for now.
TL;DR: Open a case to the Google support
Your case is interesting because, by design, I think it's not yet supported.
In fact, when you create a Cloud SQL database with a private IP, a network peering is done between your VPC and the Cloud SQL VPC (or something equivalent).
In addition, today it's not possible to plug your Cloud Run instance into your VPC. With Cloud Functions and App Engine you have a serverless VPC connector, but not yet with Cloud Run (it's coming!).
The serverless VPC connector performs the same kind of thing as the Cloud SQL private IP, I mean a peering between your VPC and the Cloud Functions (or App Engine) VPC (or something equivalent).
And even if the serverless VPC connector were available on Cloud Run, it's not certain that it would work, because of network peering transitivity. In short, if you have a peering between VPC A -> VPC B and between VPC B -> VPC C, you can't reach VPC C from VPC A by performing a hop through VPC B. Replace A with the Cloud Run VPC, B with your project's VPC, and C with the Cloud SQL VPC.
Only directly peered networks can communicate. Transitive peering is not supported. In other words, if VPC network N1 is peered with N2 and N3, but N2 and N3 are not directly connected, VPC network N2 cannot communicate with VPC network N3 over VPC Network Peering.
I didn't check with App Engine or Cloud Functions, but this design shouldn't work.
But I'm not sure; that's why a case with Google support will get you a clear answer and maybe some input on the roadmap. Any valuable information from Google Support is welcome here!
I was also getting the following error when trying to connect to Postgres using the following command from Cloud Shell:
gcloud sql connect
it seems your client does not have ipv6 connectivity...
What I do is log in to one of the pods deployed on Google Kubernetes Engine using the following command:
kubectl exec --stdin --tty java-hello-world-7fdecb9894-smql4 -- /bin/bash
Then, the first time, I ran:
apt-get update
apt install postgresql-client
And now I can connect using:
psql -h postgres-private-ip -U username

SSH'ing into AWS EC2 Instance located in Private Subnet in a VPC

I've been going at this problem for a couple of hours and maybe it's not possible, maybe it is.
I have a VPC in AWS, with a couple of EC2 instances and Lambda functions.
As of right now, the Lambdas can invoke, SSH and so on to the EC2 server without a problem.
My Lambdas use a security group with only HTTP, HTTPS and SSH in "Outbound".
My EC2 default security group only accepts port 22 inbound (from my Lambda security group AND my office IP).
If I create an EC2 instance in my public subnet, both I and my Lambda functions can access it through SSH.
If I create it in my PRIVATE subnet, my Lambdas can SSH but I CAN'T...
Do I really have to have a NAT server in order to achieve this?
TL;DR: Only my office and my Lambdas should have access to my EC2 instances.
The 1st option to consider for SSH access to EC2 instances is EC2 Instance Connect which allows you to control access to your EC2 instances using IAM and provides access from either the AWS console or your regular command line SSH tools.
The 2nd option is AWS Systems Manager Session Manager for Shell Access to EC2 Instances. You basically run an SSH session in your browser and it can target all EC2 instances, regardless of public/private IP or subnet. EC2 instances have to be running an up to date version of the SSM Agent and must have been launched with an appropriate IAM role (including the key policies from AmazonEC2RoleForSSM). No need for a bastion host or firewall rules allowing inbound port 22.
The 3rd option to consider is AWS Systems Manager Run Command which allows you to run commands remotely on your EC2 instances. It's not interactive like SSH but if you simply want to run a sequence of scripts then it's very good. Again, the instance has to be running the SSM Agent and have an appropriate IAM policy, and this option avoids the need to tunnel through bastion hosts.
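For Run Command, a quick sketch of running a one-off command against an instance from the CLI (the instance ID is a placeholder) might be:

aws ssm send-command \
    --instance-ids i-0123456789abcdef0 \
    --document-name "AWS-RunShellScript" \
    --parameters 'commands=["uptime"]'

You can then fetch the output with aws ssm get-command-invocation.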
Finally, if you really must SSH from your office laptop to an EC2 instance in a private subnet, you can do so via a bastion host. You need a few things:
IGW and NAT in the VPC
bastion host with public IP in the VPC's public subnet
security group on the bastion allowing inbound SSH from your laptop
a default route from the private subnet to the NAT
security group on the private EC2 instance that allows inbound SSH from the bastion
Then you have to tunnel through the bastion host. See Securely Connect to Linux Instances Running in a Private Amazon VPC for more.
Create a Bastion host.
This would be a public EC2 instance in a public subnet with the same security group as your private EC2 instance.
Ensure that traffic within the security group is allowed. You can do this by creating an inbound rule for your security-group.
Now in Windows 10, you can run the following through your command prompt:
ssh -i your_private_key.pem ec2-user@private_ip -o "proxycommand ssh -W %h:%p -i your_private_key.pem ec2-user@public_ip"
Replace the following 3 things in the command posted above :
your_private_key
private_ip
public_ip
You can refer to this: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario2.html
You will have to use NAT Gateway to access anything in the Private Subnet.