Connecting to Cloud SQL with Private IP from GCE or GKE - google-cloud-platform

I'm trying to connect to a Postgres database (Cloud SQL) from a pod deployed in a Google Kubernetes Engine (GKE) cluster using the instance's private IP, but I only get connection timeout errors.
I set up the GKE cluster with the following:
gcloud beta container clusters create "gcp-cluster" --zone "europe-west1-b" --no-enable-basic-auth --cluster-version "1.13.6-gke.13" --machine-type "n1-standard-1" --image-type "COS" --disk-type "pd-ssd" --disk-size "20" --metadata disable-legacy-endpoints=true --scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append","https://www.googleapis.com/auth/sqlservice.admin","https://www.googleapis.com/auth/sqlservice" --num-nodes "2" --enable-stackdriver-kubernetes --enable-ip-alias --network "projects/XXX/global/networks/default" --subnetwork "projects/XXX/regions/europe-west1/subnetworks/default" --default-max-pods-per-node "110" --enable-autoscaling --min-nodes "2" --max-nodes "20" --addons HorizontalPodAutoscaling,HttpLoadBalancing --enable-autoupgrade --enable-autorepair --maintenance-window "19:00"
Then I deployed a WildFly pod pointing to the Postgres database's IP address (the DB was created in the same zone/region), but I only get connection timeouts. After enabling a public IP address with source 0.0.0.0/0, I can connect.
Any idea how to make this work with the private IP address?

Private IP means accessing Cloud SQL through a Virtual Private Cloud (VPC). You have to use a resource (in this case, GCE instance) that is also on that VPC to be able to reach it. See the environment requirements page of the Private IP docs.
Note for future readers: It's a really bad idea to whitelist 0.0.0.0/0 on a Public IP address. This essentially allows the entire internet to attempt to connect to your instance, and should not be left enabled for any extended period of time.
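As a quick sanity check (a sketch; `my-instance` is a placeholder instance name), you can confirm which VPC network the Cloud SQL instance's private IP is attached to and compare it with the network the cluster uses:

```shell
# Show the Cloud SQL instance's IP addresses and the VPC used for private services access
gcloud sql instances describe my-instance \
  --format="value(ipAddresses, settings.ipConfiguration.privateNetwork)"

# Show the network the GKE cluster's nodes are on; the two must match
gcloud container clusters describe gcp-cluster --zone europe-west1-b \
  --format="value(network)"
```

If the two commands report different networks, the private IP is unreachable from the pods no matter what firewall rules you add.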

Your GKE cluster is in europe-west1-b. Assuming that you use the default network, you must enable Private Google Access for the europe-west1 subnet. Click on the subnet to view its details and, if required, edit it to set Private Google Access to "On".
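The same setting can be flipped from the command line; a sketch, assuming the `default` subnet in europe-west1:

```shell
# Enable Private Google Access on the europe-west1 "default" subnet
gcloud compute networks subnets update default \
  --region=europe-west1 \
  --enable-private-ip-google-access
```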

After a couple of hours I got my connection working by enabling the Cloud SQL access scope for the VM instances.
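For reference, a hedged sketch of changing the scopes on an existing node VM (the instance name is a placeholder, and the VM must be stopped first; for GKE it is usually cleaner to create a new node pool with the right scopes instead):

```shell
# Stop the VM, grant the Cloud SQL scope on top of the defaults, then restart it
gcloud compute instances stop my-node --zone europe-west1-b
gcloud compute instances set-service-account my-node \
  --zone europe-west1-b \
  --scopes=default,sql-admin
gcloud compute instances start my-node --zone europe-west1-b
```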

Related

Network GKE cluster between VPC subnets

In that question, the author says that the GKE cluster is not available from other subnets in the VPC.
BUT that is exactly what I need to do. I've added details below; all suggestions welcome.
I created a VPC in Google Cloud with custom subnets. I have a subnet in us-east1 and another in us-east4. Then I created a VPC-native private GKE cluster in the same VPC, in the us-east4 subnet.
[added details]
GKE in us-east4
endpoint 10.181.15.2
control plane 10.181.15.0/28
pod address range 10.16.0.0/16
service address range 10.17.0.0/22
VPC subnet in us-east4
10.181.11.0/24
VPC subnet in us-east1
10.171.1.0/24
I added 10.171.1.0/24 as a Control Plane authorized network, and I added 10.171.1.0/24 to the automatically created firewall rule.
But I still can't use kubectl from the instance in the 10.171.1.0/24 subnet.
What I see when trying to use kubectl from a VM in us-east4 (10.181.11.7):
On this VM, I set the context with kubectl config use-context <correct gke context> and I have gcloud configured correctly. Then,
kubectl get pods correctly gives a list of pods in the GKE cluster.
From a VM in the us-east1 10.171.1.0/24 subnet, which is set up in the same way, kubectl get pods times out with an error that it's unable to reach the endpoint. The message is:
kubectl get pods
Unable to connect to the server: dial tcp 10.181.15.2:443: i/o timeout
This seems like a firewall problem, but I've been unable to find a solution despite the abundance of GKE documentation out there. It could be a routing problem, but I thought a VPC-native GKE cluster would take care of the routes automatically?
By default, the private endpoint for the control plane is accessible from clients in the same region as the cluster. If you want clients in the same VPC but located in different regions to access the control plane, you'll need to enable global access using the --enable-master-global-access option. You can do this when you create the cluster or you can update an existing cluster.
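A sketch of enabling it on an existing cluster (the cluster name and region here are placeholders):

```shell
# Allow clients in other regions of the same VPC to reach the private control plane
gcloud container clusters update my-private-cluster \
  --region us-east4 \
  --enable-master-global-access
```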

How can I get a DNS name for a GCE instance

I have a Google Compute Engine instance which is uniquely identified:
name: updateservice
zone: us-central1-a
project: myproject
Is there a way to access the instance via a DNS name? Otherwise I need to whitelist its IP in Cloud SQL every time, since it changes on reboot.
Compute Engine instances have a private DNS name within the VPC, but do not have a public DNS name. You must configure a DNS resource record for the instance at your DNS server if you want a public DNS name.
Otherwise I need to whitelist its IP every time in Cloud SQL since it changes on reboot.
There are two solutions for Cloud SQL:
Assign a static IP address to the Compute Engine instance. link
Deploy the Cloud SQL Auth Proxy on the Compute Engine instance. link
Method #2 is the recommended method because IP addresses do not need to be whitelisted and the connection is authenticated and encrypted.
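A hedged sketch of both options (the address name, instance connection name, and database credentials are all placeholders):

```shell
# Option 1: promote the VM's ephemeral external IP to a static address
gcloud compute addresses create updateservice-ip \
  --region us-central1 \
  --addresses <current-ephemeral-ip>

# Option 2: run the Cloud SQL Auth Proxy on the VM and connect via localhost,
# so no IP whitelisting is needed at all
./cloud_sql_proxy -instances=myproject:us-central1:my-sql-instance=tcp:5432 &
psql "host=127.0.0.1 port=5432 user=myuser dbname=mydb"
```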
If you SSH to the VM and run the command hostname -A, it will show you the VM's internal DNS name.
From an on-premises network, you can reach/ping the VM's internal DNS name by setting up Cloud VPN.

Connecting to a Cloud SQL instance from VM with public and private IPs - how to ensure the right network interface is chosen for the connection?

What I'm trying to set up:
Cloud SQL instance with private IP, Postgresql database
A VM with a public IP, but also a private IP on the same VPC network as the SQL instance (VM, SQL instance and VPC are all in the same region)
VM has a service account with sufficient Cloud SQL client/viewer permissions
Possibility to connect from VM to SQL instance.
What happens?
Any attempt to actually use the connection, from e.g. the psql client or db-migrate, simply hangs; for example psql --host 10.78.0.3 -U gcp-network-issue-demo-staging-db-user gcp-network-issue-demo-staging-database will not prompt for a password, it just sits there.
If I remove the VM's public IP address from the setup, it connects fine. However, I need a publicly accessible VM so other services can connect to it.
I assume the psql connection attempt goes through the wrong network interface or something (this may just be my ignorance about networking speaking). How can I get this working? What am I missing?
PS: this is basically same problem as Connecting to Google Cloud SQL instance on private IP from a VM with both private and public IPs fails but commenters there seem to want one Terraform-related and one connection-issue-related question.
Some screenshots:
VM IPs:
DB IPs:
Network config for VM:
Private IP config for DB instance:
This is the setup of the private network:
I don't understand why the private IP of the DB instance (10.78.0.3) is not an IP from the range of the private network (10.2.0.0/24, right?). Is that my problem?
To answer your question:
I don't understand why the private IP of the DB instance (10.78.0.3) is not an IP from the range of the private network (10.2.0.0/24, right?)
The Cloud SQL instance is assigned an IP address from the allocated range. When you set up private services access, a VPC peering is created between your VPC (gcp-network-issue-demo-staging-network) and the service producer's VPC network, which uses the allocated range 10.78.0.0/16.
Also, looking at your VM network config, I see that the VM has two NICs in two different VPCs (default and gcp-network-issue-demo-staging-network). In your case, you only need one NIC.
As a next step, make sure that your VM is using only the VPC network that you used to create the private connection. Once that is done, you should be able to reach the Cloud SQL instance's IP with the command below (port 5432, since this is Postgres):
telnet 10.78.0.3 5432

problems connecting to AWS DocumentDB

I created a cluster and an instance of DocumentDB in Amazon. When I try to connect via SSH from my local machine (macOS), it displays the following message:
When I try for the MongoDB Compass Community:
mongodb://Mobify:<My-Password>@docdb-2019-04-07-23-28-45.cluster-cmffegva7sne.us-east-2.docdb.amazonaws.com:27017/?ssl=true&ssl_ca_certs=rds-combined-ca-bundle.pem&replicaSet=rs0
It loads many minutes and in the end it has this result:
After solving this problem, I would like to know if it is possible to connect a DocumentDB cluster to an instance in another availability zone... I have my DocumentDB in Ohio and an EC2 instance in São Paulo. Is that possible?
Amazon DocumentDB clusters are deployed in a VPC to provide strong network isolation from the Internet. To connect to your cluster from outside of the VPC, please see the following: https://docs.aws.amazon.com/documentdb/latest/developerguide/connect-from-outside-a-vpc.html
AWS document DB is hosted on a VPC (virtual private cloud) which has its own specific subnets and security groups; basically, anything that resides in a VPC is not publicly accessible.
DocumentDB is deployed in a VPC. In order to access it from outside, you need to create an EC2 instance in the same VPC or use AWS Cloud9.
Let's access it from the EC2 instance and access AWS document DB using SSH tunneling.
Create an EC2 instance (preferably Ubuntu) of any configuration and select the same VPC in which your DocumentDB cluster is hosted.
After the EC2 instance is fully initialized, start an SSH tunnel that binds local port 27017 to the DocumentDB cluster's port 27017:
ssh -i "<ec2-private-key>" -L 27017:docdb-2019-04-07-23-28-45.cluster-cmffegva7sne.us-east-2.docdb.amazonaws.com:27017 ubuntu@<ec2-host> -N
Now your localhost is tunneled to the EC2 instance on port 27017. Connect with mongosh or mongo, enter your cluster password, and you will be logged in and able to execute queries.
mongosh --sslAllowInvalidHostnames --ssl --sslCAFile rds-combined-ca-bundle.pem --username Mobify --password
Note: the ssl options are deprecated; use their tls equivalents (replace ssl with tls in the command above).

Connect to Cloud SQL for PostgreSQL from Cloud Composer

My question is about configuring Google Cloud Composer to reach Cloud SQL using the same network configuration within the same Google Cloud project.
Cloud SQL configured with Private IP associated to a Default Network.
Cloud SQL config
Cloud Composer configured Network ID = Default
Cloud Composer config
Executing a DAG that uses a PostgresOperator configured with the private IP and the default port (5432), we always get the same connection error:
ERROR - could not connect to server: Connection timed out
Is the server running on host "<private_ip>" and accepting
TCP/IP connections on port 5432?
We expect the connection should be established because we have configured the same network and we are using Private IP to reach the Cloud SQL server from Composer.
According to Introducing private networking connection for Cloud SQL, these are still two separate network segments (see the visual scheme there). Therefore VPC network peering is required in order to get a routable private IP. See the code lab, which also covers this scenario.
The request from Composer comes from the pod's IP address which is non-routable outside the VPC. Therefore it has to be masqueraded to the IP of the interface of the node which is in 10.0.0.0/8 (when using the default network).
If you configured your CloudSQL instance to use an auto-generated IP range when setting the Private IP connection, it is likely the IP is also in 10.0.0.0/8, but it is not inside the same VPC.
If the connection is to 10.0.0.0/8 and the target is not in the VPC, it can't be routed. As a workaround you can allocate a custom address range, for example 192.168.0.0/16:
gcloud beta compute addresses create [RESERVED_RANGE_NAME] \
--global \
--purpose=VPC_PEERING \
--addresses=192.168.0.0 \
--prefix-length=16 \
--description=[DESCRIPTION] \
--network=[VPC_NETWORK]
And configure your Cloud SQL instance's private IP to be within that range.
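After reserving the range, the private services access peering itself has to be created (or updated) to use it; a sketch using the same placeholders as above:

```shell
# Peer the VPC with the service producer network using the reserved range
gcloud services vpc-peerings connect \
  --service=servicenetworking.googleapis.com \
  --ranges=[RESERVED_RANGE_NAME] \
  --network=[VPC_NETWORK]
```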
Cloud SQL Proxy is a great way to go, and a similar question, if not the same, has been answered with details on getting that set up.
To address the Internal IP question, see the Google docs:
You can use the Cloud SQL Proxy to connect to an instance that is also configured to use private IP. The proxy can connect using either the private IP address or a public IP address. If you use the Cloud SQL Proxy to connect to an instance that has both public and private IP addresses assigned, the proxy uses the public IP address by default.
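That default can be overridden: the (legacy) proxy accepts an -ip_address_types flag, so you can force it to use the private address. A sketch; the instance connection name is a placeholder:

```shell
# Force the Cloud SQL Proxy to connect over the instance's private IP
./cloud_sql_proxy \
  -instances=my-project:europe-west1:my-instance=tcp:5432 \
  -ip_address_types=PRIVATE
```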