GCE cannot access private SQL - google-cloud-platform

I tried to connect from a GCE instance to a Cloud SQL instance, which is private and uses a private services access connection.
Following the private services access docs, I set up the VPC and firewall, and created the SQL and GCE instances in the same VPC.
https://cloud.google.com/vpc/docs/configure-private-services-access
But from the GCE instance, neither a ping to the SQL instance nor a SQL connection works.
1. Create the VPC
gcloud compute networks create test-custom-vpc --subnet-mode=custom --bgp-routing-mode=global --mtu=1460
2. Create the subnet
gcloud compute networks subnets create vpc-sb-1 --network=test-custom-vpc --range=10.100.0.0/16 --region=asia-northeast1
3. Create an IP range for the private services connection
gcloud compute addresses create vpc-peering-range --global --purpose=VPC_PEERING --addresses=192.168.0.0 --prefix-length=16 --description=description --network=test-custom-vpc
4. Create the VPC peering for SQL
gcloud services vpc-peerings connect --service=servicenetworking.googleapis.com --ranges=vpc-peering-range --network=test-custom-vpc --project=my-project
5. Create the MySQL instance in the VPC
gcloud --project=my-project beta sql instances create vpc-sql-1 --network=test-custom-vpc --no-assign-ip
6. Create a GCE instance in the VPC
gcloud compute instances create vm-in-sb-1 --subnet vpc-sb-1 --zone asia-northeast1-b
7. Create a firewall rule (for now, allow all IPs/protocols)
gcloud compute firewall-rules create allow-all --network test-custom-vpc --direction ingress --action allow --rules all
Then I SSH into the VM and check the connection between the VM and the SQL instance.
gcloud sql instances list
NAME DATABASE_VERSION LOCATION TIER PRIMARY_ADDRESS PRIVATE_ADDRESS STATUS
vpc-sql-1 MYSQL_5_7 us-central1-b db-n1-standard-1 - 192.168.0.3 RUNNABLE
-> SQL private IP is 192.168.0.3
ssh login
gcloud beta compute ssh --zone "asia-northeast1-b" "vm-in-sb-1" --project "my-project"
check the connection
ping 192.168.0.3
-> no response
psql -h 192.168.0.3 -U postgres
psql: could not connect to server: Connection timed out
        Is the server running on host "192.168.0.3" and accepting
        TCP/IP connections on port 5432?
I have no idea which part of the configuration is wrong.

I replicated your case and all of the configuration is working well, but please note that the command below in step #5 creates a Cloud SQL instance for MySQL, not for Postgres:
gcloud --project=my-project beta sql instances create vpc-sql-1 --network=test-custom-vpc --no-assign-ip
If you want to create a Cloud SQL instance for Postgres, use the command below:
gcloud --project=my-project beta sql instances create vpc-sql-1 --database-version=POSTGRES_12 --cpu=2 --memory=7680MB --network=test-custom-vpc --no-assign-ip
The problem is that you are connecting to Cloud SQL for MySQL using the Postgres database client. To connect properly, use the following examples:
For MySQL:
mysql --host=192.168.0.3 --user=root --password
For Postgres:
psql -h 192.168.0.3 -U postgres
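As a side note, Cloud SQL instances generally do not respond to ICMP, so the failed ping above is not by itself evidence of a broken network path. A port-level reachability check from the VM is more informative; a sketch, assuming the private IP 192.168.0.3 from above (MySQL listens on 3306, Postgres on 5432):

```shell
# A TCP-level probe tells you whether the route and firewall work,
# independently of which database client is installed.
nc -vz -w 5 192.168.0.3 3306

# Or, without netcat, using bash's built-in /dev/tcp:
timeout 5 bash -c 'cat < /dev/null > /dev/tcp/192.168.0.3/3306' \
  && echo "port 3306 reachable"
```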

Related

unable to SSH into a VM in GCP

I created a VM and assigned an external IP to it in the us-west1 region. I also assigned a subnet to the VM.
I created a firewall rule with these options: direction - ingress, source filter - 0.0.0.0/0, protocols and ports - tcp:22, targets - all instances in the network.
When I try to SSH into the VM using gcloud compute ssh --zone "xxxx" "xxxxxxxx" --project "xxxxxx", it does not let me connect and I get the error below:
ssh: connect to host xx.xx.xx.xx port 22: Operation timed out
Recommendation: To check for possible causes of SSH connectivity issues and get
recommendations, rerun the ssh command with the --troubleshoot option.
gcloud compute ssh xxxxxxx --project=xxxxxx --zone=xxxxxx --troubleshoot
Or, to investigate an IAP tunneling issue:
gcloud compute ssh xxxxxxx --project=xxxxxxx --zone=xxxxx --troubleshoot --tunnel-through-iap
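A missing or mis-scoped ingress rule is a common cause of this timeout. A sketch of the two usual fixes, assuming the default network (rule names are illustrative; 35.235.240.0/20 is Google's documented IAP source range):

```shell
# Allow SSH from anywhere on port 22 (narrow the source range in production)
gcloud compute firewall-rules create allow-ssh \
  --network=default --direction=INGRESS --action=ALLOW \
  --rules=tcp:22 --source-ranges=0.0.0.0/0

# If tunneling through IAP instead, only the IAP range needs ingress
gcloud compute firewall-rules create allow-iap-ssh \
  --network=default --direction=INGRESS --action=ALLOW \
  --rules=tcp:22 --source-ranges=35.235.240.0/20
```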

Setting up Datafusion instance to connect with secured Dataproc cluster

We have a secured Dataproc cluster, and we are able to successfully SSH into it with individual user IDs using the command:
gcloud compute ssh cluster-name --tunnel-through-iap
But when we create a profile, attach it to a Data Fusion instance, and configure the pipeline to run, it throws a connection timeout:
java.io.IOException: com.jcraft.jsch.JSchException: java.net.ConnectException: Connection timed out (Connection timed out)
at io.cdap.cdap.common.ssh.DefaultSSHSession.<init>(DefaultSSHSession.java:88) ~[na:na]
at io.cdap.cdap.internal.app.runtime.distributed.remote.RemoteExecutionTwillPreparer.lambda$start$0(RemoteExecutionTwillPreparer.java:436) ~[na:na]
How can we configure a Data Fusion pipeline to run against a secured Dataproc cluster? Kindly let me know.
Some information to give more context on this question:
Given the option --tunnel-through-iap, most probably you are using tunneling with SSH, and cluster-name is the name of the instance in the Dataproc cluster you want to connect to. The linked page also documents the option --internal-ip, which connects to an instance only through its internal IP.
The Data Fusion documentation explains the procedure for creating private IP addresses to limit access to your instance.
Hence, a private IP instance and the option --internal-ip could be a good combination for connecting to your instance (keeping the cluster secured), once the firewall rules are correctly configured.
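As a sketch of that combination (the zone is an assumption, and `-m` is the usual suffix of a Dataproc master node; adjust both to your cluster):

```shell
# SSH to the Dataproc master node over its internal IP only,
# without requiring an external IP on the instance
gcloud compute ssh cluster-name-m --zone=us-central1-a --internal-ip
```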

Connecting to Cloud SQL with Private IP from GCE or GKE

I'm trying to connect to a Postgres database (Cloud SQL) from a pod deployed in a GKE cluster with a private IP, but I only get connection timeout errors.
I set up the GKE cluster with the following:
gcloud beta container clusters create "gcp-cluster" --zone "europe-west1-b" --no-enable-basic-auth --cluster-version "1.13.6-gke.13" --machine-type "n1-standard-1" --image-type "COS" --disk-type "pd-ssd" --disk-size "20" --metadata disable-legacy-endpoints=true --scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append","https://www.googleapis.com/auth/sqlservice.admin","https://www.googleapis.com/auth/sqlservice" --num-nodes "2" --enable-stackdriver-kubernetes --enable-ip-alias --network "projects/XXX/global/networks/default" --subnetwork "projects/XXX/regions/europe-west1/subnetworks/default" --default-max-pods-per-node "110" --enable-autoscaling --min-nodes "2" --max-nodes "20" --addons HorizontalPodAutoscaling,HttpLoadBalancing --enable-autoupgrade --enable-autorepair --maintenance-window "19:00"
Then I deployed a WildFly image pointing to the Postgres database IP address (the DB was created in the same zone/region), but I only get connection timeouts. After enabling a public IP address with source 0.0.0.0/0, I can get a connection.
Any ideas for making this work with the private IP address?
Private IP means accessing Cloud SQL through a Virtual Private Cloud (VPC). You have to use a resource (in this case, a GCE instance) that is also on that VPC to be able to reach it. See the environment requirements page of the private IP docs.
Note for future readers: It's a really bad idea to whitelist 0.0.0.0/0 on a Public IP address. This essentially allows the entire internet to attempt to connect to your instance, and should not be left enabled for any extended period of time.
Your GKE cluster is in europe-west1-b. Assuming that you use the default network, you must enable Private Google Access for the europe-west1 subnet. Click on the subnet to view its details and, if required, edit it to set Private Google Access to "On".
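If you prefer the CLI, the same subnet setting can be enabled with the following (the default subnet in europe-west1 is assumed, per the cluster details above):

```shell
# Turn on Private Google Access for the subnet the cluster nodes use
gcloud compute networks subnets update default \
  --region=europe-west1 \
  --enable-private-ip-google-access
```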
After a couple of hours I got my connection working by enabling the Cloud SQL access scope for the VM instances.

Unable to connect via SSH on Google Cloud Platform

We are unable to connect to the VM via SSH on Google Cloud Platform.
We are trying with the help of the 'SSH' button available in the browser,
but the following message is received:
We are unable to connect to the VM on port 22.
We have tried stopping and starting the VM, but it did not help.
You need to create a firewall rule that enables SSH access on port 22 for your VMs. It is better to make the target a network tag instead of enabling SSH access for all of the machines on your VPC network.
You can use the CLI to perform this operation, using the default VPC:
gcloud compute firewall-rules create <rule-name> --allow tcp:22 --network "default" --source-ranges "<source-range>"
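For example, following the network-tag recommendation above (the rule name, tag, VM name, zone, and source range are all illustrative):

```shell
# Allow SSH only to instances carrying the "ssh-enabled" tag
gcloud compute firewall-rules create allow-ssh \
  --network=default --allow=tcp:22 \
  --source-ranges=203.0.113.0/24 \
  --target-tags=ssh-enabled

# Attach the tag to the VM in question
gcloud compute instances add-tags my-vm --zone=us-west1-a --tags=ssh-enabled
```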

problems connecting to AWS DocumentDB

I created a cluster and an instance of DocumentDB in Amazon. When I try to connect from my local shell (macOS), it displays the following message:
When I try with MongoDB Compass Community:
mongodb://Mobify:<My-Password>@docdb-2019-04-07-23-28-45.cluster-cmffegva7sne.us-east-2.docdb.amazonaws.com:27017/?ssl=true&ssl_ca_certs=rds-combined-ca-bundle.pem&replicaSet=rs0
It loads for many minutes and in the end I get this result:
After solving this problem, I would like to know whether it is possible to connect a DocumentDB cluster to an instance in another availability zone ... I have my DocumentDB in Ohio and an EC2 in São Paulo ... is that possible?
Amazon DocumentDB clusters are deployed in a VPC to provide strong network isolation from the Internet. To connect to your cluster from outside of the VPC, please see the following: https://docs.aws.amazon.com/documentdb/latest/developerguide/connect-from-outside-a-vpc.html
AWS DocumentDB is hosted in a VPC (virtual private cloud), which has its own specific subnets and security groups; basically, anything that resides in a VPC is not publicly accessible.
DocumentDB is deployed in a VPC. In order to access it, you need to create an EC2 instance or use AWS Cloud9.
Let's access it from an EC2 instance using SSH tunneling.
Create an EC2 instance (preferably Ubuntu) of any configuration and select the same VPC in which your DocumentDB cluster is hosted.
After the EC2 instance is fully initialized, start an SSH tunnel binding local port 27017 to port 27017 on the DocumentDB cluster host:
ssh -i "<ec2-private-key>" -L 27017:docdb-2019-04-07-23-28-45.cluster-cmffegva7sne.us-east-2.docdb.amazonaws.com:27017 ubuntu@<ec2-host> -N
Now your localhost is tunneled to the cluster via EC2 on port 27017. Connect with mongosh or mongo, enter your cluster password, and you can execute queries:
mongosh --sslAllowInvalidHostnames --ssl --sslCAFile rds-combined-ca-bundle.pem --username Mobify --password
Note: the --ssl flags are deprecated; use the tls equivalents by replacing "ssl" with "tls" in the command above.
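Assuming the tunnel from the previous step is running, the TLS-flavoured equivalent pointed at the tunneled local port might look like this (flags per current mongosh options; --tlsAllowInvalidHostnames is needed because the certificate names the cluster host, not localhost):

```shell
mongosh --tls --tlsAllowInvalidHostnames \
  --tlsCAFile rds-combined-ca-bundle.pem \
  --host 127.0.0.1 --port 27017 \
  --username Mobify --password
```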