I have an architectural issue with Cloud SQL.
We have an API that is running in a GKE cluster in a network A and a cloud SQL instance in a network B. The current network config doesn't allow peering between these 2 networks. Is there any possibility to connect the API to the instance.
As Ferregina suggested:
The bastion hosts provide secure access to Linux instances located in the private and public subnets of your virtual private cloud (VPC). The solution sets up a Multi-AZ environment and deploys Linux bastion host instances into the public subnets.
As mentioned in the document:
To connect to a Cloud SQL instance using private IP, the Cloud SQL Auth proxy must be on a resource with access to the same VPC network as the instance.
Refer to this link for more information.
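For illustration, a minimal sketch of that pattern, assuming a small bastion VM attached to network B and the v2 Cloud SQL Auth proxy (the instance connection name, port and addresses below are placeholders):
# On the bastion VM in network B, expose the instance over its private IP:
./cloud-sql-proxy --private-ip --address 0.0.0.0 --port 5432 my-project:my-region:my-instance
# The API in the GKE cluster (network A) then connects to <bastion-address>:5432,
# provided it can reach the bastion at all (public IP, SSH tunnel, VPN, etc.);
# binding to 0.0.0.0 should of course be restricted with firewall rules.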
We want to move our AWS RDS database to GCP CloudSQL. We want to do this without downtime. So our approach was to set up a HA VPN tunnel and use Data Migration Service to sync everything to CloudSQL.
The RDS database is in a private subnet on the AWS side. I've successfully set up a HA VPN tunnel between this AWS private subnet and a private subnet in our GCP project.
I'm able to verify that this works because I can do the following things (roughly sketched after this list):
ping from an instance in GCP in the private subnet to an instance in AWS in that private subnet
ping from an instance in AWS in the private subnet to the instance in GCP
After installing MySQL on the GCP instance, I'm able to connect and query the RDS database
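For reference, the checks from the GCP side were roughly the following (the IP and hostname here are placeholders):
ping 172.31.10.5                                                    # private IP of the AWS instance
mysql -h my-rds.abc123xyz.eu-west-1.rds.amazonaws.com -u admin -p   # query the RDS database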
I'm struggling with setting up the Data Migration Service in GCP to sync the data from the RDS instance. I've chosen to give the Cloud SQL instance a private IP, not a public one. As the connectivity method, I select VPC peering and choose the VPC that contains the GCP instance from which I'm able to contact the RDS instance.
I understand that Cloud SQL is created in a project peered with my GCP project, and that the Cloud SQL instance resides in a subnet of this new project, so there is no route from that subnet to my private subnet. However, I see that the peering is created automatically. On this peering connection I checked the option to import and export custom routes, but I still cannot reach RDS from the Cloud SQL instance.
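(For reference, the console option mentioned above corresponds roughly to this gcloud command; the peering name shown is the usual servicenetworking default and the network name is a placeholder:)
# export my VPC's custom (VPN) routes to the Cloud SQL side, and import its routes back
gcloud compute networks peerings update servicenetworking-googleapis-com \
    --network=my-vpc --import-custom-routes --export-custom-routes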
I've got routes in GCP for the private subnet IP range of AWS, with the next hop the VPN tunnels.
I'm not sure what I need to do to connect CloudSQL to RDS on this point.
I have an AWS account with an EC2 instance in it that I am trying to connect to a Cloud SQL instance (MySQL 5.6) inside Google Cloud Platform.
I have successfully set up a VPN between AWS and GCP and can echo a message over nc between an EC2 instance on AWS and a VM on GCP.
As GCP-managed DBs are not placed inside a VPC of my choosing, I followed this guide to give the DB a private IP and then peer it with my Google VPC. I tested that this works by accessing the DB via pymysql from a VM in GCP using the private IP of the DB.
However, my issue comes from connecting the EC2 instance inside AWS to the Cloud SQL DB in the same way. I have followed this guide to allow the use of the DB's private IP from an external source, but I'm stuck on how to set up the AWS routing to the peered network the DB sits in.
The problem has been sorted!
In the Advertised routes settings of my Cloud Router, I had misunderstood the function of "Advertise all subnets visible to the Cloud Router (Default)".
I needed to instead choose "Create custom routes" and then the sub-option "Advertise all subnets visible to the Cloud Router".
This then allowed me to add the Cloud SQL subnet to my router so that its IP block propagates over to AWS.
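For anyone scripting this instead of using the console, the equivalent gcloud sketch might look like this (the router name, region and Cloud SQL peering range are placeholders):
gcloud compute routers update my-cloud-router --region=us-east1 \
    --advertisement-mode=custom \
    --set-advertisement-groups=all_subnets \
    --set-advertisement-ranges=10.122.0.0/20    # the Cloud SQL peering range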
I have a PostgreSQL Cloud SQL instance which I am connecting to via a UNIX socket and the instance name from a Cloud Run container, as per the documentation. With a public IP, this connection works fine. I was looking to turn off the public IP and only have a private IP, so I would not be charged for the public IP going forward.
When I first created the Cloud SQL instance, I only enabled the public IP. A couple of days later I enabled the private IP. For the associated network for the private IP, I accepted the default, as the Cloud Run instance is in the same project.
When I turn off the public IP, my application can no longer connect to the Cloud SQL instance. I get a connection refused error:
sqlalchemy.exc.InterfaceError: (pg8000.core.InterfaceError) ('communication error', ConnectionRefusedError(111, 'Connection refused'))
As stated above, I did follow the instructions on the Connecting to Cloud SQL from Cloud Run page:
https://cloud.google.com/sql/docs/postgres/connect-run
I even ran the gcloud command to update the existing deployed revision after turning off the public IP and only having the private IP available, but it made no difference.
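(That update command looks roughly like the following; the service name, region and instance connection name here are placeholders:)
gcloud run services update my-service --region=us-central1 \
    --add-cloudsql-instances=my-project:us-central1:my-postgres-instance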
Is a public IP required for a connection from Cloud Run to Cloud SQL? I do not see that in the connection documentation page. Or is there something else I missed when trying to switch over to only having a private IP? Or do I need to create a new Cloud SQL instance without a public IP and go through the instructions for connecting Cloud Run via an instance name again?
Is a public IP required for a connection from Cloud Run to Cloud SQL? I do not see that in the connection documentation page.
On the Connecting to Cloud SQL from Cloud Run page, it says "Note: These instructions require your Cloud SQL instance to have a public IP address configured."
Private IP access is access from a Virtual Private Cloud (VPC). In order to access your instance through a VPC, the resource you are connecting from needs to be a part of the VPC. Cloud Run doesn't currently support VPC access, so you'll need to use a public IP for now.
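If the public IP has already been removed, a minimal sketch of turning it back on (the instance name is a placeholder):
gcloud sql instances patch my-postgres-instance --assign-ip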
TL;DR: Open a case with Google support.
Your case is interesting because, by design, I think it's not yet supported.
In fact, when you create a Cloud SQL database with a private IP, a network peering is created between your VPC and the Cloud SQL VPC (or something equivalent).
In addition, today it's not possible to plug your Cloud Run instance into your VPC. With Cloud Functions and App Engine you have a serverless VPC connector, but not yet with Cloud Run (it's coming!).
The serverless VPC connector performs the same thing as the Cloud SQL private IP setup, I mean a peering between your VPC and the Cloud Functions (or App Engine) VPC (or something equivalent).
And even when the serverless VPC connector is available on Cloud Run, it's not certain that it will work, because of network peering transitivity. In short, if you have a peering between VPC A -> VPC B and between VPC B -> VPC C, you can't reach VPC C from VPC A by performing a hop through VPC B. Replace A with the Cloud Run VPC, B with your project's VPC, and C with the Cloud SQL VPC.
Only directly peered networks can communicate. Transitive peering is not supported. In other words, if VPC network N1 is peered with N2 and N3, but N2 and N3 are not directly connected, VPC network N2 cannot communicate with VPC network N3 over VPC Network Peering.
I didn't check with App Engine or Cloud Functions, but because of this, the same design shouldn't work there either.
But I'm not sure; that's why a case with Google support will get you a clear answer and maybe some input on the roadmap. Any valuable information from Google Support is welcome here!
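One quick way to see which VPCs are directly peered with yours (and therefore reachable without relying on transitivity), assuming gcloud access to the project (the network name is a placeholder):
gcloud compute networks peerings list --network=my-vpc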
I was also getting the following error when trying to connect to Postgres using the following command from Cloud Shell:
gcloud sql connect
it seems your client does not have ipv6 connectivity...
What I do is log in to one of the pods deployed on Google Kubernetes Engine, using the following command:
kubectl exec --stdin --tty java-hello-world-7fdecb9894-smql4 -- /bin/bash
Then, the first time, I ran:
apt-get update
apt install postgresql-client
And now I can connect using:
psql -h postgres-private-ip -U username
I am trying to connect a VPC with GKE to a Cloud SQL database.
I have specified a VPC with the following details:
IP range: 10.240.0.0/24
Gateway: 10.240.0.1
I see that all my GKE services are in 10.39.xxx.xx
NAME CLUSTER_IP
service/kubernetes 10.39.240.1 ....
service/api 10.39.xxx.xx
service/web 10.39.xxx.xx
I don't actually understand the connection with the VPC here. I want to have the GKE cluster able to communicate with a Cloud SQL database without exposing it over the public internet.
I have a Cloud SQL db on public IP, say, 36.241.123.123 with a private IP equal to 10.7.224.3.
In SQL > Connections I check the private IP box and, given the choice between default and dev-vpc (which is the name of my VPC), I select dev-vpc.
According to https://cloud.google.com/sql/docs/mysql/configure-private-ip I should be done now, but I am unable to connect to the Cloud SQL from my GKE cluster.
I do see the following message when selecting the private IP.
Private IP connectivity requires additional APIs and permissions. You may need to contact your organisation's administrator for help enabling or using this feature. Currently, Private IP cannot be disabled once it has been enabled.
I also have a VPC peering connection
Peering connection details
imported routes
10.7.224.0/24 [ the Cloud SQL internal IP is in this ]
exported routes
10.240.0.0/24 [ the VPC subrange ]
What am I missing?
The GKE cluster needs to be on the same VPC in order to have access to other services on that Private IP. This means you have to create a VPC-native cluster.
If you created your cluster before Cloud SQL had support for private IP, you need to recreate your cluster. I'm not sure why, but for most changes involving networking in GCP you have to recreate the cluster.
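A minimal sketch of creating such a VPC-native cluster on the dev-vpc network (the cluster name, zone and subnet are placeholders):
gcloud container clusters create my-cluster --zone=us-central1-a \
    --network=dev-vpc --subnetwork=dev-subnet \
    --enable-ip-alias    # makes the cluster VPC-native (alias IP ranges)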
I have production stacks inside a Production account and development stacks inside a Development account. The stacks are identical and are setup as follows:
Each stack has its own VPC.
Within the VPC are two public subnets spanning two AZs and two private subnets spanning two AZs.
The private subnets contain the RDS instance.
The public subnets contain a bastion EC2 instance which can access the RDS instance.
To access the RDS instance, I either have to SSH into the bastion machine and access it from there, or create an SSH tunnel via the bastion to access it through a database client application such as pgAdmin.
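For example, the tunnel for pgAdmin is roughly this (the hostnames here are placeholders):
# forward local port 5432 through the bastion to the RDS endpoint
ssh -N -L 5432:my-db.abc123xyz.eu-west-1.rds.amazonaws.com:5432 ec2-user@bastion-public-ip
# then point pgAdmin at localhost:5432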
Current DMS setup:
I would like to be able to use DMS (Database Migration Service) to replicate an RDS instance from Production into Development. So far I am trying the following, but cannot get it to work:
Create a VPC peering connection between Development VPC and Production VPC
Create a replication instance in the private subnet of the Development VPC
Update the private subnet route tables in the Development VPC to route traffic to the CIDR of the Production VPC through the VPC peering connection (see the sketch after this list)
Ensure the Security group for the replication instance can access both RDS instances.
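For reference, the route-table step above looks roughly like this with the AWS CLI (the IDs and the Production CIDR are placeholders):
aws ec2 create-route --route-table-id rtb-0abc1234def567890 \
    --destination-cidr-block 10.1.0.0/16 \
    --vpc-peering-connection-id pcx-0abc1234def567890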
Main Problem:
When creating the source endpoint in DMS, the wizard only shows RDS instances from the same account and region, and only allows RDS instances to be configured using server names and ports. However, the RDS instances in my stacks can only be accessed via bastion machines using tunnelling, so the test endpoint connection always fails.
Any ideas of how to achieve this cross account replication?
Are there any good step-by-step blogs that detail how to do this? I have found a few, but they don't seem to have RDS instances sitting behind bastion machines, and so they all assume the endpoint configuration wizard can be populated using server names and ports.
Many thanks.
Securing the RDS instances via the Bastion host is sound security practice, of course, for developer/operational access.
For the DMS migration service, however, you should expect to open the security groups for both the target and source RDS database instances to allow the replication instance to access both.
From Network Security for AWS Database Migration Service:
The replication instance must have access to the source and target endpoints. The security group for the replication instance must have network ACLs or rules that allow egress from the instance out on the database port to the database endpoints.
Database endpoints must include network ACLs and security group rules that allow incoming access from the replication instance. You can achieve this using the replication instance's security group, the private IP address, the public IP address, or the NAT gateway’s public address, depending on your configuration.
See
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.Network.html
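In practice that usually means an ingress rule on each RDS security group allowing the replication instance in, for example (the group ID, port and CIDR below are placeholders):
aws ec2 authorize-security-group-ingress --group-id sg-0abc1234def567890 \
    --protocol tcp --port 5432 \
    --cidr 10.0.0.0/16    # the range holding the DMS replication instance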
For network addressing and to open the RDS private subnet, you'll need a NAT on both source and target. They can be added easily, and then terminated after the migration.
You can now use Network Address Translation (NAT) Gateway, a highly available AWS managed service that makes it easy to connect to the Internet from instances within a private subnet in an AWS Virtual Private Cloud (VPC).
See
https://aws.amazon.com/about-aws/whats-new/2015/12/introducing-amazon-vpc-nat-gateway-a-managed-nat-service/
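A minimal sketch of adding such a NAT gateway with the AWS CLI (the subnet, Elastic IP allocation and route-table IDs are placeholders):
# create the NAT gateway in a public subnet, using an existing Elastic IP
aws ec2 create-nat-gateway --subnet-id subnet-0abc1234 --allocation-id eipalloc-0abc1234
# route the private subnet's Internet-bound traffic through it
aws ec2 create-route --route-table-id rtb-0abc1234 --destination-cidr-block 0.0.0.0/0 \
    --nat-gateway-id nat-0abc1234def5678901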