How to programmatically create Cloud SQL instance with Private IP - google-cloud-platform

I'm trying to programmatically set up a new instance of Cloud SQL which has a Private IP, but the creation fails. I need a Private IP to connect from a GKE Kubernetes cluster.
Programmatically creating a new Cloud SQL instance only succeeds after a Cloud SQL instance with a Private IP has been created manually first. My assumption is that the manual creation sets up the necessary VPC network peering. If no Cloud SQL instance was created manually first, the creation fails.
How do I programmatically create the VPC network peering required to create a Cloud SQL instance which has a Private IP?
This is the request I'm making to create the Cloud SQL instance with a private IP:
// client: an authenticated HTTP client for the Cloud SQL Admin API (v1beta4)
const res = await client.request({
  url: `https://www.googleapis.com/sql/v1beta4/projects/${projectId}/instances`,
  method: "POST",
  data: {
    name: "my-database-8",
    settings: {
      tier: "db-f1-micro",
      ipConfiguration: {
        privateNetwork: `projects/${projectId}/global/networks/default`,
        ipv4Enabled: true
      }
    },
    databaseVersion: "MYSQL_5_7"
  }
})
I would expect the Cloud SQL instance with private networking to be created successfully, even when no Cloud SQL instance was created manually first.

My assumption is that the manual creation sets up the necessary VPC network peering.
You are correct.
How do I programmatically create the VPC network peering required to create a Cloud SQL instance which has a Private IP?
It involves reserving an IP address range in your VPC and establishing a private connection (VPC peering) with the Service Networking service. Detailed steps are provided in the public doc (look at the gcloud section).
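A minimal sketch of those steps with gcloud, assuming the default network and a placeholder range name (adjust the project, network, and prefix length to your setup):
gcloud services enable servicenetworking.googleapis.com --project=PROJECT_ID
# Reserve an IP range for Google-managed services in the VPC
gcloud compute addresses create google-managed-services-default \
    --global \
    --purpose=VPC_PEERING \
    --prefix-length=16 \
    --network=default \
    --project=PROJECT_ID
# Create the private services access connection (VPC peering with Service Networking)
gcloud services vpc-peerings connect \
    --service=servicenetworking.googleapis.com \
    --ranges=google-managed-services-default \
    --network=default \
    --project=PROJECT_ID
Once this connection exists, the instance creation request in the question should be able to use privateNetwork. The same steps can also be performed via the Compute Engine Addresses API and the Service Networking API if you want to keep everything programmatic.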

Related

GCP: Connection to Cloud SQL

I have an architectural issue with Cloud SQL.
We have an API that is running in a GKE cluster in network A and a Cloud SQL instance in network B. The current network config doesn't allow peering between these two networks. Is there any possibility to connect the API to the instance?
As Ferregina suggested:
The bastion hosts provide secure access to Linux instances located in the private and public subnets of your virtual private cloud (VPC). The solution sets up a Multi-AZ environment and deploys Linux bastion host instances into the public subnets.
As mentioned in the document:
To connect to a Cloud SQL instance using private IP, the Cloud SQL
Auth proxy must be on a resource with access to the same VPC network
as the instance.
Refer to this link for more information.
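For illustration, once such a host exists inside the instance's VPC, the Cloud SQL Auth proxy can be forced onto the private IP; the instance connection name below is a placeholder:
# v1 proxy: connect over the instance's private IP
./cloud_sql_proxy -instances=PROJECT:REGION:INSTANCE=tcp:3306 -ip_address_types=PRIVATE
# v2 proxy equivalent
./cloud-sql-proxy --private-ip --port 3306 PROJECT:REGION:INSTANCE
Your API (or an SSH tunnel through the bastion) then connects to 127.0.0.1:3306 as if the database were local.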

Setup database synchronization from AWS RDS to GCP CloudSQL

We want to move our AWS RDS database to GCP CloudSQL. We want to do this without downtime. So our approach was to set up a HA VPN tunnel and use Data Migration Service to sync everything to CloudSQL.
The RDS database is in a private subnet on the AWS side. I've successfully set up a HA VPN tunnel between this AWS private subnet and a private subnet in our GCP project.
I'm able to verify that this works because I can do the following things:
ping from an instance in GCP in the private subnet to an instance in AWS in that private subnet
ping from an instance in AWS in the private subnet to the instance in GCP
After installing MySQL on the GCP instance, I'm able to connect and query the RDS database
I'm struggling with setting up the Data Migration Service in GCP to sync the data from the RDS instance. I've chosen the CloudSQL instance to have a Private IP, not a public one. As connectivity method, I select VPC peering and select the VPC in which the GCP instance from which I'm able to contact the RDS instance resides.
I understand that CloudSQL is created in a project peered to my GCP project, and the CloudSQL instance resides in a subnet in this new project. So there is no route from this subnet to my private subnet. However, I see that it is peered automatically. In this peering connection, I checked the option to import and export custom routes, but still, I cannot reach the RDS from the CloudSQL instance.
I've got routes in GCP for the private subnet IP range of AWS, with the next hop the VPN tunnels.
I'm not sure what I need to do at this point to connect Cloud SQL to RDS.
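One detail that is easy to miss in this setup (and an assumption here, since the exact peering configuration isn't shown): the custom/VPN routes have to be exported on the servicenetworking peering itself, not only on a peering between your own VPCs, so that the Google-managed network hosting the Cloud SQL instance learns the route to the AWS range. A rough gcloud sketch, using the usual default peering name for private services access:
gcloud compute networks peerings update servicenetworking-googleapis-com \
    --network=MY_VPC \
    --export-custom-routes \
    --project=PROJECT_ID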

Unable to move Cloud SQL to different subnet or vpc

I am trying to move Cloud SQL from one subnet to another in my GCP project. Basically, I created a Cloud SQL instance that uses the Google-managed services connection, with the IP range allocated by Google by default.
I want to switch to my own CIDR, which I set up via a managed services connection I created.
I am following the steps from Changing the private IP address of an existing Cloud SQL instance,
trying to switch to a temporary network/VPC before attaching back to my custom VPC with my own managed services connection.
$ gcloud --project=myprj beta sql instances patch mydbid --network=tmp_vpc --no-assign-ip
The following message will be used for the patch API method.
{"name": "mydbid", "project": "myprj", "settings": {"ipConfiguration": {"ipv4Enabled": false, "privateNetwork": "https://compute.googleapis.com/compute/v1/projects/myprj/global/networks/tmp_vpc"}}}
ERROR: (gcloud.beta.sql.instances.patch) HTTPError 400: This operation is not valid for this instance.
I am assuming you are using a Shared VPC.
Currently, it is not possible to assign a Private IP from a Shared VPC network to an existing Cloud SQL instance.
This operation is only possible when creating a new instance, as explained in the Quick reference for Private IP topics:
You can create Cloud SQL instances with private IP addresses in a
Shared VPC network. However, you cannot assign a private IP address in
a Shared VPC network to an existing Cloud SQL instance.
Here you can find a Feature Request opened for your use case.
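If recreating the instance is an option, a rough gcloud sketch of creating a new instance directly on the Shared VPC network (host project, network, region, and instance name are placeholders):
gcloud beta sql instances create new-instance \
    --project=SERVICE_PROJECT \
    --network=projects/HOST_PROJECT/global/networks/SHARED_VPC_NETWORK \
    --no-assign-ip \
    --database-version=MYSQL_5_7 \
    --tier=db-f1-micro \
    --region=us-central1
You would then migrate the data (for example with an export/import or Database Migration Service) and delete the old instance.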

Accessing Cloud SQL from another GCP project

I want to connect to Cloud SQL from a different GCP project.
Cloud SQL is located in ProjectSQL, and there is a VPC network in the ProjectSQL project named sql_vpc.
There is another project, ProjectDataflow, which has a VPC dataflow_vpc. I want to connect to the Cloud SQL instance in ProjectSQL from a VM launched in the ProjectDataflow project.
Things I have tried with success and failure.
Private ACCESS:
VPC Peering:
Enable Private IP access on the Cloud SQL instance with the VPC sql_vpc
Create VPC peering between dataflow_vpc and sql_vpc
This solution does not work because you cannot reach Cloud SQL across the peered network (VPC peering is not transitive).
https://cloud.google.com/sql/docs/mysql/private-ip
Status: FAILED
Shared Network
As per the doc, I can create the Cloud SQL instance in a Shared VPC network. That says I
have to create the Cloud SQL instance in the host project, and to access the Cloud
SQL instance from a VM instance, it has to be in the same network as the authorized
private IP network of the Cloud SQL instance.
Status: NOT TRIED, but looks negative
Public Access:
Create a Cloud NAT in ProjectDataflow with dataflow_vpc using a manual IP (sketched below)
Use the Cloud NAT public IP to whitelist on the Cloud SQL instance
Now I can access Cloud SQL from project ProjectDataflow using the Cloud SQL public IP
STATUS: Success
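For reference, the Cloud NAT approach above, sketched in gcloud with placeholder names and region (note that --authorized-networks replaces the existing list on the instance):
# Reserve a static external IP to use as the NAT address
gcloud compute addresses create dataflow-nat-ip --region=REGION --project=ProjectDataflow
# Router plus a NAT gateway that uses the manual IP
gcloud compute routers create dataflow-nat-router --network=dataflow_vpc --region=REGION --project=ProjectDataflow
gcloud compute routers nats create dataflow-nat \
    --router=dataflow-nat-router --region=REGION --project=ProjectDataflow \
    --nat-external-ip-pool=dataflow-nat-ip --nat-all-subnet-ip-ranges
# Whitelist the NAT IP on the Cloud SQL instance (public IP path)
gcloud sql instances patch my-instance --project=ProjectSQL --authorized-networks=NAT_IP/32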
Please share your experience accessing Cloud SQL from another project.
Is there any best practice for connecting to Cloud SQL from another GCP project?
EDIT:
Newer instances seem to have this option enabled by default, so there is no need to contact support anymore. However, if the setup is still not working after completing the whole process, you may need to contact support.
IMPORTANT: The VPC peering option no longer works, as stated in the documentation, more precisely in the Considerations topic. The only remaining option to achieve this is using a Shared VPC.
The process of interconnecting Cloud SQL with another GCP project is pretty straightforward if you follow the documentation. The only thing you need to take into consideration to make it work is that you have to request Google Cloud Support to enable the exchange of custom routes for the speckle-umbrella instance your Cloud SQL runs under; otherwise you won't be able to reach your Cloud SQL instance from your GCP project.
The following steps will work for you:
-Configuring VPC for Cloud SQL instance
Inside the project where you have your Cloud SQL instance, create a
VPC network with the IP address range of your choice, in the same
region as the one in which your instance is located.
-Configuring VPC for GCP project
Now switch to the project where your Cloud Dataflow instance is located
and follow the same process. Create the VPC network, being careful that
the IP ranges do not overlap with each other. You can use the following tool to
check whether the IP address ranges overlap. Also take into consideration
that both VPC networks should be in the same region (see the gcloud sketch below).
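A rough gcloud sketch of both networks with non-overlapping ranges (all names, ranges, and the region are placeholders):
# Project that hosts the Cloud SQL instance
gcloud compute networks create sql-peer-vpc --project=ProjectSQL --subnet-mode=custom
gcloud compute networks subnets create sql-peer-subnet \
    --project=ProjectSQL --network=sql-peer-vpc --region=REGION --range=10.10.0.0/24
# Project that hosts the VM / Dataflow workers
gcloud compute networks create dataflow-peer-vpc --project=ProjectDataflow --subnet-mode=custom
gcloud compute networks subnets create dataflow-peer-subnet \
    --project=ProjectDataflow --network=dataflow-peer-vpc --region=REGION --range=10.20.0.0/24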
-Connecting VPC of both projects with peering
Once both VPC networks are created, you need to configure the VPC
network peering from both projects. From the Cloud SQL instance side,
configure the peering by specifying the project and VPC network name to
connect with, and also select the option to export custom routes. This
way the other side of the peering, in this case your GCP project, will
have visibility of your Cloud SQL instance. Now, from the GCP project
side, configure the peering by specifying the Cloud SQL project name and
the VPC network name to connect with. In the same way as the
Cloud SQL peering, set up this peering to import custom
routes, as it will receive the routes exported from the other side
of the connection, which in our case is your Cloud SQL instance (see the gcloud sketch below).
Here you can find more information about importing and exporting routes between VPC network peerings.
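A gcloud sketch of the two peerings described above, reusing the placeholder network names from the previous step (the peering names are also placeholders):
# From the Cloud SQL project: peer to the other project and export custom routes
gcloud compute networks peerings create to-dataflow \
    --project=ProjectSQL --network=sql-peer-vpc \
    --peer-project=ProjectDataflow --peer-network=dataflow-peer-vpc \
    --export-custom-routes
# From the other project: peer back and import custom routes
gcloud compute networks peerings create to-sql \
    --project=ProjectDataflow --network=dataflow-peer-vpc \
    --peer-project=ProjectSQL --peer-network=sql-peer-vpc \
    --import-custom-routes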
-Request Google Cloud Support to enable for you the exchange custom routes for your Cloud SQL
Reach out to Google Cloud Support and ask them to enable the exchange of
custom routes for the speckle-umbrella VPC network associated with
your instance, which is automatically created when the Cloud SQL
instance is created.
Take into consideration that this last step is very important: all Cloud SQL instances run under the umbrella project, so without asking Google Cloud Support to enable the exchange of custom routes for your instance, this will never work.
Shared VPC
As for Shared VPC, the only thing you need to take into consideration is that you need to enable the option when creating your Cloud SQL instance, as you can't add it afterwards.
You will find a configuration guide for Shared VPC in the following link.

Connecting VPC to Cloud SQL

I am trying to connect a VPC with GKE to a Cloud SQL database.
I have specified a VPC with the following details:
IP range         Gateway
10.240.0.0/24    10.240.0.1
I see that all my GKE services are in 10.39.xxx.xx
NAME                 CLUSTER-IP
service/kubernetes   10.39.240.1 ....
service/api          10.39.xxx.xx
service/web          10.39.xxx.xx
I don't actually understand the connection with the VPC here. I want to have the GKE cluster able to communicate with a Cloud SQL database without exposing it over the public internet.
I have a Cloud SQL db on public IP, say, 36.241.123.123 with a private IP equal to 10.7.224.3.
In SQL > Connections I check the Private IP box and, given the choice between default and dev-vpc (the name of my VPC), I select dev-vpc.
According to https://cloud.google.com/sql/docs/mysql/configure-private-ip I should be done now, but I am unable to connect to the Cloud SQL instance from my GKE cluster.
I do see the following message when selecting the private IP.
Private IP connectivity requires additional APIs and permissions. You may need to contact your organisation's administrator for help enabling or using this feature. Currently, Private IP cannot be disabled once it has been enabled.
I also have a VPC peering connection
Peering connection details
Imported routes: 10.7.224.0/24 [the Cloud SQL private IP is in this range]
Exported routes: 10.240.0.0/24 [the VPC subnet range]
What am I missing?
The GKE cluster needs to be on the same VPC in order to have access to other services on that Private IP. This means you have to create a VPC-native cluster.
If you created your cluster before Cloud SQL had support for private IP, you need to recreate your cluster. I'm not sure why, but for most changes involving networking in GCP you have to recreate the cluster.
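For example, a VPC-native cluster on that network could be created roughly like this (cluster name, subnet, and zone are placeholders):
gcloud container clusters create my-cluster \
    --zone=ZONE \
    --network=dev-vpc \
    --subnetwork=SUBNET \
    --enable-ip-alias
With alias IP ranges enabled, the pod ranges are exchanged over the peering Cloud SQL sets up with dev-vpc, so pods can reach the instance's private IP (10.7.224.3 here) directly.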