Accessing Cloud SQL from another GCP project

I want to connect to Cloud SQL from a different GCP project.
The Cloud SQL instance is located in project ProjectSQL, and that project has a VPC network named sql_vpc.
There is another project, ProjectDataflow, which has a VPC named dataflow_vpc. I want to connect to the Cloud SQL instance in ProjectSQL from a VM launched in the ProjectDataflow project.
Things I have tried, with their successes and failures:
Private access:
VPC Peering:
Enable private IP access on the Cloud SQL instance with the VPC sql_vpc
Create VPC peering between dataflow_vpc and sql_vpc
This solution does not work because the Cloud SQL private IP lives in a network peered to sql_vpc, and peering is not transitive, so dataflow_vpc cannot reach it.
https://cloud.google.com/sql/docs/mysql/private-ip
Status: FAILED
Shared VPC:
As per the docs I can create the Cloud SQL instance on a Shared VPC network, which means I have to create the Cloud SQL instance in the host project, and to access the Cloud SQL instance a VM has to be on the same network as the authorized private IP network of the Cloud SQL instance.
Status: NOT TRIED, but looks to be negative
Public access:
Create a Cloud NAT in ProjectDataflow on dataflow_vpc with a manually allocated IP
Whitelist the Cloud NAT public IP on the Cloud SQL instance
Now I can access Cloud SQL from the ProjectDataflow project using the Cloud SQL public IP
Status: SUCCESS
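For reference, the working Cloud NAT setup above can be sketched roughly as follows. The resource names (dataflow-nat-ip, dataflow-router, dataflow-nat) and the us-central1 region are assumptions; adjust to your environment.
# Reserve a static external IP to use as the NAT address
gcloud compute addresses create dataflow-nat-ip \
    --project=ProjectDataflow --region=us-central1
# Cloud NAT requires a Cloud Router on the VPC
gcloud compute routers create dataflow-router \
    --project=ProjectDataflow --network=dataflow_vpc --region=us-central1
# Create the NAT using the manual (static) IP for all subnets
gcloud compute routers nats create dataflow-nat \
    --project=ProjectDataflow --router=dataflow-router --region=us-central1 \
    --nat-external-ip-pool=dataflow-nat-ip \
    --nat-all-subnet-ip-ranges
The address reserved in the first command is the IP you would then whitelist as an authorized network on the Cloud SQL instance.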
Please share your experience accessing Cloud SQL from another project.
Is there any best practice for connecting to Cloud SQL from another GCP project?

EDIT:
Newer instances seem to have this option enabled by default, so there is no need to contact support anymore. However, if the setup is still not working after completing the whole process, you may need to contact support.
IMPORTANT: The VPC peering option will not work anymore, as stated in the documentation, more precisely in the Considerations topic. The only available option to achieve this now is using Shared VPC.
The process of interconnecting Cloud SQL with another GCP project is pretty straightforward if you follow the documentation. The only thing you need to take into consideration to make it work is that you will have to ask Google Cloud Support to enable custom routes for the speckle-umbrella instance your Cloud SQL runs under; otherwise you won't be able to reach your Cloud SQL instance from your GCP project.
The following steps will work for you:
-Configuring VPC for Cloud SQL instance
Inside the project where you have your Cloud SQL instance, create a VPC network with the IP address range of your choice. Create its subnet in the same region in which your instance is located.
-Configuring VPC for GCP project
Now switch to the project where your Cloud Dataflow instance is located and follow the same process. Create the VPC network, being careful that the IP ranges do not overlap with each other. You can use the following tool to check whether the IP address ranges overlap. Also take into consideration that both subnets must be in the same region.
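A minimal sketch of the two steps above. The network and subnet names, the example ranges, and the us-central1 region are all assumptions (note that GCP network names use hyphens, not underscores):
# In the Cloud SQL project
gcloud compute networks create sql-vpc \
    --project=ProjectSQL --subnet-mode=custom
gcloud compute networks subnets create sql-subnet \
    --project=ProjectSQL --network=sql-vpc \
    --region=us-central1 --range=10.10.0.0/24
# In the other project, with a non-overlapping range in the same region
gcloud compute networks create dataflow-vpc \
    --project=ProjectDataflow --subnet-mode=custom
gcloud compute networks subnets create dataflow-subnet \
    --project=ProjectDataflow --network=dataflow-vpc \
    --region=us-central1 --range=10.20.0.0/24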
-Connecting VPC of both projects with peering
Once both VPC networks are created, you need to configure VPC Network Peering from both projects. From the Cloud SQL instance side, configure the peering by specifying the project and VPC network name to connect to, and also select the option to export custom routes. This way the other side of the peering, in this case your GCP project, will have visibility of your Cloud SQL instance. Now, from the GCP project side, configure the peering by specifying the Cloud SQL project name and the VPC network name to connect to. In the same way as with the Cloud SQL peering, set up this peering to import custom routes, as it will receive the routes exported from the other side of the connection, which in our case is your Cloud SQL instance.
Here you can check more information about importing and exporting routes between VPC network peerings.
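A hedged sketch of that peering configuration with gcloud, continuing the hypothetical names from the example above (the peering names are also assumptions):
# From the Cloud SQL project: export custom routes to the peer
gcloud compute networks peerings create to-dataflow \
    --project=ProjectSQL --network=sql-vpc \
    --peer-project=ProjectDataflow --peer-network=dataflow-vpc \
    --export-custom-routes
# From the other project: import the routes exported above
gcloud compute networks peerings create to-sql \
    --project=ProjectDataflow --network=dataflow-vpc \
    --peer-project=ProjectSQL --peer-network=sql-vpc \
    --import-custom-routes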
-Request Google Cloud Support to enable the exchange of custom routes for your Cloud SQL
Reach out to Google Cloud Support and ask them to enable the exchange of custom routes for the speckle-umbrella VPC network associated with your instance, which is automatically created when the Cloud SQL instance is created.
Take into consideration that this last step is very important: all Cloud SQL instances run under the umbrella project, so without asking Google Cloud Support to enable the exchange of custom routes for your instance this will never work.
Shared VPC
As for Shared VPC, the only thing you need to take into consideration is that you need to enable the option when creating your Cloud SQL instance, as you can't add it afterwards.
You will find a configuration guide for Shared VPC in the following link.
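Since the network can only be set at creation time, creating the instance with a private IP on the shared (host-project) network looks roughly like this. The instance name, project IDs, network path, and machine settings are all assumptions:
# Run in the service project that owns the instance; --network points at
# the Shared VPC host project's network (private services access must
# already be configured on that network).
gcloud sql instances create my-instance \
    --project=ProjectSQL \
    --network=projects/host-project/global/networks/shared-vpc \
    --no-assign-ip \
    --database-version=MYSQL_5_7 --tier=db-n1-standard-1 --region=us-central1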

Related

Google Cloud Database Migration Service - Has anyone used DMS to migrate GCP postgres instance from one service project to another using VPC Peering?

I have done this successfully using the IP allowlist connectivity type (external IP), migrating one Postgres instance from one service project to another service project. But when I try to do it using VPC peering, I get an error that says it cannot connect to the source DB. VPC peering works if the source and destination are in the same VPC network within the same service project, but it does not work across different networks and service projects. It also does not work in the same service project with different VPC networks.
I have tried the following:
Within the source service project, I created the source DB in one VPC, allocated a GCP-managed private IP, and enabled the connection for private services.
I created the connection profile, selected "PostgreSQL" as the database engine, and used the private IP in hostname/IP:port.
In the migration job, in the destination part, when choosing a network path I selected "private IP" and had to select the VPC of the source, although I want to select a different VPC.
There is a note at the bottom that says "if you plan on connecting to the migration source via VPC peering choose the VPC where it resides". Does that mean you can only do VPC peering within the same VPC?
Also, in "Define connectivity method", where you select the VPC network, it says "select the network of your source". So I have no choice in where I place the target DB for VPC peering; I must use the same VPC I created the source in.
Does this mean the only way to migrate to a different service project or a different VPC network is using an external IP?
Shared VPC should allow migration between two different GCP projects using DMS, as long as you have a common Shared VPC network, as Goli shared.
For example, with project A as a service project and project B as a service project of the same Shared VPC host project: an existing DB in project A on shared VPC network 1 should be able to migrate to project B, as long as B is a service project as well and VPC network 1 is shared with it.
Please note that in another scenario project A can be the host project and B a service project, where B is shared with (and has access to) shared VPC network 1 where the source DB is located (in project A).
The DMS migration job connectivity method required for Shared VPC cross-project migration is 'VPC peering', so you will need to follow the instructions in this documentation.
Once you have a source database and have set up all the necessary configurations for it according to the DMS documentation (here), including the other relevant initial settings for Shared VPC (e.g. setting up the relevant service project and sharing the relevant VPC network with it), and have finished setting up private services access (and the VPC peering connection) for the Shared VPC network following this documentation, you should be able to set up a new DMS migration job in the destination database's service project and choose that Shared VPC network successfully.
If the test at the end of the migration job configuration is failing for any reason, verify that you've executed all the necessary steps mentioned in the documentation.
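The private services access setup mentioned above (the allocated range plus the service-networking peering) can be sketched like this; the project ID, range name, prefix length, and network name are assumptions:
# Reserve an IP range for Google-managed services on the (shared) VPC
gcloud compute addresses create google-managed-services-range \
    --project=host-project --global --purpose=VPC_PEERING \
    --prefix-length=16 --network=shared-vpc
# Create the private connection (VPC peering) to servicenetworking
gcloud services vpc-peerings connect \
    --project=host-project \
    --service=servicenetworking.googleapis.com \
    --ranges=google-managed-services-range \
    --network=shared-vpc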

Access Cloud SQL instance in separate project

I'm looking for a solution (maybe this isn't the best way) to get an app running on one of our GKE clusters in Project-A to access a Cloud SQL instance in Project-B over its internal IP, ideally via the Cloud SQL proxy. Some more info:
We have VPC peering between Project-A and Project-B, and traffic between both VPCs definitely flows fine
We have the Cloud SQL proxy running in the GKE cluster in Project-A with the SQL instance in Project-B defined
The Cloud SQL instance only has an internal private IP
Pods from the GKE cluster in Project-B can access Cloud SQL in the same project (Project-B), so I know the internal connectivity is definitely there
Only when we briefly add a public IP to the Cloud SQL instance in Project-B does the connection work from Project-A via the Cloud SQL proxy
When I try from Project-A to Project-B, we get connection timeouts.
I understand that when creating a Cloud SQL instance with an internal IP, another separate VPC peering connection called servicenetworking.googleapis.com is created from the VPC in that same project.
My thought here, being from a networking background, is that there is no IP route in Project-A to tell pod traffic to go over the VPC peering connection between the two projects if it wants to get to the private IP of the Cloud SQL instance.
But I wondered if anyone else has tried the same thing.
I've found in the documentation that transitive peering is not supported. I haven't tried it myself, but it seems the recommended way is to use a Shared VPC for accessing Cloud SQL from multiple projects.
In this section:
https://cloud.google.com/sql/docs/mysql/private-ip#quick-reference
Transitive peering
Only directly peered networks can communicate. Transitive peering is not supported. In other words, if VPC network N1 is peered with N2 and N3, but N2 and N3 are not directly connected, VPC network N2 cannot communicate with VPC network N3 over VPC Network Peering.
Clients in one project can connect to Cloud SQL instances in multiple projects using Shared VPC networks.
You can use the following guide to set up a Shared VPC between your projects. In summary, it involves the following steps (a command-line sketch follows the list):
Set the project that hosts your Cloud SQL instance as the host project, since it is the one sharing the resources, which in this case include your Cloud SQL instance.
Select the subnets you would like to share with other projects
Set the project where your GKE cluster is hosted as the service project. This service project can then be attached to the host project set up previously.
Attach the service project to the host project and set up the appropriate VPC administrator roles so that users from the service project are allowed access to the shared resources.
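A minimal sketch of those steps with gcloud. The project IDs (host-project, service-project), subnet name, region, and member are all hypothetical:
# Enable Shared VPC on the host project (requires the
# compute.xpnAdmin role on the organization or folder)
gcloud compute shared-vpc enable host-project
# Attach the service project to the host project
gcloud compute shared-vpc associated-projects add service-project \
    --host-project=host-project
# Grant a service-project user access to a shared subnet
gcloud compute networks subnets add-iam-policy-binding shared-subnet \
    --project=host-project --region=us-central1 \
    --member=user:dev@example.com --role=roles/compute.networkUser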
As mentioned in another answer, VPC peering is not transitive. So even though there's VPC peering between Project A and Project B, that does not mean Project A can communicate with the private-IP Cloud SQL instance (which is deployed inside another VPC that is peered with Project B).
As a workaround, you can deploy a SOCKS5 proxy in Project B and have Project A connect through it using the Cloud SQL proxy:
# Assumes a SOCKS5 tunnel to Project B is already listening on
# localhost:8000 (for example via an SSH dynamic forward to a VM there).
# ALL_PROXY makes the Cloud SQL proxy send its outbound connection
# through that tunnel; tcp:5432 is the local listening port (Postgres).
ALL_PROXY=socks5://localhost:8000 cloud_sql_proxy \
-instances=$INSTANCE_CONNECTION_NAME=tcp:5432

Cloud SQL Connection within different projects

Problem:
Hello, I have recently started using GCP. For a task, I am required to connect my Cloud SQL instance, which has only a private IP and lives in my 'prod' project in 'vpc2', to a VM launched in a different project, 'dev', in 'vpc1'.
Solution attempt:
I have made a private service connection from 'vpc2' to provide the private IP for my SQL instance, and I have also done VPC peering between vpc1 and vpc2 with import/export of custom routes enabled.
But I am unable to access the SQL instance from the VM. Currently I don't want to use the Shared VPC or SQL proxy features.
Thanks.
Actually, when you create a private IP for your Cloud SQL database, you create a peering between your VPC network and the Google-managed network that hosts your Cloud SQL instances. Therefore, adding another peering doesn't help, because you run into the peering transitivity rule:
Only directly peered networks can communicate. Transitive peering is not supported. In other words, if VPC network N1 is peered with N2 and N3, but N2 and N3 are not directly connected, VPC network N2 cannot communicate with VPC network N3 over VPC Network Peering.
There are several solutions for this:
Set a public IP on the Cloud SQL instance, without any allowed networks (for security reasons), and use the Cloud SQL proxy in your dev project. It will be able to connect to the Cloud SQL instance through the public IP with an encrypted protocol. But you don't want to use the Cloud SQL proxy, and in addition you would need to add a public IP to your prod Cloud SQL instance, which you might not be authorized to do!
Set up a Shared VPC. But it's not very easy to manage and comes with a lot of service limitations. And you don't want to use this solution either.
My last suggestion is to set up a Cloud VPN between your projects. It's a workaround, but it works fine; a sketch follows.
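A condensed sketch of one side of an HA VPN between the two VPCs. The gateway, router, tunnel, and peer names, the ASNs, the region, the shared secret, and the link-local addresses are all assumptions; the dev side mirrors this with the values swapped:
# In the prod project: HA VPN gateway and Cloud Router on vpc2
gcloud compute vpn-gateways create prod-gw \
    --project=prod --network=vpc2 --region=us-central1
gcloud compute routers create prod-router \
    --project=prod --network=vpc2 --region=us-central1 --asn=64512
# Tunnel towards the matching gateway created in the dev project
gcloud compute vpn-tunnels create prod-to-dev-0 \
    --project=prod --region=us-central1 --vpn-gateway=prod-gw \
    --peer-gcp-gateway=projects/dev/regions/us-central1/vpnGateways/dev-gw \
    --shared-secret=SECRET --router=prod-router --interface=0 --ike-version=2
# BGP session over the tunnel (link-local addresses are placeholders)
gcloud compute routers add-interface prod-router \
    --project=prod --region=us-central1 --interface-name=if0 \
    --vpn-tunnel=prod-to-dev-0 --ip-address=169.254.0.1 --mask-length=30
gcloud compute routers add-bgp-peer prod-router \
    --project=prod --region=us-central1 --peer-name=dev-peer \
    --interface=if0 --peer-ip-address=169.254.0.2 --peer-asn=64513
For dev to learn the route to the database, the prod router additionally has to advertise the Cloud SQL private-services range as a custom route, and the servicenetworking peering in prod has to export custom routes.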
I had a similar problem: I have two projects, A and B, and I needed to access the Cloud SQL instance in project B from project A. I created a simple VPN instance with pritunl and configured the routes inside pritunl; after that I just created an IPsec VPN between projects A and B with custom routes to the Cloud SQL range, and it worked. Now I can access the database over its internal IP from my laptop, locally.

Connecting an AWS EC2 to a Google Cloud SQL instance locally using VPN Gateway

I have an AWS account with an EC2 instance in it that I am trying to connect to a Cloud SQL server (MySQL 5.6) inside Google Cloud Platform.
I have successfully set up a VPN between AWS and GCP and can echo a message over nc between an EC2 instance on AWS and a VM on GCP.
As GCP-managed DBs are not placed inside a VPC of my choosing, I followed this guide to give the DB a private IP and to then peer that network with my Google VPC. I verified this works by accessing the DB via pymysql from a VM in GCP using the DB's private IP.
However, my issue is connecting the EC2 instance inside AWS to the Cloud SQL DB in the same way. I have followed this guide to allow the use of the DB's private IP from an external source, but I am stuck on how to set up the routing to the peered network the DB sits in using AWS routing.
The problem has been sorted!
In the Advertised routes settings of my Cloud Router, I had misunderstood the function of "Advertise all subnets visible to the Cloud Router (Default)".
I needed to instead choose "Create custom routes" and then the sub-option "Advertise all subnets visible to the Cloud Router".
This then allowed me to add the Cloud SQL subnet to my router so that the IP block propagates over to AWS.
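In gcloud terms, that change looks roughly like the following; the router name, region, and the Cloud SQL private-services range 10.50.0.0/24 are placeholders:
gcloud compute routers update my-cloud-router \
    --region=us-east1 \
    --advertisement-mode=custom \
    --set-advertisement-groups=all_subnets \
    --set-advertisement-ranges=10.50.0.0/24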

How to connect AppEngine Standard Gen2 to local resource using Serverless VPC and Cloud VPN?

I have a project setup where I can connect to a local resource through AppEngine Flexible instances launching on a VPC network that is set up with a Cloud VPN connection to my local firewall.
With the release of Serverless VPC for the us-east1 region, I wanted to change my setup to use AppEngine Standard Gen2 instances instead of Flexible for the cost savings. I set up a Serverless VPC connector for the region/network my AppEngine app is hosted on and my Cloud VPN connection is configured for, updated my app.yaml accordingly, and pushed a new version.
I keep getting timeout errors for the new version that is trying to use Serverless VPC to connect to my local resource.
Some context:
The VPC network is named "portal" and set up in "Auto" mode (automatic creation of subnets for each region)
Cloud VPN is set up as a Classic VPN in the "portal" network with route-based routing in the us-east1 region, connecting to my remote local 192.168.11.0/24 subnet.
A route exists on the VPC network for destination 192.168.11.0/24 with the Cloud VPN I have set up as the next hop (automatically created)
With the above, AppEngine Flexible deployments on the "portal" network can connect to my local resource, as can any other Compute Engine VM on the "portal" network
I set up the Serverless VPC connector in the us-east1 region with the subnet 10.8.0.0/28
I'm not too clear on how Serverless VPC works, so I'm not sure how to even begin troubleshooting. When I click on the route rule for the 192.168.11.0/24 destination, I can see the AppEngine Flexible instances listed along with some "serverless-vpc-access"-tagged instances that appear to be on a different subnetwork but use IPs from 10.8.0.0/28.
Should this configuration be working? If not, what changes do I need to make in order to support this?
Your problem (most likely) is caused by static routing. Do you have a route for return traffic coming from your VPN going to the VPC connector? Look at the routes defined for the VPN.
The purpose of a Serverless VPC connector is to allow connections from App Engine Standard to your VPC network, since the App Engine Standard environment is hosted and managed by Google and is not part of your VPC network.
More details can be found here: https://cloud.google.com/vpc/docs/configure-serverless-vpc-access
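For reference, a connector like the one described in the question would be created roughly as follows (the connector name and project ID are hypothetical; the region, network, and range are taken from the question), and app.yaml then points at it:
gcloud compute networks vpc-access connectors create portal-connector \
    --region=us-east1 --network=portal --range=10.8.0.0/28
# app.yaml (App Engine Standard Gen2) then references the connector:
#   vpc_access_connector:
#     name: projects/MY_PROJECT/locations/us-east1/connectors/portal-connector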
That being said, you should verify the following:
Make sure that you've added the new subnet (/28) to your local on-premises routes, with your VPN gateway as the next hop. Since you're using route-based routing, there is nothing to do regarding the traffic selectors on the VPN.
Make sure your local firewall is configured to accept connections back and forth with the new range (/28).
While this probably won't apply to you, I just wanted to point out that communication through the Serverless VPC connector towards the App Engine Standard environment is not possible unless it happens on the same original TCP connection that originated from that App Engine app (TCP established).
Your configuration, as you described it, is definitely achievable. As mentioned, there are only a few things you need to verify to make sure it works.