Connecting to a Cloud SQL private IP instance without creating a new VM instance - google-cloud-platform

I was wondering if there's any possible solution to connect through a GCP IAP tunnel to a DB (via the Cloud SQL Proxy) when the DB has no public IP.
I don't want to create a new VM for this purpose so I'm only interested in solutions that don't require me to use a VM.
Thanks in advance.

There are two ways to connect your on-prem network to the VPC, but both are fairly involved (and potentially expensive):
You can use Cloud Interconnect
You can use Cloud VPN to set up an HA VPN
For both scenarios, you'll also need to configure Cloud Router to export the routes to your Cloud SQL instance into your on-prem network.
Additionally (if you have control of your constraints) you could revisit the idea of using a public IP. Using the Cloud SQL Auth proxy allows you to authorize your connections using an IAM identity as opposed to traditional firewalling or SSL certs. You can even use org policies to restrict Authorized Networks, making the Auth proxy required to connect.
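As a minimal sketch of that public-IP-plus-IAM approach, the Cloud SQL Python Connector (which wraps the Auth proxy's connection logic in a library) can open an IAM-authenticated connection without any VM; the instance connection name, service account, and database below are placeholders, not values from the question:

```python
# Minimal sketch: connect over public IP with IAM database authentication
# using the Cloud SQL Python Connector
# (pip install "cloud-sql-python-connector[pg8000]" sqlalchemy).
import sqlalchemy
from google.cloud.sql.connector import Connector

connector = Connector()

def getconn():
    return connector.connect(
        "my-project:europe-west1:my-instance",  # hypothetical instance connection name
        "pg8000",                               # PostgreSQL driver; use "pymysql" for MySQL
        user="app-sa@my-project.iam",           # hypothetical IAM database user
        db="my_db",
        enable_iam_auth=True,                   # authorize with IAM instead of a password
    )

# SQLAlchemy pool that creates connections through the connector.
pool = sqlalchemy.create_engine("postgresql+pg8000://", creator=getconn)

with pool.connect() as conn:
    print(conn.execute(sqlalchemy.text("SELECT 1")).scalar())
```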

Related

Best way to establish a private secure connection between an Azure App Service and an AWS ECS service

I have an app service (REST API) in Azure, and I am planning on hosting another service that has to be integrated with the Azure app service. Could someone please let me know the preferred way(s) to make sure the communication is on a private, secure channel?
According to the official Azure docs, you have three options. The VPN option is one of the easiest, but you can run into problems like limited throughput, unpredictable routing via the public internet, and the cost of AWS and Azure data transfer fees.
To better understand which option to use, you can check this flow chart:
Option 1: Connect Azure ExpressRoute and the other cloud provider's equivalent private connection. The customer manages routing.
Option 2: Connect ExpressRoute and the other cloud provider's equivalent private connection. A cloud exchange provider handles routing.
Option 3: Use Site-to-Site VPN over the internet. For more information, see Connect on-premises networks to Azure by using Site-to-Site VPN gateways.
Options 1 and 2 are the best choices if you want to avoid the public internet, require an SLA, want predictable throughput, or need to transfer large data volumes. If you haven't implemented ExpressRoute already, consider whether to use customer-managed routing or a cloud exchange provider.
On the AWS side, you will need to configure your VPC; to understand how to do this, check here.
For more information about these three options, check here.

What is the GCP equivalent of AWS Client VPN Endpoint

We are moving from AWS to GCP. I used a Client VPN Endpoint in AWS to get into the VPC network there. What is the alternative in GCP that I can quickly set up to get my laptop into the VPC network? If there is no exact alternative, what's the closest one? Please provide instructions to set it up.
AWS Client VPN is a managed client-based VPN service that enables you to securely access your AWS resources and resources in your on-premises network. With Client VPN, you can access your resources from any location using an OpenVPN-based VPN client.
Currently there is no managed product available on GCP that allows VPN connections from multiple clients to directly access resources within a VPC, as Cloud VPN only supports site-to-site connectivity; however, there is an existing Feature Request for this.
As an alternative, a Compute Engine instance can be used with an OpenVPN server manually installed and configured following the OpenVPN documentation; however, this would be a self-managed solution.

Connecting Google Cloud Run Service to Google Cloud SQL database

I have 2 google cloud services:
Google Cloud Run Service (Node.js / Strapi)
Google Cloud SQL Service (Mysql)
I have added the Cloud SQL connection to the Google Cloud Run service from the UI and have a public IP for the Google Cloud SQL service. On top of that, I have added the Run service IP to the authorized networks of the SQL service.
If I try and connect from another server (external from Google cloud) I can easily connect to the Google Cloud SQL Service and execute queries.
But if I try to connect from inside the Cloud Run service with exactly the same settings (IP, database_name, etc.), my connection hangs and I get a timeout error in the logs...
How do I properly allow Cloud SQL to accept connections from Cloud Run?
I looked for other answers on here, but they all look very old (around 2015).
You can use 3 modes to access your database:
Use the built-in feature. In this case, you don't need to specify an IP address: a Unix socket is opened to communicate with the database, as described in the documentation (see the sketch at the end of this answer).
Use Cloud SQL private IP. This time, no need to configure a connection in the Cloud Run service; you won't use it because you will use the IP, not the Unix socket. This solution requires 2 things:
Firstly attach your database to your VPC and give it a private IP
Then, you need to route the private IP traffic of Cloud Run through your VPC. For this, you have to create a serverless VPC connector and attach it to the Cloud Run service.
Use Cloud SQL public IP. This time again, no need to configure a connection in the Cloud Run service; you won't use it because you will use the IP, not the Unix socket. To achieve this, you need more steps (and it's less secure):
You need to route all the egress traffic of Cloud Run through your VPC. For this, you have to create a serverless VPC connector and attach it to the Cloud Run service.
Deploy your Cloud Run service with the serverless VPC connector and the egress connectivity parameter set to "all".
Then create a Cloud NAT to route all the VPC connector IP range traffic to a single IP (or set of IPs) (the link is to the Cloud Functions documentation, but it works exactly the same way).
Finally authorize the Cloud NAT IP(s) on Cloud SQL authorized networks.
In your case, you have whitelisted the Cloud Run IP, but it's a shared IP (other services can use the same one, be careful!) and it's not always the same: there is a pool of IP addresses used by Google Cloud.
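As a rough illustration of option 1 (the question's service is Node.js/Strapi, but the /cloudsql socket path works the same way from any runtime), here is a minimal Python sketch; the instance connection name, credentials, and database are placeholders:

```python
# Minimal sketch: connect over the Unix socket that Cloud Run mounts at
# /cloudsql/<INSTANCE_CONNECTION_NAME> when the Cloud SQL connection is attached.
import os
import pymysql

conn = pymysql.connect(
    unix_socket="/cloudsql/my-project:europe-west1:my-instance",  # hypothetical connection name
    user=os.environ.get("DB_USER", "strapi"),
    password=os.environ["DB_PASS"],
    database=os.environ.get("DB_NAME", "strapi"),
)
# For option 2 (private IP + serverless VPC connector), replace unix_socket=...
# with host="<PRIVATE_IP>" and port=3306; no /cloudsql mount is involved.

with conn.cursor() as cur:
    cur.execute("SELECT NOW()")
    print(cur.fetchone())
```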

Cloud SQL Connection within different projects

Problem:
Hello, I have recently started using GCP. For a task, I need to connect my Cloud SQL instance, which has only a private IP and lives in my 'prod' project in 'vpc2', to a VM launched in a different project, 'dev', in 'vpc1'.
Solution attempt:
I have made a private service connection from 'vpc2' to provide a private IP to my SQL instance, and I have also set up VPC peering between vpc1 and vpc2 with import/export of custom routes enabled.
But I am unable to access SQL from the VM. Currently I don't want to use the Shared VPC or SQL proxy features.
Thanks.
Actually, when you create a private IP for your Cloud SQL database, you create a peering between your VPC network and the Google-managed network that hosts your Cloud SQL instances. Therefore, you can't reach it through another peering, because of the peering transitivity rule:
Only directly peered networks can communicate. Transitive peering is not supported. In other words, if VPC network N1 is peered with N2 and N3, but N2 and N3 are not directly connected, VPC network N2 cannot communicate with VPC network N3 over VPC Network Peering.
There are several solutions for this:
Set a public IP on the Cloud SQL instance, without any authorized network (for security reasons), and use the Cloud SQL proxy in your Dev project. It will be able to connect to the Cloud SQL instance through the public IP with an encrypted protocol. But you don't want to use the Cloud SQL proxy; in addition, you would need to add a public IP to your prod Cloud SQL instance, and you might not be authorized to do this!
Set up a Shared VPC. But it's not very easy to manage, with a lot of service limitations, and you don't want to use this solution either.
My last suggestion is to set up a Cloud VPN between your projects. It's a workaround, but it works fine.
I had a similar problem: I have 2 projects, A and B, and I needed to access the Cloud SQL instance in project B from project A. I created a simple VPN instance with Pritunl and configured the routes inside Pritunl. After that, I just created an IPsec VPN between projects A and B with custom routes to the Cloud SQL instance, and it worked; now I can access the database using its internal IP from my laptop locally.

Accessing Cloud SQL from another GCP project

I want to connect to Cloud SQL from a different GCP project.
Cloud SQL is located in ProjectSQL, and there is a VPC network named sql_vpc in the ProjectSQL project.
There is another project, ProjectDataflow, which has a VPC named dataflow_vpc. I want to connect to the Cloud SQL instance in ProjectSQL from a VM launched in the ProjectDataflow project.
Things I have tried with success and failure.
Private ACCESS:
VPC Peering:
Enable private IP access on the Cloud SQL instance with the VPC sql_vpc
Create VPC peering between dataflow_vpc and sql_vpc
This solution does not work because you cannot access the peered network.
https://cloud.google.com/sql/docs/mysql/private-ip
Status: FAILED
Shared Network
As per the docs, I can create the Cloud SQL instance in a shared VPC network. That says I have to create the Cloud SQL instance in the host project, and to access Cloud SQL from a VM instance, the VM has to be in the same network as the authorized private IP network of Cloud SQL.
Status: NOT TRIED but looks to be Negative
Public Access:
Create a Cloud NAT in ProjectDataflow on dataflow_vpc with a manual IP
Use the Cloud NAT public IP to whitelist on the Cloud SQL instance
Now I can access Cloud SQL from ProjectDataflow using the Cloud SQL public IP
STATUS: Success
Please share your experience accessing Cloud SQL from another project.
Is there any best practice for connecting to Cloud SQL from another GCP project?
EDIT:
Newer instances seem to have this option enabled by default, and there's no need to contact support anymore. However, if after all this process the setup is still not working, you may need to contact support.
IMPORTANT: The VPC peering option will not work anymore, as stated in the documentation (more precisely, in the Considerations topic). The only available option to achieve this is now using a Shared VPC.
The process of interconnecting a Cloud SQL instance with another GCP project is pretty straightforward if you follow the documentation. The only thing you need to take into consideration to make it work is that you will have to ask Google Cloud Support to enable custom routes for the speckle-umbrella instance your Cloud SQL is running under; otherwise you won't be able to access your Cloud SQL instance from your GCP project.
The following steps will work for you:
-Configuring VPC for Cloud SQL instance
Inside the project where you have your Cloud SQL instance, create a VPC network with the IP address range of your choice. Choose the same region for the VPC as the one in which your instance is located.
-Configuring VPC for GCP project
Now switch to the project where your Cloud Dataflow instance is located and follow the same process. Create the VPC network, being careful that the IP ranges do not collide with each other. You can use the following tool to check whether the IP address ranges collide. Also take into consideration that both VPC networks must be in the same region.
-Connecting VPC of both projects with peering
Once both VPC networks are created, you need to configure VPC network peering from both projects. From the Cloud SQL instance side, configure the peering by specifying the project and VPC network name to connect with, and also select the option to export custom routes. This way the other side of the peering, in this case your GCP project, will have visibility of your Cloud SQL instance. Now, from the GCP project side, configure the peering by specifying the Cloud SQL project name and the VPC network name to connect with. In the same way as with the Cloud SQL peering, we have to set up this peering to import custom routes, as it will receive the exported routes coming from the other side of the connection, which in our case is your Cloud SQL instance.
Here you can check more information about importing and exporting routes between any VPC network peerings.
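If you prefer to script this peering step instead of using the console, a sketch along these lines with the google-cloud-compute client should create both sides with custom-route exchange. The project IDs and (hyphenated) network names are placeholders, and this only covers the two user-managed VPCs; the speckle-umbrella side still has to be handled by Google Cloud Support as described in the next step:

```python
# Hypothetical sketch: create both sides of the VPC peering with custom-route
# exchange using the google-cloud-compute client (pip install google-cloud-compute).
# Project IDs and network names are placeholders; adjust them to your projects.
from google.cloud import compute_v1


def add_peering(project: str, network: str, peer_project: str, peer_network: str,
                export_routes: bool, import_routes: bool) -> None:
    """Create one side of a VPC peering and configure custom-route exchange."""
    client = compute_v1.NetworksClient()
    request = compute_v1.NetworksAddPeeringRequest(
        network_peering=compute_v1.NetworkPeering(
            name=f"{network}-to-{peer_network}",
            network=f"projects/{peer_project}/global/networks/{peer_network}",
            exchange_subnet_routes=True,
            export_custom_routes=export_routes,
            import_custom_routes=import_routes,
        )
    )
    # Wait for the long-running operation to complete.
    client.add_peering(
        project=project,
        network=network,
        networks_add_peering_request_resource=request,
    ).result()


# Cloud SQL project side: export custom routes towards the consumer project.
add_peering("project-sql", "sql-vpc", "project-dataflow", "dataflow-vpc",
            export_routes=True, import_routes=False)
# Consumer (Dataflow) project side: import the routes exported above.
add_peering("project-dataflow", "dataflow-vpc", "project-sql", "sql-vpc",
            export_routes=False, import_routes=True)
```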
-Request Google Cloud Support to enable the exchange of custom routes for your Cloud SQL instance
Reach out to Google Cloud Support and ask them to enable the exchange of custom routes for the speckle-umbrella VPC network associated with your instance, which is automatically created when the Cloud SQL instance is created.
Take into consideration that this last step is very important: all Cloud SQL projects run under the umbrella project, hence without requesting Google Cloud Support to enable the exchange of custom routes for your instance, this will never work.
Shared VPC
As for Shared VPC, the only thing you need to take into consideration is that you need to enable the option when creating your Cloud SQL instance, as you can't add it afterwards.
You will find a configuration guide for Shared VPC in the following link.