connect local environment to CloudSQL with private IP - google-cloud-platform

I have hosted my application in a CloudRun Container and connected it to CloudSQL.
Everything is in a VPC network and is running smoothly. Now I would like to modify data in production from a database tool like DataGrip. Therefore I need to connect my local environment to my VPC network, which I did through a Cloud VPN tunnel. Now I would like to connect to the SQL instance.
Here I got stuck and I'm wondering how I can establish the connection.
It would be great if someone could help me solve this. Thanks!

My preferred solution is to use the public IP, BUT without whitelisting any network. In effect, the instance has a public IP but all direct connections are refused.
The solution here is to use the Cloud SQL proxy to open a tunnel from your computer to the Cloud SQL database (you reach it on the public IP, but the tunnel is secured). It's exactly like a VPN connection: a secure tunnel.
You can do this:
Download the Cloud SQL proxy
Launch it:
./cloud_sql_proxy -instances=<INSTANCE_CONNECTION_NAME>=tcp:3306
Connect your SQL client to localhost:3306
If port 3306 is already in use, feel free to use another one
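Put together, the steps above look something like this shell session (the project, region and instance names are placeholders; pick the proxy build that matches your OS):

```shell
# Download the (v1) Cloud SQL proxy -- Linux amd64 build shown
wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
chmod +x cloud_sql_proxy

# Look up the instance connection name, then open the local tunnel
gcloud sql instances describe my-instance --format='value(connectionName)'
./cloud_sql_proxy -instances=my-project:europe-west1:my-instance=tcp:3306 &

# Point any MySQL client at the local end of the tunnel
mysql --host=127.0.0.1 --port=3306 --user=root -p
```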
If you prefer the private IP only (sometimes it's a security-team requirement), I wrote an article on this.
If you use a VPN (and you are connected through Cloud VPN), take care to open the correct routes and firewall rules in both directions (in and out)
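As a rough sketch of what "open the correct routes and firewall rules" can mean in the private-IP case (every name and CIDR range below is made up; adapt them to your actual VPC, on-prem range and Cloud SQL private range):

```shell
# Allow on-prem clients (192.168.0.0/24) to reach MySQL inside the VPC
gcloud compute firewall-rules create allow-onprem-to-sql \
  --network=my-vpc \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:3306 \
  --source-ranges=192.168.0.0/24

# For BGP/Cloud Router setups: advertise the Cloud SQL private range
# (here 10.20.0.0/24) back to the on-prem side of the tunnel
gcloud compute routers update-bgp-peer my-router \
  --region=europe-west1 \
  --peer-name=my-peer \
  --advertisement-mode=CUSTOM \
  --set-advertisement-ranges=10.20.0.0/24
```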

Related

Cloud fusion to on-prem postgresql database connection

I'm trying to establish a connection to an on-premise PostgreSQL database from Cloud Data Fusion, but I'm not able to resolve the host and port. Where can I find the host and port for the PostgreSQL DB, and does anything need to be done on the PostgreSQL side to allow access from Data Fusion?
I downloaded the PostgreSQL JDBC driver from the Cloud Data Fusion Hub. In the Data Fusion Studio, I selected PostgreSQL as the source. While filling in the details in the properties, I'm not sure where to find the host/port of my PostgreSQL instance.
The answer is more "theoretical" than practical, but I will try to summarize it as simply as possible.
The first question you need to ask yourself is: is my PostgreSQL instance accessible via a public IP address? If so, this is quite easy: all you need is an authorized user, the correct password, and the public IP address of your instance (and the port you configured).
If your instance is not publicly accessible, which is often the case, then the first thing to do is to set up a VPN connection between your on-prem network and the GCP Virtual Private Cloud (VPC) that is peered to your Data Fusion instance (assuming you set up a private instance, which you usually do for security reasons). Once this is done, Data Fusion should be able to connect directly to the PostgreSQL source via the routes exported over the VPN from your peered GCP VPC.
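If the missing piece is making the VPN routes visible across the Data Fusion peering, that route export can be sketched like this (the peering and network names are placeholders you would look up first):

```shell
# Find the peering that connects your VPC to the Data Fusion tenant project
gcloud compute networks peerings list --network=my-vpc

# Export your VPC's custom routes (including the VPN routes to on-prem)
# over that peering so the private Data Fusion instance can use them
gcloud compute networks peerings update my-datafusion-peering \
  --network=my-vpc \
  --export-custom-routes
```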
I am happy to edit my answer with more detail based on your follow up questions.

How to connect to Cloud SQL from Cloud Run instance while also using Serverless-VPC-Connector?

I'm running a cloud run service with a working Cloud-SQL connection using the proxy to connect to the Cloud-SQL instance. The Cloud-SQL instance does not have a private IP configured.
Now there is a new requirement that this service needs to connect to a DB outside of GCP, for which it needs a static egress-IP that can be whitelisted. I attempted to achieve this via a serverless-VPC-connector (https://cloud.google.com/run/docs/configuring/static-outbound-ip).
Problem: When I add the VPC-connector to the service, and configure it to route all traffic through the vpc-connector, the service fails to deploy because it cannot connect to Cloud-SQL via the proxy anymore:
CloudSQL connection failed. Please see https://cloud.google.com/sql/docs/mysql/connect-run for additional details: Post "https://sqladmin.googleapis.com/sql/v1beta4/projects/<>/instances/<>/createEphemeral?alt=json&prettyPrint=false": context deadline exceeded
I was able to get this exact setup to work for a cloud function (identical external DB, CloudSQL, and vpc connector), and I'm at a loss as to why this wouldn't work for Cloud Run, and I'm wondering if there is additional configuration required which I'm missing?
Is it possible to connect to Cloud-SQL with the proxy, while at the same time using a VPC-connector to achieve a static egress IP?
If you need a static egress public IP, you have only done half of the work. Now you need to add a Cloud NAT that picks up the VPC connector's range and NATs its traffic to the internet. On this NAT you can define a static public IP which will be used for all outgoing communications.
In any case, if you want to reach Cloud SQL, it's better to use the Cloud SQL proxy because the connection is secured (encrypted channel). If you connect directly to the public IP, add an SSL certificate to encrypt the communication over the public internet (the same applies if your instance is located outside GCP).
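The Cloud NAT half of the path can be sketched with gcloud (every name, the region and the connector subnet are placeholders for your own setup):

```shell
# Reserve the static egress IP that the external DB can whitelist
gcloud compute addresses create my-egress-ip --region=europe-west1

# A Cloud Router is required to attach a NAT to the network
gcloud compute routers create my-router \
  --network=my-vpc --region=europe-west1

# NAT the serverless VPC connector's subnet through the reserved IP
gcloud compute routers nats create my-nat \
  --router=my-router --region=europe-west1 \
  --nat-custom-subnet-ip-ranges=my-connector-subnet \
  --nat-external-ip-pool=my-egress-ip
```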

AWS EC2 for QuickBooks

AWS and network noob. I've been asked to migrate QuickBooks Desktop Enterprise to AWS. This seems easy in principle but I'm finding a lot of conflicting and confusing information on how best to do it. The requirements are:
Set up a Windows Server using AWS EC2
QuickBooks will be installed on the server, including a file share that users will map to.
Configure VPN connectivity so that the EC2 instance appears and behaves as if it were on-prem.
Allow additional off-site VPN connectivity as needed for ad hoc remote access
Cost is a major consideration, which is why I am doing this instead of getting someone who knows this stuff.
The on-prem network is very small - one Win2008R2 server (I know...) that hosts QB now and acts as a file server, 10-15 PCs/printers and a Netgear Nighthawk router with a static IP.
My approach was to first create a new VPC with a private subnet that will contain the EC2 instance, and set up a site-to-site VPN connection with the Nighthawk for the on-prem users. I'm unclear whether I also need security group rules that only allow inbound traffic (UDP/TCP file-sharing ports) from the static IP, or if the VPN negates that need.
I'm trying to test this one step at a time and have an instance set up now. I am remote and am using my current IP address in the security group rules for the test (no VPN yet). I set up the file share but I am unable to access it from my computer. I can RDP and ping it, and I have turned on the firewall rules to allow NetBIOS and SMB, but still nothing. I just read another thread that says I need to set up a Storage Gateway, but before I do that, I wanted to see if that is really required or if there's another/better approach. I have to believe this is a common requirement, but I seem to be missing something.
This is a bad approach for QuickBooks. Intuit explicitly recommends against using QuickBooks with a file share via VPN:
Networks that are NOT recommended
Virtual Private Network (VPN) Connects computers over long distances via the Internet using an encrypted tunnel.
From here: https://quickbooks.intuit.com/learn-support/en-us/configure-for-multiple-users/recommended-networks-for-quickbooks/00/203276
The correct approach here is to host QuickBooks on the EC2 instance, and let people RDP (remote desktop) into the EC2 Windows server to use QuickBooks. Do not let them install QuickBooks on their client machines and access the QuickBooks data file over the VPN link. Make them RDP directly to the QuickBooks server and access it from there.
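If you go the RDP route, a minimal sketch of the matching security group rule is below: only port 3389 is opened, and only from the office's static IP (the group ID and IP are placeholders):

```shell
# Allow RDP (TCP 3389) from the on-prem static IP only;
# no SMB/NetBIOS ports need to be exposed at all
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 3389 \
  --cidr 203.0.113.10/32
```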

Direct access a database in a private subnet without SSH tunnel

I have a database set up (using RDS) in a private subnet, and a bastion is set up in front of it in a public subnet. The traditional way to access this database from local laptops is to set up an SSH tunnel on that bastion/jumpbox and map the database port to a local one. But this is not convenient for development, because we need to set up that tunnel every time before we want to connect. I am looking for a way to access this database without setting up an SSH tunnel first. I have seen a case where the local laptop directly uses the bastion's IP and its 3306 port to connect to the database behind it. I have no idea how this was done.
BTW, in that case, they don't seem to use port forwarding, because I didn't find any special rules in the bastion's iptables configuration.
There are several ways to accomplish what you are trying to do, but without understanding the motivation fully it is hard to say which is the "best solution".
SSH tunneling is the de facto standard for accessing a resource in a private subnet behind a public bastion host. I will agree that SSH tunnels are not very convenient; fortunately, some IDEs and many apps are available to make this as easy as the click of a button once configured.
Alternatively, you can set up a client-to-site VPN to your EC2 environment, which would also provide access to the private subnet.
I would caution that anything which proxies or exposes the DB cluster to the outside world in a naked way, such as using iptables, Nginx, etc., should be avoided. If that is your goal, then the more honest solution is simply to make the DB instance publicly accessible. But be aware that any of these approaches which do not use a tunnel in (such as a VPN or SSH tunnel) would be an audit finding and would open your database to various attack vectors. To mitigate this, it is recommended that your security groups restrict port 3306 to the public IPs of your corporate network.
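For completeness, the SSH tunnel being discussed is a one-liner, and an entry in ~/.ssh/config reduces the "every time" cost to one short command (all hostnames here are placeholders):

```shell
# One-off tunnel: local port 3306 forwards to the RDS endpoint via the bastion
ssh -N -L 3306:mydb.abc123.us-east-1.rds.amazonaws.com:3306 ec2-user@bastion.example.com

# Or put it in ~/.ssh/config so the whole thing is just `ssh -N db-tunnel`:
#   Host db-tunnel
#     HostName bastion.example.com
#     User ec2-user
#     LocalForward 3306 mydb.abc123.us-east-1.rds.amazonaws.com:3306
```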

VPN on EC2 to Heroku server

Hi there networking experts,
I have a Rails app hosted on Heroku, and I am looking to set up a VPN tunnel on a separate EC2 instance which will connect with a 3rd party.
3rd party <----(VPN tunnel)----> EC2 <----(HTTP/SSH)---> Heroku
Best case scenario would have been to set up the tunnel directly on our Heroku instance, but that doesn't seem possible according to some of these answers.
With my limited knowledge, I figured that the next best thing would be to set up a 'middle-man' EC2 instance with the capability to listen to the VPN tunnel as well as send HTTP requests to our Heroku server over SSH. The most important consideration in this integration would be security. I would like to encrypt end-to-end, and only decrypt on our Heroku server.
What would be the best practice for achieving something like this, if possible at all?
Thank you!
AWS has a managed VPN offering.
You configure a customer gateway for the client side, attach a virtual private gateway to your VPC, and the VPN connects the two. You can then set up routes which will allow them to connect securely to any services running inside your VPC.
A VPN in AWS can use static or dynamic routing. Static is generally simpler, especially if there is a limited IP range on the client side.
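A static-routing setup along those lines can be sketched with the AWS CLI (all resource IDs, the public IP and the ASN are placeholders):

```shell
# Represent the client's on-prem device (their public IP and BGP ASN)
aws ec2 create-customer-gateway \
  --type ipsec.1 --public-ip 203.0.113.10 --bgp-asn 65000

# Create a virtual private gateway and attach it to your VPC
aws ec2 create-vpn-gateway --type ipsec.1
aws ec2 attach-vpn-gateway \
  --vpn-gateway-id vgw-0123456789abcdef0 --vpc-id vpc-0123456789abcdef0

# Create the VPN connection between the two, using static routes
aws ec2 create-vpn-connection \
  --type ipsec.1 \
  --customer-gateway-id cgw-0123456789abcdef0 \
  --vpn-gateway-id vgw-0123456789abcdef0 \
  --options '{"StaticRoutesOnly":true}'
```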