I have a PostgreSQL database on Google Cloud Platform (Cloud SQL). I'm currently managing this database through pgAdmin, installed on my laptop. I've added the IP address of my laptop to the whitelist on the Cloud SQL settings page. This all works.
The problem is: when I go somewhere else and connect to a different network, my IP address changes and I can no longer connect to the PostgreSQL database (through pgAdmin) from my laptop.
Does anyone know a (secure) solution, involving a proxy server (or something else), to connect from my laptop (and only my laptop) to my PostgreSQL database, even if I'm not on a whitelisted network (IP address)? Maybe I can set up a VM instance and install a proxy server on it? But I have no clue where to start (or what to search for).
You have many options for connecting to a Cloud SQL instance from external applications, such as a public IP address with SSL, a public IP address without SSL, the Cloud SQL Proxy, etc. You can see all of them here.
Among these connection options is the Cloud SQL Proxy, which provides secure access to your instances without the need for authorized networks or for configuring SSL on your part.
You only need to follow the steps listed here and you will be able to connect to your Cloud SQL instance using the proxy:
Enable Cloud SQL Admin API on your console.
Install the proxy client on your local machine (Linux):
wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
chmod +x cloud_sql_proxy
Determine how you will authenticate the proxy. You can use a service account or let the Cloud SDK take care of authentication.
If your authentication method requires it, create a service account.
Determine how you will specify your instances for the proxy. Your options for instance specification depend on your operating system and environment.
Start the proxy using either TCP sockets or Unix sockets (see the sketch after this list).
Take note that as of this writing, Cloud SQL Proxy does not support Unix sockets on Windows.
Update your application to connect to Cloud SQL using the proxy.
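For example, with Cloud SDK credentials in place, starting the proxy on a TCP socket and pointing pgAdmin at it could look like this (a minimal sketch; the project, region, and instance names are placeholders):

# Authenticate with your Google account (one of several auth options):
gcloud auth login

# Start the proxy, exposing the instance on local port 5432.
# The instance connection name has the form PROJECT:REGION:INSTANCE.
./cloud_sql_proxy -instances=my-project:us-central1:my-instance=tcp:5432

pgAdmin can then connect to host 127.0.0.1, port 5432, with your normal database credentials; the proxy tunnels and encrypts the traffic to the instance, so no IP whitelisting is needed.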
Related
I am trying to reach our PostgreSQL server, running as a GCP Cloud SQL instance, from the pgAdmin 4 tool on my desktop. For that you have to whitelist your IP address. Until this point it was working fine; the whole team used IPv4 addresses. But I moved to a new country where my new ISP assigned an IPv6 address to me, and it seems like GCP doesn't allow IPv6 addresses to be whitelisted, thus you can't use them to connect.
(Picture of the Connections/Allowed Networks tab.)
Is there any kind of solution to this?
Or do they expect me to only have ISPs who assign IPv4 addresses to me?
Thank you.
Alternatively, you could just use the Cloud SQL Auth Proxy directly. You'll have to run an instance of it on your local machine, and then pgAdmin can connect to the proxy on localhost.
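Assuming the newer Auth Proxy (v2) binary, the invocation could look like this (a sketch; the instance connection name is a placeholder):

# Listen on 127.0.0.1:5432 and tunnel to the instance:
./cloud-sql-proxy --port 5432 my-project:us-central1:my-instance

pgAdmin then connects to 127.0.0.1:5432 as if the database were running locally.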
Yes, there is a workaround for this, but not through the UI; instead you need to use the Cloud SDK.
This has recently been added to the Cloud SDK beta, as you can see in the documentation for gcloud sql connect:
If you're connecting from an IPv6 address, or are constrained by certain organization policies (restrictPublicIP, restrictAuthorizedNetworks), consider running the beta version of this command to avoid error by connecting through the Cloud SQL proxy: gcloud beta sql connect.
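In practice that means connecting with something like the following (a sketch; the instance name and user are placeholders):

# Opens a psql session through an on-the-fly Cloud SQL Proxy,
# so it works from an IPv6 address too:
gcloud beta sql connect my-instance --user=postgres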
I couldn't find any feature requests for this being implemented in the UI at the moment, so if you'd like to have this changed, I would suggest opening one in Google's Issue Tracker system.
I would like to have SSL enforced when connecting to my Google Cloud SQL (Postgres) instance, but I can't find a way to provide a certificate to my Python app deployed to App Engine: psycopg2 requires the certificate to have the proper permissions (<cert> has group or world access; permissions should be u=rw(0600) or less), but since there's no access to chmod on App Engine, I don't know how to change them.
Any ideas?
To provide some context: the goal is primarily to enforce secure access to the DB from outside GCP (from my workstation). The problem with App Engine apps is a side effect of this change. But maybe there's a different and better way to achieve the same?
Thanks.
Google Cloud App Engine uses the Cloud SQL Proxy for any connections using the built-in /cloudsql Unix socket directory. These connections are automatically wrapped in an SSL layer until they reach your Cloud SQL instance.
You don't need to worry about using certificates, as the proxy automates this process for you.
This is a bit tricky because there is a chicken-and-egg problem here:
The Postgres driver wants a file (because it's a wrapper around a C library that reads the file)
And you cannot set the correct file permissions inside App Engine on boot.
There are several cases (and solutions) for this problem:
AppEngine Classic environment
Workaround - write the cert file yourself in the /tmp ramdisk
Use a UNIX socket to connect to a Cloud SQL instance with public IP address (no SSL).
Or use a VPC and private VPC access for a Cloud SQL instance with a private IP address, and use the workaround.
AppEngine Flex Environment
Include the cert file in your docker image
Workaround: Write the cert file yourself in /tmp
The easiest way to work around this problem is to write the cert file yourself into /tmp, which is a ramdisk in the App Engine environment.
Something like this would do the trick:
import os

certfile_path = '/tmp/certfile'  # /tmp/ is a ramdisk on App Engine
certfile_content = os.environ.get('CERTFILE')

if certfile_content is not None:
    # Write the cert to the ramdisk and give it the 0600 permissions
    # the Postgres driver insists on.
    with open(certfile_path, 'w+') as fp:
        fp.write(certfile_content)
    os.chmod(certfile_path, 0o600)
    # Now the certfile is "safe" to use; we can pass it in our
    # (SQLAlchemy-style) connection string:
    connection_string = f"postgresql+psycopg2://host/database?sslmode=require&sslcert={certfile_path}"
else:
    connection_string = 'postgresql+psycopg2://host/database'
    # ...or just raise an exception, log the problem, and exit.
(See the Postgres connection string reference for SSL options.)
Long term solution
The point of using SSL is both to encrypt the traffic between you and the database and to avoid man-in-the-middle attacks.
We have some options here:
Option 1: When you're inside App Engine AND your Cloud SQL instance has a public IP address.
In this case, you can connect to a Postgres instance through a Unix domain socket. Google runs a proxy listening on that socket and takes care of encrypting the traffic and securing the endpoints (SSL itself is disabled on the socket).
In this case, you'll need to add SQL Client permissions to the service account you use AND add the Unix socket directory to your YAML file (see the Google documentation).
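Locally, you can emulate that socket layout with the proxy, which also shows the connection format (a sketch; the instance connection name is a placeholder):

# Run the proxy with a socket directory instead of a TCP port
# (the directory must exist and be writable):
./cloud_sql_proxy -dir=/cloudsql -instances=my-project:us-central1:my-instance

# Connect through the Unix domain socket; sslmode=disable is fine here
# because the proxy already encrypts the traffic on the wire:
psql "host=/cloudsql/my-project:us-central1:my-instance user=postgres dbname=postgres sslmode=disable"

On App Engine itself the /cloudsql directory is pre-populated for you.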
Option 2: When you're inside App Engine Classic AND your Cloud SQL instance has a PRIVATE IP address.
If your Cloud SQL instance has a private IP, this can be a bit "simpler".
We need to enable Serverless VPC Access - this connects App Engine to the VPC where we have resources on private IP addresses (for public IPs it's not needed, because public IP addresses don't overlap) - AND the Cloud SQL server must be connected to the VPC.
Now you can connect to Postgres as usual. The traffic inside the VPC network is encrypted. If you have just this Cloud SQL instance in the VPC, there is no need to use SSL (nobody can put a sniffer on the network or mount a man-in-the-middle attack).
But if you still want/need to use SSL, you have to use the workaround described before.
Option 3: Use AppEngine FLEX environment.
The Flex environment is 'just' a Docker image.
You'll need to build a custom Docker image for the Python runtime that includes the certfile.
Remember the Flex environment is not included in the free tier.
Based on my current understanding, when I enable a service connection to my Cloud SQL instance in one of my revisions, the path /cloudsql/[instance name]/.s.PGSQL.5432 becomes populated. This is a UNIX socket connection.
Unfortunately, a 3rd party application I'm using doesn't support UNIX socket connections and as such I'm required to connect via TCP.
Does the Google Cloud SQL Proxy also offer a way for me to connect to Cloud SQL via something like localhost:5432, or some equivalent? Some of the documentation I'm reading suggests that I have to do elaborate networking configuration with private IPs just to enable TCP-based Cloud SQL for my Cloud Run revisions, but I feel like the proxy is already capable of giving me a TCP connection instead of a Unix socket.
What is the right and most minimal way forward here, obviously assuming I do not have the ability to modify the code I'm running?
I've also cross posted this question to the Google Cloud SQL Proxy repo.
The most secure and easiest way is to use a private IP. It's not so long or so hard; there are 3 steps:
Create a serverless VPC connector. Create it in the same region as your Cloud Run service, and note the VPC network that you use (by default it's "default").
Add the serverless VPC connector to the Cloud Run service. Route only the private IPs through this connector.
Add a private connection to your Cloud SQL database. Attach it to the same VPC network as your serverless VPC connector.
The Cloud configuration is then done. Now you have to get the private IP of your Cloud SQL instance and add it to the parameters of your Cloud Run service to open a connection to that IP.
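In gcloud terms, the three steps might look roughly like this (a sketch; the connector, service, and instance names are placeholders, and the private connection to Cloud SQL additionally requires an allocated peering range):

# 1. Create a serverless VPC connector in the region of the Cloud Run service
gcloud compute networks vpc-access connectors create my-connector \
    --region=us-central1 --network=default --range=10.8.0.0/28

# 2. Attach it to the service, routing only private ranges through it
gcloud run services update my-service --region=us-central1 \
    --vpc-connector=my-connector --vpc-egress=private-ranges-only

# 3. Look up the instance's private IP for the service's parameters
gcloud sql instances describe my-instance --format="json(ipAddresses)"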
I came to know that the Cloud SQL Proxy uses the public IP in the background. So how safe is it to use the proxy, what is the background process, and how safe is it if we are using a public IP in Google Cloud?
If you use a PaaS-based client application, most likely it has an ephemeral IP address. In this situation, restricting access based on the range of source IP addresses may be ineffective.
In such a case, using the Cloud SQL Proxy is optimal. As with many services that use public IP addresses, the Cloud SQL Proxy protects traffic on public networks with encryption: traffic between the proxy client and the proxy server process passes through a secure tunnel encrypted with an AES cipher.
Apart from that, the Cloud SQL Proxy requires authentication and uses IAM to restrict access to the SQL instance.
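For a service account, that typically means granting a role that carries the cloudsql.instances.connect permission, e.g. Cloud SQL Client (a sketch; the project and service account names are placeholders):

# Grant the Cloud SQL Client role to the proxy's service account:
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:proxy-sa@my-project.iam.gserviceaccount.com" \
    --role="roles/cloudsql.client"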
You can find more information in the documentation:
Cloud SQL > Doc > MySQL > Connecting to Cloud SQL from external applications
Cloud SQL > Doc > MySQL > About the Cloud SQL Proxy:
The Cloud SQL Proxy provides secure access to your instances without the need for authorized networks or for configuring SSL. The proxy automatically encrypts traffic to and from the database using TLS 1.2 with a 128-bit AES cipher; SSL certificates are used to verify client and server identities.
The proxy uses a secure tunnel to communicate with its companion process running on the SQL server.
The proxy requires authentication. When you use a service account to provide the credentials for the proxy, you must create it with sufficient permissions: a role that includes the cloudsql.instances.connect permission.
Cloud SQL > Doc > MySQL > Connecting from App Engine standard environment to Cloud SQL:
App Engine provides a mechanism that connects using the Cloud SQL Proxy. Once correctly configured, you can connect your service to your Cloud SQL instance's unix domain socket using the format: /cloudsql/INSTANCE_CONNECTION_NAME. These connections are automatically encrypted without any additional configuration.
Also, you may configure your systems so that they use Private IP, as described here:
Cloud SQL > Doc > MySQL > Private IP
I'm attempting to find a completely remote / cloud-based development workflow.
I've created an AWS free-tier EC2 instance, and on that box I've been developing a Gatsby site (the framework doesn't matter; the solution I'm looking for should be framework-agnostic). Since the code is on another box, I can't run the dev server and then from the local computer hit localhost as I would normally.
So,
What do I need to do so that I can run gatsby develop and hit my dev server that's hosted on the EC2 box?
How do I provide public access to that endpoint?
Is it possible to provide temporary access so that when I log off of the box, it's no longer accessible?
Is there some mechanism I can put into place so that I'm the only one that can hit that endpoint?
Are there other features that I should be taking advantage of to secure that endpoint?
Thanks.
I can't run the dev server and then from the local computer hit localhost as I would normally
You can. You can use ssh to tunnel your remote port to your localhost, and access the server from your localhost.
What do I need to do so that I can run gatsby develop and hit my dev server that's hosted on the EC2 box?
ssh into the dev server, run gatsby develop, and either access it on localhost through an ssh tunnel or make it public to access through its public IP address.
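A tunnel for Gatsby's default dev port could look like this (a sketch; the key path, user, and host are placeholders, and 8000 is gatsby develop's default port):

# On your laptop: forward local port 8000 to port 8000 on the EC2 box
ssh -i ~/.ssh/dev-key.pem -L 8000:localhost:8000 ec2-user@ec2-host.compute.amazonaws.com

# On the EC2 box (inside that ssh session): start the dev server
gatsby develop

# Back on the laptop, http://localhost:8000 now reaches the remote dev server.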
Use sshfs to mount a development folder on the dev server onto your local machine.
Alternatively, you can set up vncserver on the dev server, tunnel the VNC connection using ssh, and access the dev server through a remote desktop. Something lightweight would be good, e.g. fluxbox as a desktop environment for VNC.
Is it possible to provide temporary access so that when I log off of the box, it's no longer accessible?
Yes, through an ssh tunnel. You close the tunnel and the access is finished.
Is there some mechanism I can put into place so that I'm the only one that can hit that endpoint?
An ssh tunnel, along with a security group that allows ssh from your IP address only.
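For example, restricting port 22 to your current address could look like this (a sketch; the security group ID is a placeholder):

# Allow ssh only from your current public IP
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 \
    --cidr "$(curl -s https://checkip.amazonaws.com)/32"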
Are there other features that I should be taking advantage to secure that endpoint?
Security groups and ssh tunneling would be the primary choices to ensure secure access to the dev server.
You can also make the endpoint public, but set the security group of your dev server to allow internet access only from your IP.
You could also put the dev server in a private subnet for full separation from the internet. Use a bastion host to access it, or set up a double ssh tunnel to your localhost.
Another way is to do all development on localhost, push the code to CodeCommit, and have CodePipeline manage deployment of your code to the dev server using CodeDeploy.
You can also partially eliminate ssh by using SSM Session Manager.
Hope this helps.