How to connect to Google Cloud SQL with enforced SSL from within App Engine app

I would like to have SSL enforced when connecting to my Google Cloud SQL (Postgres) instance, but I can't find a way to provide a certificate to my Python app deployed to App Engine - psycopg2 requires the certificate files to have the proper permissions (<cert> has group or world access; permissions should be u=rw (0600) or less), but since there's no access to chmod on App Engine, I don't know how to change them.
Any ideas?
To provide some context: the goal is primarily to enforce secure access to the DB from outside of GCP (from my workstation). The problem with App Engine apps is a side effect of this change. But maybe there's a different and better way to achieve the same thing?
Thanks.

Google Cloud App Engine uses the Cloud SQL Proxy for any connections using the built-in /cloudsql Unix socket directory. These connections are automatically wrapped in an SSL layer until they reach your Cloud SQL instance.
You don't need to worry about using certificates, as the proxy automates this process for you.
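For illustration, a minimal psycopg2 connection over that socket might look like this (the instance connection name, database, user, and password are placeholders, not values from the question):
import os
import psycopg2

# App Engine exposes the proxy as a Unix socket directory under /cloudsql/.
# The proxy wraps the traffic in SSL, so no client certificate is needed here.
conn = psycopg2.connect(
    host='/cloudsql/my-project:us-central1:my-instance',  # hypothetical instance connection name
    dbname='mydb',
    user='myuser',
    password=os.environ.get('DB_PASS'),
)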

This is a bit tricky because there is a chicken-and-egg problem here:
The Postgres driver wants a file (because it's a wrapper around a C library that reads the file)
And you cannot set the correct file permissions inside App Engine on boot.
There are several cases (and solutions) for this problem:
AppEngine Classic environment
Workaround - write the cert file yourself in the /tmp ramdisk
Use a UNIX socket to connect to a Cloud SQL instance with public IP address (no SSL).
Or use a VPC and Serverless VPC Access for a Cloud SQL instance with a private IP address, and use the workaround.
AppEngine Flex Environment
Include the cert file in your docker image
Workaround: Write the cert file yourself in /tmp
The easiest way to work around this problem is to write the cert file yourself into /tmp - this is a ramdisk in the app engine environment.
Something like this would do the trick:
import os
import psycopg2

certfile_path = '/tmp/certfile'  # /tmp/ is a ramdisk
certfile_content = os.environ.get('CERTFILE')

if certfile_content is not None:
    with open(certfile_path, 'w+') as fp:
        fp.write(certfile_content)
    os.chmod(certfile_path, 0o600)
    # Now the certfile is "safe" to use, we can pass it in our connection string:
    connection_string = f"postgresql+psycopg2://host/database?sslmode=require&sslcert={certfile_path}"
else:
    connection_string = 'postgresql+psycopg2://host/database'
    # or just throw an exception, log the problem, and exit.
(See Postgres connection string reference for SSL options)
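If you use psycopg2 directly instead of a connection string, the same parameters can be passed as keyword arguments - a rough sketch assuming the cert (and its key, which libpq also requires to be 0600) were written to /tmp as above, with the host and credentials as placeholders:
import psycopg2

# Assumes certfile_path (and a matching key file) were written and chmod'ed as shown above.
conn = psycopg2.connect(
    host='203.0.113.10',           # placeholder: public IP of the Cloud SQL instance
    dbname='mydb',
    user='myuser',
    password='secret',
    sslmode='require',
    sslcert=certfile_path,         # client certificate written to /tmp
    sslkey='/tmp/client-key.pem',  # client key, also needs 0600 permissions
)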
Long term solution
The point of using SSL is to have encrypted traffic between you and the database and to avoid man-in-the-middle attacks.
We have some options here:
Option 1: When you're inside App Engine AND your Cloud SQL instance has a public IP address.
In this case, you can connect to a Postgres instance with a Unix domain socket. Google runs a proxy listening on that socket and takes care of encryption and securing the endpoints (note that SSL on the Postgres connection itself is disabled).
In this case, you'll need to add the Cloud SQL Client role to the service account you use AND add the Unix socket directory in your YAML file (see the Google documentation).
Option 2: When you're inside App Engine Classic AND your Cloud SQL instance has a PRIVATE IP address.
If your Cloud SQL instance has a private IP, this can be a bit "simpler".
We need to enable Serverless VPC Access - this connects App Engine to the VPC where we have resources on private IP addresses (for public IPs it is not needed because public IP addresses don't overlap) - AND the Cloud SQL server must be connected to the VPC.
Now you can connect to Postgres as usual. The traffic inside the VPC network is encrypted. If you have just this Cloud SQL instance in the VPC, there is no need to use SSL (nobody can put a sniffer on the network or mount a MITM attack).
But if you still want/need to use SSL, you have to use the workaround described before.
Option 3: Use AppEngine FLEX environment.
The flex environment is 'just' a docker image.
You'll need to build a custom docker image for the Python runtime that includes the certfile.
Remember the Flex environment is not included in the free tier.
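Since you control the image in the Flex environment, you can bake the certificate files into it and set their permissions at build time; alternatively, a small startup sketch (paths are hypothetical) can tighten them before connecting:
import os
import stat

# Hypothetical locations where the cert/key were copied into the Docker image.
for path in ('/app/certs/client-cert.pem', '/app/certs/client-key.pem'):
    # libpq refuses group/world-readable key files, so clamp to u=rw (0600).
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)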

Related

How do I connect to Google Cloud SQL from Google Cloud Run via TCP?

Based on my current understanding, when I enable a service connection to my Cloud SQL instance in one of my revisions, the path /cloudsql/[instance name]/.s.PGSQL.5432 becomes populated. This is a UNIX socket connection.
Unfortunately, a 3rd party application I'm using doesn't support UNIX socket connections and as such I'm required to connect via TCP.
Does the Google Cloud SQL Proxy also configure any way I can connect to Cloud SQL via something like localhost:5432, or other equivalent? Some of the documentation I'm reading suggests that I have to do elaborate networking configuration with private IPs just to enable TCP based Cloud SQL for my Cloud Run revisions, but I feel like the Cloud Proxy is already capable of giving me a TCP connection instead of a UNIX socket.
What is the right and most minimal way forward here, obviously assuming I do not have the ability to modify the code I'm running?
I've also cross posted this question to the Google Cloud SQL Proxy repo.
The most secure and easiest way is to use the private IP. It's not that long or hard; there are 3 steps:
Create a serverless VPC connector. Create it in the same region as your Cloud Run service. Note the VPC network that you use (by default it's "default").
Add the serverless VPC connector to your Cloud Run service. Route only the private IPs through this connector.
Add a private connection to your Cloud SQL database. Attach it to the same VPC network as your serverless VPC connector.
The Cloud configuration is done. Now you have to get the private IP of your Cloud SQL instance and add it to the parameters of your Cloud Run service (for example as an environment variable) so the application can open a connection to this IP.
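As a sketch, assuming you expose that private IP to the service as an environment variable (DB_HOST here is a made-up name), the application then connects over plain TCP:
import os
import psycopg2

# DB_HOST holds the Cloud SQL private IP, set on the Cloud Run service;
# traffic is routed through the serverless VPC connector.
conn = psycopg2.connect(
    host=os.environ['DB_HOST'],   # e.g. 10.x.x.x
    port=5432,
    dbname='mydb',
    user='myuser',
    password=os.environ.get('DB_PASS'),
)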

proxy(?) server for connecting to cloud sql instance (GCP)

I have a postgresql database on the google cloud platform (cloud SQL). I'm currently managing this database through pgadmin, installed on my laptop. I've added the IP address of my laptop to the whitelist on the cloud sql settings page. This all works.
The problem is: when I go somewhere else and I connect to a different network, the IP address changes and I cannot connect to the postgresql database (through pgadmin) from my laptop.
Is there someone who knows a (secure) solution, involving a proxy server (or something else), to connect from my laptop (and only my laptop) to my postgresql database, even if I'm not on a whitelisted network (IP address)? Maybe I can set up a VM instance and install a proxy server and use this? But I have no clue where to start (or search for).
You have many options for connecting to a Cloud SQL instance from an external application, such as a public IP address with SSL, a public IP address without SSL, the Cloud SQL Proxy, etc. You can see all of them here.
Among all the connection options there is the Cloud SQL Proxy; it basically provides secure access to your instances without the need for authorized networks or configuring SSL on your part.
You only need to follow the steps listed here and you will be able to connect your Cloud SQL instance using the proxy.
Enable Cloud SQL Admin API on your console.
Install the proxy client on your local machine (Linux):
wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
chmod +x cloud_sql_proxy
Determine how you will authenticate the proxy. You can use a service account or let Cloud SDK take care of the authentication.
However, if required by your authentication method, create a service account.
Determine how you will specify your instances for the proxy. Your options for instance specification depend on your operating system and environment.
Start the proxy using either TCP sockets or Unix sockets.
Take note that as of this writing, Cloud SQL Proxy does not support Unix sockets on Windows.
Update your application to connect to Cloud SQL using the proxy.
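For example, if the proxy is started on a local TCP port, the application simply connects to localhost - a sketch with a placeholder instance name and credentials:
import psycopg2

# Assumes the proxy was started with something like:
#   ./cloud_sql_proxy -instances=my-project:us-central1:my-instance=tcp:5432
conn = psycopg2.connect(
    host='127.0.0.1',
    port=5432,
    dbname='mydb',
    user='myuser',
    password='secret',
)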

How to use SSL for a backend EC2 instance without a domain

I have an AWS EC2 instance set up running my back-end, and it's able to communicate with my front-end (locally), but not with the front-end deployed (on Netlify).
Is it necessary to create a domain name for my EC2 instance so I can use SSL? There's no point in having a domain name for my back-end since it's just there for the API calls.
How do I use SSL for my backend server without a domain name? Every video and blog I've found requires a domain name. If anyone can point me to the right resource, would appreciate it.
You can enable SSL on an EC2 instance without a domain using a combination of Caddy and nip.io.
nip.io allows you to map any IP address to a hostname without the need to edit a hosts file or create rules in DNS management.
Caddy is a powerful open source web server with automatic HTTPS.
Install Caddy on your server
Create a Caddyfile and add your config (this config will forward all requests to port 8000)
<EC2 Public IP>.nip.io {
    reverse_proxy localhost:8000
}
Start Caddy using the command caddy start
You should now be able to access your server over https://<IP>.nip.io
I wrote an in-depth article on the setup here: Configure HTTPS on AWS EC2 without a Custom Domain
Sadly, yes - to use SSL certificates you need a valid DNS name so the certificate can be validated when the endpoint is called. Alternatively, if what you want to protect is the data itself, you could use your own encryption method and send the data encrypted to the front end, then use something like crypto.js to work with it once decrypted. But the best practice would be to give the back end its own DNS name; that way, if at some point the API grows to the point where it can be used by others for business, you can have them point at a proper hostname (and you also don't need to deal with the manual encryption/decryption).

How to set up a front end for AWS DBs without using the Internet

On AWS, I know how to set up a web server with inbound rules allowing HTTP and HTTPS, and a database security group that only allows connections from the web server. The issue is I need to create a front end to manage the databases without using Internet access - this will be internal only and precludes the use of a public IP / public DNS. Does anyone know how I would do this?
To further elaborate, some of our AWS accounts are for internal use only - we can log in to the console, use CygWin to SSH in, see what's there, etc. But these accounts are for development purposes, and in a large enterprise such as this one, these are not allowed an IGW. So - no inbound Internet access is allowed. How do I create an app (e.g., phpMyAdmin type) in which our manager can easily view and edit the data in the database given the restriction that this must be done without inbound Internet access?
Host your database on RDS inside a VPC and create a VPN connection between your client network and your VPC.
Host your database on one EC2 instance and also deploy your front end there. The database will be running locally on the EC2 instance, so you can connect the front end to it directly. Since the database will not have a public DNS name and runs locally, you can access it only via SSH and the front-end script.
Check this official documentation from AWS: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.WorkingWithRDSInstanceinaVPC.html
For the front-end script you can use https://www.adminer.org/, which is a single-file database management system. Upload that one file and use it to connect to the database running locally on the EC2 instance.

Streamlining Azure set up with app and DB on separate VMs

A Django app of mine (with a postgresql backend) is hosted over two separate Ubuntu VMs. I use Azure as my infrastructure provider, and the VMs are classic. Both are part of the same resource group, and map to the same DNS as well (i.e. they both live on xyz.cloudapp.net). Currently, I have the following database url defined in my app's settings.py:
DATABASE_URL = 'postgres://username:password@public_ip_address:5432/dbname'
The DB port 5432 is publicly open, and I'm assuming the above DB url implies the web app is connecting to the DB as if it's on a remote machine. If so, that's not the best practice: it has security repercussions, not to mention it adds anything from 20-30 milliseconds to a hundred milliseconds to each query (in latency).
My question is, how does one program such a Django+postgres setup on Azure such that the database is only exposed on the private network? I want to keep the two-VM set up intact. An illustrative example would be nice - I'm guessing I'll have to replace the public ip address in my settings.py with a private IP? I can see a private IP address listed under Virtual machines(classic) > VMname > Settings > IP Addresses in the Azure portal. Is this the one to use? If so, it's dynamically assigned, thus wouldn't it change after a while? Looking forward to guidance on this.
In Classic (ASM) mode, the Cloud Service is the network security boundary and the Endpoints with ACLs are used to restrict access from the outside Internet.
A simple solution to secure access would be:
Ensure that the DB port (5432) is removed from the cloud service endpoint (to avoid exposing it to the entire Internet).
Get a static private IP address for the DB server.
Use the private IP address of the DB server in the connection string (see the settings sketch below).
Keep the servers in the same Cloud Service.
You can find detailed instructions here:
https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-static-private-ip-classic-pportal/
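In Django terms, the connection-string step boils down to something like this in settings.py (all values are placeholders; the HOST is the static private IP of the DB VM):
# settings.py - sketch only, values are placeholders
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'dbname',
        'USER': 'username',
        'PASSWORD': 'password',
        'HOST': '10.0.0.4',   # hypothetical static private IP of the DB server
        'PORT': '5432',
    }
}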
This should work. But for future implementations, I would recommend the more modern Azure Resource Manager (ARM) model, where you can benefit from many nice new features, including virtual networks (VNets) with more fine-grained security.