How to run DML against a PostgreSQL DB in a private subnet via the GCP Cloud SQL Auth Proxy? Trying to connect via third-party applications like DBeaver and getting stuck

I have a PostgreSQL DB in a private subnet in a VPC. I want to do DML operations from my local environment.
I installed the Google Cloud SDK in order to set up a connection to the private instance,
and ran the command:
C:\Program Files (x86)\Google\Cloud SDK>gcloud sql instances describe dt-prod
backendType: SECOND_GEN
connectionName: <my_instance_connection_name>
createTime: '2022-12-26T11:27:51.767Z'
databaseInstalledVersion: POSTGRES_14_4
databaseVersion: POSTGRES_14 etc...
It seems to retrieve all the information about the DB in the private subnet.
C:\Program Files (x86)\Google\Cloud SDK>cloud_sql_proxy -instances=<my_instance_connection_name>=tcp:5432
2023/02/01 17:58:03 Listening on 0.0.0.0:5432 for <my_instance_connection_name>
2023/02/01 17:58:03 Ready for new connections
2023/02/01 17:58:03 Generated RSA key in 745.856ms
says the command prompt.
I try to connect to the DB from third-party applications like DBeaver and get stuck. What did I do wrong? How can I connect to this DB via DBeaver?
Any opinions appreciated.
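For reference, once the proxy is listening, clients connect to 127.0.0.1:5432 (the address the proxy is bound to) rather than to the instance's private IP; in DBeaver that means a plain PostgreSQL connection with host 127.0.0.1 and port 5432. A minimal sketch of the same connection in Python (the database name, user, and password are placeholders, and psycopg2 is assumed to be installed):

# Minimal sketch: connect to Postgres through the locally running Cloud SQL Auth Proxy.
# Host and port match the proxy's listener; dbname/user/password are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="127.0.0.1",   # the proxy, not the instance's private IP
    port=5432,
    dbname="postgres",  # placeholder
    user="postgres",    # placeholder
    password="secret",  # placeholder
)
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone())
conn.close()

If a connection like this works but DBeaver does not, the DBeaver connection is likely pointed at the instance's IP instead of 127.0.0.1.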

Related

OpenVPN 2.8.5 hosted on an EC2 instance: what is the best way to extract zipped files from the local machine into the cloud VPN directory?

I try to connect to IoT Controllers via VPN.
The Controllers are already set up. I only need to establish a VPN to have remote access.
For that I installed OpenVPN on an AWS EC2 instance.
To build the connection between OpenVPN and the clients, I need to create certificates & keys for the server and the clients.
The documentation says that I need to extract the easy-rsa 2 script bundle (zipped files) into the home directory of the OpenVPN server: https://openvpn.net/community-resources/setting-up-your-own-certificate-authority-ca/
My question: how can I unzip a file from my local machine into the home directory of a cloud-hosted VPN?
UPDATE
Currently I am trying to transfer the zip to the OpenVPN instance via scp:
scp -i ~\OpenVPNKeys.pem easy-rsa-old-master.zip openvpnas@34.249.227.33:/home/
But I get the following error:
scp: /home/easy-rsa-old-master.zip: Permission denied
When I try:
scp -i ~\OpenVPNKeys.pem easy-rsa-old-master.zip openvpnas@34.249.227.33
without specifying the directory, it works. I get the message:
1 Datei(en) kopiert (1 file(s) copied)
But then I have no clue where the file is saved. Does anyone know where files will be saved automatically?
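For what it's worth, the same transfer can be scripted; a rough sketch with paramiko, assuming the openvpnas user, the key file from the question, and that /home/openvpnas/ is that user's (writable) home directory:

# Rough sketch: upload the zip into the openvpnas user's home directory and unzip it there.
# Assumes `pip install paramiko`; the host, user, key, and paths are taken from the question
# or are assumptions.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("34.249.227.33", username="openvpnas", key_filename="OpenVPNKeys.pem")

sftp = client.open_sftp()
# /home/ itself is typically root-owned, hence the "Permission denied"; write into the
# user's own home directory instead.
sftp.put("easy-rsa-old-master.zip", "/home/openvpnas/easy-rsa-old-master.zip")
sftp.close()

# Unzip in place (assumes the unzip tool is available on the instance).
stdin, stdout, stderr = client.exec_command("unzip -o ~/easy-rsa-old-master.zip -d ~/")
print(stdout.read().decode(), stderr.read().decode())
client.close()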

Recommended AWS service or method to stream data from S3 into database tables

Currently the business is storing the data in an Amazon S3 bucket. We are trying to load it into a relational database table using a data-load utility tool on the same EC2 instance where the DB is located. Unfortunately, we have to download the file from S3 onto the EC2 instance where the database is installed.
The business also says they cannot use the JDBC port or afford a VPN connection.
Database: MySQL
Utility tool: business proprietary, and it must be used ($BPLOADUTIL -CSV --table T_BPARTNER --file local3gbfile.csv)
Can we do a data load via HTTPS and use the utility tool at the same time? Do you propose any services or products that can achieve this?
Expected: not to download the file onto the EC2 instance where the database is located, but still load the data from that EC2 instance using the utility tool.
The solution can include services, products, web apps, or anything else, but the connection should be HTTPS only.
You cannot connect to a database without using the proper protocol. For example, MySQL uses the TCP protocol and connects over the default port of 3306; you cannot connect to the database using the HTTP/HTTPS protocol on ports 80/443.
You can use AWS Database Migration Service to load data in CSV format from S3 into any relational database, even one residing on an EC2 instance, without downloading the file onto that instance.
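A rough sketch of what that could look like with boto3, assuming an existing DMS replication instance; every identifier, ARN, and credential below is a placeholder:

# Rough sketch: define an S3 source and a MySQL target for AWS DMS, then create a full-load task.
# All identifiers, ARNs, and credentials are placeholders.
import boto3

dms = boto3.client("dms", region_name="us-east-1")  # assumed region

source = dms.create_endpoint(
    EndpointIdentifier="s3-source",
    EndpointType="source",
    EngineName="s3",
    S3Settings={
        "BucketName": "my-bucket",                                              # placeholder bucket
        "ServiceAccessRoleArn": "arn:aws:iam::123456789012:role/dms-s3-role",   # placeholder role
        "CsvDelimiter": ",",
        "ExternalTableDefinition": "...",   # JSON describing T_BPARTNER's columns
    },
)

target = dms.create_endpoint(
    EndpointIdentifier="mysql-target",
    EndpointType="target",
    EngineName="mysql",
    ServerName="ec2-private-dns-or-ip",     # the EC2-hosted MySQL instance
    Port=3306,
    Username="dbuser",
    Password="dbpassword",
    DatabaseName="mydb",
)

dms.create_replication_task(
    ReplicationTaskIdentifier="s3-to-mysql-full-load",
    SourceEndpointArn=source["Endpoint"]["EndpointArn"],
    TargetEndpointArn=target["Endpoint"]["EndpointArn"],
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:EXAMPLE",  # existing instance
    MigrationType="full-load",
    TableMappings="{...}",                  # selection rules for the table(s) to load
)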

GCP Cloud Functions connecting to cloud sql with private IP

I'm following this example to make a connection from Cloud Function to Postgres Cloud SQL: https://cloud.google.com/functions/docs/sql.
When I create a test Cloud SQL instance with a public IP and trigger the Cloud Function, it connects to the Cloud SQL instance and returns something. For security reasons I can't leave the public IP on, so when I select private IP on the Cloud SQL instance I get:
Error: function crashed. Details:
could not connect to server: Connection refused
Is the server running locally and accepting
connections on Unix domain socket "/cloudsql/cloud-sql-test-250613:us-central1:myinstance-2/.s.PGSQL.5432"?
I can't work out from the documentation what the contract between the Cloud Function and the Cloud SQL instance is. If we are using Unix domain sockets, should I care about IPs at all? Does it matter whether the IP is public or private?
If it does matter, do I have to go through the whole process of setting up private IP infrastructure? Do I need a Serverless VPC connector?
I've managed to achieve connectivity between a Cloud Function and a private Cloud SQL instance by doing this.
It seems that it does matter if you disable public IPs: whenever I disabled public IPs I kept getting ERR CONN REFUSED, which seems to be your case. To have your Cloud SQL instance with a private IP only, I think you do have to use Serverless VPC Access.
This is what I would recommend you try:
Make sure that all your infrastructure is in the same region (Cloud SQL, Cloud Function, VPC connector).
Please take these steps:
Set Cloud SQL instance to private only connectivity. (Cloud SQL Instance > Connections)
Make sure that your private CloudSQL instance is on the desired “Associated Networking” (VPC).
Create a VPC connector on the VPC network where your Cloud SQL instance is located (the one associated with the MySQL instance).
To create a connector go to: VPC Network > VPC Serverless Access > Create Connector
In VPC Network > [Your VPC] > VPC Network Peering you can check whether the connection to your Cloud SQL instance is correct.
Create a Cloud Function using the code in your desired language. (You can test with the examples in the documentation.)
When you create your Cloud Function, make sure to place it in the same region and add the VPC connector you created under the "Egress settings" option of your Cloud Function.
If you try to create a VPC connector through the GCP Console, you will only get one region to pick from, but if you use Cloud Shell you can choose other regions. You can try that with this command and in these regions:
gcloud beta compute networks vpc-access connectors create [CONNECTOR_NAME] \
--network [VPC_NETWORK] \
--region [REGION] \
--range [IP_RANGE]
Regions:
us-central1, us-east1, europe-west1
Please let me know if this worked for you.
UPDATE:
Hello again alobodzk,
Try making your Cloud Function in Python (after making sure that all of the previous steps are OK).
Try this code:
Cloud Function main.py (replace all the connection data with your own credentials):
import mysql.connector
from mysql.connector import Error

def mysql_demo(request):
    connection = None
    try:
        connection = mysql.connector.connect(host='CloudSQL Instance Private IP', database='Database Name', user='UserName', password='Password')
        if connection.is_connected():
            db_Info = connection.get_server_info()
            print("Connected to MySQL database... MySQL Server version on ", db_Info)
            cursor = connection.cursor()
            cursor.execute("select database();")
            record = cursor.fetchone()
            print("You're connected to - ", record)
    except Error as e:
        print("Error while connecting to MySQL", e)
    finally:
        # closing database connection.
        if connection is not None and connection.is_connected():
            cursor.close()
            connection.close()
            print("MySQL connection is closed")
# [END functions_sql_mysql]
Cloud Function requirements.txt
psycopg2==2.7.7
PyMySQL==0.9.3
mysql-connector-python
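Since the original question is about Postgres rather than MySQL, here is a roughly equivalent sketch with psycopg2; the private IP, database name, and credentials are placeholders, and it assumes the function's egress is routed through the VPC connector:

# Rough Postgres equivalent of the MySQL snippet above, connecting over the instance's private IP.
# Only works if the function's egress goes through the Serverless VPC connector.
import psycopg2

def postgres_demo(request):
    connection = None
    try:
        connection = psycopg2.connect(
            host="10.x.x.x",        # Cloud SQL instance private IP (placeholder)
            dbname="postgres",      # placeholder database
            user="postgres",        # placeholder user
            password="password",    # placeholder password
        )
        with connection.cursor() as cursor:
            cursor.execute("SELECT current_database();")
            return str(cursor.fetchone())
    except Exception as e:
        return "Error while connecting to Postgres: {}".format(e)
    finally:
        if connection is not None:
            connection.close()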

Connecting to Postgres using private IP

When creating my Postgres Cloud SQL instance I specified that I would like to connect to it using private IP, and I chose my default network.
My VM sits in the same default network.
Now I follow the instructions described here: https://cloud.google.com/sql/docs/postgres/connect-compute-engine
and try executing
psql -h [CLOUD_SQL_PRIVATE_IP_ADDR] -U postgres
from my VM, but get this error:
psql: could not connect to server: Connection timed out Is the server
running on host "CLOUD_SQL_PRIVATE_IP_ADDR" and accepting TCP/IP connections on
port 5432?
Anything I am overlooking?
P.S. My Service Networking API (whatever that is) is enabled.
If you have SSH access to a VM in the same network, you can connect to Cloud SQL using the Cloud SQL Proxy:
1. Open the SSH window (VM instances in Compute Engine, click on SSH), then download the proxy file with:
wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
2. Execute, in the SSH shell:
chmod +x cloud_sql_proxy
3. Create a service account with the Cloud SQL Client role and create a key. Download the JSON key to your local computer.
4. In the SSH VM shell, click on the gear icon, choose "Upload file", and upload the key file.
5. Run:
./cloud_sql_proxy -instances=<Instance connection name>=tcp:5432 -credential_file=<name of the json file>
where "Instance connection name" can be found in SQL > Overview > Connect to this instance.
Finally:
psql "host=127.0.0.1 port=5432 sslmode=disable user=<your-user-name> dbname=<your-db-name>"
On the other hand, if you want to connect to Cloud SQL from your local computer and the Cloud SQL instance does not have a public IP, you have to connect through a bastion host configuration:
https://cloud.google.com/solutions/connecting-securely
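A rough sketch of that bastion approach from a local machine, using the sshtunnel package; the bastion address, SSH user and key, Cloud SQL private IP, and database credentials are all placeholders:

# Rough sketch: forward a local port through a bastion VM to the Cloud SQL private IP,
# then connect to the forwarded port as if the database were local.
# Assumes `pip install sshtunnel psycopg2-binary`; all addresses/credentials are placeholders.
import psycopg2
from sshtunnel import SSHTunnelForwarder

with SSHTunnelForwarder(
    ("BASTION_PUBLIC_IP", 22),
    ssh_username="your-user",
    ssh_pkey="~/.ssh/google_compute_engine",
    remote_bind_address=("CLOUD_SQL_PRIVATE_IP", 5432),
    local_bind_address=("127.0.0.1", 5432),
) as tunnel:
    conn = psycopg2.connect(
        host="127.0.0.1",
        port=tunnel.local_bind_port,
        dbname="postgres",
        user="postgres",
        password="secret",
    )
    with conn.cursor() as cur:
        cur.execute("SELECT 1;")
        print(cur.fetchone())
    conn.close()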
According to this document on connecting via private IP, you need to set up the following items:
You must have enabled the Service Networking API for your project. If you are using a shared VPC, you also need to enable this API for the host project.
Enabling APIs requires the servicemanagement.services.bind IAM permission.
Establishing private services access requires the Network Administrator IAM role.
After private services access is established for your network, you do not need the Network Administrator role to configure an instance to use private IP.

Presto on EMR: external access

I have set up an EMR cluster with Presto installed and running. I can query my data on the server using presto-cli, but I am not entirely sure how to configure Presto to be accessible externally (e.g. from Tableau on my laptop).
I have looked at all the configuration/properties files in /usr/lib/presto/, but none of them seem to have anything to do with remote access setup (i.e. setting up user credentials and port).
My question is, how does one go about setting up remote access? Any help would be appreciated.
EDIT: I was able to connect to Presto (thanks to @franklinsijo); here are the details:
change the discovery URI in config.properties to the EMR server's public DNS
ensure that your local IP address is whitelisted to access the port specified in config.properties
The Presto Web connector for Tableau can be configured to run queries from Tableau. Unlike other Tableau connectors, you cannot run live queries on Presto, but you can create Tableau extracts. Refer here for the configuration procedure.
As for the configuration on the Presto end, edit the Presto coordinator's configuration in the config.properties file. The value of discovery.uri is required to set up the Tableau connector.
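Once discovery.uri points at the master node's public DNS and the security group allows your IP, a quick way to verify external access is a small client script. A sketch using PyHive follows; the host is a placeholder, and 8889 is assumed to be the coordinator's HTTP port (use whatever http-server.http.port is set to in config.properties):

# Rough sketch: run a query against the EMR Presto coordinator from a remote machine.
# Assumes `pip install 'pyhive[presto]'`; host and port are placeholders/assumptions.
from pyhive import presto

conn = presto.connect(
    host="ec2-xx-xx-xx-xx.compute.amazonaws.com",  # EMR master public DNS (placeholder)
    port=8889,                                      # assumed coordinator port on EMR
    username="hadoop",
    catalog="hive",
    schema="default",
)
cur = conn.cursor()
cur.execute("SHOW TABLES")
print(cur.fetchall())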