Google Spanner: JDBC Connection Strings? - google-cloud-platform

While Spanner looks exciting, the documentation for the Simba JDBC driver (included in the download links here: https://cloud.google.com/spanner/docs/partners/drivers) is relatively sparse, especially when compared to the documentation for the Simba JDBC BigQuery driver (https://cloud.google.com/bigquery/partners/simba-drivers/).
In particular, the documentation only mentions one connection string:
jdbc:cloudspanner://localhost;Project=simba-cloudspanner-jdbc;Instance=test-instance;Database=example-db
... there is no information about how to specify, for example, a service account and its p12 credentials or a path to a JSON file, which many Google Cloud services use.
Can anyone share JDBC connection strings or other setup details they have successfully used to connect to the service? I have tried, for example, setting the environment variable GOOGLE_APPLICATION_CREDENTIALS and providing a JDBC string in the same style as above, but to no avail.
Ideally, I would like to use a combination of instance id, project name, database name, a service account email, and a p12 file, but am open to other authentication options.
EDIT: When attempting the GOOGLE_APPLICATION_CREDENTIALS strategy, I generated this log file, in case it might be of any help: https://gist.github.com/aryeh-looker/e6b1b1617d301f0a247463216c96535d

Double-checked my work, and it looks like I am in fact able to connect with a connection string as above and by setting the GOOGLE_APPLICATION_CREDENTIALS environment variable. It would be ideal to have some other options, and the documentation is still a bit spotty (there is no mention of the environment variable), so more information would be welcome.
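For illustration, here is a minimal Java sketch of the setup that worked for me. It assumes the Spanner JDBC driver jar is on the classpath and that GOOGLE_APPLICATION_CREDENTIALS has been exported to point at the key file; the project, instance, and database names are placeholders.

// Assumes: export GOOGLE_APPLICATION_CREDENTIALS=/path/to/credentials.json
// and the Spanner JDBC driver jar on the classpath. All names are placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SpannerEnvVarConnect {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:cloudspanner://localhost;Project=my-project"
                + ";Instance=test-instance;Database=example-db";
        try (Connection con = DriverManager.getConnection(url);
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println(rs.getLong(1)); // prints 1 if the connection works
            }
        }
    }
}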
This is a semi-workable solution. It suffers from the fact that you cannot have multiple connections with different service accounts in the same process.
EDIT 2: This does not seem to work. I get errors about the instance not being specified when pointing to a JSON file.
EDIT: It looks like the latest release of the Spanner driver provides a way to do this.
The latest release of the driver (1.0.4.1005) appears to support an optional JDBC parameter, PvtKeyPath, which takes a path to your private key instead of requiring the GOOGLE_APPLICATION_CREDENTIALS variable to be set. Worth a look.
From the included PDF documentation, you will have a URL like:
jdbc:cloudspanner://;Project=...;PvtKeyPath=/path/to/credentials.json
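As a variant of the earlier sketch, the same connection can then be made without any environment variable, carrying the key path in the URL itself. The names below are again placeholders, not tested values.

// Same placeholders as the previous sketch; the only change is that the
// service account key is passed via PvtKeyPath instead of an env variable.
import java.sql.Connection;
import java.sql.DriverManager;

public class SpannerPvtKeyPathConnect {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:cloudspanner://;Project=my-project"
                + ";Instance=test-instance;Database=example-db"
                + ";PvtKeyPath=/path/to/credentials.json";
        try (Connection con = DriverManager.getConnection(url)) {
            System.out.println("Connected: " + !con.isClosed());
        }
    }
}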

As the JDBC driver supplied by Google is severely limited (it does not support DML and DDL statements), I have written my own JDBC driver. The driver is designed to work with JPA/Hibernate-enabled applications. The driver can be found here: https://github.com/olavloite/spanner-jdbc
This driver supports the same kind of URLs as the driver supplied by Google, including the PvtKeyPath property. It is still in BETA, but I already use it for one of my own applications.

Related

Where should private service account key for Google be stored on Mac

I've created a public/private key pair as described here for the Google Cloud Platform.
The problem: I can't find a shred of documentation describing where to put it. This thing is not the typical SSH key pair, but rather a JSON file.
Where should it be stored on a Mac to allow the gcloud command to authenticate and push to GCP?
If you are authenticating locally with a service account to build/push with gcloud, you should set the environment variable in your Mac terminal to point to the JSON key file.
export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/service-account-file.json"
Once this environment variable is defined, all requests will be authenticated against that service account using the key info from the JSON file.
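As an illustrative sanity check (a sketch assuming the google-auth-library jar, which the Google client libraries use under the hood), you can verify what Application Default Credentials resolve to once the variable is set:

// Hypothetical check: prints the service account email that Application
// Default Credentials resolve to once GOOGLE_APPLICATION_CREDENTIALS
// points at the JSON key file.
import com.google.auth.oauth2.GoogleCredentials;
import com.google.auth.oauth2.ServiceAccountCredentials;

public class AdcCheck {
    public static void main(String[] args) throws Exception {
        GoogleCredentials creds = GoogleCredentials.getApplicationDefault();
        if (creds instanceof ServiceAccountCredentials) {
            System.out.println("ADC resolved to: "
                    + ((ServiceAccountCredentials) creds).getClientEmail());
        } else {
            System.out.println("ADC did not resolve to a service account key");
        }
    }
}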
Please consider looking at the doc below for reference:
https://cloud.google.com/docs/authentication/production
CaioT's answer is the right one if you want to use a service account key file locally.
However, ideally the question wouldn't need to be asked, because relying on service account key files is bad practice. They should be used in only a few specific cases; otherwise, they are a security weakness in your projects.
Take a closer look at this key file. In the end, it's only a file, stored on your Mac (or elsewhere) without any special security protections. You can copy it, edit it, or copy its contents without any problem. You can send it by email, or push it to a Git repository (which might be public!)...
If several developers work on the same project, it quickly becomes a mess to know who manages the keys. And when there is a leak, it's hard to know which key has been used and needs to be revoked...
So, have a closer look at this part of the documentation. I have also written some articles proposing alternatives to using key files. Let me know if you are interested.

How to connect pgBadger to Google Cloud SQL

I have a database on a Google Cloud SQL instance. I want to connect the database to pgBadger, which is used to analyze queries. I have tried various methods, but they all ask for the log file location.
I believe there are two major limitations preventing an easy setup that would allow you to use pgBadger with logs generated by a Cloud SQL instance.
The first is the fact that Cloud SQL logs are processed by Stackdriver and can only be accessed through it. It is actually possible to export logs from Stackdriver; however, the resulting format and destination will still not meet the requirements for using pgBadger, which leads to the second major limitation.
Cloud SQL does not allow changes to all of the required configuration directives. The major one is log_line_prefix, which currently does not follow the format pgBadger requires, and it is not possible to change it. You can see which flags Cloud SQL supports in the Supported flags documentation.
In order to use pgBadger you would need to reformat the log entries while exporting them to a location where pgBadger could do its job. Stackdriver can stream the logs through Pub/Sub, so you could develop a small app to process and store them in the format you need (see the sketch below).
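A minimal sketch of such a processor, assuming a Stackdriver log sink into a Pub/Sub topic and the google-cloud-pubsub Java client. The project and subscription names, the JSON layout of each message, and the target prefix format are all assumptions you would need to adapt.

// Hypothetical log reformatter: pulls exported Cloud SQL log entries from
// Pub/Sub and appends them to a local file in a pgBadger-friendly shape.
// Subscription name, JSON layout, and prefix format are assumptions.
import com.google.cloud.pubsub.v1.AckReplyConsumer;
import com.google.cloud.pubsub.v1.MessageReceiver;
import com.google.cloud.pubsub.v1.Subscriber;
import com.google.pubsub.v1.ProjectSubscriptionName;
import com.google.pubsub.v1.PubsubMessage;

import java.io.FileWriter;
import java.io.PrintWriter;

public class LogReformatter {
    public static void main(String[] args) throws Exception {
        ProjectSubscriptionName sub =
                ProjectSubscriptionName.of("my-project", "cloudsql-logs-sub");
        MessageReceiver receiver = (PubsubMessage message, AckReplyConsumer consumer) -> {
            // Simplistic: reopens the file per message; batch in a real app.
            try (PrintWriter out = new PrintWriter(new FileWriter("postgresql.log", true))) {
                // Each message carries one Stackdriver LogEntry as JSON; extracting
                // the timestamp and textPayload fields is left as an exercise.
                String entry = message.getData().toStringUtf8();
                out.println(entry); // TODO: rewrite into the log_line_prefix pgBadger expects
                consumer.ack();
            } catch (Exception e) {
                consumer.nack(); // redeliver on failure
            }
        };
        Subscriber subscriber = Subscriber.newBuilder(sub, receiver).build();
        subscriber.startAsync().awaitRunning();
        Thread.sleep(Long.MAX_VALUE); // keep pulling until killed
    }
}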
I hope this helps.

Connecting to Google Cloud Spanner from DBVisualizer

I've created a test Cloud Spanner instance and database, and have been attempting to connect to it through DBVisualizer.
I have authenticated using the gcloud auth command, and have the driver set up within DBVisualizer.
The connection string I'm using is:
jdbc:cloudspanner://;Project=testapp;Instance=test-instance;Database=test-spanner;PvtKeyPath=/Users/userhome/.config/gcloud/application_default_credentials.json
However, when I try to connect I get the following error:
[Simba][SpannerJDBCDriver](100004) Failed to connect to Spanner: No NameResolverProviders found via ServiceLoader, including for DNS. This is probably due to a broken build. If using ProGuard, check your configuration
Is there any way to get a connection from a DB management tool such as DBVisualizer?
I found a solution, on macOS at least. Copy CloudSpannerJDBC42.jar and google-cloud-spanner-0.9.4-beta.jar to DbVisualizer's lib folder. On macOS the location is:
/Applications/DbVisualizer.app/Contents/java/app/lib
Restart DBVisualizer and then you can connect.
I don't think DBVisualizer supports Cloud Spanner right now. See their documentation: https://www.dbvis.com/features/
As the product is still pretty new publicly, we'll hopefully be seeing more 3rd party support in the coming months.
I've run into similar problems with the driver supplied by Google, so I decided to develop my own. The driver has both a 'thin' version and a 'fat' version. The thin version is intended as a dependency to be included in Java applications you develop yourself. The fat version can be used standalone, for example for these kinds of connections. The fat version (and others) can be found here: https://github.com/olavloite/spanner-jdbc/releases
More information about the whole driver can be found on my GitHub page.
The driver does work with DBVisualizer. Follow these steps to set it up:
Download the driver and place it in your JRE/lib/ext directory (this is necessary because of dynamic loading of services done by the underlying Google Cloudspanner API). Make sure you place it in the lib/ext directory of the JRE you are actually using with DBVisualizer.
Open DBVisualizer and open Driver Manager. Click on Create a new Driver.
Give it the name Cloudspanner
URL format is jdbc:cloudspanner://localhost;Project=projectId;Instance=instanceId;Database=databaseName;PvtKeyPath=key_file
Driver class is automatically selected.
Close the Driver Manager and make a new connection using the new driver.

Could not find encryption dll dbfips16.dll with Sql Anywhere 16

I have the problem that if I deploy my 64-bit application on a customer computer, I get the error message:
encryption dll "dbfips16.dll" could not be loaded.
The curious thing is that on my notebook and some other computers it works fine. I tried to add the DLLs to our deployment, but could not find them in the Sybase 16 directory. Do I have to download them separately?
(I currently do not want to use any encryption.)
P.S. I use simple file-based deployment.
EDIT
I use the Sybase 16 ADO.NET driver (C#).
The problem only appears on one server.
EDIT
server=***;dbn=***;charset=utf-8;links=TCPIP;UID=***;PWD=***;ENC=None
The dbfips16.dll is only loaded when the connection string tells the client to use FIPS-validated encryption. If you don't want to use encryption at all, the ENCRYPTION parameter should not be set, or should be set to "none". Also check to make sure that the SQLCONNECT environment variable is not set (or doesn't contain the ENCRYPTION parameter).
If this doesn't help, can you post the contents of the connection string and/or DSN?
Disclaimer: I work for SAP in SQL Anywhere engineering.

Using impdp/expdp with RDS Oracle on AWS

I'm very new to Amazon Web Services, especially their RDS offering. I have set up an Oracle database (11.2) and I now want to import a dump we made locally from our server using expdp. Apparently, the ability to use expdp/impdp on AWS is quite new. From what I understand, when creating an Oracle database on RDS, a DATA_PUMP_DIR is automatically created. What is less obvious is how to access this directory and make our local dump available to RDS. I've tried to read the following information on their website: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Oracle.Procedural.Importing.html. But there are a lot of things I don't understand:
Why do I have to set up an EC2 instance when the dump file is actually on my local computer (and I can access the RDS database remotely using sqlplus or SQL Developer)?
They often use the 'sys' or 'system' user in their examples, but the Oracle security documentation says these users are made unavailable on RDS => you cannot connect to a database as SYSDBA.
Could someone please point me to a simple and clear tutorial on how to use impdp on AWS?
Thanks
It is possible to use Data Pump on RDS now.
duduklein's answer was correct when he wrote it, but the RDS docs now have details about using Oracle Data Pump. The doc page URL is unchanged from the link originally posted in the question (nice job, Amazon!), but it now contains new content on using Data Pump.
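For a flavor of what the documented approach looks like, here is a hedged Java/JDBC sketch that kicks off a Data Pump import once the dump file is already sitting in DATA_PUMP_DIR (getting it there is the part that needs an intermediate Oracle instance, e.g. on EC2, via a database link; see the AWS guide for the authoritative steps). The endpoint, credentials, and file name are placeholders, and it assumes the Oracle JDBC driver is on the classpath.

// Hypothetical sketch: drive a DBMS_DATAPUMP import over JDBC as the RDS
// master user (which holds the privileges RDS grants in place of SYS/SYSTEM).
// Assumes mydump.dmp has already been transferred into DATA_PUMP_DIR.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RdsDataPumpImport {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:oracle:thin:@my-rds-endpoint.rds.amazonaws.com:1521:ORCL";
        try (Connection con = DriverManager.getConnection(url, "masteruser", "secret");
             Statement stmt = con.createStatement()) {
            // Anonymous PL/SQL block driving the import job.
            stmt.execute(
                "DECLARE h NUMBER; " +
                "BEGIN " +
                "  h := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', job_mode => 'SCHEMA'); " +
                "  DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'mydump.dmp', " +
                "    directory => 'DATA_PUMP_DIR', " +
                "    filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE); " +
                "  DBMS_DATAPUMP.START_JOB(h); " +
                "END;");
        }
    }
}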
It's not possible for now. I have just contacted Amazon (through premium support) about the same issue, and they told me that this is a feature request that has already been passed to the RDS team, but there is no estimate of when it will be available.
The only way you can import dump files is to use the "exp" utility instead of "expdp". In that case, you can use the "imp" utility to import the data into RDS.