Configure OLEDB Data Source in ColdFusion

I need to access a 3rd-party software database using their proprietary OLEDB connector. I've installed their OLEDB software and can confirm it is registered using Regedit.
When trying to create a data source in the ColdFusion Administrator, I found that OLEDB data sources have been deprecated since ColdFusion MX.
On searching further, I haven't been able to find any alternative way of configuring the data source.
Can you please advise how I can access the OLEDB data source?
Any links or direction would be really appreciated.

I believe you will have to use the 'Other' option in the data source driver drop-down and then provide the respective information on the following page. They say that not every OLEDB driver that might exist will be listed in the drop-down, and hence you will have to do things the manual way.
Further details here: http://www.rvclandtrust.org/CFDOCS/Advanced_ColdFusion_Administration/datasources_ADV_MJS2.html

Related

Can SQL Developer be used with the Athena JDBC Driver?

I'm trying to connect to Athena using the JDBC drivers provided by Amazon, and using SQL Developer as the client. So far, I haven't had any luck with Java 1.8.181 and AthenaJDBC42-2.0.7.jar. Has anyone had any luck on this front? Before I try mixing up which versions of Java, JDBC driver, and/or SQL Developer, I thought I'd at least ask if anyone has been successful using SQL Developer with the Athena JDBC drivers.
No.
SQL Developer doesn't allow just any JDBC driver to be added; we restrict connectivity to the platforms we officially support for database migrations to the Oracle Database platform.
Athena doesn't have migration support, hence the lack of connectivity. If you need assistance with a migration, please send me a note.

Driver to read from or write to Hive from C++ code

I have a core product built in C++ which uses an RDBMS, namely Oracle DB. We are in a phase of enabling Big Data on this product, with access to Hive tables. I know that Apache Spark has libraries for accessing Hive tables directly.
Now, with C++ being the base language, what are the possible ways to read/write data in Hive on Cloudera?
Note: I am not looking to pull data between Hive and the RDBMS or vice versa (Sqoop). I am looking to read from, or fire query execution against, Hive itself.
Thanks in advance.
This is what worked out for me:
1. Install the Hive ODBC driver.
2. Go through its installation guide.
3. Open the project in Visual C++ and execute it (a minimal sketch is shown below).
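For orientation, here is a minimal sketch of reading from Hive over the plain ODBC C API from C++. It assumes an ODBC DSN (here called "Hive") has already been configured for the Hive ODBC driver; the DSN name, credentials, table, and column are placeholders and not part of the original answer.

// Minimal ODBC sketch: connect to a pre-configured Hive DSN and run a query.
#include <windows.h>   // required before sql.h on Windows; omit when building against unixODBC
#include <sql.h>
#include <sqlext.h>
#include <iostream>

int main() {
    SQLHENV env = SQL_NULL_HENV;
    SQLHDBC dbc = SQL_NULL_HDBC;
    SQLHSTMT stmt = SQL_NULL_HSTMT;

    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);

    // "Hive", "user" and "password" are placeholders for your own DSN configuration.
    SQLCHAR connStr[] = "DSN=Hive;UID=user;PWD=password;";
    SQLRETURN rc = SQLDriverConnect(dbc, NULL, connStr, SQL_NTS,
                                    NULL, 0, NULL, SQL_DRIVER_NOPROMPT);
    if (!SQL_SUCCEEDED(rc)) {
        std::cerr << "Connection failed\n";
        return 1;
    }

    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
    // Hypothetical table and column, shown only for illustration.
    SQLExecDirect(stmt, (SQLCHAR*)"SELECT col1 FROM my_hive_table LIMIT 10", SQL_NTS);

    SQLCHAR value[256];
    SQLLEN indicator;
    while (SQL_SUCCEEDED(SQLFetch(stmt))) {
        SQLGetData(stmt, 1, SQL_C_CHAR, value, sizeof(value), &indicator);
        std::cout << value << "\n";
    }

    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
    SQLDisconnect(dbc);
    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    return 0;
}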

Google Spanner: JDBC Connection Strings?

While Spanner looks exciting, the documentation for the Simba JDBC driver (included in the download links here: https://cloud.google.com/spanner/docs/partners/drivers) is relatively sparse, especially when compared to the documentation for the Simba JDBC BigQuery driver (https://cloud.google.com/bigquery/partners/simba-drivers/).
In particular, the documentation only mentions one connection string:
jdbc:cloudspanner://localhost;Project=simba-cloudspanner-jdbc;Instance=test-instance;Database=example-db
... there is no information about how to specify, for example, a service account and its p12 credentials or a path to a JSON file, which many Google Cloud services use.
Can anyone share JDBC connection strings or other setup details they have successfully used to connect to the service? I have tried, for example, setting the environment variable GOOGLE_APPLICATION_CREDENTIALS and providing a JDBC string in the same style as above, but to no avail.
Ideally, I would like to use a combination of instance id, project name, database name, a service account email, and a p12 file, but am open to other authentication options.
EDIT: When attempting the GOOGLE_APPLICATION_CREDENTIALS strategy, I generated this log file, in case it might be of any help: https://gist.github.com/aryeh-looker/e6b1b1617d301f0a247463216c96535d
Double-checked my work, and it looks like I am in fact able to connect with a connection string as above and by setting the environment variable GOOGLE_APPLICATION_CREDENTIALS. It would be ideal to have some other options, and the documentation is still a bit spotty (no mention of the environment variable), so more information would be welcome.
This is a semi-workable solution. It suffers from the fact that you cannot have multiple connections with different service accounts in the same process.
EDIT 2: This does not seem to work. I get errors about the instance not being specified when pointing to a JSON file.
EDIT: looks like with the latest release of the Spanner driver, there is a way to do this.
The latest release of the driver (1.0.4.1005) appears to support an optional JDBC parameter PvtKeyPath which takes a path to your private key as opposed to having to set the GOOGLE_APPLICATION_CREDENTIALS variable. Worth a look.
From the included PDF documentation:
So you will have a URL like: jdbc:cloudspanner://;Project=...;PvtKeyPath=/path/to/credentials.json
As the JDBC driver supplied by Google is severely limited (it does not support DML and DDL statements), I have written my own JDBC driver. The driver is designed to work with JPA/Hibernate-enabled applications. The driver can be found here: https://github.com/olavloite/spanner-jdbc
This driver supports the same kind of URLs as the driver supplied by Google, including the PvtKeyPath property. It is still BETA, but I already use it for one of my own applications.

Connect to MS Access via odb-orm

I am fairly new to this library, and to ORM in general, but I have the following question:
Is it possible to connect to a table inside a Microsoft .mdb or .accdb local database via odb-orm (Code Synthesis)?
According to their website, it doesn't support MS Access.

C++ Database APIs - DTL

I'm looking for a C++ API that is able to connect to different types of databases all in one, mainly MySQL, Oracle, and SQL Server, and I believe I have found one in "DTL" (http://dtemplatelib.sourceforge.net/).
However, I'm struggling to connect to my database on localhost. Has anyone used this before who could shed some more light on it, beyond what their site shows with
DBConnection::GetDefaultConnection().Connect("UID=example;PWD=example;DSN=example;");
I can guess what to put in UID and PWD, but I'm not sure what it's expecting in 'DSN'. Are there any real examples, or has anyone used it before who could help?
This is an ODBC library, so DSN is the ODBC data source name. On Windows, these can be configured under Administrative Tools->Data Sources.
As @Dark Falcon said, the "DSN" refers to an "ODBC data source". What you get is an extra level of indirection: your connection string only names a data source, and the data source's own configuration holds the actual driver, server, and database details.
On Windows, you normally create the ODBC data source with the "Data Sources (ODBC)" control panel, which is normally in the "Administrative Tools".
In any case, this separates the configuration/deployment "stuff" from the code. For example, if you want to use your code with a test database during development, and then with the "live" database when you deploy it, you can do that without making any changes to your code, and even without changing the connection string. Instead, you change the data source to refer to the production server instead of the test server.
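To make the DSN indirection concrete, here is a minimal sketch of a DTL session once a DSN exists, modeled on the DynamicDBView example in the DTL documentation. The DSN name ("ExampleDSN"), the credentials, and the EMPLOYEES table with its two columns are placeholders.

#include "DTL.h"
#include <iostream>
using namespace dtl;
using namespace std;

int main() {
    // Connect through an ODBC DSN configured in the Data Sources control panel.
    // "ExampleDSN", "example" and "secret" are placeholders for your own setup.
    DBConnection::GetDefaultConnection().Connect("UID=example;PWD=secret;DSN=ExampleDSN;");

    // DynamicDBView discovers column types at runtime; the table and columns are hypothetical.
    DynamicDBView<> view("EMPLOYEES", "EMP_ID, EMP_NAME");

    for (DynamicDBView<>::select_iterator it = view.begin(); it != view.end(); ++it) {
        variant_row row = *it;
        for (size_t i = 0; i < row.size(); ++i)
            cout << row[i] << " ";
        cout << endl;
    }
    return 0;
}

Most ODBC drivers also accept a DSN-less connection string of the form "DRIVER={Driver Name};SERVER=...;DATABASE=...;UID=...;PWD=...;", which skips the Data Sources step entirely, at the cost of hard-coding those details in your program.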