I've created a test Cloud Spanner instance and database and have been attempting to connect to it through DBVisualizer.
I have authenticated using the gcloud auth command, and have the driver set up within DBVisualizer.
The connection string I'm using is:
jdbc:cloudspanner://;Project=testapp;Instance=test-instance;Database=test-spanner;PvtKeyPath=/Users/userhome/.config/gcloud/application_default_credentials.json
However, when I try to connect I get the following error:
[Simba][SpannerJDBCDriver](100004) Failed to connect to Spanner: No NameResolverProviders found via ServiceLoader, including for DNS. This is probably due to a broken build. If using ProGuard, check your configuration
Is there any way to get a connection from a DB management tool such as DBVisualizer?
I found a solution, on macOS at least. Copy CloudSpannerJDBC42.jar and google-cloud-spanner-0.9.4-beta.jar to DBVisualizer's lib folder. On macOS the location is:
/Applications/DbVisualizer.app/Contents/java/app/lib
Restart DBVisualizer and then you can connect.
I don't think DBVisualizer supports Cloud Spanner right now. See their documentation: https://www.dbvis.com/features/
As the product is still pretty new publicly, we'll hopefully be seeing more 3rd party support in the coming months.
I've run into similar problems with the driver supplied by Google, so I decided to develop my own. The driver comes in both a 'thin' and a 'fat' version. The thin version is intended as a dependency to be included in Java applications you develop yourself. The fat version can be used standalone, for example for these kinds of connections. The fat version (and others) can be found here: https://github.com/olavloite/spanner-jdbc/releases
More information about the whole driver can be found on my GitHub page.
The driver does work with DBVisualizer. Follow these steps to set it up:
Download the driver and place it in your JRE/lib/ext directory (this is necessary because of dynamic loading of services done by the underlying Google Cloudspanner API). Make sure you place it in the lib/ext directory of the JRE you are actually using with DBVisualizer.
Open DBVisualizer and open Driver Manager. Click on Create a new Driver.
Give it the name Cloudspanner
URL format is jdbc:cloudspanner://localhost;Project=projectId;Instance=instanceId;Database=databaseName;PvtKeyPath=key_file
Driver class is automatically selected.
Close the Driver Manager and make a new connection using the new driver.
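For a quick sanity check outside DBVisualizer, you can also test the same URL from a small Java program. This is a minimal sketch, assuming the driver jar is on the classpath and registers itself through the JDBC 4 ServiceLoader mechanism; the project, instance, database and key path below are placeholders you would replace with your own:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SpannerConnectionTest {
    public static void main(String[] args) throws Exception {
        // Placeholder values -- replace with your own project, instance, database and key file
        String url = "jdbc:cloudspanner://localhost;Project=my-project;Instance=my-instance"
                + ";Database=my-database;PvtKeyPath=/path/to/key.json";
        try (Connection con = DriverManager.getConnection(url);
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println("Connected, SELECT 1 returned " + rs.getLong(1));
            }
        }
    }
}
If this works from the command line but not from DBVisualizer, the problem is almost certainly the classpath DBVisualizer is using rather than the driver or the URL.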
I'm in the process of creating a custom registry hosted in Azure DevOps.
The plan going forward will be to host some third party libraries as well as our own libraries in this custom registry.
Each project will then be using manifests in order to declare all dependencies and their required versions.
So far everything works as expected. I've already created a port out of one of our libraries and I'm currently distributing it via our custom registry.
Now for the part I'm unsure how to handle.
At my company we do an "air gapped" build which means the source code is taken to some machine on a private network with no internet connection where the build is performed.
This is of course problematic as the air gapped machine will not have access to the custom ports registry we're hosting on ADO, nor will it have access to the repos hosting those projects we're distributing via our custom registry.
I'm trying to figure out a solution to this issue.
My first thought was to tell the air gap team to first clone the required repos to a USB stick. Then we could configure Visual Studio to use overlay-ports, which would use the source that was cloned onto the USB stick plus a custom port file. I have no idea if this would actually work.
I'm curious what other folks who might be in a similar situation have done.
Does anyone have any ideas on how I could handle this scenario using vcpkg?
I've been trying to run cartography on my EC2 account for the last 2 days. I have no previous knowledge of Neo4j, but following their installation process doesn't work.
First I tried to install Neo4j using the RPM instructions from the Neo4j website, with no success accessing Neo4j on port 7474. Error: Connection refused.
Then I gave up trying to make Neo4j work on an EC2 installation and used their Marketplace AMI. It works like a charm, but I don't know what is being installed on that AMI. So I decided to install and run cartography on this instance.
My first problem was installing Python, pip and Java correctly. After everything was working, I discovered the Neo4j Bolt port used my public IP, not localhost. After that I was finally able to execute cartography, but now it's giving me the following error:
neobolt.exceptions.ClientError: Supplied bookmark [FB:kcwQ40omSYgvSzKPpCQTXDOcCBSQ] does not conform to pattern neo4j:bookmark:v1:tx
Has anyone actually been able to use this? Every step along the way requires some specific libraries.
Thanks!
I maintain cartography and hope I can help (wish I'd seen this earlier though haha).
A few things to check:
Are you using Neo4j 4.x? cartography currently only supports 3.5.x.
To run for one AWS account,
AWS_PROFILE=profilename cartography --neo4j-uri <uri for your neo4j instance; usually bolt://localhost:7687>
To run multiple accounts, set up an AWS config file and run
AWS_CONFIG_FILE=/path/to/your/aws/config cartography --neo4j-uri <uri for your neo4j instance; usually bolt://localhost:7687> --aws-sync-all-profiles
(see https://github.com/lyft/cartography/blob/master/docs/setup/install.md#cartography-installation)
If you have more questions feel free to open a GitHub issue or start a thread on our Slack (can talk about more specialized setups like if you're using containers or anything like that too)
While Spanner looks exciting, the documentation for the Simba JDBC driver (included in the download links here: https://cloud.google.com/spanner/docs/partners/drivers) is relatively sparse, especially when compared to the documentation for the Simba JDBC BigQuery driver (https://cloud.google.com/bigquery/partners/simba-drivers/).
In particular, the documentation only mentions one connection string:
jdbc:cloudspanner://localhost;Project=simba-cloudspanner-jdbc;Instance=test-instance;Database=example-db
... there is no information about how to specify, for example, a service account and its p12 credentials or a path to a JSON file, which many Google Cloud services use.
Can anyone share JDBC connection strings or other setup details they have successfully used to connect to the service? I have tried, for example, setting the environment variable GOOGLE_APPLICATION_CREDENTIALS and providing a JDBC string in the same style as above, but to no avail.
Ideally, I would like to use a combination of instance id, project name, database name, a service account email, and a p12 file, but am open to other authentication options.
EDIT: When attempting the GOOGLE_APPLICATION_CREDENTIALS strategy, I generated this log file, in case it might be of any help https://gist.github.com/aryeh-looker/e6b1b1617d301f0a247463216c96535d
Double-checked my work, and it looks like I am in fact able to connect with a connection string as above and by setting the environment variable GOOGLE_APPLICATION_CREDENTIALS. It would be ideal to have some other options, and the documentation is still a bit spotty (no mention of the environment variable), so more information would be welcome.
This is a semi-workable solution. It suffers from the fact that you cannot have multiple connections with different service accounts in the same process.
EDIT 2: This does not seem to work. I get errors about the instance not being specified when pointing to a JSON file.
EDIT: looks like with the latest release of the Spanner driver, there is a way to do this.
The latest release of the driver (1.0.4.1005) appears to support an optional JDBC parameter PvtKeyPath which takes a path to your private key as opposed to having to set the GOOGLE_APPLICATION_CREDENTIALS variable. Worth a look.
From the included PDF documentation:
So you will have a URL like: jdbc:cloudspanner://;Project=...;PvtKeyPath=/path/to/credentials.json
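To illustrate the two authentication options side by side, here is a minimal sketch in plain Java, assuming the Simba driver jar is on the classpath and registers itself as a JDBC 4 driver; the project, instance, database and key path are placeholders:
import java.sql.Connection;
import java.sql.DriverManager;

public class SpannerAuthTest {
    public static void main(String[] args) throws Exception {
        // Option 1: rely on GOOGLE_APPLICATION_CREDENTIALS, which must be set
        // in the environment before the JVM starts (it cannot be set from here).
        System.out.println("GOOGLE_APPLICATION_CREDENTIALS = "
                + System.getenv("GOOGLE_APPLICATION_CREDENTIALS"));
        String urlWithEnvCreds =
                "jdbc:cloudspanner://localhost;Project=my-project;Instance=test-instance;Database=example-db";

        // Option 2: pass the JSON key explicitly via PvtKeyPath (driver 1.0.4.1005 or later).
        String urlWithKeyPath = urlWithEnvCreds + ";PvtKeyPath=/path/to/credentials.json";

        try (Connection con = DriverManager.getConnection(urlWithKeyPath)) {
            System.out.println("Connected: " + !con.isClosed());
        }
    }
}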
As the JDBC driver supplied by Google is severely limited (it does not support DML and DDL statements), I have written my own JDBC driver. The driver is designed to work with JPA/Hibernate-enabled applications. The driver can be found here: https://github.com/olavloite/spanner-jdbc
This driver supports the same kind of URLs as the driver supplied by Google, including the PvtKeyPath property. It is still BETA, but I already use it for one of my own applications.
I was wondering about the best way to deliver private web service instances to lots of users, so the user would always be able to connect to their own offline version of a service, just like running a web service from Visual Studio while debugging. I was struggling with setting this up in VS2013 even with the many online tutorials, but I am not sure if it's not working because it was never supposed to work this way.
I have provided this in-depth explanation of my issue as I am not sure I am going about this in the right way and would appreciate feedback:
Background:
I have a web service to interface with an engine. This deals with the front-end and builds a set of commands for how to make a CAD model. These commands are for controlling the 3rd party CAD software's API. Therefore the engine can be seen to have two main functions -
Build the CAD's API instructions, which can be saved for later
Execution, where it catches the instance of the CAD software running on the same computer and builds the model.
The second part is restricted from the general public. Only our in-house users should be able to use it. However, they want an otherwise identical front-end and user experience.
The problem is, if they connect to the same engine as the public, which lives on our main server, then the engine will be looking for an instance of the CAD package on the same machine as itself, i.e. the server, as stressed in the second point above. What should happen is that the engine finds the CAD instance running on the machine the controlling UI is on and uses that as its target. I have spoken to the CAD API support and they say they do not know how to do that.
And so we get to my solution of providing an offline, standalone copy of my web service on each of the employees' computers. This means the front-end will check at the start of the session whether a localhost connection is available. If not, it will use the main address, which takes it to my server. Otherwise it uses the local engine, which performs the default behavior of looking for a CAD package on the same machine as itself. Because it's locally installed, that is now the right machine and it will find the user's CAD instance successfully.
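A rough sketch of that "try localhost first, otherwise fall back to the main server" check is shown below. The front-end is Unity/C# in my case, but the idea is the same; the Java here uses placeholder URLs and a short connect timeout, purely for illustration:
import java.net.HttpURLConnection;
import java.net.URL;

public class EndpointSelector {
    // Placeholder addresses -- the real service paths would come from configuration
    private static final String LOCAL_URL = "http://localhost:8080/engine";
    private static final String SERVER_URL = "https://example.com/engine";

    public static String chooseEndpoint() {
        try {
            HttpURLConnection con = (HttpURLConnection) new URL(LOCAL_URL).openConnection();
            con.setConnectTimeout(500);  // fail fast if nothing is listening locally
            con.setReadTimeout(500);
            con.setRequestMethod("HEAD");
            con.getResponseCode();       // any HTTP response means a local engine answered
            con.disconnect();
            return LOCAL_URL;
        } catch (Exception e) {
            return SERVER_URL;           // no local engine, fall back to the main server
        }
    }

    public static void main(String[] args) {
        System.out.println("Using endpoint: " + chooseEndpoint());
    }
}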
Final points:
The engine cannot be accessed by the UI directly, as I am using Unity3D for the front-end and there are .NET compatibility issues.
I need a completely self-contained version of the software in the future anyway, so eventually I have to deal with having the engine accessed locally.
I ended up using IIS Express. I got the user to install this and then call a batch file installer I made, which sets up the config file and moves my web project to the correct directory.
Has anyone here successfully connected to neo4j using ColdFusion?
I was able to connect to neo4j 1.6.1 using this guide as a starting point: http://ghostednotes.com/2010/04/29/using-neo4j-graph-databases-with-coldfusion
However, it was a short-lived success. I have since uninstalled neo4j 1.6.1 and installed 1.7.
I am now running Apache and CF 9.0.1 on Windows XP as a local dev box. I added ...\neo4j-community-1.7\lib to my CF class path and the libraries are listed in the CF Server Java Class Path. Neo4j is running fine, as I can use its administrator interface: http://localhost:7474/webadmin/# . CF and Apache are also running fine; I use them daily.
While the code below works, I'd really like to 'see' what's going on using the neo4j web administrator, so I can coordinate learning Neo4j while using the data in a CF application.
Code: (Works)
dbroot = "/tmp/neo4jtest1/";
graphDb = createObject('java', 'org.neo4j.kernel.EmbeddedGraphDatabase');
graphDb.init( dbroot & 'var/myFirstGraphDB');
So I tried to connect to the neo4j db graph.db. However, the code fails.
Code: (fails)
graphDb = createObject('java', 'org.neo4j.kernel.EmbeddedGraphDatabase');
graphDb.init( dbroot & 'graph.db');
Error:
Object instantiation exception.
An exception occurred while instantiating a Java object. The class must not be an interface or an abstract class. Error: ''.
If I remove the "." in graph.db it does create a "graphdb" in the neo4j data folder, and successfully connects to it. However, that db is not viewable with their admin :(
I'm a novice, so please dumb down your answer.
Ok, I think what you're trying to achieve is not possible. It is not possible to access Neo4J within CF (via Java) and have the admin interface working (caveat 1 applies).
If you have put all the jars of the Neo4J package into Adobe CF then most likely the Neo4J admin interface is looking at its own Neo4J file system. When you create the Embedded server it is not connecting to the same database because it simply can't.
Embedded Neo4J doesn't work like a standard database connection. One Embedded Neo4J reads and writes to one directory location (key word: directory, it doesn't open a single file but a whole bunch of them). No two Neo4J instances can access the same directory location (caveat 2 applies).
Ok, the caveats:
1- it is possible, in theory, to manually start up the admin interface programmatically so that it uses the Embedded server that you create via Java. The Java code looks simple enough (taken from Using the server (including web administration) with an embedded database):
// Create your embedded graph db somewhere (the graphDb variable below)
srv = CreateObject("java", "org.neo4j.server.WrappingNeoServerBootstrapper")
    .init(graphDb);
srv.start();
// The server is now running
// until we stop it:
srv.stop();
I did not get this working, mostly because the admin server has a bunch of dependencies that were incompatible with the rest of my setup, so I can't advise on how well the above will work.
2- it is possible to have 1 read/write Neo4J accessing one location and then have multiple read-only Neo4Js (EmbeddedReadOnlyGraphDatabase) reading the same location (but I've never tried it).
You do have the option of using the REST interface - either manually, or via the Neo4J Java REST Binding (kinda slow, though).
It might be worth reading the Deployment Scenarios documentation before getting too deep in this.
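For what it's worth, a bare-bones REST call could look like the sketch below. It assumes a Neo4j 1.x server on the default port and that the legacy Cypher endpoint is at /db/data/cypher; the endpoint path and the query are assumptions for illustration, so check the REST API chapter of the manual for your exact version:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class Neo4jRestExample {
    public static void main(String[] args) throws Exception {
        // Assumed endpoint for the 1.x legacy REST API -- verify against your server's manual
        URL url = new URL("http://localhost:7474/db/data/cypher");
        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setRequestMethod("POST");
        con.setRequestProperty("Content-Type", "application/json");
        con.setRequestProperty("Accept", "application/json");
        con.setDoOutput(true);

        // A trivial Cypher query, just to prove the round trip works
        String body = "{\"query\": \"START n=node(*) RETURN count(n)\", \"params\": {}}";
        try (OutputStream os = con.getOutputStream()) {
            os.write(body.getBytes("UTF-8"));
        }

        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(con.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
The same request could of course be made from CF with cfhttp instead of dropping down to Java.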
There is at least one CF/Neo4J bridge out there, but it's pretty incomplete. I have one that I worked on, but I need to figure out if I can open source it!
Just a small addition to otupman's comments. I can confirm his theory of connecting to the admin interface from CF. Adding the following jars to the CF class path seemed to be enough to get the basics up and running. You may need additional jars if you are using more advanced features. Note: I am using Tomcat, so the exact jars may differ slightly for your environment.
neo4j-community-1.7/lib/*.* (entire directory)
neo4j-community-1.7/system/lib: (ONLY the jars below)
asm-3.1.jar
asm-analysis-3.2.jar
asm-commons-3.2.jar
asm-tree-3.2.jar
asm-util-3.2.jar
commons-configuration-1.6.jar
jackson-core-asl-1.8.3.jar
jackson-jaxrs-1.8.3.jar
jackson-mapper-asl-1.8.3.jar
jersey-core-1.9.jar
jersey-multipart-1.9.jar
jersey-server-1.9.jar
jetty-6.1.25.jar
jetty-util-6.1.25.jar
neo4j-server-1.7-static-web.jar
neo4j-server-1.7.jar
rrd4j-2.0.7.jar
Then I started the server and database in onApplicationStart:
factory = createObject("java", "org.neo4j.graphdb.factory.GraphDatabaseFactory");
dbroot = ExpandPath("/neo4jtest/");
graphDb = factory.newEmbeddedDatabase(dbroot & 'myFirstGraphDB');
Bootstrapper = createObject("java", "org.neo4j.server.WrappingNeoServerBootstrapper");
graphServer = Bootstrapper.init( graphDb );
graphServer.start();
application.graphServer = graphServer;
application.graphDb = graphDB;
And closed both in onApplicationEnd
application.graphDb.shutDown();
application.graphServer.stop();
Edit: After some further testing, I think it is better to load them once in onServerStart and then use a shutdown hook to close them. But since this is just for a local development box, it is less critical.