I am getting an exception when trying to connect to my local instance of Cassandra from Python, though I can connect with no problems using cqlsh. I am running Cassandra 3.0.1 on Ubuntu:
cqlsh
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.0.1 | CQL spec 3.3.1 | Native protocol v4]
The exception I obtain is below:
ERROR:cassandra.cluster:Control connection failed to connect, shutting down Cluster:
Traceback (most recent call last):
File "cassandra/cluster.py", line 840, in cassandra.cluster.Cluster.connect (cassandra/cluster.c:11146)
File "cassandra/cluster.py", line 2088, in cassandra.cluster.ControlConnection.connect (cassandra/cluster.c:36955)
File "cassandra/cluster.py", line 2123, in cassandra.cluster.ControlConnection._reconnect_internal (cassandra/cluster.c:37811)
NoHostAvailable: ('Unable to connect to any servers', {'127.0.0.1': InvalidRequest(u'code=2200 [Invalid query] message="unconfigured table schema_keyspaces"',), 'localhost': InvalidRequest(u'code=2200 [Invalid query] message="unconfigured table schema_keyspaces"',)})
I have checked my cassandra.yaml file and it looks ok:
egrep 'rpc_port:|native_transport_port:' /etc/cassandra/cassandra.yaml
native_transport_port: 9042
rpc_port: 9160
Is there anything else I can look at? Suggestions are most welcome.
It looks like you are attempting to connect to a 3.0.1 server using an older install of cqlsh, or you are (somehow) using an older Python driver.
The error message you are getting:
(u'code=2200 [Invalid query] message="unconfigured table schema_keyspaces"',)
indicates that the client driver is attempting to read table metadata from the schema_keyspaces table, which pre-dates 3.0. This information is now held in the system_schema.keyspaces table.
Use pip install --upgrade cassandra-driver to upgrade cassandra-driver.
You can run python -c 'import cassandra; print(cassandra.__version__)' to confirm the version of the driver.
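Once the driver is upgraded, a minimal sanity check like the following sketch (assuming the same 127.0.0.1:9042 setup as in the question) should connect and read keyspace metadata from the 3.0-style system_schema tables:

# Quick connectivity check after upgrading cassandra-driver
# (assumes Cassandra is listening on 127.0.0.1:9042, as in the question).
import cassandra
from cassandra.cluster import Cluster

print(cassandra.__version__)  # a 3.x or newer driver understands the 3.0 schema tables

cluster = Cluster(['127.0.0.1'], port=9042)
session = cluster.connect()
for row in session.execute('SELECT keyspace_name FROM system_schema.keyspaces'):
    print(row.keyspace_name)
cluster.shutdown()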
I am following this tutorial to upload my existing Django project, which runs locally on SQLite, to Google Cloud Run / Postgres.
I have the cloud_sql_proxy service running and can sign into Postgres from the command line.
I am at the point of running the command
python manage.py migrate
And I get the error:
django.db.utils.OperationalError: connection to server on socket "/cloudsql/cgps-registration-2:us-central-1:cgps-reg-2-postgre-sql/.s.PGSQL.5432" failed: No such file or directory
Is the server running locally and accepting connections on that socket?
The answer to that question is yes, the server is running locally and accepting connections, because I can log in with the Postgres client:
agerson@agersons-iMac ~ % psql "sslmode=disable dbname=postgres user=postgres hostaddr=127.0.0.1"
Password for user postgres:
psql (14.1, server 13.4)
Type "help" for help.
postgres=>
I double-checked the connection string in my .env file and it has the correct username / password.
Is this socket not getting created somehow in a previous step?
/cloudsql/cgps-registration-2:us-central-1:cgps-reg-2-postgre-sql/.s.PGSQL.5432
It looks like there's a mismatch between what the app is looking for and how you're launching the proxy. The error explains the problem.
You're launching the proxy like this, with an incorrect region name (us-central):
cloud_sql_proxy -instances="cgps-registration-2:us-central:cgps-reg-2-postgre-sql=tcp:5432"
But the app is looking for us-central1. Try this (omitting the =tcp:5432 so the proxy creates a Unix socket instead):
cloud_sql_proxy -instances="cgps-registration-2:us-central1:cgps-reg-2-postgre-sql"
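If it helps, here is a minimal sketch of the matching Django DATABASES entry; the engine and environment-variable names are assumptions, not taken from the question:

import os  # assumption: the password comes from the environment / .env file

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'postgres',
        'USER': 'postgres',
        'PASSWORD': os.environ['DB_PASSWORD'],
        # Pointing HOST at the /cloudsql/... directory makes Django connect
        # over the proxy's Unix socket instead of TCP.
        'HOST': '/cloudsql/cgps-registration-2:us-central1:cgps-reg-2-postgre-sql',
        'PORT': '5432',
    }
}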
I tried to follow the steps in:
https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine.
I have the application container and the cloudsql proxy container running in the same pod.
After creating the cluster, the logs for the proxy container look correct:
$kubectl logs users-app-HASH1-HASH2 cloudsql-proxy
2018/08/03 18:58:45 using credential file for authentication; email=it-test@tutorial-bookshelf-xxxxxx.iam.gserviceaccount.com
2018/08/03 18:58:45 Listening on 127.0.0.1:3306 for tutorial-bookshelf-xxxxxx:asia-south1:it-sample-01
2018/08/03 18:58:45 Ready for new connections
However, the logs from the application container show an "unable to connect on localhost" error:
$kubectl logs users-app-HASH1-HASH2 app-container
...
19:27:38 users_app.1 | return Connection(*args, **kwargs)
19:27:38 users_app.1 | File "/usr/local/lib/python3.7/site-packages/pymysql/connections.py", line 327, in __init__
19:27:38 users_app.1 | self.connect()
19:27:38 users_app.1 | File "/usr/local/lib/python3.7/site-packages/pymysql/connections.py", line 629, in connect
19:27:38 users_app.1 | raise exc
19:27:38 users_app.1 | sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'localhost' ([Errno 2] No such file or directory)") (Background on this error at: http://sqlalche.me/e/e3q8)
The SQLALCHEMY_DATABASE_URI is 'mysql+pymysql://{user}:{password}@/{database}?unix_socket=/cloudsql/{cloudsql_connection_name}' and is populated with the correct values (credentials that I set using kubectl secrets).
I'm sure I'm doing something silly here, so I'm hoping someone more experienced with GCP can take a look and provide pointers on troubleshooting this issue.
UPDATE:
I just went to the GCP Kubernetes Engine page, opened a shell on the app container, and tried to connect to the Cloud SQL instance. That seemed to work:
$gcloud container cluster ......... -it /bin/sh
#python
>>> import pymysql
>>> connection = pymysql.connect(host='127.0.0.1', user='user', password='password', db='db')
>>> with connection.cursor() as cursor:
... cursor.execute("show databases;")
... tables = cursor.fetchall()
...
5
But the following (when I try to connect through SQLAlchemy) fails:
>>> connection = pymysql.connect(host='127.0.0.1', user='u', password='p', db='d', unix_socket='/cloudsql/CONNECTION_NAME')
...
pymysql.err.OperationalError: (2003, "Can't connect to MySQL server on '127.0.0.1' ([Errno 2] No such file or directory)")
>>> from sqlalchemy import create_engine
>>> engine = create_engine('mysql://user:password@localhost/db')
>>> engine.connect()
Traceback (most recent call last):
...
sqlalchemy.exc.OperationalError: (_mysql_exceptions.OperationalError) (2002, 'Can\'t connect to local MySQL server through socket \'/run/mysqld/mysqld.sock\' (2 "No such file or directory")') (Background on this error at: http://sqlalche.me/e/e3q8)
>>> engine = create_engine('mysql+pymysql://user:password@/db?unix_socket=/cloudsql/tutorial-bookshelf-xxxx:asia-south1:test-01')
>>> engine.connect()
Traceback (most recent call last):
...
raise exc
sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'localhost' ([Errno 2] No such file or directory)") (Background on this error at: http://sqlalche.me/e/e3q8)
Connecting to Cloud SQL via the proxy can be done through either a Unix socket or a TCP connection, but you shouldn't try to use both at the same time.
I don't see any specifications on how you have configured your proxy, but if you wish to use a unix socket then your proxy instances flag should look like this: -instances=<INSTANCE_CONNECTION_NAME>. This will cause the proxy to create a unix socket in the /cloudsql directory that forwards traffic to your Cloud SQL instance. In this case, you'll set unix_socket=/cloudsql/<INSTANCE_CONNECTION_NAME> in your url.
If you are trying to connect via TCP socket, then use an instances flag like this: -instances=<INSTANCE_CONNECTION_NAME>=tcp:3306. This will tell the proxy to listen on port 3306 and forward traffic to your Cloud SQL instance. In this case, you'll use host='127.0.0.1' and port=3306.
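As a short sketch of the two modes with pymysql (the credentials and <INSTANCE_CONNECTION_NAME> are placeholders):

import pymysql

# Unix-socket mode: proxy started with -instances=<INSTANCE_CONNECTION_NAME>
conn_socket = pymysql.connect(
    user='user', password='password', db='db',
    unix_socket='/cloudsql/<INSTANCE_CONNECTION_NAME>')

# TCP mode: proxy started with -instances=<INSTANCE_CONNECTION_NAME>=tcp:3306
conn_tcp = pymysql.connect(
    host='127.0.0.1', port=3306,
    user='user', password='password', db='db')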
If you are looking for a hands on introduction to using CloudSQL on GKE, I encourage you to check out the codelab mentioned in this project: https://github.com/GoogleCloudPlatform/gmemegen
I have recently set up Cloud SQL Postgres via cloudsql-proxy, and I have a few questions for you.
Do the credentials you are using for cloudsql-proxy have the Cloud SQL Client role?
Does your cloudsql-proxy container command look like this?
command: ["/cloud_sql_proxy",
          "--dir=/cloudsql", "-instances=<INSTANCE_CONNECTION_NAME>=tcp:3306",
          "-credential_file=/secrets/cloudsql/credentials.json"]
It might be helpful if you could share your kubernetes deployment.yml which has both the app and proxy containers.
OK, posting an answer, but I'm not fully satisfied, so I'll wait for more.
I was able to connect to the Cloud SQL instance by changing the SQLALCHEMY_DATABASE_URI to 'mysql+pymysql://user:password@/db' (meaning I got rid of the unix socket connection string),
so:
>>> engine = create_engine('mysql+pymysql://user:password@/db')
>>> engine.connect()
<sqlalchemy.engine.base.Connection object at 0x7f2236bdc438>
worked for me. I'm not sure why I had to get rid of the unix socket connection string, as I did enable the Cloud SQL API for my project.
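For completeness, a sketch of both SQLAlchemy URI forms side by side (credentials and instance name are placeholders):

from sqlalchemy import create_engine, text

# TCP form: requires the proxy to listen with -instances=...=tcp:3306
engine = create_engine('mysql+pymysql://user:password@127.0.0.1:3306/db')

# Unix-socket form: requires the proxy to have created the /cloudsql socket
# engine = create_engine('mysql+pymysql://user:password@/db'
#                        '?unix_socket=/cloudsql/<INSTANCE_CONNECTION_NAME>')

with engine.connect() as conn:
    print(conn.execute(text('SELECT 1')).scalar())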
I can connect to DB2 (IBM AS400) from Java Spring successfully, using the jt400.jar library and the JDBC driver com.ibm.as400.access.AS400JDBCDriver.
But I have not been successful in Robot Framework.
I tried to use the code below:
Connect To Database Using Custom Params jaydebeapi 'com.ibm.db2.jcc.DB2Driver','jdbc:db2://10.53.x.x/XABZ:user=USER1;password=PASS1'
Check If Exists In Database select * from lib.btmtran where tmtxseq = 187822
Disconnect From Database
I got the error:
***** Out of Package Error Occurred (2018-07-24 16:55:25.3) *****
Exception stack trace: com.ibm.db2.jcc.am.SqlException: DB2 SQL Error:
SQLCODE=-805, SQLSTATE=51002, SQLERRMC=NULLID.SYSSH200;00;S6576b3b
, DRIVER=4.21.29
com.ibm.db2.jcc.am.kd.a(kd.java:815)
com.ibm.db2.jcc.am.kd.a(kd.java:66)
How do I fix it?
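One thing worth trying, since jt400 already works from Java: point jaydebeapi at the same AS/400 driver instead of the DB2 LUW driver. A sketch under that assumption (the jar path is hypothetical, and jdbc:as400:// is the URL scheme jt400 documents):

import jaydebeapi

# Reuse the driver that works from Java Spring
# (com.ibm.as400.access.AS400JDBCDriver from jt400.jar)
# rather than com.ibm.db2.jcc.DB2Driver.
conn = jaydebeapi.connect(
    'com.ibm.as400.access.AS400JDBCDriver',
    'jdbc:as400://10.53.x.x',
    ['USER1', 'PASS1'],
    '/path/to/jt400.jar')  # hypothetical path to the jar
curs = conn.cursor()
curs.execute('select * from lib.btmtran where tmtxseq = 187822')
print(curs.fetchall())
conn.close()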
I have tried to use Docker Toolbox to set up Hyperledger Fabric v1.0 on my local machine.
I followed this document:
http://hyperledger-fabric.readthedocs.io/en/latest/asset_setup.html
But when I tried to deploy the chaincode:
$node deploy.js
I got an error message:
info: Returning a new winston logger with default configurations
info: [Chain.js]: Constructed Chain instance: name - fabric-client1, securityEnabled: true, TCert download batch size: 10, network mode: true
info: [Peer.js]: Peer.const - url: grpc://localhost:8051 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Peer.js]: Peer.const - url: grpc://localhost:8055 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Peer.js]: Peer.const - url: grpc://localhost:8056 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Client.js]: Failed to load user "admin" from local key value store
info: [FabricCAClientImpl.js]: Successfully constructed Fabric COP service client: endpoint - {"protocol":"http","hostname":"localhost","port":8054}
info: [crypto_ecdsa_aes]: This class requires a KeyValueStore to save keys, no store was passed in, using the default store C:\Users\daniel\.hfc-key-store
[2017-04-15 22:14:29.268] [ERROR] Helper - Error: Calling enrollment endpoint failed with error [Error: connect ECONNREFUSED 127.0.0.1:8054]
at ClientRequest.<anonymous> (C:\Users\daniel\node_modules\fabric-ca-client\lib\FabricCAClientImpl.js:304:12)
at emitOne (events.js:96:13)
at ClientRequest.emit (events.js:188:7)
at Socket.socketErrorListener (_http_client.js:310:9)
at emitOne (events.js:96:13)
at Socket.emit (events.js:188:7)
at emitErrorNT (net.js:1278:8)
at _combinedTickCallback (internal/process/next_tick.js:74:11)
at process._tickCallback (internal/process/next_tick.js:98:9)
[2017-04-15 22:14:29.273] [ERROR] DEPLOY - Error: Failed to obtain an enrolled user
at ca_client.enroll.then.then.then.catch (C:\Users\daniel\helper.js:59:12)
at process._tickCallback (internal/process/next_tick.js:103:7)
events.js:160
throw er; // Unhandled 'error' event
^
Error: Connect Failed
at ClientDuplexStream._emitStatusIfDone (C:\Users\daniel\node_modules\grpc\src\node\src\client.js:201:19)
at ClientDuplexStream._readsDone (C:\Users\daniel\node_modules\grpc\src\node\src\client.js:169:8)
at readCallback (C:\Users\daniel\node_modules\grpc\src\node\src\client.js:229:12)
Is this a question about being unable to connect to the CA? Or is there another cause?
Edit:
Environment:
OS: Windows 10 Professional Edition
Docker Toolbox: 17.04.0-ce
Go: 1.7.5
Node.js: 6.10.0
My steps:
1. Open the Docker Quickstart Terminal and enter the following commands:
$curl -L https://raw.githubusercontent.com/hyperledger/fabric/master/examples/sfhackfest/sfhackfest.tar.gz -o sfhackfest.tar.gz 2> /dev/null; tar -xvf sfhackfest.tar.gz
$docker-compose -f docker-compose-gettingstarted.yml build
$docker-compose -f docker-compose-gettingstarted.yml up -d
$docker ps
I confirmed that six containers had been started.
2. Download the examples and install the modules:
$curl -OOOOOO https://raw.githubusercontent.com/hyperledger/fabric-sdk-node/v1.0-alpha/examples/balance-transfer/{config.json,deploy.js,helper.js,invoke.js,query.js,package.json}
//This link didn't work, so I downloaded the required files from the fabric-sdk-node repository on GitHub
$npm install --global windows-build-tools
$npm install
3. Try to deploy the chaincode:
$node deploy.js
There were several problems, not the least of which was that the documentation was outdated and written for a preview release of Hyperledger Fabric. The docs are actually in the process of being removed, as we need to update our examples / samples.
You mentioned Docker Toolbox - so are you trying to run all of this on Windows or Mac?
UPDATE:
So one of the issues with Docker Toolbox or Docker for Windows is that you cannot use localhost / 127.0.0.1 as the address when communicating from apps on the host (even in the Quickstart Terminal) to the endpoints of the Docker containers. When the Quickstart Terminal first launches Docker, you'll see it output the IP address of the endpoint you should use when communicating with exposed ports.
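A quick way to check which address actually reaches the CA is a plain TCP probe from the host; the 192.168.99.100 below is only the typical Docker Toolbox VM address (an assumption), so use whatever IP the Quickstart Terminal printed:

import socket

# Probe the CA port (8054 in the question's logs) on both candidate addresses.
for host in ('127.0.0.1', '192.168.99.100'):
    try:
        socket.create_connection((host, 8054), timeout=3).close()
        print(host, 'is reachable on 8054')
    except OSError as err:
        print(host, 'failed:', err)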
I was having the same issue while following the latest "Writing Your First Application" tutorial (http://hyperledger-fabric.readthedocs.io/en/latest/write_first_app.html). I had installed all the pre-requisites and the fabric-samples and started the local network.
When I got to the step of enrolling the Admin user, $ node enrollAdmin.js, I was getting the same error message as above, Error: connect ECONNREFUSED, followed by the localhost address.
As the first answer suggests, the root cause is that I'm running Docker Toolbox. I'm developing on an older Mac, OSX v10.9.5, so I couldn't use Docker for Mac.
To fix the issue, I replaced 'localhost' in the enrollAdmin.js code with the IP from Docker Toolbox.
Here are the steps I took:
Started Docker with Applications > Docker Quickstart Terminal
Copied the IP from this sentence: docker is configured to use the default machine with IP...
Opened the copy of enrollAdmin.js from fabric-samples/fabcar directory
Found this code:
// be sure to change the http to https when the CA is running TLS enabled
fabric_ca_client = new Fabric_CA_Client('http://localhost:7054', tlsOptions , 'ca.example.com', crypto_suite); // <-- This is the line to change
Replaced 'localhost' with the Docker IP, leaving the port :7054 as is.
Saved
Re-ran the command, $ node enrollAdmin.js
The script connected to the CA and successfully completed the Admin enrollment.
On to the next step!
I had just installed PostgreSQL 9.1 on an Ubuntu 12.04 server (hosted on Amazon AWS). When I tried to launch the psql command, the following error message showed up:
psql: could not connect to server: No such file or directory
    Is the server running locally and accepting connections on Unix domain
    socket "/var/run/postgresql/.s.PGSQL.5432"?
After searching the web, I found that I have to start the server before using it. By following this initdb link, I still cannot use the PostgreSQL database. Is there any further work (like configuration) I should do to start the server?
I tried to start the service: service postgresql start
Another error message showed up:
No PostgreSQL clusters exist; see "man pg_createcluster"
I received this message running a new installation of Postgres 9.3 on Ubuntu 11.04. The full message was:
$ sudo /etc/init.d/postgresql start
Error: Cannot stat /var/run/postgresql
* No PostgreSQL clusters exist; see "man pg_createcluster"
It turned out that the /var/run/postgresql directory did not exist, and it is in that directory that the server attempts to create a file with its process ID. I created the directory as root, made the "postgres" user the owner, and was then able to start the server.
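A minimal sketch of that fix (assuming the standard postgres system user; it must run as root):

# Recreate the runtime directory PostgreSQL uses for its socket and PID file;
# equivalent to: mkdir -p /var/run/postgresql && chown postgres:postgres /var/run/postgresql
import os
import shutil

os.makedirs('/var/run/postgresql', exist_ok=True)
shutil.chown('/var/run/postgresql', user='postgres', group='postgres')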
Further explanation found here:
http://www.postgresql.org/message-id/21044.1326496507@sss.pgh.pa.us