I am setting up a development environment with the following:
CentOS 7
Python 2.7
IBM-DB2 EE Server v11.1.4.4
ibm-db package
My earlier installation and setup went smoothly, with no real issues around ODBC connectivity to the local DB2 trial database. With my new install, I keep getting the following message:
Exception:
[IBM][CLI Driver] SQL1531N The connection failed because the name specified with the DSN connection string keyword could not be found in either the db2dsdriver.cfg configuration file or the db2cli.ini configuration file. Data source name specified in the connection string: "DATABASE". SQLCODE=-1531
I tried updating Python to 3.7, but the result is the same. I should reiterate that my earlier install with the same configuration went through without any issues, and back then I never touched the db2cli.ini or db2dsdriver.cfg files. I did try editing them this time and it still fails. From what I could gather, I saw a message along the lines of "ibm-db does not sit well with all Python versions".
>>> import difflib
>>> import subprocess
>>> import os
>>> import ibm_db
>>> from shutil import copyfile
>>> conn = ibm_db.connect("DATABASE","USERID","PASSWORD")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
Exception:
[IBM][CLI Driver] SQL1531N The connection failed because the name specified with the DSN connection string keyword could not be found in either the db2dsdriver.cfg configuration file or the db2cli.ini configuration file. Data source name specified in the connection string: "DATABASE". SQLCODE=-1531
I expect the connection to go through without any issues.
The short answer is that the easiest way is to use a complete DSN string to establish the connection (including the hostname, port, etc.), e.g.:
In [1]: import ibm_db
In [2]: conn = ibm_db.connect("DATABASE=SAMPLE;HOSTNAME=localhost;PORT=60111;UID=db2v111;PWD=passw0rd;","","")
The long answer is that we should be able to use the alias from the catalog, as explained in the ibm_db.connect API:
IBM_DBConnection ibm_db.connect(string database, string user, string
password [, dict options [, constant replace_quoted_literal]])
database For a cataloged connection to a database, this parameter
represents the database alias in the DB2 client catalog. For an
uncataloged connection to a database, database represents a complete
connection string in the following format: DRIVER={IBM DB2 ODBC
DRIVER};DATABASE=database;HOSTNAME=hostname;PORT=port;
PROTOCOL=TCPIP;UID=username;PWD=password; where the parameters
represent the following values:
hostname - The hostname or IP address of the database server.
port - The TCP/IP port on which the database is listening for requests.
username - The username with which you are connecting to the database.
password - The password with which you are connecting to the database.
user - The username with which you are connecting to the database.
For uncataloged connections, you must pass an empty string.
password - The password with which you are connecting to the database. For uncataloged connections, you must pass an empty string.
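For illustration, an uncataloged connection built with the DRIVER= form described above would look roughly like this (the hostname, port and credentials are the placeholder values from the earlier example):
import ibm_db

# Uncataloged connection: the full connection string goes in the first argument,
# and the user and password arguments are left as empty strings.
conn_str = ("DRIVER={IBM DB2 ODBC DRIVER};DATABASE=SAMPLE;HOSTNAME=localhost;"
            "PORT=60111;PROTOCOL=TCPIP;UID=db2v111;PWD=passw0rd;")
conn = ibm_db.connect(conn_str, "", "")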
The question, though, is which client catalog will be checked...
It all depends on whether IBM_DB_HOME was set when the package was installed, as explained in the README. If it was set, the Python driver will use the existing client instance and its database catalog (as well as its db2cli.ini and db2dsdriver.cfg). If not, a separate client is fetched during the installation and deployed in Python's site-packages.
To check which is the case, run ldd against your ibm_db.so, e.g.:
ldd /usr/lib/python2.7/site-packages/ibm_db-2.0.7-py2.7-linux-x86_64.egg/ibm_db.so | grep libdb2
libdb2.so.1 => /usr/lib/python2.7/site-packages/ibm_db-2.0.7-py2.7-linux-x86_64.egg/clidriver/lib/libdb2.so.1 (0x00007fb6e137e000)
Based on the output I can say that in my environment the driver was linked against the copy in Python's site-packages, so it will use the db2cli.ini from /usr/lib/python2.7/site-packages/ibm_db-2.0.7-py2.7-linux-x86_64.egg/clidriver/cfg.
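If you are not sure where your ibm_db.so ended up, one quick way to locate it from Python (assuming the module imports at all) is to print its __file__ attribute:
import ibm_db

# The compiled extension's __file__ points at the ibm_db.so that was actually
# loaded, which in turn tells you which clidriver and cfg directory will be used.
print(ibm_db.__file__)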
If I populate that file with a section:
[sample]
port=60111
hostname=localhost
database=sample
I will then be able to connect using just the DSN alias:
In [4]: conn = ibm_db.connect("SAMPLE","db2v111","passw0rd")
If you want the driver to use an existing client instance instead, set IBM_DB_HOME during installation.
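A rough sketch of what such an install could look like when driven from Python (the client path /opt/ibm/db2/V11.1 is an assumption; adjust it to your instance):
import os
import subprocess

# Point IBM_DB_HOME at the existing DB2 client/instance before pip builds ibm_db,
# so the extension links against that client instead of the bundled clidriver.
env = dict(os.environ, IBM_DB_HOME="/opt/ibm/db2/V11.1")  # assumed install path
subprocess.check_call(
    ["pip", "install", "--force-reinstall", "--no-binary", ":all:", "ibm_db"],
    env=env,
)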
My goal is to enable the ltree postgres extension in an automated way (GCP PostgreSQL extensions).
This is the query needed to create the extension: CREATE EXTENSION IF NOT EXISTS ltree;. I verified that this command works when I manually connect to the Cloud SQL instance using the following method, and when I do so, the migrations run fine for that instance.
gcloud sql connect MyBackendInstance --user=postgres
postgres=> \c my_database;
Password:
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
You are now connected to database "my_database" as user "postgres".
mch_staging=> CREATE EXTENSION IF NOT EXISTS ltree;
CREATE EXTENSION
Now I have the following Django migration from a library called django-ltree, which works perfectly on my local installation.
migrations.RunSQL(
"CREATE EXTENSION IF NOT EXISTS ltree;",
"DROP EXTENSION ltree;"
)
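For context, a minimal sketch of the full migration file that operation would live in (the app label and parent migration are placeholders, not from the original project):
from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ("myapp", "0001_initial"),  # placeholder app label and parent migration
    ]

    operations = [
        migrations.RunSQL(
            "CREATE EXTENSION IF NOT EXISTS ltree;",
            "DROP EXTENSION ltree;",
        ),
    ]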
However, when I run that migration in my pipeline (without manually creating the ltree extension using the method above), which uses cloud_sql_proxy to connect to the database, I get the following error:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/django/db/backends/utils.py", line 82, in _execute
return self.cursor.execute(sql)
psycopg2.errors.InsufficientPrivilege: permission denied to create extension "ltree"
HINT: Must be superuser to create this extension.
The command in the pipeline is quite simple:
./cloud_sql_proxy -instances="$CLOUD_SQL_INSTANCES"=tcp:5432 -credential_file=gcloud-api-credentials.json &
python backend/manage.py migrate
The credentials to connect to the database are correctly defined in the Django settings; we've been able to use these credentials to perform all kinds of migrations for ages.
I have tried creating a new user as described in this question, but that did not solve the issue, so it seems unrelated.
UPDATE: I tried running cloud_sql_proxy locally, using the same authentication file, to install a different extension, and that seems to work flawlessly.
I am using Django 2.2 with MongoDB as the backend database, and I have already inserted all the data my application needs. I am also using Robo3T to view the MongoDB collections. My database name is CIS_FYP_db. On my computer everything works perfectly, but I want to transfer the project to another computer. The project I copy over also contains the data\db folder with many collection .wt files, yet when I run the project on the other computer the database is blank: no data is present, and MongoDB creates a new, empty database with the same name CIS_FYP_db and no collections. Please help me figure out how to transfer my MongoDB database to another computer so I can use it with the application that was already built for it. Thanks in advance.
settings.py
DATABASES = {
'default': {
'ENGINE': 'djongo',
'NAME': 'CIS_FYP_db',
}
}
When you create a connection with MongoDB, the database is created automatically if it does not already exist.
You can use the mongodump command to export all the database records and mongorestore to restore the database on your new machine.
Assumption: you have set up MongoDB locally and want to migrate it to another computer.
1. Requirements:
mongodump
mongorestore
1.1. How to install?
To install the above requirements, install the MongoDB Database Tools.
Download link: https://www.mongodb.com/try/download/database-tools
1.2. Common error
Sometimes the PATH is not set, so try this in the command prompt: set path="C:\Program Files\MongoDB\Server\5.0\bin"
Note: adjust the path above to match your installation folder.
2. Procedure:
Note: make sure you have completed Step 1.
2.1. Approach
We are going to create a dump of MongoDB on the old PC (using mongodump), transfer that dump to the new PC, and import it there using mongorestore.
2.2. Creating the dump on the old PC (the one whose database you want to replicate)
mongodump --host localhost:27017 --out ~/Desktop/mongo-migration
The above command will create a dump at the given path: ~/Desktop/mongo-migration
Just copy that folder and transfer it to the new PC.
Note: if you have created an authenticated user, add these flags to the above command and provide their values: --username [yourUserName] --password [yourPassword] --authenticationDatabase admin
2.3. Importing the dump (created on the old PC)
Place the dump folder somewhere and execute the command below:
mongorestore C:/....../mongo-migration/ -u root --host 127.0.0.1:27017
done :)
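Once the restore completes, a quick way to verify from Python that the data actually arrived is to list the collections of CIS_FYP_db (this assumes the pymongo package is installed; it is not part of the original setup):
from pymongo import MongoClient

# Connect to the local MongoDB instance and list the restored collections.
client = MongoClient("mongodb://localhost:27017/")
print(client["CIS_FYP_db"].list_collection_names())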
I'm trying to connect a python 2.7 script to Azure SQL Data Warehouse.
The coding part is done and the test cases work in our development environment. We're coding in Python 2.7 on macOS and connecting to ADW via ctds.
The problem appears when we deploy on our Azure Kubernetes pod (running Debian 9).
When we try to instantiate a connection this way:
# init a connection
self._connection = ctds.connect(
server='myserver.database.windows.net',
port=1433,
user="my_user#myserver.database.windows.net",
timeout=1200,
password="XXXXXXXX",
database="my_db",
autocommit=True
)
we get an exception that only prints the user name
my_user#myserver.database.windows.net
the type of the exception is
_tds.InterfaceError
The deployed code is exactly the same, and so are the requirements.
The documentation we found for this exception is almost non-existent.
Do you guys recognize it? Do you know how can we go around it?
We also tried our old AWS EC2 instances and AWS Kubernetes cluster (which run the same OS as the Azure ones), and it doesn't work there either.
We managed to connect to ADW via sqlcmd, so that proves the pod can in fact connect (I guess).
EDIT: SOLVED. JUST CHANGED TO PYODBC
import pyodbc

def connection(self):
""":rtype: pyodbc.Connection"""
if self._connection is None:
env = '' # whichever way you have to identify it
# init a connection
driver = '/usr/local/lib/libmsodbcsql.17.dylib' if env == 'dev' else '{ODBC Driver 17 for SQL Server}' # my dev env is MacOS and my prod is Debian 9
connection_string = 'Driver={driver};Server=tcp:{server},{port};Database={db};Uid={user};Pwd={password};Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;'.format(
driver=driver,
server='myserver.database.windows.net',
port=1433,
db='mydb',
user='myuser#myserver',
password='XXXXXXXXXXXX'
)
self._connection = pyodbc.connect(connection_string, autocommit=True)
return self._connection
As Ron says, pyodbc is recommended because it enables you to use a Microsoft-supported ODBC Driver.
I'm going to go ahead and guess that ctds is failing on redirect, and you need to force your server into "proxy" mode. See: Azure SQL Connectivity Architecture
E.g.:
# Get SQL Server ID
sqlserverid=$(az sql server show -n sql-server-name -g sql-server-group --query 'id' -o tsv)
# Set URI
id="$sqlserverid/connectionPolicies/Default"
# Get current connection policy
az resource show --ids $id
# Update connection policy
az resource update --ids $id --set properties.connectionType=Proxy
I am trying to use pywebhdfs module in Python to interact with Hortonworks Hadoop sandbox. I tried the following three commands:
from pywebhdfs.webhdfs import PyWebHdfsClient
hdfs = PyWebHdfsClient(user_name="root",port=50070,host="localhost")
hdfs.make_dir('/newDirectory')
I get the following error on running the last command:
ConnectionError: ('Connection aborted.', error(10035, 'A non-blocking socket operation could not be completed immediately'))
The sandbox is running and I am able to create directories directly on it using Putty. However, it doesn't work through Python.
Can someone help with this error?
I believe 'root' cannot create a directory under the '/' node of HDFS, since the 'root' user is not an HDFS superuser (unless, of course, you changed that).
Could you confirm whether you can create '/newDirectory' as the root user? Alternatively, create the directory under a path where root has permissions, or choose another user.
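For instance, a quick check along those lines might look like this (the target path is an assumption, not from the original post; pywebhdfs paths are given relative to the HDFS root):
from pywebhdfs.webhdfs import PyWebHdfsClient

# Try a location where the user normally has write access; "tmp/newDirectory"
# here means /tmp/newDirectory and is only an assumed example path.
hdfs = PyWebHdfsClient(user_name="root", port=50070, host="localhost")
hdfs.make_dir("tmp/newDirectory")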
I am running a flask app on an AWS EC2 server, and have been using boto to access data stored in dynamoDB. After accidentally adding boto.conf to a git commit (and push and pull on the server), I have found that my python code can no longer locate the boto.conf file. I rolled back the changes with git, but the problem remains.
The python module and boto.conf file exist in the same directory, but when the module calls
boto.config.load_credential_file('boto.conf')
I get the flask error IOError: [Errno 2] No such file or directory: 'boto.conf'.
As per the documentation:
I'm not really sure why you are using boto.config.load_credential_file. In general, the config is picked up from a file called either ~/.boto or /etc/boto.cfg.
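If you do want to keep calling load_credential_file, note that a relative path like 'boto.conf' is resolved against the process's current working directory, not the module's directory, which can easily break under Flask. A small sketch that builds an absolute path instead (assuming the file really does sit next to the module):
import os
import boto

# Resolve boto.conf next to this module instead of relying on the current working
# directory (hypothetical layout; adjust the path if the file lives elsewhere).
conf_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "boto.conf")
boto.config.load_credential_file(conf_path)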
You can also look at this question from SO, which also covers how to get the configuration for boto: Getting Credentials File in the boto.cfg for Python