I want to use Python for a 32-bit ODBC connection to a read-only database.
I've installed pyodbc:
C:\py_comp>c:\Python27\Scripts\pip.exe install pyodbc
Collecting pyodbc
Downloading pyodbc-4.0.19-cp27-cp27m-win32.whl (50kB)
100% |################################| 51kB 1.1MB/s
Installing collected packages: pyodbc
Successfully installed pyodbc-4.0.19
and opened a read-only connection
conn = pyodbc.connect(r'uid=xxxx;pwd=xxxx;DRIVER={Adaptive Server Enterprise};port=1234;server=11.222.333.444;CHARSET=iso_1;db=my_db;readonly=True')
All of the above works fine, but when I try to run a SELECT
cursor.execute("SELECT something FROM atable")
I get an error because the driver issues a BEGIN TRANSACTION, which it should not do against a read-only database:
pyodbc.Error: ('ZZZZZ', u"[ZZZZZ] [Sybase][ODBC Driver][Adaptive Server Enterprise]Attempt to BEGIN TRANSACTION in database 'my_db' failed because database is READ ONLY.\n (3906) (SQLExecDirectW)")
Python's DB API 2.0 specifies that, by default, connections should be opened with "autocommit" off so operations on the database are performed within a transaction that must be explicitly committed (or rolled back). This can result in a BEGIN TRANSACTION being sent to the database before each batch of Cursor#execute calls, and that can cause problems with operations that cannot be performed within a transaction (e.g., some DDL statements).
pyodbc allows us to enable "autocommit" on a connection, either by adding it as an argument to the connect call
cnxn = pyodbc.connect(conn_str, autocommit=True)
or by enabling it after the connection is made
cnxn = pyodbc.connect(conn_str)
cnxn.autocommit = True
That can avoid such problems by suppressing the automatic BEGIN TRANSACTION before executing an SQL command.
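Applied to the connection from the question, a minimal sketch (same placeholder credentials, database, and table name) would be:
import pyodbc

# same placeholder DSN values as in the question; autocommit=True suppresses
# the implicit BEGIN TRANSACTION that the read-only database rejects
conn_str = (r'uid=xxxx;pwd=xxxx;DRIVER={Adaptive Server Enterprise};'
            r'port=1234;server=11.222.333.444;CHARSET=iso_1;db=my_db;readonly=True')

cnxn = pyodbc.connect(conn_str, autocommit=True)
cursor = cnxn.cursor()
cursor.execute("SELECT something FROM atable")
rows = cursor.fetchall()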
Related
I want to create an in-memory SQLite DB. I would like to make two connections to this in-memory DB, one to make modifications and the other to read the DB. The modifier connection would open a transaction and continue to make modifications to the DB until a specific event occurs, at which point it would commit the transaction. The other connection would run SELECT queries reading the DB. I do not want the changes that are being made by the modifier connection to be visible to the reader connection until the modifier has committed (the specified event has occurred). I would like to isolate the reader's connection from the writer's connection.
I am writing my application in C++. I have tried opening two connections like the following:
int rc1 = sqlite3_open_v2("file:db1?mode=memory", pModifyDb, SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE | SQLITE_OPEN_FULLMUTEX | SQLITE_OPEN_URI, NULL);
int rc2 = sqlite3_open_v2("file:db1?mode=memory", pReaderDb, SQLITE_OPEN_READONLY | SQLITE_OPEN_FULLMUTEX | SQLITE_OPEN_URI, NULL);
I have created a table, added some rows and committed the transaction to the DB using 'pModifyDb'. When I try to retrieve the values using the second connection 'pReaderDb' by calling sqlite3_exec(), I receive a return code of 1 (SQLITE_ERROR).
I've tried specifying the URI as "file:db1?mode=memory&cache=shared". I am not sure whether the 'cache=shared' option still preserves isolation. In any case, that did not work either: when the reader connection tried to exec a SELECT query, the return code was 6 (SQLITE_LOCKED). Maybe that is because the shared-cache option unified both connections under the hood?
If I remove the in-memory requirement from the URI, by using "file:db1" instead, everything works fine. I do not want to use a file-based DB because I require high throughput, and the size of the DB won't be very large (~10 MB).
So, how can I set up two isolated connections to a single SQLite in-memory DB?
Thanks in advance,
kris
This is not possible with an in-memory DB.
You have to use a database file.
To speed it up, put it on a RAM disk (if possible), and disable synchronous writes (PRAGMA synchronous=off) in every connection.
To allow a reader and a writer at the same time, you have to put the DB file into WAL mode.
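The question's code is C++, but as a minimal sketch of that setup in Python's sqlite3 module (the RAM-disk path is hypothetical; the PRAGMAs behave the same through the C API):
import sqlite3

db_path = "/mnt/ramdisk/db1.sqlite"   # hypothetical RAM-disk location

writer = sqlite3.connect(db_path)
reader = sqlite3.connect(db_path)

writer.execute("PRAGMA journal_mode=WAL")    # WAL: one writer plus concurrent readers
for conn in (writer, reader):
    conn.execute("PRAGMA synchronous=OFF")   # skip fsync for higher throughput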
This seems to be possible since version 3.7.13 (2012-06-11):
Enabling shared-cache for an in-memory database allows two or more database connections in the same process to have access to the same in-memory database. An in-memory database in shared cache is automatically deleted and memory is reclaimed when the last connection to that database closes.
Docs
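As a sketch of what the quoted shared-cache behavior enables (again in Python for brevity; the same URI works from the C API when SQLITE_OPEN_URI is set):
import sqlite3

uri = "file:db1?mode=memory&cache=shared"   # one named in-memory DB shared by both connections

writer = sqlite3.connect(uri, uri=True)
reader = sqlite3.connect(uri, uri=True)

writer.execute("CREATE TABLE t (x INTEGER)")
writer.execute("INSERT INTO t VALUES (1)")   # opens the writer's transaction
writer.commit()                              # only now is the row visible to the reader

print(reader.execute("SELECT x FROM t").fetchall())   # [(1,)]
Note that shared-cache connections use table-level locks rather than per-connection snapshots, so a SELECT against a table the writer has modified inside an open transaction can still return SQLITE_LOCKED, which matches what the question observed; the rows only become readable once the writer commits.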
I'm submitting a series of SQL statements (each terminated with GO in sqlcmd) that I want sqlcmd to make a reasonable attempt to run against an Azure SQL Data Warehouse, and I've found out how to make sqlcmd ignore errors and keep going. But I've seen that if I abort one statement in that sequence with:
KILL "SIDxxxxxxx";
The whole session ends:
Msg 111202, Level 16, State 1, Server adws_database, Line 1
111202;Query QIDyyyyyyyyyy has been cancelled.
Is there a way to cancel a query without ending the whole session in Azure SQL Data Warehouse, similar to how Postgres's pg_cancel_backend() works?
In Postgres, pg_terminate_backend(<pid>) seems to behave similarly to the ADW KILL 'SIDxxxx' command.
Yes, a client can cancel a running request without aborting the whole session. In SSMS this is what the red square does during query execution.
Sqlcmd doesn't expose any way to cancel a running request, though. Other client interfaces do; for example, with the .NET SqlClient you can use SqlCommand.Cancel().
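Purely as an illustrative sketch in Python (the language used elsewhere in this document) rather than the .NET approach above: pyodbc's Cursor.cancel() asks the ODBC driver to cancel the running statement and can be called from another thread. Whether Azure SQL Data Warehouse cancels the request cleanly without dropping the session this way is an assumption, not something confirmed here.
import threading
import pyodbc

# placeholder connection string and query; cancellation behavior is driver-dependent
cnxn = pyodbc.connect("DSN=my_dw;UID=xxxx;PWD=xxxx", autocommit=True)
cursor = cnxn.cursor()

# ask the driver to cancel the running statement after 10 seconds
timer = threading.Timer(10.0, cursor.cancel)
timer.start()

try:
    cursor.execute("SELECT COUNT(*) FROM some_big_table")   # hypothetical long-running query
    print(cursor.fetchone())
finally:
    timer.cancel()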
David
I have a Python script that interacts with Cassandra using the DataStax Python driver.
It had been running since March 14th, 2016 with no problems until today:
2016-06-02 13:53:38,362 ERROR ('Unable to connect to any servers', {'172.16.47.155': OperationTimedOut('errors=Timed out creating connection (5 seconds), last_host=None',)})
2016-06-02 13:54:18,362 ERROR ('Unable to connect to any servers', {'172.16.47.155': OperationTimedOut('errors=Timed out creating connection (5 seconds), last_host=None',)})
Below is the function used to create a session; the session is shut down (session.shutdown()) every time a query is done. (We get fewer than 100 queries per day from the subscriber side, so I chose to build the connection, run the query, and close it instead of leaving the connection alive.)
The session is not shared between threads or processes. If I invoke the function below in a Python console, it connects to the DB properly, but the running script can no longer connect to the DB.
Can anyone help or shed some light on this issue? Thanks.
# import locations below assume cassandra-driver 3.x; config is the script's own settings module
import cassandra.cluster
from cassandra import ConsistencyLevel, WriteType
from cassandra.query import BatchStatement

import config


def get_cassandra_session(stat=None):
    """Creates the cluster and gets the session based on the key space."""
    # be aware that the session cannot be shared between threads/processes
    # or it will raise an OperationTimedOut exception
    if config.CLUSTER_HOST2:
        cluster = cassandra.cluster.Cluster([config.CLUSTER_HOST1, config.CLUSTER_HOST2])
    else:
        # if only one address is available, we have to use an older protocol version
        cluster = cassandra.cluster.Cluster([config.CLUSTER_HOST1], protocol_version=2)

    if stat and type(stat) == BatchStatement:
        retry_policy = cassandra.cluster.RetryPolicy()
        retry_policy.on_write_timeout(BatchStatement, ConsistencyLevel, WriteType.BATCH_LOG,
                                      ConsistencyLevel.ONE, ConsistencyLevel.ONE, retry_num=0)
        cluster.default_retry_policy = retry_policy

    session = cluster.connect(config.KEY_SPACE)
    session.default_timeout = 30.0
    return session
Specs:
Python 2.7
Cassandra 2.1.11
A quote from the DataStax docs:
The operation took longer than the specified (client-side) timeout to complete. This is not an error generated by Cassandra, only the driver.
The problem is that I didn't touch the driver. I set the default timeout to 30.0 seconds, so why did it time out after 5 seconds (as the log says)?
The default connect timeout is five seconds. In this case you would need to set Cluster.connect_timeout. The Session default_timeout applies to execution requests.
It's still a bit surprising when any TCP connection takes more than five seconds to establish. One other thing to check would be monkey patching. Did something in the application change patching for Gevent or Eventlet? That could cause a change in default behavior for the driver.
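A minimal sketch of that change, assuming a cassandra-driver version whose Cluster constructor accepts connect_timeout (the keyspace name is a placeholder; the host is the one from the log above):
from cassandra.cluster import Cluster

# connect_timeout (seconds) governs establishing connections to hosts;
# Session.default_timeout governs individual execute() requests
cluster = Cluster(['172.16.47.155'], connect_timeout=30)   # default is 5 seconds
session = cluster.connect('my_keyspace')                   # placeholder keyspace
session.default_timeout = 30.0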
I've learned that the gevent module interferes with the cassandra-driver
cassandra-driver (3.10)
gevent (1.1.1)
Uninstalling gevent solved the problem for me
pip uninstall gevent
I'm writing a C++ application that uses MySQL C API to connect to the database. MySQL server version is 5.6.19-log.
I need to run several SQL UPDATE, INSERT and DELETE statements in one transaction to make sure that either all changes or no changes are applied.
I found in the docs functions mysql_commit() and mysql_rollback() that finish the transaction (commit it or roll it back), but I can't find a corresponding function that starts a transaction.
Is there such a function? Am I missing something obvious?
I run UPDATE, INSERT and DELETE statements using mysql_real_query() function.
I guess I should be able to start a transaction by running a START TRANSACTION SQL statement through the same mysql_real_query() function, and then commit it by running a COMMIT SQL statement the same way.
But then, what is the point of having dedicated mysql_commit() and mysql_rollback() functions in the API?
It looks like the MySQL C API indeed doesn't have a dedicated function equivalent to the START TRANSACTION SQL statement.
The API has a mysql_commit() function that does the same as the COMMIT SQL statement, and a mysql_rollback() function that does the same as ROLLBACK.
But there is no function for starting a transaction.
// connect to the MySQL server:
MYSQL *mysql = mysql_init(NULL);
mysql = mysql_real_connect(mysql, ......);

// either turn off autocommit, so the next statement implicitly starts a transaction
mysql_autocommit(mysql, false);

// OR start a transaction directly (mysql_real_query also expects the statement length):
mysql_real_query(mysql, "BEGIN", 5);

// run your statements:
mysql_real_query(mysql, "UPDATE...", 9);

// commit your transaction
mysql_real_query(mysql, "COMMIT", 6);
I'm using the django-redis and django_rq frameworks to support both Redis caching and Redis background task processing for my Django application on Heroku. It worked smoothly in the past; however, now I keep getting DatabaseError: SSL error: decryption failed or bad record mac every time one of my jobs runs.
I read that this error commonly occurs with Postgres (it's covered in the https://devcenter.heroku.com/articles/postgres-logs-errors article), but that hasn't really given me anything useful for my Python setup.
The problem is solved by closing the DB connection at the beginning of each job.
For example,
from django_rq import job

@job
def some_job():
    # close the (possibly stale) connection so Django opens a fresh one
    from django.db import connection
    connection.close()

    some_more_code()