I have a Python script which interacts with Cassandra using the DataStax Python driver.
It has been running since March 14th, 2016 and had no problems until today.
2016-06-02 13:53:38,362 ERROR ('Unable to connect to any servers', {'172.16.47.155': OperationTimedOut('errors=Timed out creating connection (5 seconds), last_host=None',)})
2016-06-02 13:54:18,362 ERROR ('Unable to connect to any servers', {'172.16.47.155': OperationTimedOut('errors=Timed out creating connection (5 seconds), last_host=None',)})
Below is the function used to create a session; the session is shut down (session.shutdown()) every time a query is done. (We have fewer than 100 queries per day from the subscriber side, so I chose to build the connection, do the query, and close it rather than keeping the connection alive.)
The session is not shared between threads or processes. If I invoke the function below in a Python console, it connects to the DB properly, but the running script can no longer connect to the DB.
Can anyone help or shed some light on this issue? Thanks.
def get_cassandra_session(stat=None):
    """Creates a cluster and gets a session based on the keyspace."""
    # be aware that a session cannot be shared between threads/processes
    # or it will raise an OperationTimedOut exception
    if config.CLUSTER_HOST2:
        cluster = cassandra.cluster.Cluster([config.CLUSTER_HOST1, config.CLUSTER_HOST2])
    else:
        # if only one address is available, we have to use an older protocol version
        cluster = cassandra.cluster.Cluster([config.CLUSTER_HOST1], protocol_version=2)
    if stat and type(stat) == BatchStatement:
        retry_policy = cassandra.cluster.RetryPolicy()
        retry_policy.on_write_timeout(BatchStatement, ConsistencyLevel, WriteType.BATCH_LOG,
                                      ConsistencyLevel.ONE, ConsistencyLevel.ONE, retry_num=0)
        cluster.default_retry_policy = retry_policy
    session = cluster.connect(config.KEY_SPACE)
    session.default_timeout = 30.0
    return session
Specs:
python 2.7
Cassandra 2.1.11
Quote from the DataStax docs:
The operation took longer than the specified (client-side) timeout to complete. This is not an error generated by Cassandra, only the driver.
The problem is I didn't touch the driver. I set the default timeout to 30.0 seconds, so why did it time out after 5 seconds (as the log says)?
The default connect timeout is five seconds. In this case you would need to set Cluster.connect_timeout. The Session default_timeout applies to execution requests.
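For example, a minimal sketch (assuming a driver version recent enough to accept connect_timeout as a Cluster argument; the host and keyspace are placeholders):
from cassandra.cluster import Cluster

# connect_timeout governs establishing a connection to a node (default 5 seconds);
# default_timeout governs how long an individual request may run.
cluster = Cluster(['172.16.47.155'], connect_timeout=30)
session = cluster.connect('my_keyspace')  # placeholder keyspace
session.default_timeout = 30.0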
It's still a bit surprising when any TCP connection takes more than five seconds to establish. One other thing to check would be monkey patching. Did something in the application change patching for Gevent or Eventlet? That could cause a change in default behavior for the driver.
I've learned that the gevent module interferes with the cassandra-driver
cassandra-driver (3.10)
gevent (1.1.1)
Uninstalling gevent solved the problem for me
pip uninstall gevent
Related
I have a bunch of tasks within a Cloud Composer Airflow DAG, one of which is a KubernetesPodOperator. This task seems to get stuck in the scheduled state forever and so the DAG runs continuously for 15 hours without finishing (it normally takes about an hour). I have to manually mark it failed for it to end.
I've set the DAG timeout to 2 hours but it does not make any difference.
The Cloud Composer logs show the following error:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not connect to server:
Connection refused
Is the server running on host "airflow-sqlproxy-service.default.svc.cluster.local" (10.7.124.107)
and accepting TCP/IP connections on port 3306?
The error log also gives me a link to this documentation about that error type: https://docs.sqlalchemy.org/en/13/errors.html#operationalerror
When the DAG is next triggered on schedule, it works fine without any fix required. This issue happens intermittently; we've not been able to reproduce it.
Does anyone know the cause of this error and how to fix it?
The issue is related to SQLAlchemy using a session per thread and creating a callable session that can be used later in the Airflow code. If there are significant delays between queries within a session, MySQL might close the connection. The connection timeout is set to approximately 10 minutes.
Solutions:
Use the airflow.utils.db.provide_session decorator. This decorator provides a valid session to the Airflow database in the session parameter and closes the session at the end of the function (see the sketch after this list).
Do not use a single long-running function. Instead, move all database queries to separate functions, so that there are multiple functions with the airflow.utils.db.provide_session decorator. In this case, sessions are automatically closed after retrieving query results.
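A rough sketch of the first approach (the function name and query are made up for illustration):
from airflow.utils.db import provide_session
from airflow.models import TaskInstance

@provide_session
def count_task_instances(dag_id, session=None):
    # provide_session injects a fresh SQLAlchemy session and closes it when
    # the function returns, so a stale connection is never reused.
    return session.query(TaskInstance).filter(TaskInstance.dag_id == dag_id).count()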
I have a project running on Python 2.7. The project is old, but it is still necessary to update the database when a request is received. The update process takes time and ends with a timeout. Is there any way to return a JsonResponse/HttpResponse before updating the database so that the timeout doesn't occur? I know it's not logical to do so, but it's a temporary fix.
Also, I can't use async since it's Python 2.
Use multiprocessing or multithreading: this will execute your task in another process or thread and send the HTTP response back to the client quickly.
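A minimal sketch of the threading variant in a Django view (Python 2.7 compatible; update_database and the view name are hypothetical):
import threading
from django.http import JsonResponse

def my_view(request):
    # Run the slow database update in a background thread and respond
    # immediately so the client does not hit the timeout.
    worker = threading.Thread(target=update_database, args=(request.POST,))
    worker.daemon = True  # do not block process shutdown
    worker.start()
    return JsonResponse({'status': 'update scheduled'})
Keep in mind that a daemon thread can be killed if the worker process restarts, so this is only suitable as a temporary fix.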
We are working out a couple of performance issues on one of our web sites, and we have noticed that the command "SET TIME ZONE 'America/Chicago'" is being executed so often that, in a 24-hour period, just under 1 hour (around 4% of total DB CPU resources) is spent running that command.
Note that the "USE_TZ" setting is False, so based on my understanding, everything should be stored as UTC in the database, and only converted in the UI to the local time zone when necessary.
Do you have any ideas on how we can remove this strain on the database server?
For PostgreSQL, Django always sets the time zone: either the server's local zone (when USE_TZ = False) or UTC (when USE_TZ = True). That way Django supports "live switching" of settings.USE_TZ for the PostgreSQL DB backend.
How have you actually determined that this is a bottleneck?
Usually SET TIME ZONE is only called when a connection to the DB is created. Maybe you should use persistent connections by setting settings.DATABASES[...]['CONN_MAX_AGE'] = GREATER_THAN_ZERO (docs); see the settings sketch after the list below. That way connections will be reused and you'll have fewer calls to SET TIME ZONE. But if you use that approach you should also take a closer look at your PostgreSQL configuration:
max_connections should be greater than 1 + the maximum concurrency of your WSGI server + the maximum number of simultaneous cron jobs that use Django (if you have them) + the maximum concurrency of your Celery workers (if you have them) + any other potential sources of connections to Postgres
if you are running a cron job that calls pg_terminate_backend, then make sure that CONN_MAX_AGE is greater than the "idle timeout"
if you are running Postgres on a VPS, then in some cases there might be limits on the number of open sockets
if you are using something like pgbouncer, then it may already be reusing connections
if you are killing the server that serves your Django project with SIGKILL (kill -9), then it may leave some unclosed connections to the DB (but I'm not sure)
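A minimal settings.py sketch of persistent connections (the engine path, database name, and value are illustrative):
# settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'mydb',       # placeholder
        'CONN_MAX_AGE': 600,  # reuse connections for up to 10 minutes
    }
}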
I think this may also happen if you use django.utils.timezone.activate, but I'm not sure of it... This can happen if you call it manually in your code or via middleware.
Another possible explanation: the way you're "profiling" your requests may actually be showing you the time of the whole transaction.
I want to use Python for a 32-bit ODBC connection to a read-only DB.
I've installed
C:\py_comp>c:\Python27\Scripts\pip.exe install pyodbc
Collecting pyodbc
Downloading pyodbc-4.0.19-cp27-cp27m-win32.whl (50kB)
100% |################################| 51kB 1.1MB/s
Installing collected packages: pyodbc
Successfully installed pyodbc-4.0.19
and opened a read-only connection
conn = pyodbc.connect(r'uid=xxxx;pwd=xxxx;DRIVER={Adaptive Server Enterprise};port=1234;server=11.222.333.444;CHARSET=iso_1;db=my_db;readonly=True')
All of the above works fine, but when I try to do a select
cursor.execute("SELECT something FROM atable")
I get an error because it issues a BEGIN TRANSACTION, but it shouldn't do that.
pyodbc.Error: ('ZZZZZ', u"[ZZZZZ] [Sybase][ODBC Driver][Adaptive Server Enterprise]Attempt to BEGIN TRANSACTION in database 'my_db' failed because database is READ ONLY.\n (3906) (SQLExecDirectW)")
Python's DB API 2.0 specifies that, by default, connections should be opened with "autocommit" off so operations on the database are performed within a transaction that must be explicitly committed (or rolled back). This can result in a BEGIN TRANSACTION being sent to the database before each batch of Cursor#execute calls, and that can cause problems with operations that cannot be performed within a transaction (e.g., some DDL statements).
pyodbc allows us to enable "autocommit" on a connection, either by adding it as an argument to the connect call
cnxn = pyodbc.connect(conn_str, autocommit=True)
or by enabling it after the connection is made
cnxn = pyodbc.connect(conn_str)
cnxn.autocommit = True
That can avoid such problems by suppressing the automatic BEGIN TRANSACTION before executing an SQL command.
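Putting it together for the read-only Sybase connection above, a sketch might look like this (the connection-string values are the same placeholders as in the question):
import pyodbc

conn_str = ('uid=xxxx;pwd=xxxx;DRIVER={Adaptive Server Enterprise};'
            'port=1234;server=11.222.333.444;CHARSET=iso_1;db=my_db')
cnxn = pyodbc.connect(conn_str, autocommit=True)  # no implicit BEGIN TRANSACTION
cursor = cnxn.cursor()
cursor.execute("SELECT something FROM atable")
rows = cursor.fetchall()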
When I run celery -A proj inspect active_queues, I see two servers showing the queues they are listening to, and both point to the same default queue named celery. Still, the task issued by the Django app gets executed twice, once by each Celery server.
I can see the transport type is also direct - the default one.
On my local setup the task gets executed once, so I am sure that the task is called only once by my Django app.
What can I be missing here?
OK, I looked up the docs; I think you need to set celerybeat-scheduler in your settings.py, which makes sure tasks are scheduled by a single scheduler.
http://celery.readthedocs.org/en/latest/configuration.html#celerybeat-scheduler
With Redis you can set the database number for the application you are running; using a different database for each app keeps their information separate.
If you are using Django, the configuration is
CELERY_BROKER_VHOST = {number of the database}
If you are not using Django, I believe the configuration is CELERY_REDIS_DB or redis_db, depending on your Celery version.
For instance, your first application could use CELERY_BROKER_VHOST = 1,
your second application CELERY_BROKER_VHOST = 2,
and your local development CELERY_BROKER_VHOST = 99.
http://docs.celeryproject.org/en/latest/userguide/configuration.html#id8
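If the broker vhost setting does not apply to your Celery version, the database number can also be expressed directly in the broker URL. A minimal settings.py sketch, assuming the usual Django integration with the CELERY_ settings namespace (hostname and database numbers are illustrative):
# settings.py for the first application: Redis database 1
CELERY_BROKER_URL = 'redis://localhost:6379/1'
# settings.py for the second application: Redis database 2
# CELERY_BROKER_URL = 'redis://localhost:6379/2'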