The Airflow scheduler process crashes when we turn on a DAG and trigger it from the Airflow webserver.
Airflow version: v1.10.4
Redis server: v5.0.7
executor = CeleryExecutor
broker_url = 'redis://:password#redis-host:2287/0'
sql_alchemy_conn = postgresql+psycopg2://user:password#host/dbname
result_backend = 'db+postgresql://user:password#host/dbname'
It crashes with the error message below:
scheduler_job.py:1325} ERROR - Exception when executing execute_helper
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/airflow/jobs/scheduler_job.py", line 1323, in _execute
self._execute_helper()
File "/usr/lib/python2.7/site-packages/airflow/jobs/scheduler_job.py", line 1412, in _execute_helper
self.executor.heartbeat()
File "/usr/lib/python2.7/site-packages/airflow/executors/base_executor.py", line 132, in heartbeat
self.trigger_tasks(open_slots)
File "/usr/lib/python2.7/site-packages/airflow/executors/celery_executor.py", line 203, in trigger_tasks
cached_celery_backend = tasks[0].backend
File "/usr/lib/python2.7/site-packages/celery/local.py", line 146, in __getattr__
return getattr(self._get_current_object(), name)
File "/usr/lib/python2.7/site-packages/celery/app/task.py", line 1037, in backend
return self.app.backend
File "/usr/lib/python2.7/site-packages/kombu/utils/objects.py", line 44, in __get__
value = obj.__dict__[self.__name__] = self.__get(obj)
File "/usr/lib/python2.7/site-packages/celery/app/base.py", line 1223, in backend
return self._get_backend()
File "/usr/lib/python2.7/site-packages/celery/app/base.py", line 940, in _get_backend
self.loader)
File "/usr/lib/python2.7/site-packages/celery/app/backends.py", line 74, in by_url
return by_name(backend, loader), url
File "/usr/lib/python2.7/site-packages/celery/app/backends.py", line 54, in by_name
cls = symbol_by_name(backend, aliases)
File "/usr/lib/python2.7/site-packages/kombu/utils/imports.py", line 57, in symbol_by_name
module = imp(module_name, package=package, **kwargs)
File "/usr/lib64/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
ImportError: No module named 'db
Why is the scheduler crashing when the DAG is triggered?
I've tried running pip install DB, but it didn't solve the issue.
Did you do
$ airflow initdb
before trying to start the webserver?
Also, you seem to be using Python 2.7. Are you sure it is compatible with the version of Airflow you are using?
I was using Python 3.5.2 with the latest Airflow and it did not work for me, so I had to downgrade my Airflow version slightly.
As the error states, you must not have set up your database properly.
Airflow is not compatible with Python 2.7.
Run Airflow with Python 3.6, create the database user, grant it privileges, and then run the command "airflow initdb". This will initialize the Airflow metadata database.
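A minimal sketch of those steps, assuming a local PostgreSQL (every name and password here is a placeholder, not a value from the question):
# hypothetical names; substitute your own database, user, and password
sudo -u postgres psql -c "CREATE USER airflow WITH PASSWORD 'airflow';"
sudo -u postgres psql -c "CREATE DATABASE airflow OWNER airflow;"
sudo -u postgres psql -c "GRANT ALL PRIVILEGES ON DATABASE airflow TO airflow;"
airflow initdb
Point sql_alchemy_conn and result_backend at that database before running initdb.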
I am running a Linux Red Hat environment on AWS.
I have followed every instruction for upgrading sqlite3 to the "latest" version.
I am running Python 3.9.2 (recompiled with LD_RUN_PATH=/usr/local/lib ./configure) and Django version 4.
I have set up a virtual environment to install and run Django, and I have changed its activate script to include export LD_LIBRARY_PATH="/usr/local/lib".
Upon running python manage.py runserver, I get the error django.db.utils.NotSupportedError: deterministic=True requires SQLite 3.8.3 or higher. I have opened the file /home/ec2-user/django/django/db/backends/sqlite3/base.py (where the error occurs) and right after the line with the error I have included a print statement:
print("**************************\n" +
      str(Database.sqlite_version) +
      "\n" + str(Database.sqlite_version_info) +
      "\n**************************")
which returns:
**************************
3.28.0
(3, 28, 0)
**************************
**************************
3.28.0
(3, 28, 0)
**************************
Please let me know what additional information is needed. I have searched up and down the stack and can't find the right solution to pop this one off.
Thank you in advance!
EDIT
Here is the traceback:
Watching for file changes with StatReloader
Performing system checks...
System check identified no issues (0 silenced).
Exception in thread django-main-thread:
Traceback (most recent call last):
File "/home/ec2-user/django/django/db/backends/base/base.py", line 219, in ensure_connection
self.connect()
File "/home/ec2-user/django/django/utils/asyncio.py", 21 in inner
return func(*args, **kwargs)
File "/home/ec2-user/django/django/db/backends/base/base.py", line 200, in connect
self.connection = self.get_new_connection(conn_params)
File "/home/ec2-user/django/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/home/ec2-user/django/django/db/backends/sqlite3/base.py", line 210, in get_new_connection
create_deterministic_function('django_date_extract', 2, _sqlite_datetime_extract)
sqlite3.NotSupportedError: deterministic=True requires SQLite 3.8.3 or higher
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/python/lib/python/3.9/threading.py", line 954, in _bootstrap_inner
self.run()
File "/opt/python39/lib/python3.9/threading.py", line 892, in run
self._target(*self._args, **self._kwargs)
File "/home/ec2-user/django/django/utils/autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "/home/ec2-user/django/django/core/management/commands/runserver.py", line 126, in inner_run
self.check_migrations()
File "/home/ec2-user/django/django/core/management/base.py", line 486, in check_migrations
executor = MigrationExecutor(connections[DEFAULT_DB_ALIAS])
File "/home/ec2-user/django/django/db/migrations/executor.py", line 18, in __init__
self.loader = MigrationLoader(self.connection)
File "/home/ec2-user/django/django/db/migrations/loader.py", line 53, in __init__
self.build_graph()
File "/home/ec2-user/django/django/db/migrations/loader.py", line 220, in build_graph
self.applied_migrations = recorder.applied_migrations()
File "/home/ec2-user/django/django/db/migrations/recorder.py", line 77, in applied_migrations
if self.has_table():
File "/home/ec2-user/django/django/db/migrations/recorder.py", line 55, in has_table
with self.connection.cursor() as cursor:
File "/home/ec2-user/django/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/home/ec2-user/django/django/db/backends/base/base.py", line 259, in cursor
return self._cursor()
File "/home/ec2-user/django/django/db/backends/base/base.py", line 235, in _cursor
self.ensure_connection()
File "/home/ec2-user/django/djanog/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/home/ec2-user/django/django/db/backends/base/base.py", line 219, in ensure_connection
self.connect()
File "/home/ec2-user/django/django/db/utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/home/ec2-user/django/django/db/backends/base/base.py", line 219, in ensure_connection
self.connect()
File "/home/ec2-user/django/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/home/ec2-user/django/django/db/backends/base/base.py", line 200, in connect
self.connection = self.get_new_connection(conn_params)
File "/home/ec2-user/django/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/home/ec2-user/django/django/db/backends/sqlite3/base.py", line 210, in get_new_connection
create_deterministic_function('django_date_extract', 2, _sqlite_datetime_extract)
django.db.utils.NotSupportedError: deterministic=True requires SQLite 3.8.3 or higher
I came across the same issue on Linux (CentOS 7 + Python 3.9.6 + Django 3.2.5).
Although sqlite3 was updated to the latest version, that alone turned out to be useless. One solution is to switch the database driver from sqlite3 to pysqlite3.
After activating the virtualenv, install pysqlite3:
pip3 install pysqlite3
pip3 install pysqlite3-binary
and change the database import in base.py:
vim python3.9.6/site-packages/django/db/backends/sqlite3/base.py
# from sqlite3 import dbapi2 as Database  # comment out the original import
from pysqlite3 import dbapi2 as Database  # use pysqlite3 instead
Restart the Django server and it works.
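As a quick sanity check before restarting Django (a minimal sketch, assuming pysqlite3-binary installed cleanly), you can confirm the bundled SQLite is new enough:
from pysqlite3 import dbapi2 as Database

# Django needs SQLite >= 3.8.3 for deterministic functions
print(Database.sqlite_version)  # expect a version well above 3.8.3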
I got the same problem as yours when I tried to deploy on Elastic Beanstalk.
In my case I had used Python 3.8 when I initialized the EB CLI, like this:
eb init -p python-3.8 django-project ⛔
That is not a good Python version to run on 64bit Amazon Linux 2 (the default). Change to python-3.7:
eb init -p python-3.7 django-project ✅
It can be solved by recompiling Python 3 against the correct SQLite environment. Below are the commands for recompiling:
export C_INCLUDE_PATH=/PATH_TO_SQLITE/include
export CPLUS_INCLUDE_PATH=/PATH_TO_SQLITE/include
export LD_RUN_PATH=/PATH_TO_SQLITE/lib
./configure --prefix=/PATH_FOR_PYTHON
make
make install
Then a check can be done in Python with the commands below:
import sqlite3
conn = sqlite3.connect(':memory:')
conn.create_function('f', 2, lambda *args: None, deterministic=True)
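If the rebuilt interpreter picked up the new library, the create_function call completes silently; on an old SQLite it raises the same NotSupportedError that Django reports.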
The best I can figure at the moment is to go into /home/ec2-user/django/django/db/backends/sqlite3/base.py and change the function argument deterministic=True in get_new_connection() to deterministic=False...
This removes the error, but it seems like a super cheaty solution. If anyone has a better fix, please let me know.
Using PostgreSQL is a good choice, as it supports Python 3.8 and avoids this error. Also, as explained in this question, it is better to use PostgreSQL for production anyway. Having said that, I recommend this tutorial on running Postgres on EB; despite being a bit outdated, it was very useful for me.
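A minimal sketch of that switch (every name and credential below is a placeholder, not taken from the question): install the driver with pip install psycopg2-binary, then point Django at Postgres in settings.py:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'mydb',        # placeholder database name
        'USER': 'myuser',      # placeholder user
        'PASSWORD': 'secret',  # placeholder password
        'HOST': 'localhost',
        'PORT': '5432',
    }
}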
Install pysqlite3:
pip3 install pysqlite3
Open the file /usr/local/python3/lib/python3.8/site-packages/django/db/backends/sqlite3/base.py, find from sqlite3 import dbapi2 as Database, comment it out, and replace it with from pysqlite3 import dbapi2 as Database.
In vim /u01/apps/python3/lib/python3.10/site-packages/django/db/backends/sqlite3/_functions.py, change the function argument deterministic=True to deterministic=False.
Then, in vim /u01/apps/python3/lib/python3.10/site-packages/django/db/backends/base/base.py, comment out the body of the check_database_version_supported method and write pass.
For example:
def check_database_version_supported(self):
    """
    Raise an error if the database version isn't supported by this
    version of Django.
    """
    pass
I am running Celery 4.1.1 on Windows and sending requests to Redis (on Ubuntu). Redis is properly connected and tested from the Windows side. But when I run this command
celery -A acmetelbi worker --loglevel=info
I get this long error:
[tasks]
. accounts.tasks.myprinting
. acmetelbi.celery.debug_task
[2019-08-02 11:46:44,515: CRITICAL/MainProcess] Unrecoverable error: PicklingError("Can't pickle <class 'module'>: attribute lookup module on builtins failed",)
Traceback (most recent call last):
  File "c:\acmedata\virtualenv\bi\lib\site-packages\celery\worker\worker.py", line 205, in start
    self.blueprint.start(self)
  File "c:\acmedata\virtualenv\bi\lib\site-packages\celery\bootsteps.py", line 119, in start
    step.start(parent)
  File "c:\acmedata\virtualenv\bi\lib\site-packages\celery\bootsteps.py", line 370, in start
    return self.obj.start()
  File "c:\acmedata\virtualenv\bi\lib\site-packages\celery\concurrency\base.py", line 131, in start
    self.on_start()
  File "c:\acmedata\virtualenv\bi\lib\site-packages\celery\concurrency\prefork.py", line 112, in on_start
    **self.options)
  File "c:\acmedata\virtualenv\bi\lib\site-packages\billiard\pool.py", line 1007, in __init__
    self._create_worker_process(i)
  File "c:\acmedata\virtualenv\bi\lib\site-packages\billiard\pool.py", line 1116, in _create_worker_process
    w.start()
  File "c:\acmedata\virtualenv\bi\lib\site-packages\billiard\process.py", line 124, in start
    self._popen = self._Popen(self)
  File "c:\acmedata\virtualenv\bi\lib\site-packages\billiard\context.py", line 383, in _Popen
    return Popen(process_obj)
  File "c:\acmedata\virtualenv\bi\lib\site-packages\billiard\popen_spawn_win32.py", line 79, in __init__
    reduction.dump(process_obj, to_child)
  File "c:\acmedata\virtualenv\bi\lib\site-packages\billiard\reduction.py", line 99, in dump
    ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <class 'module'>: attribute lookup module on builtins failed
(bi) C:\acmedata\bi_solution\acmetelbi>Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "c:\acmedata\virtualenv\bi\lib\site-packages\billiard\spawn.py", line 165, in spawn_main
    exitcode = _main(fd)
  File "c:\acmedata\virtualenv\bi\lib\site-packages\billiard\spawn.py", line 207, in _main
    self = pickle.load(from_parent)
EOFError: Ran out of input
I am scratching my head and unable to understand how to fix this. Please help!
My code for creating a task in the Django app:
@task(bind=True)  # bind=True so the task receives self as its first argument
def myprinting(self):
    print("I am task")
and in settings.py:
# Other Celery settings
from celery.schedules import crontab  # import needed for the schedule below

CELERY_BEAT_SCHEDULE = {
    'task-number-one': {
        'task': 'accounts.tasks.myprinting',
        'schedule': crontab(minute='*/30'),
    },
}
After spending many days on research, I have come to the conclusion that Celery has limitations on Windows: if you want to run Celery on Windows, you have to run it with the gevent pool:
python manage.py celery worker -P gevent --loglevel=INFO
Then, once this worker process is running, start celery beat accordingly to begin processing.
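(A hedged aside: the manage.py celery entry point comes from the old django-celery wrapper. If you invoke Celery 4.x directly, as the question does, the equivalent would be:
pip install gevent
celery -A acmetelbi worker -P gevent --loglevel=info
The -P gevent flag replaces the default prefork pool, which is what fails to pickle on Windows, with a gevent pool.)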
So I'm working towards having automated staging deployments via Jenkins and Ansible. Part of this is using a script called ec2.py from Ansible in order to dynamically retrieve a list of matching servers to deploy to.
When SSH-ing into the Jenkins server and running the script as the jenkins user, the script runs as expected. However, running the script from within Jenkins leads to the following error:
ERROR: Inventory script (ec2/ec2.py) had an execution error: Traceback (most recent call last):
File "/opt/bitnami/apps/jenkins/jenkins_home/jobs/Deploy API/workspace/deploy/ec2/ec2.py", line 1262, in <module>
Ec2Inventory()
File "/opt/bitnami/apps/jenkins/jenkins_home/jobs/Deploy API/workspace/deploy/ec2/ec2.py", line 159, in __init__
self.do_api_calls_update_cache()
File "/opt/bitnami/apps/jenkins/jenkins_home/jobs/Deploy API/workspace/deploy/ec2/ec2.py", line 386, in do_api_calls_update_cache
self.get_instances_by_region(region)
File "/opt/bitnami/apps/jenkins/jenkins_home/jobs/Deploy API/workspace/deploy/ec2/ec2.py", line 417, in get_instances_by_region
reservations.extend(conn.get_all_instances(filters = { filter_key : filter_values }))
File "/opt/bitnami/apps/jenkins/jenkins_home/jobs/Deploy API/workspace/deploy/.local/lib/python2.7/site-packages/boto/ec2/connection.py", line 585, in get_all_instances
max_results=max_results)
File "/opt/bitnami/apps/jenkins/jenkins_home/jobs/Deploy API/workspace/deploy/.local/lib/python2.7/site-packages/boto/ec2/connection.py", line 681, in get_all_reservations
[('item', Reservation)], verb='POST')
File "/opt/bitnami/apps/jenkins/jenkins_home/jobs/Deploy API/workspace/deploy/.local/lib/python2.7/site-packages/boto/connection.py", line 1181, in get_list
xml.sax.parseString(body, h)
File "/usr/lib/python2.7/xml/sax/__init__.py", line 43, in parseString
parser = make_parser()
File "/usr/lib/python2.7/xml/sax/__init__.py", line 93, in make_parser
raise SAXReaderNotAvailable("No parsers found", None)
xml.sax._exceptions.SAXReaderNotAvailable: No parsers found
I don't know too much about Python, so I'm not sure how to debug this issue further.
So it turns out the issue had to do with Jenkins overwriting the default LD_LIBRARY_PATH variable. By unsetting that variable before running Python, I was able to make the Python app work!
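In practice that looks something like this (a sketch, assuming an "Execute shell" build step; --list is the standard dynamic-inventory mode of ec2.py):
# undo Jenkins' LD_LIBRARY_PATH override before invoking the inventory script
unset LD_LIBRARY_PATH
python ec2/ec2.py --list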
I'm trying to get into the new N1QL queries for Couchbase in Python.
I got my database set up in Couchbase 4.0.0.
My initial try was to retrieve all documents like this:
from couchbase.bucket import Bucket
bucket = Bucket('couchbase://localhost/default')
rv = bucket.n1ql_query('CREATE PRIMARY INDEX ON default').execute()
for row in bucket.n1ql_query('SELECT * FROM default'):
print row
But this produces an OperationNotSupportedError:
Traceback (most recent call last):
File "/Applications/PyCharm.app/Contents/helpers/pydev/pydevd.py", line 2357, in <module>
globals = debugger.run(setup['file'], None, None, is_module)
File "/Applications/PyCharm.app/Contents/helpers/pydev/pydevd.py", line 1777, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "/Users/my_user/python_tests/test_n1ql.py", line 9, in <module>
rv = bucket.n1ql_query('CREATE PRIMARY INDEX ON default').execute()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/couchbase/n1ql.py", line 215, in execute
for _ in self:
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/couchbase/n1ql.py", line 235, in __iter__
self._start()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/couchbase/n1ql.py", line 180, in _start
self._mres = self._parent._n1ql_query(self._params.encoded)
couchbase.exceptions.NotSupportedError: <RC=0x13[Operation not supported], Couldn't schedule n1ql query, C Source=(src/n1ql.c,82)>
Here the version numbers of everything I use:
Couchbase Server: 4.0.0
couchbase python library: 2.0.2
cbc: 2.5.1
python: 2.7.8
gcc: 4.2.1
Does anyone have an idea what might have gone wrong here? I could not find any solution to this problem so far.
There was another ticket for Node.js where the same issue happened, with a proposal to enable N1QL for the specific bucket first. Is this also needed in Python?
It would seem you didn't configure any cluster nodes with the Query or Index services. As such, the error returned is one that indicates no nodes are available.
I also got a similar error while trying to create a primary index.
Create a primary index...
Traceback (most recent call last):
File "post-upgrade-test.py", line 45, in <module>
mgr.n1ql_index_create_primary(ignore_exists=True)
File "/usr/local/lib/python2.7/dist-packages/couchbase/bucketmanager.py", line 428, in n1ql_index_create_primary
'', defer=defer, primary=True, ignore_exists=ignore_exists)
File "/usr/local/lib/python2.7/dist-packages/couchbase/bucketmanager.py", line 412, in n1ql_index_create
return IxmgmtRequest(self._cb, 'create', info, **options).execute()
File "/usr/local/lib/python2.7/dist-packages/couchbase/_ixmgmt.py", line 160, in execute
return [x for x in self]
File "/usr/local/lib/python2.7/dist-packages/couchbase/_ixmgmt.py", line 144, in __iter__
self._start()
File "/usr/local/lib/python2.7/dist-packages/couchbase/_ixmgmt.py", line 132, in _start
self._cmd, index_to_rawjson(self._index), **self._options)
couchbase.exceptions.NotSupportedError: <RC=0x13[Operation not supported], Couldn't schedule ixmgmt operation, C Source=(src/ixmgmt.c,98)>
Adding query and index nodes to the cluster solved the issue.
I'm trying to install the asterisk_click2dial module on Odoo, and this error comes up in the log file:
ValueError: Routing: posting a message without model should be with a parent_id (private mesage).
2015-03-09 15:23:38,262 11093 ERROR ? werkzeug: Error on request:
Traceback (most recent call last):
File "/home/odoo/odoo/lib/python2.7/site-packages/werkzeug/serving.py", line 177, in run_wsgi
execute(self.server.app)
File "/home/odoo/odoo/lib/python2.7/site-packages/werkzeug/serving.py", line 165, in execute
application_iter = app(environ, start_response)
File "/opt/odoo/openerp/service/server.py", line 281, in app
return self.app(e, s)
File "/opt/odoo/openerp/service/wsgi_server.py", line 216, in application
return application_unproxied(environ, start_response)
File "/opt/odoo/openerp/service/wsgi_server.py", line 202, in application_unproxied
result = handler(environ, start_response)
File "/opt/odoo/openerp/http.py", line 1274, in __call__
self.load_addons()
File "/opt/odoo/openerp/http.py", line 1293, in load_addons
m = __import__('openerp.addons.' + module)
File "/opt/odoo/openerp/modules/module.py", line 79, in load_module
mod = imp.load_module('openerp.addons.' + module_part, f, path, descr)
File "/opt/odoo/addons/base_phone/__init__.py", line 23, in <module>
from . import wizard
File "/opt/odoo/addons/base_phone/wizard/__init__.py", line 23, in <module>
from . import number_not_found
File "/opt/odoo/addons/base_phone/wizard/number_not_found.py", line 25, in <module>
import phonenumbers
ImportError: No module named phonenumbers
The problem is that I installed that module (phonenumbers) plus the py-Asterisk module without errors, using pip install phonenumbers and pip install py-Asterisk, and the error persists.
I noticed I have at least two versions of Python installed (2.6 and 2.7), but both modules are installed for the version Odoo uses (I can see the modules in the python2.7 CLI when, for example, I import phonenumbers).
Does anybody have any idea what is happening? I'd be grateful for a specific response. Thanks.
Here is the connector's page: OpenERP - Asterisk connector
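(As an aside, a quick way to check whether two interpreters are in play is to compare which interpreter is running and where it finds the module; a minimal diagnostic sketch:)
import sys
import phonenumbers

# shows which interpreter is running and where the module was imported from
print(sys.executable)
print(phonenumbers.__file__)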
Ok, my bad. During the Odoo installation I created a virtual environment under the Odoo system user account, used solely by the Odoo server, so I just needed to install these modules inside that environment. This works for me:
First let’s switch from root to the odoo user, then create a new virtual environment called odoo and activate it:
su - odoo
/usr/local/bin/virtualenv --python=/usr/local/bin/python2.7 odoo
source odoo/bin/activate
(If you already have the virtual environment created, like me, just skip the second line.) Before starting the module installation we need to add the path to the PostgreSQL binaries, otherwise the psycopg2 module install will fail (I skipped this one too):
export PATH=/usr/pgsql-9.3/bin:$PATH
Then I can do pip install... Thank you all for your help.
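For completeness, the installs that fixed it for me inside the activated virtualenv (the same packages named in the question):
pip install phonenumbers
pip install py-Asterisk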