I cannot seem to set month_of_year for a Django celerybeat schedule. It keeps throwing an "index out of range" error.
Here is my schedule:
# Annual task to permanently delete all transactions that are older
# than 2 years old.
'annual-transaction-deletion': {
'task': 'project.tasks.annual_transactions_deletion',
'schedule': crontab(hour='2', minute=0, day_of_month=1, month_of_year=1)
}
I have also tried setting month_of_year=[1] and month_of_year='1', with the same result.
The following stack trace is printed in the celerybeat log:
[2012-12-19 09:50:06,403: CRITICAL/MainProcess] celerybeat raised exception <type 'exceptions.IndexError'>: IndexError('list index out of range',)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/celery/apps/beat.py", line 100, in start_scheduler
beat.start()
File "/usr/local/lib/python2.7/dist-packages/celery/beat.py", line 422, in start
interval = self.scheduler.tick()
File "/usr/local/lib/python2.7/dist-packages/celery/beat.py", line 194, in tick
next_time_to_run = self.maybe_due(entry, self.publisher)
File "/usr/local/lib/python2.7/dist-packages/celery/beat.py", line 172, in maybe_due
is_due, next_time_to_run = entry.is_due()
File "/usr/local/lib/python2.7/dist-packages/djcelery/schedulers.py", line 65, in is_due
return self.schedule.is_due(self.last_run_at)
File "/usr/local/lib/python2.7/dist-packages/celery/schedules.py", line 502, in is_due
rem_delta = self.remaining_estimate(last_run_at)
File "/usr/local/lib/python2.7/dist-packages/celery/schedules.py", line 489, in remaining_estimate
next_hour, next_minute)
File "/usr/local/lib/python2.7/dist-packages/celery/schedules.py", line 389, in _delta_to_next
roll_over()
File "/usr/local/lib/python2.7/dist-packages/celery/schedules.py", line 372, in roll_over
months_of_year[datedata.moy],
IndexError: list index out of range
I have many other schedules set up that work fine, but none of them use month_of_year. The above schedule only needs to run once a year. I can't help feeling this is a bug in the celery library, but I'm keen for someone to prove me wrong. Obviously, if it is a bug, I don't want to resort to modifying the library files to fix it. Any help is much appreciated.
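One hedged workaround while the month_of_year handling is in question: drop month_of_year from the entry, fire the task monthly, and let the task itself decide whether this is the annual run. A minimal sketch (the January check and the task body are my assumptions; the key and task name mirror the schedule above):
from celery.schedules import crontab

CELERYBEAT_SCHEDULE = {
    'annual-transaction-deletion': {
        'task': 'project.tasks.annual_transactions_deletion',
        # Fires at 02:00 on the 1st of every month; no month_of_year needed.
        'schedule': crontab(hour=2, minute=0, day_of_month=1),
    },
}

# project/tasks.py, decorated the same way as the other working tasks:
import datetime

def annual_transactions_deletion():
    if datetime.date.today().month != 1:
        return  # skip the eleven non-January runs
    # ... permanently delete transactions older than two years ...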
Related
After upgrading Django and Django-Q and deploying the project, I got the following log.
Is there a way to avoid this error but still keep the tasks running in the queue, so there is no downtime?
08:59:03 [Q] INFO Process-1:440 pushing tasks at 12192
Process Process-1:440:
Traceback (most recent call last):
File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/home/ubuntu/py34env/lib/python3.6/site-packages/django_q/cluster.py", line 340, in pusher
task = SignedPackage.loads(task[1])
File "/home/ubuntu/py34env/lib/python3.6/site-packages/django_q/signing.py", line 31, in loads
serializer=PickleSerializer)
File "/home/ubuntu/py34env/lib/python3.6/site-packages/django_q/core_signing.py", line 36, in loads
return serializer().loads(data)
File "/home/ubuntu/py34env/lib/python3.6/site-packages/django_q/signing.py", line 44, in loads
return pickle.loads(data)
AttributeError: Can't get attribute 'simple_class_factory' on <module 'django.db.models.base' from '/home/ubuntu/py34env/lib/python3.6/site-packages/django/db/models/base.py'>
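The queued payloads were pickled against the old Django, and they reference simple_class_factory, which no longer exists in django.db.models.base after the upgrade. The cleanest way to avoid the error is to stop enqueuing, let the cluster drain the queue on the old versions, and only then deploy the upgrade. If old payloads must still be processed after the upgrade, one hedged stopgap is to re-expose a stub before the cluster starts unpickling; this is a sketch under the assumption that the missing factory is the only removed internal those payloads touch (other pickled references to changed Django internals can still fail):
# Placed in a module that is imported before the django_q cluster starts
# (for example the project package's __init__); the placement is an assumption.
import django.db.models.base as models_base

if not hasattr(models_base, 'simple_class_factory'):
    def simple_class_factory(model, attrs):
        # Stub standing in for the removed helper; the old helper returned
        # the model unchanged, so the stub does the same.
        return model
    models_base.simple_class_factory = simple_class_factory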
I am running Celery 4.1.1 on Windows and sending requests to Redis (on Ubuntu). Redis is properly connected and tested from the Windows side. But when I run this command:
celery -A acmetelbi worker --loglevel=info
I get this long error:
[tasks]
. accounts.tasks.myprinting
. acmetelbi.celery.debug_task
[2019-08-02 11:46:44,515: CRITICAL/MainProcess] Unrecoverable error: PicklingError("Can't pickle <class 'module'>: attribute lookup module on builtins failed",)
Traceback (most recent call last):
File "c:\acmedata\virtualenv\bi\lib\site-packages\celery\worker\worker.py", line 205, in start
self.blueprint.start(self)
File "c:\acmedata\virtualenv\bi\lib\site-packages\celery\bootsteps.py", line 119, in start
step.start(parent)
File "c:\acmedata\virtualenv\bi\lib\site-packages\celery\bootsteps.py", line 370, in start
return self.obj.start()
File "c:\acmedata\virtualenv\bi\lib\site-packages\celery\concurrency\base.py", line 131, in start
self.on_start()
File "c:\acmedata\virtualenv\bi\lib\site-packages\celery\concurrency\prefork.py", line 112, in on_start
**self.options)
File "c:\acmedata\virtualenv\bi\lib\site-packages\billiard\pool.py", line 1007, in __init__
self._create_worker_process(i)
File "c:\acmedata\virtualenv\bi\lib\site-packages\billiard\pool.py", line 1116, in _create_worker_process
w.start()
File "c:\acmedata\virtualenv\bi\lib\site-packages\billiard\process.py", line 124, in start
self._popen = self._Popen(self)
File "c:\acmedata\virtualenv\bi\lib\site-packages\billiard\context.py", line 383, in _Popen
return Popen(process_obj)
File "c:\acmedata\virtualenv\bi\lib\site-packages\billiard\popen_spawn_win32.py", line 79, in __init__
reduction.dump(process_obj, to_child)
File "c:\acmedata\virtualenv\bi\lib\site-packages\billiard\reduction.py", line 99, in dump
ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <class 'module'>: attribute lookup module on builtins failed
(bi) C:\acmedata\bi_solution\acmetelbi>Traceback (most recent call last):
File "<string>", line 1, in <module>
File "c:\acmedata\virtualenv\bi\lib\site-packages\billiard\spawn.py", line 165, in spawn_main
exitcode = _main(fd)
File "c:\acmedata\virtualenv\bi\lib\site-packages\billiard\spawn.py", line 207, in _main
self = pickle.load(from_parent)
EOFError: Ran out of input
I am scratching my head and unable to understand how to fix this. Please help!
My code for creating a task in the Django app:
@task()
def myprinting():
    print("I am task")
and in settings.py:
# Other Celery settings
CELERY_BEAT_SCHEDULE = {
    'task-number-one': {
        'task': 'accounts.tasks.myprinting',
        'schedule': crontab(minute='*/30'),
    },
}
After spending many days on research, I have come to the conclusion that Celery has limitations on Windows: if you want to run Celery on Windows, you have to run it with the gevent pool:
python manage.py celery worker -P gevent --loglevel=INFO
and then, after this worker process is running, start celery beat as well so the scheduled tasks actually get processed.
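For reference, with the plain Celery 4.x CLI used in the question (rather than the old manage.py wrapper), the equivalent would be roughly the following; gevent has to be installed first, and acmetelbi is the project name from the question:
pip install gevent
celery -A acmetelbi worker -P gevent --loglevel=info
celery -A acmetelbi beat --loglevel=info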
I'm trying to get into the new N1QL Queries for Couchbase in Python.
I got my database set up in Couchbase 4.0.0.
My initial try was to retrieve all documents like this:
from couchbase.bucket import Bucket

bucket = Bucket('couchbase://localhost/default')
rv = bucket.n1ql_query('CREATE PRIMARY INDEX ON default').execute()
for row in bucket.n1ql_query('SELECT * FROM default'):
    print row
But this produces a NotSupportedError:
Traceback (most recent call last):
File "/Applications/PyCharm.app/Contents/helpers/pydev/pydevd.py", line 2357, in <module>
globals = debugger.run(setup['file'], None, None, is_module)
File "/Applications/PyCharm.app/Contents/helpers/pydev/pydevd.py", line 1777, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "/Users/my_user/python_tests/test_n1ql.py", line 9, in <module>
rv = bucket.n1ql_query('CREATE PRIMARY INDEX ON default').execute()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/couchbase/n1ql.py", line 215, in execute
for _ in self:
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/couchbase/n1ql.py", line 235, in __iter__
self._start()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/couchbase/n1ql.py", line 180, in _start
self._mres = self._parent._n1ql_query(self._params.encoded)
couchbase.exceptions.NotSupportedError: <RC=0x13[Operation not supported], Couldn't schedule n1ql query, C Source=(src/n1ql.c,82)>
Here are the version numbers of everything I use:
Couchbase Server: 4.0.0
couchbase python library: 2.0.2
cbc: 2.5.1
python: 2.7.8
gcc: 4.2.1
Does anyone have an idea what might have gone wrong here? I could not find any solution to this problem so far.
There was another ticket for Node.js where the same issue happened. There was a proposal to enable N1QL on the specific bucket first. Is this also needed in Python?
It would seem you didn't configure any cluster nodes with the Query or Index services. As such, the error returned is one that indicates no nodes are available.
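One way to verify this (the credentials and host are assumptions, adjust them to your setup) is to ask the cluster REST API which services each node runs:
curl -u Administrator:password http://localhost:8091/pools/default | python -m json.tool
Each entry under "nodes" has a "services" list; running N1QL queries requires at least one node advertising "n1ql" and one advertising "index" (they can be the same node).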
I also got a similar error while trying to create a primary index.
Create a primary index...
Traceback (most recent call last):
File "post-upgrade-test.py", line 45, in <module>
mgr.n1ql_index_create_primary(ignore_exists=True)
File "/usr/local/lib/python2.7/dist-packages/couchbase/bucketmanager.py", line 428, in n1ql_index_create_primary
'', defer=defer, primary=True, ignore_exists=ignore_exists)
File "/usr/local/lib/python2.7/dist-packages/couchbase/bucketmanager.py", line 412, in n1ql_index_create
return IxmgmtRequest(self._cb, 'create', info, **options).execute()
File "/usr/local/lib/python2.7/dist-packages/couchbase/_ixmgmt.py", line 160, in execute
return [x for x in self]
File "/usr/local/lib/python2.7/dist-packages/couchbase/_ixmgmt.py", line 144, in __iter__
self._start()
File "/usr/local/lib/python2.7/dist-packages/couchbase/_ixmgmt.py", line 132, in _start
self._cmd, index_to_rawjson(self._index), **self._options)
couchbase.exceptions.NotSupportedError: <RC=0x13[Operation not supported], Couldn't schedule ixmgmt operation, C Source=(src/ixmgmt.c,98)>
Adding a node with the query and index services to the cluster solved the issue.
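After that, the same call from the traceback above goes through; a minimal sketch, assuming the localhost cluster and default bucket from the earlier example:
from couchbase.bucket import Bucket

bucket = Bucket('couchbase://localhost/default')
mgr = bucket.bucket_manager()
# Succeeds once at least one node runs the Index and Query services.
mgr.n1ql_index_create_primary(ignore_exists=True)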
I'm inserting 5000 records at once into Elasticsearch.
The total size of these records is 33936 (I got this using sys.getsizeof()).
Elastic Search version: 1.5.0
Python 2.7
Ubuntu
Here is the error:
Traceback (most recent call last):
File "run_indexing.py", line 67, in <module>
index_policy_content(datatable, source, policyids)
File "run_indexing.py", line 60, in index_policy_content
bulk(elasticsearch_instance, actions)
File "/usr/local/lib/python2.7/dist-packages/elasticsearch/helpers.py", line 148, in bulk
for ok, item in streaming_bulk(client, actions, **kwargs):
File "/usr/local/lib/python2.7/dist-packages/elasticsearch/helpers.py", line 107, in streaming_bulk
resp = client.bulk(bulk_actions, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/elasticsearch/client/utils.py", line 70, in _wrapped
return func(*args, params=params, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/elasticsearch/client/__init__.py", line 568, in bulk
params=params, body=self._bulk_body(body))
File "/usr/local/lib/python2.7/dist-packages/elasticsearch/transport.py", line 259, in perform_request
body = body.encode('utf-8')
MemoryError
Please help me resolve the issue.
Thanks & Regards,
Afroze
If I had to guess, I'd say this memory error is happening within python as it loads and serializes its data. Try cutting way back on the batch sizes until you get something that works, and then binary search upward until it fails again. That should help you figure out a safe batch size to use.
(Other useful information you might want to include: amount of memory in the server you're running your python process on, amount of memory for your elasticsearch server node(s), amount of heap allocated to Java.)
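As a concrete starting point for that batch-size search: the bulk helper already chunks the actions for you, and its chunk_size argument (default 500) controls how many actions are serialized and sent per request. A minimal sketch, assuming the same elasticsearch_instance and actions as in the question:
from elasticsearch.helpers import bulk

# Smaller chunks mean a smaller request body to serialize at once;
# halve chunk_size until the MemoryError disappears, then work back up.
bulk(elasticsearch_instance, actions, chunk_size=250)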
So I have a Django site that is giving me this AmbiguousTimeError. I have a job that activates when a product is saved and is given a brief delay (countdown) before updating my search index. It looks like an update was made during the Daylight Saving Time fall-back hour, and pytz cannot figure out what to do with it.
How can I prevent this from happening the next time the hour shifts for DST?
[2012-11-06 14:22:52,115: ERROR/MainProcess] Unrecoverable error: AmbiguousTimeError(datetime.datetime(2012, 11, 4, 1, 11, 4, 335637),)
Traceback (most recent call last):
File "/usr/local/lib/python2.6/dist-packages/celery/worker/__init__.py", line 353, in start
component.start()
File "/usr/local/lib/python2.6/dist-packages/celery/worker/consumer.py", line 369, in start
self.consume_messages()
File "/usr/local/lib/python2.6/dist-packages/celery/worker/consumer.py", line 842, in consume_messages
self.connection.drain_events(timeout=10.0)
File "/usr/local/lib/python2.6/dist-packages/kombu/connection.py", line 191, in drain_events
return self.transport.drain_events(self.connection, **kwargs)
File "/usr/local/lib/python2.6/dist-packages/kombu/transport/virtual/__init__.py", line 760, in drain_events
self._callbacks[queue](message)
File "/usr/local/lib/python2.6/dist-packages/kombu/transport/virtual/__init__.py", line 465, in _callback
return callback(message)
File "/usr/local/lib/python2.6/dist-packages/kombu/messaging.py", line 485, in _receive_callback
self.receive(decoded, message)
File "/usr/local/lib/python2.6/dist-packages/kombu/messaging.py", line 457, in receive
[callback(body, message) for callback in callbacks]
File "/usr/local/lib/python2.6/dist-packages/celery/worker/consumer.py", line 560, in receive_message
self.strategies[name](message, body, message.ack_log_error)
File "/usr/local/lib/python2.6/dist-packages/celery/worker/strategy.py", line 25, in task_message_handler
delivery_info=message.delivery_info))
File "/usr/local/lib/python2.6/dist-packages/celery/worker/job.py", line 120, in __init__
self.eta = tz_to_local(maybe_iso8601(eta), self.tzlocal, tz)
File "/usr/local/lib/python2.6/dist-packages/celery/utils/timeutils.py", line 52, in to_local
dt = make_aware(dt, orig or self.utc)
File "/usr/local/lib/python2.6/dist-packages/celery/utils/timeutils.py", line 211, in make_aware
return localize(dt, is_dst=None)
File "/usr/local/lib/python2.6/dist-packages/pytz/tzinfo.py", line 349, in localize
raise AmbiguousTimeError(dt)
AmbiguousTimeError: 2012-11-04 01:11:04.335637
EDIT: I fixed it temporarily with this code in celery:
celery/worker/job.py # line 120
try:
    self.eta = tz_to_local(maybe_iso8601(eta), self.tzlocal, tz)
except Exception:
    # Swallow the AmbiguousTimeError and run the task immediately instead.
    self.eta = None
I don't want to have changes in a pip-installed app, so I need to fix what I can in my own code.
This runs when I save my model:
self.task_cls.apply_async(
    args=[action, get_identifier(instance)],
    countdown=15
)
I'm assuming that I need to somehow detect whether I'm in the ambiguous time and adjust the countdown.
I think I'm going to have to clear the tasks to fix this, but how can I prevent it from happening the next time the hour shifts for DST?
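On the detection idea: pytz can tell you whether a wall-clock time is ambiguous, so a guard before calling apply_async is possible. A minimal sketch; the timezone name is an assumption, substitute your own TIME_ZONE:
import pytz
from datetime import datetime, timedelta

LOCAL_TZ = pytz.timezone('America/Chicago')  # assumption: the site's local zone

def eta_is_ambiguous(seconds_from_now=15):
    # True if "now + countdown" lands in the repeated fall-back hour.
    candidate = datetime.now() + timedelta(seconds=seconds_from_now)
    try:
        LOCAL_TZ.localize(candidate, is_dst=None)  # raises if ambiguous
        return False
    except pytz.exceptions.AmbiguousTimeError:
        return True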
It's not clear what you're doing (you haven't shown any code), but basically you need to take into account the way the world works. You can't avoid having ambiguous times when you convert from local time to UTC (or to a different zone's local time) when the clocks go back an hour.
Likewise you ought to be aware that there are "gap" or "impossible" times, where a reasonable-sounding local time simply doesn't occur.
I don't know what options Python gives you, but ideally an API should let you resolve ambiguous times however you want - whether that's throwing an error, giving you the earlier occurrence, the later occurrence, or something else.
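pytz does give you exactly that choice, via the is_dst argument to localize(); a small illustration using the timestamp from the traceback (the timezone is again an assumption):
import pytz
from datetime import datetime

tz = pytz.timezone('America/Chicago')  # assumption: the site's local zone
ambiguous = datetime(2012, 11, 4, 1, 11, 4, 335637)

# tz.localize(ambiguous, is_dst=None)  # would raise AmbiguousTimeError
earlier = tz.localize(ambiguous, is_dst=True)   # the earlier occurrence, still on DST
later = tz.localize(ambiguous, is_dst=False)    # the later occurrence, standard time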
Apparently, Celery solved this issue:
https://github.com/celery/celery/issues/1061