I am using the Cloud9 online IDE for Python development.
Here's my code:
from celery import Celery
from celery.schedules import crontab
from datetime import timedelta

RESULT_URL = 'mongodb://********'
BROKER_URL = 'redis://*********'

app = Celery('tasks', backend=RESULT_URL, broker=BROKER_URL)

CELERY_TIMEZONE = 'UTC'

CELERYBEAT_SCHEDULE = {
    'add-every-30-seconds': {
        'task': 'tasks.add',
        'schedule': timedelta(seconds=30),
        'args': (16, 16)
    },
}

@app.task
def add(x, y):
    print(x + y)
    return x + y
And I am starting it with the command:
celery -A tasks worker --loglevel=info --beat
Celery starts OK, but then all activity stops. Manually invoked tasks work fine.
Here's the console log:
[2015-04-16 07:53:30,954: INFO/Beat] beat: Starting...
[2015-04-16 07:53:32,696: INFO/MainProcess] Connected to redis://*******
[2015-04-16 07:53:34,722: INFO/MainProcess] mingle: searching for neighbors
[2015-04-16 07:53:37,685: INFO/MainProcess] mingle: all alone
[2015-04-16 07:53:40,343: WARNING/MainProcess] celery@*****-demo-project-563148 ready.
I am using the RedisLabs free tier as the broker and a self-hosted MongoDB instance as the result backend. Where am I going wrong?
My task:
@shared_task
def my_test():
    # Executes every 2 minutes
    UserStatisticStatus.objects.filter(id=1).update(loot_boxes=+234)
    print('Hello from celery')

app.conf.beat_schedule = {
    'my-task-every-10-seconds': {
        'task': 'user_statistic_status.tasks.my_test',
        'schedule': timedelta(seconds=10)
    }
}
My settings:
if 'RDS_DB_NAME' in os.environ:
    CELERY_BROKER_URL = 'redis://<myurl>/0'
    CELERY_ACCEPT_CONTENT = ['application/json']
    CELERY_RESULT_SERIALIZER = 'json'
    CELERY_TASK_SERIALIZER = 'json'
    CELERY_BROKER_TRANSPORT_OPTIONS = {
        'region': 'eu-central-1',
        'polling_interval': 20,
    }
    CELERY_RESULT_BACKEND = 'redis://<myurl>/1'
    CELERY_ENABLE_REMOTE_CONTROL = False
    CELERY_SEND_EVENTS = False
    CELERY_TASK_ROUTES = {
        'my_test': {'queue': 'default'},
    }
My celery.py:
import os
from celery import Celery
from project import settings
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings')
app = Celery('celery_app')
app.conf.task_routes = {
    'my_test': {'queue': 'default'},
}
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
[2023-01-06 09:13:32,581: DEBUG/MainProcess] | Worker: Starting Beat
[2023-01-06 09:13:32,584: DEBUG/MainProcess] ^-- substep ok
[2023-01-06 09:13:32,585: DEBUG/MainProcess] | Worker: Starting Hub
[2023-01-06 09:13:32,585: DEBUG/MainProcess] ^-- substep ok
[2023-01-06 09:13:32,586: DEBUG/MainProcess] | Worker: Starting Pool
[2023-01-06 09:13:32,856: DEBUG/MainProcess] ^-- substep ok
[2023-01-06 09:13:32,864: DEBUG/MainProcess] | Worker: Starting Consumer
[2023-01-06 09:13:32,864: DEBUG/MainProcess] | Consumer: Starting Connection
[2023-01-06 09:13:32,901: INFO/Beat] beat: Starting...
[2023-01-06 09:13:32,972: DEBUG/Beat] Current schedule:
<ScheduleEntry: my-task-every-10-seconds user_statistic_status.tasks.my_test() <freq: 10.00 seconds>
[2023-01-06 09:13:32,972: DEBUG/Beat] beat: Ticking with max interval->5.00 minutes
[2023-01-06 09:13:32,973: DEBUG/Beat] beat: Waking up in 9.99 seconds.
[2023-01-06 09:13:42,969: DEBUG/Beat] beat: Synchronizing schedule...
It gets stuck here and the task never executes!
The worker connects and lists the tasks correctly, and beat starts too, but nothing happens. I've tested it with a local Redis server and everything works fine.
Any help will be much appreciated
Thank you
I have a Django app + Redis on one server and Celery on another server. I want to call a Celery task from the Django app.
My tasks.py on the Celery server:
from celery import Celery

app = Celery('tasks')
app.conf.broker_url = 'redis://localhost:6379/0'

@app.task(bind=True)
def test(self):
    print('Testing')
Calling the Celery task from the Django server:
from celery import Celery
celery = Celery()
celery.conf.broker_url = 'redis://localhost:6379/0'
celery.send_task('tasks.test')
I am running the celery worker using this command:
celery -A tasks worker --loglevel=INFO
When I call the Celery task from Django, the message reaches the Celery server, but I get the following error:
Received unregistered task of type 'tasks.test'. The message has been
ignored and discarded.
Did you remember to import the module containing this task? Or maybe
you're using relative imports?
How do I fix this, or is there another way to call the task?
Your task should be a shared task within Celery, as follows:
tasks.py
from celery import Celery, shared_task

app = Celery('tasks')
app.conf.broker_url = 'redis://localhost:6379/0'

@shared_task(name="tasks.test")  # the registered name must match the name used in send_task()
def test():
    print('Testing')
and start celery as normal:
celery -A tasks worker --loglevel=INFO
Your application can then call your test task:
main.py
from celery import Celery
celery = Celery()
celery.conf.broker_url = 'redis://localhost:6379/0'
celery.send_task('tasks.test')
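One thing worth noting (a sketch, not part of the original answer): send_task() matches purely on the registered name string, so the name the worker registers has to be exactly the name the caller sends; celery -A tasks inspect registered lists what the worker actually registered. The call itself returns an AsyncResult immediately:

result = celery.send_task('tasks.test')  # matched by name against the worker's registry
print(result.id)  # the task id is available right away; result.get() would additionally need a result backend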
I'm trying to make use of periodic tasks but can't make them work.
I have this test task:
# handler/tasks.py
from celery import Celery

app = Celery()

@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    # Calls test('hello') every 2 seconds.
    sender.add_periodic_task(2, test.s('hello'), name='add every 2')

@app.task
def test(arg):
    print(arg)
Celery is configured
# project dir
# salaryx_django/celery.py
from __future__ import absolute_import
import os
from celery import Celery
from django.conf import settings
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'salaryx_django.settings')
app = Celery('salaryx_django')
# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
# salaryx_django/settings.py
# CELERY STUFF
BROKER_URL = 'redis://localhost:6379'
CELERY_RESULT_BACKEND = 'redis://localhost:6379'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'Europe/London'
The workers are initiated:
[2022-04-25 14:57:55,424: INFO/MainProcess] Connected to redis://localhost:6379//
[2022-04-25 14:57:55,426: INFO/MainProcess] mingle: searching for neighbors
[2022-04-25 14:57:56,433: INFO/MainProcess] mingle: all alone
[2022-04-25 14:57:56,452: WARNING/MainProcess] /Users/jonas/Desktop/salaryx_django/venv/lib/python3.8/site-packages/celery/fixups/django.py:203: UserWarning: Using settings.DEBUG leads to a memory
leak, never use this setting in production environments!
warnings.warn('''Using settings.DEBUG leads to a memory
[2022-04-25 14:57:56,453: INFO/MainProcess] celery@Air-von-Jonas ready.
and Redis is waiting for connections
but nothing happens at all.
Celery Beat
(venv) jonas@Air-von-Jonas salaryx_django % celery -A salaryx_django beat
celery beat v5.2.6 (dawn-chorus) is starting.
__ - ... __ - _
LocalTime -> 2022-04-26 05:38:27
Configuration ->
. broker -> redis://localhost:6379//
. loader -> celery.loaders.app.AppLoader
. scheduler -> celery.beat.PersistentScheduler
. db -> celerybeat-schedule
. logfile -> [stderr]@%WARNING
. maxinterval -> 5.00 minutes (300s)
You also have to run beat.
From the beat entries documentation:
The add_periodic_task() function will add the entry to the beat_schedule setting behind the scenes
Simply running celery -A salaryx_django beat in another process should get you going. Read the docs for more info.
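As a rough sketch of what that means (the dotted path handler.tasks.test is an assumption based on the # handler/tasks.py module shown above, so adjust it to your real module layout), the entry that add_periodic_task() creates behind the scenes is equivalent to declaring:

app.conf.beat_schedule = {
    'add every 2': {
        'task': 'handler.tasks.test',  # assumed dotted path to the task defined above
        'schedule': 2.0,               # seconds
        'args': ('hello',),
    },
}

Either way, the schedule only fires if a beat process is running alongside the worker.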
I have a django project with celery integrated using redis.
My celery worker works perfectly in local development, and now I'm deploying in production.
Before daemonizing the process I want to see how Celery behaves on the server. The thing is, celery beat sends the tasks correctly every minute (as scheduled), but the worker does not seem to receive them every time. Sometimes it takes 4-5 minutes until a task is received and processed. How is that possible? I have tried debugging, but there is very little information.
See my setup:
settings.py
CELERY_TIMEZONE = 'Europe/Warsaw'
CELERY_BROKER_URL = 'redis://localhost:6379'
CELERY_RESULT_BACKEND = 'redis://localhost:6379'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TASK_SERIALIZER = 'json'
# Other Celery settings
CELERY_BEAT_SCHEDULE = {
    'task-number-one': {
        'task': 'predict_assistance.alerts.tasks.check_measures',
        'schedule': crontab(minute='*/1'),
    },
}
tasks.py
from __future__ import absolute_import, unicode_literals
from celery import shared_task
@shared_task()
def check_measures():
    print('doing something')
celery.py
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'config.settings.local')
app = Celery('predict_assistance')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
Here are my logs in production:
[2020-03-11 16:09:00,028: INFO/Beat] Scheduler: Sending due task task-number-one (predict_assistance.alerts.tasks.check_measures)
[2020-03-11 16:09:00,038: INFO/MainProcess] Received task: predict_assistance.alerts.tasks.check_measures[86f5c999-a53c-44dc-b568-00d924b5da9e]
[2020-03-11 16:09:00,046: WARNING/ForkPoolWorker-3] doing something
[2020-03-11 16:09:00,047: INFO/ForkPoolWorker-3] predict_assistance.alerts.tasks.check_measures[86f5c999-a53c-44dc-b568-00d924b5da9e]: doing something logger
[2020-03-11 16:09:00,204: INFO/ForkPoolWorker-3] Task predict_assistance.alerts.tasks.check_measures[86f5c999-a53c-44dc-b568-00d924b5da9e] succeeded in 0.16194193065166473s: None
[2020-03-11 16:10:00,049: INFO/Beat] Scheduler: Sending due task task-number-one (predict_assistance.alerts.tasks.check_measures)
[2020-03-11 16:10:00,062: INFO/MainProcess] Received task: predict_assistance.alerts.tasks.check_measures[c7786f38-793f-45e6-abb2-1c901e345e8f]
[2020-03-11 16:10:00,072: WARNING/ForkPoolWorker-3] doing something
[2020-03-11 16:10:00,073: INFO/ForkPoolWorker-3] predict_assistance.alerts.tasks.check_measures[c7786f38-793f-45e6-abb2-1c901e345e8f]: doing something logger
[2020-03-11 16:10:00,242: INFO/ForkPoolWorker-3] Task predict_assistance.alerts.tasks.check_measures[c7786f38-793f-45e6-abb2-1c901e345e8f] succeeded in 0.17491870187222958s: None
[2020-03-11 16:11:00,054: INFO/Beat] Scheduler: Sending due task task-number-one (predict_assistance.alerts.tasks.check_measures)
[2020-03-11 16:12:00,032: INFO/Beat] Scheduler: Sending due task task-number-one (predict_assistance.alerts.tasks.check_measures)
[2020-03-11 16:13:00,035: INFO/Beat] Scheduler: Sending due task task-number-one (predict_assistance.alerts.tasks.check_measures)
[2020-03-11 16:14:00,046: INFO/Beat] Scheduler: Sending due task task-number-one (predict_assistance.alerts.tasks.check_measures)
[2020-03-11 16:14:00,053: INFO/MainProcess] Received task: predict_assistance.alerts.tasks.check_measures[e0b3ef2b-ba15-421c-9a0f-0ef9f3ebb22a]
[2020-03-11 16:14:00,065: WARNING/ForkPoolWorker-3] doing something
[2020-03-11 16:14:00,066: INFO/ForkPoolWorker-3] predict_assistance.alerts.tasks.check_measures[e0b3ef2b-ba15-421c-9a0f-0ef9f3ebb22a]: doing something logger
[2020-03-11 16:14:00,247: INFO/ForkPoolWorker-3] Task predict_assistance.alerts.tasks.check_measures[e0b3ef2b-ba15-421c-9a0f-0ef9f3ebb22a] succeeded in 0.1897202990949154s: None
Do you have any idea why this is happening?
Thanks in advance
As suggested in the comments, the solution was to switch from the redis-server service to the rabbitmq-server service.
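For reference, a minimal sketch of what the broker setting could look like after such a switch; the host, port, vhost and credentials below are placeholders rather than values from this setup:

CELERY_BROKER_URL = 'amqp://guest:guest@localhost:5672//'  # RabbitMQ defaults; replace with your own credentials and vhost

The serializer settings, result backend and beat schedule can stay unchanged, since the broker and the result backend are configured independently.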
I am running into the issue that, when I set a timezone inside my celery.py, tasks run every chance they get, and thus do not run according to the schedule.
This is the output:
scheduler_1 | [2018-11-29 11:00:09,186: INFO/MainProcess] Scheduler: Sending due task Suppliers (biko.supplier.tasks.pull_supplier_data)
scheduler_1 | [2018-11-29 11:00:09,199: INFO/MainProcess] Scheduler: Sending due task Suppliers (biko.supplier.tasks.pull_supplier_data)
scheduler_1 | [2018-11-29 11:00:09,204: INFO/MainProcess] Scheduler: Sending due task Suppliers (biko.supplier.tasks.pull_supplier_data)
scheduler_1 | [2018-11-29 11:00:09,210: INFO/MainProcess] Scheduler: Sending due task Suppliers (biko.supplier.tasks.pull_supplier_data)
scheduler_1 | [2018-11-29 11:00:09,220: INFO/MainProcess] Scheduler: Sending due task Suppliers (biko.supplier.tasks.pull_supplier_data)
scheduler_1 | [2018-11-29 11:00:09,228: INFO/MainProcess] Scheduler: Sending due task Suppliers (biko.supplier.tasks.pull_supplier_data)
scheduler_1 | [2018-11-29 11:00:09,231: INFO/MainProcess] Scheduler: Sending due task Suppliers (biko.supplier.tasks.pull_supplier_data)
scheduler_1 | [2018-11-29 11:00:09,236: INFO/MainProcess] Scheduler: Sending due task Suppliers (biko.supplier.tasks.pull_supplier_data)
scheduler_1 | [2018-11-29 11:00:09,239: INFO/MainProcess] Scheduler: Sending due task Suppliers (biko.supplier.tasks.pull_supplier_data)
scheduler_1 | [2018-11-29 11:00:09,247: INFO/MainProcess] Scheduler: Sending due task Suppliers (biko.supplier.tasks.pull_supplier_data)
scheduler_1 | [2018-11-29 11:00:09,250: INFO/MainProcess] Scheduler: Sending due task Suppliers (biko.supplier.tasks.pull_supplier_data)
My celery.py:
import os
from celery import Celery
from celery.schedules import crontab
from django.conf import settings
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings")
# app = Celery('biko')
task_apps = [
    'biko.supplier.tasks',
    'biko.common.tasks',
    'biko.commerce.tasks',
    'biko.shop.tasks'
]
app = Celery('biko', include=task_apps)
app.config_from_object('django.conf:settings')
app.conf.timezone = 'Europe/Amsterdam'
app.autodiscover_tasks()
app.conf.ONCE = {
    'backend': 'celery_once.backends.Redis',
    'settings': {
        'url': 'redis://' + os.getenv('REDIS_HOST'),
        'blocking': True,
        'default_timeout': 60 * 60,
        'blocking_timeout': 86400
    }
}
When I remove app.conf.timezone, everything works fine.
My Django settings regarding the timezone:
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
Any ideas what causes these issues?