I am looking to run tasks in parallel with Django Celery.
Say I have the following task:
import time

from celery import shared_task

@shared_task(bind=True)
def loop_task(self):
    for i in range(10):
        time.sleep(1)
        print(i)
    return "done"
Each time a view is loaded, this task must be executed:

def view(request):
    loop_task.delay()

My problem is that I want to run this task multiple times in parallel, without queuing. Each time a user visits the view, the task should start immediately; it should not wait for a previous task to finish.
Here is the celery command I use:
celery -A toolbox.celery worker --pool=solo -l info -n my_worker1
 -------------- celery@my_worker1 v5.2.7 (dawn-chorus)
--- ***** -----
-- ******* ---- Windows-10-10.0.22000-SP0 2022-08-01 10:22:52
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app:         toolbox:0x1fefe7286a0
- ** ---------- .> transport:   redis://127.0.0.1:6379//
- ** ---------- .> results:
- *** --- * --- .> concurrency: 8 (solo)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
 -------------- [queues]
                .> celery           exchange=celery(direct) key=celery
I have already tried the solutions from this Stack Overflow question, but none of them do what I need: Executing two tasks at the same time with Celery
I should have the following output:
0,1,2,...,9
If two users load the same page at the same time then we should have the previous output appearing twice
Result :
0,0,1,1,2,2,...,9,9
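To make the expected interleaving concrete, here is a plain-Python sketch (no Celery involved) that runs two copies of the same loop in parallel with threads; the sleep is shortened so the demo finishes quickly:

```python
import threading
import time

def loop_task(out, lock):
    # Same loop as the Celery task, but appending to a shared
    # list so the interleaving of the two runs is visible.
    for i in range(10):
        time.sleep(0.01)  # shortened from 1s for the demo
        with lock:
            out.append(i)

out, lock = [], threading.Lock()
workers = [threading.Thread(target=loop_task, args=(out, lock)) for _ in range(2)]
for w in workers:
    w.start()
for w in workers:
    w.join()

print(out)  # roughly 0, 0, 1, 1, 2, 2, ..., 9, 9 (exact order may vary)
```

With Celery, getting the same interleaving requires a worker pool that actually runs tasks concurrently; a solo pool cannot do this.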
I think this is simple to solve, but you need to test it.
Basically, you need to run the task in async mode. For example, when you run a task that sends mass SMS to multiple users, you do it this way:
send_mass_sms.apply_async(
    [
        phone_numbers,
        instance.body,
        instance.id,
    ],
    eta=instance.when,
)
Your code needs to be fixed this way:

def view(request):
    loop_task.apply_async()
If you need to update data on the website, you can store the data in models and poll with AJAX, or implement the logic via WebSockets, but that is a topic for another question :)
You may also need to start multiple workers, but even that does not guarantee that all tasks run in parallel.
Tasks can still sit in the queue waiting; it depends on the number of workers and their execution speed.
And if the result is always the same, you can cache it.
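One more note on the worker command in the question: --pool=solo executes exactly one task at a time in the worker process, regardless of the concurrency value shown in the banner, so requests will always queue behind each other. A sketch of invocations that can run tasks in parallel (assuming the same toolbox project; gevent must be installed separately):

```shell
# prefork (the default pool) runs tasks in separate processes:
celery -A toolbox.celery worker --concurrency=8 -l info -n my_worker1

# or gevent, which can handle many I/O-bound tasks per process:
celery -A toolbox.celery worker --pool=gevent --concurrency=100 -l info -n my_worker1
```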
Related
I have a Django project and have setup Celery + RabbitMQ to do heavy tasks asynchronously. When I call the task, RabbitMQ admin shows the task, Celery prints that the task is received, but the task is not executed.
Here is the task's code:
@app.task
def dummy_task():
    print("I'm Here")
    User.objects.create(username="User1")
    return "User1 Created!"
In this view I send the task to Celery:

def task_view(request):
    result = dummy_task.delay()
    return render(request, 'display_progress.html', context={'task_id': result.task_id})
I run celery with this command:
$ celery -A proj worker -l info --concurrency=2 --without-gossip
This is output of running Celery:
 -------------- celery@DESKTOP-8CHJOEG v5.2.7 (dawn-chorus)
--- ***** -----
-- ******* ---- Windows-10-10.0.19044-SP0 2022-08-22 10:10:04
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app:         proj:0x23322847880
- ** ---------- .> transport:   amqp://navid:**@localhost:5672//
- ** ---------- .> results:
- *** --- * --- .> concurrency: 2 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
 -------------- [queues]
                .> celery           exchange=celery(direct) key=celery
[tasks]
  . proj.celery.debug_task
  . entitymatching.tasks.create_and_learn_machine
  . entitymatching.tasks.dummy_task
[2022-08-22 10:10:04,068: INFO/MainProcess] Connected to amqp://navid:**@127.0.0.1:5672//
[2022-08-22 10:10:04,096: INFO/MainProcess] mingle: searching for neighbors
[2022-08-22 10:10:04,334: INFO/SpawnPoolWorker-1] child process 6864 calling self.run()
[2022-08-22 10:10:04,335: INFO/SpawnPoolWorker-2] child process 12420 calling self.run()
[2022-08-22 10:10:05,134: INFO/MainProcess] mingle: all alone
[2022-08-22 10:10:05,142: WARNING/MainProcess] C:\Users\Navid\PycharmProjects\proj\venv\lib\site-packages\celery\fixups\django.py:203: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
[2022-08-22 10:10:05,142: INFO/MainProcess] celery@DESKTOP-8CHJOEG ready.
[2022-08-22 10:10:05,143: INFO/MainProcess] Task entitymatching.tasks.dummy_task[97f8a2eb-0006-4d53-ba6a-7b9f8649c84a] received
[2022-08-22 10:10:05,144: INFO/MainProcess] Task entitymatching.tasks.dummy_task[17190479-0784-46b1-8dc6-870ead41e9c6] received
[2022-08-22 10:11:36,384: INFO/MainProcess] Task proj.celery.debug_task[af3d633f-7b9a-4441-b375-9ce217a40ab3] received
But "I'm Here" is not printed, and User1 is not created.
RabbitMQ shows that there are 3 "unack" messages in the queue:
You did not provide enough info, but I think you have a problem with your worker pool.
Try adding

--pool=solo

to the end of your run command, so it becomes:

celery -A proj worker -l info --concurrency=2 --without-gossip --pool=solo

In production, though, I recommend using gevent as your pool:

celery -A proj worker -l info --concurrency=2 --without-gossip --pool=gevent
I have a django 2.0.5 app using celery==4.2.1, redis==2.10.6, redis-server=4.0.9. When I start celery worker, I get this output:
 -------------- celery@octopus v4.2.1 (windowlicker)
---- **** -----
--- * ***  * -- Linux-4.18.16-surface-linux-surface-x86_64-with-Ubuntu-18.04-bionic 2018-10-31 17:33:50
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app:         MemorabiliaJSON:0x7fd6c537b240
- ** ---------- .> transport:   amqp://guest:**@localhost:5672//
- ** ---------- .> results:     disabled://
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
 -------------- [queues]
                .> celery           exchange=celery(direct) key=celery
But in my django settings I have:
CELERY_BROKER_URL = 'redis://localhost:6379'
CELERY_RESULT_BACKEND = 'redis://localhost:6379'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TASK_SERIALIZER = 'json'
CELERY_IMPORTS = ('memorabilia.tasks',
                  'face_recognition.tasks',
                  )
My celery.py looks like:
# http://docs.celeryproject.org/en/latest/django/first-steps-with-django.html
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
from django.apps import apps
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'MemorabiliaJSON.settings.tsunami')
app = Celery('MemorabiliaJSON')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks(lambda: [n.name for n in apps.get_app_configs()])
The same code (shared through my git server) works on my development machine, although the redis server is a bit older, v2.8.4. The development machine is Ubuntu 14.04, and the laptop is Ubuntu 18.04. By works, I mean this is the celery output on my development machine:
 -------------- celery@tsunami v4.2.1 (windowlicker)
---- **** -----
--- * ***  * -- Linux-4.4.0-138-generic-x86_64-with-Ubuntu-14.04-trusty 2018-10-31 17:38:09
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app:         MemorabiliaJSON:0x7f356e024c18
- ** ---------- .> transport:   redis://localhost:6379//
- ** ---------- .> results:     redis://localhost:6379/
- *** --- * --- .> concurrency: 8 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
 -------------- [queues]
                .> celery           exchange=celery(direct) key=celery
How do I get Celery to read the Django settings file, given what I already have in celery.py?
Thanks!
Mark
Changing localhost to 127.0.0.1 solved the problem in my case:
CELERY_BROKER_URL = 'redis://127.0.0.1:6379'
CELERY_RESULT_BACKEND = 'redis://127.0.0.1:6379'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TASK_SERIALIZER = 'json'
CELERY_STORE_ERRORS_EVEN_IF_IGNORED = True
I am using Celery & Redis and had the same issue. I resolved it like this:

In your celery.py, go to the line

app.config_from_object("django.conf:settings", namespace="CELERY")

Remove the namespace="CELERY" argument, so the code becomes:

app.config_from_object("django.conf:settings")

This works perfectly in my case.
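Note that removing the namespace changes which setting names Celery reads from Django settings. A sketch of the difference (assuming a Redis broker as in this question); if neither name is present, Celery silently falls back to the default amqp://guest@localhost:5672// broker:

```python
# settings.py with app.config_from_object("django.conf:settings", namespace="CELERY"):
CELERY_BROKER_URL = 'redis://127.0.0.1:6379/0'
CELERY_RESULT_BACKEND = 'redis://127.0.0.1:6379/0'

# settings.py with app.config_from_object("django.conf:settings") and no namespace,
# where Celery expects its own lowercase setting names instead:
broker_url = 'redis://127.0.0.1:6379/0'
result_backend = 'redis://127.0.0.1:6379/0'
```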
If you look closely at your celery output from celery@octopus, you'll see that it is connected to an amqp broker and not a redis broker: amqp://guest:**@localhost:5672//. This means that your octopus worker has been configured somewhere to point to a RabbitMQ broker, not a Redis broker. To correct this, you'll have to find where that RabbitMQ broker setting lives and see how it is being pulled into Celery. What that broker_url tells us is that Celery is being reconfigured elsewhere, or that other settings are being applied on the server.
I was having the same problem. I solved it by running Celery with the right settings file. Your configs are absolutely right; the problem is how you are running celery. You might be providing the wrong application module, or not providing it at all.
You need to specify --app=<module_which_contains_your_celery_file>.
The exact syntax is:

celery --app=<APPLICATION> worker

When celery runs with no --app or -A, it uses the default transport url amqp://guest:**@localhost:5672//.
This can be observed from your celery logs too.

In the first celery log:

- ** ---------- .> transport:   amqp://guest:**@localhost:5672//
- ** ---------- .> results:     disabled://

and in the other (working) celery log:

- ** ---------- .> transport:   redis://localhost:6379//
- ** ---------- .> results:     redis://localhost:6379/
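Concretely, for the MemorabiliaJSON project in this question, the invocation would presumably be:

```shell
celery --app=MemorabiliaJSON worker -l info
```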
Please update the CELERY_BROKER_URL like the following:
CELERY_BROKER_URL = 'redis://localhost:6379/0'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'
You can check the documentation regarding connecting Redis as a broker here.
In the interest of closing out this question, I will answer it. To be honest, I am not sure how I fixed the problem; it just went away after a few changes and reboots of my system. The setup is still the same as above.
I later discovered that I had an issue with naming modules in that two modules had the same name. Once I corrected that issue, most of my other celery problems went away. However, to be clear, the redis/celery part was working before I fixed the module naming issue.
Thanks to everyone who posted suggestions to my question!
Completely unrelated to how you solved it, but I had a similar issue where the "transport" url in the config was looking for port 5672 (instead of Redis's 6379; the results url was correct). While debugging earlier I had removed the namespace from app.config_from_object. Putting it back solved my issue. Posting it here for anybody who makes the same mistake and finds this.
I am trying to run a celery task in a Django view using my_task.delay(). However, the task is never executed and the code is blocked on that line and the view never renders. I am using AWS SQS as a broker with an IAM user with full access to SQS.
What am I doing wrong?
Running celery and Django
I am running celery like this:
celery -A app worker -l info
And I am starting my Django server locally in another terminal using:
python manage.py runserver
The celery command outputs:
 -------------- celery@LAPTOP-02019EM6 v4.1.0 (latentcall)
---- **** -----
--- * ***  * -- Windows-10-10.0.16299 2018-02-07 13:48:18
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app:         app:0x6372c18
- ** ---------- .> transport:   sqs://**redacted**:**@localhost//
- ** ---------- .> results:     disabled://
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ---- .> task events: OFF
--- ***** -----
 -------------- [queues]
                .> my-queue         exchange=my-queue(direct) key=my-queue
[tasks]
. app.celery.debug_task
. counter.tasks.my_task
[2018-02-07 13:48:19,262: INFO/MainProcess] Starting new HTTPS connection (1): sa-east-1.queue.amazonaws.com
[2018-02-07 13:48:19,868: INFO/SpawnPoolWorker-1] child process 20196 calling self.run()
[2018-02-07 13:48:19,918: INFO/SpawnPoolWorker-4] child process 19984 calling self.run()
[2018-02-07 13:48:19,947: INFO/SpawnPoolWorker-3] child process 16024 calling self.run()
[2018-02-07 13:48:20,004: INFO/SpawnPoolWorker-2] child process 19572 calling self.run()
[2018-02-07 13:48:20,815: INFO/MainProcess] Connected to sqs://**redacted**:**@localhost//
[2018-02-07 13:48:20,930: INFO/MainProcess] Starting new HTTPS connection (1): sa-east-1.queue.amazonaws.com
[2018-02-07 13:48:21,307: WARNING/MainProcess] c:\users\nicolas\anaconda3\envs\djangocelery\lib\site-packages\celery\fixups\django.py:202: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
warnings.warn('Using settings.DEBUG leads to a memory leak, never '
[2018-02-07 13:48:21,311: INFO/MainProcess] celery#LAPTOP-02019EM6 ready.
views.py
from .tasks import my_task

def index(request):
    print('New request')  # This is called
    my_task.delay()
    # Never reaches here
    return HttpResponse('test')
tasks.py
...
@shared_task
def my_task():
    print('Task ran successfully')  # never prints anything
settings.py
My configuration is the following:
import djcelery
djcelery.setup_loader()
CELERY_BROKER_URL = 'sqs://'
CELERY_BROKER_TRANSPORT_OPTIONS = {
    'region': 'sa-east-1',
}
CELERY_BROKER_USER = '****************'
CELERY_BROKER_PASSWORD = '***************************'
CELERY_TASK_DEFAULT_QUEUE = 'my-queue'
Versions:
I use the following version of Django and Celery:
Django==2.0.2
django-celery==3.2.2
celery==4.1.0
Thanks for your help!
A bit late, but maybe you are still interested. I got Celery with Django and SQS running and don't see any errors in your code. Maybe you missed something in the celery.py file? Here is my code for comparing.
import os
from celery import Celery
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'djangoappname.settings')
# do not use namespace because default amqp broker would be called
app = Celery('lsaweb')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks()
Have you also checked if SQS is getting messages (try polling in the SQS administration area)?
I'm using Celery 4.0.0 with RabbitMQ as the message broker in a Django 1.9 project, using django-celery-results for the results backend. I'm new to Celery and RabbitMQ. The Python version is 2.7.5.
After following the instructions in the Celery docs for configuring and using celery with django, and before adding any real tasks, I tried a simple task calling using django shell (manage.py shell), sending the debug_task as defined in the celery docs.
Task is sent OK, and looking at the rabbitmq queue, I can see a new message has arrived to the correct queue on the correct virtual host.
I run the worker and it looks like it starts OK, then it arrives to the event loop and does nothing. No error is presented, not in the worker output or in the rabbitmq logs.
On the other hand, celery status on the same machine returns that there are no active nodes.
I'm probably missing something here, but I don't know what it can be.
Don't know if this is relevant, but when I use 'celery purge' to clear the messages queue, it finds the message and purges it.
Celery configuration settings as added to django settings.py:
CELERY_BROKER_URL = 'amqp://user1:passwd1@rabbithost:5672/exp'
CELERY_TIMEZONE = TIME_ZONE # Using django's TZ
CELERY_TASK_TRACK_STARTED = True
CELERY_RESULT_BACKEND = 'django-db'
Task invocation in django shell:
>>> from project.celery import debug_task
>>> debug_task
<@task: project.celery.debug_task of project:0x23cad10>
>>> r = debug_task.delay()
>>> r
<AsyncResult: 33031998-4cd8-4dfe-8e9d-bda9398525bb>
>>> r.status
u'PENDING'
Celery worker invocation:
% celery -A project worker -l info -Q celery
 -------------- celery@super9 v4.0.0 (latentcall)
---- **** -----
--- * ***  * -- Linux-3.10.0-327.4.5.el7.x86_64-x86_64-with-centos-7.2.1511-Core 2016-11-24 18:15:27
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app:         project:0x25931d0
- ** ---------- .> transport:   amqp://user1:**@rabbithost:5672/exp
- ** ---------- .> results:
- *** --- * --- .> concurrency: 24 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
 -------------- [queues]
                .> celery           exchange=celery(direct) key=celery
[tasks]
. project.celery.debug_task
[2016-11-24 18:15:28,984: INFO/MainProcess] Connected to amqp://user1:**@rabbithost:5672/exp
[2016-11-24 18:15:29,009: INFO/MainProcess] mingle: searching for neighbors
[2016-11-24 18:15:30,035: INFO/MainProcess] mingle: all alone
/dir/project/devel/python/devel-1.9-centos7/lib/python2.7/site-packages/celery/fixups/django.py:202: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
warnings.warn('Using settings.DEBUG leads to a memory leak, never '
[2016-11-24 18:15:30,072: WARNING/MainProcess] /dir/project/devel/python/devel-1.9-centos7/lib/python2.7/site-packages/celery/fixups/django.py:202: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
warnings.warn('Using settings.DEBUG leads to a memory leak, never '
[2016-11-24 18:15:30,073: INFO/MainProcess] celery@super9 ready.
Checking rabbitmq queue:
% rabbitmqctl list_queues -p exp
Listing queues ...
celery 1
Celery status invocation while the worker is "ready":
% celery -A project status
Error: No nodes replied within time constraint.
Thanks.
I have a simple Raspberry pi + Django + Celery + Rabbitmq setup that I use to send and receive data from Xbee radios while users interact with the web app.
For the life of me I can't get RabbitMQ (or Celery?) under control; after only a single day (sometimes a little longer) the whole system crashes due to some kind of memory leak.
What I am suspecting is that the queues are piling up and never being removed.
Here's a picture of what I see after only a few minutes of run time:
Seems that all of the queues are in the "ready" state.
What's strange is that it would appear that the workers do in fact receive the message and run the task.
The task is very small and shouldn't take longer than 1 second.
I have verified the tasks do execute to the last line and should be returning ok.
I'm no expert and have no clue what I'm actually looking at so I'm unsure if that is normal behavior and my issue lies elsewhere?
I have everything set to run as daemonized, however even when running in development modes I get same results.
I have spent the last four hours debugging with Google search and found it was taking me in circles and I was not finding clarity.
[CONFIGS AND CODE]
In /etc/default/celeryd I have set the following:
CELERY_APP="MyApp"
CELERYD_NODES="w1"
# Python interpreter from environment.
ENV_PYTHON="/home/pi/.virtualenvs/myappenv/bin/python"
# Where to chdir at start.
CELERYD_CHDIR="/home/pi/django_projects/MyApp"
# Virtual Environment Setup
ENV_MY="/home/pi/.virtualenvs/myappenv"
CELERYD="$ENV_MY/bin/celeryd"
CELERYD_MULTI="$ENV_PYTHON $CELERYD_CHDIR/manage.py celeryd_multi"
CELERYCTL="$ENV_MY/bin/celeryctl"
CELERYD_OPTS="--app=MyApp --concurrency=1 --loglevel=FATAL"
CELERYD_LOG_FILE="/var/log/celery/%n.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_USER="celery"
CELERYD_GROUP="celery"
tasks.py
@celery.task
def sendStatus(modelContext, ignore_result=True, *args, **kwargs):
    node = modelContext  # EndNodes.objects.get(node_addr_lg=node_addr_lg)
    # check age of message and proceed to send status update if it is fresh, otherwise we'll skip it
    if not current_task.request.eta == None:
        now_date = datetime.now().replace(tzinfo=None)  # the time now
        eta_date = dateutil.parser.parse(current_task.request.eta).replace(tzinfo=None)  # the time this was supposed to run, remove timezone from message eta datetime
        delta_seconds = (now_date - eta_date).total_seconds()  # seconds from when this task was supposed to run
        if delta_seconds >= node.status_timeout:  # if the message was queued more than delta_seconds ago this message is too old to process
            return

    # now that we know the message is fresh we can proceed to process the contents and send status to xbee
    hostname = current_task.request.hostname  # the name/key in the schedule that might have related xbee sessions
    app = Celery('app')  # create a new instance of app (because documented methods didn't work)
    i = app.control.inspect()
    scheduled_tasks = i.scheduled()  # the schedule of tasks in the queue
    for task in scheduled_tasks[hostname]:  # iterate through each task
        xbee_session = ast.literal_eval(task['request']['kwargs'])  # the request data in the message (converts unicode to dict)
        if xbee_session['xbee_addr'] == node.node_addr_lg:  # get any session data for this device that we may have set from model's save override
            if xbee_session['type'] == 'STAT':  # because we are responding with status update we look for status sessions
                app.control.revoke(task['request']['id'], terminate=True)  # revoke this task because it is redundant and we are sending update now
                page_mode = chr(node.page_mode)  # the paging mode to set on the remote device
                xbee_global.tx(dest_addr_long=bytearray.fromhex(node.node_addr_lg),
                               frame_id='A',
                               dest_addr='\xFF\xFE',
                               data=page_mode)
celery splash:
 -------------- celery@raspberrypi v3.1.23 (Cipater)
---- **** -----
--- * ***  * -- Linux-4.4.11-v7+-armv7l-with-debian-8.0
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app:         MyApp:0x762efe10
- ** ---------- .> transport:   amqp://guest:**@localhost:5672//
- ** ---------- .> results:     amqp://
- *** --- * --- .> concurrency: 1 (prefork)
-- ******* ----
--- ***** ----- [queues]
 -------------- .> celery           exchange=celery(direct) key=celery
[tasks]
. MyApp.celery.debug_task
. clone_app.tasks.nodeInterval
. clone_app.tasks.nodePoll
. clone_app.tasks.nodeState
. clone_app.tasks.resetNetwork
. clone_app.tasks.sendData
. clone_app.tasks.sendStatus
[2016-10-11 03:41:12,863: WARNING/Worker-1] Got signal worker_process_init for task id None
[2016-10-11 03:41:12,913: WARNING/Worker-1] JUST OPENED
[2016-10-11 03:41:12,915: WARNING/Worker-1] /dev/ttyUSB0
[2016-10-11 03:41:12,948: INFO/MainProcess] Connected to amqp://guest:**@127.0.0.1:5672//
[2016-10-11 03:41:13,101: INFO/MainProcess] mingle: searching for neighbors
[2016-10-11 03:41:14,206: INFO/MainProcess] mingle: all alone
[2016-10-11 03:41:14,341: WARNING/MainProcess] celery#raspberrypi ready.
[2016-10-11 03:41:16,223: WARNING/Worker-1] RAW DATA
[2016-10-11 03:41:16,225: WARNING/Worker-1] {'source_addr_long': '\x00\x13\xa2\x00#\x89\xe9\xd7', 'rf_data': '...^%:STAT:`', 'source_addr': '[*', 'id': 'rx', 'options': '\x01'}
[2016-10-11 03:41:16,458: INFO/MainProcess] Received task: clone_app.tasks.sendStatus[6e1a74ec-dca5-495f-a4fa-906a5c657b26] eta:[2016-10-11 03:41:17.307421+00:00]
I can provide additional details if required!!
And thank you for any help resolving this.
Wow, almost immediately after posting my question I found this post and it has completely resolved my issue.
As I expected ignore_result=True was required, I just was not sure where it belonged.
Now I see no queues except maybe for the instant a worker is running a task. :)
Here's the change in tasks.py:
# From
@celery.task
def sendStatus(modelContext, ignore_result=True, *args, **kwargs):
    # Some code here

# To
@celery.task(ignore_result=True)
def sendStatus(modelContext, *args, **kwargs):
    # Some code here