tasks.py
from celery import Task

class SimpleTask(Task):
    def run(self):
        print("run")
Execute python manage.py shell
In [3]: from products.tasks import SimpleTask
In [4]: task = SimpleTask()
In [6]: task.run()
run
It works successfully, and no error logs appear on the worker server.
However:
In [7]: task.delay()
Out[7]: <AsyncResult: a2e90b17-2af9-49b4-82df-562955beaf69>
And worker server log shows errors:
[2016-11-05 18:44:03,171: ERROR/MainProcess] Received unregistered task of type None.
The message has been ignored and discarded.
Did you remember to import the module containing this task?
Or maybe you're using relative imports?
Please see
http://docs.celeryq.org/en/latest/internals/protocol.html
for more information.
The full contents of the message body was:
b'[[], {}, {"callbacks": null, "chord": null, "errbacks": null, "chain": null}]' (77b)
Traceback (most recent call last):
File "/Users/Chois/.pyenv/versions/3.5.1/envs/spacegraphy/lib/python3.5/site-packages/celery/worker/consumer/consumer.py", line 549, in on_task_received
strategy = strategies[type_]
KeyError
I don't see why this happens. If I create a function-based task using @shared_task, it works successfully; only the class-based Task doesn't work.
Need help, thanks.
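For contrast, a minimal sketch of the function-based variant described as working (the task body is assumed):

from celery import shared_task

@shared_task
def simple_task():
    print("run")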
I had the same issue with class-based tasks, but not in Django. I fixed it by setting the name attribute on the task:
class AddTask(celery.Task):
    name = 'AddTask'
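Expanded into a fuller sketch (the app import path and the run signature are assumptions, not from the thread), a named class-based task also needs to be registered with the app so the worker can resolve incoming messages:

import celery
from myapp.celery import app  # assumed location of your Celery app

class AddTask(celery.Task):
    # Without an explicit name, some Celery versions serialize the task
    # type as None, producing the "unregistered task of type None" error above.
    name = 'AddTask'

    def run(self, x, y):
        return x + y

# Register an instance so the worker's registry knows the name.
add_task = app.register_task(AddTask())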
I have used information from this link to activate a Redis queue in my Flask app:
Heroku Redis
Following is the code in my app file.
from rq import Queue
from worker import conn

q = Queue(connection=conn)

@app.route('/pureredis', methods=['POST', 'GET'])
def pureredis():
    import createtable
    file = q.enqueue(createtable.test_task(), 'http://heroku.com')
    return file
What I want here is to simply call test_task() from the createtable.py file and show the results at /pureredis.
Meanwhile, I have added a worker.py file to instantiate conn.
The error I am getting is as follows:
2023-01-03T01:50:23.899221+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.10/site-packages/rq/utils.py", line 147, in import_attribute
raise ValueError('Invalid attribute name: %s' % name)
ValueError: Invalid attribute name: {
I checked out some earlier tickets and tried their suggestions, but that did not help:
stackoverflow redis
Information in the link below is good, but it does not work for Heroku:
flask with redis
You are calling the function inside the enqueue call; remember that enqueue expects a Callable. Change this:
job = q.enqueue(createtable.test_task(), 'http://heroku.com')
to this:
job = q.enqueue(createtable.test_task, 'http://heroku.com')
Remember to return a valid response in your view.
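Put together, a sketch of the corrected view (returning the job id is my choice of a valid response; the thread only says to return one, and app is the Flask app from the question's file):

from flask import jsonify
from rq import Queue
from worker import conn

q = Queue(connection=conn)

@app.route('/pureredis', methods=['POST', 'GET'])
def pureredis():
    import createtable
    # Pass the function object; the worker calls it with the given args.
    job = q.enqueue(createtable.test_task, 'http://heroku.com')
    # An rq Job is not a valid Flask response, so return something serializable.
    return jsonify(job_id=job.get_id())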
I have set up a scheduled task to run daily on PythonAnywhere.
The task uses Django Commands, as I found this was the preferred method to use with PythonAnywhere.
The task produces no errors, but I don't get any output. 2022-06-16 22:56:13 -- Completed task, took 9.13 seconds, return code was 0.
I have tried using print() to debug areas of the code, but I cannot produce any output in either the error or server logs, even after trying print(date_today, file=sys.stderr).
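(An aside, not from the thread: inside a management command, Django's documented output channel is self.stdout.write rather than print, which also makes output easier to capture. A minimal sketch:)

from django.core.management.base import BaseCommand

class Command(BaseCommand):
    def handle(self, *args, **kwargs):
        # Written through Django's OutputWrapper; shows up wherever the
        # command's stdout is captured, including scheduled-task logs.
        self.stdout.write('debug: task started')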
I have set the path on the Scheduled Task as follows (not sure if this is correct, but it seems to be the only way I can get it to run without errors):
workon advancementvenv && python3.8 /home/vicinstofsport/advancement_series/manage.py shell < /home/vicinstofsport/advancement_series/advancement/management/commands/schedule_task.py
I have tried setting the path as below, but then it gets an error when I try to import from the models.py file (I know this is related to a relative import but cannot manage to resolve it):
Traceback (most recent call last):
  File "/home/vicinstofsport/advancement_series/advancement/management/commands/schedule_task.py", line 3, in <module>
    from advancement.models import Bookings
ModuleNotFoundError: No module named 'advancement'
2022-06-17 03:41:22 -- Completed task, took 14.76 seconds, return code was 1.
Any ideas on how I can get this working? It all works fine locally with the command py manage.py scheduled_task; it just fails on PythonAnywhere.
Below is the task code and structure of the app.
from django.core.management.base import BaseCommand
import requests
from advancement.models import Bookings
from datetime import datetime, timedelta, date
import datetime
from sendgrid import SendGridAPIClient
from sendgrid.helpers.mail import Mail
from django.core.mail import send_mail
import os
from decouple import config

class Command(BaseCommand):
    help = 'Sends Program Survey'

    def handle(self, *args, **kwargs):
        # Get today's date
        date_today = datetime.datetime.now().date()

        # Get booking data
        bookings = Bookings.objects.all()

        # For each booking today, send survey email
        for booking in bookings:
            if booking.booking_date == date_today:
                if booking.program_type == "Sport Science":
                    booking_template_id = 'd-bbc79704a31a4a62a5bfea90f6342b7a'
                    email = booking.email
                    booking_message = Mail(from_email=config('FROM_EMAIL'),
                                           to_emails=[email],
                                           )
                    booking_message.template_id = booking_template_id
                    try:
                        sg = SendGridAPIClient(config('SG_API'))
                        response = sg.send(booking_message)
                    except Exception as e:
                        print(e)
                else:
                    booking_template_id = 'd-3167927b3e2146519ff6d9035ab59256'
                    email = booking.email
                    booking_message = Mail(from_email=config('FROM_EMAIL'),
                                           to_emails=[email],
                                           )
                    booking_message.template_id = booking_template_id
                    try:
                        sg = SendGridAPIClient(config('SG_API'))
                        response = sg.send(booking_message)
                    except Exception as e:
                        print(e)
            else:
                print('No')
Thanks in advance for any help.
Thanks Filip and Glenn, testing within the bash console and changing the directory in the task helped to fix the issue. Adding cd /home/vicinstofsport/advancement_series && to my task allowed the function to run.
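Assembled from the pieces in the thread, the full scheduled-task line would then look something like this (using the management command directly, which is what works locally):

cd /home/vicinstofsport/advancement_series && workon advancementvenv && python3.8 manage.py scheduled_task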
I have an application structured as follows.
- api
  - api
      settings.py
      celery.py
  - core
      tasks.py
      - scripts
          cgm.py
On running the following command I can see my task get loaded into the database; however, it does not actually run, and I'm trying to understand why.
celery -A api beat -l debug -S django_celery_beat.schedulers.DatabaseScheduler
Here is my code.
settings.py (relevant parts)
INSTALLED_APPS = (
    ...,
    'django_celery_beat',
)

CELERY_BEAT_SCHEDULER = 'django_celery_beat.schedulers:DatabaseScheduler'
celery.py
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'api.settings')

app = Celery('api')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
tasks.py
from django_celery_beat.models import PeriodicTask, IntervalSchedule

schedule, created = IntervalSchedule.objects.get_or_create(
    every=10,
    period=IntervalSchedule.SECONDS,
)

PeriodicTask.objects.get_or_create(
    interval=schedule,
    name='Import Dexcom Data',
    task='core.scripts.cgm.load_dexcom',
)
cgm.py
from monitor.models import GlucoseMonitor

def load_dexcom():
    from core.models import User
    user = User.objects.get(username='xxx')

    from pydexcom import Dexcom
    dexcom = Dexcom("xxx", "xxx", ous=True)  # add ous=True if outside of US

    bg = dexcom.get_current_glucose_reading()

    data = GlucoseMonitor.objects.create(
        user=user,
        source=1,
        blood_glucose=bg.mmol_l,
        trend=bg.trend,
        created=bg.time,
    )
    data.save()
I can run load_dexcom() manually and it works. My guess is I'm not dot-walking the task properly and it's not finding it, but it's not showing any errors. When I run the celery command I can see it load the record, but it doesn't seem to do anything else.
edit -
Looks like I was missing the worker command, which I've now run:
celery -A api worker -l DEBUG
However, the output clearly shows it can't find the task.
The full contents of the message body was:
'[[], {}, {"callbacks": null, "errbacks": null, "chain": null, "chord": null}]' (77b)
Traceback (most recent call last):
File "/home/robin/miniconda3/envs/api/lib/python3.7/site-packages/celery/worker/consumer/consumer.py", line 581, in on_task_received
strategy = strategies[type_]
KeyError: 'core.scripts.cgm.load_dexcom'
I've tried the following iterations; all give the key error:
load_dexcom
scripts.cgm.load_dexcom
api.core.scripts.cgm.load_dexcom
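(Background worth noting, not stated in the thread: app.autodiscover_tasks() only scans a tasks.py module in each installed app, and only functions decorated as Celery tasks get registered; a plain function in core/scripts/cgm.py is never registered under any dotted path, which is why every variant above raises KeyError. A sketch of a discoverable wrapper, names assumed:)

# core/tasks.py
from celery import shared_task
from core.scripts.cgm import load_dexcom

@shared_task
def run_load_dexcom():
    load_dexcom()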
I'm not sure if this will help for certain, as I'm combining bits from various things I've got working here, but I think if you modify tasks.py to something like the following it may help. I think you need to decorate the function you are running, but I'm not using Django for this bit, so I'm not 100% on that.
from django_celery_beat.models import PeriodicTask, IntervalSchedule
from core.scripts.cgm import load_dexcom
from api.celery import app  # the app defined in celery.py above

@app.task
def run_load_dexcom():
    load_dexcom()

schedule, created = IntervalSchedule.objects.get_or_create(
    every=10,
    period=IntervalSchedule.SECONDS,
)

PeriodicTask.objects.get_or_create(
    interval=schedule,
    name='Import Dexcom Data',
    task=run_load_dexcom.name,  # store the registered task name, not the result of calling it
)
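As a sanity check (my suggestion, not part of this answer), the string stored in task= must exactly match a key in the app's task registry; you can list the registered names like this:

from api.celery import app  # app as defined in the question's celery.py

# Each key here is a valid value for PeriodicTask.task; beat can only
# dispatch names the worker has actually registered.
for name in sorted(app.tasks.keys()):
    print(name)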
I'm doing a Django project and trying to improve computing speed in the backend.
The task is something like a CPU-bound conversion process.
Here's my environment:
Python 3.6.1
Django 1.10
PostgreSQL 9.6
And I'm stuck with the following errors when I try to parallelize a computing API using the Python multiprocessing library.
File "D:\\project\apps\converter\models\convert_manager.py", line 1, in <module>
from apps.conversion.models import Conversion
File "D:\\project\apps\conversion\models.py", line 5, in <module>
class Conversion(models.Model):
File "C:\\virtenv\lib\site-packages\django\db\models\base.py", line 105, in __new__
app_config = apps.get_containing_app_config(module)
File "C:\\virtenv\ib\site-packages\django\apps\registry.py", line 237, in get_containing_app_config
self.check_apps_ready()
File "C:\\lib\site-packages\django\apps\registry.py", line 124, in check_apps_ready
raise AppRegistryNotReady("Apps aren't loaded yet.")
It looks like each process imports the Conversion model, and the Conversion model looks like:
from django.db import models

class Conversion(models.Model):
    conversion_name = models.CharField(max_length=63)
    conversion_user = models.CharField(max_length=31)
    conversion_description = models.TextField(blank=True)
    ...
Below is the sample function I want to parallelize; each iteration is independent but will access or insert data into SQL.
class ConversionJob:
    ...
    def run(self, p_list):
        list_merge_result = []
        for p in p_list:
            list_result = self.computing_api(p)
            list_merge_result.extend(list_result)
and what I'm trying to do is:
from multiprocessing import Pool

class ConversionJob:
    ...
    def run(self, p_list):
        list_merge_result = []
        p = Pool(processes=4)
        list_result = p.map(self.computing_api, p_list)
        list_merge_result.extend(list_result)
In computing_api(), it tries to get the current conversion's info, which was completed and saved into SQL before this API call, but this causes the error.
My questions are:
1. Why does importing the Conversion model cause the "Apps aren't loaded yet" error? I have googled lots of articles, but none actually solved my problem.
2. I can see each SpawnPoolWorker-x process get created and try to boot the Django server again (why?); each worker stops at the same error.
3. The computing API will try to access SQL; I haven't thought yet about how to deal with this (share DB connections or create a new connection in each process; a sketch addressing this appears after the first answer below).
For others that might stumble upon this in future:
If you encounter this issue while running Python 3.8 and trying to use the multiprocessing package, chances are that it is because the subprocesses are 'spawned' instead of 'forked'. This is a change in Python 3.8 on macOS, where the default process start method changed from 'fork' to 'spawn'.
This is a known issue with Django.
To get around it:
import multiprocessing as mp
mp.set_start_method('fork')
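Addressing question 3 above as well (the connection handling is my addition, not part of this answer): set_start_method must run once, before the first pool is created, and forked workers should not reuse the parent's database connections. A sketch:

import multiprocessing as mp

from django.db import connections

def init_worker():
    # Forked children inherit the parent's open DB connections; close them
    # so each worker lazily opens its own connection on first query.
    connections.close_all()

def computing_api(p):
    # Placeholder standing in for the question's CPU-bound conversion work.
    return [p]

if __name__ == '__main__':
    mp.set_start_method('fork')  # 'fork' exists on macOS/Linux; not on Windows
    with mp.Pool(processes=4, initializer=init_worker) as pool:
        results = pool.map(computing_api, range(10))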
This post can solve the problem:
Django upgrading to 1.9 error “AppRegistryNotReady: Apps aren't loaded yet.”
I had found this answer before, but it did not actually solve my problem at the time. After repeated tests, I found I have to add these lines before importing another model; otherwise, the child process fails to boot and raises the error.
import django
django.setup()
from another.app import models
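Fleshed out slightly (the settings module name is a placeholder), the child-process entry point would be:

import os

# 'project.settings' is a placeholder for your actual settings module.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings')

import django
django.setup()  # populate the app registry before importing any models

from another.app import models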
I have a Flask application with blueprints that is structured like this:
application.py
project/
    form_emailer.py
    blueprints/
        example_form.py
    wtforms-models/
        example_form_model.py
    templates/
        example_form_template.html
I'm trying to use RQ to send emails in the background (using Flask-Mail) because our SMTP uses the Gmail servers, which can take a few seconds to complete. My function in form_emailer.py looks like this:
from flask import Flask
from flask_mail import Mail, Message
from application import app, q

mail = Mail(app)

def _queue_message(message):
    mail.send(message)

def sendemail(recipients, subject, body):
    """
    This function gets called in a Flask blueprint.
    """
    message = Message(recipients=recipients, subject=subject, body=body)
    q.enqueue(_queue_message, message)
My (simplified) application.py looks like this. I'm breaking convention by using "import *" in order to simplify additions there (our __init__.py in those packages dynamically imports all modules):
from flask import Flask
from redis import Redis
from rq import Queue

app = Flask(__name__)
q = Queue(connection=Redis())

from project.blueprints import *
from project.forms import *

if __name__ == "__main__":
    app.run()
I have an rqworker running in the same virtual environment where my application is running, and the worker detects the task. However, I'm getting the following traceback and can't figure out how to fix this:
16:41:29 *** Listening on high, normal, low...
16:43:26 low: project.form_emailer._queue_message(<flask_mail.Message object at 0x299d690>) (bd913b3a-4e7f-4efb-b51c-8ae11d37ac00)
16:43:27 ImportError: cannot import name sendemail
Traceback (most recent call last):
  ...
  File "./project/blueprints/example_form.py", line 4, in <module>
    from project.form_emailer import sendemail
ImportError: cannot import name sendemail
I suspect this has to do with Flask's application context, but my initial attempts to use with app.app_context(): are failing; the worker is not even able to import the function I want to use. What am I doing wrong here?
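One hedged observation (mine, not from the thread): the traceback suggests a circular import rather than an application-context problem. The worker imports project.form_emailer, which imports application, which imports project.blueprints.example_form, which in turn tries to import sendemail from the still partially initialized project.form_emailer. A common pattern for breaking such a cycle is to move shared objects into a module that imports nothing from the app; a sketch, with extensions.py as an assumed name:

# extensions.py -- owns shared objects; imports nothing from application
from redis import Redis
from rq import Queue

q = Queue(connection=Redis())

Both application.py and form_emailer.py would then do from extensions import q, so importing form_emailer in the worker no longer pulls in the blueprint chain.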