NotRegistered error raised when I use Flask with Celery

Description
Hi, I'm learning Celery, and I read a blog post:
Celery and the Flask Application Factory Pattern - miguelgrinberg.com
So I wrote a small program to run Flask with Celery.
Code
app/__init__.py
from flask import Flask
from celery import Celery

celery = Celery(__name__, broker='amqp://127.0.0.1:5672/')

def create_app():
    app = Flask(__name__)

    @celery.task
    def add(x, y):
        print x + y

    @app.route('/')
    def index():
        add.delay(1, 3)
        return 'Hello World!'

    return app
manage.py
from app import create_app

app = create_app()

if __name__ == '__main__':
    app.run()
celery_worker_1.py
from app import celery, create_app

f_app = create_app()
f_app.app_context().push()
celery_worker_2.py
from app import celery, create_app

@celery.task
def foo():
    print 'Balabala...'

f_app = create_app()
f_app.app_context().push()
Problem
When I run the Flask server and start Celery using:
celery -A celery_worker_1 worker -l info
Celery raises a NotRegistered error:
Traceback (most recent call last):
  File "D:\Python27\lib\site-packages\billiard\pool.py", line 363, in workloop
    result = (True, prepare_result(fun(*args, **kwargs)))
  File "D:\Python27\lib\site-packages\celery\app\trace.py", line 349, in _fast_trace_task
    return _tasks[task].__trace__(uuid, args, kwargs, request)[0]
  File "D:\Python27\lib\site-packages\celery\app\registry.py", line 26, in __missing__
    raise self.NotRegistered(key)
NotRegistered: 'app.add'
But when I use celery_worker_2 instead:
celery -A celery_worker_2 worker -l info
the task runs correctly:
[2015-11-28 15:45:56,299: INFO/MainProcess] Received task: app.add[cbe5e1d6-c5df-4141-9db1-e6313517c202]
[2015-11-28 15:45:56,302: WARNING/Worker-1] 4
[2015-11-28 15:45:56,371: INFO/MainProcess] Task app.add[cbe5e1d6-c5df-4141-9db1-e6313517c202] succeeded in 0.0699999332428s: None
Why can't Celery run correctly with the code in celery_worker_1?
PS: I'm not good at English; if anything is unclear, please point it out and I'll be happy to describe it again. Thanks!

Related

How do I test that my Celery worker actually works in Django

(code at bottom)
Context: I'm working on a Django project where I need to provide the user feedback on a task that takes 15-45 seconds. In comes Celery to the rescue! I can see that Celery is performing as expected when I run celery -A my_project worker -l info & python manage.py runserver.
Problem: I can't figure out how to run a celery worker in my tests. When I run python manage.py test, I get the following error:
Traceback (most recent call last):
File "/Users/pbrockman/coding/t1v/lib/python3.8/site-packages/django/test/utils.py", line 387, in inner
return func(*args, **kwargs)
File "/Users/pbrockman/coding/tcommerce/tcommerce/tests.py", line 58, in test_shared_celery_task
self.assertEqual(result.get(), 6)
File "/Users/pbrockman/coding/t1v/lib/python3.8/site-packages/celery/result.py", line 224, in get
return self.backend.wait_for_pending(
File "/Users/pbrockman/coding/t1v/lib/python3.8/site-packages/celery/backends/base.py", line 756, in wait_for_pending
meta = self.wait_for(
File "/Users/pbrockman/coding/t1v/lib/python3.8/site-packages/celery/backends/base.py", line 1087, in _is_disabled
raise NotImplementedError(E_NO_BACKEND.strip())
NotImplementedError: No result backend is configured.
Please see the documentation for more information.
Attempted solution:
I tried various combinations of @override_settings with CELERY_TASK_ALWAYS_EAGER=True, CELERY_TASK_EAGER_PROPAGATES=True, and BROKER_BACKEND='memory'.
I tried both the @app.task decorator and the @shared_task decorator.
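For reference, one of those attempts looked roughly like this (the test class name is just illustrative):
from django.test import TestCase, override_settings

from tcommerce.tasks import shared_add

@override_settings(CELERY_TASK_ALWAYS_EAGER=True,
                   CELERY_TASK_EAGER_PROPAGATES=True,
                   BROKER_BACKEND='memory')
class EagerCeleryTests(TestCase):
    def test_shared_celery_task(self):
        # With ALWAYS_EAGER set, .delay() should run the task synchronously
        # and return an EagerResult whose .get() needs no external backend.
        result = shared_add.delay(2, 4)
        self.assertEqual(result.get(), 6)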
How do I verify that Celery has the expected behavior in my tests?
Code
Celery Settings: my_project/celery.py
import os

from dotenv import load_dotenv
load_dotenv()

from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'my_project.settings')

app = Celery(f'my_project-{os.environ.get("ENVIRONMENT")}',
             broker=os.environ.get('REDISCLOUD_URL'),
             include=['my_project.tasks'])

from django.conf import settings
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

if __name__ == '__main__':
    app.start()
Testing: my_project/tests.py
from django.test import TestCase

from tcommerce.celery import app
from tcommerce.tasks import shared_add
from tcommerce.tasks import app_add

class CeleryTests(TestCase):
    def test_shared_celery_task(self):
        '''@shared_task'''
        result = shared_add.delay(2, 4)
        self.assertEqual(result.get(), 6)

    def test_app_celery_task(self):
        '''@app.task'''
        result = app_add.delay(2, 4)
        self.assertEqual(result.get(), 6)
Defining tasks: my_project/tasks.py
from celery import shared_task

from .celery import app

@shared_task
def shared_add(x, y):
    return x + y

@app.task
def app_add(x, y):
    return x + y

Executing a Django management command that spins off multiple processes and threads on Windows and Linux

I am relatively new to multi-threading and multi-processing. I just hit another learning block when I realized that Windows and Linux handle multiprocessing very differently. I do not know the technicalities, but I do know that it is different.
I am using a Django management command to execute my application (python manage.py random_script). Within random_script, I import multiprocessing and spin off different processes, and I get the following error:
File "<string>", line 1, in <module>
File "C:\FAST\Python\3.6.4\lib\multiprocessing\spawn.py", line 99, in spawn_main
new_handle = reduction.steal_handle(parent_pid, pipe_handle)
File "C:\FAST\Python\3.6.4\lib\multiprocessing\reduction.py", line 82, in steal_handle
_winapi.PROCESS_DUP_HANDLE, False, source_pid)
OSError: [WinError 87] The parameter is incorrect
I tried adding this at the top, because my development server is Windows but my production server is Linux:
if 'win' in sys.platform:
    print('Windows')
    multiprocessing.set_start_method('spawn')
else:
    print('Linux')
    multiprocessing.set_start_method('fork')
But with no success. When I continued to look through Google, the suggestion was to put the process-spawning code under the if __name__ == '__main__': line. That would be fine if I were executing my script normally (i.e. python random_script.py), but I am not. I have run out of ideas and no longer know how to proceed.
++ EDITED ++
manage.py
#!/usr/bin/env python
import os
import sys
import argparse

DEFAULT_SETTINGS_MODULE = "api.test_settings"

if __name__ == "__main__":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", DEFAULT_SETTINGS_MODULE)
    try:
        from django.core.management import execute_from_command_line
    except ImportError:
        # The above import may fail for some other reason. Ensure that the
        # issue is really that Django is missing to avoid masking other
        # exceptions on Python 2.
        try:
            import django
        except ImportError:
            raise ImportError(
                "Couldn't import Django. Are you sure it's installed and "
                "available on your PYTHONPATH environment variable? Did you "
                "forget to activate a virtual environment?"
            )
        raise
    execute_from_command_line(sys.argv)
random_script.py:
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    def __init__(self):
        super().__init__()

    def handle(self, *args, **kwargs):
        <...>
        self.main()

    def main(self):
        <...>
Above are my manage.py and my random_script.py.
Thanks for the guidance.
Every app has a main module which initializes/starts it.
For manually run Django management commands this is manage.py, and you can set the desired start method there:
# manage.py
...
if __name__ == "__main__":
    import multiprocessing
    # Use startswith rather than 'win' in sys.platform,
    # since 'darwin' also contains the substring 'win'.
    if sys.platform.startswith('win'):
        multiprocessing.set_start_method('spawn')
    else:
        multiprocessing.set_start_method('fork')
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project.settings")
    ...
And a sample custom management command:
# random_script.py
import multiprocessing

from django.core.management.base import BaseCommand

def calculation(x):
    import time
    time.sleep(1)
    return x

class Command(BaseCommand):
    def handle(self, *args, **options):
        calc_args = [1, 2, 3, 4, 5]
        with multiprocessing.Pool(processes=3) as pool:
            results = pool.map(calculation, calc_args)
        self.stdout.write(
            self.style.SUCCESS('Success: %s' % results)
        )
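Run it like any other management command; with the sample task above, the expected output would be along these lines:
$ python manage.py random_script
Success: [1, 2, 3, 4, 5]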

Can't start the worker when running Celery with Flask

I am following the example given at the following URL to run Celery with Flask:
http://flask.pocoo.org/docs/0.12/patterns/celery/
I followed everything word for word. The only difference is that my make_celery function is created under the following hierarchy:
package1
|------ CeleryObjCreator.py
CeleryObjCreator.py has the make_celery function in the CeleryObjectHelper class, as follows:
from celery import Celery

class CeleryObjectHelper:
    def make_celery(self, app):
        celery = Celery(app.import_name, backend=app.config['CELERY_RESULT_BACKEND'],
                        broker=app.config['CELERY_BROKER_URL'])
        celery.conf.update(app.config)
        TaskBase = celery.Task

        class ContextTask(TaskBase):
            abstract = True

            def __call__(self, *args, **kwargs):
                with app.app_context():
                    return TaskBase.__call__(self, *args, **kwargs)

        celery.Task = ContextTask
        return celery
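For context, the helper is meant to be used roughly like this (names simplified; flask_app stands for the Flask instance created elsewhere in the package):
from package1.CeleryObjCreator import CeleryObjectHelper

# flask_app is assumed to be the already-created Flask application object.
celery = CeleryObjectHelper().make_celery(flask_app)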
Now I am facing problems with starting the celery worker.
At the end of the article, it suggests starting the celery worker as follows:
$ celery -A your_application.celery worker
In my case, I am using <> for the your_application string, which doesn't work; it gives the following error:
ImportError: No module named 'package1.celery'
So I am not sure what the value of the your_application string should be here to start the celery worker.
EDIT
As suggested by Nour Chawich, I did try running the Flask app from the command line; my server comes up successfully.
Also, since app is a directory in my project structure where app.py is, in the app.py code I replaced app = Flask(__name__) with flask_app = Flask(__name__) to separate out the variable names.
But when I try to start the celery worker using the command
celery -A app.celery worker --loglevel=info
it is not able to resolve the following import that I have in my code:
import app.myPackage as myPackage
It throws the following error:
ImportError: No module named 'app'
So I am really not sure what is going on here. Any ideas?

Why does Flask + SocketIO + Gevent give me SSL EOF errors?

This is a simple code snippet that consistently reproduces the issue I'm having. I'm using Python 2.7.12, Flask 0.11, Flask-SocketIO 2.7.1, and gevent 1.1.2. I understand that this is probably an issue better brought up on the responsible package's mailing list, but I can't figure out which package is responsible. However, I'm pretty sure it is a problem with gevent, because that's what raises the exception.
from flask import Flask
from flask_socketio import SocketIO
from gevent import monkey
monkey.patch_all()
import ssl

app = Flask(__name__)
app.config['SECRET_KEY'] = 'secret'
socketio = SocketIO(app, async_mode='gevent')

@app.route('/')
def index():
    return "Hello World!"

@socketio.on('connect')
def handle_connect_event():
    print('Client connected')

if __name__ == '__main__':
    socketio.run(app, host='127.0.0.1', port=8443,
                 certfile='ssl/server/server.cer', keyfile='ssl/server/server.key',
                 ca_certs='ssl/server/ca.cer', cert_reqs=ssl.CERT_REQUIRED,
                 ssl_version=ssl.PROTOCOL_TLSv1_2)
And here is the error I get when the client connects:
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/gevent/greenlet.py", line 534, in
    result = self._run(*self.args, **self.kwargs)
  File "/usr/lib/python2.7/site-packages/gevent/baseserver.py", line 25, in
    return handle(*args_tuple)
  File "/usr/lib/python2.7/site-packages/gevent/server.py", line 126, in wr
    ssl_socket = self.wrap_socket(client_socket, **self.ssl_args)
  File "/usr/lib/python2.7/site-packages/gevent/_sslgte279.py", line 691, i
    ciphers=ciphers)
  File "/usr/lib/python2.7/site-packages/gevent/_sslgte279.py", line 271, i
    raise x
SSLEOFError: EOF occurred in violation of protocol (_ssl.c:590)
<Greenlet at 0x7fdd593c94b0: _handle_and_close_when_done(<bound method WSGInd method WSGIServer.do_close of <WSGIServer a, (<socket at 0x7fdd590f4410 SSLEOFError
My system also has OpenSSL version 1.0.2j, if that helps. Any thoughts would be appreciated!
Call monkey.patch_all() at the top of the code, even before the Flask and SocketIO imports:
from gevent import monkey
monkey.patch_all()
from flask import Flask
from flask_socketio import SocketIO
import ssl
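The rest of the script can stay as it is; the key point is that gevent's monkey-patching has to run before ssl (and anything that imports it) is loaded, so the sockets the server later wraps are gevent's cooperative ones rather than the unpatched standard-library ones.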

Flask integration with Celery

I am trying to use Celery in my Flask example application. Because I am creating the instance in a factory method, I cannot use the example from the documentation (http://flask.pocoo.org/docs/0.10/patterns/celery/).
__init__.py
from celery import Celery
from flask import Flask
from config import config

def create_app():
    app = Flask(__name__)
    app.debug = True
    app.config.from_object(config)

    from .main import main as main_blueprint
    app.register_blueprint(main_blueprint)

    return app

def make_celery(app=None):
    app = app or create_app()
    celery = Celery('app', backend=app.config['CELERY_RESULT_BACKEND'],
                    broker=app.config['CELERY_BROKER_URL'])
    celery.conf.update(app.config)
    TaskBase = celery.Task

    class ContextTask(TaskBase):
        abstract = True

        def __call__(self, *args, **kwargs):
            with app.app_context():
                return TaskBase.__call__(self, *args, **kwargs)

    celery.Task = ContextTask
    return celery
tasks.py
from app import make_celery

celery = make_celery()

@celery.task
def add(a, b):
    return a + b
views.py
from flask import render_template

from app.main import main
from ..tasks import add

@main.route('/', methods=['GET', 'POST'])
def index():
    add.delay(5, 3)
    return render_template('index.html')
I am getting an error:
$ celery -A app.tasks worker
Traceback (most recent call last):
  File "...lib/python3.4/site-packages/celery/app/utils.py", line 229, in find_app
    sym = symbol_by_name(app, imp=imp)
  File "...lib/python3.4/site-packages/celery/bin/base.py", line 488, in symbol_by_name
    return symbol_by_name(name, imp=imp)
  File "...lib/python3.4/site-packages/kombu/utils/__init__.py", line 97, in symbol_by_name
    return getattr(module, cls_name) if cls_name else module
AttributeError: 'module' object has no attribute 'tasks'
The -A param should point to the instance of Celery to use, not the module: http://docs.celeryproject.org/en/latest/reference/celery.bin.celery.html#cmdoption-celery-a
In this case:
celery -A app.tasks.celery worker
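In other words, -A takes a dotted path that resolves to an object: app.tasks is the module, and the trailing .celery picks out the Celery instance that tasks.py binds with celery = make_celery().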