How to get a traceback on the console - Django

I have this in a Django view:
import logging
from django.conf import settings

fmt = getattr(settings, 'LOG_FORMAT', None)
lvl = getattr(settings, 'LOG_LEVEL', logging.INFO)
logging.basicConfig(format=fmt, level=lvl)

@api_view(['GET', 'POST'])
def index(request):
    if request.GET.get("request_id"):
        logging.info("standard CMH request...")
        barf()
    # etc
In my work environment (DEBUG=True) this prints on the console when I access the relevant page over REST from a client app:
mgregory$ foreman start
19:59:11 web.1 | started with pid 37371
19:59:11 web.1 | 2014-10-10 19:59:11 [37371] [INFO] Starting gunicorn 18.0
19:59:11 web.1 | 2014-10-10 19:59:11 [37371] [INFO] Listening at: http://0.0.0.0:5000 (37371)
19:59:11 web.1 | 2014-10-10 19:59:11 [37371] [INFO] Using worker: sync
19:59:11 web.1 | 2014-10-10 19:59:11 [37374] [INFO] Booting worker with pid: 37374
19:59:18 web.1 | standard CMH request...
What is actually happening is that barf() is throwing an exception, because it's not defined. But this isn't appearing in the console log.
How can I get all exceptions to appear in the console log, in DEBUG=True environment?
Supplementary: is there any reason why I wouldn't have it do the same in the production environment, with DEBUG=False? My production environment does not email me, so I'd love to have these exceptions in the log.
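One way to get this (not shown in the question, a minimal sketch): instead of calling logging.basicConfig in the view, configure Django's LOGGING setting so that unhandled view exceptions, which Django logs to the django.request logger, go to a console handler. This works the same with DEBUG=False; the handler and logger names below are standard, everything else is an assumption about your settings module.

# settings.py -- a minimal sketch; routes unhandled view exceptions to stderr
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',  # writes to the foreman/gunicorn console
        },
    },
    'loggers': {
        # Django logs the full traceback of any unhandled view exception here
        'django.request': {
            'handlers': ['console'],
            'level': 'ERROR',
            'propagate': False,
        },
        # root logger, so logging.info("standard CMH request...") still shows up
        '': {
            'handlers': ['console'],
            'level': 'INFO',
        },
    },
}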

Related

Call method once when Flask app started despite many Gunicorn workers

I have a simple Flask app that starts with Gunicorn, which has 4 workers.
I want to clear and warm up the cache when the server restarts, but when I do this inside the create_app() method it executes 4 times.
def create_app(test_config=None):
    app = Flask(__name__)
    # ... different configuration here
    t = threading.Thread(target=reset_cache, args=(app,))
    t.start()
    return app
[2022-10-28 09:33:33 +0000] [7] [INFO] Booting worker with pid: 7
[2022-10-28 09:33:33 +0000] [8] [INFO] Booting worker with pid: 8
[2022-10-28 09:33:33 +0000] [9] [INFO] Booting worker with pid: 9
[2022-10-28 09:33:33 +0000] [10] [INFO] Booting worker with pid: 10
2022-10-28 09:33:36,908 INFO webapp reset_cache:38 Clearing cache
2022-10-28 09:33:36,908 INFO webapp reset_cache:38 Clearing cache
2022-10-28 09:33:36,908 INFO webapp reset_cache:38 Clearing cache
2022-10-28 09:33:36,909 INFO webapp reset_cache:38 Clearing cache
How can I make it run only once, without using any queues, rq-workers or Celery?
Signals, a mutex, some special check of the worker id (but it is always dynamic)?
I haven't found any solution so far.
I used Redis locks for that.
Here is an example using flask-caching, which I already had in the project, but you can get the Redis client from wherever you keep it:
import time
from webapp.models import cache  # cache = flask_caching.Cache()

def reset_cache(app):
    with app.app_context():
        client = app.extensions["cache"][cache]._write_client  # redis client
        lock = client.lock("warmup-cache-key")
        locked = lock.acquire(blocking=False, blocking_timeout=1)
        if locked:
            app.logger.info("Clearing cache")
            cache.clear()
            app.logger.info("Warming up cache")
            # function call here with `cache.set(...)`
            app.logger.info("Completed warmup cache")
            # time.sleep(5)  # add some delay if the procedure is really fast
            lock.release()
It can easily be extended with threads, loops or whatever else you need to populate the cache.
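A different approach (not part of the answer above, just a sketch): Gunicorn's on_starting server hook runs once in the master process before any worker is forked, so the warmup can live there instead of in create_app(). This assumes create_app() no longer spawns the warmup thread itself and that reset_cache() can run without a request context; the import paths are placeholders for your project layout.

# gunicorn.conf.py -- a sketch; run with `gunicorn -c gunicorn.conf.py "webapp:create_app()"`
def on_starting(server):
    # Executed exactly once, in the Gunicorn master, before workers are forked.
    from webapp import create_app            # assumed factory location
    from webapp.cache_utils import reset_cache  # assumed helper, as in the answer above

    app = create_app()
    reset_cache(app)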

Flask SocketIO connection fails in production

I just deployed my Flask application on a development server, after checking that Socket.IO works well with the regular development server and also with Gunicorn and eventlet locally.
Now that the app is deployed, it runs fine when I open any page over HTTP, like the API and so on,
but when I try to connect to the websockets I get the following error in the console tab of the browser:
Firefox can’t establish a connection to the server at ws://server-ip/chat/socket.io/?EIO=4&transport=websocket&sid=QClYLXcK0D0sSVYNAAAM.
This is my frontend, using the Socket.IO CDN:
<script src="https://cdn.socket.io/4.3.2/socket.io.min.js" integrity="sha384-KAZ4DtjNhLChOB/hxXuKqhMLYvx3b5MlT55xPEiNmREKRzeEm+RVPlTnAn0ajQNs" crossorigin="anonymous"></script>
var socket = io.connect('http://server-ip/chat/send/', {"path" : "/chat/socket.io"});
I set "path" here to the correct socket.io URL. If I remove it and just use the plain URL, it gives:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://37.76.245.93/socket.io/?EIO=4&transport=polling&t=NrcpeSQ. (Reason: CORS header ‘Access-Control-Allow-Origin’ missing).
So I added it to point to the correct URL, but then it can't connect over ws, as shown above.
I am using this command on the server to run Flask:
gunicorn --worker-class eventlet -w 1 --bind 0.0.0.0:8000 --timeout 500 --keep-alive 500 wsgi:app
and this is my wsgi file:
from chat import app
from dotenv import load_dotenv, find_dotenv
from flask_socketio import SocketIO
from messages.socket import socket_handle
load_dotenv(find_dotenv())
app = app(settings="chat.settings.dev")
socket = SocketIO(app, cors_allowed_origins=app.config['ALLOWED_CORS'])
socket_handle(socket)
The socket_handle function just registers the join and message_handle functions, applying the socket decorators to them. I think something is preventing the server from working over ws, but I don't know why.
I know that this would need to run as ASGI rather than WSGI, but as the Socket.IO docs say, I thought using eventlet would solve this. I also tried replacing my wsgi.py file with this:
from chat import app
from dotenv import load_dotenv, find_dotenv
from flask_socketio import SocketIO
from messages.socket import socket_handle
from asgiref.wsgi import WsgiToAsgi
load_dotenv(find_dotenv())
apps = app(settings="chat.settings.dev")
socket = SocketIO(apps, cors_allowed_origins=apps.config['ALLOWED_CORS'])
socket_handle(socket)
asgi_app = WsgiToAsgi(apps)
And when I run the Gunicorn command I get this:
gunicorn --worker-class eventlet -w 1 --bind 0.0.0.0:8000 --timeout 500 --keep-alive 500 wsgi:asgi_app
[2021-11-28 16:17:42 +0200] [39043] [INFO] Starting gunicorn 20.1.0
[2021-11-28 16:17:42 +0200] [39043] [INFO] Listening at: http://0.0.0.0:8000 (39043)
[2021-11-28 16:17:42 +0200] [39043] [INFO] Using worker: eventlet
[2021-11-28 16:17:42 +0200] [39054] [INFO] Booting worker with pid: 39054
[2021-11-28 16:17:47 +0200] [39054] [ERROR] Error handling request /socket.io/?EIO=4&transport=polling&t=NrcwBTe
Traceback (most recent call last):
  File "/root/.local/share/virtualenvs/chat-Tb0n1QCf/lib/python3.9/site-packages/gunicorn/workers/base_async.py", line 55, in handle
    self.handle_request(listener_name, req, client, addr)
  File "/root/.local/share/virtualenvs/chat-Tb0n1QCf/lib/python3.9/site-packages/gunicorn/workers/base_async.py", line 108, in handle_request
    respiter = self.wsgi(environ, resp.start_response)
TypeError: __call__() missing 1 required positional argument: 'send'
^C[2021-11-28 16:17:48 +0200] [39043] [INFO] Handling signal: int
[2021-11-28 16:17:48 +0200] [39054] [INFO] Worker exiting (pid: 39054)
[2021-11-28 16:17:48 +0200] [39043] [INFO] Shutting down: Master
I am using the latest Flask & SocketIO versions.
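For what it's worth, the TypeError in that traceback is expected: WsgiToAsgi returns an ASGI callable that must be invoked with (scope, receive, send), while Gunicorn is a WSGI server and calls it with only two arguments, hence the missing 'send'. With the eventlet worker you can stay on plain WSGI and let Flask-SocketIO handle the WebSocket upgrade itself; below is a hedged sketch of wsgi.py, assuming the same project layout as in the question (the monkey_patch() call is the usual eventlet requirement, not something from the original post).

# wsgi.py -- a sketch, not a verified fix
import eventlet
eventlet.monkey_patch()  # must run before the other imports when using eventlet

from chat import app
from dotenv import load_dotenv, find_dotenv
from flask_socketio import SocketIO
from messages.socket import socket_handle

load_dotenv(find_dotenv())

application = app(settings="chat.settings.dev")
socket = SocketIO(application, cors_allowed_origins=application.config['ALLOWED_CORS'])
socket_handle(socket)

# Point Gunicorn at the plain WSGI app -- no ASGI wrapper:
#   gunicorn --worker-class eventlet -w 1 --bind 0.0.0.0:8000 wsgi:application
# If the app sits behind a proxy under /chat/, the proxy must also forward the
# Upgrade/Connection headers for /chat/socket.io/ for the ws transport to work.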

Why does Foreman exit when I add a rake assets:precompile task to my Procfile?

My Ruby on Rails application uses foreman (https://github.com/ddollar/foreman) to manage the processes that need to be started when the app is run.
My Procfile looks like this:
precompile: bundle exec rake assets:precompile
web: bundle exec puma -e $PUMA_ENV
worker: bundle exec rake jobs:work
search: bundle exec rake sunspot:solr:run
Running $ foreman start worked as expected until I added the first line (the precompile task).
When I run $ foreman start my output looks like this:
$ foreman start -e .env.dev
10:30:20 precompile.1 | started with pid 7309
10:30:20 web.1 | started with pid 7310
10:30:20 worker.1 | started with pid 7311
10:30:20 search.1 | started with pid 7312
10:30:22 web.1 | [7310] Puma starting in cluster mode...
10:30:22 web.1 | [7310] * Version 2.8.2 (ruby 2.1.0-p0), codename: Sir Edmund Percival Hillary
10:30:22 web.1 | [7310] * Min threads: 4, max threads: 16
10:30:22 web.1 | [7310] * Environment: development
10:30:22 web.1 | [7310] * Process workers: 2
10:30:22 web.1 | [7310] * Phased restart available
10:30:22 web.1 | [7310] * Listening on tcp://0.0.0.0:3000
10:30:22 web.1 | [7310] Use Ctrl-C to stop
10:30:23 web.1 | [7313] + Gemfile in context: /Users/username/rails_projects/lcms/Gemfile
10:30:23 web.1 | [7314] + Gemfile in context: /Users/username/rails_projects/lcms/Gemfile
10:30:30 web.1 | [7310] - Worker 1 (pid: 7314) booted, phase: 0
10:30:30 worker.1 | [Worker(host:MacBook-Pro.local pid:7311)] Starting job worker
10:30:30 web.1 | [7310] - Worker 0 (pid: 7313) booted, phase: 0
10:30:32 precompile.1 | exited with code 0
10:30:32 system | sending SIGTERM to all processes
SIGTERM received
10:30:32 web.1 | [7310] - Gracefully shutting down workers...
10:30:32 worker.1 | [Worker(host:MacBook-Pro.local pid:7311)] Exiting...
10:30:32 search.1 | exited with code 143
10:30:32 web.1 | [7310] - Goodbye!
10:30:32 web.1 | exited with code 0
10:30:33 worker.1 | exited with code 0
I don't know how to get more details on the problem. I've added $stdout.sync = true to my config/environments/development.rb, and the output is the same as without.
I've also tried both appending and prepending RAILS_ENV=development and RAILS_ENV=production to the precompile task.
How can I get my foreman/Procfile setup to precompile assets successfully, and then continue with the other tasks that start the app?
I've decided that my best option is to use a syntax that will execute my rake assets tasks prior to booting Puma and only if precompiling is successful.
So, running the commands in order with && between them seems to achieve the result that I desire:
web: bundle exec rake assets:clean RAILS_ENV=$FOREMAN_ENV && bundle exec rake assets:precompile RAILS_ENV=$FOREMAN_ENV && bundle exec puma -e $FOREMAN_ENV
worker: bundle exec rake jobs:work
search: bundle exec rake sunspot:solr:run
The problem is that as soon as one of the processes in foreman exits, they all do. I assume this is by design, which makes sense: the application is the combination of running services.
If you want to run one-off tasks you can use foreman run.
Try adding sleep to every process after the first line:
web: sleep 1; bundle exec puma -e $PUMA_ENV
worker: sleep 1; bundle exec rake jobs:work
search: sleep 1; bundle exec rake sunspot:solr:run
If it works, remove sleep 1; one by one to see which process causes the problem.

When using Foreman with Rails 4 and a debug listener locally there is no response from the server

I want to use Foreman for local development; however, I also want to be able to debug my code. In order to make this happen I've used this initializer:
if Rails.env.development?
  require 'debugger'
  Debugger.wait_connection = true

  def find_available_port
    server = TCPServer.new(nil, 0)
    server.addr[1]
  ensure
    server.close if server
  end

  port = find_available_port
  puts "Remote debugger on port #{port}"
  Debugger.start_remote(nil, port)
end
as recommended here: How to debug a rails (3.2) app started by foreman?. However, when I start foreman the browser can't seem to find anything on port 5000:
$ foreman start
09:48:18 web.1 | started with pid 25337
09:48:23 web.1 | => Booting Thin
09:48:23 web.1 | => Rails 4.0.0 application starting in development on http://0.0.0.0:5000
09:48:23 web.1 | => Run `rails server -h` for more startup options
09:48:23 web.1 | => Ctrl-C to shutdown server
09:48:23 web.1 | Remote debugger on port 57466
If I go to 0.0.0.0:5000 I see:
=> Oops! Google Chrome could not connect to 0.0.0.0:5000
This is a less than ideal solution, however...
It is possible to use the debugger with Foreman and without any initializer. Simply insert a debugger statement in your code as you would normally and you will see that the execution will stop.
Your terminal window will print out the last few lines of execution as normal and will look something like this:
11:23:57 web.1 | Rendered editor/memberships/_memberships.html.haml (76.6ms)
11:23:57 web.1 | Rendered editor/memberships/_memberships.html.haml (0.2ms)
11:23:57 web.1 | /rails/yourapp/yourapp-code/app/views/editor/memberships/_form.html.haml:2
11:23:57 web.1 | = simple_form_for membership, url: membership.path(:create_or_update) do |f|
11:23:57 web.1 | [-3, 6] in /rails/yourapp/yourapp-code/app/views/editor/memberships/_form.html.haml
11:23:57 web.1 | 1 - debugger
11:23:57 web.1 | => 2 = simple_form_for membership, url: membership.path(:create_or_update) do |f|
11:23:57 web.1 | 3 = f.fields_for :member do |u_f|
11:23:57 web.1 | 4 = u_f.input :email, placeholder: "Enter their email address", label: 'Invite a new administrator'
11:23:57 web.1 | 5 = f.submit 'Add admin', class: 'btn btn-sm btn-success'
11:23:57 web.1 | (rdb:1) membership
11:23:57 web.1 | #<MembershipDecorator:0x007fb03fee1168 #object=#<Membership id: nil, org_id: 1, member_id: nil, role: "basic", active: true, created_at: nil, updated_at: nil>, #context={}>
11:24:02 web.1 | (rdb:1) membership
11:24:02 web.1 | #<MembershipDecorator:0x007fb03fee1168 #object=#<Membership id: nil, org_id: 1, member_id: nil, role: "basic", active: true, created_at: nil, updated_at: nil>, #context={}>
Even though there's no prompt, the debugger has nonetheless started.
At this point there is no prompt, and if you type anything it doesn't come up on screen. However, it will get executed when you press return and the output will display in the terminal.
So the trick is that the debugger has started and you can use it and see its output; you just can't see what you're typing. I've no idea how to fix this, but at least you can continue debugging.

Django / Celery / Kombu worker error: Received and deleted unknown message. Wrong destination?

It seems as though messages are not getting put onto the queue properly.
I'm using Django with Celery and Kombu to make use of Django's own database as a Broker Backend. All I need is a very simple Pub/Sub setup. It will eventually deploy to Heroku, so I'm using foreman to run locally. Here is the relevant code and info:
pip freeze
Django==1.4.2
celery==3.0.15
django-celery==3.0.11
kombu==2.5.6
Procfile
web: source bin/activate; python manage.py run_gunicorn -b 0.0.0.0:$PORT -w 4; python manage.py syncdb
celeryd: python manage.py celeryd -E -B --loglevel=INFO
settings.py
# Celery configuration
import djcelery
CELERY_IMPORTS = ("api.tasks",)
BROKER_URL = "django://localhost//"
djcelery.setup_loader()
put_message
with Connection(settings.BROKER_URL) as conn:
    queue = conn.SimpleQueue('celery')
    queue.put(id)
    queue.close()
api/tasks.py
@task()
def process_next_task():
    with Connection(settings.BROKER_URL) as conn:
        queue = conn.SimpleQueue('celery')
        message = queue.get(block=True, timeout=1)
        id = int(message.payload)
        try:
            Model.objects.get(id=id)
        except Model.DoesNotExist:
            message.reject()
        else:
            # Do stuff here
            message.ack()
        queue.close()
In the terminal, foreman start works just fine and shows this:
started with pid 31835
17:08:22 celeryd.1 | started with pid 31836
17:08:22 web.1 | /usr/local/foreman/bin/foreman-runner: line 41: exec: source: not found
17:08:22 web.1 | 2013-02-14 17:08:22 [31838] [INFO] Starting gunicorn 0.16.1
17:08:22 web.1 | 2013-02-14 17:08:22 [31838] [INFO] Listening at: http://0.0.0.0:5000 (31838)
17:08:22 web.1 | 2013-02-14 17:08:22 [31838] [INFO] Using worker: sync
17:08:22 web.1 | 2013-02-14 17:08:22 [31843] [INFO] Booting worker with pid: 31843
17:08:22 web.1 | 2013-02-14 17:08:22 [31844] [INFO] Booting worker with pid: 31844
17:08:22 web.1 | 2013-02-14 17:08:22 [31845] [INFO] Booting worker with pid: 31845
17:08:22 web.1 | 2013-02-14 17:08:22 [31846] [INFO] Booting worker with pid: 31846
17:08:22 celeryd.1 | [2013-02-14 17:08:22,858: INFO/Beat] Celerybeat: Starting...
17:08:22 celeryd.1 | [2013-02-14 17:08:22,870: WARNING/MainProcess] celery#myhost.local ready.
17:08:22 celeryd.1 | [2013-02-14 17:08:22,873: INFO/MainProcess] consumer: Connected to django://localhost//.
17:08:42 celeryd.1 | [2013-02-14 17:08:42,926: WARNING/MainProcess] Received and deleted unknown message. Wrong destination?!?
17:08:42 celeryd.1 | The full contents of the message body was: body: 25 (2b) {content_type:u'application/json' content_encoding:u'utf-8' delivery_info:{u'priority': 0, u'routing_key': u'celery', u'exchange': u'celery'}}
Those last two lines are not shown immediately, but get displayed when my API receives a POST request that runs the code in the put_message section above. I've experimented with using Kombu's full-blown Producer and Consumer classes, with the same result.
Kombu's SimpleQueue example: http://kombu.readthedocs.org/en/latest/userguide/examples.html#hello-world-example
Celery Docs: http://docs.celeryproject.org/en/latest/index.html
Any ideas?
EDITED
Changing to --loglevel=DEBUG within the procfile changes the terminal output to the following:
08:54:33 celeryd.1 | started with pid 555
08:54:33 web.1 | started with pid 554
08:54:33 web.1 | /usr/local/foreman/bin/foreman-runner: line 41: exec: source: not found
08:54:36 web.1 | 2013-02-15 08:54:36 [557] [INFO] Starting gunicorn 0.16.1
08:54:36 web.1 | 2013-02-15 08:54:36 [557] [INFO] Listening at: http://0.0.0.0:5000 (557)
08:54:36 web.1 | 2013-02-15 08:54:36 [557] [INFO] Using worker: sync
08:54:36 web.1 | 2013-02-15 08:54:36 [564] [INFO] Booting worker with pid: 564
08:54:36 web.1 | 2013-02-15 08:54:36 [565] [INFO] Booting worker with pid: 565
08:54:36 web.1 | 2013-02-15 08:54:36 [566] [INFO] Booting worker with pid: 566
08:54:36 web.1 | 2013-02-15 08:54:36 [567] [INFO] Booting worker with pid: 567
08:54:37 celeryd.1 | [2013-02-15 08:54:37,480: DEBUG/MainProcess] [Worker] Loading modules.
08:54:37 celeryd.1 | [2013-02-15 08:54:37,484: DEBUG/MainProcess] [Worker] Claiming components.
08:54:37 celeryd.1 | [2013-02-15 08:54:37,484: DEBUG/MainProcess] [Worker] Building boot step graph.
08:54:37 celeryd.1 | [2013-02-15 08:54:37,484: DEBUG/MainProcess] [Worker] New boot order: {ev, queues, beat, pool, mediator, autoreloader, timers, state-db, autoscaler, consumer}
08:54:37 celeryd.1 | [2013-02-15 08:54:37,489: DEBUG/MainProcess] Starting celery.beat._Process...
08:54:37 celeryd.1 | [2013-02-15 08:54:37,490: DEBUG/MainProcess] celery.beat._Process OK!
08:54:37 celeryd.1 | [2013-02-15 08:54:37,491: DEBUG/MainProcess] Starting celery.concurrency.processes.TaskPool...
08:54:37 celeryd.1 | [2013-02-15 08:54:37,491: INFO/Beat] Celerybeat: Starting...
08:54:37 celeryd.1 | [2013-02-15 08:54:37,506: DEBUG/MainProcess] celery.concurrency.processes.TaskPool OK!
08:54:37 celeryd.1 | [2013-02-15 08:54:37,507: DEBUG/MainProcess] Starting celery.worker.mediator.Mediator...
08:54:37 celeryd.1 | [2013-02-15 08:54:37,507: DEBUG/MainProcess] celery.worker.mediator.Mediator OK!
08:54:37 celeryd.1 | [2013-02-15 08:54:37,507: DEBUG/MainProcess] Starting celery.worker.consumer.BlockingConsumer...
08:54:37 celeryd.1 | [2013-02-15 08:54:37,508: WARNING/MainProcess] celery#myhost.local ready.
08:54:37 celeryd.1 | [2013-02-15 08:54:37,508: DEBUG/MainProcess] consumer: Re-establishing connection to the broker...
08:54:37 celeryd.1 | [2013-02-15 08:54:37,510: INFO/MainProcess] consumer: Connected to django://localhost//.
08:54:37 celeryd.1 | [2013-02-15 08:54:37,628: DEBUG/Beat] Current schedule:
08:54:37 celeryd.1 | <Entry: celery.backend_cleanup celery.backend_cleanup() {<crontab: * 4 * * * (m/h/d/dM/MY)>}
08:54:37 celeryd.1 | [2013-02-15 08:54:37,629: DEBUG/Beat] Celerybeat: Ticking with max interval->5.00 minutes
08:54:37 celeryd.1 | [2013-02-15 08:54:37,658: DEBUG/Beat] Celerybeat: Waking up in 5.00 minutes.
08:54:38 celeryd.1 | [2013-02-15 08:54:38,110: DEBUG/MainProcess] consumer: basic.qos: prefetch_count->16
08:54:38 celeryd.1 | [2013-02-15 08:54:38,126: DEBUG/MainProcess] consumer: Ready to accept tasks!
08:55:08 celeryd.1 | [2013-02-15 08:55:08,184: WARNING/MainProcess] Received and deleted unknown message. Wrong destination?!?
08:55:08 celeryd.1 | The full contents of the message body was: body: 26 (2b) {content_type:u'application/json' content_encoding:u'utf-8' delivery_info:{u'priority': 0, u'routing_key': u'celery', u'exchange': u'celery'}}
The problem was twofold:
The message format was wrong. It needs to be a dictionary, according to the documentation at http://docs.celeryproject.org/en/latest/internals/protocol.html which @asksol provided, following the example at the bottom of that page.
Example Message
{"id": "4cc7438e-afd4-4f8f-a2f3-f46567e7ca77",
"task": "celery.task.PingTask",
"args": [],
"kwargs": {},
"retries": 0,
"eta": "2009-11-17T12:30:56.527191"}
put_message
with Connection(settings.BROKER_URL) as conn:
    queue = conn.SimpleQueue('celery')
    message = {
        'task': 'process-next-task',
        'id': str(uuid.uuid4()),
        'args': [id],
        'kwargs': {},
        'retries': 0,
        'eta': str(datetime.datetime.now())
    }
    queue.put(message)
    queue.close()
The Procfile process is a consumer that runs the task, so there's no need to set up a consumer within the task. I just needed to use the parameters that I sent in when I published the message.
api/tasks.py
@task(serializer='json', name='process-next-task')
def process_next_task(id):
    try:
        Model.objects.get(id=int(id))
    except Model.DoesNotExist:
        pass
    else:
        pass  # Do stuff here
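As a side note (not in the original answer): once the task is registered under a name, the more usual django-celery route is to let Celery build and publish that message for you via delay()/apply_async(), rather than hand-crafting the dictionary with SimpleQueue. A minimal sketch, assuming the same api.tasks module as above (publish_next_task is a hypothetical helper):

# e.g. in the view that handles the POST request
from api.tasks import process_next_task

def publish_next_task(id):
    # Celery serializes the task name, args and kwargs into the message
    # format shown above and routes it to the default 'celery' queue.
    process_next_task.delay(id)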
This is not a solution to this question, but a note about a similar issue when using Celery 4.0.2.
The output looks like:
[2017-02-09 17:45:12,136: WARNING/MainProcess] Received and deleted unknown message. Wrong destination?!?
The full contents of the message body was: body: [[], {}, {u'errbacks': None, u'callbacks': None, u'chord': None, u'chain': None}] (77b)
{content_type:'application/json' content_encoding:'utf-8'
delivery_info:{'consumer_tag': 'None4', 'redelivered': False, 'routing_key': 'test2', 'delivery_tag': 1L, 'exchange': ''} headers={'\xe5\xca.\xdb\x00\x00\x00\x00\x00': None, 'P&5\x07\x00': None, 'T\nKB\x00\x00\x00': '3f6295b3-c85c-4188-b424-d186da7e2edb', 'N\xfd\x17=\x00\x00': 'gen23043#hy-ts-bf-01', '\xcfb\xddR': 'py', '9*\xa8': None, '\xb7/b\x84\x00\x00\x00': 0, '\xe0\x0b\xfa\x89\x00\x00\x00': None, '\xdfR\xc4x\x00\x00\x00\x00\x00': [None, None], 'T3\x1d ': 'celeryserver.tasks.test', '\xae\xbf': '3f6295b3-c85c-4188-b424-d186da7e2edb', '\x11s\x1f\xd8\x00\x00\x00\x00': '()', 'UL\xa1\xfc\x00\x00\x00\x00\x00\x00': '{}'}}
solution:
https://github.com/celery/celery/issues/3675
# call this command many times, until it says it's not installed
pip uninstall librabbitmq
Thanks to https://github.com/ask
Apparently the librabbitmq issue is related to the new default protocol in Celery 4.x. You can switch to the previous protocol version by either putting CELERY_TASK_PROTOCOL = 1 in your settings if you're using Django, or setting app.conf.task_protocol = 1 in celeryconf.py.
Then you'll be able to queue a task from within another task.
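For reference, that switch looks like this (a sketch; use whichever variant matches how you configure Celery):

# Django settings.py
CELERY_TASK_PROTOCOL = 1

# or directly on the Celery app object, e.g. in celeryconf.py
app.conf.task_protocol = 1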
My setup: [celery 3.1.25; django 1.11]
Add the celery exchange in settings.py:
CELERY_QUEUES = {
    "celery": {"exchange": "celery",
               "routing_key": "celery"}
}
Or declare the queue yourself:
# declare the queue
ch = settings.CELERY_APP.connection().channel()
ex = Exchange("implicit", channel=ch)
q = Queue(name="implicit", routing_key="implicit", channel=ch, exchange=ex)
q.declare()  # <-- here

producer = ch.Producer(routing_key=q.routing_key, exchange=q.exchange)

# publish
producer.publish("text")
Or you can use the second version, from the Kombu docs:
from kombu import Exchange, Queue

task_queue = Queue('tasks', Exchange('tasks'), routing_key='tasks')

producer.publish(
    {'hello': 'world'},
    retry=True,
    exchange=task_queue.exchange,
    routing_key=task_queue.routing_key,
    declare=[task_queue],  # <-- declares exchange, queue and binds.
)
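Note that producer is not defined in that excerpt; in the Kombu docs it comes from an open connection. A minimal sketch of the missing piece (the broker URL is whatever you already use, e.g. the django:// transport from the question):

from kombu import Connection, Exchange, Queue

task_queue = Queue('tasks', Exchange('tasks'), routing_key='tasks')

with Connection('django://localhost//') as conn:
    producer = conn.Producer(serializer='json')
    producer.publish(
        {'hello': 'world'},
        retry=True,
        exchange=task_queue.exchange,
        routing_key=task_queue.routing_key,
        declare=[task_queue],  # declares the exchange and queue, and binds them
    )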